Welp. Seems like that trip identification code is a little too basic. Let's try to investigate what might be the problem:
sql = '''
WITH trips AS (
    SELECT lineid, trip_id,
           EXTRACT('minutes' FROM MAX(estimated_arrival) - MIN(estimated_arrival)) AS trip_duration
    FROM test_day_final
    GROUP BY lineid, trip_id
)
SELECT * FROM trips ORDER BY trip_duration LIMIT 10'''
pandasql.read_sql(sql, con)

sql = '''
WITH one_stop_trips AS (
    SELECT lineid, trip_id, ARRAY_AGG(station_char) AS stations
    FROM test_day_final
    GROUP BY lineid, trip_id
    HAVING COUNT(1) = 1
)
SELECT lineid, unnest(stations) AS station_char, COUNT(1) AS "Number of Trips"
FROM one_stop_trips
GROUP BY lineid, station_char
ORDER BY lineid, "Number of Trips" DESC'''
pandasql.read_sql(sql, con)
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0
So for the most part we seem to have issues with identifying trip start/end at the termini.

| Line | One-stop Trips at Termini |
|------|--------------------------:|
| 1    | 766 |
| 2    | 481 |
| 4    | 791 |

So approximately half of the "extra trips" are one-stop trips at termini. Let's see the overall distribution of the number of stops for each trip we've inferred and compare that to the ideal.
sql = '''
WITH inferred_trips AS (
    SELECT lineid, trip_id, COUNT(1) AS stops
    FROM test_day_final
    GROUP BY lineid, trip_id
),
inferred_trip_length AS (
    SELECT lineid, stops, COUNT(trip_id) AS obs_trips
    FROM inferred_trips
    GROUP BY lineid, stops
),
gtfs_trip_lengths AS (
    SELECT route_short_name::INT AS lineid, trip_id, COUNT(1) AS stops
    FROM gtfs.stop_times
    INNER JOIN gtfs.trips USING (trip_id)
    INNER JOIN gtfs.routes USING (route_id)
    INNER JOIN gtfs.calendar USING (service_id)
    WHERE monday AND route_type = 1 AND route_short_name != '3'
    GROUP BY route_short_name, trip_id
),
gtfs_trip_length_distro AS (
    SELECT lineid, stops, COUNT(trip_id) AS num_trips
    FROM gtfs_trip_lengths
    GROUP BY lineid, stops
)
SELECT lineid, stops, COALESCE(num_trips, 0) AS scheduled,
       COUNT(inferred_trips.trip_id) AS observed
FROM inferred_trips
FULL OUTER JOIN gtfs_trip_length_distro USING (lineid, stops)
GROUP BY lineid, stops, num_trips
ORDER BY lineid, stops
'''
trip_lengths = pandasql.read_sql(sql, con)

line_one = trip_lengths[trip_lengths['lineid'] == 1]
fig, ax = plt.subplots(figsize=(16, 9))
line_one.plot(x='stops', y='scheduled', kind='bar', ax=ax, position=0, color='red')
line_one.plot(x='stops', y='observed', sharey=True, sharex=True,
              kind='bar', ax=ax, position=1, color='blue')
ax.set_title('Line 1 Distribution of Trip Lengths')
ax.set_ylabel('Number of trips')
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0
So we are certainly getting 1-, 2-, and 3-stop trips that shouldn't exist, and undercounting trips of more appropriate lengths. The one-stop trips are primarily at termini. What is happening with the 2- and 3-stop trips...?
sql_2_3 = '''
SELECT stops, COUNT(1)
FROM (
    SELECT array_agg(station_char ORDER BY estimated_arrival) AS stops
    FROM test_day_final
    WHERE lineid = 1
    GROUP BY trip_id
    HAVING COUNT(1) = 2 OR COUNT(1) = 3
) grouped_trips
GROUP BY stops
ORDER BY COUNT(1) DESC
LIMIT 10
'''
pandasql.read_sql(sql_2_3, con)
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0
The top "trips" run from Bloor-Spadina to Yonge via St. George and vice versa, but those are stations on Line 2...

Trying to group arrival-departure times first

Assuming a maximum time for any given train to dwell at a station, we will test this procedure on Lines 1 and 2 first.
sql = '''
CREATE MATERIALIZED VIEW test_day_stop_arrival AS
SELECT trainid, lineid, traindirection, stationid, station_char,
       MIN(create_date + timint * interval '1 minute') AS expected_arrival
FROM test_day
WHERE (timint < 1 OR train_message = 'AtStation')
GROUP BY trainid, lineid, traindirection, stationid, station_char
'''
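The dwell-time grouping idea above can be sketched in pandas as well (a minimal illustration, not the notebook's actual pipeline; the timestamps and the two-minute threshold are made-up values):

```python
import pandas as pd

# toy polls for one train at one station: three quick polls while dwelling,
# then a later poll on a second visit to the same station
polls = pd.DataFrame({
    'create_date': pd.to_datetime([
        '2018-01-01 08:00:00', '2018-01-01 08:00:20',
        '2018-01-01 08:00:40', '2018-01-01 09:30:00']),
})

max_dwell = pd.Timedelta(minutes=2)

# a new "visit" starts whenever the gap since the previous poll
# exceeds the maximum dwell time
gap = polls['create_date'].diff()
polls['visit'] = (gap > max_dwell).cumsum()

# the arrival time of each visit is its earliest poll
arrivals = polls.groupby('visit')['create_date'].min()
print(arrivals)  # two visits: 08:00:00 and 09:30:00
```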
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0
The actual conversion is done with rawpy and Pillow which are imported below.
import rawpy
import PIL
simple-convert/simple-convert.ipynb
neothemachine/rawpy-notebooks
unlicense
Opening the RAW image

Opening a RAW image is as simple as calling rawpy.imread.
raw = rawpy.imread('../images/RAW_NIKON_D3X.NEF')
simple-convert/simple-convert.ipynb
neothemachine/rawpy-notebooks
unlicense
Note that imread() behaves similarly to Python's built-in open() function, meaning that the opened file has to be closed again later on.

Processing the RAW image

When processing RAW images we have to decide how to handle the white balance. A common option is to simply use the white balance values that were stored in the RAW image when the picture was shot. To do that, set the use_camera_wb parameter to True.
rgb = raw.postprocess(use_camera_wb=True)
simple-convert/simple-convert.ipynb
neothemachine/rawpy-notebooks
unlicense
The return value of postprocess() is a numpy array which we can display with matplotlib's imshow() function.
# imshow comes from matplotlib, e.g. via `from matplotlib.pyplot import imshow`
print(rgb.dtype, rgb.shape)
imshow(rgb)
simple-convert/simple-convert.ipynb
neothemachine/rawpy-notebooks
unlicense
If the camera white balance does not look right, then it can also be estimated from the image itself with the use_auto_wb parameter.
rgb2 = raw.postprocess(use_auto_wb=True)
imshow(rgb2)
simple-convert/simple-convert.ipynb
neothemachine/rawpy-notebooks
unlicense
In this example the white balance values stored by the camera look more natural, so we will use the first version.

Saving the processed image

Saving the processed image (a numpy array) in a standard format is easily done with Pillow.
PIL.Image.fromarray(rgb).save('image.jpg', quality=90, optimize=True)
PIL.Image.fromarray(rgb).save('image.tiff')
simple-convert/simple-convert.ipynb
neothemachine/rawpy-notebooks
unlicense
Closing the RAW image

It is important to close the RAW image again once we are done processing it.
raw.close()
simple-convert/simple-convert.ipynb
neothemachine/rawpy-notebooks
unlicense
Using context managers

rawpy also supports context managers for opening/closing RAW images. In that case, the conversion code would look like the example below.
with rawpy.imread('../images/RAW_NIKON_D3X.NEF') as raw:
    rgb = raw.postprocess(use_camera_wb=True)
    PIL.Image.fromarray(rgb).save('image.jpg')
simple-convert/simple-convert.ipynb
neothemachine/rawpy-notebooks
unlicense
Pytorch Introduction

```bash
# installation on a mac; for more information on installation,
# refer to the following link: http://pytorch.org/
conda install pytorch torchvision -c pytorch
```

At its core, PyTorch provides two main features:

An n-dimensional Tensor, similar to a numpy array but able to run on GPUs. PyTorch provides many functions for operating on these Tensors, thus it can be used as a general-purpose scientific computing tool.

Automatic differentiation for building and training neural networks.

Let's dive in by looking at some examples:

Linear Regression
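As a minimal illustration of the automatic differentiation feature (a sketch with made-up values, not from the examples that follow): calling `.backward()` on a scalar result fills in the `.grad` attribute of every tensor that requires gradients.

```python
import torch

# a scalar parameter tracked by autograd
w = torch.tensor(1.0, requires_grad=True)

# a tiny computation: loss = (3 - 1.5 * w)^2
loss = (3.0 - 1.5 * w) ** 2

# backward() fills w.grad with dloss/dw = 2 * (3 - 1.5w) * (-1.5) = -4.5
loss.backward()
print(w.grad)  # tensor(-4.5000)
```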
# make up some training data and specify the type to be float, i.e. np.float32.
# We do NOT recommend double, i.e. np.float64, especially on the GPU. GPUs have bad
# double precision performance since they are optimized for float32
X_train = np.asarray(
    [3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59,
     2.167, 7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1], dtype=np.float32)
X_train = X_train.reshape(-1, 1)

y_train = np.asarray(
    [1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53,
     1.221, 2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3], dtype=np.float32)
y_train = y_train.reshape(-1, 1)

# convert the numpy arrays to Pytorch Tensors
X = torch.FloatTensor(X_train)
y = torch.FloatTensor(y_train)
deep_learning/rnn/1_pytorch_rnn.ipynb
ethen8181/machine-learning
mit
Here we start defining the linear regression model; recall that in linear regression, we are optimizing the squared loss. \begin{align} L = \frac{1}{2}(y-(Xw + b))^2 \end{align}
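For reference, differentiating this loss with respect to the parameters gives the gradients that SGD will follow (same notation as above):

\begin{align}
\frac{\partial L}{\partial w} &= -\big(y-(Xw + b)\big)X \\
\frac{\partial L}{\partial b} &= -\big(y-(Xw + b)\big)
\end{align}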
# with linear regression, we apply a linear transformation
# to the incoming data, i.e. y = Xw + b; here we only have
# 1-dimensional data, thus the feature size will be 1
model = nn.Linear(in_features=1, out_features=1)

# although we can write our own loss function, the nn module
# also contains definitions of popular loss functions; here
# we use the MSELoss, a.k.a the L2 loss, and the size_average parameter
# simply divides the loss by the number of examples
criterion = nn.MSELoss(size_average=True)

# Then we use the optim module to define an Optimizer that will update the weights
# of the model for us. Here we will use SGD, but it contains many other
# optimization algorithms. The first argument to the SGD constructor tells the
# optimizer which parameters it should update
learning_rate = 0.01
optimizer = optim.SGD(model.parameters(), lr=learning_rate)

# start the optimization process
n_epochs = 100
for _ in range(n_epochs):
    # torch accumulates the gradients, thus before running new things
    # use the optimizer object to zero all of the gradients of the
    # variables it will update (which are the learnable weights of the model);
    # think of it as refreshing the gradients before doing another round of updates
    optimizer.zero_grad()

    # forward pass: compute predicted y by passing X to the model
    output = model(X)

    # compute the loss
    loss = criterion(output, y)

    # backward pass: compute the gradient of the loss with respect to model parameters
    loss.backward()

    # calling the step function on an Optimizer makes an update to its parameters
    optimizer.step()

# plot the data and the fitted line to confirm the result;
# change the default figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 14

# convert the torch FloatTensor back to a numpy ndarray;
# here, we also call .detach to detach the result from the computation history,
# to prevent future computations on it from being tracked
y_pred = model(X).detach().numpy()
plt.plot(X_train, y_train, 'ro', label='Original data')
plt.plot(X_train, y_pred, label='Fitted line')
plt.legend()
plt.show()

# to get the parameters, i.e. weight and bias, from the model,
# we can use the state_dict() method of the model that
# we've defined
model.state_dict()

# or we could get them from the model's parameters,
# which by itself is a generator
list(model.parameters())
deep_learning/rnn/1_pytorch_rnn.ipynb
ethen8181/machine-learning
mit
Linear Regression Version 2

A better way of defining our model is to inherit from the nn.Module class. To use it, all we need to do is define our model's forward pass, and nn.Module will automatically define the backward method for us, where the gradients are computed using autograd.
class LinearRegression(nn.Module):

    def __init__(self, in_features, out_features):
        super().__init__()  # boilerplate call
        self.in_features = in_features
        self.out_features = out_features
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        out = self.linear(x)
        return out


# same optimization process
n_epochs = 100
learning_rate = 0.01
criterion = nn.MSELoss(size_average=True)
model = LinearRegression(in_features=1, out_features=1)

# when we defined our LinearRegression class, we assigned
# a neural network component/layer to a class variable in the
# __init__ function, and now notice that we can directly call
# .parameters() on the class we've defined, due to some Python magic
# from the Pytorch devs
optimizer = optim.SGD(model.parameters(), lr=learning_rate)

for epoch in range(n_epochs):
    # forward + backward + optimize
    optimizer.zero_grad()
    output = model(X)
    loss = criterion(output, y)
    loss.backward()
    optimizer.step()

    # print the loss every 20 epochs this time
    if (epoch + 1) % 20 == 0:
        # starting from pytorch 0.4.0, we use .item to get a python number from a
        # torch scalar; before, loss.item() looked something like loss.data[0]
        print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, n_epochs, loss.item()))
deep_learning/rnn/1_pytorch_rnn.ipynb
ethen8181/machine-learning
mit
After training our model, we can also save the model's parameters and load them back into the model in the future.
checkpoint_path = 'model.pkl'
torch.save(model.state_dict(), checkpoint_path)
model.load_state_dict(torch.load(checkpoint_path))

y_pred = model(X).detach().numpy()
plt.plot(X_train, y_train, 'ro', label='Original data')
plt.plot(X_train, y_pred, label='Fitted line')
plt.legend()
plt.show()
deep_learning/rnn/1_pytorch_rnn.ipynb
ethen8181/machine-learning
mit
Logistic Regression

Let's now look at a classification example. Here we'll define a logistic regression model that takes in a bag of words representation of some text and predicts over two labels, "English" and "Spanish".
# define a toy dataset
train_data = [
    ('me gusta comer en la cafeteria'.split(), 'SPANISH'),
    ('Give it to me'.split(), 'ENGLISH'),
    ('No creo que sea una buena idea'.split(), 'SPANISH'),
    ('No it is not a good idea to get lost at sea'.split(), 'ENGLISH')
]

test_data = [
    ('Yo creo que si'.split(), 'SPANISH'),
    ('it is lost on me'.split(), 'ENGLISH')
]
deep_learning/rnn/1_pytorch_rnn.ipynb
ethen8181/machine-learning
mit
The next code chunk creates the word-to-index mappings. To build our bag of words (BoW) representation, we need to assign each word in our vocabulary a unique index. Say our entire corpus consists of only two words, "hello" and "world", with "hello" corresponding to index 0 and "world" to index 1. Then the BoW vector for the sentence "hello world hello world" will be [2, 2], i.e. the count for the word "hello" sits at position 0 of the array, and so on.
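The two-word example above can be sketched directly in plain Python (a minimal illustration; the names here are made up, not the notebook's own helpers):

```python
# vocabulary for the toy two-word corpus
word_to_idx = {'hello': 0, 'world': 1}

def bow_vector(sentence, word_to_idx):
    # count how often each vocabulary word occurs in the sentence
    vector = [0] * len(word_to_idx)
    for word in sentence.split():
        vector[word_to_idx[word]] += 1
    return vector

print(bow_vector('hello world hello world', word_to_idx))  # [2, 2]
```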
idx_to_label = ['SPANISH', 'ENGLISH']
label_to_idx = {'SPANISH': 0, 'ENGLISH': 1}

word_to_idx = {}
for sent, _ in train_data + test_data:
    for word in sent:
        if word not in word_to_idx:
            word_to_idx[word] = len(word_to_idx)

print(word_to_idx)
VOCAB_SIZE = len(word_to_idx)
NUM_LABELS = len(label_to_idx)
deep_learning/rnn/1_pytorch_rnn.ipynb
ethen8181/machine-learning
mit
Next we define our model, again inheriting from nn.Module, along with two helper functions that convert our data to torch Tensors so we can use them during training.
class BoWClassifier(nn.Module):

    def __init__(self, vocab_size, num_labels):
        super().__init__()
        self.linear = nn.Linear(vocab_size, num_labels)

    def forward(self, bow_vector):
        """
        When performing a classification, after passing through the linear
        layer (also known as the affine layer) we also need to pass it through
        the softmax layer to convert a vector of real numbers into a probability
        distribution; here we use log softmax for numerical stability reasons.
        """
        return F.log_softmax(self.linear(bow_vector), dim=1)


def make_bow_vector(sentence, word_to_idx):
    vector = torch.zeros(len(word_to_idx))
    for word in sentence:
        vector[word_to_idx[word]] += 1
    return vector.view(1, -1)


def make_target(label, label_to_idx):
    return torch.LongTensor([label_to_idx[label]])
deep_learning/rnn/1_pytorch_rnn.ipynb
ethen8181/machine-learning
mit
We are now ready to train this!
model = BoWClassifier(VOCAB_SIZE, NUM_LABELS)

# note that instead of using NLLLoss (negative log likelihood),
# we could have used CrossEntropyLoss and removed the log_softmax
# function call in our forward method. The CrossEntropyLoss docstring
# explicitly states that this criterion combines `LogSoftMax` and
# `NLLLoss` in one single class.
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

n_epochs = 100
for epoch in range(n_epochs):
    for instance, label in train_data:
        bow_vector = make_bow_vector(instance, word_to_idx)
        target = make_target(label, label_to_idx)

        # standard steps to perform the forward and backward pass
        model.zero_grad()
        log_probs = model(bow_vector)
        loss = criterion(log_probs, target)
        loss.backward()
        optimizer.step()

# we can also wrap a code block in `with torch.no_grad():` to
# prevent history tracking; this is often used in model inferencing,
# or when evaluating the model, as we won't be needing the gradients
# during this stage
with torch.no_grad():
    # predict on the test data to check if the model actually learned anything
    for instance, label in test_data:
        bow_vec = make_bow_vector(instance, word_to_idx)
        log_probs = model(bow_vec)
        y_pred = np.argmax(log_probs[0].numpy())
        label_pred = idx_to_label[y_pred]
        print('true label: ', label, ' predicted label: ', label_pred)
deep_learning/rnn/1_pytorch_rnn.ipynb
ethen8181/machine-learning
mit
Recurrent Neural Network (RNN)

The idea behind RNNs is to make use of the sequential information that exists in our dataset. In a feedforward neural network, we assume that all inputs and outputs are independent of each other, but for some tasks this might not be the best way to tackle the problem. For example, in Natural Language Processing (NLP) applications, if we wish to predict the next word in a sentence (one business application of this is SwiftKey), we can imagine that knowing the words that come before it comes in handy.

Vanilla RNN

The input $x$ will be a sequence of words, and each $x_t$ is a single word. Because of how matrix multiplication works, we can't simply use a word index like (36) as input; instead we represent each word as a one-hot vector whose size is the total vocabulary size. For example, the word with index 36 has the value 1 at position 36, and the rest of the values in the vector are all 0.
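The one-hot encoding described above can be sketched with numpy (a minimal illustration; the vocabulary size of 100 is a made-up value):

```python
import numpy as np

def one_hot(index, vocab_size):
    # a vector of zeros with a single 1 at the word's index
    vector = np.zeros(vocab_size, dtype=np.float32)
    vector[index] = 1.0
    return vector

v = one_hot(36, vocab_size=100)
print(v[36], v.sum())  # 1.0 1.0
```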
torch.manual_seed(777)

# suppose we have a one-hot encoding for each char in 'hello';
# the sequence length for the word 'hello' is 5
seq_len = 5
h = [1, 0, 0, 0]
e = [0, 1, 0, 0]
l = [0, 0, 1, 0]
o = [0, 0, 0, 1]

# here we specify a single RNN cell with the property of
# input_dim (4) -> output_dim (2);
# batch_first is explained in the following
rnn_cell = nn.RNN(input_size=4, hidden_size=2, batch_first=True)

# our input should be of shape
# (batch, seq_len, input_size) when batch_first=True; the input
# size basically refers to the number of features;
# (seq_len, batch_size, input_size) when batch_first=False (default).
# Thus we reshape our input to the appropriate size; torch.view is
# equivalent to numpy.reshape
inputs = torch.Tensor([h, e, l, l, o])
inputs = inputs.view(1, 5, -1)

# the hidden state is what gets passed along between cells;
# here we initialize it with zeros, of shape
# (num_layers * num_directions, batch, hidden_size);
# disregard the first dimension as of now
hidden = torch.zeros(1, 1, 2)
out, hidden = rnn_cell(inputs, hidden)
print('sequence input size', inputs.size())
print('out size', out.size())
print('hidden size', hidden.size())

# the first value returned by the rnn cell is all
# of the hidden states throughout the sequence, while
# the second value is the most recent hidden state;
# hence we can compare the last slice of the first
# value with the second value to confirm that they are
# the same
print('\ncomparing rnn cell output:')
print(out[:, -1, :])
hidden[0]
deep_learning/rnn/1_pytorch_rnn.ipynb
ethen8181/machine-learning
mit
In the next section, we'll teach our RNN to produce "ihello" from "hihell".
# create an index to character mapping
idx2char = ['h', 'i', 'e', 'l', 'o']

# Teach hihell -> ihello
x_data = [[0, 1, 0, 2, 3, 3]]  # hihell
x_one_hot = [[[1, 0, 0, 0, 0],   # h 0
              [0, 1, 0, 0, 0],   # i 1
              [1, 0, 0, 0, 0],   # h 0
              [0, 0, 1, 0, 0],   # e 2
              [0, 0, 0, 1, 0],   # l 3
              [0, 0, 0, 1, 0]]]  # l 3
x_one_hot = np.array(x_one_hot)
y_data = np.array([1, 0, 2, 3, 3, 4])  # ihello

# as we have one batch of samples, we convert them to Tensors only once
inputs = torch.Tensor(x_one_hot)
labels = torch.LongTensor(y_data)

# hyperparameters
seq_len = 6      # |hihell| == 6, equivalent to the number of time steps
input_size = 5   # one-hot size
batch_size = 1   # one sentence per batch
num_layers = 1   # one-layer rnn
num_classes = 5  # predicting 5 distinct characters
hidden_size = 4  # output size of the RNN


class RNN(nn.Module):
    """
    The RNN model will be a RNN followed by a linear layer,
    i.e. a fully-connected layer
    """

    def __init__(self, seq_len, num_classes, input_size, hidden_size, num_layers):
        super().__init__()
        self.seq_len = seq_len
        self.num_layers = num_layers
        self.input_size = input_size
        self.num_classes = num_classes
        self.hidden_size = hidden_size
        self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)
        self.linear = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # assuming batch_first = True for RNN cells
        batch_size = x.size(0)
        hidden = self._init_hidden(batch_size)
        x = x.view(batch_size, self.seq_len, self.input_size)

        # apart from the output, rnn also gives us the hidden
        # cell; this gives us the opportunity to pass it to
        # the next cell if needed. We won't be needing it here
        # because the nn.RNN already computed all the time steps
        # for us. rnn_out will be of size [batch_size, seq_len, hidden_size]
        rnn_out, _ = self.rnn(x, hidden)
        linear_out = self.linear(rnn_out.view(-1, self.hidden_size))
        return linear_out

    def _init_hidden(self, batch_size):
        """
        Initialize hidden cell states, assuming
        batch_first = True for RNN cells
        """
        return torch.zeros(batch_size, self.num_layers, self.hidden_size)


# Set the loss, optimizer and the RNN model
torch.manual_seed(777)
rnn = RNN(seq_len, num_classes, input_size, hidden_size, num_layers)
print('network architecture:\n', rnn)

# train the model
num_epochs = 15
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(rnn.parameters(), lr=0.1)
for epoch in range(1, num_epochs + 1):
    optimizer.zero_grad()
    outputs = rnn(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()

    # check the current predicted string;
    # max gives the maximum value and its
    # corresponding index, and we only
    # need the index
    _, idx = outputs.max(dim=1)
    idx = idx.detach().numpy()
    result_str = [idx2char[c] for c in idx]
    print('epoch: {}, loss: {:1.3f}'.format(epoch, loss.item()))
    print('Predicted string: ', ''.join(result_str))
deep_learning/rnn/1_pytorch_rnn.ipynb
ethen8181/machine-learning
mit
LSTM

The example below uses an LSTM to generate part-of-speech tags. The usage of the LSTM API is essentially the same as that of the RNN we used in the last section, except that in this example we will prepare the word-to-index mapping ourselves, and for the modeling part we will add an embedding layer before the LSTM layer, a common technique in NLP applications. For each word, instead of using the one-hot representation of the data (which can be inefficient, and which treats all words as independent entities with no relationships amongst each other), word embeddings compress the representation into a lower dimension that encodes the semantics of the words, i.e. how similarly each word is used within our given corpus.
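The embedding lookup itself is just row selection from a trainable matrix; a minimal numpy sketch (the sizes and index here are made-up values) shows that it is equivalent to multiplying a one-hot vector by that matrix:

```python
import numpy as np

vocab_size, embedding_size = 9, 6

# a weight matrix with one (trainable) row per vocabulary word
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(vocab_size, embedding_size))

# looking up a word's embedding is just indexing its row
word_idx = 3
embed = embedding_matrix[word_idx]

# which is equivalent to multiplying a one-hot vector by the matrix
one_hot = np.zeros(vocab_size)
one_hot[word_idx] = 1.0
assert np.allclose(embed, one_hot @ embedding_matrix)
print(embed.shape)  # (6,)
```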
# These will usually be more like 32 or 64 dimensional.
# We will keep them small for this toy example
EMBEDDING_SIZE = 6
HIDDEN_SIZE = 6

training_data = [
    ('The dog ate the apple'.split(), ['DET', 'NN', 'V', 'DET', 'NN']),
    ('Everybody read that book'.split(), ['NN', 'V', 'DET', 'NN'])
]

idx_to_tag = ['DET', 'NN', 'V']
tag_to_idx = {'DET': 0, 'NN': 1, 'V': 2}

word_to_idx = {}
for sent, tags in training_data:
    for word in sent:
        if word not in word_to_idx:
            word_to_idx[word] = len(word_to_idx)

word_to_idx


def prepare_sequence(seq, to_idx):
    """Convert a sentence/sequence to torch Tensors"""
    idxs = [to_idx[w] for w in seq]
    return torch.LongTensor(idxs)


seq = training_data[0][0]
inputs = prepare_sequence(seq, word_to_idx)
inputs


class LSTMTagger(nn.Module):

    def __init__(self, embedding_size, hidden_size, vocab_size, tagset_size):
        super().__init__()
        self.embedding_size = embedding_size
        self.hidden_size = hidden_size
        self.vocab_size = vocab_size
        self.tagset_size = tagset_size
        self.embedding = nn.Embedding(vocab_size, embedding_size)
        self.lstm = nn.LSTM(embedding_size, hidden_size)
        self.hidden2tag = nn.Linear(hidden_size, tagset_size)

    def forward(self, x):
        embed = self.embedding(x)
        hidden = self._init_hidden()

        # the second dimension refers to the batch size, which we've
        # hard-coded as 1 throughout this example
        lstm_out, lstm_hidden = self.lstm(embed.view(len(x), 1, -1), hidden)
        output = self.hidden2tag(lstm_out.view(len(x), -1))
        return output

    def _init_hidden(self):
        # the dimension semantics are [num_layers, batch_size, hidden_size]
        return (torch.rand(1, 1, self.hidden_size),
                torch.rand(1, 1, self.hidden_size))


model = LSTMTagger(EMBEDDING_SIZE, HIDDEN_SIZE, len(word_to_idx), len(tag_to_idx))
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

epochs = 300
for epoch in range(epochs):
    for sentence, tags in training_data:
        model.zero_grad()
        sentence = prepare_sequence(sentence, word_to_idx)
        target = prepare_sequence(tags, tag_to_idx)
        output = model(sentence)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()

inputs = prepare_sequence(training_data[0][0], word_to_idx)
tag_scores = model(inputs)

# validating on the sentence "the dog ate the apple";
# the correct tags should be DET NN V DET NN
print('expected target: ', training_data[0][1])
tag_scores = tag_scores.detach().numpy()
tag = [idx_to_tag[idx] for idx in np.argmax(tag_scores, axis=1)]
print('generated target: ', tag)
deep_learning/rnn/1_pytorch_rnn.ipynb
ethen8181/machine-learning
mit
Start with MTBLS315, the malaria vs. fever dataset. We could get ~0.85 AUC for the whole dataset.
# Get the data: subdivide the data into a feature table
local_path = '/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/'
data_path = local_path + '/revo_healthcare/data/processed/MTBLS315/'\
    'uhplc_pos/xcms_camera_results.csv'

# Import the data and remove extraneous columns
df = pd.read_csv(data_path, index_col=0)
df.shape
df.head()

# Show me a distribution of retention-time widths
rt_width = df['rtmax'] - df['rtmin']
# sns.violinplot(rt_width, inner='box')
# sns.rugplot(rt_width)
rt_width.shape
sns.kdeplot(rt_width, kernel='gau')
plt.ylabel('Probability')
plt.xlabel('retention-width')
plt.title('Distribution of retention-time widths')

# Show me a distribution of intensity values
intensities = df['X1001_P']
normalized_intensities = df['X1001_P'].div(df['X1001_P'].max())
# sns.kdeplot(df['X1001_P'])
sns.kdeplot(intensities)
plt.xlabel('Normalized Intensity')
plt.ylabel('probability (not correct, but w/e)')
plt.title('Distribution of intensities')
notebooks/Effects_of_retention_time_on_classification/retention_time_regions_and_classifiiability.ipynb
irockafe/revo_healthcare
mit
Almost everything is below a 30 sec rt-window.
# Show me a scatterplot of m/z vs. rt dots,
# i.e. the distribution along the mass axis and rt axis
plt.scatter(df['mz'], df['rt'], s=normalized_intensities * 100)
plt.xlabel('mz')
plt.ylabel('rt')
plt.title('mz vs. rt')
plt.show()

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter


def plot_mz_rt(df, path, rt_bounds):
    x = df['rt']
    y = df['mz']
    print(np.max(x))
    print(np.max(y))

    nullfmt = NullFormatter()  # no labels

    # definitions for the axes
    left, width = 0.1, 0.65
    bottom, height = 0.1, 0.65
    bottom_h = left_h = left + width + 0.02

    rect_scatter = [left, bottom, width, height]
    rect_histx = [left, bottom_h, width, 0.2]
    rect_histy = [left_h, bottom, 0.2, height]

    # start with a rectangular Figure
    fig = plt.figure(1, figsize=(10, 10))

    axScatter = plt.axes(rect_scatter)
    axHistx = plt.axes(rect_histx)
    axHisty = plt.axes(rect_histy)

    # no labels on the marginal histogram axes
    axHistx.xaxis.set_major_formatter(nullfmt)
    axHisty.yaxis.set_major_formatter(nullfmt)

    # the scatter plot
    axScatter.scatter(x, y, s=1)

    # now determine nice limits by hand
    x_min = np.min(x) - 50
    x_max = np.max(x) + 50
    axScatter.set_xlim(x_min, x_max)
    y_min = np.min(y) - 50
    y_max = np.max(y) + 50
    axScatter.set_ylim(y_min, y_max)
    print('ymin: ', y_min)

    # add vertical red lines marking the rt bounds on the scatter plot
    axScatter.axvline(x=rt_bounds[0], lw=2, color='r', alpha=0.5)
    axScatter.axvline(x=rt_bounds[1], lw=2, color='r', alpha=0.5)

    bins = 100
    axHistx.hist(x, bins=bins)
    axHisty.hist(y, bins=bins, orientation='horizontal')

    axHistx.set_xlim(axScatter.get_xlim())
    axHisty.set_ylim(axScatter.get_ylim())
    axScatter.set_ylabel('m/z', fontsize=30)
    axScatter.set_xlabel('Retention Time', fontsize=30)
    axHistx.set_ylabel('# of Features', fontsize=20)
    axHisty.set_xlabel('# of Features', fontsize=20)
    plt.savefig(path, format='pdf')
    plt.show()


my_path = '/home/irockafe/Dropbox (MIT)/'\
    'Alm_Lab/projects/revo_healthcare/'\
    'presentations/eric_bose/poop.pdf'
plot_mz_rt(df, my_path, (750, 1050))
print('Maximum retention time', np.max(df['rt']))


# divide the feature table into slices of retention time
def get_rt_slice(df, rt_bounds):
    '''
    PURPOSE:
        Given a tidy feature table with 'mz' and 'rt' column headers,
        retain only the features whose rt is between the bounds
    INPUT:
        df - a tidy pandas dataframe with 'mz' and 'rt' column headers
        rt_bounds - (left, right) boundaries of your rt slice, in seconds
    '''
    out_df = df.loc[(df['rt'] > rt_bounds[0]) & (df['rt'] < rt_bounds[1])]
    return out_df


def sliding_window_rt(df, rt_width, step=None):
    # a default argument can't reference another parameter,
    # so the step size is derived inside the function
    if step is None:
        step = rt_width * 0.25
    rt_min = np.min(df['rt'])
    rt_max = np.max(df['rt'])
    # define the ranges [(rt_min, rt_min + rt_width), ...]
    left_bound = np.arange(rt_min, rt_max, step)
    right_bound = left_bound + rt_width
    rt_bounds = zip(left_bound, right_bound)
    for rt_slice in rt_bounds:
        rt_window = get_rt_slice(df, rt_slice)
        # print(rt_window.head())
        print('shape', rt_window.shape)
        break  # stop after the first window while debugging
        # TODO Send to ml pipeline here? Or separate function?


print(type(np.float64(3.5137499999999999)))
a = get_rt_slice(df, (750, 1050))
print('Original dataframe shape: ', df.shape)
print('\n Shape:', a.shape, '\n\n\n\n')
print(df)
sliding_window_rt(df, 100)
# Convert the selected slice to a feature table, X, and get labels y
notebooks/Effects_of_retention_time_on_classification/retention_time_regions_and_classifiiability.ipynb
irockafe/revo_healthcare
mit
<h2> Show me the distribution of features from alzheimers dataset </h2>
# Subdivide the data into a feature table
local_path = '/home/irockafe/Dropbox (MIT)/Alm_Lab/'\
    'projects'
data_path = local_path + '/revo_healthcare/data/processed/MTBLS72/positive_mode/'\
    'mtbls_no_retcor_bw2.csv'

# Import the data and remove extraneous columns
df = pd.read_csv(data_path, index_col=0)
df.shape
df.head()

# Make a new index of mz:rt
mz = df.loc[:, 'mz'].astype('str')
rt = df.loc[:, 'rt'].astype('str')
idx = mz + ':' + rt
df.index = idx
df.head()

a = get_rt_slice(df, (0, 100))
print(df.shape)
print(a.shape)

my_path = '/home/irockafe/Dropbox (MIT)/'\
    'Alm_Lab/projects/revo_healthcare/'\
    'presentations/eric_bose/alzheimers_mz_rt_scatter.pdf'
print(np.max(df['rt']))
plot_mz_rt(df, my_path, (550, 670))

# Make chromatography gradient images
fig = plt.figure(1, figsize=(8, 8))
plt.plot([0, 1500], [0, 1], lw=4, label='Normal', alpha=0.4)
plt.plot([650, 750, 1050], [750.0 / 1500, 750.0 / 1500, 1050.0 / 1500],
         lw=4, color='r', label='Faster')
plt.xlim([0, 1400])
plt.ylim([0, 1])
plt.ylabel('% Hydrophobic Solvent', fontsize=20)
plt.xlabel('Retention Time', fontsize=20)
plt.legend(fontsize=12)
plt.axvline(x=750, lw=2, color='r', alpha=0.3)
plt.axvline(x=1060, lw=2, color='r', alpha=0.3)
plt.savefig('/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revo_healthcare/'
            'presentations/eric_bose/gradient_example.pdf')
plt.show()

# Make chromatography gradient images
fig = plt.figure()
notebooks/Effects_of_retention_time_on_classification/retention_time_regions_and_classifiiability.ipynb
irockafe/revo_healthcare
mit
Question 1: Loading the Dataset (Glass)
# Load the Wine Dataset (https://archive.ics.uci.edu/ml/datasets/Wine)
data = pd.read_csv("wine.data")
X = data.iloc[:, 1:].values
y = data.iloc[:, 0].values

# Pre-process the data (for PCA): standardize each feature
X = (X - X.mean(axis=0)) / X.std(axis=0)
2017/09-clustering/cl_otacilio_bezerra.ipynb
abevieiramota/data-science-cookbook
mit
Data Visualization
# Plotting a 3-dimensional visualization of the data
# We can see that the data (in 3 dimensions) overlap heavily
pcaData = PCA(n_components=3).fit_transform(X)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(pcaData[:,0], pcaData[:,1], pcaData[:,2], c=y, cmap=plt.cm.Dark2)
plt.show()
2017/09-clustering/cl_otacilio_bezerra.ipynb
abevieiramota/data-science-cookbook
mit
Applying K-Means
# Create the KMeans object
kmeans = KMeans(n_clusters=2, random_state=0)

# Run the clustering
kmeans.fit(X)
clts = kmeans.predict(X)

# Plot a 3-dimensional visualization of the data, now with the clusters assigned by K-Means
# Compare this visualization with the plot in the cell above
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(pcaData[:,0], pcaData[:,1], pcaData[:,2], c=clts, cmap=plt.cm.Dark2)
plt.show()
2017/09-clustering/cl_otacilio_bezerra.ipynb
abevieiramota/data-science-cookbook
mit
Evaluation Metrics
# We use three cluster-evaluation metrics, based on the already-labeled data:
# -> Homogeneity: how close each cluster is to containing only members of a single class
# -> Completeness: how close all members of a given class are to being assigned to the same cluster
# -> V-Measure: a measure that combines Homogeneity and Completeness, and is equivalent to a metric known as NMI (Normalized Mutual Information)
homoScore = metrics.homogeneity_score(y, clts)
complScore = metrics.completeness_score(y, clts)
vMeasureScore = metrics.v_measure_score(y, clts)

print("### Evaluation ({0} Clusters) ###".format(kmeans.n_clusters))
print("Homogeneity: \t{0:.3}".format(homoScore))
print("Completeness: \t{0:.3}".format(complScore))
print("V-Measure: \t{0:.3}".format(vMeasureScore))
2017/09-clustering/cl_otacilio_bezerra.ipynb
abevieiramota/data-science-cookbook
mit
Question 2: Implementing the Elbow Method
# Elbow method based on Inertia (the sum of squared intra-cluster distances of each point)
numK = np.arange(1, 10)
inertias = []
for i in numK:
    print(".", end="")
    kmeans.n_clusters = i
    kmeans.fit(X)
    inertias.append(kmeans.inertia_)

# Plotting
plt.figure()
plt.title("Elbow Method")
plt.xlabel("Num of Clusters")
plt.ylabel("Inertia")
plt.plot(numK, inertias, 'bo-')
plt.show()
2017/09-clustering/cl_otacilio_bezerra.ipynb
abevieiramota/data-science-cookbook
mit
Question 3
# Run the clustering again, now with a chosen number of clusters
kmeans.n_clusters = 3
kmeans.fit(X)
clts = kmeans.predict(X)

# Show the evaluation metrics
homoScore = metrics.homogeneity_score(y, clts)
complScore = metrics.completeness_score(y, clts)
vMeasureScore = metrics.v_measure_score(y, clts)

print("### Evaluation ({0} Clusters) ###".format(kmeans.n_clusters))
print("Homogeneity: \t{0:.3}".format(homoScore))
print("Completeness: \t{0:.3}".format(complScore))
print("V-Measure: \t{0:.3}".format(vMeasureScore))

# Plot a 3-dimensional visualization of the data, now with the clusters assigned by K-Means
# Compare this visualization with the previous plots
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(pcaData[:,0], pcaData[:,1], pcaData[:,2], c=clts, cmap=plt.cm.Dark2)
plt.show()
2017/09-clustering/cl_otacilio_bezerra.ipynb
abevieiramota/data-science-cookbook
mit
Load Protein Interactions. Select the columns of interest and drop empty rows.
url1 = 'https://s3-us-west-1.amazonaws.com/graphistry.demo.data/BIOGRID-ALL-3.3.123.tab2.txt.gz' rawdata = pandas.read_table(url1, na_values=['-'], engine='c', compression='gzip') # If using local data, comment the two lines above and uncomment the line below # pandas.read_table('./data/BIOGRID-ALL-3.3.123.tab2.txt', na_values=['-'], engine='c') cols = ['BioGRID ID Interactor A', 'BioGRID ID Interactor B', 'Official Symbol Interactor A', 'Official Symbol Interactor B', 'Pubmed ID', 'Author', 'Throughput'] interactions = rawdata[cols].dropna() interactions[:3]
demos/demos_by_use_case/bio/BiogridDemo.ipynb
graphistry/pygraphistry
bsd-3-clause
Let's have a quick peek at the data. Bind the columns storing the source/destination of each edge. This is the bare minimum to create a visualization.
g = graphistry.bind(source="BioGRID ID Interactor A", destination="BioGRID ID Interactor B") g.plot(interactions.sample(10000))
demos/demos_by_use_case/bio/BiogridDemo.ipynb
graphistry/pygraphistry
bsd-3-clause
A Fancier Visualization With Custom Labels and Colors. Let's look up the name and organism of each protein in the BioGRID identification DB.
# This downloads 170 MB, it might take some time. url2 = 'https://s3-us-west-1.amazonaws.com/graphistry.demo.data/BIOGRID-IDENTIFIERS-3.3.123.tab.txt.gz' raw_proteins = pandas.read_table(url2, na_values=['-'], engine='c', compression='gzip') # If using local data, comment the two lines above and uncomment the line below # raw_proteins = pandas.read_table('./data/BIOGRID-IDENTIFIERS-3.3.123.tab.txt', na_values=['-'], engine='c') protein_ids = raw_proteins[['BIOGRID_ID', 'ORGANISM_OFFICIAL_NAME']].drop_duplicates() \ .rename(columns={'ORGANISM_OFFICIAL_NAME': 'ORGANISM'}) protein_ids[:3]
demos/demos_by_use_case/bio/BiogridDemo.ipynb
graphistry/pygraphistry
bsd-3-clause
We extract the proteins referenced as either sources or targets of interactions.
source_proteins = interactions[["BioGRID ID Interactor A", "Official Symbol Interactor A"]].copy() \ .rename(columns={'BioGRID ID Interactor A': 'BIOGRID_ID', 'Official Symbol Interactor A': 'SYMBOL'}) target_proteins = interactions[["BioGRID ID Interactor B", "Official Symbol Interactor B"]].copy() \ .rename(columns={'BioGRID ID Interactor B': 'BIOGRID_ID', 'Official Symbol Interactor B': 'SYMBOL'}) all_proteins = pandas.concat([source_proteins, target_proteins], ignore_index=True).drop_duplicates() all_proteins[:3]
demos/demos_by_use_case/bio/BiogridDemo.ipynb
graphistry/pygraphistry
bsd-3-clause
We join against the identification DB to get the organism to which each protein belongs.
protein_labels = pandas.merge(all_proteins, protein_ids, how='left', left_on='BIOGRID_ID', right_on='BIOGRID_ID') protein_labels[:3]
demos/demos_by_use_case/bio/BiogridDemo.ipynb
graphistry/pygraphistry
bsd-3-clause
We assign colors to proteins based on their organism.
colors = protein_labels.ORGANISM.unique().tolist() protein_labels['Color'] = protein_labels.ORGANISM.map(lambda x: colors.index(x))
demos/demos_by_use_case/bio/BiogridDemo.ipynb
graphistry/pygraphistry
bsd-3-clause
For convenience, let's add links to PubMed and RCSB.
def makeRcsbLink(id): if isinstance(id, str): url = 'http://www.rcsb.org/pdb/gene/' + id.upper() return '<a target="_blank" href="%s">%s</a>' % (url, id.upper()) else: return 'n/a' protein_labels.SYMBOL = protein_labels.SYMBOL.map(makeRcsbLink) protein_labels[:3] def makePubmedLink(id): url = 'http://www.ncbi.nlm.nih.gov/pubmed/?term=%s' % id return '<a target="_blank" href="%s">%s</a>' % (url, id) interactions['Pubmed ID'] = interactions['Pubmed ID'].map(makePubmedLink) interactions[:3]
demos/demos_by_use_case/bio/BiogridDemo.ipynb
graphistry/pygraphistry
bsd-3-clause
Plotting We bind columns to labels and colors and we are good to go.
# This will upload ~10MB of data, be patient! g2 = g.bind(node='BIOGRID_ID', edge_title='Author', point_title='SYMBOL', point_color='Color') g2.plot(interactions, protein_labels)
demos/demos_by_use_case/bio/BiogridDemo.ipynb
graphistry/pygraphistry
bsd-3-clause
Intro to Sparse Data and Embeddings

Learning Objectives:
* Convert movie-review string data to a sparse feature vector
* Implement a sentiment-analysis linear model using a sparse feature vector
* Implement a sentiment-analysis DNN model using an embedding that projects data into two dimensions
* Visualize the embedding to see what the model has learned about the relationships between words

In this exercise, we'll explore sparse data and work with embeddings using text data from movie reviews (from the ACL 2011 IMDB dataset). This data has already been processed into tf.Example format.

Setup

Let's import our dependencies and download the training and test data. tf.keras includes a file download and caching tool that we can use to retrieve the data sets.
from __future__ import print_function import collections import io import math import matplotlib.pyplot as plt import numpy as np import pandas as pd import tensorflow as tf from IPython import display from sklearn import metrics tf.logging.set_verbosity(tf.logging.ERROR) train_url = 'https://download.mlcc.google.com/mledu-datasets/sparse-data-embedding/train.tfrecord' train_path = tf.keras.utils.get_file(train_url.split('/')[-1], train_url) test_url = 'https://download.mlcc.google.com/mledu-datasets/sparse-data-embedding/test.tfrecord' test_path = tf.keras.utils.get_file(test_url.split('/')[-1], test_url)
intro_to_sparse_data_and_embeddings_ipynb.ipynb
takasawada/tarsh
gpl-2.0
Building a Sentiment Analysis Model

Let's train a sentiment-analysis model on this data that predicts if a review is generally favorable (label of 1) or unfavorable (label of 0). To do so, we'll turn our string-value terms into feature vectors by using a vocabulary, a list of each term we expect to see in our data. For the purposes of this exercise, we've created a small vocabulary that focuses on a limited set of terms. Most of these terms were found to be strongly indicative of a favorable or unfavorable review, but some were just added because they're interesting.

Each term in the vocabulary is mapped to a coordinate in our feature vector. To convert the string-value terms for an example into this vector format, we encode such that each coordinate gets a value of 0 if the vocabulary term does not appear in the example string, and a value of 1 if it does. Terms in an example that don't appear in the vocabulary are thrown away.

NOTE: We could of course use a larger vocabulary, and there are special tools for creating these. In addition, instead of just dropping terms that are not in the vocabulary, we can introduce a small number of OOV (out-of-vocabulary) buckets to which you can hash the terms not in the vocabulary. We can also use a feature hashing approach that hashes each term, instead of creating an explicit vocabulary. This works well in practice, but loses interpretability, which is useful for this exercise. See the tf.feature_column module for tools handling this.

Building the Input Pipeline

First, let's configure the input pipeline to import our data into a TensorFlow model. We can use the following function to parse the training and test data (which is in TFRecord format) and return a dict of the features and the corresponding labels.
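The encoding described above can be sketched in plain Python (a toy illustration with a made-up four-term vocabulary, not the 50-term one used later):

```python
# Each coordinate of the feature vector is 1 if the corresponding vocabulary
# term appears in the example's terms, and 0 otherwise.
vocabulary = ["bad", "great", "boring", "fun"]

def encode(terms):
    # Terms outside the vocabulary are simply dropped.
    present = set(terms)
    return [1 if term in present else 0 for term in vocabulary]

print(encode(["great", "fun", "unseen_word"]))  # [0, 1, 0, 1]
```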
def _parse_function(record): """Extracts features and labels. Args: record: File path to a TFRecord file Returns: A `tuple` `(labels, features)`: features: A dict of tensors representing the features labels: A tensor with the corresponding labels. """ features = { "terms": tf.VarLenFeature(dtype=tf.string), # terms are strings of varying lengths "labels": tf.FixedLenFeature(shape=[1], dtype=tf.float32) # labels are 0 or 1 } parsed_features = tf.parse_single_example(record, features) terms = parsed_features['terms'].values labels = parsed_features['labels'] return {'terms':terms}, labels
intro_to_sparse_data_and_embeddings_ipynb.ipynb
takasawada/tarsh
gpl-2.0
To confirm our function is working as expected, let's construct a TFRecordDataset for the training data, and map the data to features and labels using the function above.
# Create the Dataset object. ds = tf.data.TFRecordDataset(train_path) # Map features and labels with the parse function. ds = ds.map(_parse_function) ds
intro_to_sparse_data_and_embeddings_ipynb.ipynb
takasawada/tarsh
gpl-2.0
Run the following cell to retrieve the first example from the training data set.
n = ds.make_one_shot_iterator().get_next() sess = tf.Session() sess.run(n)
intro_to_sparse_data_and_embeddings_ipynb.ipynb
takasawada/tarsh
gpl-2.0
Now, let's build a formal input function that we can pass to the train() method of a TensorFlow Estimator object.
# Create an input_fn that parses the tf.Examples from the given files, # and split them into features and targets. def _input_fn(input_filenames, num_epochs=None, shuffle=True): # Same code as above; create a dataset and map features and labels. ds = tf.data.TFRecordDataset(input_filenames) ds = ds.map(_parse_function) if shuffle: ds = ds.shuffle(10000) # Our feature data is variable-length, so we pad and batch # each field of the dataset structure to whatever size is necessary. ds = ds.padded_batch(25, ds.output_shapes) ds = ds.repeat(num_epochs) # Return the next batch of data. features, labels = ds.make_one_shot_iterator().get_next() return features, labels
intro_to_sparse_data_and_embeddings_ipynb.ipynb
takasawada/tarsh
gpl-2.0
Task 1: Use a Linear Model with Sparse Inputs and an Explicit Vocabulary For our first model, we'll build a LinearClassifier model using 50 informative terms; always start simple! The following code constructs the feature column for our terms. The categorical_column_with_vocabulary_list function creates a feature column with the string-to-feature-vector mapping.
# 50 informative terms that compose our model vocabulary informative_terms = ("bad", "great", "best", "worst", "fun", "beautiful", "excellent", "poor", "boring", "awful", "terrible", "definitely", "perfect", "liked", "worse", "waste", "entertaining", "loved", "unfortunately", "amazing", "enjoyed", "favorite", "horrible", "brilliant", "highly", "simple", "annoying", "today", "hilarious", "enjoyable", "dull", "fantastic", "poorly", "fails", "disappointing", "disappointment", "not", "him", "her", "good", "time", "?", ".", "!", "movie", "film", "action", "comedy", "drama", "family") terms_feature_column = tf.feature_column.categorical_column_with_vocabulary_list(key="terms", vocabulary_list=informative_terms)
intro_to_sparse_data_and_embeddings_ipynb.ipynb
takasawada/tarsh
gpl-2.0
Next, we'll construct the LinearClassifier, train it on the training set, and evaluate it on the evaluation set. After you read through the code, run it and see how you do.
my_optimizer = tf.train.AdagradOptimizer(learning_rate=0.1) my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0) feature_columns = [ terms_feature_column ] classifier = tf.estimator.LinearClassifier( feature_columns=feature_columns, optimizer=my_optimizer, ) classifier.train( input_fn=lambda: _input_fn([train_path]), steps=1000) evaluation_metrics = classifier.evaluate( input_fn=lambda: _input_fn([train_path]), steps=1000) print("Training set metrics:") for m in evaluation_metrics: print(m, evaluation_metrics[m]) print("---") evaluation_metrics = classifier.evaluate( input_fn=lambda: _input_fn([test_path]), steps=1000) print("Test set metrics:") for m in evaluation_metrics: print(m, evaluation_metrics[m]) print("---")
intro_to_sparse_data_and_embeddings_ipynb.ipynb
takasawada/tarsh
gpl-2.0
Task 2: Use a Deep Neural Network (DNN) Model The above model is a linear model. It works quite well. But can we do better with a DNN model? Let's swap in a DNNClassifier for the LinearClassifier. Run the following cell, and see how you do.
##################### Here's what we changed ##################################
classifier = tf.estimator.DNNClassifier(                                      #
  feature_columns=[tf.feature_column.indicator_column(terms_feature_column)], #
  hidden_units=[20,20],                                                       #
  optimizer=my_optimizer,                                                     #
)                                                                             #
###############################################################################

try:
  classifier.train(
    input_fn=lambda: _input_fn([train_path]),
    steps=1000)

  evaluation_metrics = classifier.evaluate(
    input_fn=lambda: _input_fn([train_path]),
    steps=1)
  print("Training set metrics:")
  for m in evaluation_metrics:
    print(m, evaluation_metrics[m])
  print("---")

  evaluation_metrics = classifier.evaluate(
    input_fn=lambda: _input_fn([test_path]),
    steps=1)
  print("Test set metrics:")
  for m in evaluation_metrics:
    print(m, evaluation_metrics[m])
  print("---")
except ValueError as err:
  print(err)
intro_to_sparse_data_and_embeddings_ipynb.ipynb
takasawada/tarsh
gpl-2.0
Task 3: Use an Embedding with a DNN Model

In this task, we'll implement our DNN model using an embedding column. An embedding column takes sparse data as input and returns a lower-dimensional dense vector as output.

NOTE: An embedding_column is usually the computationally most efficient option to use for training a model on sparse data. In an optional section at the end of this exercise, we'll discuss in more depth the implementational differences between using an embedding_column and an indicator_column, and the tradeoffs of selecting one over the other.

In the following code, do the following:

* Define the feature columns for the model using an embedding_column that projects the data into 2 dimensions (see the TF docs for more details on the function signature for embedding_column).
* Define a DNNClassifier with the following specifications:
  * Two hidden layers of 20 units each
  * Adagrad optimization with a learning rate of 0.1
  * A gradient_clip_norm of 5.0

NOTE: In practice, we might project to dimensions higher than 2, like 50 or 100. But for now, 2 dimensions is easy to visualize.

Hint
# Here's a example code snippet you might use to define the feature columns: terms_embedding_column = tf.feature_column.embedding_column(terms_feature_column, dimension=2) feature_columns = [ terms_embedding_column ]
intro_to_sparse_data_and_embeddings_ipynb.ipynb
takasawada/tarsh
gpl-2.0
Complete the Code Below
########################## YOUR CODE HERE ###################################### terms_embedding_column = # Define the embedding column feature_columns = # Define the feature columns classifier = # Define the DNNClassifier ################################################################################ classifier.train( input_fn=lambda: _input_fn([train_path]), steps=1000) evaluation_metrics = classifier.evaluate( input_fn=lambda: _input_fn([train_path]), steps=1000) print("Training set metrics:") for m in evaluation_metrics: print(m, evaluation_metrics[m]) print("---") evaluation_metrics = classifier.evaluate( input_fn=lambda: _input_fn([test_path]), steps=1000) print("Test set metrics:") for m in evaluation_metrics: print(m, evaluation_metrics[m]) print("---")
intro_to_sparse_data_and_embeddings_ipynb.ipynb
takasawada/tarsh
gpl-2.0
Solution Click below for a solution.
########################## SOLUTION CODE ######################################## terms_embedding_column = tf.feature_column.embedding_column(terms_feature_column, dimension=2) feature_columns = [ terms_embedding_column ] my_optimizer = tf.train.AdagradOptimizer(learning_rate=0.1) my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0) classifier = tf.estimator.DNNClassifier( feature_columns=feature_columns, hidden_units=[20,20], optimizer=my_optimizer ) ################################################################################# classifier.train( input_fn=lambda: _input_fn([train_path]), steps=1000) evaluation_metrics = classifier.evaluate( input_fn=lambda: _input_fn([train_path]), steps=1000) print("Training set metrics:") for m in evaluation_metrics: print(m, evaluation_metrics[m]) print("---") evaluation_metrics = classifier.evaluate( input_fn=lambda: _input_fn([test_path]), steps=1000) print("Test set metrics:") for m in evaluation_metrics: print(m, evaluation_metrics[m]) print("---")
intro_to_sparse_data_and_embeddings_ipynb.ipynb
takasawada/tarsh
gpl-2.0
Task 4: Convince yourself there's actually an embedding in there The above model used an embedding_column, and it seemed to work, but this doesn't tell us much about what's going on internally. How can we check that the model is actually using an embedding inside? To start, let's look at the tensors in the model:
classifier.get_variable_names()
intro_to_sparse_data_and_embeddings_ipynb.ipynb
takasawada/tarsh
gpl-2.0
Okay, we can see that there is an embedding layer in there: 'dnn/input_from_feature_columns/input_layer/terms_embedding/...'. (What's interesting here, by the way, is that this layer is trainable along with the rest of the model just as any hidden layer is.) Is the embedding layer the correct shape? Run the following code to find out. NOTE: Remember, in our case, the embedding is a matrix that allows us to project a 50-dimensional vector down to 2 dimensions.
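As a toy illustration (plain Python, with made-up numbers rather than the trained weights), multiplying a one-hot term vector by an embedding matrix simply selects the corresponding row of the matrix; this is exactly the projection the visualization code in Task 5 performs with the real matrix:

```python
# Hypothetical 3-term vocabulary embedded into 2 dimensions.
embedding_matrix = [
    [0.1, -0.2],
    [0.4,  0.3],
    [-0.5, 0.0],
]

def embed(term_index):
    # A one-hot vector times the matrix just picks out row `term_index`.
    one_hot = [1 if i == term_index else 0 for i in range(len(embedding_matrix))]
    return [
        sum(one_hot[i] * embedding_matrix[i][d] for i in range(len(embedding_matrix)))
        for d in range(2)
    ]

print(embed(1))  # [0.4, 0.3]
```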
classifier.get_variable_value('dnn/input_from_feature_columns/input_layer/terms_embedding/embedding_weights').shape
intro_to_sparse_data_and_embeddings_ipynb.ipynb
takasawada/tarsh
gpl-2.0
Spend some time manually checking the various layers and shapes to make sure everything is connected the way you would expect it would be.

Task 5: Examine the Embedding

Let's now take a look at the actual embedding space, and see where the terms end up in it. Do the following:

1. Run the following code to see the embedding we trained in Task 3. Do things end up where you'd expect?
2. Re-train the model by rerunning the code in Task 3, and then run the embedding visualization below again. What stays the same? What changes?
3. Finally, re-train the model again using only 10 steps (which will yield a terrible model). Run the embedding visualization below again. What do you see now, and why?
import numpy as np import matplotlib.pyplot as plt embedding_matrix = classifier.get_variable_value('dnn/input_from_feature_columns/input_layer/terms_embedding/embedding_weights') for term_index in range(len(informative_terms)): # Create a one-hot encoding for our term. It has 0s everywhere, except for # a single 1 in the coordinate that corresponds to that term. term_vector = np.zeros(len(informative_terms)) term_vector[term_index] = 1 # We'll now project that one-hot vector into the embedding space. embedding_xy = np.matmul(term_vector, embedding_matrix) plt.text(embedding_xy[0], embedding_xy[1], informative_terms[term_index]) # Do a little setup to make sure the plot displays nicely. plt.rcParams["figure.figsize"] = (15, 15) plt.xlim(1.2 * embedding_matrix.min(), 1.2 * embedding_matrix.max()) plt.ylim(1.2 * embedding_matrix.min(), 1.2 * embedding_matrix.max()) plt.show()
intro_to_sparse_data_and_embeddings_ipynb.ipynb
takasawada/tarsh
gpl-2.0
Task 6: Try to improve the model's performance

See if you can refine the model to improve performance. A couple things you may want to try:

* Changing hyperparameters, or using a different optimizer like Adam (you may only gain one or two accuracy percentage points following these strategies).
* Adding additional terms to informative_terms. There's a full vocabulary file with all 30,716 terms for this data set that you can use at: https://download.mlcc.google.com/mledu-datasets/sparse-data-embedding/terms.txt You can pick out additional terms from this vocabulary file, or use the whole thing via the categorical_column_with_vocabulary_file feature column.
# Download the vocabulary file. terms_url = 'https://download.mlcc.google.com/mledu-datasets/sparse-data-embedding/terms.txt' terms_path = tf.keras.utils.get_file(terms_url.split('/')[-1], terms_url) # Create a feature column from "terms", using a full vocabulary file. informative_terms = None with io.open(terms_path, 'r', encoding='utf8') as f: # Convert it to a set first to remove duplicates. informative_terms = list(set(f.read().split())) terms_feature_column = tf.feature_column.categorical_column_with_vocabulary_list(key="terms", vocabulary_list=informative_terms) terms_embedding_column = tf.feature_column.embedding_column(terms_feature_column, dimension=2) feature_columns = [ terms_embedding_column ] my_optimizer = tf.train.AdagradOptimizer(learning_rate=0.1) my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0) classifier = tf.estimator.DNNClassifier( feature_columns=feature_columns, hidden_units=[10,10], optimizer=my_optimizer ) classifier.train( input_fn=lambda: _input_fn([train_path]), steps=1000) evaluation_metrics = classifier.evaluate( input_fn=lambda: _input_fn([train_path]), steps=1000) print("Training set metrics:") for m in evaluation_metrics: print(m, evaluation_metrics[m]) print("---") evaluation_metrics = classifier.evaluate( input_fn=lambda: _input_fn([test_path]), steps=1000) print("Test set metrics:") for m in evaluation_metrics: print(m, evaluation_metrics[m]) print("---")
intro_to_sparse_data_and_embeddings_ipynb.ipynb
takasawada/tarsh
gpl-2.0
Include D3 script:
%%d3 <script src="https://d3js.org/d3.v3.js"></script>
visualization/day46-embed-d3.ipynb
csiu/100daysofcode
mit
Example 01: Hello world
%%d3 <div></div> <script> d3.select("div").text("Hello world") </script>
visualization/day46-embed-d3.ipynb
csiu/100daysofcode
mit
Example 02: Simple rectangle
%%d3 <g></g> <script> d3.select("g").append("svg").append("rect") .attr("x", 150) .attr("y", 50) .attr("width", 50) .attr("height", 140); </script>
visualization/day46-embed-d3.ipynb
csiu/100daysofcode
mit
Example 03: Functions, style, and polygons
%%d3 <g></g> <script> function CalculateStarPoints(centerX, centerY, arms, outerRadius, innerRadius) { var results = ""; var angle = Math.PI / arms * 2; for (var i = 0; i < 2 * arms; i++) { var r = (i & 1) == 0 ? outerRadius : innerRadius; var pointX = centerX + Math.cos(i * angle) * r; var pointY = centerY + Math.sin(i * angle) * r; // Our first time we simply append the coordinates, subsequet times // we append a ", " to distinguish each coordinate pair. if (i == 0) { results = pointX + "," + pointY; } else { results += ", " + pointX + "," + pointY; } } return results; } d3.select("g").append("svg") .append("polygon") .attr("visibility", "visible") .attr("points", CalculateStarPoints(100, 100, 5, 30, 15)); d3.select("g").append("svg") .append("polygon") .attr("visibility", "visible") .attr("points", CalculateStarPoints(100, 100, 5, 30, 15)) .style("fill", "lime") .style("stroke", "purple") .style("stroke-width", "5") .style("fill-rule","evenodd"); </script>
visualization/day46-embed-d3.ipynb
csiu/100daysofcode
mit
A kink with using Jupyter Notebook: when I load the Collision Detection example by Mike Bostock in the Jupyter Notebook, it doesn't work; nothing is rendered (see below). However, this is not an issue in JSFiddle.
%%d3 <script src="https://d3js.org/d3.v3.js"></script> This example is modified from <a href="https://bl.ocks.org/mbostock/3231298">https://bl.ocks.org/mbostock/3231298</a> <body></body> <script> var width = 500, height = 500; var nodes = d3.range(200).map(function() { return { radius: Math.random() * 12 + 4 }; }), root = nodes[0], color = d3.scale.category10(); root.radius = 0; root.fixed = true; var force = d3.layout.force() .gravity(0.05) .charge(function(d, i) { return i ? 0 : -1500; }) .nodes(nodes) .size([width, height]); force.start(); var svg = d3.select("body").append("svg") .attr("width", width) .attr("height", height); svg.selectAll("circle") .data(nodes.slice(1)) .enter().append("circle") .attr("r", function(d) { return d.radius; }) .style("fill", function(d, i) { return color(i % 3); }); force.on("tick", function(e) { var q = d3.geom.quadtree(nodes), i = 0, n = nodes.length; while (++i < n) q.visit(collide(nodes[i])); svg.selectAll("circle") .attr("cx", function(d) { return d.x; }) .attr("cy", function(d) { return d.y; }); }); svg.on("mousemove", function() { var p1 = d3.mouse(this); root.px = p1[0]; root.py = p1[1]; force.resume(); }); function collide(node) { var r = node.radius + 16, nx1 = node.x - r, nx2 = node.x + r, ny1 = node.y - r, ny2 = node.y + r; return function(quad, x1, y1, x2, y2) { if (quad.point && (quad.point !== node)) { var x = node.x - quad.point.x, y = node.y - quad.point.y, l = Math.sqrt(x * x + y * y), r = node.radius + quad.point.radius; if (l < r) { l = (l - r) / l * .5; node.x -= x *= l; node.y -= y *= l; quad.point.x += x; quad.point.y += y; } } return x1 > nx2 || x2 < nx1 || y1 > ny2 || y2 < ny1; }; } </script>
visualization/day46-embed-d3.ipynb
csiu/100daysofcode
mit
If the regular expression r that is defined below is written in the style of the lecture notes, it reads: $$(\texttt{a}\cdot\texttt{b} + \texttt{b}\cdot\texttt{a})^*$$
r = parse('(ab + ba)*') r converter = RegExp2NFA({'a', 'b'})
Python/Test-Regexp-2-NFA.ipynb
Danghor/Formal-Languages
gpl-2.0
We use converter to create a non-deterministic <span style="font-variant:small-caps;">Fsm</span> nfa that accepts the language described by the regular expression r.
nfa = converter.toNFA(r) nfa %run FSM-2-Dot.ipynb
Python/Test-Regexp-2-NFA.ipynb
Danghor/Formal-Languages
gpl-2.0
I have to use the method render below, because somehow the method display is buggy and cuts off parts of the graph.
d = nfa2dot(nfa) d.render(view=True)
Python/Test-Regexp-2-NFA.ipynb
Danghor/Formal-Languages
gpl-2.0
Python Basics: help techniques in iPython (or Jupyter). If you are reading this, you are probably using a Jupyter notebook; try running the following code.
np.array?
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
In iPython, you can easily look up the docstring (the explanatory text of a program) by appending ? when executing. If you ever find yourself thinking "I know the function, but not its arguments," give it a try. For reference, a docstring in a Python script is written as

```Python
def hogehoge():
    """
    docstring here!
    """
    return 0
```

(try running the next cell as well)
def hogehoge(): """ docstring here! """ return 0 hogehoge?
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
Your first object-oriented language. Python is an object-oriented language, and this knowledge is essential for using libraries, so let's review it briefly. Put simply, a class is "an evolution of a C struct that can contain not only variables but also functions, and supports inheritance." As an example, here is a class for a calculator with memory.
class memory_sum: c = None def __init__(self, a): self.a = a print("run __init__ ") def __call__(self, b): self.c = b + self.a print("__call__\t:", b, "+", self.a, "=", self.c) def show_sum(self): print("showsum()\t:", self.c)
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
A class is a kind of template, so we create a concrete instance of it. At that moment the constructor, the instance-initialization function def __init__(), is executed.
A = memory_sum(15)
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
When an instance is called without naming a specific method, def __call__() is executed.
A(30)
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
Of course, you can also call its methods.
A.show_sum()
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
You can also access the instance's variables directly.
A.c
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
Class inheritance means reusing an already-defined class. For example, inheriting from memory_sum and adding a subtraction feature looks like this.
class sum_sub(memory_sum): def sum(self, a, b): self.a = a self.b = b self.c = a + b print(self.c) def sub(self, a, b): self.a = a self.b = b self.c = a - b print(self.c) def show_result(self): print(self.c) B = sum_sub(30) B.sum(30, 10) B.sub(30, 10) B.show_result()
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
Because sum_sub inherits from memory_sum, the functions defined in memory_sum are also available.
B.show_sum()
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
If you know this much, you should be able to read Chainer code reasonably well. If object orientation has piqued your interest, I recommend the official Python tutorial, 『オブジェクト指向でなぜつくるのか』 by 平澤 章, and 『スッキリわかるJava入門』 by 中山 清喬 and 国本 大悟 (the link between object orientation and Java is particularly strong, so don't let the different language stop you from reading it).

Chainer: checking the activation functions

chainer.functions defines basic functions such as activation functions and loss functions.

ReLU (ramp function)

The activation function in common use today for hidden layers. It simply returns $0$ if the input is at most $0$, and outputs the input unchanged if it is at least $0$.
$$ ReLU(x)=\max(0, x) $$
If you wondered "is that even mathematically differentiable?", sharp observation: it is not differentiable in the mathematical sense, but the derivative is (reportedly) defined as
$$ \frac{d\,ReLU(x)}{dx} = \begin{cases} 1 & (0 \leq x) \\ 0 & (\text{otherwise}) \end{cases} $$
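As a plain-Python sketch (not Chainer's implementation), ReLU and the derivative convention above are just:

```python
def relu(x):
    # max(0, x): zero for non-positive inputs, identity otherwise.
    return max(0.0, x)

def relu_grad(x):
    # The convention from the text: treat the derivative at 0 as 1.
    return 1.0 if 0 <= x else 0.0

print(relu(-3.0), relu(2.5))            # 0.0 2.5
print(relu_grad(-1.0), relu_grad(0.0))  # 0.0 1.0
```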
arr = np.arange(-10, 10, 0.1) arr1 = F.relu(arr, use_cudnn=False) plt.plot(arr, arr1.data)
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
The Sigmoid function: everyone's favorite sigmoid function.
$$ sigmoid(x)= \frac{1}{1 + \exp(-x)} $$
arr = np.arange(-10, 10, 0.1) arr2 = F.sigmoid(arr, use_cudnn=False) plt.plot(arr, arr2.data)
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
The softmax function: softmax, also called the normalized exponential function, turns values into probabilities.
$$ softmax(x_i) = \frac{\exp(x_i)}{\sum_j \exp(x_j)} $$
Combined with cross entropy as the loss function, it enables multi-class classification. In the implementation of chainer.links.Classifier(), the default loss is softmax_cross_entropy.
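Here is the formula itself as a plain-Python sketch (not Chainer's F.softmax); the max-subtraction is a standard numerical-stability trick I've added, and it does not change the result:

```python
import math

def softmax(xs):
    m = max(xs)  # subtracting the max avoids overflow in exp()
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([-5.0, 0.5, 6.0, 10.0])
print(abs(sum(probs) - 1.0) < 1e-9)  # True: the outputs form a probability distribution
```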
arr = chainer.Variable(np.array([[-5.0, 0.5, 6.0, 10.0]], dtype=np.float32))
plt.plot(F.softmax(arr).data[0])
print("values after softmax: ", F.softmax(arr).data[0])
print("sum: ", sum(F.softmax(arr).data[0]))
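If you implement softmax by hand, note that $\exp$ overflows for large inputs; the standard fix is to subtract the maximum element first, which leaves the result unchanged because the common factor $\exp(-\max)$ cancels in numerator and denominator. A sketch:

```python
import numpy as np

def softmax(x):
    # subtract the max for numerical stability; the result is identical
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

x = np.array([-5.0, 0.5, 6.0, 10.0])
p = softmax(x)
print(p.sum())
print(softmax(x + 1000.0))  # same probabilities, no overflow
```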
About the Variable class

In Chainer, rather than using plain arrays, numpy arrays, or cupy arrays directly, you use the Variable class (from Chainer 1.1 onward, values are reportedly wrapped into Variable automatically). Variable makes data access, gradient computation, and so on straightforward.

Forward propagation
x1 = chainer.Variable(np.array([1]).astype(np.float32))
x2 = chainer.Variable(np.array([2]).astype(np.float32))
x3 = chainer.Variable(np.array([3]).astype(np.float32))
As a test, let's compute the expression below (the forward computation): $$ y = (x_1 - 2 x_2 - 1)^2 + (x_2 x_3 - 1)^2 + 1 $$ Substituting the parameter values: $$ y = (1 - 2 \times 2 - 1)^2 + (2 \times 3 - 1)^2 + 1 = (-4)^2 + 5^2 + 1 = 42$$
y = (x1 - 2 * x2 - 1)**2 + (x2 * x3 - 1)**2 + 1
y.data
Backpropagation — next, let's compute the derivatives of y (the backward computation).
y.backward()
$$ \frac{\partial y}{\partial x_1} = 2(x_1 - 2 x_2 - 1) = 2(1 - 2 \times 2 - 1) = -8$$
x1.grad
$$ \frac{\partial y}{\partial x_2} = -4 (x_1 - 2 x_2 - 1) + 2 x_3 ( x_2 x_3 - 1) = -4 (1 - 2 \times 2 - 1) + 2 \times 3 ( 2 \times 3 - 1) = 46 $$
x2.grad
$$ \frac{\partial y}{\partial x_3} = 2 x_2 ( x_2 x_3 - 1) = 2 \times 2 (2 \times 3 - 1) = 20$$
x3.grad
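The three gradients computed by y.backward() can be sanity-checked without Chainer using central finite differences:

```python
import numpy as np

def f(x1, x2, x3):
    return (x1 - 2 * x2 - 1) ** 2 + (x2 * x3 - 1) ** 2 + 1

x = np.array([1.0, 2.0, 3.0])
h = 1e-5
grad = np.zeros(3)
for i in range(3):
    e = np.zeros(3)
    e[i] = h
    # central difference: (f(x + h*e_i) - f(x - h*e_i)) / (2h)
    grad[i] = (f(*(x + e)) - f(*(x - e))) / (2 * h)

print(grad)  # approximately [-8, 46, 20]
```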
About the links class

chainer.links builds on chainer.Variable: a link holds its parameters as Variable objects. In a neural network, the function that carries data from one layer to the next (a linear operator) can be expressed as $$ \boldsymbol{y} = W \boldsymbol{x} + \boldsymbol{b} $$ and in Chainer it is written as follows.
l = L.Linear(2, 3)
print(l.W.data)  # weight matrix, shape (3, 2)
print(l.b.data)  # bias vector, initialized to zeros

x = chainer.Variable(np.array(range(4)).astype(np.float32).reshape(2, 2))
y = l(x)
y.data
Plugging the same values into $$ \boldsymbol{y} = W \boldsymbol{x} + \boldsymbol{b} $$ and checking confirms that the results are identical.
x.data.dot(l.W.data.T) + l.b.data  # the bias is 0, so adding it makes no difference here
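The same affine map is easy to write in plain NumPy, which makes explicit what a Linear link stores: W has shape (out_features, in_features) and b has shape (out_features,). A sketch with randomly initialized weights:

```python
import numpy as np

rng = np.random.RandomState(0)
W = rng.randn(3, 2).astype(np.float32)  # shape (out_features, in_features)
b = np.zeros(3, dtype=np.float32)       # bias, zero-initialized

x = np.arange(4, dtype=np.float32).reshape(2, 2)  # batch of 2 inputs
y = x.dot(W.T) + b  # same computation as y = Wx + b for each row of x
print(y.shape)  # (2, 3)
```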
Loading the dataset
train, test = chainer.datasets.get_mnist()
Let's check the types of train and test.
type(train)
We find it is a chainer.datasets.tuple_dataset.TupleDataset (which is in fact obvious if you look at chainer.datasets.get_mnist()). Now, what about the contents of train?
print(len(train[0][0]))
print(type(train[0][0]))
print(train[0][1])
print(type(train[0][1]))
It is simply an (image data, correct label) pair. This means the image data can be displayed as an image by applying reshape(28, 28) (note, though, that the pixel values are 0.0–1.0, not 0–255). Also, if you call chainer.datasets.get_mnist(ndim=2), the reshape becomes unnecessary.
plt.imshow(train[0][0].reshape(28, 28))
plt.gray()  # switch to gray scale
plt.grid()

train_iter = chainer.iterators.SerialIterator(train, 100)
np.shape(train_iter.dataset)
Processing the output files

```Python
# Dump a computational graph from 'loss' variable at the first iteration
# The "main" refers to the target link of the "main" optimizer.
trainer.extend(extensions.dump_graph('main/loss'))

# Write a log of evaluation statistics for each epoch
trainer.extend(extensions.LogReport())
```

Here we briefly show how to handle the **log** and **cg.dot** files produced by the code above.

log

The log is actually just a JSON file, but handling it can be a little tedious if you are not used to JSON. Here we use pandas to process the data and turn it into a graph. Assuming `import pandas as pd` has already been run and the target log is located at ./result/log,
log = pd.read_json('./result/log')
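If you want to try this step without running a training job first, the same call works on any JSON file with the same shape — a list of per-epoch records. A hypothetical example with made-up metric values:

```python
import json
import os
import tempfile

import pandas as pd

# a fake LogReport-style log: one dict per epoch
records = [
    {"epoch": 1, "main/loss": 0.25, "validation/main/loss": 0.13,
     "main/accuracy": 0.92, "validation/main/accuracy": 0.96},
    {"epoch": 2, "main/loss": 0.10, "validation/main/loss": 0.09,
     "main/accuracy": 0.97, "validation/main/accuracy": 0.97},
]
path = os.path.join(tempfile.mkdtemp(), "log")
with open(path, "w") as f:
    json.dump(records, f)

log = pd.read_json(path)
print(log.shape)  # (2, 5)
```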
that alone completes reading the JSON file. Next, looking at the log table,
log
```Python
# Print selected entries of the log to stdout
# Here "main" refers to the target link of the "main" optimizer again, and
# "validation" refers to the default name of the Evaluator extension.
# Entries other than 'epoch' are reported by the Classifier link, called by
# either the updater or the evaluator.
trainer.extend(extensions.PrintReport(
    ['epoch', 'main/loss', 'validation/main/loss',
     'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))
```

we can see that this is the same output that was printed to the console. Numbers alone are hard to interpret, so let's plot the training and test progress as a graph.
epoch = log['epoch']
plt.plot(epoch, log['main/accuracy'])
plt.plot(epoch, log['validation/main/accuracy'])
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend(loc='best')
Before running any queries using BigQuery, you need to first authenticate yourself by running the following cell. The first time you run it, it will ask you to follow a link to log in with your Gmail account and accept the data-access requests to your profile. Once this is done, it will generate a verification code, which you should paste back into the cell below and press Enter.
auth.authenticate_user()
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
Querying the MIMIC-III Demo Dataset Now we are ready to actually start following the "Cohort Selection" exercise adapted from the MIMIC cohort selection tutorial on GitHub. Because all datasets related to this Datathon are hosted on Google Cloud, there is no need to set up a local database or bring up a local Jupyter instance. Instead, we only need to connect to a BigQuery client with the desired Google Cloud project. The MIMIC-III demo data is hosted on the "datathon-datasets" project. Let's see what datasets are available in this project. For more information about BigQuery's Client object (and much more), please refer to BigQuery Documentation.
client = bigquery.Client(project='datathon-datasets')
datasets = client.list_datasets()
for dataset in datasets:
  did = dataset.dataset_id
  print('Dataset "%s" has the following tables:' % did)
  for table in client.list_tables(client.dataset(did)):
    print('  ' + table.table_id)
Another way to list all BigQuery tables in a Google Cloud project is to go to the BigQuery site directly, e.g. https://bigquery.cloud.google.com/welcome/datathon-datasets. On the left panel you will see the mimic_demo dataset, and once you click to expand it you will see the table names listed above. To view the details of a table, simply click on it (for example the icustays table); then, on the right side of the window, you will have the option to see its schema, metadata, and a preview of rows. The data-hosting project datathon-datasets is read-only, and as of May 2018 no longer grants job-running permission. As a result, you need to set a default project that you have BigQuery access to. For this example we use datathon-client-00, but you may need to create your own GCP project if you don't have access to this one.
#@title Setting default project ID (you may need to create your own) {display-mode: "both"}
project_id = 'datathon-client-00'  # @param
os.environ["GOOGLE_CLOUD_PROJECT"] = project_id
Let's now run some queries adapted from the MIMIC cohort selection tutorial. First, let's preview the subject_id, hadm_id, and icustay_id columns of the icustays table.
def run_query(query):
  return pd.io.gbq.read_gbq(query, project_id=project_id, verbose=False,
                            configuration={'query': {'useLegacySql': False}})

run_query('''
SELECT subject_id, hadm_id, icustay_id
FROM `datathon-datasets.mimic_demo.icustays`
LIMIT 10
''')
The LIMIT 10 clause in the query is handy for limiting the size of the output frame while you are writing a query, making it easier to view; once the query is finalized, the clause can be dropped to run over the whole dataset. One thing to note is that even with the LIMIT clause, running a query may still incur a cost, up to that of the full query without LIMIT, so the best way to preview data in a table is the preview tab in the BigQuery interface. Rest assured, though, that Google is sponsoring this Datathon event, so there will not be any cost for running queries in the provided Datathon projects during the event. Now let us try some Google SQL functions. Please consult the reference page at https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators for all available functions and operators. Here is an example of how TIMESTAMP_DIFF can be used to find how many hours each patient's ICU stay lasted. Notice how the query result is stored and used as a pandas dataframe.
df = run_query('''
SELECT subject_id, hadm_id, icustay_id, intime, outtime,
       TIMESTAMP_DIFF(outtime, intime, HOUR) AS icu_stay_hours
FROM `datathon-datasets.mimic_demo.icustays`
LIMIT 10
''')
df.head()
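The same hour difference is easy to re-derive client-side with pandas once a frame with intime/outtime columns is in hand. A sketch with made-up sample timestamps (column names match the query above; integer division mirrors BigQuery's truncation for positive differences):

```python
import pandas as pd

# hypothetical sample data in the same shape as the query result
df = pd.DataFrame({
    'intime':  pd.to_datetime(['2101-10-20 19:10:11', '2101-10-21 03:00:00']),
    'outtime': pd.to_datetime(['2101-10-26 20:43:09', '2101-10-22 03:30:00']),
})

# whole hours between the two timestamps, truncated like TIMESTAMP_DIFF(..., HOUR)
df['icu_stay_hours'] = ((df['outtime'] - df['intime'])
                        .dt.total_seconds() // 3600).astype(int)
print(df['icu_stay_hours'].tolist())  # [145, 24]
```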
Here is the BigQuery query to list some patients whose ICU stay is at least 2 days. Note that you can use AS in the SELECT clause to rename a field in the output, and you can omit the table prefix if there is no ambiguity.
run_query('''
WITH co AS (
  SELECT subject_id, hadm_id, icustay_id,
         TIMESTAMP_DIFF(outtime, intime, DAY) AS icu_length_of_stay
  FROM `datathon-datasets.mimic_demo.icustays`
  LIMIT 10)
SELECT subject_id, co.hadm_id AS hadm_ID, co.icustay_id, co.icu_length_of_stay
FROM co
WHERE icu_length_of_stay >= 2
''')
Now, instead of filtering out ICU stays of one day or less, let's label every ICU stay with an integer: 1 for stays shorter than 2 days (flagging them for exclusion), and 0 for stays of 2 days or more. The resulting table is called a "cohort table" in the original MIMIC-III tutorial.
run_query('''
WITH co AS (
  SELECT subject_id, hadm_id, icustay_id,
         TIMESTAMP_DIFF(outtime, intime, DAY) AS icu_length_of_stay
  FROM `datathon-datasets.mimic_demo.icustays`
  LIMIT 10)
SELECT subject_id, hadm_id, icustay_id, icu_length_of_stay,
       IF(icu_length_of_stay < 2, 1, 0) AS exclusion_los
FROM co
ORDER BY icustay_id
''')
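The IF(...) labeling maps directly to numpy.where on a dataframe, should you prefer to do the exclusion flagging in pandas after pulling the raw stays (the sample values here are made up):

```python
import numpy as np
import pandas as pd

# hypothetical cohort frame with stay lengths in days
co = pd.DataFrame({'icustay_id': [1, 2, 3],
                   'icu_length_of_stay': [0, 2, 5]})

# 1 flags stays shorter than 2 days for exclusion, 0 keeps the rest
co['exclusion_los'] = np.where(co['icu_length_of_stay'] < 2, 1, 0)
print(co['exclusion_los'].tolist())  # [1, 0, 0]
```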
Let's now try a query that requires table joining: include the patient's age at the time of ICU admittance. This is computed by the date difference in years between the ICU intime and the patient's date of birth. The former is available in the icustays table, and the latter resides in the dob column of the patients table.
run_query('''
WITH co AS (
  SELECT icu.subject_id, icu.hadm_id, icu.icustay_id, pat.dob,
         TIMESTAMP_DIFF(icu.outtime, icu.intime, DAY) AS icu_length_of_stay,
         DATE_DIFF(DATE(icu.intime), DATE(pat.dob), YEAR) AS age
  FROM `datathon-datasets.mimic_demo.icustays` AS icu
  INNER JOIN `datathon-datasets.mimic_demo.patients` AS pat
    ON icu.subject_id = pat.subject_id
  LIMIT 10)
SELECT subject_id, hadm_id, icustay_id, icu_length_of_stay, co.age,
       IF(icu_length_of_stay < 2, 1, 0) AS exclusion_los
FROM co
''')
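The INNER JOIN maps to pandas.merge if you prefer to combine the two tables client-side. A sketch with two made-up patients (DATE_DIFF with YEAR counts calendar-year boundaries crossed, so subtracting the year components reproduces it):

```python
import pandas as pd

# hypothetical rows in the shape of the icustays and patients tables
icu = pd.DataFrame({'subject_id': [10006, 10011],
                    'intime': pd.to_datetime(['2164-10-23', '2126-08-14'])})
pat = pd.DataFrame({'subject_id': [10006, 10011],
                    'dob': pd.to_datetime(['2094-03-05', '2090-06-05'])})

# INNER JOIN ... ON icu.subject_id = pat.subject_id
merged = icu.merge(pat, on='subject_id', how='inner')
merged['age'] = merged['intime'].dt.year - merged['dob'].dt.year
print(merged['age'].tolist())  # [70, 36]
```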
It is somewhat surprising to see a patient whose age is 300! This raises the question of whether the age distribution of all patients is sane. We can verify this by querying the quantiles of patients' ages. Notice that we have removed the LIMIT 10 clause in the inner query, but the result is only one row, containing an array of 11 integer ages.
run_query('''
WITH co AS (
  SELECT DATE_DIFF(DATE(icu.intime), DATE(pat.dob), YEAR) AS age
  FROM `datathon-datasets.mimic_demo.icustays` AS icu
  INNER JOIN `datathon-datasets.mimic_demo.patients` AS pat
    ON icu.subject_id = pat.subject_id)
SELECT APPROX_QUANTILES(age, 10) AS age_quantiles
FROM co
''')
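APPROX_QUANTILES(age, 10) returns 11 boundaries: the minimum, the nine interior deciles, and the maximum. The same summary can be computed exactly with NumPy once the ages are in memory (the ages below are illustrative, not real query output):

```python
import numpy as np

# hypothetical ages, including the suspicious 300
ages = np.array([17, 49, 55, 62, 70, 75, 80, 84, 88, 300])

# 0%, 10%, ..., 100% — the same 11 boundaries APPROX_QUANTILES(age, 10) returns
quantiles = np.quantile(ages, np.linspace(0, 1, 11))
print(len(quantiles), quantiles[0], quantiles[-1])  # 11 17.0 300.0
```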
The result says that the minimum age (0th percentile) is 17, the 10th percentile is 49, the 20th percentile is 62, and so on, with 300 as the maximum (100th percentile). The distribution looks reasonable, and the 300s are not collection errors: in MIMIC-III, patients older than 89 have their dates of birth shifted for de-identification, which makes their apparent age come out around 300.
run_query('''
SELECT DATE_DIFF(DATE(icu.intime), DATE(pat.dob), YEAR) AS age
FROM `datathon-datasets.mimic_demo.icustays` AS icu
INNER JOIN `datathon-datasets.mimic_demo.patients` AS pat
  ON icu.subject_id = pat.subject_id
ORDER BY age DESC
LIMIT 10
''')