Welp. Seems like that trip-identification code is a little too basic. Let's investigate what the problem might be:
sql = '''
WITH trips AS (
    SELECT lineid, trip_id,
           EXTRACT('minutes' FROM MAX(estimated_arrival) - MIN(estimated_arrival)) AS trip_duration
    FROM test_day_final
    GROUP BY lineid, trip_id
)
SELECT * FROM trips ORDER BY trip_duration LIMIT 10
'''
pandasql.read_sql(sql, con)

sql = '''WITH one_stop_trips AS(SELECT lineid, trip_id...
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0
So for the most part we seem to have issues with identifying trip start/end at the termini.

| Line | One-stop trips at termini |
|------|--------------------------:|
| 1    | 766 |
| 2    | 481 |
| 4    | 791 |

So approximately half of the "extra trips" are one-stop trips at termini. Let's see the overall distribution of number ...
sql = '''
WITH inferred_trips AS (
    SELECT lineid, trip_id, COUNT(1) AS stops
    FROM test_day_final
    GROUP BY lineid, trip_id
),
inferred_trip_length AS (
    SELECT lineid, stops, COUNT(trip_id) AS obs_trips
    FROM inferred_trips
    GROUP BY lineid, stops
),
gtfs_trip_lengths AS (
    SELECT route_short_name::INT AS lineid, trip_id, COUNT(1...
So we are certainly getting 1-, 2-, and 3-stop trips that shouldn't exist, and undercounting the more appropriate number of trips. The one-stop trips are primarily at termini. What is happening with the 2- and 3-stop trips...?
sql_2_3 = '''
SELECT stops, COUNT(1)
FROM (
    SELECT array_agg(station_char ORDER BY estimated_arrival) AS stops
    FROM test_day_final
    WHERE lineid = 1
    GROUP BY trip_id
    HAVING COUNT(1) = 2 OR COUNT(1) = 3
) grou...
The top "trips" are from Bloor-Spadina to Yonge via St. George and vice versa, but these are stations on line 2... Trying to group arrival-departure times first: assuming a maximum time for any given train to dwell at a station, we will test this procedure on lines 1-2 first.
sql = '''
CREATE MATERIALIZED VIEW test_day_stop_arrival AS
SELECT trainid, lineid, traindirection, stationid, station_char,
       MIN(create_date + timint * interval '1 minute') AS expected_arrival,
       timint, train_message
FROM test_day
WHERE (timint < 1 OR tr...
The actual conversion is done with rawpy and Pillow which are imported below.
import rawpy
import PIL
simple-convert/simple-convert.ipynb
neothemachine/rawpy-notebooks
unlicense
Opening the RAW image: opening a RAW image is as simple as calling rawpy.imread.
raw = rawpy.imread('../images/RAW_NIKON_D3X.NEF')
Note that imread() behaves similarly to Python's built-in open() function, meaning that the opened file has to be closed again later on. Processing the RAW image: when processing RAW images we have to decide how to handle the white balance. A common option is to just use the white balance values that are stored in the RAW...
rgb = raw.postprocess(use_camera_wb=True)
The return value of postprocess() is a numpy array which we can display with matplotlib's imshow() function.
print(rgb.dtype, rgb.shape)
imshow(rgb)
If the camera white balance does not look right, then it can also be estimated from the image itself with the use_auto_wb parameter.
rgb2 = raw.postprocess(use_auto_wb=True)
imshow(rgb2)
In this example the white balance values stored by the camera look more natural, so we will use the first version. Saving the processed image: saving the processed image (a numpy array) in a standard format is easily done with Pillow.
PIL.Image.fromarray(rgb).save('image.jpg', quality=90, optimize=True)
PIL.Image.fromarray(rgb).save('image.tiff')
Closing the RAW image: it is important to close the RAW image again after we are done with processing.
raw.close()
Using context managers: rawpy also supports context managers for opening/closing RAW images. In that case, the conversion code looks like the following.
with rawpy.imread('../images/RAW_NIKON_D3X.NEF') as raw:
    rgb = raw.postprocess(use_camera_wb=True)
    PIL.Image.fromarray(rgb).save('image.jpg')
Pytorch Introduction

```bash
# installation on a mac; for more information on installation refer to the following link: http://pytorch.org/
conda install pytorch torchvision -c pytorch
```

At its core, PyTorch provides two main features: An n-dimensional Tensor, similar to a numpy array but able to run on GPUs. PyTorch provid...
# make up some training data and specify the type to be float, i.e. np.float32.
# We do NOT recommend double (np.float64), especially on the GPU: GPUs have poor
# double-precision performance since they are optimized for float32
X_train = np.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, ...
deep_learning/rnn/1_pytorch_rnn.ipynb
ethen8181/machine-learning
mit
Here we start defining the linear regression model, recall that in linear regression, we are optimizing for the squared loss. \begin{align} L = \frac{1}{2}(y-(Xw + b))^2 \end{align}
# with linear regression, we apply a linear transformation
# to the incoming data, i.e. y = Xw + b; here we only have
# 1-dimensional data, thus the feature size will be 1
model = nn.Linear(in_features=1, out_features=1)

# although we can write our own loss function, the nn module
# also contains definitions of popu...
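Before reaching for nn.Linear and autograd, the squared-loss optimization described above can be sketched as plain gradient descent in numpy. This is a minimal illustration with a hypothetical toy dataset, not the notebook's actual training code:

```python
import numpy as np

# Minimal sketch: fit y = w*x + b by gradient descent on the squared loss
# L = 0.5 * (y - (X*w + b))**2, averaged over a hypothetical toy dataset
X = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * X + 1.0  # ground truth: w = 2, b = 1

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    err = (w * X + b) - y       # dL/dprediction
    w -= lr * (err * X).mean()  # dL/dw
    b -= lr * err.mean()        # dL/db

print(w, b)  # converges close to 2.0 and 1.0
```

PyTorch's autograd computes exactly these gradients for us, which is the point of the nn.Linear version above.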
Linear Regression Version 2: a better way of defining our model is to inherit from the nn.Module class. To use it, all we need to do is define our model's forward pass; nn.Module will automatically provide the backward method for us, with gradients computed using autograd.
class LinearRegression(nn.Module):

    def __init__(self, in_features, out_features):
        super().__init__()  # boilerplate call
        self.in_features = in_features
        self.out_features = out_features
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        out = self...
After training our model, we can also save the model's parameters and load them back into the model in the future.
checkpoint_path = 'model.pkl'
torch.save(model.state_dict(), checkpoint_path)
model.load_state_dict(torch.load(checkpoint_path))

y_pred = model(X).detach().numpy()
plt.plot(X_train, y_train, 'ro', label='Original data')
plt.plot(X_train, y_pred, label='Fitted line')
plt.legend()
plt.show()
Logistic Regression Let's now look at a classification example, here we'll define a logistic regression that takes in a bag of words representation of some text and predicts over two labels "English" and "Spanish".
# define some toy dataset
train_data = [
    ('me gusta comer en la cafeteria'.split(), 'SPANISH'),
    ('Give it to me'.split(), 'ENGLISH'),
    ('No creo que sea una buena idea'.split(), 'SPANISH'),
    ('No it is not a good idea to get lost at sea'.split(), 'ENGLISH')
]
test_data = [
    ('Yo creo que si'.split(), ...
The next code chunk creates word-to-index mappings. To build our bag of words (BoW) representation, we need to assign each word in our vocabulary a unique index. Let's say our entire corpus only consists of two words "hello" and "world", with "hello" corresponding to index 0 and "world" to index 1. Then the BoW vector...
idx_to_label = ['SPANISH', 'ENGLISH']
label_to_idx = {"SPANISH": 0, "ENGLISH": 1}

word_to_idx = {}
for sent, _ in train_data + test_data:
    for word in sent:
        if word not in word_to_idx:
            word_to_idx[word] = len(word_to_idx)

print(word_to_idx)
VOCAB_SIZE = len(word_to_idx)
NUM_LABELS = len(label_t...
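To make the mapping concrete, here is a pure-Python sketch of the two-word "hello"/"world" example described above (the names make_bow_vector and toy_vocab are hypothetical, not part of the notebook):

```python
def make_bow_vector(sentence, vocab):
    # bag of words: count how often each vocabulary word occurs
    vec = [0] * len(vocab)
    for word in sentence:
        vec[vocab[word]] += 1
    return vec

# toy vocabulary from the prose above: "hello" -> 0, "world" -> 1
toy_vocab = {'hello': 0, 'world': 1}
print(make_bow_vector('hello world hello'.split(), toy_vocab))  # [2, 1]
```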
Next we define our model by inheriting from nn.Module, along with two helper functions that convert our data to torch Tensors so we can use them during training.
class BoWClassifier(nn.Module):

    def __init__(self, vocab_size, num_labels):
        super().__init__()
        self.linear = nn.Linear(vocab_size, num_labels)

    def forward(self, bow_vector):
        """
        When we're performing a classification, after passing
        through the linear layer or also known...
We are now ready to train this!
model = BoWClassifier(VOCAB_SIZE, NUM_LABELS)

# note that instead of using NLLLoss (negative log likelihood),
# we could have used CrossEntropyLoss and remove the log_softmax
# function call in our forward method. The CrossEntropyLoss docstring
# explicitly states that this criterion combines `LogSoftMax` and
# `NLLLo...
Recurrent Neural Network (RNN): the idea behind an RNN is to make use of the sequential information that exists in our dataset. In a feedforward neural network, we assume that all inputs and outputs are independent of each other. But for some tasks, this might not be the best way to tackle the problem. For example, in Natural La...
torch.manual_seed(777)

# suppose we have a one-hot encoding for each char in 'hello',
# and the sequence length for the word 'hello' is 5
seq_len = 5
h = [1, 0, 0, 0]
e = [0, 1, 0, 0]
l = [0, 0, 1, 0]
o = [0, 0, 0, 1]

# here we specify a single RNN cell with the property of
# input_dim (4) -> output_dim (2)
# batch_...
In the next section, we'll teach our RNN to produce "ihello" from "hihell".
# create an index to character mapping
idx2char = ['h', 'i', 'e', 'l', 'o']

# Teach hihell -> ihello
x_data = [[0, 1, 0, 2, 3, 3]]    # hihell
x_one_hot = [[[1, 0, 0, 0, 0],   # h 0
              [0, 1, 0, 0, 0],   # i 1
              [1, 0, 0, 0, 0],   # h 0
              [0, 0, 1, 0, 0],   # e 2
              [0, 0,...
LSTM: the example below uses an LSTM to generate part-of-speech tags. The usage of the LSTM API is essentially the same as the RNN we were using in the last section. Except in this example, we will prepare the word-to-index mapping ourselves, and as for the modeling part, we will add an embedding layer before the LSTM layer,...
# These will usually be more like 32 or 64 dimensional.
# We will keep them small for this toy example
EMBEDDING_SIZE = 6
HIDDEN_SIZE = 6

training_data = [
    ("The dog ate the apple".split(), ["DET", "NN", "V", "DET", "NN"]),
    ("Everybody read that book".split(), ["NN", "V", "DET", "NN"])
]
idx_to_tag = ['DET', ...
Start with MTBLS315, the malaria vs fever dataset. Could get ~0.85 AUC for whole dataset.
# Get the data
### Subdivide the data into a feature table
local_path = '/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/'
data_path = local_path + '/revo_healthcare/data/processed/MTBLS315/'\
    'uhplc_pos/xcms_camera_results.csv'

## Import the data and remove extraneous columns
df = pd.read_csv(data_path, index_col=0)
df...
notebooks/Effects_of_retention_time_on_classification/retention_time_regions_and_classifiiability.ipynb
irockafe/revo_healthcare
mit
Almost everything is below a 30-second rt window.
# Show me a scatterplot of m/z vs. rt dots and the
# distribution along the mass axis and rt axis
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter

plt.scatter(df['mz'], df['rt'], s=normalized_intensities * 100)
plt.xlabel('mz')
plt.ylabel('rt')
plt.title('mz vs. rt')
plt.show()

def pl...
<h2> Show me the distribution of features from alzheimers dataset </h2>
### Subdivide the data into a feature table
local_path = '/home/irockafe/Dropbox (MIT)/Alm_Lab/'\
    'projects'
data_path = local_path + '/revo_healthcare/data/processed/MTBLS72/positive_mode/'\
    'mtbls_no_retcor_bw2.csv'

## Import the data and remove extraneous columns
df = pd.read_csv(data_path, index_col=0)
df.shape
df....
Question 1: Loading the Dataset (Glass)
# Loading the Wine dataset (https://archive.ics.uci.edu/ml/datasets/Wine)
data = pd.read_csv("wine.data")
X = data.iloc[:, 1:].values
y = data.iloc[:, 0].values

# Pre-processing the data (for PCA)
X = (X - X.mean(axis=0)) / X.std(axis=0)
2017/09-clustering/cl_otacilio_bezerra.ipynb
abevieiramota/data-science-cookbook
mit
Data Visualization
# Plotting a 3-dimensional visualization of the data.
# We can observe that the data (in 3 dimensions) heavily overlap
pcaData = PCA(n_components=3).fit_transform(X)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(pcaData[:, 0], pcaData[:, 1], pcaData[:, 2], c=y, cmap=plt.cm.Dark2)
plt...
Applying K-Means
# Create the KMeans object
kmeans = KMeans(n_clusters=2, random_state=0)

# Run the clustering
kmeans.fit(X)
clts = kmeans.predict(X)

# Plotting a 3-dimensional visualization of the data, now with the clusters assigned by K-Means.
# Compare this visualization with the plot in the cell above
fig = plt.f...
Evaluation Metrics
# We use three cluster-evaluation metrics, computed against the known class labels:
# -> Homogeneity: how close each cluster comes to containing only members of a single class
# -> Completeness: how close all members of a given class come to being assigned to the same cluster
# -> V-Me...
Question 2: Implementing the Elbow Method
# Elbow method based on inertia (the sum of squared intra-cluster distances of each point)
numK = np.arange(1, 10)
inertias = []
for i in numK:
    print(".", end="")
    kmeans.n_clusters = i
    kmeans.fit(X)
    inertias.append(kmeans.inertia_)

# Plots
plt.figure()
plt.title("Elbow Method")
plt...
Question 3
# Run the clustering again, now with the selected number of clusters
kmeans.n_clusters = 3
kmeans.fit(X)
clts = kmeans.predict(X)

# Display the evaluation metrics
homoScore = metrics.homogeneity_score(y, clts)
complScore = metrics.completeness_score(y, clts)
vMeasureScore = metrics.v_measure_score(y, c...
Load Protein Interactions Select columns of interest and drop empty rows.
url1 = 'https://s3-us-west-1.amazonaws.com/graphistry.demo.data/BIOGRID-ALL-3.3.123.tab2.txt.gz'
rawdata = pandas.read_table(url1, na_values=['-'], engine='c', compression='gzip')

# If using local data, comment the two lines above and uncomment the line below
# pandas.read_table('./data/BIOGRID-ALL-3.3.123.tab2.txt', ...
demos/demos_by_use_case/bio/BiogridDemo.ipynb
graphistry/pygraphistry
bsd-3-clause
Let's have a quick peek at the data. Bind the columns storing the source/destination of each edge. This is the bare minimum to create a visualization.
g = graphistry.bind(source="BioGRID ID Interactor A", destination="BioGRID ID Interactor B")
g.plot(interactions.sample(10000))
A Fancier Visualization With Custom Labels and Colors: let's look up the name and organism of each protein in the BioGrid identification DB.
# This downloads 170 MB, it might take some time.
url2 = 'https://s3-us-west-1.amazonaws.com/graphistry.demo.data/BIOGRID-IDENTIFIERS-3.3.123.tab.txt.gz'
raw_proteins = pandas.read_table(url2, na_values=['-'], engine='c', compression='gzip')

# If using local data, comment the two lines above and uncomment the line bel...
We extract the proteins referenced as either sources or targets of interactions.
source_proteins = interactions[["BioGRID ID Interactor A", "Official Symbol Interactor A"]].copy() \
    .rename(columns={'BioGRID ID Interactor A': 'BIOGRID_ID',
                     'Official Symbol Interactor A': 'SYMBOL'})
target_proteins = interactions[["BioGRI...
We join on the identification DB to get the organism to which each protein belongs.
protein_labels = pandas.merge(all_proteins, protein_ids, how='left',
                              left_on='BIOGRID_ID', right_on='BIOGRID_ID')
protein_labels[:3]
We assign colors to proteins based on their organism.
colors = protein_labels.ORGANISM.unique().tolist()
protein_labels['Color'] = protein_labels.ORGANISM.map(lambda x: colors.index(x))
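The same organism-to-index mapping can be built with a dict lookup, which avoids the O(n) colors.index(x) call on every row. A small self-contained sketch with made-up organisms standing in for the real protein_labels frame:

```python
import pandas as pd

# Hypothetical mini-frame standing in for protein_labels
df = pd.DataFrame({'ORGANISM': ['Human', 'Yeast', 'Human', 'Mouse']})

# dict lookup: O(1) per row instead of list.index's O(n);
# unique() preserves order of first appearance
codes = {org: i for i, org in enumerate(df.ORGANISM.unique())}
df['Color'] = df.ORGANISM.map(codes)
print(df['Color'].tolist())  # [0, 1, 0, 2]
```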
For convenience, let's add links to PubMed and RCSB.
def makeRcsbLink(id):
    if isinstance(id, str):
        url = 'http://www.rcsb.org/pdb/gene/' + id.upper()
        return '<a target="_blank" href="%s">%s</a>' % (url, id.upper())
    else:
        return 'n/a'

protein_labels.SYMBOL = protein_labels.SYMBOL.map(makeRcsbLink)
protein_labels[:3]

def makePubmedLink...
Plotting We bind columns to labels and colors and we are good to go.
# This will upload ~10MB of data, be patient!
g2 = g.bind(node='BIOGRID_ID', edge_title='Author', point_title='SYMBOL', point_color='Color')
g2.plot(interactions, protein_labels)
Intro to Sparse Data and Embeddings Learning Objectives: * Convert movie-review string data to a sparse feature vector * Implement a sentiment-analysis linear model using a sparse feature vector * Implement a sentiment-analysis DNN model using an embedding that projects data into two dimensions * Visualize the embeddin...
from __future__ import print_function

import collections
import io
import math
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
from IPython import display
from sklearn import metrics

tf.logging.set_verbosity(tf.logging.ERROR)

train_url = 'https://download.mlcc.google.co...
intro_to_sparse_data_and_embeddings_ipynb.ipynb
takasawada/tarsh
gpl-2.0
Building a Sentiment Analysis Model Let's train a sentiment-analysis model on this data that predicts if a review is generally favorable (label of 1) or unfavorable (label of 0). To do so, we'll turn our string-value terms into feature vectors by using a vocabulary, a list of each term we expect to see in our data. For...
def _parse_function(record):
    """Extracts features and labels.

    Args:
      record: File path to a TFRecord file
    Returns:
      A `tuple` `(labels, features)`:
        features: A dict of tensors representing the features
        labels: A tensor with the corresponding labels.
    """
    features = {
        "terms": tf.Va...
To confirm our function is working as expected, let's construct a TFRecordDataset for the training data, and map the data to features and labels using the function above.
# Create the Dataset object.
ds = tf.data.TFRecordDataset(train_path)
# Map features and labels with the parse function.
ds = ds.map(_parse_function)
ds
Run the following cell to retrieve the first example from the training data set.
n = ds.make_one_shot_iterator().get_next()
sess = tf.Session()
sess.run(n)
Now, let's build a formal input function that we can pass to the train() method of a TensorFlow Estimator object.
# Create an input_fn that parses the tf.Examples from the given files,
# and split them into features and targets.
def _input_fn(input_filenames, num_epochs=None, shuffle=True):
    # Same code as above; create a dataset and map features and labels.
    ds = tf.data.TFRecordDataset(input_filenames)
    ds = ds.map(_parse_...
Task 1: Use a Linear Model with Sparse Inputs and an Explicit Vocabulary For our first model, we'll build a LinearClassifier model using 50 informative terms; always start simple! The following code constructs the feature column for our terms. The categorical_column_with_vocabulary_list function creates a feature colum...
# 50 informative terms that compose our model vocabulary
informative_terms = ("bad", "great", "best", "worst", "fun", "beautiful",
                     "excellent", "poor", "boring", "awful", "terrible",
                     "definitely", "perfect", "liked", "worse", "waste",
                     "entertaining", "love...
Next, we'll construct the LinearClassifier, train it on the training set, and evaluate it on the evaluation set. After you read through the code, run it and see how you do.
my_optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)

feature_columns = [ terms_feature_column ]

classifier = tf.estimator.LinearClassifier(
    feature_columns=feature_columns,
    optimizer=my_optimizer,
)

classifier.train(
    input_fn...
Task 2: Use a Deep Neural Network (DNN) Model The above model is a linear model. It works quite well. But can we do better with a DNN model? Let's swap in a DNNClassifier for the LinearClassifier. Run the following cell, and see how you do.
##################### Here's what we changed ##################################
classifier = tf.estimator.DNNClassifier(                                      #
    feature_columns=[tf.feature_column.indicator_column(terms_feature_column)], #
    hidden_units=[20,20],                                                     #
    ...
Task 3: Use an Embedding with a DNN Model In this task, we'll implement our DNN model using an embedding column. An embedding column takes sparse data as input and returns a lower-dimensional dense vector as output. NOTE: An embedding_column is usually the computationally most efficient option to use for training a mod...
# Here's an example code snippet you might use to define the feature columns:
terms_embedding_column = tf.feature_column.embedding_column(terms_feature_column, dimension=2)
feature_columns = [ terms_embedding_column ]
Complete the Code Below
########################## YOUR CODE HERE ######################################
terms_embedding_column = # Define the embedding column
feature_columns = # Define the feature columns
classifier = # Define the DNNClassifier
################################################################################
classifier.tra...
Solution Click below for a solution.
########################## SOLUTION CODE ########################################
terms_embedding_column = tf.feature_column.embedding_column(terms_feature_column, dimension=2)
feature_columns = [ terms_embedding_column ]

my_optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
my_optimizer = tf.contrib.estimator.c...
Task 4: Convince yourself there's actually an embedding in there The above model used an embedding_column, and it seemed to work, but this doesn't tell us much about what's going on internally. How can we check that the model is actually using an embedding inside? To start, let's look at the tensors in the model:
classifier.get_variable_names()
Okay, we can see that there is an embedding layer in there: 'dnn/input_from_feature_columns/input_layer/terms_embedding/...'. (What's interesting here, by the way, is that this layer is trainable along with the rest of the model just as any hidden layer is.) Is the embedding layer the correct shape? Run the following c...
classifier.get_variable_value('dnn/input_from_feature_columns/input_layer/terms_embedding/embedding_weights').shape
Spend some time manually checking the various layers and shapes to make sure everything is connected the way you would expect it would be. Task 5: Examine the Embedding Let's now take a look at the actual embedding space, and see where the terms end up in it. Do the following: 1. Run the following code to see the embed...
import numpy as np
import matplotlib.pyplot as plt

embedding_matrix = classifier.get_variable_value('dnn/input_from_feature_columns/input_layer/terms_embedding/embedding_weights')

for term_index in range(len(informative_terms)):
    # Create a one-hot encoding for our term. It has 0s everywhere, except for
    # a single...
Task 6: Try to improve the model's performance. See if you can refine the model to improve performance. A couple of things you may want to try: changing hyperparameters, or using a different optimizer like Adam (you may only gain one or two accuracy percentage points with these strategies); adding additional terms t...
# Download the vocabulary file.
terms_url = 'https://download.mlcc.google.com/mledu-datasets/sparse-data-embedding/terms.txt'
terms_path = tf.keras.utils.get_file(terms_url.split('/')[-1], terms_url)

# Create a feature column from "terms", using a full vocabulary file.
informative_terms = None
with io.open(terms_path,...
Include D3 script:
%%d3
<script src="https://d3js.org/d3.v3.js"></script>
visualization/day46-embed-d3.ipynb
csiu/100daysofcode
mit
Example 01: Hello world
%%d3
<div></div>
<script>
d3.select("div").text("Hello world")
</script>
Example 02: Simple rectangle
%%d3
<g></g>
<script>
d3.select("g").append("svg").append("rect")
    .attr("x", 150)
    .attr("y", 50)
    .attr("width", 50)
    .attr("height", 140);
</script>
Example 03: Functions, style, and polygons
%%d3
<g></g>
<script>
function CalculateStarPoints(centerX, centerY, arms, outerRadius, innerRadius) {
    var results = "";
    var angle = Math.PI / arms * 2;
    for (var i = 0; i < 2 * arms; i++) {
        var r = (i & 1) == 0 ? outerRadius : innerRadius;
        var pointX = centerX + Math.cos(i * angle) * r;
        var pointY ...
A kink with using Jupyter Notebook: when I load the Collision Detection example by Mike Bostock in the Jupyter Notebook, it doesn't work; nothing is rendered (see the following). However, this is not an issue with JSFiddle.
%%d3
<script src="https://d3js.org/d3.v3.js"></script>
This example is modified from <a href="https://bl.ocks.org/mbostock/3231298">https://bl.ocks.org/mbostock/3231298</a>
<body></body>
<script>
var width = 500,
    height = 500;

var nodes = d3.range(200).map(function() {
    return { radius: Math.random() * ...
If the regular expression r that is defined below is written in the style of the lecture notes, it reads: $$(\texttt{a}\cdot\texttt{b} + \texttt{b}\cdot\texttt{a})^*$$
r = parse('(ab + ba)*')
r

converter = RegExp2NFA({'a', 'b'})
Python/Test-Regexp-2-NFA.ipynb
Danghor/Formal-Languages
gpl-2.0
We use converter to create a non-deterministic <span style="font-variant:small-caps;">Fsm</span> nfa that accepts the language described by the regular expression r.
nfa = converter.toNFA(r)
nfa

%run FSM-2-Dot.ipynb
I have to use the method render below because the method display is somehow buggy and cuts off parts of the graph.
d = nfa2dot(nfa)
d.render(view=True)
Python Basics: help techniques in iPython (or jupyter). If you are reading this, you are presumably using jupyter notebook; try running the following code.
np.array?
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
In iPython, appending ? and executing lets you easily view the Docstring (the documentation for a piece of code). Try it when you know the function but cannot remember its arguments. A Docstring in a Python script is written as

```python
def hogehoge():
    """ docstring here! """
    return 0
```

(Try running the next cell as well.)
def hogehoge():
    """ docstring here! """
    return 0

hogehoge?
A first object-oriented language: Python is an object-oriented language, and this knowledge is essential for using libraries, so here is a quick review. Put simply, a class is "a C struct, extended so that it can hold functions as well as variables, and can be inherited from." Using a calculator with memory as an example, we prepared the following class.
class memory_sum:
    c = None

    def __init__(self, a):
        self.a = a
        print("run __init__ ")

    def __call__(self, b):
        self.c = b + self.a
        print("__call__\t:", b, "+", self.a, "=", self.c)

    def show_sum(self):
        print("showsum()\t:", self.c)
A class is a kind of template, so we create a concrete object (an instance). At that point the constructor, the instance-initialization function def __init__(), is executed.
A = memory_sum(15)
When the instance is called directly, without naming a method, def __call__() is executed.
A(30)
Of course, you can also call its methods.
A.show_sum()
You can also access the instance's variables directly.
A.c
Inheritance means reusing an already-defined class. As an example, inheriting memory_sum and adding a subtraction feature looks like the following.
class sum_sub(memory_sum):

    def sum(self, a, b):
        self.a = a
        self.b = b
        self.c = a + b
        print(self.c)

    def sub(self, a, b):
        self.a = a
        self.b = b
        self.c = a - b
        print(self.c)

    def show_result(self):
        print(self.c)

B = sum_s...
Because sum_sub inherits memory_sum, the functions defined in memory_sum can also be used.
B.show_sum()
Knowing this much will let you read Chainer code to some extent. If object orientation has caught your interest, the official Python tutorial, Akira Hirasawa's "Why Do We Build Software with Object Orientation?", and Kiyotaka Nakayama & Daigo Kunimoto's "Sukkiri Wakaru Java Nyumon" are recommended (the tie between object orientation and Java is especially strong, so it is worth reading despite the difference in language). Chainer: checking the activation functions. chainer.functions defines basic functions such as activation and loss functions. ReLU (ramp function): the activation function used in hidden layers today. It returns $0$ if the input is at most $0$, and if it is at least $0$ it ...
arr = np.arange(-10, 10, 0.1)
arr1 = F.relu(arr, use_cudnn=False)
plt.plot(arr, arr1.data)
Sigmoid function: everyone's beloved sigmoid function $$ sigmoid(x)= \frac{1}{1 + \exp(-x)} $$
arr = np.arange(-10, 10, 0.1)
arr2 = F.sigmoid(arr, use_cudnn=False)
plt.plot(arr, arr2.data)
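The formula can also be checked directly in numpy, independent of chainer (a minimal sketch):

```python
import numpy as np

def sigmoid(x):
    # sigmoid(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(0.0))   # 0.5, the midpoint of the curve
print(sigmoid(10.0))  # close to 1
```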
softmax function: softmax, also called the normalized exponential function, turns values into probabilities. $$ softmax(x_i) = \frac{\exp(x_i)}{\sum_j \exp(x_j)} $$ Combined with the cross-entropy loss, it enables multi-class classification. In the chainer.links.Classifier() implementation, the default loss is softmax_cross_entropy.
arr = chainer.Variable(np.array([[-5.0, 0.5, 6.0, 10.0]], dtype=np.float32))
plt.plot(F.softmax(arr).data[0])
print("values after softmax: ", F.softmax(arr).data[0])
print("sum: ", sum(F.softmax(arr).data[0]))
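The same computation in plain numpy confirms that the outputs sum to 1 (a sketch independent of chainer):

```python
import numpy as np

def softmax(x):
    # subtract the max for numerical stability; the result is unchanged
    e = np.exp(x - np.max(x))
    return e / e.sum()

p = softmax(np.array([-5.0, 0.5, 6.0, 10.0]))
print(p.sum())     # 1.0 (up to floating point)
print(p.argmax())  # 3: the largest input gets the largest probability
```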
About the Variable class: in chainer, instead of using plain arrays, numpy arrays, or cupy arrays directly, we use a class called Variable (from chainer 1.1 on, data is apparently wrapped into the Variable class automatically). The Variable class makes things such as data access and gradient computation easy. Forward pass
x1 = chainer.Variable(np.array([1]).astype(np.float32))
x2 = chainer.Variable(np.array([2]).astype(np.float32))
x3 = chainer.Variable(np.array([3]).astype(np.float32))
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
As a test, let's evaluate the expression below (the forward computation): $$ y = (x_1 - 2 x_2 - 1)^2 + (x_2 x_3 - 1)^2 + 1 $$ Substituting the values of each parameter: $$ y = (1 - 2 \times 2 - 1)^2 + (2 \times 3 - 1)^2 + 1 = (-4)^2 + 5^2 + 1 = 42$$
y = (x1 - 2 * x2 - 1)**2 + (x2 * x3 - 1)**2 + 1
y.data
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
Backpropagation. Next, let's compute the derivative values of y (the backward computation).
y.backward()
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
$$ \frac{\partial y}{\partial x_1} = 2(x_1 - 2 x_2 - 1) = 2(1 - 2 \times 2 - 1) = -8$$
x1.grad
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
$$ \frac{\partial y}{\partial x_2} = -4 (x_1 - 2 x_2 - 1) + 2 x_3 ( x_2 x_3 - 1) = -4 (1 - 2 \times 2 - 1) + 2 \times 3 ( 2 \times 3 - 1) = 46 $$
x2.grad
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
$$ \frac{\partial y}{\partial x_3} = 2 x_2 ( x_2 x_3 - 1) = 2 \times 2 (2 \times 3 - 1) = 20$$
x3.grad
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
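The three hand-derived gradients can be double-checked with a central finite difference; `num_grad` below is a helper written only for this sketch:

```python
# y as defined in the text
def y(x1, x2, x3):
    return (x1 - 2*x2 - 1)**2 + (x2*x3 - 1)**2 + 1

# central finite difference in the i-th argument
def num_grad(f, args, i, h=1e-6):
    up, down = list(args), list(args)
    up[i] += h
    down[i] -= h
    return (f(*up) - f(*down)) / (2 * h)

point = (1.0, 2.0, 3.0)
print(y(*point))                                  # 42.0
print([num_grad(y, point, i) for i in range(3)])  # close to [-8, 46, 20]
```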
About the links class. chainer.links builds on chainer.Variable: each link holds its parameters as Variables. In a neural network, the function that transforms data from one layer to the next (a linear operator) can be written as $$ \boldsymbol{y} = W \boldsymbol{x} + \boldsymbol{b} $$ and in Chainer it is expressed as follows.
l = L.Linear(2, 3)
print(l.W.data)
print(l.b.data)
x = chainer.Variable(np.array(range(4)).astype(np.float32).reshape(2, 2))
y = l(x)
y.data
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
Plugging into $$ \boldsymbol{y} = W \boldsymbol{x} + \boldsymbol{b} $$ confirms that the results are identical.
x.data.dot(l.W.data.T) + l.b.data  # the bias is 0, so the result is the same with or without it
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
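The same check can be written in plain NumPy; the weight values below are random stand-ins, and only the shapes mirror `L.Linear(2, 3)`:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2)).astype(np.float32)  # same shape as l.W above
b = np.zeros(3, dtype=np.float32)                   # Chainer initialises the bias to zero

x = np.arange(4, dtype=np.float32).reshape(2, 2)
y = x.dot(W.T) + b  # batched form of y = Wx + b
print(y.shape)      # (2, 3): two inputs, three output units
```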
Loading the dataset
train, test = chainer.datasets.get_mnist()
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
Let's check the types of train and test.
type(train)
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
This tells us it is a chainer.datasets.tuple_dataset.TupleDataset (which is in fact obvious if you look at chainer.datasets.get_mnist()). Now, what is inside train?
print(len(train[0][0])) print(type(train[0][0])) print(train[0][1]) print(type(train[0][1]))
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
Each element is simply an image paired with its ground-truth label, so the image part can be displayed as an image by applying reshape(28, 28) (note, however, that the values are scaled to 0.0 - 1.0 rather than 0 - 255). If you call chainer.datasets.get_mnist(ndim=2) instead, the reshape becomes unnecessary.
plt.imshow(train[0][0].reshape(28, 28))
plt.gray()  # switch to gray scale
plt.grid()
train_iter = chainer.iterators.SerialIterator(train, 100)
np.shape(train_iter.dataset)
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
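The scaling and reshape described above can be illustrated without downloading MNIST; the pixel values below are synthetic stand-ins for one flat 784-element image:

```python
import numpy as np

# synthetic stand-in for one MNIST sample: 784 values already scaled to 0.0-1.0
flat = (np.arange(784, dtype=np.float32) % 256) / 255.0
img = flat.reshape(28, 28)  # the reshape described in the text
print(img.shape, float(img.min()), float(img.max()))
```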
Processing the output files

```python
# Dump a computational graph from 'loss' variable at the first iteration
# The "main" refers to the target link of the "main" optimizer.
trainer.extend(extensions.dump_graph('main/loss'))

# Write a log of evaluation statistics for each epoch
trainer.extend(extensions.LogReport())
```

These extensions produce the output …
log = pd.read_json('./result/log')
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
That one line completes the loading of the json file. Next, looking at the log table:
log
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
```python
# Print selected entries of the log to stdout
# Here "main" refers to the target link of the "main" optimizer again, and
# "validation" refers to the default name of the Evaluator extension.
# Entries other than 'epoch' are reported by the Classifier link, called by
# either the updater or the evaluator.
```
epoch = log['epoch']
plt.plot(epoch, log['main/accuracy'])
plt.plot(epoch, log['validation/main/accuracy'])
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend(loc='best')
ニューラルネット入門_補足資料.ipynb
akinorihomma/Computer-Science-2
unlicense
Before running any queries using BigQuery, you need to first authenticate yourself by running the following cell. If you are running it for the first time, it will ask you to follow a link to log in using your Gmail account, and accept the data access requests to your profile. Once this is done, it will generate a stri...
auth.authenticate_user()
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
Querying the MIMIC-III Demo Dataset Now we are ready to actually start following the "Cohort Selection" exercise adapted from the MIMIC cohort selection tutorial on GitHub. Because all datasets related to this Datathon are hosted on Google Cloud, there is no need to set up a local database or bring up a local Jupyter i...
client = bigquery.Client(project='datathon-datasets')
datasets = client.list_datasets()
for dataset in datasets:
    did = dataset.dataset_id
    print('Dataset "%s" has the following tables:' % did)
    for table in client.list_tables(client.dataset(did)):
        print(' ' + table.table_id)
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
Another way to list all BigQuery tables in a Google Cloud Project is to go to the BigQuery site directly, e.g. https://bigquery.cloud.google.com/welcome/datathon-datasets. On the left panel, you will see the mimic_demo dataset, under which you will see the table names as above once you click and expand on the link. To ...
#@title Setting default project ID (you may need to create your own) {display-mode:"both"}
project_id = 'datathon-client-00'  # @param
os.environ["GOOGLE_CLOUD_PROJECT"] = project_id
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
Let's now run some queries adapted from the MIMIC cohort selection tutorial. First, let's preview the subject_id, hadm_id, and icustay_id columns of the icustays table.
def run_query(query):
    return pd.io.gbq.read_gbq(query, project_id=project_id, verbose=False,
                              configuration={'query': {'useLegacySql': False}})

run_query('''
SELECT subject_id, hadm_id, icustay_id
FROM `datathon-datasets.mimic_demo.icustays`
LIMIT 10
''')
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
The LIMIT 10 clause in the query is handy for limiting the size of the output frame during query writing for easier viewing, and we can drop this clause once the query is finalized to run over the whole dataset. One thing to note is that even with the LIMIT clause, running a query may still incur a cost, up to the full...
df = run_query('''
SELECT subject_id, hadm_id, icustay_id, intime, outtime,
  TIMESTAMP_DIFF(outtime, intime, HOUR) AS icu_stay_hours
FROM `datathon-datasets.mimic_demo.icustays`
LIMIT 10
''')
df.head()
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
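For readers without BigQuery access, the same hour arithmetic can be mimicked in pandas; the two rows below are made-up stand-ins, not MIMIC data:

```python
import pandas as pd

# made-up stand-ins for two icustays rows (not MIMIC data)
icustays = pd.DataFrame({
    'icustay_id': [1, 2],
    'intime':  pd.to_datetime(['2101-10-20 19:10', '2101-10-26 20:43']),
    'outtime': pd.to_datetime(['2101-10-26 20:43', '2101-10-27 08:43']),
})

# pandas analogue of TIMESTAMP_DIFF(outtime, intime, HOUR): whole hours elapsed
icustays['icu_stay_hours'] = (
    (icustays['outtime'] - icustays['intime']) // pd.Timedelta(hours=1)
)
print(icustays[['icustay_id', 'icu_stay_hours']])
```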
Here is the BigQuery query to list some patients whose ICU stay is at least 2 days. Note that you can use AS in the SELECT clause to rename a field in the output, and you can omit the table prefix if there is no ambiguity.
run_query('''
WITH co AS (
  SELECT subject_id, hadm_id, icustay_id,
    TIMESTAMP_DIFF(outtime, intime, DAY) AS icu_length_of_stay
  FROM `datathon-datasets.mimic_demo.icustays`
  LIMIT 10)
SELECT subject_id, co.hadm_id AS hadm_ID, co.icustay_id, co.icu_length_of_stay
FROM co
WHERE icu_length_of_stay >= 2
''')
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
Now, instead of filtering out short ICU stays, let's label every ICU stay with an integer exclusion flag: 1 for stays shorter than 2 days (to be excluded), or 0 for stays of 2 days or more. The resulting table is called a "cohort table" in the original MIMIC-III tutorial.
run_query('''
WITH co AS (
  SELECT subject_id, hadm_id, icustay_id,
    TIMESTAMP_DIFF(outtime, intime, DAY) AS icu_length_of_stay
  FROM `datathon-datasets.mimic_demo.icustays`
  LIMIT 10)
SELECT subject_id, hadm_id, icustay_id, icu_length_of_stay,
  IF(icu_length_of_stay < 2, 1, 0) AS exclusion_l...
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
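The IF(...) labeling has a direct pandas analogue; the rows and the column name exclusion_flag below are ours for illustration (the query's own alias is truncated above):

```python
import numpy as np
import pandas as pd

# made-up cohort rows with ICU length of stay in days
co = pd.DataFrame({'icustay_id': [101, 102, 103],
                   'icu_length_of_stay': [0, 6, 2]})

# pandas analogue of IF(icu_length_of_stay < 2, 1, 0);
# exclusion_flag is a hypothetical name chosen for this sketch
co['exclusion_flag'] = np.where(co['icu_length_of_stay'] < 2, 1, 0)
print(co)
```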
Let's now try a query that requires table joining: include the patient's age at the time of ICU admittance. This is computed by the date difference in years between the ICU intime and the patient's date of birth. The former is available in the icustays table, and the latter resides in the dob column of the patients tab...
run_query('''
WITH co AS (
  SELECT icu.subject_id, icu.hadm_id, icu.icustay_id, pat.dob,
    TIMESTAMP_DIFF(icu.outtime, icu.intime, DAY) AS icu_length_of_stay,
    DATE_DIFF(DATE(icu.intime), DATE(pat.dob), YEAR) AS age
  FROM `datathon-datasets.mimic_demo.icustays` AS icu
  INNER JOIN `datathon-datas...
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
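The join plus year difference can also be sketched in pandas; the IDs and dates below are invented, and the plain year subtraction mirrors how DATE_DIFF(..., YEAR) counts calendar-year boundaries crossed rather than completed years:

```python
import pandas as pd

# made-up stand-ins for the two tables being joined
icu = pd.DataFrame({'subject_id': [10006, 10011],
                    'intime': pd.to_datetime(['2164-10-23', '2126-08-14'])})
pat = pd.DataFrame({'subject_id': [10006, 10011],
                    'dob': pd.to_datetime(['2094-03-05', '2090-06-05'])})

# INNER JOIN ... ON icu.subject_id = pat.subject_id
co = icu.merge(pat, on='subject_id', how='inner')
# analogue of DATE_DIFF(DATE(intime), DATE(dob), YEAR)
co['age'] = co['intime'].dt.year - co['dob'].dt.year
print(co[['subject_id', 'age']])
```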
It is somewhat surprising to see a patient whose age is 300! This raises the question whether the age distribution of all patients is sane. We can verify this by querying the quantiles of patients' ages. Notice that we have removed the LIMIT 10 clause in the inner query, but the result is only one row, containing an ar...
run_query('''
WITH co AS (
  SELECT DATE_DIFF(DATE(icu.intime), DATE(pat.dob), YEAR) AS age
  FROM `datathon-datasets.mimic_demo.icustays` AS icu
  INNER JOIN `datathon-datasets.mimic_demo.patients` AS pat
    ON icu.subject_id = pat.subject_id)
SELECT APPROX_QUANTILES(age, 10) AS age_quantiles
FROM co
''')
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0
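APPROX_QUANTILES(age, 10) returns 11 values, the 0th through 100th percentiles in steps of 10; a NumPy sketch of the same idea with made-up ages:

```python
import numpy as np

# made-up ages; the real query runs over the whole table
ages = np.array([17, 49, 55, 62, 70, 74, 77, 81, 84, 89, 300])
quantiles = np.percentile(ages, np.arange(0, 101, 10))
print(quantiles)  # 11 values: min, 10th percentile, ..., max
```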
The result says that the minimum age (0th percentile) is 17, the 10th percentile is 49, the 20th percentile is 62, and so on, with 300 as the maximum (100th percentile). The bulk of the distribution looks reasonable; the value 300 is not a data-collection error but a de-identification artifact: in MIMIC-III, patients older than 89 have their dates of birth shifted, so their computed ages appear to be around 300. Let's list the largest ages to see these values directly.
run_query('''
SELECT DATE_DIFF(DATE(icu.intime), DATE(pat.dob), YEAR) AS age
FROM `datathon-datasets.mimic_demo.icustays` AS icu
INNER JOIN `datathon-datasets.mimic_demo.patients` AS pat
  ON icu.subject_id = pat.subject_id
ORDER BY age DESC
LIMIT 10
''')
datathon/anzics18/tutorial.ipynb
GoogleCloudPlatform/healthcare
apache-2.0