For a description of the variables returned by recurrent layers, see :numref:`sec_rnn-concise`. Below, we instantiate [**the encoder implementation described above**]: a two-layer gated recurrent unit (GRU) encoder with 16 hidden units. Given a minibatch of input sequences `X` (batch size 4, 7 time steps), after all time steps are completed, the hidden-state output of the final layer (`output`, returned by the encoder's recurrent layers) is a tensor of shape (batch size, number of time steps, number of hidden units).
```python
encoder = Seq2SeqEncoder(vocab_size=10, embed_size=8, num_hiddens=16,
                         num_layers=2)
X = tf.zeros((4, 7))
output, state = encoder(X, training=False)
output.shape
```
License: MIT · 09_recurrent_neural_networks/tf_seq2seq.ipynb · JiaheXu/Machine-Learning-Codebase
Since a gated recurrent unit is used here, the multilayer hidden state at the final time step is a list with one entry per hidden layer, each of shape (batch size, number of hidden units). If a long short-term memory network were used instead, `state` would also contain memory cell information.
```python
len(state), [element.shape for element in state]
```
## [**Decoder**]
:label:`sec_seq2seq_decoder`

As mentioned above, the context variable $\mathbf{c}$ output by the encoder encodes the entire input sequence $x_1, \ldots, x_T$. For the output sequence $y_1, y_2, \ldots, y_{T'}$ from the training dataset, at each time step $t'$ (distinct from the time step $t$ of input sequences or encoders), the probability of the decoder output $y_{t'}$ is conditioned on the previous output subsequence $y_1, \ldots, y_{t'-1}$ and the context variable $\mathbf{c}$, i.e., $P(y_{t'} \mid y_1, \ldots, y_{t'-1}, \mathbf{c})$.

To model this conditional probability on sequences, we can use another recurrent neural network as the decoder. At any time step $t^\prime$ on the output sequence, the RNN takes the output $y_{t^\prime-1}$ from the previous time step and the context variable $\mathbf{c}$ as its input, then transforms them together with the previous hidden state $\mathbf{s}_{t^\prime-1}$ into the hidden state $\mathbf{s}_{t^\prime}$ at the current time step. As a result, we can use a function $g$ to express the transformation of the decoder's hidden layer:

$$\mathbf{s}_{t^\prime} = g(y_{t^\prime-1}, \mathbf{c}, \mathbf{s}_{t^\prime-1}).$$
:eqlabel:`eq_seq2seq_s_t`

After obtaining the hidden state of the decoder, we can use an output layer and the softmax operation to compute the conditional probability distribution $P(y_{t^\prime} \mid y_1, \ldots, y_{t^\prime-1}, \mathbf{c})$ of the output $y_{t^\prime}$ at time step $t^\prime$.

Following :numref:`fig_seq2seq`, when implementing the decoder we directly use the hidden state at the final time step of the encoder to initialize the hidden state of the decoder. This requires that the RNN encoder and the RNN decoder have the same number of layers and hidden units. To further incorporate the encoded input-sequence information, the context variable is concatenated with the decoder input at every time step. To predict the probability distribution of the output tokens, a fully connected layer transforms the hidden state at the final layer of the RNN decoder.

In short: the decoder takes the context (the final encoder hidden state) $\mathbf{c}$ and the current token embedding $x$ as input and produces a distribution over target-language tokens. Because the decoder needs both $\mathbf{c}$ and $x$ to compute its hidden state, the input dimension of the decoder's RNN cells is `embed_size + num_hiddens`. To map the final hidden state to target-language tokens, a `Dense(vocab_size)` output layer is appended.
```python
class Decoder(tf.keras.layers.Layer):
    """The base decoder interface for the encoder-decoder architecture"""
    def __init__(self, **kwargs):
        super(Decoder, self).__init__(**kwargs)

    def init_state(self, enc_outputs, *args):
        raise NotImplementedError

    def call(self, X, state, **kwargs):
        raise NotImplementedError

class Seq2SeqDecoder(d2l.Decoder):
    """The RNN decoder for sequence-to-sequence learning"""
    def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
                 dropout=0, **kwargs):
        super().__init__(**kwargs)
        self.embedding = tf.keras.layers.Embedding(vocab_size, embed_size)
        self.rnn = tf.keras.layers.RNN(tf.keras.layers.StackedRNNCells(
            [tf.keras.layers.GRUCell(num_hiddens, dropout=dropout)
             for _ in range(num_layers)]),
            return_sequences=True, return_state=True)
        self.dense = tf.keras.layers.Dense(vocab_size)

    def init_state(self, enc_outputs, *args):
        return enc_outputs[1]

    def call(self, X, state, **kwargs):
        # Shape of the embedded `X`: (`batch_size`, `num_steps`, `embed_size`)
        X = self.embedding(X)
        # Broadcast `context` so it has the same `num_steps` as `X`
        context = tf.repeat(tf.expand_dims(state[-1], axis=1),
                            repeats=X.shape[1], axis=1)
        X_and_context = tf.concat((X, context), axis=2)
        rnn_output = self.rnn(X_and_context, state, **kwargs)
        output = self.dense(rnn_output[0])
        # Shape of `output`: (`batch_size`, `num_steps`, `vocab_size`)
        # `state` is a list with `num_layers` entries, each of shape:
        # (`batch_size`, `num_hiddens`)
        return output, rnn_output[1:]
```
Below, we [**instantiate the decoder**] with the same hyperparameters as the encoder above. As we can see, the output shape of the decoder becomes (batch size, number of time steps, vocabulary size), where the last dimension of the tensor stores the predicted token distribution.
```python
decoder = Seq2SeqDecoder(vocab_size=10, embed_size=8, num_hiddens=16,
                         num_layers=2)
state = decoder.init_state(encoder(X))
output, state = decoder(X, state, training=False)
output.shape, len(state), state[0].shape
```
## Loss Function

At each time step, the decoder predicts a probability distribution for the output tokens. As with language models, we can apply softmax to obtain the distribution and compute the cross-entropy loss for optimization. Recall from :numref:`machine_translation` that special padding tokens are appended to the ends of sequences, so that sequences of varying lengths can be loaded in minibatches of the same shape. However, predictions for padding tokens should be excluded from the loss computation. To this end, we can use the `sequence_mask` function below to [**mask irrelevant entries with zero values**], so that any later multiplication of an irrelevant prediction with zero equals zero. For example, if the valid lengths of two sequences (excluding padding tokens) are 1 and 2, respectively, then all entries after the first entry of the first sequence and after the first two entries of the second sequence are cleared to zero.
```python
#@save
def sequence_mask(X, valid_len, value=0):
    """Mask irrelevant entries in sequences"""
    maxlen = X.shape[1]
    mask = tf.range(start=0, limit=maxlen, dtype=tf.float32)[
        None, :] < tf.cast(valid_len[:, None], dtype=tf.float32)

    if len(X.shape) == 3:
        return tf.where(tf.expand_dims(mask, axis=-1), X, value)
    else:
        return tf.where(mask, X, value)

X = tf.constant([[1, 2, 3], [4, 5, 6]])
sequence_mask(X, tf.constant([1, 2]))
```
(**We can also use this function to mask all entries across the last few axes.**) If you like, such entries can be replaced with a specified non-zero value.
```python
X = tf.ones((2, 3, 4))
sequence_mask(X, tf.constant([1, 2]), value=-1)
```
Now we can [**extend the softmax cross-entropy loss to mask irrelevant predictions**]. Initially, the masks for all predicted tokens are set to one. Once the valid length is given, the mask corresponding to any padding token will be set to zero. In the end, the loss for all tokens is multiplied by the mask to filter out the irrelevant predictions of padding tokens.
```python
#@save
class MaskedSoftmaxCELoss(tf.keras.losses.Loss):
    """The softmax cross-entropy loss with masks"""
    def __init__(self, valid_len):
        super().__init__(reduction='none')
        self.valid_len = valid_len

    # Shape of `pred`: (`batch_size`, `num_steps`, `vocab_size`)
    # Shape of `label`: (`batch_size`, `num_steps`)
    # Shape of `valid_len`: (`batch_size`,)
    def call(self, label, pred):
        weights = tf.ones_like(label, dtype=tf.float32)
        weights = sequence_mask(weights, self.valid_len)
        label_one_hot = tf.one_hot(label, depth=pred.shape[-1])
        unweighted_loss = tf.keras.losses.CategoricalCrossentropy(
            from_logits=True, reduction='none')(label_one_hot, pred)
        weighted_loss = tf.reduce_mean((unweighted_loss * weights), axis=1)
        return weighted_loss
```
For [**a sanity check**], we can create three identical sequences and specify that their valid lengths are 4, 2, and 0, respectively. As a result, the loss of the first sequence should be twice that of the second sequence, while the third sequence should have a zero loss.
```python
loss = MaskedSoftmaxCELoss(tf.constant([4, 2, 0]))
loss(tf.ones((3, 4), dtype=tf.int32), tf.ones((3, 4, 10))).numpy()
```
## [**Training**]
:label:`sec_seq2seq_training`

In the following training loop, as shown in :numref:`fig_seq2seq`, the special beginning-of-sequence token ("&lt;bos&gt;") and the original output sequence (excluding the end-of-sequence token "&lt;eos&gt;") are concatenated as the input to the decoder. This is called *teacher forcing*, because the original output sequence (the token labels) is fed into the decoder. Alternatively, the token *predicted* at the previous time step could be fed into the decoder as its current input.
```python
#@save
def train_seq2seq(net, data_iter, lr, num_epochs, tgt_vocab, device):
    """Train a model for sequence to sequence"""
    optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
    animator = d2l.Animator(xlabel="epoch", ylabel="loss",
                            xlim=[10, num_epochs])
    for epoch in range(num_epochs):
        timer = d2l.Timer()
        metric = d2l.Accumulator(2)  # Sum of training loss, no. of tokens
        for batch in data_iter:
            X, X_valid_len, Y, Y_valid_len = [x for x in batch]
            bos = tf.reshape(tf.constant([tgt_vocab['<bos>']] * Y.shape[0]),
                             shape=(-1, 1))
            dec_input = tf.concat([bos, Y[:, :-1]], 1)  # Teacher forcing
            with tf.GradientTape() as tape:
                Y_hat, _ = net(X, dec_input, X_valid_len, training=True)
                l = MaskedSoftmaxCELoss(Y_valid_len)(Y, Y_hat)
            gradients = tape.gradient(l, net.trainable_variables)
            gradients = d2l.grad_clipping(gradients, 1)
            optimizer.apply_gradients(zip(gradients, net.trainable_variables))
            num_tokens = tf.reduce_sum(Y_valid_len).numpy()
            metric.add(tf.reduce_sum(l), num_tokens)
        if (epoch + 1) % 10 == 0:
            animator.add(epoch + 1, (metric[0] / metric[1],))
    print(f'loss {metric[0] / metric[1]:.3f}, {metric[1] / timer.stop():.1f} '
          f'tokens/sec on {str(device)}')
```
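The teacher-forcing input construction above (prepend the `<bos>` id, drop the last label token) can be illustrated with plain NumPy; all token ids below are made up purely for illustration:

```python
import numpy as np

# Hypothetical minibatch of label sequences (2 sequences, 4 target token ids);
# assume id 1 stands for <bos> in this made-up vocabulary
Y = np.array([[5, 6, 7, 2],
              [8, 9, 2, 0]])
bos = np.full((Y.shape[0], 1), 1)                     # <bos> column
dec_input = np.concatenate([bos, Y[:, :-1]], axis=1)  # teacher forcing
print(dec_input)  # → [[1 5 6 7]
                  #    [1 8 9 2]]
```

Each decoder input row is the label row shifted right by one position, so at every time step the decoder conditions on the ground-truth previous token.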
Now, on the machine translation dataset, we can [**create and train an RNN encoder-decoder model**] for sequence-to-sequence learning.
```python
class EncoderDecoder(tf.keras.Model):
    """The base class for the encoder-decoder architecture"""
    def __init__(self, encoder, decoder, **kwargs):
        super(EncoderDecoder, self).__init__(**kwargs)
        self.encoder = encoder
        self.decoder = decoder

    def call(self, enc_X, dec_X, *args, **kwargs):
        enc_outputs = self.encoder(enc_X, *args, **kwargs)
        dec_state = self.decoder.init_state(enc_outputs, *args)
        return self.decoder(dec_X, dec_state, **kwargs)

embed_size, num_hiddens, num_layers, dropout = 32, 32, 2, 0.1
batch_size, num_steps = 64, 10
lr, num_epochs, device = 0.005, 300, d2l.try_gpu()

train_iter, src_vocab, tgt_vocab = d2l.load_data_nmt(batch_size, num_steps)
encoder = Seq2SeqEncoder(len(src_vocab), embed_size, num_hiddens, num_layers,
                         dropout)
decoder = Seq2SeqDecoder(len(tgt_vocab), embed_size, num_hiddens, num_layers,
                         dropout)
net = EncoderDecoder(encoder, decoder)
train_seq2seq(net, train_iter, lr, num_epochs, tgt_vocab, device)
```
```
loss 0.027, 2120.5 tokens/sec on <tensorflow.python.eager.context._EagerDeviceContext object at 0x000001E3A0071D40>
```
## [**Prediction**]

To predict the output sequence token by token, each input to the decoder at the current time step is the predicted token from the previous time step. As in training, the beginning-of-sequence token ("&lt;bos&gt;") is fed into the decoder at the initial time step. This prediction process is illustrated in :numref:`seq2seq_predict`. When the end-of-sequence token ("&lt;eos&gt;") is predicted, the prediction of the output sequence is complete. We will introduce different strategies for sequence generation in :numref:`beam-search`.
```python
#@save
def predict_seq2seq(net, src_sentence, src_vocab, tgt_vocab, num_steps,
                    save_attention_weights=False):
    """Predict for sequence to sequence"""
    src_tokens = src_vocab[src_sentence.lower().split(' ')] + [
        src_vocab['<eos>']]
    enc_valid_len = tf.constant([len(src_tokens)])
    src_tokens = d2l.truncate_pad(src_tokens, num_steps, src_vocab['<pad>'])
    # Add the batch axis
    enc_X = tf.expand_dims(src_tokens, axis=0)
    enc_outputs = net.encoder(enc_X, enc_valid_len, training=False)
    dec_state = net.decoder.init_state(enc_outputs, enc_valid_len)
    # Add the batch axis
    dec_X = tf.expand_dims(tf.constant([tgt_vocab['<bos>']]), axis=0)
    output_seq, attention_weight_seq = [], []
    for _ in range(num_steps):
        Y, dec_state = net.decoder(dec_X, dec_state, training=False)
        # Use the token with the highest prediction likelihood as the input
        # of the decoder at the next time step
        dec_X = tf.argmax(Y, axis=2)
        pred = tf.squeeze(dec_X, axis=0)
        # Save attention weights
        if save_attention_weights:
            attention_weight_seq.append(net.decoder.attention_weights)
        # Once the end-of-sequence token is predicted, the generation of the
        # output sequence is complete
        if pred == tgt_vocab['<eos>']:
            break
        output_seq.append(pred.numpy())
    return ' '.join(tgt_vocab.to_tokens(tf.reshape(
        output_seq, shape=-1).numpy().tolist())), attention_weight_seq
```
## Evaluation of Predicted Sequences

We can evaluate a predicted sequence by comparing it with the true label sequence. Although BLEU (bilingual evaluation understudy), proposed in :cite:`Papineni.Roukos.Ward.ea.2002`, was originally designed for evaluating machine translation results, it has been extensively used to measure the quality of output sequences in many applications. In principle, for any $n$-gram in the predicted sequence, BLEU evaluates whether this $n$-gram appears in the label sequence.

We define BLEU as

$$\exp\left(\min\left(0, 1 - \frac{\mathrm{len}_{\text{label}}}{\mathrm{len}_{\text{pred}}}\right)\right) \prod_{n=1}^k p_n^{1/2^n},$$
:eqlabel:`eq_bleu`

where $\mathrm{len}_{\text{label}}$ is the number of tokens in the label sequence, $\mathrm{len}_{\text{pred}}$ is the number of tokens in the predicted sequence, and $k$ is the longest $n$-gram used for matching. In addition, $p_n$ denotes the precision of $n$-grams, which is the ratio of two quantities: the number of matched $n$-grams between the predicted and label sequences, and the number of $n$-grams in the predicted sequence. To be specific, given the label sequence $A$, $B$, $C$, $D$, $E$, $F$ and the predicted sequence $A$, $B$, $B$, $C$, $D$, we have $p_1 = 4/5$, $p_2 = 3/4$, $p_3 = 1/3$, and $p_4 = 0$.

By the definition in :eqref:`eq_bleu`, BLEU equals 1 whenever the predicted sequence is identical to the label sequence. Moreover, since matching longer $n$-grams is more difficult, BLEU assigns greater weight to the precision of longer $n$-grams. Specifically, for a fixed $p_n$, $p_n^{1/2^n}$ increases as $n$ grows (the original paper uses $p_n^{1/n}$). Furthermore, since predicting shorter sequences tends to yield higher $p_n$ values, the coefficient before the product in :eqref:`eq_bleu` penalizes shorter predicted sequences. For example, when $k=2$, given the label sequence $A$, $B$, $C$, $D$, $E$, $F$ and the predicted sequence $A$, $B$, although $p_1 = p_2 = 1$, the penalty factor $\exp(1-6/2) \approx 0.14$ lowers the BLEU.

We [**implement BLEU**] as follows.
```python
import collections
import math

def bleu(pred_seq, label_seq, k):  #@save
    """Compute the BLEU"""
    pred_tokens, label_tokens = pred_seq.split(' '), label_seq.split(' ')
    len_pred, len_label = len(pred_tokens), len(label_tokens)
    score = math.exp(min(0, 1 - len_label / len_pred))
    for n in range(1, k + 1):
        num_matches, label_subs = 0, collections.defaultdict(int)
        for i in range(len_label - n + 1):
            label_subs[''.join(label_tokens[i: i + n])] += 1
        for i in range(len_pred - n + 1):
            if label_subs[''.join(pred_tokens[i: i + n])] > 0:
                num_matches += 1
                label_subs[''.join(pred_tokens[i: i + n])] -= 1
        score *= math.pow(num_matches / (len_pred - n + 1), math.pow(0.5, n))
    return score
```
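As a quick numerical check of the worked example above (label $A$ $B$ $C$ $D$ $E$ $F$, prediction $A$ $B$ $B$ $C$ $D$, $k=2$), the BLEU value can be recomputed directly from the formula; this standalone sketch does not call the `bleu` function:

```python
import math

# Label:      A B C D E F  (6 tokens)
# Prediction: A B B C D    (5 tokens)
len_label, len_pred = 6, 5
p = {1: 4 / 5, 2: 3 / 4}  # n-gram precisions from the text (k = 2)

# Brevity penalty times the weighted n-gram precision product
brevity = math.exp(min(0, 1 - len_label / len_pred))
score = brevity * math.prod(p[n] ** (0.5 ** n) for n in (1, 2))
print(f'{score:.3f}')  # → 0.681
```

The brevity penalty $\exp(1 - 6/5) \approx 0.819$ dominates here because the prediction is one token shorter than the label.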
Finally, using the trained RNN encoder-decoder model, we [**translate a few English sentences into French**] and compute the BLEU of the results.
```python
engs = ['go .', "i lost .", 'he\'s calm .', 'i\'m home .']
fras = ['va !', 'j\'ai perdu .', 'il est calme .', 'je suis chez moi .']
for eng, fra in zip(engs, fras):
    translation, attention_weight_seq = predict_seq2seq(
        net, eng, src_vocab, tgt_vocab, num_steps)
    print(f'{eng} => {translation}, bleu {bleu(translation, fra, k=2):.3f}')
```
```
go . => va !, bleu 1.000
i lost . => j'ai perdu ., bleu 1.000
he's calm . => il est bon ?, bleu 0.537
i'm home . => je suis chez moi ., bleu 1.000
```
## Now You Code 2: Character Frequency

Write a program that inputs some text (a word or a sentence). The program should create a histogram of each character in the text and its frequency. For example, the text `apple` has the frequencies `a:1, p:2, l:1, e:1`.

Some advice:

- Build a dictionary of the characters, where the character is the key and the value is the number of occurrences of that character.
- Omit spaces in the input text; they should not be counted.
- Convert the input text to lower case, so that `A` and `a` are counted as the same character.

After you count the characters:

- sort the dictionary keys alphabetically,
- print out the character distribution.

Example Run:

```
Enter some text: Michael is a Man from Mississppi.
. : 1
a : 3
c : 1
e : 1
f : 1
h : 1
i : 5
l : 1
m : 4
n : 1
o : 1
p : 2
r : 1
s : 5
```

### Step 1: Problem Analysis

Inputs: string

Outputs: frequency of each character (besides space)

Algorithm (Steps in Program):

- create an empty dictionary
- input the string
- make the string lowercase and remove spaces
- for each character in the text, set its dictionary value to 0
- for each character in the text, increase its dictionary value by one
- sort the dictionary keys alphabetically
- for each key, print the key and its corresponding value
```python
## Step 2: Write code here
fr = {}
text = input("Enter text: ")
text = text.lower().replace(' ', '')
for char in text:
    fr[char] = 0
for char in text:
    fr[char] = fr[char] + 1
for key in sorted(fr.keys()):
    print(key, ':', fr[key])
```
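The two counting loops above can also be collapsed into a single pass with `collections.Counter` from the standard library; this sketch uses a fixed string instead of `input()`:

```python
from collections import Counter

text = "Perfectly balanced, as all things should be.".lower().replace(' ', '')
fr = Counter(text)  # one pass: character -> number of occurrences
for key in sorted(fr):
    print(key, ':', fr[key])
```

`Counter` also removes the need to initialize each key to zero first, since missing keys default to a count of zero.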
```
Enter text: Perfectly balanced, as all things should be.
, : 1
. : 1
a : 4
b : 2
c : 2
d : 2
e : 4
f : 1
g : 1
h : 2
i : 1
l : 5
n : 2
o : 1
p : 1
r : 1
s : 3
t : 2
u : 1
y : 1
```
License: MIT · content/lessons/10/Now-You-Code/NYC2-Character-Frequency.ipynb · MahopacHS/spring2019-ditoccoa0302
# Deep face recognition with Keras, Dlib and OpenCV

Face recognition identifies persons on face images or video frames. In a nutshell, a face recognition system extracts features from an input face image and compares them to the features of labeled faces in a database. Comparison is based on a feature similarity metric, and the label of the most similar database entry is used to label the input image. If the similarity value is below a certain threshold, the input image is labeled as *unknown*. Comparing two face images to determine if they show the same person is known as face verification.

This notebook uses a deep convolutional neural network (CNN) to extract features from input images. It follows the approach described in [[1]](https://arxiv.org/abs/1503.03832) with modifications inspired by the [OpenFace](http://cmusatyalab.github.io/openface/) project. [Keras](https://keras.io/) is used for implementing the CNN, [Dlib](http://dlib.net/) and [OpenCV](https://opencv.org/) for aligning faces on input images. Face recognition performance is evaluated on a small subset of the [LFW](http://vis-www.cs.umass.edu/lfw/) dataset, which you can replace with your own custom dataset, e.g. with images of your family and friends, if you want to further experiment with this notebook. After an overview of the CNN architecture and how the model can be trained, it is demonstrated how to:

- Detect, transform, and crop faces on input images. This ensures that faces are aligned before feeding them into the CNN. This preprocessing step is very important for the performance of the neural network.
- Use the CNN to extract 128-dimensional representations, or *embeddings*, of faces from the aligned input images. In embedding space, Euclidean distance directly corresponds to a measure of face similarity.
- Compare input embedding vectors to labeled embedding vectors in a database.
Here, a support vector machine (SVM) and a KNN classifier, trained on labeled embedding vectors, play the role of a database. Face recognition in this context means using these classifiers to predict the labels, i.e. identities, of new inputs.

## Environment setup

For running this notebook, create and activate a new [virtual environment](https://docs.python.org/3/tutorial/venv.html) and install the packages listed in [requirements.txt](requirements.txt) with `pip install -r requirements.txt`. Furthermore, you'll need a local copy of Dlib's face landmarks data file for running face alignment:
```python
import bz2
import os

from urllib.request import urlopen

def download_landmarks(dst_file):
    url = 'http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2'
    decompressor = bz2.BZ2Decompressor()

    with urlopen(url) as src, open(dst_file, 'wb') as dst:
        data = src.read(1024)
        while len(data) > 0:
            dst.write(decompressor.decompress(data))
            data = src.read(1024)

dst_dir = 'models'
dst_file = os.path.join(dst_dir, 'landmarks.dat')

if not os.path.exists(dst_file):
    os.makedirs(dst_dir)
    download_landmarks(dst_file)
```
License: Apache-2.0 · face-recognition.ipynb · 008karan/face-recognition-1
## CNN architecture and training

The CNN architecture used here is a variant of the inception architecture [[2]](https://arxiv.org/abs/1409.4842). More precisely, it is a variant of the NN4 architecture described in [[1]](https://arxiv.org/abs/1503.03832) and identified as the [nn4.small2](https://cmusatyalab.github.io/openface/models-and-accuracies/model-definitions) model in the OpenFace project. This notebook uses a Keras implementation of that model whose definition was taken from the [Keras-OpenFace](https://github.com/iwantooxxoox/Keras-OpenFace) project. The architecture details aren't too important here; it's only useful to know that there is a fully connected layer with 128 hidden units followed by an L2 normalization layer on top of the convolutional base. These two top layers are referred to as the *embedding layer*, from which the 128-dimensional embedding vectors can be obtained. The complete model is defined in [model.py](model.py) and a graphical overview is given in [model.png](model.png). A Keras version of the nn4.small2 model can be created with `create_model()`.
```python
from model import create_model

nn4_small2 = create_model()
```
Model training aims to learn an embedding $f(x)$ of image $x$ such that the squared L2 distance between all faces of the same identity is small and the distance between a pair of faces from different identities is large. This can be achieved with a *triplet loss* $L$ that is minimized when the distance between an anchor image $x^a_i$ and a positive image $x^p_i$ (same identity) in embedding space is smaller than the distance between that anchor image and a negative image $x^n_i$ (different identity) by at least a margin $\alpha$.

$$L = \sum^{m}_{i=1} \large[ \small {\mid \mid f(x_{i}^{a}) - f(x_{i}^{p}) \mid \mid_2^2} - {\mid \mid f(x_{i}^{a}) - f(x_{i}^{n}) \mid \mid_2^2} + \alpha \large ] \small_+$$

$[z]_+$ means $\max(z,0)$ and $m$ is the number of triplets in the training set. The triplet loss in Keras is best implemented with a custom layer, as the loss function doesn't follow the usual `loss(input, target)` pattern. This layer calls `self.add_loss` to install the triplet loss:
```python
from keras import backend as K
from keras.models import Model
from keras.layers import Input, Layer

# Input for anchor, positive and negative images
in_a = Input(shape=(96, 96, 3))
in_p = Input(shape=(96, 96, 3))
in_n = Input(shape=(96, 96, 3))

# Output for anchor, positive and negative embedding vectors
# The nn4_small2 model instance is shared (Siamese network)
emb_a = nn4_small2(in_a)
emb_p = nn4_small2(in_p)
emb_n = nn4_small2(in_n)

class TripletLossLayer(Layer):
    def __init__(self, alpha, **kwargs):
        self.alpha = alpha
        super(TripletLossLayer, self).__init__(**kwargs)

    def triplet_loss(self, inputs):
        a, p, n = inputs
        p_dist = K.sum(K.square(a - p), axis=-1)
        n_dist = K.sum(K.square(a - n), axis=-1)
        return K.sum(K.maximum(p_dist - n_dist + self.alpha, 0), axis=0)

    def call(self, inputs):
        loss = self.triplet_loss(inputs)
        self.add_loss(loss)
        return loss

# Layer that computes the triplet loss from anchor, positive and negative
# embedding vectors
triplet_loss_layer = TripletLossLayer(alpha=0.2, name='triplet_loss_layer')([emb_a, emb_p, emb_n])

# Model that can be trained with anchor, positive and negative images
nn4_small2_train = Model([in_a, in_p, in_n], triplet_loss_layer)
```
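A small NumPy version of the same loss can serve as a sanity check of the formula above; the embedding values here are made up purely for illustration:

```python
import numpy as np

def triplet_loss_np(a, p, n, alpha=0.2):
    # Squared L2 distances of anchor-positive and anchor-negative pairs
    p_dist = np.sum(np.square(a - p), axis=-1)
    n_dist = np.sum(np.square(a - n), axis=-1)
    # Hinge: only triplets that violate the margin contribute to the loss
    return np.sum(np.maximum(p_dist - n_dist + alpha, 0))

a = np.array([[0.0, 0.0]])
p = np.array([[0.1, 0.0]])  # close to the anchor (same identity)
n = np.array([[1.0, 0.0]])  # far from the anchor (different identity)
print(triplet_loss_np(a, p, n))  # margin satisfied: 0.01 - 1.0 + 0.2 < 0 → 0.0
```

Swapping the positive and negative examples makes the triplet violate the margin and yields a positive loss, which is exactly the signal that drives the embedding apart during training.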
During training, it is important to select triplets whose positive pairs $(x^a_i, x^p_i)$ and negative pairs $(x^a_i, x^n_i)$ are hard to discriminate, i.e. their distance difference in embedding space should be less than margin $\alpha$; otherwise, the network is unable to learn a useful embedding. Therefore, each training iteration should select a new batch of triplets based on the embeddings learned in the previous iteration. Assuming that a generator returned from a `triplet_generator()` call can generate triplets under these constraints, the network can be trained with:
```python
from data import triplet_generator

# triplet_generator() creates a generator that continuously returns
# ([a_batch, p_batch, n_batch], None) tuples where a_batch, p_batch
# and n_batch are batches of anchor, positive and negative RGB images
# each having a shape of (batch_size, 96, 96, 3).
generator = triplet_generator()

nn4_small2_train.compile(loss=None, optimizer='adam')
nn4_small2_train.fit_generator(generator, epochs=10, steps_per_epoch=100)

# Please note that the current implementation of the generator only generates
# random image data. The main goal of this code snippet is to demonstrate
# the general setup for model training. In the following, we will anyway
# use a pre-trained model so we don't need a generator here that operates
# on real training data. I'll maybe provide a fully functional generator
# later.
```
The above code snippet should merely demonstrate how to set up model training. But instead of actually training a model from scratch, we will now use a pre-trained model, as training from scratch is very expensive and requires huge datasets to achieve good generalization performance. For example, [[1]](https://arxiv.org/abs/1503.03832) uses a dataset of 200M images consisting of about 8M identities. The OpenFace project provides [pre-trained models](https://cmusatyalab.github.io/openface/models-and-accuracies/pre-trained-models) that were trained with the public face recognition datasets [FaceScrub](http://vintage.winklerbros.net/facescrub.html) and [CASIA-WebFace](http://arxiv.org/abs/1411.7923). The Keras-OpenFace project converted the weights of the pre-trained nn4.small2.v1 model to [CSV files](https://github.com/iwantooxxoox/Keras-OpenFace/tree/master/weights), which were then [converted here](face-recognition-convert.ipynb) to a binary format that can be loaded by Keras with `load_weights`:
```python
nn4_small2_pretrained = create_model()
nn4_small2_pretrained.load_weights('weights/nn4.small2.v1.h5')
```
## Custom dataset

To demonstrate face recognition on a custom dataset, a small subset of the [LFW](http://vis-www.cs.umass.edu/lfw/) dataset is used. It consists of 100 face images of [10 identities](images). The metadata for each image (file and identity name) are loaded into memory for later processing.
```python
import numpy as np
import os.path

class IdentityMetadata():
    def __init__(self, base, name, file):
        # dataset base directory
        self.base = base
        # identity name
        self.name = name
        # image file name
        self.file = file

    def __repr__(self):
        return self.image_path()

    def image_path(self):
        return os.path.join(self.base, self.name, self.file)

def load_metadata(path):
    metadata = []
    for i in os.listdir(path):
        for f in os.listdir(os.path.join(path, i)):
            # Check the file extension. Allow only '.jpg' and '.jpeg' files.
            ext = os.path.splitext(f)[1]
            if ext == '.jpg' or ext == '.jpeg':
                metadata.append(IdentityMetadata(path, i, f))
    return np.array(metadata)

metadata = load_metadata('images')
```
## Face alignment

The nn4.small2.v1 model was trained with aligned face images; therefore, the face images from the custom dataset must be aligned too. Here, we use [Dlib](http://dlib.net/) for face detection and [OpenCV](https://opencv.org/) for image transformation and cropping to produce aligned 96x96 RGB face images. By using the [AlignDlib](align.py) utility from the OpenFace project this is straightforward:
```python
import cv2
import matplotlib.pyplot as plt
import matplotlib.patches as patches

from align import AlignDlib

%matplotlib inline

def load_image(path):
    img = cv2.imread(path, 1)
    # OpenCV loads images with color channels
    # in BGR order. So we need to reverse them
    return img[..., ::-1]

# Initialize the OpenFace face alignment utility
alignment = AlignDlib('models/landmarks.dat')

# Load an image of Jacques Chirac
jc_orig = load_image(metadata[2].image_path())

# Detect face and return bounding box
bb = alignment.getLargestFaceBoundingBox(jc_orig)

# Transform image using specified face landmark indices and crop image to 96x96
jc_aligned = alignment.align(96, jc_orig, bb,
                             landmarkIndices=AlignDlib.OUTER_EYES_AND_NOSE)

# Show original image
plt.subplot(131)
plt.imshow(jc_orig)

# Show original image with bounding box
plt.subplot(132)
plt.imshow(jc_orig)
plt.gca().add_patch(patches.Rectangle(
    (bb.left(), bb.top()), bb.width(), bb.height(), fill=False, color='red'))

# Show aligned image
plt.subplot(133)
plt.imshow(jc_aligned);
```
As described in the OpenFace [pre-trained models](https://cmusatyalab.github.io/openface/models-and-accuracies/pre-trained-models) section, landmark indices `OUTER_EYES_AND_NOSE` are required for model nn4.small2.v1. Let's implement face detection, transformation and cropping as `align_image` function for later reuse.
```python
def align_image(img):
    return alignment.align(96, img, alignment.getLargestFaceBoundingBox(img),
                           landmarkIndices=AlignDlib.OUTER_EYES_AND_NOSE)
```
## Embedding vectors

Embedding vectors can now be calculated by feeding the aligned and scaled images into the pre-trained network.
```python
embedded = np.zeros((metadata.shape[0], 128))

for i, m in enumerate(metadata):
    img = load_image(m.image_path())
    img = align_image(img)
    # scale RGB values to interval [0,1]
    img = (img / 255.).astype(np.float32)
    # obtain embedding vector for image
    embedded[i] = nn4_small2_pretrained.predict(np.expand_dims(img, axis=0))[0]
```
Let's verify on a single triplet example that the squared L2 distance between its anchor-positive pair is smaller than the distance between its anchor-negative pair.
```python
def distance(emb1, emb2):
    return np.sum(np.square(emb1 - emb2))

def show_pair(idx1, idx2):
    plt.figure(figsize=(8, 3))
    plt.suptitle(f'Distance = {distance(embedded[idx1], embedded[idx2]):.2f}')
    plt.subplot(121)
    plt.imshow(load_image(metadata[idx1].image_path()))
    plt.subplot(122)
    plt.imshow(load_image(metadata[idx2].image_path()));

show_pair(2, 3)
show_pair(2, 12)
```
As expected, the distance between the two images of Jacques Chirac is smaller than the distance between an image of Jacques Chirac and an image of Gerhard Schröder (0.30 < 1.12). But we still do not know what distance threshold $\tau$ is the best boundary for making a decision between *same identity* and *different identity*.

## Distance threshold

To find the optimal value for $\tau$, the face verification performance must be evaluated on a range of distance threshold values. At a given threshold, all possible embedding vector pairs are classified as either *same identity* or *different identity* and compared to the ground truth. Since we're dealing with skewed classes (many more negative pairs than positive pairs), we use the [F1 score](https://en.wikipedia.org/wiki/F1_score) as evaluation metric instead of [accuracy](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html).
```python
from sklearn.metrics import f1_score, accuracy_score

distances = []  # squared L2 distance between pairs
identical = []  # 1 if same identity, 0 otherwise

num = len(metadata)

for i in range(num - 1):
    # Note: this enumeration includes self-pairs and counts some pairs twice;
    # the pair counts reported in the text reflect this enumeration
    for j in range(1, num):
        distances.append(distance(embedded[i], embedded[j]))
        identical.append(1 if metadata[i].name == metadata[j].name else 0)

distances = np.array(distances)
identical = np.array(identical)

thresholds = np.arange(0.3, 1.0, 0.01)

f1_scores = [f1_score(identical, distances < t) for t in thresholds]
acc_scores = [accuracy_score(identical, distances < t) for t in thresholds]

opt_idx = np.argmax(f1_scores)
# Threshold at maximal F1 score
opt_tau = thresholds[opt_idx]
# Accuracy at maximal F1 score
opt_acc = accuracy_score(identical, distances < opt_tau)

# Plot F1 score and accuracy as function of distance threshold
plt.plot(thresholds, f1_scores, label='F1 score');
plt.plot(thresholds, acc_scores, label='Accuracy');
plt.axvline(x=opt_tau, linestyle='--', lw=1, c='lightgrey', label='Threshold')
plt.title(f'Accuracy at threshold {opt_tau:.2f} = {opt_acc:.3f}');
plt.xlabel('Distance threshold')
plt.legend();
```
The face verification accuracy at $\tau$ = 0.56 is 95.7%. This is not bad given a baseline of 89% for a classifier that always predicts *different identity* (there are 980 pos. pairs and 8821 neg. pairs) but since nn4.small2.v1 is a relatively small model it is still less than what can be achieved by state-of-the-art models (> 99%). The following two histograms show the distance distributions of positive and negative pairs and the location of the decision boundary. There is a clear separation of these distributions which explains the discriminative performance of the network. One can also spot some strong outliers in the positive pairs class but these are not further analyzed here.
```python
dist_pos = distances[identical == 1]
dist_neg = distances[identical == 0]

plt.figure(figsize=(12, 4))

plt.subplot(121)
plt.hist(dist_pos)
plt.axvline(x=opt_tau, linestyle='--', lw=1, c='lightgrey', label='Threshold')
plt.title('Distances (pos. pairs)')
plt.legend();

plt.subplot(122)
plt.hist(dist_neg)
plt.axvline(x=opt_tau, linestyle='--', lw=1, c='lightgrey', label='Threshold')
plt.title('Distances (neg. pairs)')
plt.legend();
```
## Face recognition

Given an estimate of the distance threshold $\tau$, face recognition is now as simple as calculating the distances between an input embedding vector and all embedding vectors in a database. The input is assigned the label (i.e. identity) of the database entry with the smallest distance, if it is less than $\tau$, or the label *unknown* otherwise. This procedure can also scale to large databases as it can be easily parallelized. It also supports one-shot learning, as adding only a single entry of a new identity might be sufficient to recognize new examples of that identity.

A more robust approach is to label the input using the top $k$ scoring entries in the database, which is essentially [KNN classification](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) with a Euclidean distance metric. Alternatively, a linear [support vector machine](https://en.wikipedia.org/wiki/Support_vector_machine) (SVM) can be trained with the database entries and used to classify, i.e. identify, new inputs. For training these classifiers we use 50% of the dataset, for evaluation the other 50%.
from sklearn.preprocessing import LabelEncoder
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

targets = np.array([m.name for m in metadata])

encoder = LabelEncoder()
encoder.fit(targets)

# Numerical encoding of identities
y = encoder.transform(targets)

train_idx = np.arange(metadata.shape[0]) % 2 != 0
test_idx = np.arange(metadata.shape[0]) % 2 == 0

# 50 train examples of 10 identities (5 examples each)
X_train = embedded[train_idx]
# 50 test examples of 10 identities (5 examples each)
X_test = embedded[test_idx]

y_train = y[train_idx]
y_test = y[test_idx]

knn = KNeighborsClassifier(n_neighbors=1, metric='euclidean')
svc = LinearSVC()

knn.fit(X_train, y_train)
svc.fit(X_train, y_train)

acc_knn = accuracy_score(y_test, knn.predict(X_test))
acc_svc = accuracy_score(y_test, svc.predict(X_test))

print(f'KNN accuracy = {acc_knn}, SVM accuracy = {acc_svc}')
KNN accuracy = 0.96, SVM accuracy = 0.98
Apache-2.0
face-recognition.ipynb
008karan/face-recognition-1
The KNN classifier achieves an accuracy of 96% on the test set, the SVM classifier 98%. Let's use the SVM classifier to illustrate face recognition on a single example.
import warnings
# Suppress LabelEncoder warning
warnings.filterwarnings('ignore')

example_idx = 29

example_image = load_image(metadata[test_idx][example_idx].image_path())
example_prediction = svc.predict([embedded[test_idx][example_idx]])
example_identity = encoder.inverse_transform(example_prediction)[0]

plt.imshow(example_image)
plt.title(f'Recognized as {example_identity}');
_____no_output_____
Apache-2.0
face-recognition.ipynb
008karan/face-recognition-1
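In practice, one would also verify such a prediction against the distance threshold $\tau$ before accepting it. A minimal sketch of that check; `confirm_or_unknown` and the toy 2-D embeddings are hypothetical stand-ins for the real database and classifier output:

```python
import numpy as np

def confirm_or_unknown(query_emb, predicted_id, db_embs, db_labels, tau):
    """Accept a classifier's predicted identity only if at least one
    database entry of that identity lies within distance tau of the query."""
    same = db_embs[np.asarray(db_labels) == predicted_id]
    dists = np.linalg.norm(same - query_emb, axis=1)
    return predicted_id if dists.size and dists.min() < tau else 'unknown'

# Toy 2-D embeddings standing in for the 128-d vectors
db = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = ['alice', 'bob']

print(confirm_or_unknown(np.array([0.1, 0.0]), 'alice', db, labels, 0.56))  # 'alice'
print(confirm_or_unknown(np.array([5.0, 5.0]), 'bob', db, labels, 0.56))    # 'unknown'
```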
Seems reasonable :-) Classification results should additionally be checked for whether (a subset of) the database entries of the predicted identity lie within distance $\tau$ of the input; otherwise an *unknown* label should be assigned. This step is skipped here but can easily be added. Dataset visualization To embed the dataset into 2D space for displaying identity clusters, [t-distributed Stochastic Neighbor Embedding](https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding) (t-SNE) is applied to the 128-dimensional embedding vectors. Apart from a few outliers, identity clusters are well separated.
from sklearn.manifold import TSNE

X_embedded = TSNE(n_components=2).fit_transform(embedded)

for i, t in enumerate(set(targets)):
    idx = targets == t
    plt.scatter(X_embedded[idx, 0], X_embedded[idx, 1], label=t)

plt.legend(bbox_to_anchor=(1, 1));
_____no_output_____
Apache-2.0
face-recognition.ipynb
008karan/face-recognition-1
Mapping water extent and rainfall using WOfS and CHIRPS* **Products used:** [wofs_ls](https://explorer.digitalearth.africa/products/wofs_ls),[rainfall_chirps_monthly](https://explorer.digitalearth.africa/products/rainfall_chirps_monthly)
**Keywords**: :index:`data used; WOfS`, :index:`data used; CHIRPS`, :index:`water; extent`, :index:`analysis; time series`
_____no_output_____
Apache-2.0
Use_cases/Monitoring_water_extent/Monitoring_water_extent_WOfS.ipynb
FlexiGroBots-H2020/deafrica-sandbox-notebooks
Background The United Nations have prescribed 17 "Sustainable Development Goals" (SDGs). This notebook attempts to monitor SDG Indicator 6.6.1 - change in the extent of water-related ecosystems. Indicator 6.6.1 has 4 sub-indicators: i. The spatial extent of water-related ecosystems ii. The quantity of water contained within these ecosystems iii. The quality of water within these ecosystems iv. The health or state of these ecosystems This notebook primarily focuses on the first sub-indicator - spatial extents. Description The notebook loads WOfS feature layers to map the spatial extent of water bodies. It also loads and plots monthly total rainfall from CHIRPS. The last section will compare the water extent between two periods to allow visualising where change is occurring.*** Load packages Import Python packages that are used for the analysis.
%matplotlib inline

import datacube
import matplotlib.pyplot as plt

from deafrica_tools.dask import create_local_dask_cluster
from deafrica_tools.datahandling import wofs_fuser

from long_term_water_extent import (
    load_vector_file,
    get_resampled_labels,
    resample_water_observations,
    resample_rainfall_observations,
    calculate_change_in_extent,
    compare_extent_and_rainfall,
)
_____no_output_____
Apache-2.0
Use_cases/Monitoring_water_extent/Monitoring_water_extent_WOfS.ipynb
FlexiGroBots-H2020/deafrica-sandbox-notebooks
Set up a Dask cluster Dask can be used to better manage memory use and conduct the analysis in parallel.
create_local_dask_cluster()
_____no_output_____
Apache-2.0
Use_cases/Monitoring_water_extent/Monitoring_water_extent_WOfS.ipynb
FlexiGroBots-H2020/deafrica-sandbox-notebooks
Connect to Data Cube
dc = datacube.Datacube(app="long_term_water_extent")
_____no_output_____
Apache-2.0
Use_cases/Monitoring_water_extent/Monitoring_water_extent_WOfS.ipynb
FlexiGroBots-H2020/deafrica-sandbox-notebooks
Analysis parameters The following cell sets the parameters, which define the area of interest and the length of time to conduct the analysis over.* Upload a vector file for your water extent and your catchment to the `data` folder.* Set the time range you want to use.* Set the resampling strategy. Possible options include: * `"1Y"` - Annual resampling, use this option for longer term monitoring * `"QS-DEC"` - Quarterly resampling from December * `"3M"` - Three-monthly resampling * `"1M"` - Monthly resampling For more details on resampling timeframes, see the [xarray](https://xarray.pydata.org/en/v0.8.2/generated/xarray.Dataset.resample.html#r29) and [pandas](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) documentation.
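The resampling options above are plain pandas/xarray frequency aliases. A small sketch of how `"QS-DEC"` groups monthly data into December-anchored quarters (the toy series is illustrative only):

```python
import pandas as pd

# Twelve monthly observations (value 1 each) spanning 2019
s = pd.Series(1, index=pd.date_range("2019-01-01", periods=12, freq="MS"))

# "QS-DEC" groups months into December-anchored quarters:
# Dec-Feb, Mar-May, Jun-Aug, Sep-Nov
quarterly = s.resample("QS-DEC").sum()
print(quarterly.index[0].date(), quarterly.tolist())  # 2018-12-01 [2, 3, 3, 3, 1]
```

Note that January and February 2019 fall into the quarter that *starts* in December 2018, which is why the first bin is labelled `2018-12-01`.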
water_extent_vector_file = "data/lake_baringo_extent.geojson"
water_catchment_vector_file = "data/lake_baringo_catchment.geojson"

time_range = ("2018-07", "2021")

resample_strategy = "Q-DEC"

dask_chunks = dict(x=1000, y=1000)
_____no_output_____
Apache-2.0
Use_cases/Monitoring_water_extent/Monitoring_water_extent_WOfS.ipynb
FlexiGroBots-H2020/deafrica-sandbox-notebooks
Get waterbody and catchment geometries The next cell will extract the waterbody and catchment geometries from the supplied vector files, which will be used to load the Water Observations from Space and CHIRPS rainfall products.
extent, extent_geometry = load_vector_file(water_extent_vector_file)
catchment, catchment_geometry = load_vector_file(water_catchment_vector_file)
_____no_output_____
Apache-2.0
Use_cases/Monitoring_water_extent/Monitoring_water_extent_WOfS.ipynb
FlexiGroBots-H2020/deafrica-sandbox-notebooks
Load Water Observations from Space for the Waterbody The first step is to load the Water Observations from Space product using the extent geometry.
extent_query = {
    "time": time_range,
    "resolution": (-30, 30),
    "output_crs": "EPSG:6933",
    "geopolygon": extent_geometry,
    "group_by": "solar_day",
    "dask_chunks": dask_chunks,
}

wofs_ds = dc.load(product="wofs_ls", fuse_func=wofs_fuser, **extent_query)
_____no_output_____
Apache-2.0
Use_cases/Monitoring_water_extent/Monitoring_water_extent_WOfS.ipynb
FlexiGroBots-H2020/deafrica-sandbox-notebooks
Identify water in each resampling period The second step is to resample the observations to get a consistent measure of the waterbody, and then calculate the area classified as water in each period.
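The area calculation inside the helper is not shown here; conceptually it reduces to counting wet pixels and multiplying by the pixel area. A minimal sketch under that assumption (the toy mask is illustrative):

```python
import numpy as np

# Toy boolean water mask for one resampling period (True = wet pixel)
water_mask = np.array([[True, True, False],
                       [False, True, False]])

pixel_area_m2 = 30 * 30   # 30 m resolution, matching the load query above
area_km2 = water_mask.sum() * pixel_area_m2 / 1e6
print(area_km2)  # 3 wet pixels x 900 m^2 = 0.0027 km^2
```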
resampled_water_ds, resampled_water_area_ds = resample_water_observations(
    wofs_ds, resample_strategy
)
date_range_labels = get_resampled_labels(wofs_ds, resample_strategy)
_____no_output_____
Apache-2.0
Use_cases/Monitoring_water_extent/Monitoring_water_extent_WOfS.ipynb
FlexiGroBots-H2020/deafrica-sandbox-notebooks
Plot the change in water area over time
fig, ax = plt.subplots(figsize=(15, 5))
ax.plot(
    date_range_labels,
    resampled_water_area_ds.values,
    color="red",
    marker="^",
    markersize=4,
    linewidth=1,
)
plt.xticks(date_range_labels, rotation=65)
plt.title(f"Observed Area of Water from {time_range[0]} to {time_range[1]}")
plt.ylabel("Waterbody area (km$^2$)")
plt.tight_layout()
_____no_output_____
Apache-2.0
Use_cases/Monitoring_water_extent/Monitoring_water_extent_WOfS.ipynb
FlexiGroBots-H2020/deafrica-sandbox-notebooks
Load CHIRPS monthly rainfall
catchment_query = {
    "time": time_range,
    "resolution": (-5000, 5000),
    "output_crs": "EPSG:6933",
    "geopolygon": catchment_geometry,
    "group_by": "solar_day",
    "dask_chunks": dask_chunks,
}

rainfall_ds = dc.load(product="rainfall_chirps_monthly", **catchment_query)
_____no_output_____
Apache-2.0
Use_cases/Monitoring_water_extent/Monitoring_water_extent_WOfS.ipynb
FlexiGroBots-H2020/deafrica-sandbox-notebooks
Resample to estimate rainfall for each time period This is done by calculating the average rainfall over the extent of the catchment for each month, then summing these averages over the resampling period to estimate the total rainfall for the catchment.
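The two-step reduction described above (spatial mean, then temporal sum) can be sketched with a toy array; the numbers are illustrative only:

```python
import numpy as np

# Toy monthly rainfall grids (3 months x 2x2 pixels) over the catchment
monthly = np.array([
    [[10., 20.], [30., 40.]],   # month 1, spatial mean 25 mm
    [[ 0., 10.], [10., 20.]],   # month 2, spatial mean 10 mm
    [[ 5.,  5.], [ 5.,  5.]],   # month 3, spatial mean  5 mm
])

spatial_mean = monthly.mean(axis=(1, 2))   # average over the catchment per month
total_rainfall = spatial_mean.sum()        # sum over the resampling period
print(total_rainfall)  # 25 + 10 + 5 = 40.0 mm
```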
catchment_rainfall_resampled_ds = resample_rainfall_observations(
    rainfall_ds, resample_strategy, catchment
)
_____no_output_____
Apache-2.0
Use_cases/Monitoring_water_extent/Monitoring_water_extent_WOfS.ipynb
FlexiGroBots-H2020/deafrica-sandbox-notebooks
Compare waterbody area to catchment rainfall This step plots the summed average rainfall for the catchment area over each period as a histogram, overlaid with the waterbody area calculated previously.
figure = compare_extent_and_rainfall(
    resampled_water_area_ds, catchment_rainfall_resampled_ds, "mm", date_range_labels
)
_____no_output_____
Apache-2.0
Use_cases/Monitoring_water_extent/Monitoring_water_extent_WOfS.ipynb
FlexiGroBots-H2020/deafrica-sandbox-notebooks
Save the figure
figure.savefig("waterarea_and_rainfall.png", bbox_inches="tight")
_____no_output_____
Apache-2.0
Use_cases/Monitoring_water_extent/Monitoring_water_extent_WOfS.ipynb
FlexiGroBots-H2020/deafrica-sandbox-notebooks
Compare water extent for two different periods For the next step, enter a baseline date and an analysis date to construct a plot showing where water appeared, as well as where it disappeared, between the two dates.
baseline_time = "2018-07-01"
analysis_time = "2021-10-01"

figure = calculate_change_in_extent(baseline_time, analysis_time, resampled_water_ds)
_____no_output_____
Apache-2.0
Use_cases/Monitoring_water_extent/Monitoring_water_extent_WOfS.ipynb
FlexiGroBots-H2020/deafrica-sandbox-notebooks
Save figure
figure.savefig("waterarea_change.png", bbox_inches="tight")
_____no_output_____
Apache-2.0
Use_cases/Monitoring_water_extent/Monitoring_water_extent_WOfS.ipynb
FlexiGroBots-H2020/deafrica-sandbox-notebooks
--- Additional information**License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). Digital Earth Africa data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.**Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/digitalearthafrica/deafrica-sandbox-notebooks).**Compatible datacube version:**
print(datacube.__version__)
1.8.6
Apache-2.0
Use_cases/Monitoring_water_extent/Monitoring_water_extent_WOfS.ipynb
FlexiGroBots-H2020/deafrica-sandbox-notebooks
**Last Tested:**
from datetime import datetime
datetime.today().strftime('%Y-%m-%d')
_____no_output_____
Apache-2.0
Use_cases/Monitoring_water_extent/Monitoring_water_extent_WOfS.ipynb
FlexiGroBots-H2020/deafrica-sandbox-notebooks
Overview 1. Project Instructions & Prerequisites2. Learning Objectives3. Data Preparation4. Create Categorical Features with TF Feature Columns5. Create Continuous/Numerical Features with TF Feature Columns6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers7. Evaluating Potential Model Biases with Aequitas Toolkit 1. Project Instructions & Prerequisites Project Instructions **Context**: EHR data is becoming a key source of real-world evidence (RWE) for the pharmaceutical industry and regulators to [make decisions on clinical trials](https://www.fda.gov/news-events/speeches-fda-officials/breaking-down-barriers-between-clinical-trials-and-clinical-care-incorporating-real-world-evidence). You are a data scientist for an exciting unicorn healthcare startup that has created a groundbreaking diabetes drug that is ready for clinical trial testing. It is a very unique and sensitive drug that requires administering the drug over at least 5-7 days of time in the hospital with frequent monitoring/testing and patient medication adherence training with a mobile application. You have been provided a patient dataset from a client partner and are tasked with building a predictive model that can identify which type of patients the company should focus their efforts testing this drug on. Target patients are people that are likely to be in the hospital for this duration of time and will not incur significant additional costs for administering this drug to the patient and monitoring. In order to achieve your goal you must build a regression model that can predict the estimated hospitalization time for a patient and use this to select/filter patients for your study. 
**Expected Hospitalization Time Regression Model:** Utilizing a synthetic dataset (denormalized at the line level via augmentation) built off of the UCI Diabetes readmission dataset, students will build a regression model that predicts the expected days of hospitalization time and then convert this to a binary prediction of whether to include or exclude that patient from the clinical trial. This project will demonstrate the importance of building the right data representation at the encounter level, with appropriate filtering and preprocessing/feature engineering of key medical code sets. This project will also require students to analyze and interpret their model for biases across key demographic groups. Please see the project rubric online for more details on the areas on which your project will be evaluated. Dataset Due to healthcare PHI regulations (HIPAA, HITECH), there is a limited number of publicly available datasets and some datasets require training and approval. So, for the purpose of this exercise, we are using a dataset from UC Irvine (https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008) that has been modified for this course. Please note that it is limited in its representation of some key features such as diagnosis codes, which are usually an unordered list in 835s/837s (the HL7 standard interchange formats used for claims and remits). **Data Schema** The dataset reference information can be found at https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/. There are two CSVs that provide more details on the fields and some of the mapped values. Project Submission When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "student_project_submission.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "utils.py" and "student_utils.py" files in your submission. 
The student_utils.py file should contain most of the code you write, and the summary and text explanations should be written inline in the notebook. Once you download these files, compress them into one zip file for submission. Prerequisites - Intermediate level knowledge of Python- Basic knowledge of probability and statistics- Basic knowledge of machine learning concepts- Installation of Tensorflow 2.0 and other dependencies (conda environment.yml or virtualenv requirements.txt file provided) Environment Setup For step by step instructions on creating your environment, please go to https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/README.md. 2. Learning Objectives By the end of the project, you will be able to - Use the Tensorflow Dataset API to scalably extract, transform, and load datasets and build datasets aggregated at the line, encounter, and patient data levels (longitudinal) - Analyze EHR datasets to check for common issues (data leakage, statistical properties, missing values, high cardinality) by performing exploratory data analysis. - Create categorical features from Key Industry Code Sets (ICD, CPT, NDC) and reduce dimensionality for high cardinality features by using embeddings - Create derived features (bucketing, cross-features, embeddings) utilizing Tensorflow feature columns on both continuous and categorical input features - Use the Tensorflow Probability library to train a model that provides uncertainty range predictions that allow for risk adjustment/prioritization and triaging of predictions - Analyze and determine biases for a model for key demographic groups by evaluating performance metrics across groups by using the Aequitas framework 3. Data Preparation
# from __future__ import absolute_import, division, print_function, unicode_literals
import os
import numpy as np
import seaborn as sns
import tensorflow as tf
from tensorflow.keras import layers
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import pandas as pd
import aequitas as ae
from sklearn.metrics import roc_auc_score, accuracy_score, f1_score, classification_report, precision_score, recall_score

# Put all of the helper functions in utils
from utils import build_vocab_files, show_group_stats_viz, aggregate_dataset, preprocess_df, df_to_dataset, posterior_mean_field, prior_trainable
from functools import partial

pd.set_option('display.max_columns', 500)

# this allows you to make changes and save in student_utils.py and the file is reloaded every time you run a code block
%load_ext autoreload
%autoreload

# OPEN ISSUE ON MAC OSX for TF model training
os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'
_____no_output_____
MIT
code/student_project.ipynb
aaryapatel007/Patient-Selection-for-Diabetes-Drug-Testing
Dataset Loading and Schema Review Load the dataset and view a sample of the dataset along with reviewing the schema reference files to gain a deeper understanding of the dataset. The dataset is located at the following path https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/starter_code/data/final_project_dataset.csv. Also, review the information found in the data schema https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/
dataset_path = "./data/final_project_dataset.csv"
df = pd.read_csv(dataset_path)

# Line Test
try:
    assert len(df) > df['encounter_id'].nunique()
    print("Dataset could be at the line level")
except:
    print("Dataset is not at the line level")
Dataset could be at the line level
MIT
code/student_project.ipynb
aaryapatel007/Patient-Selection-for-Diabetes-Drug-Testing
Determine Level of Dataset (Line or Encounter) **Question 1**: Based off of analysis of the data, what level is this dataset? Is it at the line or encounter level? Are there any key fields besides the encounter_id and patient_nbr fields that we should use to aggregate on? Knowing this information will help inform us what level of aggregation is necessary for future steps and is a step that is often overlooked. **Student Response** : The dataset is at line level and needs to be converted to encounter level. The dataset should be aggregated on encounter_id, patient_nbr and principal_diagnosis_code. Analyze Dataset **Question 2**: Utilizing the library of your choice (recommend Pandas and Seaborn or matplotlib though), perform exploratory data analysis on the dataset. In particular be sure to address the following questions: - a. Field(s) with high amount of missing/zero values - b. Based off the frequency histogram for each numerical field, which numerical field(s) has/have a Gaussian(normal) distribution shape? - c. Which field(s) have high cardinality and why (HINT: ndc_code is one feature) - d. Please describe the demographic distributions in the dataset for the age and gender fields. **OPTIONAL**: Use the Tensorflow Data Validation and Analysis library to complete. - The Tensorflow Data Validation and Analysis library(https://www.tensorflow.org/tfx/data_validation/get_started) is a useful tool for analyzing and summarizing dataset statistics. It is especially useful because it can scale to large datasets that do not fit into memory. - Note that there are some bugs that are still being resolved with Chrome v80 and we have moved away from using this for the project. **Student Response**: 1. Fields with high amount of missing/null values are: *weight, payer_code, medical_speciality, number_outpatients, number_inpatients, number_emergency, num_procedures, ndc_codes.*1. Numerical values having Gaussian Distribution are: *num_lab_procedures, number_medication.*1. 
Fields having high cardinality are: *encounter_id, patient_nbr, other_diagnosis_codes.* This is because there are 71,518 patients and more than 100,000 encounters in the dataset, and each encounter has various diagnosis codes. This can also be reviewed by looking at the Tensorflow Data Validation statistics.1. Demographic distributions are shown below.
def check_null_df(df):
    return pd.DataFrame({
        'percent_null': df.isna().sum() / len(df) * 100,
        'percent_zero': df.isin([0]).sum() / len(df) * 100,
        'percent_missing': df.isin(['?', '?|?', 'Unknown/Invalid']).sum() / len(df) * 100,
    })

check_null_df(df)

plt.figure(figsize=(8, 5))
sns.countplot(x='age', data=df)

plt.figure(figsize=(8, 5))
sns.countplot(x='gender', data=df)

plt.figure(figsize=(8, 5))
sns.countplot(x='age', hue='gender', data=df)

plt.figure(figsize=(8, 5))
sns.distplot(df['num_lab_procedures'])

plt.figure(figsize=(8, 5))
sns.distplot(df['num_medications'])

###### NOTE: The visualization will only display in the Chrome browser. ########
# First install the libraries below and then restart the kernel to visualize.
# !pip install tensorflow-data-validation
# !pip install apache-beam[interactive]
import tensorflow_data_validation as tfdv

full_data_stats = tfdv.generate_statistics_from_dataframe(dataframe=df)
tfdv.visualize_statistics(full_data_stats)

schema = tfdv.infer_schema(statistics=full_data_stats)
tfdv.display_schema(schema=schema)

categorical_columns_list = ['A1Cresult', 'age', 'change', 'gender', 'max_glu_serum',
                            'medical_specialty', 'payer_code', 'race', 'readmitted', 'weight']

def count_unique_values(df):
    cat_df = df
    return pd.DataFrame({
        'columns': cat_df.columns,
        'cardinality': cat_df.nunique()
    }).reset_index(drop=True).sort_values(by='cardinality', ascending=False)

count_unique_values(df)
_____no_output_____
MIT
code/student_project.ipynb
aaryapatel007/Patient-Selection-for-Diabetes-Drug-Testing
Reduce Dimensionality of the NDC Code Feature **Question 3**: NDC codes are a common format to represent the wide variety of drugs that are prescribed for patient care in the United States. The challenge is that there are many codes that map to the same or similar drug. You are provided with the ndc drug lookup file https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/ndc_lookup_table.csv derived from the National Drug Codes List site(https://ndclist.com/). Please use this file to come up with a way to reduce the dimensionality of this field and create a new field in the dataset called "generic_drug_name" in the output dataframe.
# NDC code lookup file
ndc_code_path = "./medication_lookup_tables/final_ndc_lookup_table"
ndc_code_df = pd.read_csv(ndc_code_path)

from student_utils import reduce_dimension_ndc

def reduce_dimension_ndc(df, ndc_code_df):
    '''
    df: pandas dataframe, input dataset
    ndc_code_df: pandas dataframe, drug code dataset used for mapping in generic names
    return:
        df: pandas dataframe, output dataframe with joined generic drug name
    '''
    mapping = dict(ndc_code_df[['NDC_Code', 'Non-proprietary Name']].values)
    mapping['nan'] = np.nan
    df['generic_drug_name'] = df['ndc_code'].astype(str).apply(lambda x: mapping[x])
    return df

reduce_dim_df = reduce_dimension_ndc(df, ndc_code_df)
reduce_dim_df.head()

# Number of unique values should be less for the new output field
assert df['ndc_code'].nunique() > reduce_dim_df['generic_drug_name'].nunique()

print('Number of ndc_code: ', df['ndc_code'].nunique())
print('Number of drug name: ', reduce_dim_df['generic_drug_name'].nunique())
Number of ndc_code: 251 Number of drug name: 22
MIT
code/student_project.ipynb
aaryapatel007/Patient-Selection-for-Diabetes-Drug-Testing
Select First Encounter for each Patient **Question 4**: In order to simplify the aggregation of data for the model, we will only select the first encounter for each patient in the dataset. This is to reduce the risk of data leakage of future patient encounters and to reduce complexity of the data transformation and modeling steps. We will assume that sorting in numerical order on the encounter_id provides the time horizon for determining which encounters come before and after another.
def select_first_encounter(df):
    '''
    df: pandas dataframe, dataframe with all encounters
    return:
        - first_encounter_df: pandas dataframe, dataframe with only the first encounter for a given patient
    '''
    # Sort so that groupby().first() picks the earliest encounter_id per patient
    df = df.sort_values(by='encounter_id')
    first_encounters = df.groupby('patient_nbr')['encounter_id'].first().values
    first_encounter_df = df[df['encounter_id'].isin(first_encounters)]
    return first_encounter_df

first_encounter_df = select_first_encounter(reduce_dim_df)
first_encounter_df.head()

# unique patients in transformed dataset
unique_patients = first_encounter_df['patient_nbr'].nunique()
print("Number of unique patients:{}".format(unique_patients))

# unique encounters in transformed dataset
unique_encounters = first_encounter_df['encounter_id'].nunique()
print("Number of unique encounters:{}".format(unique_encounters))

original_unique_patient_number = reduce_dim_df['patient_nbr'].nunique()

# number of unique patients should be equal to the number of unique encounters and patients in the final dataset
assert original_unique_patient_number == unique_patients
assert original_unique_patient_number == unique_encounters
print("Tests passed!!")
Number of unique patients:71518 Number of unique encounters:71518 Tests passed!!
MIT
code/student_project.ipynb
aaryapatel007/Patient-Selection-for-Diabetes-Drug-Testing
Aggregate Dataset to Right Level for Modeling In order to provide a broad scope of the steps and to prevent students from getting stuck with data transformations, we have selected the aggregation columns and provided a function to build the dataset at the appropriate level. The 'aggregate_dataset' function that you can find in the 'utils.py' file can take the preceding dataframe with the 'generic_drug_name' field and transform the data appropriately for the project. To make it simpler for students, we are creating dummy columns for each unique generic drug name and adding those as input features to the model. There are other options for data representation but this is out of scope for the time constraints of the course.
exclusion_list = ['generic_drug_name', 'ndc_code']
grouping_field_list = [c for c in first_encounter_df.columns if c not in exclusion_list]

agg_drug_df, ndc_col_list = aggregate_dataset(first_encounter_df, grouping_field_list, 'generic_drug_name')

assert len(agg_drug_df) == agg_drug_df['patient_nbr'].nunique() == agg_drug_df['encounter_id'].nunique()

ndc_col_list
_____no_output_____
MIT
code/student_project.ipynb
aaryapatel007/Patient-Selection-for-Diabetes-Drug-Testing
Prepare Fields and Cast Dataset Feature Selection **Question 5**: After you have aggregated the dataset to the right level, we can do feature selection (we will include the ndc_col_list, dummy column features too). In the block below, please select the categorical and numerical features that you will use for the model, so that we can create a dataset subset. For the payer_code and weight fields, please provide whether you think we should include/exclude the field in our model and give a justification/rationale for this based off of the statistics of the data. Feel free to use visualizations or summary statistics to support your choice. **Student response**: We should exclude both payer_code and weight from our model because a large share of their values are missing (see the percent_missing statistics computed earlier), so imputing them would add noise rather than signal.
plt.figure(figsize=(8, 5))
sns.countplot(x='payer_code', data=agg_drug_df)

plt.figure(figsize=(8, 5))
sns.countplot(x='number_emergency', data=agg_drug_df)

count_unique_values(agg_drug_df[grouping_field_list])

'''
Please update the list to include the features you think are appropriate for the model
and the field that we will be using to train the model. There are three required
demographic features for the model and I have inserted a list with them already in the
categorical list. These will be required for later steps when analyzing data splits and
model biases.
'''
required_demo_col_list = ['race', 'gender', 'age']
student_categorical_col_list = ['change', 'primary_diagnosis_code'] + required_demo_col_list + ndc_col_list
student_numerical_col_list = ['number_inpatient', 'number_emergency', 'num_lab_procedures',
                              'number_diagnoses', 'num_medications', 'num_procedures']
PREDICTOR_FIELD = 'time_in_hospital'

def select_model_features(df, categorical_col_list, numerical_col_list, PREDICTOR_FIELD,
                          grouping_key='patient_nbr'):
    selected_col_list = [grouping_key] + [PREDICTOR_FIELD] + categorical_col_list + numerical_col_list
    # Select from the passed-in dataframe rather than the global agg_drug_df
    return df[selected_col_list]

selected_features_df = select_model_features(agg_drug_df, student_categorical_col_list,
                                             student_numerical_col_list, PREDICTOR_FIELD)
_____no_output_____
MIT
code/student_project.ipynb
aaryapatel007/Patient-Selection-for-Diabetes-Drug-Testing
Preprocess Dataset - Casting and Imputing We will cast and impute the dataset before splitting so that we do not have to repeat these steps across the splits in the next step. For imputing, there can be deeper analysis into which features to impute and how to impute but for the sake of time, we are taking a general strategy of imputing zero for only numerical features. OPTIONAL: What are some potential issues with this approach? Can you recommend a better way and also implement it?
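One possible answer to the optional question: zero-imputation drags skewed count features toward zero and distorts their distributions; a simple alternative is median imputation. A minimal sketch using scikit-learn's `SimpleImputer` (the toy column is illustrative):

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Toy numerical column with one missing value
toy = pd.DataFrame({'num_lab_procedures': [44.0, np.nan, 60.0, 20.0]})

# strategy='median' replaces NaN with the column median instead of zero
filled = SimpleImputer(strategy='median').fit_transform(toy)
print(filled.ravel())  # the NaN is replaced by the median of [44, 60, 20] = 44
```

Fitting the imputer on the training split only (and reusing it on validation/test) would also avoid leaking statistics across partitions.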
processed_df = preprocess_df(selected_features_df, student_categorical_col_list,
                             student_numerical_col_list, PREDICTOR_FIELD,
                             categorical_impute_value='nan', numerical_impute_value=0)
/home/workspace/starter_code/utils.py:29: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df[predictor] = df[predictor].astype(float) /home/workspace/starter_code/utils.py:31: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df[c] = cast_df(df, c, d_type=str) /home/workspace/starter_code/utils.py:33: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df[numerical_column] = impute_df(df, numerical_column, numerical_impute_value)
MIT
code/student_project.ipynb
aaryapatel007/Patient-Selection-for-Diabetes-Drug-Testing
Split Dataset into Train, Validation, and Test Partitions **Question 6**: In order to prepare the data for being trained and evaluated by a deep learning model, we will split the dataset into three partitions, with the validation partition used for optimizing the model hyperparameters during training. One of the key parts is that we need to be sure that the data does not accidently leak across partitions.Please complete the function below to split the input dataset into three partitions(train, validation, test) with the following requirements.- Approximately 60%/20%/20% train/validation/test split- Randomly sample different patients into each data partition- **IMPORTANT** Make sure that a patient's data is not in more than one partition, so that we can avoid possible data leakage.- Make sure that the total number of unique patients across the splits is equal to the total number of unique patients in the original dataset- Total number of rows in original dataset = sum of rows across all three dataset partitions
def patient_dataset_splitter(df, patient_key='patient_nbr'):
    '''
    df: pandas dataframe, input dataset that will be split
    patient_key: string, column that is the patient id

    return:
     - train: pandas dataframe,
     - validation: pandas dataframe,
     - test: pandas dataframe,
    '''
    df[student_numerical_col_list] = df[student_numerical_col_list].astype(float)
    # Sample unique patient ids (not rows) so that a patient's records can
    # never leak across partitions: ~60%/20%/20% train/val/test.
    unique_ids = pd.Series(df[patient_key].unique())
    train_ids = unique_ids.sample(frac=0.6, random_state=3)
    rest_ids = unique_ids.drop(train_ids.index)
    val_ids = rest_ids.sample(frac=0.5, random_state=3)
    train_df = df[df[patient_key].isin(train_ids)]
    val_df = df[df[patient_key].isin(val_ids)]
    test_df = df.drop(train_df.index).drop(val_df.index)
    return (train_df.reset_index(drop=True),
            val_df.reset_index(drop=True),
            test_df.reset_index(drop=True))

#from student_utils import patient_dataset_splitter
d_train, d_val, d_test = patient_dataset_splitter(processed_df, 'patient_nbr')

assert len(d_train) + len(d_val) + len(d_test) == len(processed_df)
print("Test passed for number of total rows equal!")

assert (d_train['patient_nbr'].nunique() + d_val['patient_nbr'].nunique()
        + d_test['patient_nbr'].nunique()) == agg_drug_df['patient_nbr'].nunique()
print("Test passed for number of unique patients being equal!")
Test passed for number of unique patients being equal!
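As a quick sanity check on the leakage requirement, here is a minimal pure-Python sketch of the idea: shuffle the *unique* patient ids and cut them 60/20/20, so every patient lands in exactly one partition. The `split_patients` helper and the toy `rows` list are illustrative, not part of the project code.

```python
import random

def split_patients(patient_ids, seed=0):
    """Shuffle unique patient ids and cut 60/20/20 so each patient
    lands in exactly one partition (the leakage-free split idea)."""
    ids = sorted(set(patient_ids))
    rng = random.Random(seed)
    rng.shuffle(ids)
    n = len(ids)
    return ids[:int(0.6 * n)], ids[int(0.6 * n):int(0.8 * n)], ids[int(0.8 * n):]

rows = [1, 1, 2, 3, 3, 4, 5, 6, 7, 8, 9, 10]  # patient id per row, with repeats
train_ids, val_ids, test_ids = split_patients(rows)

# No patient id may appear in more than one partition.
assert not (set(train_ids) & set(val_ids))
assert not (set(train_ids) & set(test_ids))
assert not (set(val_ids) & set(test_ids))
# All unique patients are accounted for.
assert set(train_ids) | set(val_ids) | set(test_ids) == set(rows)
print(len(train_ids), len(val_ids), len(test_ids))  # 6 2 2
```

Because rows are assigned by patient id rather than sampled directly, a patient with multiple encounters keeps all of their rows in a single partition.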
Demographic Representation Analysis of Split After the split, we should check the distribution of key features/groups and make sure that there are representative samples across the partitions. The show_group_stats_viz function in the utils.py file can be used to group and visualize different groups and dataframe partitions. Label Distribution Across Partitions Below you can see the distribution of the label across your splits. Are the histogram distribution shapes similar across partitions?
show_group_stats_viz(processed_df, PREDICTOR_FIELD)
show_group_stats_viz(d_train, PREDICTOR_FIELD)
show_group_stats_viz(d_test, PREDICTOR_FIELD)
time_in_hospital 1.0 2159 2.0 2486 3.0 2576 4.0 1842 5.0 1364 6.0 1041 7.0 814 8.0 584 9.0 404 10.0 307 11.0 242 12.0 182 13.0 165 14.0 138 dtype: int64 AxesSubplot(0.125,0.125;0.775x0.755)
Demographic Group Analysis We should check that our partitions/splits of the dataset are similar in terms of their demographic profiles. Below you can see how we might visualize and analyze the full dataset vs. the partitions.
# Full dataset before splitting
patient_demo_features = ['race', 'gender', 'age', 'patient_nbr']
patient_group_analysis_df = processed_df[patient_demo_features].groupby('patient_nbr').head(1).reset_index(drop=True)
show_group_stats_viz(patient_group_analysis_df, 'gender')

# Training partition
show_group_stats_viz(d_train, 'gender')

# Test partition
show_group_stats_viz(d_test, 'gender')
gender Female 7631 Male 6672 Unknown/Invalid 1 dtype: int64 AxesSubplot(0.125,0.125;0.775x0.755)
Convert Dataset Splits to TF Dataset We have provided the function to convert the Pandas dataframe to TF tensors using the TF Dataset API. Please note that this is not a scalable method, and for larger datasets the 'make_csv_dataset' method is recommended - https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset.
# Convert dataset from Pandas dataframes to TF dataset
batch_size = 128
diabetes_train_ds = df_to_dataset(d_train, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_val_ds = df_to_dataset(d_val, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_test_ds = df_to_dataset(d_test, PREDICTOR_FIELD, batch_size=batch_size)

# We use this sample of the dataset to show transformations later
diabetes_batch = next(iter(diabetes_train_ds))[0]

def demo(feature_column, example_batch):
    feature_layer = tf.keras.layers.DenseFeatures(feature_column)
    print(feature_layer(example_batch))
4. Create Categorical Features with TF Feature Columns Build Vocabulary for Categorical Features Before we can create the TF categorical features, we must first create the vocab files containing the unique values of each field, taken from the **training** dataset. Below we have provided a function you can use that only requires the pandas train dataset partition and the categorical columns in a list format. The output variable 'vocab_file_list' will be a list of the file paths that can be used in the next step for creating the categorical features.
vocab_file_list = build_vocab_files(d_train, student_categorical_col_list)
assert len(vocab_file_list) == len(student_categorical_col_list)
Create Categorical Features with Tensorflow Feature Column API **Question 7**: Using the vocab file list from above that was derived from the features you selected earlier, please create categorical features with the Tensorflow Feature Column API, https://www.tensorflow.org/api_docs/python/tf/feature_column. Below is a function to help guide you.
def create_tf_categorical_feature_cols(categorical_col_list, vocab_dir='./diabetes_vocab/'):
    '''
    categorical_col_list: list, categorical field list that will be transformed with TF feature column
    vocab_dir: string, the path where the vocabulary text files are located

    return:
        output_tf_list: list of TF feature columns
    '''
    output_tf_list = []
    for c in categorical_col_list:
        vocab_file_path = os.path.join(vocab_dir, c + "_vocab.txt")
        # Read the vocabulary from the text file and reserve one out-of-vocabulary bucket
        diagnosis_vocab = tf.feature_column.categorical_column_with_vocabulary_file(c, vocab_file_path, num_oov_buckets=1)
        tf_categorical_feature_column = tf.feature_column.indicator_column(diagnosis_vocab)
        output_tf_list.append(tf_categorical_feature_column)
    return output_tf_list

tf_cat_col_list = create_tf_categorical_feature_cols(student_categorical_col_list)
test_cat_var1 = tf_cat_col_list[0]
print("Example categorical field:\n{}".format(test_cat_var1))
demo(test_cat_var1, diabetes_batch)
Example categorical field: IndicatorColumn(categorical_column=VocabularyFileCategoricalColumn(key='change', vocabulary_file='./diabetes_vocab/change_vocab.txt', vocabulary_size=3, num_oov_buckets=1, dtype=tf.string, default_value=-1)) WARNING:tensorflow:Layer dense_features_27 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx. If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2. To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
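To make the indicator-column behavior concrete, here is a hedged pure-Python sketch of a vocabulary lookup with a single out-of-vocabulary (OOV) bucket — the same idea that `categorical_column_with_vocabulary_file(..., num_oov_buckets=1)` plus `indicator_column` implement. The `one_hot_with_oov` helper and the three-entry vocab are illustrative assumptions, not the actual vocab file contents.

```python
# One-hot encode a value against a vocabulary, mapping any unseen value
# to a final OOV bucket (so the vector has len(vocab) + 1 slots).
def one_hot_with_oov(value, vocab):
    idx = vocab.index(value) if value in vocab else len(vocab)  # OOV bucket
    vec = [0] * (len(vocab) + 1)
    vec[idx] = 1
    return vec

vocab = ['No', 'Ch', 'Up']  # e.g. hypothetical contents of change_vocab.txt
print(one_hot_with_oov('Ch', vocab))       # [0, 1, 0, 0]
print(one_hot_with_oov('Unknown', vocab))  # [0, 0, 0, 1]
```

The OOV bucket is what lets the model handle test-set categories that never appeared in the training vocabulary.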
5. Create Numerical Features with TF Feature Columns **Question 8**: Using the TF Feature Column API (https://www.tensorflow.org/api_docs/python/tf/feature_column/), please create normalized Tensorflow numeric features for the model. Try to use the z-score normalizer function below, as well as the 'calculate_stats_from_train_data' function.
from functools import partial

def create_tf_numeric_feature(col, MEAN, STD, default_value=0):
    '''
    col: string, input numerical column name
    MEAN: the mean for the column in the training data
    STD: the standard deviation for the column in the training data
    default_value: the value that will be used for imputing the field

    return:
        tf_numeric_feature: tf feature column representation of the input field
    '''
    # z-score normalization with training-set statistics baked in via partial
    normalizer_fn = lambda col, m, s: (col - m) / s
    normalizer = partial(normalizer_fn, m=MEAN, s=STD)
    tf_numeric_feature = tf.feature_column.numeric_column(col, normalizer_fn=normalizer,
                                                          dtype=tf.float64, default_value=default_value)
    return tf_numeric_feature
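The z-score transform inside `normalizer_fn` can be sketched without TensorFlow. Note this toy example uses the population standard deviation for arithmetic convenience, whereas pandas' `describe()` (used below) reports the sample standard deviation; the values are made up for illustration.

```python
# Minimal z-score sketch: normalize with statistics computed from the
# training values only, mirroring what the TF numeric column does.
def z_score(values, mean, std):
    return [(v - mean) / std for v in values]

train = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mean = sum(train) / len(train)                                    # 5.0
std = (sum((v - mean) ** 2 for v in train) / len(train)) ** 0.5   # 2.0 (population std)
print(z_score([5.0, 9.0], mean, std))  # [0.0, 2.0]
```

The key point is that the mean and standard deviation come from the training partition only, so no information from the validation or test sets leaks into the transform.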
For simplicity, the create_tf_numerical_feature_cols function below uses the same normalizer function across all features (z-score normalization), but if you have time, feel free to analyze and adapt the normalizer based on the statistical distributions. You may find this a good resource for determining which transformation fits the data best: https://developers.google.com/machine-learning/data-prep/transform/normalization.
def calculate_stats_from_train_data(df, col):
    mean = df[col].describe()['mean']
    std = df[col].describe()['std']
    return mean, std

def create_tf_numerical_feature_cols(numerical_col_list, train_df):
    tf_numeric_col_list = []
    for c in numerical_col_list:
        mean, std = calculate_stats_from_train_data(train_df, c)
        tf_numeric_feature = create_tf_numeric_feature(c, mean, std)
        tf_numeric_col_list.append(tf_numeric_feature)
    return tf_numeric_col_list

tf_cont_col_list = create_tf_numerical_feature_cols(student_numerical_col_list, d_train)

test_cont_var1 = tf_cont_col_list[0]
print("Example continuous field:\n{}\n".format(test_cont_var1))
demo(test_cont_var1, diabetes_batch)
Example continuous field: NumericColumn(key='number_inpatient', shape=(1,), default_value=(0,), dtype=tf.float64, normalizer_fn=functools.partial(<function create_tf_numeric_feature.<locals>.<lambda> at 0x7f7ffe9b6290>, m=0.17600664176006642, s=0.6009985590232482)) WARNING:tensorflow:Layer dense_features_28 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx. If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2. To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers Use DenseFeatures to combine features for model Now that we have prepared categorical and numerical features using Tensorflow's Feature Column API, we can combine them into a dense vector representation for the model. Below we will create this new input layer, which we will call 'claim_feature_layer'.
claim_feature_columns = tf_cat_col_list + tf_cont_col_list
claim_feature_layer = tf.keras.layers.DenseFeatures(claim_feature_columns)
Build Sequential API Model from DenseFeatures and TF Probability Layers Below we have provided some boilerplate code for building a model that connects the Sequential API, DenseFeatures, and Tensorflow Probability layers into a deep learning model. There are many opportunities to further optimize and explore different architectures through benchmarking and testing approaches in various research papers, loss and evaluation metrics, learning curves, hyperparameter tuning, TF probability layers, etc. Feel free to modify and explore as you wish. **OPTIONAL**: Come up with a more optimal neural network architecture and hyperparameters. Share the process in discovering the architecture and hyperparameters.
def build_sequential_model(feature_layer):
    model = tf.keras.Sequential([
        feature_layer,
        tf.keras.layers.Dense(512, activation='relu'),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(64, activation='relu'),
        tfp.layers.DenseVariational(1 + 1, posterior_mean_field, prior_trainable),
        tfp.layers.DistributionLambda(
            lambda t: tfp.distributions.Normal(loc=t[..., :1],
                                               scale=1e-3 + tf.math.softplus(0.01 * t[..., 1:]))
        ),
    ])
    return model

def build_diabetes_model(train_ds, val_ds, feature_layer, epochs=5, loss_metric='mse'):
    model = build_sequential_model(feature_layer)
    model.compile(optimizer='rmsprop', loss=loss_metric, metrics=[loss_metric])
    early_stop = tf.keras.callbacks.EarlyStopping(monitor=loss_metric, patience=3)
    model_checkpoint = tf.keras.callbacks.ModelCheckpoint('saved_models/bestmodel.h5', monitor='val_loss',
                                                          verbose=0, save_best_only=True, mode='auto')
    history = model.fit(train_ds, validation_data=val_ds,
                        callbacks=[early_stop],
                        epochs=epochs)
    return model, history

diabetes_model, history = build_diabetes_model(diabetes_train_ds, diabetes_val_ds, claim_feature_layer, epochs=20)
Train for 358 steps, validate for 90 steps Epoch 1/20 358/358 [==============================] - 12s 33ms/step - loss: 25.3230 - mse: 25.1798 - val_loss: 22.8669 - val_mse: 22.5741 Epoch 2/20 358/358 [==============================] - 13s 36ms/step - loss: 15.7285 - mse: 15.1443 - val_loss: 13.9822 - val_mse: 13.2914 Epoch 3/20 358/358 [==============================] - 13s 37ms/step - loss: 12.7352 - mse: 11.9670 - val_loss: 11.7411 - val_mse: 11.0843 Epoch 4/20 358/358 [==============================] - 13s 35ms/step - loss: 11.2485 - mse: 10.4313 - val_loss: 10.4873 - val_mse: 9.5640 Epoch 5/20 358/358 [==============================] - 13s 35ms/step - loss: 10.3242 - mse: 9.5285 - val_loss: 9.8177 - val_mse: 9.1636 Epoch 6/20 358/358 [==============================] - 8s 24ms/step - loss: 10.2267 - mse: 9.4504 - val_loss: 10.4423 - val_mse: 9.6653 Epoch 7/20 358/358 [==============================] - 13s 35ms/step - loss: 9.2538 - mse: 8.3019 - val_loss: 9.8968 - val_mse: 9.1000 Epoch 8/20 358/358 [==============================] - 13s 37ms/step - loss: 8.9934 - mse: 8.1093 - val_loss: 8.8550 - val_mse: 8.1466 Epoch 9/20 358/358 [==============================] - 13s 36ms/step - loss: 8.7026 - mse: 7.9876 - val_loss: 9.2839 - val_mse: 8.8448 Epoch 10/20 358/358 [==============================] - 13s 37ms/step - loss: 8.5294 - mse: 7.6984 - val_loss: 8.0266 - val_mse: 7.2838 Epoch 11/20 358/358 [==============================] - 8s 23ms/step - loss: 8.5896 - mse: 7.7653 - val_loss: 7.9876 - val_mse: 7.3438 Epoch 12/20 358/358 [==============================] - 8s 23ms/step - loss: 8.1592 - mse: 7.3578 - val_loss: 8.5204 - val_mse: 7.5046 Epoch 13/20 358/358 [==============================] - 8s 23ms/step - loss: 8.1387 - mse: 7.3121 - val_loss: 8.0225 - val_mse: 7.3049 Epoch 14/20 358/358 [==============================] - 8s 23ms/step - loss: 7.8314 - mse: 7.0780 - val_loss: 7.9314 - val_mse: 7.0396 Epoch 15/20 358/358 [==============================] - 10s 
27ms/step - loss: 7.5737 - mse: 6.7585 - val_loss: 7.8283 - val_mse: 7.0017 Epoch 16/20 358/358 [==============================] - 8s 21ms/step - loss: 7.5256 - mse: 6.8017 - val_loss: 7.5964 - val_mse: 6.9889 Epoch 17/20 358/358 [==============================] - 8s 21ms/step - loss: 7.5827 - mse: 6.7757 - val_loss: 7.8556 - val_mse: 7.1059 Epoch 18/20 358/358 [==============================] - 8s 21ms/step - loss: 7.4014 - mse: 6.5900 - val_loss: 7.4390 - val_mse: 6.5678 Epoch 19/20 358/358 [==============================] - 8s 22ms/step - loss: 7.3756 - mse: 6.5670 - val_loss: 7.4403 - val_mse: 6.6895 Epoch 20/20 358/358 [==============================] - 8s 22ms/step - loss: 7.1834 - mse: 6.3069 - val_loss: 7.8813 - val_mse: 7.2031
Show Model Uncertainty Range with TF Probability **Question 9**: Now that we have trained a model with TF Probability layers, we can extract the mean and standard deviation for each prediction. Please fill in the answer for the m and s variables below. The code for getting the predictions is provided for you below.
feature_list = student_categorical_col_list + student_numerical_col_list
diabetes_x_tst = dict(d_test[feature_list])
diabetes_yhat = diabetes_model(diabetes_x_tst)
preds = diabetes_model.predict(diabetes_test_ds)

def get_mean_std_from_preds(diabetes_yhat):
    '''
    diabetes_yhat: TF Probability prediction object
    '''
    m = diabetes_yhat.mean()
    s = diabetes_yhat.stddev()
    return m, s

m, s = get_mean_std_from_preds(diabetes_yhat)
Show Prediction Output
prob_outputs = {
    "pred": preds.flatten(),
    "actual_value": d_test['time_in_hospital'].values,
    "pred_mean": m.numpy().flatten(),
    "pred_std": s.numpy().flatten()
}
prob_output_df = pd.DataFrame(prob_outputs)
prob_output_df.head()
Convert Regression Output to Classification Output for Patient Selection **Question 10**: Given the output predictions, convert them to a binary label for whether the patient meets the time criteria or does not (HINT: use the mean prediction numpy array). The expected output is a numpy array with a 1 or 0 based on whether the prediction meets or doesn't meet the criteria.
def get_student_binary_prediction(df, col):
    '''
    df: pandas dataframe prediction output dataframe
    col: str, probability mean prediction field

    return:
        student_binary_prediction: numpy array of binary labels (1 if the mean
        prediction meets the 5-day threshold, else 0)
    '''
    student_binary_prediction = df[col].apply(lambda x: 1 if x >= 5 else 0).values
    return student_binary_prediction

student_binary_prediction = get_student_binary_prediction(prob_output_df, 'pred_mean')
Add Binary Prediction to Test Dataframe The student_binary_prediction output is a numpy array of binary labels; we can add it to a dataframe to better visualize the results and to prepare the data for the Aequitas toolkit. The Aequitas toolkit requires that the predictions be mapped to a binary label for the predictions (called the 'score' field) and the actual value (called 'label_value').
def add_pred_to_test(test_df, pred_np, demo_col_list):
    for c in demo_col_list:
        test_df[c] = test_df[c].astype(str)
    test_df['score'] = pred_np
    test_df['label_value'] = test_df['time_in_hospital'].apply(lambda x: 1 if x >= 5 else 0)
    return test_df

pred_test_df = add_pred_to_test(d_test, student_binary_prediction, ['race', 'gender'])
pred_test_df[['patient_nbr', 'gender', 'race', 'time_in_hospital', 'score', 'label_value']].head()
Model Evaluation Metrics

**Question 11**: Now it is time to use the newly created binary labels in the 'pred_test_df' dataframe to evaluate the model with some common classification metrics. Please create a report summary of the performance of the model and be sure to give the ROC AUC, F1 score (weighted), and class precision and recall scores. For the report, please be sure to include the following parts:
- With a non-technical audience in mind, explain the precision-recall tradeoff in regard to how you have optimized your model.
- What are some areas of improvement for future iterations?

Precision-Recall Tradeoff
* A tradeoff means that increasing one metric leads to a decrease in the other.
* Precision is the fraction of correct positives among all predicted positives.
* Recall is the fraction of correct positives among all actual positives in the dataset.
* The precision-recall tradeoff arises because, for the same model, choosing a decision threshold that raises precision lowers recall, and vice versa.

Improvements
* Recall is quite low, so future iterations should focus on improving it.
# AUC, F1, precision and recall summary
print(classification_report(pred_test_df['label_value'], pred_test_df['score']))
print("Weighted F1: ", f1_score(pred_test_df['label_value'], pred_test_df['score'], average='weighted'))
print("Accuracy: ", accuracy_score(pred_test_df['label_value'], pred_test_df['score']))
print("ROC AUC: ", roc_auc_score(pred_test_df['label_value'], pred_test_df['score']))
print("Precision: ", precision_score(pred_test_df['label_value'], pred_test_df['score']))
print("Recall: ", recall_score(pred_test_df['label_value'], pred_test_df['score']))
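To illustrate the precision-recall tradeoff described above, here is a small hand-rolled sketch — the toy labels and the hypothetical `pred_mean` values are made up for illustration — showing how lowering the decision threshold raises recall at the cost of precision:

```python
# Compute precision and recall from scratch (no scikit-learn needed).
def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fp), tp / (tp + fn)

y_true = [1, 1, 1, 1, 0, 0, 0, 0]                   # 1 = stayed >= 5 days
means  = [6.2, 5.4, 4.8, 3.9, 5.1, 4.2, 3.0, 2.5]   # hypothetical pred_mean values

loose  = [1 if m >= 4.0 else 0 for m in means]  # low threshold: flag more patients
strict = [1 if m >= 5.0 else 0 for m in means]  # high threshold: flag fewer

p_loose, r_loose = precision_recall(y_true, loose)
p_strict, r_strict = precision_recall(y_true, strict)
print(round(p_loose, 2), round(r_loose, 2))    # 0.6 0.75
print(round(p_strict, 2), round(r_strict, 2))  # 0.67 0.5
```

Moving the threshold from 4.0 to 5.0 days trades recall (0.75 → 0.5) for precision (0.6 → 0.67): the same model, selected at different operating points.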
7. Evaluating Potential Model Biases with Aequitas Toolkit Prepare Data For Aequitas Bias Toolkit Using the gender and race fields, we will prepare the data for the Aequitas Toolkit.
# Aequitas
from aequitas.preprocessing import preprocess_input_df
from aequitas.group import Group
from aequitas.plotting import Plot
from aequitas.bias import Bias
from aequitas.fairness import Fairness

ae_subset_df = pred_test_df[['race', 'gender', 'score', 'label_value']]
ae_df, _ = preprocess_input_df(ae_subset_df)
g = Group()
xtab, _ = g.get_crosstabs(ae_df)
absolute_metrics = g.list_absolute_metrics(xtab)
clean_xtab = xtab.fillna(-1)
aqp = Plot()
b = Bias()
/opt/conda/lib/python3.7/site-packages/aequitas/group.py:143: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df['score'] = df['score'].astype(float)
Reference Group Selection Below we have chosen the reference group for our analysis but feel free to select another one.
# test reference group with Caucasian Male
bdf = b.get_disparity_predefined_groups(clean_xtab,
                                        original_df=ae_df,
                                        ref_groups_dict={'race': 'Caucasian', 'gender': 'Male'},
                                        alpha=0.05,
                                        check_significance=False)

f = Fairness()
fdf = f.get_group_value_fairness(bdf)
get_disparity_predefined_group()
Race and Gender Bias Analysis for Patient Selection **Question 12**: For the gender and race fields, please plot two metrics that are important for patient selection below and state whether there is a significant bias in your model across any of the groups along with justification for your statement.
# Plot two metrics
# Is there significant bias in your model for either race or gender?
aqp.plot_group_metric(clean_xtab, 'fpr', min_group_size=0.05)
aqp.plot_group_metric(clean_xtab, 'tpr', min_group_size=0.05)
aqp.plot_group_metric(clean_xtab, 'fnr', min_group_size=0.05)
aqp.plot_group_metric(clean_xtab, 'tnr', min_group_size=0.05)
Across the plotted group metrics (FPR, TPR, FNR, TNR), the rates are similar for the race and gender groups with sufficient sample size, so there isn't any significant bias in the model for either race or gender. Fairness Analysis Example - Relative to a Reference Group **Question 13**: Earlier we defined our reference group and then calculated disparity metrics relative to this grouping. Please provide a visualization of the fairness evaluation for this reference group and analyze whether there is disparity.
# Reference group fairness plot
aqp.plot_fairness_disparity(bdf, group_metric='fnr', attribute_name='race', significance_alpha=0.05, min_group_size=0.05)
aqp.plot_fairness_disparity(fdf, group_metric='fnr', attribute_name='gender', significance_alpha=0.05, min_group_size=0.05)
aqp.plot_fairness_disparity(fdf, group_metric='fpr', attribute_name='race', significance_alpha=0.05, min_group_size=0.05)
Relative to the Caucasian Male reference group, the FNR and FPR disparities stay within the fairness thresholds, so there isn't any significant disparity in the model for either race or gender.
aqp.plot_fairness_group(fdf, group_metric='fpr', title=True, min_group_size=0.05)
aqp.plot_fairness_group(fdf, group_metric='fnr', title=True)
*First, let's read in the data and necessary libraries*
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from mypy import print_side_by_side
from mypy import display_side_by_side  # https://stackoverflow.com/a/44923103/8067752

%matplotlib inline
pd.options.mode.chained_assignment = None

b_cal = pd.read_csv('boston_calendar.csv')
s_cal = pd.read_csv('seatle_calendar.csv')
b_list = pd.read_csv('boston_listings.csv')
s_list = pd.read_csv('seatle_listings.csv')
b_rev = pd.read_csv('boston_reviews.csv')
s_rev = pd.read_csv('seatle_reviews.csv')
MIT
.ipynb_checkpoints/Project-1-checkpoint.ipynb
Zainy1453/Project-ND-1
_______________________________________________________________________________________________________________________ Task 1: Business Understanding *(With some Data Preparation)* *My workflow will be as follows: I will explore the data with some cleaning to get enough insights to formulate questions; then, within every question, I will follow the rest of the steps of the CRISP-DM framework.* Step 1: Basic Exploration with some cleaning *To get familiar with the data and gather insights to formulate questions* > **Boston & Seattle Calendar**
display_side_by_side(b_cal.head(), s_cal.head(), titles=['b_cal', 's_cal'])
*Check the sizes of cols and rows & check Nulls*
print_side_by_side('Boston Cal:', 'Seatle Cal:', b=0)
print_side_by_side('Shape:', b_cal.shape, "Shape:", s_cal.shape)
print_side_by_side("Cols with nulls: ", b_cal.isnull().sum()[b_cal.isnull().sum() > 0].index[0],
                   "Cols with nulls: ", s_cal.isnull().sum()[s_cal.isnull().sum() > 0].index[0])
print_side_by_side("Null prop of price column: ", round(b_cal.price.isnull().sum() / b_cal.shape[0], 2),
                   "Null prop of price column: ", round(s_cal.price.isnull().sum() / s_cal.shape[0], 2))
print_side_by_side("Proportion of False(unit unavailable):", round(b_cal.available[b_cal.available == 'f'].count() / b_cal.shape[0], 2),
                   "Proportion of False(unit unavailable):", round(s_cal.available[s_cal.available == 'f'].count() / s_cal.shape[0], 2))
print_side_by_side("Nulls when units are available: ", b_cal[b_cal['available'] == 't']['price'].isnull().sum(),
                   "Nulls when units are available: ", s_cal[s_cal['available'] == 't']['price'].isnull().sum())
print('\n')
Boston Cal: Seatle Cal: Shape: (1308890 4) Shape: (1393570 4) Cols with nulls: price Cols with nulls: price Null prop of price column: 0.51 Null prop of price column: 0.33 Proportion of False(unit unavailable): 0.51 Proportion of False(unit unavailable): 0.33 Nulls when units are available: 0 Nulls when units are available: 0
*Let's do some cleaning. First, let's convert the `date` column to datetime to ease manipulation and analysis. I will also create a dataframe with separate date parts from the `date` column, to check the time interval over which the data was collected. In addition, let's transform `price` and `available` into numerical values.*
def create_dateparts(df, date_col):
    """
    INPUT
    df       - pandas dataframe
    date_col - string, name of the date column to break down into
               columns of year, month and day name

    OUTPUT
    df - the dataframe with added year/month/day columns derived from date_col
    """
    df[date_col] = pd.to_datetime(df[date_col])
    date_df = pd.DataFrame()
    date_df['year'] = df[date_col].dt.year
    date_df['month'] = df[date_col].dt.month
    date_df['day'] = df[date_col].dt.strftime("%A")
    # date_df['dow'] = df[date_col].dt.day
    df = df.join(date_df)
    return df

#######################
def get_period_df(df):
    """
    INPUT
    df - pandas dataframe

    OUTPUT
    df - a dataframe grouped to show the span of all the entries
    """
    period = pd.DataFrame(df.groupby(['year', 'month'], sort=True)['day'].value_counts())
    period = period.rename(columns={'day': 'count'}, level=0)
    period = period.reset_index().sort_values(by=['year', 'month', 'day']).reset_index(drop=True)
    return period

#############################
def to_float(df, float_cols):
    """
    INPUT
    df         - pandas dataframe
    float_cols - list of columns to transform to float

    OUTPUT
    df - a dataframe with columns of choice transformed to float
    """
    for col in float_cols:
        df[col] = df[col].str.replace('$', "", regex=False)
        df[col] = df[col].str.replace('%', "", regex=False)
        df[col] = df[col].str.replace(',', "", regex=False)
    for col in float_cols:
        df[col] = df[col].astype(float)
    return df

#############################
def bool_nums(df, bool_cols):
    """
    INPUT
    df        - pandas dataframe
    bool_cols - list of columns with 't'/'f' strings

    OUTPUT
    df - a dataframe with columns of choice transformed into binary values
    """
    for col in bool_cols:
        df[col] = df[col].apply(lambda x: 1 if x == 't' else 0)
    df = df.reset_index(drop=True)
    return df
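The date-part extraction in `create_dateparts` boils down to stdlib datetime accessors; a minimal sketch, using the first Boston calendar date as the example value:

```python
from datetime import date

# Break one date into the same parts create_dateparts derives with pandas:
# year, month, and the day name via strftime("%A").
d = date(2016, 9, 6)  # first date in the Boston calendar window
print(d.year, d.month, d.strftime("%A"))  # 2016 9 Tuesday
```

In the notebook the same three accessors run vectorized through the `.dt` accessor over the whole `date` column.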
*Let's take a look at the resulting DataFrames after executing the previous functions. I flipped the Boston calendar to have it start in ascending order like Seattle's.*
b_cal_1 = to_float(b_cal, ['price'])
s_cal_1 = to_float(s_cal, ['price'])
b_cal_1 = create_dateparts(b_cal_1, 'date')
s_cal_1 = create_dateparts(s_cal_1, 'date')
b_cal_1 = bool_nums(b_cal_1, ['available'])
s_cal_1 = bool_nums(s_cal_1, ['available'])
b_cal_1 = b_cal_1.iloc[::-1].reset_index(drop=True)
display_side_by_side(b_cal_1.head(3), s_cal_1.head(3), titles=['b_cal_1', 's_cal_1'])
*Let's take a look at the resulting time intervals for both the Boston and Seattle calendar tables.*
b_period = get_period_df(b_cal_1)
s_period = get_period_df(s_cal_1)
display_side_by_side(b_period.head(1), b_period.tail(1), titles=['Boston Period'])
display_side_by_side(s_period.head(1), s_period.tail(1), titles=['Seatle Period'])
print("Number of unique Listing IDs in Boston Calendar: ", len(b_cal_1.listing_id.unique()))
print("Number of unique Listing IDs in Seatle Calendar: ", len(s_cal_1.listing_id.unique()))
print('\n')
*Seems like they both span a year, through which all the listings are tracked in terms of availability. When we group by year and month, the count equals the number of unique ids because all ids span the same interval. Let's check for any anomalies.*
def check_anomalies(df, col):
    list_ids_not_year_long = []
    for i in sorted(list(df[col].unique())):
        if df[df[col] == i].shape[0] != 365:
            list_ids_not_year_long.append(i)
    print("Entry Ids that don't span 1 year: ", list_ids_not_year_long)

# Boston
check_anomalies(b_cal_1, 'listing_id')
# Seatle
check_anomalies(s_cal_1, 'listing_id')

## check this entry in Boston Calendar
print("Span of the entries for this listing, should be 365: ", b_cal_1[b_cal_1['listing_id'] == 12898806].shape[0])

## 2 years, seems like a duplicate as 730 = 365 * 2
one_or_two = pd.DataFrame(b_cal_1[b_cal_1['listing_id'] == 12898806].groupby(['year', 'month', 'day'])['day'].count()).day.unique()[0]
print("Should be 1: ", one_or_two)

## It indeed is :)
b_cal_1 = b_cal_1.drop_duplicates()
print("Size of anomaly listing, should be = 365: ", b_cal_1[b_cal_1.listing_id == 12898806]['listing_id'].size)
print("After removing duplicates, span of the entries for this listing, should be 365: ", b_cal_1[b_cal_1['listing_id'] == 12898806].shape[0])
print("After removing duplicates, shape is: ", b_cal_1.shape)

# b_cal_1.to_csv('b_cal_1.csv')
# s_cal_1.to_csv('s_cal_1.csv')
_______________________________________________________________________________________________________________________ Comments: [Boston & Seattle Calendar]- The datasets track listing dates, availability and price over a year for every listing id- There are no data entry errors; all nulls are due to the structure of the data (listings that weren't available have no price)- I added 4 cols that contain date parts to aid further analysis and modeling- The Boston calendar dataset spans `365` days from `6th of September'16` to `5th of September'17`, no nulls, with `1308525` rows and `8` cols- The Seattle calendar dataset spans `365` days from `4th of January'16` to `2nd of January'17`, no nulls, with `1393570` rows and `8` cols- Number of unique Listing IDs in Boston Calendar: `3585`- Number of unique Listing IDs in Seattle Calendar: `3818`- The table doesn't document any rentals; it just shows whether a unit is available at a certain time and the price then. _______________________________________________________________________________________________________________________ Step 1: Continue - > **Boston & Seattle Listings**
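The date-part columns mentioned in the comments can be derived with pandas' `.dt` accessor. A minimal sketch on toy data; the names `year`/`month`/`day`/`day_of_week` are illustrative, not necessarily the exact ones used in this notebook:

```python
import pandas as pd

cal = pd.DataFrame({'date': ['2016-09-06', '2016-09-07'], 'price': [100.0, 110.0]})
cal['date'] = pd.to_datetime(cal['date'])

# Split the timestamp into the parts used for grouping later on
cal['year'] = cal['date'].dt.year
cal['month'] = cal['date'].dt.month
cal['day'] = cal['date'].dt.day
cal['day_of_week'] = cal['date'].dt.day_name()

print(cal[['year', 'month', 'day', 'day_of_week']].iloc[0].tolist())
# → [2016, 9, 6, 'Tuesday']
```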
b_list.head(1) #s_list.head(10)
*Check the sizes of cols & rows & check Nulls*
print_side_by_side("Boston listings size :", b_list.shape, "Seatle listings size :", s_list.shape) print_side_by_side("Number of Non-null cols in Boston listings: ", np.sum(b_list.isnull().sum()==0) ,"Number of Non-null cols in Seatle listings: ", np.sum(s_list.isnull().sum()==0)) set_difference = set(b_list.columns) - set(s_list.columns) print("Columns in Boston but not in Seatle: ", set_difference) print('\n')
Boston listings size : (3585 95) Seatle listings size : (3818 92) Number of Non-null cols in Boston listings: 51 Number of Non-null cols in Seatle listings: 47 Columns in Boston but not in Seatle: {'interaction', 'house_rules', 'access'}
*Let's go through the columns of this table, as there are a lot of them, and decide which would be useful, which would be ignored and which would be transformed, based on intuition.* > **to_parts:** (Divide into ranges)>* *maximum_nights* > > > **to_count:** (Provide a count)> * *amenities* > * *host_verifications* > > >**to_dummy:** (Convert into dummy variables)>* *amenities* > > >**to_len_text:** (Provide length of text)>* *name* >* *host_about* >* *summary* >* *description* >* *neighborhood_overview* >* *transit* > >>**to_days:** (Calculate the difference between both columns to have a meaningful value of host_since in days)>* *host_since*>* *last_review*> >>**to_float:** (Transform to float)>* *cleaning_fee* >* *host_response_rate* >* *host_acceptance_rate* >* *extra_people* >* *price* > >> **to_binary:** (Transform to binary)>* *host_has_profile_pic* >* *host_identity_verified* >* *host_is_superhost* >* *is_location_exact* >* *instant_bookable* >* *require_guest_profile_picture* >* *require_guest_phone_verification* > >>**to_drop:** (Columns to be dropped)>**reason: little use:** >* *listing_url, scrape_id, last_scraped, experiences_offered, thumbnail_url, xl_picture_url, medium_url,*>* *host_id, host_url, host_thumbnail_url, host_picture_url, host_total_listings_count, neighbourhood,* >* *neighbourhood_group_cleansed, state, country_code, country, latitude, longitude,*>* *has_availability, calendar_last_scraped, host_name, picture_url, space, first_review,*>> >**reason: Nulls, text, only in Boston:** >* *access, interaction, house_rules*>>>**reason: Nulls, 0 variability or extreme variability:** >* *square_feet* ------------- *90% Null Boston, 97% Null Seattle* >* *weekly_price* ------------- *75% Null Boston, 47% Null Seattle* >* *monthly_price* ------------ *75% Null Boston, 60% Null Seattle* >* *security_deposit* --------- *65% Null Boston, 51% Null Seattle* >* *notes* -------------------- *55% Null Boston, 42% Null Seattle* >* *jurisdiction_names* ------- *100% Null in both* >* *license* ------------------ *100% Null in both* >* *requires_license* --------- *100% Null in both* >* *street* ------------------- *High variability* *Let's write any more functions needed to carry out these suggested changes*
drop_cols = ['listing_url', 'scrape_id', 'last_scraped', 'experiences_offered', 'thumbnail_url','xl_picture_url', 'medium_url', 'host_id', 'host_url', 'host_thumbnail_url', 'host_picture_url', 'host_total_listings_count', 'neighbourhood', 'neighbourhood_group_cleansed','state', 'country_code', 'country', 'latitude', 'longitude', 'has_availability', 'calendar_last_scraped', 'host_name','square_feet', 'weekly_price', 'monthly_price', 'security_deposit', 'notes', 'jurisdiction_names', 'license', 'requires_license', 'street', 'picture_url', 'space','first_review', 'house_rules', 'access', 'interaction'] float_cols = ['cleaning_fee', 'host_response_rate','host_acceptance_rate','host_response_rate', 'host_acceptance_rate','extra_people','price'] len_text_cols = ['name', 'host_about', 'summary', 'description','neighborhood_overview', 'transit'] count_cols = ['amenities', 'host_verifications'] d_col = [ 'amenities'] part_col = ['maximum_nights'] bool_cols = ['host_has_profile_pic', 'host_identity_verified', 'host_is_superhost', 'is_location_exact', 'instant_bookable', 'require_guest_profile_picture' , 'require_guest_phone_verification' ] day_cols = [ 'host_since', 'last_review'] ########################################################################################################################### def to_drop(df, drop_cols): """ INPUT df -pandas dataframe drop_cols -list of columns to drop OUTPUT df - a dataframe with columns of choice dropped """ for col in drop_cols: if col in list(df.columns): df = df.drop(col, axis = 1) else: continue return df ################################# def to_len_text(df, len_text_cols): """ INPUT df -pandas dataframe len_text_cols- list of columns to return the length of text of their values OUTPUT df - a dataframe with columns of choice transformed to len(values) instead of long text """ df_new = df.copy() len_text = [] new_len_text_cols = [] for col in len_text_cols: new_len_text_cols.append("len_"+col) for i in df_new[col]: 
#print(col,i) try: len_text.append(len(i)) except: len_text.append(i) #print('\n'*10) df_new = df_new.drop(col, axis = 1) len_text_col = pd.Series(len_text) len_text_col = len_text_col.reset_index(drop = True) #print(len_text_col) df_new['len_'+col]= len_text_col len_text = [] df_new[new_len_text_cols] = df_new[new_len_text_cols].fillna(0) return df_new, new_len_text_cols ######################### def to_parts(df, part_col): """ INPUT df -pandas dataframe part_col -list of columns to divide into "week or less" and "more than a week" depending on values OUTPUT df - a dataframe with columns of choice transformed to ranges of "week or less" and "more than a week" """ def to_apply(val): if val <= 7: val = '1 Week or less' elif (val >7) and (val<=14): val = '1 week to 2 weeks' elif (val >14) and (val<=30): val = '2 weeks to 1 month' elif (val >30) and (val>=60): val = '1 month to 2 months' elif (val >60) and (val>=90): val = '2 month to 3 months' elif (val >90) and (val>=180): val = '3 month to 6 months' else: val = 'More than 6 months' return val for part in part_col: df[part]= df[part].apply(to_apply) return df ############################ def to_count(df, count_cols): """ INPUT df -pandas dataframe count_cols -list of columns to count the string items within each value OUTPUT df - a dataframe with columns of choice transformed to a count of values """ def to_apply(val): if "{" in val: val = val.replace('}', "").replace('{', "").replace("'","" ).replace('"',"" ).replace("''", "").strip().split(',') elif "[" in val: val = val.replace('[',"" ).replace(']',"" ).replace("'","" ).strip().split(",") return len(val) for col in count_cols: df['count_'+col]= df[col].apply(to_apply) return df ######################## def to_items(df, d_col): """ INPUT df -pandas dataframe d_col -list of columns to divide the values to clean list of items OUTPUT df - a dataframe with columns of choice cleaned and returns the values as lists """ def to_apply(val): if "{" in val: val = 
val.replace('}', "").replace('{', "").replace("'","" ).replace('"',"" ).replace("''", "").lower().split(',') elif "[" in val: val = val.replace('[',"" ).replace(']',"" ).replace("'","" ).lower().split(",") return val def to_apply1(val): new_val = [] if val == 'None': new_val.append(val) for i in list(val): if (i != "") and ('translation' not in i.lower()): new_val.append(i.strip()) return new_val def to_apply2(val): if 'None' in val: return ['none'] elif len((val)) == 0: return ['none'] else: return list(val) for col in d_col: df[col]= df[col].apply(to_apply) df[col]= df[col].apply(to_apply1) df[col]= df[col].apply(to_apply2) return df def items_counter(df, d_col): """ INPUT df -pandas dataframe count_col -list of columns to with lists as values to count OUTPUT all_strings - a dictionary with the count of every value every list within every series """ all_strings= {} def to_apply(val): for i in val: if i in list(all_strings.keys()): all_strings[i]+=1 else: all_strings[i]=1 df[d_col].apply(to_apply) return all_strings ################################### def to_days(df, day_cols, na_date): """ INPUT df -pandas dataframe day_cols -list of columns to divide the values to clean list of items OUTPUT df - a dataframe with columns of choice cleaned and returns the values as lists """ #Since Boston lisitngs span from September'16 to september'17, we can impute using the month of march'16 #Since Seatle lisitngs span from January'16 to January'17, we can impute using the month of june'16 df = df.copy() df[[day_cols[0], day_cols[1]]]=df[[day_cols[0], day_cols[1]]].apply(pd.to_datetime) df = df.dropna(subset= [day_cols[0]], how ='any', axis = 0) df[day_cols[1]] = df[day_cols[1]].fillna(pd.to_datetime(na_date)) df[day_cols[0]]= (df[day_cols[1]] - df[day_cols[0]]).apply(lambda x: round(x.value/(864*1e11)),2) df= df.drop(day_cols[1], axis =1 ) df = df.reset_index(drop= True) return df 
########################################################################################################################### def applier(df1,df2,drop = True, float_=True, len_text= True, count= True, items = True, parts = True , count_items = True, bool_num = True, days = True): """ INPUT df1,df2 - 2 pandas dataframes drop,float_,len_text, count, parts, date_time - Boolean values that corresponds to previosuly defined functions OUTPUT df - a clean dataframe that has undergone previously defined functions according to the boolean prameters passed """ while drop: df1 = to_drop(df1, drop_cols) df2 =to_drop(df2, drop_cols) break while float_: df1 =to_float(df1, float_cols) df2 =to_float(df2, float_cols) break while len_text: df1, nltc = to_len_text(df1, len_text_cols) df2, nltc = to_len_text(df2, len_text_cols) break while parts: df1 = to_parts(df1, part_col) df2 = to_parts(df2, part_col) break while count: df1 = to_count(df1, count_cols) df2 = to_count(df2, count_cols) df1 = df1.drop('host_verifications', axis =1 ) df2 = df2.drop('host_verifications', axis =1 ) break while items: df1 = to_items(df1, d_col) df2 = to_items(df2, d_col) break while count_items: b_amens_count = pd.Series(items_counter(df1,'amenities')).reset_index().rename(columns = {'index':'amenities', 0:'count'}).sort_values(by='count', ascending =False).reset_index(drop =True) s_amens_count = pd.Series(items_counter(df2, 'amenities')).reset_index().rename(columns = {'index':'amenities', 0:'count'}).sort_values(by='count', ascending =False).reset_index(drop =True) a_counts = [b_amens_count,s_amens_count] break while bool_num: df1 = bool_nums(df1, bool_cols) df2 = bool_nums(df2, bool_cols) break while days: df1 = to_days(df1, day_cols, '2016-04-1') df2 = to_days(df2, day_cols, '2016-06-1') break if count_items: return df1, df2 ,a_counts else: return df1,df2 b_list_1, s_list_1, a_counts = applier(b_list, s_list)
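A note on the binning helper inside `to_parts`: the comparisons for the longer ranges appear inverted (`(val > 30) and (val >= 60)` is only true from 60 up, so stays between 31 and 60 fall through to the final `else`). A corrected sketch of that helper, with the same labels as assumptions of my own:

```python
def to_apply(val):
    # Bin a number of nights into labelled ranges; upper bounds are inclusive
    if val <= 7:
        return '1 Week or less'
    elif val <= 14:
        return '1 week to 2 weeks'
    elif val <= 30:
        return '2 weeks to 1 month'
    elif val <= 60:
        return '1 month to 2 months'
    elif val <= 90:
        return '2 months to 3 months'
    elif val <= 180:
        return '3 months to 6 months'
    return 'More than 6 months'

print(to_apply(45))  # a value the original comparisons would mislabel
# → 1 month to 2 months
```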
*Amenities seem like a good indicator for price as a response variable, so let's have them dummified* **This function takes forever (~6 mins), so it's commented out and I use the resulting dataframes, which were written to CSV files**
# %%time # def to_dummy(df1,df2, col1, cols_ref1,cols_ref2): # def construct(df,col, cols_ref): # count = 0 # for val2 in df[col]: # lister = [] # for val1 in cols_ref[col]: # if val1 in val2: # lister.append(1) # else: # lister.append(0) # cols_ref = cols_ref.join(pd.Series(lister, name = count)) # count+=1 # cols_ref = cols_ref.drop('count', axis = 1).transpose() # cols_ref.columns = list(cols_ref.iloc[0,:]) # return cols_ref # b_amens_1 =construct(df1, col1,cols_ref1) # s_amens_1 =construct(df2, col1,cols_ref2) # b_amens_1 = b_amens_1.drop('none', axis = 1) #.drop(0,axis=0).reset_index(drop= True) # b_amens_1 = b_amens_1.iloc[1:,:] # b_amens_1.columns = ["{}_{}".format(col1,col) for col in b_amens_1.columns] # s_amens_1 = s_amens_1.iloc[1:,:] # s_amens_1 = s_amens_1.drop('none', axis = 1) # s_amens_1.columns = ["{}_{}".format(col1,col) for col in s_amens_1.columns] # b_dummies = b_amens_1.reset_index(drop =True) # s_dummies = s_amens_1.reset_index(drop =True) # df1 = df1.join(b_dummies) # df2 = df2.join(s_dummies) # df1 = df1.drop([col1], axis = 1) # df2 = df2.drop([col1], axis = 1) # return b_dummies, s_dummies, df1, df2 # b_d, s_d,b_list_d, s_list_d = to_dummy(b_list_1, s_list_1, 'amenities', # b_a_counts, s_a_counts) # b_list_d.to_csv('b_list_d.csv') # s_list_d.to_csv('s_list_d.csv') b_list_d = pd.read_csv('b_list_d.csv', index_col = 0) s_list_d = pd.read_csv('s_list_d.csv', index_col = 0)
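For reference, a much faster way to dummify a list-valued column like `amenities` is `explode` plus `get_dummies`, aggregated back per row. A sketch under the assumption that each value is already a clean Python list (as produced by `to_items` above):

```python
import pandas as pd

# Toy stand-in for the cleaned amenities column
df = pd.DataFrame({'amenities': [['wifi', 'kitchen'], ['wifi'], ['none']]})

# One row per (listing, amenity), then one indicator column per amenity,
# collapsed back to one row per listing with the original index
dummies = (pd.get_dummies(df['amenities'].explode())
             .groupby(level=0).max()
             .add_prefix('amenities_'))

print(sorted(dummies.columns.tolist()))
```

This replaces the nested loops with vectorised pandas operations, which is where most of the 6 minutes goes.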
*Check the nulls again*
df1= (b_list_d.isnull().sum()[b_list_d.isnull().sum()>0]/b_list_d.shape[0]*100).reset_index().rename(columns ={'index':'col_name',0:'nulls_proportion'}) df2 = (s_list_d.isnull().sum()[s_list_d.isnull().sum()>0]/s_list_d.shape[0]*100).reset_index().rename(columns ={'index':'col_name',0:'nulls_proportion'}) display_side_by_side(df1,df2, titles =['b_list_d_Nulls','s_list_d_Nulls' ])
_______________________________________________________________________________________________________________________ Comments: [Boston & Seattle Listings]- Boston listings size: `3585`, `95` / Seattle listings size: `3818`, `92`- Number of Non-null cols in Boston listings: `51`, around half- Number of Non-null cols in Seattle listings: `47`, around half- I wrote a series of functions that perform some basic cleaning to ease analysis, with the option to switch off any of them depending on the future requirements of the analyses; some of what was done:>- Columns with relatively high numbers of nulls, or with little to no foreseeable use, were removed >- Took the character length of the values in some of the cols with long text entries and many unique values; the length of some fields may be somewhat correlated with price.>- Columns with dates were transformed into Datetime, and numerical values stored as text into floats>- Columns `amenities` and `host_verifications` were taken as counts; `amenities` was then dummified, for its seeming importance. >- The `maximum_nights` column seems to lack some integrity, so I divided it into time periods >- All columns with only 't' or 'f' values were converted into binary data. >- The difference between `host_since` and `last_review` was computed in days into `host_since`- **After the basic cleaning and the dummification of `amenities`:** Boston listings size: `3585`, `98` / Seattle listings size: `3818`, `98`. There are still nulls to deal with in case of modeling, but that depends on the requirements of each question. _______________________________________________________________________________________________________________________ Step 1: Continue - > **Boston & Seattle Reviews**
#b_rev.head(3) s_rev.head(3)
*Check the sizes of cols & rows & check Nulls*
print_side_by_side("Boston reviews size:", b_rev.shape,"Seatle reviews size:", s_rev.shape) print_side_by_side("No. of unique listing ids:", b_rev.listing_id.unique().size,"No. of unique listing ids:", s_rev.listing_id.unique().size) print_side_by_side("Number of Non-null cols in Boston Reviews:", np.sum(b_rev.isnull().sum()==0), "Number of Non-null cols in Seatle Reviews:", np.sum(s_rev.isnull().sum()==0)) print_side_by_side("Null cols % in Boston:", (b_rev.isnull().sum()[b_rev.isnull().sum()>0]/b_rev.shape[0]*100).to_string(), "Null cols % in Seatle:", (s_rev.isnull().sum()[s_rev.isnull().sum()>0]/s_rev.shape[0]*100).to_string()) print_side_by_side("Null cols no. in Boston:",(b_rev.isnull().sum()[b_rev.isnull().sum()>0]).to_string(), "Null cols no. in Seatle:", (s_rev.isnull().sum()[s_rev.isnull().sum()>0]).to_string()) print('\n')
Boston reviews size: (68275 6) Seatle reviews size: (84849 6) No. of unique listing ids: 2829 No. of unique listing ids: 3191 Number of Non-null cols in Boston Reviews: 5 Number of Non-null cols in Seatle Reviews: 5 Null cols % in Boston: comments 0.077627 Null cols % in Seatle: comments 0.021214 Null cols no. in Boston: comments 53 Null cols no. in Seatle: comments 18
**To extract analytical insights from the reviews entries, they ought to be transformed from text to numerical scores; to do so I will follow some steps:** *1) Find all the words -excluding any non-alphanumeric characters- in each Dataset* **As the function takes 4 mins to execute, I commented it out and passed the resulting word lists as dfs to CSV files that were added to the project instead of running it in the notebook again.**
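The word-extraction rule used in step 1 (lowercase, strip non-alphanumeric characters, keep only tokens longer than 3 characters) can be sketched compactly as a standalone helper:

```python
def extract_words(text):
    # Lowercase each token, strip non-alphanumeric characters,
    # and keep only tokens that were longer than 3 characters
    cleaned = (''.join(ch for ch in tok.lower() if ch.isalnum())
               for tok in str(text).split() if len(tok) > 3)
    return {w for w in cleaned if w}

print(sorted(extract_words("Great place, SUPER clean -- we loved it!")))
# → ['clean', 'great', 'loved', 'place', 'super']
```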
#%%time # def get_words(df, col): # """ # INPUT # df -pandas dataframe # col -column of which the values are text # # OUTPUT # df - a dataframe with a single colum of all the words # """ # all_strings = [] # for val in df[col]: # try: # val_strings = [''.join(filter(str.isalnum, i.lower())) for i in val.split() if len(i)>3] # except: # continue # for word in val_strings: # if word not in all_strings: # all_strings.append(word) # val_strings = [] # return pd.Series(all_strings).to_frame().reset_index(drop = True).rename(columns = {0:'words'}) # boston_words = get_words(b_rev, 'comments') # seatle_words = get_words(s_rev, 'comments') # boston_words.to_csv('boston_words.csv') # seatle_words.to_csv('seatle_words.csv') boston_words = pd.read_csv('drafts/boston_words.csv', index_col= 0) seatle_words = pd.read_csv('drafts/seatle_words.csv', index_col= 0) print("Boston words no.: ", boston_words.shape[0]) print("Seatle words no.: ", seatle_words.shape[0]) display_side_by_side(boston_words.head(5), seatle_words.head(5), titles = [ 'Boston', 'Seatle'])
Boston words no.: 54261 Seatle words no.: 50627
*2) Read in positive and negative English word lists that are used for sentiment analysis* Citation:* Using this resource https://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html#lexicon I downloaded a list of words with positive and negative connotations used for sentiment analysis* *Based on the book*: > Sentiment Analysis and Opinion Mining (Introduction and Survey), Morgan & Claypool, May 2012.
positive_words = pd.read_csv('drafts/positive-words.txt', sep = '\t',encoding="ISO-8859-1") negative_words = pd.read_csv('drafts/negative-words.txt', sep = '\t',encoding="ISO-8859-1") positive_words = positive_words.iloc[29:,:].reset_index(drop = True).rename(columns = {';;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;':'words'}) negative_words = negative_words.iloc[31:,:].reset_index(drop = True).rename(columns = {';;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;':'words'}) b_pos = np.intersect1d(np.array(boston_words['words'].astype(str)), np.array(positive_words['words']),assume_unique=True) b_neg = np.intersect1d(np.array(boston_words['words'].astype(str)), np.array(negative_words['words']),assume_unique=True) s_pos = np.intersect1d(np.array(seatle_words['words'].astype(str)), np.array(positive_words['words']),assume_unique=True) s_neg = np.intersect1d(np.array(seatle_words['words'].astype(str)), np.array(negative_words['words']),assume_unique=True) print_side_by_side('Positive words count: ', positive_words.shape[0] ,'Negative words count: ', negative_words.shape[0]) print_side_by_side("No. of positive words in Boston Reviews: ", len(b_pos) ,"No. of negative words in Boston Reviews: ", len(b_neg)) print_side_by_side("No. of positive words in Seatle Reviews: ", len(s_pos) ,"No. of negative words in Seatle Reviews: ", len(s_neg)) print('\n')
Positive words count: 2005 Negative words count: 4781 No. of positive words in Boston Reviews: 1147 No. of negative words in Boston Reviews: 1507 No. of positive words in Seatle Reviews: 1235 No. of negative words in Seatle Reviews: 1556
*3) Let's translate the reviews from other languages to English* *Let's start by dropping the nulls, checking the language of the reviews using `langdetect`, and preparing the non-English `comments` to be translated*
##Dependency googletrans-4.0.0rc1 ##langdetect # b_rev = b_rev.dropna(subset=['comments'], how = 'any', axis = 0) # s_rev = s_rev.dropna(subset=['comments'], how = 'any', axis = 0) # %%time # b_rev_t = b_rev.copy() # s_rev_t = s_rev.copy() # from langdetect import detect # def lang_check(val): # try: # return detect(val) # except: # return val # b_rev_t['review_lang']=b_rev['comments'].apply(lang_check) # s_rev_t['review_lang']=s_rev['comments'].apply(lang_check) # b_rev_t.to_csv('b_rev_t.csv') # s_rev_t.to_csv('s_rev_t.csv') # b_rev_t = pd.read_csv('b_rev_t.csv', index_col = 0) #s_rev_t = pd.read_csv('s_rev_t.csv', index_col = 0) # print('Proportion of non English reviews in Boston: ' ,b_rev_t[b_rev_t['review_lang']!= 'en'].shape[0]/b_rev_t.shape[0]) # print('Proportion of non English reviews in Seattle: ',s_rev_t[s_rev_t['review_lang']!= 'en'].shape[0]/s_rev_t.shape[0]) print(f"""Proportion of non English reviews in Boston: 0.05436662660138958 Proportion of non English reviews in Seattle: 0.012424703233487757""") # b_to_trans =b_rev_t[b_rev_t['review_lang']!= 'en'] # s_to_trans =s_rev_t[s_rev_t['review_lang']!= 'en'] # b_to_trans['comments'] = b_to_trans['comments'].map(lambda val : str([re.sub(r"[^a-zA-Z0-9]+", '. ', k) for k in val.split("\n")]).replace('['," ").replace(']',"").replace("'","")) # s_to_trans['comments'] = s_to_trans['comments'].map(lambda val : str([re.sub(r"[^a-zA-Z0-9]+", '. ', k) for k in val.split("\n")]).replace('['," ").replace(']',"").replace("'",""))
Proportion of non English reviews in Boston: 0.05436662660138958 Proportion of non English reviews in Seattle: 0.012424703233487757
*Since the googletrans library is extremely unstable, I break the non-English reviews in Boston down into 4 dataframes*
# def trans_slicer(df,df1 = 0,df2 = 0,df3 = 0, df4 = 0): # dfs=[] # for i in [df1,df2,df3,df4]: # i = df[0:1000] # df = df.drop(index = i.index.values,axis = 0).reset_index(drop= True) # dfs.append(i.reset_index(drop =True)) # # df = df.drop(index = range(0,df.shape[0],1),axis = 0).reset_index(drop= True) # return dfs # df1, df2, df3, df4 = trans_slicer(b_to_trans) # %%time # import re # import time # import googletrans # import httpx # from googletrans import Translator # timeout = httpx.Timeout(10) # 5 seconds timeout # translator = Translator(timeout=timeout) # def text_trans(val): # vals = translator.translate(val, dest='en').text # time.sleep(10) # return vals # ############################################################ # df1['t_comments'] = df2['comments'].apply(text_trans) # df1.to_csv('df2.csv') # df2['t_comments'] = df2['comments'].apply(text_trans) # df2.to_csv('df2.csv') # df3['t_comments'] = df3['comments'].apply(text_trans) # df3.to_csv('df3.csv') # df4['t_comments'] = df4['comments'].apply(text_trans) # df4.to_csv('df4.csv') # #4########################################################### # s_to_trans['t_comments'] = s_to_trans['comments'].apply(text_trans) # s_to_trans.to_csv('s_translate.csv') # dfs = df1.append(df2) # dfs = dfs.append(df3) # dfs = dfs.append(df4) # dfs.index = b_to_trans.index # b_to_trans = dfs # b_to_trans['comments'] = b_to_trans['t_comments'] # b_to_trans = b_to_trans.drop(columns =['t_comments'],axis = 1) #b_rev_t = b_rev_t.drop(index =b_to_trans.index,axis = 0) #b_rev_t = b_rev_t.append(b_to_trans) #b_rev_t = b_rev_t.sort_index(axis = 0).reset_index(drop= True) # b_rev_t['comments'] = b_rev_t['comments'].apply(lambda x: x.replace('.',' ')) # b_rev_t.to_csv('b_rev_translated.csv') # s_to_trans['comments'] = s_to_trans['t_comments'] # s_to_trans = s_to_trans.drop(columns =['t_comments'],axis = 1) # s_rev_t = s_rev_t.drop(index =s_to_trans.index,axis = 0) # s_rev_t = s_rev_t.append(s_to_trans) # s_rev_t = 
s_rev_t.sort_index(axis = 0).reset_index(drop= True) # s_rev_t['comments'] = s_rev_t['comments'].apply(lambda x: x.replace('.',' ')) # s_rev_t.to_csv('s_rev_translated.csv')
*Since googletrans takes around 3 hours to translate 1000 entries, that took some time; here are the resulting DataFrames*
b_rev_trans = pd.read_csv('b_rev_translated.csv', index_col =0) s_rev_trans = pd.read_csv('s_rev_translated.csv', index_col =0)
*4) Add a scores column, using the previous resource as a reference, to evaluate the score of each review*
# %%time # def create_scores(df,col, df_pos_array, df_neg_array): # """ # INPUT # df -pandas dataframe # col -column with text reviews to be transformed in to positive and negative scores # pos_array- array with reference positive words for the passed df # neg_array- array with reference negative words for the passed df # OUTPUT # df - a dataframe with a score column containing positive and negative scores" # """ # def get_score(val): # val_strings = [''.join(filter(str.isalnum, i.lower())) for i in str(val).split() if len(i)>3] # pos_score = len(np.intersect1d(np.array(val_strings).astype(object), df_pos_array, assume_unique =True)) # neg_score = len(np.intersect1d(np.array(val_strings).astype(object), df_neg_array, assume_unique =True)) # return pos_score - neg_score +1 # df['score']= df[col].apply(get_score) # return df # b_rev_score = create_scores(b_rev_trans, 'comments', b_pos, b_neg) # s_rev_score = create_scores(s_rev_trans, 'comments', s_pos, s_neg) # b_rev_score.to_csv('b_rev_score.csv') # s_rev_score.to_csv('s_rev_score.csv')
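The scoring rule in `create_scores` (count of distinct positive-lexicon matches, minus distinct negative-lexicon matches, plus 1) can be reproduced with plain sets. `POS` and `NEG` below are tiny stand-ins for the lexicon arrays `b_pos`/`b_neg`:

```python
POS = {'great', 'clean', 'comfortable'}   # stand-in for the positive lexicon
NEG = {'dirty', 'noisy'}                  # stand-in for the negative lexicon

def score(text):
    # Same tokenisation as the word extraction: lowercase, alphanumeric only,
    # tokens longer than 3 characters; then distinct pos minus distinct neg, plus 1
    words = {''.join(ch for ch in tok.lower() if ch.isalnum())
             for tok in str(text).split() if len(tok) > 3}
    return len(words & POS) - len(words & NEG) + 1

print(score("Great clean flat, but quite noisy at night"))  # → 2
```

The `+1` offset means a review with no lexicon matches at all scores 1, not 0.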
*As this function takes a while as well, let's write the results to CSV files, read the frames back in, and then show some samples.*
b_rev_score = pd.read_csv('b_rev_score.csv', index_col = 0) s_rev_score = pd.read_csv('s_rev_score.csv', index_col = 0) sub_b_rev = b_rev_score.iloc[:,[5,6,7]] sub_s_rev = s_rev_score.iloc[:,[5,6,7]] display_side_by_side(sub_b_rev.head(3), sub_s_rev.head(3), titles= ['Boston Reviews', 'Seatle_reviews'])
_______________________________________________________________________________________________________________________ Comments: [Boston & Seattle Reviews]- Boston reviews size: (68275, 6)- Seattle reviews size: (84849, 6)- Nulls are only in the `comments` column in both Datasets:- Null percentage in Boston Reviews: 0.08%- Null percentage in Seattle Reviews: 0.02%- I added a score column to both tables to reflect positive or negative reviews numerically with the aid of an external resource. _______________________________________________________________________________________________________________________ Step 2: Formulating Questions*After going through the data, I think these questions would be of interest:* *Q: How can you compare the reviews in both cities?* *Q: What aspects of a listing influence the price in both cities?* *Q: How can we predict the price?* *Q: How do prices vary through the year in both cities? When are the season and off-season in both cities?* _______________________________________________________________________________________________________________________ *Q: How can you compare the reviews in both cities?* *Let's attempt to statistically describe the reviews in both cities*
print_side_by_side(' Boston: ', ' Seattle: ', b = 0) print_side_by_side(' Maximum score : ', b_rev_score.iloc[b_rev_score.score.idxmax()].score, ' Maximum Score : ', s_rev_score.iloc[s_rev_score.score.idxmax()].score) print_side_by_side(' Minimum Score : ', b_rev_score.iloc[b_rev_score.score.idxmin()].score, ' Minimum Score : ', s_rev_score.iloc[s_rev_score.score.idxmin()].score) print_side_by_side(' Most common score: ', b_rev_score['score'].mode().to_string(), ' Most common score: ', s_rev_score['score'].mode().to_string()) print_side_by_side(' Mean score: ', round(b_rev_score['score'].mean(),2) ,' Mean score: ', round(s_rev_score['score'].mean(),2)) print_side_by_side(' Median score: ',round( b_rev_score['score'].median(),2), ' Median score: ', s_rev_score['score'].median()) print_side_by_side(' Standard deviation: ', round(b_rev_score['score'].std(),2) ,' Standard deviation: ', round(s_rev_score['score'].std(),2)) # print_side_by_side(' Z score of -2: ', round(b_rev_score['score'].mean()-2*round(b_rev_score['score'].std(),2),1) # ,' Z score of -2: ', round(s_rev_score['score'].mean()-2*round(s_rev_score['score'].std(),2)),1) # print('Score: ', b_rev_score.iloc[b_rev_score.score.idxmax()].score) # b_rev_score.iloc[b_rev_score.score.idxmax()].comments plt.figure(figsize = (14,8)) plt.subplot(2,1,1) plt.title('Boston Reviews', fontsize = 18) sns.kdeplot(b_rev_score.score, bw_adjust=2) plt.axvline(x= b_rev_score['score'].mean(), color = 'orange', alpha = 0.6) plt.axvline(x= b_rev_score['score'].median(), color = 'gray', alpha = 0.6) plt.xlim(-15,30) plt.xlabel('', fontsize = 14) plt.ylabel('Count', fontsize = 14) plt.legend(['Scores','mean', 'median']) order = np.arange(-15,31,3) plt.xticks(order,order, fontsize = 12) plt.subplot(2,1,2) plt.title('Seattle Reviews', fontsize = 18) sns.kdeplot(s_rev_score.score, bw_adjust=2) plt.axvline(x= s_rev_score['score'].mean(), color = 'orange', alpha = 0.6) plt.axvline(x= s_rev_score['score'].median(), color = 'gray', alpha = 
0.6) plt.xlim(-15,30) plt.xlabel('Scores', fontsize = 18) plt.ylabel('Count', fontsize = 14) plt.legend(['Scores','mean','median']) plt.xticks(order,order, fontsize = 12) plt.tight_layout();
Boston: Seattle: Maximum score : 34 Maximum Score : 39 Minimum Score : -15 Minimum Score : -15 Most common score: 0 5 Most common score: 0 5 Mean score: 5.84 Mean score: 6.55 Median score: 5.0 Median score: 6.0 Standard deviation: 3.19 Standard deviation: 3.28
>* **The scores clearly follow a normal distribution in both cities, with close standard deviations**>* **The mean score of Seattle (6.55) is a bit higher than Boston's (5.84)**>* **The median score in both cities is a bit less than the mean, which indicates a slight right skew** *Let's take a look at the boxplots to get more robust insights*
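The right skew noted above (mean above median) can also be confirmed numerically; a sketch on toy data using pandas' built-in sample skewness:

```python
import pandas as pd

# Toy scores with a long right tail, mimicking the review-score distribution
scores = pd.Series([3, 4, 5, 5, 6, 7, 12, 15])

mean, median = scores.mean(), scores.median()
print(mean > median)      # mean is pulled right by the tail
print(scores.skew() > 0)  # positive sample skewness → right skew
```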
plt.figure(figsize=(15, 6))

plt.subplot(2, 1, 1)
plt.title('Boston Reviews', fontsize=18)
sns.boxplot(data=b_rev_score, x=b_rev_score.score)
plt.axvline(x=b_rev_score['score'].mean(), color='orange', alpha=0.6)
# plt.axvline(x=b_rev_score['score'].mean() + 2 * b_rev_score['score'].std(), color='red', alpha=0.6)
# plt.axvline(x=b_rev_score['score'].mean() - 2 * b_rev_score['score'].std(), color='red', alpha=0.6)
plt.xlim(-3, 15)
plt.ylabel('Count', fontsize=16)
order = np.arange(-3, 15, 1)
plt.xticks(order, order, fontsize=13)
plt.xlabel('')

plt.subplot(2, 1, 2)
plt.title('Seattle Reviews', fontsize=18)
sns.boxplot(data=s_rev_score, x=s_rev_score.score)
plt.axvline(x=s_rev_score['score'].mean(), color='orange', alpha=0.6)
# plt.axvline(x=s_rev_score['score'].mean() + 2 * s_rev_score['score'].std(), color='red', alpha=0.6)
# plt.axvline(x=s_rev_score['score'].mean() - 2 * s_rev_score['score'].std(), color='red', alpha=0.6)
plt.xlim(-3, 15)
plt.xlabel('Scores', fontsize=18)
plt.ylabel('Count', fontsize=16)
plt.xticks(order, order, fontsize=13)
plt.tight_layout();
>* **50% of the scores in both cities lie between 4 and 8**
>* **The whiskers (1.5 × IQR beyond the quartiles) in both cities span roughly -2 to 14**

*Finally, what's the proportion of negative scores in both cities?*
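The quartiles and whisker limits read off the boxplots can be reproduced with `np.percentile`. A minimal sketch on synthetic scores (the sample is illustrative — on the real data the input would be `b_rev_score['score']` or `s_rev_score['score']`):

```python
import numpy as np

# Synthetic stand-in for a city's review scores.
rng = np.random.default_rng(1)
scores = rng.normal(6, 3.2, 5000)

q1, q3 = np.percentile(scores, [25, 75])
iqr = q3 - q1
# sns.boxplot draws its whiskers at 1.5 * IQR beyond the quartiles.
lower_fence, upper_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
print(round(q1, 1), round(q3, 1), round(lower_fence, 1), round(upper_fence, 1))
```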
# grade = 1 for a positive score, 0 otherwise (so the "negative" bucket
# below also counts reviews that scored exactly 0)
b_rev_score['grade'] = b_rev_score['score'].apply(lambda x: 1 if x > 0 else 0)
s_rev_score['grade'] = s_rev_score['score'].apply(lambda x: 1 if x > 0 else 0)

print_side_by_side('Boston: ', 'Seattle: ', b=0)
print_side_by_side('Negative reviews proportion: ',
                   round(b_rev_score['grade'][b_rev_score.grade == 0].count() / b_rev_score.shape[0], 3),
                   'Negative reviews proportion: ',
                   round(s_rev_score['grade'][s_rev_score.grade == 0].count() / s_rev_score.shape[0], 3))
Boston:                                 Seattle: 
Negative reviews proportion:  0.012     Negative reviews proportion:  0.005
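Whether the gap between 1.2% and 0.5% is statistically meaningful can be probed with a two-proportion z-test. This is a hedged sketch with hypothetical counts — the totals below are made up; the real counts come from `b_rev_score` and `s_rev_score`:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical counts chosen to match the observed proportions (~0.012, ~0.005).
neg_b, n_b = 82, 6800   # Boston: negative reviews, total reviews
neg_s, n_s = 42, 8400   # Seattle: negative reviews, total reviews

p_b, p_s = neg_b / n_b, neg_s / n_s
p_pool = (neg_b + neg_s) / (n_b + n_s)          # pooled proportion under H0
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_b + 1 / n_s))
z = (p_b - p_s) / se
p_value = 2 * norm.sf(abs(z))                   # two-sided p-value
print('z =', round(z, 2), ' p =', round(p_value, 5))
```

With counts of this magnitude the difference comes out clearly significant; the conclusion on the real data depends on the actual review totals.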
Appendix

Hao Lu, 04/04/2020

In this notebook, we simulated EEG data with the method described in the paper by Bharadwaj and Shinn-Cunningham (2014) and analyzed the data with the toolbox proposed in the same paper. The function was modified so that the values of the variables within the function can be extracted and studied.

Reference: Bharadwaj, H. M., & Shinn-Cunningham, B. G. (2014). Rapid acquisition of auditory subcortical steady state responses using multichannel recordings. Clinical Neurophysiology, 125(9), 1878-1888.
# import packages
import numpy as np
import matplotlib.pyplot as plt
import pickle
import random

from scipy import linalg
from anlffr import spectral, dpss

sfreq = 10000  # sampling frequency (Hz)

random.seed(2020)
phase_list = [random.uniform(-np.pi, np.pi) for i in range(32)]
MIT
mtcplv_discrepancy.ipynb
HaoLu-a/cPCA-erratum
The phase of the signal at each of the 32 channels was randomly sampled from a uniform distribution over [-π, π)
plt.plot(phase_list)
plt.xlabel('Number of Channel')
plt.ylabel('Phase of signal')
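The steady-state analysis in the referenced paper rests on phase locking across trials. As a minimal illustration of why uniformly random phases like those above average toward zero, here is a sketch of the phase-locking value (PLV) — the magnitude of the mean unit phasor. This is a simplification for intuition, not the actual multitaper implementation in `anlffr.spectral`:

```python
import numpy as np

def plv(phases):
    # PLV = |mean over trials of exp(i * phase)|: near 1 when phases
    # are consistent across trials, near 0 when they are uniform random.
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

rng = np.random.default_rng(2020)
locked = rng.normal(0.5, 0.1, 200)           # tightly clustered phases
scattered = rng.uniform(-np.pi, np.pi, 200)  # uniform random phases
print(round(plv(locked), 3), round(plv(scattered), 3))
```

For N independent uniform-phase trials the expected PLV scales roughly as 1/√N, which is the noise floor the toolbox's estimators have to beat.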