Create train/test split
```python
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=.33, random_state=42)
print("%d training samples" % x_train.shape[0])
print("%d test samples" % x_test.shape[0])
```
```
30021 training samples
14787 test samples
```
*Source: examples/02_advanced_usage.ipynb from gavehan/openTSNE (BSD-3-Clause).*
## Create a t-SNE embedding

Like in the *simple_usage* notebook, we will run the standard t-SNE optimization. Much can be done to better preserve global structure and improve embedding quality; please refer to the *preserving_global_structure* notebook for some examples.

**1. Compute the affinities between data points**
```python
%%time
affinities_train = affinity.PerplexityBasedNN(
    x_train,
    perplexity=30,
    metric="euclidean",
    n_jobs=8,
    random_state=42,
    verbose=True,
)
```
```
===> Finding 90 nearest neighbors using Annoy approximate search using euclidean distance...
   --> Time elapsed: 3.78 seconds
===> Calculating affinity matrix...
   --> Time elapsed: 0.43 seconds
CPU times: user 19.3 s, sys: 794 ms, total: 20.1 s
Wall time: 4.22 s
```
**2. Generate initial coordinates for our embedding**
```python
%time init_train = initialization.pca(x_train, random_state=42)
```
```
CPU times: user 448 ms, sys: 88.3 ms, total: 536 ms
Wall time: 86.9 ms
```
**3. Construct the `TSNEEmbedding` object**
```python
embedding_train = TSNEEmbedding(
    init_train,
    affinities_train,
    negative_gradient_method="fft",
    n_jobs=8,
    verbose=True,
)
```
**4. Optimize embedding**

1. Early exaggeration phase
```python
%time embedding_train_1 = embedding_train.optimize(n_iter=250, exaggeration=12, momentum=0.5)

utils.plot(embedding_train_1, y_train, colors=utils.MACOSKO_COLORS)
```
2. Regular optimization
```python
%time embedding_train_2 = embedding_train_1.optimize(n_iter=500, momentum=0.8)

utils.plot(embedding_train_2, y_train, colors=utils.MACOSKO_COLORS)
```
Transform
```python
%%time
embedding_test = embedding_train_2.prepare_partial(
    x_test,
    initialization="median",
    k=25,
    perplexity=5,
)

utils.plot(embedding_test, y_test, colors=utils.MACOSKO_COLORS)

%time embedding_test_1 = embedding_test.optimize(n_iter=250, learning_rate=0.1, momentum=0.8)

utils.plot(embedding_test_1, y_test, colors=utils.MACOSKO_COLORS)
```
**Together**

We superimpose the transformed points onto the original embedding, with larger opacity.
```python
fig, ax = plt.subplots(figsize=(8, 8))
utils.plot(embedding_train_2, y_train, colors=utils.MACOSKO_COLORS, alpha=0.25, ax=ax)
utils.plot(embedding_test_1, y_test, colors=utils.MACOSKO_COLORS, alpha=0.75, ax=ax)
```
# Practice: BiLSTM for PoS Tagging

*This notebook is based on an [open-source implementation](https://github.com/bentrevett/pytorch-pos-tagging) of PoS tagging in PyTorch.*

## Introduction

In this series we'll be building a machine learning model that produces an output for every element in an input sequence, using PyTorch and TorchText. Specifically, we will be inputting a sequence of text, and the model will output a part-of-speech (PoS) tag for each token in the input text. The same setup can also be used for named entity recognition (NER), where the output for each token is the type of entity, if any, that the token represents.

In this notebook, we'll be implementing a multi-layer bi-directional LSTM (BiLSTM) to predict PoS tags using the Universal Dependencies English Web Treebank (UDPOS) dataset.

## Preparing Data

First, let's import the necessary Python modules.
```python
import torch
import torch.nn as nn
import torch.optim as optim

from torchtext.legacy import data
from torchtext.legacy import datasets

import spacy
import numpy as np

import time
import random
```
*Source: week05_transformer_pos_tagging/week05_bilstm_for_pos_tagging.ipynb from JustM57/natural-language-processing (MIT).*
Next, we'll set the random seeds for reproducibility.
```python
SEED = 1234

random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
```
One of the key parts of TorchText is the `Field`. The `Field` handles how your dataset is processed.

Our `TEXT` field handles how the text that we need to tag is dealt with. All we do here is set `lower = True`, which lowercases all of the text.

Next we'll define the `Fields` for the tags. This dataset actually has two different sets of tags, [universal dependency (UD) tags](https://universaldependencies.org/u/pos/) and [Penn Treebank (PTB) tags](https://www.sketchengine.eu/penn-treebank-tagset/). We'll only train our model on the UD tags, but will load the PTB tags to show how they could be used instead.

`UD_TAGS` handles how the UD tags should be handled. Our `TEXT` vocabulary - which we'll build later - will have *unknown* tokens in it, i.e. tokens that are not within our vocabulary. However, we won't have unknown tags, as we are dealing with a finite set of possible tags. TorchText `Fields` initialize a default unknown token, `<unk>`, which we remove by setting `unk_token = None`.

`PTB_TAGS` does the same as `UD_TAGS`, but handles the PTB tags instead.
```python
TEXT = data.Field(lower = True)
UD_TAGS = data.Field(unk_token = None)
PTB_TAGS = data.Field(unk_token = None)
```
We then define `fields`, which handles passing our fields to the dataset.

Note that order matters. If you only wanted to load the PTB tags, your `fields` would be:

```python
fields = (("text", TEXT), (None, None), ("ptbtags", PTB_TAGS))
```

where `None` tells TorchText not to load those tags.
fields = (("text", TEXT), ("udtags", UD_TAGS), ("ptbtags", PTB_TAGS))
Next, we load the UDPOS dataset using our defined fields.
```python
train_data, valid_data, test_data = datasets.UDPOS.splits(fields)
```
We can check how many examples are in each section of the dataset by checking their length.
print(f"Number of training examples: {len(train_data)}") print(f"Number of validation examples: {len(valid_data)}") print(f"Number of testing examples: {len(test_data)}")
Let's print out an example:
```python
print(vars(train_data.examples[0]))
```
We can also view the text and tags separately:
```python
print(vars(train_data.examples[0])['text'])
print(vars(train_data.examples[0])['udtags'])
print(vars(train_data.examples[0])['ptbtags'])
```
Next, we'll build the vocabulary - a mapping of tokens to integers. We want some unknown tokens within our dataset in order to replicate how this model would be used in real life, so we set `min_freq` to 2, which means only tokens that appear at least twice in the training set will be added to the vocabulary; the rest will be replaced by `<unk>` tokens.

We also load the [GloVe](https://nlp.stanford.edu/projects/glove/) pre-trained token embeddings - specifically, the 100-dimensional embeddings trained on 6 billion tokens. Using pre-trained embeddings usually leads to improved performance, although admittedly the dataset used in this tutorial is too small to take full advantage of them.

`unk_init` is used to initialize the embeddings of tokens that are not in the pre-trained embedding vocabulary. By default these are set to zeros; however, it is better not to have them all initialized to the same value, so we initialize them from a Normal/Gaussian distribution.

These pre-trained vectors are loaded into our vocabulary, and we will initialize our model's embedding layer with these values later.
```python
MIN_FREQ = 2

TEXT.build_vocab(train_data,
                 min_freq = MIN_FREQ,
                 vectors = "glove.6B.100d",
                 unk_init = torch.Tensor.normal_)

UD_TAGS.build_vocab(train_data)
PTB_TAGS.build_vocab(train_data)
```
We can check how many tokens and tags are in our vocabulary by getting their length:
print(f"Unique tokens in TEXT vocabulary: {len(TEXT.vocab)}") print(f"Unique tokens in UD_TAG vocabulary: {len(UD_TAGS.vocab)}") print(f"Unique tokens in PTB_TAG vocabulary: {len(PTB_TAGS.vocab)}")
Exploring the vocabulary, we can check the most common tokens within our texts:
```python
print(TEXT.vocab.freqs.most_common(20))
```
We can see the vocabularies for both of our tags:
```python
print(UD_TAGS.vocab.itos)
print(PTB_TAGS.vocab.itos)
```
We can also see how many of each tag are in our vocabulary:
```python
print(UD_TAGS.vocab.freqs.most_common())
print(PTB_TAGS.vocab.freqs.most_common())
```
We can also view how common each of the tags is within the training set:
```python
def tag_percentage(tag_counts):
    total_count = sum([count for tag, count in tag_counts])
    tag_counts_percentages = [(tag, count, count/total_count) for tag, count in tag_counts]
    return tag_counts_percentages

print("Tag\t\tCount\t\tPercentage\n")
for tag, count, percent in tag_percentage(UD_TAGS.vocab.freqs.most_common()):
    print(f"{tag}\t\t{count}\t\t{percent*100:4.1f}%")

print("Tag\t\tCount\t\tPercentage\n")
for tag, count, percent in tag_percentage(PTB_TAGS.vocab.freqs.most_common()):
    print(f"{tag}\t\t{count}\t\t{percent*100:4.1f}%")
```
The final part of data preparation is handling the iterator. This will be iterated over to return batches of data to process. Here, we set the batch size and the `device` - which is used to place the batches of tensors on our GPU, if we have one.
```python
BATCH_SIZE = 128

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
    (train_data, valid_data, test_data),
    batch_size = BATCH_SIZE,
    device = device)
```
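As a quick illustration (this peek is not in the original notebook), we can pull one batch from the iterator; with the default `Field` settings the tensors are laid out as `[sentence length, batch size]`:

```python
# grab a single batch just to inspect its layout (illustration only)
batch = next(iter(train_iterator))
print(batch.text.shape)    # [sent len, batch size]
print(batch.udtags.shape)  # same shape as the text
```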
## Building the Model

Next up, we define our model - a multi-layer bi-directional LSTM. The image below shows a simplified version of the model, with only one LSTM layer and the LSTM's cell state omitted for clarity.

![](assets/pos-bidirectional-lstm.png)

The model takes in a sequence of tokens, $X = \{x_1, x_2,...,x_T\}$, and passes them through an embedding layer, $e$, to get the token embeddings, $e(X) = \{e(x_1), e(x_2), ..., e(x_T)\}$.

These embeddings are processed - one per time-step - by the forward and backward LSTMs. The forward LSTM processes the sequence left-to-right, whilst the backward LSTM processes it right-to-left, i.e. the first input to the forward LSTM is $x_1$ and the first input to the backward LSTM is $x_T$. The LSTMs also take in the hidden state, $h$, and cell state, $c$, from the previous time-step:

$$h^{\rightarrow}_t = \text{LSTM}^{\rightarrow}(e(x^{\rightarrow}_t), h^{\rightarrow}_{t-1}, c^{\rightarrow}_{t-1})$$

$$h^{\leftarrow}_t=\text{LSTM}^{\leftarrow}(e(x^{\leftarrow}_t), h^{\leftarrow}_{t-1}, c^{\leftarrow}_{t-1})$$

After the whole sequence has been processed, the hidden and cell states are passed to the next layer of the LSTM. The initial hidden and cell states, $h_0$ and $c_0$, for each direction and layer are initialized to tensors full of zeros.

We then concatenate the forward and backward hidden states from the final layer of the LSTM, $H = \{h_1, h_2, ... h_T\}$, where $h_1 = [h^{\rightarrow}_1;h^{\leftarrow}_T]$, $h_2 = [h^{\rightarrow}_2;h^{\leftarrow}_{T-1}]$, etc., and pass them through a linear layer, $f$, which predicts which tag applies to each token, $\hat{y}_t = f(h_t)$.

When training the model, we compare our predicted tags, $\hat{Y}$, against the actual tags, $Y$, to calculate a loss and the gradients w.r.t. that loss, and then update our parameters.

We implement the model detailed above in the `BiLSTMPOSTagger` class.

`nn.Embedding` is an embedding layer, and its input dimension should be the size of the input (text) vocabulary. We tell it the index of the padding token so that it does not update the padding token's embedding entry.

`nn.LSTM` is the LSTM. We apply dropout as regularization between the layers if we are using more than one.

`nn.Linear` defines the linear layer that makes predictions from the LSTM outputs. We double the size of its input if we are using a bi-directional LSTM. Its output dimension should be the size of the tag vocabulary.

We also define a dropout layer with `nn.Dropout`, which we use in the `forward` method to apply dropout to the embeddings and to the outputs of the final layer of the LSTM.
```python
class BiLSTMPOSTagger(nn.Module):
    def __init__(self,
                 input_dim,
                 embedding_dim,
                 hidden_dim,
                 output_dim,
                 n_layers,
                 bidirectional,
                 dropout,
                 pad_idx):

        super().__init__()

        self.embedding = nn.Embedding(input_dim, embedding_dim, padding_idx = pad_idx)

        self.lstm = nn.LSTM(embedding_dim,
                            hidden_dim,
                            num_layers = n_layers,
                            bidirectional = bidirectional,
                            dropout = dropout if n_layers > 1 else 0)

        self.fc = nn.Linear(hidden_dim * 2 if bidirectional else hidden_dim, output_dim)

        self.dropout = nn.Dropout(dropout)

    def forward(self, text):

        #text = [sent len, batch size]

        #pass text through embedding layer
        embedded = self.dropout(self.embedding(text))

        #embedded = [sent len, batch size, emb dim]

        #pass embeddings into LSTM
        outputs, (hidden, cell) = self.lstm(embedded)

        #outputs holds the backward and forward hidden states in the final layer
        #hidden and cell are the backward and forward hidden and cell states at the final time-step

        #output = [sent len, batch size, hid dim * n directions]
        #hidden/cell = [n layers * n directions, batch size, hid dim]

        #we use our outputs to make a prediction of what the tag should be
        predictions = self.fc(self.dropout(outputs))

        #predictions = [sent len, batch size, output dim]

        return predictions
```
## Training the Model

Next, we instantiate the model. We need to ensure the embedding dimension matches that of the GloVe embeddings we loaded earlier.

The rest of the hyperparameters have been chosen as sensible defaults, though there may be a combination that performs better on this model and dataset.

The input and output dimensions are taken directly from the lengths of the respective vocabularies. The padding index is obtained using the vocabulary and the `Field` of the text.
```python
INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
HIDDEN_DIM = 128
OUTPUT_DIM = len(UD_TAGS.vocab)
N_LAYERS = 2
BIDIRECTIONAL = True
DROPOUT = 0.25
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]

model = BiLSTMPOSTagger(INPUT_DIM,
                        EMBEDDING_DIM,
                        HIDDEN_DIM,
                        OUTPUT_DIM,
                        N_LAYERS,
                        BIDIRECTIONAL,
                        DROPOUT,
                        PAD_IDX)
```
We initialize the weights from a simple Normal distribution. Again, there may be a better initialization scheme for this model and dataset.
```python
def init_weights(m):
    for name, param in m.named_parameters():
        nn.init.normal_(param.data, mean = 0, std = 0.1)

model.apply(init_weights)
```
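For instance, one common alternative - shown here only as a sketch of the kind of scheme that might work better, not something used in this notebook - is Xavier/Glorot initialization for the weight matrices:

```python
# hypothetical alternative initializer (not used in this notebook)
def init_weights_xavier(m):
    for name, param in m.named_parameters():
        if param.dim() > 1:
            nn.init.xavier_uniform_(param.data)  # weight matrices
        else:
            nn.init.zeros_(param.data)           # biases and 1-D parameters
```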
Next, a small function to tell us how many parameters are in our model. Useful for comparing different models.
```python
def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(f'The model has {count_parameters(model):,} trainable parameters')
```
We'll now initialize our model's embedding layer with the pre-trained embedding values we loaded earlier.

This is done by getting them from the vocab's `.vectors` attribute and then performing a `copy_` to overwrite the embedding layer's current weights.
```python
pretrained_embeddings = TEXT.vocab.vectors
print(pretrained_embeddings.shape)

model.embedding.weight.data.copy_(pretrained_embeddings)
```
It's common to initialize the embedding of the pad token to all zeros. This, along with setting the `padding_idx` in the model's embedding layer, means that the embedding should always output a tensor full of zeros when a pad token is input.
```python
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)

print(model.embedding.weight.data)
```
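A quick sanity check - not part of the original notebook - that a pad token now embeds to the zero vector:

```python
# looking up the pad token should give an all-zero embedding
with torch.no_grad():
    pad_vector = model.embedding(torch.LongTensor([PAD_IDX]))
print(pad_vector.abs().sum())  # expect tensor(0.)
```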
We then define our optimizer, used to update our parameters w.r.t. their gradients. We use Adam with the default learning rate.
```python
optimizer = optim.Adam(model.parameters())
```
Next, we define our loss function: cross-entropy loss.

Even though we have no `<unk>` tokens within our tag vocab, we still have `<pad>` tokens. This is because all sentences within a batch need to be the same length. However, we don't want to calculate the loss when the target is a `<pad>` token, as we aren't training our model to recognize padding tokens.

We handle this by setting the `ignore_index` in our loss function to the index of the padding token in our tag vocabulary.
```python
TAG_PAD_IDX = UD_TAGS.vocab.stoi[UD_TAGS.pad_token]

criterion = nn.CrossEntropyLoss(ignore_index = TAG_PAD_IDX)
```
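To see what `ignore_index` does, here is a toy example with made-up logits (illustration only): the loss is averaged over the non-pad targets, and the pad position contributes nothing.

```python
# three made-up predictions; the last target is the pad index
toy_logits = torch.randn(3, len(UD_TAGS.vocab))
toy_targets = torch.tensor([2, 3, TAG_PAD_IDX])
print(criterion(toy_logits, toy_targets))  # averaged over the first two targets only
```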
We then place our model and loss function on our GPU, if we have one.
```python
model = model.to(device)
criterion = criterion.to(device)
```
We will use the loss value between our predicted and actual tags to train the network, but ideally we'd like a more interpretable measure of how well our model is doing - accuracy.

The issue is that we don't want to calculate accuracy over the `<pad>` tokens, as we aren't interested in predicting them.

The function below only calculates accuracy over non-padded tokens. `non_pad_elements` is a tensor containing the indices of the non-pad tokens within an input batch. We compare the predictions of those elements with the labels to count how many predictions were correct, then divide by the number of non-pad elements to get our accuracy value over the batch.
```python
def categorical_accuracy(preds, y, tag_pad_idx):
    """
    Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
    """
    max_preds = preds.argmax(dim = 1, keepdim = True)  # get the index of the max probability
    non_pad_elements = (y != tag_pad_idx).nonzero()
    correct = max_preds[non_pad_elements].squeeze(1).eq(y[non_pad_elements])
    return correct.sum() / torch.FloatTensor([y[non_pad_elements].shape[0]]).to(device)
```
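A small illustration with made-up scores (not from the dataset): three predictions over the tag vocabulary, where the last target is the pad index and is therefore excluded from the accuracy. (In this vocabulary the pad token sits at index 0, so tag indices 1-3 are real tags.)

```python
toy_preds = torch.zeros(3, len(UD_TAGS.vocab), device = device)
toy_preds[0, 1] = 1.0  # predicts tag 1, target is 1 -> correct
toy_preds[1, 3] = 1.0  # predicts tag 3, target is 2 -> wrong
toy_y = torch.tensor([1, 2, TAG_PAD_IDX], device = device)
print(categorical_accuracy(toy_preds, toy_y, TAG_PAD_IDX))  # 1 of 2 non-pad correct -> tensor([0.5])
```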
Next is the function that handles training our model.

We first set the model to `train` mode to turn on dropout/batch-norm/etc. (if used). Then we iterate over our iterator, which returns batches of examples. For each batch, we:

- zero the gradients over the parameters from the last gradient calculation
- insert the batch of text into the model to get predictions
- reshape our predictions, as PyTorch loss functions cannot handle 3-dimensional predictions
- calculate the loss and accuracy between the predicted tags and actual tags
- call `backward` to calculate the gradients of the parameters w.r.t. the loss
- take an optimizer `step` to update the parameters
- add to the running totals of loss and accuracy
```python
def train(model, iterator, optimizer, criterion, tag_pad_idx):

    epoch_loss = 0
    epoch_acc = 0

    model.train()

    for batch in iterator:

        text = batch.text
        tags = batch.udtags

        optimizer.zero_grad()

        #text = [sent len, batch size]
        predictions = model(text)

        #predictions = [sent len, batch size, output dim]
        #tags = [sent len, batch size]
        predictions = predictions.view(-1, predictions.shape[-1])
        tags = tags.view(-1)

        #predictions = [sent len * batch size, output dim]
        #tags = [sent len * batch size]
        loss = criterion(predictions, tags)
        acc = categorical_accuracy(predictions, tags, tag_pad_idx)

        loss.backward()
        optimizer.step()

        epoch_loss += loss.item()
        epoch_acc += acc.item()

    return epoch_loss / len(iterator), epoch_acc / len(iterator)
```
The `evaluate` function is similar to the `train` function, except with changes made so we don't update the model's parameters.`model.eval()` is used to put the model in evaluation mode, so dropout/batch-norm/etc. are turned off. The iteration loop is also wrapped in `torch.no_grad` to ensure we don't calculate any gradients. We also don't need to call `optimizer.zero_grad()` and `optimizer.step()`.
```python
def evaluate(model, iterator, criterion, tag_pad_idx):

    epoch_loss = 0
    epoch_acc = 0

    model.eval()

    with torch.no_grad():

        for batch in iterator:

            text = batch.text
            tags = batch.udtags

            predictions = model(text)
            predictions = predictions.view(-1, predictions.shape[-1])
            tags = tags.view(-1)

            loss = criterion(predictions, tags)
            acc = categorical_accuracy(predictions, tags, tag_pad_idx)

            epoch_loss += loss.item()
            epoch_acc += acc.item()

    return epoch_loss / len(iterator), epoch_acc / len(iterator)
```
Next, we have a small function that tells us how long an epoch takes.
```python
def epoch_time(start_time, end_time):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs
```
Finally, we train our model!After each epoch we check if our model has achieved the best validation loss so far. If it has then we save the parameters of this model and we will use these "best" parameters to calculate performance over our test set.
```python
N_EPOCHS = 15
best_valid_loss = float('inf')

for epoch in range(N_EPOCHS):

    start_time = time.time()

    train_loss, train_acc = train(model, train_iterator, optimizer, criterion, TAG_PAD_IDX)
    valid_loss, valid_acc = evaluate(model, valid_iterator, criterion, TAG_PAD_IDX)

    end_time = time.time()

    epoch_mins, epoch_secs = epoch_time(start_time, end_time)

    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'tut1-model.pt')

    print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
    print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
```
We then load our "best" parameters and evaluate performance on the test set.
```python
model.load_state_dict(torch.load('tut1-model.pt'))

test_loss, test_acc = evaluate(model, test_iterator, criterion, TAG_PAD_IDX)

print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
```
## Inference

88% accuracy looks pretty good, but let's see our model tag some actual sentences.

We define a `tag_sentence` function that will:

- put the model into evaluation mode
- tokenize the sentence with spaCy if it is not already a list
- lowercase the tokens if the `Field` did
- numericalize the tokens using the vocabulary
- find out which tokens are not in the vocabulary, i.e. are `<unk>` tokens
- convert the numericalized tokens into a tensor and add a batch dimension
- feed the tensor into the model
- get the predictions over the sentence
- convert the predictions into readable tags

As well as returning the tokens and tags, it also returns which tokens were `<unk>` tokens.
```python
def tag_sentence(model, device, sentence, text_field, tag_field):

    model.eval()

    if isinstance(sentence, str):
        nlp = spacy.load('en')
        tokens = [token.text for token in nlp(sentence)]
    else:
        tokens = [token for token in sentence]

    if text_field.lower:
        tokens = [t.lower() for t in tokens]

    numericalized_tokens = [text_field.vocab.stoi[t] for t in tokens]

    unk_idx = text_field.vocab.stoi[text_field.unk_token]
    unks = [t for t, n in zip(tokens, numericalized_tokens) if n == unk_idx]

    token_tensor = torch.LongTensor(numericalized_tokens)
    token_tensor = token_tensor.unsqueeze(-1).to(device)

    predictions = model(token_tensor)
    top_predictions = predictions.argmax(-1)
    predicted_tags = [tag_field.vocab.itos[t.item()] for t in top_predictions]

    return tokens, predicted_tags, unks
```
We'll get an already tokenized example from the training set and test our model's performance.
```python
example_index = 1

sentence = vars(train_data.examples[example_index])['text']
actual_tags = vars(train_data.examples[example_index])['udtags']

print(sentence)
```
We can then use our `tag_sentence` function to get the tags. Notice how the tokens referring to the subject of the sentence, the "respected cleric", are both `<unk>` tokens!
```python
tokens, pred_tags, unks = tag_sentence(model, device, sentence, TEXT, UD_TAGS)

print(unks)
```
We can then check how well it did. Surprisingly, it got every token correct, including the two that were unknown tokens!
print("Pred. Tag\tActual Tag\tCorrect?\tToken\n") for token, pred_tag, actual_tag in zip(tokens, pred_tags, actual_tags): correct = '✔' if pred_tag == actual_tag else '✘' print(f"{pred_tag}\t\t{actual_tag}\t\t{correct}\t\t{token}")
Let's now make up our own sentence and see how well the model does. Our example sentence below has every token within the model's vocabulary.
```python
sentence = 'The Queen will deliver a speech about the conflict in North Korea at 1pm tomorrow.'

tokens, tags, unks = tag_sentence(model, device, sentence, TEXT, UD_TAGS)

print(unks)
```
Looking at the sentence it seems like it gave sensible tags to every token!
print("Pred. Tag\tToken\n") for token, tag in zip(tokens, tags): print(f"{tag}\t\t{token}")
We've now seen how to implement PoS tagging with PyTorch and TorchText! The BiLSTM isn't a state-of-the-art model in terms of performance, but it is a strong baseline for PoS tasks and a good tool to have in your arsenal.

## Going deeper

What if we could combine word-level and char-level approaches?

![title](https://i.postimg.cc/tT9hsBfj/ive-put-an-rnn-in-your-rnn-so-you-can-train-an-rnn-on-every-step-of-your-rnn-training-loop.jpg)

Actually, we can. Let's use an LSTM or GRU to generate an embedding for every word at the char level.

![title](https://guillaumegenthial.github.io/assets/char_representation.png)

*Image source: https://guillaumegenthial.github.io/sequence-tagging-with-tensorflow.html*

![title](https://guillaumegenthial.github.io/assets/bi-lstm.png)

*Image source: https://guillaumegenthial.github.io/sequence-tagging-with-tensorflow.html*

To do that we need to make a few adjustments to the code above. One possible way to fill in the `# YOUR CODE HERE` placeholders is sketched after the code block.
```python
# Now let's try both word and character embeddings
WORD = data.Field(lower = True)
UD_TAG = data.Field(unk_token = None)
PTB_TAG = data.Field(unk_token = None)

# We'll use NestedField to tokenize each word into a list of chars
CHAR_NESTING = data.Field(tokenize=list, init_token="<bos>", eos_token="<eos>")
CHAR = data.NestedField(CHAR_NESTING)

fields = [(('word', 'char'), (WORD, CHAR)), ('udtag', UD_TAG), ('ptbtag', PTB_TAG)]

train_data, valid_data, test_data = datasets.UDPOS.splits(fields)

print(train_data.fields)
print(len(train_data))
print(vars(train_data[0]))

WORD.build_vocab(
    train_data,
    min_freq = MIN_FREQ,
    vectors="glove.6B.100d",
    unk_init = torch.Tensor.normal_
)
CHAR.build_vocab(train_data)
UD_TAG.build_vocab(train_data)
PTB_TAG.build_vocab(train_data)

print(f"Unique tokens in WORD vocabulary: {len(WORD.vocab)}")
print(f"Unique tokens in CHAR vocabulary: {len(CHAR.vocab)}")
print(f"Unique tokens in UD_TAG vocabulary: {len(UD_TAG.vocab)}")
print(f"Unique tokens in PTB_TAG vocabulary: {len(PTB_TAG.vocab)}")

BATCH_SIZE = 64

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
    (train_data, valid_data, test_data),
    batch_size = BATCH_SIZE,
    device = device)

batch = next(iter(train_iterator))
text = batch.word
chars = batch.char
tags = batch.udtag

class BiLSTMPOSTaggerWithChars(nn.Module):
    def __init__(self,
                 word_input_dim,
                 word_embedding_dim,
                 char_input_dim,
                 char_embedding_dim,
                 char_hidden_dim,
                 hidden_dim,
                 output_dim,
                 n_layers,
                 bidirectional,
                 dropout,
                 pad_idx):

        super().__init__()

        self.char_embedding = # YOUR CODE HERE
        self.char_gru = # YOUR CODE HERE

        self.word_embedding = nn.Embedding(word_input_dim, word_embedding_dim, padding_idx = pad_idx)

        self.lstm = nn.LSTM(word_embedding_dim + # YOUR CODE HERE,
                            hidden_dim,
                            num_layers = n_layers,
                            bidirectional = bidirectional,
                            dropout = dropout if n_layers > 1 else 0)

        self.fc = nn.Linear(hidden_dim * 2 if bidirectional else hidden_dim, output_dim)

        self.dropout = nn.Dropout(dropout)

    def forward(self, text, chars):

        #text = [sent len, batch size]

        #pass text through embedding layer
        embedded = self.dropout(self.word_embedding(text))

        #embedded = [sent len, batch size, emb dim]

        chars_embedded = # YOUR CODE HERE
        hid_from_chars = # YOUR CODE HERE
        embedded_with_chars = torch.cat([embedded, hid_from_chars], dim=2)

        #pass embeddings into LSTM
        outputs, (hidden, cell) = self.lstm(embedded_with_chars)

        #outputs holds the backward and forward hidden states in the final layer
        #hidden and cell are the backward and forward hidden and cell states at the final time-step

        #output = [sent len, batch size, hid dim * n directions]
        #hidden/cell = [n layers * n directions, batch size, hid dim]

        #we use our outputs to make a prediction of what the tag should be
        predictions = self.fc(self.dropout(outputs))

        #predictions = [sent len, batch size, output dim]

        return predictions

INPUT_DIM = len(WORD.vocab)
EMBEDDING_DIM = 100
HIDDEN_DIM = 160
CHAR_INPUT_DIM = 112
CHAR_EMBEDDING_DIM = 30
CHAR_HIDDEN_DIM = 30
OUTPUT_DIM = len(UD_TAG.vocab)
N_LAYERS = 2
BIDIRECTIONAL = True
DROPOUT = 0.25
PAD_IDX = WORD.vocab.stoi[WORD.pad_token]

model = BiLSTMPOSTaggerWithChars(
    INPUT_DIM,
    EMBEDDING_DIM,
    CHAR_INPUT_DIM,
    CHAR_EMBEDDING_DIM,
    CHAR_HIDDEN_DIM,
    HIDDEN_DIM,
    OUTPUT_DIM,
    N_LAYERS,
    BIDIRECTIONAL,
    DROPOUT,
    PAD_IDX
)
```
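One possible way to fill in the `# YOUR CODE HERE` placeholders - a sketch under stated assumptions rather than the canonical solution. It assumes the `NestedField` batch comes out as `[batch size, sent len, word len]` (permute accordingly if your torchtext version lays it out differently), embeds the characters, runs a single-layer GRU over each word's characters, and uses the GRU's final hidden state as a `char_hidden_dim`-wide feature per word, so the missing LSTM input size becomes `word_embedding_dim + char_hidden_dim`:

```python
import torch
from torch import nn

class BiLSTMPOSTaggerWithChars(nn.Module):
    def __init__(self, word_input_dim, word_embedding_dim,
                 char_input_dim, char_embedding_dim, char_hidden_dim,
                 hidden_dim, output_dim, n_layers, bidirectional,
                 dropout, pad_idx):
        super().__init__()
        # char-level pieces that replace the placeholders
        self.char_embedding = nn.Embedding(char_input_dim, char_embedding_dim)
        self.char_gru = nn.GRU(char_embedding_dim, char_hidden_dim)
        self.word_embedding = nn.Embedding(word_input_dim, word_embedding_dim,
                                           padding_idx = pad_idx)
        # the LSTM now sees the word embedding plus the char-level word feature
        self.lstm = nn.LSTM(word_embedding_dim + char_hidden_dim, hidden_dim,
                            num_layers = n_layers, bidirectional = bidirectional,
                            dropout = dropout if n_layers > 1 else 0)
        self.fc = nn.Linear(hidden_dim * 2 if bidirectional else hidden_dim, output_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, text, chars):
        # text  = [sent len, batch size]
        # chars = [batch size, sent len, word len]  (assumed NestedField layout)
        embedded = self.dropout(self.word_embedding(text))
        batch_size, sent_len, word_len = chars.shape
        chars_embedded = self.dropout(self.char_embedding(chars))
        # fold (batch, sent) together and make word_len the time dimension
        chars_embedded = chars_embedded.view(batch_size * sent_len, word_len, -1).permute(1, 0, 2)
        _, char_hidden = self.char_gru(chars_embedded)
        # final GRU hidden state per word -> [sent len, batch size, char hid dim]
        hid_from_chars = char_hidden[-1].view(batch_size, sent_len, -1).permute(1, 0, 2)
        embedded_with_chars = torch.cat([embedded, hid_from_chars], dim = 2)
        outputs, (hidden, cell) = self.lstm(embedded_with_chars)
        # predictions = [sent len, batch size, output dim]
        return self.fc(self.dropout(outputs))
```

With `CHAR_HIDDEN_DIM = 30` from the cell above, the LSTM input width under this sketch is 130.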
**Congratulations: you've now got an LSTM that relies on GRU output at each step.**

Now we only need to train it. Same actions, with very small adjustments.
```python
def init_weights(m):
    for name, param in m.named_parameters():
        nn.init.normal_(param.data, mean = 0, std = 0.1)

model.apply(init_weights)

def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(f'The model has {count_parameters(model):,} trainable parameters')

# use the new WORD field's vectors for the word embedding layer
pretrained_embeddings = WORD.vocab.vectors
print(pretrained_embeddings.shape)

model.word_embedding.weight.data.copy_(pretrained_embeddings)
model.word_embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)
print(model.word_embedding.weight.data)

optimizer = optim.Adam(model.parameters())

TAG_PAD_IDX = UD_TAG.vocab.stoi[UD_TAG.pad_token]
criterion = nn.CrossEntropyLoss(ignore_index = TAG_PAD_IDX)

model = model.to(device)
criterion = criterion.to(device)

def train(model, iterator, optimizer, criterion, tag_pad_idx):

    epoch_loss = 0
    epoch_acc = 0

    model.train()

    for batch in iterator:

        text = batch.word
        chars = batch.char
        tags = batch.udtag

        optimizer.zero_grad()

        #text = [sent len, batch size]
        predictions = model(text, chars)

        #predictions = [sent len, batch size, output dim]
        #tags = [sent len, batch size]
        predictions = predictions.view(-1, predictions.shape[-1])
        tags = tags.view(-1)

        #predictions = [sent len * batch size, output dim]
        #tags = [sent len * batch size]
        loss = criterion(predictions, tags)
        acc = categorical_accuracy(predictions, tags, tag_pad_idx)

        loss.backward()
        optimizer.step()

        epoch_loss += loss.item()
        epoch_acc += acc.item()

    return epoch_loss / len(iterator), epoch_acc / len(iterator)

def evaluate(model, iterator, criterion, tag_pad_idx):

    epoch_loss = 0
    epoch_acc = 0

    model.eval()

    with torch.no_grad():

        for batch in iterator:

            text = batch.word
            chars = batch.char
            tags = batch.udtag

            predictions = model(text, chars)
            predictions = predictions.view(-1, predictions.shape[-1])
            tags = tags.view(-1)

            loss = criterion(predictions, tags)
            acc = categorical_accuracy(predictions, tags, tag_pad_idx)

            epoch_loss += loss.item()
            epoch_acc += acc.item()

    return epoch_loss / len(iterator), epoch_acc / len(iterator)

N_EPOCHS = 15
best_valid_loss = float('inf')

for epoch in range(N_EPOCHS):

    start_time = time.time()

    train_loss, train_acc = train(model, train_iterator, optimizer, criterion, TAG_PAD_IDX)
    valid_loss, valid_acc = evaluate(model, valid_iterator, criterion, TAG_PAD_IDX)

    end_time = time.time()

    epoch_mins, epoch_secs = epoch_time(start_time, end_time)

    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'tut2-model.pt')

    print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
    print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')

# Let's take a look at the model from the last epoch
test_loss, test_acc = evaluate(model, test_iterator, criterion, TAG_PAD_IDX)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')

# And at the best checkpoint (based on validation score)
model.load_state_dict(torch.load('tut2-model.pt'))
test_loss, test_acc = evaluate(model, test_iterator, criterion, TAG_PAD_IDX)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
```
1. Imports
```python
import tensorflow as tf
from PIL.Image import DecompressionBombError
import matplotlib.pyplot as plt
import json
import numpy as np
import cv2
import praw, requests
import psaw
import datetime as dt
import os
import sys
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras import optimizers
import random
```
*Source: src/main.ipynb from kevinlinxc/ArtRater (MIT).*
2. Functions
```python
# Checks if two images are identical; returns True if they are,
# and also returns True if either file is blank (failed to load)
def compare2images(original, duplicate):
    if original is None or duplicate is None:
        return True  # delete empty pictures
    if original.shape == duplicate.shape:
        # the images have the same size and channels
        difference = cv2.subtract(original, duplicate)
        b, g, r = cv2.split(difference)
        if cv2.countNonZero(b) == 0 and cv2.countNonZero(g) == 0 and cv2.countNonZero(r) == 0:
            return True
        else:
            return False
    else:
        return False

# A simple progress bar for transparency on the 20000-image processing tasks
def progress(purpose, currentcount, maxcount):
    sys.stdout.write('\r')
    sys.stdout.write("{}: {:.1f}%".format(purpose, (100/(maxcount-1)*currentcount)))
    sys.stdout.flush()

# Custom image data generator following this example:
# https://www.pyimagesearch.com/2018/12/24/how-to-use-keras-fit-and-fit_generator-a-hands-on-tutorial/
def custom_file_image_generator(inputPath, bs, mode="train", aug=None, max_score=1, frompath="picsnew"):
    f = open(inputPath, "r")
    while True:
        images = []
        labels = []
        while len(images) < bs:
            line = f.readline()
            if line == "":
                f.seek(0)
                line = f.readline()
                # if we are evaluating we should now break from our
                # loop to ensure we don't continue to fill up the
                # batch from samples at the beginning of the file
                if mode == "eval":
                    break
            label = int(line.split(".")[0].split("_")[0])
            stripped = line.strip('\n')
            image = plt.imread(f"{frompath}{stripped}")
            # removes alpha channel
            image = np.float32(image)[:, :, :3]
            # necessary resizing to avoid the PIL pixel cap
            while image.shape[0] * image.shape[1] > 89478485:
                image = cv2.resize(image, (0, 0), fx=0.5, fy=0.5)
            image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
            images.append(image)
            labels.append(label / max_score)
        labels = np.asarray(labels).T
        yield (np.asarray(images), labels)
```
3. Set up the Reddit API for downloading images
```python
# Set up API keys from .gitignored file
with open('config.json') as config_file:
    config = json.load(config_file)['keys']

# Sign into Reddit using API key
reddit = praw.Reddit(user_agent="Downloading images from r/art for a machine learning project",
                     client_id=config['client_id'],
                     client_secret=config['client_secret'],
                     username=config['username'],
                     password=config['password'])
```
4. Downloading pictures from Reddit r/art using PSAW and PRAW
```python
# 187 MB for 200 pics, approx 18.7 GB for 20000

# Time periods to choose to download from
Jan12018 = int(dt.datetime(2018, 1, 1).timestamp())
Jan12019 = int(dt.datetime(2019, 1, 1).timestamp())
Jan12020 = int(dt.datetime(2020, 1, 1).timestamp())
Jan12021 = int(dt.datetime(2021, 1, 1).timestamp())

# Pass a PRAW instance into PSAW so that scores are available
api = psaw.PushshiftAPI(reddit)

# Number of posts to try and download
n = 30000

# Path to download to
dlpath = "pics2/"

print("Looking for posts using Pushshift...")
# this step takes a while
posts = list(api.search_submissions(after=Jan12019, before=Jan12020, subreddit='art', limit=n*10))
numpostsfound = len(posts)
print(f"Number of posts found: {numpostsfound}")

counter = 0
for post in posts:
    if post.score > 1:
        progress("Downloading", counter, numpostsfound)
        counter += 1
        url = post.url
        # Save score for ML training, and post id for unique file names
        file_name = str(post.score) + "_" + str(post.id) + ".jpg"
        try:
            # use requests to get the image
            r = requests.get(url)
            fullfilename = dlpath + file_name
            # save image
            with open(fullfilename, "wb") as f:
                f.write(r.content)
        except (
            requests.ConnectionError,
            requests.exceptions.ReadTimeout,
            requests.exceptions.Timeout,
            requests.exceptions.ConnectTimeout,
        ) as e:
            print(e)

files = [f for f in os.listdir(dlpath) if os.path.isfile(os.path.join(dlpath, f))]
# Number of files downloaded is not always the same as requested due to connection errors
print(f'\nNumber of files downloaded: {len(files)}')
```
5. Processing code that removes pictures that are deleted/corrupt

Might need to run multiple times if low on RAM.
```python
# Path to delete bad pictures from
path = "pics2/"
files = [f for f in os.listdir(path) if os.path.isfile(os.path.join(path, f))]
cull = []
counter = 0
length = len(files)
print(f"Original Number of files: {length}")

# Templates of a bad picture
deletedtemplate = cv2.imread("exampledeleted.jpg")
deletedtemplate2 = cv2.imread("exampledeleted2.jpg")

for file in files:
    progress("Deleting bad files", counter, length)
    counter += 1
    fullfilename = path + file
    candidate = cv2.imread(fullfilename)
    # if it's the same picture as a template, or the picture is None
    if compare2images(deletedtemplate, candidate) or compare2images(deletedtemplate2, candidate):
        # delete
        os.remove(fullfilename)

files2 = [f for f in os.listdir(path) if os.path.isfile(os.path.join(path, f))]
print(f"\nFinal Number of files: {len(files2)}")
```
```
Original Number of files: 11377
Deleting bad files: 100.0%
Final Number of files: 11364
```
6. Preprocessing code that corrects grayscale images to RGB and rescales pictures to a maximum width or height of 1000

If I ran NN training with large images it would take too long, and if I ran on Google Colab I wouldn't have the drive space for all the pictures.
```python
# Path being read from
path = 'pics2/'
# Path writing to
path2 = 'picsfix/'
files = [f for f in os.listdir(path) if os.path.isfile(os.path.join(path, f))]
length = len(files)
counter = 0
failures = []
for file in files:
    try:
        progress("Resizing and fixing pictures", counter, length)
        # OpenCV doesn't open jpegs
        img = plt.imread(f'{path}{file}')
        if len(img.shape) < 3:
            img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
        # Resize to 1000 max
        largestdim = max(img.shape)
        targetlargestdim = 1000
        scaling = targetlargestdim / largestdim
        if scaling < 1:
            smaller = cv2.resize(img, (0, 0), fx=scaling, fy=scaling)
        else:
            # if the image is already smaller, don't bother upscaling
            smaller = img
        filename = path2 + file
        plt.imsave(filename, smaller)
        counter += 1
    except DecompressionBombError as e:
        print(file)
        print("Decomp error")
print("\ndone")
```
7. Plot histogram of scores to see how bad the bias towards lower scores is
path = "picsfix2" files = [f for f in os.listdir(path) if os.path.isfile(os.path.join(path, f))] numpics = len(files) labelsall = [] for file in files: labelsall.append(int(file.split(".")[0].split("_")[0])) #print(labelsall) plt.hist(labelsall) plt.yscale("log") plt.ylabel("Frequency") plt.xlabel("Score") plt.title("Post score distribution, log scale") plt.show()
8. Split files into training and testing sets and write the names of the files to txt files

I'm following [this guide](https://www.pyimagesearch.com/2018/12/24/how-to-use-keras-fit-and-fit_generator-a-hands-on-tutorial/), and using a file reader that resets its index to 0 seemed like the easiest way to mimic what the author set up.
```python
# Path of pictures to split and write txts for
path = "picsfix2/"
trainingpath = "training.txt"
testingpath = "testing.txt"
files = [f for f in os.listdir(path) if os.path.isfile(os.path.join(path, f))]

# randomize to avoid passing pictures to the neural net in alphabetical order
random.shuffle(files)

trainindex = int(np.round(0.8 * len(files)))
training = files[0:trainindex]
testing = files[trainindex:]

with open(trainingpath, 'w') as f:
    for item in training:
        f.write("%s\n" % item)

with open(testingpath, 'w') as f:
    for item in testing:
        f.write("%s\n" % item)
```
9. Actual neural net training using a convolutional neural net

I didn't have enough RAM to train locally, so I ended up porting to Google Colab and training there. The training has not been successful so far, and I haven't taken the time to diagnose why yet.
```python
# Path of preprocessed pictures
path = "picsfix2/"
# Paths of training and testing txts that have file names
trainPath = 'training.txt'
testpath = 'testing.txt'

# Get all file names and scores
files = [f for f in os.listdir(path) if os.path.isfile(os.path.join(path, f))]
numpics = len(files)
labelsall = []
for file in files:
    labelsall.append(int(file.split(".")[0].split("_")[0]))
highestScore = max(labelsall)
print(f'Highest score: {highestScore}')

input_shape = (None, None, 3)
NUM_EPOCHS = 12
BS = 1
NUM_TRAIN_IMAGES = int(np.round(0.8 * len(files)))
NUM_TEST_IMAGES = len(files) - NUM_TRAIN_IMAGES

traingen = custom_file_image_generator(trainPath, BS, "train", None, highestScore, path)
testgen = custom_file_image_generator(testpath, BS, "train", None, highestScore, path)

tf.keras.backend.clear_session()
conv_model = models.Sequential()
# normalize pictures to [0, 1]
conv_model.add(layers.experimental.preprocessing.Rescaling(1./255))
conv_model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=input_shape))
conv_model.add(layers.GlobalMaxPooling2D())
conv_model.add(layers.Flatten())
conv_model.add(layers.Dense(512, activation='relu'))
conv_model.add(layers.Dense(1, activation='linear'))

LEARNING_RATE = 1e-4
conv_model.compile(loss=tf.keras.losses.MeanSquaredError(),
                   optimizer=optimizers.RMSprop(lr=LEARNING_RATE),
                   metrics=['acc'])

history_conv = conv_model.fit(traingen,
                              steps_per_epoch=NUM_TRAIN_IMAGES // BS,
                              validation_data=testgen,
                              validation_steps=NUM_TEST_IMAGES // BS,
                              epochs=NUM_EPOCHS)

modelfilename = 'art2.h5'
conv_model.save(modelfilename)
```
```
Highest score: 54157
Epoch 1/12
 1443/14918 [=>............................] - ETA: 1:37:21 - loss: 0.0014 - acc: 0.0049
```
load data
```python
import pandas as pd
import joblib
from urllib.request import urlopen
import numpy as np

df_url = 'https://c620.s3-ap-northeast-1.amazonaws.com/c620_train.csv'
c_url = 'https://c620.s3-ap-northeast-1.amazonaws.com/c620_col_names.pkl'

df = pd.read_csv(df_url, index_col=0)
c = joblib.load(urlopen(c_url))

case_col = c['case']
feed_col = c['x41']
op_col = c['density'] + c['yRefluxRate'] + c['yHeatDuty'] + c['yControl']
sp_col = c['vent_gas_sf'] + c['distillate_sf'] + c['sidedraw_sf'] + c['bottoms_sf']
wt_col = c['vent_gas_x'] + c['distillate_x'] + c['sidedraw_x'] + c['bottoms_x']
all_col = case_col + feed_col + op_col + sp_col + wt_col

print(len(case_col))
print(len(feed_col))
print(len(op_col))
print(len(sp_col))
print(len(wt_col))

df[all_col].head(1)

case_col_idx = [all_col.index(i) for i in case_col]
feed_col_idx = [all_col.index(i) for i in feed_col]
op_col_idx = [all_col.index(i) for i in op_col]
sp_col_idx = [all_col.index(i) for i in sp_col]
wt_col_idx = [all_col.index(i) for i in wt_col]
```
*Source: notebook/experiment/gan_style_model.ipynb from skywalker0803r/c620 (MIT).*
preprocess data
```python
from sklearn.utils import shuffle
from torch.utils.data import TensorDataset, DataLoader
from sklearn.preprocessing import MinMaxScaler
import torch

# split data
df = shuffle(df)
p1 = int(len(df)*0.8)
p2 = int(len(df)*0.9)

# to FloatTensor
train = torch.FloatTensor(df[all_col].values[:p1])
vaild = torch.FloatTensor(df[all_col].values[p1:p2])
test = torch.FloatTensor(df[all_col].values[p2:])

# create DataLoaders
trainset = TensorDataset(train[:,case_col_idx], train[:,feed_col_idx], train[:,op_col_idx], train[:,sp_col_idx], train[:,wt_col_idx])
train_iter = DataLoader(trainset, batch_size=64)

vaildset = TensorDataset(vaild[:,case_col_idx], vaild[:,feed_col_idx], vaild[:,op_col_idx], vaild[:,sp_col_idx], vaild[:,wt_col_idx])
vaild_iter = DataLoader(vaildset, batch_size=64)

testset = TensorDataset(test[:,case_col_idx], test[:,feed_col_idx], test[:,op_col_idx], test[:,sp_col_idx], test[:,wt_col_idx])
test_iter = DataLoader(testset, batch_size=64)
```
define model, loss, and optimizer
```python
import torch
from torch import nn
import torch.nn.functional as F

def mlp(sizes, activation, output_activation=nn.Identity):
    layers = []
    for j in range(len(sizes)-1):
        act = activation if j < len(sizes)-2 else output_activation
        layers += [nn.Linear(sizes[j], sizes[j+1]), act()]
    return nn.Sequential(*layers)

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.op_model = mlp([len(case_col)+len(feed_col), 128, len(op_col)], nn.ReLU)
        self.sp_model = mlp([len(case_col)+len(feed_col)+len(op_col), 128, len(wt_col)], nn.ReLU, nn.Sigmoid)

    def forward(self, case, feed):
        op = self.op_model(torch.cat((case, feed), dim=-1)).clone()
        sp = self.sp_model(torch.cat((case, feed, op), dim=-1)).clone()
        for idx in range(41):
            sp[:, [idx, idx+41, idx+41*2, idx+41*3]] = self.normalize(sp[:, [idx, idx+41, idx+41*2, idx+41*3]])
        s1, s2, s3, s4 = sp[:, :41], sp[:, 41:41*2], sp[:, 41*2:41*3], sp[:, 41*3:41*4]
        w1, w2, w3, w4 = self.sp2wt(feed, s1), self.sp2wt(feed, s2), self.sp2wt(feed, s3), self.sp2wt(feed, s4)
        wt = torch.cat((w1, w2, w3, w4), dim=-1)
        return op, sp, wt

    @staticmethod
    def normalize(x):
        return x / x.sum(dim=1).reshape(-1, 1)

    @staticmethod
    def sp2wt(x, s):
        a = 100 * x * s
        b = torch.diag(x @ s.T).reshape(-1, 1)
        b = torch.clamp(b, 1e-8, float('inf'))
        return a / b

# model, optimizer, loss_fn
model = Model()
optimizer = torch.optim.Adam(model.parameters())
loss_fn = nn.SmoothL1Loss()

# forward test
for case, feed, op, sp, wt in train_iter:
    op_hat, sp_hat, wt_hat = model(case, feed)
    print(op_hat.shape)
    print(sp_hat.shape)
    print(wt_hat.shape)
    break
```
```
torch.Size([64, 10])
torch.Size([64, 164])
torch.Size([64, 164])
```
tensorboard
```python
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()
case, feed, op, sp, wt = next(iter(train_iter))
writer.add_graph(model, [case, feed])
writer.close()

%load_ext tensorboard
%tensorboard --logdir runs
```
```
The tensorboard extension is already loaded. To reload it, use:
  %reload_ext tensorboard
```
train model
```python
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook as tqdm
from copy import deepcopy

# train step
def train_step(model):
    model.train()
    total_loss = 0
    for t, (case, feed, op, sp, wt) in enumerate(train_iter):
        op_hat, sp_hat, wt_hat = model(case, feed)
        op_loss = loss_fn(op_hat, op)
        sp_loss = loss_fn(sp_hat, sp)
        wt_loss = loss_fn(wt_hat, wt)
        Sidedraw_Benzene_loss = loss_fn(
            wt_hat[:, wt_col.index('Tatoray Stripper C620 Operation_Sidedraw Production Rate and Composition_Benzene_wt%')],
            case[:, case_col.index('Tatoray Stripper C620 Operation_Specifications_Spec 3 : Benzene in Sidedraw_wt%')])
        loss = op_loss + sp_loss + wt_loss + Sidedraw_Benzene_loss
        # update model
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        total_loss += loss.item()
    return total_loss / (t+1)

# valid step
def valid_step(model):
    model.eval()
    total_loss = 0
    for t, (case, feed, op, sp, wt) in enumerate(vaild_iter):
        op_hat, sp_hat, wt_hat = model(case, feed)
        op_loss = loss_fn(op_hat, op)
        sp_loss = loss_fn(sp_hat, sp)
        wt_loss = loss_fn(wt_hat, wt)
        Sidedraw_Benzene_loss = loss_fn(
            wt_hat[:, wt_col.index('Tatoray Stripper C620 Operation_Sidedraw Production Rate and Composition_Benzene_wt%')],
            case[:, case_col.index('Tatoray Stripper C620 Operation_Specifications_Spec 3 : Benzene in Sidedraw_wt%')])
        loss = op_loss + sp_loss + wt_loss + Sidedraw_Benzene_loss
        total_loss += loss.item()
    return total_loss / (t+1)

def train(model, max_epochs):
    history = {'train_loss': [], 'valid_loss': []}
    current_loss = np.inf
    best_model = None
    for i in tqdm(range(max_epochs)):
        history['train_loss'].append(train_step(model))
        history['valid_loss'].append(valid_step(model))
        if i % 10 == 0:
            print("epoch:{} train_loss:{:.4f} valid_loss:{:.4f}".format(i, history['train_loss'][-1], history['valid_loss'][-1]))
        if history['valid_loss'][-1] <= current_loss:
            best_model = deepcopy(model.eval())
            current_loss = history['valid_loss'][-1]
    model = deepcopy(best_model.eval())
    plt.plot(history['train_loss'], label='train_loss')
    plt.plot(history['valid_loss'], label='valid_loss')
    plt.legend()
    plt.show()
    return model

model = train(model, max_epochs=250)

op_pred, sp_pred, wt_pred = model(test[:, case_col_idx], test[:, feed_col_idx])
wt_pred = pd.DataFrame(wt_pred.detach().cpu().numpy(), columns=wt_col)
op_pred = pd.DataFrame(op_pred.detach().cpu().numpy(), columns=op_col)
wt_real = pd.DataFrame(test[:, wt_col_idx].detach().cpu().numpy(), columns=wt_col)
op_real = pd.DataFrame(test[:, op_col_idx].detach().cpu().numpy(), columns=op_col)

from sklearn.metrics import r2_score, mean_squared_error
import numpy as np
import warnings
warnings.filterwarnings('ignore')

def mape(y_true, y_pred, e=2e-2):
    y_true, y_pred = np.array(y_true), np.array(y_pred)
    mask = y_true > e
    y_true, y_pred = y_true[mask], y_pred[mask]
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

def show_metrics(y_real, y_pred, e=2e-2):
    res = pd.DataFrame(index=y_pred.columns, columns=['R2', 'MSE', 'MAPE'])
    for i in y_pred.columns:
        res.loc[i, 'R2'] = np.clip(r2_score(y_real[i], y_pred[i]), 0, 1)
        res.loc[i, 'MSE'] = mean_squared_error(y_real[i], y_pred[i])
        res.loc[i, 'MAPE'] = mape(y_real[i], y_pred[i], e)
    res.loc['AVG'] = res.mean(axis=0)
    return res

show_metrics(op_real, op_pred)
show_metrics(wt_real, wt_pred)

case = pd.DataFrame(test[:, case_col_idx].detach().cpu().numpy(), columns=case_col)
case.iloc[:, [2]]
wt_pred.iloc[:, [89]]
wt_real.head()
wt_pred.head()
op_real.head()
op_pred.head()
```
## Exercise

1. Define a function `absolute(x)` that returns the absolute value of a number `x`. (Python does provide a built-in function called `abs`, but write your own here.)
2. Define a function `sign(x)` that returns 1 if `x` is positive, -1 if it is negative, and 0 if it is zero.

Once you have written the definitions, run the next cell and check that only `True` is printed.
```python
def absolute(x):
    ...

def sign(x):
    ...

print(absolute(5) == 5)
print(absolute(-5) == 5)
print(absolute(0) == 0)
print(sign(5) == 1)
print(sign(-5) == -1)
print(sign(0) == 0)
```
*Source: 1/1-3.ipynb from petrluner/utpython_lab (MIT).*
## Solution to the exercise
```python
def absolute(x):
    if x < 0:
        return -x
    else:
        return x

def sign(x):
    if x < 0:
        return -1
    if x > 0:
        return 1
    return 0
```
## Objective

In my previous tutorial, I showed how to use the `apply_rows` and `apply_chunks` methods in cuDF to implement customized data transformations. Under the hood, they both use the [Numba library](https://numba.pydata.org/) to compile normal Python code into GPU kernels. Numba is an excellent Python library that accelerates numerical computations and, most importantly, has direct CUDA programming support. For detailed information, please check out the [Numba CUDA documentation](https://numba.pydata.org/numba-doc/dev/cuda/index.html).

As we know, the underlying data structure of cuDF is a GPU version of Apache Arrow. We can pass the GPU array around directly, without any copying. Once we have the Numba library and a standard GPU array, the sky is the limit. In this tutorial, I will show how to use Numba CUDA to accelerate cuDF data transformations, and how to speed them up step by step using CUDA programming tricks. The following experiments are performed on a DGX V100 node.

## A simple example

As usual, I am going to start with a simple example of doubling the numbers in an array:
```python
import cudf
import numpy as np
from numba import cuda

array_len = 1000
number_of_threads = 128
number_of_blocks = (array_len + (number_of_threads - 1)) // number_of_threads

df = cudf.DataFrame()
df['in'] = np.arange(array_len, dtype=np.float64)

@cuda.jit
def double_kernel(result, array_len):
    """
    double each element of the array
    """
    i = cuda.grid(1)
    if i < array_len:
        result[i] = result[i] * 2.0

before = df['in'].sum()
gpu_array = df['in'].to_gpu_array()
print(type(gpu_array))
double_kernel[(number_of_blocks,), (number_of_threads,)](gpu_array, array_len)
after = df['in'].sum()
assert(np.isclose(before * 2.0, after))
```
<class 'numba.cuda.cudadrv.devicearray.DeviceNDArray'>
Apache-2.0
cudf/notebooks_numba_cuDF_integration.ipynb
rocketmlhq/rapids-notebooks
The output shows that the underlying GPU array is of type `numba.cuda.cudadrv.devicearray.DeviceNDArray`. We can pass it directly to the kernel function compiled by `cuda.jit`. Because we pass in a reference, the effect of the transformation automatically shows up in the original cuDF DataFrame. Note that we have to specify the block size and grid size manually, which gives us maximum control over the GPU execution configuration. `cuda.grid` is a convenience method to compute the absolute position of a thread; it is equivalent to the usual `block_id * block_dim + thread_id` formula, as the short sketch before the baseline code demonstrates.

Practical example

Baseline

We will work on the moving average problem as last time. Because we have full control of the grid and block size allocation, the vanilla moving average implementation is much simpler than the `apply_chunks` implementation.
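Before the baseline code, here is the `cuda.grid` equivalence check referenced above (an illustrative standalone sketch, not part of the tutorial's benchmark code; the array size and launch configuration are arbitrary):

import numpy as np
from numba import cuda

@cuda.jit
def index_check(out):
    i = cuda.grid(1)                                           # convenience helper
    j = cuda.blockIdx.x * cuda.blockDim.x + cuda.threadIdx.x   # manual formula
    if i < out.shape[0]:
        out[i] = i - j                                         # stays 0 for every thread

out = np.ones(1000)
index_check[8, 128](out)   # 8 blocks x 128 threads >= 1000 elements
assert (out == 0).all()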
%reset -s -f

import cudf
import numpy as np
import pandas as pd
from numba import cuda
import numba
import time

array_len = int(5e8)
average_window = 3000
number_of_threads = 128
number_of_blocks = (array_len + (number_of_threads - 1)) // number_of_threads

df = cudf.DataFrame()
df['in'] = np.arange(array_len, dtype=np.float64)
df['out'] = np.arange(array_len, dtype=np.float64)

@cuda.jit
def kernel1(in_arr, out_arr, average_length, arr_len):
    s = numba.cuda.local.array(1, numba.float64)
    s[0] = 0.0
    i = cuda.grid(1)
    if i < arr_len:
        if i < average_length - 1:
            out_arr[i] = np.inf
        else:
            for j in range(0, average_length):
                s[0] += in_arr[i - j]
            out_arr[i] = s[0] / np.float64(average_length)

gpu_in = df['in'].to_gpu_array()
gpu_out = df['out'].to_gpu_array()

start = time.time()
kernel1[(number_of_blocks,), (number_of_threads,)](gpu_in, gpu_out, average_window, array_len)
cuda.synchronize()
end = time.time()
print('Numba with compile time', end - start)

start = time.time()
kernel1[(number_of_blocks,), (number_of_threads,)](gpu_in, gpu_out, average_window, array_len)
cuda.synchronize()
end = time.time()
print('Numba without compile time', end - start)

pdf = pd.DataFrame()
pdf['in'] = np.arange(array_len, dtype=np.float64)
start = time.time()
pdf['out'] = pdf.rolling(average_window).mean()
end = time.time()
print('pandas time', end - start)

assert(np.isclose(pdf.out.values[average_window:].mean(), df.out.to_array()[average_window:].mean()))
Numba with compile time 2.067620038986206
Numba without compile time 1.9229750633239746
pandas time 5.2932703495025635
Apache-2.0
cudf/notebooks_numba_cuDF_integration.ipynb
rocketmlhq/rapids-notebooks
Note that in order to compare the computation time accurately, I launch the kernel twice; the first launch includes the kernel compilation time. In this example, the kernel takes 1.9s to run once compilation is excluded.

Use shared memory

In the baseline code, each thread reads the numbers from global memory. When computing the moving average, the same number is read multiple times by different threads, so GPU global memory IO is the speed bottleneck in this case. To mitigate it, we load the data into shared memory for each computation block, and the threads then sum the numbers from this cache. To compute the moving averages for the elements at the beginning of a block's range, we make sure to load an extra `average_window - 1` elements into shared memory.
%reset -s -f

import cudf
import numpy as np
import pandas as pd
from numba import cuda
import numba
import time

array_len = int(5e8)
average_window = 3000
number_of_threads = 128
number_of_blocks = (array_len + (number_of_threads - 1)) // number_of_threads
shared_buffer_size = number_of_threads + average_window - 1

df = cudf.DataFrame()
df['in'] = np.arange(array_len, dtype=np.float64)
df['out'] = np.arange(array_len, dtype=np.float64)

@cuda.jit
def kernel1(in_arr, out_arr, average_length, arr_len):
    block_size = cuda.blockDim.x
    shared = cuda.shared.array(shape=(shared_buffer_size), dtype=numba.float64)
    i = cuda.grid(1)
    tx = cuda.threadIdx.x
    # Block id in a 1D grid
    bid = cuda.blockIdx.x
    starting_id = bid * block_size

    shared[tx + average_length - 1] = in_arr[i]
    cuda.syncthreads()

    for j in range(0, average_length - 1, block_size):
        if (tx + j) < average_length - 1:
            shared[tx + j] = in_arr[starting_id - average_length + 1 + tx + j]
    cuda.syncthreads()

    s = numba.cuda.local.array(1, numba.float64)
    s[0] = 0.0
    if i < arr_len:
        if i < average_length - 1:
            out_arr[i] = np.inf
        else:
            for j in range(0, average_length):
                s[0] += shared[tx + average_length - 1 - j]
            out_arr[i] = s[0] / np.float64(average_length)

gpu_in = df['in'].to_gpu_array()
gpu_out = df['out'].to_gpu_array()

start = time.time()
kernel1[(number_of_blocks,), (number_of_threads,)](gpu_in, gpu_out, average_window, array_len)
cuda.synchronize()
end = time.time()
print('Numba with compile time', end - start)

start = time.time()
kernel1[(number_of_blocks,), (number_of_threads,)](gpu_in, gpu_out, average_window, array_len)
cuda.synchronize()
end = time.time()
print('Numba without compile time', end - start)

pdf = pd.DataFrame()
pdf['in'] = np.arange(array_len, dtype=np.float64)
start = time.time()
pdf['out'] = pdf.rolling(average_window).mean()
end = time.time()
print('pandas time', end - start)

assert(np.isclose(pdf.out.values[average_window:].mean(), df.out.to_array()[average_window:].mean()))
Numba with compile time 1.3115026950836182
Numba without compile time 1.085998773574829
pandas time 5.594487428665161
Apache-2.0
cudf/notebooks_numba_cuDF_integration.ipynb
rocketmlhq/rapids-notebooks
Running this, the computation time is reduced to 1.09s excluding kernel compilation time.

Reduced redundant summations

Each thread in the above code computes one moving average in a for-loop, and it is easy to see that many summation operations are repeated across threads. To reduce the redundancy, the following code lets each thread compute a consecutive run of moving averages: each later step reuses the sum from the previous step (the rolling-sum update sketched below), which eliminates all but the first full `average_window`-length loop among each thread's `thread_tile` outputs.
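The reuse boils down to a standard rolling-sum update. A minimal sketch of the idea in plain Python (illustrative only; `x` is any sequence and `w` the window length):

x = list(range(10))
w = 3
s = sum(x[:w]) / w                            # full-window average for the first position
for i in range(w, len(x)):
    s = s + (x[i] - x[i - w]) / w             # constant-time update for every later position
    assert abs(s - sum(x[i - w + 1:i + 1]) / w) < 1e-9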
%reset -s -f

import cudf
import numpy as np
import pandas as pd
from numba import cuda
import numba
import time

array_len = int(5e8)
average_window = 3000
number_of_threads = 64
thread_tile = 48
number_of_blocks = (array_len + (number_of_threads * thread_tile - 1)) // (number_of_threads * thread_tile)
shared_buffer_size = number_of_threads * thread_tile + average_window - 1

df = cudf.DataFrame()
df['in'] = np.arange(array_len, dtype=np.float64)
df['out'] = np.arange(array_len, dtype=np.float64)

@cuda.jit
def kernel1(in_arr, out_arr, average_length, arr_len):
    block_size = cuda.blockDim.x
    shared = cuda.shared.array(shape=(shared_buffer_size), dtype=numba.float64)
    tx = cuda.threadIdx.x
    # Block id in a 1D grid
    bid = cuda.blockIdx.x
    starting_id = bid * block_size * thread_tile

    for j in range(thread_tile):
        shared[tx + j * block_size + average_length - 1] = in_arr[starting_id + tx + j * block_size]
    cuda.syncthreads()

    for j in range(0, average_length - 1, block_size):
        if (tx + j) < average_length - 1:
            shared[tx + j] = in_arr[starting_id - average_length + 1 + tx + j]
    cuda.syncthreads()

    s = numba.cuda.local.array(1, numba.float64)
    first = False
    s[0] = 0.0
    for k in range(thread_tile):
        i = starting_id + tx * thread_tile + k
        if i < arr_len:
            if i < average_length - 1:
                out_arr[i] = np.inf
            else:
                if not first:
                    for j in range(0, average_length):
                        s[0] += shared[tx * thread_tile + k + average_length - 1 - j]
                    s[0] = s[0] / np.float64(average_length)
                    out_arr[i] = s[0]
                    first = True
                else:
                    s[0] = s[0] + (shared[tx * thread_tile + k + average_length - 1] - shared[tx * thread_tile + k + average_length - 1 - average_length]) / np.float64(average_length)
                    out_arr[i] = s[0]

gpu_in = df['in'].to_gpu_array()
gpu_out = df['out'].to_gpu_array()

start = time.time()
kernel1[(number_of_blocks,), (number_of_threads,)](gpu_in, gpu_out, average_window, array_len)
cuda.synchronize()
end = time.time()
print('Numba with compile time', end - start)

start = time.time()
kernel1[(number_of_blocks,), (number_of_threads,)](gpu_in, gpu_out, average_window, array_len)
cuda.synchronize()
end = time.time()
print('Numba without compile time', end - start)

pdf = pd.DataFrame()
pdf['in'] = np.arange(array_len, dtype=np.float64)
start = time.time()
pdf['out'] = pdf.rolling(average_window).mean()
end = time.time()
print('pandas time', end - start)

assert(np.isclose(pdf.out.values[average_window:].mean(), df.out.to_array()[average_window:].mean()))
Numba with compile time 0.6331000328063965
Numba without compile time 0.30219364166259766
pandas time 6.03054666519165
Apache-2.0
cudf/notebooks_numba_cuDF_integration.ipynb
rocketmlhq/rapids-notebooks
DS107 Big Data : Lesson Five Companion Notebook

Table of Contents

* [Table of Contents](#DS107L5_toc)
* [Page 1 - Introduction](#DS107L5_page_1)
* [Page 2 - Spark](#DS107L5_page_2)
* [Page 3 - Running Spark in Hadoop](#DS107L5_page_3)
* [Page 4 - Spark Data Storage](#DS107L5_page_4)
* [Page 5 - Introduction to Scala](#DS107L5_page_5)
* [Page 6 - Using Spark 2.0](#DS107L5_page_6)
* [Page 7 - Using Spark SQL](#DS107L5_page_7)
* [Page 8 - Spark Shell](#DS107L5_page_8)
* [Page 9 - Decision Trees in Spark MLLib](#DS107L5_page_9)
* [Page 10 - Decision Trees and Accuracy](#DS107L5_page_10)
* [Page 11 - Hyperparameter Tuning](#DS107L5_page_11)
* [Page 12 - Best Fit Model](#DS107L5_page_12)
* [Page 13 - Key Terms](#DS107L5_page_13)
* [Page 14 - Lesson 5 Practice Hands-On](#DS107L5_page_14)
* [Page 15 - Lesson 5 Practice Hands-On Solution](#DS107L5_page_15)
* [Page 16 - Lesson 5 Practice Hands-On Solution - Alternative Assignment](#DS107L5_page_16)

Page 1 - Overview of this Module

[Back to Top](#DS107L5_toc)
from IPython.display import VimeoVideo # Tutorial Video Name: Spark 2.0 and Zeppelin VimeoVideo('388865681', width=720, height=480)
_____no_output_____
MIT
Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/RECAP_DS/06_BIG_DATA/L05.ipynb
okara83/Becoming-a-Data-Scientist
6. Similarity Functions

- **Created by Andrés Segura Tinoco**
- **Created on May 20, 2019**
- **Updated on Mar 19, 2021**

In statistics and related fields, a **similarity measure** or similarity function is a real-valued function that quantifies the similarity between two objects. In short, a similarity function quantifies how much alike two data objects are [1].

6.1. Common similarity functions
# Load the Python libraries
import math
from math import sqrt
from decimal import Decimal
from scipy import stats as ss
import sklearn.metrics.pairwise as sm
_____no_output_____
MIT
similarity-functions/SimilarityFunctions.ipynb
ansegura7/Algorithms
\begin{align} similarity(X, Y) = d(X, Y) = \sqrt{\sum_{i=1}^n (X_i - Y_i)^2} \tag{1}\end{align}
# (1) Euclidean distance function def euclidean_distance(x, y): return sqrt(sum(pow(a-b,2) for a, b in zip(x, y)))
_____no_output_____
MIT
similarity-functions/SimilarityFunctions.ipynb
ansegura7/Algorithms
\begin{align} similarity(X, Y) = d(X, Y) = \sum_{i=1}^n |X_i - Y_i| \tag{2}\end{align}
# (2) Manhattan distance function
def manhattan_distance(x, y):
    return sum(abs(a-b) for a,b in zip(x,y))
_____no_output_____
MIT
similarity-functions/SimilarityFunctions.ipynb
ansegura7/Algorithms
\begin{align} similarity(X, Y) = d(X, Y) = (\sum_{i=1}^n |X_i - Y_i|^p)^\frac{1}{p} \tag{3}\end{align}
# (3) Minkowski distance function def _nth_root(value, n_root): root_value = 1/float(n_root) return round(Decimal(value) ** Decimal(root_value),3) def minkowski_distance(x, y, p = 3): return float(_nth_root(sum(pow(abs(a-b), p) for a,b in zip(x, y)), p))
_____no_output_____
MIT
similarity-functions/SimilarityFunctions.ipynb
ansegura7/Algorithms
\begin{align} similarity(X, Y) = cos(\theta) = \frac{\vec{X}.\vec{Y}}{\|\vec{X}\|.\|\vec{Y}\|} = \frac{\sum_{i=1}^n X_i.Y_i}{\sqrt{\sum_{i=1}^n X_i^2}.\sqrt{\sum_{i=1}^n Y_i^2}} \tag{4}\end{align}
# (4) Cosine similarity function def _square_rooted(x): return round(sqrt(sum([a*a for a in x])),3) def cosine_similarity(x, y): numerator = sum(a*b for a,b in zip(x,y)) denominator = _square_rooted(x) * _square_rooted(y) return round(numerator/float(denominator),3)
_____no_output_____
MIT
similarity-functions/SimilarityFunctions.ipynb
ansegura7/Algorithms
\begin{align} similarity(X, Y) = \frac{cov(X, Y)}{\sigma_X . \sigma_Y} = \frac{\sum_{i=1}^n (X_i - \bar{X}).(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^n (X_i - \bar{X})^2 . (Y_i - \bar{Y})^2}} \tag{5}\end{align}
# (5) Pearson similarity function def _avg(x): assert len(x) > 0 return float(sum(x)) / len(x) def pearson_similarity(x, y): assert len(x) == len(y) n = len(x) assert n > 0 avg_x = _avg(x) avg_y = _avg(y) diffprod = 0 xdiff2 = 0 ydiff2 = 0 for idx in range(n): xdiff = x[idx] - avg_x ydiff = y[idx] - avg_y diffprod += xdiff * ydiff xdiff2 += xdiff * xdiff ydiff2 += ydiff * ydiff return diffprod / math.sqrt(xdiff2 * ydiff2)
_____no_output_____
MIT
similarity-functions/SimilarityFunctions.ipynb
ansegura7/Algorithms
\begin{align} similarity(X, Y) = J(X, Y) = \frac{|X \cap Y|}{|X \cup Y|} = \frac{|X \cap Y|}{|X| + |Y| - |X \cap Y|} \tag{6}\end{align}
# (6) Jaccard similarity function def jaccard_similarity(x, y): intersection_cardinality = len(set.intersection(*[set(x), set(y)])) union_cardinality = len(set.union(*[set(x), set(y)])) return intersection_cardinality / float(union_cardinality)
_____no_output_____
MIT
similarity-functions/SimilarityFunctions.ipynb
ansegura7/Algorithms
6.2. Manual examples
# Vectors x = [-4.593481, -5.478033, 1.127111, 1.252885, -2.286953] # Messi y = [-4.080334, -3.406618, 4.334073, -0.485612, -2.817897] # CR z = [-4.048185, -5.546171, 0.505673, 0.616553, -1.730906] # Neymar
_____no_output_____
MIT
similarity-functions/SimilarityFunctions.ipynb
ansegura7/Algorithms
Euclidean distance
euclidean_distance(x, y) euclidean_distance(x, z)
_____no_output_____
MIT
similarity-functions/SimilarityFunctions.ipynb
ansegura7/Algorithms
Manhattan distance
manhattan_distance(x, y) manhattan_distance(x, z)
_____no_output_____
MIT
similarity-functions/SimilarityFunctions.ipynb
ansegura7/Algorithms
Minkowski distance
minkowski_distance(x, y) minkowski_distance(x, z)
_____no_output_____
MIT
similarity-functions/SimilarityFunctions.ipynb
ansegura7/Algorithms
Cosine similarity
cosine_similarity(x, y) cosine_similarity(x, z)
_____no_output_____
MIT
similarity-functions/SimilarityFunctions.ipynb
ansegura7/Algorithms
Pearson similarity
pearson_similarity(x, y) pearson_similarity(x, z)
_____no_output_____
MIT
similarity-functions/SimilarityFunctions.ipynb
ansegura7/Algorithms
Jaccard similarity
a = [0, 1, 2, 3, 4, 5] b = [-1, 1, 2, 0, 3, 5] jaccard_similarity(a, b)
_____no_output_____
MIT
similarity-functions/SimilarityFunctions.ipynb
ansegura7/Algorithms
6.3. Sklearn examples
corr = sm.euclidean_distances([x], [y]) float(corr[0]) corr = sm.manhattan_distances([x], [y]) float(corr[0]) corr = sm.cosine_similarity([x], [y]) float(corr[0]) corr, p_value = ss.pearsonr(x, y) corr
_____no_output_____
MIT
similarity-functions/SimilarityFunctions.ipynb
ansegura7/Algorithms
This notebook is part of the `nbsphinx` documentation: https://nbsphinx.readthedocs.io/.

Pre-Executing Notebooks

Automatically executing notebooks during the Sphinx build process is an important feature of `nbsphinx`. However, there are a few use cases where pre-executing a notebook and storing the outputs might be preferable. Storing any output will, by default, stop ``nbsphinx`` from executing the notebook.

Long-Running Cells

If you are doing some very time-consuming computations, it might not be feasible to re-execute the notebook every time you build your Sphinx documentation. So just do it once -- when you happen to have the time -- and then just keep the output.
import time %time time.sleep(60 * 60) 6 * 7
CPU times: user 160 ms, sys: 56 ms, total: 216 ms Wall time: 1h 1s
MIT
doc/pre-executed.ipynb
gehuazhen/nbsphinx
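Conversely, if you later decide that you *do* want `nbsphinx` to execute a notebook whose outputs are stored, clearing the stored outputs is enough to re-enable execution. One way to do that (assuming a standard `nbconvert` installation) is:

jupyter nbconvert --clear-output --inplace notebook.ipynb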
If you *do* want to execute your notebooks, but some cells run for a long time, you can change the timeout, see [Cell Execution Timeout](timeout.ipynb).

Rare Libraries

You might have created results with a library that's hard to install and therefore you have only managed to install it on one very old computer in the basement, so you probably cannot run this whenever you build your Sphinx docs.
from a_very_rare_library import calculate_the_answer calculate_the_answer()
_____no_output_____
MIT
doc/pre-executed.ipynb
gehuazhen/nbsphinx
Exceptions

If an exception is raised during the Sphinx build process, it is stopped (the build process, not the exception!). If you want to show your audience what an exception looks like, you have two choices:

1. Allow errors -- either generally or on a per-notebook or per-cell basis -- see [Ignoring Errors](allow-errors.ipynb) ([per cell](allow-errors-per-cell.ipynb)).
2. Execute the notebook beforehand and save the results, like it's done in this example notebook:
1 / 0
_____no_output_____
MIT
doc/pre-executed.ipynb
gehuazhen/nbsphinx
Client-specific Outputs

When `nbsphinx` executes notebooks, it uses the `nbconvert` module to do so. Certain Jupyter clients might produce output that differs from what `nbconvert` would produce. To preserve those original outputs, the notebook has to be executed and saved before running Sphinx. For example, the JupyterLab help system shows the help text as cell outputs, while executing with `nbconvert` doesn't produce any output.
sorted?
_____no_output_____
MIT
doc/pre-executed.ipynb
gehuazhen/nbsphinx
Understanding ROS node APIs in Arduino

The following is the basic structure of a ROS Arduino node; the comments explain the function of each line of code:
#include <ros.h>        // rosserial client library for Arduino

ros::NodeHandle nh;     // node handle: manages the serial link, publishers, and subscribers

void setup()
{
  nh.initNode();        // initialize the ROS node and set up communication with the host
}

void loop()
{
  nh.spinOnce();        // handle all pending ROS communication callbacks once per loop iteration
}
_____no_output_____
MIT
Modules/Bonus Module - Arduino-ROS Interface/2. Understanding ROS node APIs in Arduino.ipynb
mlsdpk/ROS_Basic_Course
INIT
# transforms transform = transforms.Compose([ transforms.ToTensor(), ]) # dataset root = '/Users/Marco/Documents/DATASETS/GUITAR-FX-DIST/Mono_Discrete/Features' excl_folders = ['MT2'] spectra_folder= 'mel_22050_1024_512' proc_settings_csv = 'proc_settings.csv' max_num_settings=3 dataset = dataset.FxDataset(root=root, excl_folders=excl_folders, spectra_folder=spectra_folder, processed_settings_csv=proc_settings_csv, max_num_settings=max_num_settings, transform=transform) dataset.init_dataset() # dataset.generate_mel() # split split = datasplit.DataSplit(dataset, shuffle=True) # loaders train_loader, val_loader, test_loader = split.get_split(batch_size=100) print('dataset size: ', len(dataset)) print('train set size: ', len(split.train_sampler)) print('val set size: ', len(split.val_sampler)) print('test set size: ', len(split.test_sampler)) dataset.fx_to_label
dataset size: 123552 train set size: 88956 val set size: 9885 test set size: 24711
BSD-3-Clause
src/train_setnetcond_on_mono_disc.ipynb
mcomunita/gfx_classifier
TRAIN SetNetCond
# model
setnetcond = models.SettingsNetCond(n_settings=dataset.max_num_settings,
                                    mel_shape=dataset.mel_shape,
                                    num_embeddings=dataset.num_fx,
                                    embedding_dim=50)

# optimizer
optimizer = optim.Adam(setnetcond.parameters(), lr=0.001)

# loss function
loss_func = nn.MSELoss(reduction='mean')

print(setnetcond)

# SAVE
models_folder = '../../models_and_results/models'
model_name = '20210409_setnetcond_mono_disc_best'
results_folder = '../../models_and_results/results'
results_subfolder = '20210409_setnetcond_mono_disc'

# TRAIN and TEST SettingsNetCond OVER MULTIPLE EPOCHS
train_set_size = len(split.train_sampler)
val_set_size = len(split.val_sampler)
test_set_size = len(split.test_sampler)

all_train_losses, all_val_losses, all_test_losses = [], [], []
all_train_correct, all_val_correct, all_test_correct = [], [], []
all_train_results, all_val_results, all_test_results = [], [], []

best_val_correct = 0
early_stop_counter = 0

start = time.time()
for epoch in range(100):
    train_loss, train_correct, train_results = trainer.train_settings_cond_net(
        model=setnetcond,
        optimizer=optimizer,
        train_loader=train_loader,
        train_sampler=split.train_sampler,
        epoch=epoch,
        loss_function=loss_func,
        device=device
    )
    val_loss, val_correct, val_results = trainer.val_settings_cond_net(
        model=setnetcond,
        val_loader=val_loader,
        val_sampler=split.val_sampler,
        loss_function=loss_func,
        device='cpu'
    )
    test_loss, test_correct, test_results = trainer.test_settings_cond_net(
        model=setnetcond,
        test_loader=test_loader,
        test_sampler=split.test_sampler,
        loss_function=loss_func,
        device='cpu'
    )

    # save model
    if val_correct > best_val_correct:
        best_val_correct = val_correct
        torch.save(setnetcond, '%s/%s' % (models_folder, model_name))
        early_stop_counter = 0
        print('\n=== saved best model ===\n')
    else:
        early_stop_counter += 1

    # append results
    all_train_losses.append(train_loss)
    all_val_losses.append(val_loss)
    all_test_losses.append(test_loss)
    all_train_correct.append(train_correct)
    all_val_correct.append(val_correct)
    all_test_correct.append(test_correct)
    all_train_results.append(train_results)
    all_val_results.append(val_results)
    all_test_results.append(test_results)

    if early_stop_counter == 15:
        print('\n--- early stop ---\n')
        break
stop = time.time()
print(f"Training time: {stop - start}s")

# BEST RESULTS
print('Accuracy: ', 100 * max(all_train_correct) / train_set_size)
print('Epoch: ', np.argmax(all_train_correct))
print()
print('Accuracy: ', 100 * max(all_val_correct) / val_set_size)
print('Epoch: ', np.argmax(all_val_correct))
print()
print('Accuracy: ', 100 * max(all_test_correct) / test_set_size)
print('Epoch: ', np.argmax(all_test_correct))
print()

# SAVE RESULTS - all losses, all correct, best results
all_train_losses_npy = np.array(all_train_losses)
all_train_correct_npy = np.array(all_train_correct)
best_train_results_npy = np.array(all_train_results[95])

all_val_losses_npy = np.array(all_val_losses)
all_val_correct_npy = np.array(all_val_correct)
best_val_results_npy = np.array(all_val_results[95])

all_test_losses_npy = np.array(all_test_losses)
all_test_correct_npy = np.array(all_test_correct)
best_test_results_npy = np.array(all_test_results[95])

fx_labels_npy = np.array(list(dataset.fx_to_label.keys()))

np.save(file=('%s/%s/%s' % (results_folder, results_subfolder, 'all_train_losses')), arr=all_train_losses_npy)
np.save(file=('%s/%s/%s' % (results_folder, results_subfolder, 'all_train_correct')), arr=all_train_correct_npy)
np.save(file=('%s/%s/%s' % (results_folder, results_subfolder, 'best_train_results')), arr=best_train_results_npy)
np.save(file=('%s/%s/%s' % (results_folder, results_subfolder, 'all_val_losses')), arr=all_val_losses_npy)
np.save(file=('%s/%s/%s' % (results_folder, results_subfolder, 'all_val_correct')), arr=all_val_correct_npy)
np.save(file=('%s/%s/%s' % (results_folder, results_subfolder, 'best_val_results')), arr=best_val_results_npy)
np.save(file=('%s/%s/%s' % (results_folder, results_subfolder, 'all_test_losses')), arr=all_test_losses_npy)
np.save(file=('%s/%s/%s' % (results_folder, results_subfolder, 'all_test_correct')), arr=all_test_correct_npy)
np.save(file=('%s/%s/%s' % (results_folder, results_subfolder, 'best_test_results')), arr=best_test_results_npy)
np.save(file=('%s/%s/%s' % (results_folder, results_subfolder, 'fx_labels')), arr=fx_labels_npy)
<ipython-input-7-490bdd120e03>:4: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray. best_train_results_npy = np.array(all_train_results[95]) <ipython-input-7-490bdd120e03>:8: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray. best_val_results_npy = np.array(all_val_results[95]) <ipython-input-7-490bdd120e03>:12: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray. best_test_results_npy = np.array(all_test_results[95])
BSD-3-Clause
src/train_setnetcond_on_mono_disc.ipynb
mcomunita/gfx_classifier
Quickstart Example with Multi-class Classification Data

---

This notebook provides an example of conducting OPE of an evaluation policy using a multi-class classification dataset as logged bandit feedback data. Our example with multi-class classification data contains the following four major steps:

- (1) Bandit Reduction
- (2) Off-Policy Learning
- (3) Off-Policy Evaluation
- (4) Evaluation of OPE Estimators

Please see [../examples/multiclass](../examples/multiclass) for a more sophisticated example of the evaluation of OPE with multi-class classification datasets.
import numpy as np from sklearn.datasets import load_digits from sklearn.ensemble import RandomForestClassifier as RandomForest from sklearn.linear_model import LogisticRegression # import open bandit pipeline (obp) import obp from obp.dataset import MultiClassToBanditReduction from obp.ope import ( OffPolicyEvaluation, RegressionModel, InverseProbabilityWeighting, DirectMethod, DoublyRobust ) # obp version print(obp.__version__)
0.3.3
Apache-2.0
examples/quickstart/multiclass.ipynb
nmasahiro/zr-obp
(1) Bandit Reduction

We prepare an easy-to-use interface for the bandit reduction of multi-class classification datasets: the `MultiClassToBanditReduction` class in the dataset module. It takes feature vectors (`X`), class labels (`y`), a classifier to construct the behavior policy (`base_classifier_b`), and a parameter of the behavior policy (`alpha_b`) as inputs, and generates a bandit dataset that can be used to evaluate the performance of decision making policies (obtained by `off-policy learning`) and OPE estimators.
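In symbols, the reduction turns each classification example into a bandit observation by defining the reward as an exact-match indicator (a sketch of the rule stated in the code comments below, where $a_i$ is the action sampled by the behavior policy and $c_i$ the true class label):

\begin{align} r_i = \mathbb{1}\{a_i = c_i\} \end{align}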
# load raw digits data # `return_X_y` splits feature vectors and labels, instead of returning a Bunch object X, y = load_digits(return_X_y=True) # convert the raw classification data into a logged bandit dataset # we construct a behavior policy using Logistic Regression and parameter alpha_b # given a pair of a feature vector and a label (x, c), create a pair of a context vector and reward (x, r) # where r = 1 if the output of the behavior policy is equal to c and r = 0 otherwise # please refer to https://zr-obp.readthedocs.io/en/latest/_autosummary/obp.dataset.multiclass.html for the details dataset = MultiClassToBanditReduction( X=X, y=y, base_classifier_b=LogisticRegression(max_iter=1000, random_state=12345), alpha_b=0.8, dataset_name="digits", ) # split the original data into training and evaluation sets dataset.split_train_eval(eval_size=0.7, random_state=12345) # obtain logged bandit feedback generated by behavior policy bandit_feedback = dataset.obtain_batch_bandit_feedback(random_state=12345) # `bandit_feedback` is a dictionary storing logged bandit feedback bandit_feedback
_____no_output_____
Apache-2.0
examples/quickstart/multiclass.ipynb
nmasahiro/zr-obp
(2) Off-Policy Learning

After generating logged bandit feedback, we now obtain an evaluation policy using the training set.
# obtain action choice probabilities by an evaluation policy # we construct an evaluation policy using Random Forest and parameter alpha_e action_dist = dataset.obtain_action_dist_by_eval_policy( base_classifier_e=RandomForest(random_state=12345), alpha_e=0.9, )
_____no_output_____
Apache-2.0
examples/quickstart/multiclass.ipynb
nmasahiro/zr-obp
(3) Off-Policy Evaluation (OPE)

OPE attempts to estimate the performance of evaluation policies using their action choice probabilities. Here, we use the **InverseProbabilityWeighting (IPW)**, **DirectMethod (DM)**, and **Doubly Robust (DR)** estimators and visualize the OPE results.
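For orientation, the IPW estimator has the standard form from the OPE literature (a sketch; here $\pi_e$ is the evaluation policy, $\pi_b$ the behavior policy, and $n$ the number of logged rounds):

\begin{align} \hat{V}_{\mathrm{IPW}}(\pi_e) = \frac{1}{n} \sum_{i=1}^n \frac{\pi_e(a_i \mid x_i)}{\pi_b(a_i \mid x_i)} \, r_i \end{align}

DM instead replaces the importance weights with a learned reward model, and DR combines the two approaches.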
# estimate the mean reward function by using ML model (Logistic Regression here) # the estimated rewards are used by model-dependent estimators such as DM and DR regression_model = RegressionModel( n_actions=dataset.n_actions, base_model=LogisticRegression(random_state=12345, max_iter=1000), ) # please refer to https://arxiv.org/abs/2002.08536 about the details of the cross-fitting procedure. estimated_rewards_by_reg_model = regression_model.fit_predict( context=bandit_feedback["context"], action=bandit_feedback["action"], reward=bandit_feedback["reward"], n_folds=3, # use 3-fold cross-fitting random_state=12345, ) # estimate the policy value of the evaluation policy based on their action choice probabilities # it is possible to set multiple OPE estimators to the `ope_estimators` argument ope = OffPolicyEvaluation( bandit_feedback=bandit_feedback, ope_estimators=[InverseProbabilityWeighting(), DirectMethod(), DoublyRobust()] ) # estimate the policy value of IPWLearner with Logistic Regression estimated_policy_value, estimated_interval = ope.summarize_off_policy_estimates( action_dist=action_dist, estimated_rewards_by_reg_model=estimated_rewards_by_reg_model ) print(estimated_interval, '\n') # visualize estimated policy values of the evaluation policy with Logistic Regression by the three OPE estimators # and their 95% confidence intervals (estimated by nonparametric bootstrap method) ope.visualize_off_policy_estimates( action_dist=action_dist, estimated_rewards_by_reg_model=estimated_rewards_by_reg_model, n_bootstrap_samples=10000, # number of resampling performed in the bootstrap procedure random_state=12345, )
mean 95.0% CI (lower) 95.0% CI (upper) ipw 0.890339 0.826724 0.975248 dm 0.787085 0.779634 0.793370 dr 0.882536 0.808637 0.937305
Apache-2.0
examples/quickstart/multiclass.ipynb
nmasahiro/zr-obp
(4) Evaluation of OPE estimators

Our final step is **the evaluation of OPE**, which evaluates and compares the estimation accuracy of OPE estimators. With the multi-class classification data, we can calculate the ground-truth policy value of the evaluation policy. Therefore, we can compare the policy values estimated by OPE estimators with the ground truth to evaluate the OPE estimators.
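Here, `relative-ee` refers to the relative estimation error; assuming the usual definition, for an estimate $\hat{V}$ of the true policy value $V(\pi_e)$ it is:

\begin{align} \text{relative-EE}(\hat{V}) = \left| \frac{\hat{V}(\pi_e) - V(\pi_e)}{V(\pi_e)} \right| \end{align}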
# calculate the ground-truth performance of the evaluation policy ground_truth = dataset.calc_ground_truth_policy_value(action_dist=action_dist) print(f'ground-truth policy value (classification accuracy): {ground_truth}') # evaluate the estimation performances of OPE estimators # by comparing the estimated policy value of the evaluation policy and its ground-truth. # `evaluate_performance_of_estimators` returns a dictionary containing estimation performances of given estimators relative_ee = ope.summarize_estimators_comparison( ground_truth_policy_value=ground_truth, action_dist=action_dist, estimated_rewards_by_reg_model=estimated_rewards_by_reg_model, metric="relative-ee", # "relative-ee" (relative estimation error) or "se" (squared error) ) # estimation performances of the three estimators (lower means accurate) relative_ee
_____no_output_____
Apache-2.0
examples/quickstart/multiclass.ipynb
nmasahiro/zr-obp
Filter comparison
import numpy as np
import scipy.io as sio
import matplotlib.pyplot as plt
import statistics as stats
import pandas as pd
from scipy import signal
from scipy.fft import fft, fftfreq, fftshift
from scipy.signal import savgol_filter
from scipy.signal import wiener

def highfilter(input_signal):
    # Butterworth filter: 'hp' = high pass
    b, a = signal.butter(3, 0.05, 'hp')
    y = signal.filtfilt(b, a, input_signal)
    return y

def lowfilter(input_signal):
    # Butterworth filter: 'low' = low pass
    b, a = signal.butter(3, 0.05, 'low')
    y = signal.filtfilt(b, a, input_signal)
    return y

HA1 = sio.loadmat('H-A-1.mat')
Channel1 = HA1['Channel_1']
canal1 = Channel1.T[0]
t = np.linspace(0, 9, len(canal1))

fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2)
fig.set_size_inches(18, 9.5)
fig.suptitle('Comparison of different filters', fontsize=15)

ax1.plot(t, canal1, alpha=0.5)
ax1.plot(t, highfilter(canal1), 'tab:blue')
ax1.set_xlabel('Time')
ax1.set_ylabel('Amplitude')
ax1.set_title('Highpass-filter')

ax2.plot(t, canal1, 'tab:orange', alpha=0.5)
ax2.plot(t, lowfilter(canal1), 'tab:orange')
ax2.set_xlabel('Time')
ax2.set_ylabel('Amplitude')
ax2.set_title('Lowpass-filter')

ax3.plot(t, canal1, 'tab:green', alpha=0.5)
ax3.plot(t, savgol_filter(canal1, 5, 2), 'tab:green')
ax3.set_xlabel('Time')
ax3.set_ylabel('Amplitude')
ax3.set_title('Savitzky-Golay filter')

ax4.plot(t, canal1, 'tab:red', alpha=0.5)
filtered_img = wiener(canal1, 99)
ax4.plot(t, filtered_img, 'tab:red')
ax4.set_xlabel('Time')
ax4.set_ylabel('Amplitude')
ax4.set_title('Wiener filter')
_____no_output_____
MIT
FailurePrediction/VariableRotationalSpeed/GraphicalComparisons/FilterComparison.ipynb
judithspd/predictive-maintenance
Question 1:

Write a program that calculates and prints the value according to the given formula:

Q = Square root of [(2 * C * D)/H]

Following are the fixed values of C and H: C is 50. H is 30. D is the variable whose values should be input to your program in a comma-separated sequence.

Example

Let us assume the following comma-separated input sequence is given to the program:

100,150,180

The output of the program should be:

18,22,24
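To see where the sample output comes from, take the first value D = 100: Q = sqrt((2 * 50 * 100) / 30) = sqrt(333.33...) ≈ 18.26, which rounds to 18; the other two values work out the same way.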
import math

C = 50
H = 30

numbers = input('Please enter the comma-separated values of D: ').split(',')
output = []

for D in numbers:
    Q = math.sqrt((2 * C * int(D)) / H)
    output.append(round(Q))   # collect the rounded results
    print(round(Q), end=' ')
Please enter the comma-separated values of D: 23,50,86 9 13 17
CNRI-Python
Programming_Assingment13.ipynb
14vpankaj/iNeuron_Programming_Assignments
Question 2:

Write a program which takes 2 digits, X,Y as input and generates a 2-dimensional array. The element value in the i-th row and j-th column of the array should be i*j.

Note: i = 0, 1, ..., X-1; j = 0, 1, ..., Y-1.

Example

Suppose the following inputs are given to the program:

3,5

Then, the output of the program should be:

[[0, 0, 0, 0, 0], [0, 1, 2, 3, 4], [0, 2, 4, 6, 8]]
def Matrix(x,y): M = [] for i in range(x): row = [] for j in range(y): row.append(i*j) M.append(row) return M X = int(input('Enter the value of X: ')) Y = int(input('Enter the value of Y: ')) Matrix(X,Y)
Enter the value of X: 3 Enter the value of Y: 5
CNRI-Python
Programming_Assingment13.ipynb
14vpankaj/iNeuron_Programming_Assignments
Question 3:

Write a program that accepts a comma separated sequence of words as input and prints the words in a comma-separated sequence after sorting them alphabetically.

Suppose the following input is supplied to the program:

without,hello,bag,world

Then, the output should be:

bag,hello,without,world
string = input('Please enter a comma separated sequence of words: ').split(',') string.sort() print(','.join(string))
Please enter a comma separated sequence of words: without,hello,bag,world bag,hello,without,world
CNRI-Python
Programming_Assingment13.ipynb
14vpankaj/iNeuron_Programming_Assignments
Question 4:

Write a program that accepts a sequence of whitespace separated words as input and prints the words after removing all duplicate words and sorting them alphanumerically.

Suppose the following input is supplied to the program:

hello world and practice makes perfect and hello world again

Then, the output should be:

again and hello makes perfect practice world
string = input('Enter the sequence of white separated words: ').split(' ') print(' '.join(sorted(set(string))))
Enter the sequence of white separated words: hello world and practice makes perfect and hello world again again and hello makes perfect practice world
CNRI-Python
Programming_Assingment13.ipynb
14vpankaj/iNeuron_Programming_Assignments
Question 5:

Write a program that accepts a sentence and calculates the number of letters and digits.

Suppose the following input is supplied to the program:

hello world! 123

Then, the output should be:

LETTERS 10
DIGITS 3
string = input('Enter a sentence: ') letter = 0 digit = 0 for i in string: if i.isalpha(): letter += 1 elif i.isdigit(): digit += 1 else: pass print('LETTERS', letter) print('DIGITS', digit)
Enter a sentence: hello world! 123 LETTERS 10 DIGITS 3
CNRI-Python
Programming_Assingment13.ipynb
14vpankaj/iNeuron_Programming_Assignments