# REMINDER FOR NEXT TIME: Generating 16000 classes takes ages; make it faster. At least put everything in a Python script. Try to eliminate some buckle...
# Use the `ImageFolder` dataset loader from PyTorch
Check the link: https://github.com/pytorch/vision#imagefolder
The documentation here is as well useful: https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py#L66
The images should be arranged in the following way:

```
root/dog/xxx.png
root/dog/xxy.png
root/dog/xxz.png

root/cat/123.png
root/cat/nsdf3.png
root/cat/asd932_.png
```

```
dset.ImageFolder(root="root folder path", [transform, target_transform])
```

**Members:**

```
self.classes - The class names as a list
self.class_to_idx - Corresponding class indices
self.imgs - The list of (image path, class-index) tuples
```
**Attributes:**
* *`classes`* (list): list of the class names.
* *`class_to_idx`* (dict): Dict with items (class_name, class_index).
* *`imgs`* (list): List of (image path, class_index) tuples.
## Notes on torchvision transformations:
- Compose([...])
- Scale()
- CenterCrop()
- RandomCrop()
- RandomHorizontalFlip()
- RandomSizedCrop()
- Pad()
- Lambda()
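For reference, the core semantics of `CenterCrop` and `RandomHorizontalFlip` can be sketched in plain NumPy (a toy illustration, not torchvision's actual implementation, which operates on PIL images):

```python
import numpy as np

def center_crop(img, size):
    # what transforms.CenterCrop(size) does, for an H x W x C array
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def random_horizontal_flip(img, p=0.5, rng=None):
    # what transforms.RandomHorizontalFlip() does: flip the width axis w.p. p
    rng = np.random.default_rng(0) if rng is None else rng
    return img[:, ::-1] if rng.random() < p else img

img = np.arange(6 * 6 * 3).reshape(6, 6, 3)
print(center_crop(img, 4).shape)  # (4, 4, 3)
```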
```
import os
import numpy as np
from torchvision import transforms
from PIL import Image
from skimage import color
from skimage.transform import rotate as pad_rotation
from tqdm import tqdm
from matplotlib import pyplot as plt
# General variables
root_path = './surrogate_dataset'
classes_path = os.path.join(root_path, 'classes_folder_16000_set_4')
nb_classes = 16000 # this will be the number of classes
# Here I saved the images that are going to define the labels (in this cell):
# Get the images from the unlabeled subset
def read_images(path):
with open(path, 'rb') as f:
# read whole file in uint8 chunks
everything = np.fromfile(f, dtype=np.uint8)
# We force the data into 3x96x96 chunks, since the
# images are stored in "column-major order", meaning
# that "the first 96*96 values are the red channel,
# the next 96*96 are green, and the last are blue."
# The -1 is since the size of the pictures depends
# on the input file, and this way numpy determines
# the size on its own.
images = np.reshape(everything, (-1, 3, 96, 96))
# Now transpose the images into a standard image format
# readable by, for example, matplotlib.imshow
# You might want to comment this line or reverse the shuffle
# if you will use a learning algorithm like CNN, since they like
# their channels separated.
images = np.transpose(images, (0, 3, 2, 1))
return images
unlab_set = read_images('./data/stl10_binary/unlabeled_X.bin')
np.random.seed(42)
# indexes drawn in a set to avoid duplicates
indexes = set()
while len(indexes) < nb_classes:
indexes.add(np.random.randint(unlab_set.shape[0]))
# Save the images in a folder
if not os.path.exists(classes_path):
os.makedirs(classes_path)
toPill = transforms.Compose([transforms.ToPILImage()])
for num, idx in tqdm(enumerate(indexes)):
path = os.path.join(classes_path, str(num).zfill(len(str(nb_classes))))
    # print(path)
image = toPill(unlab_set[idx])
image.save(path + '.png')
# code to compare the clustered images...
indexes_list = [a for a in indexes]
idx1 = indexes_list[616] # here I put the child numbers from the code in the clustering folder
idx2 = indexes_list[750]
toPill = transforms.Compose([transforms.ToPILImage()])
image1 = toPill(unlab_set[idx1])
image2 = toPill(unlab_set[idx2])
print("Indexes on the unlabeled set from STL10:", idx1, idx2)
plt.figure(figsize=(15,15))
plt.subplot(121)
plt.imshow(image1)
plt.axis('off')
plt.subplot(122)
plt.imshow(image2)
plt.axis('off')
plt.show()
```
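As a sanity check on the reshape/transpose logic in `read_images` above, here is a toy version with a single 2x2 three-channel "image" (pure NumPy, no STL10 file needed):

```python
import numpy as np

# fake binary blob for one 3-channel 2x2 image in STL10's column-major layout:
# the first 2*2 values are the red channel, the next green, the last blue
raw = np.arange(12, dtype=np.uint8)
images = raw.reshape(-1, 3, 2, 2)            # (N, C, W, H) as stored on disk
images = np.transpose(images, (0, 3, 2, 1))  # -> (N, H, W, C) for imshow
print(images.shape)  # (1, 2, 2, 3)
```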
| github_jupyter |
```
# in colab, please run:
!pip install transformers sentence-transformers datasets rouge_score nltk
```
### [In-class version] Summarization
This notebook will guide you through the basics of text summarization within an nlp_course seminar.
Later today, the notebook will be replaced by a longer "homework" version.
```
import nltk
import numpy as np
nltk.download('punkt')
nltk.download('stopwords')
import datasets
data = datasets.load_dataset("multi_news")
train_dataset, val_dataset = data['train'], data['validation']
SEMINAR_MODE = True
if SEMINAR_MODE:
val_dataset = [val_dataset[i] for i in range(0, len(val_dataset), 5)]
example = val_dataset[42]
sources = tuple(filter(len, map(str.strip, example['document'].split('|||||'))))
for i, source in enumerate(sources):
print(f"SOURCE #{i}: {source}\n{'=' * 50}\n")
print("SUMMARY:\n", example['summary'])
MAX_WORDS = 100
def summarize_baseline(doc: str, max_words=MAX_WORDS):
sentences = nltk.sent_tokenize('\n'.join(doc.split('|||||')))
summary = []
num_words = 0
for sent in sentences:
sentence_length = len(nltk.word_tokenize(sent))
if num_words + sentence_length > max_words:
break
num_words += sentence_length
summary.append(sent)
return ' '.join(summary)
print(summarize_baseline(val_dataset[42]['document']))
```
### Okay, but is it any good?
```
from rouge_score import rouge_scorer
from tqdm.auto import trange
scorer = rouge_scorer.RougeScorer(['rouge1', 'rougeL'], use_stemmer=True)
scores = scorer.score(target='The quick brown fox jumps over the lazy dog',
prediction='The quick brown dog jumps on the log.')
print(scores['rouge1'].fmeasure, scores['rougeL'].fmeasure)
def compute_rouge_f1(dataset, predictions):
<YOUR CODE: compute mean f-measures for Rouge-1 and Rouge-L>
return mean_r1, mean_rL
baseline_predictions = [summarize_baseline(row['document']) for row in val_dataset]
baseline_rouge1, baseline_rougeL = compute_rouge_f1(val_dataset, baseline_predictions)
print("Rouge-1:", baseline_rouge1)
print("Rouge-L:", baseline_rougeL)
if SEMINAR_MODE:
assert abs(baseline_rouge1 - 0.26632) < 1e-4 and abs(baseline_rougeL - 0.14617) < 1e-4
print("Well done!")
```
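One possible way to fill in `compute_rouge_f1` is sketched below. The scorer is injected as a parameter (`score_fn`) so the aggregation logic is self-contained; in the notebook you would pass the `scorer.score` method defined above:

```python
def mean_rouge_f1(dataset, predictions, score_fn):
    # score_fn(target, prediction) -> {'rouge1': ..., 'rougeL': ...} where each
    # value exposes an .fmeasure attribute, like rouge_scorer's Score objects
    r1_scores, rL_scores = [], []
    for row, pred in zip(dataset, predictions):
        scores = score_fn(row['summary'], pred)
        r1_scores.append(scores['rouge1'].fmeasure)
        rL_scores.append(scores['rougeL'].fmeasure)
    return sum(r1_scores) / len(r1_scores), sum(rL_scores) / len(rL_scores)
```

With this helper, `compute_rouge_f1(dataset, predictions)` would just return `mean_rouge_f1(dataset, predictions, scorer.score)`.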
### Neural extractive summarization

```
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(lowercase=True, max_features=50_000)
vectorizer.fit([item['document'] for item in train_dataset])
encode_func = lambda texts: vectorizer.transform(texts).toarray()
doc = val_dataset[42]
documents = tuple(filter(len, map(str.strip, doc['document'].split('|||||'))))
sentences_by_doc = [nltk.sent_tokenize(doc) for doc in documents]
sentences = [sent for document in sentences_by_doc for sent in document]
sentence_lengths = np.array([len(nltk.word_tokenize(sent)) for sent in sentences])
sentence_embeddings = encode_func(sentences)
document_embeddings = encode_func(list(map('\n'.join, sentences_by_doc)))
print("Sentence embeddings shape:", sentence_embeddings.shape)
print("Document embedding shape:", document_embeddings.shape)
# Compute cosine similarities between each pair of sentences
sentence_similarities = <YOUR CODE HERE>
# ... and also between each sentence and each document
document_similarities = <YOUR CODE HERE>
assert sentence_similarities.shape == (len(sentences), len(sentences))
assert document_similarities.shape == (len(sentences), len(documents))
import matplotlib.pyplot as plt
%matplotlib inline
plt.title("Sentence-to-sentence similarities")
plt.imshow(sentence_similarities)
plt.show()
plt.title("Sentence-to-document similarities")
plt.imshow(document_similarities.T)
plt.show()
import networkx
plt.figure(figsize=(12, 6))
networkx.draw_networkx(networkx.from_numpy_array(sentence_similarities > 0.1))
def choose_summary_greedy(sentences, sentence_scores, sentence_similarities, sentence_lengths,
max_words=MAX_WORDS, sim_threshold=0.9):
assert sentence_scores.shape == (len(sentences),)
chosen_sentences = []
max_similarities = np.zeros(len(sentences))
num_words = 0
for i in range(len(sentences)):
mask = (sentence_lengths <= (max_words - num_words)) & (max_similarities < sim_threshold)
if not np.any(mask):
break
best_sentence_index = np.argmax(sentence_scores * mask)
chosen_sentences.append(sentences[best_sentence_index])
max_similarities = np.maximum(max_similarities, sentence_similarities[best_sentence_index])
num_words += sentence_lengths[best_sentence_index]
return chosen_sentences
sentence_scores = sentence_similarities.mean(axis=-1)
summary_sentences = choose_summary_greedy(
sentences, sentence_scores, sentence_similarities, sentence_lengths,
max_words=MAX_WORDS, sim_threshold=0.7)
print(summary_sentences)
```
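The two `<YOUR CODE HERE>` similarity matrices above can be filled in with plain cosine similarity, for example with a helper like this (a sketch, not the only valid answer):

```python
import numpy as np

def cosine_similarities(a, b):
    # row-wise cosine similarity between two embedding matrices;
    # the small epsilon guards against all-zero TF-IDF rows
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-9)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-9)
    return a @ b.T

sent = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
docs = np.array([[1.0, 0.0]])
print(cosine_similarities(sent, sent).shape)  # (3, 3)
print(cosine_similarities(sent, docs).shape)  # (3, 1)
```

With it, the placeholders become `sentence_similarities = cosine_similarities(sentence_embeddings, sentence_embeddings)` and `document_similarities = cosine_similarities(sentence_embeddings, document_embeddings)`.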
### Putting it all together
```
def summarize_one(document, max_words=MAX_WORDS, sim_threshold=0.7):
documents = tuple(filter(len, map(str.strip, document.split('|||||'))))
sentences_by_doc = [nltk.sent_tokenize(doc) for doc in documents]
sentences = [sent for document in sentences_by_doc for sent in document]
sentence_lengths = np.array([len(nltk.word_tokenize(sent)) for sent in sentences])
# use encode_func to compute embedding matrices
sentence_embeddings = <YOUR CODE HERE>
document_embeddings = <YOUR CODE HERE>
# compute pairwise similarities between sentences and sentence-document pairs
sentence_similarities = <YOUR CODE HERE>
document_similarities = <YOUR CODE HERE>
# Compute the scores s.t. higher score corresponds to better sentences.
# There are many ways to devise such a function, try them for yourself and see which works best.
# Here's a few inspirations:
# - mean similarity to 3 nearest sentences [please start with this one]
# - page-rank scores that use similarity matrix as connectivity matrix
# - distance to the nearest cluster in embedding space using k-means clustering
sentence_scores = <YOUR CODE HERE>
summary_sentences = choose_summary_greedy(
sentences, sentence_scores, sentence_similarities, sentence_lengths,
max_words=max_words, sim_threshold=sim_threshold)
return '\n'.join(summary_sentences)
print(summarize_one(val_dataset[2]['document']))
our_summaries = []
for i in trange(len(val_dataset)):
our_summaries.append(summarize_one(val_dataset[i]['document']))
our_rouge1, our_rougel = compute_rouge_f1(val_dataset, our_summaries)
print("Rouge-1:", our_rouge1)
print("Rouge-L:", our_rougel)
```
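For `sentence_scores`, the first suggestion from the comments above (mean similarity to the k nearest sentences) might look like this sketch, shown on an illustrative toy matrix:

```python
import numpy as np

def knn_sentence_scores(sentence_similarities, k=3):
    # score = mean similarity to the k most similar *other* sentences
    sims = sentence_similarities.astype(float).copy()
    np.fill_diagonal(sims, -np.inf)   # never count self-similarity
    k = min(k, sims.shape[0] - 1)     # guard very short documents
    top_k = np.sort(sims, axis=1)[:, -k:]
    return top_k.mean(axis=1)

toy = np.array([[1.0, 0.9, 0.8, 0.1],
                [0.9, 1.0, 0.7, 0.2],
                [0.8, 0.7, 1.0, 0.3],
                [0.1, 0.2, 0.3, 1.0]])
print(knn_sentence_scores(toy, k=2))  # approx. [0.85, 0.8, 0.75, 0.25]
```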
### Can we do better than TF-IDF?
```
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('sentence-transformers/LaBSE').train(False)
emb1, emb2, emb3, emb4 = model.encode(
['Hello, world!', 'Greeting, universe!', 'Hello, John!', "A cat sat on the mat."]
)
print("Sim(hello world, hello john) =", emb1 @ emb3)
print("Sim(hello world, greetings universe) =", emb1 @ emb2)
print("Sim(hello world, a cat sat on the mat)=", emb1 @ emb4)
encode_func = model.encode
doc = val_dataset[50]
documents = tuple(filter(len, map(str.strip, doc['document'].split('|||||'))))
sentences_by_doc = [nltk.sent_tokenize(doc) for doc in documents]
sentences = [sent for document in sentences_by_doc for sent in document]
sentence_lengths = np.array([len(nltk.word_tokenize(sent)) for sent in sentences])
sentence_embeddings = encode_func(sentences)
document_embeddings = encode_func(list(map('\n'.join, sentences_by_doc)))
print("Sentence embeddings shape:", sentence_embeddings.shape)
print("Document embedding shape:", document_embeddings.shape)
sentence_similarities = sentence_embeddings @ sentence_embeddings.T
document_similarities = sentence_embeddings @ document_embeddings.T
plt.title("Sentence-to-sentence similarities")
plt.imshow(sentence_similarities)
plt.show()
plt.title("Sentence-to-document similarities")
plt.imshow(document_similarities.T)
plt.show()
our_summaries = []
for i in trange(len(val_dataset)):
our_summaries.append(summarize_one(val_dataset[i]['document']))
our_rouge1, our_rougel = compute_rouge_f1(val_dataset, our_summaries)
print("Rouge-1:", our_rouge1)
print("Rouge-L:", our_rougel)
```
### Call the cavalry!
[Pegasus](https://arxiv.org/abs/1912.08777) is an *abstractive* summarization model based on a large pre-trained transformer. Before doing any summarizaton, the model is pre-trained on a combination of MLM and a specialized objective called Gap Sentence Generation: predicting an entire sentence omitted from the middle of the text.
```
import transformers
pegasus = transformers.pipeline("summarization", "google/pegasus-multi_news")
print(example['document'])
document = example['document'].split('|||||')[0]
print("SUMMARY:", pegasus([document], min_length=5, max_length=100)[0]['summary_text'])
```
| github_jupyter |
## Customizing datasets in fastai
```
from fastai.gen_doc.nbdoc import *
from fastai.vision import *
```
In this tutorial, we'll see how to create custom subclasses of [`ItemBase`](/core.html#ItemBase) or [`ItemList`](/data_block.html#ItemList) while retaining everything the fastai library has to offer. To allow basic functions to work consistently across various applications, the fastai library delegates several tasks to one of those specific objects, and we'll see here which methods you have to implement to be able to have everything work properly. But first let's take a step back to see where you'll use your end result.
## Links with the data block API
The data block API works by allowing you to pick a class that is responsible to get your items and another class that is charged with getting your targets. Combined together, they create a pytorch [`Dataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset) that is then wrapped inside a [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader). The training set, validation set and maybe test set are then all put in a [`DataBunch`](/basic_data.html#DataBunch).
The data block API allows you to mix and match what class your inputs have, what class your targets have, how to do the split between train and validation set, and how to create the [`DataBunch`](/basic_data.html#DataBunch), but if you have a very specific kind of input/target, the fastai classes might not be sufficient for you. This tutorial explains what is needed to create a new class of items and which methods are important to implement or override.
It goes in two phases: first we focus on what you need to create a custom [`ItemBase`](/core.html#ItemBase) class (which is the type of your inputs/targets) then on how to create your custom [`ItemList`](/data_block.html#ItemList) (which is basically a set of [`ItemBase`](/core.html#ItemBase)) while highlighting which methods are called by the library.
## Creating a custom [`ItemBase`](/core.html#ItemBase) subclass
The fastai library contains three basic types of [`ItemBase`](/core.html#ItemBase) that you might want to subclass:
- [`Image`](/vision.image.html#Image) for vision applications
- [`Text`](/text.data.html#Text) for text applications
- [`TabularLine`](/tabular.data.html#TabularLine) for tabular applications
Whether you decide to create your own item class or to subclass one of the above, here is what you need to implement:
### Basic attributes
Those are the more important attributes your custom [`ItemBase`](/core.html#ItemBase) needs as they're used everywhere in the fastai library:
- `ItemBase.data` is the thing that is passed to pytorch when you want to create a [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader). This is what needs to be fed to your model. Note that it might be different from the representation of your item since you might want something that is more understandable.
- `__str__` representation: if applicable, this is what will be displayed when the fastai library has to show your item.
If we take the example of a [`MultiCategory`](/core.html#MultiCategory) object `o` for instance:
- `o.data` is a tensor where the tags are one-hot encoded
- `str(o)` returns the tags separated by ;
If you want to code the way data augmentation should be applied to your custom `Item`, you should write an `apply_tfms` method. This is what will be called if you apply a [`transform`](/vision.transform.html#vision.transform) block in the data block API.
### Example: ImageTuple
For CycleGANs, we need to create a custom type of item since we feed the model tuples of images. Let's look at how to code this. The basis is to code the [`data`](/vision.data.html#vision.data) attribute, which is what will be given to the model. Note that we still keep track of the initial object (usually in an `obj` attribute) to be able to show nice representations later on. Here the object is the tuple of images and the data is their underlying tensors normalized between -1 and 1.
```
class ImageTuple(ItemBase):
def __init__(self, img1, img2):
self.img1,self.img2 = img1,img2
self.obj,self.data = (img1,img2),[-1+2*img1.data,-1+2*img2.data]
```
Then we want to apply data augmentation to our tuple of images. That's done by writing an `apply_tfms` method, as we saw before. Here we just pass that call to the two underlying images, then update the data.
```
def apply_tfms(self, tfms, **kwargs):
self.img1 = self.img1.apply_tfms(tfms, **kwargs)
self.img2 = self.img2.apply_tfms(tfms, **kwargs)
self.data = [-1+2*self.img1.data,-1+2*self.img2.data]
return self
```
We define one last method to stack the two images next to each other, which we will use later for customized `show_batch`/`show_results` behavior.
```
def to_one(self): return Image(0.5+torch.cat(self.data,2)/2)
```
This is all you need to create your custom [`ItemBase`](/core.html#ItemBase). You won't be able to use it until you have put it inside your custom [`ItemList`](/data_block.html#ItemList) though, so you should continue reading the next section.
## Creating a custom [`ItemList`](/data_block.html#ItemList) subclass
This is the main class that allows you to group your inputs or your targets in the data block API. You can then use any of the splitting or labelling methods before creating a [`DataBunch`](/basic_data.html#DataBunch). To make sure everything is properly working, here is what you need to know.
### Class variables
Whether you're directly subclassing [`ItemList`](/data_block.html#ItemList) or one of the particular fastai ones, make sure to know the content of the following three variables as you may need to adjust them:
- `_bunch` contains the name of the class that will be used to create a [`DataBunch`](/basic_data.html#DataBunch)
- `_processor` contains a class (or a list of classes) of [`PreProcessor`](/data_block.html#PreProcessor) that will then be used as the default to create processor for this [`ItemList`](/data_block.html#ItemList)
- `_label_cls` contains the class that will be used to create the labels by default
`_label_cls` is the first to be used in the data block API, in the labelling function. If this variable is set to `None`, the label class will be set to [`CategoryList`](/data_block.html#CategoryList), [`MultiCategoryList`](/data_block.html#MultiCategoryList) or [`FloatList`](/data_block.html#FloatList) depending on the type of the first item. The default can be overridden by passing a `label_cls` in the kwargs of the labelling function.
`_processor` is the second to be used. The processors are called at the end of the labelling to apply some kind of function on your items. The default processor of the inputs can be overridden by passing a `processor` in the kwargs when creating the [`ItemList`](/data_block.html#ItemList), the default processor of the targets can be overridden by passing a `processor` in the kwargs of the labelling function.
Processors are useful for pre-processing some data, but you also need to put in their state any variable you want to save for the call of `data.export()` before creating a [`Learner`](/basic_train.html#Learner) object for inference: the state of the [`ItemList`](/data_block.html#ItemList) isn't saved there, only their processors. For instance `SegmentationProcessor`'s only reason to exist is to save the dataset classes, and during the process call, it doesn't do anything apart from setting the `classes` and `c` attributes to its dataset.
``` python
class SegmentationProcessor(PreProcessor):
def __init__(self, ds:ItemList): self.classes = ds.classes
def process(self, ds:ItemList): ds.classes,ds.c = self.classes,len(self.classes)
```
`_bunch` is the last class variable used in the data block. When you type the final `databunch()`, the data block API calls the `_bunch.create` method with the `_bunch` of the inputs.
### Keeping \_\_init\_\_ arguments
If you pass additional arguments in your `__init__` call that you save in the state of your [`ItemList`](/data_block.html#ItemList), we have to make sure they are also passed along in the `new` method, as this one is used to create your training and validation set when splitting. To do that, you just have to add their names in the `copy_new` argument of your custom [`ItemList`](/data_block.html#ItemList), preferably during the `__init__`. Here we will need two collections of filenames (for the two types of images), so we make sure the second one is copied like this:
```python
def __init__(self, items, itemsB=None, **kwargs):
super().__init__(items, **kwargs)
self.itemsB = itemsB
self.copy_new.append('itemsB')
```
Be sure to keep the kwargs as is, as they contain all the additional stuff you can pass to an [`ItemList`](/data_block.html#ItemList).
### Important methods
#### - get
The most important method you have to implement is `get`: this one will enable your custom [`ItemList`](/data_block.html#ItemList) to generate an [`ItemBase`](/core.html#ItemBase) from the thing stored in its `items` array. For instance an [`ImageList`](/vision.data.html#ImageList) has the following `get` method:
``` python
def get(self, i):
fn = super().get(i)
res = self.open(fn)
self.sizes[i] = res.size
return res
```
The first line basically looks at `self.items[i]` (which is a filename). The second line opens it, since the `open` method is just
``` python
def open(self, fn): return open_image(fn)
```
The third line is there for [`ImagePoints`](/vision.image.html#ImagePoints) or [`ImageBBox`](/vision.image.html#ImageBBox) targets that require the size of the input [`Image`](/vision.image.html#Image) to be created. Note that if you are building a custom target class and you need the size of an image, you should call `self.x.size[i]`.
```
jekyll_note("""If you just want to customize the way an `Image` is opened, subclass `Image` and just change the
`open` method.""")
```
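The delegation in `get` can also be seen in isolation with a minimal stand-in (hypothetical classes written here for illustration, not fastai's actual API):

```python
class MiniItemList:
    # minimal stand-in illustrating the get/open delegation pattern
    def __init__(self, items):
        self.items = items
        self.sizes = {}

    def open(self, fn):
        # in fastai this would call open_image(fn); here we fake an "image"
        return {'name': fn, 'size': (96, 96)}

    def get(self, i):
        fn = self.items[i]           # look up the filename
        res = self.open(fn)          # delegate opening to self.open
        self.sizes[i] = res['size']  # cache the size for target classes
        return res

il = MiniItemList(['a.png', 'b.png'])
print(il.get(1)['name'])  # b.png
```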
#### - reconstruct
This is the method that is called in `data.show_batch()`, `learn.predict()` or `learn.show_results()` to transform a pytorch tensor back into an [`ItemBase`](/core.html#ItemBase). In a way, it does the opposite of calling `ItemBase.data`. It should take a tensor `t` and return the same kind of thing as the `get` method.
In some situations ([`ImagePoints`](/vision.image.html#ImagePoints), [`ImageBBox`](/vision.image.html#ImageBBox) for instance) you need to have a look at the corresponding input to rebuild your item. In this case, you should have a second argument called `x` (don't change that name). For instance, here is the `reconstruct` method of [`PointsItemList`](/vision.data.html#PointsItemList):
```python
def reconstruct(self, t, x): return ImagePoints(FlowField(x.size, t), scale=False)
```
#### - analyze_pred
This is the method that is called in `learn.predict()` or `learn.show_results()` to transform predictions in an output tensor suitable for `reconstruct`. For instance we may need to take the maximum argument (for [`Category`](/core.html#Category)) or the predictions greater than a certain threshold (for [`MultiCategory`](/core.html#MultiCategory)). It should take a tensor, along with optional kwargs and return a tensor.
For instance, here is the `analyze_pred` method of [`MultiCategoryList`](/data_block.html#MultiCategoryList):
```python
def analyze_pred(self, pred, thresh:float=0.5): return (pred >= thresh).float()
```
`thresh` can then be passed as kwarg during the calls to `learn.predict()` or `learn.show_results()`.
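For comparison, the two behaviors mentioned above (argmax for [`Category`](/core.html#Category), thresholding for [`MultiCategory`](/core.html#MultiCategory)) can be sketched with NumPy in place of fastai's tensor types:

```python
import numpy as np

def analyze_pred_category(pred):
    # single-label: pick the class with the highest predicted score
    return int(np.argmax(pred))

def analyze_pred_multicategory(pred, thresh=0.5):
    # multi-label: keep every class whose score clears the threshold
    return (np.asarray(pred) >= thresh).astype(float)

print(analyze_pred_category([0.1, 0.7, 0.2]))       # 1
print(analyze_pred_multicategory([0.1, 0.7, 0.6]))  # [0. 1. 1.]
```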
### Advanced show methods
If you want to use methods such as `data.show_batch()` or `learn.show_results()` with a brand new kind of [`ItemBase`](/core.html#ItemBase), you will need to implement two other methods. In both cases, the generic function will grab the tensors of inputs, targets and predictions (if applicable) and reconstruct the corresponding [`ItemBase`](/core.html#ItemBase) (as seen before), but it will delegate to the [`ItemList`](/data_block.html#ItemList) the way to display the results.
``` python
def show_xys(self, xs, ys, **kwargs)->None:
def show_xyzs(self, xs, ys, zs, **kwargs)->None:
```
In both cases `xs` and `ys` represent the inputs and the targets, in the second case `zs` represent the predictions. They are lists of the same length that depend on the `rows` argument you passed. The kwargs are passed from `data.show_batch()` / `learn.show_results()`. As an example, here is the source code of those methods in [`ImageList`](/vision.data.html#ImageList):
``` python
def show_xys(self, xs, ys, figsize:Tuple[int,int]=(9,10), **kwargs):
"Show the `xs` and `ys` on a figure of `figsize`. `kwargs` are passed to the show method."
rows = int(math.sqrt(len(xs)))
fig, axs = plt.subplots(rows,rows,figsize=figsize)
for i, ax in enumerate(axs.flatten() if rows > 1 else [axs]):
xs[i].show(ax=ax, y=ys[i], **kwargs)
plt.tight_layout()
def show_xyzs(self, xs, ys, zs, figsize:Tuple[int,int]=None, **kwargs):
"""Show `xs` (inputs), `ys` (targets) and `zs` (predictions) on a figure of `figsize`.
`kwargs` are passed to the show method."""
figsize = ifnone(figsize, (6,3*len(xs)))
fig,axs = plt.subplots(len(xs), 2, figsize=figsize)
fig.suptitle('Ground truth / Predictions', weight='bold', size=14)
for i,(x,y,z) in enumerate(zip(xs,ys,zs)):
x.show(ax=axs[i,0], y=y, **kwargs)
x.show(ax=axs[i,1], y=z, **kwargs)
```
Linked to this method is the class variable `_show_square` of an [`ItemList`](/data_block.html#ItemList). It defaults to `False` but if it's `True`, the `show_batch` method will send `rows * rows` `xs` and `ys` to `show_xys` (so that it shows a square of inputs/targets), like here for images.
### Example: ImageTupleList
Continuing our custom item example, we create a custom [`ItemList`](/data_block.html#ItemList) class that will wrap those `ImageTuple`s properly. The first thing is to write a custom `__init__` method (since we need a list of filenames here) which means we also have to change the `new` method.
```
class ImageTupleList(ImageList):
def __init__(self, items, itemsB=None, **kwargs):
super().__init__(items, **kwargs)
self.itemsB = itemsB
self.copy_new.append('itemsB')
```
We then specify how to get one item. Here we get the image from the first list of items and pick one randomly from the second list.
```
def get(self, i):
img1 = super().get(i)
fn = self.itemsB[random.randint(0, len(self.itemsB)-1)]
return ImageTuple(img1, open_image(fn))
```
We also add a custom factory method to directly create an `ImageTupleList` from two folders.
```
@classmethod
def from_folders(cls, path, folderA, folderB, **kwargs):
itemsB = ImageList.from_folder(path/folderB).items
res = super().from_folder(path/folderA, itemsB=itemsB, **kwargs)
res.path = path
return res
```
Finally, we have to specify how to reconstruct the `ImageTuple` from tensors if we want `show_batch` to work. We recreate the images and denormalize.
```
def reconstruct(self, t:Tensor):
return ImageTuple(Image(t[0]/2+0.5),Image(t[1]/2+0.5))
```
There is no need to write an `analyze_pred` method since the default behavior (returning the output tensor) is what we need here. However, `show_results` won't work properly unless the target (which we don't really care about here) has the right `reconstruct` method: the fastai library uses the `reconstruct` method of the target on the outputs. That's why we create another custom [`ItemList`](/data_block.html#ItemList) with just that `reconstruct` method. The first line is there to reconstruct our dummy targets, and the second one is the same as in `ImageTupleList`.
```
class TargetTupleList(ItemList):
def reconstruct(self, t:Tensor):
if len(t.size()) == 0: return t
return ImageTuple(Image(t[0]/2+0.5),Image(t[1]/2+0.5))
```
To make sure our `ImageTupleList` uses that for labelling, we pass it in `_label_cls` and this is what the result looks like.
```
class ImageTupleList(ImageList):
_label_cls=TargetTupleList
def __init__(self, items, itemsB=None, **kwargs):
super().__init__(items, **kwargs)
self.itemsB = itemsB
self.copy_new.append('itemsB')
def get(self, i):
img1 = super().get(i)
fn = self.itemsB[random.randint(0, len(self.itemsB)-1)]
return ImageTuple(img1, open_image(fn))
def reconstruct(self, t:Tensor):
return ImageTuple(Image(t[0]/2+0.5),Image(t[1]/2+0.5))
@classmethod
def from_folders(cls, path, folderA, folderB, **kwargs):
itemsB = ImageList.from_folder(path/folderB).items
res = super().from_folder(path/folderA, itemsB=itemsB, **kwargs)
res.path = path
return res
```
Lastly, we want to customize the behavior of `show_batch` and `show_results`. Remember the `to_one` method just puts the two images next to each other.
```
def show_xys(self, xs, ys, figsize:Tuple[int,int]=(12,6), **kwargs):
"Show the `xs` and `ys` on a figure of `figsize`. `kwargs` are passed to the show method."
rows = int(math.sqrt(len(xs)))
fig, axs = plt.subplots(rows,rows,figsize=figsize)
for i, ax in enumerate(axs.flatten() if rows > 1 else [axs]):
xs[i].to_one().show(ax=ax, **kwargs)
plt.tight_layout()
def show_xyzs(self, xs, ys, zs, figsize:Tuple[int,int]=None, **kwargs):
"""Show `xs` (inputs), `ys` (targets) and `zs` (predictions) on a figure of `figsize`.
`kwargs` are passed to the show method."""
figsize = ifnone(figsize, (12,3*len(xs)))
fig,axs = plt.subplots(len(xs), 2, figsize=figsize)
fig.suptitle('Ground truth / Predictions', weight='bold', size=14)
for i,(x,z) in enumerate(zip(xs,zs)):
x.to_one().show(ax=axs[i,0], **kwargs)
z.to_one().show(ax=axs[i,1], **kwargs)
```
| github_jupyter |
**`ICUBAM`: ICU Bed Availability Monitoring and analysis in the *Grand Est région* of France during the COVID-19 epidemic.**
https://doi.org/10.1101/2020.05.18.20091264
Python notebook for the sir-like modeling (see Section IV.1 of the main paper).
(ii) visualize results from maximum likelihood estimation, reproducing Fig. 17(left) of the paper
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
import seaborn as sns
import os.path as op
import model_icubam as micu
sns.set_style('whitegrid', {'grid.linestyle': ':'})
data_pop = pd.read_csv(op.join('data', 'pop_dep_2020.csv'), delimiter='\t')
first_date = '2020-03-19'
last_date = '2020-04-29'
data = pd.read_csv(op.join(micu.data_path, 'all_bedcounts_2020-05-04_11h02.csv'), index_col=0)
data = data.groupby(['date', 'department']).sum().reset_index()
data = data[data.date >= first_date]
data = data[data.date <= last_date]
df_params_hat = pd.read_csv(op.join(micu.model_path, 'params_hat_{}.csv'.format(last_date)), index_col=0)
sites = df_params_hat.dep.values
params_hat = df_params_hat[[col for col in df_params_hat.columns if not col.startswith('dep')]].values
n_sites = len(sites)
depname2depid = {'Ardennes':8, 'Aube':10, 'Marne':51, 'Haute-Marne':52,
'Meurthe-et-Moselle':54, 'Meuse':55, 'Moselle':57, 'Bas-Rhin':67,
'Haut-Rhin':68, 'Vosges':88}
micu.make_dir(micu.fig_path)
compute_model = micu.compute_model_seir
def plot_model(fun_model, data, sites, params, n_days_pred=0):
n_sites = len(sites)
for k, dep in enumerate(sites):
if n_sites == 1:
ax = plt.subplot(111)
else:
            ax = plt.subplot(int(np.ceil(n_sites/2)), 2, k+1)
condition = data.department==dep
pop = data_pop[data_pop.dep=='{}'.format(depname2depid[dep])]['pop'].values[0]
# data
line_rea = plt.plot(data[condition].date,
data[condition]['n_covid_occ'].values,
'o-', color='tab:red', markersize=5)
line_out = plt.plot(data[condition].date,
data[condition]['n_covid_deaths'].values
+data[condition]['n_covid_healed'].values,
'o-', color='0.4', markersize=5)
# model
n_days = data[condition].date.shape[0]
c, x = fun_model(pop, params[k], n_days)
date_rg = pd.date_range(data.date.min(), periods=n_days).strftime('%Y-%m-%d')
plt.plot(date_rg, c, color='tab:red')
plt.plot(date_rg, x, color='0.4')
ax.set_xticks(ax.get_xticks()[4::7])
if k < n_sites-2:
ax.set_xticklabels('')
else:
plt.xticks(rotation=60, ha='right', fontsize=9)
ax.yaxis.set_major_locator(MaxNLocator(integer=True))
if np.mod(k,2)==0:
plt.ylabel('number of cases')
plt.title(dep, fontdict={'fontweight':'bold'} )
if k==0:
ax.legend(handles=[line_rea[0], line_out[0]],
labels=['in ICU','discharged cases + deaths'])
plt.show()
fh = plt.figure(figsize=(10,4*np.ceil(n_sites/2)))
plot_model(compute_model, data, sites, params_hat, n_days_pred=0)
fh.savefig(op.join(micu.fig_path, 'model_icubam_grand-est_{}.pdf'.format(last_date)),
bbox_inches='tight')
```
# Mask R-CNN Demo
A quick intro to using the pre-trained model to detect and segment objects.
```
import os
import sys
import random
import math
import numpy as np
import skimage.io
import matplotlib
import matplotlib.pyplot as plt
# Root directory of the project
ROOT_DIR = os.path.abspath("../")
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
# Import COCO config
sys.path.append(os.path.join(ROOT_DIR, "samples/coco/")) # To find local version
import coco
%matplotlib inline
# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
    utils.download_trained_weights(COCO_MODEL_PATH)
# Directory of images to run detection on
IMAGE_DIR = os.path.join(ROOT_DIR, "images")
```
## Configurations
We'll be using a model trained on the MS-COCO dataset. The configurations of this model are in the ```CocoConfig``` class in ```coco.py```.
For inference, modify the configurations a bit to fit the task. To do so, sub-class the ```CocoConfig``` class and override the attributes you need to change.
```
class InferenceConfig(coco.CocoConfig):
    # Set batch size to 1 since we'll be running inference on
    # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1
config = InferenceConfig()
config.display()
```
## Create Model and Load Trained Weights
```
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)
```
## Class Names
The model classifies objects and returns class IDs, which are integer values that identify each class. Some datasets assign integer values to their classes and some don't. For example, in the MS-COCO dataset, the 'person' class is 1 and 'teddy bear' is 88. The IDs are often sequential, but not always. The COCO dataset, for example, has classes associated with class IDs 70 and 72, but not 71.
To improve consistency, and to support training on data from multiple sources at the same time, our ```Dataset``` class assigns its own sequential integer IDs to each class. For example, if you load the COCO dataset using our ```Dataset``` class, the 'person' class would get class ID = 1 (just like COCO) and the 'teddy bear' class is 78 (different from COCO). Keep that in mind when mapping class IDs to class names.
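The sequential remapping described above can be sketched in a few lines. This is only an illustration: the sparse source IDs below are made up, not the real COCO mapping, and ID 0 is kept free for the background class as in the list further down.

```python
# Sketch: remap sparse source-dataset class IDs to sequential internal IDs.
# The source_ids below are illustrative, not the actual COCO class IDs.
source_ids = [1, 18, 70, 72, 88]
# reserve internal ID 0 for the background ('BG') class
internal_id = {src: i + 1 for i, src in enumerate(sorted(source_ids))}
print(internal_id)  # {1: 1, 18: 2, 70: 3, 72: 4, 88: 5}
```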
To get the list of class names, you'd load the dataset and then use the ```class_names``` property like this.
```
# Load COCO dataset
dataset = coco.CocoDataset()
dataset.load_coco(COCO_DIR, "train")
dataset.prepare()
# Print class names
print(dataset.class_names)
```
We don't want to require you to download the COCO dataset just to run this demo, so we're including the list of class names below. The index of a class name in the list represents its ID (first class is 0, second is 1, third is 2, etc.).
```
# COCO Class names
# Index of the class in the list is its ID. For example, to get ID of
# the teddy bear class, use: class_names.index('teddy bear')
class_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',
'bus', 'train', 'truck', 'boat', 'traffic light',
'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird',
'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear',
'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',
'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
'kite', 'baseball bat', 'baseball glove', 'skateboard',
'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',
'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',
'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',
'teddy bear', 'hair drier', 'toothbrush']
```
## Run Object Detection
```
# Load a random image from the images folder
file_names = next(os.walk(IMAGE_DIR))[2]
filename = os.path.join(IMAGE_DIR, 'barca2.jpg')
#image = skimage.io.imread(os.path.join(IMAGE_DIR, random.choice(file_names)))
image = skimage.io.imread(filename)
# Run detection
results = model.detect([image], verbose=1)
# Visualize results
r = results[0]
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
                            class_names, r['scores'])
```
# gen_doc.gen_notebooks
This module contains the scripts and API to auto-generate or update a documentation notebook skeleton from a given .py file or a full package. It is not expected you'd use this skeleton as your final docs - you should add markdown, examples, etc to it. The skeleton just has a minimal list of exported symbols.
[<code>fastai.gen_doc.sgen_notebooks</code>](http://docs.fast.ai/gen_doc.sgen_notebooks.html#fastai.gen_doc.sgen_notebooks) is a script that transforms a given module into a notebook skeleton. The usage is
```
python -m sgen_notebooks package path_to_result [--update]
```
- **package** is the package you want to write the documentation of. Note that if the package isn't installed in your environment, you need to execute the script in a place where the package is a directory (or make a symlink to it). The script will search through all the subdirectories to create all the relevant notebooks.
- **path_to_result** is a directory where you want those notebooks. The script will auto-execute them, so this directory should contain the file nbdoc.py from this package. If the module you are documenting isn't installed, you will also need to have a symlink to it in your path_to_result folder.
- if the flag **--update** is added, the script will update the notebooks (to reflect the addition of new functions or new arguments).
Alternatively, you can access the same functionality through the module API, documented below.
**Important note:** The notebooks automatically generated or updated need to be trusted before you can see the results in the output cells. To trust a notebook, click on File, then Trust notebook.
This module also contains the scripts and API to convert the documentation notebooks into HTML, which is the format used for the final documentation site.
```
from fastai import gen_doc
from fastai.gen_doc import nbdoc
from fastai.gen_doc.nbdoc import *
from fastai.gen_doc.gen_notebooks import *
```
## Installation
This package requires:
- [nbconvert](https://github.com/jupyter/nbconvert): conda install nbconvert
- [nb_extensions](https://github.com/ipython-contrib/jupyter_contrib_nbextensions): conda install -c conda-forge jupyter_contrib_nbextensions
Once nbextensions is installed, your home page of jupyter notebook will look like this:

Click on the Nbextensions tab then make sure the hide inputs extension is activated:

As its name suggests, this will allow you to hide input cells and only show their results.
## Convert modules into notebook skeleton
The first (optional) step is to create a notebook "skeleton" - i.e. a notebook containing all the classes, methods, functions, and other symbols you wish to document. You can create this manually if you prefer, however using the automatic approach can save you some time and ensure you don't miss anything. For the initial skeleton, use [`create_module_page`](/gen_doc.gen_notebooks.html#create_module_page), which creates a new module from scratch. To update it later with any newly-added symbols, use [`update_module_page`](/gen_doc.gen_notebooks.html#update_module_page).
```
show_doc(create_module_page, arg_comments={
'mod': 'the module',
'dest_path': 'the folder in which to generate the notebook',
'force': 'if False, will raise an exception if the notebook is already present'})
show_doc(link_all)
show_doc(link_nb)
show_doc(update_module_page, arg_comments={
'mod': 'the module',
'dest_path': 'the folder in which to generate the notebook'})
```
All the cells added by a user are preserved; only the cells for new symbols (i.e. those that weren't documented before) will be inserted at the end. You can then move them to wherever you like in the notebook. For instance, to update this module's documentation, simply run:
```
update_module_page(gen_doc.gen_notebooks, '.')
```
You can also generate and update *all* modules in a package using [`generate_all`](/gen_doc.gen_notebooks.html#generate_all) and [`update_all`](/gen_doc.gen_notebooks.html#update_all).
```
show_doc(generate_all, arg_comments={
'pkg_name': 'name of the package to document',
'dest_path': 'the folder in which to generate the notebooks',
'exclude': 'names of subdirectories to ignore'})
show_doc(update_all, arg_comments={
'pkg_name': 'name of the package to document',
'dest_path': 'the folder in which to generate the notebooks',
'exclude': 'names of subdirectories to ignore',
'create_missing': 'create docs if they don\'t exist'})
```
### Updating all module docs
As a convenience method, there's [`update_all`](/gen_doc.gen_notebooks.html#update_all) to update all notebooks. This snippet does the whole lot for you:
```python
import fastai
from pathlib import Path
fastai_pkg = Path(fastai.__file__).parent
update_all(fastai_pkg, '.', create_missing=True)
```
## Add documentation
The automatically generated module will only contain the table of contents and the doc string of the functions and classes in your module (or the ones you picked with \_\_all\_\_). You should add more prose to them in markdown cells, or examples of uses inside the notebook.
At any time, if you don't want the input of a code cell to figure in the final result, you can use the little button in your tool bar to hide it.

The same button can show you the hidden input of a cell. This, used in conjunction with the helper functions from [nbdoc](gen_doc.nbdoc.ipynb), should allow you to easily add any content you need.
## Convert notebook to html
Once you're finished, don't forget to properly save your notebook, then you can either convert all the notebooks together with the script:
```
python -m convert2html dir
```
- **dir** is the directory where all your notebooks are stored.
If you prefer to do this in a notebook, you can simply type:
```python
from fastai.gen_doc.convert2html import convert_nb
convert_nb('gen_doc.gen_notebooks.ipynb', '../docs')
```
For more information see the [documentation of convert2html](gen_doc.convert2html.ipynb).
```
show_doc(update_notebooks)
```
# Goal
* Run `LLMGAG` (metagenome assembly of genes) pipeline on animal gut microbiome metagenome study
* study = PRJNA485217
* host = Capuchin
# Var
```
studyID = 'PRJNA485217'
base_dir = file.path('/ebio/abt3_projects/Georg_animal_feces/data/metagenome/multi-study/BioProjects/',
studyID)
tmp_out_dir = file.path('/ebio/abt3_projects/databases_no-backup/animal_gut_metagenomes/multi-study_MG-asmbl/',
studyID)
work_dir = file.path(base_dir, 'LLMGAG')
pipeline_dir = '/ebio/abt3_projects/methanogen_host_evo/bin/llmgag'
threads = 24
```
# Init
```
library(dplyr)
library(tidyr)
library(ggplot2)
library(data.table)
set.seed(8304)
source('/ebio/abt3_projects/Georg_animal_feces/code/misc_r_functions/init.R')
make_dir(base_dir)
make_dir(tmp_out_dir)
make_dir(work_dir)
```
# LLMGAG
## Setup
```
cat_file(file.path(work_dir, 'config.yaml'))
```
### Run
```{bash}
(snakemake_dev) @ rick:/ebio/abt3_projects/methanogen_host_evo/bin/llmgag
$ screen -L -S llmgag-PRJNA485217 ./snakemake_sge.sh /ebio/abt3_projects/Georg_animal_feces/data/metagenome/multi-study/BioProjects/PRJNA485217/LLMGAG/config.yaml cluster.json /ebio/abt3_projects/Georg_animal_feces/data/metagenome/multi-study/BioProjects/PRJNA485217/LLMGAG/SGE_log 24
```
```
pipelineInfo(pipeline_dir)
```
# Summary
## Number of genes assembled & clustered
```
F = file.path(work_dir, 'assembly', 'plass', 'genes.faa')
cmd = glue::glue('grep -c ">" {fasta}', fasta=F)
n_raw_seqs = system(cmd, intern=TRUE)
cat('Number of assembled sequences:', n_raw_seqs, '\n')
F = file.path(work_dir, 'cluster', 'linclust', 'clusters_rep-seqs.faa')
cmd = glue::glue('grep -c ">" {fasta}', fasta=F)
n_rep_seqs = system(cmd, intern=TRUE)
cat('Number of cluster rep sequences:', n_rep_seqs, '\n')
F = file.path(work_dir, 'humann2_db', 'clusters_rep-seqs.faa.gz')
cmd = glue::glue('gunzip -c {fasta} | grep -c ">"', fasta=F)
n_h2_seqs = system(cmd, intern=TRUE)
cat('Number of humann2_db-formatted seqs:', n_h2_seqs, '\n')
```
## Taxonomy
```
# reading in taxonomy table
## WARNING: slow
F = file.path(work_dir, 'taxonomy', 'clusters_rep-seqs_tax_db.tsv.gz')
cmd = glue::glue('gunzip -c {file}', file=F)
coln = c('seqID', 'taxID', 'rank', 'spp', 'lineage')
levs = c('Domain', 'Phylum', 'Class', 'Order', 'Family', 'Genus', 'Species')
tax = fread(cmd, sep='\t', header=FALSE, col.names=coln, fill=TRUE) %>%
separate(lineage, levs, sep=':')
tax %>% dfhead
# number of sequences
tax$seqID %>% unique %>% length %>% print
# which ranks found?
tax$rank %>% table %>% print
# number of classifications per seqID
tax %>%
group_by(seqID) %>%
summarize(n = n()) %>%
ungroup() %>%
.$n %>% summary
```
### Summary
```
# summarizing taxonomy
tax_s = tax %>%
filter(Domain != '',
Phylum != '') %>%
group_by(Domain, Phylum) %>%
summarize(n = seqID %>% unique %>% length) %>%
ungroup()
tax_s %>% dfhead
# plotting by phylum
p = tax_s %>%
filter(n > 10) %>%
mutate(Phylum = Phylum %>% reorder(n)) %>%
ggplot(aes(Phylum, n, fill=Domain)) +
geom_bar(stat='identity', position='dodge') +
scale_y_log10() +
labs(y = 'No. of genes') +
coord_flip() +
theme_bw() +
theme(
axis.text.y = element_text(size=7)
)
dims(5,6)
plot(p)
# top phyla
tax_s %>%
arrange(-n) %>%
head(n=30)
# summarizing taxonomy
tax_s = tax %>%
filter(Domain != '',
Phylum != '',
Class != '') %>%
group_by(Domain, Phylum, Class) %>%
summarize(n = seqID %>% unique %>% length) %>%
ungroup()
tax_s %>% dfhead
# top hits
tax_s %>%
arrange(-n) %>%
head(n=30)
```
## Annotations
```
# eggnog-mapper v2
cols = c(
"query_name",
"seed_eggNOG_ortholog",
"seed_ortholog_evalue",
"seed_ortholog_score",
"Predicted_taxonomic_group",
"Predicted_protein_name",
"Gene_Ontology_terms",
"EC_number",
"KEGG_ko",
"KEGG_Pathway",
"KEGG_Module",
"KEGG_Reaction",
"KEGG_rclass",
"BRITE",
"KEGG_TC",
"CAZy",
"BiGG_Reaction",
"tax_scope__eggNOG_taxonomic_level_used_for_annotation",
"eggNOG_OGs",
"bestOG",
"COG_Functional_Category",
"eggNOG_free_text_description"
)
F = file.path(work_dir, 'annotate', 'eggnog-mapper', 'clusters_rep-seqs.emapper.annotations.gz')
cmd = glue::glue('gunzip -c {file}', file=F)
emap_annot = fread(cmd, sep='\t', header=FALSE)
colnames(emap_annot) = cols
emap_annot = emap_annot %>%
dplyr::select(-Gene_Ontology_terms)
emap_annot %>% dfhead
# adding taxonomy info
intersect(emap_annot$query_name, tax$seqID) %>% length %>% print
emap_annot = emap_annot %>%
left_join(tax, c('query_name'='seqID'))
emap_annot %>% dfhead
n_annot_seqs = emap_annot$query_name %>% unique %>% length
cat('Number of rep seqs with eggnog-mapper annotations:', n_annot_seqs, '\n')
```
### COG functional categories
* [wiki on categories](https://ecoliwiki.org/colipedia/index.php/Clusters_of_Orthologous_Groups_%28COGs%29)
```
# summarizing by functional group
max_cat = emap_annot$COG_Functional_Category %>% unique %>% sapply(nchar) %>% max
emap_annot_s = emap_annot %>%
dplyr::select(query_name, COG_Functional_Category) %>%
separate(COG_Functional_Category, LETTERS[1:max_cat], sep='(?<=[A-Z])') %>%
gather(X, COG_func_cat, -query_name) %>%
filter(!is.na(COG_func_cat),
COG_func_cat != '') %>%
dplyr::select(-X)
emap_annot_s %>% dfhead
# plotting summary
p = emap_annot_s %>%
ggplot(aes(COG_func_cat)) +
geom_bar() +
labs(x='COG functional category', y='No. of genes') +
theme_bw()
dims(9,3)
plot(p)
# plotting summary
p = emap_annot_s %>%
group_by(COG_func_cat) %>%
summarize(perc_abund = n() / n_annot_seqs * 100) %>%
ungroup() %>%
ggplot(aes(COG_func_cat, perc_abund)) +
geom_bar(stat='identity') +
labs(x='COG functional category', y='% of all genes') +
theme_bw()
dims(9,3)
plot(p)
```
### Grouped by taxonomy
```
max_cat = emap_annot$COG_Functional_Category %>% unique %>% sapply(nchar) %>% max
emap_annot_s = emap_annot %>%
dplyr::select(query_name, COG_Functional_Category) %>%
separate(COG_Functional_Category, LETTERS[1:max_cat], sep='(?<=[A-Z])') %>%
gather(X, COG_func_cat, -query_name) %>%
left_join(tax, c('query_name'='seqID')) %>%
filter(!is.na(COG_func_cat),
COG_func_cat != '') %>%
dplyr::select(-X)
emap_annot_s %>% dfhead
# plotting summary by domain
p = emap_annot_s %>%
ggplot(aes(COG_func_cat)) +
geom_bar() +
facet_wrap(~ Domain, scales='free_y') +
labs(x='COG functional category', y='No. of genes') +
theme_bw()
dims(9,5)
plot(p)
# plotting summary by phylum
p = emap_annot_s %>%
group_by(Phylum) %>%
mutate(n = n()) %>%
ungroup() %>%
filter(n >= 1000) %>%
mutate(Phylum = Phylum %>% reorder(-n)) %>%
ggplot(aes(COG_func_cat, fill=Domain)) +
geom_bar() +
facet_wrap(~ Phylum, scales='free_y', ncol=3) +
labs(x='COG functional category', y='No. of genes') +
theme_bw()
dims(10,5)
plot(p)
```
## humann2 db genes
```
# gene IDs
F = file.path(work_dir, 'humann2_db', 'clusters_rep-seqs_annot-index.tsv')
hm2 = fread(F, sep='\t', header=TRUE) %>%
separate(new_name, c('UniRefID', 'Gene_length', 'Taxonomy'), sep='\\|') %>%
separate(Taxonomy, c('Genus', 'Species'), sep='\\.s__') %>%
separate(Species, c('Species', 'TaxID'), sep='__taxID') %>%
mutate(Genus = gsub('^g__', '', Genus))
hm2 %>% dfhead
# adding taxonomy
intersect(hm2$original_name, tax$seqID) %>% length %>% print
hm2 = hm2 %>%
left_join(tax, c('original_name'='seqID'))
hm2 %>% dfhead
```
### Summary
```
# number of unique UniRef IDs
hm2$UniRefID %>% unique %>% length
# duplicate UniRef IDs
hm2 %>%
group_by(UniRefID) %>%
summarize(n = n()) %>%
ungroup() %>%
filter(n > 1) %>%
arrange(-n) %>%
head(n=30)
# number of genes with a taxID
hm2_f = hm2 %>%
filter(!is.na(TaxID))
hm2_f %>% nrow
```
#### By taxonomy
```
# number of UniRefIDs
hm2_f_s = hm2_f %>%
group_by(Domain, Phylum) %>%
summarize(n = UniRefID %>% unique %>% length) %>%
ungroup()
p = hm2_f_s %>%
filter(n >= 10) %>%
mutate(Phylum = Phylum %>% reorder(n)) %>%
ggplot(aes(Phylum, n, fill=Domain)) +
geom_bar(stat='identity', position='dodge') +
scale_y_log10() +
coord_flip() +
labs(y='No. of UniRef IDs') +
theme_bw() +
theme(
axis.text.y = element_text(size=7)
)
dims(5,5)
plot(p)
```
# sessionInfo
```
sessionInfo()
```
# SLU13 - Linear Algebra & NumPy, Part 2
In this notebook we will be covering the following:
- **Matrix multiplication**: multiplication of vectors and matrices, properties of matrix multiplication and the transpose, and application in NumPy;
- **The inverse of a matrix**: the intuitions behind the inverse, and how to determine it (if it exists!) using NumPy;
- **Additional `numpy.ndarray` methods**: `.min()`, `.max()`, `.sum()` and `.sort()`;
- **(optional section) Systems of linear equations**
- **(optional section) Eigenvalues and eigenvectors**
**What's in this notebook:**
1. [Matrix multiplication](#1.-Matrix-multiplication)
1.1 [Understanding matrix multiplication](#1.1-Understanding-matrix-multiplication)
1.2 [Properties of matrix multiplication](#1.2-Properties-of-matrix-multiplication)
1.3 [Transpose multiplication rules](#1.3-Transpose-multiplication-rules)
1.4 [Matrix multiplication using NumPy](#1.4-Matrix-multiplication-using-NumPy)
2. [The inverse](#2.-The-inverse)
2.1 [Finding the inverse of a matrix](#2.1-Finding-the-inverse-of-a-matrix)
2.2 [Using NumPy to find the inverse of a matrix](#2.2-Using-NumPy-to-find-the-inverse-of-a-matrix)
3. [Additional NumPy methods](#3.-Additional-NumPy-methods)
3.1 [`ndarray.max()` and `ndarray.min()`](#3.1-ndarray.max()-and-ndarray.min())
3.2 [`ndarray.sort()` and `numpy.sort()`](#3.2-ndarray.sort()-and-numpy.sort())
3.3 [`ndarray.sum()`](#3.3-ndarray.sum())
### Imports
```
import numpy as np
```
<img src="./media/machine.gif"/>
<br>
<center><i>How this dark meowgic works?!</i> 🙀😼</center>
---
In SLU12 we learned the most basic concepts of linear algebra: what are vectors, matrices, their basic operations, and applying all that with `ndarrays`.
I promised that if you worked hard enough in SLUs 12 and 13, you'd be able to read the matrix form of the *multiple linear regression* solution:
$$\mathbf{\beta} = (X^TX)^{-1}(X^T\mathbf{y})$$
You can already see that there is some matrix $\mathbf{X}$ and some vectors $\mathbf{\beta}$ and $\mathbf{y}$ inside that equation, as well as the transpose of $\mathbf{X}$. But how would you find the inverse (that exponent $^{-1}$ over there), and how do we multiply all those matrices? All the answers you need will become clear throughout this notebook.
---
⚠️ <i>Although sections 4 and 5 are optional, I suggest you go through them, when you have time. Understanding the link between matrices and linear systems, and what are eigenvalues and eigenvectors, are key concepts in data science.</i>
---
<center>You ready?</center>
<img src="./media/ready.gif"/>
<center>Let's do thiiiiiiiiiiiiiiiiiiiis.</center>
---
## 1. Matrix multiplication
Now that we've aced dot products and linear combinations in the last SLU, matrix multiplication will be a piece of cake. Oh and remember, matrices are just collections of vectors, column vectors...
### 1.1 Understanding matrix multiplication
#### 1.1.1 General formula for matrix multiplication
<a name="vector_def"></a>
<div class="alert alert-block alert-info">
The result of the <b>multiplication between a matrix</b> $\mathbf{A}$, of size $m\times n$, <b>and a matrix</b> $\mathbf{B}$, of size $n\times k$, is the matrix $\mathbf{C}$, of size $m\times k$, where each entry $c_{i,j}$ corresponds to the dot product between the row vector in row $i$ of $\mathbf{A}$ and the column vector in column $j$ of $\mathbf{B}$:
<br>
<br>
$$\mathbf{C} = \mathbf{A} \mathbf{B} =
\begin{bmatrix}
c_{1,1} & c_{1,2} & \dots & c_{1,k}\\
\vdots & \vdots & \ddots & \vdots\\
c_{m,1} & c_{m,2} & \dots & c_{m,k}
\end{bmatrix},\;\;
\text{where}\;
c_{i,j} = (\text{row }i\text{ in }\mathbf{A})\cdot (\text{column }j\text{ in }\mathbf{B}) = a_{i,1}b_{1,j} + a_{i,2}b_{2,j} + \dots + a_{i,n}b_{n,j}
$$
<br>
</div>
Let's go through the formula step by step:
---
**STEP 1**: Compute the dot product between **row 1** in $\mathbf{A}$ and **each column vector** in $\mathbf{B}$:
$\;\;\;\;c_{1,1} = (\text{row 1 in }\mathbf{A}) \cdot (\text{column 1 in }\mathbf{B})$
$\;\,\;\;c_{1,2} = (\text{row 1 in }\mathbf{A}) \cdot (\text{column 2 in }\mathbf{B})$
$\;\;\;\;\vdots$
$\;\;\;\;c_{1,k} = (\text{row 1 in }\mathbf{A}) \cdot (\text{column }k\text{ in }\mathbf{B})$
At this point, we have filled the **first row** in $\mathbf{C}$, as follows:
$\;\;\;(\text{row 1 in }\mathbf{C}) = \begin{bmatrix} c_{1,1} & c_{1,2} & \dots & c_{1,k}\end{bmatrix}$
---
**STEP 2**: Repeat step 1 for **row 2** in $\mathbf{A}$ and **each column vector** in $\mathbf{B}$, until we fill the **second row** in $\mathbf{C}$.
---
**STEP 3**: **Repeat** the same process for each remaining row of $\mathbf{A}$.
Our last element in matrix $\mathbf{C}$ will correspond to the dot product between the last row vector in $\mathbf{A}$ and the last column vector in $\mathbf{B}$:
$\;\;\;c_{m,k} = (\text{row }m\text{ in }\mathbf{A}) \cdot (\text{column }k\text{ in }\mathbf{B})$
---
That's it. The outcome of multiplying two matrices $\mathbf{A}$ (size $m\times n$) and $\mathbf{B}$ (size $n\times k$) is nothing more than a matrix $\mathbf{C}$ of size $m\times k$ where each element corresponds to the dot product between each row in $\mathbf{A}$ and each column in $\mathbf{B}$:
<br>
$$\mathbf{C} = \mathbf{A}\mathbf{B} =
\begin{bmatrix}
a_{1,1}b_{1,1} + a_{1,2}b_{2,1} + ... + a_{1,n}b_{n,1} & \dots & a_{1,1}b_{1,k} + a_{1,2}b_{2,k} + ... + a_{1,n}b_{n,k}\\
\vdots & \ddots & \vdots\\
a_{m,1}b_{1,1} + a_{m,2}b_{2,1} + ... + a_{m,n}b_{n,1} & \dots & a_{m,1}b_{1,k} + a_{m,2}b_{2,k} + ... + a_{m,n}b_{n,k}\\
\end{bmatrix} =
\begin{bmatrix}
c_{1,1} & c_{1,2} & \dots & c_{1,k}\\
\vdots & \vdots & \ddots & \vdots\\
c_{m,1} & c_{m,2} & \dots & c_{m,k}
\end{bmatrix}
$$
<br>
I'm giving you all that "boring" mathematical notation on purpose. You might see a lot of it when reading about algorithms and other mathematical topics. 😉
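To make the formula concrete, here is a minimal pure-Python sketch of the definition, with matrices represented as lists of rows. This is only a sanity check on the formula; NumPy's optimized routines are what you'd use in practice (see section 1.4).

```python
def matmul(A, B):
    """Multiply matrices (lists of rows) following the dot-product definition."""
    m, n, k = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "columns of A must equal rows of B"
    # entry c[i][j] is the dot product of row i of A with column j of B
    return [[sum(A[i][p] * B[p][j] for p in range(n)) for j in range(k)]
            for i in range(m)]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```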
---
❗️ Remember you can **only** multiply $\mathbf{A}$ by $\mathbf{B}$ if the number of columns in $\mathbf{A}$ is equal to the number of rows in $\mathbf{B}$. It's easy to see why. Just remember that the dot product cannot be calculated for vectors of different dimensions.
The output matrix will always have the same number of rows as the first matrix (or vector) and the same number of columns as the second matrix (or vector).
If you calculate the matrix product between a $1\times n$ vector and a $n\times 1$ vector, you get the dot product between vectors that you learned in SLU12 (scalar product). If you calculate the matrix product between a $n\times 1$ vector and a $1\times n$ vector, you get an $n\times n$ square matrix.
❗️ The order in matrix multiplication matters!
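Both points are quick to check with NumPy (using the `@` matrix-product operator): the output shapes for $1\times n$ and $n\times 1$ vectors, and the fact that swapping the order changes the result.

```python
import numpy as np

row = np.array([[1, 2, 3]])      # shape (1, 3)
col = np.array([[4], [5], [6]])  # shape (3, 1)
print((row @ col).shape)  # (1, 1) -- the dot product, as a 1x1 matrix
print((col @ row).shape)  # (3, 3) -- an n x n square matrix

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
# order matters: in general AB != BA
print(np.array_equal(A @ B, B @ A))  # False
```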
---
#### 1.1.2 Example:
What's the result of multiplying $\mathbf{A}$ by $\mathbf{B}$, knowing that $\mathbf{A} = \begin{bmatrix} -1 & 2\\ 4 & 6\end{bmatrix}$ and $\mathbf{B} = \begin{bmatrix} 0 & 3 & 6\\ -2 & 0 & -1\end{bmatrix}$?
Let's call the output of our matrix multiplication $\mathbf{M}$, for "Mathmagic": $\mathbf{M} = \mathbf{A}\mathbf{B}$. Because $\mathbf{A}$ is of size $2\times 2$ and $\mathbf{B}$ is of size $2\times 3$, the resulting "mathmagical" matrix $\mathbf{M}$ will have size $2\times 3$.
You can fill the matrix step by step, using the formula with the dot products:
<img src="./media/matrix_multiplication.PNG" width="360"/>
Or just fill the dot products on the output matrix all at once:
$$\mathbf{M} =
\begin{bmatrix}
-1\times 0 + 2\times (-2) & -1\times 3 + 2\times 0 & -1\times 6 + 2\times (-1)\\
4\times 0 + 6\times (-2) & 4\times 3 + 6\times 0 & 4\times 6 + 6\times (-1)
\end{bmatrix} =
\begin{bmatrix}
-4 & -3 & -8\\
-12 & 12 & 18
\end{bmatrix}
$$
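We can verify this result numerically with NumPy's `@` operator (covered in more detail in section 1.4):

```python
import numpy as np

A = np.array([[-1, 2], [4, 6]])
B = np.array([[0, 3, 6], [-2, 0, -1]])
M = A @ B  # equivalent to np.matmul(A, B)
print(M)
# [[ -4  -3  -8]
#  [-12  12  18]]
```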
---
<img src="./media/what_if.jpg" width="500"/>
---
#### 1.1.3 Matrix multiplication and linear combinations
You should remember from SLU12 that linear combinations are things like $c\cdot \mathbf{u} + d\cdot \mathbf{v}$, where $\mathbf{u}$ and $\mathbf{v}$ are vectors, and $c$ and $d$ are real numbers (scalars).
Consider the matrices
$\mathbf{A} =
\begin{bmatrix}
a_{1,1} & a_{1,2}\\
a_{2,1} & a_{2,2}\\
\end{bmatrix}
$
and
$\mathbf{B} =
\begin{bmatrix}
b_{1,1} & b_{1,2}\\
b_{2,1} & b_{2,2}\\
\end{bmatrix}
$.
---
```
Instructor: - "Can we multiply A by B?"
Student: - "Yes!"
Instructor: - "Why?"
Student: - "Because the number of columns in A is equal to the number of rows in B!"
Instructor: - "Precisely!"
```
---
We already know that one way we could multiply $\mathbf{A}$ by $\mathbf{B}$ is to compute the dot product between each row vector in $\mathbf{A}$ and each column vector in $\mathbf{B}$ .
Think about column 1 of matrix $\mathbf{B}$. Let's say we're very tired and just want to compute the first column of the product $\mathbf{A}\mathbf{B}$ before taking a nap.
So let's pretend we lost the entries in column 2:
$
\begin{bmatrix}
a_{1,1} & a_{1,2}\\
a_{2,1} & a_{2,2}\\
\end{bmatrix}
\begin{bmatrix}
b_{1,1} & ?\\
b_{2,1} & ?\\
\end{bmatrix}
$
We'll replace every value we can't determine without the second column of $\mathbf{B}$ with a question mark, $?$.
What part of the final matrix can we fill with the available column?
$
\begin{bmatrix}
a_{1,1} & a_{1,2}\\
a_{2,1} & a_{2,2}\\
\end{bmatrix}
\begin{bmatrix}
b_{1,1} & ?\\
b_{2,1} & ?\\
\end{bmatrix} =
\begin{bmatrix}
a_{1,1}b_{1,1} + a_{1,2}b_{2,1} & a_{1,1} ? + a_{1,2} ?\\
a_{2,1}b_{1,1} + a_{2,2}b_{2,1} & a_{2,1} ? + a_{2,2} ?\\
\end{bmatrix} =
\begin{bmatrix}
a_{1,1}b_{1,1} + a_{1,2}b_{2,1} & ?\\
a_{2,1}b_{1,1} + a_{2,2}b_{2,1} & ?\\
\end{bmatrix}\;\;\;\;(1)
$
---
Using only the first column in $\mathbf{B}$, we were able to fill the first column of the output matrix. This is the same as having multiplied $\mathbf{A}$ by the column vector $\begin{bmatrix}b_{1,1}\\ b_{2,1}\end{bmatrix}$.
But what's really interesting is that if we had used only the second column of $\mathbf{B}$, we would have filled only the second column of the output matrix. 😮
What's actually happening in the output matrix is that we're creating **linear combinations** of the columns of $\mathbf{A}$.
Notice that we can rearrange the resulting column as follows:
$b_{1,1}a_{1,1} + b_{2,1}a_{1,2}$
$b_{1,1}a_{2,1} + b_{2,1}a_{2,2}$
If we consider the column vectors of $\mathbf{A}$, $\begin{bmatrix}a_{1,1}\\ a_{2,1}\end{bmatrix}$ and $\begin{bmatrix}a_{1,2}\\ a_{2,2}\end{bmatrix}$, we can see that what we're doing is simply summing $b_{1,1}\cdot \begin{bmatrix}a_{1,1}\\ a_{2,1}\end{bmatrix}$ with $b_{2,1}\cdot \begin{bmatrix}a_{1,2}\\ a_{2,2}\end{bmatrix}$ yielding:
$b_{1,1}\cdot \begin{bmatrix}a_{1,1}\\ a_{2,1}\end{bmatrix} + b_{2,1}\cdot \begin{bmatrix}a_{1,2}\\ a_{2,2}\end{bmatrix}$
which gets us back to the first column $(1)$ in our result matrix:
$
\begin{bmatrix}
a_{1,1}b_{1,1} + a_{1,2}b_{2,1}\\
a_{2,1}b_{1,1} + a_{2,2}b_{2,1}\\
\end{bmatrix}
$
---
So, when multiplying a matrix $\mathbf{A}$ by a matrix $\mathbf{B}$, you get a matrix in which each column is a linear combination of the column vectors of $\mathbf{A}$, using as scalars the entries of the corresponding column of $\mathbf{B}$.
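We can check this claim numerically with NumPy (which we'll meet properly in section 1.4); the two $2\times 2$ matrices below are made up just for this sketch:

```python
import numpy as np

# two made-up 2x2 matrices, just to test the claim numerically
A = np.array([[1, 3],
              [2, 5]])
B = np.array([[4, 0],
              [1, 2]])

C = np.matmul(A, B)

# build the first column of C as a linear combination of the columns of A,
# using the entries of the first column of B as the scalars
first_col = B[0, 0] * A[:, 0] + B[1, 0] * A[:, 1]

print(C[:, 0])    # first column of the product
print(first_col)  # the same vector, built as a linear combination
```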
---
<center>Generally speaking, the columns of the matrix product $\mathbf{A}\mathbf{B}$ are linear combinations of the columns of the first matrix, $\mathbf{A}$.</center>
<center>Similarly, the rows of the product $\mathbf{A}\mathbf{B}$ are linear combinations of the rows of the second matrix, $\mathbf{B}$.</center>
<br>
<img src="./media/surprised.gif" width="350"/>
<br>
<center>Take a moment to meditate on that, before moving on.</center>
<br>
<center>Oh, and here's a <a href="https://eli.thegreenplace.net/2015/visualizing-matrix-multiplication-as-a-linear-combination/">visual explanation</a> that might help you.</center>
---
> 📝 **Pen and paper exercise 1**: Grab a pen and a piece of paper and determine $\begin{bmatrix} 0\\ 1\end{bmatrix}\begin{bmatrix}-1 & 2\end{bmatrix}$
### 1.2 Properties of matrix multiplication
Don't worry about memorizing all those properties, just check them and save them for your reference:
$\;\;\text{1. }\;\; \mathbf{A}(\mathbf{B}\mathbf{C}) = (\mathbf{A}\mathbf{B})\mathbf{C}$
$\;\;\text{2. }\;\; \mathbf{A}(\mathbf{B}\pm \mathbf{C}) = \mathbf{A}\mathbf{B} \pm \mathbf{A}\mathbf{C}$
$\;\;\text{3. }\;\; (\mathbf{A}\pm \mathbf{B})\mathbf{C} = \mathbf{A}\mathbf{C} \pm \mathbf{B}\mathbf{C}$
$\;\;\text{4. }\;\; c(\mathbf{A}\mathbf{B}) = (c\mathbf{A})\mathbf{B}$
$\;\;\text{5. }\;\; \mathbf{A}\mathbf{0} = \mathbf{0}$ and $\mathbf{0}\mathbf{B} = \mathbf{0}$
$\;\;\text{6. }\;\; \mathbf{A}\mathbf{I} = \mathbf{A}$ and $\mathbf{I}\mathbf{A} = \mathbf{A}$
where $\mathbf{I}$ is the identity matrix and $\mathbf{0}$ is the zero matrix, just like we learned in SLU12. $c$ is a scalar.
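If you'd rather see these properties in action than take them on faith, here's a quick NumPy spot-check of properties 1 and 2, using three made-up matrices:

```python
import numpy as np

# made-up matrices for a quick numerical spot-check
A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[2, 0], [0, 2]])

# property 1 (associativity): A(BC) == (AB)C
print(np.array_equal(np.matmul(A, np.matmul(B, C)),
                     np.matmul(np.matmul(A, B), C)))  # True

# property 2 (distributivity): A(B + C) == AB + AC
print(np.array_equal(np.matmul(A, B + C),
                     np.matmul(A, B) + np.matmul(A, C)))  # True
```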
> 📝 **Pen and paper exercise 2**: Multiply $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ by $\begin{bmatrix}-1 & 2\\6 & 0\end{bmatrix}$. Do you get the result expected from property 6?
>
> Look at the rows of the second matrix and remember that the rows of the resulting matrix are linear combinations of the rows of the second matrix. Notice how the placement of the scalars $1$ and $0$ inside the identity matrix keeps the row vectors of the second matrix unchanged.
>
> Again, it's "mathmagic". All you've learned is connected.
### 1.3 Transpose multiplication rules
We've seen the transpose laws for addition and multiplication by a scalar in the last SLU (rules 1 and 2 below recap them). There is also a special rule for the transpose of a matrix product:
$\;\;\text{1. }\;\; (\mathbf{A}\pm \mathbf{B})^T = \mathbf{A}^T\pm \mathbf{B}^T$
$\;\;\text{2. }\;\; (c\mathbf{A})^T = c\mathbf{A}^T$
$\;\;\text{3. }\;\; (\mathbf{A} \mathbf{B})^T = \mathbf{B}^T \mathbf{A}^T$
> 📌 **Tip**: These will come in very handy when reading the equations of machine learning algorithms, and especially some mathematical "tricks" that are applied to get to a simple matrix form solution.
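Rule 3 is the one that surprises people the most (the order flips!), so let's verify it numerically with the same $\mathbf{A}$ and $\mathbf{B}$ from our previous example:

```python
import numpy as np

# the matrices from our previous example
A = np.array([[-1, 2],
              [4, 6]])
B = np.array([[0, 3, 6],
              [-2, 0, -1]])

left = np.matmul(A, B).T       # (AB)^T, shape (3, 2)
right = np.matmul(B.T, A.T)    # B^T A^T, also shape (3, 2)
print(np.array_equal(left, right))  # True
```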
### 1.4 Matrix multiplication using NumPy
#### 1.4.1 Multiplying matrices using `numpy.matmul()`
As always, NumPy has got us covered. We can multiply two matrices using `np.matmul()`:
```
# bring back matrices from our previous example above
# 2D array to represent A
A = np.array([[-1, 2],
[4, 6]])
# 2D array to represent B
B = np.array([[0, 3, 6],
[-2, 0, -1]])
# numpy.matmul() to multiply A by B
C = np.matmul(A, B)
C
```
If the resulting matrix above does not match the one we obtained in [our previous example](#1.1.2-Example:), you have a bad instructor.
On the [reference page for `numpy.matmul()`](https://numpy.org/doc/1.20/reference/generated/numpy.matmul.html) we can read the following on the "Notes" section:
<img src="./media/matmul.png" width="700"/>
There is a lot of info in there, but the key goal here is that you understand that when using 1-D arrays (which could happen if you're representing a vector, as we've seen in SLU12), the behaviour of the function will change. Let's check some examples:
**Using 2D arrays:**
```
# multiplying (2x1) column vector by (1x2) row vector using 2D arrays
np.matmul(np.array([[0],
[1]]), # 2D array
np.array([[-1, 2]])) # 2D array
```
**Using 1D arrays:**
Notice that when you use 1D arrays to represent vectors, NumPy cannot guess whether it's dealing with row or column vectors!! It will return the dot product:
```
# multiplying two vectors represented as 1D arrays -- NumPy returns the dot product
np.matmul(np.array([0, 1]), # 1D array
np.array([-1, 2])) # 1D array
```
**Using 2D and 1D arrays:**
As you can see below, although at first sight it looks like we're doing the right thing (multiplying column vector of dimension $2\times 1$ by row vector of dimension $1\times 2$), NumPy will get confused:
```
x = np.array([[0],[1]]) # 2D array, matrix 2x1 (column vector)
y = np.array([-1, 2]) # 1D array, row vector 1x2
try:
np.matmul(x, y)
except Exception as e:
print("ValueError:", e)
```
As we can read from the documentation page of `numpy.matmul()`:
> If the **second argument** is 1-D, it is promoted to a matrix by **appending** a 1 to its dimensions. After matrix multiplication the appended 1 is removed.
In this case, the shape of our second argument is `(2,)`, so it will be transformed under the hood to `(2, 1)`. That is not compatible with `(2, 1)` (the shape of our first argument) for matrix multiplication: the number of columns of the first array ($1$) doesn't match the number of rows of the second ($2$).
```
print("x.shape:", x.shape)
print("y.shape:", y.shape)
```
On the other hand, multiplying in the reverse order (`y` as first argument and `x` as second) will work. In this case, our first argument has shape `(2,)`, which will be converted to `(1,2)`, according to NumPy:
> If the **first argument** is 1-D, it is promoted to a matrix by **prepending** a 1 to its dimensions. After matrix multiplication the prepended 1 is removed.
Hence, NumPy will successfully multiply the `(1,2)` shaped array by the `(2,1)` shaped array (our second argument, `x`):
```
np.matmul(y, x)
```
Yes, I know, it's a bit confusing, but it's the price we have to pay to have one awesome library that does all the computations for us. It's worth the effort. 😌
> 📌 **Tip**: Just like before, notice how important the concept of array dimensions and shape is when dealing with NumPy arrays.
⚠️ **The `*` operator is not to be used for matrix multiplication!!**
```
# 2D array to represent A
A = np.array([[-1, 2],
[4, 6]])
# 2D array to represent B
B = np.array([[0, 3],
[-2, 0]])
# matrix multiplication
np.matmul(A, B)
```
The `*` when used between matrices (2D arrays) will be interpreted as element-wise multiplication and not matrix multiplication.
```
# element-wise multiplication
A * B
```
#### 1.4.2 Using `numpy.matmul()` versus `numpy.dot()`
Remember `numpy.dot()`, which we used to determine the scalar product (or dot product) between two vectors on the last SLU?
Well, we could use `numpy.dot()` as well, to perform the multiplication between two matrices.
```
np.dot(A, B)
```
NumPy advises you, **however**, to use `numpy.matmul()` in the case of 2-D arrays, as you can see in the documentation page of [`numpy.dot()`](https://numpy.org/doc/1.20/reference/generated/numpy.dot.html):
<img src="./media/dot_vs_matmul.png" width="600"/>
## 2. The inverse
The first question to ask yourself is: __"If a given matrix is square, is it invertible or not?"__
---
<img src="./media/best_question.gif" width="320"/>
---
A square matrix **may or may not be invertible**. It all (linearly) "depends".
So, here's the general definition of the inverse:
<a name="vector_def"></a>
<div class="alert alert-block alert-info">
<b>A matrix</b> $\mathbf{A}$, of size $m\times m$, <b>is invertible if there is a matrix</b> $\mathbf{B}$ such that $\mathbf{A}\mathbf{B} = \mathbf{B}\mathbf{A} = \mathbf{I}_m$. When it exists, $\mathbf{B}$ is unique, is called the inverse of $\mathbf{A}$, and is denoted by $\mathbf{A}^{-1}$.
<br>
<br>
If one can find a square matrix $\mathbf{B}$ that satisfies $\mathbf{B}\mathbf{A} = \mathbf{I}_m$, then $\mathbf{A}\mathbf{B} = \mathbf{I}_m$, that is, $\mathbf{B} = \mathbf{A}^{-1}$, and vice-versa.
</div>
### 2.1 Finding the inverse of a matrix
Bring to your memory (or go check that again) all you've learned so far on:
- linear independence of vectors; (SLU12)
- matrix multiplication in terms of linear combinations of columns. (SLU13 - this one!)
Once again, everything is interconnected.
---
Let's check a very simple example. Suppose you have a $2\times 2$ matrix $\mathbf{A}$:
$$\mathbf{A} =
\begin{bmatrix}
1 & 2\\
2 & 4
\end{bmatrix}
$$
```
Instructor: -"Dear student, what's the inverse of A?"
Student: -"I'm sorry but I can't tell you yet. The correct question is: A being square, is it invertible or not?"
Instructor: -"That's the best answer of all!"
```
We know that **if the square matrix** $\mathbf{A}$, of size $2\times 2$, is invertible, then we should be able to find $\mathbf{B}$ such that $\mathbf{B}\mathbf{A} = I_2$, or $\mathbf{A}\mathbf{B} = I_2$ ($2\times 2$ identity matrix).
Let's represent that as:
$$\begin{bmatrix}
1 & 2\\
2 & 4
\end{bmatrix}
\begin{bmatrix}
b_{1,1} & b_{1,2}\\
b_{2,1} & b_{2,2}
\end{bmatrix} \stackrel{?}{=}
\begin{bmatrix}
1 & 0\\
0 & 1
\end{bmatrix}
$$
<br>
Recall that "*__The columns of the product of two matrices__ are __combinations of the columns__ of the __first matrix.__*" If you're clueless, [reread section 1.1.3](#1.1.3-Matrix-multiplication-and-linear-combinations).
With that in mind, we can write the **first column** of $\mathbf{A} \mathbf{B}$ as the linear combination:
$$
b_{1,1}\cdot
\begin{bmatrix}
1\\
2
\end{bmatrix}
+
b_{2,1}\cdot
\begin{bmatrix}
2\\
4
\end{bmatrix}
$$
and the second as the linear combination:
$$
b_{1,2}\cdot
\begin{bmatrix}
1\\
2
\end{bmatrix}
+
b_{2,2}\cdot
\begin{bmatrix}
2\\
4
\end{bmatrix}
$$
**If the inverse exists**, we should be able to write:
$$
b_{1,1}\cdot
\begin{bmatrix}
1\\
2
\end{bmatrix}
+
b_{2,1}\cdot
\begin{bmatrix}
2\\
4
\end{bmatrix} =
\begin{bmatrix}
1\\
0
\end{bmatrix}
$$
and:
$$
b_{1,2}\cdot
\begin{bmatrix}
1\\
2
\end{bmatrix}
+
b_{2,2}\cdot
\begin{bmatrix}
2\\
4
\end{bmatrix} =
\begin{bmatrix}
0\\
1
\end{bmatrix}
$$
Notice something familiar there?...
The columns of $\mathbf{A}$ are actually two collinear vectors. If you multiply the first column by scalar $2$, you get the second column of $\mathbf{A}$.
Remember that if you have 2 collinear vectors in a 2D space, you'll get "stuck on the line"?
---
```Python
if student.current_thought == "I have no idea what she's talking about.":
print("You just made your instructor shed a tear. Please reread Learning Notebook 1 of SLU12, section on linear independence.")
elif student.current_thought == "I know what you're talking about!!":
print("2 collinear 2D vectors cannot define the space of all 2D vectors; but 2 non-collinear 2D vectors can!")
else:
print("Hello, World!")
```
---
The concept of linear (in)dependence is extremely important. Here we are, just about to use it again.
Applying the same concept to the columns of our matrix $\mathbf{A}$, we see that it is **impossible** to find some matrix $\mathbf{B}$ whose product with $\mathbf{A}$ yields the two non-collinear columns $\begin{bmatrix} 1\\ 0\end{bmatrix}$ and $\begin{bmatrix} 0\\ 1\end{bmatrix}$: the columns of $\mathbf{A}\mathbf{B}$ are linear combinations of the 2 collinear columns of $\mathbf{A}$, so they can never leave that line.
> 📌 **Tip**: Notice that each and every identity matrix is made up of $n$ vectors, each $n$-dimensional, which are all orthogonal to each other. Since they're not collinear, we can always find a linear combination with those $n$ vectors to represent any $n$-dimensional vector we want.
> *And that's why my favourite matrix is the identity.*
---
<img src="./media/identity_matrix.png"/>
---
❗️ **Key takeaway**: A square matrix is invertible **if and only if** none of its columns is a linear combination of the others.
❗️ **Singular matrix:** A square matrix that is **not invertible** is called a **singular matrix**.
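One way to check invertibility with NumPy (not covered in this SLU, but worth knowing) is [`numpy.linalg.matrix_rank()`](https://numpy.org/doc/1.20/reference/generated/numpy.linalg.matrix_rank.html): it counts how many linearly independent columns a matrix has, and a square matrix is invertible exactly when its rank equals its size:

```python
import numpy as np

A = np.array([[1, 2],
              [2, 4]])   # second column = 2 x first column (collinear!)
print(np.linalg.matrix_rank(A))  # 1 -> less than 2, so A is singular

B = np.array([[0, 2],
              [-4, 1]])  # non-collinear columns
print(np.linalg.matrix_rank(B))  # 2 -> full rank, so B is invertible
```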
> 📝 **Pen and paper exercise 3**: Consider the matrix $\;\;\mathbf{A} = \begin{bmatrix} 1 & 0 & 1\\ 0 & 1 & 1\\ 3 & -2 & a_{3,3}\end{bmatrix}$. Find **the value** $a_{3,3}$ for which $\mathbf{A}$ is **non-invertible** (singular). Do you think you could choose more than 1 value?
Let's call our geeky friend NumPy to determine the inverse for us, because our brains need a break.
---
### 2.2 Using NumPy to find the inverse of a matrix
*NumPy, dear NumPy, will you do the math for us?...*
Remember [`numpy.linalg`](https://numpy.org/doc/1.20/reference/routines.linalg.html) module?
Well, we can use it to determine the inverse of a matrix, with something called [`numpy.linalg.inv()`](https://numpy.org/doc/1.20/reference/generated/numpy.linalg.inv.html). Let's check it out.
Trying to determine the inverse of a non-invertible square matrix will result in an error, more specifically, a `LinAlgError` with the message `Singular matrix`. (*makes sense, right?*)
```
# a non-invertible square matrix
A = np.array([[1, 2],
[2, 4]])
A
# try to determine the inverse using NumPy
try:
np.linalg.inv(A)
except Exception as e: # NumPy is smart, so it throws an error because A is not invertible
print("LinAlgError:", e)
```
Now let's try to determine the inverse of an invertible matrix using NumPy:
```
# an invertible matrix
B = np.array([[0, 2],
[-4, 1]])
B_inv = np.linalg.inv(B)
B_inv
```
Thank you, NumPy!!
Finally, for the skeptics (*I know you're out there*), let's check that multiplying `B` by `B_inv` actually yields the identity! Oh, and by the way, the result will not always be exact, because NumPy approximates numbers (*no computer has infinite memory!*), so in some cases you could get a diagonal with values just slightly different from $1$.
```
# multiply B by B_inv
print("B multiplied by B_inv:\n", np.matmul(B, B_inv))
# multiply B_inv by B (because we're paranoid and insane, and doubt the Mathematicians)
print("B_inv multiplied by B:\n", np.matmul(B_inv, B))
```
---
```
All this questioning just invoked Gauss, Newton, Cayley, Hamilton and their fellow dead Mathemagicians from their graves:
-"How dare you doubt our findings on the workings of the inverse? You Math wannabes!!"
```
<img src="./media/mathmagicians.png"/>
```Python
(we will now shamefully run away to the following section)
```
---
By the way, the inverse of a matrix was the last missing piece we needed to be able to read the matrix form of the multiple linear regression solution:
$$\mathbf{\beta} = (\mathbf{X}^T\mathbf{X})^{-1}(\mathbf{X}^T\mathbf{y})$$
<br>
<center>😲😱😲😱</center>
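Just to connect the dots, here's a minimal sketch of that formula at work on made-up, perfectly linear data, so we know the answer in advance (intercept $1$, slope $2$):

```python
import numpy as np

# made-up data following y = 2*x + 1 exactly
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])  # design matrix with an intercept column
y = 2 * x + 1

# beta = (X^T X)^{-1} (X^T y)
XtX_inv = np.linalg.inv(np.matmul(X.T, X))
beta = np.matmul(XtX_inv, np.matmul(X.T, y))
print(beta)  # approximately [1. 2.] -> intercept 1, slope 2
```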
---
## 3. Additional NumPy methods
---
<img src="./media/new_friend.gif"/>
---
When it comes to the `ndarray` class, there are actually some pretty handy methods you can use to reorganize your data and calculate some metrics.
Let's meet these buddies.
### 3.1 `ndarray.max()` and `ndarray.min()`
The method [`ndarray.max()`](https://numpy.org/doc/1.20/reference/generated/numpy.ndarray.max.html) allows you to return the maximum value of an array **along** a chosen axis:
```
# a 3x3 square matrix
A = np.array([[1, -1, 0],
[2, -2, 3],
[6, 0, -1]])
A
# using axis=1 we get the maximum value per row of A
A.max(axis=1)
# using axis=0 we get the maximum value per column of A
A.max(axis=0)
# if we don't specify the axis, it will use the default value axis=None (for v1.20 of NumPy)
# meaning it will return the maximum value in the entire 2D array
A.max()
```
The method [`ndarray.min()`](https://numpy.org/doc/1.20/reference/generated/numpy.ndarray.min.html) works in a similar manner, but instead of returning maximum values it returns minimum values:
```
A # remember A
# using axis=1 we get the minimum value per row of A
A.min(axis=1)
# using axis=0 we get the minimum value per column of A
A.min(axis=0)
# if we don't specify the axis, it will use the default value axis=None (for v1.20 of NumPy)
# meaning it will return the minimum value in the entire 2D array
A.min()
```
```
Student: -"What if we have repeated values?"
Instructor: -"No idea!! Let's test it out!!"
```
Let's see what would happen with a 2D array `A` that has two maximum values (number `3`) on the last column:
```
# 2D array with 2 equal maximum values in the 3rd column
A = np.array([[1, -1, 3],
[2, 2, 3],
[6, 0, -1]])
# using axis=0 we get the maximum value per column of A
A.max(axis=0)
```
NumPy simply returns the maximum value; it makes no difference which of the two repeated `3`s it "picks", since they're equal.
```
Student: -"Ok, cool!"
```
### 3.2 `ndarray.sort()` and `numpy.sort()`
The method [`ndarray.sort()`](https://numpy.org/doc/1.20/reference/generated/numpy.ndarray.sort.html) allows us to sort an array **in-place**. We also need to choose the axis along which to sort:
```
A
A.sort(axis=1) # this will change the array in-place!!
A
```
Each row was sorted in ascending order.
❗️ **Notice that** we lose the integrity of the matrix (each row is sorted independently of the others, so our columns might change too!).
❗️ Also note that NumPy **changed your array** without you explicitly having to assign the sorted array to a certain variable!! Be careful when using this method.
**If you don't want your array to be changed in-place**, you can use [numpy.sort()](https://numpy.org/doc/1.20/reference/generated/numpy.sort.html) instead:
```
A = np.array([[1, -1, 3],
[2, 2, 3],
[6, 0, -1]])
np.sort(A) # sort along the last axis (default is axis=-1)
np.sort(A, axis=None) # sort the flattened array (array reshaped to 1 dimension)
np.sort(A, axis=0) # sort along the first axis
```
If you wanted to sort in reverse order, you would have to do a *mathmagical trick* like the following:
```
-np.sort(-A, axis=0) # sort along the first axis
```
❗️ NumPy is cool and all, but sometimes it can be very tricky... Remember that snippet, it will probably be useful.
### 3.3 `ndarray.sum()`
Finally, it is quite useful to find the sum of a matrix's columns and/or rows. We can do that with the method [`ndarray.sum()`](https://numpy.org/doc/1.20/reference/generated/numpy.ndarray.sum.html).
```
# recall A 'cause we have bad memory
A
# the default, axis=None, will sum all of the elements of the input array
print(A.sum())
# this also works for arrays with more than 2 dimensions
B = np.array([[[1], [2]], [[3], [4]]])
print(B.ndim)   # 3 dimensions
print(B.sum())  # 1 + 2 + 3 + 4 = 10
A.sum(axis=0)  # sum elements along axis=0 (sum the elements in each column)
A.sum(axis=1)  # sum elements along axis=1 (sum the elements in each row)
```
---
<center>Feeling confident about linear algebra? I hope so!</center>
<img src="./media/we_got_this.gif"/>
<br>
You might want to take a break now. Do some breathing exercises, eat a chocolate, get some rest...
I know I will.
---
## Time to make a choice...
Do you wish to dive deeper into linear algebra?
---
<img src="./media/choice.jpg" width="800"/>
---
🔴 [Click here to **discover the workings of the matrix**.](#4.-Systems-of-Linear-Equations) (optional sections)
🔵 [Click here to **use NumPy in "blissful" ignorance**.](#Reading-the-matrix-form-of-the-multiple-linear-regression-solution)
---
<center>Up up up!! Choose you must.</center>
---
## 4. Systems of Linear Equations
### (optional section)
---
You chose well.
<img src="./media/proud.gif"/>
---
### 4.1 School days
Remember studying systems of linear equations in Maths class? A simple linear system would look something like this:
$\left\{
\begin{align*}
x_{1} + 2x_{2} = 5\\
2x_{1} - x_{2} = 2\\
\end{align*}
\right.$
I know, you probably used to see an $x$ and a $y$ there. But using this notation will make things much clearer when we jump into the matrix form.
How did you learn to [solve this at school](https://www.mathsisfun.com/algebra/systems-linear-equations.html)? The way I learned it was to use the [substitution method](https://www.khanacademy.org/math/algebra-home/alg-system-of-equations/alg-solving-systems-of-equations-with-substitution/v/solving-linear-systems-by-substitution). It would look something like this:
**Old school step 1** - Use the first equation to solve for $x_1$:
$x_1 = 5 - 2x_2$
**Old school step 2** - Replace (*substitute*) $x_1$ in the second equation for $5 - 2x_2$ and find the value of $x_2$:
$2\times(5 - 2x_2) - x_2 = 2 \iff x_2 = \frac{8}{5}$
**Old school step 3** - Now you would just replace the value you found for $x_2$ in the first equation, and get $x_1 = \frac{9}{5}$.
Don't you feel nostalgic right now?
---
> 📝 **Pen and paper exercise 4**: Draw the lines described by the 2 equations of our linear system on the xy-plane; check that they intersect.
>
> If they didn't intersect, meaning they would be parallel, you would have two possibilities:
> - The system has no solution, meaning that the lines are parallel and distinct, **or**
> - The lines would coincide, that is, the two equations would represent the same line, thus the system would have infinite solutions.
### 4.2 Using matrices to represent systems of linear equations
We can represent our system:
$\left\{
\begin{align*}
x_1 + 2x_2 = 5\\
2x_1 - x_2 = 2\\
\end{align*}
\right.$
as a multiplication of a matrix $\mathbf{A}$ by a vector $\mathbf{x}$, as follows:
$\begin{bmatrix}
1 & 2\\
2 & -1
\end{bmatrix}
\begin{bmatrix}
x_1\\
x_2
\end{bmatrix} =
\begin{bmatrix}
5\\
2
\end{bmatrix}
$
If you don't believe me, compute the matrix product on the left side of the equation, and check that you get the same equations of our linear system.
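NumPy can also solve the matrix form directly: [`numpy.linalg.solve()`](https://numpy.org/doc/1.20/reference/generated/numpy.linalg.solve.html) (from the same `numpy.linalg` module we used for the inverse) finds the $\mathbf{x}$ such that $\mathbf{A}\mathbf{x} = \mathbf{b}$:

```python
import numpy as np

A = np.array([[1, 2],
              [2, -1]])
b = np.array([5, 2])

x = np.linalg.solve(A, b)
print(x)  # [1.8 1.6] -> x1 = 9/5, x2 = 8/5, matching the substitution method
```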
---
**Bridge to linear regression:**
In very simplistic terms, you can think of the matrix formulation of the linear regression as $\mathbf{y} = \mathbf{X}\mathbf{b}$, where you use your matrix input data, $\mathbf{X}$, and your known outcome values, $\mathbf{y}$, to find the **unknown** vector of coefficients, $\mathbf{b}$. Once you find $\mathbf{b}$, you can predict $\mathbf{y}$ for any new value of $\mathbf{X}$.
❗️ **Notice that the notation is different** from ours. In linear regression, $\mathbf{X}$ corresponds to $\mathbf{A}$, $\mathbf{b}$ to our unknown values $\mathbf{x}$, and $\mathbf{y}$ to $\mathbf{b}$.
---
❗️ *In "real life", we can't build "simple and perfect" systems of linear equations, like we do here, but understanding the basics will give you a good starting point.*
---
When you actually start looking into the Maths behind machine learning algorithms, you'll probably think to yourself:
"*I wish Calculus and Statistics were as easy as Linear Algebra. (sad face)*"
---
<center><i>Did someone just say 'Calculus' and 'Statistics'???</i></center>
<img src="./media/scream.gif" width="400"/>
<br>
<br>
<center>Yep, sooner or later, they will find you and they will haunt you. For now, let's just focus on strengthening your linear algebra skills.</center>
---
### 4.3 Solving systems of linear equations with Gauss-Jordan elimination
Remember that the matrix product $\mathbf{A}\mathbf{x}$ is a **linear combination of the columns in $\mathbf{A}$**:
$$\mathbf{b} = \begin{bmatrix}
a_{1,1}x_1 + a_{1,2}x_2 + ... + a_{1,n} x_n\\
a_{2,1}x_1 + a_{2,2}x_2 + ... + a_{2,n} x_n\\
\vdots\\
a_{m,1}x_1 + a_{m,2}x_2 + ... + a_{m,n} x_n\\
\end{bmatrix} = x_1
\begin{bmatrix}
a_{1,1}\\
a_{2,1}\\
\vdots\\
a_{m,1}\\
\end{bmatrix} + x_2
\begin{bmatrix}
a_{1,2}\\
a_{2,2}\\
\vdots\\
a_{m,2}\\
\end{bmatrix} + ... + x_n
\begin{bmatrix}
a_{1,n}\\
a_{2,n}\\
\vdots\\
a_{m,n}\\
\end{bmatrix}
$$
> 📌 **Tip**: __If matrix $\mathbf{A}$ is a square matrix ($m=n$) and is invertible__, we can always find an $\mathbf{x}$ for any given $\mathbf{b}$.
>
> It's just about determining $\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}$ (we'll get there).
You might remember that the **3 possible outcomes of a linear system** are:
1. the system has **no solution** (also called an **inconsistent** system);
2. the system has **a unique solution**;
3. the system has **an infinite number of solutions**.
#### 4.3.1 Walkthrough example
Consider the following system of linear equations (3 equations, 3 unknowns):
$\left\{
\begin{align*}
x_1 + 2x_2 + 3x_3 = 2\\
4x_1 + x_2 + 2x_3 = 3\\
x_1 + x_2 + x_3 = 1
\end{align*}
\right.$
We can write it in matrix form as
$\;\;\mathbf{A}\mathbf{x} = \mathbf{b} \iff \begin{bmatrix}
1 & 2 & 3\\
4 & 1 & 2\\
1 & 1 & 1\\
\end{bmatrix}
\begin{bmatrix}
x_1\\
x_2\\
x_3
\end{bmatrix} =
\begin{bmatrix}
2\\
3\\
1
\end{bmatrix}
$.
```
Instructor: -"What do you think? Will this system have (i) no solution, (ii) a unique solution or (iii) an infinite number of solutions?"
Student: -"No idea..."
Instructor: -"Haha me neither! Just made those numbers up, so let's check it out..."
```
**Gauss-Jordan elimination:**
The Gauss-Jordan elimination method solves a linear system in matrix form: we build an augmented matrix, $\mathbf{A} | \mathbf{b}$, and transform it using a restricted set of rules until we reach **row echelon form**, where **all entries below the diagonal are zeros and the first nonzero number in each row is $1$**.
If we can get to such a form, and if our matrix is square, then it means we have a unique solution, and we can continue reducing the left matrix all the way to the identity (the **reduced** row echelon form), getting the solution vector on the right side. Let's use this in our example.
---
**Step 1** - represent our system by an augmented matrix:
$\mathbf{A}|\mathbf{b} = \left[\begin{array}{ccc|c}
1 & 2 & 3 & 2\\
4 & 1 & 2 & 3\\
1 & 1 & 1 & 1
\end{array}\right]$
#### Elementary row operations (EROs):
Our goal is to reach the **row echelon form**. For that we can operate on our augmented matrix, using any number of the 3 elementary row operations:
- **ERO 1**: Interchange any two equations (rows in the augmented matrix);
- **ERO 2**: Multiply any equation (row) by a **nonzero** constant;
- **ERO 3**: Add a multiple of one equation (row) to another.
These rules are all just basic arithmetic you can perform on the equations, without changing the system's solution (if it exists).
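Each ERO maps directly to a NumPy row operation on the augmented matrix, so you can follow along (or check your own eliminations) in code. Here's a sketch of all three, ending with the $R_2 - 4R_1$ step we're about to perform by hand:

```python
import numpy as np

# the augmented matrix A|b from our example (floats, so divisions work)
M = np.array([[1., 2., 3., 2.],
              [4., 1., 2., 3.],
              [1., 1., 1., 1.]])

# ERO 1: interchange two rows (here: swap rows 1 and 3, then swap them back)
M[[0, 2]] = M[[2, 0]]
M[[0, 2]] = M[[2, 0]]

# ERO 2: multiply a row by a nonzero constant (here: halve row 3, then undo it)
M[2] = 0.5 * M[2]
M[2] = 2.0 * M[2]

# ERO 3: add a multiple of one row to another (R2 - 4*R1 -> R2)
M[1] = M[1] - 4 * M[0]
print(M[1])  # [  0.  -7. -10.  -5.]
```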
**Step 2** - find the row echelon form, using elementary row operations (EROs):
Because we like columns, let's perform the elimination column by column. If you look at our augmented matrix, we already have a $1$ as the first entry of the diagonal. Going along the first column, how can we turn that $4$ into a $0$, using EROs?
We could, for example, add the first row ($R_1$), multiplied by $-4$, to the second row ($R_2$):
$R_2 - 4 R_1 = \left[\begin{array}{ccc|c}4 & 1 & 2 & 3\end{array}\right] - 4\cdot \left[\begin{array}{ccc|c}1 & 2 & 3 & 2\end{array}\right] = \left[\begin{array}{ccc|c}0 & -7 & -10 & -5\end{array}\right]$
which we represent as:
$\left[\begin{array}{ccc|c}
1 & 2 & 3 & 2\\
4 & 1 & 2 & 3\\
1 & 1 & 1 & 1
\end{array}\right]\xrightarrow{R_2 - 4 R_1 \to R_2}
\left[\begin{array}{ccc|c}
1 & 2 & 3 & 2\\
0 & -7 & -10 & -5\\
1 & 1 & 1 & 1
\end{array}\right]$
Let's finish reducing column 1, using a similar strategy:
$\left[\begin{array}{ccc|c}
1 & 2 & 3 & 2\\
0 & -7 & -10 & -5\\
1 & 1 & 1 & 1
\end{array}\right]\xrightarrow{R_3 - R_1 \to R_3}
\left[\begin{array}{ccc|c}
1 & 2 & 3 & 2\\
0 & -7 & -10 & -5\\
0 & -1 & -2 & -1
\end{array}\right]
$
We're finished reducing our first column!
Let's now go to the second column. Remember we want to get to a matrix where all entries below the diagonal are zeros and all entries in the diagonal are ones. For that we need to turn that $-7$ in the diagonal of column 2 into a $1$. Take a moment to think about which row you could use to do that, without messing up column 1.
That's right, we can only use the third row. If we used the first, we would "undo" our work for the first column!
$\left[\begin{array}{ccc|c}
1 & 2 & 3 & 2\\
0 & -7 & -10 & -5\\
0 & -1 & -2 & -1
\end{array}\right]\xrightarrow{R_2 - 8 R_3 \to R_2}
\left[\begin{array}{ccc|c}
1 & 2 & 3 & 2\\
0 & 1 & 6 & 3\\
0 & -1 & -2 & -1
\end{array}\right]$
Next, you would reduce that $-1$ at the end of the second column to $0$, and then reduce the last element of the third column to $1$. Let's do both:
$\left[\begin{array}{ccc|c}
1 & 2 & 3 & 2\\
0 & 1 & 6 & 3\\
0 & -1 & -2 & -1
\end{array}\right]\xrightarrow{R_3 + R_2 \to R_3}
\left[\begin{array}{ccc|c}
1 & 2 & 3 & 2\\
0 & 1 & 6 & 3\\
0 & 0 & 4 & 2
\end{array}\right]\xrightarrow{\frac{1}{4}R_3 \to R_3}
\left[\begin{array}{ccc|c}
1 & 2 & 3 & 2\\
0 & 1 & 6 & 3\\
0 & 0 & 1 & \frac{1}{2}
\end{array}\right]
$
---
*Just like with a Rubik's cube... if you follow the algorithm, slowly but steadily, you start to see the solution appearing.*
---
By now, we can already see from the last row that we've found the value of $x_3$!
To get the entire solution vector $\mathbf{x}$, let's reduce the left side to the identity matrix!!
*One can only love the identity matrix...*
Let's continue using EROs to get to the identity matrix:
$\left[\begin{array}{ccc|c}
1 & 2 & 3 & 2\\
0 & 1 & 6 & 3\\
0 & 0 & 1 & \frac{1}{2}
\end{array}\right]\xrightarrow{R_1 - 2 R_2 \to R_1}
\left[\begin{array}{ccc|c}
1 & 0 & -9 & -4\\
0 & 1 & 6 & 3\\
0 & 0 & 1 & \frac{1}{2}
\end{array}\right]\xrightarrow{R_1 + 9 R_3 \to R_1}
\left[\begin{array}{ccc|c}
1 & 0 & 0 & \frac{1}{2}\\
0 & 1 & 6 & 3\\
0 & 0 & 1 & \frac{1}{2}
\end{array}\right]\xrightarrow{R_2 - 6 R_3 \to R_2}
\left[\begin{array}{ccc|c}
1 & 0 & 0 & \frac{1}{2}\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & \frac{1}{2}
\end{array}\right]$
That's it! We've found our identity matrix, and therefore the solution vector $\mathbf{x}$!! Beautiful. 😍
Rewriting this as a system of linear equations, the solution becomes:
$\left\{
\begin{align*}
x_1 = \frac{1}{2}\\
x_2 = 0\;\\
x_3 = \frac{1}{2}
\end{align*}
\right.$
From the 3 types of systems we saw before, this one corresponds to a **system with a unique solution**.
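If you'd rather not trust my hand calculations (fair), `numpy.linalg.solve()` can confirm the solution:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 1, 2],
              [1, 1, 1]])
b = np.array([2, 3, 1])

x = np.linalg.solve(A, b)
print(x)  # [0.5 0.  0.5] -- the same solution we found by hand
```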
---
🙋 *Spotted an error in the calculations? Awesome, [let your instructor know](https://github.com/LDSSA/ds-prep-course-2021/issues/new).*
---
I'm tired of writing so many matrices... your turn to do the work!! 😁
> 📝 **Pen and paper exercise 5**: Using the solution we found above, check that the result of $\mathbf{A}\mathbf{x}$ matches the expected $\begin{bmatrix}2\\3\\1\end{bmatrix}$.
### 4.4 The inverse was there all along
**Think about this for a minute:** the operations we did on $\mathbf{A}$ resulted in the identity matrix, $\mathbf{I}$. We know that if an $n\times n$ matrix $\mathbf{A}$ is invertible, then $\mathbf{A}\mathbf{A}^{-1} = \mathbf{I}_n$.
So, under the hood, the row operations we performed amount, collectively, to multiplying by $\mathbf{A}^{-1}$: they turned $\mathbf{A}$ into $\mathbf{I}$, so applying the exact same operations to $\mathbf{I}$ would turn it into $\mathbf{A}^{-1}$.
---
<img src="./media/shocked.gif" width="400"/>
---
```
(We are shaking some tombs here...)
Jordan: -"Oh you think that's interesting? You don't know half the story..."
Gauss: -"Is it 'your' story though?"
Chinese mathematicians: -"You're all a bunch of copycats."
Newton: -"Here here, have an apple."
```
[A bit of history never hurts.](https://en.wikipedia.org/wiki/Gaussian_elimination#History)
---
**Using Gauss-Jordan elimination to find the inverse:**
To find the inverse which was hidden all along throughout our Gauss-Jordan elimination, we simply need to insert the identity matrix into our augmented matrix, and include it in the ERO steps:
$$\left[\begin{array}{ccc|ccc|c}
1 & 2 & 3 & 1 & 0 & 0 & 2\\
4 & 1 & 2 & 0 & 1 & 0 & 3\\
1 & 1 & 1 & 0 & 0 & 1 & 1
\end{array}\right]$$
Performing Gauss-Jordan elimination like we did before, we would have:
$\left[\begin{array}{ccc|ccc|c}
1 & 2 & 3 & 1 & 0 & 0 & 2\\
4 & 1 & 2 & 0 & 1 & 0 & 3\\
1 & 1 & 1 & 0 & 0 & 1 & 1
\end{array}\right]\xrightarrow{R_2 - 4R_1\to R_2}
\left[\begin{array}{ccc|ccc|c}
1 & 2 & 3 & 1 & 0 & 0 & 2\\
0 & -7 & -10 & -4 & 1 & 0 & -5\\
1 & 1 & 1 & 0 & 0 & 1 & 1
\end{array}\right]$
$\xrightarrow{}\dots\xrightarrow{}
\left[\begin{array}{ccc|ccc|c}
1 & 0 & 0 & -\frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{2}\\
0 & 1 & 0 & -\frac{1}{2} & -\frac{1}{2} & \frac{5}{2} & 0\\
0 & 0 & 1 & \frac{3}{4} & \frac{1}{4} & -\frac{7}{4} & \frac{1}{2}
\end{array}\right] =
\mathbf{I} | \mathbf{A}^{-1} | \mathbf{x}
$
So the inverse of $\mathbf{A}$ is:
$\mathbf{A}^{-1} =
\begin{bmatrix}
-\frac{1}{4} & \frac{1}{4} & \frac{1}{4}\\
-\frac{1}{2} & -\frac{1}{2} & \frac{5}{2}\\
\frac{3}{4} & \frac{1}{4} & -\frac{7}{4}
\end{bmatrix}$
We didn't need to include the last column in order to find $\mathbf{A}^{-1}$. But now you see how to get the inverse and the solution (if they exist) at the same time!
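The same sequence of EROs can be sketched in code. The `gauss_jordan` helper below is a hypothetical illustration (not part of NumPy); it row-reduces the augmented matrix $[\mathbf{A}\,|\,\mathbf{I}\,|\,\mathbf{b}]$ from this section:

```python
import numpy as np

def gauss_jordan(aug):
    """Row-reduce an augmented matrix [A | I | b] into [I | A^{-1} | x]."""
    aug = aug.astype(float).copy()
    n = aug.shape[0]
    for col in range(n):
        # swap in the row with the largest pivot (partial pivoting)
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        aug[[col, pivot]] = aug[[pivot, col]]
        # scale the pivot row so the pivot entry becomes 1
        aug[col] /= aug[col, col]
        # eliminate the pivot column from every other row
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug

A = np.array([[1., 2., 3.], [4., 1., 2.], [1., 1., 1.]])
aug = np.hstack([A, np.eye(3), np.array([[2.], [3.], [1.]])])
reduced = gauss_jordan(aug)
A_inv, x = reduced[:, 3:6], reduced[:, 6:]
print(A_inv)
print(x)   # the solution vector, [1/2, 0, 1/2]
```

Note the partial pivoting (row swap): it is not required for this particular matrix, but it keeps the sketch working when a zero appears in a pivot position.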
---
❗️ **Key takeaways:**
> - If we had used any other vector $\mathbf{b}$ in $\mathbf{A}|\mathbf{b}$, we would still be able to find some $\mathbf{x}$ to solve the equation, and the inverse of $\mathbf{A}$ would always be the same.
>
> The fact that we can put any 3D vector in the place of $\mathbf{b}$, and still find an $\mathbf{x}$, stems from the fact that the $3\times 3$ square matrix $\mathbf{A}$ is invertible, which is due to its column vectors all being linearly independent (they can represent all 3D vectors through linear combinations).
>
> - If there were no way to transform our left matrix into the identity matrix using the row reduction process, we would say that $\mathbf{A}$ has no inverse, or that it is **singular**.
---
> 📝 **Pen and paper exercise 6**: Perform Gauss-Jordan elimination to find the solution, **if it exists**, and the inverse of $\mathbf{A}$ (**if it exists**), of the following system:
>
> $\;\;\mathbf{A}\mathbf{x} = \mathbf{b} \iff
\begin{bmatrix}
2 & 1 & -1\\
1 & -3 & 2\\
1 & 4 & -3
\end{bmatrix}
\mathbf{x} =
\begin{bmatrix}
7\\
1\\
5
\end{bmatrix}$
>
> What type of system is this? (unique solution, no solution, or infinite number of solutions?)
>
> Remember you can use any of the [allowed EROs we've learned about](#Elementary-row-operations-(EROs):).
### 4.5 Matrix invertibility *vs.* types of linear systems
For a linear system with $n$ equations and $n$ unknowns:
- If we have **a unique solution**, the square matrix $\mathbf{A}$ is **invertible**;
- If we have **an infinite number of solutions**, the square matrix $\mathbf{A}$ is **singular**;
- If we have **no solution**, the square matrix $\mathbf{A}$ is **singular** **and**, somewhere along the elimination steps, you'll get a row representing an **impossible equation** (something like $0=2$ or $0=-1$).
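These three cases can also be checked in code by comparing the rank of $\mathbf{A}$ with the rank of the augmented matrix $[\mathbf{A}|\mathbf{b}]$. The `classify_system` helper below is a hypothetical sketch of this idea, built on `np.linalg.matrix_rank`:

```python
import numpy as np

def classify_system(A, b):
    """Classify the square system A x = b by comparing matrix ranks."""
    n = A.shape[0]
    rank_A = np.linalg.matrix_rank(A)
    rank_aug = np.linalg.matrix_rank(np.hstack([A, b.reshape(-1, 1)]))
    if rank_A == n:
        return "unique solution (A is invertible)"
    if rank_A == rank_aug:
        return "infinite number of solutions (A is singular)"
    return "no solution (A is singular)"

# the system we solved with Gauss-Jordan elimination earlier
A = np.array([[1, 2, 3], [4, 1, 2], [1, 1, 1]])
b = np.array([2, 3, 1])
print(classify_system(A, b))   # unique solution (A is invertible)
```

You can also run it on the matrix from exercise 6 to check your pen-and-paper answer.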
```
## Uncomment the lines below to check the inverse (if it exists) of A in exercise 6
#A = np.array([[2, 1, -1], [1, -3, 2], [1, 4, -3]])
#np.linalg.inv(A)
```
---
<img src="./media/no_time.png"/>
---
### 4.6 Solving linear equations in NumPy
Let's see if we can solve the same system we did before, with a few lines of code and NumPy's amazing collection of linear algebra functions, [`numpy.linalg`](https://numpy.org/doc/1.20/reference/routines.linalg.html).
$\left\{
\begin{align*}
x_1 + 2x_2 + 3x_3 = 2\\
4x_1 + x_2 + 2x_3 = 3\\
x_1 + x_2 + x_3 = 1
\end{align*}
\right.$
---
**1 - (*super lazy method*) Use [`numpy.linalg.solve()`](https://numpy.org/doc/1.20/reference/generated/numpy.linalg.solve.html):**
```
# matrix of coefficients A
A = np.array([[1,2,3], [4,1,2], [1,1,1]]) # 2D array
# column vector b
b = np.array([[2], [3], [1]]) # 2D array
np.linalg.solve(A, b)
```
Exactly what we got when solving with Gauss-Jordan elimination!! *Just a hundred times faster...* 😅😅
---
**2 - (*not so lazy method*) Use `numpy.linalg.inv()` to find the inverse and then solve for $\mathbf{x}$:**
We can rearrange the equation $\mathbf{A}\mathbf{x} = \mathbf{b}$ by multiplying both sides of the equation by $\mathbf{A}^{-1}$: $\;\;\;\;\mathbf{A}^{-1}\mathbf{A}\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}$
We know that, **if $\mathbf{A}$ is invertible, then $\mathbf{A}^{-1}\mathbf{A} = \mathbf{I}$**, so we can simplify this equation to: $\;\;\;\;\mathbf{I}\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}$
We also know that the identity matrix does not change another matrix (or vector) when multiplied by it, therefore we can write: $\;\;\;\;\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}$
After having checked if $\mathbf{A}$ is invertible (which we do using `np.linalg.inv()`), we apply this equation in NumPy and get the vector $\mathbf{x}$:
```
# If A is invertible, Ax = b => x = A^{-1} b
# our matrix A (3x3)
A = np.array([[1,2,3],
[4,1,2],
[1,1,1]])
# our vector b (3x1)
b = np.array([[2],
[3],
[1]])
# the inverse of A (3x3)
A_inv = np.linalg.inv(A)
# the solution vector x (3x1)
x = np.matmul(A_inv, b)
# print solution
print("x = ", x)
```
Yep, that's the expected result.
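As a quick sanity check (a small sketch reusing the same $\mathbf{A}$ and $\mathbf{b}$), we can confirm that both methods agree and that the solution really satisfies $\mathbf{A}\mathbf{x} = \mathbf{b}$:

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 1, 2], [1, 1, 1]])
b = np.array([[2], [3], [1]])
x = np.linalg.inv(A) @ b

# multiplying A by the solution should reproduce b,
# and both methods should give the same x
print(np.allclose(A @ x, b))                  # True
print(np.allclose(x, np.linalg.solve(A, b)))  # True
```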
---
**For the math nerds**:
Check [this video](https://youtu.be/J7DzL2_Na80?list=PL221E2BBF13BECF6C) if you're curious to learn about the geometry of linear equations.
---
<img src="./media/almost.gif" width="400"/>
<center>Only one more section to go...</center>
---
## 5. Eigenvalues and eigenvectors
### (optional section)
The words **eigenvalue** and **eigenvector** come from the German *Eigen*, meaning "own", or "characteristic":
<a name="vector_def"></a>
<div class="alert alert-block alert-info">
If we obtain a scaled version of the <b>non-zero</b> vector $\mathbf{x}$, $\;\;\lambda\mathbf{x}\;$, when multiplying a square matrix $\mathbf{A}$ by that vector, $\;\;\mathbf{A}\mathbf{x}\;$, then we say that $\mathbf{x}$ is an <b>eigenvector</b> and $\lambda$ is an <b>eigenvalue</b>:
$$\mathbf{A}\mathbf{x} = \lambda\mathbf{x}$$
</div>
Think about what this means: multiplying by a single scalar gives you the same result as multiplying by an entire (and potentially huge) matrix. It's as if you could "magically" reduce the number of dimensions in your data matrix, simplifying your problem and removing a lot of computational burden.
So, reducing the dimensionality of our data seems pretty useful... but how do we find those two unknowns, $\mathbf{x}$ and $\lambda$?
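Before getting there, here is a minimal numeric illustration of the definition, using a made-up $2\times 2$ example:

```python
import numpy as np

A = np.array([[2, 0],
              [0, 3]])
x = np.array([1, 0])   # a candidate eigenvector

# A x returns the same vector scaled by 2, so x is an
# eigenvector of A with eigenvalue lambda = 2
print(A @ x)           # [2 0], i.e. 2 * x
```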
### 5.1 Enter the determinant
To be able to discover eigenvectors $\mathbf{x}\neq 0$ and eigenvalues $\lambda$, we need to first talk about the determinant.
The **determinant** of $\mathbf{A}$, denoted as $\det(\mathbf{A})$, or simply $|\mathbf{A}|$, is a special number that tells us **whether** a **square matrix** is **invertible** or **singular**.
For our purposes here, all you need to know is:
- If the determinant of a square matrix $\mathbf{A}$ is zero, $\det(\mathbf{A}) = 0$, then $\mathbf{A}$ is **singular**;
- If the determinant of a square matrix $\mathbf{A}$ is different from zero, $\det(\mathbf{A}) \neq 0$, then $\mathbf{A}$ is **invertible**.
Making the bridge to what we learned about systems of linear equations, we can also say that, for a square matrix $\mathbf{A}$, of size $n\times n$:
- If $\det(\mathbf{A}) = 0$, $\mathbf{A}$ is **singular** and the system **either** has **no solution**, or **an infinite number of solutions**;
- If $\det(\mathbf{A}) \neq 0$, $\mathbf{A}$ is **invertible** and the system has **a unique solution**.
#### 5.1.1 Calculating the determinant with NumPy
Let's use [`numpy.linalg.det()`](https://numpy.org/doc/1.20/reference/generated/numpy.linalg.det.html) to find the determinant of a matrix for us.
**(i) Determinant of a singular matrix**
```
# a singular matrix --> det(A) = 0
A = np.array([[1, 2],
[2, 4]])
np.linalg.det(A)
```
**(ii) Determinant of an invertible matrix**
```
# an invertible matrix --> det(B) not equal to 0
B = np.array([[2, -1, 6],
[1, 1, 2],
[3, 1, 2]])
np.linalg.det(B).round(2)
```
The inverse of `B` is:
```
# inverse of matrix B
np.linalg.inv(B)
```
**(iii) Determinant of the identity matrix, also an invertible matrix**
```
# the identity matrix is invertible --> determinant not equal to 0
I = np.array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
np.linalg.det(I)
```
The determinant of the identity matrix is always $1$. Unsurprisingly, the inverse of the identity matrix `I` is the identity matrix itself:
```
np.linalg.inv(I)
```
Honestly, who can resist the identity matrix?...
---
#### Oh and by the way, did you choose your [favourite matrix](https://www.youtube.com/watch?v=1DvlyNjfuzM) already?
<img src="./media/cupcakematrix.jpg" width="600"/>
<center>Time for matrix cupcakes!! :D</center>
---
### 5.2 Finding eigenvalues and eigenvectors
Let's bring back the equation for the eigenvectors and eigenvalues, $\mathbf{A}\mathbf{x} = \lambda\mathbf{x}$.
We can rearrange the equation to:
$$\mathbf{A}\mathbf{x}-\lambda\mathbf{x} = 0$$
Remember that we can write $\mathbf{I}\mathbf{x} = \mathbf{x}$, where $\mathbf{I}$ is the identity matrix:
$$\mathbf{A}\mathbf{x}-\lambda\mathbf{I}\mathbf{x} = 0$$
We can apply the multiplication properties we've learned on vectors, scalars and matrices and write:
$$(\mathbf{A}-\lambda\mathbf{I})\mathbf{x} = 0$$
Notice that what we have here is a system of linear equations, of the form $\mathbf{B}\mathbf{x} = \mathbf{b}$, where our vector $\mathbf{b}$ corresponds to the zero vector, and our matrix $\mathbf{B}$ corresponds to $\mathbf{A}-\lambda\mathbf{I}$. Indeed, for an $n\times n$ matrix $\mathbf{A}$, we could write this as a system of linear equations as follows:
$\left\{
\begin{array}{rcrcrcrcl}
(a_{1,1} - \lambda)x_1 & + &
a_{1,2}x_2 & + &
\dots & + &
a_{1,n}x_n & = &
0\\
a_{2,1}x_1 & + &
(a_{2,2}-\lambda)x_2 & + &
\dots & + &
a_{2,n}x_n & = &
0\\
\vdots & & \vdots & & \ddots & & \vdots & & \vdots\\
a_{n,1}x_1 & + &
a_{n,2}x_2 & + &
\dots & + &
(a_{n,n} - \lambda)x_n & = & 0
\end{array}
\right.$
The zero vector is obviously a solution to this system, **however**, from the definition of eigenvector, what we really want is some vector $\mathbf{x}\neq 0$.
---
Because $\lambda$ is unknown, we don't yet know what's in the matrix $\mathbf{A}-\lambda\mathbf{I}$. But since it is a square matrix, we know it either is invertible (thus it has nonzero determinant) or singular (thus having zero determinant).
If it is invertible, then the system can only have one solution, and that would be the zero vector. We therefore need it to be singular, so that we can find some other vector than the zero one. So we need to be able to find $\lambda$ for which the determinant is zero, $\det(\mathbf{A}-\lambda\mathbf{I})=0$, which we can rewrite as:
$$\left|\begin{array}{cccc}
a_{1,1}-\lambda & a_{1,2} & \dots & a_{1,n}\\
a_{2,1} & a_{2,2}-\lambda & \dots & a_{2,n}\\
\vdots & \vdots & \ddots & \vdots\\
a_{n,1} & a_{n,2} & \dots & a_{n,n}-\lambda
\end{array}\right| = 0
$$
The calculation of the determinant of an $n$-dimensional square matrix is out of scope here; just know that solving the equation above yields a **polynomial of degree $n$ in the variable $\lambda$**.
After finding each of the eigenvalues $\lambda_1, \lambda_2,\dots$ that solve the polynomial equation, one can find the corresponding eigenvectors by solving the linear system for each of the eigenvalues found.
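For small matrices, NumPy can compute the coefficients of this characteristic polynomial with `np.poly()`. A quick sketch, using a made-up singular $2\times 2$ matrix:

```python
import numpy as np

A = np.array([[1, 2],
              [2, 4]])

# coefficients of the degree-2 polynomial det(A - lambda I),
# approximately [1., -5., 0.]  ->  lambda^2 - 5 lambda
coeffs = np.poly(A)
print(coeffs)

# the roots of that polynomial are the eigenvalues: 5 and (numerically) 0
print(np.roots(coeffs))
```

Note that one eigenvalue is $0$, which is consistent with $\mathbf{A}$ being singular.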
**Number of eigenvalues for an $n\times n$ square matrix:**
- For a $2\times 2$ matrix, you can get at most 2 different eigenvalues;
- For a $3\times 3$ matrix, you can get at most 3 different eigenvalues;
And so on.
---
Although we won't go into further details, let's just check a very simple case, where we have a triangular matrix:
$$\left|\begin{array}{cccc}
a_{1,1}-\lambda & a_{1,2} & \dots & a_{1,n}\\
0 & a_{2,2}-\lambda & \dots & a_{2,n}\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \dots & a_{n,n}-\lambda
\end{array}\right| = 0
$$
In this case, our polynomial equation would come down to:
$$(a_{1,1}-\lambda)(a_{2,2}-\lambda)\dots(a_{n,n}-\lambda) = 0$$
Thus, in this particular case, the eigenvalues of $\mathbf{A}$ are just the entries of the diagonal of $\mathbf{A}$.
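We can quickly confirm this with NumPy, using a made-up upper-triangular matrix as a sketch:

```python
import numpy as np

# an upper-triangular matrix: its eigenvalues are its diagonal entries
T = np.array([[4, 1, 7],
              [0, 2, 5],
              [0, 0, 9]])
eigenvalues, _ = np.linalg.eig(T)
print(eigenvalues)   # the diagonal entries 4., 2. and 9. (possibly reordered)
```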
---
You can check a visual explanation of **Eigenvectors and Eigenvalues** [here](https://setosa.io/ev/eigenvectors-and-eigenvalues/).
---
<img src="./media/thats_a_lot_of_math.gif"/>
<br>
<br>
When working with lots and lots of data, we don't have the time to perform all those calculations by hand... so let's put NumPy to work and compute all the eigen-stuff.
---
### 5.3 Finding eigenvalues and eigenvectors with NumPy
We can find the eigenvectors and eigenvalues of a matrix using the function [`numpy.linalg.eig()`](https://numpy.org/doc/1.20/reference/generated/numpy.linalg.eig.html). Just a small snippet of code, to do all that math...
`numpy.linalg.eig()` takes a **square** array as input and returns:
- an array `w` with its eigenvalues;
- the normalized (unit “length”) eigenvectors, such that the column `v[:,i]` is an eigenvector corresponding to the eigenvalue `w[i]`.
```
# create a 3x3 matrix using a 2D numpy array
A = np.array([[1, 2, 3],
[3, -5, 3],
[6, -6, 4]])
# determine the eigenvalues and eigenvectors of matrix A
eigenvalues, eigenvectors = np.linalg.eig(A)
# display eigenvalues and eigenvectors
print("eigenvalues of A:", eigenvalues, "\n")
print("eigenvectors of A:\n", eigenvectors, "\n")
```
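As a sanity check (a small sketch reusing the same matrix), each returned pair should satisfy the defining equation $\mathbf{A}\mathbf{x} = \lambda\mathbf{x}$:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [3, -5, 3],
              [6, -6, 4]])
eigenvalues, eigenvectors = np.linalg.eig(A)

# each column eigenvectors[:, i] should satisfy A v = w[i] v
for i, w in enumerate(eigenvalues):
    v = eigenvectors[:, i]
    print(np.allclose(A @ v, w * v))   # True for every i
```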
The function `numpy.linalg.eig()` might return the eigenvalues and eigenvector entries as complex numbers...
It is indeed possible that you find eigenvalues and eigenvectors with an **imaginary** part. Remember that, to find the $\lambda$ for which $\det(\mathbf{A}-\lambda\mathbf{I})=0$, one usually needs to solve a polynomial equation, and some polynomial equations actually have **complex roots**.
---
<img src="./media/key_of_imagination.gif">
---
```
# create a 4x4 matrix using a 2D numpy array
B = np.array([[2, 0, 0, 0],
[0, 2, 0, 0],
[0, 0, 5, 0],
[0, 0, 0, 4]])
# determine the eigenvalues and eigenvectors of square matrix B
B_eigenvalues, B_eigenvectors = np.linalg.eig(B)
# display eigenvalues and eigenvectors
print("eigenvalues of B:", B_eigenvalues, "\n")
print("eigenvectors of B:\n", B_eigenvectors, "\n")
# create a 3x3 matrix using a 2D numpy array
C = np.array([[2, 4, 0],
[4, 2, 0],
[0, 0, 5]])
# determine the eigenvalues and eigenvectors of square matrix C
C_eigenvalues, C_eigenvectors = np.linalg.eig(C)
# display eigenvalues and eigenvectors
print("eigenvalues of C:", C_eigenvalues, "\n")
print("eigenvectors of C:\n", C_eigenvectors, "\n")
```
---
<img src="./media/mathematical_enlightenment.gif"/>
<br>
<center>This is it. You have reached the first stage of mathematical enlightenment.</center>
<center>Your life will never be the same again.</center>
<br>
<center><font size=4>😲😲</font></center>
<center>Well you chose, mathematically enlightened you've become.</center>
---
<br>
<br>
<br>
<center><font size=4>End of optional sections</font></center>
<br>
<br>
<br>
---
### Reading the matrix form of the multiple linear regression solution
At the beginning of SLU12, I told you you'd be able to read the matrix form of the general solution to the multiple [linear regression](https://en.wikipedia.org/wiki/Linear_regression) algorithm. Let's do this!
In linear regression, you want to find some function to relate an outcome $\mathbf{y}$ to other phenomena, or variables, such as $\mathbf{x_1}, \mathbf{x_2}, \dots, \mathbf{x_n}$, using a linear relation:
$$\mathbf{y} = \beta_1\mathbf{x_1} + \dots + \beta_n\mathbf{x_n}$$
You have a set of data (usually large) $\mathbf{X}$, a matrix where you store all your observation variables, and $\mathbf{y}$, a vector with the corresponding outcomes:
$$\begin{array}{c|cccc}
y&x_1&x_2&\dots&x_n\\
\hline
y_1&x_{1,1}&x_{1,2}&\dots&x_{1,n}\\
y_2&x_{2,1}&x_{2,2}&\dots&x_{2,n}\\
\dots&\dots&\dots&\dots&\dots\\
y_m&x_{m,1}&x_{m,2}&\dots&x_{m,n}\\
\end{array}$$
You want to find the values of the **coefficients** $\beta_1, \dots, \beta_n$, so that when you have a new observation, you can predict $\mathbf{y}$. But how do we find the "right" values of the coefficients?
We want the following equalities to be satisfied, *as much as possible*, all at once:
$$\begin{array}{l}
y_1=\beta_1x_{1,1}+\beta_2x_{1,2}+\dots+\beta_nx_{1,n}\\
y_2=\beta_1x_{2,1}+\beta_2x_{2,2}+\dots+\beta_nx_{2,n}\\
\qquad \qquad \dots\\
y_m=\beta_1x_{m,1}+\beta_2x_{m,2}+\dots+\beta_nx_{m,n}\end{array}$$
<br>
---
This is a system of linear equations, thus we can write it in the form:
$$\mathbf{X}\mathbf{\beta} = \mathbf{y}$$
**Notice that** our $\mathbf{A}$ is now called $\mathbf{X}$, our $\mathbf{x}$ is now called $\mathbf{\beta}$, and our $\mathbf{b}$ is now called $\mathbf{y}$.
---
Actually, you can't find a *perfect* solution to this in real life. What you'll actually do is try to find a *solution* that is as good as possible. And Mathematics tells us that the best possible solution is given by the equality:
$$\mathbf{\beta} = (\mathbf{X}^T\mathbf{X})^{-1}(\mathbf{X}^T\mathbf{y})$$
provided that the columns in $\mathbf{X}$ are linearly independent. That's right, because now you know that $\mathbf{X}^T\mathbf{X}$ must be non-singular for the inverse to exist!!
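As a sketch of how this equality is applied in code (the data below is synthetic, and `np.linalg.lstsq` is shown as the numerically safer alternative to forming the inverse explicitly):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                          # 100 observations, 3 variables
true_beta = np.array([1.5, -2.0, 0.5])
y = X @ true_beta + rng.normal(scale=0.1, size=100)    # noisy outcomes

# beta = (X^T X)^{-1} (X^T y), valid when the columns of X are independent
beta = np.linalg.inv(X.T @ X) @ (X.T @ y)
print(beta.round(2))   # close to [1.5, -2., 0.5]

# in practice, np.linalg.lstsq solves the same problem more stably
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
```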
---
⚠️ **Disclaimer:** This is **a simplistic view** of the multiple linear regression algorithm. But as you become more and more comfortable with linear algebra, the easier it will be to read and make sense of matrix equations.
<br>
```
Some senior data scientist: -"Thank you for that disclaimer."
Instructor: -"No problem."
Student: -"Hey guys, will I become insane like all the Mathematicians out there, if I do data science?"
Instructor: -"You will, but only if you do it the right way..."
```
---
<img src="./media/math_religion.png" width="750"/>
---
## Wrapping up
What we've learned in this notebook:
- how to compute the product between two matrices, or a matrix and a vector;
- how to determine the inverse of a matrix using NumPy;
- useful `ndarray` methods: `.min()`, `.max()`, `.sum()` and `.sort()`;
- (for the ones that opted in) solving systems of linear equations and finding the inverse of a matrix, eigenvalues and eigenvectors.
---
### Resources on Linear Algebra:
- [**3Blue1Brown**](https://www.youtube.com/watch?v=fNk_zzaMoSs&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab) YouTube Playlist with an intuitive and visual introduction to Linear Algebra;
- [**YouTube Playlist for MIT 18.06SC Linear Algebra, Fall 2011**](https://www.youtube.com/watch?v=7UJ4CFRGd-U&list=PL221E2BBF13BECF6C);
### Resources on NumPy:
* [**NumPy v1.20 Quickstart tutorial**](https://numpy.org/doc/1.20/user/quickstart.html) NumPy's official tutorial;
---
| github_jupyter |
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Neural style transfer
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://tensorflow.google.cn/tutorials/generative/style_transfer"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />View on tensorflow.google.cn</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/generative/style_transfer.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/generative/style_transfer.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/generative/style_transfer.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />Download notebook</a>
  </td>
</table>
Note: Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate and up to date with the latest
[official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the
[tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the
[docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).
This tutorial uses deep learning to compose one image in the style of another image (ever wish you could paint like Picasso or Van Gogh?). This is known as *neural style transfer*, and the technique is outlined in <a href="https://arxiv.org/abs/1508.06576" class="external">A Neural Algorithm of Artistic Style</a> (Gatys et al.).
Note: This tutorial demonstrates the original style-transfer algorithm, which optimizes the image content to a particular style. Modern approaches train a model to generate the stylized image directly (similar to [cyclegan](cyclegan.ipynb)), and they are much faster (up to 1000x) than this original method. Pretrained [Arbitrary Image Stylization modules](https://colab.sandbox.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_arbitrary_image_stylization.ipynb) are available in [TensorFlow Hub](https://tensorflow.google.cn/hub) and [TensorFlow Lite](https://tensorflow.google.cn/lite/models/style_transfer/overview).
Neural style transfer is an optimization technique used to take two images, a *content* image and a *style reference* image (such as an artwork by a famous painter), and blend them together so the output image looks like the content image, but "painted" in the style of the style reference image.
This is implemented by optimizing the output image to match the content statistics of the content image and the style statistics of the style reference image. These statistics are extracted from the images using a convolutional network.
For example, let's take this photo of a dog and Wassily Kandinsky's Composition 7:
<img src="https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg" width="500px"/>
[Yellow Labrador Looking](https://commons.wikimedia.org/wiki/File:YellowLabradorLooking_new.jpg), from Wikimedia Commons
<img src="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/generative/images/kadinsky.jpg?raw=1" style="width: 500px;"/>
Now, how would it look if Kandinsky decided to paint this dog exclusively in this style? Something like this?
<img src="https://tensorflow.google.cn/tutorials/generative/images/stylized-image.png" style="width: 500px;"/>
## Setup
### Import and configure modules
```
import tensorflow as tf
import IPython.display as display
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12,12)
mpl.rcParams['axes.grid'] = False
import numpy as np
import PIL.Image
import time
import functools
def tensor_to_image(tensor):
tensor = tensor*255
tensor = np.array(tensor, dtype=np.uint8)
if np.ndim(tensor)>3:
assert tensor.shape[0] == 1
tensor = tensor[0]
return PIL.Image.fromarray(tensor)
```
Download the images and choose a style image and a content image:
```
content_path = tf.keras.utils.get_file('YellowLabradorLooking_new.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg')
# https://commons.wikimedia.org/wiki/File:Vassily_Kandinsky,_1913_-_Composition_7.jpg
style_path = tf.keras.utils.get_file('kandinsky5.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/Vassily_Kandinsky%2C_1913_-_Composition_7.jpg')
```
## Visualize the input
Define a function to load an image and limit its maximum dimension to 512 pixels.
```
def load_img(path_to_img):
max_dim = 512
img = tf.io.read_file(path_to_img)
img = tf.image.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
shape = tf.cast(tf.shape(img)[:-1], tf.float32)
long_dim = max(shape)
scale = max_dim / long_dim
new_shape = tf.cast(shape * scale, tf.int32)
img = tf.image.resize(img, new_shape)
img = img[tf.newaxis, :]
return img
```
Create a simple function to display an image:
```
def imshow(image, title=None):
if len(image.shape) > 3:
image = tf.squeeze(image, axis=0)
plt.imshow(image)
if title:
plt.title(title)
content_image = load_img(content_path)
style_image = load_img(style_path)
plt.subplot(1, 2, 1)
imshow(content_image, 'Content Image')
plt.subplot(1, 2, 2)
imshow(style_image, 'Style Image')
```
## Fast style transfer using TF-Hub
This tutorial demonstrates the original style-transfer algorithm, which optimizes the image content to a particular style. Before getting into the details, let's see how the [TensorFlow Hub](https://tensorflow.google.cn/hub) module does fast style transfer:
```
import tensorflow_hub as hub
hub_module = hub.load('https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/1')
stylized_image = hub_module(tf.constant(content_image), tf.constant(style_image))[0]
tensor_to_image(stylized_image)
```
## Define content and style representations
Use the intermediate layers of the model to get the *content* and *style* representations of the image. Starting from the network's input layer, the first few layer activations represent low-level features like edges and textures. As you step through the network, the final few layers represent higher-level features: object parts like *wheels* or *eyes*. In this tutorial, we use the VGG19 network architecture, a pretrained image classification network. These intermediate layers are necessary to define the representation of content and style from the images. For an input image, we try to match the corresponding style and content target representations at these intermediate layers.
Load a [VGG19](https://keras.io/applications/#vgg19) and test-run it on our image to make sure it works correctly:
```
x = tf.keras.applications.vgg19.preprocess_input(content_image*255)
x = tf.image.resize(x, (224, 224))
vgg = tf.keras.applications.VGG19(include_top=True, weights='imagenet')
prediction_probabilities = vgg(x)
prediction_probabilities.shape
predicted_top_5 = tf.keras.applications.vgg19.decode_predictions(prediction_probabilities.numpy())[0]
[(class_name, prob) for (number, class_name, prob) in predicted_top_5]
```
Now load a `VGG19` without the classification head, and list the layer names:
```
vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
print()
for layer in vgg.layers:
print(layer.name)
```
Choose intermediate layers from the network to represent the style and content of the image:
```
# Content layer where we will pull our feature maps
content_layers = ['block5_conv2']
# Style layers we are interested in
style_layers = ['block1_conv1',
'block2_conv1',
'block3_conv1',
'block4_conv1',
'block5_conv1']
num_content_layers = len(content_layers)
num_style_layers = len(style_layers)
```
#### Intermediate layers for style and content
So why do these intermediate outputs within our pretrained image classification network allow us to define representations of style and content?
At a high level, in order for a network to perform image classification (which this network has been trained to do), it must understand the image. This requires taking the raw image as input pixels and building an internal representation that converts the raw image pixels into a complex understanding of the features present within the image.
This is also a reason why convolutional neural networks are able to generalize well: they are able to capture the invariances and defining features within classes (e.g. cats vs. dogs) that are agnostic to background noise and other nuisances. Thus, somewhere between where the raw image is fed into the model and the classification label is output, the model serves as a complex feature extractor. By accessing intermediate layers of the model, we can describe the content and style of input images.
## Build the model
The networks in `tf.keras.applications` let us conveniently extract the intermediate layer values using the Keras functional API.
To define a model using the functional API, specify the inputs and outputs:
`model = Model(inputs, outputs)`
The following function builds a VGG19 model that returns a list of intermediate layer outputs:
```
def vgg_layers(layer_names):
""" Creates a vgg model that returns a list of intermediate output values."""
# Load our model. Load a VGG pretrained on ImageNet data
vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
vgg.trainable = False
outputs = [vgg.get_layer(name).output for name in layer_names]
model = tf.keras.Model([vgg.input], outputs)
return model
```
Then create the model:
```
style_extractor = vgg_layers(style_layers)
style_outputs = style_extractor(style_image*255)
# Look at the statistics of each layer's output
for name, output in zip(style_layers, style_outputs):
print(name)
print(" shape: ", output.numpy().shape)
print(" min: ", output.numpy().min())
print(" max: ", output.numpy().max())
print(" mean: ", output.numpy().mean())
print()
```
## Calculate style
The content of an image is represented by the values of the intermediate feature maps.
It turns out, the style of an image can be described by the means and correlations across the different feature maps. A Gram matrix that includes this information can be calculated by taking the outer product of the feature vector with itself at each location, and averaging that outer product over all locations. For a particular layer, the Gram matrix is calculated as:
$$G^l_{cd} = \frac{\sum_{ij} F^l_{ijc}(x)F^l_{ijd}(x)}{IJ}$$
This can be implemented concisely using the `tf.linalg.einsum` function:
```
def gram_matrix(input_tensor):
result = tf.linalg.einsum('bijc,bijd->bcd', input_tensor, input_tensor)
input_shape = tf.shape(input_tensor)
num_locations = tf.cast(input_shape[1]*input_shape[2], tf.float32)
return result/(num_locations)
```
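If TensorFlow is not at hand, the same contraction can be sketched in plain NumPy; the `gram_matrix_np` helper and the feature-map tensor below are made-up illustrations:

```python
import numpy as np

def gram_matrix_np(feature_maps):
    """NumPy version of the einsum above: outer product averaged over positions."""
    result = np.einsum('bijc,bijd->bcd', feature_maps, feature_maps)
    num_locations = feature_maps.shape[1] * feature_maps.shape[2]
    return result / num_locations

fmap = np.random.rand(1, 4, 4, 8)   # batch of 1, 4x4 spatial grid, 8 channels
print(gram_matrix_np(fmap).shape)   # (1, 8, 8): one 8x8 Gram matrix
```

Because the Gram matrix is an averaged outer product of a vector with itself, each result is symmetric.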
## Extract style and content
Build a model that returns the style and content tensors.
```
class StyleContentModel(tf.keras.models.Model):
def __init__(self, style_layers, content_layers):
super(StyleContentModel, self).__init__()
self.vgg = vgg_layers(style_layers + content_layers)
self.style_layers = style_layers
self.content_layers = content_layers
self.num_style_layers = len(style_layers)
self.vgg.trainable = False
def call(self, inputs):
"Expects float input in [0,1]"
inputs = inputs*255.0
preprocessed_input = tf.keras.applications.vgg19.preprocess_input(inputs)
outputs = self.vgg(preprocessed_input)
style_outputs, content_outputs = (outputs[:self.num_style_layers],
outputs[self.num_style_layers:])
style_outputs = [gram_matrix(style_output)
for style_output in style_outputs]
content_dict = {content_name:value
for content_name, value
in zip(self.content_layers, content_outputs)}
style_dict = {style_name:value
for style_name, value
in zip(self.style_layers, style_outputs)}
return {'content':content_dict, 'style':style_dict}
```
When called on an image, this model returns the Gram matrix (style) of the `style_layers` and the content of the `content_layers`:
```
extractor = StyleContentModel(style_layers, content_layers)
results = extractor(tf.constant(content_image))
style_results = results['style']
print('Styles:')
for name, output in sorted(results['style'].items()):
print(" ", name)
print(" shape: ", output.numpy().shape)
print(" min: ", output.numpy().min())
print(" max: ", output.numpy().max())
print(" mean: ", output.numpy().mean())
print()
print("Contents:")
for name, output in sorted(results['content'].items()):
print(" ", name)
print(" shape: ", output.numpy().shape)
print(" min: ", output.numpy().min())
print(" max: ", output.numpy().max())
print(" mean: ", output.numpy().mean())
```
## Run gradient descent
With this style and content extractor, we can now implement the style-transfer algorithm. We do this by calculating the mean square error between each image's output and its target, then taking a weighted sum of these losses.
Set your style and content target values:
```
style_targets = extractor(style_image)['style']
content_targets = extractor(content_image)['content']
```
Define a `tf.Variable` to contain the image to optimize. To make this quick, initialize it with the content image (the `tf.Variable` must have the same shape as the content image):
```
image = tf.Variable(content_image)
```
Since this is a float image, define a function to keep the pixel values between 0 and 1:
```
def clip_0_1(image):
return tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=1.0)
```
Create an optimizer. This tutorial recommends LBFGS, but `Adam` works okay, too:
```
opt = tf.optimizers.Adam(learning_rate=0.02, beta_1=0.99, epsilon=1e-1)
```
To optimize this, use a weighted combination of the two losses to get the total loss:
```
style_weight=1e-2
content_weight=1e4
def style_content_loss(outputs):
style_outputs = outputs['style']
content_outputs = outputs['content']
style_loss = tf.add_n([tf.reduce_mean((style_outputs[name]-style_targets[name])**2)
for name in style_outputs.keys()])
style_loss *= style_weight / num_style_layers
content_loss = tf.add_n([tf.reduce_mean((content_outputs[name]-content_targets[name])**2)
for name in content_outputs.keys()])
content_loss *= content_weight / num_content_layers
loss = style_loss + content_loss
return loss
```
Use `tf.GradientTape` to update the image.
```
@tf.function()
def train_step(image):
with tf.GradientTape() as tape:
outputs = extractor(image)
loss = style_content_loss(outputs)
grad = tape.gradient(loss, image)
opt.apply_gradients([(grad, image)])
image.assign(clip_0_1(image))
```
Now run a few steps to test:
```
train_step(image)
train_step(image)
train_step(image)
tensor_to_image(image)
```
Since it's working, perform a longer optimization:
```
import time
start = time.time()
epochs = 10
steps_per_epoch = 100
step = 0
for n in range(epochs):
for m in range(steps_per_epoch):
step += 1
train_step(image)
print(".", end='')
display.clear_output(wait=True)
display.display(tensor_to_image(image))
print("Train step: {}".format(step))
end = time.time()
print("Total time: {:.1f}".format(end-start))
```
## Total variation loss
One downside to this basic implementation is that it produces a lot of high-frequency artifacts. We can decrease these directly by regularizing the high-frequency components of the image. In style transfer, this is often called the *total variation loss*:
```
def high_pass_x_y(image):
x_var = image[:,:,1:,:] - image[:,:,:-1,:]
y_var = image[:,1:,:,:] - image[:,:-1,:,:]
return x_var, y_var
x_deltas, y_deltas = high_pass_x_y(content_image)
plt.figure(figsize=(14,10))
plt.subplot(2,2,1)
imshow(clip_0_1(2*y_deltas+0.5), "Horizontal Deltas: Original")
plt.subplot(2,2,2)
imshow(clip_0_1(2*x_deltas+0.5), "Vertical Deltas: Original")
x_deltas, y_deltas = high_pass_x_y(image)
plt.subplot(2,2,3)
imshow(clip_0_1(2*y_deltas+0.5), "Horizontal Deltas: Styled")
plt.subplot(2,2,4)
imshow(clip_0_1(2*x_deltas+0.5), "Vertical Deltas: Styled")
```
This shows how the high-frequency components have increased.
Also, this high-frequency component is essentially an edge detector. We can get similar output from the Sobel edge detector, for example:
```
plt.figure(figsize=(14,10))
sobel = tf.image.sobel_edges(content_image)
plt.subplot(1,2,1)
imshow(clip_0_1(sobel[...,0]/4+0.5), "Horizontal Sobel-edges")
plt.subplot(1,2,2)
imshow(clip_0_1(sobel[...,1]/4+0.5), "Vertical Sobel-edges")
```
The regularization loss associated with this is the sum of the absolute values of these differences:
```
def total_variation_loss(image):
x_deltas, y_deltas = high_pass_x_y(image)
return tf.reduce_sum(tf.abs(x_deltas)) + tf.reduce_sum(tf.abs(y_deltas))
total_variation_loss(image).numpy()
```
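A NumPy sketch of the same computation, run on a made-up single-pixel image so the result is easy to check by hand:

```python
import numpy as np

def total_variation_np(image):
    """Sum of absolute horizontal and vertical pixel differences (NumPy sketch)."""
    x_deltas = image[:, :, 1:, :] - image[:, :, :-1, :]
    y_deltas = image[:, 1:, :, :] - image[:, :-1, :, :]
    return np.abs(x_deltas).sum() + np.abs(y_deltas).sum()

img = np.zeros((1, 3, 3, 1))
img[0, 1, 1, 0] = 1.0            # a single bright pixel
print(total_variation_np(img))   # 4.0: two horizontal + two vertical jumps
```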
This demonstrated what the total variation loss does. But there's no need to implement it yourself, because TensorFlow includes a standard implementation:
```
tf.image.total_variation(image).numpy()
```
## Re-run the optimization
Choose a weight for the `total_variation_loss`:
```
total_variation_weight=30
```
Now include it in the `train_step` function:
```
@tf.function()
def train_step(image):
with tf.GradientTape() as tape:
outputs = extractor(image)
loss = style_content_loss(outputs)
loss += total_variation_weight*tf.image.total_variation(image)
grad = tape.gradient(loss, image)
opt.apply_gradients([(grad, image)])
image.assign(clip_0_1(image))
```
Reinitialize the optimization variable:
```
image = tf.Variable(content_image)
```
And run the optimization:
```
import time
start = time.time()
epochs = 10
steps_per_epoch = 100
step = 0
for n in range(epochs):
for m in range(steps_per_epoch):
step += 1
train_step(image)
print(".", end='')
display.clear_output(wait=True)
display.display(tensor_to_image(image))
print("Train step: {}".format(step))
end = time.time()
print("Total time: {:.1f}".format(end-start))
```
Finally, save the result:
```
file_name = 'stylized-image.png'
tensor_to_image(image).save(file_name)
try:
from google.colab import files
except ImportError:
pass
else:
files.download(file_name)
```
| github_jupyter |
# Run MPNN interface design on the bound states
### Imports
```
%load_ext lab_black
# Python standard library
from glob import glob
import os
import socket
import sys
# 3rd party library imports
import dask
import matplotlib.pyplot as plt
import pandas as pd
import pyrosetta
import numpy as np
import scipy
import seaborn as sns
from tqdm.auto import tqdm # jupyter compatible progress bar
tqdm.pandas() # link tqdm to pandas
# Notebook magic
# save plots in the notebook
%matplotlib inline
# reloads modules automatically before executing cells
%load_ext autoreload
%autoreload 2
print(f"running in directory: {os.getcwd()}") # where are we?
print(f"running on node: {socket.gethostname()}") # what node are we on?
```
### Set working directory to the root of the crispy_shifty repo
TODO set to projects dir
```
os.chdir("/home/pleung/projects/crispy_shifty")
# os.chdir("/projects/crispy_shifty")
```
### Run MPNN on the interfaces
TODO
```
from crispy_shifty.utils.io import gen_array_tasks
simulation_name = "02_mpnn_bound_states"
design_list_file = os.path.join(
os.getcwd(), "projects/crispy_shifties/01_loop_bound_states/looped_states.list"
)
output_path = os.path.join(os.getcwd(), f"projects/crispy_shifties/{simulation_name}")
options = " ".join(
[
"out:level 200",
]
)
gen_array_tasks(
distribute_func="crispy_shifty.protocols.mpnn.mpnn_bound_state",
design_list_file=design_list_file,
output_path=output_path,
queue="medium",
memory="4G",
nstruct=1,
nstruct_per_task=1,
options=options,
simulation_name=simulation_name,
)
!sbatch -a 1-$(cat /mnt/home/pleung/projects/crispy_shifty/projects/crispy_shifties/02_mpnn_bound_states/tasks.cmds | wc -l) /mnt/home/pleung/projects/crispy_shifty/projects/crispy_shifties/02_mpnn_bound_states/run.sh
```
### Collect scorefiles of designed bound states and concatenate
TODO change to projects dir
```
sys.path.insert(0, os.path.expanduser("~/projects/crispy_shifty"))  # TODO
from crispy_shifty.utils.io import collect_score_file
simulation_name = "02_mpnn_bound_states"
output_path = os.path.join(os.getcwd(), f"projects/crispy_shifties/{simulation_name}")
if not os.path.exists(os.path.join(output_path, "scores.json")):
collect_score_file(output_path, "scores")
```
### Load resulting concatenated scorefile
TODO change to projects dir
```
sys.path.insert(0, os.path.expanduser("~/projects/crispy_shifty"))  # TODO
from crispy_shifty.utils.io import parse_scorefile_linear
output_path = os.path.join(os.getcwd(), f"projects/crispy_shifties/{simulation_name}")
scores_df = parse_scorefile_linear(os.path.join(output_path, "scores.json"))
scores_df = scores_df.convert_dtypes()
```
### Setup for plotting
```
sns.set(
context="talk",
font_scale=1, # make the font larger; default is pretty small
style="ticks", # make the background white with black lines
palette="colorblind", # a color palette that is colorblind friendly!
)
```
### Data exploration
Gonna remove the Rosetta sfxn scoreterms for now
```
from crispy_shifty.protocols.design import beta_nov16_terms
scores_df = scores_df[
[term for term in scores_df.columns if term not in beta_nov16_terms]
]
print(len(scores_df))
print(list(scores_df.columns))
```
### Save individual fastas
TODO change to projects dir
```
sys.path.insert(0, os.path.expanduser("~/projects/crispy_shifty"))  # TODO
from crispy_shifty.utils.io import df_to_fastas
output_path = os.path.join(os.getcwd(), f"projects/crispy_shifties/{simulation_name}")
scores_df = df_to_fastas(scores_df, prefix="mpnn_seq")
```
### Save a list of outputs
```
simulation_name = "02_mpnn_bound_states"
output_path = os.path.join(os.getcwd(), f"projects/crispy_shifties/{simulation_name}")
with open(os.path.join(output_path, "mpnn_states.list"), "w") as f:
for path in tqdm(scores_df.index):
print(path, file=f)
```
### Concat the pdb.bz2 and fasta paths into a single list, for reasons
```
simulation_name = "02_mpnn_bound_states"
output_path = os.path.join(os.getcwd(), f"projects/crispy_shifties/{simulation_name}")
with open(os.path.join(output_path, "mpnn_states.pair"), "w") as f:
for path in tqdm(scores_df.index):
line = path + "____" + path.replace("decoys", "fastas").replace("pdb.bz2", "fa")
print(line, file=f)
```
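The pairing above is a pure string substitution, so it can be sketched in isolation (the example path is made up; real paths follow the `decoys`/`fastas` layout implied by the `replace` calls):

```python
def pair_line(pdb_path):
    """Join a decoy path with its corresponding fasta path, '____'-separated."""
    fasta_path = pdb_path.replace("decoys", "fastas").replace("pdb.bz2", "fa")
    return pdb_path + "____" + fasta_path

example = "out/decoys/0000/design_0.pdb.bz2"
print(pair_line(example))
# out/decoys/0000/design_0.pdb.bz2____out/fastas/0000/design_0.fa
```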
### Prototyping blocks
test `mpnn_bound_state`
```
%%time
import pyrosetta
pyrosetta.init()
sys.path.insert(0, os.path.expanduser("~/projects/crispy_shifty/"))  # TODO projects
from crispy_shifty.protocols.mpnn import mpnn_bound_state
t = mpnn_bound_state(
None,
**{
'pdb_path': '/mnt/home/pleung/projects/crispy_shifty/projects/crispy_shifties/01_loop_bound_states/decoys/0001/01_loop_bound_states_17f57e75865441a78a0057fb8081b4de.pdb.bz2',
}
)
for i, tppose in enumerate(t):
tppose.pose.dump_pdb(f"{i}.pdb")
tppose.pose.scores
import pyrosetta.distributed.viewer as viewer
ppose = pyrosetta.distributed.io.pose_from_file("test.pdb")
view = viewer.init(ppose, window_size=(1600, 1200))
view.add(viewer.setStyle())
view.add(viewer.setStyle(colorscheme="whiteCarbon", radius=0.10))
view.add(viewer.setHydrogenBonds())
view.add(viewer.setHydrogens(polar_only=True))
view.add(viewer.setDisulfides(radius=0.25))
view()
```
<table>
<tr align=left><td><img align=left src="./images/CC-BY.png">
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli</td>
</table>
Note: The presentation below largely follows part II in "Finite Difference Methods for Ordinary and Partial Differential Equations" by LeVeque (SIAM, 2007).
```
from __future__ import print_function
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
```
# Numerical Solution to ODE Initial Value Problems - Part 1
Many physical, biological, and societal systems can be written as a system of ordinary differential equations (ODEs). In the case where the initial state (value) is known, the problem can be written as
$$
\frac{\text{d} \vec{\!u}}{\text{d}t} = \vec{\!f}(t, \vec{\!u}) \quad \vec{\!u}(0) = \vec{\!u}_0
$$
where
- $\vec{\!u}(t)$ is the state vector
- $\vec{\!f}(t, \vec{\!u})$ is a vector-valued function that controls the growth of $\vec{\!u}$ with time
- $\vec{\!u}(0)$ is the initial condition at time $t = 0$
#### Examples: Simple radioactive decay
$$
\vec{\!u} = [c]
$$
$$
\frac{\text{d} c}{\text{d}t} = \lambda c \quad c(0) = c_0
$$
which has solutions of the form $c(t) = c_0 e^{\lambda t}$
```
t = numpy.linspace(0.0, 1.6e3, 100)
c_0 = 1.0
decay_constant = -numpy.log(2.0) / 1600.0
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, 1.0 * numpy.exp(decay_constant * t))
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel('t (years)')
axes.set_ylabel('$c$')
axes.set_xlim((0.0, 1650))
axes.set_ylim((0.5,1.0))
plt.show()
```
#### Examples: Complex radioactive decay (or chemical system).
Chain of decays from one species to another.
$$\begin{aligned}
\frac{\text{d} c_1}{\text{d}t} &= -\lambda_1 c_1 \\
\frac{\text{d} c_2}{\text{d}t} &= \lambda_1 c_1 - \lambda_2 c_2 \\
\frac{\text{d} c_3}{\text{d}t} &= \lambda_2 c_2 - \lambda_3 c_3
\end{aligned}$$
$$\frac{\text{d} \vec{\!u}}{\text{d}t} = \frac{\text{d}}{\text{d}t}\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} =
\begin{bmatrix}
-\lambda_1 & 0 & 0 \\
\lambda_1 & -\lambda_2 & 0 \\
0 & \lambda_2 & -\lambda_3
\end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix}$$
$$\frac{\text{d} \vec{\!u}}{\text{d}t} = A \vec{\!u}$$
For linear systems of equations like this the general solution to the ODE is given by the matrix exponential:
$$\vec{\!u}(t) = e^{A t} \vec{\!u}_0$$
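This can be checked numerically with `scipy.linalg.expm`; a minimal sketch for the three-species chain (the decay constants are made up, with $\lambda_3 = 0$ so that the total concentration is conserved as a sanity check):

```python
import numpy as np
from scipy.linalg import expm

lam1, lam2, lam3 = 0.5, 0.2, 0.0  # made-up decay constants; lam3 = 0 conserves mass
A = np.array([[-lam1,   0.0,   0.0],
              [ lam1, -lam2,   0.0],
              [ 0.0,   lam2, -lam3]])
u0 = np.array([1.0, 0.0, 0.0])

t = 3.0
u = expm(A * t) @ u0   # u(t) = e^{A t} u_0

print(u)
print(u.sum())         # total concentration stays 1 when lam3 = 0
```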
#### Examples: Particle tracking in a fluid
$$\frac{\text{d} \vec{\!X}}{\text{d}t} = \vec{\!V}(t, \vec{\!X})$$
In fact all ODE IVP systems can be thought of as tracking particles through a flow field (dynamical system). In 1-dimension the flow "manifold" we are on is fixed by the initial condition.
#### Examples: Van der Pol Oscillator
$$y'' - \mu (1 - y^2) y' + y = 0 \quad \quad \text{with} \quad \quad y(0) = y_0, \quad y'(0) = v_0$$
$$\vec{\!u} = \begin{bmatrix} y \\ y' \end{bmatrix} = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}$$
$$\frac{\text{d}}{\text{d}t} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} u_2 \\ \mu (1 - u_1^2) u_2 - u_1 \end{bmatrix} = \vec{\!f}(t, \vec{\!u})$$
```
import scipy.integrate as integrate
def f(t, u, mu=5):
return numpy.array([u[1], mu * (1.0 - u[0]**2) * u[1] - u[0]])
# N = 100
N = 500
# N = 1000
t = numpy.linspace(0.0, 100, N)
u = numpy.empty((2, N))
u[:, 0] = [0.1, 0.0]
integrator = integrate.ode(f)
integrator.set_integrator("dopri5")
integrator.set_initial_value(u[:, 0])
for (n, t_n) in enumerate(t[1:]):
integrator.integrate(t_n)
if not integrator.successful():
break
u[:, n + 1] = integrator.y
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, u[0,:])
axes.set_title("Solution to Van der Pol Oscillator")
axes.set_xlabel("t")
axes.set_ylabel("y(t)")
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(u[0,:], u[1, :])
axes.plot(u[0,:], u[1, :], 'ro')
axes.set_title("Phase Diagram for Van der Pol Oscillator")
axes.set_xlabel("y(t)")
axes.set_ylabel("y'(t)")
plt.show()
```
## Basic Stepping Schemes
Introducing some notation to simplify things
$$\begin{aligned}
t_0 &= 0 \\
t_1 &= t_0 + \Delta t \\
t_n &= t_{n-1} + \Delta t = n \Delta t + t_0 \\
u_0 &= u(t_0) \approx U_0 \\
u_1 &= u(t_1) \approx U_1 \\
u_n &= u(t_n) \approx U_n \\
\end{aligned}$$
where lower-case letters are "exact".
Looking back at our work on numerical differentiation why not approximate the derivative as a finite difference:
$$
\frac{u(t + \Delta t) - u(t)}{\Delta t} = f(t, u)
$$
We still need to decide how to evaluate the $f(t, u)$ term however.
Let us look at this from a perspective of quadrature, take the integral of both sides:
$$\begin{aligned}
\int^{t + \Delta t}_t \frac{\text{d} u}{\text{d}\tilde{\!t}} d\tilde{\!t} &= \int^{t + \Delta t}_t f(\tilde{\!t}, u(\tilde{\!t})) d\tilde{\!t} \\
u(t + \Delta t) - u(t) &\approx \Delta t f(t, u(t))
\end{aligned}$$
where we have used a left-sided quadrature rule for the integral on the right.
We can rewrite our scheme
$$
u(t + \Delta t) - u(t) = \Delta t f(t, u(t))
$$
as
$$
\frac{U_{n+1} - U_n}{\Delta t} = f(t_n, U_n)
$$
or
$$
U_{n+1} = U_n + \Delta t f(t_n, U_n)
$$
which is known as the *forward Euler method*. In essence we approximate the derivative using the value of $f$ at the current point $t_n$.
```
t = numpy.linspace(0.0, 1.6e3, 100)
c_0 = 1.0
decay_constant = -numpy.log(2.0) / 1600.0
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, c_0 * numpy.exp(decay_constant * t), label="True Solution")
# Plot Euler step
dt = 1e3
u_np = c_0 + dt * (decay_constant * c_0)
axes.plot((0.0, dt), (c_0, u_np), 'k')
axes.plot((dt, dt), (u_np, c_0 * numpy.exp(decay_constant * dt)), 'k--')
axes.plot((0.0, 0.0), (c_0, u_np), 'k--')
axes.plot((0.0, dt), (u_np, u_np), 'k--')
axes.text(400, u_np - 0.05, '$\Delta t$', fontsize=16)
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel('t (years)')
axes.set_ylabel('$c$')
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5,1.0))
plt.show()
c_0 = 1.0
decay_constant = -numpy.log(2.0) / 1600.0
f = lambda t, u: decay_constant * u
t_exact = numpy.linspace(0.0, 1.6e3, 100)
u_exact = c_0 * numpy.exp(decay_constant * t_exact)
# Implement Forward Euler
t_euler = numpy.linspace(0.0, 1.6e3, 10)
delta_t = t_euler[1] - t_euler[0]
u_euler = numpy.empty(t_euler.shape)
u_euler[0] = c_0
for (n, t_n) in enumerate(t_euler[:-1]):
u_euler[n + 1] = u_euler[n] + delta_t * f(t_n, u_euler[n])
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_euler, u_euler, 'or', label="Euler")
axes.plot(t_exact, u_exact, 'k--', label="True Solution")
axes.set_title("Forward Euler")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c(t)$")
axes.set_ylim((0.4,1.1))
axes.legend()
plt.show()
```
A similar method can be derived if we consider instead using the second order accurate central difference:
$$\frac{U_{n+1} - U_{n-1}}{2\Delta t} = f(t_{n}, U_{n})$$
this method is known as the *leap-frog* method. Note that as written the method requires the solution value from the previous step $U_{n-1}$, making it technically a "multi-step" method, even though the function $f$ is only evaluated at the current step.
```
t = numpy.linspace(0.0, 1.6e3, 100)
c_0 = 1.0
decay_constant = -numpy.log(2.0) / 1600.0
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, c_0 * numpy.exp(decay_constant * t), label="True Solution")
# Plot Leap-Frog step
dt = 1e3
u_np = c_0 + dt * (decay_constant * c_0 * numpy.exp(decay_constant * dt / 2.0))
axes.plot((0.0, dt), (c_0, u_np), 'k')
axes.plot((dt, dt), (u_np, c_0 * numpy.exp(decay_constant * dt)), 'k--')
axes.plot((0.0, 0.0), (c_0, u_np), 'k--')
axes.plot((0.0, dt), (u_np, u_np), 'k--')
axes.text(400, u_np - 0.05, '$\Delta t$', fontsize=16)
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel('t (years)')
axes.set_ylabel('$c$')
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5,1.0))
plt.show()
c_0 = 1.0
N = 25
# Stable example
decay_constant = -numpy.log(2.0) / 1600.0
t_exact = numpy.linspace(0.0, 1.6e3, 100)
t_leapfrog = numpy.linspace(0.0, 1.6e3, N)
# Unstable example
# decay_constant = -1.0
# t_exact = numpy.linspace(0.0, 5.0, 100)
# t_leapfrog = numpy.linspace(0.0, 5.0, N)
f = lambda t, u: decay_constant * u
u_exact = c_0 * numpy.exp(decay_constant * t_exact)
# Implement leap-frog
delta_t = t_leapfrog[1] - t_leapfrog[0]
u_leapfrog = numpy.empty(t_leapfrog.shape)
u_leapfrog[0] = c_0
u_leapfrog[1] = u_leapfrog[0] + delta_t * f(t_leapfrog[0], u_leapfrog[0])
for n in range(1, t_leapfrog.shape[0] - 1):
u_leapfrog[n + 1] = u_leapfrog[n - 1] + 2.0 * delta_t * f(t_leapfrog[n], u_leapfrog[n])
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_leapfrog, u_leapfrog, 'or-', label="Leap-Frog")
axes.plot(t_exact, u_exact, 'k--', label="True Solution")
axes.set_title("Leap-Frog")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c(t)$")
axes.legend()
plt.show()
```
Similar to forward Euler is the *backward Euler* method which, as you may have guessed, evaluates the function $f$ at the updated time so that
$$
U_{n+1} = U_n + \Delta t f(t_{n+1}, U_{n+1}).
$$
Schemes where the function $f$ is evaluated at the unknown time are called *implicit methods*.
For some cases we can solve the equation before hand. For instance in the case of our example problem we have:
$$
U_{n+1} = U_n + \Delta t f(t_{n+1}, U_{n+1}) = U_n + \Delta t (\lambda U_{n+1})
$$
which can be solved for $U_{n+1}$ to find
$$\begin{aligned}
U_{n+1} &= U_n + \Delta t (\lambda U_{n+1}) \\
U_{n+1} \left[ 1 - \Delta t \lambda \right ] &= U_n \\
U_{n+1} &= \frac{U_n}{1 - \Delta t \lambda}
\end{aligned}$$
It's also useful to be able to do this in the case of systems of ODEs. Let $f(U) = A U$, then
$$\begin{aligned}
U_{n+1} &= U_n + \Delta t (A U_{n+1}) \\
U_{n+1} \left [ I - \Delta t A \right ] &= U_n \\
U_{n+1} &= \left [ I - \Delta t A \right]^{-1} U_n
\end{aligned}$$
In general however we are often not able to do this with arbitrary $f$.
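The linear-system form is straightforward to implement without ever forming the inverse explicitly; a minimal sketch that solves $(I - \Delta t A) U_{n+1} = U_n$ with `numpy.linalg.solve` at each step (the $2 \times 2$ matrix and step size are made up):

```python
import numpy as np

A = np.array([[-1.0,  0.0],
              [ 1.0, -0.5]])   # made-up linear decay chain
u = np.array([1.0, 0.0])
delta_t = 0.1
I = np.eye(2)

for n in range(100):
    # Solve (I - dt A) u_{n+1} = u_n rather than inverting the matrix
    u = np.linalg.solve(I - delta_t * A, u)

print(u)  # both components decay toward zero
```

Solving the linear system each step is both cheaper and better conditioned than computing $\left[I - \Delta t A\right]^{-1}$ once and multiplying, unless very many steps with a fixed $\Delta t$ are needed.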
```
t = numpy.linspace(0.0, 1.6e3, 100)
c_0 = 1.0
decay_constant = -numpy.log(2.0) / 1600.0
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, c_0 * numpy.exp(decay_constant * t), label="True Solution")
# Plot Euler step
dt = 1e3
u_np = c_0 + dt * (decay_constant * c_0 * numpy.exp(decay_constant * dt))
axes.plot((0.0, dt), (c_0, u_np), 'k')
axes.plot((dt, dt), (u_np, c_0 * numpy.exp(decay_constant * dt)), 'k--')
axes.plot((0.0, 0.0), (c_0, c_0 * numpy.exp(decay_constant * dt)), 'k--')
axes.plot((0.0, dt), (c_0 * numpy.exp(decay_constant * dt), c_0 * numpy.exp(decay_constant * dt)), 'k--')
axes.text(400, u_np - 0.05, '$\Delta t$', fontsize=16)
axes.set_title("Radioactive Decay with $t_{1/2} = 1600$ years")
axes.set_xlabel('t (years)')
axes.set_ylabel('$c$')
axes.set_xlim(-1e2, 1.6e3)
axes.set_ylim((0.5,1.0))
plt.show()
c_0 = 1.0
decay_constant = -numpy.log(2.0) / 1600.0
f = lambda t, u: decay_constant * u
t_exact = numpy.linspace(0.0, 1.6e3, 100)
u_exact = c_0 * numpy.exp(decay_constant * t_exact)
# Implement backwards Euler
t_backwards = numpy.linspace(0.0, 1.6e3, 10)
delta_t = t_backwards[1] - t_backwards[0]
u_backwards = numpy.empty(t_backwards.shape)
u_backwards[0] = c_0
for n in range(0, t_backwards.shape[0] - 1):
u_backwards[n + 1] = u_backwards[n] / (1.0 - decay_constant * delta_t)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_backwards, u_backwards, 'or', label="Backwards Euler")
axes.plot(t_exact, u_exact, 'k--', label="True Solution")
axes.set_title("Backwards Euler")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c(t)$")
axes.set_ylim((0.4,1.1))
axes.legend()
plt.show()
```
Another simple implicit method is based on integration using the trapezoidal method. The scheme is
$$
\frac{U_{n+1} - U_{n}}{\Delta t} = \frac{1}{2} (f(U_n) + f(U_{n+1}))
$$
In this case what is the update scheme?
$$\begin{aligned}
U_{n+1} &= U_{n} + \frac{\Delta t}{2} (f(U_n) + f(U_{n+1})) \\
U_{n+1} &= U_{n} + \frac{\Delta t}{2} (\lambda U_n + \lambda U_{n+1}) \\
U_{n+1} \left[1 - \frac{\Delta t \lambda}{2} \right] &= U_{n} \left[1 + \frac{\Delta t \lambda}{2} \right] \\
U_{n+1} &= U_{n} \frac{1 + \frac{\Delta t \lambda}{2}}{1 - \frac{\Delta t \lambda}{2}} \\
\end{aligned}$$
```
c_0 = 1.0
decay_constant = -numpy.log(2.0) / 1600.0
t_exact = numpy.linspace(0.0, 1.6e3, 100)
u_exact = c_0 * numpy.exp(decay_constant * t_exact)
# Implement trapezoidal method
t = numpy.linspace(0.0, 1.6e3, 10)
delta_t = t[1] - t[0]
u = numpy.empty(t.shape)
u[0] = c_0
integration_constant = (1.0 + decay_constant * delta_t / 2.0) / (1.0 - decay_constant * delta_t / 2.0)
for n in range(t.shape[0] - 1):
u[n + 1] = u[n] * integration_constant
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, u, 'or', label="Trapezoidal")
axes.plot(t_exact, u_exact, 'k--', label="True Solution")
axes.set_title("Trapezoidal")
axes.set_xlabel("t (years)")
axes.set_ylabel("$c(t)$")
axes.set_ylim((0.4,1.1))
axes.legend()
plt.show()
```
## Error Analysis of ODE Methods
At this point it is also helpful to introduce more notation to distinguish between the true solution to the ODE $u(t_n)$ and the approximated value which we will denote $U_n$.
**Definition:** We define the *truncation error* of a scheme by replacing the $U_n$ with the true solution $u(t_n)$ in the finite difference formula and looking at the difference from the exact solution.
For example we will use the difference form of forward Euler
$$
\frac{U_{n+1} - U_n}{\Delta t} = f(t_n, U_n)
$$
and define the truncation error as
$$
T(t, u; \Delta t) = \frac{u(t_{n+1}) - u(t_n)}{\Delta t} - f(t_n, u(t_n)).
$$
**Definition:** A method is called *consistent* if
$$
\lim_{\Delta t \rightarrow 0} T(t, u; \Delta t) = 0.
$$
**Definition:** We say that a method is *order* $p$ accurate if
$$
\lVert T(t, u; \Delta t) \rVert \leq C \Delta t^p
$$
uniformly on $t \in [0, T]$. This can also be written as $T(t, u; \Delta t) = \mathcal{O}(\Delta t^p)$. Note that a method is consistent if $p > 0$.
### Error Analysis of Forward Euler
We can analyze the error and convergence order of forward Euler by considering the Taylor series centered at $t_n$:
$$
u(t) = u(t_n) + (t - t_n) u'(t_n) + \frac{u''(t_n)}{2} (t - t_n)^2 + \mathcal{O}((t-t_n)^3)
$$
Evaluating this series at $t_{n+1}$ gives
$$\begin{aligned}
u(t_{n+1}) &= u(t_n) + (t_{n+1} - t_n) u'(t_n) + \frac{u''(t_n)}{2} (t_{n+1} - t_n)^2 + \mathcal{O}((t_{n+1}-t_n)^3)\\
&=u_n + \Delta t f(t_n, u_n) + \frac{u''(t_n)}{2} \Delta t^2 + \mathcal{O}(\Delta t^3)
\end{aligned}$$
From the definition of truncation error we can use our Taylor series expression to find the truncation error. Take the finite difference form of forward Euler
$$
\frac{U_{n+1} - U_n}{\Delta t} = f(t_n, U_n)
$$
and replace $U_n$ with the true solution $u(t_n)$ to find
$$\begin{aligned}
T(t, u; \Delta t) &= \frac{u(t_{n+1}) - u(t_n)}{\Delta t} - f(t_n, u(t_n)) \\
&= \frac{u(t_{n+1}) - u(t_n)}{\Delta t} - u'(t_n).
\end{aligned}$$
From here we use the Taylor series centered at $t_n$ and evaluated at $t_{n+1}$ to find
$$\begin{aligned}
T(t, u; \Delta t) &= \frac{u(t_{n+1}) - u(t_n)}{\Delta t} - u'(t_n) \\
&= \frac{1}{\Delta t} \left[ u(t_n) + u'(t_n) \Delta t + \frac{u''(t_n)}{2} \Delta t^2 + \mathcal{O}(\Delta t^3) - u(t_n) \right] - u'(t_n) \\
&= u'(t_n) + \frac{u''(t_n)}{2} \Delta t + \mathcal{O}(\Delta t^2) - u'(t_n) \\
&= \frac{u''(t_n)}{2} \Delta t + \mathcal{O}(\Delta t^2).
\end{aligned}$$
This implies that forward Euler is first order accurate and therefore consistent.
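This first-order behaviour can be checked numerically; a short sketch that evaluates the truncation error $T$ for the exact solution $u(t) = e^{\lambda t}$ of $u' = \lambda u$ and confirms that halving $\Delta t$ roughly halves $T$ (the values of $\lambda$, $t_n$, and $\Delta t$ are arbitrary):

```python
import numpy as np

lam = -0.5
u = lambda t: np.exp(lam * t)     # exact solution of u' = lam * u
f = lambda t, v: lam * v

def truncation_error(t_n, delta_t):
    # T = (u(t_{n+1}) - u(t_n)) / dt - f(t_n, u(t_n))
    return (u(t_n + delta_t) - u(t_n)) / delta_t - f(t_n, u(t_n))

t_n = 1.0
T1 = truncation_error(t_n, 0.1)
T2 = truncation_error(t_n, 0.05)
print(T1 / T2)   # ratio close to 2 => first order
```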
Another equivalent definition of the truncation error uses the form
$$
U_{n+1} = u(t_n) + \Delta t f(t_n, u(t_n))
$$
and the definition
$$
T(t, u; \Delta t) = \frac{1}{\Delta t} \left [ U_{n+1} - u(t_{n+1}) \right]
$$
to find
$$\begin{aligned}
T(t, u; \Delta t) &= \frac{1}{\Delta t} [U_{n+1} - u(t + \Delta t)] \\
&= \frac{1}{\Delta t} \left[ \underbrace{u_n + \Delta t f(t_n, u_n)}_{U_{n+1}} - \underbrace{\left( u_n + \Delta t f(t_n, u_n) + \frac{u''(t_n)}{2} \Delta t^2 + \mathcal{O}(\Delta t^3) \right )}_{u(t_{n+1})}\right ] \\
&= \frac{1}{\Delta t} \left[ - \frac{u''(t_n)}{2} \Delta t^2 - \mathcal{O}(\Delta t^3) \right ] \\
&= - \frac{u''(t_n)}{2} \Delta t - \mathcal{O}(\Delta t^2)
\end{aligned}$$
### Error Analysis of Leap-Frog Method
To easily analyze this method we will expand the Taylor series from before to another order:
$$
u(t) = u(t_n) + (t - t_n) u'(t_n) + (t - t_n)^2 \frac{u''(t_n)}{2} + (t - t_n)^3 \frac{u'''(t_n)}{6} + \mathcal{O}((t-t_n)^4)
$$
leading to
$$\begin{aligned}
u(t_{n+1}) &= u_n + \Delta t f_n + \Delta t^2 \frac{u''(t_n)}{2} + \Delta t^3 \frac{u'''(t_n)}{6} + \mathcal{O}(\Delta t^4)
\end{aligned}$$
We need one more expansion however due to leap-frog. Recall that leap-frog has the form
$$
\frac{U_{n+1} - U_{n-1}}{2 \Delta t} = f(t_n, U_n)
$$
or
$$
U_{n+1} = U_{n-1} + 2 \Delta t f(t_n, U_n).
$$
To handle the $U_{n-1}$ term we need to write this with relation to $u(t_n)$. Again we use the Taylor series
$$
u(t_{n-1}) = u_n - \Delta t f_n + \Delta t^2 \frac{u''(t_n)}{2} - \Delta t^3 \frac{u'''(t_n)}{6} + \mathcal{O}(\Delta t^4)
$$
$$\begin{aligned}
u(t_{n+1}) &= u_n + \Delta t f_n + \Delta t^2 \frac{u''(t_n)}{2} + \Delta t^3 \frac{u'''(t_n)}{6} + \mathcal{O}(\Delta t^4) \\
u(t_{n-1}) &= u_n - \Delta t f_n + \Delta t^2 \frac{u''(t_n)}{2} - \Delta t^3 \frac{u'''(t_n)}{6} + \mathcal{O}(\Delta t^4)
\end{aligned}$$
Plugging these into our definition of the truncation error along with the leap-frog method definition leads to
$$\begin{aligned}
T(t, u; \Delta t) &= \frac{1}{\Delta t} \left [\underbrace{U_{n-1} + 2 \Delta t f_n}_{U_{n+1}} - \underbrace{\left(u_n + \Delta t f_n + \Delta t^2 \frac{u''(t_n)}{2} + \Delta t^3 \frac{u'''(t_n)}{6} + \mathcal{O}(\Delta t^4) \right )}_{u(t + \Delta t)} \right ] \\
&=\frac{1}{\Delta t} \left [\underbrace{\left(u_n - \Delta t f_n + \Delta t^2 \frac{u''(t_n)}{2} - \Delta t^3 \frac{u'''(t_n)}{6} + \mathcal{O}(\Delta t^4)\right)}_{U_{n-1}} + 2\Delta t f_n - \left(u_n + \Delta t f_n + \Delta t^2 \frac{u''(t_n)}{2} + \Delta t^3 \frac{u'''(t_n)}{6} + \mathcal{O}(\Delta t^4) \right )\right ] \\
&=\frac{1}{\Delta t} \left [- \Delta t^3 \frac{u'''(t_n)}{3} + \mathcal{O}(\Delta t^4) \right ] \\
&=- \Delta t^2 \frac{u'''(t_n)}{3} + \mathcal{O}(\Delta t^3)
\end{aligned}$$
Therefore the method is second order accurate and consistent. In practice, however, it is a bit more complicated than that.
```
# Compare accuracy between Euler and Leap-Frog
f = lambda t, u: -u
u_exact = lambda t: numpy.exp(-t)
u_0 = 1.0
t_f = 10.0
num_steps = [2**n for n in range(4,10)]
delta_t = numpy.empty(len(num_steps))
error_euler = numpy.empty(len(num_steps))
error_trap = numpy.empty(len(num_steps))
error_leapfrog = numpy.empty(len(num_steps))
for (i, N) in enumerate(num_steps):
t = numpy.linspace(0, t_f, N)
delta_t[i] = t[1] - t[0]
# Compute Euler solution
u_euler = numpy.empty(t.shape)
u_euler[0] = u_0
for n in range(t.shape[0] - 1):
u_euler[n+1] = u_euler[n] + delta_t[i] * f(t[n], u_euler[n])
# Compute trapezoidal
u_trap = numpy.empty(t.shape)
u_trap[0] = u_0
integration_constant = (1.0 - delta_t[i] / 2.0) / (1.0 + delta_t[i] / 2.0)
for n in range(t.shape[0] - 1):
u_trap[n + 1] = u_trap[n] * integration_constant
# Compute Leap-Frog
u_leapfrog = numpy.empty(t.shape)
u_leapfrog[0] = 1.0
u_leapfrog[1] = u_euler[1]
for n in range(1, t.shape[0] - 1):
u_leapfrog[n+1] = u_leapfrog[n-1] + 2.0 * delta_t[i] * f(t[n], u_leapfrog[n])
# Compute error for each
error_euler[i] = numpy.linalg.norm(delta_t[i] * (u_euler - u_exact(t)), ord=1)
error_trap[i] = numpy.linalg.norm(delta_t[i] * (u_trap - u_exact(t)), ord=1)
error_leapfrog[i] = numpy.linalg.norm(delta_t[i] * (u_leapfrog - u_exact(t)), ord=1)
# Plot error vs. delta_t
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))
axes.loglog(delta_t, error_euler, 'bo', label='Forward Euler')
axes.loglog(delta_t, error_trap, 'ro', label='Trapezoidal')
axes.loglog(delta_t, error_leapfrog, 'go', label="Leap-Frog")
axes.loglog(delta_t, order_C(delta_t[2], error_euler[2], 1.0) * delta_t**1.0, '--b')
axes.loglog(delta_t, order_C(delta_t[2], error_trap[2], 2.0) * delta_t**2.0, '--r')
axes.loglog(delta_t, order_C(delta_t[2], error_leapfrog[2], 2.0) * delta_t**2.0, '--g')
axes.legend(loc=2)
axes.set_title("Comparison of Errors")
axes.set_xlabel("$\Delta t$")
axes.set_ylabel(r"$\Vert U - u \Vert_1$")
plt.show()
```
## Taylor Series Methods
A **Taylor series method** can be derived by direct substitution of the right-hand-side function $f(t, u)$ and its appropriate derivatives into the Taylor series expansion for $u(t_{n+1})$. For a $p$th order method we would look at the Taylor series up to that order and replace all the derivatives of $u$ with derivatives of $f$ instead.
For the general case we have
$$\begin{align*}
u(t_{n+1}) = u(t_n) + \Delta t u'(t_n) + \frac{\Delta t^2}{2} u''(t_n) + \frac{\Delta t^3}{6} u'''(t_n) + \cdots + \frac{\Delta t^p}{p!} u^{(p)}(t_n)
\end{align*}$$
which contains derivatives of $u$ up to $p$th order.
We then replace these derivatives with the appropriate derivative of $f$ which will always be one less than the derivative of $u$ (due to the original ODE)
$$
u^{(p)}(t_n) = f^{(p-1)}(t_n, u(t_n))
$$
leading to the method
$$
u(t_{n+1}) = u(t_n) + \Delta t f(t_n, u(t_n)) + \frac{\Delta t^2}{2} f'(t_n, u(t_n)) + \frac{\Delta t^3}{6} f''(t_n, u(t_n)) + \cdots + \frac{\Delta t^p}{p!} f^{(p-1)}(t_n, u(t_n)).
$$
The drawback to these methods is that we have to derive a new one each time we have a new $f$ and we also need $p-1$ derivatives of $f$.
### 2nd Order Taylor Series Method
We want terms up to second order so we need to take the derivative of $u' = f(t, u)$ once to find $u'' = f'(t, u)$ and therefore
$$\begin{align*}
u(t_{n+1}) &= u(t_n) + \Delta t u'(t_n) + \frac{\Delta t^2}{2} u''(t_n) \\
&=u(t_n) + \Delta t f(t_n, u(t_n)) + \frac{\Delta t^2}{2} f'(t_n, u(t_n)) ~~~ \text{or} \\
U_{n+1} &= U_n + \Delta t f(t_n, U_n) + \frac{\Delta t^2}{2} f'(t_n, U_n).
\end{align*}$$
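For the model decay equation $u' = \lambda u$ we have $f' = \lambda u' = \lambda^2 u$, so the second-order Taylor method can be written out explicitly; a minimal sketch (the rate, final time, and step count are made up):

```python
import numpy as np

lam = -0.5
c_0 = 1.0
t_f = 4.0
N = 40

t = np.linspace(0.0, t_f, N + 1)
delta_t = t[1] - t[0]
U = np.empty(N + 1)
U[0] = c_0
for n in range(N):
    # U_{n+1} = U_n + dt * f + dt^2/2 * f', with f = lam U and f' = lam^2 U
    U[n + 1] = U[n] + delta_t * lam * U[n] + 0.5 * delta_t**2 * lam**2 * U[n]

error = abs(U[-1] - c_0 * np.exp(lam * t_f))
print(error)   # much smaller than forward Euler at the same step size
```

Note that unlike the Runge-Kutta methods below, this required knowing $f'$ analytically.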
## Runge-Kutta Methods
One way to derive higher-order ODE solvers is by computing intermediate stages. These are not *multi-step* methods as they still only require information from the current time step but they raise the order of accuracy by adding *stages*. These types of methods are called **Runge-Kutta** methods.
### Example: Two-stage Runge-Kutta Methods
The basic idea behind the simplest of the Runge-Kutta methods is to approximate the solution at $t_n + \Delta t / 2$ via Euler's method and use this in the function evaluation for the final update.
$$\begin{aligned}
U^* &= U^n + \frac{1}{2} \Delta t f(U^n) \\
U^{n+1} &= U^n + \Delta t f(U^*) \\
&= U^n + \Delta t f(U^n + \frac{1}{2} \Delta t f(U^n))
\end{aligned}$$
The truncation error can be computed similarly to how we did so before but we do need to figure out how to compute the derivative inside of the function. Note that due to
$$
f(u(t_n)) = u'(t_n)
$$
that differentiating this leads to
$$
f'(u(t_n)) u'(t_n) = u''(t_n)
$$
leading to
$$\begin{aligned}
f\left(u(t_n) + \frac{1}{2} \Delta t f(u(t_n)) \right ) &= f\left(u(t_n) +\frac{1}{2} \Delta t u'(t_n) \right ) \\
&= f(u(t_n)) + \frac{1}{2} \Delta t u'(t_n) f'(u(t_n)) + \frac{1}{8} \Delta t^2 (u'(t_n))^2 f''(u(t_n)) + \mathcal{O}(\Delta t^3) \\
&=u'(t_n) + \frac{1}{2} \Delta t u''(t_n) + \mathcal{O}(\Delta t^2)
\end{aligned}$$
Going back to the truncation error we have
$$\begin{aligned}
T(t, u; \Delta t) &= \frac{1}{\Delta t} \left[u_n + \Delta t f\left(u_n + \frac{1}{2} \Delta t f(u_n)\right) - \left(u_n + \Delta t f(t_n, u_n) + \frac{u''(t_n)}{2} \Delta t^2 + \mathcal{O}(\Delta t^3) \right ) \right] \\
&=\frac{1}{\Delta t} \left[\Delta t u'(t_n) + \frac{1}{2} \Delta t^2 u''(t_n) + \mathcal{O}(\Delta t^3) - \Delta t u'(t_n) - \frac{u''(t_n)}{2} \Delta t^2 + \mathcal{O}(\Delta t^3) \right] \\
&= \mathcal{O}(\Delta t^2)
\end{aligned}$$
so this method is second order accurate.
### Example: 4-stage Runge-Kutta Method
$$\begin{aligned}
Y_1 &= U_n \\
Y_2 &= U_n + \frac{1}{2} \Delta t f(Y_1, t_n) \\
Y_3 &= U_n + \frac{1}{2} \Delta t f(Y_2, t_n + \Delta t / 2) \\
Y_4 &= U_n + \Delta t f(Y_3, t_n + \Delta t / 2) \\
U_{n+1} &= U_n + \frac{\Delta t}{6} \left [f(Y_1, t_n) + 2 f(Y_2, t_n + \Delta t / 2) + 2 f(Y_3, t_n + \Delta t/2) + f(Y_4, t_n + \Delta t) \right ]
\end{aligned}$$
```
# Implement and compare the two-stage and 4-stage Runge-Kutta methods
f = lambda t, u: -u
t_exact = numpy.linspace(0.0, 10.0, 100)
u_exact = numpy.exp(-t_exact)
N = 10
t = numpy.linspace(0, 10.0, N)
delta_t = t[1] - t[0]
u_2 = numpy.empty(t.shape)
u_4 = numpy.empty(t.shape)
u_2[0] = 1.0
u_4[0] = 1.0
for (n, t_n) in enumerate(t[:-1]):
u_2[n+1] = u_2[n] + 0.5 * delta_t * f(t_n, u_2[n])
u_2[n+1] = u_2[n] + delta_t * f(t_n + 0.5 * delta_t, u_2[n+1])
y_1 = u_4[n]
y_2 = u_4[n] + 0.5 * delta_t * f(t_n, y_1)
y_3 = u_4[n] + 0.5 * delta_t * f(t_n + 0.5 * delta_t, y_2)
y_4 = u_4[n] + delta_t * f(t_n + 0.5 * delta_t, y_3)
u_4[n+1] = u_4[n] + delta_t / 6.0 * (f(t_n, y_1) + 2.0 * f(t_n + 0.5 * delta_t, y_2) + 2.0 * f(t_n + 0.5 * delta_t, y_3) + f(t_n + delta_t, y_4))
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_exact, u_exact, 'k', label="True")
axes.plot(t, u_2, 'ro', label="2-Stage")
axes.plot(t, u_4, 'bo', label="4-Stage")
axes.legend(loc=1)
plt.show()
# Compare accuracy between Euler and RK
f = lambda t, u: -u
u_exact = lambda t: numpy.exp(-t)
t_f = 10.0
num_steps = [2**n for n in range(5,12)]
delta_t = numpy.empty(len(num_steps))
error_euler = numpy.empty(len(num_steps))
error_2 = numpy.empty(len(num_steps))
error_4 = numpy.empty(len(num_steps))
for (i, N) in enumerate(num_steps):
t = numpy.linspace(0, t_f, N)
delta_t[i] = t[1] - t[0]
# Compute Euler solution
U_euler = numpy.empty(t.shape)
U_euler[0] = 1.0
for (n, t_n) in enumerate(t[:-1]):
U_euler[n+1] = U_euler[n] + delta_t[i] * f(t_n, U_euler[n])
# Compute 2 and 4-stage
U_2 = numpy.empty(t.shape)
U_4 = numpy.empty(t.shape)
U_2[0] = 1.0
U_4[0] = 1.0
for (n, t_n) in enumerate(t[:-1]):
U_2[n+1] = U_2[n] + 0.5 * delta_t[i] * f(t_n, U_2[n])
U_2[n+1] = U_2[n] + delta_t[i] * f(t_n + 0.5 * delta_t[i], U_2[n+1])
y_1 = U_4[n]
y_2 = U_4[n] + 0.5 * delta_t[i] * f(t_n, y_1)
y_3 = U_4[n] + 0.5 * delta_t[i] * f(t_n + 0.5 * delta_t[i], y_2)
y_4 = U_4[n] + delta_t[i] * f(t_n + 0.5 * delta_t[i], y_3)
U_4[n+1] = U_4[n] + delta_t[i] / 6.0 * (f(t_n, y_1) + 2.0 * f(t_n + 0.5 * delta_t[i], y_2) + 2.0 * f(t_n + 0.5 * delta_t[i], y_3) + f(t_n + delta_t[i], y_4))
# Compute error for each
error_euler[i] = numpy.abs(U_euler[-1] - u_exact(t_f)) / numpy.abs(u_exact(t_f))
error_2[i] = numpy.abs(U_2[-1] - u_exact(t_f)) / numpy.abs(u_exact(t_f))
error_4[i] = numpy.abs(U_4[-1] - u_exact(t_f)) / numpy.abs(u_exact(t_f))
# Plot error vs. delta_t
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.loglog(delta_t, error_euler, 'bo', label='Forward Euler')
axes.loglog(delta_t, error_2, 'ro', label='2-stage')
axes.loglog(delta_t, error_4, 'go', label="4-stage")
order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))
axes.loglog(delta_t, order_C(delta_t[1], error_euler[1], 1.0) * delta_t**1.0, '--b')
axes.loglog(delta_t, order_C(delta_t[1], error_2[1], 2.0) * delta_t**2.0, '--r')
axes.loglog(delta_t, order_C(delta_t[1], error_4[1], 4.0) * delta_t**4.0, '--g')
axes.legend(loc=4)
axes.set_title("Comparison of Errors")
axes.set_xlabel("$\Delta t$")
axes.set_ylabel("$|U(t_f) - u(t_f)|$")
plt.show()
```
## Linear Multi-Step Methods
Multi-step methods (as introduced via the leap-frog method) are ODE methods that require multiple time step evaluations to work. Some of the advantages of using a multi-step method rather than a one-step method include:
- Taylor series methods require differentiating the given equation, which can be cumbersome and difficult to implement
- One-step methods at higher order often require the evaluation of the function $f$ many times
Disadvantages
- Methods are not self-starting, i.e. they require other methods to find the initial values
- The time step $\Delta t$ can be changed at any time in a one-step method, while for multi-step methods this is much more complex
### General Linear Multi-Step Methods
All linear multi-step methods can be written as the linear combination of past, present and future solutions:
$$
\sum^r_{j=0} \alpha_j U_{n+j} = \Delta t \sum^r_{j=0} \beta_j f(U_{n+j}, t_{n+j})
$$
If $\beta_r = 0$ then the method is explicit (only requires previous time steps). Note that the coefficients are not unique as we can multiply both sides by a constant. In practice a normalization of $\alpha_r = 1$ is used.
#### Example: Adams Methods
$$
U_{n+r} = U_{n+r-1} + \Delta t \sum^r_{j=0} \beta_j f(U_{n+j}).
$$
All these methods have $\alpha_r = 1$, $\alpha_{r-1} = -1$ and $\alpha_j=0$ for $j < r - 1$.
### Adams-Bashforth Methods
The **Adams-Bashforth** methods are explicit solvers that maximize the order of accuracy given a number of steps $r$. This is accomplished by expanding in Taylor series and picking the coefficients $\beta_j$ to eliminate as many terms of the expansion as possible.
$$\begin{aligned}
\text{1-step:} & ~ & U_{n+1} &= U_n +\Delta t f(U_n) \\
\text{2-step:} & ~ & U_{n+2} &= U_{n+1} + \frac{\Delta t}{2} (-f(U_n) + 3 f(U_{n+1})) \\
\text{3-step:} & ~ & U_{n+3} &= U_{n+2} + \frac{\Delta t}{12} (5 f(U_n) - 16 f(U_{n+1}) + 23 f(U_{n+2})) \\
\text{4-step:} & ~ & U_{n+4} &= U_{n+3} + \frac{\Delta t}{24} (-9 f(U_n) + 37 f(U_{n+1}) -59 f(U_{n+2}) + 55 f(U_{n+3}))
\end{aligned}$$
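The coefficients tabulated above can be recovered by solving the linear system of order conditions $\sum_{j=0}^{r-1} j^{q-1} \beta_j = (r^q - (r-1)^q)/q$ for $q = 1, \ldots, r$, which follow from the truncation-error expansion with $\alpha_r = 1$, $\alpha_{r-1} = -1$. A small sketch (the helper name is my own choice):

```python
import numpy as np

def adams_bashforth_coefficients(r):
    # Order conditions: sum_j j^(q-1) * beta_j = (r^q - (r-1)^q) / q for q = 1..r
    A = np.array([[float(j)**(q - 1) for j in range(r)] for q in range(1, r + 1)])
    b = np.array([(r**q - (r - 1)**q) / q for q in range(1, r + 1)])
    return np.linalg.solve(A, b)

# Reproduce the tabulated 4-step weights: (-9, 37, -59, 55) / 24
print(adams_bashforth_coefficients(4) * 24.0)
```

The 2-step case gives $(-1, 3)/2$, matching the table as well.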
```
# Use 2-step Adams-Bashforth to compute solution
f = lambda t, u: -u
t_exact = numpy.linspace(0.0, 10.0, 100)
u_exact = numpy.exp(-t_exact)
N = 20
# N = 10
t = numpy.linspace(0, 10.0, N)
delta_t = t[1] - t[0]
u_ab2 = numpy.empty(t.shape)
# Use RK-2 to start the method
u_ab2[0] = 1.0
u_ab2[1] = u_ab2[0] + 0.5 * delta_t * f(t[0], u_ab2[0])
u_ab2[1] = u_ab2[0] + delta_t * f(t[0], u_ab2[1])
for n in range(0,len(t)-2):
u_ab2[n+2] = u_ab2[n + 1] + delta_t / 2.0 * (-f(t[n], u_ab2[n]) + 3.0 * f(t[n+1], u_ab2[n+1]))
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_exact, u_exact, 'k', label="True")
axes.plot(t, u_ab2, 'ro', label="2-step A-B")
axes.set_title("Adams-Bashforth Method")
axes.set_xlabel("t")
axes.set_ylabel("u(t)")
axes.legend(loc=1)
plt.show()
```
### Adams-Moulton Methods
The **Adams-Moulton** methods are the implicit versions of the Adams-Bashforth methods. Since the implicit form gives one additional parameter, $\beta_r$, these methods are generally one order of accuracy higher than their explicit counterparts.
$$\begin{aligned}
\text{1-step:} & ~ & U_{n+1} &= U_n + \frac{\Delta t}{2} (f(U_n) + f(U_{n+1})) \\
\text{2-step:} & ~ & U_{n+2} &= U_{n+1} + \frac{\Delta t}{12} (-f(U_n) + 8f(U_{n+1}) + 5f(U_{n+2})) \\
\text{3-step:} & ~ & U_{n+3} &= U_{n+2} + \frac{\Delta t}{24} (f(U_n) - 5f(U_{n+1}) + 19f(U_{n+2}) + 9f(U_{n+3})) \\
\text{4-step:} & ~ & U_{n+4} &= U_{n+3} + \frac{\Delta t}{720}(-19 f(U_n) + 106 f(U_{n+1}) -264 f(U_{n+2}) + 646 f(U_{n+3}) + 251 f(U_{n+4}))
\end{aligned}$$
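Because $U_{n+r}$ appears on both sides, each step generally requires a nonlinear solve (the linear example below admits a closed form). A minimal sketch of the 1-step Adams-Moulton (trapezoidal) method with a Newton iteration, applied to the nonlinear problem $u' = -u^2$ (the helper name and tolerances are my own choices):

```python
import numpy as np

def am1_step(f, dfdu, t_n, u_n, dt, tol=1e-12, max_iter=50):
    # Solve g(u) = u - u_n - dt/2 * (f(t_n, u_n) + f(t_n + dt, u)) = 0 by Newton
    u = u_n + dt * f(t_n, u_n)  # forward Euler predictor as the initial guess
    for _ in range(max_iter):
        g = u - u_n - 0.5 * dt * (f(t_n, u_n) + f(t_n + dt, u))
        du = g / (1.0 - 0.5 * dt * dfdu(t_n + dt, u))
        u -= du
        if abs(du) < tol:
            break
    return u

# u' = -u^2 with u(0) = 1 has the exact solution u(t) = 1 / (1 + t)
f = lambda t, u: -u**2
dfdu = lambda t, u: -2.0 * u
u, t_n, dt = 1.0, 0.0, 0.01
for _ in range(100):
    u = am1_step(f, dfdu, t_n, u, dt)
    t_n += dt
print(u)  # close to the exact value 1 / (1 + 1) = 0.5
```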
```
# Use 2-step Adams-Moulton to compute solution
# u' = - decay u
decay_constant = 1.0
f = lambda t, u: -decay_constant * u
t_exact = numpy.linspace(0.0, 10.0, 100)
u_exact = numpy.exp(-t_exact)
N = 20
# N = 10
# N = 5
t = numpy.linspace(0, 10.0, N)
delta_t = t[1] - t[0]
U = numpy.empty(t.shape)
U[0] = 1.0
U[1] = U[0] + 0.5 * delta_t * f(t[0], U[0])
U[1] = U[0] + delta_t * f(t[0], U[1])
# For linear f the implicit AM-2 step can be solved in closed form by
# collecting the U[n+2] terms on the left-hand side
integration_constant = 1.0 / (1.0 + 5.0 * decay_constant * delta_t / 12.0)
for n in range(t.shape[0] - 2):
U[n+2] = (U[n+1] + decay_constant * delta_t / 12.0 * (U[n] - 8.0 * U[n+1])) * integration_constant
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_exact, u_exact, 'k', label="True")
axes.plot(t, U, 'ro', label="2-step A-M")
axes.set_title("Adams-Moulton Method")
axes.set_xlabel("t")
axes.set_ylabel("u(t)")
axes.legend(loc=1)
plt.show()
```
### Truncation Error for Multi-Step Methods
We can again find the truncation error in general for linear multi-step methods:
$$\begin{aligned}
T(t, u; \Delta t) &= \frac{1}{\Delta t} \left [\sum^r_{j=0} \alpha_j u_{n+j} - \Delta t \sum^r_{j=0} \beta_j f(u_{n+j}, t_{n+j}) \right ]
\end{aligned}$$
Using the general expansion and evaluation of the Taylor series about $t_n$ we have
$$\begin{aligned}
u(t_{n+j}) &= u(t_n) + j \Delta t u'(t_n) + \frac{1}{2} (j \Delta t)^2 u''(t_n) + \mathcal{O}(\Delta t^3) \\
u'(t_{n+j}) &= u'(t_n) + j \Delta t u''(t_n) + \frac{1}{2} (j \Delta t)^2 u'''(t_n) + \mathcal{O}(\Delta t^3)
\end{aligned}$$
leading to
$$\begin{aligned}
T(t, u; \Delta t) &= \frac{1}{\Delta t}\left( \sum^r_{j=0} \alpha_j\right) u(t_n) + \left(\sum^r_{j=0} (j\alpha_j - \beta_j)\right) u'(t_n) + \Delta t \left(\sum^r_{j=0} \left (\frac{1}{2}j^2 \alpha_j - j \beta_j \right) \right) u''(t_n) \\
& \quad \quad + \cdots + \Delta t^{q - 1} \left (\sum^r_{j=0} \left(\frac{1}{q!} j^q \alpha_j - \frac{1}{(q-1)!} j^{q-1} \beta_j \right) \right) u^{(q)}(t_n) + \cdots
\end{aligned}$$
The method is *consistent* if the first two terms of the expansion vanish, i.e. $\sum^r_{j=0} \alpha_j = 0$ and $\sum^r_{j=0} j \alpha_j = \sum^r_{j=0} \beta_j$.
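These consistency conditions are easy to verify numerically; a quick sketch for the 2-step methods above (the helper name is my own):

```python
import numpy as np

def is_consistent(alpha, beta):
    # Check sum_j alpha_j = 0 and sum_j j * alpha_j = sum_j beta_j
    j = np.arange(len(alpha))
    return np.isclose(np.sum(alpha), 0.0) and np.isclose(np.sum(j * alpha), np.sum(beta))

# 2-step Adams-Bashforth: alpha = (0, -1, 1), beta = (-1/2, 3/2, 0)
print(is_consistent([0.0, -1.0, 1.0], [-0.5, 1.5, 0.0]))      # True
# 2-step Adams-Moulton: alpha = (0, -1, 1), beta = (-1, 8, 5) / 12
print(is_consistent([0.0, -1.0, 1.0], [-1/12, 8/12, 5/12]))   # True
```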
```
# Compare accuracy between RK-2, AB-2 and AM-2
f = lambda t, u: -u
u_exact = lambda t: numpy.exp(-t)
t_f = 10.0
num_steps = [2**n for n in range(4,10)]
delta_t = numpy.empty(len(num_steps))
error_rk = numpy.empty(len(num_steps))
error_ab = numpy.empty(len(num_steps))
error_am = numpy.empty(len(num_steps))
for (i, N) in enumerate(num_steps):
t = numpy.linspace(0, t_f, N)
delta_t[i] = t[1] - t[0]
# Compute RK2
U_rk = numpy.empty(t.shape)
U_rk[0] = 1.0
for n in range(t.shape[0]-1):
U_rk[n+1] = U_rk[n] + 0.5 * delta_t[i] * f(t[n], U_rk[n])
U_rk[n+1] = U_rk[n] + delta_t[i] * f(t[n], U_rk[n+1])
# Compute Adams-Bashforth 2-stage
U_ab = numpy.empty(t.shape)
U_ab[:2] = U_rk[:2]
for n in range(t.shape[0] - 2):
U_ab[n+2] = U_ab[n + 1] + delta_t[i] / 2.0 * (-f(t[n], U_ab[n]) + 3.0 * f(t[n+1], U_ab[n+1]))
# Compute Adams-Moulton 2-stage
U_am = numpy.empty(t.shape)
U_am[:2] = U_rk[:2]
decay_constant = 1.0
integration_constant = 1.0 / (1.0 + 5.0 * decay_constant * delta_t[i] / 12.0)
for n in range(t.shape[0] - 2):
U_am[n+2] = (U_am[n+1] + decay_constant * delta_t[i] / 12.0 * (U_am[n] - 8.0 * U_am[n+1])) * integration_constant
# Compute error for each
error_rk[i] = numpy.linalg.norm(delta_t[i] * (U_rk - u_exact(t)), ord=1)
error_ab[i] = numpy.linalg.norm(delta_t[i] * (U_ab - u_exact(t)), ord=1)
error_am[i] = numpy.linalg.norm(delta_t[i] * (U_am - u_exact(t)), ord=1)
# Plot error vs. delta_t
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.loglog(delta_t, error_rk, 'ko', label='RK-2')
axes.loglog(delta_t, error_ab, 'bo', label='AB-2')
axes.loglog(delta_t, error_am, 'go', label="AM-2")
order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))
axes.loglog(delta_t, order_C(delta_t[0], error_ab[0], 1.0) * delta_t**1.0, '--r')
axes.loglog(delta_t, order_C(delta_t[1], error_ab[1], 2.0) * delta_t**2.0, '--b')
axes.loglog(delta_t, order_C(delta_t[1], error_am[1], 3.0) * delta_t**3.0, '--g')
axes.legend(loc=4)
axes.set_title("Comparison of Errors")
axes.set_xlabel("$\Delta t$")
axes.set_ylabel(r"$\Delta t\,||U - u||_1$")
plt.show()
```
### Predictor-Corrector Methods
One way to simplify the Adams-Moulton methods so that implicit evaluations are not needed is to estimate the required implicit function evaluations with an explicit method. These are often called **predictor-corrector** methods: the explicit method provides a *prediction* of what the solution might be, and the now-explicit *corrector* step works to make that estimate more accurate.
#### Example: One-Step Adams-Bashforth-Moulton
Use the one-step Adams-Bashforth method (forward Euler) to predict the value of $U_{n+1}$ and then use the one-step Adams-Moulton method (the trapezoidal rule) to correct that value:
$$\begin{aligned}
\hat{U}_{n+1} &= U_n + \Delta t f(U_n) \\
U_{n+1} &= U_n + \frac{1}{2} \Delta t (f(U_n) + f(\hat{U}_{n+1}))
\end{aligned}$$
leading to a second order accurate method.
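The claimed second-order accuracy can be checked empirically: halving $\Delta t$ should reduce the final-time error by roughly a factor of four. A small self-contained check on the same decay problem:

```python
import numpy as np

def abm_final_error(N, t_f=1.0):
    # Final-time error of the 1-step A-B-M predictor-corrector on u' = -u, u(0) = 1
    t = np.linspace(0.0, t_f, N)
    dt = t[1] - t[0]
    u = 1.0
    for _ in range(N - 1):
        u_hat = u + dt * (-u)                 # A-B predictor (forward Euler)
        u = u + 0.5 * dt * ((-u) + (-u_hat))  # A-M corrector (trapezoidal rule)
    return abs(u - np.exp(-t_f))

ratio = abm_final_error(50) / abm_final_error(100)
print(ratio)  # roughly 4, consistent with a second-order method
```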
```
# One-step Adams-Bashforth-Moulton
f = lambda t, u: -u
t_exact = numpy.linspace(0.0, 10.0, 100)
u_exact = numpy.exp(-t_exact)
N = 100
t = numpy.linspace(0, 10.0, N)
delta_t = t[1] - t[0]
U = numpy.empty(t.shape)
U[0] = 1.0
for n in range(t.shape[0] - 1):
U[n+1] = U[n] + delta_t * f(t[n], U[n])
U[n+1] = U[n] + 0.5 * delta_t * (f(t[n], U[n]) + f(t[n+1], U[n+1]))
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t_exact, u_exact, 'k', label="True")
axes.plot(t, U, 'ro', label="1-step A-B-M")
axes.set_title("Adams-Bashforth-Moulton P/C Method")
axes.set_xlabel("t")
axes.set_ylabel("u(t)")
axes.legend(loc=1)
plt.show()
```
# Full-Waveform Inversion (FWI)
To complete the morning session we aim to highlight the core ideas behind the implementation of seismic inversion problems (where we create an image of the subsurface from field recorded data) in Devito.
## Inversion requirements
Seismic inversion relies on two known parameters:
- **Field data** - also called **recorded data**. This is a shot record corresponding to the true velocity model. In practice this data is acquired as described in the first tutorial. To simplify this tutorial we will fake the field data by modelling it with the true velocity model.
- **Initial velocity model**. This is a velocity model that has been obtained by processing the field data. It is a rough and very smooth estimate of the velocity that serves as the initial guess for the inversion, which is a necessary requirement for any gradient-based optimization method.
## Inversion computational setup
We will introduce the gradient operator. This operator corresponds to the imaging condition introduced in the previous tutorial with some minor modifications that are defined by the objective function (also referred to in the tutorial series as the *functional*, *f*) and its gradient, *g*. We will define these two terms in the tutorial too.
## Notes on the operators
As we have already described the creation of a forward modelling operator, we will only call a wrapper function here. This wrapper already contains all the necessary operators for seismic modeling, imaging and inversion. Operators introduced for the first time in this tutorial will be properly described.
```
import numpy as np
%matplotlib inline
from devito import configuration
configuration['log-level'] = 'WARNING'
```
## Computational considerations
As we will see, FWI is computationally extremely demanding, even more than RTM. To keep this tutorial as lightweight as possible we therefore again use a very small demonstration model. We also define here a few parameters for the final example runs that can be changed to modify the overall runtime of the tutorial.
```
nshots = 9  # Number of shots used to generate the gradient
nreceivers = 101 # Number of receiver locations per shot
fwi_iterations = 5 # Number of outer FWI iterations
```
# True and smooth velocity models
We will use a very simple model domain, consisting of a circle within a 2D domain. We will again use the "true" model to generate our synthetic shot data and use a "smooth" model as our initial guess. In this case the smooth model is very smooth indeed - it is simply a constant background velocity without any features.
```
#NBVAL_IGNORE_OUTPUT
from examples.seismic import demo_model, plot_velocity, plot_perturbation
# Define true and initial model
shape = (101, 101) # Number of grid point (nx, nz)
spacing = (10., 10.) # Grid spacing in m. The domain size is now 1km by 1km
origin = (0., 0.) # Need origin to define relative source and receiver locations
model = demo_model('circle-isotropic', vp_circle=3.0, vp_background=2.5,
origin=origin, shape=shape, spacing=spacing, nbl=40)
model0 = demo_model('circle-isotropic', vp_circle=2.5, vp_background=2.5,
origin=origin, shape=shape, spacing=spacing, nbl=40,
grid = model.grid)
plot_velocity(model)
plot_velocity(model0)
plot_perturbation(model0, model)
```
## Acquisition geometry
In this tutorial, we will use the simplest case for inversion, namely a transmission experiment. The source(s) will be located on one side of the model and the receivers on the other side. This allows most of the information necessary for inversion to be recorded, since reflections usually lead to poor inversion results.
```
#NBVAL_IGNORE_OUTPUT
# Define acquisition geometry: source
from examples.seismic import AcquisitionGeometry
t0 = 0.
tn = 1000.
f0 = 0.010
# Position the source:
src_coordinates = np.empty((1, 2))
src_coordinates[0, 1] = np.array(model.domain_size[1]) * .5
src_coordinates[0, 0] = 20.
# Define acquisition geometry: receivers
# Initialize receivers for synthetic and imaging data
rec_coordinates = np.empty((nreceivers, 2))
rec_coordinates[:, 1] = np.linspace(0, model.domain_size[0], num=nreceivers)
rec_coordinates[:, 0] = 980.
# Geometry
geometry = AcquisitionGeometry(model, rec_coordinates, src_coordinates, t0, tn, f0=f0, src_type='Ricker')
# We can plot the time signature to see the wavelet
geometry.src.show()
# Plot acquisition geometry
plot_velocity(model, source=geometry.src_positions,
receiver=geometry.rec_positions[::4, :])
```
## True and smooth data
We can generate shot records for the true and smoothed initial velocity models, since the difference between them will again form the basis of our imaging procedure.
```
# Compute synthetic data with forward operator
from examples.seismic.acoustic import AcousticWaveSolver
solver = AcousticWaveSolver(model, geometry, space_order=4)
true_d, _, _ = solver.forward(vp=model.vp)
# Compute initial data with forward operator
smooth_d, _, _ = solver.forward(vp=model0.vp)
#NBVAL_IGNORE_OUTPUT
from examples.seismic import plot_shotrecord
# Plot shot record for true and smooth velocity model and the difference
plot_shotrecord(true_d.data, model, t0, tn)
plot_shotrecord(smooth_d.data, model, t0, tn)
plot_shotrecord(smooth_d.data - true_d.data, model, t0, tn)
```
# Full-Waveform Inversion
## Formulation
Full-waveform inversion (FWI) aims to invert an accurate model of the discrete wave velocity, $\mathbf{c}$, or equivalently the squared slowness of the wave, $\mathbf{m} = \frac{1}{\mathbf{c}^2}$, from a given set of measurements of the pressure wavefield $\mathbf{u}$. This can be expressed as the following optimization problem [1, 2]:
\begin{aligned}
\mathop{\hbox{minimize}}_{\mathbf{m}} \Phi_s(\mathbf{m})&=\frac{1}{2}\left\lVert\mathbf{P}_r
\mathbf{u} - \mathbf{d}\right\rVert_2^2 \\
\mathbf{u} &= \mathbf{A}(\mathbf{m})^{-1} \mathbf{P}_s^T \mathbf{q}_s,
\end{aligned}
where $\mathbf{P}_r$ is the sampling operator at the receiver locations, $\mathbf{P}_s^T$ is the injection operator at the source locations, $\mathbf{A}(\mathbf{m})$ is the operator representing the discretized wave equation matrix, $\mathbf{u}$ is the discrete synthetic pressure wavefield, $\mathbf{q}_s$ is the corresponding pressure source and $\mathbf{d}$ is the measured data. It is worth noting that $\mathbf{m}$ is the unknown in this formulation and that multiple implementations of the wave equation operator $\mathbf{A}(\mathbf{m})$ are possible.
We have already defined a concrete solver scheme for $\mathbf{A}(\mathbf{m})$ in the first tutorial, including appropriate implementations of the sampling operator $\mathbf{P}_r$ and source term $\mathbf{q}_s$.
To solve this optimization problem using a gradient-based method, we use the
adjoint-state method to evaluate the gradient $\nabla\Phi_s(\mathbf{m})$:
\begin{align}
\nabla\Phi_s(\mathbf{m})=\sum_{\mathbf{t} =1}^{n_t}\mathbf{u}[\mathbf{t}] \mathbf{v}_{tt}[\mathbf{t}] =\mathbf{J}^T\delta\mathbf{d}_s,
\end{align}
where $n_t$ is the number of computational time steps, $\delta\mathbf{d}_s = \left(\mathbf{P}_r \mathbf{u} - \mathbf{d} \right)$ is the data residual (difference between the measured data and the modelled data), $\mathbf{J}$ is the Jacobian operator and $\mathbf{v}_{tt}$ is the second-order time derivative of the adjoint wavefield solving:
\begin{align}
\mathbf{A}^T(\mathbf{m}) \mathbf{v} = \mathbf{P}_r^T \delta\mathbf{d}.
\end{align}
We see that the gradient of the FWI function is the previously defined imaging condition with an extra second-order time derivative. We will therefore reuse the operators defined previously inside a Devito wrapper.
## FWI gradient operator
To compute a single gradient $\nabla\Phi_s(\mathbf{m})$ in our optimization workflow we again use `solver.forward` to compute the entire forward wavefield $\mathbf{u}$ and a similar pre-defined gradient operator to compute the adjoint wavefield `v`. The gradient operator provided by our `solver` utility also computes the correlation between the wavefields, allowing us to encode a similar procedure to the previous imaging tutorial as our gradient calculation:
- Simulate the forward wavefield with the background velocity model to get the synthetic data and save the full wavefield $\mathbf{u}$
- Compute the data residual
- Back-propagate the data residual and compute on the fly the gradient contribution at each time step.
This procedure is applied to multiple source positions and summed to obtain a gradient image of the subsurface. We again prepare the source locations for each shot and visualize them, before defining a single gradient computation over a number of shots as a single function.
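Stripped of the Devito machinery, the gradient contribution of a single shot is just the zero-lag correlation of $\mathbf{u}$ with $\mathbf{v}_{tt}$. An illustrative NumPy sketch (the toy wavefields and names are my own and not Devito's API):

```python
import numpy as np

def fwi_gradient_kernel(u, v, dt):
    # grad[x] = sum_t u[t, x] * v_tt[t, x], with v_tt a central difference in time
    v_tt = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dt**2
    return np.sum(u[1:-1] * v_tt, axis=0)

# Toy check: a wavefield quadratic in t has constant second time derivative 2
nt, nx, dt = 11, 4, 1.0
t = np.arange(nt, dtype=float)[:, None]
u = np.ones((nt, nx))
v = np.tile(t**2, (1, nx))
print(fwi_gradient_kernel(u, v, dt))  # each entry equals 2 * (nt - 2) = 18
```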
```
#NBVAL_IGNORE_OUTPUT
# Prepare the varying source locations
source_locations = np.empty((nshots, 2), dtype=np.float32)
source_locations[:, 0] = 30.
source_locations[:, 1] = np.linspace(0., 1000, num=nshots)
plot_velocity(model, source=source_locations)
# Create FWI gradient kernel
from devito import Function, TimeFunction
from examples.seismic import Receiver
import scipy
def fwi_gradient(vp_in):
# Create symbols to hold the gradient and residual
grad = Function(name="grad", grid=model.grid)
residual = Receiver(name='rec', grid=model.grid,
time_range=geometry.time_axis,
coordinates=geometry.rec_positions)
objective = 0.
for i in range(nshots):
# Update source location
geometry.src_positions[0, :] = source_locations[i, :]
# Generate synthetic data from true model
true_d, _, _ = solver.forward(vp=model.vp)
# Compute smooth data and full forward wavefield u0
smooth_d, u0, _ = solver.forward(vp=vp_in, save=True)
# Compute gradient from data residual and update objective function
residual.data[:] = smooth_d.data[:] - true_d.data[:]
objective += .5*np.linalg.norm(residual.data.flatten())**2
solver.gradient(rec=residual, u=u0, vp=vp_in, grad=grad)
return objective, -grad.data
```
Having defined our FWI gradient procedure we can compute the initial iteration from our starting model. This allows us to visualize the gradient alongside the model perturbation and the effect of the gradient update on the model.
```
# Compute gradient of initial model
ff, update = fwi_gradient(model0.vp)
assert np.isclose(ff, 57283, rtol=1e0)
#NBVAL_IGNORE_OUTPUT
from examples.seismic import plot_image
# Plot the FWI gradient
plot_image(update, vmin=-1e4, vmax=1e4, cmap="jet")
# Plot the difference between the true and initial model.
# This is not known in practice as only the initial model is provided.
plot_image(model0.vp.data - model.vp.data, vmin=-1e-1, vmax=1e-1, cmap="jet")
# Show what the update does to the model
alpha = .5 / np.abs(update).max()
plot_image(model0.vp.data - alpha*update, vmin=2.5, vmax=3.0, cmap="jet")
```
We see that the gradient and the true perturbation have the same sign, therefore, with an appropriate scaling factor, we will update the model in the correct direction.
```
# Define bounding box constraints on the solution.
def apply_box_constraint(vp):
# Maximum possible 'realistic' velocity is 3.5 km/sec
# Minimum possible 'realistic' velocity is 2 km/sec
return np.clip(vp, 2.0, 3.5)
#NBVAL_SKIP
# Run FWI with gradient descent
history = np.zeros((fwi_iterations, 1))
for i in range(0, fwi_iterations):
# Compute the functional value and gradient for the current
# model estimate
phi, direction = fwi_gradient(model0.vp)
# Store the history of the functional values
history[i] = phi
# Artificial Step length for gradient descent
# In practice this would be replaced by a Linesearch (Wolfe, ...)
# that would guarantee functional decrease Phi(m-alpha g) <= epsilon Phi(m)
# where epsilon is a minimum decrease constant
alpha = .05 / np.abs(direction).max()
# Update the model estimate and enforce minimum/maximum values
model0.vp = apply_box_constraint(model0.vp.data - alpha * direction)
# Log the progress made
print('Objective value is %f at iteration %d' % (phi, i+1))
#NBVAL_IGNORE_OUTPUT
# Plot inverted velocity model
plot_velocity(model0)
#NBVAL_SKIP
import matplotlib.pyplot as plt
# Plot objective function decrease
plt.figure()
plt.loglog(history)
plt.xlabel('Iteration number')
plt.ylabel('Misfit value Phi')
plt.title('Convergence')
plt.show()
```
## References
[1] _Virieux, J. and Operto, S.: An overview of full-waveform inversion in exploration geophysics, GEOPHYSICS, 74, WCC1–WCC26, doi:10.1190/1.3238367, http://library.seg.org/doi/abs/10.1190/1.3238367, 2009._
[2] _Haber, E., Chung, M., and Herrmann, F. J.: An effective method for parameter estimation with PDE constraints with multiple right hand sides, SIAM Journal on Optimization, 22, http://dx.doi.org/10.1137/11081126X, 2012._
# Publications markdown generator for academicpages
Takes a TSV of publications with metadata and converts them for use with [academicpages.github.io](academicpages.github.io). This is an interactive Jupyter notebook ([see more info here](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html)). The core python code is also in `publications.py`. Run either from the `markdown_generator` folder after replacing `publications.tsv` with one containing your data.
TODO: Make this work with BibTex and other databases of citations, rather than Stuart's non-standard TSV format and citation style.
## Data format
The TSV needs to have the following columns: pub_date, title, venue, excerpt, citation, site_url, and paper_url, with a header at the top.
- `excerpt` and `paper_url` can be blank, but the others must have values.
- `pub_date` must be formatted as YYYY-MM-DD.
- `url_slug` will be the descriptive part of the .md file and the permalink URL for the page about the paper. The .md file will be `YYYY-MM-DD-[url_slug].md` and the permalink will be `https://[yourdomain]/publications/YYYY-MM-DD-[url_slug]`
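As an illustration, the naming convention above can be sketched as follows (the date, slug and domain values are made up):

```python
pub_date, url_slug, domain = "2024-01-15", "my-paper", "example.github.io"

md_filename = f"{pub_date}-{url_slug}.md"
permalink = f"https://{domain}/publications/{pub_date}-{url_slug}"

print(md_filename)  # 2024-01-15-my-paper.md
print(permalink)    # https://example.github.io/publications/2024-01-15-my-paper
```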
This is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create).
```
!cat publications.tsv
```
## Import pandas
We are using the very handy pandas library for dataframes.
```
import pandas as pd
```
## Import TSV
Pandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or `\t`.
I found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others.
```
publications = pd.read_csv("publications.tsv", sep="\t", header=0)
publications
```
## Escape special characters
YAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML encoded equivalents. This makes them look not so readable in raw format, but they are parsed and rendered nicely.
```
html_escape_table = {
"&": "&",
'"': """,
"'": "'"
}
def html_escape(text):
"""Produce entities within text."""
return "".join(html_escape_table.get(c,c) for c in text)
```
## Creating the markdown files
This is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then concatenates a big string (`md`) that contains the markdown for each entry. It does the YAML metadata first, then the description for the individual page.
```
import os
for row, item in publications.iterrows():
md_filename = str(item.pub_date) + "-" + item.url_slug + ".md"
html_filename = str(item.pub_date) + "-" + item.url_slug
year = item.pub_date[:4]
## YAML variables
md = "---\ntitle: \"" + item.title + '"\n'
md += """collection: publications"""
md += """\npermalink: /publication/""" + html_filename
if len(str(item.excerpt)) > 5:
md += "\nexcerpt: '" + html_escape(item.excerpt) + "'"
md += "\ndate: " + str(item.pub_date)
md += "\nvenue: '" + html_escape(item.venue) + "'"
if len(str(item.paper_url)) > 5:
md += "\npaperurl: '" + item.paper_url + "'"
md += "\ncitation: '" + html_escape(item.citation) + "'"
md += "\n---"
## Markdown description for individual page
if len(str(item.excerpt)) > 5:
md += "\n" + html_escape(item.excerpt) + "\n"
if len(str(item.paper_url)) > 5:
md += "\n[Download paper here](" + item.paper_url + ")\n"
md += "\nRecommended citation: " + item.citation
md_filename = os.path.basename(md_filename)
with open("../_publications/" + md_filename, 'w') as f:
f.write(md)
```
These files are in the publications directory, one directory below where we're working from.
```
!ls ../_publications/
!cat ../_publications/2009-10-01-paper-title-number-1.md
```
- First, I want to say thanks to the many kernel authors.
- I referred to the kernels below.
- https://www.kaggle.com/fabiendaniel/lgbm-starter-lb-1-70
- https://www.kaggle.com/julian3833/1-quick-start-read-csv-and-flatten-json-fields
- https://www.kaggle.com/astrus/entity-embedding-neural-network-keras-lb-0-748
# 1. Read dataset
```
import pandas as pd
import numpy as np
import time
import lightgbm as lgb
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('seaborn')
sns.set(font_scale=2)
import warnings
warnings.filterwarnings('ignore')
import os
import json
import numpy as np
import pandas as pd
from pandas.io.json import json_normalize
def load_df(csv_path='../input/train.csv', nrows=None):
JSON_COLUMNS = ['device', 'geoNetwork', 'totals', 'trafficSource']
df = pd.read_csv(csv_path,
converters={column: json.loads for column in JSON_COLUMNS},
dtype={'fullVisitorId': 'str'}, # Important!!
nrows=nrows)
for column in JSON_COLUMNS:
column_as_df = json_normalize(df[column])
column_as_df.columns = [f"{column}.{subcolumn}" for subcolumn in column_as_df.columns]
df = df.drop(column, axis=1).merge(column_as_df, right_index=True, left_index=True)
print(f"Loaded {os.path.basename(csv_path)}. Shape: {df.shape}")
return df
print(os.listdir("../input"))
%%time
train_df = load_df()
test_df = load_df("../input/test.csv")
target = train_df['totals.transactionRevenue'].fillna(0).astype(float)
target = target.apply(lambda x: np.log(x) if x > 0 else x)
del train_df['totals.transactionRevenue']
columns = [col for col in train_df.columns if train_df[col].nunique() > 1]
#____________________________
train_df = train_df[columns]
test_df = test_df[columns]
```
# 2. Feature engineering
## 2.1 Check null data
```
train_df.head()
percent = (100 * train_df.isnull().sum() / train_df.shape[0]).sort_values(ascending=False)
percent[:10]
percent = (100 * test_df.isnull().sum() / test_df.shape[0]).sort_values(ascending=False)
percent[:10]
```
### Remove features with NaN percents larger than 70%
- For now, let's remove the columns with NaN percentages larger than 70%
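Rather than hard-coding the column names as in the next cell, the same filter can be expressed programmatically (a sketch; the 0.7 threshold matches the 70% cut-off above):

```python
import pandas as pd
import numpy as np

def high_nan_columns(df, threshold=0.7):
    # Return the columns whose fraction of NaN values exceeds `threshold`
    nan_fraction = df.isnull().mean()
    return list(nan_fraction[nan_fraction > threshold].index)

# Tiny demonstration frame: column 'b' is 75% NaN, column 'a' has no NaNs
demo = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [np.nan, np.nan, np.nan, 4]})
print(high_nan_columns(demo))  # ['b']
```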
```
drop_cols = ['trafficSource.referralPath', 'trafficSource.adContent', 'trafficSource.adwordsClickInfo.slot', 'trafficSource.adwordsClickInfo.page',
'trafficSource.adwordsClickInfo.adNetworkType']
train_df.drop(drop_cols, axis=1, inplace=True)
test_df.drop(drop_cols, axis=1, inplace=True)
```
### trafficSource.keyword
```
train_df['trafficSource.keyword'].fillna('nan', inplace=True)
test_df['trafficSource.keyword'].fillna('nan', inplace=True)
# for ele in train_df['trafficSource.keyword'].value_counts().index:
# print(ele)
# (commented out to save space on the page)
```
- After looking at the feature, I think it can simply be divided into youtube, google and other categories.
```
def add_new_category(x):
x = str(x).lower()
if x == 'nan':
return 'nan'
x = ''.join(x.split())
if 'youtube' in x or 'you' in x or 'yo' in x or 'tub' in x:
return 'youtube'
elif 'google' in x or 'goo' in x or 'gle' in x:
return 'google'
else:
return 'other'
train_df['trafficSource.keyword'] = train_df['trafficSource.keyword'].apply(add_new_category)
test_df['trafficSource.keyword'] = test_df['trafficSource.keyword'].apply(add_new_category)
train_df['trafficSource.keyword'].value_counts().sort_values(ascending=False).plot.bar()
plt.yscale('log')
plt.show()
categorical_feats = ['trafficSource.keyword']
```
### totals.pageviews
- NaN values in this feature can be replaced with 0, because a NaN pageview count means no views.
```
train_df['totals.pageviews'].fillna(0, inplace=True)
test_df['totals.pageviews'].fillna(0, inplace=True)
train_df['totals.pageviews'] = train_df['totals.pageviews'].astype(int)
test_df['totals.pageviews'] = test_df['totals.pageviews'].astype(int)
train_df['totals.pageviews'].plot.hist(bins=10)
plt.yscale('log')
plt.show()
```
## 2.2 Object features
```
features_object = [col for col in train_df.columns if train_df[col].dtype == 'object']
features_object
```
### channelGrouping
```
train_df['channelGrouping'].value_counts().plot.bar()
plt.show()
categorical_feats.append('channelGrouping')
```
### device.browser
```
plt.figure(figsize=(20, 10))
train_df['device.browser'].value_counts().plot.bar()
plt.yscale('log')
plt.show()
categorical_feats.append('device.browser')
```
### device.deviceCategory
```
# plt.figure(figsize=(10, 10))
train_df['device.deviceCategory'].value_counts().plot.bar()
# plt.yscale('log')
plt.show()
categorical_feats.append('device.deviceCategory')
```
### device.operatingSystem
```
# plt.figure(figsize=(10, 10))
train_df['device.operatingSystem'].value_counts().plot.bar()
plt.yscale('log')
plt.show()
categorical_feats.append('device.operatingSystem')
```
### geoNetwork.city
```
train_df['geoNetwork.city'].value_counts()
```
- There are so many cities.
```
categorical_feats.append('geoNetwork.city')
```
### geoNetwork.continent
```
train_df['geoNetwork.continent'].value_counts()
categorical_feats.append('geoNetwork.continent')
```
### geoNetwork.country
```
train_df['geoNetwork.country'].value_counts()[:10].plot.bar()
plt.show()
```
- In this dataset, many users are in the US
```
categorical_feats.append('geoNetwork.country')
```
### geoNetwork.metro
```
train_df['geoNetwork.metro'].value_counts()[:10].plot.bar()
categorical_feats.append('geoNetwork.metro')
```
### geoNetwork.networkDomain
```
train_df['geoNetwork.networkDomain'].value_counts()
```
- There are so many domains. How can we deal with them?
- One-hot encoding is not a good choice: it would generate 28064 features...
- Should we just remove this feature, or use it in a more efficient way?
```
categorical_feats.append('geoNetwork.networkDomain')
```
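One way to use a high-cardinality feature like this efficiently is frequency encoding: replace each category with how often it occurs, so the model sees a single numeric column instead of thousands of dummies. A minimal sketch of the idea on a hypothetical toy list (the real fit would use the training column):

```
from collections import Counter

# Hypothetical toy values standing in for train_df['geoNetwork.networkDomain']
domains = ['comcast.net', 'comcast.net', 'verizon.net', '(not set)', 'comcast.net']

# Fit the category -> count mapping on training data, then apply it everywhere
freq = Counter(domains)
encoded = [freq[d] for d in domains]
print(encoded)  # [3, 3, 1, 1, 3]
```

For test-set categories never seen in training, `freq.get(d, 0)` maps them to 0.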
### geoNetwork.region
```
train_df['geoNetwork.region'].value_counts()
```
- US (California, New York), UK (England), Thailand (Bangkok), Vietnam (Ho Chi Minh), and Turkey (Istanbul) are the top 5 regions
```
categorical_feats.append('geoNetwork.region')
```
### geoNetwork.subContinent
```
train_df['geoNetwork.subContinent'].value_counts().plot.bar()
plt.yscale('log')
plt.show()
categorical_feats.append('geoNetwork.subContinent')
```
### totals.hits
```
train_df['totals.hits'].value_counts()
```
- This feature can be treated as a continuous feature
```
train_df['totals.hits'] = train_df['totals.hits'].astype(int)
test_df['totals.hits'] = test_df['totals.hits'].astype(int)
```
### trafficSource.adwordsClickInfo.gclId
```
train_df['trafficSource.adwordsClickInfo.gclId'].value_counts()
```
- This feature is too complex to use directly, so for now we remove it.
```
train_df.drop('trafficSource.adwordsClickInfo.gclId', axis=1, inplace=True)
test_df.drop('trafficSource.adwordsClickInfo.gclId', axis=1, inplace=True)
```
### trafficSource.campaign
```
train_df['trafficSource.campaign'].value_counts().plot.bar()
plt.yscale('log')
plt.show()
categorical_feats.append('trafficSource.campaign')
```
### trafficSource.medium
```
train_df['trafficSource.medium'].value_counts().plot.bar()
plt.yscale('log')
plt.show()
categorical_feats.append('trafficSource.medium')
```
### trafficSource.source
```
# for value in train_df['trafficSource.source'].value_counts().index:
# print(value)
# save your page
```
- The main sources include google, baidu, facebook, reddit, yahoo, bing, and yandex
```
def add_new_category(x):
    x = str(x).lower()
    if 'google' in x:
        return 'google'
    elif 'baidu' in x:
        return 'baidu'
    elif 'facebook' in x:
        return 'facebook'
    elif 'reddit' in x:
        return 'reddit'
    elif 'yahoo' in x:
        return 'yahoo'
    elif 'bing' in x:
        return 'bing'
    elif 'yandex' in x:
        return 'yandex'
    else:
        return 'other'
train_df['trafficSource.source'] = train_df['trafficSource.source'].apply(add_new_category)
test_df['trafficSource.source'] = test_df['trafficSource.source'].apply(add_new_category)
train_df['trafficSource.source'].value_counts().sort_values(ascending=False).plot.bar()
plt.yscale('log')
plt.show()
categorical_feats.append('trafficSource.source')
```
### device.isMobile
- This feature is boolean, so we convert it with `astype(int)`
```
train_df['device.isMobile'] = train_df['device.isMobile'].astype(int)
test_df['device.isMobile'] = test_df['device.isMobile'].astype(int)
```
## 2.3 Time feature
```
len_train = train_df.shape[0]
df_all = pd.concat([train_df, test_df])
def change_date_to_datetime(x):
    str_time = str(x)
    date = '{}-{}-{}'.format(str_time[:4], str_time[4:6], str_time[6:])
    return date

def add_time_feature(data):
    data['date'] = pd.to_datetime(data['date'])
    data['Year'] = data.date.dt.year
    data['Month'] = data.date.dt.month
    data['Day'] = data.date.dt.day
    data['WeekOfYear'] = data.date.dt.weekofyear  # deprecated in newer pandas; use data.date.dt.isocalendar().week
    return data
df_all['date'] = df_all['date'].apply(change_date_to_datetime)
df_all = add_time_feature(df_all)
categorical_feats += ['Year', 'Month', 'Day', 'WeekOfYear']
df_all.drop('date', axis=1, inplace=True)
```
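Since the raw `date` column stores dates as `YYYYMMDD` values, the string slicing in `change_date_to_datetime` is equivalent to parsing with the stdlib `datetime.strptime`; a small sketch:

```
from datetime import datetime

raw = 20170801  # raw 'date' values look like YYYYMMDD integers
parsed = datetime.strptime(str(raw), "%Y%m%d")
print(parsed.date())  # 2017-08-01
```

With pandas, `pd.to_datetime(df_all['date'], format='%Y%m%d')` should achieve the same in one vectorized call.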
# 3. Model development
## 3.1 Label encoding
```
from sklearn.preprocessing import LabelEncoder
for col in categorical_feats:
    lbl = LabelEncoder()
    df_all[col] = lbl.fit_transform(df_all[col])
train_df = df_all[:len_train]
test_df = df_all[len_train:]
```
## 3.2 Drop features
```
train_fullVisitorId = train_df['fullVisitorId']
train_sessionId = train_df['sessionId']
train_visitId = train_df['visitId']
test_fullVisitorId = test_df['fullVisitorId']
test_sessionId = test_df['sessionId']
test_visitId = test_df['visitId']
train_df.drop(['fullVisitorId', 'sessionId', 'visitId'], axis=1, inplace=True)
test_df.drop(['fullVisitorId', 'sessionId', 'visitId'], axis=1, inplace=True)
train_df.head()
param = {
    'num_leaves': 48,
    'min_data_in_leaf': 300,
    'objective': 'regression',
    'max_depth': -1,
    'learning_rate': 0.005,
    'min_child_samples': 40,
    'boosting': 'gbdt',
    'feature_fraction': 0.8,
    'bagging_freq': 1,
    'bagging_fraction': 0.8,
    'bagging_seed': 3,
    'metric': 'rmse',
    'lambda_l1': 1,
    'lambda_l2': 1,
    'verbosity': -1,
}
```
## 3.3 Training model
```
folds = KFold(n_splits=5, shuffle=True, random_state=15)
oof = np.zeros(len(train_df))
predictions = np.zeros(len(test_df))
start = time.time()
features = list(train_df.columns)
feature_importance_df = pd.DataFrame()
for fold_, (trn_idx, val_idx) in enumerate(folds.split(train_df.values, target.values)):
    trn_data = lgb.Dataset(train_df.iloc[trn_idx], label=target.iloc[trn_idx], categorical_feature=categorical_feats)
    val_data = lgb.Dataset(train_df.iloc[val_idx], label=target.iloc[val_idx], categorical_feature=categorical_feats)
    num_round = 10000
    clf = lgb.train(param, trn_data, num_round, valid_sets=[trn_data, val_data], verbose_eval=400, early_stopping_rounds=500, categorical_feature=categorical_feats)
    oof[val_idx] = clf.predict(train_df.iloc[val_idx].values, num_iteration=clf.best_iteration)
    fold_importance_df = pd.DataFrame()
    fold_importance_df["feature"] = features
    fold_importance_df["importance"] = clf.feature_importance()
    fold_importance_df["fold"] = fold_ + 1
    feature_importance_df = pd.concat([feature_importance_df, fold_importance_df], axis=0)
    predictions += clf.predict(test_df.values, num_iteration=clf.best_iteration) / folds.n_splits
print("CV score: {:<8.5f}".format(mean_squared_error(oof, target)**0.5))
cols = feature_importance_df[["feature", "importance"]].groupby("feature").mean().sort_values(
by="importance", ascending=False)[:1000].index
best_features = feature_importance_df.loc[feature_importance_df.feature.isin(cols)]
plt.figure(figsize=(14,10))
sns.barplot(x="importance", y="feature", data=best_features.sort_values(by="importance", ascending=False))
plt.title('LightGBM Features (avg over folds)')
plt.tight_layout()
plt.savefig('lgbm_importances.png')
```
## 3.4 Submission
```
submission = pd.DataFrame()
submission['fullVisitorId'] = test_fullVisitorId
submission['PredictedLogRevenue'] = predictions
grouped_test = submission[['fullVisitorId', 'PredictedLogRevenue']].groupby('fullVisitorId').sum().reset_index()
grouped_test.to_csv('submit.csv',index=False)
```
# Running a Hyperparameter Tuning Job with Vertex Training
## Learning objectives
In this notebook, you learn how to:
1. Create a Vertex AI custom job for training a model.
2. Launch hyperparameter tuning job with the Python SDK.
3. Cleanup resources.
## Overview
This notebook demonstrates how to run a hyperparameter tuning job with Vertex Training to discover optimal hyperparameter values for an ML model. To speed up the training process, `MirroredStrategy` from the `tf.distribute` module is used to distribute training across multiple GPUs on a single machine.
In this notebook, you create a custom-trained model from a Python script in a Docker container. You learn how to modify training application code for hyperparameter tuning and submit a Vertex Training hyperparameter tuning job with the Python SDK.
### Dataset
The dataset used for this tutorial is the [horses or humans dataset](https://www.tensorflow.org/datasets/catalog/horses_or_humans) from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The trained model predicts if an image is of a horse or a human.
Each learning objective will correspond to a _#TODO_ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/distributed-hyperparameter-tuning.ipynb)
### Install additional packages
Install the latest version of Vertex SDK for Python.
```
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
    USER_FLAG = "--user"
# Install necessary dependencies
! pip3 install {USER_FLAG} --upgrade google-cloud-aiplatform
```
### Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
```
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
    # Automatically restart kernel after installs
    import IPython
    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
```
### Set up your Google Cloud project
1. [Enable the Vertex AI API and Compute Engine API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,compute_component).
1. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
#### Set your project ID
**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
```
import os
PROJECT_ID = "qwiklabs-gcp-00-b9e7121a76ba" # Replace your Project ID here
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
```
Otherwise, set your project ID here.
```
if PROJECT_ID == "" or PROJECT_ID is None:
    PROJECT_ID = "qwiklabs-gcp-00-b9e7121a76ba"  # Replace your Project ID here
```
Set project ID
```
! gcloud config set project $PROJECT_ID
```
#### Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
```
# Import necessary library
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```
### Create a Cloud Storage bucket
**The following steps are required, regardless of your notebook environment.**
When you submit a custom training job using the Cloud SDK, you will need to provide a staging bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the `REGION` variable, which is used for operations
throughout the rest of this notebook. Make sure to [choose a region where Vertex AI services are
available](https://cloud.google.com/vertex-ai/docs/general/locations#available_regions). You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
```
BUCKET_URI = "gs://qwiklabs-gcp-00-b9e7121a76ba" # Replace your Bucket name here
REGION = "us-central1" # @param {type:"string"}
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://qwiklabs-gcp-00-b9e7121a76ba": # Replace your Bucket name here
    BUCKET_URI = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
print(BUCKET_URI)
```
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
```
# Create your bucket
! gsutil mb -l $REGION $BUCKET_URI
```
Finally, validate access to your Cloud Storage bucket by examining its contents:
```
# Give access to your Cloud Storage bucket
! gsutil ls -al $BUCKET_URI
```
### Import libraries and define constants
```
# Import necessary libraries
import os
import sys
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt
```
### Write Dockerfile
The first step in containerizing your code is to create a Dockerfile. In the Dockerfile, you'll include all the commands needed to run the image such as installing the necessary libraries and setting up the entry point for the training code.
This Dockerfile uses the Deep Learning Container TensorFlow Enterprise 2.5 GPU Docker image. The Deep Learning Containers on Google Cloud come with many common ML and data science frameworks pre-installed. After downloading that image, this Dockerfile installs the [CloudML Hypertune](https://github.com/GoogleCloudPlatform/cloudml-hypertune) library and sets up the entrypoint for the training code.
```
%%writefile Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-gpu.2-5
WORKDIR /
# Installs hypertune library
RUN pip install cloudml-hypertune
# Copies the trainer code to the docker image.
COPY trainer /trainer
# Sets up the entry point to invoke the trainer.
ENTRYPOINT ["python", "-m", "trainer.task"]
```
### Create training application code
Next, you create a trainer directory with a `task.py` script that contains the code for your training application.
```
# Create trainer directory
! mkdir trainer
```
In the next cell, you write the contents of the training script, `task.py`. This file downloads the _horses or humans_ dataset from TensorFlow datasets and trains a `tf.keras` functional model using `MirroredStrategy` from the `tf.distribute` module.
There are a few components that are specific to using the hyperparameter tuning service:
* The script imports the `hypertune` library. Note that the Dockerfile included instructions to pip install the hypertune library.
* The function `get_args()` defines a command-line argument for each hyperparameter you want to tune. In this example, the hyperparameters that will be tuned are the learning rate, the momentum value in the optimizer, and the number of units in the last hidden layer of the model. The value passed in those arguments is then used to set the corresponding hyperparameter in the code.
* At the end of the `main()` function, the hypertune library is used to define the metric to optimize. In this example, the metric that will be optimized is the validation accuracy. This metric is passed to an instance of `HyperTune`.
```
%%writefile trainer/task.py
import argparse
import hypertune
import tensorflow as tf
import tensorflow_datasets as tfds
def get_args():
    """Parses args. Must include all hyperparameters you want to tune."""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--learning_rate', required=True, type=float, help='learning rate')
    parser.add_argument(
        '--momentum', required=True, type=float, help='SGD momentum value')
    parser.add_argument(
        '--units',
        required=True,
        type=int,
        help='number of units in last hidden layer')
    parser.add_argument(
        '--epochs',
        required=False,
        type=int,
        default=10,
        help='number of training epochs')
    args = parser.parse_args()
    return args
def preprocess_data(image, label):
    """Resizes and scales images."""
    image = tf.image.resize(image, (150, 150))
    return tf.cast(image, tf.float32) / 255., label

def create_dataset(batch_size):
    """Loads Horses Or Humans dataset and preprocesses data."""
    data, info = tfds.load(
        name='horses_or_humans', as_supervised=True, with_info=True)
    # Create train dataset
    train_data = data['train'].map(preprocess_data)
    train_data = train_data.shuffle(1000)
    train_data = train_data.batch(batch_size)
    # Create validation dataset
    validation_data = data['test'].map(preprocess_data)
    validation_data = validation_data.batch(64)
    return train_data, validation_data
def create_model(units, learning_rate, momentum):
    """Defines and compiles model."""
    inputs = tf.keras.Input(shape=(150, 150, 3))
    x = tf.keras.layers.Conv2D(16, (3, 3), activation='relu')(inputs)
    x = tf.keras.layers.MaxPooling2D((2, 2))(x)
    x = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')(x)
    x = tf.keras.layers.MaxPooling2D((2, 2))(x)
    x = tf.keras.layers.Conv2D(64, (3, 3), activation='relu')(x)
    x = tf.keras.layers.MaxPooling2D((2, 2))(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(units, activation='relu')(x)
    outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(
        loss='binary_crossentropy',
        optimizer=tf.keras.optimizers.SGD(
            learning_rate=learning_rate, momentum=momentum),
        metrics=['accuracy'])
    return model
def main():
    args = get_args()
    # Create Strategy
    strategy = tf.distribute.MirroredStrategy()
    # Scale batch size
    GLOBAL_BATCH_SIZE = 64 * strategy.num_replicas_in_sync
    train_data, validation_data = create_dataset(GLOBAL_BATCH_SIZE)
    # Wrap model variables within scope
    with strategy.scope():
        model = create_model(args.units, args.learning_rate, args.momentum)
    # Train model
    history = model.fit(
        train_data, epochs=args.epochs, validation_data=validation_data)
    # Define Metric
    hp_metric = history.history['val_accuracy'][-1]
    hpt = hypertune.HyperTune()
    hpt.report_hyperparameter_tuning_metric(
        hyperparameter_metric_tag='accuracy',
        metric_value=hp_metric,
        global_step=args.epochs)

if __name__ == '__main__':
    main()
```
### Build the Container
In the next cells, you build the container and push it to Google Container Registry.
```
# Set the IMAGE_URI
IMAGE_URI = f"gcr.io/{PROJECT_ID}/horse-human:hypertune"
# Build the docker image
! docker build -f Dockerfile -t $IMAGE_URI ./
# Push it to Google Container Registry:
! docker push $IMAGE_URI
```
### Create and run hyperparameter tuning job on Vertex AI
Once your container is pushed to Google Container Registry, you use the Vertex SDK to create and run the hyperparameter tuning job.
You define the following specifications:
* `worker_pool_specs`: Dictionary specifying the machine type and Docker image. This example defines a single node cluster with one `n1-standard-4` machine with two `NVIDIA_TESLA_T4` GPUs.
* `parameter_spec`: Dictionary specifying the parameters to optimize. The dictionary key is the string assigned to the command line argument for each hyperparameter in your training application code, and the dictionary value is the parameter specification. The parameter specification includes the type, min/max values, and scale for the hyperparameter.
* `metric_spec`: Dictionary specifying the metric to optimize. The dictionary key is the `hyperparameter_metric_tag` that you set in your training application code, and the value is the optimization goal.
```
# Define required specifications
worker_pool_specs = [
{
"machine_spec": {
"machine_type": "n1-standard-4",
"accelerator_type": "NVIDIA_TESLA_T4",
"accelerator_count": 2,
},
"replica_count": 1,
"container_spec": {"image_uri": IMAGE_URI},
}
]
metric_spec = {"accuracy": "maximize"}
parameter_spec = {
"learning_rate": hpt.DoubleParameterSpec(min=0.001, max=1, scale="log"),
"momentum": hpt.DoubleParameterSpec(min=0, max=1, scale="linear"),
"units": hpt.DiscreteParameterSpec(values=[64, 128, 512], scale=None),
}
```
Create a `CustomJob`.
```
print(BUCKET_URI)
# Create a CustomJob
JOB_NAME = "horses-humans-hyperparam-job" + TIMESTAMP
my_custom_job = # TODO 1: Your code goes here(
display_name=JOB_NAME,
project=PROJECT_ID,
worker_pool_specs=worker_pool_specs,
staging_bucket=BUCKET_URI,
)
```
Then, create and run a `HyperparameterTuningJob`.
There are a few arguments to note:
* `max_trial_count`: Sets an upper bound on the number of trials the service will run. The recommended practice is to start with a smaller number of trials and get a sense of how impactful your chosen hyperparameters are before scaling up.
* `parallel_trial_count`: If you use parallel trials, the service provisions multiple training processing clusters. The worker pool spec that you specify when creating the job is used for each individual training cluster. Increasing the number of parallel trials reduces the amount of time the hyperparameter tuning job takes to run; however, it can reduce the effectiveness of the job overall. This is because the default tuning strategy uses results of previous trials to inform the assignment of values in subsequent trials.
* `search_algorithm`: The available search algorithms are grid, random, or default (None). The default option applies Bayesian optimization to search the space of possible hyperparameter values and is the recommended algorithm.
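As a mental model for the search space defined above, random search can be sketched in plain Python (this only illustrates how trials sample the parameter ranges; the Vertex service performs the real search):

```
import random

random.seed(0)

def sample_trial():
    """Draw one hyperparameter configuration from the space defined above."""
    return {
        "learning_rate": 10 ** random.uniform(-3, 0),  # log scale over [0.001, 1]
        "momentum": random.uniform(0, 1),              # linear scale over [0, 1]
        "units": random.choice([64, 128, 512]),        # discrete values
    }

trials = [sample_trial() for _ in range(3)]
```

The default Bayesian strategy differs in that each new trial is informed by the results of earlier ones rather than drawn independently.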
```
# Create and run HyperparameterTuningJob
hp_job = # TODO 2: Your code goes here(
display_name=JOB_NAME,
custom_job=my_custom_job,
metric_spec=metric_spec,
parameter_spec=parameter_spec,
max_trial_count=15,
parallel_trial_count=3,
project=PROJECT_ID,
search_algorithm=None,
)
hp_job.run()
```
**It will take nearly 50 minutes for the job to complete successfully.**
Click on the generated link in the output to see your run in the Cloud Console. When the job completes, you will see the results of the tuning trials.

## Cleaning up
To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
```
# Set this to true only if you'd like to delete your bucket
delete_bucket = # TODO 3: Your code goes here
if delete_bucket or os.getenv("IS_TESTING"):
    ! gsutil rm -r $BUCKET_URI
```
# Global Configuration
This notebook example demonstrates how the global configuration object, the `MassConfiguration`, is used to configure the default behaviors for various **COBRApy** and **MASSpy** methods.
```
import cobra
import mass
from mass.test import create_test_model
cobra_config = cobra.Configuration()
```
Note that changing the global configuration values is the most useful at the beginning of a work session.
## The MassConfiguration Object
Similar to the `cobra.Configuration` object, the `MassConfiguration` object is a [singleton](https://en.wikipedia.org/wiki/Singleton_pattern), meaning that only one instance can exist and is respected everywhere in **MASSpy**.
The `MassConfiguration` is retrieved via the following:
```
mass_config = mass.MassConfiguration()
```
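The singleton pattern itself can be sketched in a few lines of plain Python (a simplified illustration, not MASSpy's actual implementation):

```
class Configuration:
    """Minimal singleton: every instantiation returns the same object."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a = Configuration()
b = Configuration()
print(a is b)  # True -- there is only ever one instance
```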
The `MassConfiguration` is synchronized with the `cobra.Configuration` singleton object such that a change in one configuration object affects the other.
```
print("cobra configuration before: {0!r}".format(cobra_config.bounds))
# Change bounds using the MassConfiguration object
mass_config.bounds = (-444, 444)
print("cobra configuration after: {0!r}".format(cobra_config.bounds))
```
This means that changes only need to be made to the `MassConfiguration` object for workflows that involve both the **COBRApy** and **MASSpy** packages. The shared configuration attributes can be viewed using the `MassConfiguration.shared_state` attribute.
```
list(mass_config.shared_state)
```
## Attributes for Model Construction
The following attributes of the `MassConfiguration` alter default behaviors for constructing models and importing/exporting models via [SBML](http://sbml.org/Main_Page).
```
from mass import MassMetabolite, MassReaction
```
### For irreversible reactions
When an irreversible reaction is created, the equilibrium constant and reverse rate constant are automatically set based on the `irreversible_Keq` and `irreversible_kr` attributes, respectively.
```
mass_config.irreversible_Keq = float("inf")
mass_config.irreversible_kr = 0
print("Irreversible Keq: {0}".format(mass_config.irreversible_Keq))
print("Irreversible kr: {0}".format(mass_config.irreversible_kr))
R1 = MassReaction("R1", reversible=False)
R1.parameters
```
Changing the `irreversible_Keq` and `irreversible_kr` attributes affects newly created `MassReaction` objects.
```
mass_config.irreversible_Keq = 10e6
mass_config.irreversible_kr = 1e-6
print("Irreversible Keq: {0}".format(mass_config.irreversible_Keq))
print("Irreversible kr: {0}\n".format(mass_config.irreversible_kr))
# Create new reaction
R2 = MassReaction("R2", reversible=False)
print(R2.parameters)
```
Existing reactions are not affected.
```
print(R1.parameters)
```
### For rate expressions
Automatic generation of rate expressions is affected by the `exclude_metabolites_from_rates` and `exclude_compartment_volumes_in_rates` attributes.
```
model = create_test_model("textbook")
```
#### Excluding metabolites from rates
The `exclude_metabolites_from_rates` attribute determines which metabolites to exclude from rate expressions by using a dictionary that contains a metabolite attribute for filtering, and a list of values to be excluded.
```
mass_config.exclude_metabolites_from_rates
```
The default setting utilizes the `MassMetabolite.elements` attribute for filtering, excluding any metabolite that returns the elements for hydrogen and water.
```
ENO = model.reactions.get_by_id("ENO")
print(ENO.rate)
```
The `exclude_metabolites_from_rates` attribute can be changed by providing a `dict` that contains a metabolite attribute for filtering and the list of values to be excluded. For example, to exclude "2pg_c" by using its `name` attribute as the criteria for exclusion:
```
mass_config.exclude_metabolites_from_rates = {"name": ["D-Glycerate 2-phosphate"]}
ENO = model.reactions.get_by_id("ENO")
print(ENO.rate)
```
Or, to exclude hydrogen and water by using their identifiers:
```
mass_config.exclude_metabolites_from_rates = {"id": ["h_c", "h2o_c"]}
ENO = model.reactions.get_by_id("ENO")
print(ENO.rate)
```
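The attribute-based filtering above can be sketched in plain Python (a simplified illustration with hypothetical metabolite objects; MASSpy's internals may differ):

```
class Met:
    """Hypothetical stand-in for a MassMetabolite with id/name attributes."""
    def __init__(self, id, name):
        self.id = id
        self.name = name

def is_excluded(met, exclude):
    # Excluded if any listed attribute of the metabolite matches a listed value
    return any(getattr(met, attr) in values for attr, values in exclude.items())

exclude = {"id": ["h_c", "h2o_c"]}
mets = [Met("h_c", "H+"), Met("2pg_c", "D-Glycerate 2-phosphate")]
kept = [m.id for m in mets if not is_excluded(m, exclude)]
print(kept)  # ['2pg_c']
```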
Boundary reactions are unaffected by the `exclude_metabolites_from_rates` attribute:
```
for rid in ["SK_h_c", "SK_h2o_c"]:
    reaction = model.reactions.get_by_id(rid)
    print(reaction.rate)
```
#### Excluding compartments from rates
The `exclude_compartment_volumes_in_rates` attribute determines whether compartment volumes are factored into rate expressions. By default, compartment volumes are not included in automatically generated rate expressions:
```
PGI = model.reactions.get_by_id("PGI")
print(PGI.rate)
```
When the `exclude_compartment_volumes_in_rates` attribute is set to `False`, compartments are included in rate expressions as `volume_CID`, with `CID` referring to the compartment identifier.
```
mass_config.exclude_compartment_volumes_in_rates = False
PGI = model.reactions.get_by_id("PGI")
model.custom_parameters["volume_c"] = 1
print(PGI.rate)
```
The compartment volume is currently treated as a custom parameter. This behavior is subject to change in future updates following the release of COBRApy compartment objects.
### For compartments and SBML
The `boundary_compartment` attribute defines the compartment for any external boundary species.
```
# Create a boundary reaction
x1_c = MassMetabolite("x1_c", compartment="c")
R3 = MassReaction("R3")
R3.add_metabolites({x1_c: -1})
print(mass_config.boundary_compartment)
R3.boundary_metabolite
```
The `boundary_compartment` can be changed using a `dict` that contains the new compartment identifier and its full name.
```
mass_config.boundary_compartment = {"xt": "external"}
R3.boundary_metabolite
```
Because the `mass.Simulation` object uses the **libRoadRunner** package, a simulator for SBML models, a model cannot be simulated without defining at least one compartment. The `default_compartment` attribute is used to define the compartment of the model when no compartments have been defined.
```
mass_config.default_compartment
```
As with the `boundary_compartment` attribute, the `default_compartment` attribute can be changed using a `dict`:
```
mass_config.default_compartment = {"def": "default_compartment"}
mass_config.default_compartment
```
#### Model creator
SBML also allows for a model creator to be defined when exporting models:
```
mass_config.model_creator
```
The `model_creator` attribute of the `MassConfiguration` allows the model creator to be set at the time of export by using a `dict`, with valid keys as "familyName", "givenName", "organization", and "email".
```
mass_config.model_creator = {
"familyName": "Smith",
"givenName": "John",
"organization": "Systems Biology Research Group @UCSD"}
mass_config.model_creator
```
## Attributes for Simulation and Analysis
The following attributes of the `MassConfiguration` alter default behaviors of various simulation and analytical methods.
```
from mass import Simulation
# Reset configurations before loading model
mass_config.boundary_compartment = {"b": "boundary"}
mass_config.exclude_compartment_volumes_in_rates = True
model = create_test_model("Glycolysis")
sim = Simulation(model, verbose=True)
```
### Steady state threshold
The `MassConfiguration.steady_state_threshold` attribute determines whether a model has reached a steady state using the following criteria:
* With simulations (i.e., `strategy=simulate`), the absolute difference between the last two solution points must be less than or equal to the steady state threshold.
* With steady state solvers, the sum of squares of the steady state solutions must be less than or equal to the steady state threshold.
In general, compared values must be less than or equal to the `steady_state_threshold` attribute to be considered at a steady state.
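With toy numbers (not actual solver output), the two criteria look like this:

```
threshold = 1e-6

# Simulation strategy: absolute difference between the last two solution points
last_two = [1.00000010, 1.00000005]
sim_at_steady_state = abs(last_two[-1] - last_two[-2]) <= threshold

# Solver strategy: sum of squares of the steady state solution values
# (here toy residuals standing in for the computed values)
residuals = [1e-4, -2e-4]
solver_at_steady_state = sum(r ** 2 for r in residuals) <= threshold

print(sim_at_steady_state, solver_at_steady_state)  # True True
```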
```
mass_config.steady_state_threshold = 1e-20
conc_sol, flux_sol = sim.find_steady_state(model, strategy="simulate")
bool(conc_sol) # Empty solution objects return False
```
Changing the threshold affects whether solution values are considered to be at steady state:
```
mass_config.steady_state_threshold = 1e-6
conc_sol, flux_sol = sim.find_steady_state(model, strategy="simulate")
bool(conc_sol) # Filled solution objects return True
```
### Decimal precision
The `MassConfiguration.decimal_precision` attribute is a special attribute used in several `mass` methods. Its value determines how many digits after the decimal point are preserved when rounding.
For many methods, the `decimal_precision` attribute will not be applied unless a `decimal_precision` kwarg is set as `True`.
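The rounding itself is just Python's built-in `round`; for example, with `decimal_precision = 8`:

```
decimal_precision = 8
value = 0.123456789012
rounded = round(value, decimal_precision)
print(rounded)  # 0.12345679
```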
```
# Set decimal precision
mass_config.decimal_precision = 8
# Will not apply decimal precision to steady state solutions
conc_sol, flux_sol = sim.find_steady_state(model, strategy="simulate",
decimal_precision=False)
print(conc_sol["glc__D_c"])
# Will apply decimal precision to steady state solutions
conc_sol, flux_sol = sim.find_steady_state(model, strategy="simulate",
decimal_precision=True)
print(conc_sol["glc__D_c"])
```
If `MassConfiguration.decimal_precision` is `None`, no rounding will occur.
```
mass_config.decimal_precision = None
# Will apply decimal precision to steady state solutions
conc_sol, flux_sol = sim.find_steady_state(model, strategy="simulate",
decimal_precision=True)
print(conc_sol["glc__D_c"])
```
## Shared COBRA Attributes
The following attributes are those shared with the `cobra.Configuration` object.
### Bounds
When a reaction is created, its default bound values are determined by the `lower_bound` and `upper_bound` attributes of the `MassConfiguration`:
```
mass_config.lower_bound = -1000
mass_config.upper_bound = 1000
R4 = MassReaction("R4")
print("R4 bounds: {0}".format(R4.bounds))
```
Changing the bounds affects newly created reactions, but not existing ones:
```
mass_config.bounds = (-444, 444)
R5 = MassReaction("R5")
print("R5 bounds: {0}".format(R5.bounds))
print("R4 bounds: {0}".format(R4.bounds))
```
### Solver
The default solver and solver tolerance attributes are determined by the `solver` and `tolerance` attributes of the `MassConfiguration`. The `solver` and `tolerance` attributes are utilized by newly instantiated models and `ConcSolver` objects.
```
model = create_test_model("textbook")
print("Solver {0!r}".format(model.solver))
print("Tolerance {0}".format(model.tolerance))
```
The default solver can be changed, depending on the solvers installed in the current environment. GLPK is assumed to always be present in the environment.
The solver tolerance is similarly set using the `tolerance` attribute.
```
# Change solver and solver tolerance
mass_config.solver = "glpk"
mass_config.tolerance = 1e-4
# Instantiate a new model to observe changes
model = create_test_model("textbook")
print("Solver {0!r}".format(model.solver))
print("Tolerance {0}".format(model.tolerance))
```
### Number of processes
The `MassConfiguration.processes` attribute determines the default number of processes used when multiprocessing is possible. The default number corresponds to the number of available cores (hyperthreads).
```
mass_config.processes
```
# IN-CLASS RNASEQ 1
---
## 0. Set up
**1. Open a new terminal.**
**2. Add two script directories to your $PATH as follows. Make sure to copy, paste, and execute each line one at a time in UNIX.**
Use the following command to view the list of directories in your path. We do this to make sure we have properly modified the PATH variable.
**(1) Enter the result of the echo $PATH command in the following text box:**
**<a href="https://github.com/itmat/Normalization" target=_blank>PORT</a> requires input files to be organized into a specific directory structure that looks like this. We will create this directory structure now.**
```
STUDY
└── reads
    ├── Sample_1
    │   ├── Unaligned reads
    │   └── Aligned.sam/bam
    ├── Sample_2
    │   ├── Unaligned reads
    │   └── Aligned.sam/bam
    ├── Sample_3
    │   ├── Unaligned reads
    │   └── Aligned.sam/bam
    └── Sample_4
        ├── Unaligned reads
        └── Aligned.sam/bam
```
<br>
**First give STUDY directory a unique name as follows:**
**Create sample directories and link them to the files of unaligned reads.**
We are not actually copying the raw data into your folders; instead we are making "symbolic links", to avoid making many copies of the same large files. It will still look just as if the files were present in your folders.
The folders of raw data will live inside `$HOME/RNASEQ/reads`.
Copy the following lines into your terminal. You should be able to copy/paste them all at once, so you don't have to do them one line at a time.
You now have all of the files of raw data in place. These files have the short reads that come off the machine. Let's look at the forward read of the first read-pair in the first sample.
**(2) Run the command below and paste the result in the box below.**
head -4 $HOME/RNASEQ/reads/sample1/sample1_forward.fq
The first row is the name of the read, the second row is the read itself, the third row is a separator line (a `+` sign), and the fourth row is the "quality string" for the read.
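The four-line FASTQ layout can be sketched in Python; the record below is made up for illustration, not taken from the course data:

```
# Minimal sketch of a single FASTQ record (made-up read, for illustration only)
record = """@read_1/1
ACGTACGTAC
+
IIIIIHHHGG""".splitlines()

name, seq, sep, qual = record
print(name)  # line 1: read name, starts with '@'
print(seq)   # line 2: the read itself
print(sep)   # line 3: separator line, '+'
print(qual)  # line 4: per-base quality string, same length as the read
```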
Now, we only have raw reads so far, so we do not yet know where in the genome each read comes from. So the first job is to align the reads to the genome.
---
## 1. Align
<br>
**First we must create a text file that contains the names of the sample directories. Do this as follows:**
Copy and paste the below into UNIX.
**(3) When you make a file like we just did, you should always check that it worked.**
So do that by
cat $HOME/RNASEQ/sample_dirs.txt
and make sure it has the four lines it should have, no more and no less. Paste the result of the cat command in the box below.
**Now we are ready to align the data with STAR.**
**Now we wait while STAR does the aligning. Depending on how many reads you have this can take minutes to hours.**
We can monitor the progress in several ways.
First off the `bjobs` command displays the current status of the pending, running or suspended jobs that you own.
**(4) Run that command now and paste what you see in the box below. You should see a header line and up to four lines showing active jobs. Some of the jobs may have already finished so you may not see all four.**
** (5) Secondly, to monitor progress, you can check the log files in the `$HOME/RNASEQ/logs` directory.**
** Run the command below on that directory now and enter the result in the box below.**
**We ran a script that in turn ran all four STAR alignments for us so we didn't have to do them by hand one-by-one.**
**(6) But you could, if you wanted, run the four STAR jobs one-by-one directly at the command line. The Perl script wrote the command to run STAR into a shell script, and you could execute the line in that shell script directly at the command prompt. To see the command, run the following and paste the result in the box below:**
---
# Homework
---
In this assignment you will work with the alignment files that STAR produces.
These files are in "SAM format".
_You will have to refer to the `SAMv1.pdf` file that you can find in this week's folder_
**[Q1] Your first task is to just look at the top ten rows of the following SAM file:**
So execute the following command and paste the result in the box below:
**[Q2] You can see from the first few lines which chromosomes have reads aligned to them. Type those chromosome names into the box below, one per line (from the results of the head command):**
**[Q3] Now your next task is to count the number of alignments (rows) for which the read aligned perfectly without gaps. **
There is a particular field in the SAM file called the "CIGAR String" that gives us this information. If the CIGAR string is "100M" then the read aligned perfectly with no gaps.
So your job is to count the number of rows that have 100M as their CIGAR string.
You will have to refer to the SAMv1.pdf document to find out where the CIGAR string is in each row.
Then you should write one UNIX command that will return the answer. You will have to pipe together a few basic UNIX commands to make this work.
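Per the SAMv1 specification, each alignment row is tab-delimited and the CIGAR string is the sixth mandatory field. A small Python sketch over a hypothetical SAM line (not taken from the real file) shows where it sits:

```
# Hypothetical SAM alignment line; fields are tab-separated
sam_line = "read_1\t0\tchr1\t100\t255\t100M\t*\t0\t0\tACGTACGT\tIIIIIIII"
fields = sam_line.split("\t")
cigar = fields[5]  # CIGAR is mandatory field 6 (zero-indexed: 5)
print(cigar)       # 100M -> aligned perfectly with no gaps
```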
**a. Construct the UNIX command and paste it into the box below.**
**b. Now execute the command and paste the answer in the box below. Because it is a large file it may take several minutes for the command to finish.**
## Machine Learning Project Checklist
[Hands-On Machine Learning](https://www.oreilly.com/library/view/hands-on-machine-learning/9781491962282/app02.html) provides a concise summary capturing the entire development process of a ML project.
We include it here as reference.
Use it as a guideline when working on your group projects.
---
Print [version](https://tdgunes.com/COMP6246-2018Fall/lab1/extra1_3.pdf)
This checklist can guide you through your Machine Learning projects. There are eight main steps:
1. Frame the problem and look at the big picture.
1. Get the data.
1. Explore the data to gain insights.
1. Prepare the data to better expose the underlying data patterns to Machine Learning algorithms.
1. Explore many different models and short-list the best ones.
1. Fine-tune your models and combine them into a great solution.
1. Present your solution.
1. Launch, monitor, and maintain your system.
Obviously, you should feel free to adapt this checklist to your needs.
### 1. Frame the Problem and Look at the Big Picture
1. Define the objective in business terms.
2. How will your solution be used?
3. What are the current solutions/workarounds (if any)?
4. How should you frame this problem (supervised/unsupervised, online/offline, etc.)?
5. How should performance be measured?
6. Is the performance measure aligned with the business objective?
7. What would be the minimum performance needed to reach the business objective?
8. What are comparable problems? Can you reuse experience or tools?
9. Is human expertise available?
10. How would you solve the problem manually?
11. List the assumptions you (or others) have made so far.
12. Verify assumptions if possible.
### 2. Get the Data
Note: automate as much as possible so you can easily get fresh data.
1. List the data you need and how much you need.
2. Find and document where you can get that data.
3. Check how much space it will take.
4. Check legal obligations, and get authorization if necessary.
5. Get access authorizations.
6. Create a workspace (with enough storage space).
7. Get the data.
8. Convert the data to a format you can easily manipulate (without changing the data itself).
9. Ensure sensitive information is deleted or protected (e.g., anonymized).
10. Check the size and type of data (time series, sample, geographical, etc.).
11. Sample a test set, put it aside, and never look at it (no data snooping!).
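Step 11 above (sampling a test set and never looking at it) can be sketched with scikit-learn's `train_test_split`; the toy arrays here are illustrative placeholders, not project data:

```
import numpy as np
from sklearn.model_selection import train_test_split

# Toy data standing in for the real dataset
X = np.arange(20).reshape(10, 2)
y = np.arange(10)

# Hold out 20% as the test set; fixing random_state makes the split reproducible
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
print(X_train.shape, X_test.shape)  # (8, 2) (2, 2)
```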
### 3. Explore the Data
Note: try to get insights from a field expert for these steps.
1. Create a copy of the data for exploration (sampling it down to a manageable size if necessary).
2. Create a Jupyter notebook to keep a record of your data exploration.
3. Study each attribute and its characteristics:
- Name
- Type (categorical, int/float, bounded/unbounded, text, structured, etc.)
- % of missing values
- Noisiness and type of noise (stochastic, outliers, rounding errors, etc.)
- Possibly useful for the task?
- Type of distribution (Gaussian, uniform, logarithmic, etc.)
4. For supervised learning tasks, identify the target attribute(s).
5. Visualize the data.
6. Study the correlations between attributes.
7. Study how you would solve the problem manually.
8. Identify the promising transformations you may want to apply.
9. Identify extra data that would be useful (go back to “Get the Data”).
10. Document what you have learned.
### 4. Prepare the Data
Notes:
- Work on copies of the data (keep the original dataset intact).
- Write functions for all data transformations you apply, for five reasons:
- So you can easily prepare the data the next time you get a fresh dataset
- So you can apply these transformations in future projects
- To clean and prepare the test set
- To clean and prepare new data instances once your solution is live
- To make it easy to treat your preparation choices as hyperparameters
1. Data cleaning:
- Fix or remove outliers (optional).
- Fill in missing values (e.g., with zero, mean, median…) or drop their rows (or columns).
1. Feature selection (optional):
- Drop the attributes that provide no useful information for the task.
1. Feature engineering, where appropriate:
- Discretize continuous features.
- Decompose features (e.g., categorical, date/time, etc.).
- Add promising transformations of features (e.g., log(x), sqrt(x), x², etc.).
- Aggregate features into promising new features.
1. Feature scaling: standardize or normalize features
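The scaling step can be sketched with scikit-learn's `StandardScaler` (standardization: zero mean, unit variance per feature); the array below is a toy placeholder, and in practice the scaler is fit on the training set only:

```
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])
scaler = StandardScaler()           # standardize each column
X_scaled = scaler.fit_transform(X)  # in practice: fit on training data only
print(X_scaled.mean(axis=0))        # ~[0. 0.]
print(X_scaled.std(axis=0))         # ~[1. 1.]
```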
### 5. Short-List Promising Models
Notes:
- If the data is huge, you may want to sample smaller training sets so you can train many different models in a reasonable time (be aware that this penalizes complex models such as large neural nets or Random Forests).
- Once again, try to automate these steps as much as possible.
1. Train many quick and dirty models from different categories (e.g., linear, naive Bayes, SVM, Random Forests, neural net, etc.) using standard parameters.
1. Measure and compare their performance.
- For each model, use N-fold cross-validation and compute the mean and standard deviation of the performance measure on the N folds.
1. Analyze the most significant variables for each algorithm.
1. Analyze the types of errors the models make.
- What data would a human have used to avoid these errors?
1. Have a quick round of feature selection and engineering.
1. Have one or two more quick iterations of the five previous steps.
1. Short-list the top three to five most promising models, preferring models that make different types of errors.
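The N-fold cross-validation step above can be sketched with scikit-learn; the model and dataset here are placeholders for whatever you are actually comparing:

```
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# 5-fold cross-validation: one accuracy score per fold
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean(), scores.std())  # mean and std over the 5 folds
```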
### 6. Fine-Tune the System
Notes:
- You will want to use as much data as possible for this step, especially as you move toward the end of fine-tuning.
- As always automate what you can.
1. Fine-tune the hyperparameters using cross-validation.
- Treat your data transformation choices as hyperparameters, especially when you are not sure about them (e.g., should I replace missing values with zero or with the median value? Or just drop the rows?).
- Unless there are very few hyperparameter values to explore, prefer random search over grid search. If training is very long, you may prefer a Bayesian optimization approach (e.g., using Gaussian process priors, as described by Jasper Snoek, Hugo Larochelle, and Ryan Adams).
1. Try Ensemble methods. Combining your best models will often perform better than running them individually.
1. Once you are confident about your final model, measure its performance on the test set to estimate the generalization error.
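The random-search recommendation can be sketched with scikit-learn's `RandomizedSearchCV`; the estimator and parameter ranges below are arbitrary placeholders:

```
from scipy.stats import randint
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)

# Sample hyperparameter combinations at random instead of exhaustively
param_dist = {"n_estimators": randint(10, 100),
              "max_depth": randint(2, 8)}
search = RandomizedSearchCV(RandomForestClassifier(random_state=0),
                            param_dist, n_iter=5, cv=3, random_state=0)
search.fit(X, y)
print(search.best_params_)  # best sampled combination
```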
### 7. Present Your Solution
1. Document what you have done.
2. Create a nice presentation.
- Make sure you highlight the big picture first.
3. Explain why your solution achieves the business objective.
4. Don’t forget to present interesting points you noticed along the way.
- Describe what worked and what did not.
- List your assumptions and your system’s limitations.
5. Ensure your key findings are communicated through beautiful visualizations or easy-to-remember statements (e.g., “the median income is the number-one predictor of housing prices”).
### 8. Launch!
1. Get your solution ready for production (plug into production data inputs, write unit tests, etc.).
1. Write monitoring code to check your system’s live performance at regular intervals and trigger alerts when it drops.
- Beware of slow degradation too: models tend to “rot” as data evolves.
- Measuring performance may require a human pipeline (e.g., via a crowdsourcing service).
- Also monitor your inputs’ quality (e.g., a malfunctioning sensor sending random values, or another team’s output becoming stale). This is particularly important for online learning systems.
1. Retrain your models on a regular basis on fresh data (automate as much as possible).
### Do not forget to discuss and argue
- Do you think your ML based solution works?
- Do you see room for improvement?
- Do you recommend to use your ML based solution in production?
- What implications do you expect when introducing your ML based system?
- What are the economic implications for the company when introducing your system?
- What are the consequences if your system works/fails?
### Prioritize
- Hard facts > soft facts
- First observe, then interpret - solid discussions are important - avoid guessing
- An honest presentation of a system that does not work is better than a flimsy presentation of one that does
---
*This is a lecture course, we are here to learn, not to pretend.*
# K-Nearest Neighbours - solution on Iris dataset
```
%pylab inline
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
import numpy as np
import pandas as pd
iris = datasets.load_iris()
```
## Scikit-Learn Implementation
The code below presents the solution with a single line of code, as per **Scikit-Learn** [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html).
Note that the default setting uses the _Minkowski distance_:
$$d(\vec{x}, \vec{y}) = \left(\sum_i\left|x_i - y_i\right|^p\right)^{1/p}$$
with $p = 2$; the value $K = 3$ was an arbitrary choice.
```
neigh = KNeighborsClassifier(n_neighbors=3)
y_pred = neigh.fit(iris.data, iris.target).predict(iris.data)
print("Number of mislabeled points out of a total %d points : %d"
% (iris.data.shape[0],(iris.target != y_pred).sum()))
fig, ax = plt.subplots()
ax.plot(y_pred, 'o')
ax.plot(iris.target, '.')
ax.set_ylabel('Target')
ax.set_xlabel('Example number')
ax.set_yticks([0, 1, 2])
ax.legend(['Predictions', 'Original data'], loc='lower right')
ax.grid()
plt.show()
```
Comparing the predictions with the actual labels, we can easily spot the misclassified examples (six in total).
## Dataset Analysis
Let's first take a look at selected _projections_ of the dataset onto planes defined by $x_i:x_j$ feature pairs.
```
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df['target'] = iris.target
df['label'] = df['target'].apply(lambda x: iris['target_names'][x])
df.sample(3)
fig, axs = plt.subplots(2, 2, figsize=(8, 6))
plt.tight_layout(h_pad=4, w_pad=4)
colors = ['red', 'orange', 'green']
for x in range(len(iris.feature_names)):
    for y, c in zip(range(len(iris.target_names)), colors):
axs[x % 2, x // 2].scatter(
df[df['target'] == y][iris.feature_names[x % 4]],
df[df['target'] == y][iris.feature_names[(x + 1)%4]],
color=c,
alpha=0.5)
    axs[x % 2, x // 2].set_xlabel(iris.feature_names[x % 4])
    axs[x % 2, x // 2].set_ylabel(iris.feature_names[(x + 1) % 4])
axs[x % 2, x // 2].grid()
axs[x % 2, x // 2].legend(iris.target_names, loc='upper right')
axs[x % 2, x // 2].set_xlim([0.0, 8.0])
axs[x % 2, x // 2].set_ylim([0.0, 8.0])
plt.show()
```
It is cumbersome to plot a 4-dimensional space. However, these 2D projections show that the data is highly clustered, which is why distance-based classification has such a high success rate in this case.
Before continuing, let's split the original dataset onto training and test set.
**Note:** Here we will take half of the dataset to be our test set.
Normally it is recommended to hold out only a small fraction of the original dataset, so that more data remains for training the actual classifier.
Here, however, we want to illustrate the impact of choosing different $K$ values, so it actually helps that our training set is smaller and comparable in size to the test set.
```
x_train, x_test, y_train, y_test = \
train_test_split(iris.data, iris.target, test_size=0.5, random_state=0)
print ("Training set #examples: {}.".format(x_train.shape[0]))
print ("Test set #examples: {}.".format(x_test.shape[0]))
```
## Manual implementation
In order to build the KNN classifier, all we need is the ability to calculate the _distance_ between our example data point and _all_ of the training set examples, together with the definition of the distance itself.
Since the solution requires sorting while keeping the index reference, we will use a _pandas_ dataframe object, to be more explicit.
Here we will use the same Minkowski distance with $p=2$. This keeps our definition "open", while letting us treat it as a Euclidean distance.
```
def distance(x0, x1, p=2):
return np.power((abs(x0 - x1)**p).sum(axis=1), (1/p))
def my_knn(x_example, K=3, p=2):
x_example = np.tile(x_example, (x_train.shape[0], 1))
Z = pd.DataFrame(x_train)
Z['distance'] = distance(x_example, x_train, p=p)
Z['target'] = y_train
Z = Z.sort_values('distance')
chosen = Z.head(K)
return chosen['target'].mode().values[0], chosen
```
Lines 1 and 2 implement the Minkowski distance, whose formula we wrote at the beginning of this notebook.
In order to vectorize the calculations (always a good practice), we repeat the input example feature vector $N$ times, where $N$ is the number of training set examples.
We formulate a dataframe object.
We append one column: `distance` to the frame, which is our element-wise evaluation of the distance.
We also append the `y_train` data for easier comparison later.
Then, it's sorting and picking the top $K$ rows from the new frame.
Finally, we return the _mode_ of the solution.
In the case of a _regression_ problem, we would have used the mean.
The function also returns the selected `chosen` dataframe, but only for demonstration and debugging.
### Evaluation
To evaluate our `my_knn` function we define another dataframe over our test data.
Note that evaluating the KNN classifier over the training data would be misleading: each example's nearest neighbour is itself (at distance zero), so for $K=1$ the classifier would never be wrong.
```
neigh = KNeighborsClassifier(n_neighbors=3)
y_pred = neigh.fit(x_train, y_train).predict(x_test)
Z_test = pd.DataFrame(x_test)
Z_test['original target'] = y_test
Z_test['prediction with my_knn'] = Z_test.apply(lambda x: my_knn(x[[0, 1, 2, 3]], K=3)[0], axis=1)
Z_test['prediction with sklearn'] = y_pred
Z_test.head()
```
As a side note, we evaluate the `my_knn` function using the `.apply` method with an anonymous function.
Here, `x[[0, 1, 2, 3]]` and `axis=1` refer to the first four columns of each row in the dataframe,
and the `[0]` appended to the function call selects the first of the two outputs returned by `my_knn`.
```
assert (Z_test['prediction with my_knn'].to_numpy() == Z_test['prediction with sklearn'].to_numpy()).sum() == len(Z_test)
```
It seems our implementation gives identical results to the one offered with scikit-learn. This is comforting.
As a final step, let's take a look at what different values of $K$ do to the data.
## Playing with K
```
Kmax = 30
Z_test = pd.DataFrame(x_test)
Z_test['original target'] = y_test
for k in range(Kmax):
Z_test['pred K={}'.format(k + 1)] = Z_test.apply(lambda x: my_knn(x[[0, 1, 2, 3]], K=(k + 1))[0], axis=1)
Z_test.head()
correctly_classified = np.zeros(Kmax)
count = y_test.shape[0]
for k in range(Kmax):
    correctly_classified[k] = (Z_test['original target'] == Z_test['pred K={}'.format(k + 1)]).sum()/count
plt.plot(np.linspace(1, stop=30, num=30), 100*correctly_classified, ':o')
plt.xlabel('K')
plt.ylabel('Correct classifications [%]')
plt.ylim([85, 100])
plt.grid()
plt.show()
```
The figure above shows the behavior of the KNN classifier for different values of $K$.
We can see that the optimal value in this case is $K=9$.
It is interesting to observe that lower values of $K$ do not perform optimally: there are not enough "voters" for the average (or mode) to support a good decision.
Conversely, having too many neighbours stretches the geometrical region that contributes to the decision, which is also not good.
Finally, it is worth mentioning that **odd** values of $K$ perform better than **even** ones.
The reason is that with an odd number of neighbours there is never a "draw" in making the decision.
When the number is even, two classes can receive an equal number of votes, in which case the classifier picks whichever one is sorted first.
Consequently, this approach does not guarantee that the selected output is the correct one, making the decision somewhat arbitrary.
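The draw described above is easy to reproduce directly with pandas' `mode`, which `my_knn` relies on:

```
import pandas as pd

# With an even number of "voters" the class vote can end in a draw
votes = pd.Series([0, 0, 1, 1])
print(votes.mode())            # both 0 and 1 are modes
print(votes.mode().values[0])  # my_knn picks the first one, i.e. 0
```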
## Conclusion
In this notebook we have demonstrated the principle behind the K-Nearest Neighbours algorithm for classification.
The important take-aways from this example:
* K-NN requires the entire dataset to construct the model.
* It is important to agree on the distance metric. A common choice is the Euclidean distance (Minkowski with $p=2$).
* For classification problems we select the _mode_.
* For regression problem we select the _mean_.
* The value of $K$ should neither be too large nor too small.
* An odd value of $K$ is recommended.
```
import numpy as np
import numpy.random as rand
import numpy.linalg as linalg
import matplotlib.pyplot as plt
def generate_signal(N = 16, K = 4, L = 16, f = 2.4e9, theta_bound = np.pi/2):
c = 3e8 # speed of light
wl = c/f # wavelength (lambda)
d = wl/2 # uniform distance between antennas
# antenna array
array = np.linspace(0,N-1,N)*d/wl
theta = rand.rand(K,1) * np.pi - np.pi/2
alpha = (np.random.randn(K,1) + 1j*np.random.randn(K,1))*np.sqrt(1/2)
response = np.exp(-1j*2*np.pi*array*np.sin(theta))*np.sqrt(1/N)
Y = np.dot(response.T, alpha).repeat(L, axis=1)
return theta, Y, alpha
N = 64
K = 4
T = 32
theta, X_raw, alpha = generate_signal(N, K, T)
Noise = (np.random.randn(N, T) + 1j*np.random.randn(N, T))*np.sqrt(1/1000)
X = X_raw + Noise
C = np.dot(X_raw, X_raw.T.conj())/T
eig = linalg.eig(C)[0]
theta = np.sort(theta.flatten())[::-1]
print(theta)
plt.semilogy(list(range(1,N+1)), np.abs(eig))
print(np.abs(eig)[:K])
def compute_H(theta, f = 2.4e9):
c = 3e8 # speed of light
wl = c/f # wavelength (lambda)
d = wl/2 # uniform distance between antennas
array = np.linspace(0,N-1,N)*d/wl
array_response = np.exp(-1j*2*np.pi*array*np.sin(theta))*np.sqrt(1/N)
return array_response.T
search_space_len = 1800 + 1
search_space = np.linspace(0,search_space_len-1, search_space_len) / search_space_len * np.pi - np.pi/2
search_space = search_space.reshape((search_space_len, 1))
A = compute_H(search_space)
A_r = A.real
A_c = A.imag
A_top = np.concatenate((A_r, -A_c), axis=1)
A_bot = np.concatenate((A_c, A_r), axis=1)
A_total = np.concatenate((A_top, A_bot), axis=0)
b = np.concatenate((X.real, X.imag), axis=0)
l = 0.5 # regularization parameter
def soft_thresh(x, l):
return np.sign(x) * np.maximum(np.abs(x) - l, 0.)
def ista(A, b, thresh, l, maxit):
x = np.zeros((2*search_space_len, T))
L = linalg.norm(A) ** 2
for _ in range(maxit):
x = thresh(x + np.dot(A.T, b - A.dot(x)) / L, l / L)
return x
maxit = 3000
x_ista = ista(A_total, b, soft_thresh, l, maxit)
x_ista_r, x_ista_c = np.array_split(x_ista, 2, axis=0)
x_ista = x_ista_r + 1j*x_ista_c
x_ista = np.mean(x_ista, axis=1)
vector_to_rads = lambda x: x/(search_space_len) * np.pi - np.pi/2
idx = np.argwhere(np.abs(x_ista) > 0)
theta_hat = np.sort(vector_to_rads(idx).flatten())[::-1]
alpha_actual = np.zeros((search_space_len))
alpha_hat = np.zeros((search_space_len))
theta_idx = (np.floor((theta+np.pi/2)/np.pi * search_space_len)).astype(int)
theta_hat_idx = (np.floor((theta_hat+np.pi/2)/np.pi * search_space_len)).astype(int)
alpha_actual[theta_idx] = np.abs(alpha[0,:].T)
alpha_hat[theta_hat_idx] = np.abs(x_ista[theta_hat_idx]) / np.max(np.abs(x_ista[theta_hat_idx]))
plt.figure(figsize=(15, 8))
plt.plot(search_space, alpha_actual)
plt.plot(search_space, alpha_hat)
plt.xlabel('Angle (radians)')
plt.ylabel('Amplitude')
plt.legend(['Actual signal', 'Estimated signal'])
theta_soft = theta_hat
x_ista
```
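As a quick standalone check of the soft-thresholding operator used by ISTA above (same definition as in the cell): it shrinks every entry toward zero by `l` and maps anything with magnitude at most `l` exactly to zero, which is what produces the sparse `x_ista`.

```
import numpy as np

def soft_thresh(x, l):
    # shrink each entry toward zero by l; entries with |x| <= l become exactly 0
    return np.sign(x) * np.maximum(np.abs(x) - l, 0.)

x = np.array([-2.0, -0.3, 0.0, 0.4, 1.5])
print(soft_thresh(x, 0.5))  # entries with |x| <= 0.5 are zeroed
```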
```
from pathlib import Path
import matplotlib.pyplot as plt
import pandas as pd
from pyprojroot import here
import seaborn as sns
import searchnets
VSD_ROOT = here().joinpath('results/VSD/checkpoints')
NET_EXPT_ROOTS = [path for path in sorted(VSD_ROOT.iterdir()) if path.is_dir()]
cornet_z_expt_roots = [path for path in NET_EXPT_ROOTS if 'CORnet_Z' in str(path)]
```
convert all tensorboard events files to .csv (only need to run this once)
```
for cornet_z_expt_root in cornet_z_expt_roots:
net_roots = sorted(cornet_z_expt_root.joinpath('trained_200_epochs').glob('net_number*'))
for net_root in net_roots:
events_file = sorted(net_root.glob('**/*events*'))
events_file = [path for path in events_file if not str(path).endswith('.csv')]
assert len(events_file) == 1, 'found more than one events file'
events_file = events_file[0]
logdir = events_file.parent
searchnets.tensorboard.logdir2csv(logdir)
def get_net_number_from_dirname(dirname):
return dirname.split('_')[-1]
expt_dfs = {}
for cornet_z_expt_root in cornet_z_expt_roots:
expt_name = cornet_z_expt_root.name.replace('CORnet_Z_', '') # will use as dict key
dfs_this_expt = []
net_roots = sorted(cornet_z_expt_root.joinpath('trained_200_epochs').glob('net_number*'))
for net_root in net_roots:
net_number = int(
get_net_number_from_dirname(net_root.name)
)
events_csv = sorted(net_root.glob('**/*events*csv'))
assert len(events_csv) == 1, 'found more than one events file'
events_csv = events_csv[0]
df = pd.read_csv(events_csv)
df['replicate'] = net_number
dfs_this_expt.append(df)
expt_dfs[expt_name] = pd.concat(dfs_this_expt)
for expt_name in expt_dfs.keys():
if 'detect' in expt_name:
fig, ax = plt.subplots(1, 5, figsize=(20, 4))
ax = ax.ravel()
sns.lineplot(x='step', y='loss/train', hue='replicate', data=expt_dfs[expt_name],
ci=None, legend=False, alpha=0.5, ax=ax[0]);
sns.lineplot(x='step', y='loss/val', hue='replicate', data=expt_dfs[expt_name],
ci=None, legend=False, ax=ax[1]);
sns.lineplot(x='step', y='acc/val', hue='replicate', data=expt_dfs[expt_name],
ci=None, legend=False, ax=ax[2]);
ax[2].set_ylim([0., 1.1])
for ax in ax[3:]:
ax.set_axis_off()
st = fig.suptitle(expt_name)
fig.tight_layout(rect=[0, 0.03, 1, 0.95])
else:
fig, ax = plt.subplots(1, 5, figsize=(20, 4))
ax = ax.ravel()
sns.lineplot(x='step', y='loss/train', hue='replicate', data=expt_dfs[expt_name],
ci=None, legend=False, alpha=0.5, ax=ax[0]);
sns.lineplot(x='step', y='loss/val', hue='replicate', data=expt_dfs[expt_name],
ci=None, legend=False, ax=ax[1]);
sns.lineplot(x='step', y='f1/val', hue='replicate', data=expt_dfs[expt_name],
ci=None, legend=False, ax=ax[2]);
ax[2].set_ylim([0., 1.1])
sns.lineplot(x='step', y='acc_largest/val', hue='replicate', data=expt_dfs[expt_name],
ci=None, legend=False, ax=ax[3]);
ax[3].set_ylim([0., 1.1])
sns.lineplot(x='step', y='acc_random/val', hue='replicate', data=expt_dfs[expt_name],
ci=None, legend=False, ax=ax[4]);
ax[4].set_ylim([0., 1.1])
st = fig.suptitle(expt_name)
fig.tight_layout(rect=[0, 0.03, 1, 0.95])
```
```
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
import statsmodels.api as sm
# R-like formula interface; alternatively you can use statsmodels.api (imported above as sm)
import statsmodels.formula.api as smf
import theano
from scipy import stats
from scipy.special import logsumexp
%config InlineBackend.figure_format = 'retina'
az.style.use('arviz-darkgrid')
```
#### Code 6.1
```
data = {'species' : ['afarensis', 'africanus', 'habilis', 'boisei', 'rudolfensis', 'ergaster', 'sapiens'],
'brain' : [438, 452, 612, 521, 752, 871, 1350],
'mass' : [37., 35.5, 34.5, 41.5, 55.5, 61.0, 53.5]}
d = pd.DataFrame(data)
d
```
#### Code 6.2
```
m_6_1 = smf.ols('brain ~ mass', data=d).fit()
```
#### Code 6.3
```
1 - m_6_1.resid.var()/d.brain.var()
# m_6_1.summary() check the value for R-squared
```
#### Code 6.4
```
m_6_2 = smf.ols('brain ~ mass + I(mass**2)', data=d).fit()
```
#### Code 6.5
```
m_6_3 = smf.ols('brain ~ mass + I(mass**2) + I(mass**3)', data=d).fit()
m_6_4 = smf.ols('brain ~ mass + I(mass**2) + I(mass**3) + I(mass**4)', data=d).fit()
m_6_5 = smf.ols('brain ~ mass + I(mass**2) + I(mass**3) + I(mass**4) + I(mass**5)', data=d).fit()
m_6_6 = smf.ols('brain ~ mass + I(mass**2) + I(mass**3) + I(mass**4) + I(mass**5) + I(mass**6)', data=d).fit()
```
#### Code 6.6
```
m_6_7 = smf.ols('brain ~ 1', data=d).fit()
```
#### Code 6.7
```
d_new = d.drop(d.index[-1])
```
#### Code 6.8
```
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(8,3))
ax1.scatter(d.mass, d.brain, alpha=0.8)
ax2.scatter(d.mass, d.brain, alpha=0.8)
for i in range(len(d)):
d_new = d.drop(d.index[-i])
m0 = smf.ols('brain ~ mass', d_new).fit()
# need to calculate regression line
# need to add intercept term explicitly
x = sm.add_constant(d_new.mass) # add constant to new data frame with mass
x_pred = pd.DataFrame({'mass': np.linspace(x.mass.min() - 10, x.mass.max() + 10, 50)}) # create linspace dataframe
x_pred2 = sm.add_constant(x_pred) # add constant to newly created linspace dataframe
y_pred = m0.predict(x_pred2) # calculate predicted values
ax1.plot(x_pred, y_pred, 'gray', alpha=.5)
ax1.set_ylabel('brain volume (cc)', fontsize=12);
ax1.set_xlabel('body mass (kg)', fontsize=12)
ax1.set_title('Underfit model')
# fifth order model
m1 = smf.ols('brain ~ mass + I(mass**2) + I(mass**3) + I(mass**4) + I(mass**5)', data=d_new).fit()
x = sm.add_constant(d_new.mass) # add constant to new data frame with mass
x_pred = pd.DataFrame({'mass': np.linspace(x.mass.min()-10, x.mass.max()+10, 200)}) # create linspace dataframe
x_pred2 = sm.add_constant(x_pred) # add constant to newly created linspace dataframe
y_pred = m1.predict(x_pred2) # calculate predicted values from fitted model
ax2.plot(x_pred, y_pred, 'gray', alpha=.5)
ax2.set_xlim(32,62)
ax2.set_ylim(-250, 2200)
ax2.set_ylabel('brain volume (cc)', fontsize=12);
ax2.set_xlabel('body mass (kg)', fontsize=12)
ax2.set_title('Overfit model')
plt.show()
```
#### Code 6.9
```
p = (0.3, 0.7)
-sum(p * np.log(p))
```
#### Code 6.10
```
# fit model
m_6_1 = smf.ols('brain ~ mass', data=d).fit()
# compute the deviance by cheating
-2 * m_6_1.llf
```
#### Code 6.11
```
# standardize the mass before fitting
d['mass_s'] = (d['mass'] - np.mean(d['mass'])) / np.std(d['mass'])
with pm.Model() as m_6_8 :
a = pm.Normal('a', mu=np.mean(d['brain']), sd=10)
b = pm.Normal('b', mu=0, sd=10)
sigma = pm.Uniform('sigma', 0, np.std(d['brain']) * 10)
mu = pm.Deterministic('mu', a + b * d['mass_s'])
brain = pm.Normal('brain', mu = mu, sd = sigma, observed = d['brain'])
m_6_8 = pm.sample(2000, tune=5000)
theta = az.summary(m_6_8)['mean'][:3]
#compute deviance
dev = - 2 * sum(stats.norm.logpdf(d['brain'], loc = theta[0] + theta[1] * d['mass_s'] , scale = theta[2]))
dev
```
#### Code 6.12
[This](https://github.com/rmcelreath/rethinking/blob/a309712d904d1db7af1e08a76c521ab994006fd5/R/sim_train_test.R) is the original function.
```
# This function only works with number of parameters >= 2
def sim_train_test(N=20, k=3, rho=[0.15, -0.4], b_sigma=100):
n_dim = 1 + len(rho)
if n_dim < k:
n_dim = k
Rho = np.diag(np.ones(n_dim))
Rho[0, 1:3:1] = rho
i_lower = np.tril_indices(n_dim, -1)
Rho[i_lower] = Rho.T[i_lower]
x_train = stats.multivariate_normal.rvs(cov=Rho, size=N)
x_test = stats.multivariate_normal.rvs(cov=Rho, size=N)
    mm_train = np.ones((N,1))
    mm_train = np.concatenate([mm_train, x_train[:, 1:k]], axis=1)
    # Using pymc3
    with pm.Model() as m_sim:
        vec_V = pm.MvNormal('vec_V', mu=0, cov=b_sigma * np.eye(n_dim),
                            shape=(1, n_dim), testval=np.random.randn(1, n_dim)*.01)
        mu = pm.Deterministic('mu', 0 + pm.math.dot(x_train, vec_V.T))
        y = pm.Normal('y', mu=mu, sd=1, observed=x_train[:, 0])
    with m_sim:
        trace_m_sim = pm.sample()
    vec = pm.summary(trace_m_sim)['mean'][:n_dim]
    vec = np.array([i for i in vec]).reshape(n_dim, -1)
    dev_train = - 2 * sum(stats.norm.logpdf(x_train[:, 0], loc = np.matmul(mm_train, vec), scale = 1))
    mm_test = np.ones((N,1))
    mm_test = np.concatenate([mm_test, x_test[:, 1:k]], axis=1)
    dev_test = - 2 * sum(stats.norm.logpdf(x_test[:, 0], loc = np.matmul(mm_test, vec), scale = 1))
return np.mean(dev_train), np.mean(dev_test)
n = 20
tries = 10
param = 6
r = np.zeros(shape=(param - 1, 4))
train = []
test = []
for j in range(2, param + 1):
print(j)
for i in range(1, tries + 1):
tr, te = sim_train_test(N=n, k=param)
train.append(tr), test.append(te)
r[j -2, :] = np.mean(train), np.std(train, ddof=1), np.mean(test), np.std(test, ddof=1)
```
#### Code 6.14
```
num_param = np.arange(2, param + 1)
plt.figure(figsize=(10, 6))
plt.scatter(num_param, r[:, 0], color='C0')
plt.xticks(num_param)
for j in range(param - 1):
    plt.vlines(num_param[j], r[j, 0] - r[j, 1], r[j, 0] + r[j, 1], color='mediumblue',
               zorder=-1, alpha=0.80)
plt.scatter(num_param + 0.1, r[:, 2], facecolors='none', edgecolors='k')
for j in range(param - 1):
    plt.vlines(num_param[j] + 0.1, r[j, 2] - r[j, 3], r[j, 2] + r[j, 3], color='k',
               zorder=-2, alpha=0.70)
dist = 0.20
plt.text(num_param[1] - dist, r[1, 0] - dist, 'in', color='C0', fontsize=13)
plt.text(num_param[1] + dist, r[1, 2] - dist, 'out', color='k', fontsize=13)
plt.text(num_param[1] + dist, r[1, 2] + r[1, 3] - dist, '+1 SD', color='k', fontsize=10)
plt.text(num_param[1] + dist, r[1, 2] - r[1, 3] - dist, '-1 SD', color='k', fontsize=10)
plt.xlabel('Number of parameters', fontsize=14)
plt.ylabel('Deviance', fontsize=14)
plt.title(f'N = {n}', fontsize=14)
plt.show()
```
#### Code 6.15
```
data = pd.read_csv('Data/cars.csv', sep=',')
with pm.Model() as m_6_15:
    a = pm.Normal('a', mu=0, sd=100)
    b = pm.Normal('b', mu=0, sd=10)
    sigma = pm.Uniform('sigma', 0, 30)
    mu = pm.Deterministic('mu', a + b * data['speed'])
    dist = pm.Normal('dist', mu=mu, sd=sigma, observed=data['dist'])
    m_6_15 = pm.sample(5000, tune=10000)
```
#### Code 6.16
```
n_samples = 1000
n_cases = data.shape[0]
ll = np.zeros((n_cases, n_samples))
for s in range(0, n_samples):
    mu = m_6_15['a'][s] + m_6_15['b'][s] * data['speed']
    p_ = stats.norm.logpdf(data['dist'], loc=mu, scale=m_6_15['sigma'][s])
    ll[:, s] = p_
```
#### Code 6.17
```
n_cases = data.shape[0]
lppd = np.zeros(n_cases)
for a in range(n_cases):
    lppd[a] = logsumexp(ll[a]) - np.log(n_samples)
```
#### Code 6.18
```
pWAIC = np.zeros(n_cases)
for i in range(n_cases):
    pWAIC[i] = np.var(ll[i])
```
#### Code 6.19
```
- 2 * (sum(lppd) - sum(pWAIC))
```
#### Code 6.20
```
waic_vec = - 2 * (lppd - pWAIC)
(n_cases * np.var(waic_vec))**0.5
```
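The loops in Codes 6.16-6.20 can be collapsed into a vectorised sketch on a toy log-likelihood matrix (illustrative only; `logsumexp` is hand-rolled here instead of imported):

```python
import numpy as np

def logsumexp(a):
    # numerically stable log(sum(exp(a)))
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

rng = np.random.default_rng(1)
ll = rng.normal(-1.0, 0.1, size=(5, 100))  # 5 cases x 100 posterior samples

lppd = np.array([logsumexp(row) - np.log(ll.shape[1]) for row in ll])
p_waic = ll.var(axis=1)  # per-case variance of the log-likelihood over samples
waic = -2 * (lppd.sum() - p_waic.sum())
print(waic)
```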
#### Code 6.21
```
d = pd.read_csv('Data/milk.csv', sep=';')
d['neocortex'] = d['neocortex.perc'] / 100
d.dropna(inplace=True)
d.shape
```
#### Code 6.22
```
a_start = d['kcal.per.g'].mean()
sigma_start = d['kcal.per.g'].std()
mass_shared = theano.shared(np.log(d['mass'].values))
neocortex_shared = theano.shared(d['neocortex'].values)
with pm.Model() as m6_11:
    alpha = pm.Normal('alpha', mu=0, sd=10, testval=a_start)
    mu = alpha + 0 * neocortex_shared
    sigma = pm.HalfCauchy('sigma', beta=10, testval=sigma_start)
    kcal = pm.Normal('kcal', mu=mu, sd=sigma, observed=d['kcal.per.g'])
    trace_m6_11 = pm.sample(1000, tune=1000)

with pm.Model() as m6_12:
    alpha = pm.Normal('alpha', mu=0, sd=10, testval=a_start)
    beta = pm.Normal('beta', mu=0, sd=10)
    sigma = pm.HalfCauchy('sigma', beta=10, testval=sigma_start)
    mu = alpha + beta * neocortex_shared
    kcal = pm.Normal('kcal', mu=mu, sd=sigma, observed=d['kcal.per.g'])
    trace_m6_12 = pm.sample(5000, tune=15000)

with pm.Model() as m6_13:
    alpha = pm.Normal('alpha', mu=0, sd=10, testval=a_start)
    beta = pm.Normal('beta', mu=0, sd=10)
    sigma = pm.HalfCauchy('sigma', beta=10, testval=sigma_start)
    mu = alpha + beta * mass_shared
    kcal = pm.Normal('kcal', mu=mu, sd=sigma, observed=d['kcal.per.g'])
    trace_m6_13 = pm.sample(1000, tune=1000)

with pm.Model() as m6_14:
    alpha = pm.Normal('alpha', mu=0, sd=10, testval=a_start)
    beta = pm.Normal('beta', mu=0, sd=10, shape=2)
    sigma = pm.HalfCauchy('sigma', beta=10, testval=sigma_start)
    mu = alpha + beta[0] * mass_shared + beta[1] * neocortex_shared
    kcal = pm.Normal('kcal', mu=mu, sd=sigma, observed=d['kcal.per.g'])
    trace_m6_14 = pm.sample(5000, tune=15000)
```
#### Code 6.23
```
az.waic(trace_m6_14, m6_14)
```
#### Code 6.24
```
compare_df = az.compare({'m6_11' : trace_m6_11,
'm6_12' : trace_m6_12,
'm6_13' : trace_m6_13,
'm6_14' : trace_m6_14}, method='pseudo-BMA')
compare_df
```
#### Code 6.25
```
az.plot_compare(compare_df);
```
#### Code 6.26
```
diff = np.random.normal(loc=6.7, scale=7.26, size=100000)
sum(diff < 0) / 100000
```
#### Code 6.27
Compare function already checks number of observations to be equal.
```
coeftab = pd.DataFrame({'m6_11': pm.summary(trace_m6_11)['mean'],
'm6_12': pm.summary(trace_m6_12)['mean'],
'm6_13': pm.summary(trace_m6_13)['mean'],
'm6_14': pm.summary(trace_m6_14)['mean']})
coeftab
```
#### Code 6.28
```
traces = [trace_m6_11, trace_m6_12, trace_m6_13, trace_m6_14]
models = [m6_11, m6_12, m6_13, m6_14]
az.plot_forest(traces, figsize=(10, 5));
```
#### Code 6.29
```
kcal_per_g = np.repeat(0, 30) # empty outcome
neocortex = np.linspace(0.5, 0.8, 30) # sequence of neocortex
mass = np.repeat(4.5, 30) # average mass
mass_shared.set_value(np.log(mass))
neocortex_shared.set_value(neocortex)
post_pred = pm.sample_posterior_predictive(trace_m6_14, samples=10000, model=m6_14)
```
#### Code 6.30
```
milk_ensemble = pm.sample_posterior_predictive_w(traces, 10000,
models,
weights=compare_df.weight.sort_index(ascending=True))
plt.figure(figsize=(8, 6))
plt.plot(neocortex, post_pred['kcal'].mean(0), ls='--', color='k')
az.plot_hpd(neocortex, post_pred['kcal'],
fill_kwargs={'alpha': 0},
plot_kwargs={'alpha':1, 'color':'k', 'ls':'--'})
plt.plot(neocortex, milk_ensemble['kcal'].mean(0), color='C1')
az.plot_hpd(neocortex, milk_ensemble['kcal'])
plt.scatter(d['neocortex'], d['kcal.per.g'], facecolor='None', edgecolors='C0')
plt.ylim(0.3, 1)
plt.xlabel('neocortex')
plt.ylabel('kcal.per.g');
import platform
import sys
import IPython
import matplotlib
import scipy
print("""This notebook was created using:\nPython {}\nIPython {}\nPyMC3 {}\nArviZ {}\nNumPy {}\nSciPy {}\nMatplotlib {}\n""".format(sys.version[:5], IPython.__version__, pm.__version__, az.__version__, np.__version__, scipy.__version__, matplotlib.__version__))
```
# JIT Engine: Sparse Matrix x Dense Vector
Most of the previous tutorials have been focused on dense tensors. This tutorial will focus on sparse tensors.
In particular, this example will go over how to compile MLIR code that multiplies a sparse matrix with a dense vector into a function callable from Python.
Let’s first import some necessary modules and generate an instance of our JIT engine.
```
import mlir_graphblas
import mlir_graphblas.sparse_utils
import numpy as np
engine = mlir_graphblas.MlirJitEngine()
```
## State of MLIR's Current Sparse Tensor Support
MLIR's sparse tensor support is in its early stages and is fairly limited as it is undergoing frequent development. For more details on what is currently being focused on, see [the MLIR discussion on sparse tensors](https://llvm.discourse.group/t/mlir-support-for-sparse-tensors/2020).
It currently has two noteworthy limitations:
- MLIR's sparse tensor functionality in the `linalg` dialect currently only supports reading from sparse tensors but not storing into sparse tensors. Thus, the functions we write can accept sparse tensors as inputs but will return dense tensors.
- MLIR's sparse tensor support only supports a limited number of [sparse storage layouts](https://en.wikipedia.org/wiki/Sparse_matrix#Storing_a_sparse_matrix).
This first tutorial will go over the details of MLIR's sparse tensor support along with how to implement a function that multiplies an MLIR sparse matrix with a dense vector to produce a dense vector.
## MLIR’s Sparse Tensor Data Structure Overview
MLIR's sparse tensors are implemented as structs with several array and vector attributes used to store the tensor's elements. The source code for the struct representing MLIR's sparse tensor can be found [here](https://github.com/llvm/llvm-project/blob/main/mlir/lib/ExecutionEngine/SparseUtils.cpp).
The JIT engine provides `mlir_graphblas.sparse_utils.MLIRSparseTensor`, a wrapper around MLIR's sparse tensor struct.
```
# The sparse tensor below looks like this (where the underscores represent zeros):
#
# [[1.2, ___, ___, ___, ___, ___, ___, ___, ___, ___],
# [___, ___, ___, 3.4, ___, ___, ___, ___, ___, ___],
# [___, ___, 5.6, ___, ___, ___, ___, ___, ___, ___],
# [___, ___, ___, ___, ___, ___, ___, ___, ___, ___],
# [___, ___, ___, ___, ___, ___, ___, 7.8, ___, ___],
# [___, ___, ___, ___, ___, ___, ___, ___, ___, ___],
# [___, ___, ___, ___, ___, ___, ___, ___, ___, ___],
# [___, ___, ___, ___, ___, ___, ___, ___, ___, ___],
# [___, ___, ___, ___, ___, ___, ___, ___, ___, ___],
# [___, ___, ___, ___, ___, ___, ___, ___, ___, 9.0]]
#
indices = np.array([
[0, 0],
[1, 3],
[2, 2],
[4, 7],
[9, 9],
], dtype=np.uint64) # Coordinates
values = np.array([1.2, 3.4, 5.6, 7.8, 9.0], dtype=np.float32) # values at each coordinate
sizes = np.array([10, 10], dtype=np.uint64) # tensor shape
sparsity = np.array([True, True], dtype=np.bool8) # a boolean for each dimension telling which dimensions are sparse
sparse_tensor = mlir_graphblas.sparse_utils.MLIRSparseTensor(indices, values, sizes, sparsity)
```
To initialize an instance of `mlir_graphblas.sparse_utils.MLIRSparseTensor`, we need to provide:
- The coordinates of each non-zero position in the sparse tensor (see the variable `indices` above).
- The values at each position (see the variable `values` above). There's a one-to-one correspondence between each coordinate and each value (order matters here).
- The shape of the sparse tensor (see the variable `sizes` above).
- The sparsity of each dimension (see the variable `sparsity` above). This determines the sparsity/data layout, e.g. a matrix dense in the 0th dimension and sparse in the 1st dimension has a [CSR](https://en.wikipedia.org/wiki/Sparse_matrix#Compressed_sparse_row_%28CSR,_CRS_or_Yale_format%29) data layout. For more information on how the sparse data layouts work, see [the MLIR discussion on sparse tensors](https://llvm.discourse.group/t/mlir-support-for-sparse-tensors/2020).
Despite the fact that we give the positions and values of the non-zero elements to the constructor in a way that resembles [COO format](https://en.wikipedia.org/wiki/Sparse_matrix#Coordinate_list_%28COO%29), the underlying data structure does not store them in [COO format](https://en.wikipedia.org/wiki/Sparse_matrix#Coordinate_list_%28COO%29). The sparsity of each dimension (see the variable `sparsity` above) is what the constructor uses to determine how to store the data.
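To sanity-check the coordinate inputs above against the ASCII picture, the same values can be scattered into a dense numpy array (this is just a checking device; the MLIR struct does not store the data this way):

```python
import numpy as np

indices = np.array([[0, 0], [1, 3], [2, 2], [4, 7], [9, 9]], dtype=np.uint64)
values = np.array([1.2, 3.4, 5.6, 7.8, 9.0], dtype=np.float32)

dense = np.zeros((10, 10), dtype=np.float32)
dense[indices[:, 0], indices[:, 1]] = values  # scatter each value to its coordinate
print(dense[1, 3], dense[9, 9])  # 3.4 9.0
```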
## Using MLIR’s Sparse Tensor Data Structure in MLIR Code
We'll now go over how we can use the MLIR's sparse tensor in some MLIR code.
Here's the MLIR code for [multiplying a sparse matrix with a dense tensor](https://en.wikipedia.org/wiki/Sparse_matrix-vector_multiplication).
```
mlir_text = """
#trait_matvec = {
indexing_maps = [
affine_map<(i,j) -> (i,j)>,
affine_map<(i,j) -> (j)>,
affine_map<(i,j) -> (i)>
],
iterator_types = ["parallel", "reduction"],
sparse = [
[ "S", "S" ],
[ "D" ],
[ "D" ]
],
sparse_dim_map = [
affine_map<(i,j) -> (j,i)>,
affine_map<(i) -> (i)>,
affine_map<(i) -> (i)>
]
}
#HyperSparseMatrix = #sparse_tensor.encoding<{
dimLevelType = [ "compressed", "compressed" ],
dimOrdering = affine_map<(i,j) -> (i,j)>,
pointerBitWidth = 64,
indexBitWidth = 64
}>
func @spmv(%arga: tensor<10x10xf32, #HyperSparseMatrix>, %argb: tensor<10xf32>) -> tensor<10xf32> {
%output_storage = constant dense<0.0> : tensor<10xf32>
%0 = linalg.generic #trait_matvec
ins(%arga, %argb : tensor<10x10xf32, #HyperSparseMatrix>, tensor<10xf32>)
outs(%output_storage: tensor<10xf32>) {
^bb(%A: f32, %b: f32, %x: f32):
%0 = mulf %A, %b : f32
%1 = addf %x, %0 : f32
linalg.yield %1 : f32
} -> tensor<10xf32>
return %0 : tensor<10xf32>
}
"""
```
One thing that distinguishes the trait `#trait_matvec` from the traits used by the dense operations in previous tutorials is that it specifies sparsity via the `sparse` attribute. Note the presence of `[ "S", "S" ]`: this must correspond to the sparsity of our sparse tensor (see the Python variable `sparsity` from earlier).
Also, note the type of our sparse tensor: it is an ordinary `tensor` type annotated with the `#HyperSparseMatrix` attribute we defined via `#sparse_tensor.encoding`. MLIR's passes for sparse tensors are currently under development; during lowering they convert such tensors into pointers to a runtime sparse tensor struct, represented as `!llvm.ptr<i8>` in the [LLVM dialect](https://mlir.llvm.org/docs/Dialects/LLVM/). This is likely a temporary measure implemented as a prototype and is expected to mature into a more stable piece of functionality in the upcoming months.
Inside the `linalg.generic` operation, the sparse operand can be treated as a normal tensor, with all the complexities of indexing into the sparse storage handled by MLIR's sparse tensor passes.
The main MLIR pass we'll use to lower our sparse tensors is `--sparsification`. Here are all the passes we'll use.
```
passes = [
"--sparsification",
"--sparse-tensor-conversion",
"--linalg-bufferize",
"--func-bufferize",
"--tensor-bufferize",
"--tensor-constant-bufferize",
"--finalizing-bufferize",
"--convert-linalg-to-loops",
"--convert-scf-to-std",
"--convert-memref-to-llvm",
"--convert-std-to-llvm",
]
```
## SpMV Compilation
Let's now actually see what our MLIR code can do.
We'll first compile our code.
```
engine.add(mlir_text, passes)
spmv = engine.spmv
```
We already have a 10x10 sparse tensor from earlier (see the Python variable `sparse_tensor`) that we can use as an input. Let's create a dense vector we can multiply it by.
```
dense_vector = np.arange(10, dtype=np.float32)
```
Let's perform the calculation.
```
spmv_answer = spmv(sparse_tensor, dense_vector)
spmv_answer
```
Let's verify that this is the result we expect.
```
dense_tensor = np.array([
[1.2, 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ],
[0 , 0 , 0 , 3.4, 0 , 0 , 0 , 0 , 0 , 0 ],
[0 , 0 , 5.6, 0 , 0 , 0 , 0 , 0 , 0 , 0 ],
[0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ],
[0 , 0 , 0 , 0 , 0 , 0 , 0 , 7.8, 0 , 0 ],
[0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ],
[0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ],
[0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ],
[0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 ],
[0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 0 , 9.0]
], dtype=np.float32)
np_answer = dense_tensor @ dense_vector
all(spmv_answer == np_answer)
```
Deep Learning Models -- A collection of various deep learning architectures, models, and tips for TensorFlow and PyTorch in Jupyter Notebooks.
- Author: Sebastian Raschka
- GitHub Repository: https://github.com/rasbt/deeplearning-models
```
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch
```
- Runs on CPU or GPU (if available)
# Model Zoo -- Convolutional Neural Network with He Initialization
## Imports
```
import time
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
if torch.cuda.is_available():
    torch.backends.cudnn.deterministic = True
```
## Settings and Dataset
```
##########################
### SETTINGS
##########################
# Device
device = torch.device("cuda:2" if torch.cuda.is_available() else "cpu")
# Hyperparameters
random_seed = 1
learning_rate = 0.05
num_epochs = 10
batch_size = 128
# Architecture
num_classes = 10
##########################
### MNIST DATASET
##########################
# Note transforms.ToTensor() scales input images
# to 0-1 range
train_dataset = datasets.MNIST(root='data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = datasets.MNIST(root='data',
train=False,
transform=transforms.ToTensor())
train_loader = DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
# Checking the dataset
for images, labels in train_loader:
    print('Image batch dimensions:', images.shape)
    print('Image label dimensions:', labels.shape)
    break
```
## Model
```
##########################
### MODEL
##########################
class ConvNet(torch.nn.Module):

    def __init__(self, num_classes):
        super(ConvNet, self).__init__()

        # calculate same padding:
        # (w - k + 2*p)/s + 1 = o
        # => p = (s(o-1) - w + k)/2

        # 28x28x1 => 28x28x4
        self.conv_1 = torch.nn.Conv2d(in_channels=1,
                                      out_channels=4,
                                      kernel_size=(3, 3),
                                      stride=(1, 1),
                                      padding=1)   # (1(28-1) - 28 + 3) / 2 = 1
        # 28x28x4 => 14x14x4
        self.pool_1 = torch.nn.MaxPool2d(kernel_size=(2, 2),
                                         stride=(2, 2),
                                         padding=0)  # (2(14-1) - 28 + 2) / 2 = 0
        # 14x14x4 => 14x14x8
        self.conv_2 = torch.nn.Conv2d(in_channels=4,
                                      out_channels=8,
                                      kernel_size=(3, 3),
                                      stride=(1, 1),
                                      padding=1)   # (1(14-1) - 14 + 3) / 2 = 1
        # 14x14x8 => 7x7x8
        self.pool_2 = torch.nn.MaxPool2d(kernel_size=(2, 2),
                                         stride=(2, 2),
                                         padding=0)  # (2(7-1) - 14 + 2) / 2 = 0
        self.linear_1 = torch.nn.Linear(7*7*8, num_classes)

        ###############################################
        # Reinitialize weights using He initialization
        ###############################################
        for m in self.modules():
            if isinstance(m, torch.nn.Conv2d):
                nn.init.kaiming_normal_(m.weight.detach())
                m.bias.detach().zero_()
            elif isinstance(m, torch.nn.Linear):
                nn.init.kaiming_normal_(m.weight.detach())
                m.bias.detach().zero_()

    def forward(self, x):
        out = self.conv_1(x)
        out = F.relu(out)
        out = self.pool_1(out)
        out = self.conv_2(out)
        out = F.relu(out)
        out = self.pool_2(out)
        logits = self.linear_1(out.view(-1, 7*7*8))
        probas = F.softmax(logits, dim=1)
        return logits, probas
torch.manual_seed(random_seed)
model = ConvNet(num_classes=num_classes)
model = model.to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
```
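The same-padding arithmetic in the comments above can be verified with a tiny helper (our own, purely for checking the layer shapes):

```python
def same_pad(w, k, s, o):
    # solve (w - k + 2*p)/s + 1 = o for p  =>  p = (s*(o - 1) - w + k) / 2
    return (s * (o - 1) - w + k) // 2

print(same_pad(28, 3, 1, 28))  # 1, padding for conv_1 (28x28 -> 28x28)
print(same_pad(28, 2, 2, 14))  # 0, padding for pool_1 (28x28 -> 14x14)
```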
## Training
```
def compute_accuracy(model, data_loader):
    correct_pred, num_examples = 0, 0
    for features, targets in data_loader:
        features = features.to(device)
        targets = targets.to(device)
        logits, probas = model(features)
        _, predicted_labels = torch.max(probas, 1)
        num_examples += targets.size(0)
        correct_pred += (predicted_labels == targets).sum()
    return correct_pred.float()/num_examples * 100

start_time = time.time()
for epoch in range(num_epochs):
    model = model.train()
    for batch_idx, (features, targets) in enumerate(train_loader):
        features = features.to(device)
        targets = targets.to(device)

        ### FORWARD AND BACK PROP
        logits, probas = model(features)
        cost = F.cross_entropy(logits, targets)
        optimizer.zero_grad()
        cost.backward()

        ### UPDATE MODEL PARAMETERS
        optimizer.step()

        ### LOGGING
        if not batch_idx % 50:
            print('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
                  % (epoch+1, num_epochs, batch_idx,
                     len(train_loader), cost))

    model = model.eval()
    print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
        epoch+1, num_epochs,
        compute_accuracy(model, train_loader)))
    print('Time elapsed: %.2f min' % ((time.time() - start_time)/60))

print('Total Training Time: %.2f min' % ((time.time() - start_time)/60))
```
## Evaluation
```
print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
%watermark -iv
```
# Predict the damage to a building
Determining the degree of damage done to buildings after an earthquake can help identify safe and unsafe buildings, thus avoiding deaths and injuries resulting from aftershocks. Leveraging the power of machine learning is one viable option that can potentially prevent massive loss of life while simultaneously making rescue efforts easy and efficient.
In this challenge you are provided with the before and after details of nearly one million buildings affected by an earthquake. The damage to a building is categorized into five grades, each depicting the extent of damage done to the building.
Given building details, your task is to build a model that can predict the extent of damage that has been done to a building after an earthquake.
```
#import lib
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
#set the plotting style
plt.style.use("seaborn")
```
# Import Data
```
PATH = 'data'
PATH_TO_train_data = PATH + '/' + 'train.csv'
PATH_TO_test_data = PATH + '/' + 'test.csv'
PATH_TO_building_structure = PATH + '/' + 'Building_Structure.csv'
PATH_TO_building_ownership = PATH + '/' + 'Building_Ownership_Use.csv'
#load the data
train_data = pd.read_csv(PATH_TO_train_data)
ownership_data = pd.read_csv(PATH_TO_building_ownership)
structure_data = pd.read_csv(PATH_TO_building_structure)
test_data = pd.read_csv(PATH_TO_test_data)
#peek into the data
train_data.head()
ownership_data.head()
structure_data.head()
test_data.head()
#shape of the data
train_data.shape
ownership_data.shape
structure_data.shape
test_data.shape
#Converting the object target to int type
target = {'Grade 1': 1, 'Grade 2': 2, 'Grade 3': 3, 'Grade 4': 4, 'Grade 5': 5}
train_data['damage_grade'].replace(target, inplace=True)
```
# Merging Data
- building_id is common variable among the three data sets. So we will create a master data table by merging these datasets
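A toy example of the key-based merge we apply below (hypothetical column names, just to illustrate `pd.merge` on `building_id`):

```python
import pandas as pd

left = pd.DataFrame({'building_id': [1, 2], 'damage_grade': [3, 5]})
right = pd.DataFrame({'building_id': [1, 2], 'floors': [2, 1]})

merged = pd.merge(left, right, on='building_id')  # inner join on the shared key
print(merged.shape)  # (2, 3)
```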
```
#merging the train data with structure data
train_data = pd.merge(train_data, structure_data, on = "building_id")
train_data.head()
#merge the ownership data
train_merged = pd.merge(train_data, ownership_data, on = "building_id")
train_merged.head()
```
# Descriptive Analysis
- Final Base table analysis
```
train_merged.shape
train_merged.info()
#basic descriptive stats
train_merged.describe()
train_merged.head()
#analyse the area_assesed
train_merged.area_assesed.value_counts().plot(kind = "barh")
plt.title("Nature of Damage Assessment")
plt.xlabel("Number of Buildings")
plt.show()
#dropping the duplicate columns
train_merged.drop(["district_id_x", "district_id_y", "vdcmun_id_x", "vdcmun_id_y", "ward_id_x", "building_id"], axis = 1, inplace= True)
#distribution of district_id
train_merged.district_id.plot(kind = "hist")
plt.xlabel("District_ID")
plt.title("Distribution of District ID")
plt.show()
#lets check the distribution of target variable
train_merged.damage_grade.value_counts().plot(kind = "barh")
plt.title("Distribution of Target Values")
plt.xlabel("Count")
plt.show()
#distribution of area_assesed by target variable
sns.countplot(x = "area_assesed", data = train_merged, hue = "damage_grade")
plt.show()
#check the distribution of the municipality where the building is located
train_merged.vdcmun_id.plot(kind = "hist")
plt.xlabel("Municipality ID")
plt.title("Distribution of Municipality ID")
plt.show()
```
# Missing Values Check
```
# check the missing values
columns_has_missing = train_merged.isna().sum()
columns_has_missing = columns_has_missing[columns_has_missing > 0]
columns_has_missing
#filling the missing values
train_merged.fillna(0, inplace=True)
train_merged.isna().sum().sum()
```
# Handling Categorical Variables
```
#check for duplicated rows
train_merged.duplicated().sum()
#drop the duplicates, keeping the first occurrence of each row
train_merged = train_merged.drop_duplicates()
train_merged.duplicated().sum()
#segregating the variables based on datatypes
numeric_variable_names = [key for key in dict(train_merged.dtypes) if dict(train_merged.dtypes)[key] in ['float64', 'int64', 'float32', 'int32']]
categorical_variable_names = [key for key in dict(train_merged.dtypes) if dict(train_merged.dtypes)[key] in ["object"]]
print(numeric_variable_names)
print(categorical_variable_names)
#store the numerical variables data in a separate dataset
train_merged_num = train_merged[numeric_variable_names]
train_merged_num.head()
#store the categorical variables data in a separate dataset
train_merged_cat = train_merged[categorical_variable_names]
train_merged_cat.head()
#converting into dummy variables
train_merged_cat = pd.get_dummies(train_merged_cat, drop_first=True)
#Merging the both numerical and categorical data
train_cleaned = pd.concat([train_merged_num, train_merged_cat],axis=1)
train_cleaned.head()
train_cleaned.shape
```
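The dtype-based split above can also be written with pandas' `select_dtypes`, which avoids building the name lists by hand (a minimal sketch on a toy frame):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': ['x', 'y'], 'c': [0.1, 0.2]})
num = df.select_dtypes(include='number')  # int and float columns
cat = df.select_dtypes(include='object')  # string/object columns
print(list(num.columns), list(cat.columns))  # ['a', 'c'] ['b']
```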
## Separating the Target and the Predictors
```
#seperating the target and predictors
X = train_cleaned.drop(columns= ["damage_grade"])
X.head()
y = pd.DataFrame(train_cleaned.damage_grade)
y.head()
```
# SMOTE data
```
# Function for creating model pipelines
from sklearn.pipeline import make_pipeline
#function for crossvalidate score
from sklearn.model_selection import cross_validate
#to find the best
from sklearn.model_selection import GridSearchCV
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
#for fitting classification tree
from sklearn.tree import DecisionTreeClassifier
#to create a confusion matrix
from sklearn.metrics import confusion_matrix
#import whole class of metrics
from sklearn import metrics
#using SMOTE technique
smote = SMOTE(random_state=0)
os_data_X, os_data_y = smote.fit_resample(X, y.damage_grade)
pd.DataFrame(data = os_data_y, columns = ["damage_grade"]).damage_grade.value_counts().plot(kind="bar")
plt.title("Distribution of labels")
plt.show()
```
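Conceptually, SMOTE creates a synthetic minority sample by interpolating between an existing sample and one of its nearest minority-class neighbours. A minimal numpy sketch of that idea (illustrative only, not imblearn's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
minority = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])

def smote_point(X, i, j, rng):
    # synthetic point at a random position on the segment between samples i and j
    lam = rng.random()
    return X[i] + lam * (X[j] - X[i])

new = smote_point(minority, 0, 1, rng)  # lies between (0, 0) and (1, 1)
print(new)
```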
# Split Data
```
#split the data using stratified sampling
X_train_str, X_test_str, y_train_str,y_test_str = train_test_split(X,y,test_size = 0.3,stratify = y,random_state = 100)
#split the SMOTE data
X_train_so, X_test_so, y_train_so,y_test_so = train_test_split(os_data_X,os_data_y,test_size = 0.3,random_state = 101)
#check the size of the data
X_train_so.shape
X_test_so.shape
```
# Preprocessing on Test Data
```
#merging all the datasets
test_data = pd.merge(test_data, structure_data, on = "building_id")
test_data = pd.merge(test_data, ownership_data, on = "building_id")
#dropping the duplicate columns
test_data.drop(["district_id_x", "district_id_y", "vdcmun_id_x", "vdcmun_id_y", "ward_id_x", "building_id"], axis = 1, inplace= True)
#get the dummies for categorical data
test_data = pd.get_dummies(test_data, drop_first=True)
test_data.head()
#fill the missing values
test_data.fillna(0, inplace=True)
```
# Modeling
- Train on the training data
```
#make a pipeline for decision tree model
pipelines = {
    "clf": make_pipeline(DecisionTreeClassifier(max_depth=5, random_state=100))
}
scores = cross_validate(pipelines['clf'], X_train_so, y_train_so,return_train_score=True)
scores
scores['test_score'].mean()
```
Average accuracy of pipeline with Decision Tree Classifier is 66.37%
#### Cross-Validation and Hyperparameter Tuning
Cross-validation estimates model performance by repeatedly training on one part of the data and evaluating on the held-out part. Combined with a grid search, it finds the best combination of hyperparameters by training and evaluating the model for each combination.
- Declare hyperparameters to fine-tune the Decision Tree Classifier
**Decision tree learning is a greedy algorithm: left unconstrained, it keeps splitting until the leaves are pure, which overfits the training data. We therefore need stopping criteria, and the hyperparameters below are used to prune the decision tree.**
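To see the size of the search this implies: the number of candidate settings is the product of the per-parameter grid sizes, and with 5-fold cross-validation each candidate is fitted 5 times. A small counting sketch with a toy grid:

```python
from itertools import product

grid = {'max_depth': [3, 4, 5], 'min_samples_leaf': [1, 2]}
combos = list(product(*grid.values()))

print(len(combos))      # 6 candidate settings
print(len(combos) * 5)  # 30 model fits under 5-fold CV
```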
```
decisiontree_hyperparameters = {
    "decisiontreeclassifier__max_depth": np.arange(3, 12),
    "decisiontreeclassifier__max_features": np.arange(3, 10),
    "decisiontreeclassifier__min_samples_split": np.arange(2, 16),
    "decisiontreeclassifier__min_samples_leaf": np.arange(1, 3)
}
pipelines['clf']
```
## Decision Tree classifier with gini index
#### Fit and tune models with cross-validation
Now that we have our <code style="color:steelblue">pipelines</code> and <code style="color:steelblue">hyperparameters</code> dictionaries declared, we're ready to tune our models with cross-validation.
- We are doing 5 fold cross validation
```
def feature_importance(model, X_train):
    importances = model.feature_importances_
    indices = np.argsort(importances)[::-1]
    features = X_train.columns
    important_features = []
    important_features_scores = []
    for index in indices:
        important_features.append(features[index])
        important_features_scores.append(importances[index])
    feature_importance_df = pd.Series(data=important_features_scores, index=important_features)
    return feature_importance_df
#Create a cross validation object from decision tree classifier and it's hyperparameters
clf_model = GridSearchCV(pipelines['clf'], decisiontree_hyperparameters, cv=5, n_jobs= 3, verbose=1)
#fit the model with train data
clf_model.fit(X_train_so, y_train_so)
#Display the best parameters for Decision Tree Model
clf_model.best_params_
#Display the best score for the fitted model
clf_model.best_score_
#In Pipeline we can use the string names to get the decisiontreeclassifer
clf_model.best_estimator_.named_steps['decisiontreeclassifier']
```
# Save the Model
```
import pickle
```
Let's save the winning <code style="color:steelblue">Pipeline</code> object into a pickle file.
```
with open('final_model.pkl', 'wb') as f:
    pickle.dump(clf_model.best_estimator_.named_steps['decisiontreeclassifier'], f)
```
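A quick round-trip sketch showing that what `pickle` writes can be read back unchanged (using a stand-in dict here; the real cell above saves the fitted classifier the same way):

```python
import pickle

model_stub = {'max_depth': 5, 'classes': [1, 2, 3, 4, 5]}  # stand-in for the fitted estimator
blob = pickle.dumps(model_stub)
restored = pickle.loads(blob)
print(restored == model_stub)  # True
```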
# **Exploratory Data Analysis**: The Titanic Dataset (Extended version)
Source: [https://github.com/d-insight/code-bank.git](https://github.com/d-insight/code-bank.git)
License: [MIT License](https://opensource.org/licenses/MIT). See open source [license](LICENSE) in the Code Bank repository.
-------------
## Overview
In this demo, we will explore the likelihood of surviving the sinking of the RMS Titanic. The Titanic hit an iceberg in 1912 and quickly sank, killing 1,502 of the 2,224 passengers and crew on board. One of the most important reasons so many people died was that there were not enough lifeboats for everyone. Accordingly, it has also been frequently noted that the people most **_likely_** to survive the disaster were women, children, and members of the upper class. Let's see if that is true.
The Titanic case is a classic problem in data science, and it is still an ongoing [Kaggle competition](https://www.kaggle.com/c/titanic). There are many other examples of the Titanic dataset in introductory statistics and Data Science courses, so we also encourage you to look around and see how others have approached the problem.
<img src="https://upload.wikimedia.org/wikipedia/commons/9/95/Titanic_sinking%2C_painting_by_Willy_St%C3%B6wer.jpg" width="500" height="500" align="center"/>
Image source: https://upload.wikimedia.org/wikipedia/commons/9/95/Titanic_sinking%2C_painting_by_Willy_St%C3%B6wer.jpg
### Introduction
We will conduct our EDA and visualization analysis in three parts:
1. Analyze and visualize base-rates
2. Calculate new predictors that can help the analysis (feature engineering)
3. Visualize advanced data characteristics
To prepare, let's first review the data tools, visualization tools, and actual data for this problem.
### Data Structures for Python
There are three basic options for loading and working with data in Python:
* Pure `Python 3.x`
In this approach, you load data directly into "pure" Python data objects, such as lists, sets, and dictionaries (or nested hierarchies of such objects, such as lists-within-lists, lists-within-dicts, dicts-within-dicts, and so on). Although operations on "pure Python" objects can be slow, this approach is extremely flexible.
* `NumPy`
The basic data structures for holding arrays, vectors, and matrices of data are provided by a core package called `NumPy`. NumPy also has a set of linear algebra and numerical functions, but in general such functions are now provided by another package called `scipy` (for scientific computing), and numerical computation is usually done there. `NumPy` has been optimized with primitive routines written in `C` (or even `Fortran`) and so is orders of magnitude faster than doing calculations with pure Python. Nevertheless, data access, subscripting, and slicing of elements in `NumPy` conform to the same syntax as pure Python.
* `pandas`
Most applied Data Science projects (that fit into memory/RAM), now use an "Excel-like" package called `pandas`. Pandas stores data in objects called **dataframes**. Dataframes will become the central type of data object for your work in Data Science (much as Excel Spreadsheets often were for Business Analysts). Dataframes provide many different properties and methods that help you to work with your data more effectively. We will use `pandas` for most of the examples and problems in the class.
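To make the three options concrete, here is a minimal sketch (toy values, not from the Titanic file) holding the same three records in each of the three data structures described above:

```python
import numpy as np
import pandas as pd

# Pure Python: a list of dicts -- maximally flexible, slow for bulk math
records = [{'age': 22, 'fare': 7.25},
           {'age': 38, 'fare': 71.28},
           {'age': 26, 'fare': 7.92}]
mean_age_py = sum(r['age'] for r in records) / len(records)

# NumPy: a homogeneous 2D array -- fast vectorized math, positional indexing
arr = np.array([[22, 7.25], [38, 71.28], [26, 7.92]])
mean_age_np = arr[:, 0].mean()

# pandas: a labeled dataframe -- named columns plus many built-in methods
df = pd.DataFrame(records)
mean_age_pd = df['age'].mean()

print(mean_age_py, mean_age_np, mean_age_pd)  # all three agree
```

The same computation in each case; the difference is in speed, labeling, and the methods each object offers.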
### Visualization for Python
There are many ways to visualize data and results in Python, which is sometimes a good thing and sometimes a bad thing. Eventually you will want to learn multiple methods, as data scientists often use several different libraries. The following are the most common libraries, with links to their documentation:
* `pandas` https://pandas.pydata.org/pandas-docs/stable/user_guide/visualization.html
If you are in a hurry, it also is possible to generate many simple plots directly from the `pandas` library and `dataframe` object. This is a good option when you are moving fast, all you need to do is see a simple histogram or trend line, and you already have your data in a pandas dataframe.
* `seaborn` https://seaborn.pydata.org/
Seaborn is a simplified and better looking interface that sits on top of the standard `matplotlib` library. Seaborn is often used because it looks great, but also gives you all the ability to go into `matplotlib` to customize graphs for a particular need.
* `plotly` and `plotly_express` https://www.plotly.express/
The commercial package `plotly` is a comprehensive toolkit for building interactive, D3 and WebGL charts. Interactive graphs are particularly good for online (web-based) dashboards. As you can see in the plot of Python visualization options below, `plotly` is quickly rising in popularity. To use plotly, however, you need to sign up for a [plotly account](https://plot.ly/python/) and you need an active internet connection. We won't need all of the features in plotly, however, and so we will develop examples using just `plotly_express`. Plotly express is a free, local-to-your-machine, and easier-to-use, API to the plotly service. [See here for documentation](https://www.plotly.express/plotly_express/) for the plotly express API.
* `matplotlib` and `pyplot` https://matplotlib.org/
Matplotlib is the core package for graphing in Python, and many other packages build on top of it (i.e., pyplot, pandas, and seaborn). The `matplotlib` object model can be somewhat confusing, which often means writing many lines of code and hours of debugging. To help, matplotlib also comes with an interface library called `pyplot` ([documentation here](https://matplotlib.org/api/pyplot_api.html)) that mimics the MatLab approach to graphing (helpful for many engineers). In general, however, `pyplot` has now been supplanted by the other options above. Although it is faster to get started with plotting by using one of the other options above, eventually you will find that you often need to return to matplotlib in order to "tweak" a layout or to work with more complicated graphs.
<img src="viz-options.png" width="600" height="600" align="center"/>
Image source: EPFL TIS Lab
### The Titanic Dataset
The data is taken from the [Kaggle Titanic Competition](https://www.kaggle.com/c/titanic). It is split between a "training" dataset (where you know the actual outcome) and a "testing" dataset (where you do not know the outcome). If we were actually competing in the Kaggle competition, then we would be trying to predict the unknown testing cases and submitting our predictions to Kaggle to see if we could win. But in this case, the objective is simply to get you started with Python and to familiarize you with the basic data structures and graphing libraries of the Data Science stack. We therefore will ignore the testing dataset and work only with the training data.
All of the data that you will need for this demo is in the `titanic.csv` file, located within the same directory as this notebook.
We don't know very much about the 891 passengers in the training dataset. The following features are available.
Feature name | Description |
-------- | -------------- |
Survived | Target variable, i.e. survival, where 0 = No, 1 = Yes |
PassengerId | Id of the passenger |
Pclass | Ticket class, where 1 = 1st, 2 = 2nd, 3 = 3rd |
Name | Passenger name |
Sex | Sex |
Age | Age in years |
SibSp | Num of siblings or spouses aboard the Titanic |
Parch | Num of parents or children aboard the Titanic |
Ticket | Ticket number, i.e. record ID |
Fare | Passenger fare |
Cabin | Cabin number |
Embarked | Port of Embarkation, where C = Cherbourg, Q = Queenstown, S = Southampton |
**Special Notes**
* **Pclass**: A proxy for socio-economic status (SES): 1st = Upper class; 2nd = Middle class; 3rd = Lower class.
* **Age**: Age is fractional if less than 1. If the age is estimated, it is in the form of xx.5
* **SibSp**: The dataset defines family relations as: Sibling = brother, sister, stepbrother, stepsister; Spouse = husband, wife (mistresses and fiancés were ignored)
* **Parch**: The dataset defines family relations as: Parent = mother, father; Child = daughter, son, stepdaughter, stepson; Some children travelled only with a nanny, therefore parch=0 for them.
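As a small illustration of the Age note above (a hedged sketch on a toy dataframe with invented values, not the real titanic.csv), estimated ages can be flagged by checking for the xx.5 pattern among passengers aged 1 or older:

```python
import pandas as pd

# Toy dataframe with the same column names as titanic.csv (values invented)
toy = pd.DataFrame({'Age': [22.0, 28.5, 0.42, 45.5],
                    'Parch': [0, 1, 2, 0]})

# Per the notes: ages under 1 are fractional by design, so only flag
# xx.5 values for passengers aged 1 or older as "estimated"
estimated = (toy['Age'] >= 1) & (toy['Age'] % 1 == 0.5)
print(toy[estimated])
```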
--------
## **Part 0**: Setup
```
# Put all import statements at the top of your notebook -- import some basic and important ones here
# Standard imports
import numpy as np
import pandas as pd
import pandas_profiling
import os
import sys
# Visualization packages
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
from plotly.offline import init_notebook_mode
init_notebook_mode(connected=True)
# Special code to ignore un-important warnings
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
```
## **Part 1**: Analyze Base Rates and EDA
The most basic "model" to run for any problem is the "average" (or "base rate") outcome. Before we work on more complicated models, it is a good idea to understand how a very simple heuristic performs; then you can determine how much a more complicated model really improves the predictions. Your first model may be too simple and therefore highly "biased" (i.e., systematically low or high for different groupings or levels within the data), but such a simple model should also not vary much if/when we pull a new sample from the same underlying population/data-generating function.
We will therefore conduct some **_exploratory data analysis_** ("**EDA**") and **_visualization_** in this demo to understand the distribution of the outcome for this case.
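For instance, the accuracy of the base-rate heuristic (always predicting the majority outcome) follows directly from the base rate itself. A quick sketch, using the well-known counts for this training set (342 survivors out of 891 passengers):

```python
# Base-rate benchmark: a model that always predicts the majority class
# achieves accuracy max(p, 1 - p), where p is the base rate of the outcome.
survivals, deaths = 342, 549  # counts for the Titanic training data
p = survivals / (survivals + deaths)
baseline_accuracy = max(p, 1 - p)
print(round(baseline_accuracy, 3))  # ~0.616: any real model should beat this
```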
### Open and Inspect the Data
```
# Open dataset with Pandas
data = pd.read_csv('titanic.csv')
# Count rows and columns
data.shape
```
## **Part 1a**: EDA - the automated approach
Instead of performing all of the EDA steps manually, you can also run a "profile" on the dataset first, and then drill down into specific cases of interest.
```
# Use the automated pandas profiling utility to examine the dataset
data.profile_report()
```
## **Part 1b**: EDA - the manual approach
It is generally a good idea to go beyond the automated approach to EDA. Here are some useful steps for understanding and plotting the data in more detail.
```
# Inspect the data (variable names, count non-missing values, review variables types)
data.info()
# Look at the "head" of the dataset
data.head()
# Look at the "tail" of the dataset
data.tail()
# Calculate summary statistics (mean, min, max, std, etc.) for all variables and transpose the output
data.describe().T
# Count missing data per feature
data.isnull().sum()
# Plot missing data (Hint: use a seaborn heatmap to see the distribution of isnull() for the dataframe)
sns.heatmap(data.isnull(), cbar=False, cmap="YlGnBu_r")
```
### Aside: selecting columns in pandas
As we will select columns from a pandas dataframe many times, it is important to note that there are generally two equivalent ways of doing this. We will use the first approach, passing the column name as a string in square brackets, since it works for any column name and clearly distinguishes column selection from attribute and method access.
```
# First approach (recommended): pass the column name as a string in square brackets
data['Survived'].describe()
# Second approach: access the column as an attribute of the dataframe
data.Survived.describe()
```
### Analyze Survival (the basic outcome)
```
# Count number of Survivals and Deaths
survivals = sum(data['Survived'])
deaths = len(data[data['Survived'] == False])
assert survivals + deaths == len(data) # not necessary, but this should be true
assert survivals + deaths == data.shape[0] # not necessary, but this also should be true
print('Survivals: ', survivals)
print('Deaths: ', deaths)
print('The base-rate likelihood of survival: ', survivals/(survivals+deaths))
```
### Plotting the target feature "Survived": 4 approaches
```
# Plot a histogram of "Survived" using Pandas
# The fastest way - this just uses the pandas dataframe
data['Survived'].hist()
# Plot a histogram of "Survived" using Seaborn
# The fastest way - pass the column directly to seaborn's countplot
sns.countplot(data["Survived"])
# Note that another, "safe" way to code is to pass parameters by name
# sns.countplot(x='Survived', data=data)
```
### Analyze Base-Rate Outcomes, by Condition
Again, let's start by calculating the frequency (or "base rate") of the outcome variable for different conditions of interest. You can do this most easily with the `pd.crosstab()` function from the `pandas` package.
```
# Use the pandas crosstab() function to count outcomes by condition
pd.crosstab(data['Pclass'], data['Survived'], margins=False)
# Now show the totals (so you can calculate a conditional marginal base rate).
pd.crosstab(data['Pclass'], data['Survived'], margins=True)
# Use the style() function (chain it onto the end of prior code) to also overlay a heatmap
# i.e., you can still do this in one line of code with .style.background_gradient()
pd.crosstab(data['Pclass'], data['Survived'], margins=True).style.background_gradient()
# If you can't read the results, select your own color scheme
pd.crosstab(data['Pclass'], data['Survived'], margins=True).style.background_gradient(cmap='autumn_r')
# Now do a "three-way" crosstab for Class, Sex, Survival
pd.crosstab([data['Sex'], data['Survived']], data['Pclass'], margins=True)
# Use the pandas dataframe to plot a histogram of Age
data['Age'].hist()
# Increase the number of histogram bins
data['Age'].hist(bins = 40)
# Use seaborn to plot the kernel density (a kdeplot) for Age
facet = sns.FacetGrid(data, aspect = 4)
facet.map(sns.kdeplot,'Age', shade = True)
facet.set(xlim = (0, data['Age'].max()))
# Use Seaborn to plot the kernel density of Fare
facet = sns.FacetGrid(data, aspect=4)
facet.map(sns.kdeplot,'Fare', shade=True)
facet.set(xlim = (0, data['Fare'].max()))
# Redo the above in just 1 line of code, but show both a frequency histogram of counts,
# and a kernel density of the probability density function
# There usually is a simple way to do it with seaborn...
facet = sns.distplot(data['Fare'])
# Use pandas to plot a histogram of Survived, separated by Class
# Hint: figsize is defined in inches
lived = data[data['Survived'] == 1]['Pclass'].value_counts()
died = data[data['Survived'] == 0]['Pclass'].value_counts()
df = pd.DataFrame([lived, died])
df.index = ['Lived', 'Died']
df.plot(kind = 'bar', stacked=True, figsize=(12, 5), title='Survival by Social Economic Class (1st, 2nd, 3rd)')
# Use seaborn barplots to plot Survival as a Function of Class
sns.barplot(x='Pclass', y='Survived', data=data)
plt.ylabel("Survival Rate")
plt.title("Survival as function of Pclass")
plt.show() # this removes the annoying line that references the final object, e.g. "<matplotlib.axes._subplots.AxesSubplot at 0x1a1d5f4e48>"
# Use pandas to plot a histogram of Survived, separated by Sex
lived = data[data['Survived'] == 1]['Sex'].value_counts()
died = data[data['Survived'] == 0]['Sex'].value_counts()
df = pd.DataFrame([lived, died])
df.index = ['Lived', 'Died']
df.plot(kind = 'bar', stacked = True, figsize = (12, 5), title = 'Survival by Gender')
plt.show()
# Use Pandas (with matplotlib customization to make it look good) to draw a pie chart of Survival by Sex
fig, (ax1, ax2) = plt.subplots(1,2,figsize=(16,7))
data['Survived'][data['Sex'] == 'male'].value_counts().plot.pie(ax = ax1)
data['Survived'][data['Sex'] == 'female'].value_counts().plot.pie(ax = ax2, colors = ['C1', 'C0'])
# Now use some matplotlib customization to make your previous plot look cool
f, ax = plt.subplots(1, 2, figsize = (16, 7))
data['Survived'][data['Sex'] == 'male'].value_counts().plot.pie(explode=[0,0.2], autopct='%1.1f%%', ax = ax[0], shadow = True)
data['Survived'][data['Sex'] == 'female'].value_counts().plot.pie(explode=[0,0.2], autopct='%1.1f%%', ax = ax[1], shadow = True, colors = ['C1', 'C0'])
ax[0].set_title('Survived (male)')
ax[1].set_title('Survived (female)')
plt.show()
# Use a seaborn facet grid to jointly examine Sex, Class, and Survival
g = sns.FacetGrid(data, row = 'Sex', col = 'Pclass', hue = 'Survived', margin_titles = True, height = 3, aspect = 1.1)
g.map(sns.distplot, 'Age', kde = False, bins = np.arange(0, 80, 5), hist_kws = dict(alpha=0.6))
g.add_legend()
plt.show()
# Examine the disribution of Fare as a function of Pclass, Sex and Survived
g = sns.FacetGrid(data, row = 'Sex', col = 'Pclass', hue = 'Survived', margin_titles = True, height = 3, aspect = 1.1)
g.map(sns.distplot, 'Fare', kde = False, bins = np.arange(0, 550, 50), hist_kws = dict(alpha = 0.6))
g.add_legend()
plt.show()
```
```
# Use the plt.subplots() function from pyplot to capture the figure and subplot objects
# so you can work with both seaborn and matplot lib to make a fully customized distribution plot
LABEL_SURVIVED = 'Survived'
LABEL_DIED = 'Did Not Survive'
fig, axes = plt.subplots(nrows = 1, ncols = 2, figsize = (12, 6))
women = data[data['Sex'] == 'female']
men = data[data['Sex'] == 'male']
ax = sns.distplot(women[women['Survived'] == 1]['Age'].dropna(), bins = 18, label = LABEL_SURVIVED, ax = axes[0], kde = False)
ax = sns.distplot(women[women['Survived'] == 0]['Age'].dropna(), bins = 40, label = LABEL_DIED, ax = axes[0], kde = False)
ax.legend()
ax.set_title('Female')
ax = sns.distplot(men[men['Survived'] == 1]['Age'].dropna(), bins = 18, label = LABEL_SURVIVED, ax = axes[1], kde = False)
ax = sns.distplot(men[men['Survived'] == 0]['Age'].dropna(), bins = 40, label = LABEL_DIED, ax = axes[1], kde = False)
ax.legend()
_ = ax.set_title('Male')
# ADVANCED: combine histograms for many variables related to survival into one composite figure
R = 2
C = 3
fields = ['Survived', 'Sex', 'Pclass', 'SibSp', 'Parch', 'Embarked']
fig, axs = plt.subplots(R, C, figsize = (12, 8))
for row in range(0, R):
    for col in range(0, C):
        i = row * C + col
        ax = axs[row][col]
        sns.countplot(data[fields[i]], hue = data["Survived"], ax = ax)
        ax.set_title(fields[i], fontsize = 14)
        ax.legend(title = "survived", loc = 'upper center')
plt.tight_layout()
```
## **Part 2**: Feature Engineering
Using domain knowledge, we can create new features that might improve performance of our model at a later stage.
```
# Extract the leading "title" from the passenger name, and summarize (count) the different titles
data['Title'] = data['Name'].str.extract(' ([A-Za-z]+)\.', expand=False)
data['Title'].value_counts()
```
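A common follow-up step (not shown in the cell above, so the grouping below is a hedged sketch on toy data) is to collapse the many rare titles into a single bucket so the new feature has only a few levels:

```python
import pandas as pd

# Toy title series; in the real analysis this comes from the extraction above
titles = pd.Series(['Mr', 'Mrs', 'Miss', 'Master', 'Dr', 'Rev', 'Mr'])
common = ['Mr', 'Mrs', 'Miss', 'Master']

# Keep the common titles, replace everything else with 'Rare'
grouped = titles.where(titles.isin(common), other='Rare')
print(grouped.value_counts())
```

`Series.where()` keeps values where the condition holds and substitutes `other` elsewhere, which makes this a one-liner.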
## **Part 3**: Explore Swarm and Violin Plots
Swarm and violin plots show the same data as the count and density plots you made earlier, but they look striking and can draw attention to details that you do not see in other plots. If you have extra time, try making a few of these below.
```
# Define some constants (such as PALETTE and FIGSIZE) so that all of your figures look consistent
PALETTE = ["lightgreen" , "lightblue"] # you can set a custom palette with a simple list of named color values
FIGSIZE = (13, 7)
# Use a seaborn "swarmplot" to examine survival by age and class.
fig, ax = plt.subplots(figsize = FIGSIZE)
sns.swarmplot(x = 'Pclass', y = 'Age', hue = 'Survived', dodge = True, data = data, palette = PALETTE, size = 7, ax = ax)
plt.title('Survival Events by Age and Class ')
plt.show()
# Use a seaborn "violinplot" to examine survival by age and class.
fig, ax = plt.subplots(figsize = FIGSIZE)
sns.violinplot(x = "Pclass", y = "Age", hue = 'Survived', data=data, split=True, bw = 0.05 , palette = PALETTE, ax = ax)
plt.title('Survival Distributions by Age and Class ')
plt.show()
# Use the catplot function (in just one line of code!) to show the comparable distributions for Class, Age, Sex and Survived
g = sns.catplot(x = "Pclass", y = "Age", hue = "Survived", col = "Sex", data = data, kind = "violin", split = True, bw = 0.05, palette = PALETTE, height = 7, aspect = 0.9, s = 7)
```
---------------
## Further Reading
- Recap of basic Python visualization methods with matplotlib: https://machinelearningmastery.com/data-visualization-methods-in-python/
- Guide to Bokeh, an increasingly popular Python interactive visualization package: https://realpython.com/python-data-visualization-bokeh/
| github_jupyter |
## _*HDF5 files and HDF5 driver*_
Qiskit Chemistry supports a number of different chemistry drivers, i.e. chemistry programs and software libraries, which are used to compute the integrals that are then used to build the second-quantized Hamiltonian in the FermionicOperator.
Drivers, built using the programs and libraries, include those for Gaussian 16, PyQuante, PySCF and PSI4. The main Qiskit documentation has more information on [drivers](https://qiskit.org/documentation/aqua/chemistry/qiskit_chemistry_drivers.html).
When a driver is run, Qiskit Chemistry outputs the result in a common format for later processing. This output, the `QMolecule`, includes electron integrals, numbers of electrons, molecular orbital values and so on. While drivers can output different values, even for the same problem, due to computational differences, the result is still in this common format.
This QMolecule then is the output of the drivers and the input to the rest of the chemistry stack. The QMolecule can now be saved as a file in [HDF5](https://www.hdfgroup.org/solutions/hdf5/) format. We can also load such a saved HDF5 file back into a QMolecule to re-create the original. The latter capability is built into a driver we call the HDF5 driver. This driver reads the file and presents a QMolecule to the chemistry stack just as the other drivers do.
Let's take a look at the HDF5 driver and how to create HDF5 files.
### HDF5 driver
This tutorials folder has some HDF5 files that were saved earlier. So let's first use the HDF5 driver to load one up and output a QMolecule.
The HDF5 file name tells us it was an H2 molecule, at interatomic distance 0.735, with an sto-3g basis set, that came from the original driver. _Note: this naming convention is just what we use for the HDF5 samples here; no naming convention is enforced._
We'll print some fields from the QMolecule, that are set from the file, to show a small part of its content, which indeed matches what we would expect.
```
from qiskit.chemistry.drivers import HDF5Driver
driver = HDF5Driver('./h2_0.735_sto-3g.hdf5')
molecule = driver.run()
print('Number of orbitals: {}'.format(molecule.num_orbitals))
print('Number of alpha electrons: {}'.format(molecule.num_alpha))
print('Number of beta electrons: {}'.format(molecule.num_beta))
```
To show that this molecule can now be used as input to the rest of the chemistry stack, we will compute its ground state energy (the electronic part). As the focus of this tutorial is the HDF5 file and driver, I am not going to explain the code below; other tutorials cover this aspect. Also, this uses a classical algorithm from Aqua just to keep things simpler. Again, other tutorials here cover this ground state energy problem, showing VQE or IQPE to solve it.
```
from qiskit.chemistry.core import Hamiltonian, TransformationType, QubitMappingType
from qiskit.aqua.algorithms.classical import ExactEigensolver
core = Hamiltonian(transformation=TransformationType.FULL, qubit_mapping=QubitMappingType.PARITY,
                   two_qubit_reduction=True)
qubit_op, aux_ops = core.run(molecule)
ee = ExactEigensolver(qubit_op, aux_operators=aux_ops)
result = ee.run()
print(result['energy'])
```
Another field in the QMolecule is the nuclear repulsion energy. We can combine this with the result above to compute the total ground state energy.
```
print(result['energy'] + molecule.nuclear_repulsion_energy)
```
The chemistry stack can produce a formatted result from the algorithm's result. I show it here for comparison to the output above.
```
lines, full_result = core.process_algorithm_result(result)
print(*lines, sep='\n')
```
#### HDF5 file
Let's look at the creation of the HDF5 file. There can be many reasons you might want to do this, for example to ensure you are always testing from exactly the same content.
Let's first create a QMolecule instance using the PySCF driver. The same can be done with other drivers too.
```
from qiskit.chemistry.drivers import PySCFDriver, UnitsType
driver = PySCFDriver(atom='H .0 .0 .0; H .0 .0 0.735', unit=UnitsType.ANGSTROM,
                     charge=0, spin=0, basis='sto3g')
molecule = driver.run()
print('Number of orbitals: {}'.format(molecule.num_orbitals))
print('Number of alpha electrons: {}'.format(molecule.num_alpha))
print('Number of beta electrons: {}'.format(molecule.num_beta))
```
Here we save the molecule, in the HDF5 format, by passing a file name to the save() method. We will use a temporary file here (and delete it once we are done), but it is good practice to name the files after what they represent, such as the example HDF5 file used above, if you intend to save them for future use.
```
import os, tempfile
fd, hdf5_file = tempfile.mkstemp(suffix='.hdf5')
os.close(fd)
molecule.save(hdf5_file)
print('{} : {} bytes'.format(hdf5_file, os.path.getsize(hdf5_file)))
```
With the HDF5 file we can now re-create the original QMolecule by passing the file to the HDF5 driver just as we did in the above section.
```
driver = HDF5Driver(hdf5_file)
molecule1 = driver.run()
print('Number of orbitals: {}'.format(molecule1.num_orbitals))
print('Number of alpha electrons: {}'.format(molecule1.num_alpha))
print('Number of beta electrons: {}'.format(molecule1.num_beta))
os.remove(hdf5_file)
```
##### Saving sets of HDF5 files
The above code can be used in a loop, updating values each time, to create a set of HDF5 files. For example, you might want a set that spans a range of inter-atomic distances for a dissociation curve, or the same molecule with different basis sets. Many of the tutorials here loop over values to plot dissociation curves and the like, so I am not going to show more code; it is simple to do given the basics above.
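As a hedged sketch of that loop's bookkeeping (the distances and file names are illustrative), one might generate a geometry string and output name per distance, then feed each pair to PySCFDriver and save() exactly as in the cells above:

```python
import numpy as np

# Build (geometry, file name) pairs for an H2 dissociation curve
jobs = []
for d in np.arange(0.5, 1.51, 0.25):
    atom = 'H .0 .0 .0; H .0 .0 {:.2f}'.format(d)
    hdf5_name = 'h2_{:.2f}_sto-3g.hdf5'.format(d)
    jobs.append((atom, hdf5_name))

for atom, hdf5_name in jobs:
    print(atom, '->', hdf5_name)
    # As in the earlier cells (uncomment to actually run the driver):
    # driver = PySCFDriver(atom=atom, unit=UnitsType.ANGSTROM,
    #                      charge=0, spin=0, basis='sto3g')
    # driver.run().save(hdf5_name)
```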
| github_jupyter |
## Requirement
pip install facenet-pytorch
```
from facenet_pytorch import MTCNN
from PIL import Image
import torch
from imutils.video import FileVideoStream
import cv2
import time
import glob
from tqdm.notebook import tqdm
device = 'cuda' if torch.cuda.is_available() else 'cpu'
```
## Face Detection - FastMTCNN
```
class FastMTCNN(object):
    """Fast MTCNN implementation."""

    def __init__(self, stride, resize=1, *args, **kwargs):
        """Constructor for FastMTCNN class.

        Arguments:
            stride (int): The detection stride. Faces will be detected every `stride` frames
                and remembered for `stride-1` frames.

        Keyword arguments:
            resize (float): Fractional frame scaling. [default: {1}]
            *args: Arguments to pass to the MTCNN constructor. See help(MTCNN).
            **kwargs: Keyword arguments to pass to the MTCNN constructor. See help(MTCNN).
        """
        self.stride = stride
        self.resize = resize
        self.mtcnn = MTCNN(*args, **kwargs)

    def __call__(self, frames):
        """Detect faces in frames using strided MTCNN."""
        if self.resize != 1:
            frames = [
                cv2.resize(f, (int(f.shape[1] * self.resize), int(f.shape[0] * self.resize)))
                for f in frames
            ]
        boxes, probs = self.mtcnn.detect(frames[::self.stride])
        faces = []
        for i, frame in enumerate(frames):
            box_ind = int(i / self.stride)
            if boxes[box_ind] is None:
                continue
            for box in boxes[box_ind]:
                box = [int(b) for b in box]
                image_rgb = frame[box[1]:box[3], box[0]:box[2]]
                if (len(image_rgb) > 0) and (image_rgb.shape[0] > 10) and (image_rgb.shape[1] > 10):
                    faces.append(image_rgb)
                    ts = time.time()
                    img_path = '/home/umit/xDataset/Sentinel-img/train-real/train-real-%f.jpeg' % ts
                    # image_rgb = cv2.resize(image_rgb, (256, 256))
                    image_bgr = cv2.cvtColor(image_rgb, cv2.COLOR_RGB2BGR)
                    cv2.imwrite(img_path, image_bgr)
        return faces
# help(MTCNN)
fast_mtcnn = FastMTCNN(
stride=1,
resize=1,
margin=50,
min_face_size=100, #default =20
thresholds=[0.6, 0.7, 0.7],
factor=0.7, # default = 0.709
post_process=True,
select_largest=True,
keep_all=True,
device=device
)
```
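The stride bookkeeping in `__call__` above (detect on `frames[::stride]`, then have every frame `i` reuse the boxes from detection batch `i // stride`) can be sketched on stand-in data, with no video or model needed:

```python
# Stand-in demo of the FastMTCNN stride logic (toy "frames", no detector)
stride = 3
frames = list(range(10))        # pretend these are 10 video frames
detected_on = frames[::stride]  # only these frames go to the detector

for i in frames:
    box_ind = i // stride       # every frame reuses its batch's boxes
    assert detected_on[box_ind] == (i // stride) * stride

print(detected_on)  # frames 0, 3, 6 and 9 are the ones actually detected on
```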
## VIDEO
```
filenames_video = glob.glob('../*.mp4')
len(filenames_video)
jump = 30  # sample every 30th frame (set jump=1 to process every frame)

def run_video_detection(fast_mtcnn, filenames_video):
    frames = []
    frames_processed = 0
    faces_detected = 0
    batch_size = 60
    for filename in tqdm(filenames_video):
        v_cap = FileVideoStream(filename).start()
        v_len = int(v_cap.stream.get(cv2.CAP_PROP_FRAME_COUNT))
        print("vlen = " + str(v_len))
        for j in range(0, v_len):
            frame = v_cap.read()
            if j % jump == 0 or j == v_len - 1:
                # frame = cv2.flip(cv2.transpose(frame), flipCode=1)
                frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                frames.append(frame)
            if len(frames) >= batch_size or j == v_len - 1:
                faces = fast_mtcnn(frames)
                frames_processed += len(frames)
                faces_detected += len(faces)
                frames = []
                faces = []
        v_cap.stop()
    print("frames_processed = " + str(frames_processed))
    print("faces_detected = " + str(faces_detected))

run_video_detection(fast_mtcnn, filenames_video)
```
## IMAGE
```
image = True
if image == True:
    filenames_image = glob.glob('../*.jpg')

def run_image_detection(fast_mtcnn, filenames_image):
    images = []
    images_processed = 0
    faces_detected = 0
    for filename in tqdm(filenames_image):
        image = cv2.imread(filename)
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        images.append(image)
        face = fast_mtcnn(images)
        images_processed += len(images)
        faces_detected += len(face)
        images = []
    print("images_processed = " + str(images_processed))
    print("faces_detected = " + str(faces_detected))

run_image_detection(fast_mtcnn, filenames_image)
```
| github_jupyter |
# Chroma Patch Matching via 2D Fourier Transform Magnitude
Beat-synchronous chroma matrices provide a rich representation of the tonal/harmonic content of a musical excerpt. It would be interesting to find all near-neighbors of a given patch of beat-synchronous chroma. But because a chroma patch of, say, 32 beats, can start at any beat offset (not to mention possible transposition differences) a direct search could quickly become very computationally expensive.
To reduce the computational expense, representing each patch by its 2D Fourier Transform Magnitude allows patches to match even despite rotations on the chroma axis (which appear only as differences in phase of the 2D Fourier Transform, without affecting its magnitude) and, unlike direct comparisons, is tolerant of small shifts in the time base, since provided most of the "structure" stays within the analyzed patch, the exact temporal position of that structure is again only encoded in the phase, and is not reflected in the magnitude. Discarding all the phase actually matches many distortions beyond simple circular translations, but in practice these other transformations (corresponding to shifting distinct Fourier components by different amounts) may not give plausible beat-chroma patches, so don't lead to many false matches.
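The shift-invariance claim is easy to verify numerically; the sketch below uses a random stand-in patch rather than real beat-chroma data:

```python
import numpy as np

# Fake 12-chroma x 32-beat patch; any circular shift (chroma rotation or
# beat offset) leaves the 2D FFT magnitude unchanged -- only the phase moves.
rng = np.random.RandomState(0)
patch = rng.rand(12, 32)
shifted = np.roll(np.roll(patch, 3, axis=0), 5, axis=1)

mag = np.abs(np.fft.fft2(patch))
mag_shifted = np.abs(np.fft.fft2(shifted))
print(np.allclose(mag, mag_shifted))  # True
```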
A second computational reduction comes from representing a higher-dimensional beat-chroma patch via a smaller number of principal components - basis vectors in the original beat-chroma patch space that can account for the greatest amount of variation in the actual data set. Thus, a 384-dimensional comparison between two 12x32 chroma patches can be reduced to, say, comparing two 20-dimensional vectors of principal components.
This notebook explores representing a collection of songs described by their beat-chroma arrays as the PCA projection of the 2D Fourier Transform Magnitudes (2DFTM) of 32 beat patches sampled every two beats. These techniques allow approximate matching between all the 32-beat chroma subpatches in the corpus.
This notebook was initially developed as a practical for the course ELEN E4896 Music Signal Processing, but in the event I never got it working well enough to use in class.
```
%pylab inline
from __future__ import print_function
import cPickle as pickle
import os
import time
import IPython
import numpy as np
import scipy
import librosa
def read_beat_chroma_labels(filename):
    """Read back a precomputed beat-synchronous chroma record."""
    with open(filename, "rb") as f:
        beat_times, chroma_features, label_indices = pickle.load(f)
    return beat_times, chroma_features, label_indices
def my_imshow(data, **kwargs):
    """Wrapper for imshow that sets common defaults."""
    plt.imshow(data, interpolation='nearest', aspect='auto', origin='bottom', cmap='gray_r', **kwargs)
DATA_DIR = '/Users/dpwe/Downloads/prac10/data/'
file_id = 'beatles/Let_It_Be/06-Let_It_Be'
beat_times, chroma, label_indices = read_beat_chroma_labels(os.path.join(DATA_DIR, 'beatchromlabs', file_id + '.pkl'))
my_imshow(chroma[:100].transpose())
# Make an array of all the possible 32-beat patches within the full chroma matrix
# at every possible 2 beat offset. Do this with stride tricks to avoid blowing up
# the representation by 16x.
frame_length = 32
frame_hop = 2
item_bytes = chroma.itemsize
num_beats, num_chroma = chroma.shape
frame_starts = np.arange(0, num_beats - frame_length, frame_hop)
num_frames = len(frame_starts)
chroma_frames = np.lib.stride_tricks.as_strided(
    chroma, strides=(frame_hop * num_chroma * item_bytes,
                     num_chroma * item_bytes, item_bytes),
    shape=(num_frames, frame_length, num_chroma))
# Check that each slice shows an overlapping patch.
for i in xrange(4):
    subplot(2,2,i+1)
    my_imshow(chroma_frames[i].transpose())
# Show that we can calculate the 2DFTM for each patch, and that they are all quite
# similar despite time shifts.
features = np.abs(np.fft.fft2(chroma_frames))
for i in xrange(4):
    subplot(2,2,i+1)
    my_imshow(np.fft.fftshift(features[i].transpose()))
from sklearn.decomposition import PCA
# Calculate 20-dimensional PCA features for this set of chroma 2DFTM patches, unraveled into 384-point vectors.
pca = PCA(n_components=20, whiten=True, copy=True)
flat_features = np.reshape(features, (features.shape[0], features.shape[1]*features.shape[2]))
pca.fit(flat_features)
# Show how the succession of overlapping patches results in relatively slowly-changing PCA projections.
my_imshow(pca.transform(flat_features).transpose())
print(np.cumsum(pca.explained_variance_ratio_))
# Plot the first four 2DFTM principal components.
print(pca.components_.shape)
print(frame_length * num_chroma)
for i in xrange(4):
    subplot(2, 2, i + 1)
    my_imshow(np.fft.fftshift(np.reshape(pca.components_[i], (frame_length, num_chroma)).transpose()))
# Run the separate code to build an entire matrix of beat-chroma-PCA.
os.chdir('/Users/dpwe/Downloads/e4896/elene4896/prac_matchchroma')
import match_chroma
reload(match_chroma)
# Plot the first four 2DFTM principal components.
print(match_chroma.pca_object.components_.shape)
frame_length = 16
print(frame_length * num_chroma)
for i in xrange(9):
    subplot(3, 3, i + 1)
    my_imshow(np.real(np.fft.fftshift(np.fft.ifft2(np.reshape(match_chroma.pca_object.components_[i], (frame_length, num_chroma)).transpose()))))
# We can quickly find all the near neighbors for each pattern
# by putting all the points into a KDTree.
import sklearn.neighbors
kd_tree = sklearn.neighbors.KDTree(match_chroma.all_features)
def find_best_match(tree, query_index, search_depth=20):
"""Find nearest neighbor with a different id in the all_ids."""
query_id = match_chroma.all_ids[query_index]
distances, indices = tree.query(match_chroma.all_features[[query_index]], k=search_depth)
for index_, distance in zip(indices[0], distances[0]):
if match_chroma.all_ids[index_] != query_id:  # compare values, not identity
break
if match_chroma.all_ids[index_] == query_id:
return None, None
else:
return index_, distance
best_query = None
best_distance = 999.0
for i in range(match_chroma.all_features.shape[0]):
best_match, distance = find_best_match(kd_tree, i)
if distance is not None and distance < 1.0:
print(i, best_match, distance)
# Choose one of the near-neighbors we found, re-retrieve its nearest neighbor, check the distance is small.
best_query = 4324
best_match, distance = find_best_match(kd_tree, best_query)
print(best_match, distance)
# Plot the chroma array (and its 2DFTM) for the source pattern, and extract the corresponding audio.
def show_chroma(index, frame_length=16):
track = match_chroma.all_ids[index]
start = match_chroma.all_starts[index]
beat_times, chroma, _ = match_chroma.read_beat_chroma_labels_for_id(track)
chroma_patch = chroma[start : start + frame_length]
subplot(121)
my_imshow(chroma_patch.transpose())
title(track + "@" + str(start))
subplot(122)
fftm_patch = np.fft.fftshift(np.abs(np.fft.fft2(chroma_patch.transpose())))
my_imshow(np.log10(fftm_patch))
wavfile = os.path.join(DATA_DIR, 'mp3s-32k', track + '.mp3')
y, sr = librosa.load(wavfile, sr=None)
return y[int(sr * beat_times[start]):int(sr * beat_times[start + frame_length])], sr, fftm_patch
plt.figure(figsize=(12,4))
y, sr, patch0 = show_chroma(best_query)
print(y.shape)
IPython.display.Audio(data=y, rate=sr)
# Plot the chroma patch, 2DFTM, and audio excerpt of the nearest neighbor.
plt.figure(figsize=(12,4))
y, sr, patch1 = show_chroma(best_match)
IPython.display.Audio(data=y, rate=sr)
# Below here is just a scratch pad for trying to find more interesting near-neighbor pairs.
print(match_chroma.all_ids.index('beatles/Let_It_Be/06-Let_It_Be'))
print(np.sqrt(np.sum((patch0 - patch1)**2)))
print(np.sqrt(np.sum((match_chroma.all_features[6971] - match_chroma.all_features[1877])**2)))
my_imshow(match_chroma.all_features[0:500].transpose())
plot(np.std(match_chroma.all_features, axis=0))
my_imshow(np.cov(match_chroma.all_features, rowvar=0))
framed_array, starts = match_chroma.frame_array(read_beat_chroma_labels(os.path.join(DATA_DIR, 'beatchromlabs', file_id + '.pkl'))[1], frame_length=8, frame_hop=1)
flattened_array = np.reshape(framed_array, (framed_array.shape[0], framed_array.shape[1]*framed_array.shape[2]))
my_imshow(sklearn.metrics.pairwise.pairwise_distances(flattened_array))
```
| github_jupyter |
<a href="https://colab.research.google.com/github/kalz2q/mycolabnotebooks/blob/master/learnerlang.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Notes
* Dabbling in Erlang
* Erlang tutorials use the erl shell, but it is inconvenient on Colab,
* so we use erl -eval or escript instead
Reference sites:
* https://learnyousomeerlang.com/
* https://erlang.org/doc/getting_started/users_guide.html
```
%%capture
!sudo rm -f /usr/local/lib/python3.7/dist-packages/ideep4py/lib/libmkldnn.so.0
!sudo ln -s /usr/local/lib/python3.7/dist-packages/ideep4py/lib/libmkldnn.so /usr/local/lib/python3.7/dist-packages/ideep4py/lib/libmkldnn.so.0
!sudo apt install erlang
# One suggested way to find the Erlang version number:
!erl -eval 'erlang:display(erlang:system_info(otp_release)), halt().' -noshell
# Note: erl --version just starts the erl shell instead
!erl -version
# Experiment
!erl -eval 'erlang:display("hello erlang"),halt().' -noshell #=> "hello erlang"
!erl -eval 'erlang:display(3+7),halt().' -noshell #=> 10
# The following is from the article "Erlang lab: using erl from the command line" by Masayuki Hiyama
!erl -boot start_clean -noshell -eval 'io:fwrite("Hello, world.~n")' -s init stop
!erl -boot start_clean -noshell -eval 'io:fwrite("~p~n",[12345*12345*12345])' -s init stop
# Experiment: write it as simply as possible
!erl -noshell -eval 'io:fwrite("Hello, world.~n"),halt()'
# Experiment: for values other than strings, erlang:display may be simpler
!erl -noshell -eval 'erlang:display("hello display"),halt()'
!erl -noshell -eval 'erlang:display(3+15),halt()'
!erl -noshell -eval 'erlang:display(3+15),halt()'
!erl -noshell -eval 'erlang:display(49+100),halt()'
!erl -noshell -eval 'erlang:display(1892 - 1472),halt()'
!erl -noshell -eval 'erlang:display(5 / 2),halt()'
!erl -noshell -eval 'erlang:display(5 div 2),halt()'
!erl -noshell -eval 'erlang:display(5 rem 2),halt()'
# To run multiple expressions on one line, separate them with a comma `,`
!erl -noshell -eval 'erlang:display(3+15), erlang:display(49+100),halt()'
!erl -noshell -eval 'erlang:display(49+100),halt()'
# Parentheses can be used
!erl -noshell -eval 'erlang:display((50 * 100) - 4999),halt()'
!erl -noshell -eval 'erlang:display(-(50 * 100 - 4999)),halt()'
!erl -noshell -eval 'erlang:display(-50 * (100 - 4999)),halt()'
# Binary and other number bases
!erl -noshell -eval 'erlang:display(2#101010),halt()'
!erl -noshell -eval 'erlang:display(8#0677),halt()'
!erl -noshell -eval 'erlang:display(16#AE),halt()'
# Variables can be used too
# Erlang variables start with an uppercase letter
!erl -noshell -eval 'One=1,erlang:display(One),halt()'
!erl -noshell -eval 'Uno=1, One=Uno,erlang:display(One),halt()'
!erl -noshell -eval 'One=1, Two=One+One, erlang:display(Two),halt()'
# Two = Two+1 is an error
# two = 2 is an error
# `=` is the match operator, which performs pattern matching
# On a match it returns the value; on a mismatch it raises an error
!erl -noshell -eval 'erlang:display(47 = 45 + 2),halt()' #=> 47
# If the left-hand side is an unbound variable, the value of the right-hand side is bound to it and the match is then attempted
!erl -noshell -eval 'erlang:display(Var = 45 + 3),halt()' #=> 48
# A variable keeps its value
!erl -noshell -eval 'erlang:display(Var = 45 + 2), erlang:display(Var),halt()' #=> 47 47
# Behavior of the `_` (underscore) variable
!erl -noshell -eval 'erlang:display(_=14+3),halt()' #=> 17
# !erl -noshell -eval 'erlang:display(_=14+3), erlang:display(_), halt()' #=> raises an unbound_var error
# In the erl shell, variable bindings can be cleared with `f()`
# This cannot be done in a program
# !erl -noshell -eval 'erlang:display(Var=1), f(Var), halt()' #=> raises an undef error
```
# Atoms
```
# Atoms start with a lowercase letter
# cat simply means cat
!erl -noshell -eval 'erlang:display(cat), halt()' #=> cat
!erl -noshell -eval 'erlang:display(cat=cat), halt()' #=> cat
# Atoms can also be created with single quotes
# Atoms containing `_` or `@` can also be wrapped in single quotes
!erl -noshell -eval 'erlang:display(atom), halt()'
!erl -noshell -eval 'erlang:display(atoms_rule), halt()'
!erl -noshell -eval 'erlang:display(atoms_rule@erlang), halt()'
!erl -noshell -eval "erlang:display('Atoms can be cheated!'), halt()"
!erl -noshell -eval "erlang:display(atom='atom'), halt()"
# Atoms are handy as constants, but
# they consume memory and are never garbage collected, which is a risk,
# so do not create them dynamically or let users create them
# Some words are reserved and cannot be used:
# after and andalso band begin bnot bor bsl bsr bxor case catch cond div end
# fun if let not of or orelse query receive rem try when xor
```
# Boolean Algebra and Comparison Operators
```
```
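The empty cell above can be filled in the same one-liner style used throughout; a minimal sketch (assuming the `apt` install of Erlang above succeeded):

```
# Boolean operators
!erl -noshell -eval 'erlang:display(true and false), halt()' #=> false
!erl -noshell -eval 'erlang:display(true or false), halt()'  #=> true
!erl -noshell -eval 'erlang:display(not true), halt()'       #=> false
# Comparison: == allows numeric coercion, =:= tests exact equality
!erl -noshell -eval 'erlang:display(5 == 5.0), halt()'  #=> true
!erl -noshell -eval 'erlang:display(5 =:= 5.0), halt()' #=> false
!erl -noshell -eval 'erlang:display(1 < 2), halt()'     #=> true
```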
# Importing Data
## csv
Use the built-in library 'csv' to read (or write) csv files.
```
import csv
with open('example.csv') as my_file:
my_reader = csv.reader(my_file)
for row in my_reader:
print(row)
```
To read into arrays so we can plot a scatter plot, you could do something like this:
```
import numpy
x = []
y = []
with open('example.csv') as my_file:
my_reader = csv.reader(my_file)
for row in my_reader:
x.append(row[0]) # we need to know that x is column 0
y.append(row[1]) # and y is column 1
x.pop(0) # remove the first item, which is the string 'x'
y.pop(0) # remove the first item, which is the string 'y'
x = numpy.array(x)
y = numpy.array(y)
x
from matplotlib import pyplot as plt
%matplotlib inline
plt.plot(x,y,'co')
```
## csv.DictReader
A `DictReader` reads the column headings from the first row and returns a sequence of Python dictionaries. This means we don't have to keep track of which column is which, making the code easier to understand and more robust to changes in the data file.
```
import numpy as np
data = []
with open('example.csv', 'r') as file_in:
reader_in = csv.DictReader(file_in)
for row in reader_in:
data.append(row)
x = np.array([row['x'] for row in data])
y = np.array([row['y'] for row in data])
x = []
y = []
with open('example.csv', 'r') as file_in:
reader_in = csv.DictReader(file_in)
for row in reader_in:
x.append(row['x'])
y.append(row['y'])
x = np.array(x)
y = np.array(y)
x
```
## numpy.loadtxt
Use numpy's built-in `loadtxt` function. Check the documentation to find out about the various options.
```
xy = numpy.loadtxt('example.csv', delimiter=',', skiprows=1, usecols=(0,1))
# xy is now an (n x 2) array.
# We can transpose it into (2 x n)
# then split into two arrays, x and y.
x, y = xy.T
# (That is a helpful trick for processing results from scipy.integrate.odeint.)
plt.plot(x,y,'co')
x, y = numpy.loadtxt('example.csv', delimiter=',', skiprows=1, usecols=(0,1), unpack=True)
plt.plot(x,y, 'co')
x
```
## pandas
`conda install pandas` if you don't already have it. Check the help menu top left.
### Challenge:
Using pandas, find the correlation between x and y for each group of animals.
```
import pandas as pd
data = pd.read_csv('example.csv')
print(type(data)) # it's a "DataFrame"
data
```
We can extract the dogs like this, into a new DataFrame
```
dogs = data.loc[data.animal == 'dog']
print(type(dogs))
dogs
```
Or we could have made a dataframe with the animals as an index column, and used that.
```
data_with_index = pd.read_csv('example.csv', index_col=2)  # DataFrame.from_csv is deprecated; read_csv does the same
dogs = data_with_index.loc[data_with_index.index=='dog']
dogs
```
To find the correlation between $x$ and $y$ for a given animal, we could extract the x and y into numpy arrays, and use a tool we already know (SciPy)
```
import scipy.stats
x, y = np.array(dogs['x']), np.array(dogs['y'])
scipy.stats.linregress(x,y).rvalue
```
We need to do this in a loop for each animal.
Rather than hard-code the list of animals, we'll extract them from the data (in case a new one shows up!)
```
# We can use a set to get the unique animals
set(data.animal)
# Note that these are equivalent:
data.animal is data['animal']
for animal in set(data['animal']):
pets = data.loc[data['animal']==animal]
x, y = np.array(pets['x']), np.array(pets['y'])
rvalue = scipy.stats.linregress(x,y).rvalue
print(animal, rvalue)
```
Or, rather than exporting to numpy and scipy, a little reading of pandas help finds there's a built in correlation method (that can do many types of pairwise correlations).
```
dogs.corr()
# in a loop
for animal in set(data['animal']):
pets = data.loc[data['animal']==animal]
print(animal, pets.corr()['x']['y'])
```
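pandas can also do the per-animal split itself with `groupby`, which removes the explicit loop entirely (a sketch with hypothetical stand-in data, since `example.csv` is not shown here):

```python
import pandas as pd

# Hypothetical stand-in for example.csv (columns x, y, animal).
toy_data = pd.DataFrame({
    "x": [1, 2, 3, 4, 5, 6],
    "y": [2, 4, 6, 5, 3, 1],
    "animal": ["dog", "dog", "dog", "cat", "cat", "cat"],
})

# groupby yields (name, sub-frame) pairs, one per animal.
correlations = {name: g["x"].corr(g["y"]) for name, g in toy_data.groupby("animal")}
print(correlations)  # dog is perfectly increasing, cat perfectly decreasing
```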
Visit http://pandas.pydata.org/pandas-docs/stable/10min.html for more.
You can then get fancy with Pandas data selection:
```
mammals=['dog','cat']
data[data.animal.isin(mammals)].corr()
```
```
import os
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '../t5/prepare/mesolitica-tpu.json'
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
from google.cloud import storage
client = storage.Client()
bucket = client.bucket('mesolitica-tpu-general')
os.makedirs('out-large', exist_ok=True)
blob = bucket.blob('albert-large/model.ckpt-475000.data-00000-of-00001')
blob.download_to_filename('out-large/model.ckpt-475000.data-00000-of-00001')
blob = bucket.blob('albert-large/model.ckpt-475000.index')
blob.download_to_filename('out-large/model.ckpt-475000.index')
blob = bucket.blob('albert-large/model.ckpt-475000.meta')
blob.download_to_filename('out-large/model.ckpt-475000.meta')
from albert import modeling
from albert import optimization
from albert import tokenization
import tensorflow as tf
import numpy as np
tokenizer = tokenization.FullTokenizer(
vocab_file='sp10m.cased.v10.vocab', do_lower_case=False,
spm_model_file='sp10m.cased.v10.model')
tokenizer.tokenize('Husein comel')
albert_config = modeling.AlbertConfig.from_json_file('LARGE_config.json')
albert_config
def gather_indexes(sequence_tensor, positions):
"""Gathers the vectors at the specific positions over a minibatch."""
sequence_shape = modeling.get_shape_list(sequence_tensor, expected_rank=3)
batch_size = sequence_shape[0]
seq_length = sequence_shape[1]
width = sequence_shape[2]
flat_offsets = tf.reshape(
tf.range(0, batch_size, dtype=tf.int32) * seq_length, [-1, 1])
flat_positions = tf.reshape(positions + flat_offsets, [-1])
flat_sequence_tensor = tf.reshape(sequence_tensor,
[batch_size * seq_length, width])
output_tensor = tf.gather(flat_sequence_tensor, flat_positions)
return output_tensor
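# Sanity check (a NumPy sketch added for illustration, not part of the
# original notebook): the flat-offset arithmetic used by gather_indexes can
# be verified on a tiny array by flattening to (batch*seq, width) and
# offsetting each example's positions by its row start, as the TF code does.
toy_sequence = np.arange(2 * 4 * 3).reshape(2, 4, 3)  # (batch, seq, width)
toy_positions = np.array([[1, 3], [0, 2]])            # positions to gather
toy_offsets = (np.arange(2) * 4).reshape(-1, 1)       # row starts: 0 and 4
toy_flat = toy_sequence.reshape(2 * 4, 3)
gathered = toy_flat[(toy_positions + toy_offsets).reshape(-1)]
print(gathered.shape)  # -> (4, 3)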
class Model:
def __init__(
self,
):
self.X = tf.placeholder(tf.int32, [None, None])
self.segment_ids = tf.placeholder(tf.int32, [None, None])
self.input_masks = tf.placeholder(tf.int32, [None, None])
model = modeling.AlbertModel(
config=albert_config,
is_training=False,
input_ids=self.X,
input_mask=self.input_masks,
token_type_ids=self.segment_ids,
use_one_hot_embeddings=False)
input_tensor = model.get_sequence_output()
output_weights = model.get_embedding_table()
with tf.variable_scope("cls/predictions"):
with tf.variable_scope("transform"):
input_tensor = tf.layers.dense(
input_tensor,
units=albert_config.embedding_size,
activation=modeling.get_activation(albert_config.hidden_act),
kernel_initializer=modeling.create_initializer(
albert_config.initializer_range))
input_tensor = modeling.layer_norm(input_tensor)
output_bias = tf.get_variable(
"output_bias",
shape=[albert_config.vocab_size],
initializer=tf.zeros_initializer())
logits = tf.matmul(input_tensor, output_weights, transpose_b=True)
logits = tf.nn.bias_add(logits, output_bias)
log_probs = tf.nn.log_softmax(logits, axis=-1)
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model()
sess.run(tf.global_variables_initializer())
var_lists = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope = 'bert')
cls = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope = 'cls')
saver = tf.train.Saver(var_list = var_lists + cls)
saver.restore(sess, 'out-large/model.ckpt-475000')
saver = tf.train.Saver(tf.trainable_variables())
saver.save(sess, 'albert-large/model.ckpt')
import os
out = 'albert-large-bahasa-standard-cased'
os.makedirs(out, exist_ok=True)
from transformers import AlbertTokenizer, AlbertModel, AlbertConfig, AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AlbertTokenizer('sp10m.cased.v10.model', do_lower_case = False)
tokenizer.save_pretrained(out)
import torch
import logging
from transformers import AlbertConfig, AlbertForMaskedLM, load_tf_weights_in_albert
logging.basicConfig(level=logging.INFO)
def convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, albert_config_file, pytorch_dump_path):
# Initialise PyTorch model
config = AlbertConfig.from_json_file(albert_config_file)
print("Building PyTorch model from configuration: {}".format(str(config)))
model = AlbertForMaskedLM(config)
# Load weights from tf checkpoint
load_tf_weights_in_albert(model, config, tf_checkpoint_path)
# Save pytorch-model
print("Save PyTorch model to {}".format(pytorch_dump_path))
torch.save(model.state_dict(), pytorch_dump_path)
convert_tf_checkpoint_to_pytorch('albert-large/model.ckpt',
'LARGE_config.json',
f'{out}/pytorch_model.bin')
!rm -rf albert-large
!cp sp10m.cased.v10.* out-large
!cp LARGE_config.json out-large/config.json
!tar cvzf albert-large-475k-19-10-2020.tar.gz out-large
tokenizer = AlbertTokenizer.from_pretrained(f'./{out}', do_lower_case = False)
config = AlbertConfig.from_json_file('LARGE_config.json')  # AlbertConfig(path) would set vocab_size to the path string
config.vocab_size = 32000
config.intermediate_size = 4096
config.hidden_size = 1024
config.num_attention_heads = 16
config.num_hidden_groups = 1
config.num_hidden_layers = 24
model = AutoModelWithLMHead.from_pretrained(f'./{out}/pytorch_model.bin', config = config)
fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
fill_mask('tolonglah gov buat something, kami dah [MASK]')
model.save_pretrained(out)
# !transformers-cli upload ./albert-large-bahasa-standard-cased
```
# Machine Learning on Large Volumes of Text
## Mario Graff (mgraffg@ieee.org, mario.graff@infotec.mx)
## Sabino Miranda (sabino.miranda@infotec.mx)
## Daniela Moctezuma (dmoctezuma@centrogeo.edu.mx)
## Eric S. Tellez (eric.tellez@infotec.mx)
## CONACYT, INFOTEC and CentroGEO
## [https://github.com/ingeotec](https://github.com/ingeotec)
* Applications
* Sentiment analysis
* Authorship attribution
* News classification
* Spam
* Gender and age
* Conclusions
# Sentiment Analysis
* $\mu$TC
* Distant supervision (D.S.)
* Knowledge transfer to other languages
## EvoDAG

## EvoDAG

## $\mu$TC + EvoDAG

## $\mu$TC + EvoDAG

## INEGI
Algorithm | Macro $F_1$ | Positive | Negative | Neutral
----------|------------|----------|----------|--------
$\mu$TC | 0.5061 | 0.7631 | 0.5376 | 0.2174
$\mu$TC + EvoDAG | 0.5691 | 0.7566 | 0.5849 | 0.3659
Ord. Voc. Aff. | 0.5704 | 0.7529 | 0.5843 | 0.3739
D.S. all tweets| 0.4783 | 0.6200 | 0.5396 | 0.2755
D.S. | 0.4796 | 0.6269 | 0.5422 | 0.2698
## TASS 2015
Algorithm | Macro $F_1$ | Positive | Negative | Neutral
----------|------------|----------|----------|--------
$\mu$TC |0.6270 | 0.7217 | 0.6403 | 0.5189
$\mu$TC + EvoDAG|0.6420 | 0.7288 | 0.6430 | 0.5541
Ord. Voc. Aff. |0.6352 | 0.7139 | 0.6413 | 0.5504
D.S. all tweets |0.5439 | 0.6108 | 0.6000 | 0.4209
D.S. |0.5439 | 0.6108 | 0.6000 | 0.4209
## SemEval 2015
Algorithm | Macro $F_1$ | Positive | Negative | Neutral
----------|------------|----------|----------|--------
$\mu$TC |0.5815 | 0.6430 | 0.4333 | 0.6681
$\mu$TC + EvoDAG|0.5848 | 0.6322 | 0.4894 | 0.6327
Ord. Voc. Aff. |0.5756 | 0.6292 | 0.4769 | 0.6207
D.S. all tweets |0.5194 | 0.5696 | 0.4480 | 0.5407
D.S. |0.5217 | 0.5622 | 0.4543 | 0.5485
## SemEval 2016
Algorithm | Macro $F_1$ | Positive | Negative | Neutral
----------|------------|----------|----------|--------
$\mu$TC |0.4638 | 0.5907 | 0.2980 | 0.5026
$\mu$TC + EvoDAG|0.5229 | 0.6223 | 0.4564 | 0.4901
Ord. Voc. Aff. |0.5144 | 0.6154 | 0.4472 | 0.4807
D.S. all tweets |0.4664 | 0.5753 | 0.4068 | 0.4172
D.S. |0.4431 | 0.5577 | 0.3926 | 0.3790
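All of the tables above report macro $F_1$, the unweighted mean of the per-class $F_1$ scores; a minimal sketch of the computation (the precision/recall values below are hypothetical):

```python
def f1(precision, recall):
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# Hypothetical per-class (precision, recall) for positive, negative, neutral.
per_class = [(0.8, 0.7), (0.6, 0.5), (0.4, 0.3)]
macro_f1 = sum(f1(p, r) for p, r in per_class) / len(per_class)
print(round(macro_f1, 4))  # -> 0.545
```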
## Distant Supervision

## EvoDAG + Knowledge Transfer

## EvoDAG + Knowledge Transfer
Model | INEGI | TASS2015 | SemEval2015 | SemEval2016
------|-------|----------|-------------|-------------
INEGI | 0.4796 | 0.4393 | 0.4759 | 0.4312
TASS2015 | 0.4910 | 0.5439 | 0.5065 | 0.4652
SemEval2015 | 0.4690 | 0.4491 | 0.5217 | 0.4526
SemEval2016 | 0.4383 | 0.4504 | 0.4684 | 0.4431
## Authorship Attribution
* [PAN](http://pan.webis.de/index.html): shared tasks on text identification.
* Identify the author based on writing style
* Tasks
* gender
* age
* gender - age
* Languages
* Spanish
* English
* [PAN2013](http://pan.webis.de/clef13/pan13-web/index.html)
* [PAN2016](http://pan.webis.de/clef16/pan16-web/author-identification.html)
```json
{"text": ["Domingo termina Horario de Verano. A mi la verdad me encanta esta \u00e9poca del a\u00f1o x q los d\u00edas x alguna razon parece q me \"duran mas\".", "@ClaudioXGG a mi me preocupa mucho la distancia con la Sociedad q Reforma Fiscal q vendieron a @EPN caus\u00f3 en forma innecesaria."], "age_group": "35-49", "klass": "male", "gender": "male"}
```
## PAN2013 - Gender
### English
Approach | Accuracy
---------|----------
Meina et al. | 0.5921
microTC | 0.5867
Santosh et al. | 0.5816
Pastor L. et al. | 0.5690
## PAN2013 - Gender
### Spanish
Approach | Accuracy
---------|----------
microTC | 0.6750
Santosh et al. | 0.6473
Pastor L. et al. | 0.6299
Cruz et al. | 0.6165
## PAN2013 - Age
### English
* Age ranges of 10, 20, and 30 years
Approach | Accuracy
---------|----------
microTC | 0.6605
Pastor L. et al. | 0.6572
Meina et al. | 0.6491
Santosh et al. | 0.6408
## PAN2013 - Age
### Spanish
Approach | Accuracy
---------|----------
microTC | 0.6897
Pastor L. et al. | 0.6558
Santosh et al. | 0.6430
Cruz et al. | 0.6219
## authorship - CCA
### JimGilchrist
Hong Kong Dragon Airlines (Dragonair) is about to buy two new Airbus Industrie consortium aircraft and lease another to cope with increasing demand on its routes into China, industry sources told Reuters.
...
Dragonair recently began new services to the Chinese city of Qingdao and Khaohsiung in Taiwan. It also has full scheduled cargo rights on two of its Chinese routes to Xian and Chengdu and says it has plans for additional Chinese services to Chongqing, Urumqi and Shantou.
Other shareholders in Dragonair are the Chinese state-owned China National Aviation Corp with 36 percent, China-backed CITIC Pacific Ltd with 29 percent and Cathay's parent, Swire Pacific Ltd with eight percent. --Air Cargo Newsroom Tel+44 171 542 7706 Fax +44 171 542 5017
## authorship - CCA
Approach | accuracy | macro-f1 | micro-f1
---------|----------|----------|---------|
microTC | 0.7680 | 0.7544 |0.7680
Escalante et al. 2015 | 0.7372 | 0.7032 | -
Cummins et al. | 0.1000 |0.0182 | -
## authorship - NFL
### Joe Lapointe
Last season, the consistent strength of the Giants' offense was a combination of three running backs known as Earth, Wind and Fire.
It was meant as a compliment, but those elements of nature can also spell disaster, and this season's fortunes have bordered on that.
On Tuesday Ahmad Bradshaw (Fire) was on crutches with at least a sprained left ankle, according to the Giants. The foot was in a protective boot and it was uncertain if he could play on Thursday night at Denver, or if he would even make the trip.
"We'll see," Coach Tom Coughlin said. Coughlin said Bradshaw left Sunday's 34-31 victory over Atlanta with what the Giants thought was a routine sprain.
"But, evidently, it is more severe than that," Coughlin said. For most of the season, Bradshaw has played with a fractured bone in the small toe of his right foot and has practiced rarely.
...
## authorship - NFL
Approach | accuracy | macro-f1 | micro-f1
---------|----------|----------|---------|
microTC | 0.8444 | 0.8403 | 0.8444
Escalante et al. 2015 | 0.8376 | 0.7637 | -
Cummins et al. | 0.7778 | 0.7654 | -
## authorship
* CCA
* NFL
* Business
* Poetry
* Travel
* Cricket
## OHSUMED
* A collection of 348,566 documents (a 20K subset was used)
* MEDLINE,
* Title
* Abstract
* Identify the subject of study.
```json
{"text": "Haemophilus influenzae meningitis with prolonged hospital course.\n A retrospective evaluation of Haemophilus influenzae type b meningitis observed over a 2-year period documented 86 cases.\n Eight of these patients demonstrated an unusual clinical course characterized by persistent fever (duration: greater than 10 days), ... had no sequelae.\n", "klass": "C01"}
```
name | accuracy | macro-f1 | micro-f1
-----|----------|----------|----------
Until 2015 | ~0.40 | - | -
Huynh et al. 2015* | 0.5690 | - | -
microTC | 0.4611 | 0.3959 | 0.4611
## Pre-processed Text
* 20 newsgroups
* Reuters
* 8 classes
* 10 classes
* 52 classes
* cade
* webkb
## cade
```json
{"text": "br br email arvores arvores http www apoio mascote natureza vida links foram animais animais brasileira alunos andar sp desenvolvido projeto carlos guarulhos associacao associacao associacao associacao paulista maos homens homens homens uniao mesquita saudavel acao htm coragem diretoria aguia plantas plantas org org universo tradicional tiveram federacao garra penetrar tornaram assasinadas taz sobreviveram cruelmente shu barranco fontagua homenageamos homenageamos homenageamos insanas heroicamente devastadora kuoshu mundomagico kuo paraventi", "klass": "08_cultura"}
```
# Questions
Welcome to the quickstart notebook of the EfficientNetLite Keras package.
We will go over some basic concepts, like
1. Installation.
2. Download data + fine tune.
3. Convert to TFLite.
4. Convert to ONNX.
Execute the cell below to check if we are using a GPU:
```
!nvidia-smi
```
### Installation
Run the cell below to install the module:
```
!pip install -q git+https://github.com/sebastian-sz/efficientnet-lite-keras@main
import os
import tensorflow as tf
from efficientnet_lite import EfficientNetLiteB0
print(tf.__version__)
```
### Download example dataset
In this section we are going to download an example dataset.
```
!curl https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz | tar xz
```
Remove the License file so it doesn't mess up directory structure:
```
!rm flower_photos/LICENSE.txt
```
Preview Class names:
```
!ls flower_photos
```
### Load the data:
```
DATA_PATH = "./flower_photos"
BATCH_SIZE = 32
TARGET_SIZE = (224, 224)
def preprocess_data(images, labels):
images = (images - 127.00) / 128.00
return images, labels
def augment_data(images, labels):
return tf.image.random_flip_left_right(images), labels
# Create tf.data.dataset objects:
train_dataset = tf.keras.preprocessing.image_dataset_from_directory(
directory=DATA_PATH,
batch_size=BATCH_SIZE,
image_size=TARGET_SIZE,
label_mode="categorical",
seed=1234,
validation_split=0.2,
subset="training"
)
val_dataset = tf.keras.preprocessing.image_dataset_from_directory(
directory=DATA_PATH,
batch_size=BATCH_SIZE,
image_size=TARGET_SIZE,
label_mode="categorical",
seed=1234,
validation_split=0.2,
subset="validation"
)
# Apply preprocessing and augmentation:
AUTOTUNE = tf.data.AUTOTUNE
train_dataset = train_dataset.map(preprocess_data, num_parallel_calls=AUTOTUNE).map(augment_data, num_parallel_calls=AUTOTUNE).prefetch(AUTOTUNE)
val_dataset = val_dataset.map(preprocess_data, num_parallel_calls=AUTOTUNE).prefetch(AUTOTUNE)
# Sanity check our dataset
for image_batch, label_batch in train_dataset.take(1):
print(image_batch.shape)
print(label_batch.shape)
```
### Train (extract features)
Let us fine-tune EfficientNet Lite.
```
def build_model(num_classes=5):
base_model = EfficientNetLiteB0(
input_shape=(TARGET_SIZE[0], TARGET_SIZE[1], 3),
include_top=False,
pooling="avg",
weights="imagenet"
)
base_model.trainable=False
return tf.keras.Sequential([
base_model,
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(num_classes, activation="softmax")
])
model = build_model()
model.compile(
optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=False),
metrics=['accuracy']
)
model.summary()
model.fit(
train_dataset,
epochs=5,
validation_data=val_dataset,
)
```
### Convert to TFLite
We can convert the fine-tuned model to TensorFlow Lite:
```
# Convert
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save
with open("efficientnet_lite.tflite", "wb") as file:
file.write(tflite_model)
!ls *.tflite
```
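To sanity-check a converted model, the TFLite interpreter can run a forward pass. The sketch below uses a tiny stand-in Keras model so it is self-contained; the same calls apply to the `efficientnet_lite.tflite` file written above:

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model (the real notebook would load efficientnet_lite.tflite).
toy = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(5, activation="softmax"),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(toy).convert()

# Run one forward pass with the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.ones((1, 4), dtype=np.float32))
interpreter.invoke()
probs = interpreter.get_tensor(out["index"])
print(probs.shape)  # (1, 5)
```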
### Convert to ONNX
We can also convert the model to ONNX via the `tf2onnx` package:
```
!pip install tf2onnx~=1.8.4
# Save the model in TF's Saved Model format:
model.save("my_saved_model/")
# Convert:
!python -m tf2onnx.convert \
--saved-model my_saved_model/ \
--output efficientnet_lite.onnx
!ls *.onnx
```
# Building abstractive text summaries
_This notebook is part of a tutorial series on [txtai](https://github.com/neuml/txtai), an AI-powered semantic search platform._
In the field of text summarization, there are two primary categories of summarization, extractive and abstractive summarization.
Extractive summarization takes subsections of the text and joins them together to form a summary. This is commonly backed by graph algorithms like TextRank to find the sections/sentences with the most commonality. These summaries can be highly effective but they are unable to transform text and don't have a contextual understanding.
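To make the contrast concrete, here is a toy extractive scorer (a simplified sketch of the idea behind TextRank-style methods, not txtai code): it ranks sentences by word overlap with the rest of the document and keeps the top ones verbatim.

```python
def extractive_summary(sentences, k=1):
    # Score each sentence by word overlap with every other sentence.
    word_sets = [set(s.lower().split()) for s in sentences]
    scores = [sum(len(ws & other) for j, other in enumerate(word_sets) if j != i)
              for i, ws in enumerate(word_sets)]
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]

sentences = [
    "Search is the base of many applications.",
    "Search engines index data so users can find it.",
    "The weather was nice yesterday.",
]
print(extractive_summary(sentences))  # -> ['Search is the base of many applications.']
```

Note that the result is a sentence copied from the input unchanged; only an abstractive model can produce new wording.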
Abstractive summarization uses Natural Language Processing (NLP) models to build transformative summaries of text. This is similar to having a human read an article and asking what was it about. A human wouldn't just give a verbose reading of the text. This notebook shows how blocks of text can be summarized using an abstractive summarization pipeline.
# Install dependencies
Install `txtai` and all dependencies. Since this notebook is using optional pipelines, we need to install the pipeline extras package.
```
%%capture
!pip install git+https://github.com/neuml/txtai#egg=txtai[pipeline]
```
# Create a Summary instance
The Summary instance is the main entrypoint for text summarization. This is a light-weight wrapper around the summarization pipeline in Hugging Face Transformers.
In addition to the default model, additional models can be found on the [Hugging Face model hub](https://huggingface.co/models?pipeline_tag=summarization).
```
%%capture
from txtai.pipeline import Summary
# Create summary model
summary = Summary()
```
# Summarize text
The example below shows how a large block of text can be distilled down into a smaller summary.
```
text = ("Search is the base of many applications. Once data starts to pile up, users want to be able to find it. It’s the foundation "
"of the internet and an ever-growing challenge that is never solved or done. The field of Natural Language Processing (NLP) is "
"rapidly evolving with a number of new developments. Large-scale general language models are an exciting new capability "
"allowing us to add amazing functionality quickly with limited compute and people. Innovation continues with new models "
"and advancements coming in at what seems a weekly basis. This article introduces txtai, an AI-powered search engine "
"that enables Natural Language Understanding (NLU) based search in any application."
)
summary(text, maxlength=10)
```
Notice how the summarizer built a sentence using parts of the document above. It takes a basic understanding of language in order to understand the first two sentences and how to combine them into a single transformative sentence.
# Summarize a document
The next section retrieves an article, extracts text from it (more to come on this topic) and summarizes that text.
```
!wget -q "https://medium.com/neuml/time-lapse-video-for-the-web-a7d8874ff397"
from txtai.pipeline import Textractor
textractor = Textractor()
text = textractor("time-lapse-video-for-the-web-a7d8874ff397")
summary(text)
```
Click through the link to see the full article. This summary does a pretty good job of covering what the article is about!
# DCGAN - Syft Duet - Data Owner 🎸
Contributed by [@Koukyosyumei](https://github.com/Koukyosyumei)
This example trains a DCGAN network on the MNIST dataset with Syft.
This notebook is mainly based on the original pytorch [example](https://github.com/OpenMined/PySyft/tree/dev/examples/duet/dcgan/original).
## PART 1: Launch a Duet Server and Connect
As a Data Owner, you want to allow someone else to perform data science on data that you own and likely want to protect.
In order to do this, we must load our data into a locally running server within this notebook. We call this server a "Duet".
To begin, you must launch Duet and help your Duet "partner" (a Data Scientist) connect to this server.
You do this by running the code below and sending the code snippet containing your unique Server ID to your partner and following the instructions it gives!
```
import os
import torch
import torchvision
import torchvision.utils as vutils
try:
# make notebook progress bars nicer
from tqdm.notebook import tqdm
except ImportError:
    print("Unable to import tqdm")
# TorchVision hotfix https://github.com/pytorch/vision/issues/3549
from syft.util import get_root_data_path
from torchvision import datasets
datasets.MNIST.resources = [
(
"https://ossci-datasets.s3.amazonaws.com/mnist/train-images-idx3-ubyte.gz",
"f68b3c2dcbeaaa9fbdd348bbdeb94873",
),
(
"https://ossci-datasets.s3.amazonaws.com/mnist/train-labels-idx1-ubyte.gz",
"d53e105ee54ea40749a09fcbcd1e9432",
),
(
"https://ossci-datasets.s3.amazonaws.com/mnist/t10k-images-idx3-ubyte.gz",
"9fb629c4189551a2d022fa330f9573f3",
),
(
"https://ossci-datasets.s3.amazonaws.com/mnist/t10k-labels-idx1-ubyte.gz",
"ec29112dd5afa0611ce80d1b7f02629c",
),
]
datasets.MNIST(get_root_data_path(), train=True, download=True)
datasets.MNIST(get_root_data_path(), train=False, download=True)
import syft as sy
duet = sy.launch_duet(loopback=True)
sy.logger.add(sink="./syft_do.log")
```
# Add handlers
```
# handler with no tags accepts everything. Better handlers coming soon.
duet.requests.add_handler(action="accept")
```
# Store
```
duet.store.pandas
```
### <img src="https://github.com/OpenMined/design-assets/raw/master/logos/OM/mark-primary-light.png" alt="he-black-box" width="100"/> Checkpoint 1 : Now STOP and run the Data Scientist notebook until the same checkpoint.
### <img src="https://github.com/OpenMined/design-assets/raw/master/logos/OM/mark-primary-light.png" alt="he-black-box" width="100"/> Checkpoint 2 : Well done!
# Introduction
**Summary:** The Jupyter notebook is a document with text, code and results.
This is a text cell, or more precisely a *markdown* cell.
* Press <kbd>Enter</kbd> to *edit* the cell.
* Press <kbd>Ctrl+Enter</kbd> to *run* the cell.
* Press <kbd>Shift+Enter</kbd> to *run* the cell + advance.
We can make lists:
1. **First** item
2. *Second* item
3. ~~Third~~ item
We can also do LaTeX math, e.g. $\alpha^2$ or
$$
X = \int_0^{\infty} \frac{x}{x+1} dx
$$
```
# this is a code cell
α = 10
# let us do some calculations
a = 20
b = 30
c = a+b
# let's print the results (shown below the cell)
print(c)
```
We can now write some more text, and continue with our calculations.
```
d = c*2
print(d)
print(c)
```
**Note:** Although JupyterLab runs in a browser, it runs locally (the path is something like *localhost:8888/lab*).<br>
**Binder:** The exception is if you use *binder*; then JupyterLab will run in the cloud.
[<img src="https://mybinder.org/badge_logo.svg">](https://mybinder.org/v2/gh/NumEconCopenhagen/lectures-2020/master?urlpath=lab/tree/01/Introduction.ipynb)
# Solve the consumer problem
Consider the following consumer problem:
$$
\begin{aligned}
V(p_{1},p_{2},I) & = \max_{x_{1},x_{2}} x_{1}^{\alpha}x_{2}^{1-\alpha}\\
& \text{s.t.}\\
p_{1}x_{1}+p_{2}x_{2} & \leq I,\,\,\,p_{1},p_{2},I>0\\
x_{1},x_{2} & \geq 0
\end{aligned}
$$
We can solve this problem _numerically_ in a few lines of code.
1. Choose some **parameters**:
```
alpha = 0.5
I = 10
p1 = 1
p2 = 2
```
2. The **consumer objective** is:
```
def value_of_choice(x1,alpha,I,p1,p2):
    # a. all income not spent on the first good
    # is spent on the second
    x2 = (I-p1*x1)/p2
    # b. the resulting utility is
    utility = x1**alpha * x2**(1-alpha)
    return utility
```
3. We can now use a function from the *scipy* module to **solve the consumer problem**.
```
# a. load external module from scipy
from scipy import optimize
# b. make value-of-choice a function of x1 only
obj = lambda x1: -value_of_choice(x1,alpha,I,p1,p2)
# c. call minimizer
solution = optimize.minimize_scalar(obj,bounds=(0,I/p1),method='bounded')
# d. print result
x1 = solution.x
x2 = (I-x1*p1)/p2
print(x1,x2)
```
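Since Cobb-Douglas demand has a closed form, $x_1^{\ast} = \alpha I/p_1$ and $x_2^{\ast} = (1-\alpha)I/p_2$, the numerical result can be sanity-checked directly (a sketch using the parameter values above):

```python
# Closed-form Cobb-Douglas demands: the consumer spends the budget
# share alpha on good 1 and 1-alpha on good 2.
alpha, I, p1, p2 = 0.5, 10, 1, 2

x1_analytic = alpha * I / p1        # expected: 5.0
x2_analytic = (1 - alpha) * I / p2  # expected: 2.5
print(x1_analytic, x2_analytic)
```

The solver should land on (5.0, 2.5) up to its tolerance.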
**Task**: Solve the consumer problem with the CES utility function.
$$
u(x_1,x_2) = (\alpha x_1^{-\beta} + (1-\alpha) x_2^{-\beta})^{-1/\beta}
$$
```
import numpy as np
# a. choose parameters
alpha = 0.5
beta = 0.000001
I = 10
p1 = 1
p2 = 2
# b. value-of-choice
def value_of_choice_ces(x1,alpha,beta,I,p1,p2):
    x2 = (I-p1*x1)/p2
    if x1 > 0 and x2 > 0:
        utility = (alpha*x1**(-beta)+(1-alpha)*x2**(-beta))**(-1/beta)
    else:
        utility = 0
    return utility
# c. objective
obj = lambda x1: -value_of_choice_ces(x1,alpha,beta,I,p1,p2)
# d. solve
solution = optimize.minimize_scalar(obj,bounds=(0,I/p1),method='bounded')
# e. result
x1 = solution.x
x2 = (I-x1*p1)/p2
print(x1,x2)
```
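As $\beta \rightarrow 0$ the CES function converges to the Cobb-Douglas form $x_1^{\alpha}x_2^{1-\alpha}$, so with $\beta = 0.000001$ the CES solution should be very close to the one found above. A quick numerical check of the two utility functions at an arbitrary bundle (the bundle (3, 4) is just for illustration):

```python
# compare CES and Cobb-Douglas utility at the same bundle
ces = lambda x1, x2, alpha, beta: (alpha*x1**(-beta) + (1-alpha)*x2**(-beta))**(-1/beta)
cobb_douglas = lambda x1, x2, alpha: x1**alpha * x2**(1-alpha)

alpha, beta = 0.5, 0.000001
u_ces = ces(3.0, 4.0, alpha, beta)
u_cd = cobb_douglas(3.0, 4.0, alpha)
print(u_ces, u_cd)  # nearly identical for small beta
```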
# Simulate the AS-AD model
Consider the following AS-AD model:
$$
\begin{aligned}
\hat{y}_{t} &= b\hat{y}_{t-1}+\beta(z_{t}-z_{t-1})-a\beta s_{t}+a\beta\phi s_{t-1} \\
\hat{\pi}_{t} &= b\hat{\pi}_{t-1}+\beta\gamma z_{t}-\beta\phi\gamma z_{t-1}+\beta s_{t}-\beta\phi s_{t-1} \\
z_{t} &= \delta z_{t-1}+x_{t}, x_{t} \sim N(0,\sigma_x^2) \\
s_{t} &= \omega s_{t-1}+c_{t}, c_{t} \sim N(0,\sigma_c^2) \\
b &= \frac{1+a\phi\gamma}{1+a\gamma} \\
\beta &= \frac{1}{1+a\gamma}
\end{aligned}
$$
where $\hat{y}_{t}$ is the output gap, $\hat{\pi}_{t}$ is the inflation gap, $z_{t}$ is an AR(1) demand shock, and $s_{t}$ is an AR(1) supply shock.
1. Choose **parameters**:
```
a = 0.4
gamma = 0.1
phi = 0.9
delta = 0.8
omega = 0.15
sigma_x = 1
sigma_c = 0.4
T = 100
```
2. Calculate **composite parameters**:
```
b = (1+a*phi*gamma)/(1+a*gamma)
beta = 1/(1+a*gamma)
```
3. Define **model functions**:
```
y_hat_func = lambda y_hat_lag,z,z_lag,s,s_lag: b*y_hat_lag + beta*(z-z_lag) - a*beta*s + a*beta*phi*s_lag
pi_hat_func = lambda pi_lag,z,z_lag,s,s_lag: b*pi_lag + beta*gamma*z - beta*phi*gamma*z_lag + beta*s - beta*phi*s_lag
z_func = lambda z_lag,x: delta*z_lag + x
s_func = lambda s_lag,c: omega*s_lag + c
```
4. Run the **simulation**:
```
import numpy as np
# a. set the random seed
np.random.seed(2015)
# b. allocate simulation data
x = np.random.normal(loc=0,scale=sigma_x,size=T)
c = np.random.normal(loc=0,scale=sigma_c,size=T)
z = np.zeros(T)
s = np.zeros(T)
y_hat = np.zeros(T)
pi_hat = np.zeros(T)
# c. run simulation
for t in range(1,T):
    # i. update z and s
    z[t] = z_func(z[t-1],x[t])
    s[t] = s_func(s[t-1],c[t])
    # ii. compute y and pi
    y_hat[t] = y_hat_func(y_hat[t-1],z[t],z[t-1],s[t],s[t-1])
    pi_hat[t] = pi_hat_func(pi_hat[t-1],z[t],z[t-1],s[t],s[t-1])
```
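With the simulated series in hand, a natural next step is to summarise them. A self-contained sketch that re-runs the simulation above and reports sample moments (the numbers depend on the seed):

```python
import numpy as np

# parameters and composite parameters as above
a, gamma, phi, delta, omega = 0.4, 0.1, 0.9, 0.8, 0.15
sigma_x, sigma_c, T = 1, 0.4, 100
b = (1 + a*phi*gamma) / (1 + a*gamma)
beta = 1 / (1 + a*gamma)

np.random.seed(2015)
x = np.random.normal(0, sigma_x, size=T)
c = np.random.normal(0, sigma_c, size=T)
z, s = np.zeros(T), np.zeros(T)
y_hat, pi_hat = np.zeros(T), np.zeros(T)
for t in range(1, T):
    z[t] = delta*z[t-1] + x[t]
    s[t] = omega*s[t-1] + c[t]
    y_hat[t] = b*y_hat[t-1] + beta*(z[t]-z[t-1]) - a*beta*s[t] + a*beta*phi*s[t-1]
    pi_hat[t] = b*pi_hat[t-1] + beta*gamma*z[t] - beta*phi*gamma*z[t-1] + beta*s[t] - beta*phi*s[t-1]

print(f'std(y_hat) = {y_hat.std():.3f}, std(pi_hat) = {pi_hat.std():.3f}')
print(f'corr(y_hat, pi_hat) = {np.corrcoef(y_hat, pi_hat)[0,1]:.3f}')
```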
5. **Plot** the simulation:
```
import matplotlib.pyplot as plt
%matplotlib inline
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(y_hat,label='$\\hat{y}$')
ax.plot(pi_hat,label='$\\hat{\pi}$')
ax.set_xlabel('time')
ax.set_ylabel('percent')
ax.set_ylim([-8,8])
ax.legend(loc='upper left');
```
I like the **seaborn style**:
```
plt.style.use('seaborn-whitegrid')
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(y_hat,label='$\\hat{y}$')
ax.plot(pi_hat,label='$\\hat{\\pi}$')
ax.set_xlabel('time')
ax.set_ylabel('percent')
ax.set_ylim([-8,8])
ax.legend(loc='upper left',facecolor='white',frameon=True);
```
# Using modules
A **module** is a **.py**-file with functions you import and can then call in the notebook.
Try to open **mymodule.py** and have a look.
```
import mymodule
x = 5
y = mymodule.myfunction(x)
print(y)
```
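For reference, a minimal module might look like the sketch below; the function body here is hypothetical (the actual `mymodule.py` in the course repository may differ), but writing the file and importing it reproduces the workflow:

```python
import sys
import importlib
from pathlib import Path

# write a minimal module file; the body is a hypothetical example
Path('mymodule_demo.py').write_text(
    'def myfunction(x):\n'
    '    "Illustrative example: return x squared plus one."\n'
    '    return x**2 + 1\n'
)

sys.path.insert(0, '.')  # make sure the current folder is importable
mymodule_demo = importlib.import_module('mymodule_demo')
print(mymodule_demo.myfunction(5))  # → 26
```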
# Downloading with Git
1. Follow the [installation guide](https://numeconcopenhagen.netlify.com/guides/python-setup/) in detail
2. Open VScode
3. Press <kbd>Ctrl</kbd>+<kbd>Shift</kbd>+<kbd>P</kbd>
4. Write `git: clone` + <kbd>Enter</kbd>
5. Write `https://github.com/NumEconCopenhagen/lectures-2021` + <kbd>Enter</kbd>
6. You can always update to the newest version of the code with `git: sync` + <kbd>Enter</kbd>
7. Create a copy of the cloned folder and work with the code there (otherwise you cannot sync with updates)
```
import numpy as np
import matplotlib.pyplot as plt
import obspy
from obspy.signal.invsim import paz_to_freq_resp
#example for MBB-2
sens=750
normfact=5.46606E12
p11=-0.063
p12=+0.0521
p1p=complex (p11,p12)
p1m=np.conj(p1p)
p11tak=-0.042
p12tak=+0.026
#p12tak=0.042
p1ptak=complex (p11tak,p12tak)
p1mtak=np.conj(p1ptak)
p2p=-190 + 620j
p2m=-190 - 620j
p3p= -2000 + 3000j
p3m= -2000 - 3000j
print()
print()
print ("Factory Poles (blue line):")
print (p1p,p1m,p2p,p2m,p3p,p3m)
poles = [p1p,p1m, p2p,p2m,p3p,p3m]
polestak = [p1ptak,p1mtak, p2p,p2m,p3p,p3m]
print ("Observed Poles (orange line):")
print (p1ptak,p1mtak,p2p,p2m,p3p,p3m)
zeros=[0,0]
scale_fac = normfact*sens
scale_factak = normfact*650
h, f = paz_to_freq_resp(poles, zeros, scale_fac, 0.002, 67072*64, freq=True)
htak, f = paz_to_freq_resp(polestak, zeros, scale_factak, 0.002, 67072*64, freq=True)
x=10**(3/20)
val3db=sens/x
plt.rcParams['figure.figsize'] = [12, 5]
plt.figure()
plt.semilogx(f, (abs(h)),label="factory")
#plt.semilogx(f, (abs(htak)),label="observed")
#plt.loglog (f, (abs(h)),label="factory")
#plt.loglog (f, (abs(htak)),label="observed")
plt.xlabel('Frequency [Hz]')
plt.ylabel('Amplitude in V*s/m')
plt.grid (which='major')
#plt.vlines(1/120,200,1200,colors='red')
#plt.hlines(val3db,1/500,1/30,colors='green')
plt.legend(loc='upper left')
plt.show()
#print (f.size)
#print (h.size)
#print (f[100],abs(h[100]))
#print (f[500],abs(h[500]))
#print (f[1000],abs(h[1000]))
#print (f[1500],abs(h[1500]))
#determine 3dB point
val3db=sens/x
#print (x,val3db)
#h=htak
print()
print()
print ("**************")
print("The 3dB point of", "%6.2f" % (val3db), "for the lower corner is between")
print()
for i in range(f.size):
    if abs(h[i]) >= val3db:
        t = 1/f[i-1]
        print("Period", "%6.2f" % (t), "[sec]", abs(h[i-1]))
        t = 1/f[i]
        print("Period", "%6.2f" % (t), "[sec]", abs(h[i]))
        print("**************")
        break
print()
print()
#example for MBB-2
sens=750
normfact=5.46606E12
p11=-0.037
p12=+0.037
p1p=complex (p11,p12)
p1m=np.conj(p1p)
p11tak=-0.042
p12tak=+0.026
#p12tak=0.042
p1ptak=complex (p11tak,p12tak)
p1mtak=np.conj(p1ptak)
p2p=-190 + 620j
p2m=-190 - 620j
p3p= -2000 + 3000j
p3m= -2000 - 3000j
print()
print()
print ("Factory Poles (blue line):")
print (p1p,p1m,p2p,p2m,p3p,p3m)
poles = [p1p,p1m, p2p,p2m,p3p,p3m]
polestak = [p1ptak,p1mtak, p2p,p2m,p3p,p3m]
print ("Observed Poles (orange line):")
print (p1ptak,p1mtak,p2p,p2m,p3p,p3m)
zeros=[0,0]
scale_fac = normfact*sens
scale_factak = normfact*650
#example for CMG-3T 360sec
sens=1500
normfact=5.71508E08
p11=-0.012347
p12=+0.012347
p1p=complex (p11,p12)
p1m=np.conj(p1p)
p2p=-502.65
p3p=-1005
p4p=-1131
poles = [p1p,p1m,p2p,p3p,p4p]
zeros=[0,0]
scale_fac = normfact*sens
#example for STS-2
sens=1500
normfact=5.70624E12
p11=-0.037
p12=+0.037
p1p=complex (p11,p12)
p1m=np.conj(p1p)
p2p=-15.99
p3p=-417.1
p41=-100.9
p42=401.9
p4p=complex (p41,p42)
p4m=np.conj(p4p)
p51=-7454
p52=-7142
p5p=complex (p51,p52)
p5m=np.conj(p5p)
p6p=-187.24
poles = [p1p,p1m,p2p,p3p,p4p,p4m,p5p,p5m,p6p]
z41=-318.6
z42=401.2
z4p=complex (z41,z42)
z4m=np.conj(z4p)
zeros=[0,0,-15.15,z4p,z4m]
scale_fac = normfact*sens
plt.subplot(122)
phase = 2 * np.pi + np.unwrap(np.angle(h))
plt.semilogx(f, phase)
plt.xlabel('Frequency [Hz]')
plt.ylabel('Phase [radian]')
# ticks and tick labels at multiples of pi
plt.yticks(
[0, np.pi / 2, np.pi, 3 * np.pi / 2, 2 * np.pi],
['$0$', r'$\frac{\pi}{2}$', r'$\pi$', r'$\frac{3\pi}{2}$', r'$2\pi$'])
plt.ylim(-0.2, 2 * np.pi + 0.2)
# title, centered above both subplots
plt.suptitle('Frequency Response of MBB-2')
# make more room in between subplots for the ylabel of right plot
plt.subplots_adjust(wspace=0.3)
plt.show()
```
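The scan above locates the lower −3 dB corner directly on the response array. The same idea can be sketched with numpy alone on a synthetic one-pole high-pass response (the corner frequency `f0` and sensitivity here are assumed values for illustration, not the instrument's published constants):

```python
import numpy as np

f0 = 1/120                     # assumed corner frequency [Hz]
sens = 750                     # assumed passband sensitivity [V*s/m]
f = np.logspace(-4, 2, 2000)   # frequency axis [Hz]
amp = sens * (f/f0) / np.sqrt(1 + (f/f0)**2)  # one-pole high-pass amplitude

val3db = sens / 10**(3/20)         # -3 dB level
i = int(np.argmax(amp >= val3db))  # first index at or above the level
print("corner period %.1f s (expected about %.0f s)" % (1/f[i], 1/f0))
```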
```
import io
import sys
import pdfminer
from pdfminer.pdfdocument import PDFDocument
from pdfminer.pdfparser import PDFParser
from pdfminer.converter import XMLConverter, HTMLConverter, TextConverter
from pdfminer.pdfpage import PDFPage
from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.converter import PDFPageAggregator
from pdfminer.layout import LAParams, LTTextBox, LTTextLine
get_ipython().config.get('IPKernelApp', {})['parent_appname'] = ""
import pandas as pd
import numpy as np
from textblob import TextBlob
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
import warnings
warnings.filterwarnings('ignore')
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (10.0, 6.0)
import spacy
import plotly.graph_objs as go
import chart_studio.plotly as py  # plotly.plotly moved to the chart_studio package in plotly 4+
from plotly.graph_objs import FigureWidget
import cufflinks
pd.options.display.max_columns = 30
from IPython.core.interactiveshell import InteractiveShell
import plotly.figure_factory as ff
InteractiveShell.ast_node_interactivity = 'all'
from plotly.offline import iplot
cufflinks.go_offline()
cufflinks.set_config_file(world_readable=True, theme='pearl')
from sklearn.decomposition import TruncatedSVD
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.manifold import TSNE
from bokeh.plotting import figure, output_file, show
from bokeh.models import Label
from bokeh.io import output_notebook
output_notebook()
from collections import Counter
import scattertext as st
import spacy
from pprint import pprint
from wordcloud import WordCloud, STOPWORDS
from spacy.lang.en import English
import en_core_web_sm
def convert(filename):
    fp = open(filename, 'rb')
    rsrcmgr = PDFResourceManager()
    retstr = io.StringIO()
    codec = 'utf-8'
    laparams = LAParams()
    device = TextConverter(rsrcmgr, retstr, codec=codec, laparams=laparams)
    # Create a PDF interpreter object.
    interpreter = PDFPageInterpreter(rsrcmgr, device)
    # Process each page contained in the document.
    for page in PDFPage.get_pages(fp):
        interpreter.process_page(page)
    data = retstr.getvalue()
    return data
#data=convert('t.pdf')
import os
#converts all pdfs in directory pdfDir, saves all resulting txt files to txtdir
def convertMultiple(pdfDir, txtDir):
    if pdfDir == "": pdfDir = os.getcwd() + "\\"  # if no pdfDir passed in
    for pdf in os.listdir(pdfDir):  # iterate through pdfs in pdf directory
        fileExtension = pdf.split(".")[-1]
        fileName = pdf.split(".")[0]
        if fileExtension == "pdf":
            pdfFilename = pdfDir + pdf
            text = convert(pdfFilename)  # get string of text content of pdf
            textFilename = txtDir + fileName + ".txt"
            textFile = open(textFilename, "w", encoding='utf8')  # make text file
            textFile.write(text)  # write text to text file
            textFile.close()
pdfDir = "C:/Users/v9022828/Documents/Projects/Learn/TextAnalytics/Text/dataf/"
txtDir = "C:/Users/v9022828/Documents/Projects/Learn/TextAnalytics/Text/datat/"
resDir = "C:/Users/v9022828/Documents/Projects/Learn/TextAnalytics/Text/results/"
convertMultiple(pdfDir, txtDir)
df = pd.DataFrame()
df_c = pd.DataFrame()
c=0
for file in os.listdir(txtDir):
    textFilename = txtDir + file
    print(textFilename)
    fileExtension = file.split(".")[-1]
    if fileExtension == "txt":
        data = pd.read_csv(textFilename, sep="\n", header=None)
        df = pd.concat([df, data])  # DataFrame.append was removed in pandas 2.0
        df_c = pd.concat([df_c, data], ignore_index=True, axis=1)
        c = c + 1
get_ipython().config.get('IPKernelApp', {})['parent_appname'] = ""
df_c.tail()
#print(df_c.head())
#df_s = df_c[[2]].copy()
#df_s = df_s.apply(lambda x: x.lower())
df_c.dropna(inplace=True)
#df_s.columns=['text']
#print(df_s.tail())
df_c[2].shape
```
# Removing numbers and non-ASCII words
```
df_s2 = df_s['text'].str.replace(r'\d+', '', regex=True)
def remove_non_ascii(text):
    return ''.join(i for i in text if ord(i) < 128)
df_s3 = df_s2.apply(remove_non_ascii)
df_s3.tail()
```
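An equivalent, and usually faster, way to drop non-ASCII characters is to round-trip through an ASCII encode with `errors='ignore'` (a small sketch):

```python
def remove_non_ascii_fast(text):
    # encode to ASCII bytes, dropping anything outside the range,
    # then decode back to str
    return text.encode('ascii', errors='ignore').decode('ascii')

print(remove_non_ascii_fast('naïve café'))  # → 'nave caf'
```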
# Top bigrams after removing stop words
```
def get_top_n_bigram(corpus, n=None):
    vec = CountVectorizer(ngram_range=(2, 2), stop_words='english').fit(corpus)
    bag_of_words = vec.transform(corpus)
    sum_words = bag_of_words.sum(axis=0)
    words_freq = [(word, sum_words[0, idx]) for word, idx in vec.vocabulary_.items()]
    words_freq = sorted(words_freq, key=lambda x: x[1], reverse=True)
    return words_freq[:n]
common_words = get_top_n_bigram(df_s3, 20)
for word, freq in common_words:
    print(word, freq)
df4 = pd.DataFrame(common_words, columns = ['ReviewText' , 'count'])
#fig = plt.figure()
df4.groupby('ReviewText').sum()['count'].sort_values(ascending=False).iplot(
kind='bar', yTitle='Count', linecolor='black', title='Top 20 bigrams in review after removing stop words')
#fig.savefig()
```
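The same bigram counts can be sketched without scikit-learn using `collections.Counter`, which is handy for a quick check on a small corpus (stop-word removal is omitted here for brevity):

```python
from collections import Counter

def top_bigrams(docs, n=5):
    # count adjacent word pairs across a list of documents
    counts = Counter()
    for doc in docs:
        tokens = doc.lower().split()
        counts.update(zip(tokens, tokens[1:]))
    return counts.most_common(n)

docs = ['data science is fun', 'data science is useful']
print(top_bigrams(docs, 3))
```

Here `('data', 'science')` and `('science', 'is')` each appear twice.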
## Saving figures
```
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(15,7))
df4.groupby('ReviewText').sum()['count'].sort_values(ascending=True).plot.barh(ax=ax,title='Top 20 keywords')
plt.show()
fig.savefig('results/Journal.png')
df4.head()
#data=df_4
for file in os.listdir(txtDir):
    fileName = file.split(".")[0]
    textFilename = txtDir + file
    resFilename = resDir + fileName + ".png"
    text = open(textFilename, encoding='utf8').read()
    wordcloud = WordCloud().generate(text)
    wordcloud.to_file(resFilename)
```
# Top unigrams after removing stop words
```
def get_top_n_words(corpus, n=None):
    vec = CountVectorizer(stop_words='english').fit(corpus)
    bag_of_words = vec.transform(corpus)
    sum_words = bag_of_words.sum(axis=0)
    words_freq = [(word, sum_words[0, idx]) for word, idx in vec.vocabulary_.items()]
    words_freq = sorted(words_freq, key=lambda x: x[1], reverse=True)
    return words_freq[:n]
common_words = get_top_n_words(df_s3, 20)
for word, freq in common_words:
    print(word, freq)
df2 = pd.DataFrame(common_words, columns = ['ReviewText' , 'count'])
submission = pd.DataFrame({"Keywords":df2['ReviewText'], "Count":df2['count']})
submission.to_csv("keywords.csv", index=False)
df2.groupby('ReviewText').sum()['count'].sort_values(ascending=False).iplot(
kind='bar', yTitle='Count', linecolor='black', title='Top 20 words in review after removing stop words')
```
# TF-IDF
```
from sklearn.feature_extraction.text import TfidfVectorizer
tvec = TfidfVectorizer(min_df=.0025, max_df=.1, stop_words='english', ngram_range=(1,2))
tvec_weights = tvec.fit_transform(df_s3.dropna())  # df_s3 is already a Series of cleaned text
weights = np.asarray(tvec_weights.mean(axis=0)).ravel().tolist()
weights_df = pd.DataFrame({'term': tvec.get_feature_names_out(), 'weight': weights})
weights_df.sort_values(by='weight', ascending=False).head(20)
```
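The weight itself is easy to compute by hand, which helps when reading `weights_df`: scikit-learn's `TfidfVectorizer` with `smooth_idf=True` uses tf × (ln((1+N)/(1+df)) + 1) before normalisation. A stdlib-only sketch on a toy corpus:

```python
import math

docs = [['data', 'science'], ['data', 'analysis'], ['deep', 'learning']]
N = len(docs)

def tfidf(term, doc):
    tf = doc.count(term)                    # term frequency in this doc
    df = sum(term in d for d in docs)       # number of docs containing term
    idf = math.log((1 + N) / (1 + df)) + 1  # smoothed idf, as in sklearn
    return tf * idf

# 'data' occurs in 2 of 3 docs, 'science' in only 1,
# so 'science' gets the larger weight
print(tfidf('data', docs[0]), tfidf('science', docs[0]))
```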
# Top trigrams after removing stop words
```
def get_top_n_trigram(corpus, n=None):
    vec = CountVectorizer(ngram_range=(3, 3), stop_words='english').fit(corpus)
    bag_of_words = vec.transform(corpus)
    sum_words = bag_of_words.sum(axis=0)
    words_freq = [(word, sum_words[0, idx]) for word, idx in vec.vocabulary_.items()]
    words_freq = sorted(words_freq, key=lambda x: x[1], reverse=True)
    return words_freq[:n]
common_words = get_top_n_trigram(df['text'], 20)
for word, freq in common_words:
    print(word, freq)
df6 = pd.DataFrame(common_words, columns = ['ReviewText' , 'count'])
df6.groupby('ReviewText').sum()['count'].sort_values(ascending=False).iplot(
kind='bar', yTitle='Count', linecolor='black', title='Top 20 trigrams in review after removing stop words')
```
# Top 20 part-of-speech tagging of review corpus
```
import nltk
nltk.download('punkt')
from textblob import TextBlob
blob = TextBlob(str(df['text']))
pos_df = pd.DataFrame(blob.tags, columns = ['word' , 'pos'])
pos_df = pos_df.pos.value_counts()[:20]
pos_df.iplot(
kind='bar',
xTitle='POS',
yTitle='count',
title='Top 20 Part-of-speech tagging for review corpus')
```
# Topic Modeling with LSA (Latent Semantic Analysis)
Reference: https://medium.com/nanonets/topic-modeling-with-lsa-psla-lda-and-lda2vec-555ff65b0b05
```
reindexed_data = df_s
tfidf_vectorizer = TfidfVectorizer(stop_words='english', use_idf=True, smooth_idf=True)
reindexed_data = reindexed_data.values
document_term_matrix = tfidf_vectorizer.fit_transform(reindexed_data)
n_topics = 6
lsa_model = TruncatedSVD(n_components=n_topics)
lsa_topic_matrix = lsa_model.fit_transform(document_term_matrix)
def get_keys(topic_matrix):
    '''
    returns an integer list of predicted topic
    categories for a given topic matrix
    '''
    keys = topic_matrix.argmax(axis=1).tolist()
    return keys
def keys_to_counts(keys):
    '''
    returns a tuple of topic categories and their
    accompanying magnitudes for a given list of keys
    '''
    count_pairs = Counter(keys).items()
    categories = [pair[0] for pair in count_pairs]
    counts = [pair[1] for pair in count_pairs]
    return (categories, counts)
lsa_keys = get_keys(lsa_topic_matrix)
lsa_categories, lsa_counts = keys_to_counts(lsa_keys)
def get_top_n_words(n, keys, document_term_matrix, tfidf_vectorizer):
    '''
    returns a list of n_topic strings, where each string contains the n most common
    words in a predicted category, in order
    '''
    top_word_indices = []
    for topic in range(n_topics):
        temp_vector_sum = 0
        for i in range(len(keys)):
            if keys[i] == topic:
                temp_vector_sum += document_term_matrix[i]
        temp_vector_sum = temp_vector_sum.toarray()
        top_n_word_indices = np.flip(np.argsort(temp_vector_sum)[0][-n:], 0)
        top_word_indices.append(top_n_word_indices)
    top_words = []
    for topic in top_word_indices:
        topic_words = []
        for index in topic:
            temp_word_vector = np.zeros((1, document_term_matrix.shape[1]))
            temp_word_vector[:, index] = 1
            the_word = tfidf_vectorizer.inverse_transform(temp_word_vector)[0][0]
            topic_words.append(the_word.encode('ascii').decode('utf-8'))
        top_words.append(" ".join(topic_words))
    return top_words
top_n_words_lsa = get_top_n_words(3, lsa_keys, document_term_matrix, tfidf_vectorizer)
for i in range(len(top_n_words_lsa)):
    print("Topic {}: ".format(i+1), top_n_words_lsa[i])
top_3_words = get_top_n_words(3, lsa_keys, document_term_matrix, tfidf_vectorizer)
labels = ['Topic {}: \n'.format(i) + top_3_words[i] for i in lsa_categories]
fig, ax = plt.subplots(figsize=(16,8))
ax.bar(lsa_categories, lsa_counts);
ax.set_xticks(lsa_categories);
ax.set_xticklabels(labels);
ax.set_ylabel('Number of review text');
ax.set_title('LSA topic counts');
plt.show();
```
# Text Summarization
```
from gensim.summarization.summarizer import summarize
extracted_text
print(summarize(extracted_text))
```
## Word cloud
# Session 4: Join Operations
### Table of Contents
* [1. Join Types](#1.-Join-Types)
* [2. Inner Join](#2.-Inner-Join)
* [3. Outer Join](#3.-Outer-Join)
* [4. Join Caveats](#4.-Join-Caveats)
* [References](#References)
## 1. Join Types
```
from pyspark.sql import *
from pyspark.sql.functions import *
from pyspark.sql.types import *
from IPython.display import display, display_pretty, clear_output, JSON
spark = (
    SparkSession
    .builder
    .config("spark.sql.session.timeZone", "Asia/Seoul")
    .getOrCreate()
)
# settings to display DataFrames as tables in the notebook
spark.conf.set("spark.sql.repl.eagerEval.enabled", True)  # enable rich display
spark.conf.set("spark.sql.repl.eagerEval.truncate", 100)  # displayed column width
```
### For this join exercise, assume each customer can purchase only one product, giving the tables below
#### Note 1. There were 4 customers, but one has closed their account and no longer exists
| Customer ID (u_id) | Customer Name (u_name) | Gender (u_gender) |
| - | - | - |
| 1 | 정휘센 | 남 |
| 2 | 김싸이언 | 남 |
| 3 | 박트롬 | 여 |
#### Note 2. There are 3 purchased products, and the purchase record of the departed customer remains
| Buyer ID (p_uid) | Product Name (p_name) | Product Price (p_amount) |
| - | - | - |
| 2 | LG DIOS | 2,000,000 |
| 3 | LG Cyon | 1,800,000 |
| 4 | LG Computer | 4,500,000 |
```
user = spark.createDataFrame([
    (1, "정휘센", "남"),
    (2, "김싸이언", "남"),
    (3, "박트롬", "여")
]).toDF("u_id", "u_name", "u_gender")
user.printSchema()
display(user)
purchase = spark.createDataFrame([
    (2, "LG DIOS", 2000000),
    (3, "LG Cyon", 1800000),
    (4, "LG Computer", 4500000)
]).toDF("p_uid", "p_name", "p_amount")
purchase.printSchema()
display(purchase)
```
## 2. Inner Join
### 2.1 Join customer info that matches the purchase info (inner)
```
user.join(purchase, user.u_id == purchase.p_uid).show()
user.join(purchase, user.u_id == purchase.p_uid, "inner").count()
```
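The same inner-join semantics can be cross-checked with pandas, whose `merge` mirrors Spark's join types through the `how` argument (a comparison sketch with the same toy data; pandas is used here only for illustration):

```python
import pandas as pd

user_pd = pd.DataFrame(
    [(1, '정휘센', '남'), (2, '김싸이언', '남'), (3, '박트롬', '여')],
    columns=['u_id', 'u_name', 'u_gender'])
purchase_pd = pd.DataFrame(
    [(2, 'LG DIOS', 2000000), (3, 'LG Cyon', 1800000), (4, 'LG Computer', 4500000)],
    columns=['p_uid', 'p_name', 'p_amount'])

# inner join keeps only the keys present on both sides
inner = user_pd.merge(purchase_pd, left_on='u_id', right_on='p_uid', how='inner')
print(inner)  # only u_id 2 and 3 match → 2 rows
```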
### <font color=green>1. [Basic]</font> Read the customer info "data/tbl_user" and product info "data/tbl_purchase" CSV files, then
#### 1. Print each schema
#### 2. Print each dataset
#### 3. u_id in the customer (tbl_user) table and p_uid in the product (tbl_purchase) table are both customer IDs
#### 4. Starting from the customer table, use an inner join to see which products were purchased
#### 5. Print the schema and data of the final joined table
<details><summary>[Exercise 1] Check the expected output</summary>
> If your code looks similar to the solution below, it is correct
```python
left = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_user.csv")
)
left.printSchema()
left.show()
right = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_purchase.csv")
)
right.printSchema()
right.show()
join_condition = left.u_id == right.p_uid
answer = left.join(right, join_condition, "inner")
answer.printSchema()
display(answer)
```
</details>
```
# Write your practice code here and run it (Shift+Enter)
left = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_user.csv")
)
left.printSchema()
left.show()
right = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_purchase.csv")
)
right.printSchema()
right.show()
join_condition = left.u_id == right.p_uid
answer = left.join(right, join_condition, "inner")
answer.printSchema()
display(answer)
```
## 3. Outer Join
### 3.1 Join purchase info onto all customers (left_outer)
```
user.join(purchase, user.u_id == purchase.p_uid, "left_outer").orderBy(purchase.p_uid.asc()).show()
```
### 3.2 Join customer info onto all products (right_outer)
```
user.join(purchase, user.u_id == purchase.p_uid, "right_outer").orderBy(purchase.p_uid.asc()).show()
```
### 3.3 Join info for all customers and all products (full_outer)
```
user.join(purchase, user.u_id == purchase.p_uid, "full_outer").orderBy(purchase.p_uid.asc()).show()
```
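pandas can again make the outer-join behaviour explicit: `indicator=True` tags each result row with the side it came from, which is a useful way to reason about the left_outer, right_outer, and full_outer variants (a comparison sketch with the same toy data):

```python
import pandas as pd

user_pd = pd.DataFrame({'u_id': [1, 2, 3], 'u_name': ['정휘센', '김싸이언', '박트롬']})
purchase_pd = pd.DataFrame({'p_uid': [2, 3, 4],
                            'p_name': ['LG DIOS', 'LG Cyon', 'LG Computer']})

# full outer join: unmatched rows from both sides survive with nulls
full = user_pd.merge(purchase_pd, left_on='u_id', right_on='p_uid',
                     how='outer', indicator=True)
print(full[['u_id', 'p_uid', '_merge']])
# u_id 1 → left_only, 2 and 3 → both, p_uid 4 → right_only
```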
### <font color=green>2. [Basic]</font> Read the customer info "data/tbl_user" and product info "data/tbl_purchase" CSV files, then
#### 1. Print each schema
#### 2. Print each dataset
#### 3. u_id in the customer (tbl_user) table and p_uid in the product (tbl_purchase) table are both customer IDs
#### 4. Starting from all products, join in the buyer's customer info (left: purchase, right: user, join: left_outer)
#### 5. Print the schema and data of the final joined table
#### 6. When printing, sort by product price in descending order
<details><summary>[Exercise 2] Check the expected output</summary>
> If your code looks similar to the solution below, it is correct
```python
left = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_purchase.csv")
)
left.printSchema()
# left.show()
right = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_user.csv")
)
right.printSchema()
# right.show()
join_condition = left.p_uid == right.u_id
answer = left.join(right, join_condition, "left_outer")
answer.printSchema()
display(answer.orderBy(desc("p_amount")))
```
</details>
```
# Write your practice code here and run it (Shift+Enter)
left = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_purchase.csv")
)
left.printSchema()
# left.show()
right = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_user.csv")
)
right.printSchema()
# right.show()
join_condition = left.p_uid == right.u_id
answer = left.join(right, join_condition, "left_outer")
answer.printSchema()
display(answer.orderBy(desc("p_amount")))
```
### <font color=blue>3. [Intermediate]</font> Read the customer info "data/tbl_user" and product info "data/tbl_purchase" CSV files, then
#### 1. Print each schema
#### 2. Print each dataset
#### 3. u_id in the customer (tbl_user) table and p_uid in the product (tbl_purchase) table are both customer IDs
#### 4. Starting from all customers, join in the purchased product info (left: user, right: purchase, join: left_outer)
#### 5. Print the schema and data of the final joined table
#### 6. When printing, sort by product price (tbl_purchase.p_amount) in descending order
#### 7. When the product price is missing, sort so the most recent signup date (tbl_user.u_signup) comes first
<details><summary>[Exercise 3] Check the expected output</summary>
> If your code looks similar to the solution below, it is correct
```python
left = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_user.csv")
)
left.printSchema()
# left.show()
right = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_purchase.csv")
)
right.printSchema()
# right.show()
join_condition = left.u_id == right.p_uid
answer = left.join(right, join_condition, "left_outer")
answer.printSchema()
display(answer.orderBy(desc("p_amount"), desc("u_signup")))
```
</details>
```
# Write your practice code here and run it (Shift+Enter)
left = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_user.csv")
)
left.printSchema()
# left.show()
right = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_purchase.csv")
)
right.printSchema()
# right.show()
join_condition = left.u_id == right.p_uid
answer = left.join(right, join_condition, "left_outer")
answer.printSchema()
display(answer.orderBy(desc("p_amount"), desc("u_signup")))
```
### <font color=red>4. [Advanced]</font> Read the customer info "data/tbl_user" and product info "data/tbl_purchase" CSV files, then
#### 1. Print each schema
#### 2. Print each dataset
#### 3. u_id in the customer (tbl_user) table and p_uid in the product (tbl_purchase) table are both customer IDs
#### 4. Join all customers with all product info (left: user, right: purchase, join: inner)
#### 5. Print the schema and data of the final joined table
#### 6. When printing, sort by product price (tbl_purchase.p_amount) in descending order
#### 7. When the product price is missing, sort so the most recent signup date (tbl_user.u_signup) comes first
<details><summary>[Exercise 4] Check the expected output</summary>
> If your code looks similar to the solution below, it is correct
```python
left = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_user.csv")
)
left.printSchema()
# left.show()
right = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_purchase.csv")
)
right.printSchema()
# right.show()
join_condition = left.u_id == right.p_uid
answer = left.join(right, join_condition, "inner")
answer.printSchema()
display(answer.orderBy(desc("p_amount"), desc("u_signup")))
```
</details>
```
# Write your practice code here and run it (Shift+Enter)
left = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_user.csv")
)
left.printSchema()
# left.show()
right = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_purchase.csv")
)
right.printSchema()
# right.show()
join_condition = left.u_id == right.p_uid
answer = left.join(right, join_condition, "inner")
answer.printSchema()
display(answer.orderBy(desc("p_amount"), desc("u_signup")))
```
### <font color=blue>5. [Intermediate]</font> Read the customer info "data/tbl_user" and product info "data/tbl_purchase" CSV files, then
#### 1. Print each schema
#### 2. Print each dataset
#### 3. u_id in the customer (tbl_user) table and p_uid in the product (tbl_purchase) table are both customer IDs
#### 4. Join all customers and all product info (left: user, right: purchase, join: full_outer)
#### 5. Print the schema and data of the final joined table
#### 6. When printing, sort by product price (tbl_purchase.p_amount) in descending order
#### 7. When the product price is missing, sort so the most recent signup date (tbl_user.u_signup) comes first
<details><summary>[Exercise 5] Check the expected output</summary>
> If your code looks similar to the solution below, it is correct
```python
left = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_user.csv")
)
left.printSchema()
# left.show()
right = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_purchase.csv")
)
right.printSchema()
# right.show()
join_condition = left.u_id == right.p_uid
answer = left.join(right, join_condition, "full_outer")
answer.printSchema()
display(answer.orderBy(desc("p_amount"), desc("u_signup")))
```
</details>
```
# Write your practice code here and run it (Shift+Enter)
left = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_user.csv")
)
left.printSchema()
# left.show()
right = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_purchase.csv")
)
right.printSchema()
# right.show()
join_condition = left.u_id == right.p_uid
answer = left.join(right, join_condition, "full_outer")
answer.printSchema()
display(answer.orderBy(desc("p_amount"), desc("u_signup")))
```
### <font color=green>6. [Basic]</font> Replace the null values in the join result below
#### 1. Fill in default values for customer ID (u_id), customer name (u_name), gender (u_gender), and signup date (u_signup)
##### u_id = 0, u_name = '미확인', u_gender = '미확인', u_signup = '19700101'
<details><summary>[Exercise 6] Check the expected output</summary>
> If your code looks similar to the solution below, it is correct
```python
left = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_purchase.csv")
)
left.printSchema()
# left.show()
right = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_user.csv")
)
right.printSchema()
# right.show()
join_condition = left.p_uid == right.u_id
user_fill = { "u_id":0, "u_name":"미확인", "u_gender":"미확인", "u_signup":"19700101" }
answer = left.join(right, join_condition, "left_outer").na.fill(user_fill)
answer.printSchema()
display(answer.orderBy(asc("u_signup")))
```
</details>
```
# Write your practice code here and run it (Shift+Enter)
left = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_purchase.csv")
)
left.printSchema()
# left.show()
right = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("data/tbl_user.csv")
)
right.printSchema()
# right.show()
join_condition = left.p_uid == right.u_id
user_fill = { "u_id":0, "u_name":"미확인", "u_gender":"미확인", "u_signup":"19700101" }
answer = left.join(right, join_condition, "left_outer").na.fill(user_fill)
answer.printSchema()
display(answer.orderBy(asc("u_signup")))
```
## 4. Join Caveats
```
u = spark.createDataFrame([
    (1, "정휘센", "남"),
    (2, "김싸이언", "남"),
    (3, "박트롬", "여")
]).toDF("id", "name", "gender")
u.printSchema()
u.show()
p = spark.createDataFrame([
    (2, "LG DIOS", 2000000),
    (3, "LG Cyon", 1800000),
    (4, "LG Computer", 4500000)
]).toDF("id", "name", "amount")
p.printSchema()
p.show()
```
### 4.1 When duplicate column names are not handled
> #### AnalysisException: "Reference 'id' is ambiguous, could be: id, id.;"
```
up = u.join(p, u.id == p.id)
up.show()
# up.select("id")
```
### 4.2 Fixing duplicate column names - rename the columns in one DataFrame
```
u1 = u.withColumnRenamed("id", "u_uid")
p1 = p.withColumnRenamed("id", "p_uid")
u1.printSchema()
p1.printSchema()
up = u1.join(p1, u1.u_uid == p1.p_uid)
up.show()
up.select("u_uid")
```
### 4.3 Fixing duplicate column names - drop the duplicate column right after the join
```
up = u.join(p, u.id == p.id).drop(p.id)
up.show()
up.select("id")
```
### <font color=red>7. [Advanced]</font> Read the customer CSV file "data/tbl_user_id" and the purchase CSV file "data/tbl_purchase_id", then
#### 1. *The ID column is named id in both the customer (tbl_user) table and the purchase (tbl_purchase) table*
#### 2. Print the customer and product information for the customer who bought the most expensive product
#### 3. The final output must have exactly these 4 columns: customer ID (u_id), customer name (u_name), product name (p_name), and product price (p_amount)
#### 4. The rows must be sorted by product price (p_amount) in descending order
<details><summary>[Practice 7] Check the expected output </summary>
> If your code is written along similar lines to the following, it is correct
```python
left = (
spark.read.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.load("data/tbl_purchase_id.csv")
)
# left.printSchema()
# left.show()
right = (
spark.read.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.load("data/tbl_user_id.csv")
)
# right.printSchema()
# right.show()
u_left = left.withColumnRenamed("id", "p_uid")
u_right = right.withColumnRenamed("id", "u_id")
u_left.printSchema()
u_right.printSchema()
join_condition = u_left.p_uid == u_right.u_id
answer = u_left.join(u_right, join_condition, "inner").where("p_amount > 0").select("u_id", "u_name", "p_name", "p_amount")
answer.printSchema()
display(answer.orderBy(desc("p_amount")))
```
</details>
```
# Write your practice code here and run it (Shift+Enter)
left = (
spark.read.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.load("data/tbl_purchase_id.csv")
)
# left.printSchema()
# left.show()
right = (
spark.read.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.load("data/tbl_user_id.csv")
)
# right.printSchema()
# right.show()
u_left = left.withColumnRenamed("id", "p_uid")
u_right = right.withColumnRenamed("id", "u_id")
u_left.printSchema()
u_right.printSchema()
join_condition = u_left.p_uid == u_right.u_id
answer = u_left.join(u_right, join_condition, "inner").where("p_amount > 0").select("u_id", "u_name", "p_name", "p_amount")
answer.printSchema()
display(answer.orderBy(desc("p_amount")))
```
## References
#### 1. [Spark Programming Guide](https://spark.apache.org/docs/latest/sql-programming-guide.html)
#### 2. [PySpark SQL Module Documentation](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html)
#### 3. [PySpark 3.0.1 Built-in Functions](https://spark.apache.org/docs/3.0.1/api/sql/)
#### 4. [PySpark Search](https://spark.apache.org/docs/latest/api/python/search.html)
#### 5. [PySpark Functions](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?#module-pyspark.sql.functions)
```
import pickle
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
makenewdata = False
if makenewdata:
N=50
muac = np.random.randn(N)*15+113
muac.sort()
muac = muac[:-1][np.diff(muac)<7]  # np.diff is one element shorter, so mask the first N-1 values
N = len(muac)
ok = ((muac - 108)+(np.random.randn(N)*15))>0
pickle.dump([muac,ok],open( "example_muac.p", "wb" ))
else:
example = pickle.load(open( "example_muac.p", "rb" ))
muac = example[0]
ok = example[1]
N = len(muac)
if makenewdata:
zscore = 3*((ok-0.5)*0.8 - 0.3*((muac - np.mean(muac))/np.std(muac)) + 0.3*np.random.randn(N))
pickle.dump(zscore,open('example_zscore.p','wb'))
else:
zscore = pickle.load(open( "example_zscore.p", "rb" ))
print(len(muac[ok]))
print(len(muac[~ok]))
def plotdata(muac,ok,xlim = [80,150],testpoint=115):
plt.plot(muac[ok],np.ones(np.sum(ok)),'+g',mew=2,markersize=10)
plt.plot(muac[~ok],np.zeros(np.sum(~ok)),'xr',mew=2,markersize=10)
plt.ylim([-.5,1.5])
plt.xlim(xlim)
plt.xlabel('MUAC / mm')
frame1 = plt.gca()
plt.yticks([0,1], ['need\ntreatment','do not\nneed\ntreatment'])
#frame1.axes.get_yaxis().set_visible(False)
#frame1.axes.get_yaxis().set_ticks(['Did not recover','Recovered'])
plt.text(testpoint,0.5,'?',fontsize=14)
plt.title('MUAC dataset')
fig = plt.gcf()
fig.set_size_inches(8, 6)
def plotroc(muac,ok,threshes,symbol='x',skipnewfig = False):
if not skipnewfig:
plt.figure(num=None, figsize=(6,6), dpi=80, facecolor='w', edgecolor='k')
falsepos = []
trueneg = []
for t in threshes:
falsepos.append(np.mean(muac[ok]<t))
trueneg.append(np.mean(muac[~ok]<t))
plt.plot(falsepos,trueneg,symbol,lw=3,mew=3,markersize=20)
#plt.axis('equal')
margin = 0.01
plt.xlim([0-margin,1+margin])
plt.ylim([0-margin,1+margin])
plt.plot([0,1],[0,1],'k-')
plt.title('ROC Curve')
plt.xlabel('Proportion of children who don\'t\nneed treatment who got it (False Positive)')
plt.ylabel('Proportion of children who\nneed treatment who got it (True Positive)')
#xticks = range(0,1,0.1)
#xticks.append(len(muac[ok]))
#plt.xticks(xticks, xticks)
#yticks = range(0,len(muac[~ok]),2)
#yticks.append(len(muac[~ok]))
#plt.yticks(yticks, yticks)
def savefig(filename):
plt.savefig(filename,bbox_inches='tight',transparent=True)
plotdata(muac,ok)
savefig('classify1.png')
threshes = []
plt.figure()
plotdata(muac,ok)
topthresh = np.max(muac[~ok])+1.5
threshes.append(topthresh)
plt.vlines([topthresh],-1,2)
plt.title('A possible threshold')
savefig('classify2.png')
#plt.figure()
plotroc(muac,ok,threshes)
savefig('roc1.png')
plt.figure()
plotdata(muac,ok)
lowthresh = np.min(muac[ok])-1.5
threshes.append(lowthresh)
plt.vlines([lowthresh],-1,2)
plt.title('Another possible threshold')
savefig('classify3.png')
plotroc(muac,ok,threshes)
savefig('roc2.png')
plt.figure()
plotdata(muac,ok)
midthresh = 4+(np.min(muac[ok])+np.max(muac[~ok]))/2 #moved it a little
threshes.append(midthresh)
plt.vlines([midthresh],-1,2)
plt.title('Somewhere in between?')
savefig('classify4.png')
plotroc(muac,ok,threshes)
savefig('roc3.png')
allthreshes = []
#for t in np.arange(np.min(muac[ok])-1,np.max(muac[~ok])+1,1.0):
for t in np.arange(0,200,1.0):
allthreshes.append(t)
plotroc(muac,ok,threshes)
plotroc(muac,ok,allthreshes,'k-',skipnewfig=True)
savefig('roc4.png')
def plotboth():
plt.title('z-score vs muac')
plt.plot(muac[ok],zscore[ok],'g+',mew=2,markersize=10,label='not required')
plt.plot(muac[~ok],zscore[~ok],'xr',mew=2,markersize=10,label='need\ntreatment')
plt.legend(loc='upper right')
plt.text(115,0.15,'?',fontsize=16)
plt.ylabel('z-score')
plt.xlabel('MUAC / mm')
fig = plt.gcf()
fig.set_size_inches(8, 6)
plotboth()
savefig('both.png')
#plt.plot([-3,2.5],[150,80],'k-')
plt.figure()
plotdata(zscore,ok,xlim=[-3.5,3.5],testpoint=0.15)
plt.title('z-score dataset')
plt.xlabel('z-score')
savefig('zscore.png')
plotboth()
plt.plot([80,145],[3,-3])
savefig('linear.png')
plotdata(zscore,ok,xlim=[-3.5,3.5],testpoint=0.15)
from sklearn.linear_model import LogisticRegression
m = LogisticRegression()
data = zscore[:,None]
target = np.array([1 if k else 0 for k in ok])
m.fit(data,target)
test_muac = np.arange(-5,5,0.01)[:,None]
p = m.predict_proba(test_muac)
plt.plot(test_muac,p[:,1])
#plt.xlim([0,30])
#plt.plot(data,target,'o')
plt.title('Logistic Regression Fit')
plt.xlabel('Z-score')
savefig('logistic.png')
data = np.vstack([zscore,muac]).T #Here I combine the zscores and MUAC. # <<< Modify for exercise 7
target = np.array([1 if k else 0 for k in ok])
m.fit(data, target)
muac.shape
```
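As a cross-check on the hand-rolled threshold sweep in `plotroc`, the same false-positive/true-positive rates can be obtained directly from `scikit-learn`. This is a sketch on synthetic stand-in data (generated the same way as the `makenewdata` branch above), not the pickled dataset:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
muac = rng.normal(113, 15, 200)                    # synthetic MUAC values
need = (muac - 108) + rng.normal(0, 15, 200) < 0   # True = needs treatment

# "needs treatment" is the positive class; lower MUAC means a higher score
fpr, tpr, thresholds = roc_curve(need, -muac)
```

Plotting `fpr` against `tpr` reproduces the ROC curve that `plotroc` sweeps out by hand.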
## Code generation
```
# initialize sympy printing (for latex output)
from sympy import init_printing, Symbol
init_printing()
# import functions and classes for compartment models
from compartor import *
```
Here, we illustrate how to export the moment equations in LaTeX format or generate code for simulations.
We consider again the exemplary model of the paper and derive moments equations with the automated function.
```
x = Content('x')
y = Content('y')
# Intake Distribution
pi_I = OutcomeDistribution.Poisson(Symbol('\pi_{I}(y; \lambda)'),y[0],Symbol('\lambda'))
Intake = TransitionClass( {} -to> [(y[0],0)], 'k_I', pi=pi_I, name='I')
Fusion = TransitionClass( [x] + [y] -to> [x+y], 'k_F', name='F')
Conversion = TransitionClass( [x] -to> [x + (-1,1)], 'k_c', x[0], name='c')
Degradation = TransitionClass( [x] -to> [x + (0,-1)], 'k_d', x[1], name='d')
transitions = [ Intake, Fusion, Conversion, Degradation]
display_transition_classes(transitions)
desired_moments = [Moment(0,0), Moment(0,0)**2, Moment(1,0), Moment(1,0)**2, Moment(0,1), Moment(0,1)**2]
equations = automated_moment_equations(2, transitions, desired_moments)
display_moment_equations(equations)
```
The LaTeX source of the ODE system can be viewed and copy-pasted by changing the Math Renderer option of the Jupyter notebook: right-click on the equations and choose Math Settings > Math Render > Plain Source.
From the closed equations, we can also generate code to simulate the system. Currently, Python or Julia code can be generated.
For a direct code output, the user can rely on the functions:
```
python_code = generate_python_code(equations, function_name="example")
print(python_code)
```
or, for Julia code,
```
julia_code = generate_julia_code(equations, function_name="example")
print(julia_code)
```
Both derive from a common `AbstractCodeGenerator` class, which allows the user to customize the code output with further implementation-specific details. The argument `function_name` sets the base name of the two functions defined in the generated output:
* `<function_name>_ODEs()` implements the ODE system in the respective programming language,
* `<function_name>_initial()` is a supplementary function that computes the initial moments for a given initial configuration `n0` of the compartment population.
## Solving and Plotting results
We use the generated functions with `scipy.integrate.solve_ivp` to solve the system with the same initial condition and parameter values reported in the paper.
```
from scipy.integrate import solve_ivp
import numpy as np
exec(python_code)
# vector of timepoints where to solve ODE problem
timepoints = np.linspace(0, 100, 1001)
# initial condition
# 1 compartment with 20 molecules of first species
M0 = example_initial((1, 20, 0))
# rate constants and other parameters
kI = 1.0
kF = 0.005
kc = 0.1
kd = 0.05
Lambda = 10
parameters = (Lambda, kF, kI, kc, kd)
# solve
sol = solve_ivp(
lambda t, M: example_ODEs(M, np.zeros(7), parameters),
(np.min(timepoints), np.max(timepoints)),
M0,
method='BDF',
t_eval=timepoints)
# print solution of N
print(sol.y[1])
```
Finally, we use `matplotlib` to plot the solution for mean and standard deviation of $N$ and of the total amounts of molecules $M^{(1,0)}$ and $M^{(0,1)}$ for the two chemical species.
```
%matplotlib inline
import matplotlib.pyplot as plt
y = sol.y
N = y[0]
stdN = np.sqrt(y[1] - N**2)
plt.plot(timepoints, y[0], color="b")
plt.ylabel(r'$\left< N \right>$', size=12)
plt.xlabel('time', size=12)
plt.title("Expected compartment number", size=12)
plt.fill_between(timepoints, N-stdN, N+stdN, alpha=0.4, color="b")
plt.ylim((0,30))
m1 = y[2]
stdm1 = np.sqrt(y[3] - m1**2)
m2 = y[4]
stdm2 = np.sqrt(y[5] - m2**2)
plt.plot(timepoints, m1, color="g")
plt.fill_between(timepoints, m1-stdm1, m1+stdm1, alpha=0.4, color="g",label=r'$\left< M^{(1,0)} \right>$')
plt.plot(timepoints, m2, color="orange")
plt.ylabel('molecule number', size=12)
plt.fill_between(timepoints, m2-stdm2, m2+stdm2, alpha=0.4, color="orange",label=r'$\left< M^{(0,1)} \right>$')
plt.xlabel('time', size=12)
plt.title("Expected total molecule amount", size=12)
plt.legend()
plt.ylim((0,250))
```
# Tensor Types
- Module: `torch.Tensor`
```
import torch
torch.Tensor
# help(torch.Tensor)
```
- A Tensor is a matrix whose elements all share one type; Tensors are divided into several variants by data type and by device.
- The default `torch.Tensor` is actually an alias for `torch.FloatTensor`.

Type|Type definition|CPU|GPU
-|-|-|-
16-bit float|torch.half / torch.float16|torch.HalfTensor|torch.cuda.HalfTensor
32-bit float|torch.float / torch.float32|torch.FloatTensor|torch.cuda.FloatTensor
64-bit float|torch.double / torch.float64|torch.DoubleTensor|torch.cuda.DoubleTensor
8-bit signed integer|torch.int8|torch.CharTensor|torch.cuda.CharTensor
16-bit integer|torch.int16 / torch.short|torch.ShortTensor|torch.cuda.ShortTensor
32-bit integer|torch.int32 / torch.int|torch.IntTensor|torch.cuda.IntTensor
64-bit integer|torch.int64 / torch.long|torch.LongTensor|torch.cuda.LongTensor
8-bit unsigned integer|torch.uint8|torch.ByteTensor|torch.cuda.ByteTensor
Boolean|torch.bool|torch.BoolTensor|torch.cuda.BoolTensor
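The defaults in the table can be checked directly; a minimal sketch:

```python
import torch

t = torch.Tensor()            # the default constructor builds a FloatTensor
print(t.dtype)                # torch.float32
i = torch.tensor([1, 2, 3])   # integer literals default to 64-bit integers
print(i.dtype)                # torch.int64
b = torch.tensor([True])      # logical values map to torch.bool
print(b.dtype)                # torch.bool
```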
# Constructing Tensors
- A Tensor can be constructed in two ways:
    1. Constructor style
        - the `torch.Tensor` class
    2. Factory-function style
        - the `torch.tensor` function
## The Tensor constructor
```
help(torch.Tensor.__init__)
```
- There is actually a point here that the documentation does not explain clearly: the constructor above comes from the `_TensorBase` class in `_TensorBase.py`, which is the parent class of `Tensor`; `_TensorBase` itself comes from C++. A few notes on this call chain:
    - The file `_TensorBase.py` can be found with a file search, or by following the definition in PyCharm.
    - Early versions of Torch were implemented in Lua, a language that interoperates with C very directly.
    - Torch was later extended from C to C++.
    - On the Python side, extension work is done with Cython, and most of the performance-critical processing is delegated to C/C++; many things left unclear in the Python documentation can therefore be found in the C++ prototypes, such as the Tensor constructor.
    - The C++ library can be downloaded directly from the official site; for C++ only the library is available — the published source covers only the Python extension, not the C++ core:
    - 
## The Tensor constructor functions in C
- After downloading the C library, the header `TH\generic\THTensor.h` can be found under its include directory:
```C
#ifndef TH_GENERIC_FILE
#define TH_GENERIC_FILE "TH/generic/THTensor.h"
#else
/* a la lua? dim, storageoffset, ... et les methodes ? */
#include <c10/core/TensorImpl.h>
#define THTensor at::TensorImpl
// These used to be distinct types; for some measure of backwards compatibility and documentation
// alias these to the single THTensor type.
#define THFloatTensor THTensor
#define THDoubleTensor THTensor
#define THHalfTensor THTensor
#define THByteTensor THTensor
#define THCharTensor THTensor
#define THShortTensor THTensor
#define THIntTensor THTensor
#define THLongTensor THTensor
#define THBoolTensor THTensor
#define THBFloat16Tensor THTensor
/**** access methods ****/
TH_API THStorage* THTensor_(storage)(const THTensor *self);
TH_API ptrdiff_t THTensor_(storageOffset)(const THTensor *self);
// See [NOTE: nDimension vs nDimensionLegacyNoScalars vs nDimensionLegacyAll]
TH_API int THTensor_(nDimension)(const THTensor *self);
TH_API int THTensor_(nDimensionLegacyNoScalars)(const THTensor *self);
TH_API int THTensor_(nDimensionLegacyAll)(const THTensor *self);
TH_API int64_t THTensor_(size)(const THTensor *self, int dim);
TH_API int64_t THTensor_(stride)(const THTensor *self, int dim);
TH_API scalar_t *THTensor_(data)(const THTensor *self);
/**** creation methods ****/
TH_API THTensor *THTensor_(new)(void);
TH_API THTensor *THTensor_(newWithTensor)(THTensor *tensor);
TH_API THTensor *THTensor_(newWithStorage1d)(THStorage *storage_, ptrdiff_t storageOffset_,
int64_t size0_, int64_t stride0_);
TH_API THTensor *THTensor_(newWithStorage2d)(THStorage *storage_, ptrdiff_t storageOffset_,
int64_t size0_, int64_t stride0_,
int64_t size1_, int64_t stride1_);
TH_API THTensor *THTensor_(newWithStorage3d)(THStorage *storage_, ptrdiff_t storageOffset_,
int64_t size0_, int64_t stride0_,
int64_t size1_, int64_t stride1_,
int64_t size2_, int64_t stride2_);
TH_API THTensor *THTensor_(newWithStorage4d)(THStorage *storage_, ptrdiff_t storageOffset_,
int64_t size0_, int64_t stride0_,
int64_t size1_, int64_t stride1_,
int64_t size2_, int64_t stride2_,
int64_t size3_, int64_t stride3_);
/* stride might be NULL */
TH_API THTensor *THTensor_(newWithSize1d)(int64_t size0_);
TH_API THTensor *THTensor_(newWithSize2d)(int64_t size0_, int64_t size1_);
TH_API THTensor *THTensor_(newWithSize3d)(int64_t size0_, int64_t size1_, int64_t size2_);
TH_API THTensor *THTensor_(newWithSize4d)(int64_t size0_, int64_t size1_, int64_t size2_, int64_t size3_);
TH_API THTensor *THTensor_(newClone)(THTensor *self);
TH_API THTensor *THTensor_(newContiguous)(THTensor *tensor);
TH_API THTensor *THTensor_(newSelect)(THTensor *tensor, int dimension_, int64_t sliceIndex_);
TH_API THTensor *THTensor_(newNarrow)(THTensor *tensor, int dimension_, int64_t firstIndex_, int64_t size_);
TH_API THTensor *THTensor_(newTranspose)(THTensor *tensor, int dimension1_, int dimension2_);
// resize* methods simply resize the storage. So they may not retain the current data at current indices.
// This is especially likely to happen when the tensor is not contiguous. In general, if you still need the
// values, unless you are doing some size and stride tricks, do not use resize*.
TH_API void THTensor_(resizeNd)(THTensor *tensor, int nDimension, const int64_t *size, const int64_t *stride);
TH_API void THTensor_(resizeAs)(THTensor *tensor, THTensor *src);
TH_API void THTensor_(resize0d)(THTensor *tensor);
TH_API void THTensor_(resize1d)(THTensor *tensor, int64_t size0_);
TH_API void THTensor_(resize2d)(THTensor *tensor, int64_t size0_, int64_t size1_);
TH_API void THTensor_(resize3d)(THTensor *tensor, int64_t size0_, int64_t size1_, int64_t size2_);
TH_API void THTensor_(resize4d)(THTensor *tensor, int64_t size0_, int64_t size1_, int64_t size2_, int64_t size3_);
TH_API void THTensor_(resize5d)(THTensor *tensor, int64_t size0_, int64_t size1_, int64_t size2_, int64_t size3_, int64_t size4_);
// Note: these are legacy resize functions that treat sizes as size->size() == 0 and size->data<int64_t>() as being 0-terminated.
TH_API void THTensor_(set)(THTensor *self, THTensor *src);
TH_API void THTensor_(setStorageNd)(THTensor *self, THStorage *storage_, ptrdiff_t storageOffset_, int nDimension, const int64_t *size, const int64_t *stride);
TH_API void THTensor_(setStorage1d)(THTensor *self, THStorage *storage_, ptrdiff_t storageOffset_,
int64_t size0_, int64_t stride0_);
TH_API void THTensor_(setStorage2d)(THTensor *self, THStorage *storage_, ptrdiff_t storageOffset_,
int64_t size0_, int64_t stride0_,
int64_t size1_, int64_t stride1_);
TH_API void THTensor_(setStorage3d)(THTensor *self, THStorage *storage_, ptrdiff_t storageOffset_,
int64_t size0_, int64_t stride0_,
int64_t size1_, int64_t stride1_,
int64_t size2_, int64_t stride2_);
TH_API void THTensor_(setStorage4d)(THTensor *self, THStorage *storage_, ptrdiff_t storageOffset_,
int64_t size0_, int64_t stride0_,
int64_t size1_, int64_t stride1_,
int64_t size2_, int64_t stride2_,
int64_t size3_, int64_t stride3_);
TH_API void THTensor_(narrow)(THTensor *self, THTensor *src, int dimension_, int64_t firstIndex_, int64_t size_);
TH_API void THTensor_(select)(THTensor *self, THTensor *src, int dimension_, int64_t sliceIndex_);
TH_API void THTensor_(transpose)(THTensor *self, THTensor *src, int dimension1_, int dimension2_);
TH_API int THTensor_(isTransposed)(const THTensor *self);
TH_API void THTensor_(unfold)(THTensor *self, THTensor *src, int dimension_, int64_t size_, int64_t step_);
TH_API void THTensor_(squeeze)(THTensor *self, THTensor *src);
TH_API void THTensor_(squeeze1d)(THTensor *self, THTensor *src, int dimension_);
TH_API void THTensor_(unsqueeze1d)(THTensor *self, THTensor *src, int dimension_);
TH_API int THTensor_(isContiguous)(const THTensor *self);
TH_API int THTensor_(isSameSizeAs)(const THTensor *self, const THTensor *src);
TH_API int THTensor_(isSetTo)(const THTensor *self, const THTensor *src);
TH_API ptrdiff_t THTensor_(nElement)(const THTensor *self);
TH_API void THTensor_(retain)(THTensor *self);
TH_API void THTensor_(free)(THTensor *self);
TH_API void THTensor_(freeCopyTo)(THTensor *self, THTensor *dst);
/* Slow access methods [check everything] */
TH_API void THTensor_(set0d)(THTensor *tensor, scalar_t value);
TH_API void THTensor_(set1d)(THTensor *tensor, int64_t x0, scalar_t value);
TH_API void THTensor_(set2d)(THTensor *tensor, int64_t x0, int64_t x1, scalar_t value);
TH_API void THTensor_(set3d)(THTensor *tensor, int64_t x0, int64_t x1, int64_t x2, scalar_t value);
TH_API void THTensor_(set4d)(THTensor *tensor, int64_t x0, int64_t x1, int64_t x2, int64_t x3, scalar_t value);
TH_API scalar_t THTensor_(get0d)(const THTensor *tensor);
TH_API scalar_t THTensor_(get1d)(const THTensor *tensor, int64_t x0);
TH_API scalar_t THTensor_(get2d)(const THTensor *tensor, int64_t x0, int64_t x1);
TH_API scalar_t THTensor_(get3d)(const THTensor *tensor, int64_t x0, int64_t x1, int64_t x2);
TH_API scalar_t THTensor_(get4d)(const THTensor *tensor, int64_t x0, int64_t x1, int64_t x2, int64_t x3);
/* Shape manipulation methods */
TH_API void THTensor_(cat)(THTensor *r_, THTensor *ta, THTensor *tb, int dimension);
TH_API void THTensor_(catArray)(THTensor *result, THTensor **inputs, int numInputs, int dimension);
/* Debug methods */
TH_API THDescBuff THTensor_(desc)(const THTensor *tensor);
TH_API THDescBuff THTensor_(sizeDesc)(const THTensor *tensor);
#endif
```
## The C++ constructors
- From the file `TH\generic\THTensor.hpp` in the C++ library (the same download as the C library):
```C++
#ifndef TH_GENERIC_FILE
#define TH_GENERIC_FILE "TH/generic/THTensor.hpp"
#else
// STOP!!! Thinking of including this header directly? Please
// read Note [TH abstraction violation]
// NOTE: functions exist here only to support dispatch via Declarations.cwrap. You probably don't want to put
// new functions in here, they should probably be un-genericized.
TH_CPP_API void THTensor_(setStorage)(THTensor *self, THStorage *storage_, ptrdiff_t storageOffset_,
at::IntArrayRef size_, at::IntArrayRef stride_);
/* strides.data() might be NULL */
TH_CPP_API THTensor *THTensor_(newWithStorage)(THStorage *storage, ptrdiff_t storageOffset,
at::IntArrayRef sizes, at::IntArrayRef strides);
TH_CPP_API void THTensor_(resize)(THTensor *self, at::IntArrayRef size, at::IntArrayRef stride);
TH_CPP_API THTensor *THTensor_(newWithSize)(at::IntArrayRef size, at::IntArrayRef stride);
#endif
```
## The TensorStorage class
```C++
#ifndef TH_GENERIC_FILE
#define TH_GENERIC_FILE "TH/generic/THStorage.h"
#else
#include <c10/core/Allocator.h>
#include <c10/core/StorageImpl.h>
/* on pourrait avoir un liste chainee
qui initialise math, lab structures (or more).
mouais -- complique.
Pb: THMapStorage is kind of a class
THLab_()... comment je m'en sors?
en template, faudrait que je les instancie toutes!!! oh boy!
Et comment je sais que c'est pour Cuda? Le type float est le meme dans les <>
au bout du compte, ca serait sur des pointeurs float/double... etc... = facile.
primitives??
*/
// Struct definition is moved to THStorage.hpp (so this file stays C compatible)
#define THStorage at::StorageImpl
// These used to be distinct types; for some measure of backwards compatibility and documentation
// alias these to the single THStorage type.
#define THFloatStorage THStorage
#define THDoubleStorage THStorage
#define THHalfStorage THStorage
#define THByteStorage THStorage
#define THCharStorage THStorage
#define THShortStorage THStorage
#define THIntStorage THStorage
#define THLongStorage THStorage
#define THBoolStorage THStorage
#define THBFloat16Storage THStorage
TH_API scalar_t* THStorage_(data)(const THStorage*);
TH_API ptrdiff_t THStorage_(size)(const THStorage*);
TH_API size_t THStorage_(elementSize)(void);
/* slow access -- checks everything */
TH_API void THStorage_(set)(THStorage*, ptrdiff_t, scalar_t);
TH_API scalar_t THStorage_(get)(const THStorage*, ptrdiff_t);
TH_API THStorage* THStorage_(new)(void);
TH_API THStorage* THStorage_(newWithSize)(ptrdiff_t size);
TH_API THStorage* THStorage_(newWithSize1)(scalar_t);
TH_API THStorage* THStorage_(newWithSize2)(scalar_t, scalar_t);
TH_API THStorage* THStorage_(newWithSize3)(scalar_t, scalar_t, scalar_t);
TH_API THStorage* THStorage_(newWithSize4)(scalar_t, scalar_t, scalar_t, scalar_t);
TH_API THStorage* THStorage_(newWithMapping)(const char *filename, ptrdiff_t size, int flags);
TH_API THStorage* THStorage_(newWithAllocator)(ptrdiff_t size,
c10::Allocator* allocator);
TH_API THStorage* THStorage_(newWithDataAndAllocator)(
at::DataPtr&& data, ptrdiff_t size, at::Allocator* allocator);
/* should not differ with API */
TH_API void THStorage_(setFlag)(THStorage *storage, const char flag);
TH_API void THStorage_(clearFlag)(THStorage *storage, const char flag);
TH_API void THStorage_(retain)(THStorage *storage);
TH_API void THStorage_(swap)(THStorage *storage1, THStorage *storage2);
/* might differ with other API (like CUDA) */
TH_API void THStorage_(free)(THStorage *storage);
TH_API void THStorage_(resize)(THStorage *storage, ptrdiff_t size);
TH_API void THStorage_(fill)(THStorage *storage, scalar_t value);
#endif
```
## The functions in Python
- The C and C++ functions are all wrapped on the Python side. Their interfaces are declared in the `__init__.pyi` files under Python's `site-packages` directory.
- In fact, the `Tensor` constructor shares the same parameter format as the `tensor`, `*_like`, and `new_*` functions.
## Officially recommended ways to create a Tensor
- the `torch.tensor` function
- the `torch.*_like` functions
- the `new_*` factory methods
- other special-purpose creation functions (random tensors, conversion from other formats, loading from files, and so on)
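A short sketch of these recommended routes (the values are illustrative only):

```python
import torch

a = torch.tensor([[1., 2.], [3., 4.]])  # torch.tensor: deep copy from existing data
b = torch.zeros_like(a)                 # *_like: new tensor with a's shape and dtype
c = a.new_full((3,), 7.0)               # new_*: new tensor with a's dtype and device
d = torch.rand(2, 2)                    # special-purpose: uniform random tensor
```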
## Examples of Tensor creation
### Creating with the `tensor` function
- The `tensor` function always performs a deep copy; its distinguishing feature is that it builds a Tensor directly from existing data. The supported data formats include:
    - list
    - tuple
    - NumPy ``ndarray``
    - scalar
    - other types.
```python
torch.tensor(data, dtype=None, device=None, requires_grad=False, pin_memory=False) → Tensor
```
```
import torch
print(help(torch.tensor))
```
1. list and tuple
```
import torch
t_list = torch.tensor([1, 2, 3])
t_tuple = torch.tensor(((4, 5, 6), (7, 8, 9)))
print(t_list, t_tuple)
```
2. scalar
```
t_scalar = torch.tensor(88)
print(t_scalar)
```
3. numpy.ndarray
```
import numpy as np
n_arr = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
t_ndarray = torch.tensor(n_arr)
print(t_ndarray)
```
4. Others
- Trying a pandas DataFrame: the data still has to be converted to NumPy first.
```
import pandas as pd
pd_data = pd.DataFrame([[1,2,3], [4,5,6]])
print(pd_data)
print(type(pd_data.values))
t_pandas = torch.tensor(pd_data.values)
print(t_pandas)
```
### Using the Tensor constructor
- Use it according to the C function definitions and the C++ classes.
#### Empty initialization
```C++
/* Empty init */
THTensor *THTensor_(new)(void)
{
return c10::make_intrusive<at::TensorImpl, at::UndefinedTensorImpl>(
c10::intrusive_ptr<at::StorageImpl>::reclaim(THStorage_(new)()),
at::CPUTensorId()
).release();
}
```
```
import torch
t1 = torch.Tensor()
print(t1)
```
#### Pointer-copy init
- A copy by reference:
```C++
/* Pointer-copy init */
THTensor *THTensor_(newWithTensor)(THTensor *tensor)
{
return at::native::alias(THTensor_wrap(tensor)).unsafeReleaseTensorImpl();
}
```
```
arr = np.array([[1, 2, 3, 4], [5, 6, 7, 8]], np.float32)  # remember to specify the dtype
t_arr = torch.tensor(arr)
t2 = torch.Tensor(t_arr)  # t_arr must be float32, the default Tensor dtype;
# the Tensor constructor cannot specify a dtype, while the tensor function can
print(t2)
# If the input is integer-typed, an integer Tensor type must be used
arr_i = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])  # integer literals default to the Long type
t_arr_i = torch.tensor(arr_i)
t2_i = torch.LongTensor(t_arr_i)  # t_arr_i must be int64 (Long), the LongTensor element type
print(t2_i)
```
#### Constructing from a Storage
- The official documentation only describes the `torch.Storage` class, but in fact every Tensor type has a corresponding Storage type, as Python's doc tools show:
```python
torch.storage._StorageBase(builtins.object)
|- BoolStorage(torch._C.BoolStorageBase, torch.storage._StorageBase)
|- ByteStorage(torch._C.ByteStorageBase, torch.storage._StorageBase)
|- CharStorage(torch._C.CharStorageBase, torch.storage._StorageBase)
|- DoubleStorage(torch._C.DoubleStorageBase, torch.storage._StorageBase)
|- FloatStorage(torch._C.FloatStorageBase, torch.storage._StorageBase)
|- IntStorage(torch._C.IntStorageBase, torch.storage._StorageBase)
|- LongStorage(torch._C.LongStorageBase, torch.storage._StorageBase)
|- ShortStorage(torch._C.ShortStorageBase, torch.storage._StorageBase)
```
- Detailed documentation for the Storage constructors is likewise unavailable in Python; it can be found in the C/C++ headers:
```C++
TH_API THStorage* THStorage_(new)(void);
TH_API THStorage* THStorage_(newWithSize)(ptrdiff_t size);
TH_API THStorage* THStorage_(newWithSize1)(scalar_t);
TH_API THStorage* THStorage_(newWithSize2)(scalar_t, scalar_t);
TH_API THStorage* THStorage_(newWithSize3)(scalar_t, scalar_t, scalar_t);
TH_API THStorage* THStorage_(newWithSize4)(scalar_t, scalar_t, scalar_t, scalar_t);
TH_API THStorage* THStorage_(newWithMapping)(const char *filename, ptrdiff_t size, int flags);
TH_API THStorage* THStorage_(newWithAllocator)(ptrdiff_t size,
c10::Allocator* allocator);
TH_API THStorage* THStorage_(newWithDataAndAllocator)(
at::DataPtr&& data, ptrdiff_t size, at::Allocator* allocator);
```
- The Tensor constructors that take a Storage argument:
```C++
TH_API THTensor *THTensor_(newWithStorage1d)(THStorage *storage_, ptrdiff_t storageOffset_,
int64_t size0_, int64_t stride0_);
TH_API THTensor *THTensor_(newWithStorage2d)(THStorage *storage_, ptrdiff_t storageOffset_,
int64_t size0_, int64_t stride0_,
int64_t size1_, int64_t stride1_);
TH_API THTensor *THTensor_(newWithStorage3d)(THStorage *storage_, ptrdiff_t storageOffset_,
int64_t size0_, int64_t stride0_,
int64_t size1_, int64_t stride1_,
int64_t size2_, int64_t stride2_);
TH_API THTensor *THTensor_(newWithStorage4d)(THStorage *storage_, ptrdiff_t storageOffset_,
int64_t size0_, int64_t stride0_,
int64_t size1_, int64_t stride1_,
int64_t size2_, int64_t stride2_,
int64_t size3_, int64_t stride3_);
```
```
s1 = torch.Storage(5)  # storage with 5 slots (the data is uninitialized raw memory;
# repeated runs print different garbage values because the allocated memory changes)
ts1 = torch.Tensor(s1)
print(s1, ts1)
```
- Below, a Storage is created from existing data:
```C++
TH_API THStorage* THStorage_(newWithDataAndAllocator)(
at::DataPtr&& data, ptrdiff_t size, at::Allocator* allocator);
```
```
s2 = torch.Storage([1,2,3,4])  # a Storage initialized from a sequence
ts2 = torch.Tensor(s2)
print(s2, ts2)
```
- Note:
    - Deliberately making a mistake prints the Python constructor signatures of Storage, which cannot be found in the documentation, as follows:
    - For example, change the statement above to `s2 = torch.Storage([1,2,3,4], 3)`, adding an extra argument:
```shell
TypeError: torch.FloatStorage constructor received an invalid combination of arguments - got (list, int), but expected one of:
* no arguments
* (int size)
* (Sequence data)
* (torch.FloatStorage view_source)
* (torch.FloatStorage view_source, int offset)
didn't match because some of the arguments have invalid types: (list, int)
* (torch.FloatStorage view_source, int offset, int size)
```
- 同样的可以通过错误得到Tensor的构造器说明:
```C++
TypeError: new() received an invalid combination of arguments - got (torch.FloatStorage, int, int), but expected one of:
|- * (torch.device device)
|- * (torch.Storage storage)
|- * (Tensor other)
|- * (tuple of ints size, torch.device device)
|- * (object data, torch.device device)
```
```
s3 = torch.Storage([1,2,3,4])  # a Storage initialized from a sequence
ts3 = torch.Tensor(s3, 2, 2)  # deliberately invalid: (Storage, int, int) raises the TypeError shown above
print(s3, ts3)
```
#### Constructing a Tensor of a given size
- `* (tuple of ints size, torch.device device)`
- For the "tuple of ints" form, simply pass the sizes as separate arguments; do not wrap them in parentheses, or they will be treated as data.
```
t4 = torch.Tensor(3, 2, 3)
print(t4)
```
#### Constructing a Tensor from data
```
t5 = torch.Tensor((3, 2, 3)) # automatically converted: a tuple here is treated as data
print(t5)
```
# Summary
## The Python constructor of Tensor is defined as follows
```python
Tensor.__init__(torch.device device)
Tensor.__init__(torch.Storage storage)
Tensor.__init__(Tensor other)
Tensor.__init__(tuple of ints size, torch.device device)
Tensor.__init__(object data, torch.device device)
```
## The Python constructor of Storage is defined as follows
```python
FloatStorage.__init__() no arguments
FloatStorage.__init__(int size)
FloatStorage.__init__(Sequence data)
FloatStorage.__init__(torch.FloatStorage view_source)
FloatStorage.__init__(torch.FloatStorage view_source, int offset)
FloatStorage.__init__(torch.FloatStorage view_source, int offset, int size)
```
----
- With these two constructor signatures, there is no problem creating Tensors. Why doesn't the official documentation describe them in detail? Presumably because constructing Tensors this way is verbose and not recommended. Still, working through it with ordinary programming reasoning gives a better understanding of Torch.
```
%load_ext watermark
%watermark -p torch,pytorch_lightning,torchvision,torchmetrics,matplotlib
```
<a href="https://pytorch.org"><img src="https://raw.githubusercontent.com/pytorch/pytorch/master/docs/source/_static/img/pytorch-logo-dark.svg" width="90"/></a> <a href="https://www.pytorchlightning.ai"><img src="https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/docs/source/_static/images/logo.svg" width="150"/></a>
# VGG16 Smile Classifier with BCEwithLogitsLoss
## General settings and hyperparameters
```
BATCH_SIZE = 256
NUM_EPOCHS = 4
LEARNING_RATE = 0.001
NUM_WORKERS = 4
```
## Implementing a Neural Network using PyTorch Lightning's `LightningModule`
- For brevity, we load the modules from [`./model.py`](./model.py).
```
from model import PyTorchVGG16Logits
from model import LightningModelForBCE
```
## Setting up the dataset
- The CelebA dataset is available at https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html.
```
from dataset import get_dataloaders_celeba
from torchvision import transforms
custom_transforms = transforms.Compose([
transforms.CenterCrop((160, 160)),
transforms.Resize([128, 128]),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
train_loader, valid_loader, test_loader = get_dataloaders_celeba(
batch_size=BATCH_SIZE,
train_transforms=custom_transforms,
test_transforms=custom_transforms,
download=False,
num_workers=NUM_WORKERS)
```
### A quick visual check
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import torchvision
for images, labels in train_loader:
break
plt.figure(figsize=(8, 8))
plt.axis("off")
plt.title("Training Images")
plt.imshow(np.transpose(torchvision.utils.make_grid(
images[:64],
padding=2,
normalize=True),
(1, 2, 0)))
plt.show()
```
## Training the model using the PyTorch Lightning Trainer class
```
import torch
pytorch_model = PyTorchVGG16Logits(num_outputs=1)
loss_fn = torch.nn.BCEWithLogitsLoss()
lightning_model = LightningModelForBCE(
model=pytorch_model,
learning_rate=LEARNING_RATE,
use_logits=True,
loss_fn=loss_fn)
```
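`BCEWithLogitsLoss` fuses the sigmoid and the binary cross-entropy into a single numerically stable expression, `max(z, 0) - z*y + log(1 + exp(-|z|))`; a stdlib sketch comparing it with the naive sigmoid-then-BCE form:

```python
import math

def bce_naive(z, y):
    p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def bce_stable(z, y):
    # max(z, 0) - z*y + log(1 + exp(-|z|)): never exponentiates a large number
    return max(z, 0.0) - z * y + math.log1p(math.exp(-abs(z)))

# Both forms agree for moderate logits
for z, y in [(2.0, 1.0), (-3.0, 0.0), (0.5, 1.0)]:
    assert abs(bce_naive(z, y) - bce_stable(z, y)) < 1e-9

print(bce_stable(-100.0, 1.0))  # ~100.0; the naive form overflows in exp(100) here
```

This is why the model returns raw logits and the loss applies the sigmoid internally, rather than putting a `Sigmoid` layer in the network and using plain `BCELoss`.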
- Now it's time to train our model:
```
import time
import pytorch_lightning as pl
trainer = pl.Trainer(
max_epochs=NUM_EPOCHS,
deterministic=True,
accelerator="auto", # Uses GPUs or TPUs if available
devices="auto", # Uses all available GPUs/TPUs if applicable
)
start_time = time.time()
trainer.fit(model=lightning_model, train_dataloaders=train_loader)
runtime = (time.time() - start_time)/60
print(f"Training took {runtime:.2f} min in total.")
```
## Evaluating the model
```
trainer.test(model=lightning_model, dataloaders=test_loader)
```
| github_jupyter |
# Classifying Fashion-MNIST
Now it's your turn to build and train a neural network. You'll be using the [Fashion-MNIST dataset](https://github.com/zalandoresearch/fashion-mnist), a drop-in replacement for the MNIST dataset. MNIST is actually quite trivial for neural networks; you can easily achieve better than 97% accuracy. Fashion-MNIST is a set of 28x28 greyscale images of clothes. It's more complex than MNIST, so it's a better representation of the actual performance of your network, and a better representation of datasets you'll use in the real world.
<img src='assets/fashion-mnist-sprite.png' width=500px>
In this notebook, you'll build your own neural network. For the most part, you could just copy and paste the code from Part 3, but you wouldn't be learning. It's important for you to write the code yourself and get it to work. Feel free to consult the previous notebook though as you work through this.
First off, let's load the dataset through torchvision.
```
import torch
from torchvision import datasets, transforms
import helper
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,))])  # single-channel images
# Download and load the training data
trainset = datasets.FashionMNIST('F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
```
Here we can see one of the images.
```
image, label = next(iter(trainloader))
helper.imshow(image[0,:]);
```
With the data loaded, it's time to import the necessary packages.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import numpy as np
import time
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms
import helper
```
## Building the network
Here you should define your network. As with MNIST, each image is 28x28 which is a total of 784 pixels, and there are 10 classes. You should include at least one hidden layer. We suggest you use ReLU activations for the layers and to return the logits from the forward pass. It's up to you how many layers you add and the size of those layers.
```
# Hyperparameters for our network
input_size = 784
hidden_sizes = [256, 128, 64]
output_size = 10
from collections import OrderedDict
# Build a feed-forward network
model = nn.Sequential(OrderedDict([
('fc1', nn.Linear(input_size, hidden_sizes[0])),
('relu1', nn.ReLU()),
('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
('relu2', nn.ReLU()),
('fc3', nn.Linear(hidden_sizes[1], hidden_sizes[2])),
                      ('relu3', nn.ReLU()),
('logits', nn.Linear(hidden_sizes[2], output_size))]))
```
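A sanity check on model size: each `nn.Linear(n_in, n_out)` holds `n_in * n_out` weights plus `n_out` biases, so the parameter count of the network above can be tallied by hand:

```python
# Layer sizes of the feed-forward network: 784 -> 256 -> 128 -> 64 -> 10
sizes = [784, 256, 128, 64, 10]

# weights (n_in * n_out) plus biases (n_out) for each consecutive pair
total = sum(n_in * n_out + n_out for n_in, n_out in zip(sizes, sizes[1:]))
print(total)  # 242762 trainable parameters
```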
## Train the network
Now you should create your network and train it. First you'll want to define [the criterion](http://pytorch.org/docs/master/nn.html#loss-functions) (something like `nn.CrossEntropyLoss`) and [the optimizer](http://pytorch.org/docs/master/optim.html) (typically `optim.SGD` or `optim.Adam`).
Then write the training code. Remember the training pass is a fairly straightforward process:
* Make a forward pass through the network to get the logits
* Use the logits to calculate the loss
* Perform a backward pass through the network with `loss.backward()` to calculate the gradients
* Take a step with the optimizer to update the weights
By adjusting the hyperparameters (hidden units, learning rate, etc), you should be able to get the training loss below 0.4.
```
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)
epochs = 4
print_every = 50
steps = 0
for e in range(epochs):
running_loss = 0
for images, labels in iter(trainloader):
steps += 1
# Flatten MNIST images into a 784 long vector
        images = images.view(images.size(0), -1)
optimizer.zero_grad()
# Forward and backward passes
        output = model(images)
loss = criterion(output, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
if steps % print_every == 0:
print("Epoch: {}/{}... ".format(e+1, epochs),
"Loss: {:.4f}".format(running_loss/print_every))
running_loss = 0
# Test out your network!
dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]
# Convert 2D image to 1D vector
img = img.view(1, 784)
# Turn off gradients to speed up this part
with torch.no_grad():
    logits = model(img)
# TODO: Calculate the class probabilities (softmax) for img
ps = F.softmax(logits, dim=1)
# Plot the image and probabilities
helper.view_classify(img.resize_(1, 28, 28), ps, version='Fashion')
```
Now that your network is trained, you'll want to save it to disk so you can load it later instead of training it again. Obviously, it's impractical to train a network every time you need one. In practice, you'll train it once, save the model, then reload it for further training or making predictions. In the next part, I'll show you how to save and load trained models.
| github_jupyter |
# Indicators
```
import vectorbt as vbt
import numpy as np
import pandas as pd
from datetime import datetime, timedelta
from numba import njit
import itertools
import talib
import ta
# Disable caching for performance testing
vbt.settings.caching['enabled'] = False
close = pd.DataFrame({
'a': [1., 2., 3., 4., 5.],
'b': [5., 4., 3., 2., 1.],
'c': [1., 2., 3., 2., 1.]
}, index=pd.DatetimeIndex([
datetime(2018, 1, 1),
datetime(2018, 1, 2),
datetime(2018, 1, 3),
datetime(2018, 1, 4),
datetime(2018, 1, 5)
]))
np.random.seed(42)
high = close * np.random.uniform(1, 1.1, size=close.shape)
low = close * np.random.uniform(0.9, 1, size=close.shape)
volume = close * 0 + np.random.randint(10, 100, size=close.shape).astype(float)
big_close = pd.DataFrame(np.random.randint(10, size=(1000, 1000)).astype(float))
big_close.index = [datetime(2018, 1, 1) + timedelta(days=i) for i in range(1000)]
big_high = big_close * np.random.uniform(1, 1.1, size=big_close.shape)
big_low = big_close * np.random.uniform(0.9, 1, size=big_close.shape)
big_volume = big_close * 0 + np.random.randint(10, 100, size=big_close.shape).astype(float)
close_ts = pd.Series([1, 2, 3, 4, 3, 2, 1], index=pd.DatetimeIndex([
datetime(2018, 1, 1),
datetime(2018, 1, 2),
datetime(2018, 1, 3),
datetime(2018, 1, 4),
datetime(2018, 1, 5),
datetime(2018, 1, 6),
datetime(2018, 1, 7)
]))
high_ts = close_ts * 1.1
low_ts = close_ts * 0.9
volume_ts = pd.Series([4, 3, 2, 1, 2, 3, 4], index=close_ts.index)
```
## IndicatorFactory
```
def apply_func(i, ts, p, a, b=100):
return ts * p[i] + a + b
@njit
def apply_func_nb(i, ts, p, a, b):
return ts * p[i] + a + b # numba doesn't support **kwargs
# Custom function can be anything that takes time series, params and other arguments, and returns outputs
def custom_func(ts, p, *args, **kwargs):
return vbt.base.combine_fns.apply_and_concat_one(len(p), apply_func, ts, p, *args, **kwargs)
@njit
def custom_func_nb(ts, p, *args):
return vbt.base.combine_fns.apply_and_concat_one_nb(len(p), apply_func_nb, ts, p, *args)
F = vbt.IndicatorFactory(input_names=['ts'], param_names=['p'], output_names=['out'])
print(F.from_custom_func(custom_func, var_args=True)
.run(close, [0, 1], 10, b=100).out)
print(F.from_custom_func(custom_func_nb, var_args=True)
.run(close, [0, 1], 10, 100).out)
# Apply function is performed on each parameter individually, and each output is then stacked for you
# Apply functions are less customizable than custom functions, but are simpler to write
def apply_func(ts, p, a, b=100):
return ts * p + a + b
@njit
def apply_func_nb(ts, p, a, b):
return ts * p + a + b # numba doesn't support **kwargs
F = vbt.IndicatorFactory(input_names=['ts'], param_names=['p'], output_names=['out'])
print(F.from_apply_func(apply_func, var_args=True)
.run(close, [0, 1], 10, b=100).out)
print(F.from_apply_func(apply_func_nb, var_args=True)
.run(close, [0, 1], 10, 100).out)
# test *args
F = vbt.IndicatorFactory(input_names=['ts'], param_names=['p'], output_names=['out'])
print(F.from_apply_func(lambda ts, p, a: ts * p + a, var_args=True)
.run(close, [0, 1, 2], 3).out)
print(F.from_apply_func(njit(lambda ts, p, a: ts * p + a), var_args=True)
.run(close, [0, 1, 2], 3).out)
# test **kwargs
# Numba doesn't support kwargs out of the box
F = vbt.IndicatorFactory(input_names=['ts'], param_names=['p'], output_names=['out'])
print(F.from_apply_func(lambda ts, p, a=1: ts * p + a)
.run(close, [0, 1, 2], a=3).out)
# test no inputs
F = vbt.IndicatorFactory(param_names=['p'], output_names=['out'])
print(F.from_apply_func(lambda p: np.full((3, 3), p))
.run([0, 1]).out)
print(F.from_apply_func(njit(lambda p: np.full((3, 3), p)))
.run([0, 1]).out)
# test no inputs with input_shape, input_index and input_columns
F = vbt.IndicatorFactory(param_names=['p'], output_names=['out'])
print(F.from_apply_func(lambda input_shape, p: np.full(input_shape, p), require_input_shape=True)
.run((5,), 0).out)
print(F.from_apply_func(njit(lambda input_shape, p: np.full(input_shape, p)), require_input_shape=True)
.run((5,), 0).out)
print(F.from_apply_func(lambda input_shape, p: np.full(input_shape, p), require_input_shape=True)
.run((5,), [0, 1]).out)
print(F.from_apply_func(njit(lambda input_shape, p: np.full(input_shape, p)), require_input_shape=True)
.run((5,), [0, 1]).out)
print(F.from_apply_func(lambda input_shape, p: np.full(input_shape, p), require_input_shape=True)
.run((5, 3), [0, 1], input_index=close.index, input_columns=close.columns).out)
print(F.from_apply_func(njit(lambda input_shape, p: np.full(input_shape, p)), require_input_shape=True)
.run((5, 3), [0, 1], input_index=close.index, input_columns=close.columns).out)
# test multiple inputs
F = vbt.IndicatorFactory(input_names=['ts1', 'ts2'], param_names=['p'], output_names=['out'])
print(F.from_apply_func(lambda ts1, ts2, p: ts1 * ts2 * p)
.run(close, high, [0, 1]).out)
print(F.from_apply_func(njit(lambda ts1, ts2, p: ts1 * ts2 * p))
.run(close, high, [0, 1]).out)
# test no params
F = vbt.IndicatorFactory(input_names=['ts'], output_names=['out'])
print(F.from_apply_func(lambda ts: ts)
.run(close).out)
print(F.from_apply_func(njit(lambda ts: ts))
.run(close).out)
# test no inputs and no params
F = vbt.IndicatorFactory(output_names=['out'])
print(F.from_apply_func(lambda: np.full((3, 3), 1))
.run().out)
print(F.from_apply_func(njit(lambda: np.full((3, 3), 1)))
.run().out)
# test multiple params
F = vbt.IndicatorFactory(input_names=['ts'], param_names=['p1', 'p2'], output_names=['out'])
print(F.from_apply_func(lambda ts, p1, p2: ts * (p1 + p2))
.run(close, np.asarray([0, 1]), np.asarray([2, 3])).out)
print(F.from_apply_func(njit(lambda ts, p1, p2: ts * (p1 + p2)))
.run(close, np.asarray([0, 1]), np.asarray([2, 3])).out)
# test param_settings array_like
F = vbt.IndicatorFactory(input_names=['ts'], param_names=['p1', 'p2'], output_names=['out'])
print(F.from_apply_func(lambda ts, p1, p2: ts * (p1 + p2),
param_settings={'p1': {'array_like': True}})
.run(close, np.asarray([0, 1, 2]), np.asarray([2, 3])).out)
print(F.from_apply_func(njit(lambda ts, p1, p2: ts * (p1 + p2)),
param_settings={'p1': {'array_like': True}})
.run(close, np.asarray([0, 1, 2]), np.asarray([2, 3])).out)
# test param_settings bc_to_input
F = vbt.IndicatorFactory(input_names=['ts'], param_names=['p1', 'p2'], output_names=['out'])
print(F.from_apply_func(lambda ts, p1, p2: ts * (p1 + p2),
param_settings={'p1': {'array_like': True, 'bc_to_input': True}})
.run(close, np.asarray([0, 1, 2]), np.asarray([2, 3])).out)
print(F.from_apply_func(njit(lambda ts, p1, p2: ts * (p1 + p2)),
param_settings={'p1': {'array_like': True, 'bc_to_input': True}})
.run(close, np.asarray([0, 1, 2]), np.asarray([2, 3])).out)
# test param product
F = vbt.IndicatorFactory(input_names=['ts'], param_names=['p1', 'p2'], output_names=['out'])
print(F.from_apply_func(lambda ts, p1, p2: ts * (p1 + p2))
.run(close, [0, 1], [2, 3], param_product=True).out)
print(F.from_apply_func(njit(lambda ts, p1, p2: ts * (p1 + p2)))
.run(close, [0, 1], [2, 3], param_product=True).out)
# test default params
F = vbt.IndicatorFactory(input_names=['ts'], param_names=['p1', 'p2'], output_names=['out'])
print(F.from_apply_func(lambda ts, p1, p2: ts * (p1 + p2), p2=2)
.run(close, [0, 1]).out)
print(F.from_apply_func(njit(lambda ts, p1, p2: ts * (p1 + p2)), p2=2)
.run(close, [0, 1]).out)
# test hide_params
F = vbt.IndicatorFactory(input_names=['ts'], param_names=['p1', 'p2'], output_names=['out'])
print(F.from_apply_func(lambda ts, p1, p2: ts * (p1 + p2), hide_params=['p2'])
.run(close, [0, 1], 2).out)
print(F.from_apply_func(njit(lambda ts, p1, p2: ts * (p1 + p2)), hide_params=['p2'])
.run(close, [0, 1], 2).out)
# test hide_default
F = vbt.IndicatorFactory(input_names=['ts'], param_names=['p1', 'p2'], output_names=['out'])
print(F.from_apply_func(lambda ts, p1, p2: ts * (p1 + p2), p2=2)
.run(close, [0, 1], hide_default=False).out)
print(F.from_apply_func(njit(lambda ts, p1, p2: ts * (p1 + p2)), p2=2)
.run(close, [0, 1], hide_default=False).out)
print(F.from_apply_func(lambda ts, p1, p2: ts * (p1 + p2), p2=2)
.run(close, [0, 1], hide_default=True).out)
print(F.from_apply_func(njit(lambda ts, p1, p2: ts * (p1 + p2)), p2=2)
.run(close, [0, 1], hide_default=True).out)
# test multiple outputs
F = vbt.IndicatorFactory(input_names=['ts'], param_names=['p'], output_names=['o1', 'o2'])
print(F.from_apply_func(lambda ts, p: (ts * p, ts * p ** 2))
.run(close, [0, 1]).o1)
print(F.from_apply_func(lambda ts, p: (ts * p, ts * p ** 2))
.run(close, [0, 1]).o2)
print(F.from_apply_func(njit(lambda ts, p: (ts * p, ts * p ** 2)))
.run(close, [0, 1]).o1)
print(F.from_apply_func(njit(lambda ts, p: (ts * p, ts * p ** 2)))
.run(close, [0, 1]).o2)
# test in-place outputs
def apply_func(ts, ts_out, p):
ts_out[:, 0] = p
return ts * p
F = vbt.IndicatorFactory(input_names=['ts'], param_names=['p'], output_names=['out'], in_output_names=['ts_out'])
print(F.from_apply_func(apply_func)
.run(close, [0, 1]).ts_out)
print(F.from_apply_func(njit(apply_func))
.run(close, [0, 1]).ts_out)
print(F.from_apply_func(apply_func, in_output_settings={'ts_out': {'dtype': np.int_}})
.run(close, [0, 1]).ts_out)
print(F.from_apply_func(njit(apply_func), in_output_settings={'ts_out': {'dtype': np.int_}})
.run(close, [0, 1]).ts_out)
print(F.from_apply_func(apply_func, ts_out=-1)
.run(close, [0, 1]).ts_out)
print(F.from_apply_func(njit(apply_func), ts_out=-1)
.run(close, [0, 1]).ts_out)
# test kwargs_to_args
F = vbt.IndicatorFactory(input_names=['ts'], param_names=['p'], output_names=['out'])
print(F.from_apply_func(lambda ts, p, a, kw: ts * p + a + kw, kwargs_to_args=['kw'], var_args=True)
.run(close, [0, 1, 2], 3, kw=10).out)
print(F.from_apply_func(njit(lambda ts, p, a, kw: ts * p + a + kw), kwargs_to_args=['kw'], var_args=True)
.run(close, [0, 1, 2], 3, kw=10).out)
# test caching func
F = vbt.IndicatorFactory(input_names=['ts'], param_names=['p'], output_names=['out'])
print(F.from_apply_func(lambda ts, param, c: ts * param + c, cache_func=lambda ts, params: 100)
.run(close, [0, 1]).out)
print(F.from_apply_func(njit(lambda ts, param, c: ts * param + c), cache_func=njit(lambda ts, params: 100))
.run(close, [0, 1]).out)
# test run_combs
F = vbt.IndicatorFactory(input_names=['ts'], param_names=['p1', 'p2'], output_names=['out'])
print(F.from_apply_func(lambda ts, p1, p2: ts * (p1 + p2))
.run_combs(close, [0, 1, 2], [3, 4, 5], short_names=['i1', 'i2'])[0].out)
print(F.from_apply_func(lambda ts, p1, p2: ts * (p1 + p2))
.run_combs(close, [0, 1, 2], [3, 4, 5], short_names=['i1', 'i2'])[1].out)
print(F.from_apply_func(njit(lambda ts, p1, p2: ts * (p1 + p2)))
.run_combs(close, [0, 1, 2], [3, 4, 5], short_names=['i1', 'i2'])[0].out)
print(F.from_apply_func(njit(lambda ts, p1, p2: ts * (p1 + p2)))
.run_combs(close, [0, 1, 2], [3, 4, 5], short_names=['i1', 'i2'])[1].out)
from collections import namedtuple
TestEnum = namedtuple('TestEnum', ['Hello', 'World'])(0, 1)
# test attr_settings
F = vbt.IndicatorFactory(
input_names=['ts'], output_names=['o1', 'o2'], in_output_names=['ts_out'],
attr_settings={
'ts': {'dtype': None},
'o1': {'dtype': np.float_},
'o2': {'dtype': np.bool_},
'ts_out': {'dtype': TestEnum}
}
)
dir(F.from_apply_func(lambda ts, ts_out: (ts + ts_out, ts + ts_out)).run(close))
CustomInd = vbt.IndicatorFactory(
input_names=['ts1', 'ts2'],
param_names=['p1', 'p2'],
output_names=['o1', 'o2']
).from_apply_func(lambda ts1, ts2, p1, p2: (ts1 * p1, ts2 * p2))
dir(CustomInd) # you can list here all of the available tools
custom_ind = CustomInd.run(close, high, [1, 2], [3, 4])
big_custom_ind = CustomInd.run(big_close, big_high, [1, 2], [3, 4])
print(custom_ind.wrapper.index) # subclasses ArrayWrapper
print(custom_ind.wrapper.columns)
print(custom_ind.wrapper.ndim)
print(custom_ind.wrapper.shape)
print(custom_ind.wrapper.freq)
# not changed during indexing
print(custom_ind.short_name)
print(custom_ind.level_names)
print(custom_ind.input_names)
print(custom_ind.param_names)
print(custom_ind.output_names)
print(custom_ind.output_flags)
print(custom_ind.p1_array)
print(custom_ind.p2_array)
```
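The apply-and-concat pattern the factory relies on — run the apply function once per parameter, then stack the per-parameter outputs side by side — can be sketched in plain Python (vectorbt's real `apply_and_concat_one`/`_nb` variants operate on NumPy arrays and concatenate along columns, but the control flow is the same):

```python
# Simplified sketch: one output block per parameter index.
def apply_and_concat(n, apply_func, *args):
    return [apply_func(i, *args) for i in range(n)]

def apply_func(i, ts, params, a, b=100):
    # mirrors ts * p[i] + a + b from the examples above, on a plain list
    return [x * params[i] + a + b for x in ts]

out = apply_and_concat(2, apply_func, [1.0, 2.0, 3.0], [0, 1], 10)
print(out)  # [[110.0, 110.0, 110.0], [111.0, 112.0, 113.0]]
```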
### Pandas indexing
```
print(custom_ind._ts1)
print(custom_ind.ts1)
print(custom_ind.ts1.iloc[:, 0])
print(custom_ind.iloc[:, 0].ts1)
print(custom_ind.ts1.iloc[:, [0]])
print(custom_ind.iloc[:, [0]].ts1)
print(custom_ind.ts1.iloc[:2, :])
print(custom_ind.iloc[:2, :].ts1)
print(custom_ind.o1.iloc[:, 0])
%timeit big_custom_ind.o1.iloc[:, 0] # benchmark, 1 column
print(custom_ind.iloc[:, 0].o1) # performed on the object itself
%timeit big_custom_ind.iloc[:, 0] # slower since it forwards the operation to each dataframe
print(custom_ind.o1.iloc[:, np.arange(3)])
%timeit big_custom_ind.o1.iloc[:, np.arange(1000)] # 1000 columns
print(custom_ind.iloc[:, np.arange(3)].o1)
%timeit big_custom_ind.iloc[:, np.arange(1000)]
print(custom_ind.o1.loc[:, (1, 3, 'a')])
%timeit big_custom_ind.o1.loc[:, (1, 3, 0)] # 1 column
print(custom_ind.loc[:, (1, 3, 'a')].o1)
%timeit big_custom_ind.loc[:, (1, 3, 0)]
print(custom_ind.o1.loc[:, (1, 3)])
%timeit big_custom_ind.o1.loc[:, 1] # 1000 columns
print(custom_ind.loc[:, (1, 3)].o1)
%timeit big_custom_ind.loc[:, 1]
print(custom_ind.o1.xs(1, axis=1, level=0))
%timeit big_custom_ind.o1.xs(1, axis=1, level=0) # 1000 columns
print(custom_ind.xs(1, axis=1, level=0).o1)
%timeit big_custom_ind.xs(1, axis=1, level=0)
```
### Parameter indexing
```
# Indexing by parameter
print(custom_ind._p1_mapper)
print(custom_ind.p1_loc[2].o1)
print(custom_ind.p1_loc[1:2].o1)
print(custom_ind.p1_loc[[1, 1, 1]].o1)
%timeit big_custom_ind.p1_loc[1] # 1000 columns
%timeit big_custom_ind.p1_loc[np.full(10, 1)] # 10000 columns
print(custom_ind._tuple_mapper)
print(custom_ind.tuple_loc[(1, 3)].o1)
print(custom_ind.tuple_loc[(1, 3):(2, 4)].o1)
%timeit big_custom_ind.tuple_loc[(1, 3)]
%timeit big_custom_ind.tuple_loc[[(1, 3)] * 10]
```
### Comparison methods
```
print(custom_ind.o1 > 2)
%timeit big_custom_ind.o1.values > 2 # don't even try pandas
print(custom_ind.o1_above(2))
%timeit big_custom_ind.o1_above(2) # slower than numpy because of constructing dataframe
print(pd.concat((custom_ind.o1 > 2, custom_ind.o1 > 3), axis=1))
%timeit np.hstack((big_custom_ind.o1.values > 2, big_custom_ind.o1.values > 3))
print(custom_ind.o1_above([2, 3]))
%timeit big_custom_ind.o1_above([2, 3])
```
## TA-Lib
```
ts = pd.DataFrame({
'a': [1, 2, 3, 4, np.nan],
'b': [np.nan, 4, 3, 2, 1],
'c': [1, 2, np.nan, 2, 1]
}, index=pd.DatetimeIndex([
datetime(2018, 1, 1),
datetime(2018, 1, 2),
datetime(2018, 1, 3),
datetime(2018, 1, 4),
datetime(2018, 1, 5)
]))
SMA = vbt.IndicatorFactory.from_talib('SMA')
print(SMA.run(close['a'], 2).real)
print(SMA.run(close, 2).real)
print(SMA.run(close, [2, 3]).real)
%timeit SMA.run(big_close)
%timeit SMA.run(big_close, np.arange(2, 10))
%timeit SMA.run(big_close, np.full(10, 2))
%timeit SMA.run(big_close, np.full(10, 2), speedup=True)
comb = itertools.combinations(np.arange(2, 20), 2)
fast_windows, slow_windows = np.asarray(list(comb)).transpose()
print(fast_windows)
print(slow_windows)
%timeit SMA.run(big_close, fast_windows), SMA.run(big_close, slow_windows) # individual caching
%timeit SMA.run_combs(big_close, np.arange(2, 20)) # mutual caching
%timeit vbt.MA.run(big_close, fast_windows), vbt.MA.run(big_close, slow_windows) # the same using Numba
%timeit vbt.MA.run_combs(big_close, np.arange(2, 20))
sma1, sma2 = SMA.run_combs(close, [2, 3, 4])
print(sma1.real_above(sma2, crossover=True))
print(sma1.real_below(sma2, crossover=True))
dir(vbt.IndicatorFactory.from_talib('BBANDS'))
```
## MA
```
print(close.rolling(2).mean())
print(close.ewm(span=3, min_periods=3).mean())
print(vbt.IndicatorFactory.from_talib('SMA').run(close, timeperiod=2).real)
print(vbt.MA.run(close, [2, 3], ewm=[False, True]).ma) # adjust=False
# One window
%timeit big_close.rolling(2).mean() # pandas
%timeit vbt.IndicatorFactory.from_talib('SMA').run(big_close, timeperiod=2)
%timeit vbt.MA.run(big_close, 2, return_cache=True) # cache only
%timeit vbt.MA.run(big_close, 2) # with pre+postprocessing and still beats pandas
print(vbt.MA.run(big_close, 2).ma.shape)
# Multiple windows
%timeit pd.concat([big_close.rolling(i).mean() for i in np.arange(2, 10)])
%timeit vbt.IndicatorFactory.from_talib('SMA').run(big_close, np.arange(2, 10))
%timeit vbt.MA.run(big_close, np.arange(2, 10))
%timeit vbt.MA.run(big_close, np.arange(2, 10), speedup=True)
%timeit vbt.MA.run(big_close, np.arange(2, 10), return_cache=True) # cache only
cache = vbt.MA.run(big_close, np.arange(2, 10), return_cache=True)
%timeit vbt.MA.run(big_close, np.arange(2, 10), use_cache=cache) # using cache
print(vbt.MA.run(big_close, np.arange(2, 10)).ma.shape)
# One window repeated
%timeit pd.concat([big_close.rolling(i).mean() for i in np.full(10, 2)])
%timeit vbt.IndicatorFactory.from_talib('SMA').run(big_close, np.full(10, 2))
%timeit vbt.MA.run(big_close, np.full(10, 2))
%timeit vbt.MA.run(big_close, np.full(10, 2), speedup=True) # slower for large inputs
%timeit vbt.MA.run(big_close, np.full(10, 2), return_cache=True)
print(vbt.MA.run(big_close, np.full(10, 2)).ma.shape)
%timeit pd.concat([big_close.iloc[:, :10].rolling(i).mean() for i in np.full(100, 2)])
%timeit vbt.IndicatorFactory.from_talib('SMA').run(big_close.iloc[:, :10], np.full(100, 2))
%timeit vbt.MA.run(big_close.iloc[:, :10], np.full(100, 2))
%timeit vbt.MA.run(big_close.iloc[:, :10], np.full(100, 2), speedup=True) # faster for smaller inputs
%timeit vbt.MA.run(big_close.iloc[:, :10], np.full(100, 2), return_cache=True)
print(vbt.MA.run(big_close.iloc[:, :10], np.full(100, 2)).ma.shape)
ma = vbt.MA.run(close, [2, 3], ewm=[False, True])
print(ma.ma)
ma[(2, False, 'a')].plot().show_png()
```
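The two averages compared above reduce to a rolling mean and, for `ewm(span=n, adjust=False)`, the recurrence `ema[t] = alpha * x[t] + (1 - alpha) * ema[t-1]` with `alpha = 2 / (span + 1)`; a stdlib sketch of both (vectorbt's versions are Numba-compiled array kernels):

```python
def sma(xs, window):
    # rolling mean; None where the window is not yet full (pandas emits NaN)
    return [sum(xs[i - window + 1:i + 1]) / window if i >= window - 1 else None
            for i in range(len(xs))]

def ema(xs, span):
    alpha = 2.0 / (span + 1)
    out, prev = [], None
    for x in xs:
        prev = x if prev is None else alpha * x + (1 - alpha) * prev
        out.append(prev)
    return out

print(sma([1., 2., 3., 4., 5.], 2))  # [None, 1.5, 2.5, 3.5, 4.5]
print(ema([1., 2., 3., 4., 5.], 3))  # [1.0, 1.5, 2.25, 3.125, 4.0625]
```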
## MSTD
```
print(close.rolling(2).std(ddof=0))
print(close.ewm(span=3, min_periods=3).std(ddof=0))
print(vbt.IndicatorFactory.from_talib('STDDEV').run(close, timeperiod=2).real)
print(vbt.MSTD.run(close, [2, 3], ewm=[False, True]).mstd) # adjust=False, ddof=0
# One window
%timeit big_close.rolling(2).std()
%timeit vbt.IndicatorFactory.from_talib('STDDEV').run(big_close, timeperiod=2)
%timeit vbt.MSTD.run(big_close, 2)
print(vbt.MSTD.run(big_close, 2).mstd.shape)
# Multiple windows
%timeit pd.concat([big_close.rolling(i).std() for i in np.arange(2, 10)])
%timeit vbt.IndicatorFactory.from_talib('STDDEV').run(big_close, timeperiod=np.arange(2, 10))
%timeit vbt.MSTD.run(big_close, np.arange(2, 10))
print(vbt.MSTD.run(big_close, np.arange(2, 10)).mstd.shape)
# One window repeated
%timeit vbt.IndicatorFactory.from_talib('STDDEV').run(big_close, timeperiod=np.full(10, 2))
%timeit vbt.MSTD.run(big_close, window=np.full(10, 2))
print(vbt.MSTD.run(big_close, window=np.full(10, 2)).mstd.shape)
mstd = vbt.MSTD.run(close, [2, 3], [False, True])
print(mstd.mstd)
mstd[(2, False, 'a')].plot().show_png()
```
## BBANDS
```
print(ta.volatility.BollingerBands(close=close['a'], window=2, window_dev=2).bollinger_hband())
print(ta.volatility.BollingerBands(close=close['a'], window=2, window_dev=2).bollinger_mavg())
print(ta.volatility.BollingerBands(close=close['a'], window=2, window_dev=2).bollinger_lband())
print(vbt.IndicatorFactory.from_talib('BBANDS').run(close, timeperiod=2, nbdevup=2, nbdevdn=2).upperband)
print(vbt.IndicatorFactory.from_talib('BBANDS').run(close, timeperiod=2, nbdevup=2, nbdevdn=2).middleband)
print(vbt.IndicatorFactory.from_talib('BBANDS').run(close, timeperiod=2, nbdevup=2, nbdevdn=2).lowerband)
print(vbt.BBANDS.run(close, window=2, ewm=False, alpha=2).upper)
print(vbt.BBANDS.run(close, window=2, ewm=False, alpha=2).middle)
print(vbt.BBANDS.run(close, window=2, ewm=False, alpha=2).lower)
# One window
%timeit vbt.IndicatorFactory.from_talib('BBANDS').run(big_close, timeperiod=2)
%timeit vbt.BBANDS.run(big_close, window=2)
print(vbt.BBANDS.run(big_close).close.shape)
# Multiple windows
%timeit vbt.IndicatorFactory.from_talib('BBANDS').run(big_close, timeperiod=np.arange(2, 10))
%timeit vbt.BBANDS.run(big_close, window=np.arange(2, 10))
print(vbt.BBANDS.run(big_close, window=np.arange(2, 10)).close.shape)
# One window repeated
%timeit vbt.IndicatorFactory.from_talib('BBANDS').run(big_close, timeperiod=np.full(10, 2))
%timeit vbt.BBANDS.run(big_close, window=np.full(10, 2))
print(vbt.BBANDS.run(big_close, window=np.full(10, 2)).close.shape)
bb = vbt.BBANDS.run(close, window=2, alpha=[1., 2.], ewm=False)
print(bb.middle)
print()
print(bb.upper)
print()
print(bb.lower)
print()
print(bb.percent_b)
print()
print(bb.bandwidth)
print(bb.close_below(bb.upper) & bb.close_above(bb.lower)) # price between bands
bb[(2, False, 1., 'a')].plot().show_png()
```
## RSI
```
print(ta.momentum.RSIIndicator(close=close['a'], window=2).rsi()) # alpha=1/n
print(ta.momentum.RSIIndicator(close=close['b'], window=2).rsi())
print(ta.momentum.RSIIndicator(close=close['c'], window=2).rsi())
print(vbt.IndicatorFactory.from_talib('RSI').run(close, timeperiod=2).real)
print(vbt.RSI.run(close, window=[2, 2], ewm=[True, False]).rsi) # span=n
# One window
%timeit vbt.IndicatorFactory.from_talib('RSI').run(big_close, timeperiod=2)
%timeit vbt.RSI.run(big_close, window=2)
print(vbt.RSI.run(big_close, window=2).rsi.shape)
# Multiple windows
%timeit vbt.IndicatorFactory.from_talib('RSI').run(big_close, timeperiod=np.arange(2, 10))
%timeit vbt.RSI.run(big_close, window=np.arange(2, 10))
print(vbt.RSI.run(big_close, window=np.arange(2, 10)).rsi.shape)
# One window repeated
%timeit vbt.IndicatorFactory.from_talib('RSI').run(big_close, timeperiod=np.full(10, 2))
%timeit vbt.RSI.run(big_close, window=np.full(10, 2))
print(vbt.RSI.run(big_close, window=np.full(10, 2)).rsi.shape)
rsi = vbt.RSI.run(close, window=[2, 3], ewm=[False, True])
print(rsi.rsi)
print(rsi.rsi_above(70))
rsi[(2, False, 'a')].plot().show_png()
```
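RSI is `100 - 100 / (1 + RS)`, where RS is the ratio of average gains to average losses over the window. Note that `ta` and TA-Lib use Wilder's smoothing, so the exact numbers differ from this sketch, which uses a simple rolling average for clarity:

```python
def rsi(xs, window):
    deltas = [b - a for a, b in zip(xs, xs[1:])]
    out = []
    for i in range(window - 1, len(deltas)):
        chunk = deltas[i - window + 1:i + 1]
        gain = sum(d for d in chunk if d > 0) / window
        loss = sum(-d for d in chunk if d < 0) / window
        # all-gain windows pin RSI at 100; otherwise apply 100 - 100/(1 + RS)
        out.append(100.0 if loss == 0 else 100.0 - 100.0 / (1.0 + gain / loss))
    return out

# monotonic rises give 100, falls give 0, the turning point lands at 50
print(rsi([1., 2., 3., 4., 3., 2., 1.], 2))  # [100.0, 100.0, 50.0, 0.0, 0.0]
```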
## STOCH
```
print(ta.momentum.StochasticOscillator(high=high['a'], low=low['a'], close=close['a'], window=2, smooth_window=3).stoch())
print(ta.momentum.StochasticOscillator(high=high['a'], low=low['a'], close=close['a'], window=2, smooth_window=3).stoch_signal())
print(vbt.IndicatorFactory.from_talib('STOCHF').run(
high, low, close, fastk_period=2, fastd_period=3).fastk)
print(vbt.IndicatorFactory.from_talib('STOCHF').run(
high, low, close, fastk_period=2, fastd_period=3).fastd)
print(vbt.STOCH.run(high, low, close, k_window=2, d_window=3).percent_k)
print(vbt.STOCH.run(high, low, close, k_window=2, d_window=3).percent_d)
# One window
%timeit vbt.IndicatorFactory.from_talib('STOCHF').run(\
big_high, big_low, big_close, fastk_period=2)
%timeit vbt.STOCH.run(big_high, big_low, big_close, k_window=2)
print(vbt.STOCH.run(big_high, big_low, big_close, k_window=2).percent_d.shape)
# Multiple windows
%timeit vbt.IndicatorFactory.from_talib('STOCHF').run(\
big_high, big_low, big_close, fastk_period=np.arange(2, 10))
%timeit vbt.STOCH.run(big_high, big_low, big_close, k_window=np.arange(2, 10))
print(vbt.STOCH.run(big_high, big_low, big_close, k_window=np.arange(2, 10)).percent_d.shape)
# One window repeated
%timeit vbt.IndicatorFactory.from_talib('STOCHF').run(\
big_high, big_low, big_close, fastk_period=np.full(10, 2))
%timeit vbt.STOCH.run(big_high, big_low, big_close, k_window=np.full(10, 2))
print(vbt.STOCH.run(big_high, big_low, big_close, k_window=np.full(10, 2)).percent_d.shape)
stochastic = vbt.STOCH.run(high, low, close, k_window=[2, 4], d_window=2, d_ewm=[False, True])
print(stochastic.percent_k)
print(stochastic.percent_d)
stochastic[(2, 2, False, 'a')].plot().show_png()
```
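%K measures where the close sits inside the high–low range of the lookback window, and %D is a moving average of %K; a stdlib sketch of the %K part:

```python
def percent_k(highs, lows, closes, window):
    out = []
    for i in range(window - 1, len(closes)):
        hh = max(highs[i - window + 1:i + 1])  # highest high in the window
        ll = min(lows[i - window + 1:i + 1])   # lowest low in the window
        out.append(100.0 * (closes[i] - ll) / (hh - ll))
    return out

highs = [11., 12., 13., 14.]
lows = [9., 10., 11., 12.]
closes = [10., 11., 12., 13.]
# each close sits two thirds of the way up its 2-bar range
print(percent_k(highs, lows, closes, 2))
```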
## MACD
```
print(ta.trend.MACD(close['a'], window_fast=2, window_slow=3, window_sign=2).macd())
print(ta.trend.MACD(close['a'], window_fast=2, window_slow=3, window_sign=2).macd_signal())
print(ta.trend.MACD(close['a'], window_fast=2, window_slow=3, window_sign=2).macd_diff())
print(vbt.IndicatorFactory.from_talib('MACD').run(
close, fastperiod=2, slowperiod=3, signalperiod=2).macd) # uses sma
print(vbt.IndicatorFactory.from_talib('MACD').run(
close, fastperiod=2, slowperiod=3, signalperiod=2).macdsignal)
print(vbt.IndicatorFactory.from_talib('MACD').run(
close, fastperiod=2, slowperiod=3, signalperiod=2).macdhist)
print(vbt.MACD.run(close, fast_window=2, slow_window=3, signal_window=2, macd_ewm=True, signal_ewm=True).macd)
print(vbt.MACD.run(close, fast_window=2, slow_window=3, signal_window=2, macd_ewm=True, signal_ewm=True).signal)
print(vbt.MACD.run(close, fast_window=2, slow_window=3, signal_window=2, macd_ewm=True, signal_ewm=True).hist)
# One window
%timeit vbt.IndicatorFactory.from_talib('MACD').run(big_close, fastperiod=2)
%timeit vbt.MACD.run(big_close, fast_window=2)
print(vbt.MACD.run(big_close, fast_window=2).macd.shape)
# Multiple windows
%timeit vbt.IndicatorFactory.from_talib('MACD').run(big_close, fastperiod=np.arange(2, 10))
%timeit vbt.MACD.run(big_close, fast_window=np.arange(2, 10))
print(vbt.MACD.run(big_close, fast_window=np.arange(2, 10)).macd.shape)
# One window repeated
%timeit vbt.IndicatorFactory.from_talib('MACD').run(big_close, fastperiod=np.full(10, 2))
%timeit vbt.MACD.run(big_close, fast_window=np.full(10, 2))
print(vbt.MACD.run(big_close, fast_window=np.full(10, 2)).macd.shape)
macd = vbt.MACD.run(close, fast_window=2, slow_window=3, signal_window=[2, 3], macd_ewm=True, signal_ewm=True)
print(macd.macd)
print(macd.signal)
print(macd.hist)
macd[(2, 3, 2, True, True, 'a')].plot().show_png()
```
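The MACD line is the fast EMA of close minus the slow EMA; the signal line is (conventionally) an EMA of the MACD line, and the histogram is their difference. A stdlib sketch using the `adjust=False` EMA recurrence:

```python
def ema(xs, span):
    alpha = 2.0 / (span + 1)
    out, prev = [], None
    for x in xs:
        prev = x if prev is None else alpha * x + (1 - alpha) * prev
        out.append(prev)
    return out

def macd(xs, fast=2, slow=3, signal=2):
    line = [f - s for f, s in zip(ema(xs, fast), ema(xs, slow))]
    sig = ema(line, signal)                      # signal line smooths the MACD line
    hist = [m - s for m, s in zip(line, sig)]    # histogram = MACD - signal
    return line, sig, hist

line, sig, hist = macd([1., 2., 3., 4., 5.])
print(line)  # widens as the fast EMA pulls ahead of the slow one
```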
## ATR
```
print(ta.volatility.AverageTrueRange(high['a'], low['a'], close['a'], window=2).average_true_range())
print(ta.volatility.AverageTrueRange(high['b'], low['b'], close['b'], window=2).average_true_range())
print(ta.volatility.AverageTrueRange(high['c'], low['c'], close['c'], window=2).average_true_range())
print(vbt.IndicatorFactory.from_talib('ATR').run(high, low, close, timeperiod=2).real)
print(vbt.ATR.run(high, low, close, window=[2, 3], ewm=[False, True]).atr)
# One window
%timeit vbt.IndicatorFactory.from_talib('ATR').run(big_high, big_low, big_close, timeperiod=2)
%timeit vbt.ATR.run(big_high, big_low, big_close, window=2)
print(vbt.ATR.run(big_high, big_low, big_close, window=2).atr.shape)
# Multiple windows
%timeit vbt.IndicatorFactory.from_talib('ATR').run(big_high, big_low, big_close, timeperiod=np.arange(2, 10))
%timeit vbt.ATR.run(big_high, big_low, big_close, window=np.arange(2, 10)) # rolling min/max very expensive
print(vbt.ATR.run(big_high, big_low, big_close, window=np.arange(2, 10)).atr.shape)
# One window repeated
%timeit vbt.IndicatorFactory.from_talib('ATR').run(big_high, big_low, big_close, timeperiod=np.full(10, 2))
%timeit vbt.ATR.run(big_high, big_low, big_close, window=np.full(10, 2))
print(vbt.ATR.run(big_high, big_low, big_close, window=np.full(10, 2)).atr.shape)
atr = vbt.ATR.run(high, low, close, window=[2, 3], ewm=[False, True])
print(atr.tr)
print(atr.atr)
atr[(2, False, 'a')].plot().show_png()
```
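The ATR benchmarked above is a rolling (or exponential) average of the true range. A minimal pandas sketch of that recurrence (the helper name is mine; vectorbt's `ewm` flag is mimicked with `span=window`):

```python
import pandas as pd

def atr(high: pd.Series, low: pd.Series, close: pd.Series, window: int = 2, ewm: bool = False):
    """True range and its rolling (or exponential) average."""
    prev_close = close.shift(1)
    tr = pd.concat([
        high - low,                  # current bar's range
        (high - prev_close).abs(),   # gap up from the previous close
        (low - prev_close).abs(),    # gap down from the previous close
    ], axis=1).max(axis=1)
    if ewm:
        return tr, tr.ewm(span=window, adjust=False).mean()
    return tr, tr.rolling(window).mean()

high = pd.Series([3.0, 4.0, 5.0])
low = pd.Series([1.0, 2.0, 3.0])
close = pd.Series([2.0, 3.0, 4.0])
tr, atr_vals = atr(high, low, close, window=2)
```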
## OBV
```
print(ta.volume.OnBalanceVolumeIndicator(close['a'], volume['a']).on_balance_volume())
print(ta.volume.OnBalanceVolumeIndicator(close['b'], volume['b']).on_balance_volume())
print(ta.volume.OnBalanceVolumeIndicator(close['c'], volume['c']).on_balance_volume())
print(vbt.IndicatorFactory.from_talib('OBV').run(close, volume).real)
print(vbt.OBV.run(close, volume).obv)
%timeit vbt.IndicatorFactory.from_talib('OBV').run(big_close, big_volume)
%timeit vbt.OBV.run(big_close, big_volume)
print(vbt.OBV.run(big_close, big_volume).obv.shape)
obv = vbt.OBV.run(close, volume)
print(obv.obv)
print(obv.obv_above([0, 5]))
obv['a'].plot().show_png()
```
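OBV itself is just volume signed by the close-to-close direction and then cumulated. A pandas sketch of that recurrence (note that TA-Lib seeds the first bar with its own volume, while this sketch starts at 0):

```python
import numpy as np
import pandas as pd

def obv(close: pd.Series, volume: pd.Series) -> pd.Series:
    """On-balance volume: cumulative volume signed by the close-to-close move."""
    direction = np.sign(close.diff()).fillna(0)  # +1 up day, -1 down day, 0 flat
    return (direction * volume).cumsum()

close = pd.Series([1.0, 2.0, 1.5, 1.5])
volume = pd.Series([10.0, 20.0, 30.0, 40.0])
result = obv(close, volume)
```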
```
%load_ext lab_black
import os, sys
%load_ext autoreload
%autoreload 2
sys.path.append("/n/home12/khou/holystore/")
import matplotlib
matplotlib.rcParams["pdf.fonttype"] = 42
matplotlib.rcParams["ps.fonttype"] = 42
import paper_utils
import matplotlib.pyplot as plt
import scdrs.data_loader as dl
import pandas as pd
import numpy as np
from os.path import join
from statsmodels.stats.multitest import multipletests
from scipy.stats import rankdata
from tqdm import tqdm
from scipy import stats
from matplotlib.patches import Rectangle
import matplotlib.patches as patches
def small_squares(ax, pos, size=1, linewidth=0.8):
"""
Draw many small squares on ax, given the positions of
these squares.
"""
for xy in pos:
x, y = xy
margin = (1 - size) / 2
rect = patches.Rectangle(
(x + margin, y + margin),
size,
size,
linewidth=linewidth,
edgecolor="k",
facecolor="none",
zorder=20,
)
ax.add_patch(rect)
def celltype_display_name(x):
if x in dict_celltype_name:
name = dict_celltype_name[x]
else:
name = x
name += f" ({df_celltype_n_cell[x]})"
name = name.replace("_", " ")
name = name[0].upper() + name[1:]
return name
def trait_display_name(x):
dict_trait_name = {
row.Trait_Identifier: row["Trait Name"] for _, row in df_trait_info.iterrows()
}
dict_trait_code = {
row.Trait_Identifier: row["Code"] for _, row in df_trait_info.iterrows()
}
if dict_trait_name[x].lower() != dict_trait_code[x].lower():
return f"{dict_trait_name[x]} ({dict_trait_code[x]})"
else:
return dict_trait_name[x]
def asterisk_display(x):
if x < 0.05:
return "×"
else:
return ""
# Setup file paths
DATA_PATH = "/n/holystore01/LABS/price_lab/Users/mjzhang/scDRS_data"
data_facs_ct = dl.load_tms_ct(DATA_PATH, data_name="facs")
df_hom = pd.read_csv(
"/n/holystore01/LABS/price_lab/Users/mjzhang/scDRS_data/gene_annotation/"
"mouse_human_homologs.txt",
sep="\t",
)
SCORE_PATH = join(DATA_PATH, "score_file/score.tms_facs_with_cov.magma_10kb_1000")
URL_SUPP_TABLE = (
"supp_tables.xlsx"
)
df_trait_info = pd.read_excel(
URL_SUPP_TABLE,
sheet_name=0,
)
df_celltype_info = pd.read_excel(
URL_SUPP_TABLE,
sheet_name=1,
)
df_fdr_prop1 = pd.read_csv("data/summary_ct/drs_fdr_prop.10kb.1000.csv", index_col=0)
df_fdr_prop2 = pd.read_csv("data/summary_ct/drs_fdr_prop.tms_facs.csv", index_col=0)
df_fdr_prop2 = df_fdr_prop2.loc[df_fdr_prop1.index, df_fdr_prop1.columns]
df_fdr_prop = df_fdr_prop2.copy()
# check sources of difference
df_gearysc_meta_fdr = pd.read_csv(
"data/summary_ct/df_gearysc_fdr.tms_facs.csv", index_col=0
)
```
# Overview
```
df_celltype_n_cell = data_facs_ct.obs.cell_ontology_class.value_counts()
df_celltype_n_cell.index = [
c.replace(" ", "_").replace(",", "") for c in df_celltype_n_cell.index
]
dict_celltype_name = {
row.id: row.code for _, row in df_celltype_info.iterrows() if not pd.isna(row.code)
}
df_plot = df_fdr_prop.copy()
df_ct_pval = pd.read_csv("data/summary_ct/df_pval.tms_facs.csv", index_col=0)
df_ct_fdr = pd.DataFrame(
multipletests(df_ct_pval.values.flatten(), method="fdr_bh")[1].reshape(
df_ct_pval.shape
),
index=df_ct_pval.index,
columns=df_ct_pval.columns,
)
df_ct_fdr = df_ct_fdr.loc[df_plot.index, df_plot.columns].copy()
df_gearysc_plot = df_gearysc_meta_fdr.loc[df_plot.index, df_plot.columns].copy()
df_gearysc_annot = df_gearysc_plot.applymap(asterisk_display)
df_gearysc_annot[df_ct_fdr > 0.05] = ""
df_plot[df_ct_fdr > 0.05] = 0.0
df_plot = df_plot.rename(index=trait_display_name, columns=celltype_display_name)
fig, ax = paper_utils.plot_heatmap(
df_plot,
squaresize=20,
heatmap_annot=df_gearysc_annot,
heatmap_annot_kws={"color": "black", "size": 4},
heatmap_cbar_kws=dict(
use_gridspec=False, location="top", fraction=0.03, pad=0.05, drawedges=True
),
heatmap_vmin=0,
heatmap_vmax=1,
colormap_n_bin=10,
)
######### add small squares to show cell-type trait association ##########
small_squares(
ax,
pos=[(y, x) for x, y in zip(*np.where(df_ct_fdr < 0.05))],
size=0.6,
linewidth=0.5,
)
cb = ax.collections[0].colorbar
cb.ax.set_title("Prop. of sig. cells (FDR < 0.1)")
cb.outline.set_edgecolor("black")
cb.outline.set_linewidth(1)
# add diagonal boxes
x_seps = [
df_trait_info.Category.isin(["blood/immune"]).sum(),
df_trait_info.Category.isin(["brain"]).sum(),
df_trait_info.Category.isin(["metabolic", "heart", "other"]).sum(),
]
y_seps = [
df_celltype_info.category.isin(["blood", "immune"]).sum(),
df_celltype_info.category.isin(["brain"]).sum(),
df_celltype_info.category.isin(["others"]).sum(),
]
paper_utils.plot_diagonal_block(y_seps, x_seps, ax, linewidth=1.5)
dict_group_colors = {"Blood / immune": "C0", "Brain": "C1", "Others": "C2"}
# annotate cell-types
ct_breaks = np.cumsum([0] + y_seps)
for i, ct in enumerate(["Blood / immune", "Brain", "Others"]):
paper_utils.annotation_line(
ax=ax,
text=ct,
xy1=(ct_breaks[i], -0.2),
xy2=(ct_breaks[i + 1], -0.2),
text_offset_y=-1,
linecolor=dict_group_colors[ct],
text_color=dict_group_colors[ct],
fontsize=12,
)
for label in ax.xaxis.get_ticklabels()[ct_breaks[i] : ct_breaks[i + 1]]:
label.set_color(dict_group_colors[ct])
# annotate traits
trait_breaks = np.cumsum([0] + x_seps)
for i, trait in enumerate(["Blood / immune", "Brain", "Others"]):
paper_utils.annotation_line(
ax=ax,
text=trait,
xy1=(ct_breaks[-1] + 0.2, trait_breaks[i]),
xy2=(ct_breaks[-1] + 0.2, trait_breaks[i + 1]),
text_rotation=270,
text_offset_x=1,
linecolor=dict_group_colors[trait],
text_color=dict_group_colors[trait],
fontsize=12,
)
for label in ax.yaxis.get_ticklabels()[trait_breaks[i] : trait_breaks[i + 1]]:
label.set_color(dict_group_colors[trait])
plt.savefig("results/celltype_assoc_detail.pdf", bbox_inches="tight")
plt.show()
print(
    f"Out of all {df_ct_fdr.shape[0] * df_ct_fdr.shape[1]} cell type–disease pairs, {(df_ct_fdr <= 0.05).sum().sum()} are significant and {(df_gearysc_annot == '×').sum().sum()} are heterogeneous"
)
print(
    f"{((df_ct_fdr < 0.05) & (df_gearysc_annot == '×')).any(axis=1).sum()} out of {(df_ct_fdr < 0.05).any(axis=1).sum()} traits with a significant cell type also have a cell type with significant heterogeneity"
)
df_ct_fdr_brain = df_ct_fdr.loc[
df_trait_info.Category.isin(["brain"]).values,
df_celltype_info.category.isin(["brain"]).values,
]
print(
f"Number of brain cell types: {df_ct_fdr_brain.shape[1]}, number of brain traits: {df_ct_fdr_brain.shape[0]}"
)
print(
    f"Number of significant cell type–trait pairs: {(df_ct_fdr_brain < 0.05).sum().sum()}"
)
```
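The flatten → `multipletests` → reshape pattern used above applies Benjamini–Hochberg jointly over every entry of the p-value matrix. A numpy-only sketch of the same adjustment (assuming statsmodels is unavailable):

```python
import numpy as np

def bh_fdr(pvals: np.ndarray) -> np.ndarray:
    """Benjamini-Hochberg adjusted p-values computed jointly over all entries
    of a matrix (flatten, adjust, reshape), mirroring multipletests(..., method="fdr_bh")."""
    flat = pvals.ravel()
    n = flat.size
    order = np.argsort(flat)
    raw = flat[order] * n / np.arange(1, n + 1)        # p * n / rank
    # enforce monotonicity from the largest rank down
    mono = np.minimum.accumulate(raw[::-1])[::-1]
    adjusted = np.empty(n)
    adjusted[order] = np.clip(mono, 0, 1)              # map back to original positions
    return adjusted.reshape(pvals.shape)

pvals = np.array([[0.01, 0.04], [0.03, 0.50]])
fdr = bh_fdr(pvals)
```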
# Main figures
```
# Plot order for main figure
df_tmp = pd.read_excel(
URL_SUPP_TABLE,
sheet_name="fig3-display",
).dropna(how="all")
plot_order = dict()
for c, group in df_tmp.groupby("cluster"):
plot_order[c] = (group.trait.dropna().values, group.celltype.dropna().values)
dict_trait_name = {
row.Trait_Identifier: row["Code"] for _, row in df_trait_info.iterrows()
}
dict_celltype_name = {
row.id: row.code for _, row in df_celltype_info.iterrows() if not pd.isna(row.code)
}
def trait_display_name(x):
dict_trait_name = {
row.Trait_Identifier: row["Trait Name"] for _, row in df_trait_info.iterrows()
}
dict_trait_code = {
row.Trait_Identifier: row["Code"] for _, row in df_trait_info.iterrows()
}
if dict_trait_name[x].lower() != dict_trait_code[x].lower():
display_name = f"{dict_trait_name[x]} ({dict_trait_code[x]})"
else:
display_name = dict_trait_name[x]
# special case for RDW
if dict_trait_code[x] == "RDW":
return "RBC Distribution width (RDW)"
return display_name
df_ct_pval = pd.read_csv("data/summary_ct/df_pval.tms_facs.csv", index_col=0)
df_ct_fdr = pd.DataFrame(
multipletests(df_ct_pval.values.flatten(), method="fdr_bh")[1].reshape(
df_ct_pval.shape
),
index=df_ct_pval.index,
columns=df_ct_pval.columns,
)
df_ct_fdr = df_ct_fdr.loc[
np.concatenate([plot_order[c][0] for c in plot_order]),
np.concatenate([plot_order[c][1] for c in plot_order]),
]
df_plot = df_fdr_prop.loc[
np.concatenate([plot_order[c][0] for c in plot_order]),
np.concatenate([plot_order[c][1] for c in plot_order]),
].copy()
print("Number of traits / cell types in the analysis:", df_plot.shape)
# add diagonal boxes
x_seps = [len(plot_order[c][0]) for c in plot_order]
y_seps = [len(plot_order[c][1]) for c in plot_order]
df_gearysc_plot = df_gearysc_meta_fdr.loc[df_plot.index, df_plot.columns].copy()
df_gearysc_annot = df_gearysc_plot.applymap(asterisk_display)
df_gearysc_annot[(df_ct_fdr > 0.05)] = ""
df_plot[df_ct_fdr > 0.05] = 0.0
df_plot = df_plot.rename(index=trait_display_name, columns=celltype_display_name)
print(
f"{np.sum(df_gearysc_annot == '×').sum()} of {np.sum(df_ct_fdr <= 0.05).sum()} associations are heterogeneous"
)
fig, ax = paper_utils.plot_heatmap(
df_plot,
squaresize=27,
heatmap_annot=df_gearysc_annot,
heatmap_annot_kws={"color": "black", "size": 6},
heatmap_xticklabels=True,
heatmap_yticklabels=True,
heatmap_linecolor="gray",
heatmap_linewidths=0.05,
heatmap_cbar_kws=dict(
use_gridspec=False,
location="top",
fraction=0.02,
pad=0.1,
drawedges=True,
anchor=(0.9, 1.0),
aspect=15,
),
heatmap_vmin=0,
heatmap_vmax=1.0,
colormap_n_bin=10,
)
small_squares(
ax,
pos=[(y, x) for x, y in zip(*np.where(df_ct_fdr < 0.05))],
size=0.6,
linewidth=0.5,
)
cb = ax.collections[0].colorbar
cb.outline.set_edgecolor("black")
cb.outline.set_linewidth(1)
cb.ax.set_title("Prop. of sig. cells", fontsize=9)
cb.set_ticks([0, 0.5, 1.0])
cb.ax.set_xticklabels(["0%", "50%", "100%"], size=7)
# add bounding box
for x in ax.get_xlim():
    ax.axvline(x=x, color="k", linewidth=1)
for y in ax.get_ylim():
    ax.axhline(y=y, color="k", linewidth=1)
paper_utils.plot_diagonal_block(y_seps, x_seps, ax, linewidth=1.2)
dict_group_colors = {"Blood / immune": "C0", "Brain": "C1", "Others": "C2"}
# annotate cell-types
ct_breaks = np.cumsum([0] + y_seps)
for i, ct in enumerate(["Blood / immune", "Brain", "Others"]):
paper_utils.annotation_line(
ax=ax,
text=ct,
xy1=(ct_breaks[i], -0.2),
xy2=(ct_breaks[i + 1], -0.2),
text_offset_y=-0.5,
linecolor=dict_group_colors[ct],
text_color=dict_group_colors[ct],
)
for label in ax.xaxis.get_ticklabels()[ct_breaks[i] : ct_breaks[i + 1]]:
label.set_color(dict_group_colors[ct])
# annotate traits
trait_breaks = np.cumsum([0] + x_seps)
for i, trait in enumerate(["Blood / immune", "Brain", "Others"]):
paper_utils.annotation_line(
ax=ax,
text=trait,
xy1=(ct_breaks[-1] + 0.2, trait_breaks[i]),
xy2=(ct_breaks[-1] + 0.2, trait_breaks[i + 1]),
text_rotation=270,
text_offset_x=0.5,
linecolor=dict_group_colors[trait],
text_color=dict_group_colors[trait],
)
for label in ax.yaxis.get_ticklabels()[trait_breaks[i] : trait_breaks[i + 1]]:
label.set_color(dict_group_colors[trait])
from matplotlib.patches import Patch
from matplotlib.lines import Line2D
plt.text(-3.5, -4, "☐ Sig. cell type-disease association", fontsize=9)
plt.text(-3.5, -2.5, "× Sig. within-cell type heterogeneity", fontsize=9)
plt.savefig("results/celltype_assoc_overview.pdf", bbox_inches="tight")
df_ct_pval = pd.read_csv("data/summary_ct/df_pval.tms_facs.csv", index_col=0)
df_plot_fdr = pd.DataFrame(
multipletests(df_ct_pval.values.flatten(), method="fdr_bh")[1].reshape(
df_ct_pval.shape
),
index=df_ct_pval.index,
columns=df_ct_pval.columns,
)
df_plot = df_ct_pval.loc[
np.concatenate([plot_order[c][0] for c in plot_order]),
np.concatenate([plot_order[c][1] for c in plot_order]),
].copy()
df_plot_fdr = df_plot_fdr.loc[df_plot.index, df_plot.columns]
df_plot_fdr = df_plot_fdr.rename(index=trait_display_name)
def signif_display(x):
if x < 0.05:
return "*"
else:
return ""
df_plot_annot = df_plot_fdr.applymap(signif_display)
df_plot = -np.log10(df_plot)
df_plot = df_plot.rename(index=trait_display_name, columns=celltype_display_name)
print("Number of traits / cell types in the analysis:", df_plot.shape)
# add diagonal boxes
x_seps = [len(plot_order[c][0]) for c in plot_order]
y_seps = [len(plot_order[c][1]) for c in plot_order]
fig, ax = paper_utils.plot_heatmap(
df_plot,
squaresize=27,
heatmap_annot=df_plot_annot,
heatmap_annot_kws={"color": "black", "size": 6},
heatmap_xticklabels=True,
heatmap_yticklabels=True,
heatmap_linecolor="gray",
heatmap_linewidths=0.05,
heatmap_cbar_kws=dict(
use_gridspec=False, location="top", fraction=0.02, pad=0.08, drawedges=True
),
heatmap_vmin=0,
heatmap_vmax=4.0,
colormap_n_bin=10,
)
cb = ax.collections[0].colorbar
cb.outline.set_edgecolor("black")
cb.outline.set_linewidth(1)
cb.ax.set_title(r"scDRS cell-type $-\log_{10}(p)$", fontsize=9)
# add bounding box
for x in ax.get_xlim():
    ax.axvline(x=x, color="k", linewidth=1)
for y in ax.get_ylim():
    ax.axhline(y=y, color="k", linewidth=1)
paper_utils.plot_diagonal_block(y_seps, x_seps, ax, linewidth=1.2)
dict_group_colors = {"Blood / immune": "C0", "Brain": "C1", "Others": "C2"}
# annotate cell-types
ct_breaks = np.cumsum([0] + y_seps)
for i, ct in enumerate(["Blood / immune", "Brain", "Others"]):
paper_utils.annotation_line(
ax=ax,
text=ct,
xy1=(ct_breaks[i], -0.2),
xy2=(ct_breaks[i + 1], -0.2),
text_offset_y=-0.5,
linecolor=dict_group_colors[ct],
text_color=dict_group_colors[ct],
)
for label in ax.xaxis.get_ticklabels()[ct_breaks[i] : ct_breaks[i + 1]]:
label.set_color(dict_group_colors[ct])
# annotate traits
trait_breaks = np.cumsum([0] + x_seps)
for i, trait in enumerate(["Blood / immune", "Brain", "Others"]):
paper_utils.annotation_line(
ax=ax,
text=trait,
xy1=(ct_breaks[-1] + 0.2, trait_breaks[i]),
xy2=(ct_breaks[-1] + 0.2, trait_breaks[i + 1]),
text_rotation=270,
text_offset_x=0.5,
linecolor=dict_group_colors[trait],
text_color=dict_group_colors[trait],
)
for label in ax.yaxis.get_ticklabels()[trait_breaks[i] : trait_breaks[i + 1]]:
label.set_color(dict_group_colors[trait])
plt.savefig("results/celltype_assoc_pval_overview.pdf", bbox_inches="tight")
```
```
import matplotlib as matplot
import seaborn
import bokeh
import keras
import numpy
import scipy
import pandas as pd
# import block
ncs1 = pd.read_csv("ICPSR_06693-V6/ICPSR_06693/DS0001/06693-0001-Data.tsv", sep = "\t")
# ncs1 includes all of the symptom data from the NCS Data
# read_csv already returns a pandas DataFrame, so no conversion is needed
ncs1
ncs_sections = pd.DataFrame(data = {'Label': [], 'Subject': [], 'Part': [], 'Start': [], 'End': []})
ncs_sections
ncs_a = ncs1.iloc[:, ncs1.columns.get_loc('V101'):ncs1.columns.get_loc('V250')]
ncs_a.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group A From DS001
ncs_sections.loc[len(ncs_sections)] = ['a', 'ACTIVITIES OF DAILY LIFE', 1, ncs1.columns.get_loc('V101'), ncs1.columns.get_loc('V250')]
# Adding description of section to ncs_sections
ncs_b = ncs1.iloc[:, ncs1.columns.get_loc('V301'):ncs1.columns.get_loc('V948')]
ncs_b.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group B From DS001
ncs_sections.loc[len(ncs_sections)] = ['b', 'LIFETIME MOODS AND HEALTH BEHAVIORS', 1, ncs1.columns.get_loc('V301'), ncs1.columns.get_loc('V948')]
ncs_sections
# Adding description of section to ncs_sections
ncs_c = ncs1.iloc[:, ncs1.columns.get_loc('V1001'):ncs1.columns.get_loc('V1016')]
ncs_c.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group C From DS001
ncs_sections.loc[len(ncs_sections)] = ['c', 'ONGOING SADNESS', 1, ncs1.columns.get_loc('V1001'), ncs1.columns.get_loc('V1016')]
# Adding description of section to ncs_sections
ncs_d = ncs1.iloc[:, ncs1.columns.get_loc('V1101'):ncs1.columns.get_loc('V1559')]
ncs_d.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group D From DS001
ncs_sections.loc[len(ncs_sections)] = ['d', 'SADNESS', 1, ncs1.columns.get_loc('V1101'), ncs1.columns.get_loc('V1559')]
# Adding description of section to ncs_sections
ncs_e = ncs1.iloc[:, ncs1.columns.get_loc('V1601'):ncs1.columns.get_loc('V1757')]
ncs_e.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group E From DS001
ncs_sections.loc[len(ncs_sections)] = ['e', 'MANIA', 1, ncs1.columns.get_loc('V1601'), ncs1.columns.get_loc('V1757')]
# Adding description of section to ncs_sections
ncs_f = ncs1.iloc[:, ncs1.columns.get_loc('V1801'):ncs1.columns.get_loc('V1816')]
ncs_f.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group F From DS001
ncs_sections.loc[len(ncs_sections)] = ['f', 'ALCOHOL', 1, ncs1.columns.get_loc('V1801'), ncs1.columns.get_loc('V1816')]
# Adding description of section to ncs_sections
ncs_g = ncs1.iloc[:, ncs1.columns.get_loc('V1817'):ncs1.columns.get_loc('V3757')]
ncs_g.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group G From DS001
ncs_sections.loc[len(ncs_sections)] = ['g', 'MEDICATION & DRUGS', 1, ncs1.columns.get_loc('V1817'), ncs1.columns.get_loc('V3757')]
# Adding description of section to ncs_sections
ncs_h = ncs1.iloc[:, ncs1.columns.get_loc('V3801'):ncs1.columns.get_loc('V3817')]
ncs_h.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group H From DS001
ncs_sections.loc[len(ncs_sections)] = ['h', 'PROBLEM BEHAVIORS', 1, ncs1.columns.get_loc('V3801'), ncs1.columns.get_loc('V3817')]
# Adding description of section to ncs_sections
ncs_j = ncs1.iloc[:, ncs1.columns.get_loc('V3901'):ncs1.columns.get_loc('V3936')]
ncs_j.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group J From DS001
ncs_sections.loc[len(ncs_sections)] = ['j', 'DEMOGRAPHICS', 1, ncs1.columns.get_loc('V3901'), ncs1.columns.get_loc('V3936')]
# Adding description of section to ncs_sections
ncs_aa = ncs1.iloc[:, ncs1.columns.get_loc('V4000'):ncs1.columns.get_loc('V4053')]
ncs_aa.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group AA From DS001
ncs_sections.loc[len(ncs_sections)] = ['aa', 'INTERVIEWER\'S OBSERVATIONS', 1, ncs1.columns.get_loc('V4000'), ncs1.columns.get_loc('V4053')]
# Adding description of section to ncs_sections
ncs_p2 = ncs1.iloc[:, ncs1.columns.get_loc('P2WTV3'):ncs1.columns.get_loc('V4100')]
ncs_p2.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group p2 From DS001
ncs_sections.loc[len(ncs_sections)] = ['p2', 'SURVEY ADMINISTRATION', 2, ncs1.columns.get_loc('P2WTV3'), ncs1.columns.get_loc('V4100')]
# Adding description of section to ncs_sections
ncs_k = ncs1.iloc[:, ncs1.columns.get_loc('V4101'):ncs1.columns.get_loc('V4341')]
ncs_k.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group K From DS001
ncs_sections.loc[len(ncs_sections)] = ['k', 'BELIEFS AND EXPERIENCES', 2, ncs1.columns.get_loc('V4101'), ncs1.columns.get_loc('V4341')]
# Adding description of section to ncs_sections
ncs_l = ncs1.iloc[:, ncs1.columns.get_loc('V4401'):ncs1.columns.get_loc('V4447')]
ncs_l.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group L From DS001
ncs_sections.loc[len(ncs_sections)] = ['l', 'PERSONALITY', 2, ncs1.columns.get_loc('V4401'), ncs1.columns.get_loc('V4447')]
# Adding description of section to ncs_sections
ncs_m = ncs1.iloc[:, ncs1.columns.get_loc('V4501'):ncs1.columns.get_loc('V4629')]
ncs_m.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group M From DS001
ncs_sections.loc[len(ncs_sections)] = ['m', 'MARRIAGE', 2, ncs1.columns.get_loc('V4501'), ncs1.columns.get_loc('V4629')]
# Adding description of section to ncs_sections
ncs_n = ncs1.iloc[:, ncs1.columns.get_loc('V4701'):ncs1.columns.get_loc('V4824')]
ncs_n.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group N From DS001
ncs_sections.loc[len(ncs_sections)] = ['n', 'EMPLOYMENT', 2, ncs1.columns.get_loc('V4701'), ncs1.columns.get_loc('V4824')]
# Adding description of section to ncs_sections
ncs_p = ncs1.iloc[:, ncs1.columns.get_loc('V4901'):ncs1.columns.get_loc('V4917')]
ncs_p.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group P From DS001
ncs_sections.loc[len(ncs_sections)] = ['p', 'HOME AND WORK', 2, ncs1.columns.get_loc('V4901'), ncs1.columns.get_loc('V4917')]
# Adding description of section to ncs_sections
ncs_q = ncs1.iloc[:, ncs1.columns.get_loc('V5001'):ncs1.columns.get_loc('V5059')]
ncs_q.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group Q From DS001
ncs_sections.loc[len(ncs_sections)] = ['q', 'CHILDREN', 2, ncs1.columns.get_loc('V5001'), ncs1.columns.get_loc('V5059')]
# Adding description of section to ncs_sections
ncs_r = ncs1.iloc[:, ncs1.columns.get_loc('V5101'):ncs1.columns.get_loc('V5134')]
ncs_r.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group R From DS001
ncs_sections.loc[len(ncs_sections)] = ['r', 'SELF-DESCRIPTION', 2, ncs1.columns.get_loc('V5101'), ncs1.columns.get_loc('V5134')]
# Adding description of section to ncs_sections
ncs_s = ncs1.iloc[:, ncs1.columns.get_loc('V5201'):ncs1.columns.get_loc('V5938')]
ncs_s.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group S From DS001
ncs_sections.loc[len(ncs_sections)] = ['s', 'HEALTH', 2, ncs1.columns.get_loc('V5201'), ncs1.columns.get_loc('V5938')]
# Adding description of section to ncs_sections
ncs_t = ncs1.iloc[:, ncs1.columns.get_loc('V6001'):ncs1.columns.get_loc('V6019')]
ncs_t.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group T From DS001
ncs_sections.loc[len(ncs_sections)] = ['t', 'FINANCES', 2, ncs1.columns.get_loc('V6001'), ncs1.columns.get_loc('V6019')]
# Adding description of section to ncs_sections
ncs_u = ncs1.iloc[:, ncs1.columns.get_loc('V6101'):ncs1.columns.get_loc('V6318')]
ncs_u.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group U From DS001
ncs_sections.loc[len(ncs_sections)] = ['u', 'LIFE EVENT HISTORY', 2, ncs1.columns.get_loc('V6101'), ncs1.columns.get_loc('V6318')]
# Adding description of section to ncs_sections
ncs_v = ncs1.iloc[:, ncs1.columns.get_loc('V6401'):ncs1.columns.get_loc('V6541')]
ncs_v.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group V From DS001
ncs_sections.loc[len(ncs_sections)] = ['v', 'RECENT LIFE EVENTS', 2, ncs1.columns.get_loc('V6401'), ncs1.columns.get_loc('V6541')]
# Adding description of section to ncs_sections
ncs_x = ncs1.iloc[:, ncs1.columns.get_loc('V6601'):ncs1.columns.get_loc('V7028')]
ncs_x.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group X From DS001
ncs_sections.loc[len(ncs_sections)] = ['x', 'FAMILY HISTORY', 2, ncs1.columns.get_loc('V6601'), ncs1.columns.get_loc('V7028')]
# Adding description of section to ncs_sections
ncs_y = ncs1.iloc[:, ncs1.columns.get_loc('V7101'):ncs1.columns.get_loc('V7110')]
ncs_y.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group Y From DS001
ncs_sections.loc[len(ncs_sections)] = ['y', 'RELIGION', 2, ncs1.columns.get_loc('V7101'), ncs1.columns.get_loc('V7110')]
# Adding description of section to ncs_sections
ncs_z = ncs1.iloc[:, ncs1.columns.get_loc('V7111'):ncs1.columns.get_loc('V7141')]
ncs_z.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group Z From DS001
ncs_sections.loc[len(ncs_sections)] = ['z', 'RACIAL AND ETHNIC BACKGROUND', 2, ncs1.columns.get_loc('V7111'), ncs1.columns.get_loc('V7141')]
# Adding description of section to ncs_sections
ncs_bb = ncs1.iloc[:, ncs1.columns.get_loc('V7200'):ncs1.columns.get_loc('V732')]
ncs_bb.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of question group BB From DS001
ncs_sections.loc[len(ncs_sections)] = ['bb', 'INTERVIEWER\'S OBSERVATIONS', 2, ncs1.columns.get_loc('V7200'), ncs1.columns.get_loc('V732')]
# Adding description of section to ncs_sections
ncs_tobw = ncs1.iloc[:, ncs1.columns.get_loc('TOBACWT'):ncs1.columns.get_loc('V7442')]
ncs_tobw.insert(0, 'CASEID', ncs1.loc[:,'CASEID'], True)
# Subset of the tobacco-use supplement columns from DS001
ncs_sections.loc[len(ncs_sections)] = ['tobw', 'TOBACCO USE SUPPLEMENT FREQUENCIES', 2, ncs1.columns.get_loc('TOBACWT'), ncs1.columns.get_loc('V7442')]
# Adding description of section to ncs_sections
ncs_sections
```
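The subset-and-record pattern repeated above can be factored into a single helper. A sketch (the helper name and the toy column names are illustrative, not the real NCS variable codes):

```python
import pandas as pd

def add_section(df, sections, label, subject, part, start_col, end_col):
    """Slice df between two column labels (end exclusive, matching the iloc
    slicing above), record the section metadata, and keep CASEID first."""
    start = df.columns.get_loc(start_col)
    end = df.columns.get_loc(end_col)
    subset = df.iloc[:, start:end].copy()
    subset.insert(0, 'CASEID', df['CASEID'])
    sections.loc[len(sections)] = [label, subject, part, start, end]
    return subset

df = pd.DataFrame({'CASEID': [1, 2], 'V1': [3, 4], 'V2': [5, 6], 'V3': [7, 8]})
sections = pd.DataFrame(columns=['Label', 'Subject', 'Part', 'Start', 'End'])
sub = add_section(df, sections, 'a', 'EXAMPLE', 1, 'V1', 'V3')
```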
# Get your data ready for training
This module defines the basic [`DataBunch`](/basic_data.html#DataBunch) object that is used inside [`Learner`](/basic_train.html#Learner) to train a model. This is the generic class, that can take any kind of fastai [`Dataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset) or [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader). You'll find helpful functions in the data module of every application to directly create this [`DataBunch`](/basic_data.html#DataBunch) for you.
```
from fastai.gen_doc.nbdoc import *
from fastai.basics import *
show_doc(DataBunch)
```
It also ensures all the dataloaders are on `device` and applies `tfms` to them as batches are drawn (like normalization). `path` is used internally to store temporary files, and `collate_fn` is passed to the pytorch `DataLoader` (replacing the one there) to explain how to collate the samples picked for a batch. By default, it applies data to the object sent (see in [`vision.image`](/vision.image.html#vision.image) or the [data block API](/data_block.html) why this can be important).
`train_dl`, `valid_dl` and optionally `test_dl` will be wrapped in [`DeviceDataLoader`](/basic_data.html#DeviceDataLoader).
### Factory method
```
show_doc(DataBunch.create)
```
`num_workers` is the number of CPUs to use, `tfms`, `device` and `collate_fn` are passed to the init method.
```
jekyll_warn("You can pass regular pytorch Dataset here, but they'll require more attributes than the basic ones to work with the library. See below for more details.")
```
### Visualization
```
show_doc(DataBunch.show_batch)
```
### Grabbing some data
```
show_doc(DataBunch.dl)
show_doc(DataBunch.one_batch)
show_doc(DataBunch.one_item)
show_doc(DataBunch.sanity_check)
```
### Empty [`DataBunch`](/basic_data.html#DataBunch) for inference
```
show_doc(DataBunch.export)
show_doc(DataBunch.load_empty, full_name='load_empty')
```
This method should be used to create a [`DataBunch`](/basic_data.html#DataBunch) at inference, see the corresponding [tutorial](/tutorial.inference.html).
### Dataloader transforms
```
show_doc(DataBunch.add_tfm)
```
Adds a transform to all dataloaders.
## Using a custom Dataset in fastai
If you want to use your pytorch [`Dataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset) in fastai, you may need to implement more attributes/methods if you want to use the full functionality of the library. Some functions can easily be used with your pytorch [`Dataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset) if you just add an attribute, for others, the best would be to create your own [`ItemList`](/data_block.html#ItemList) by following [this tutorial](/tutorial.itemlist.html). Here is a full list of what the library will expect.
### Basics
First of all, you obviously need to implement the methods `__len__` and `__getitem__`, as indicated by the pytorch docs. Then the most needed things would be:
- `c` attribute: it's used in most functions that directly create a [`Learner`](/basic_train.html#Learner) ([`tabular_learner`](/tabular.data.html#tabular_learner), [`text_classifier_learner`](/text.learner.html#text_classifier_learner), [`unet_learner`](/vision.learner.html#unet_learner), [`create_cnn`](/vision.learner.html#create_cnn)) and represents the number of outputs of the final layer of your model (also the number of classes if applicable).
- `classes` attribute: it's used by [`ClassificationInterpretation`](/train.html#ClassificationInterpretation) and also in [`collab_learner`](/collab.html#collab_learner) (best to use [`CollabDataBunch.from_df`](/collab.html#CollabDataBunch.from_df) than a pytorch [`Dataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset)) and represents the unique tags that appear in your data.
- maybe a `loss_func` attribute: that is going to be used by [`Learner`](/basic_train.html#Learner) as a default loss function, so if you know your custom [`Dataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset) requires a particular loss, you can put it.
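Put together, a minimal custom dataset carrying the attributes listed above might look like this (a torch-free sketch; a real one would typically subclass `torch.utils.data.Dataset`):

```python
class TinyClassificationDataset:
    """Bare-bones dataset exposing the extra attributes fastai expects."""
    def __init__(self, items, labels, classes):
        self.items, self.labels = items, labels
        self.classes = classes      # unique tags, used by ClassificationInterpretation
        self.c = len(classes)       # number of outputs of the model's final layer
        # self.loss_func = ...      # optionally a default loss for Learner

    def __len__(self):
        return len(self.items)

    def __getitem__(self, i):
        return self.items[i], self.labels[i]

ds = TinyClassificationDataset([0.1, 0.7, 0.4], [0, 1, 0], classes=['neg', 'pos'])
```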
### For a specific application
In text, your dataset will need to have a `vocab` attribute that should be an instance of [`Vocab`](/text.transform.html#Vocab). It's used by [`text_classifier_learner`](/text.learner.html#text_classifier_learner) and [`language_model_learner`](/text.learner.html#language_model_learner) when building the model.
In tabular, your dataset will need to have a `cont_names` attribute (for the names of continuous variables) and a `get_emb_szs` method that returns a list of tuple `(n_classes, emb_sz)` representing, for each categorical variable, the number of different codes (don't forget to add 1 for nan) and the corresponding embedding size. Those two are used with the `c` attribute by [`tabular_learner`](/tabular.data.html#tabular_learner).
### Functions that really won't work
To make those last functions work, you really need to use the [data block API](/data_block.html) and maybe write your own [custom ItemList](/tutorial.itemlist.html).
- [`DataBunch.show_batch`](/basic_data.html#DataBunch.show_batch) (requires `.x.reconstruct`, `.y.reconstruct` and `.x.show_xys`)
- [`Learner.predict`](/basic_train.html#Learner.predict) (requires `x.set_item`, `.y.analyze_pred`, `.y.reconstruct` and maybe `.x.reconstruct`)
- [`Learner.show_results`](/basic_train.html#Learner.show_results) (requires `x.reconstruct`, `y.analyze_pred`, `y.reconstruct` and `x.show_xyzs`)
- `DataBunch.set_item` (requires `x.set_item`)
- [`Learner.backward`](/basic_train.html#Learner.backward) (uses `DataBunch.set_item`)
- [`DataBunch.export`](/basic_data.html#DataBunch.export) (requires `export`)
```
show_doc(DeviceDataLoader)
```
Put the batches of `dl` on `device` after applying an optional list of `tfms`. `collate_fn` will replace the one of `dl`. All dataloaders of a [`DataBunch`](/basic_data.html#DataBunch) are of this type.
### Factory method
```
show_doc(DeviceDataLoader.create)
```
The given `collate_fn` will be used to put the samples together in one batch (by default it grabs their data attribute). `shuffle` means the dataloader will take the samples randomly if that flag is set to `True`, or in the right order otherwise. `tfms` are passed to the init method. All `kwargs` are passed to the pytorch [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) class initialization.
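The pipeline described here (collate the picked samples into a batch, then apply `tfms` as each batch is drawn) can be sketched in plain Python; names are illustrative, not fastai's internals:

```python
def simple_loader(samples, batch_size, collate_fn=list, tfms=()):
    """Yield batches: group samples, collate them, then apply each tfm in order."""
    for i in range(0, len(samples), batch_size):
        batch = collate_fn(samples[i:i + batch_size])
        for tfm in tfms:            # e.g. normalization, device placement
            batch = tfm(batch)
        yield batch

normalize = lambda b: [x / 10 for x in b]   # stand-in for a real batch transform
batches = list(simple_loader([1, 2, 3, 4, 5], batch_size=2, tfms=[normalize]))
```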
### Methods
```
show_doc(DeviceDataLoader.add_tfm)
show_doc(DeviceDataLoader.remove_tfm)
show_doc(DeviceDataLoader.new)
show_doc(DeviceDataLoader.proc_batch)
show_doc(DatasetType, doc_string=False)
```
Internal enumerator to name the training, validation and test dataset/dataloader.
## Undocumented Methods - Methods moved below this line will intentionally be hidden
```
show_doc(DeviceDataLoader.collate_fn)
```
## New Methods - Please document or move to the undocumented section
### Notebook is based on the Stack Overflow data dump from 2017-03-14
```
from pyspark.sql.session import SparkSession
from pyspark.sql.functions import *
from pyspark.sql.types import *
spark = (SparkSession.builder
    .config('spark.executor.memory', '2800m')
    .appName('appname')
    .getOrCreate())
posts = spark.read.parquet('/data/Posts/*')
votes = spark.read.parquet('/data/Votes/*')
users = spark.read.parquet('/data/Users/*')
comments = spark.read.parquet('/data/Comments')
```
### Do questions ending with a question mark have a bigger chance of being answered?
Number of posts
```
posts.count()
```
Number of questions
```
questions_count = posts.where(col('PostTypeId') == 1).count()
questions_count
```
How many questions end with a question mark?
```
ends_with_qm = col('Title').like('%?').alias('ends_with_qm')
as_percent = lambda col: round(col * 100, 2)
posts.where(posts.PostTypeId == 1).groupBy(ends_with_qm).count() \
.withColumn('percent', as_percent(col('count') / questions_count)).show()
```
How many questions in each group are answered?
```
question_answered = col('AcceptedAnswerId').isNotNull()
posts.where(posts.PostTypeId == 1) \
    .withColumn('ends_with_qm', col('Title').like('%?')) \
    .withColumn('answered', question_answered.cast('int')) \
    .groupBy('ends_with_qm').agg(count('*'), sum(col('answered'))) \
    .withColumn('solving_chance', as_percent(col('sum(answered)') / col('count(1)'))).show()
```
Percent of answered questions by tag
```
answered_by_tag = posts.where(posts.PostTypeId == 1) \
    .withColumn('ends_with_qm', col('Title').like('%?')) \
    .withColumn('answered', question_answered.cast('int')) \
    .withColumn('tag', explode('Tags')) \
    .groupBy('tag', 'ends_with_qm').agg(count('*'), sum(col('answered'))) \
    .withColumn('solving_chance', as_percent(col('sum(answered)') / col('count(1)'))).cache()
answered_by_tag.show()
```
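As a rough plain-Python illustration (not Spark code) of what the `explode('Tags')` call above does — each question row carrying a list of tags is expanded into one output row per tag:

```
# Plain-Python sketch of the expansion performed by explode('Tags'):
# a row with N tags becomes N rows, one per tag, carrying the same flags.
rows = [
    {"Tags": ["java", "json"], "answered": 1},
    {"Tags": ["python"], "answered": 0},
]

exploded = [
    {"tag": tag, "answered": row["answered"]}
    for row in rows
    for tag in row["Tags"]
]

print(exploded)
```

Grouping the exploded rows by `tag` then gives per-tag counts and answer rates, exactly as the `groupBy('tag', 'ends_with_qm')` step does in Spark.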
Difference between solving chances for each tag
```
answered_by_tag \
.withColumn('solving_chance', when(col('ends_with_qm'), 1).otherwise(-1) * col('solving_chance')) \
.groupBy('tag').agg(round(sum('solving_chance'), 2).alias('diff'), sum('count(1)').alias('total_questions')) \
.orderBy(desc('diff')) \
.where(col('total_questions') > 1000).show()
answered_by_tag.where(col('tag') == 'ioexception').show()
```
### The most controversial posts
```
upvote = (col('VoteTypeId') == 2).cast('int')
downvote = (col('VoteTypeId') == 3).cast('int')
votes.repartition(200).groupBy('PostId').agg(
sum(upvote).alias('sum(upvote)'), sum(downvote).alias('sum(downvote)')
).join(posts, posts.Id == votes.PostId) \
.where(col('sum(downvote)') > 0.5 * col('sum(upvote)')).orderBy(desc('sum(upvote)')) \
.select(posts.Id, posts.ParentId, 'sum(upvote)', 'sum(downvote)').show()
```
The most controversial post: http://stackoverflow.com/questions/244777/can-comments-be-used-in-json/18018493#18018493
### Answers to posts ratio
```
users_aq = posts.where(col('PostTypeId').isin(1, 2)).select(
'OwnerUserId',
when(col('PostTypeId') == 1, 1).alias('question'),
when(col('PostTypeId') == 2, 1).alias('answer')
).groupBy('OwnerUserId').agg(sum('question'), sum('answer'), count('*').alias('all_posts')).cache()
users_aq.show()
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (20,10)
df = users_aq.withColumn('answer_rate', coalesce(col('sum(answer)') / col('all_posts'), lit(0))) \
.withColumn('log10_posts', log10('all_posts')) \
.select(round('answer_rate', 2).alias('ar'), round('log10_posts', 1).alias('posts')) \
.groupBy('ar', 'posts').count().orderBy(desc('count')).toPandas()
df.plot(x='posts', y='ar', kind='scatter', s=df['count'] / df['count'].max() * 10000)
```
The above chart shows groups of users, with the number of posts (normalized to log10(number_of_posts)) on the X axis and the percentage of answers among all posts on the Y axis. The bigger the dot, the more users there are in the group.
```
pl_users = users.where(col('Location').like('%Poland%'))
pl_users_df = users_aq.withColumn('answer_rate', coalesce(col('sum(answer)') / col('all_posts'), lit(0))) \
.withColumn('log10_posts', log10('all_posts')) \
.join(pl_users, pl_users.Id == users_aq.OwnerUserId, 'leftsemi') \
.select(round('answer_rate', 2).alias('ar'), round('log10_posts', 1).alias('posts')) \
.groupBy('ar', 'posts').count().orderBy(desc('count')).toPandas()
pl_users_df.plot(x='posts', y='ar', kind='scatter', s=pl_users_df['count'] / pl_users_df['count'].max() * 10000)
```
The above chart applies to users from Poland. The users from Poland with the highest answer ratio are:
```
users_aq.join(pl_users, pl_users.Id == users_aq.OwnerUserId) \
.withColumn('answer_rate', coalesce(col('sum(answer)') / col('all_posts'), lit(0))) \
.where(col('all_posts') > 1000) \
.orderBy(desc('answer_rate')) \
.select('Id', 'Location', 'DisplayName', 'sum(question)', 'sum(answer)').show()
```
### Patient conversations
```
comments.groupBy('PostId').agg(count('*'), countDistinct('UserId')) \
.orderBy(desc('count(1)')).show()
```
The most patient conversation between two stackoverflow users has 176 comments: http://stackoverflow.com/questions/38846406/compare-2-datatables-and-result-missing-data/38892391#38892391
### The shortest time to first answer
```
questions = posts.where(col('PostTypeId') == 1).alias('q')
answers = posts.where(col('PostTypeId') == 2).alias('a')
time_to_answer = unix_timestamp('a.CreationDate') - unix_timestamp('q.CreationDate')
by_tag = questions.join(answers, col('q.Id') == col('a.ParentId')) \
.where(col('q.OwnerUserId') != col('a.OwnerUserId')) \
.groupBy('q.Id', 'q.Tags').agg(min(time_to_answer).alias('time_to_first_answer')) \
.select(explode('Tags').alias('tag'), 'time_to_first_answer') \
.groupBy('tag').agg(
expr('percentile_approx(time_to_first_answer, 0.5)').alias('median_time_to_first_answer'),
count('*').alias('number_of_questions')).cache()
by_tag.show()
by_tag.where(col('number_of_questions') > 1000) \
.orderBy('median_time_to_first_answer').show(truncate=False)
by_tag.where(col('tag').isin('java', 'scala', 'python', 'r')) \
.orderBy('median_time_to_first_answer').show(truncate=False)
```
<a href="https://colab.research.google.com/github/k-timy/Keras-GAN/blob/master/GAN_keras.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## **Let's Setup the Environment first!**
We need to install TensorFlow and Keras version 2 (the newest versions are fine; however, they should not be older than 2.1)
```
# Upgrading Colab's frameworks
!pip install keras --upgrade
!pip uninstall tensorflow -y
!pip install tensorflow==2.1
```
## **The Algorithm**
Here is the algorithm from the paper:

## **Let's get into it!**
The rest of this notebook is an implementation of a Generative Adversarial Network using multi-layer perceptrons (MLPs). First the main libraries are imported:
```
from tensorflow import keras
from tensorflow.keras import layers
from keras.datasets import mnist
import tensorflow as tf
# initial preprocessing image dimensions:
img_rows, img_cols = 28, 28
num_classes = 10
# Just to make sure the tf version is 2.1.0 (or newer)
print(tf.__version__)
```
## **Gimme the Data!**
We load the MNIST dataset: 60,000 training and 10,000 test images of handwritten digits.
Notice that the test set is also loaded here, even though it is not actually needed to train a GAN.
```
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if len(y_train.shape) < 2:
# convert class vectors to binary class matrices
# this "if" condition here, makes sure that the to_categorical function is
# only called once. And prevents the code from adding further dimensions
# to the y vectors(if the code is run again during the same runtime execution)
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# Since we are implementing an MLP, we convert the 2D images of size 28x28 into
# 1D vectors of size 784 (=28x28)
x_train = x_train.reshape(x_train.shape[0], img_rows * img_cols)
x_test = x_test.reshape(x_test.shape[0], img_rows * img_cols)
```
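The 28x28 to 784 reshape applied above can be checked on a toy array (dummy data, for illustration only):

```
import numpy as np

# Two dummy 28x28 "images" flattened into 784-dimensional row vectors,
# exactly as done for x_train / x_test above.
imgs = np.arange(2 * 28 * 28).reshape(2, 28, 28)
flat = imgs.reshape(imgs.shape[0], 28 * 28)

print(flat.shape)  # (2, 784)
# The first row of the first image becomes the first 28 entries
print(np.array_equal(flat[0, :28], imgs[0, 0, :]))  # True
```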
Taking a look at the shape of the loaded image arrays:
```
[x_train.shape ,y_train.shape]
```
The imported data needs some preprocessing. Converting the image data to float data type and normalizing them to fall in range [0,1].
```
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
```
We define the Generator and the Discriminator classes here.
Since both of these classes are **MLPs**, a much cleaner and easier way to implement their structure is to use the sequential model, as described in the official Keras documentation [here](https://keras.io/getting-started/sequential-model-guide/):
```
from keras.models import Sequential
from keras.layers import Dense, Activation
model = Sequential([
Dense(32, input_shape=(784,)),
Activation('relu'),
Dense(10),
Activation('softmax'),
])
```
However, I intentionally used the functional approach of wiring layers together, in order to get hands-on experience with this method of writing code as well. This method is helpful, and necessary, when writing custom architectures with several branches in the computational graph.
**Note:** For the architecture and hyper-parameters I used [this](https://github.com/lyeoni/pytorch-mnist-GAN/blob/master/pytorch-mnist-GAN.ipynb) and [this](https://github.com/eriklindernoren/Keras-GAN/blob/master/acgan/acgan.py) implementations as references for implementation.
## **The two Rivals! The Generator and The Discriminator**
The following piece of code defines the MLPs of Generator and Discriminator.
More explanations in the code!
```
# Link: https://github.com/lyeoni/pytorch-mnist-GAN/blob/master/pytorch-mnist-GAN.ipynb
class Generator(tf.keras.Model):
def __init__(self,latent_var_len, hidden_layer_len, output_size):
"""
The Generator class.
To be used in GAN class as a property. That takes a vector of latent
variable and generates.
* `call()`: feeds the input of size `latent_var_len` to the generative MLP
and outputs an image in form of a vector of `output_size` dimensions.
# Arguments
latent_var_len: The number of dimensions of the latent vector.
hidden_layer_len: The number of dimensions of the first hidden layer.
The other hidden layers will be twice as size of the previous hidden
layer in dimensions.
output_size: The number of dimensions of the vector that represents
a generated image. This needs to be converted to a 2D array, in order
to be visualized. For example for the MNIST dataset, this will be a
vector with length 784. That needs to be converted to a 2D array of
28 * 28.
"""
super(Generator, self).__init__()
# Setting up the input layer
self.input_layer = keras.Input(shape=(latent_var_len,),
name='inp_latent_var')
# Dense: layers are fully connected network (FCN)
# BatchNormalization: layers, perform normalization on the outputs of each
# FCN that results in faster convergence.
# LeakyReLU : Provide unsaturated non-linearity so that the training speed
# increases.
self.dense1 = layers.Dense(hidden_layer_len,
name='dense_1')(self.input_layer)
self.bo1 = layers.BatchNormalization()(self.dense1)
self.lr1 = layers.LeakyReLU(alpha=0.2)(self.bo1)
self.dense2 = layers.Dense(hidden_layer_len * 2,name='dense_2')(self.lr1)
self.bo2 = layers.BatchNormalization()(self.dense2)
self.lr2 = layers.LeakyReLU(alpha=0.2)(self.bo2)
self.dense3 = layers.Dense(hidden_layer_len * 4,name='dense_3')(self.lr2)
self.bo3 = layers.BatchNormalization()(self.dense3)
self.lr3 = layers.LeakyReLU(alpha=0.2)(self.bo3)
self.dense4 = layers.Dense(output_size, activation='tanh',
name='dense_4')(self.lr3)
# Wrapping all the computational graph in a single object so that it can
# be called in the __call__ function
self.gen = tf.keras.Model(inputs=self.input_layer, outputs=self.dense4)
def __call__(self, inputs):
return self.gen(inputs)
class Discriminator(tf.keras.Model):
def __init__(self,input_image_size, hidden_layer_len=1024):
"""
Discriminator class.
To be used in GAN class. The purpose of this model is to identify the fake
images from the real ones.
* `call()`: Contains the logic for loss calculation using `y_true`, `y_pred`.
# Arguments
input_image_size: The size of the image as a vector. i.e. width x height
hidden_layer_len: The size of the first hidden layer. The size of the
next hidden layers will be half of their previous ones.
"""
super(Discriminator, self).__init__()
# Dense: layers are Fully Connected Networks (FCN).
# LeakyReLU: as explained for the Generator class.
# Dropout: Increases the regularization of the MLP by reducing overfitting.
# Setting up the input layer of the MLP
self.input_layer = keras.Input(shape=(input_image_size,),name='inp_image_var')
self.dense1 = layers.Dense(hidden_layer_len, name='dense_1')(self.input_layer)
self.lr1 = layers.LeakyReLU(alpha=0.2)(self.dense1)
self.do1 = layers.Dropout(rate=0.3)(self.lr1)
self.dense2 = layers.Dense(hidden_layer_len // 2,name='dense_2')(self.do1)
self.lr2 = layers.LeakyReLU(alpha=0.2)(self.dense2)
self.do2 = layers.Dropout(rate=0.5)(self.lr2)
self.dense3 = layers.Dense(hidden_layer_len // 4, name='dense_3')(self.do2)
self.lr3 = layers.LeakyReLU(alpha=0.2)(self.dense3)
self.do3 = layers.Dropout(rate=0.3)(self.lr3)
self.dense4 = layers.Dense(1, activation='sigmoid', name='dense_4')(self.do3)
# Wrapping up the input and output as a single model.
self.disc = tf.keras.Model(inputs=self.input_layer, outputs=self.dense4)
def __call__(self, inputs):
return self.disc(inputs)
# Just to make sure that the code is run.
print(Generator,Discriminator)
```
## **Let's Define Generative Adversarial Network!**
The **Generative Adversarial Network (GAN)** is defined as a class in the following code:
```
import numpy as np
# A class for storing samples of generated images
import imageio
import os
import time
class MyGAN:
def __init__(self,image_size,img_classes, disc_hidden_layer_len,gen_hidden_layer_len,latent_var_size):
""" My implementation of the GANs.
# Arguments
image_size: The size of the image as a vector. i.e. width x height
img_classes: Number of image classes. (Not used in this implementation)
disc_hidden_layer_len: The number of nodes in the first hidden layer of
the discriminator.
gen_hidden_layer_len: The number of nodes in the first hidden layer of
the generator.
latent_var_size: The size of the latent variable vector.
"""
super(MyGAN,self).__init__()
# Initializing Generator and Discriminator given the values.
self.generator = Generator(latent_var_size,gen_hidden_layer_len,image_size)
self.discriminator = Discriminator(image_size,hidden_layer_len=disc_hidden_layer_len)
# Setting up some values as properties
self.latent_var_size = latent_var_size
self.image_classes = img_classes
self.image_size = image_size
# Setting up loss functions. Since there are only two classes of images in
# This implementation of GAN (fake=0, real=1), we consider using
# BinaryCrossEntropy
self.gen_loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=False)
self.disc_loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=False)
# Setting up optimiziers for each of the MLPs
self.gen_opt = tf.keras.optimizers.Adam(learning_rate=2e-4,beta_1=0.5)
self.disc_opt = tf.keras.optimizers.Adam(learning_rate=2e-4,beta_1=0.5)
def __train_generator_one_batch(self,x):
"""
Train the generator with one batch of input images (x). Only the batch
size is used for this training; the generator does not have
direct access to the images of the dataset. It is only trained based on the
gradients passed from the discriminator.
"""
# Keeping track of the computations in a tape:
with tf.GradientTape() as tape:
# Drawing N samples of latent vectors with normal distribution.
# where N is the size of the batches.
z = tf.keras.backend.random_normal((x.shape[0],self.latent_var_size))
# Deceiving the discriminator by assigning the class of real images
# to fake images
y = tf.ones((x.shape[0], 1))
# generating images
gen_out = self.generator(z)
# classifying the generated images
disc_out = self.discriminator(gen_out)
# calculating the loss of classification to be passed to the generator
gen_loss = self.gen_loss_fn(y, disc_out)
# calculating gradients of the generator weights based on the loss
grads = tape.gradient(gen_loss, self.generator.trainable_weights)
# updating the weights of the generator using the gradients.
self.gen_opt.apply_gradients(zip(grads, self.generator.trainable_weights))
# returning the loss of classification of generated samples
return gen_loss.numpy()
def __train_discriminator_one_batch(self,x):
"""
Train the discriminator with one batch of real image samples (x)
through the adversarial process.
"""
# Keeping track of the computations in a tape:
with tf.GradientTape() as tape:
# train discriminator on real data
x_real, y_real = x, tf.ones((x.shape[0],1))
disc_real_out = self.discriminator(x_real)
disc_real_loss = self.disc_loss_fn(y_real,disc_real_out)
# train discriminator on fake data
# drawing samples of latent variables
z = tf.keras.backend.random_normal((x.shape[0],self.latent_var_size))
# generating fake images from the latent variables given
x_fake = self.generator(z)
y_fake = tf.zeros((x.shape[0],1))
# calculating loss of classification of fake images
disc_fake_out = self.discriminator(x_fake)
disc_fake_loss = self.disc_loss_fn(y_fake,disc_fake_out)
# sum both losses of fake and real classifications
disc_loss_total = disc_fake_loss + disc_real_loss
# calculating the gradients of the discriminator from the total loss
# of classifications
grads = tape.gradient(disc_loss_total, self.discriminator.trainable_weights)
# updating weights of the discriminator MLP
self.disc_opt.apply_gradients(zip(grads, self.discriminator.trainable_weights))
# returning the discriminator loss
return disc_loss_total.numpy()
def train_one_batch(self,x,disc_runs=1):
"""
Train the GAN using the algorithm described in the paper of GAN,
given a batch of input images (x)
"""
# train discriminator for `disc_runs` epochs. The default value is 1 and it
# works well. However, I wrote this code to follow the algorithm
# described in GAN paper.
d_losses = []
for i in range(disc_runs):
d_losses.append(self.__train_discriminator_one_batch(x))
# train the generator
g_loss = self.__train_generator_one_batch(x)
d_loss = np.asarray(d_losses).mean()
return [d_loss, g_loss]
def sample_images(self, epoch):
"""
Sample 200 images from the GAN and store them as a single image file of
20x10 tiles of small images. The `epoch` argument is only passed so
that the saved images can be distinguished from each other.
The function returns the name of the stored image as a string.
"""
r, c = 20, 10
# Draw r x c latent samples
z = tf.keras.backend.random_normal((r * c,self.latent_var_size))
# Generate images
x_fake = self.generator(z)
# Rescale images into the range of [0,1]
gen_imgs = 0.5 * x_fake.numpy() + 0.5
# Reshape the images into an array containing 200 2D images of size 28 x 28
gen_imgs = gen_imgs.reshape(x_fake.shape[0],28,28)
# placing images on a big image of size (r x 28) x (c x 28)
canvas = np.zeros((r * 28,c * 28))
# index of image on gen_imgs array
cnt = 0
for i in range(r):
for j in range(c):
# storing each image on its respective place on canvas
canvas[i * 28:(i+1) * 28,j * 28:(j+1) * 28] = gen_imgs[cnt,:,:]
cnt += 1
fname = 'samples_from_my_gan_epoch_{}.png'.format(epoch)
# saving image file
imageio.imwrite(fname,canvas)
return fname
# Just to make sure the code is run. Some times you think you have clicked on
# the run button, but in fact you have not :D
print(MyGAN)
```
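The `BinaryCrossentropy(from_logits=False)` loss used for both networks above reduces, for predicted probabilities p and labels y, to the mean of -(y * log(p) + (1 - y) * log(1 - p)). A hand-rolled NumPy version, for illustration only (the notebook itself uses the Keras implementation), is:

```
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Mean binary cross-entropy over a batch of probabilities."""
    p = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return float(np.mean(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))))

# Discriminator sees real images (label 1) classified with high confidence:
low_loss = binary_cross_entropy(np.ones(4), np.array([0.9, 0.95, 0.99, 0.9]))
# ...and fake images (label 0) mistakenly classified as real:
high_loss = binary_cross_entropy(np.zeros(4), np.array([0.9, 0.95, 0.99, 0.9]))
print(low_loss < high_loss)  # True
```

This is why assigning label 1 to generated images in `__train_generator_one_batch` pushes the generator toward fooling the discriminator: the loss is small only when the discriminator outputs high probabilities for the fakes.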
## **MyGAN Class in Action!**
Let's see how this GAN class performs. I ran this code for 50 epochs and it works fine. First, a dataset object is created, and then the model is trained over the epochs.
```
import numpy as np
# create an instance of the class
mygan = MyGAN(28*28,10,1024,256,100)
# Prepare the training dataset
batch_size = 64
train_dataset = tf.data.Dataset.from_tensor_slices((x_train,y_train))
train_dataset = train_dataset.batch(batch_size)
# For debugging purposes
break_loops = False
# Iterate over epochs.
epochs = 50
file_names = []
for epoch in range(epochs):
if break_loops:
break
print('Start of epoch {}'.format(epoch))
d_losses = []
g_losses = []
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
dl,gl = mygan.train_one_batch(x_batch_train)
d_losses.append(dl)
g_losses.append(gl)
# aggregate losses
d_losses = np.asarray(d_losses)
g_losses = np.asarray(g_losses)
# Sample some images and store them
if epoch % 5 == 0:
file_names.append(mygan.sample_images(epoch))
if break_loops:
break
print('epoch {} : disc: {:.4f} gen: {:.4f}'.format(epoch,d_losses.mean(),g_losses.mean()))
```
## **Where the images at?**
Now that the training process is complete, let's take a look at what the generated images actually look like! The following code downloads the samples saved every 5 epochs:
**Note:** In order to run the `files.download()` function of Google Colab properly, you might need to allow the **colab.research.google.com** website to use third-party cookies in Google Chrome, as explained [here](https://stackoverflow.com/questions/53581023/google-colab-file-download-failed-to-fetch-error). Otherwise, you might get some errors.
```
from google.colab import files
for f in file_names:
files.download(f)
```
I hope this piece of code helps you get started with the Keras and TensorFlow libraries. Please let me know your feedback. Thanks.
```
import os
import pandas as pd
import numpy as np
import pickle
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
data_path \
= 'https://raw.githubusercontent.com/fclesio/learning-space/master/Datasets/02%20-%20Classification/default_credit_card.csv'
def get_features_and_labels(df):
# Features
X = df[
[
"LIMIT_BAL",
"AGE",
"PAY_0",
"PAY_2",
"PAY_3",
"BILL_AMT1",
"BILL_AMT2",
"PAY_AMT1",
]
]
gender_dummies = pd.get_dummies(df[["SEX"]].astype(str))
X = pd.concat([X, gender_dummies], axis=1)
# Labels
y = df["DEFAULT"]
return X, y
def get_results(y_test, y_pred):
acc = metrics.accuracy_score(y_test, y_pred)
acc = round(acc * 100, 2)
df_results = pd.DataFrame(y_pred)
df_results.columns = ["status"]
print(f"Accuracy: {acc}%")
print(df_results.groupby(by=["status"]).size())
df = pd.read_csv(data_path)
X, y = get_features_and_labels(df)
X_train, X_test, y_train, y_test \
= train_test_split(X, y, test_size=0.1, random_state=42)
model = RandomForestClassifier(
n_estimators=5,
random_state=42,
max_depth=3,
min_samples_leaf=100,
n_jobs=-1,
)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
get_results(y_test, y_pred)
pickle.dump(model, open("model_rf.pkl", 'wb'))
```
### Attack
In this case we have a pre-trained model that will be handed over to another place, for example, from a Data Science team to a Machine Learning engineering team.
The attack consists of taking this model, making a slight modification that can be harmful, and putting it back into the ML supply chain flow, which in this case is the ML production pipeline.
```
# Load model from Pickle
model_rf_reload_pkl = pickle.load(open('model_rf.pkl', 'rb'))
# Displays prediction classes
model_rf_reload_pkl.classes_
# Attack: Change the classes for the model only to 1
model_rf_reload_pkl.classes_ = np.array([1, 1])
# Quick check
model_rf_reload_pkl.classes_
# Call predict from the new model
y_pred = model_rf_reload_pkl.predict(X_test)
# Check results with a new model
get_results(y_test, y_pred)
```
As we can see, if our ML supply chain includes some "man in the middle" who can take our file and modify it, an entire class of a model (in this case we used Scikit-Learn) can be corrupted. This was an extreme case, but remember: attackers want to keep attacking for a long time and to stay in stealth mode as much as they can.
### Countermeasures
- If there's some risk of a "man in the middle" at the touchpoints between teams (e.g. a DS team hands the model over to an MLE team and then to another team), it's suitable to use SHA1 or MD5 checksums from the start to assure file integrity between all entities involved with the model;
- If possible, own your models and deployment and reduce as many intermediate steps as possible;
- If possible, avoid implementations that allow modifications in models (e.g. classes, attributes, coefficients, etc)
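The checksum countermeasure can be sketched with Python's standard `hashlib` (using SHA-256 here, which is preferable to the weaker SHA1/MD5; the filename is illustrative):

```
import hashlib

def file_sha256(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Sketch: the producing team records the digest at handover...
with open("model_demo.pkl", "wb") as f:
    f.write(b"serialized model bytes")
expected = file_sha256("model_demo.pkl")

# ...and the receiving team verifies it before loading the model.
assert file_sha256("model_demo.pkl") == expected
print("integrity check passed")
```

Any in-transit modification of the pickle, such as the `classes_` tampering shown above, changes the digest and fails this check before the corrupted model ever reaches production.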
```
import os
os.environ['CUDA_VISIBLE_DEVICES']='6'
import time
import numpy as np
import tensorflow as tf
from VGG16_GAP import VGG16_GAP
from VGG16_flatten import VGG16_flatten
import numpy as np
import pandas as pd
import skimage.io as imageio
import pickle
from progress.bar import Bar
from ipywidgets import IntProgress
from IPython.display import display
with open('save/label_dict.pkl', 'rb') as f:
y_dict = pickle.load(f)
with open('save/inv_label_dict.pkl', 'rb') as f:
inv_y_dict = pickle.load(f)
HOME_DIR = "/home/cmchang/DLCV2018SPRING/final/"
TRAIN_DIR = HOME_DIR+"dlcv_final_2_dataset/train/"
VALID_DIR = HOME_DIR+"dlcv_final_2_dataset/val/"
dtrain = pd.read_csv(HOME_DIR+"dlcv_final_2_dataset/train_id.txt", header=None,sep=" ", names=["img", "id"])
dvalid = pd.read_csv(HOME_DIR+"dlcv_final_2_dataset/val_id.txt", header=None,sep=" ", names=["img", "id"])
train_list = list(TRAIN_DIR+dtrain.img)
valid_list = list(VALID_DIR+dvalid.img)
def readImgList(file_list):
images = list()
for i, file in enumerate(file_list):
print(i, end="\r")
img = imageio.imread(file)
img = img.astype(int)
images.append(img)
return np.array(images)
def transformLabel(id_list, y_dict):
label = list()
for uid in list(id_list):
label.append(y_dict[uid])
return np.array(label)
def one_hot_encoding(class_numbers, num_classes):
return np.eye(num_classes, dtype=float)[class_numbers]
def initialize_uninitialized(sess):
global_vars = tf.global_variables()
is_not_initialized = sess.run([tf.is_variable_initialized(var) for var in global_vars])
not_initialized_vars = [v for (v,f) in zip(global_vars, is_not_initialized) if not f]
if len(not_initialized_vars):
sess.run(tf.variables_initializer(not_initialized_vars))
Xtrain = readImgList(train_list)
print("train:", Xtrain.shape)
Xvalid = readImgList(valid_list)
print("valid:", Xvalid.shape)
ytrain = transformLabel(list(dtrain.id), y_dict)
yvalid = transformLabel(list(dvalid.id), y_dict)
Ytrain = one_hot_encoding(ytrain, len(y_dict))
Yvalid = one_hot_encoding(yvalid, len(y_dict))
scope_name = "Model"
model = VGG16_GAP(scope_name=scope_name)
FLAG_save_dir = "/home/cmchang/DLCV2018SPRING/final/newCL_lambda-1e-1_dynamic_gap_L5_v3_rescale0-1_save_linear/"
FLAG_init_from = FLAG_save_dir + "para_dict.npy"
model.build(vgg16_npy_path=FLAG_init_from,
shape=Xtrain.shape[1:],
classes=len(y_dict),
conv_pre_training=True,
fc_pre_training=True,
new_bn=False)
dp = [1.0]
model.set_idp_operation(dp=dp)
def count_number_params(para_dict):
n = 0
for k,v in sorted(para_dict.items()):
if 'bn_mean' in k:
continue
elif 'bn_variance' in k:
continue
elif 'gamma' in k:
continue
elif 'beta' in k:
continue
elif 'conv' in k or 'fc' in k:
n += get_params_shape(v[0].shape.as_list())
n += get_params_shape(v[1].shape.as_list())
return n
def get_params_shape(shape):
n = 1
for dim in shape:
n = n*dim
return n
def count_flops(para_dict, net_shape):
input_shape = (3 ,32 ,32) # Format:(channels, rows,cols)
total_flops_per_layer = 0
input_count = 0
for k,v in sorted(para_dict.items()):
if 'bn_mean' in k:
continue
elif 'bn_variance' in k:
continue
elif 'gamma' in k:
continue
elif 'beta' in k:
continue
elif 'fc' in k:
continue
elif 'conv' in k:
conv_filter = v[0].shape.as_list()[3::-1] # (64 ,3 ,3 ,3) # Format: (num_filters, channels, rows, cols)
stride = 1
padding = 1
if conv_filter[1] == 0:
n = conv_filter[2] * conv_filter[3] # vector_length
else:
n = conv_filter[1] * conv_filter[2] * conv_filter[3] # vector_length
flops_per_instance = n + (n - 1) # general definition of flops per output value (n multiplications and n-1 additions)
num_instances_per_filter = (( input_shape[1] - conv_filter[2] + 2 * padding) / stride) + 1 # for rows
num_instances_per_filter *= ((input_shape[1] - conv_filter[2] + 2 * padding) / stride) + 1 # multiplying with cols
flops_per_filter = num_instances_per_filter * flops_per_instance
total_flops_per_layer += flops_per_filter * conv_filter[0] # multiply with number of filters
total_flops_per_layer += conv_filter[0] * input_shape[1] * input_shape[2]
input_shape = net_shape[input_count].as_list()[3:0:-1]
input_count +=1
total_flops_per_layer += net_shape[-1].as_list()[1] *2360*2
return total_flops_per_layer
def countFlopsParas(net):
total_flops = count_flops(net.para_dict, net.net_shape)
if total_flops / 1e9 > 1: # for Giga Flops
print(total_flops/ 1e9 ,'{}'.format('GFlops'))
else:
print(total_flops / 1e6 ,'{}'.format('MFlops'))
total_params = count_number_params(net.para_dict)
if total_params / 1e9 > 1: # for Giga Flops
print(total_params/ 1e9 ,'{}'.format('G'))
else:
print(total_params / 1e6 ,'{}'.format('M'))
return total_flops, total_params
flops, params = countFlopsParas(model)
FLAG_flops_M = flops/1e6
FLAG_params_M = params/1e6
print("Flops: %3f M, Paras: %3f M" % (flops/1e6, params/1e6))
# extract features
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
sess.run(tf.global_variables())
print("Initialized")
output = []
for i in range(int(Xtrain.shape[0]/200+1)):
print(i, end="\r")
st = i*200
ed = min((i+1)*200, Xtrain.shape[0])
prob = sess.run(model.features, feed_dict={model.x: Xtrain[st:ed,:],
model.is_train: False})
output.append(prob)
for i in range(int(Xvalid.shape[0]/200+1)):
print(i, end="\r")
st = i*200
ed = min((i+1)*200, Xvalid.shape[0])
prob = sess.run(model.features, feed_dict={model.x: Xvalid[st:ed,:],
model.is_train: False})
output.append(prob)
EX = np.concatenate(output,)
print(EX.shape)
Y = np.concatenate([ytrain, yvalid])
print(Y.shape)
centers = np.zeros((len(y_dict), EX.shape[1]))
for i in range(len(y_dict)):
centers[i,:] = np.mean(EX[Y==i,:], axis=0)
centers.shape
np.save(arr=centers,file=FLAG_save_dir+"centers.npy")
```
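For reference, the per-layer accounting used in `count_flops` above — each output value costs n multiplications and n-1 additions, where n is the receptive-field size — can be checked by hand for a single convolution. This is a standalone sketch with toy numbers, not the notebook's exact code:

```
# FLOPs for one conv layer, using the same accounting as count_flops above.
def conv_flops(in_ch, out_ch, k, in_h, in_w, stride=1, padding=1):
    n = in_ch * k * k               # receptive-field size per output value
    flops_per_value = n + (n - 1)   # n multiplications + (n - 1) additions
    out_h = (in_h - k + 2 * padding) // stride + 1
    out_w = (in_w - k + 2 * padding) // stride + 1
    return flops_per_value * out_h * out_w * out_ch

# First VGG-style layer on a 32x32 RGB input: 3 -> 64 channels, 3x3 kernel
print(conv_flops(3, 64, 3, 32, 32))  # 3473408
```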
## Link to Colab
```
# connect to google colab
from google.colab import drive
drive.mount("/content/drive")
```
## Downloading Dependencies
```
# install torchaudio
!pip install torchaudio==0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
import os
import numpy as np
import pandas as pd
# current torch version is 1.7.0+cu101
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import warnings
warnings.filterwarnings("ignore")
import torchaudio
import matplotlib.pyplot as plt
import IPython.display as ipd
# check if cuda GPU is available, make sure you're using GPU runtime on Google Colab
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device) # you should output "cuda"
# COLAB CONFIG
# change colab flag to false if train using jupyter notebook
COLAB_FLAG = True
COLAB_FILEPATH = './drive/My Drive/TIL2021/competition/SC/' if COLAB_FLAG else './'
pd.options.mode.chained_assignment = None # default='warn'
%matplotlib inline
```
## Speech Classification Dataset
We will be providing the base dataset that will be used for the first task of the Speech Classification competition.
```
!gdown --id 1im5shxcavdoTRNT66mhVdtA_E0ZR8QLl
!unzip s1_train_release.zip
class CustomSpeechDataset(torch.utils.data.Dataset):
def __init__(self, path, typ='train', transforms=None):
assert typ == 'train' or typ == 'test', 'typ must be either "train" or "test"'
self.typ = typ
self.transforms = transforms
self.targets = []
if self.typ == 'train':
self.class_names = sorted(os.listdir(path))
num_classes = len(self.class_names)
for class_idx, class_name in enumerate(self.class_names):
class_dirx = os.path.join(path, class_name)
wav_list = os.listdir(class_dirx)
for wav_file in wav_list:
self.targets.append({
'filename': wav_file,
'path': os.path.join(class_dirx, wav_file),
'class': class_name
})
if self.typ == 'test':
wav_list = os.listdir(path)
for wav_file in wav_list:
self.targets.append({
'filename': wav_file,
'path': os.path.join(path, wav_file)
})
def __len__(self):
return len(self.targets)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx.tolist()
# sr is the sampling rate
signal, sr = torchaudio.load(self.targets[idx]['path'], normalization=True)
filename = self.targets[idx]['filename']
if self.transforms:
for transform in self.transforms:
signal = transform(signal)
if self.typ == 'train':
clx_name = self.targets[idx]['class']
return filename, signal, sr, clx_name
elif self.typ == 'test':
return filename, signal, sr
full_dataset = CustomSpeechDataset(path='s1_train_release', typ='train')
train_size = int(len(full_dataset)*0.8)
valid_size = len(full_dataset) - train_size
print(train_size, valid_size)
# train test split
train_set, valid_set = torch.utils.data.random_split(full_dataset, [train_size, valid_size])
labels = full_dataset.class_names
print(labels)
labels_to_indices = {}
for idx, l in enumerate(labels):
labels_to_indices[l] = idx
print(labels_to_indices)
```
## Get the data distribution of each class
```
train_filename_list = []
train_label_list = []
for i in range(len(train_set)):
train_filename_list.append(train_set[i][0])
train_label_list.append(train_set[i][3])
print(train_filename_list[:5])
print(train_label_list[:5])
# make to dataframe
train_tuple = list(zip(train_filename_list, train_label_list))
#print(result_tuple)
train_df = pd.DataFrame(train_tuple, columns=['train_filename', 'train_label'])
train_df.head()
# find the count of each label to check distribution of the labels
train_df["train_label"].value_counts()
# Hmmm... The distribution of the data is quite even for the training set :)
```
## Let's next look at one example from the training set.
```
train_set[0]
filename, waveform, sample_rate, label_id = train_set[0]
print("Shape of waveform: {}".format(waveform.size()))
print("Sample rate of waveform: {}".format(sample_rate))
# Let's plot the waveform using matplotlib
# We observe that the main audio activity happens at the later end of the clip
plt.plot(waveform.t().numpy());
# let's play the audio clip and hear it for ourselves!
ipd.Audio(waveform.numpy(), rate=sample_rate)
```
## Constant Sample Lengths
In order to feed our features into a model, we have to ensure that they are all the same size. Below, we see that the sample length varies across the audio clips.
Let's pad the audio clips to a maximum sample length of 16000. (16000 sample length is equal to 1 second at 16,000 Hz sampling rate)
We will pad audio clips which are less than 1 second in length, with parts of itself.
```
audio_lens = []
for i in range(len(train_set)):
audio_lens.append(train_set[i][1].size(1))
print('Max Sample Length:', max(audio_lens))
print('Min Sample Length:', min(audio_lens))
```
Since the minimum and maximum lengths are already equal here, no padding is actually needed; otherwise, running `PadAudio` would extend the shorter clips by repeating their own samples.
```
class PadAudio(torch.nn.Module):
def __init__(self, req_length = 16000):
super().__init__()
self.req_length = req_length
def forward(self, waveform):
while waveform.size(1) < self.req_length:
# example if audio length is 15800 and max is 16000, the remaining 200 samples will be concatenated
# with the FIRST 200 samples of the waveform itself again (repetition)
waveform = torch.cat((waveform, waveform[:, :self.req_length - waveform.size(1)]), axis=1)
return waveform
# let's set up a list of transformations we are going to apply to the waveforms
transformations = []
transformations.append(PadAudio())
transformations
```
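As a quick sanity check of the self-repetition padding rule, here is a NumPy mirror of `PadAudio` (a framework-free stand-in, not the class itself):

```python
import numpy as np

def pad_audio(waveform, req_length=16000):
    # NumPy mirror of PadAudio above: keep appending the clip's own
    # leading samples until the required length is reached
    while waveform.shape[1] < req_length:
        need = req_length - waveform.shape[1]
        waveform = np.concatenate([waveform, waveform[:, :need]], axis=1)
    return waveform

short = np.arange(6, dtype=float).reshape(1, 6)  # toy length-6 "clip"
padded = pad_audio(short, req_length=10)
print(padded)  # the last 4 samples repeat the first 4: 0, 1, 2, 3
```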
## Features
In this classification example, instead of using the raw waveform of the audio clips, we will craft handmade audio features known as melspectrograms instead.
For an in-depth explanation of what a melspectrogram is, I would highly recommend reading this article [here](https://medium.com/analytics-vidhya/understanding-the-mel-spectrogram-fca2afa2ce53).
In short, a melspectrogram is a way to represent an audio signal’s loudness as it varies over time at different frequencies, while scaled to how humans perceive sound. (We can easily tell the difference between 500 and 1000 Hz, but we can't between 10,000 and 10,500 Hz.)

TorchAudio has an in-built method that can help us with this transformation. We shall then apply log scaling.
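To make the perceptual-scaling claim concrete: the HTK-style mel scale (which torchaudio uses by default) maps frequency f to m = 2595 · log10(1 + f/700). A quick sketch shows that the 500 Hz gap at the low end spans several times more mels than the same gap at the high end:

```python
import math

def hz_to_mel(f):
    # HTK-style mel scale: m = 2595 * log10(1 + f / 700)
    return 2595.0 * math.log10(1.0 + f / 700.0)

low_gap = hz_to_mel(1000) - hz_to_mel(500)      # 500 Hz gap at low frequencies
high_gap = hz_to_mel(10500) - hz_to_mel(10000)  # same 500 Hz gap at high frequencies
print(low_gap, high_gap)  # the low-frequency gap is several times wider in mels
```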
```
from torchaudio.transforms import MelSpectrogram
# We define our own log transformation here
class LogMelTransform(torch.nn.Module):
def __init__(self, log_offset = 1e-6):
super().__init__()
self.log_offset = log_offset
def forward(self, melspectrogram):
return torch.log(melspectrogram + self.log_offset)
# Let's append these new transformations
transformations.append(MelSpectrogram(sample_rate = 16000, n_mels = 128))
transformations.append(LogMelTransform())
transformations
```
## Data Augmentation
We will do a simple data augmentation process in order to increase the variations in our dataset.
In the audio domain, the augmentation technique known as [SpecAugment](https://arxiv.org/abs/1904.08779) is often used. It makes use of 3 steps:
- Time Warp (warps the spectrogram to the left or right) 2nd row
- Frequency Masking (randomly masks a range of frequencies) 3rd row
- Time Masking (randomly masks a range of time) 4th row

As Time Warp is computationally intensive and does not contribute significant improvement in results, we shall simply use Frequency and Time Masking in this example.
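Conceptually, frequency masking simply zeroes out a random band of mel channels. A minimal NumPy sketch of the idea (a hypothetical stand-in, not torchaudio's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def frequency_mask(spec, freq_mask_param=3):
    # spec: (n_mels, n_frames); zero out a random band of at most
    # freq_mask_param consecutive mel channels, mirroring FrequencyMasking
    n_mels = spec.shape[0]
    width = int(rng.integers(0, freq_mask_param + 1))
    start = int(rng.integers(0, n_mels - width + 1))
    out = spec.copy()
    out[start:start + width, :] = 0.0
    return out

masked = frequency_mask(np.ones((128, 81)))
```

Time masking is identical with the roles of the two axes swapped.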
```
from torchaudio.transforms import TimeMasking, FrequencyMasking
eval_transformations = transformations.copy()
# Let's extend the list of transformations with the augmentations
transformations.append(TimeMasking(time_mask_param = 10)) # a maximum of 10 time steps will be masked
transformations.append(FrequencyMasking(freq_mask_param = 3)) # maximum of 3 freq channels will be masked
print(transformations)
```
## Data Loaders
Let's now set up our data loaders so that we can streamline the batch loading of data for our model training later on.
```
BATCH_SIZE = 32
NUM_WORKERS = 4
PIN_MEMORY = True if device == 'cuda' else False
def train_collate_fn(batch):
# A data tuple has the form:
# filename, waveform, sample_rate, label
tensors, targets, filenames = [], [], []
# Gather in lists, and encode labels as indices
for filename, waveform, sample_rate, label in batch:
# apply transformations
for transform in transformations:
waveform = transform(waveform)
waveform = waveform.squeeze().T
tensors += [waveform]
targets += [labels_to_indices[label]]
filenames += [filename]
# Group the list of tensors into a batched tensor
tensors = torch.stack(tensors)
targets = torch.LongTensor(targets)
return (tensors, targets, filenames)
def eval_collate_fn(batch):
# A data tuple has the form:
# filename, waveform, sample_rate, label
tensors, targets, filenames = [], [], []
# Gather in lists, and encode labels as indices
for filename, waveform, sample_rate, label in batch:
# apply transformations
for transform in eval_transformations:
waveform = transform(waveform)
waveform = waveform.squeeze().T
tensors += [waveform]
targets += [labels_to_indices[label]]
filenames += [filename]
# Group the list of tensors into a batched tensor
tensors = torch.stack(tensors)
targets = torch.LongTensor(targets)
return (tensors, targets, filenames)
train_loader = torch.utils.data.DataLoader(
train_set,
batch_size=BATCH_SIZE,
shuffle=True,
drop_last=False,
collate_fn=train_collate_fn,
num_workers=NUM_WORKERS,
pin_memory=PIN_MEMORY,
)
valid_loader = torch.utils.data.DataLoader(
valid_set,
batch_size=BATCH_SIZE,
shuffle=False,
drop_last=False,
collate_fn=eval_collate_fn,
num_workers=NUM_WORKERS,
pin_memory=PIN_MEMORY,
)
```
## Setting up the Model
In this speech classification example, we will make use of a Long-Short-Term Memory Recurrent Neural Network (LSTM-RNN).
```
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, num_layers, num_classes, device, classes=None):
super(RNN, self).__init__()
self.hidden_size = hidden_size
self.num_layers = num_layers
self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
self.fc = nn.Linear(hidden_size, num_classes)
self.device = device
self.classes = classes
def forward(self, x):
# Set initial hidden and cell states
batch_size = x.size(0)
h0 = torch.zeros(self.num_layers, batch_size, self.hidden_size).to(self.device)
c0 = torch.zeros(self.num_layers, batch_size, self.hidden_size).to(self.device)
# Forward propagate LSTM
out, _ = self.lstm(x, (h0, c0)) # shape = (batch_size, seq_length, hidden_size)
# Decode the hidden state of the last time step
out = self.fc(out[:, -1, :])
return out
def predict(self, x):
'''Predict one label from one sample's features'''
# x: feature from a sample, LxN
# L is the length of the sequence
# N is feature dimension
x = torch.tensor(x[np.newaxis, :], dtype=torch.float32)
x = x.to(self.device)
outputs = self.forward(x)
_, predicted = torch.max(outputs.data, 1)
predicted_index = predicted.item()
return predicted_index
# initialize the model class
model = RNN(input_size=128, hidden_size=128, num_layers=2, num_classes=len(labels), device=device, classes=labels).to(device)
```
## Training the model
```
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
optimizer.zero_grad()
num_epochs = 10 #50
for epoch in range(1,num_epochs+1):
# training steps
model.train()
count_correct, count_total = 0, 0
for idx, (features, targets, filenames) in enumerate(train_loader):
features = features.to(device)
targets = targets.to(device)
# forward pass
outputs = model(features)
loss = criterion(outputs, targets)
# backward pass
loss.backward()
optimizer.step()
optimizer.zero_grad()
# training results
_, argmax = torch.max(outputs, 1)
count_correct += (targets == argmax.squeeze()).sum().item()
count_total += targets.size(0)
train_acc = count_correct / count_total
# evaluation steps
model.eval()
count_correct, count_total = 0, 0
with torch.no_grad():
for idx, (features, targets, filenames) in enumerate(valid_loader):
features = features.to(device)
targets = targets.to(device)
# forward pass
val_outputs = model(features)
val_loss = criterion(val_outputs, targets)
# validation results
_, argmax = torch.max(val_outputs, 1)
count_correct += (targets == argmax.squeeze()).sum().item()
count_total += targets.size(0)
# print results
valid_acc = count_correct / count_total
print('Epoch [{}/{}], Train loss = {:.4f}, Train accuracy = {:.2f}, Valid loss = {:.4f}, Valid accuracy = {:.2f}'
.format(epoch, num_epochs, loss.item(), 100*train_acc, val_loss.item(), 100*valid_acc))
# save the model
torch.save(model.state_dict(), f'{COLAB_FILEPATH}model/speech_classification_lstm.pt')
```
## Load back the model
```
##model = RNN(input_size=128, hidden_size=128, num_layers=2, num_classes=len(labels), device=device, classes=labels).to(device)
model.load_state_dict(torch.load(f'{COLAB_FILEPATH}model/speech_classification_lstm.pt'))
```
## Test Set
```
!gdown --id 1AvP49xengGjnFTG209AgvAGj-by8WGSi
!unzip -q -o s1_test.zip
# Initialise dataset object for test set
test_set = CustomSpeechDataset(path='s1_test', typ='test')
# define test collate function and set up test loader
def test_collate_fn(batch):
# A data tuple has the form:
# filename, waveform, sample_rate
tensors, filenames = [], []
# Gather in lists
for filename, waveform, sample_rate in batch:
# apply transformations
for transform in eval_transformations:
waveform = transform(waveform)
waveform = waveform.squeeze().T
tensors += [waveform]
filenames += [filename]
# Group the list of tensors into a batched tensor
tensors = torch.stack(tensors)
return (tensors, filenames)
test_loader = torch.utils.data.DataLoader(
test_set,
batch_size=BATCH_SIZE,
shuffle=False,
drop_last=False,
collate_fn=test_collate_fn,
num_workers=NUM_WORKERS,
pin_memory=PIN_MEMORY,
)
# pass test set through the RNN model
model.eval()
pred_list, filename_list = [], []
with torch.no_grad():
for idx, (features, filenames) in enumerate(test_loader):
features = features.to(device)
# forward pass
outputs = model(features)
# test results
_, argmax = torch.max(outputs, 1)
pred_list += argmax.cpu().tolist()
filename_list += filenames
print(pred_list[:5], filename_list[:5])
print(filename_list)
print(full_dataset.class_names)
```
## Submission of Results
The submission CSV file should contain only 2 columns, filename and label, in that order. The file should be sorted by filename and exclude headers.
Refer to **sample_submission.csv** for an example.
```
result_tuple = list(zip(filename_list, pred_list))
#print(result_tuple)
submission = pd.DataFrame(result_tuple, columns=['filename', 'pred'])
submission = submission.sort_values('filename').reset_index(drop=True)
submission['label'] = submission['pred'].apply(lambda x: labels[x])
submission[['filename', 'label']].head()
submission["label"].value_counts()
submission[['filename', 'label']].to_csv(f'{COLAB_FILEPATH}submission/submission.csv', header=None, index=None)
```
Import basic libraries
```
from __future__ import division, print_function, absolute_import
from __future__ import unicode_literals
import numpy as np
import math
```
Import trajectories
```
import mdshare
### Note: Please replace the working_directory with the current one ! ###
wd = '/idiom/demo/' ### This defines the working directory
local_filename = mdshare.fetch('alanine-dipeptide-3x250ns-backbone-dihedrals.npz',working_directory=wd )
with np.load(local_filename) as fh:
trajs = [fh[key] for key in fh.keys()]
local_filename1 = mdshare.fetch('alanine-dipeptide-3x250ns-heavy-atom-positions.npz',working_directory=wd )
with np.load(local_filename1) as fh:
trajs1 = [fh[key] for key in fh.keys()]
traj_concat = np.concatenate((trajs[0],trajs[1],trajs[2]),axis=0)[::1,:]
print(traj_concat.shape)
data_size,_ = traj_concat.shape
# print(traj_concat[:10,:])
data_phi = traj_concat[:,0]
data_psi = traj_concat[:,1]
torsion = np.column_stack((data_phi,data_psi))
########## xyz:
traj_concat1 = np.concatenate((trajs1[0],trajs1[1],trajs1[2]),axis=0)[::1,:]
atom_position_list = np.split(traj_concat1,10,axis=1)
# print(atom_position_list[1][0])
dist_list = []
for i in range(10):
for j in range(i+1,10):
dist_tmp = np.sum(
np.square(atom_position_list[i]-atom_position_list[j]),
axis=-1)
dist_list.append(dist_tmp)
print(len(dist_list))
dist_array_raw = np.column_stack(dist_list)
dist_array = (dist_array_raw - np.mean(dist_array_raw,axis=0,keepdims=True))/( np.std(dist_array_raw,axis=0,keepdims=True) ) ## standardize each feature (zero mean, unit variance)
```
Task Sampling by FPS:
```
def distance(A, B):
return np.sqrt( np.sum( ( A - B )**2 , axis=-1) ) ### [N,s] or [N]
def MaxMinSampling(points, seed_id, n=1000):
seed_id = seed_id.astype(np.int32)
seed = points[ seed_id ]
s,_ = seed.shape
landmark = seed_id.tolist() ### [s], to be expanded
pts_3d = np.expand_dims(points,axis=1) ## [N,1,dim]
seed_3d = np.expand_dims(seed,axis=0) ## [1,s,dim]
m = np.min( distance(pts_3d,seed_3d), axis=-1) ### [N]
for i in range(s,n):
lm = np.argmax(m)
landmark.append( lm )
###
dist = distance( np.expand_dims(points[lm],axis=0) , points ) ### [N]
m = np.minimum( m, dist )
return landmark
```
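On a toy 2-D point set, this greedy max-min rule picks the most spread-out points before any near-duplicates. A self-contained re-run of the same logic (hypothetical helper names, mirroring `MaxMinSampling` above):

```python
import numpy as np

def dist(A, B):
    return np.sqrt(np.sum((A - B) ** 2, axis=-1))

def max_min_sampling(points, seed_id, n):
    # same greedy farthest-point rule as MaxMinSampling above
    seed = points[np.asarray(seed_id)]
    landmark = list(seed_id)
    m = np.min(dist(points[:, None, :], seed[None, :, :]), axis=-1)
    for _ in range(len(landmark), n):
        lm = int(np.argmax(m))
        landmark.append(lm)
        m = np.minimum(m, dist(points[lm][None, :], points))
    return landmark

pts = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(max_min_sampling(pts, seed_id=[0], n=3))  # -> [0, 4, 2]: far corners first, near-duplicate last
```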
Perform mini-batch Farthest Point Sampling (FPS)
```
win_id_list = []
n_iter = 10
minibs = data_size // n_iter
id_set = np.arange(minibs)
for i in range(n_iter):
print('Iteration: ', i)
np.random.shuffle(id_set)
seed_id = id_set[:100]
dist_array_tmp = dist_array[ i*minibs : (i+1)*minibs ]
win_id = MaxMinSampling( dist_array_tmp, seed_id=seed_id, n=1000 ) ###minibs
win_id = np.array(win_id).astype(np.int32)
win_id_shifted = i*minibs + win_id
win_id_list.append(win_id_shifted)
id_array = np.concatenate(win_id_list)
np.savetxt('./fps.txt', id_array .reshape((-1,1)),fmt='%d')
np.savetxt('./torsion.fps.txt',torsion[id_array])
```
# Assignment for machine learning workshop 2018
## 1. Installing Anaconda
- Anaconda (https://www.anaconda.com/download)
- single bundle includes most scientific computing packages.
- make sure to pick version for **Python 3**.
- easy install packages for Windows, Mac, Linux.
- (single directory install)
- Jupyter (ipython notebooks)
- Launch from _Anaconda Navigator_
- browser-based interactive computing environment
- development, documenting, executing code, viewing results
- whole session stored in notebook document (.ipynb)
- (also made and presented these slides!)

## 2. Introducing libraries
### (1) NumPy
- Library for multidimensional arrays and 2D matrices
```
from numpy import * # import all classes from numpy
a = arange(15)
a
b = a.reshape(3,5)
b
b.shape # get the shape (num rows x num columns)
```
- array creation and array operations
```
a = array([1, 2, 3, 4]) # use a list to initialize
a
b = array([[1.1,2,3], [4,5,6]]) # or list of lists
b
```
- One-dimensional arrays are indexed, sliced, and iterated similar to Python lists.
```
a = array([1,2,3,4,5])
a[2]
a[0]
a[2:5] # index 2 through 4
# iterating with loop
for i in a:
print(i)
```
### (2) pandas
- pandas is a Python library for data wrangling and analysis.
- `Dataframe` is a table of entries (like an Excel spreadsheet).
- each column does not need to be the same type
- operations to modify and operate on the table
```
# setup pandas and display
import pandas as pd
# read CSV file
dataset = pd.read_csv("kc_house_data.csv")
# print the first 5 rows of the dataframe
dataset.head(5)
```
- select a column
```
dataset['price'] #dataset.price
```
- query the table: select rows that satisfy specific conditions (constraints)
```
# select houses with more than 10 bedrooms
dataset[dataset.bedrooms > 10]
```
# Ready to start!!!
```
#Import Libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
```
# Dataset
This dataset contains house sale prices for King County, which includes Seattle. It includes homes sold between May 2014 and May 2015.
It's a great dataset for evaluating simple regression models.
```
#Importing DataSet
dataset = pd.read_csv("kc_house_data.csv")
```
19 house features plus the price and the id columns, along with 21613 observations.
1. **id** a notation for a house
2. **date** Date house was sold
3. **price** Price is prediction target
4. **bedrooms** Number of Bedrooms/House
5. **bathrooms** Number of bathrooms/bedrooms
6. **sqft_living** square footage of the home
7. **sqft_lot** square footage of the lot
8. **floors** Total floors (levels) in house
9. **waterfront** House which has a view to a waterfront
10. **view** Has been viewed
11. **condition** How good the condition is ( Overall )
12. **grade** overall grade given to the housing unit, based on King County grading system
13. **sqft_above** square footage of house apart from basement
14. **sqft_basement** square footage of the basement
15. **yr_built** Built Year
16. **yr_renovated** Year when house was renovated
17. **zipcode** zip
18. **lat** Latitude coordinate
19. **long** Longitude coordinate
20. **sqft_living15** Living room area in 2015 (implies some renovations). This might or might not have affected the lot size area
21. **sqft_lot15** Lot size area in 2015 (implies some renovations)
```
dataset.head(5)
price = dataset['price']
price = np.array(price) # transfer the dataframe type to numpy array type
features = dataset.iloc[:, 3:21]
features.head(5)
```
## Select features
You can see there are many features in this dataset, and we could choose any one of them as our predictor variable (regressor), or more than one. In fact, you could try any combination of features as input. For example, we could use **sqft_living** alone to predict the price, or **bathrooms** and **sqft_living** together.
But how do we choose a suitable input variable? A good start is to pick the one most likely to have a linear relationship with price, so let's visualize each candidate first:
```
dataset['price'] = price
```
Run following codes to draw scatter plots to find the relationship of every variable and price.
```
plt.figure(figsize=(20,40))
for i in range(18):
tmpX = np.array(features.iloc[:,i]).reshape(-1, 1)
plt.subplot(10,2,i+1)
plt.scatter(tmpX, price, color= 'red')
plt.xlabel(dataset.keys()[3+i])
```
## Simple linear regression
- Linear regression with one variable.
- Here we use **sqft_living**, which is the square footage of the home, as our variable.
```
space=dataset['sqft_living']
price=dataset['price']
x = np.array(space).reshape(-1, 1)
y = np.array(price)
```
## Split training/test data
- We will select 70% of the data for training, and 30% for testing
- use `model_selection` module
- `train_test_split` - give the percentage for training and testing.
- `StratifiedShuffleSplit` - also preserves the percentage of examples for each class.
```
#Splitting the data into Train and Test
from sklearn.model_selection import train_test_split
# randomly split data into 70% train and 30% test set
xtrain, xtest, ytrain, ytest = train_test_split(x, y, train_size=0.7, test_size=0.3, random_state=4487)
# view train & test data
plt.figure(figsize=(12,5))
plt.subplot(1,2,1) # put two subplots in the same figure
# scatter plot of the training points
plt.scatter(xtrain, ytrain, color= 'red')
plt.title('training data')
plt.subplot(1,2,2)
plt.scatter(xtest, ytest, color= 'blue')
plt.title('testing data')
plt.show()
```
## Learning Process
```
#Fitting simple linear regression to the Training Set
from sklearn.linear_model import LinearRegression
```
## Prediction
```
#Predicting the prices
```
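The fitting and prediction cells above are left blank for the workshop exercise. One possible completion — assuming the fitted model is named `regressor`, as the later plotting cells expect (toy data stands in for the `xtrain`/`ytrain` produced by the split above):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# toy stand-in for the xtrain/ytrain produced by train_test_split above
xtrain = np.array([[1.0], [2.0], [3.0], [4.0]])
ytrain = np.array([2.0, 4.0, 6.0, 8.0])

# fit simple linear regression: learn w and b in y = w*x + b
regressor = LinearRegression()
regressor.fit(xtrain, ytrain)

# predict the price for a new input
pred = regressor.predict(np.array([[5.0]]))
print(regressor.coef_, regressor.intercept_, pred)
```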
## Visualization
```
#Visualizing the training Test Results
plt.scatter(xtrain, ytrain, color= 'red')
plt.plot(xtrain, regressor.predict(xtrain), color = 'blue')
plt.title ("Visuals for Training Dataset")
plt.xlabel("Space")
plt.ylabel("Price")
plt.show()
#Visualizing the Test Results
from sklearn.metrics import mean_squared_error
plt.scatter(xtest, ytest, color= 'red')
plt.plot(xtrain, regressor.predict(xtrain), color = 'blue')
# calculate mean-squared error on the test set
MSE = mean_squared_error(ytest, regressor.predict(xtest))
plt.title("w={:.5f}; b={:.5f}\nMSE={:.5f}".format(float(regressor.coef_), float(regressor.intercept_), MSE))
plt.xlabel("Space")
plt.ylabel("Price")
plt.show()
```
# Multiple linear regression
- Linear regression with more than one variable.
- Here we use **sqft_living** (square footage of the home, as our variable) and **yr_built** (Built Year).
## Split training/test data
- We will select 70% of the data for training, and 30% for testing
- use `model_selection` module
- `train_test_split` - give the percentage for training and testing.
- `StratifiedShuffleSplit` - also preserves the percentage of examples for each class.
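The split cell itself is left for the exercise; following the same convention as the simple case, it might look like this (toy two-feature data as a hypothetical stand-in for the real design matrix):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# toy stand-in: two feature columns (e.g. sqft_living and yr_built) and a price
x = np.column_stack([np.linspace(500, 5000, 10), np.linspace(1950, 2014, 10)])
y = 300.0 * x[:, 0] - 100.0 * x[:, 1]

xtrain, xtest, ytrain, ytest = train_test_split(
    x, y, train_size=0.7, test_size=0.3, random_state=4487)
print(xtrain.shape, xtest.shape)  # (7, 2) (3, 2)
```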
```
from mpl_toolkits import mplot3d # enable 3d plot
# Visualize
plt.figure(figsize=(20,10))
ax = plt.axes(projection='3d')
ax.scatter(xtrain[:,0], xtrain[:,1], ytrain, c=ytrain,cmap='Greens')
ax.scatter(xtest[:,0], xtest[:,1], ytest, c=ytest,cmap='Reds')
ax.view_init(15, 85)
ax.set_title('3d plot of the data');
ax.set_xlabel('space')
ax.set_ylabel('yr_built')
#Fitting simple linear regression to the Training Set
#Predicting the prices
regressor.coef_
regressor.intercept_
#print("y =" + str(regressor.coef_[0]) + "*x1 + " + str(regressor.coef_[1]) + " *x2 + " + str(regressor.intercept_))
```
# Preferential Bayesian Optimization: EI
This notebook demonstrates the use of the Expected Improvement (EI) acquisition function on ordinal (preference) data.
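For reference, `PBO.acquisitions.ei.EI` used below is a project-specific module. The standard closed-form EI for a maximization problem, with a Gaussian posterior N(mu, sigma²) at each candidate, can be sketched with a hypothetical standalone helper:

```python
import math

def _pdf(z):  # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _cdf(z):  # standard normal CDF via erf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, best, xi=0.0):
    # Closed-form EI for maximization: E[max(f - best - xi, 0)]
    if sigma <= 0.0:
        return max(mu - best - xi, 0.0)
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * _cdf(z) + sigma * _pdf(z)

print(expected_improvement(0.0, 1.0, 0.0))  # at mu == best, EI = sigma / sqrt(2*pi)
```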
```
import numpy as np
import gpflow
import tensorflow as tf
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import sys
import os
import pickle
from gpflow.utilities import set_trainable, print_summary
gpflow.config.set_default_summary_fmt("notebook")
sys.path.append(os.path.split(os.path.split(os.path.split(os.getcwd())[0])[0])[0]) # Move 3 levels up directory to import project files as module
import importlib
PBO = importlib.import_module("Top-k-Ranking-Bayesian-Optimization")
gpu_to_use = 0
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
# Restrict TensorFlow to only use the first GPU
try:
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
tf.config.experimental.set_visible_devices(gpus[gpu_to_use], 'GPU')
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPU")
except RuntimeError as e:
# Visible devices must be set before GPUs have been initialized
print(e)
lengthscale = 0.3
lengthscale_prior_alpha = tf.constant(2, dtype=tf.float64)
lengthscale_prior_beta = tf.constant(4, dtype=tf.float64)
objective = PBO.objectives.hartmann3d
objective_low = 0
objective_high = 1.
objective_name = "Hart3"
acquisition_name = "EI"
experiment_name = acquisition_name + "_" + objective_name
num_runs = 10
num_evals = 50
num_samples = 1000
num_choices = 2
input_dims = 3
objective_dim = input_dims # CHANGE 1: require the objective dim
num_maximizers = 20
num_init_prefs = 12 # CHANGE 2: randomly initialize with some preferences
# CHANGE 1: reduce the value of delta to avoid numerical error
# as k(x,x') = sigma^2 * exp( -[(x-x')/l]^2 )
# which could be very small if l is too small
# so we define l relatively by the range of input (objective_high - objective_low)
# It is OK for the total number of observations to exceed the total number of
# possible inputs: because the observations are noisy, repeated observations
# at the same input pair may be required to improve the confidence
num_discrete_per_dim = 20
delta = (objective_high - objective_low) / num_discrete_per_dim
results_dir = os.getcwd() + '/results/' + experiment_name + '/'
try:
# Create target Directory
os.makedirs(results_dir)
print("Directory " , results_dir , " created ")
except FileExistsError:
print("Directory " , results_dir , " already exists")
def get_noisy_observation(X, objective):
f = PBO.objectives.objective_get_f_neg(X, objective)
return PBO.observation_model.gen_observation_from_f(X, f, 1)
def train_and_visualize(X, y, title, lengthscale_init=None, signal_variance_init=None):
lengthscale_prior = tfp.distributions.Gamma(concentration=lengthscale_prior_alpha,
rate=lengthscale_prior_beta)
# Train model with data
# CHANGE 6: use full_gp instead of sparse,
result = PBO.models.learning_fullgp.train_model_fullcov(
X, y,
obj_low=objective_low,
obj_high=objective_high,
lengthscale_init=lengthscale_init,
signal_variance_init=signal_variance_init,
indifference_threshold=0.,
n_sample=1000,
deterministic=True, # only sample f values once, not re-sampling
num_steps=3000)
q_mu = result['q_mu']
q_sqrt = result['q_sqrt']
u = result['u']
inputs = result['inputs']
k = result['kernel']
likelihood = gpflow.likelihoods.Gaussian()
model = PBO.models.learning.init_SVGP_fullcov(q_mu, q_sqrt, u, k, likelihood)
u_mean = q_mu.numpy()
inducing_vars = u.numpy()
return model, inputs, u_mean, inducing_vars
def uniform_grid(input_dims, num_discrete_per_dim, low=0., high=1.):
"""
Returns an array with all possible permutations of discrete values in input_dims number of dimensions.
:param input_dims: int
:param num_discrete_per_dim: int
:param low: float
:param high: float
:return: tensor of shape (num_discrete_per_dim ** input_dims, input_dims)
"""
num_points = num_discrete_per_dim ** input_dims
out = np.zeros([num_points, input_dims])
discrete_points = np.linspace(low, high, num_discrete_per_dim)
for i in range(num_points):
for dim in range(input_dims):
val = num_discrete_per_dim ** (dim)
out[i, dim] = discrete_points[int((i // val) % num_discrete_per_dim)]
return out
```
This function is our main metric for the performance of the acquisition function: the closer the model's best guess to the global optimum, the better. (Since the model is fit to the negated objective, maximizing the posterior corresponds to minimizing the original objective.)
```
def best_guess(model):
"""
Returns a GP model's best guess of the global maximum of f.
"""
# CHANGE 7: use a discrete grid
xx = PBO.models.learning_fullgp.get_all_discrete_inputs(objective_low, objective_high, objective_dim, delta)
res = model.predict_f(xx)[0].numpy()
return xx[np.argmax(res)]
```
Store the results in these arrays:
```
num_data_at_end = int(num_init_prefs + num_evals)
X_results = np.zeros([num_runs, num_data_at_end, num_choices, input_dims])
y_results = np.zeros([num_runs, num_data_at_end, 1, input_dims])
best_guess_results = np.zeros([num_runs, num_evals, input_dims])
```
Create the initial values for each run:
```
np.random.seed(0)
# CHANGE 8: just randomly initialize with some preference observation
init_vals = np.zeros([num_runs, num_init_prefs, num_choices, input_dims])
for run in range(num_runs):
for i in range(num_init_prefs):
init_vals[run,i] = PBO.models.learning_fullgp.get_random_inputs(
objective_low,
objective_high,
objective_dim,
delta,
size=num_choices,
with_replacement=False,
exclude_inputs=None)
```
The following loops carry out the Bayesian optimization algorithm over a number of runs, with a fixed number of evaluations per run.
```
# CHANGE 9: need to store lengthscale and signal_variance from previous iteration to initialize the current iteration
lengthscale_init = None
signal_variance_init = None
for run in range(num_runs): # CHECK IF STARTING RUN IS CORRECT
print("")
print("==================")
print("Beginning run %s" % (run))
X = init_vals[run]
y = get_noisy_observation(X, objective)
model, inputs, u_mean, inducing_vars = train_and_visualize(X, y,
"Run_{}:_Initial_model".format(run))
# save optimized lengthscale and signal variance for next iteration
lengthscale_init = model.kernel.lengthscale.numpy()
signal_variance_init = model.kernel.variance.numpy()
for evaluation in range(num_evals):
print("Beginning evaluation %s" % (evaluation))
# Get incumbent maximizer
input_vals = model.predict_f(inputs)[0].numpy()
maximizer = np.expand_dims(inputs[np.argmax(input_vals)], axis=0)
print("Maximizer:")
print(maximizer)
# Sample possible next input points. In EI, all queries are a pair with the incumbent maximizer as the
# first point and a next input point as the second point
samples = PBO.models.learning_fullgp.get_random_inputs(low=objective_low,
high=objective_high,
dim=objective_dim,
delta=delta,
size=num_samples,
exclude_inputs=maximizer)
# Calculate EI vals
ei_vals = PBO.acquisitions.ei.EI(model, maximizer, samples)
L = np.argsort(np.ravel(-ei_vals))  # n-th element of this (num_samples,) array is the index of the n-th largest element in ei_vals
# Select query that maximizes EI
if np.all(np.equal(samples[L[0]], maximizer)):  # if the point with the highest EI is the maximizer itself, pick the next highest instead
next_idx = L[1]
else:
next_idx = L[0]
next_query = np.zeros((num_choices, input_dims))
next_query[0, :] = maximizer # EI only works in binary choices
next_query[1, :] = samples[next_idx]
print("Evaluation %s: Next query is %s with EI value of %s" % (evaluation, next_query, ei_vals[next_idx]))
X = np.concatenate([X, [next_query]])
# Evaluate objective function
y = np.concatenate([y, get_noisy_observation(np.expand_dims(next_query, axis=0), objective)], axis=0)
print("Evaluation %s: Training model" % (evaluation))
model, inputs, u_mean, inducing_vars = train_and_visualize(X, y,
"Run_{}_Evaluation_{}".format(run, evaluation))
print_summary(model)
# save optimized lengthscale and signal variance for next iteration
lengthscale_init = model.kernel.lengthscale.numpy()
signal_variance_init = model.kernel.variance.numpy()
best_guess_results[run, evaluation, :] = best_guess(model)
# CHANGE 11: log both the estimated minimizer and its objective value
print("Best_guess f({}) = {}".format(
best_guess_results[run, evaluation, :],
objective(best_guess_results[run, evaluation, :])))
# Save model
pickle.dump((X, y, inputs,
model.kernel.variance,
model.kernel.lengthscale,
model.likelihood.variance,
inducing_vars,
model.q_mu,
model.q_sqrt,
maximizer),
open(results_dir + "Model_Run_{}_Evaluation_{}.p".format(run, evaluation), "wb"))
X_results[run] = X
y_results[run] = y
pickle.dump((X_results, y_results, best_guess_results),
open(results_dir + acquisition_name + "_" + objective_name + "_" + "Xybestguess.p", "wb"))
global_min = np.min(objective(PBO.models.learning_fullgp.get_all_discrete_inputs(objective_low, objective_high, objective_dim, delta)))
metric = best_guess_results
ir = objective(metric) - global_min
mean = np.mean(ir, axis=0)
std_dev = np.std(ir, axis=0)
std_err = std_dev / np.sqrt(ir.shape[0])
print("Mean immediate regret at each evaluation averaged across all runs:")
print(mean)
print("Standard error of immediate regret at each evaluation averaged across all runs:")
print(std_err)
with open(results_dir + acquisition_name + "_" + objective_name + "_" + "mean_sem" + ".txt", "w") as text_file:
print("Mean immediate regret at each evaluation averaged across all runs:", file=text_file)
print(mean, file=text_file)
print("Standard error of immediate regret at each evaluation averaged across all runs:", file=text_file)
print(std_err, file=text_file)
pickle.dump((mean, std_err), open(results_dir + acquisition_name + "_" + objective_name + "_" + "mean_sem.p", "wb"))
```
| github_jupyter |
# Not so random...
The success of hedge funds fundamentally boils down to the question of market efficiency. If markets are perfectly efficient, the ability of hedge funds to earn above-market risk-adjusted returns should be, at best, a matter of luck. While devout believers in the Efficient Market Hypothesis (EMH) might justify the notable successes of the Buffetts, Simonses and Griffins of this world through the law of large numbers, the majority of studies indicate prices in most markets to be, at most, weak-form efficient. We can test this for ourselves using a range of methods for detecting serial correlation. Using the statsmodels package, we can apply the Durbin-Watson statistic and the Ljung-Box test to check for serial correlation in a stock. Below, we perform a Ljung-Box test on a year's worth of Apple stock returns. Using this data, we see a strong argument against market efficiency, given the Durbin-Watson statistic shown in our test.
```
import os
import pickle
from functools import reduce
from operator import mul
import pandas as pd
import numpy as np
from statsmodels.regression.linear_model import OLS
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_ljungbox
from sklearn import linear_model
from sklearn.decomposition import PCA
import holoviews as hv
import hvplot
import hvplot.pandas
np.random.seed(42)
hv.extension('bokeh')
# There is a compatibility issue between this library and newer
# versions of pandas; this is a short fix to the problem. If you
# have issues at this chunk, comment it out and you should be fine.
pd.core.common.is_list_like = pd.api.types.is_list_like
import pandas_datareader as pdr
```
```
apple = pdr.robinhood.RobinhoodHistoricalReader(['AAPL'],
retry_count=3,
pause=0.1,
timeout=30,
session=None,
freq=None,
interval='day',
span='year').read().reset_index()
dw = durbin_watson(pd.to_numeric(apple.close_price).pct_change().dropna().values)
print(f'DW-statistic of {dw}')
pd.Series(acorr_ljungbox(pd.to_numeric(apple.close_price).pct_change().dropna().values)[1]).hvplot.line(label="p-values at lags")
```
This falls well outside the critical bounds of the Durbin-Watson test at the 5% level, indicating the presence of first-order serial correlation.
The question then remains: if markets are inefficient, where is this inefficiency? This question has remained at the forefront of research for decades. Fundamentally, investors not only want to be able to quantify sources of return but also want to identify sources of potential portfolio risk. If market returns can be considered white noise, is there some trend or underlying factor which will allow us to identify and understand these risks?
The simplest of these models, Capital Asset Pricing Model (CAPM) developed by Treynor (1961), Treynor (1962), Sharpe (1964), Lintner (1965), Mossin (1966) and Black, Jensen & Scholes (1972), remains at the core of modern financial theory by providing investors with a framework in determining how the expected return of an investment is affected
by its exposure to the systematic risk.
$$ \text{Expected Return} = r_f + \beta(r_m - r_f)$$
Where Expected Return is the expected return of a share in the market, $r_{f}$ is the risk-free rate, $r_{m}$ is the return of the market, and $\beta$ is a coefficient computed using Ordinary Least Squares regression, under the assumption of normally distributed errors.
Under the CAPM, an asset may only earn a higher average return given an increase in exposure to a comprehensive market portfolio, as denoted by $\beta$, which should capture all systematic risk in the market. However, given that the market portfolio, which should exist as the universe of all investable assets, is not identifiable in reality, a market index is used as a proxy. While the application of CAPM is ubiquitous both in practice and in research, numerous papers investigating markets around the world critique its application, citing the emergence of stylized facts, doubts over the existence of a cohesive market portfolio, and practical concerns over market concentration and liquidity.
While this set of notes will not aim to investigate the validity of the CAPM model, we will investigate the Arbitrage Pricing Theory (APT) as a segue into its implications on hedge fund construction, analysis and risk (Ross, 1976). Sadly, as discussed in the lecture recordings, the availability of public, open hedge fund data is limited, and so this module will rely primarily on market data, data on ETFs and well-known academic datasets.
APT is a generalized framework for asset pricing that sets the expected return of an asset as a linear function of various factors, denoted below:
$$ \text{Expected Return} = \beta_{0} + \beta_{1} F_{1} + \dots + \beta_{n} F_{n}$$
While this may appear simple, given your exposure to advanced methods in Statistical Learning, the use of linear models in this application allows for computational stability and inference, which are crucial to many of its extensions.
While a number of behavioral explanations have been investigated in understanding non-randomness in markets, one ongoing area of research has been the use of factor models. Most factor models explore some combination of portfolio fundamentals in trying to analyze sources of non-systematic return. In seminal papers by Banz (1981) and Basu (1983), researchers explore the presence of a size- and value-effect in predicting expected returns. These factors analyze the Market Cap and PE-ratios of companies under the APT framework, including these variables alongside the traditional market returns and risk-free rate.
While research into these anomalies has varied in its findings, suggesting they may be a function of market dynamics at a point in time, studies by Litzenberger & Ramaswamy (1979), Stattman (1980) and Rosenberg (1985) suggest Dividend Yield and Book-to-Market as other significant stylized facts. This research is not limited to American and European markets: in studies around the world, researchers have identified factors like momentum, cashflows, NAV and sector index as relevant to particular markets. Some of the most famous studies in the area of factor models have been the Fama-French 3- and 5-Factor models. These models include market returns, size, book-to-market, operating profitability and investment.
While the presence of these factors, many argue, provides a strong argument for the exploration of statistical modelling in finance, there exist a number of counter-arguments which aim to break down the idea of just trying everything. The first argument raised by most efficient-market believers concerns liquidity risk. While the size effect does indicate a negative correlation between size and expected returns, many smaller stocks are far less liquid on an exchange and, as such, present a risk to investors during times of extreme market failure. Secondly, opponents argue that many of these anomalies are temporal. In the book The Quants, author Scott Patterson details the increasingly large leverage required by many funds towards the end of a particular trading strategy's lifetime, as many new copy-cats enter the strategy. Lastly, simple cost can often limit the ability to act on a particular trade. Fundamentally, if one cannot realistically profit from a market anomaly or market inefficiency, then its ability to be realistically considered an argument in favour of market inefficiency is void.
Additionally, in the case of hedge funds, not only do these strategies need to exceed transaction costs and overcome liquidity risk in the market, but for the investor, trades must justify the cost structure of a hedge fund and the common lockup clause, which many argue presents an implied cost to the investor. While some may argue that active management ensures the pricing efficiency necessary for passive funds to profit, the reality is that, from an investor's point of view, passive funds have on average outperformed active management over a long time horizon.
For students unfamiliar with the research discussed in these notes, I would recommend reading further in your own time. The podcast [Freakonomics Radio](http://freakonomics.com/podcast/stupidest-thing-can-money/) has an interesting show on passive vs active investments. The show interviews Vanguard founder John C. Bogle, who shares a lifetime of knowledge on running a passive fund and its growing acceptance among consumers. I would also recommend a blogpost on [Turing Finance](http://www.turingfinance.com/testing-the-efficient-market-hypothesis-with-r/) about testing market efficiency.
# References
Fama, E. F. (1965a). The behaviour of stock market prices, Journal of Business 38, 34–105.
Fama, E. F. (1965b). Random walks in stock market prices, Financial Analysts Journal, 21, 55–9.
Fama, E. F. (1970). Efficient capital markets, a review of theory and empirical work, Journal of Finance, 25, 383–417.
Fama, E. F. and French, K. R. (1988). Dividend yields and expected stock returns, Journal of Financial Economics, 22(1), 3-25.
Jegadeesh, N. and Titman, S. (1993). Returns to buying winners and selling losers: Implications for stock market efficiency, Journal of Finance, 48, 65-91.
Jensen, M. (1978). Some anomalous evidence regarding Market Efficiency, Journal of Financial Economics, 6, 95 –102.
Lo, A. W. and MacKinlay, C. A. (1988). Stock market prices do not follow random walks, evidence from a simple specification test, Review of Financial Studies, Oxford University Press for Society for Financial Studies, 1(1), 41-66.
Markowitz, H. M. (1952), Portfolio selection, The Journal of Finance, 7 (1), 77-91.
Ross, S. (1976). The arbitrage theory of capital asset pricing, Journal of Economic Theory, 13 (2), 341 – 360.
Sharpe, W. (1964). Capital Asset Prices: A theory of market equilibrium under conditions of risk, The Journal of Finance, 19 (3s), 425 – 442.
| github_jupyter |
# Widgets without writing widgets: interact
The Jupyter widgets library offers tools to create rich graphical controls from your Python code that connect JavaScript elements such as buttons and menus with your Python kernel.
As an example, this interface below consists of a collection of components for simulating binary star orbits, built with Jupyter Widgets:
+ Green: [pythreejs](https://github.com/jupyter-widgets/pythreejs)
+ Blue: [bqplot](https://github.com/bloomberg/bqplot/blob/master/examples/Index.ipynb)
+ Everything else: [ipywidgets](https://github.com/jupyter-widgets/ipywidgets)
+ Serving it up to users during development on [mybinder.org](https://mybinder.org/)

You can [find here](https://github.com/JuanCab/AstroInteractives) the source for this example (including links to binder), created by [Juan Cabanela](http://www.cabanela.com) from the Minnesota State University Physics and Astronomy Department. This short video illustrates the demo:
```
from IPython.display import YouTubeVideo
YouTubeVideo("kbgST0uifvM")
```
But in this notebook, we're going to see first how to walk before we run! For the simplest tasks, `ipywidgets` provides some convenience functions that give you interactive controls for simple parameter exploration while writing barely any new code, and without having to learn much about GUI programming paradigms. The `interact` function (`ipywidgets.interact`) automatically creates user interface (UI) controls for exploring code and data interactively. It is the easiest way to get started using IPython's widgets.
```
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
```
## Basic `interact`
At the most basic level, `interact` autogenerates UI controls for function arguments, and then calls the function with those arguments when you manipulate the controls interactively. To use `interact`, you need to define a function that you want to explore. Here is a function that triples its argument, `x`.
```
def f(x):
return 3*x
```
When you pass this function as the first argument to `interact` along with an integer keyword argument (`x=10`), a slider is generated and bound to the function parameter.
```
interact(f, x=10);
```
When you move the slider, the function is called, and the return value is printed.
If you pass `True` or `False`, `interact` will generate a checkbox:
```
interact(f, x=True);
```
If you pass a string, `interact` will generate a `Text` field.
```
interact(f, x='Hi there!');
```
`interact` can also be used as a decorator. This allows you to define a function and interact with it in a single shot. As this example shows, `interact` also works with functions that have multiple arguments.
```
@interact(x=True, y=1.0)
def g(x, y):
return (x, y)
```
## Fixing arguments using `fixed`
There are times when you may want to explore a function using `interact`, but fix one or more of its arguments to specific values. This can be accomplished by wrapping values with the `fixed` function.
```
def h(p, q):
return (p, q)
```
When we call `interact`, we pass `fixed(20)` for q to hold it fixed at a value of `20`.
```
interact(h, p=5, q=fixed(20));
```
Notice that a slider is only produced for `p` as the value of `q` is fixed.
## Widget abbreviations
When you pass an integer-valued keyword argument of `10` (`x=10`) to `interact`, it generates an integer-valued slider control with a range of `[-10,+3*10]`. In this case, `10` is an *abbreviation* for an actual slider widget:
```python
IntSlider(min=-10,max=30,step=1,value=10)
```
In fact, we can get the same result if we pass this `IntSlider` as the keyword argument for `x`:
```
interact(f, x=widgets.IntSlider(min=-10, max=30, step=1, value=10));
```
This example clarifies how `interact` processes its keyword arguments:
1. If the keyword argument is a `Widget` instance with a `value` attribute, that widget is used. Any widget with a `value` attribute can be used, even custom ones.
2. Otherwise, the value is treated as a *widget abbreviation* that is converted to a widget before it is used.
The following table gives an overview of different widget abbreviations:
<table class="table table-condensed table-bordered">
<tr><td><strong>Keyword argument</strong></td><td><strong>Widget</strong></td></tr>
<tr><td>`True` or `False`</td><td>Checkbox</td></tr>
<tr><td>`'Hi there'`</td><td>Text</td></tr>
<tr><td>`value` or `(min,max)` or `(min,max,step)` if integers are passed</td><td>IntSlider</td></tr>
<tr><td>`value` or `(min,max)` or `(min,max,step)` if floats are passed</td><td>FloatSlider</td></tr>
<tr><td>`['orange','apple']` or `[('one', 1), ('two', 2)]`</td><td>Dropdown</td></tr>
</table>
Note that a dropdown is used if a list or a list of tuples is given (signifying discrete choices), and a slider is used if a tuple is given (signifying a range).
You have seen how the checkbox and textarea widgets work above. Here, more details about the different abbreviations for sliders and dropdowns are given.
If a 2-tuple of integers is passed `(min,max)`, an integer-valued slider is produced with those minimum and maximum values (inclusively). In this case, the default step size of `1` is used.
```
interact(f, x=(0, 4));
```
A `FloatSlider` is generated if any of the values are floating point. The step size can be changed by passing a third element in the tuple.
```
interact(f, x=(0.0, 10.0, 0.5));
```
For both integer and float-valued sliders, you can pick the initial value of the widget by passing a default keyword argument to the underlying Python function. Here we set the initial value of a float slider to `5.5`.
```
@interact(x=(0.0, 20.0, 0.5))
def h(x=5.5):
return x
```
Dropdown menus are constructed by passing a list of strings. In this case, the strings are both used as the names in the dropdown menu UI and passed to the underlying Python function.
```
interact(f, x=['apples','oranges']);
```
If you want a dropdown menu that passes non-string values to the Python function, you can pass a list of tuples of the form `('label', value)`. The first items are the names in the dropdown menu UI and the second items are values that are the arguments passed to the underlying Python function.
```
interact(f, x=[('one', 10), ('two', 20)]);
```
## `interactive`
In addition to `interact`, IPython provides another function, `interactive`, that is useful when you want to reuse the widgets that are produced or access the data that is bound to the UI controls.
Note that unlike `interact`, the return value of the function will not be displayed automatically, but you can display a value inside the function with `IPython.display.display`.
Here is a function that returns the sum of its two arguments and displays them.
```
from IPython.display import display
def f(a, b):
display(a + b)
return a+b
```
Unlike `interact`, `interactive` returns a `Widget` instance rather than immediately displaying the widget.
```
w = interactive(f, a=10, b=20)
```
The widget is an `interactive`, a subclass of `VBox`, which is a container for other widgets.
```
type(w)
```
The children of the `interactive` are two integer-valued sliders and an output widget, produced by the widget abbreviations above.
```
w.children
```
To actually display the widgets, you can use IPython's `display` function.
```
display(w)
```
At this point, the UI controls work just like they would if `interact` had been used. You can manipulate them interactively and the function will be called. However, the widget instance returned by `interactive` also gives you access to the current keyword arguments and return value of the underlying Python function.
Here are the current keyword arguments. If you rerun this cell after manipulating the sliders, the values will have changed.
```
w.kwargs
```
Here is the current return value of the function.
```
w.result
```
## Basic interactive plot
Though the examples so far in this notebook had very basic output, more interesting possibilities are straightforward.
The function below plots a straight line whose slope and intercept are given by its arguments.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
def f(m, b):
plt.figure(2)
x = np.linspace(-10, 10, num=1000)
plt.plot(x, m * x + b)
plt.ylim(-5, 5)
plt.show()
```
The interactive below displays a line whose slope and intercept are set by the sliders. Note that if the variable containing the widget, `interactive_plot`, is the last thing in the cell, it is displayed.
```
interactive_plot = interactive(f, m=(-2.0, 2.0), b=(-3, 3, 0.5))
interactive_plot
```
## Disabling continuous updates
When interacting with long-running functions, or even with short functions whose results take some time to display, realtime feedback is a burden instead of being helpful. You might have noticed the output of some of the widgets above "flickering" as you adjusted the controls. By default, `interact` and `interactive` call the function for every update of the widget's value.
There are two ways to mitigate this. You can either only execute on demand, or restrict execution to mouse release events.
### `interact_manual`
The `interact_manual` function provides a variant of interaction that allows you to restrict execution so it is only done on demand. A button is added to the interact controls that allows you to trigger an execute event.
```
def slow_function(i):
"""
Sleep for 1 second then print the argument
"""
from time import sleep
print('Sleeping...')
sleep(1)
print(i)
interact_manual(slow_function, i=widgets.FloatSlider(min=1e4, max=1e6, step=1e4));
```
You can do the same thing with `interactive` by passing a `dict` as the second argument, as shown below.
```
foo = interactive(slow_function, {'manual': True}, i=widgets.FloatSlider(min=1e4, max=1e6, step=1e4))
foo
```
### `continuous_update`
If you are using slider widgets, you can set the `continuous_update` kwarg to `False`. `continuous_update` is a keyword argument of slider widgets that restricts executions to mouse release events.
In ipywidgets 7, the `Text` and `Textarea` controls also have a `continuous_update` argument.
The first example below provides the `continuous_update` argument when the widget is created.
```
interact(slow_function, i=widgets.FloatSlider(min=1e4, max=1e6, step=5e4, continuous_update=False));
```
# For more information
For more extended examples of `interact` and `interactive`, see [the example in the ipywidgets source repository](https://github.com/jupyter-widgets/ipywidgets/blob/master/docs/source/examples/Index.ipynb).
| github_jupyter |
# CSE445.1 Final Assignment Notebook
**Name**: Ferdous Zeaul Islam
**ID**: 173 1136 042
**Course**: CSE445 (Machine Learning)
**Faculty**: Dr. Sifat Momen (Sfm1)
**Section**: 01
**Semester**: Spring 2021
```
# only need this line in jupyter
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
```
## (a) Read the dataset using panda's dataframe.
```
hearts_df = pd.read_csv('./heart.csv')
hearts_df.info()
```
## (b) Find out the number of instances and the number of features (including the target class) in the dataset.
```
hearts_df.shape
```
There are 303 instances and total 14 features (including the target class).
## (c) Show the first five rows of the dataset
```
hearts_df.head()
```
## (d) Print the number of missing entries (i.e. the number of null values) per feature. If there exists any missing entries, replace the value with that particular feature's mean.
```
# courtesy- <https://chartio.com/resources/tutorials/how-to-check-if-any-value-is-nan-in-a-pandas-dataframe/>
hearts_df.isnull().sum()
```
There exists no missing(null) values.
## (e) Print the number of unique values per feature. If the number of unique values for any feature is less than 10, print those unique feature values.
```
for column in hearts_df.columns:
column_distinctValue_cnt = hearts_df[column].value_counts()
print(column,'has', len(column_distinctValue_cnt),'distinct values')
if len(column_distinctValue_cnt) < 10:
print(column_distinctValue_cnt)
print()
```
## (f) Generate a boxplot that shows the gender-wise age distribution. Show the boxplot for target = 0 as well as target = 1.
```
sns.boxplot(x = 'target', y = 'age', data = hearts_df, hue = 'sex', palette = ['green', 'red'])
plt.show()
```
## (g) Now, generate a boxplot that shows the chestpain-wise age distribution. Show the boxplot for target = 0 as well as target = 1
```
'''
change plot size
courtesy- <https://stackoverflow.com/questions/31594549/how-do-i-change-the-figure-size-for-a-seaborn-plot>
'''
fig, ax = plt.subplots(figsize=(14,8))
sns.boxplot(ax = ax, x = 'target', y = 'age', data = hearts_df, hue = 'cp', palette = ['green', 'red'])
plt.show()
```
## (h) Generate lmplot to show how cholestoral varies with age. Draw separate lmplots for different gender.
```
sns.lmplot(x = 'age', y = 'chol', data = hearts_df, col = 'sex', scatter_kws = {'color':'green'}, ci = False)
plt.show()
```
## (i) Generate a heatmap showing correlation between all features.
```
# help taken from ->
# https://medium.com/@szabo.bibor/how-to-create-a-seaborn-correlation-heatmap-in-python-834c0686b88e
plt.figure(figsize=(15, 8))
corr_matrix = hearts_df.corr()
heatmap = sns.heatmap(
# correlation matrix
corr_matrix,
# two-contrast color, different color for + -
cmap="PiYG",
# color map range
vmin=-1, vmax=1,
# show corr values in the cells
annot=True
)
# set a title
heatmap.set_title('Correlation Heatmap', fontdict={'fontsize':20}, pad=16);
plt.show()
```
## (j) Scale all the features between 0 and 1
```
hearts_df.head()
hearts_df.tail()
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
hearts_df_scaled = pd.DataFrame(scaler.fit_transform(hearts_df), columns=hearts_df.columns)
hearts_df_scaled.head()
hearts_df_scaled.tail()
```
## (k) You are going to predict the “target”. Use 10 fold cross-validation to predict the target. Use the classifiers ZeroR, KNN, SVM, logistic regression and Decision Tree and for each of them, report the accuracy, precision, recall, roc area and f1 score along with the standard deviation of each of them.
```
X = hearts_df_scaled.drop(columns=['target'])
y = hearts_df['target'] # intentionally taken from unscaled df
X.head()
y.head()
from sklearn.model_selection import StratifiedKFold, cross_val_score
# 10-fold cross validation
cv = StratifiedKFold(n_splits = 10, random_state = 42, shuffle = True)
'''
applies 10 fold cross validation to the passed model
returns lists of accuracy, precision, recall, f1 score, AUC of the kfold cross validation
'''
def apply_kfold(model):
    accuracies = cross_val_score(model, X, y, scoring = 'accuracy', cv = cv, n_jobs = -1)
    recalls = cross_val_score(model, X, y, scoring = 'recall', cv = cv, n_jobs = -1)
    precisions = cross_val_score(model, X, y, scoring = 'precision', cv = cv, n_jobs = -1)
    f1s = cross_val_score(model, X, y, scoring = 'f1', cv = cv, n_jobs = -1)
    aucs = cross_val_score(model, X, y, scoring = 'roc_auc', cv = cv, n_jobs = -1)
    # return order matches how the call sites unpack: accuracy, precision, recall, f1, AUC
    return accuracies, precisions, recalls, f1s, aucs
'''
prints mean and standard deviation
of passed list of accuracy, recall, precision, f1 score, AUC
'''
def show_evaluation_metrics(kfold_accuracies, kfold_precisions, kfold_recalls, kfold_f1s, kfold_aucs):
print('Accuracy = ', round(np.mean(kfold_accuracies), 3), '( std =',round(np.std(kfold_accuracies), 3),')')
print('Precision = ', round(np.mean(kfold_precisions), 3), '( std =',round(np.std(kfold_precisions), 3),')')
print('Recall = ', round(np.mean(kfold_recalls), 3), '( std =',round(np.std(kfold_recalls), 3),')')
print('f1-score = ', round(np.mean(kfold_f1s), 3), '( std =',round(np.std(kfold_f1s), 3),')')
print('AUC = ', round(np.mean(kfold_aucs), 3), '( std =',round(np.std(kfold_aucs), 3),')')
```
## Applying ZeroR
```
from sklearn.dummy import DummyClassifier
# ZeroR classifier
zeroR_model = DummyClassifier(strategy = 'most_frequent', random_state = 42)
zeroR_accuracies, zeroR_precisions, zeroR_recalls, zeroR_f1s, zeroR_aucs = apply_kfold(zeroR_model)
show_evaluation_metrics(zeroR_accuracies, zeroR_precisions, zeroR_recalls, zeroR_f1s, zeroR_aucs)
```
## Applying KNN
```
from sklearn.neighbors import KNeighborsClassifier
```
#### Finding the best(highest accuracy) hyper-parameter values using GridSearchCV
Testing for n_neighbors from 1 to 49 and p values from 1 to 9.
```
'''
courtesy-
<https://towardsdatascience.com/building-a-k-nearest-neighbors-k-nn-model-with-scikit-learn-51209555453a>
<https://towardsdatascience.com/gridsearchcv-for-beginners-db48a90114ee>
'''
from sklearn.model_selection import GridSearchCV
# create a dictionary of parameters we want to search
param_grid = {'n_neighbors': np.arange(1, 50), 'p':np.arange(1, 10)}
# use gridsearch to check all values for n_neighbors
knn_gscv_model = GridSearchCV(KNeighborsClassifier(metric='minkowski'), param_grid, scoring='accuracy', cv=cv)
# fit model to data
knn_gscv_model.fit(X, y)
knn_gscv_model.best_params_
```
So, GridSearchCV found n_neighbors=13, p=1 gives best accuracy.
```
model = KNeighborsClassifier(n_neighbors=13, metric='minkowski', p=1)
knn_accuracies, knn_precisions, knn_recalls, knn_f1s, knn_aucs = apply_kfold(model)
show_evaluation_metrics(knn_accuracies, knn_precisions, knn_recalls, knn_f1s, knn_aucs)
```
## Applying Decision Tree
```
from sklearn.tree import DecisionTreeClassifier
```
#### Finding the best(highest accuracy) hyper-parameter values using GridSearchCV
Testing for max_depth from 1 to 49 and criterion values 'gini' and 'entropy'
```
# create a dictionary of all parameter values we want to exhaustively search
param_grid = {'max_depth': np.arange(1, 50), 'criterion': ['gini', 'entropy']}
# use gridsearch to check all values in param_grid
decisionTree_gscv_model = GridSearchCV(DecisionTreeClassifier(random_state=42), param_grid, scoring='accuracy', cv=cv)
# fit model to data
decisionTree_gscv_model.fit(X, y)
decisionTree_gscv_model.best_params_
```
So, GridSearchCV found max_depth=3, criterion='gini' gives best accuracy.
```
model = DecisionTreeClassifier(max_depth=3, criterion='gini', random_state=42)
dTree_accuracies, dTree_precisions, dTree_recalls, dTree_f1s, dTree_aucs = apply_kfold(model)
show_evaluation_metrics(dTree_accuracies, dTree_precisions, dTree_recalls, dTree_f1s, dTree_aucs)
```
## Applying SVM
```
from sklearn.svm import SVC
```
#### Finding the best(highest accuracy) hyper-parameter values using GridSearchCV
Testing for C over integers 1 to 49 and floats [0.1, 0.2, 0.3,....,0.9] with kernel='linear'
```
# create a dictionary of all parameter values we want to exhaustively search
C_vals = []
for i in range (1, 10):
C_vals.append(round(i*0.1, 2))
for i in range (1, 50):
C_vals.append(i)
param_grid = {'C': C_vals}
# use gridsearch to check all values in param_grid
linearSVM_gscv_model = GridSearchCV(SVC(kernel='linear'), param_grid, scoring='accuracy', cv=cv)
# fit model to data
linearSVM_gscv_model.fit(X, y)
linearSVM_gscv_model.best_params_
```
So, GridSearchCV found C=14 gives best accuracy.
```
model = SVC(kernel='linear', C=14)
linearSVM_accuracies, linearSVM_precisions, linearSVM_recalls, linearSVM_f1s, linearSVM_aucs = apply_kfold(model)
show_evaluation_metrics(linearSVM_accuracies, linearSVM_precisions, linearSVM_recalls, linearSVM_f1s, linearSVM_aucs)
```
## Apply Logistic Regression
```
from sklearn.linear_model import LogisticRegression
```
#### Finding the best(highest accuracy) hyper-parameter values using GridSearchCV
Testing for,
C value: integers 1 to 49 and floats [0.1, 0.2, 0.3,..., 0.9]
solver: ‘newton-cg’, ‘sag’, ‘lbfgs’
```
# create a dictionary of all parameter values we want to exhaustively search
param_grid = {'C': C_vals, 'solver':['newton-cg', 'sag', 'lbfgs']} # C_vals are same as linearSVM
# use gridsearch to check all values in param_grid
logisticRegression_gscv_model = GridSearchCV(LogisticRegression(random_state=42),
param_grid, scoring='accuracy', cv=cv)
# fit model to data
logisticRegression_gscv_model.fit(X, y)
logisticRegression_gscv_model.best_params_
```
So, GridSearchCV found C=25 and solver='newton-cg' gives best accuracy.
```
model = LogisticRegression(C=25, solver='newton-cg', random_state=42)
logisticRegression_accuracy, logisticRegression_precision, logisticRegression_recall, logisticRegression_f1, logisticRegression_auc = apply_kfold(model)
show_evaluation_metrics(logisticRegression_accuracy, logisticRegression_precision,
logisticRegression_recall, logisticRegression_f1, logisticRegression_auc)
```
## Performance Comparison
| | Accuracy | Precision | Recall | F1-score | AUC |
|---------------------|----------|-----------|--------|----------|-------|
| ZeroR | 0.545 | 1 | 0.545 | 0.705 | 0.5 |
| KNN | 0.848 | 0.904 | **0.836** | 0.866 | **0.908** |
| Decision Tree | 0.822 | 0.879 | 0.812 | 0.843 | 0.85 |
| **<u>Linear SVM</u>** | **0.851** | **0.915** | 0.831 | **0.87** | 0.898 |
| Logistic Regression | 0.838 | 0.903 | 0.82 | 0.859 | 0.895 |
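A summary table like the one above can also be generated from the averaged metrics rather than typed by hand; a minimal sketch (the values are copied from the table above, the helper logic is ours):

```python
# Build a markdown comparison table from per-model metric averages.
# The numbers below are the summary values from the table above.
metrics = {
    "ZeroR":               (0.545, 1.0,   0.545, 0.705, 0.5),
    "KNN":                 (0.848, 0.904, 0.836, 0.866, 0.908),
    "Decision Tree":       (0.822, 0.879, 0.812, 0.843, 0.85),
    "Linear SVM":          (0.851, 0.915, 0.831, 0.87,  0.898),
    "Logistic Regression": (0.838, 0.903, 0.82,  0.859, 0.895),
}

rows = ["| Model | Accuracy | Precision | Recall | F1-score | AUC |",
        "|---|---|---|---|---|---|"]
for model, vals in metrics.items():
    rows.append("| " + model + " | " + " | ".join(str(v) for v in vals) + " |")
table = "\n".join(rows)
print(table)
```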
# Chocolate Database - Data Munging Notebook
### Sources: USDA Food Composition Website, CSV files, Twitter
```
# Importing relevant libraries
import os
from bs4 import BeautifulSoup
import urllib3
import pandas as pd
import re
from itertools import repeat
import csv
import numpy as np
import time
from tweepy import API
from tweepy import Cursor
from tweepy import TweepError
from tweepy import OAuthHandler
# importing Twitter API keys from file
import t_credentials as twitter_credentials
```
## Part I - Scraping USDA Food Composition Database Website
```
"""
Input parameters for the 3 part run.
data_folder - mentions the folder location where data to and from is accessed
csvfile - name of the merged csv file containing all the parsed data
url_used - URL of the website scraped
brands_regex - RegEx to search relevant chocolate brands
@Author: Dhawal Priyadarshi
@Created On: Jan 2019
"""
#*********** User Input Parameters ***********
# Path to data folder
# data_folder = os.getcwd() + "\\data\\" # for Windows
data_folder = os.getcwd() + '/data/' # for Mac and Linux
# File for data storage
csvfile="chocolates_master_file.csv"
# URL of the USDA website (chocolate lookup)
url_used ="https://ndb.nal.usda.gov/ndb/search/list?fgcd=&manu=&lfacet=&count=&max=25&sort=default&\
qlookup=chocolate&offset=0&format=Full&new=&measureby=&ds=&order=asc&qt=&qp=&qa=&qn=&q=&ing="
# Brands search RegEx
brands_regex = r"ghirardelli*|lindt*|mondelez*|mars*|hersheys*|taza*"
#**********************************************
print("Parameters are set!")
"""
Extracts data-rows from USDA website on even and odd positions for each page out of the 511 pages and appends
the data to a list, which is then written into the csv file row by row
@Author: Mansi Nagraj
@Created On: Jan 2019
"""
# Setting run variables
filepath = data_folder + csvfile
offset=0
myWebData = []
http = urllib3.PoolManager()
urllib3.disable_warnings() # disable SSL warnings
try:
print("Writing file to", filepath)
with open(filepath, "a") as output:
writer = csv.writer(output, dialect='excel')
# Writing header for the csv file:
writer.writerow(["Db","Ndb Id","FoodDescription","Manufacturer"])
for i in range(1,511):
            new_url = url_used.replace('offset=0', 'offset=' + str(offset))
            offset = offset + 25
req = http.request('GET', new_url)
soup = BeautifulSoup(req.data, 'html.parser')
even_titles = soup.find_all('tr', {"class":"even"})
odd_titles = soup.find_all('tr', {"class":"odd"})
# Collecting data for even-numbered rows:
for even_title in even_titles:
cols= even_title.find_all('td')
myWebData = []
for col in cols:
row_data =col.get_text()
                    evenRowData = row_data.strip()
                    myWebData.append(evenRowData)
writer.writerow(myWebData)
# Collecting data for odd-numbered rows:
for odd_title in odd_titles:
cols= odd_title.find_all('td')
myWebData = []
for col in cols:
row_data =col.get_text()
oddRowData= row_data.strip()
myWebData.append(oddRowData)
writer.writerow(myWebData)
print("Scraping completed!")
except BaseException as e:
print("Base Exception: %s" % str(e))
# Displaying the scraped dataset
sitedf = pd.read_csv(filepath)
sitedf
"""
Subsetting the data using user-defined RegEx
@Author: Dhawal Priyadarshi, Mansi Nagraj
@Created On: Jan 2019
"""
all_match = list(map(re.search, repeat(brands_regex\
, len(sitedf['Manufacturer'])), sitedf['Manufacturer'] \
, repeat(re.IGNORECASE, len(sitedf['Manufacturer']))))
s = np.bitwise_not(pd.isnull(all_match))
sitedf_filtered = sitedf[s]
sitedf_filtered.head(15)
# Randomly selecting 20 rows from the filtered data-frame
# final_sample = sitedf_filtered.sample(20)
# Using fixed Ndb IDs for the purpose of demonstration (based on the downloaded csv files in "data" folder)
filterList = ['45318621','45145291','45318334','45053347','45148117','45376125','45375537','45318454','45369935'\
,'45375781','45375900','45158103','45153224','45236273','45375862','45331173','45143068','45004850'\
,'45158934','45208905']
final_sample = sitedf_filtered[sitedf_filtered['Ndb Id'].isin(filterList)]
final_sample
```
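The pagination above rewrites the offset inside the URL string; an alternative that rebuilds the query string with the standard library is more robust to changes in the URL layout (a sketch — `paged_url` is our name and the shortened `base` URL is illustrative):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qs, urlencode

def paged_url(url, offset):
    """Return `url` with its `offset` query parameter set to the given value."""
    scheme, netloc, path, query, frag = urlsplit(url)
    params = parse_qs(query, keep_blank_values=True)
    params['offset'] = [str(offset)]
    return urlunsplit((scheme, netloc, path, urlencode(params, doseq=True), frag))

base = "https://ndb.nal.usda.gov/ndb/search/list?qlookup=chocolate&offset=0&max=25"
print(paged_url(base, 50))  # every other parameter is left unchanged
```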
## Part II - Reading data from CSV files
```
def NutrientFunction(fn):
"""
Function to extract the Nutrient data from the different csv files and return parsed dataset
@Author: Mansi Nagraj
@Created On: Jan 2019
"""
rows=[]
inp=[]
filename = data_folder + str(fn)+'.csv'
with open(filename , newline='') as csvfile:
inputfile = csv.reader(csvfile)
for row in inputfile:
inp.append(row)
    mylist = [8, 11, 16, 17, 19]  # row positions of the target nutrient rows in each file
nutrientlist = []
for n in mylist:
nut = inp[n][:6]
nutrientlist.append(nut)
nutrientdf=pd.DataFrame(nutrientlist,columns=["Nutrient","Unit","DataPoint","StdError","Weight","Value"])
nutrientdf['Ndb Id']= fn
return nutrientdf
# Gathering the parsed data from each CSV file into one dataframe
all_results = pd.DataFrame(columns=["Nutrient","Unit","Value","Ndb Id"])
for n in final_sample['Ndb Id']:
results = NutrientFunction(n)
all_results = all_results.append(results)
all_results.head(15)
# Merging scraped and file based data into one dataframe
# Branded Food DB
bfdb = pd.concat([final_sample.set_index('Ndb Id'),all_results.set_index('Ndb Id')], axis=1, join='inner')
bfdb
```
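The `pd.concat(..., axis=1, join='inner')` step above merges on the shared `Ndb Id` index and keeps only the IDs present in both frames; a toy illustration:

```python
import pandas as pd

left = pd.DataFrame({'Ndb Id': [1, 2, 3], 'Manufacturer': ['A', 'B', 'C']})
right = pd.DataFrame({'Ndb Id': [2, 3, 4], 'Nutrient': ['Sugar', 'Fat', 'Iron']})

# join='inner' on the shared index keeps only Ndb Ids present in both frames
merged = pd.concat([left.set_index('Ndb Id'), right.set_index('Ndb Id')],
                   axis=1, join='inner')
print(merged)  # rows for Ndb Id 2 and 3 only
```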
# Part III - Munging using Twitter API
```
"""
Query creator for Twitter API. Creates a query column in the bfdb dataset.
@Author: Dhawal Priyadarshi
@Created On: Jan 2019
"""
bfdb['Query'] = bfdb.apply(lambda row: row['FoodDescription'].split(",")[0], axis=1)
bfdb
"""
Twitter querying classes.
@Author: Dhawal Priyadarshi
@Created On: Jan 2019
@Credits: VP Russo: https://github.com/vprusso/youtube_tutorials/tree/master/twitter_python
"""
# # # # TWITTER AUTHENTICATOR CLASS# # # #
class TwitterAuthenticator():
def authenticate_twitter_app(self):
"""
Twitter API authenticator function
@Author: VP Russo: https://github.com/vprusso/youtube_tutorials/tree/master/twitter_python
"""
auth = OAuthHandler(twitter_credentials.CONSUMER_KEY, twitter_credentials.CONSUMER_SECRET)
auth.set_access_token(twitter_credentials.ACCESS_TOKEN, twitter_credentials.ACCESS_TOKEN_SECRET)
return auth
# # # # TWEET QUERYING CLASS # # # #
class TweetQuery():
"""
This class runs search queries using Tweepy API
"""
def __init__(self):
self.twitter_authenticator = TwitterAuthenticator()
"""
Takes filename, query string, and optional count of number of tweets.
Appends the tweets to a given file name
@Author: Dhawal Priyadarshi
@Credits: VPRusso: https://github.com/vprusso/youtube_tutorials/tree/master/twitter_python
@Credits: VickyQian: https://gist.github.com/vickyqian/f70e9ab3910c7c290d9d715491cde44c
"""
def search(self, ndbid, query, count = 15):
auth = self.twitter_authenticator.authenticate_twitter_app()
api = API(auth)
tweet_df = pd.DataFrame(columns=['Ndb Id','Tweet_Created','Tweet_Text'])
try:
for tweet in Cursor(api.search, q = query, count = count, tweet_mode = 'extended', lang = 'en').items():
temp_dict = {
'Ndb Id': ndbid,
'Tweet_Created': tweet.created_at,
'Tweet_Text': tweet.full_text
}
tweet_df = tweet_df.append(temp_dict, ignore_index=True)
return tweet_df
except TweepError as e:
print(e.response.text, "Going to sleep for 15 mins")
time.sleep(60 * 15)
return None
print("Classes defined!")
# Creating data frame for queries
queries = pd.DataFrame(bfdb['Query'])
queries = queries.drop_duplicates()
queries
# Hitting the Twitter API with the query strings in the Query dataframe
tweets_all = pd.DataFrame(columns=['Ndb Id','Tweet_Created','Tweet_Text'])
max_query_count = 100 # putting a hard limit to number of requests to avoid API rate_limit error
t_query = TweetQuery()
for idx, val in enumerate(queries.index.values):
print("Iteration:", idx, val)
    if idx <= max_query_count:
query = queries.loc[val,'Query'] # val is the index/NDB ID
ndbid = val
tweets = t_query.search(ndbid, query, count = 10)
tweets_all = tweets_all.append(tweets, ignore_index=True)
tweets_all
# Storing as CSV and displaying the parsed tweets table
tweets_all.to_csv(data_folder + 'tweets_parsed.csv')
tweets_all
```
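The `TweepError` branch above sleeps once and returns `None`, dropping the query. A generic retry helper (pure Python, names ours) shows one way to make such calls more resilient — a sketch, not part of the original notebook:

```python
import time

def with_retries(fn, attempts=3, wait_seconds=1, exceptions=(Exception,)):
    """Call fn(); on failure, sleep and retry, re-raising after the last attempt."""
    for attempt in range(attempts):
        try:
            return fn()
        except exceptions:
            if attempt == attempts - 1:
                raise
            time.sleep(wait_seconds)

# Example: a flaky callable that fails twice before succeeding.
calls = {'n': 0}
def flaky():
    calls['n'] += 1
    if calls['n'] < 3:
        raise RuntimeError('rate limited')
    return 'ok'

print(with_retries(flaky, attempts=5, wait_seconds=0))  # prints "ok"
```

In the notebook this would wrap the `Cursor(...).items()` loop, with `exceptions=(TweepError,)` and a 15-minute wait.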
```
#%matplotlib notebook
import os
import sys
sys.path.append(f'{os.environ["HOME"]}/Projects/planckClusters/catalogs')
from load_catalogs import load_PSZcatalog
import numpy as np
import subprocess
import aplpy
import matplotlib.pyplot as plt
import matplotlib.patheffects as pe
from astropy.table import Table
from astropy.io.fits import getheader
from astropy.convolution import convolve
from astropy.convolution import Gaussian2DKernel
from tqdm import tqdm_notebook
# parallel processor
from utilities import parallel_process
import warnings
from astropy.utils.exceptions import AstropyWarning
warnings.simplefilter('ignore', category=AstropyWarning)
from astropy import log
log.setLevel('WARN')
def cp_results(outpath):
print('Copying files...')
os.system(f'find {outpath}/PSZ* -name "*vtp.png" -exec cp -t {outpath}/pngs/images/ ' '{} \;')
def show_sources(name, outpath, zoom=False, window=7):
# check for files
# check for detections
if os.path.isfile(f'{outpath}/{name}/{name}_vtp.detect'):
srcs = f'{outpath}/{name}/{name}_vtp.detect'
else:
pass # make the images even if we don't have any detections
# events image
if os.path.isfile(f'{outpath}/{name}/{name}_img_50-600.fits'):
evnts = f'{outpath}/{name}/{name}_img_50-600.fits'
else:
return
    # check for optical imaging -- DES first, then DECaLS, SDSS, PS1
if os.path.isfile(f'{outpath}/{name}/{name}_DESstack_r.fits'):
survey = 'DES'
optimage = True
img = f'{outpath}/{name}/{name}_DESstack_r.fits'
elif os.path.isfile(f'{outpath}/{name}/{name}_DECaLSstack_r.fits'):
survey = 'DECaLS'
optimage = True
img = f'{outpath}/{name}/{name}_DECaLSstack_r.fits'
elif os.path.isfile(f'{outpath}/{name}/{name}_SDSSstack_i.fits'):
survey = 'SDSS'
optimage = True
img = f'{outpath}/{name}/{name}_SDSSstack_i.fits'
elif os.path.isfile(f'{outpath}/{name}/{name}_PS1stack_i.fits'):
survey = 'PS1'
optimage = True
img = f'{outpath}/{name}/{name}_PS1stack_i.fits'
else:
optimage = False
info = Table.read(f'{outpath}/{name}/{name}.info', format='ascii.fast_csv')
# show the figure
gc = aplpy.FITSFigure(evnts, figsize=(10, 10))
gc.show_grayscale(vmin=0, pmax=100, stretch='linear', interpolation='none')
    # recenter on the PSZ position, and make an n' x n' window
if zoom:
gc.recenter(info['RA'], info['DEC'], window / 60)
# now we need to read the individual detections
# detects = Table.read(srcs, hdu=1)
# add all the sources -- the float is the pixscale in deg
# r1 = detects['R'].data[:,0] * 6.548089E-04
# r2 = detects['R'].data[:,1] * 6.548089E-04
# gc.show_ellipses(detects['RA'], detects['DEC'], r1, r2, detects['ROTANG'],
# coords_frame='world', edgecolor='cyan')
try:
gc.show_regions(f'{outpath}/{name}/{name}_vtp.reg')
except FileNotFoundError:
pass # make the images even if there aren't any detections
# add PSZ info and circles
gc.show_circles(info['RA'], info['DEC'], 2 / 60,
linestyle='--', edgecolor='#e24a33', facecolor='none',
path_effects=[pe.Stroke(linewidth=1.2, foreground='white'),
pe.Normal()])
gc.show_circles(info['RA'], info['DEC'], 5 / 60,
linestyle='-', edgecolor='#e24a33', facecolor='none',
path_effects=[pe.Stroke(linewidth=1.2, foreground='white'),
pe.Normal()])
gc.show_markers(info['RA'], info['DEC'],
marker='*', s=150, layer='psz', edgecolor='#e24a33',
path_effects=[pe.Stroke(linewidth=1.2,
foreground='white'), pe.Normal()])
# write the exposure time
exp_time = getheader(evnts)['EXPOSURE']
text = f'exp time: {exp_time:.2f}s'
xo, yo = (0.05, 0.05)
gc.add_label(xo, yo, text, relative=True, fontsize=18, color='white', horizontalalignment='left')
# write redshift
ztext = f'z: {info["REDSHIFT"][0]:.3f}'
gc.add_label(xo, yo + 0.03, ztext, relative=True, fontsize=18, color='white', horizontalalignment='left')
# write legend
xo, yo = (0.95, 0.05)
gc.add_label(xo, yo, 'Extended', relative=True, fontsize=18, color='magenta',
horizontalalignment='right')
gc.add_label(xo, yo + 0.03, 'P-Source', relative=True, fontsize=18, color='yellow',
horizontalalignment='right')
if zoom:
gc.save(f'{outpath}/{name}/{name}_XRT_vtp_zoom.png', dpi=90)
else:
gc.save(f'{outpath}/{name}/{name}_XRT_vtp.png', dpi=90)
gc.close()
### optical imaging ###
if optimage:
# make sure the links aren't broken
if os.path.exists(f'{outpath}/{name}/{name}_{survey}stack.jpg'):
ending = 'stack.jpg'
elif os.path.exists(f'{outpath}/{name}/{name}_{survey}stack_irg.tiff'):
ending = 'stack_irg.tiff'
else:
return
# show the figure
gc = aplpy.FITSFigure(img, figsize=(10, 10))
try:
gc.show_rgb(f'{outpath}/{name}/{name}_{survey}{ending}')
except FileNotFoundError:
gc.show_grayscale(stretch='arcsinh', pmin=1, pmax=98)
gc.set_theme('publication')
    # recenter on the PSZ position, and make an n' x n' window
if zoom:
gc.recenter(info['RA'], info['DEC'], window / 60)
#gc.set_tick_labels_format(xformat='hh:mm:ss', yformat='dd:mm')
#gc.set_tick_labels_size('small')
try:
gc.show_regions(f'{outpath}/{name}/{name}_vtp.reg')
except FileNotFoundError:
pass # make the images even if there aren't any detections
# add PSZ info and circles
gc.show_circles(info['RA'], info['DEC'], 2 / 60,
linestyle='--', edgecolor='#e24a33', facecolor='none',
path_effects=[pe.Stroke(linewidth=1.2, foreground='white'),
pe.Normal()])
gc.show_circles(info['RA'], info['DEC'], 5 / 60,
linestyle='-', edgecolor='#e24a33', facecolor='none',
path_effects=[pe.Stroke(linewidth=1.2, foreground='white'),
pe.Normal()])
gc.show_markers(info['RA'], info['DEC'],
marker='*', s=150, layer='psz', edgecolor='#e24a33',
path_effects=[pe.Stroke(linewidth=1.2,
foreground='white'), pe.Normal()])
xo, yo = (0.05, 0.05)
# write the exposure time
gc.add_label(xo, yo, text, relative=True, fontsize=18, color='white', horizontalalignment='left')
# write redshift
gc.add_label(xo, yo + 0.03, ztext, relative=True, fontsize=18, color='white', horizontalalignment='left')
# write legend
xo, yo = (0.95, 0.05)
gc.add_label(xo, yo, 'Extended', relative=True, fontsize=18, color='magenta',
horizontalalignment='right')
gc.add_label(xo, yo + 0.03, 'P-Source', relative=True, fontsize=18, color='yellow',
horizontalalignment='right')
if zoom:
gc.save(f'{outpath}/{name}/{name}_OP_vtp_zoom.png', dpi=90)
else:
gc.save(f'{outpath}/{name}/{name}_OP_vtp.png', dpi=90)
gc.close()
return
# get file data
data = load_PSZcatalog()
data = data.sort_values('NAME')
outpath = './data'
arr = [{'name':n.replace(' ', '_'), 'outpath':outpath} for n in data['NAME']]
parallel_process(arr, show_sources, use_kwargs=True, n_jobs=6)
arr = [{'name':n.replace(' ', '_'), 'outpath':outpath, 'zoom':True} for n in data['NAME']]
parallel_process(arr, show_sources, use_kwargs=True, n_jobs=6)
cp_results(outpath)
# outpath = './data_full'
# name = 'PSZ2_G287.96-32.99'
# show_sources(name, outpath)
```
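`parallel_process` here is a project utility (imported from `utilities`); a similar kwargs-based fan-out can be sketched with the standard library — a rough stand-in, not the project's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_parallel(func, kwargs_list, n_jobs=6):
    """Apply func(**kwargs) over a list of kwarg dicts, n_jobs at a time."""
    results = []
    with ThreadPoolExecutor(max_workers=n_jobs) as pool:
        futures = [pool.submit(func, **kw) for kw in kwargs_list]
        for fut in as_completed(futures):
            results.append(fut.result())
    return results

# Toy stand-in for show_sources(name=..., outpath=...)
def fake_task(name, outpath):
    return f'{outpath}/{name}'

arr = [{'name': f'PSZ_{i}', 'outpath': './data'} for i in range(4)]
out = run_parallel(fake_task, arr, n_jobs=2)
print(sorted(out))
```

For CPU-bound plotting work a `ProcessPoolExecutor` would be the closer match to `n_jobs=6`.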
Content under a Creative Commons BY 4.0 license and code under an MIT license. © Juan Gómez and Nicolás Guarín-Zapata 2020. This material is part of the course Computational Modeling (Modelación computacional) in the Civil Engineering program at Universidad EAFIT.
# Project: Design of a trapezoidal concrete dam
## Introduction
An optimal solution to a (civil) engineering design problem yields a product that is safe, functional, and economical. For example, a structure is expected to safely withstand the external loads it will be subjected to during its service life, without exhibiting excessive displacements that could cause discomfort or a sense of insecurity among its occupants.
In the capstone project formulated in this document, students must apply concepts and skills from **Computational Modeling, Continuum Mechanics, and Solid Mechanics** to solve the design problem of a gravity dam. The problem is a challenge, since students must find the optimal solution under a set of constraints.
## Teaching-learning strategy
This project follows the **Project-Based Learning** approach, in which students are exposed to a typical engineering problem, possibly without a unique solution and sometimes ill-posed. This context allows students to learn actively by engaging with a "real" problem.
The specific topic of this project is the design of a concrete dam under constraints imposed by the material strengths. The initial design is a dam with a trapezoidal cross-section.
## Problem definition
To meet a country's energy demands, a concrete dam with a trapezoidal cross-section must be designed (see figure). Given the flow available from the water source, the dam must be at least $100 \text{ m}$ tall, and it must be designed to offer the highest benefit/cost ratio. This ratio is defined as
$$R=\frac {E_w}C$$
where $E_w$ is the generated power (measured in megawatts) and $C$ is the total cost (in dollars). Keep in mind that although the generated power increases with the water level, generation efficiency is not ideal at low heights. The generation function is
$$E_w = 10000 H_p\left(1 - e^{-H_p/100}\right)\, $$
where $H_p$ is the height of the dam. Accordingly, the best design is the one that generates the most power with the least volume of concrete and, therefore, at the lowest cost.
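As a quick numerical check, the generation function above can be evaluated directly; a minimal sketch (the helper names, and the use of the trapezoid cross-section area as the concrete-volume proxy in the cost, are our assumptions based on the conditions listed below):

```python
import math

def generated_power(H_p):
    """E_w = 10000 * H_p * (1 - exp(-H_p / 100)), per the formula above."""
    return 10000 * H_p * (1 - math.exp(-H_p / 100))

def benefit_cost_ratio(H_p, A_p, A_c, usd_per_m3=1000):
    """R = E_w / C, counting only the concrete cost (an assumption) and
    using the trapezoid cross-section area as the volume proxy V_c = 1.0 * A_s."""
    area = 0.5 * (A_p + A_c) * H_p   # trapezoid area: mean width times height
    cost = usd_per_m3 * area         # concrete cost only, in USD
    return generated_power(H_p) / cost

print(round(generated_power(100), 1))  # 632120.6
```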
The following conditions must be taken into account in the design of the dam.
* A cubic meter of concrete of the type assumed as the initial material costs 1000 USD. To compute the volume of concrete use the expression $V_c = 1.0\, A_s$, where $A_s$ is the surface area of the dam.
* Each megapascal of increase in the compressive strength of the concrete costs 100 USD.
* Each megapascal of increase in the tensile strength of the concrete costs 500 USD.
* Each megapascal of increase in the shear strength of the concrete costs 300 USD.
The base material for the dam's concrete has the following properties:
* Elastic modulus: 50 GPa.
* Poisson's ratio: 0.20.
* Compressive strength: 60 MPa.
* Tensile strength: 10 MPa.
* Shear strength: 15 MPa.
The dam will be built on a rock basement (basalt, assumed elastic and of infinite extent) with the following properties:
* Elastic modulus: 60 GPa.
* Poisson's ratio: 0.20.
It is further assumed that:
* The dam will be subjected to the hydrostatic pressure exerted by a fluid of specific weight $\gamma = 9.8 \text{ kN/m}^3$, given by:
$$p = \gamma z$$
where $z$ is the distance from the free surface of the fluid, assumed to be at the same elevation as the crest of the dam.
* The analysis program assumes plane-strain conditions.
* Body forces corresponding to the self-weight of the dam may be considered.
* Note that in the real case the soil deposit can be considered infinite relative to the dimensions of the dam; however, in the finite element model this deposit must be truncated, which may introduce errors in the solution.
<center>
<img src="img/presa.svg"
alt="Diagram of the dam."
style="width:400px">
</center>
### Model dimensions
The model dimensions are as follows:
* $H_p$: height of the dam with respect to the top level of the soil deposit.
* $A_p$: width of the foot of the dam.
* $A_c$: width of the crest of the dam.
The parameters of the rectangle representing the soil are not design parameters, but their values can affect the accuracy of the results. They are defined as:
* $H_s$: depth of the soil deposit.
* $D_l$: distance to the left boundary, measured from the foot of the dam.
* $D_r$: distance to the right boundary, measured from the foot of the dam.
<center>
<img src="img/presa_dim.svg"
alt="Diagram of the dam."
style="width:400px">
</center>
## Deliverables
Each team must submit:
1. A written report in PDF format, under the terms indicated by the course instructor, containing at least the following sections:
    - Introduction;
    - Literature review;
    - Methodology;
    - Results;
    - Conclusions; and
    - References.
2. A Jupyter notebook that includes and explains all the analyses used, directly (or indirectly through imported modules), to reach the solution. This notebook must run without errors when executed.
Additionally, the solutions obtained must be verified. This can be done through:
1. equilibrium checks;
2. comparisons against analytical solutions; or
3. visualizations of different kinds for the dam.
## Tools for the solution
The stress analysis must be carried out with the finite element program [SolidsPy](https://solidspy.readthedocs.io/en/latest/readme.html), which solves two-dimensional elasticity problems. The program reports results as images of the distributions of the different fields, along with their vector and matrix equivalents kept in memory for post-processing.
## Some useful units and equivalences
* $1 \quad \text{ N} = 1\quad \text{ kg m/s}^2$.
* $1\quad \text{ Pa} = 1 \text{ N/m}^2$.
* $1\quad \text{ kPa} = 1 \times 10^3 \text{ Pa}$.
* $1\quad \text{ MPa} = 1 \times 10^6 \text{ Pa}$.
* $1\quad \text{ GPa} = 1 \times 10^9 \text{ Pa}$.
## References
* Juan Gómez, Nicolás Guarín-Zapata (2018). SolidsPy: 2D-Finite Element Analysis with Python, <https://github.com/AppliedMechanics-EAFIT/SolidsPy>.
```
from IPython.core.display import HTML
def css_styling():
styles = open('./nb_style.css', 'r').read()
return HTML(styles)
css_styling()
```
<small><i>This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/data-science-ipython-notebooks).</i></small>
# Pandas
Credits: The following are notes taken while working through [Python for Data Analysis](http://www.amazon.com/Python-Data-Analysis-Wrangling-IPython/dp/1449319793) by Wes McKinney
* Series
* DataFrame
* Reindexing
* Dropping Entries
* Indexing, Selecting, Filtering
* Arithmetic and Data Alignment
* Function Application and Mapping
* Sorting and Ranking
* Axis Indices with Duplicate Values
* Summarizing and Computing Descriptive Statistics
* Cleaning Data (Under Construction)
* Input and Output (Under Construction)
```
from pandas import Series, DataFrame
import pandas as pd
import numpy as np
```
## Series
A Series is a one-dimensional array-like object containing an array of data and an associated array of data labels. The data can be any NumPy data type and the labels are the Series' index.
Create a Series:
```
ser_1 = Series([1, 1, 2, -3, -5, 8, 13])
ser_1
```
Get the array representation of a Series:
```
ser_1.values
```
Index objects are immutable and hold the axis labels and metadata such as names and axis names.
Get the index of the Series:
```
ser_1.index
```
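The immutability mentioned above can be checked directly — assigning to an element of an Index raises a `TypeError`:

```python
from pandas import Series

ser = Series([1, 1, 2], index=['a', 'b', 'c'])
try:
    ser.index[0] = 'z'   # Index objects do not support item assignment
except TypeError as e:
    print('immutable:', e)
```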
Create a Series with a custom index:
```
ser_2 = Series([1, 1, 2, -3, -5], index=['a', 'b', 'c', 'd', 'e'])
ser_2
```
Get a value from a Series:
```
ser_2[4] == ser_2['e']
```
Get a set of values from a Series by passing in a list:
```
ser_2[['c', 'a', 'b']]
```
Get values greater than 0:
```
ser_2[ser_2 > 0]
```
Scalar multiply:
```
ser_2 * 2
```
Apply a numpy math function:
```
import numpy as np
np.exp(ser_2)
```
A Series is like a fixed-length, ordered dict.
Create a series by passing in a dict:
```
dict_1 = {'foo' : 100, 'bar' : 200, 'baz' : 300}
ser_3 = Series(dict_1)
ser_3
```
Re-order a Series by passing in an index (indices not found are NaN):
```
index = ['foo', 'bar', 'baz', 'qux']
ser_4 = Series(dict_1, index=index)
ser_4
```
Check for NaN with the pandas method:
```
pd.isnull(ser_4)
```
Check for NaN with the Series method:
```
ser_4.isnull()
```
Series automatically aligns differently indexed data in arithmetic operations:
```
ser_3 + ser_4
```
Name a Series:
```
ser_4.name = 'foobarbazqux'
```
Name a Series index:
```
ser_4.index.name = 'label'
ser_4
```
Rename a Series' index in place:
```
ser_4.index = ['fo', 'br', 'bz', 'qx']
ser_4
```
## DataFrame
A DataFrame is a tabular data structure containing an ordered collection of columns. Each column can have a different type. DataFrames have both row and column indices and is analogous to a dict of Series. Row and column operations are treated roughly symmetrically. Columns returned when indexing a DataFrame are views of the underlying data, not a copy. To obtain a copy, use the Series' copy method.
Create a DataFrame:
```
data_1 = {'state' : ['VA', 'VA', 'VA', 'MD', 'MD'],
'year' : [2012, 2013, 2014, 2014, 2015],
'pop' : [5.0, 5.1, 5.2, 4.0, 4.1]}
df_1 = DataFrame(data_1)
df_1
```
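As noted above, to mutate column data without touching the frame you should use `copy()`; a quick check that the copy is independent:

```python
from pandas import DataFrame

df = DataFrame({'state': ['VA', 'MD'], 'pop': [5.0, 4.0]})
pop_copy = df['pop'].copy()   # independent copy of the column
pop_copy[0] = 99.0
print(df['pop'][0])           # original is untouched: 5.0
```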
Create a DataFrame specifying a sequence of columns:
```
df_2 = DataFrame(data_1, columns=['year', 'state', 'pop'])
df_2
```
Like Series, columns that are not present in the data are NaN:
```
df_3 = DataFrame(data_1, columns=['year', 'state', 'pop', 'unempl'])
df_3
```
Retrieve a column by key, returning a Series:
```
df_3['state']
```
Retrieve a column by attribute, returning a Series:
```
df_3.year
```
Retrieve a row by position:
```
df_3.ix[0]
```
Update a column by assignment:
```
df_3['unempl'] = np.arange(5)
df_3
```
Assign a Series to a column (note if assigning a list or array, the length must match the DataFrame, unlike a Series):
```
unempl = Series([6.0, 6.0, 6.1], index=[2, 3, 4])
df_3['unempl'] = unempl
df_3
```
Assign a new column that doesn't exist to create a new column:
```
df_3['state_dup'] = df_3['state']
df_3
```
Delete a column:
```
del df_3['state_dup']
df_3
```
Create a DataFrame from a nested dict of dicts (the keys in the inner dicts are unioned and sorted to form the index in the result, unless an explicit index is specified):
```
pop = {'VA' : {2013 : 5.1, 2014 : 5.2},
'MD' : {2014 : 4.0, 2015 : 4.1}}
df_4 = DataFrame(pop)
df_4
```
Transpose the DataFrame:
```
df_4.T
```
Create a DataFrame from a dict of Series:
```
data_2 = {'VA' : df_4['VA'][1:],
'MD' : df_4['MD'][2:]}
df_5 = DataFrame(data_2)
df_5
```
Set the DataFrame index name:
```
df_5.index.name = 'year'
df_5
```
Set the DataFrame columns name:
```
df_5.columns.name = 'state'
df_5
```
Return the data contained in a DataFrame as a 2D ndarray:
```
df_5.values
```
If the columns are different dtypes, the 2D ndarray's dtype will accommodate all of the columns:
```
df_3.values
```
## Reindexing
Create a new object with the data conformed to a new index. Any missing values are set to NaN.
```
df_3
```
Reindexing rows returns a new frame with the specified index:
```
df_3.reindex(list(reversed(range(0, 6))))
```
Missing values can be set to something other than NaN:
```
df_3.reindex(range(7), fill_value=0)
```
Interpolate ordered data like a time series:
```
ser_5 = Series(['foo', 'bar', 'baz'], index=[0, 2, 4])
ser_5.reindex(range(5), method='ffill')
ser_5.reindex(range(5), method='bfill')
```
Reindex columns:
```
df_3.reindex(columns=['state', 'pop', 'unempl', 'year'])
```
Reindex rows and columns while filling rows:
```
df_3.reindex(index=list(reversed(range(0, 6))),
fill_value=0,
columns=['state', 'pop', 'unempl', 'year'])
```
Reindex using ix:
```
df_6 = df_3.ix[range(0, 7), ['state', 'pop', 'unempl', 'year']]
df_6
```
## Dropping Entries
Drop rows from a Series or DataFrame:
```
df_7 = df_6.drop([0, 1])
df_7
```
Drop columns from a DataFrame:
```
df_7 = df_7.drop('unempl', axis=1)
df_7
```
## Indexing, Selecting, Filtering
Series indexing is similar to NumPy array indexing with the added bonus of being able to use the Series' index values.
```
ser_2
```
Select a value from a Series:
```
ser_2[0] == ser_2['a']
```
Select a slice from a Series:
```
ser_2[1:4]
```
Select specific values from a Series:
```
ser_2[['b', 'c', 'd']]
```
Select from a Series based on a filter:
```
ser_2[ser_2 > 0]
```
Select a slice from a Series with labels (note the end point is inclusive):
```
ser_2['a':'b']
```
Assign to a Series slice (note the end point is inclusive):
```
ser_2['a':'b'] = 0
ser_2
```
Pandas supports indexing into a DataFrame.
```
df_6
```
Select specified columns from a DataFrame:
```
df_6[['pop', 'unempl']]
```
Select a slice from a DataFrame:
```
df_6[:2]
```
Select from a DataFrame based on a filter:
```
df_6[df_6['pop'] > 5]
```
Perform a scalar comparison on a DataFrame:
```
df_6 > 5
```
Perform a scalar comparison on a DataFrame, retain the values that pass the filter:
```
df_6[df_6 > 5]
```
Select a slice of rows from a DataFrame (note the end point is inclusive):
```
df_6.ix[2:3]
```
Select a slice of rows from a specific column of a DataFrame:
```
df_6.ix[0:2, 'pop']
df_6
```
Select rows based on a boolean condition on a specific column:
```
df_6.ix[df_6.unempl > 5.0]
```
## Arithmetic and Data Alignment
Adding Series objects results in the union of index pairs if the pairs are not the same, resulting in NaN for indices that do not overlap:
```
np.random.seed(0)
ser_6 = Series(np.random.randn(5),
index=['a', 'b', 'c', 'd', 'e'])
ser_6
np.random.seed(1)
ser_7 = Series(np.random.randn(5),
index=['a', 'c', 'e', 'f', 'g'])
ser_7
ser_6 + ser_7
```
Set a fill value instead of NaN for indices that do not overlap:
```
ser_6.add(ser_7, fill_value=0)
```
Adding DataFrame objects results in the union of index pairs for rows and columns if the pairs are not the same, resulting in NaN for indices that do not overlap:
```
np.random.seed(0)
df_8 = DataFrame(np.random.rand(9).reshape((3, 3)),
columns=['a', 'b', 'c'])
df_8
np.random.seed(1)
df_9 = DataFrame(np.random.rand(9).reshape((3, 3)),
columns=['b', 'c', 'd'])
df_9
df_8 + df_9
```
Set a fill value instead of NaN for indices that do not overlap:
```
df_10 = df_8.add(df_9, fill_value=0)
df_10
```
Like NumPy, pandas supports arithmetic operations between DataFrames and Series.
Match the index of the Series on the DataFrame's columns, broadcasting down the rows:
```
ser_8 = df_10.ix[0]
df_11 = df_10 - ser_8
df_11
```
Match the index of the Series on the DataFrame's columns, broadcasting down the rows and union the indices that do not match:
```
ser_9 = Series(range(3), index=['a', 'd', 'e'])
ser_9
df_11 - ser_9
```
Broadcast over the columns and match the rows (axis=0) by using an arithmetic method:
```
df_10
ser_10 = Series([100, 200, 300])
ser_10
df_10.sub(ser_10, axis=0)
```
## Function Application and Mapping
NumPy ufuncs (element-wise array methods) operate on pandas objects:
```
df_11 = np.abs(df_11)
df_11
```
Apply a function on 1D arrays to each column:
```
func_1 = lambda x: x.max() - x.min()
df_11.apply(func_1)
```
Apply a function on 1D arrays to each row:
```
df_11.apply(func_1, axis=1)
```
Apply a function and return a DataFrame:
```
func_2 = lambda x: Series([x.min(), x.max()], index=['min', 'max'])
df_11.apply(func_2)
```
Apply an element-wise Python function to a DataFrame:
```
func_3 = lambda x: '%.2f' %x
df_11.applymap(func_3)
```
Apply an element-wise Python function to a Series:
```
df_11['a'].map(func_3)
```
## Sorting and Ranking
```
ser_4
```
Sort a Series by its index:
```
ser_4.sort_index()
```
Sort a Series by its values:
```
ser_4.order()
df_12 = DataFrame(np.arange(12).reshape((3, 4)),
index=['three', 'one', 'two'],
columns=['c', 'a', 'b', 'd'])
df_12
```
Sort a DataFrame by its index:
```
df_12.sort_index()
```
Sort a DataFrame by columns in descending order:
```
df_12.sort_index(axis=1, ascending=False)
```
Sort a DataFrame's values by column:
```
df_12.sort_index(by=['d', 'c'])
```
Ranking is similar to numpy.argsort except that ties are broken by assigning each group the mean rank:
```
ser_11 = Series([7, -5, 7, 4, 2, 0, 4, 7])
ser_11 = ser_11.order()
ser_11
ser_11.rank()
```
Rank a Series according to when they appear in the data:
```
ser_11.rank(method='first')
```
Rank a Series in descending order, using the maximum rank for the group:
```
ser_11.rank(ascending=False, method='max')
```
DataFrames can rank over rows or columns.
```
df_13 = DataFrame({'foo' : [7, -5, 7, 4, 2, 0, 4, 7],
'bar' : [-5, 4, 2, 0, 4, 7, 7, 8],
'baz' : [-1, 2, 3, 0, 5, 9, 9, 5]})
df_13
```
Rank a DataFrame over rows:
```
df_13.rank()
```
Rank a DataFrame over columns:
```
df_13.rank(axis=1)
```
## Axis Indexes with Duplicate Values
Labels do not have to be unique in Pandas:
```
ser_12 = Series(range(5), index=['foo', 'foo', 'bar', 'bar', 'baz'])
ser_12
ser_12.index.is_unique
```
Select Series elements:
```
ser_12['foo']
```
Select DataFrame elements:
```
df_14 = DataFrame(np.random.randn(5, 4),
index=['foo', 'foo', 'bar', 'bar', 'baz'])
df_14
df_14.loc['bar']
```
## Summarizing and Computing Descriptive Statistics
Unlike NumPy arrays, Pandas descriptive statistics automatically exclude missing data. NaN values are excluded unless the entire row or column is NA.
```
df_6
df_6.sum()
```
Sum over the rows:
```
df_6.sum(axis=1)
```
Account for NaNs:
```
df_6.sum(axis=1, skipna=False)
```
## Cleaning Data (Under Construction)
* Replace
* Drop
* Concatenate
```
from pandas import Series, DataFrame
import pandas as pd
```
Setup a DataFrame:
```
data_1 = {'state' : ['VA', 'VA', 'VA', 'MD', 'MD'],
'year' : [2012, 2013, 2014, 2014, 2015],
'population' : [5.0, 5.1, 5.2, 4.0, 4.1]}
df_1 = DataFrame(data_1)
df_1
```
### Replace
Replace all occurrences of a string with another string, in place (no copy):
```
df_1.replace('VA', 'VIRGINIA', inplace=True)
df_1
```
In a specified column, replace all occurrences of a string with another string, in place (no copy):
```
df_1.replace({'state' : { 'MD' : 'MARYLAND' }}, inplace=True)
df_1
```
### Drop
Drop the 'population' column and return a copy of the DataFrame:
```
df_2 = df_1.drop('population', axis=1)
df_2
```
### Concatenate
Concatenate two DataFrames:
```
data_2 = {'state' : ['NY', 'NY', 'NY', 'FL', 'FL'],
'year' : [2012, 2013, 2014, 2014, 2015],
'population' : [6.0, 6.1, 6.2, 3.0, 3.1]}
df_3 = DataFrame(data_2)
df_3
df_4 = pd.concat([df_1, df_3])
df_4
```
## Input and Output (Under Construction)
* Reading
* Writing
```
from pandas import Series, DataFrame
import pandas as pd
```
### Reading
Read data from a CSV file into a DataFrame (use sep='\t' for TSV):
```
df_1 = pd.read_csv("../data/ozone.csv")
```
Get a summary of the DataFrame:
```
df_1.describe()
```
List the first five rows of the DataFrame:
```
df_1.head()
```
### Writing
Create a copy of the CSV file, encoded in UTF-8 and hiding the index and header labels:
```
df_1.to_csv('../data/ozone_copy.csv',
encoding='utf-8',
index=False,
header=False)
```
View the data directory:
```
!ls -l ../data/
```
# Differentiable Spatial to Numerical Transform
An example of the usage of the DSNT layer, as taken from the paper "Numerical Coordinate Regression with Convolutional Neural Networks"
```
# Imports
import tensorflow as tf
import cv2
import numpy as np
import sonnet as snt
# Import for us of the transform layer and loss function
import dsnt
# For the Sonnet Module
# from dsnt_snt import DSNT
```
## Build some dummy data
Circles of random colour, size and position on a black background
```
img_size = 150
image_count = 200
train_percent = 0.75
train_image_count = int(train_percent * image_count)
test_image_count = image_count - train_image_count
images = []
targets = []
for _ in range(image_count):
img = np.zeros((img_size, img_size, 3))
row, col = np.random.randint(0, img_size), np.random.randint(0, img_size)
radius = np.random.randint(8, 15)
b, g, r = np.random.randint(0, 255), np.random.randint(0, 255), np.random.randint(0, 255)
cv2.circle(img, (row, col), radius, (b, g, r), -1)
images.append(img)
norm_row = row / img_size
norm_col = col / img_size
targets.append([norm_row, norm_col])
images = np.array(images)
targets = np.array(targets)
train_images = images[:train_image_count]
test_images = images[train_image_count:]
train_targets = targets[:train_image_count]
test_targets = targets[train_image_count:]
print('''
{} images total
training: {}
testing : {}'''.format(image_count, train_image_count, test_image_count))
```
## A simple model
A handful of convolutional layers, each time downsampling by a factor of 2.
The network finishes with a kernel-size 1 convolution, producing a single channel heat-map.
I'm an advocate of [DeepMind's Sonnet](https://github.com/deepmind/sonnet), so the convolution operations are written with it; the equivalent raw TensorFlow operations are straightforward.
```
def inference(inputs):
    inputs = snt.Conv2D(output_channels=16,
kernel_shape=3,
rate=1,
padding='SAME',
name='conv1')(inputs)
inputs = tf.nn.relu(inputs)
inputs = tf.nn.max_pool(inputs, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
inputs = snt.Conv2D(output_channels=32,
kernel_shape=3,
rate=2,
padding='SAME',
name='conv2')(inputs)
inputs = tf.nn.relu(inputs)
inputs = tf.nn.max_pool(inputs, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
inputs = snt.Conv2D(output_channels=64,
kernel_shape=3,
rate=4,
padding='SAME',
name='conv3')(inputs)
inputs = tf.nn.relu(inputs)
inputs = tf.nn.max_pool(inputs, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
inputs = snt.Conv2D(output_channels=128,
kernel_shape=3,
rate=8,
padding='SAME',
name='conv4')(inputs)
inputs = tf.nn.relu(inputs)
inputs = tf.nn.max_pool(inputs, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
inputs = snt.Conv2D(output_channels=256,
kernel_shape=3,
rate=16,
padding='SAME',
name='conv5')(inputs)
inputs = tf.nn.relu(inputs)
inputs = tf.nn.max_pool(inputs, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
inputs = snt.Conv2D(output_channels=256,
kernel_shape=3,
padding='SAME',
name='conv6')(inputs)
inputs = tf.nn.relu(inputs)
inputs = tf.nn.max_pool(inputs, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
inputs = snt.Conv2D(output_channels=1,
kernel_shape=1,
padding='SAME',
name='conv7')(inputs)
coords, norm_heatmap = dsnt.dsnt(inputs)
# The Sonnet option
# coords, norm_heatmap = DSNT()(inputs)
return coords, norm_heatmap
```
## Training
A very simple training loop with no mini-batching.
```
tf.reset_default_graph()
input_x = tf.placeholder(tf.float32, shape=[None, img_size, img_size, 3])
input_y = tf.placeholder(tf.float32, shape=[None, 2])
# inference returns (coords, norm_heatmap)
predictions, heatmaps = inference(input_x)
# The predictions are in the range [-1, 1] but I prefer to work with [0, 1]
predictions = (predictions + 1) / 2
# Coordinate regression loss
loss_1 = tf.losses.mean_squared_error(input_y, predictions)
# Regularization loss
loss_2 = dsnt.js_reg_loss(heatmaps, input_y)
loss = loss_1 + loss_2
optimizer = tf.train.AdamOptimizer(learning_rate=6e-5).minimize(loss)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(10):
for i in range(train_image_count):
curr_img = train_images[i]
curr_target = train_targets[i]
_, loss_val = sess.run(
[optimizer, loss],
{
input_x: [curr_img],
input_y: [curr_target]
}
)
def evaluate_total_mse(images, targets):
'''
Evaluate the mean-squared-error across the whole given batch of images, targets
'''
total_loss = 0
image_count = images.shape[0]
for i in range(image_count):
curr_img = images[i]
curr_target = targets[i]
loss_val = sess.run(loss_1, {
input_x: [curr_img],
input_y: [curr_target]
})
total_loss += loss_val
return total_loss / image_count
print("Training MSE: {:.5f}".format(evaluate_total_mse(train_images, train_targets)))
print("Testing MSE : {:.5f}".format(evaluate_total_mse(test_images, test_targets)))
```
# Verification History
```
import json, re, pprint, os
import pandas as pd
import numpy as np
import matplotlib
#matplotlib.use('pgf') # this makes exports more beautiful, but disables plots in this notebook
DIRECTORY = "."
pltsettings = {
"figure.figsize" : (5.0, 4.0),
"pgf.texsystem" : "pdflatex",
"font.family": "serif",
"font.serif": [], # use latex default serif font
#"font.sans-serif": ["DejaVu Sans"], # use a specific sans-serif font
}
matplotlib.rcParams.update(pltsettings)
import matplotlib.pyplot as plt
#import seaborn as sns # makes exports ugly
def read_json(fname):
"""
    reads JSON and returns (totals, units), a tuple of dicts
"""
units=None
totals=None
with open(fname,'r') as f:
endofunits = False
for line in f:
match = re.search(r'^TOTALS', line)
if match:
endofunits = True
if not endofunits:
try:
if not units:
units = json.loads(line)
else:
                        print("error: units appearing multiple times")
except:
pass
else:
try:
if not totals:
totals = json.loads(line)
else:
                        print("error: totals appearing multiple times")
except:
pass
# unpack units (list of dicts) => dict
if units:
tmp=units
units={}
for u in tmp:
            name = next(iter(u))
stats=u[name]
            # print("unit=" + name + ", stats=" + str(stats))
units[name]=stats
return (totals, units)
#######
from datetime import datetime
def date_from_path(f):
"""
    Extract the calendar date from a file path. If parsing fails,
    fall back to the first path component.
"""
parts = f.split(os.sep)
for p in parts:
match = re.search(r"(\d+-\d+-\d+_\d+:\d+:\d+)", p)
if match:
try:
dobj = datetime.strptime(match.group(1), "%Y-%m-%d_%H:%M:%S")
return dobj
except:
pass
return parts[0]
#######
import fnmatch
# find all log files in the subfolders
logfiles=[]
for root, dirnames, filenames in os.walk(DIRECTORY):
for f in fnmatch.filter(filenames, 'unitstats*.log'):
logfiles.append(os.path.join(root, f))
# load each of them into dict calendar date -> data
logfiles=list(set(logfiles))
alldata = {}
for f in logfiles:
if True: #try:
caldate = date_from_path (f)
(totals, units) = read_json(f)
        print("file=" + f + ", date=" + str(caldate))
if not caldate in alldata:
alldata[caldate] = {}
alldata[caldate]["totals"] = totals
#alldata[caldate]["units"] = units
else: #except:
pass
print("FIRST: ")
t = next(iter(alldata))
print(str(t) + ": " + str(alldata[t]["totals"]))
```
## Now make graph over totals
```
import matplotlib.dates as dates
totals = {k: v['totals'] for k, v in alldata.items() if v['totals']['props'] > 0}
# test data:
#totals={datetime(2016, 10, 12, 14, 57, 31): {'props' : 10, 'proven': 9},datetime(2016, 10, 13, 14, 57, 31): {'props' : 11, 'proven': 9},datetime(2016, 10, 14, 14, 57, 31): {'props' : 11, 'proven': 10},datetime(2016, 10, 15, 14, 57, 31): {'props' : 10, 'proven': 10}}
df = pd.DataFrame(totals).T;
#print df.head()
exclude_columns=['units','ents', 'flows','skip','flows_proven', 'suppressed', 'flows_suppressed', 'flows_success']
ax = df.loc[:"2016-09-15", df.columns.difference(exclude_columns)].plot(logy=True, marker='x', figsize=(13, 8));
#ax.xaxis.set_minor_locator(dates.WeekdayLocator(byweekday=(1), interval=1))
#ax.xaxis.set_minor_formatter(dates.DateFormatter('%d\n%a'))
#ax.xaxis.set_major_locator(dates.MonthLocator())
#ax.xaxis.set_major_formatter(dates.DateFormatter('\n\n\n%b\n%Y'))
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%b\n%d'))
ax.xaxis.grid(True, which="major")
ax.xaxis.grid(False, which="minor")
ax.set_ylabel('number of')
ax.yaxis.grid()
plt.savefig(DIRECTORY + os.sep + 'history.pdf', bbox_inches='tight')
plt.show()
```
NPWP strings can be converted to the following formats via the `output_format` parameter:
* `compact`: only number strings without any separators or whitespace, like "013000666091000"
* `standard`: NPWP strings with separators in the proper places, like "01.300.066.6-091.000"
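To make the relationship between the two formats concrete, here is a hypothetical pure-Python helper (not part of `dataprep`) that regroups a 15-digit compact NPWP into the standard 2-3-3-1-3-3 pattern:

```python
def format_npwp(compact: str) -> str:
    """Format a 15-digit compact NPWP as XX.XXX.XXX.X-XXX.XXX."""
    # Keep only the digits so already-formatted input is also accepted
    digits = "".join(ch for ch in compact if ch.isdigit())
    if len(digits) != 15:
        raise ValueError("an NPWP has exactly 15 digits")
    return "{}.{}.{}.{}-{}.{}".format(
        digits[0:2], digits[2:5], digits[5:8],
        digits[8:9], digits[9:12], digits[12:15],
    )

print(format_npwp("013000666091000"))  # 01.300.066.6-091.000
```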
Invalid parsing is handled with the `errors` parameter:
* `coerce` (default): invalid parsing will be set to NaN
* `ignore`: invalid parsing will return the input
* `raise`: invalid parsing will raise an exception
The following sections demonstrate the functionality of `clean_id_npwp()` and `validate_id_npwp()`.
### An example dataset containing NPWP strings
```
import pandas as pd
import numpy as np
df = pd.DataFrame(
{
"npwp": [
"013000666091000",
"123456789",
"51824753556",
"51 824 753 556",
"hello",
np.nan,
"NULL"
],
"address": [
"123 Pine Ave.",
"main st",
"1234 west main heights 57033",
"apt 1 789 s maple rd manhattan",
"robie house, 789 north main street",
"(staples center) 1111 S Figueroa St, Los Angeles",
"hello",
]
}
)
df
```
## 1. Default `clean_id_npwp`
By default, `clean_id_npwp` will clean NPWP strings and output them in the standard format with proper separators.
```
from dataprep.clean import clean_id_npwp
clean_id_npwp(df, column = "npwp")
```
## 2. Output formats
This section demonstrates the output parameter.
### `standard` (default)
```
clean_id_npwp(df, column = "npwp", output_format="standard")
```
### `compact`
```
clean_id_npwp(df, column = "npwp", output_format="compact")
```
## 3. `inplace` parameter
This deletes the given column from the returned DataFrame.
A new column containing cleaned NPWP strings is added with a title in the format `"{original title}_clean"`.
```
clean_id_npwp(df, column="npwp", inplace=True)
```
## 4. `errors` parameter
### `coerce` (default)
```
clean_id_npwp(df, "npwp", errors="coerce")
```
### `ignore`
```
clean_id_npwp(df, "npwp", errors="ignore")
```
## 5. `validate_id_npwp()`
`validate_id_npwp()` returns `True` when the input is a valid NPWP. Otherwise it returns `False`.
The input of `validate_id_npwp()` can be a string, a pandas Series, a Dask Series, a pandas DataFrame, or a Dask DataFrame.
When the input is a string or a Series, no column name needs to be specified.
When the input is a DataFrame, the user may optionally specify a column name. If a column name is given, `validate_id_npwp()` returns the validation result for that column only; otherwise it returns the validation result for the whole DataFrame.
```
from dataprep.clean import validate_id_npwp
print(validate_id_npwp("013000666091000"))
print(validate_id_npwp("123456789"))
print(validate_id_npwp("51824753556"))
print(validate_id_npwp("51 824 753 556"))
print(validate_id_npwp("hello"))
print(validate_id_npwp(np.nan))
print(validate_id_npwp("NULL"))
```
### Series
```
validate_id_npwp(df["npwp"])
```
### DataFrame + Specify Column
```
validate_id_npwp(df, column="npwp")
```
### Only DataFrame
```
validate_id_npwp(df)
```
```
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('SIC98/GPT2-python-code-generator')
model = GPT2LMHeadModel.from_pretrained('SIC98/GPT2-python-code-generator')
# Can use fast tokenizer
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained('SIC98/GPT2-python-code-generator')
# Test tokenizer
print(tokenizer("Hello world"))
print(tokenizer(" Hello world"))
print(tokenizer.encode("Hello world"))
print(tokenizer.encode(" Hello world"))
print(tokenizer.decode([15496, 995]))
print(tokenizer.decode([18435, 995]))
sequence = """# coding=utf-8
# Copyright 2020 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");"""
inputs = tokenizer.encode(sequence, return_tensors='pt')
outputs = model.generate(inputs, max_length=1024, do_sample=True, temperature=0.5, top_p=1.0)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
sequence = 'def is_palindrome(s):\n """Check whether a string is a palindrome"""'
inputs = tokenizer.encode(sequence, return_tensors='pt')
outputs = model.generate(inputs, max_length=64, do_sample=True, temperature=0.1, top_p=1.0)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
sequence = 'def long_palindrome_indices(l):\n """Return list indices for elements that are palindromes and at least 7 characters"""'
inputs = tokenizer.encode(sequence, return_tensors='pt')
outputs = model.generate(inputs, max_length=64, do_sample=True, temperature=0.1, top_p=1.0)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
sequence = """@dataclass
class Item:
name: str
price: float
@dataclass
class Order:
id: int
items: List[Item]
def compute_total_price(self, palindrome_discount=0.2):"""
inputs = tokenizer.encode(sequence, return_tensors='pt')
outputs = model.generate(inputs, max_length=128, do_sample=True, temperature=0.1, top_p=1.0)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
sequence = '@dataclass\nclass Item:\n name: str\n price: float\n\n@dataclass\nclass Order:\n id: int\n items: List[Item]\n \n def compute_total_price(self, palindrome_discount=0.2):\n """\n Compute the total price and return it.\n Apply a discount to items whose names are palindromes.\n """'
inputs = tokenizer.encode(sequence, return_tensors='pt')
outputs = model.generate(inputs, max_length=256, do_sample=True, temperature=0.2, top_p=1.0)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
sequence = 'def add_all(a, b, c, d):'
inputs = tokenizer.encode(sequence, return_tensors='pt')
outputs = model.generate(inputs, max_length=256, do_sample=True, temperature=0.2, top_p=1.0)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
!python transformers/examples/text-generation/run_generation.py \
--model_type=gpt2 \
--model_name_or_path='SIC98/GPT2-python-code-generator'
```
# Linear transformations
When working in regular vector spaces, a common tool is a linear transformation, typically in the form of a matrix.
While geometric algebra already provides the rotors as a means of describing transformations (see [the CGA tutorial section](cga/index.ipynb#Operations)), there are types of linear transformation that are not suitable for this representation.
This tutorial leans heavily on the explanation of linear transformations in <cite data-cite="ga4cs">GA4CS</cite>, chapter 4. It explores the [clifford.transformations](../api/transformations.rst) submodule.
## Vector transformations in linear algebra
As a brief reminder, we can represent transforms in $\mathbb{R}^3$ using the matrices in $\mathbb{R}^{3 \times 3}$:
```
import numpy as np
rot_and_scale_x = np.array([
[1, 0, 0],
[0, 1, -1],
[0, 1, 1],
])
```
We can read this as a table, where each column corresponds to a component of the input vector, and each row a component of the output:
```
def show_table(data, cols, rows):
# trick to get a nice looking table in a notebook
import pandas as pd; return pd.DataFrame(data, columns=cols, index=rows)
show_table(rot_and_scale_x, ["$\mathit{in}_%s$" % c for c in "xyz"], ["$\mathit{out}_%s$" % c for c in "xyz"])
```
We can apply it to some vectors using the `@` matrix multiply operator:
```
v1 = np.array([1, 0, 0])
v2 = np.array([0, 1, 0])
v3 = np.array([0, 0, 1])
(
rot_and_scale_x @ v1,
rot_and_scale_x @ v2,
rot_and_scale_x @ v3,
)
```
We say this transformation is linear because $f(a + b) = f(a) + f(b)$:
```
assert np.array_equal(
rot_and_scale_x @ (2*v1 + 3*v2),
2 * (rot_and_scale_x @ v1) + 3 * (rot_and_scale_x @ v2)
)
```
## Multivector transformations in geometric algebra
How would we go about applying `rot_and_scale_x` in a geometric algebra? Clearly we can apply it to vectors in the same way as before, which we can do by unpacking coefficients and repacking them:
```
from clifford.g3 import *
v = 2*e1 + 3*e2
v_trans = layout.MultiVector()
v_trans[1,], v_trans[2,], v_trans[3,] = rot_and_scale_x @ [v[1,], v[2,], v[3,]]
v_trans
```
However, in geometric algebra we don't only care about the vectors, we want to transform the higher-order blades too. This can be done via an outermorphism, which extends $f(a)$ to $f(a \wedge b) = f(a) \wedge f(b)$. This is where the `clifford.transformations` submodule comes in handy:
```
from clifford import transformations
rot_and_scale_x_ga = transformations.OutermorphismMatrix(rot_and_scale_x, layout)
```
To apply these transformations, we use the `()` operator, rather than `@`:
```
rot_and_scale_x_ga(e12)
# check it's an outermorphism
rot_and_scale_x_ga(e1) ^ rot_and_scale_x_ga(e2)
```
It shouldn't come as a surprise that applying the transformation to the pseudoscalar will tell us the determinant of our original matrix - the determinant tells us how a transformation scales volumes, and `layout.I` is a representation of the unit volume element!
```
np.linalg.det(rot_and_scale_x), rot_and_scale_x_ga(layout.I)
```
### Matrix representation
Under the hood, clifford implements this using a matrix too - it's just now a matrix operating over all of the basis blades, not just over the vectors. We can see this by looking at the _private_ `_matrix` attribute:
```
show_table(rot_and_scale_x_ga._matrix, ["$\mathit{in}_{%s}$" % c for c in layout.names], ["$\mathit{out}_{%s}$" % c for c in layout.names])
```
```
# Reload when code changed:
%load_ext autoreload
%autoreload 2
%pwd
import sys
import os
path = "../"
sys.path.append(path)
#os.path.abspath("../")
print(os.path.abspath(path))
import os
import core
import importlib
importlib.reload(core)
try:
logging.shutdown()
importlib.reload(logging)
except:
pass
import pandas as pd
import numpy as np
import json
import time
import re
from event_handler import EventHandler
print(core.__file__)
pd.__version__
user_id_1 = 'user_1'
user_id_2 = 'user_2'
user_1_ws_1 = 'mw1'
print(path)
paths = {'user_id': user_id_1,
'workspace_directory': path + '/workspaces',
'resource_directory': path + '/resources',
'log_directory': path + '/log',
'test_data_directory': path + '/test_data'}
workspace_uuid = '327bbf4f-f367-4d85-80f4-e9151907eadc'
subset_uuid = 'ac45fef1-3042-44f4-bdca-e121d1d93f45'
# ekos = EventHandler(**paths)
# ekos.action_workspace_load_default_data(workspace_uuid)
ekos = EventHandler(**paths)
ekos.action_load_data(workspace_uuid)
workspace = ekos.workspaces[workspace_uuid]
subset = workspace.get_subset_object(subset_uuid)
df = workspace.data_handler.all_data
print(workspace.index_handler.booleans.keys())
print(workspace.index_handler.booleans['step_0'].keys())
print(workspace.index_handler.booleans['step_0'][subset_uuid].keys())
print(workspace.index_handler.booleans['step_0'][subset_uuid]['step_1'].keys())
b0 = workspace.index_handler.booleans['step_0']['boolean']
print(set(df['MYEAR']))
print(set(df.loc[b0, 'MYEAR']))
# Check filter
ekos.apply_data_filter(workspace_uuid=workspace_uuid,
step=0)
ekos.apply_data_filter(workspace_uuid=workspace_uuid,
subset_uuid=subset_uuid,
step=1)
ih = core.index_handler.new_IndexHandler(workspace_object=workspace, data_handler_object=workspace.data_handler)
ih._get_filter_level('step_0')
ih._add_filter_level('step_0')
class Level(dict):
def __init__(self,
level=None,
name=None,
boolean=None,
parent_level=None):
self.level = level
self.name = name
self.boolean = boolean
self.parent_level = parent_level
levels = ['data_0', 'subset', 'data_1', 'water_body', 'indicator']
parent = dict(zip(levels[1:], levels[:-1]))
def _get_parent_level(key):
return parent.get(key, None)
def get_level_info(level, name):
return {'level': level,
'name': name,
'boolean': None,
'parent_level': _get_parent_level(level)}
all_data = {}
def add_boolean(filter_object=None,
df=None,
**kwargs):
given_levels = []
for level in levels:
if level in kwargs:
given_levels.append(level)
# Check if sufficient information
if given_levels != levels[:len(given_levels)]:
print('NO')
return False
data = all_data
for key in given_levels:
value = kwargs[key]
if value in data:
data = data[value]
else:
data[value] = set_boolean_info(level=key,
name=value,
boolean=1,
next_level={})
add_boolean(data_1=True, data_0=True, subset='sub1')
all_data['fgsagfsa']['parent_dict'].keys()
all_data.keys()
def set_boolean_info(level=None,
name=None,
boolean=None,
parent_dict=None,
**kwargs):
return {'level': level,
'name': name,
'boolean': boolean,
'parent_dict': parent_dict}
data0 = set_boolean_info(level='data_0',
name='data_0',
boolean=1,
parent_dict=False)
sub1 = set_boolean_info(level='subset',
name='ac45fef1-3042-44f4-bdca-e121d1d93f45',
boolean=2,
parent_dict=data0)
data1 = set_boolean_info(level='data_1',
name='data_1',
boolean=3,
parent_dict=sub1)
wb1 = set_boolean_info(level='water_body',
name='wb1',
boolean=5,
parent_dict=data1)
wb2 = set_boolean_info(level='water_body',
name='wb2',
boolean=10,
parent_dict=data1)
def get_combined_boolean(info_dict):
boolean = info_dict['boolean']
if info_dict['parent_dict'] == False:
return boolean
else:
# print(info_dict.keys())
boolean += get_combined_boolean(info_dict['parent_dict'])
return boolean
get_combined_boolean(wb1)
indicator_din_winter = 'din_winter'
indicator_bqi = 'bqi'
indicator_oxygen = 'oxygen'
type_area = False
viss_eu_cd = 'SE582000-115270'
sf_din_winter = ekos.get_settings_filter_object(workspace_uuid=workspace_uuid,
subset_uuid=subset_uuid,
indicator=indicator_din_winter,
filter_type='data')
sf_bqi = ekos.get_settings_filter_object(workspace_uuid=workspace_uuid,
subset_uuid=subset_uuid,
indicator=indicator_bqi,
filter_type='data')
sf_oxygen = ekos.get_settings_filter_object(workspace_uuid=workspace_uuid,
subset_uuid=subset_uuid,
indicator=indicator_oxygen,
filter_type='data')
sf_oxygen.get_viss_eu_cd_list()
sf_bqi.get_value(type_area='2',
variable='DEPH_INTERVAL',
water_body=False,
return_series=False)
sf_din_winter.get_value(type_area='2',
variable='DEPH_INTERVAL',
water_body=False,
return_series=False)
df = sf_bqi.settings.df.copy(deep=True)
suf = sf_bqi.settings.suf
num = sf_bqi.settings.num
var = sf_bqi.settings.var
print(suf)
print(num)
print(var)
df.loc[(df['TYPE_AREA_NUMBER']=='2') | \
(df['TYPE_AREA_NUMBER']=='4'), var]
df.loc[(df['TYPE_AREA_NUMBER']==num) & \
(df['VISS_EU_CD'] == 'unspecified'), var]
sf_bqi.settings.value_series
sf = ekos.get_settings_filter_object(workspace_uuid=workspace_uuid,
subset_uuid=subset_uuid,
indicator=indicator,
filter_type='data')
print(len(sf.settings.df.columns))
print(len(sf.settings.columns_in_file))
sf.settings.df
set_data = {viss_eu_cd:{'DEPH_INTERVAL': [[10, 20]]}}
sf.set_values(set_data)
len(sf.settings.columns)
len(sf.settings.columns_in_file)
df = pd.read_csv('D:/Utveckling/git/ekostat_calculator/workspaces/40e11bb1-49fb-4eb3-bb2b-4d5b9f520b2c/subsets/a15e6f23-5f78-4e78-b3fa-d9cc14e88f93/step_2/settings/indicator_settings/oxygen_test.set', sep='\t', encoding='cp1252')
df = pd.read_csv('D:/Utveckling/git/ekostat_calculator/workspaces/40e11bb1-49fb-4eb3-bb2b-4d5b9f520b2c/subsets/a15e6f23-5f78-4e78-b3fa-d9cc14e88f93/step_2/settings/indicator_settings/BQI.set', sep='\t', encoding='cp1252')
df = df.fillna('')
df.head()
viss_eu_cd = 'SE582000-115270'
#viss_eu_cd = 'SE592000-184700'
viss_eu_cd = 'SE570900-121060'
type_area = ekos.mapping_objects['water_body'].get_type_area_for_water_body(viss_eu_cd, include_suffix=True).replace('-', '')
water_body_name = ekos.mapping_objects['water_body'].get_display_name(water_body=viss_eu_cd)
print('type_area:', type_area)
print('viss_eu_cd:', viss_eu_cd)
print('water_body_name:', water_body_name)
s = df.loc[((df['Type_Area_number'].astype(str) + df['Type_Area_suffix'])==type_area) & (df['VISS_EU_CD']=='unspecified')].copy()
if not len(s):
type_area = re.findall('\d+', type_area)[0]
s = df.loc[((df['Type_Area_number'].astype(str) + df['Type_Area_suffix'])==type_area) & (df['VISS_EU_CD']=='unspecified')].copy()
s['VISS_EU_CD'] = viss_eu_cd
s['WATERBODY_NAME'] = water_body_name
type(s)
df2 = df.append(s)
wbm = ekos.mapping_objects['water_body']
wbm.get_display_name(water_body=viss_eu_cd)
df2.tail()
print(viss_eu_cd)
print()
matching_columns = ['WATERBODY_NAME', 'tolerance_MIN_NR_YEARS_int',
'tolerance_BOTTOM_WATER_int', 'MONTH_LIST_int',
'filter_DEPH_INTERVAL_int']
for viss in set(df2['VISS_EU_CD']):
if viss =='unspecified':
continue
type_area = ekos.mapping_objects['water_body'].get_type_area_for_water_body(viss, include_suffix=True).replace('-', '')
type_area_series = df2.loc[((df2['Type_Area_number'].astype(str) + df2['Type_Area_suffix'])==type_area) & (df2['VISS_EU_CD']=='unspecified')].copy()
if not len(type_area_series):
type_area = re.findall('\d+', type_area)[0]
type_area_series = df2.loc[((df2['Type_Area_number'].astype(str) + df2['Type_Area_suffix'])==type_area) & (df2['VISS_EU_CD']=='unspecified')].copy()
viss_series = df2.loc[df2['VISS_EU_CD']==viss_eu_cd, :]
for col in matching_columns:
if col not in viss_series.columns:
continue
if list(viss_series[col].values) != list(type_area_series[col].values):
print('+', viss, type_area)
df2.loc[df2['Type_Area_number']==1, :]
df3 = df2.copy(deep=True)
df3.reset_index(inplace=True)
viss_eu_cd_boolean = df3['VISS_EU_CD']==viss_eu_cd
import numpy as np
df3.drop(np.where(viss_eu_cd_boolean)[0])
df3
import codecs
hp = {}
with codecs.open('D:/Utveckling/git/ekostat_calculator/resources/mappings/indicator_settings_homogeneous_parameters.txt') as fid:
for line in fid:
line = line.strip()
if line:
indicator, par = [item.strip() for item in line.split('\t')]
if not hp.get(indicator):
hp[indicator] = []
hp[indicator].append(par)
hp
```
# <p style="text-align: center;"> Part Two: Scaling & Normalization </p>
```
from IPython.display import HTML
from IPython.display import Image
Image(url= "https://miro.medium.com/max/3316/1*yR54MSI1jjnf2QeGtt57PA.png")
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
The raw code for this IPython notebook is by default hidden for easier reading.
To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.''')
```
# <p style="text-align: center;"> Table of Contents </p>
- ## 1. [Introduction](#Introduction)
- ### 1.1 [Abstract](#abstract)
- ### 1.2 [Importing Libraries](#importing_libraries)
- ## 2. [Data Scaling](#data_scaling)
- ### 2.1 [Standardization](#standardization)
- ### 2.2 [Normalization](#normalization)
- ### 2.3 [The Big Question – Normalize or Standardize?](#the_big_question)
- ### 2.4 [Implementation](#implementation)
- #### 2.4.1 [Original Distributions](#original_distributions)
- #### 2.4.2 [Adding a Feature with Much Larger Values](#larger_values)
- #### 2.4.3 [MinMaxScaler](#min_max_scaler)
- #### 2.4.4 [StandardScaler](#standard_scaler)
- #### 2.4.5 [RobustScaler](#robust_scaler)
- #### 2.4.6 [Normalizer](#normalizer)
- #### 2.4.7 [Combined Plot](#combined_plot)
- ## 3. [Conclusion](#Conclusion)
- ## 4. [Contribution](#Contribution)
- ## 5. [Citation](#Citation)
- ## 6. [License](#License)
# <p style="text-align: center;"> 1.0 Introduction </p> <a id='Introduction'></a>
# 1.1 Abstract <a id='abstract'></a>
Welcome to Part Two of the Data Cleaning series, which covers feature scaling and normalization.
[Back to top](#Introduction)
# 1.2 Importing Libraries <a id='importing_libraries'></a>
This is the official start of any data science or machine learning project. A Python library is a reusable chunk of code that you may want to include in your programs/projects.
In this step we import the libraries that the program requires. The major libraries used here are NumPy, pandas, Matplotlib, Seaborn, and scikit-learn.
[Back to top](#Introduction)
```
# modules we'll use
import pandas as pd
import numpy as np
# for Box-Cox Transformation
from scipy import stats
# for min_max scaling
from sklearn import preprocessing
from mlxtend.preprocessing import minmax_scaling
# plotting modules
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
from astropy.table import Table, Column
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
matplotlib.style.use('ggplot')
np.random.seed(34)
```
# 2.0 Data Scaling <a id='data_scaling'></a>
## Why Should we Use Feature Scaling?
The first question we need to address is why we need to scale the variables in our dataset at all. Some machine learning algorithms are sensitive to feature scaling, while others are virtually invariant to it.
Machine learning models learn a mapping from input variables to an output variable. As such, the scale and distribution of the data drawn from the domain may be different for each variable. Input variables may have different units (e.g. feet, kilometers, and hours) that, in turn, may mean the variables have different scales.
### Gradient Descent Based Algorithms
Machine learning algorithms like linear regression, logistic regression, neural network, etc. that use gradient descent as an optimization technique require data to be scaled. Take a look at the formula for gradient descent below:

The presence of feature value X in the formula will affect the step size of the gradient descent. The difference in ranges of features will cause different step sizes for each feature. To ensure that the gradient descent moves smoothly towards the minima and that the steps for gradient descent are updated at the same rate for all the features, we scale the data before feeding it to the model.
> Having features on a similar scale can help the gradient descent converge more quickly towards the minima.
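One way to see why this matters numerically is through the conditioning of the problem. The sketch below (with made-up feature ranges, not data from this notebook) compares the condition number of X^T X before and after standardization; gradient descent converges slowly when this number is large, because no single step size suits both features.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two features on very different scales, e.g. a 0-5 grade and an income in dollars
X = np.column_stack([
    rng.uniform(0, 5, 500),        # small-scale feature
    rng.uniform(0, 100_000, 500),  # large-scale feature
])

# The condition number of X^T X governs how fast gradient descent can converge:
# the larger it is, the more the step size is compromised between features.
cond_raw = np.linalg.cond(X.T @ X)

# Standardize each column to zero mean / unit variance and recompute
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)
cond_scaled = np.linalg.cond(X_scaled.T @ X_scaled)

print(f"condition number, raw:    {cond_raw:.3e}")
print(f"condition number, scaled: {cond_scaled:.3e}")
```

After scaling, the condition number drops from hundreds of millions to nearly 1, which is why the scaled problem is so much easier for gradient descent.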
### Distance-Based Algorithms
Distance algorithms like KNN, K-means, and SVM are most affected by the range of features. This is because behind the scenes they are using distances between data points to determine their similarity.
For example, let’s say we have data containing high school CGPA scores of students (ranging from 0 to 5) and their future incomes (in thousands of dollars):

Since both the features have different scales, there is a chance that higher weightage is given to features with higher magnitude. This will impact the performance of the machine learning algorithm and obviously, we do not want our algorithm to be biased towards one feature.
> Therefore, we scale our data before employing a distance based algorithm so that all the features contribute equally to the result.

The effect of scaling is conspicuous when we compare the Euclidean distance between data points for students A and B, and between B and C, before and after scaling as shown below:

Scaling has brought both the features into the picture and the distances are now more comparable than they were before we applied scaling.
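The same effect can be reproduced numerically. The snippet below uses made-up (CGPA, income-in-thousands) values for three hypothetical students A, B, and C (the exact numbers from the table above are not reproduced here) and compares Euclidean distances before and after min-max scaling.

```python
import numpy as np

# Hypothetical (CGPA, income-in-thousands) values for students A, B, C
students = np.array([
    [3.0, 60.0],   # A
    [3.2, 90.0],   # B
    [4.8, 95.0],   # C
])

def euclidean(p, q):
    return float(np.sqrt(np.sum((p - q) ** 2)))

# Before scaling, income dominates: the CGPA differences barely register
d_ab_raw = euclidean(students[0], students[1])
d_bc_raw = euclidean(students[1], students[2])

# Min-max scale each column to [0, 1]
mins, maxs = students.min(axis=0), students.max(axis=0)
scaled = (students - mins) / (maxs - mins)

d_ab_scaled = euclidean(scaled[0], scaled[1])
d_bc_scaled = euclidean(scaled[1], scaled[2])

print(f"raw:    d(A,B)={d_ab_raw:.2f}, d(B,C)={d_bc_raw:.2f}")
print(f"scaled: d(A,B)={d_ab_scaled:.2f}, d(B,C)={d_bc_scaled:.2f}")
```

Before scaling, d(A,B) dwarfs d(B,C) purely because of the income column; after scaling, both features contribute and the two distances become comparable.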
### Tree-Based Algorithms
Tree-based algorithms, on the other hand, are fairly insensitive to the scale of the features. Think about it, a decision tree is only splitting a node based on a single feature. The decision tree splits a node on a feature that increases the homogeneity of the node. This split on a feature is not influenced by other features.
So, there is virtually no effect of the remaining features on the split. This is what makes them invariant to the scale of the features!
One of the reasons that it's easy to get confused between scaling and normalization is because the terms are sometimes used interchangeably and, to make it even more confusing, they are very similar! In both cases, you're transforming the values of numeric variables so that the transformed data points have specific helpful properties.
[Back to top](#Introduction)
## 2.1 Standardization <a id='standardization'></a>
**Scaling (Standardization):** Change in the range of your data.
Differences in the scales across input variables may increase the difficulty of the problem being modeled. A model with large weight values is often unstable, meaning that it may suffer from poor performance during learning and sensitivity to input values resulting in higher generalization error.
This means that you're transforming your data so that it fits within a specific scale, like 0-100 or 0-1. You want to scale data when you're using methods based on measures of how far apart data points are, like support vector machines (SVM) or k-nearest neighbors (KNN). With these algorithms, a change of "1" in any numeric feature is given the same importance.
For example, you might be looking at the prices of some products in both Yen and US Dollars. One US Dollar is worth about 100 Yen, but if you don't scale your prices, methods like SVM or KNN will consider a difference in price of 1 Yen as important as a difference of 1 US Dollar! This clearly doesn't fit with our intuitions of the world. With currency, you can convert between currencies. But what about if you're looking at something like height and weight? It's not entirely clear how many pounds should equal one inch (or how many kilograms should equal one meter).
By scaling your variables, you can help compare different variables on equal footing.
Standardization is a scaling technique where the values are centered around the mean with a unit standard deviation. This means that the mean of the attribute becomes zero and the resultant distribution has a unit standard deviation.
Here’s the formula for standardization:

- Mu is the mean of the feature values and
- Sigma is the standard deviation of the feature values. Note that in this case, the values are not restricted to a particular range.
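As a quick sketch, the formula above can be applied directly with NumPy (on an arbitrary example column):

```python
import numpy as np

# Standardization: X' = (X - mu) / sigma
x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])

mu = x.mean()
sigma = x.std()
x_standardized = (x - mu) / sigma

# The result is centered at zero with unit standard deviation,
# but it is not bounded to any particular range
print(x_standardized.mean())
print(x_standardized.std())
```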
[Back to top](#Introduction)
```
# generate 1000 data points randomly drawn from an exponential distribution
original_data = np.random.exponential(size=1000)
# mix-max scale the data between 0 and 1
scaled_data = minmax_scaling(original_data, columns=[0])
# plot both together to compare
fig, ax = plt.subplots(1,2)
sns.distplot(original_data, ax=ax[0])
ax[0].set_title("Original Data")
sns.distplot(scaled_data, ax=ax[1])
ax[1].set_title("Scaled data")
```
## 2.2 Normalization <a id='normalization'></a>
**Normalization:** Change in the shape of the distribution of data.
Normalization scales each input variable separately to the range 0-1, which is the range for floating-point values where we have the most precision. Normalization requires that you know or are able to accurately estimate the minimum and maximum observable values. You may be able to estimate these values from your available data.
Scaling just changes the range of your data. Normalization is a more radical transformation. The point of normalization is to change your observations so that they can be described as a normal distribution.
Normal distribution: Also known as the "bell curve", this is a specific statistical distribution where roughly equal numbers of observations fall above and below the mean, the mean and the median are the same, and there are more observations closer to the mean. The normal distribution is also known as the Gaussian distribution.
In general, you'll normalize your data if you're going to be using a machine learning or statistics technique that assumes your data is normally distributed. Some examples of these include linear discriminant analysis (LDA) and Gaussian naive Bayes. (Pro tip: any method with "Gaussian" in the name probably assumes normality.)
Normalization is a scaling technique in which values are shifted and rescaled so that they end up ranging between 0 and 1. It is also known as Min-Max scaling.
Here’s the formula for normalization:

Here, Xmax and Xmin are the maximum and the minimum values of the feature respectively.
- When the value of X is the minimum value in the column, the numerator will be 0, and hence X’ is 0
- On the other hand, when the value of X is the maximum value in the column, the numerator is equal to the denominator and thus the value of X’ is 1
- If the value of X is between the minimum and the maximum value, then the value of X’ is between 0 and 1
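A minimal check of these three cases with NumPy, on an arbitrary example column:

```python
import numpy as np

# Min-max scaling: X' = (X - Xmin) / (Xmax - Xmin)
x = np.array([15.0, 30.0, 45.0, 60.0])

x_min, x_max = x.min(), x.max()
x_prime = (x - x_min) / (x_max - x_min)

# The minimum maps to 0, the maximum to 1, everything else lands in between
print(x_prime)
```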
**PS:-** The method we're using to normalize here is called the Box-Cox Transformation.
Now, the big question in your mind must be when should we use normalization and when should we use standardization? Let’s find out!
[Back to top](#Introduction)
```
# normalize the exponential data with boxcox
normalized_data = stats.boxcox(original_data)
# plot both together to compare
fig, ax=plt.subplots(1,2)
sns.distplot(original_data, ax=ax[0])
ax[0].set_title("Original Data")
sns.distplot(normalized_data[0], ax=ax[1])
ax[1].set_title("Normalized data")
```
## 2.3 The Big Question – Normalize or Standardize? <a id='the_big_question'></a>
Normalization vs. standardization is an eternal question among machine learning newcomers. Let me elaborate on the answer in this section.
- Normalization is good to use when you know that the distribution of your data does not follow a Gaussian distribution. This can be useful in algorithms that do not assume any distribution of the data like K-Nearest Neighbors and Neural Networks.
- Standardization, on the other hand, can be helpful in cases where the data follows a Gaussian distribution. However, this does not have to be necessarily true. Also, unlike normalization, standardization does not have a bounding range. So, even if you have outliers in your data, they will not be affected by standardization.
However, at the end of the day, the choice of using normalization or standardization will depend on your problem and the machine learning algorithm you are using. There is no hard and fast rule to tell you when to normalize or standardize your data. You can always start by fitting your model to raw, normalized and standardized data and compare the performance for best results.
It is a good practice to fit the scaler on the training data and then use it to transform the testing data. This would avoid any data leakage during the model testing process. Also, the scaling of target values is generally not required.
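A minimal sketch of this fit-on-train principle, using manually computed statistics in place of sklearn's `scaler.fit(train)` / `scaler.transform(test)` pattern (the data here is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(50, 10, 1000)

# Split into train and test
train, test = data[:800], data[800:]

# Fit the scaler's statistics on the TRAINING data only...
mu, sigma = train.mean(), train.std()

# ...then apply those same statistics to both splits
train_scaled = (train - mu) / sigma
test_scaled = (test - mu) / sigma   # no peeking at the test statistics

# The test split's mean and std will be close to, but not exactly, 0 and 1
print(test_scaled.mean(), test_scaled.std())
```

Fitting a second scaler on the test split instead would leak information about the test distribution into preprocessing, which is exactly what this practice avoids.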
[Back to top](#Introduction)
## 2.4 Implementation <a id='implementation'></a>
This is all good in theory, but how do we implement it in real life? The sklearn library has various modules in its preprocessing section which implement these in different ways. The four that are most widely used, and that we're going to implement here, are:
- **MinMaxScaler:** The MinMaxScaler transforms features by scaling each feature to a given range. This range can be set by specifying the feature_range parameter (default at (0,1)). This scaler works better for cases where the distribution is not Gaussian or the standard deviation is very small. However, it is sensitive to outliers, so if there are outliers in the data, you might want to consider another scaler.
> x_scaled = (x-min(x)) / (max(x)–min(x))
- **StandardScaler:** Sklearn's main scaler, the StandardScaler, uses a strict definition of standardization to standardize data. It centers and scales the data using the following formula, where u is the mean and s is the standard deviation.
> x_scaled = (x - u) / s
- **RobustScaler:** If your data contains many outliers, scaling using the mean and standard deviation of the data is likely to not work very well. In these cases, you can use the RobustScaler. It removes the median and scales the data according to the quantile range. The exact formula of the RobustScaler is not specified by the documentation. By default, the scaler uses the Inter Quartile Range (IQR), which is the range between the 1st quartile and the 3rd quartile. The quantile range can be manually set by specifying the quantile_range parameter when initiating a new instance of the RobustScaler.
- **Normalizer:**
- **‘l1’:** The l1 norm uses the sum of the absolute values, and thus gives equal penalty to all parameters, enforcing sparsity.
> x_normalized = x / sum(abs(i) for i in X)
- **‘l2’:** The l2 norm uses the square root of the sum of all the squared values. This creates smoothness and rotational invariance. Some models, like PCA, assume rotational invariance, and so l2 will perform better.
> x_normalized = x / sqrt(sum((i\**2) for i in X))
**`TLDR`**
- Use MinMaxScaler as your default
- Use RobustScaler if you have outliers and can handle a larger range
- Use StandardScaler if you need normalized features
- Use Normalizer sparingly - it normalizes rows, not columns
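The row-wise behaviour of the two norms can be sketched on a single sample (a made-up two-element row):

```python
import numpy as np

# Normalizer works on each sample (row), not on each column
row = np.array([3.0, 4.0])

l1 = row / np.sum(np.abs(row))          # 'l1': absolute values sum to 1
l2 = row / np.sqrt(np.sum(row ** 2))    # 'l2': the row becomes a unit vector

print(l1)  # approximately [0.43 0.57]
print(l2)  # [0.6 0.8]
```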
[Back to top](#Introduction)
### 2.4.1 Original Distributions <a id='original_distributions'></a>
Let's make several types of random distributions. We're doing this because real-world data is not necessarily normally (Gaussian) distributed. Each type of scaling may have a different effect depending on the type of distribution, so we take five different types of distributions as examples here.
- **Beta:** The Beta distribution is a probability distribution on probabilities.
- **Exponential:** The exponential distribution is a probability distribution which represents the time between events in a Poisson process.
- **Normal (Platykurtic):** The term "platykurtic" refers to a statistical distribution in which the excess kurtosis value is negative. For this reason, a platykurtic distribution will have thinner tails than a normal distribution, resulting in fewer extreme positive or negative events.
- **Normal (Leptokurtic):** Leptokurtic distributions are statistical distributions with kurtosis over three. It is one of three major categories found in kurtosis analysis.
- **Bimodal:** The bimodal distribution has two peaks.
[Back to top](#Introduction)
```
#create columns of various distributions
df = pd.DataFrame({
'beta': np.random.beta(5, 1, 1000) * 60, # beta
'exponential': np.random.exponential(10, 1000), # exponential
'normal_p': np.random.normal(10, 2, 1000), # normal platykurtic
'normal_l': np.random.normal(10, 10, 1000), # normal leptokurtic
})
# make bimodal distribution
first_half = np.random.normal(20, 3, 500)
second_half = np.random.normal(-20, 3, 500)
bimodal = np.concatenate([first_half, second_half])
df['bimodal'] = bimodal
# create list of column names to use later
col_names = list(df.columns)
```
After defining the distributions, let's visualize them.
```
# plot original distribution plot
fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8))
ax1.set_title('Original Distributions')
sns.kdeplot(df['beta'], ax=ax1)
sns.kdeplot(df['exponential'], ax=ax1)
sns.kdeplot(df['normal_p'], ax=ax1)
sns.kdeplot(df['normal_l'], ax=ax1)
sns.kdeplot(df['bimodal'], ax=ax1);
df.describe()
df.plot()
```
As we can clearly see from the statistics and the plots, all values are in the same ballpark. But what happens if we disturb this by adding a feature with much larger values?
### 2.4.2 Adding a Feature with Much Larger Values <a id='larger_values'></a>
This feature could be home prices, for example.
[Back to Top](#Introduction)
```
normal_big = np.random.normal(1000000, 10000, (1000,1)) # normal distribution of large values
df['normal_big'] = normal_big
col_names.append('normal_big')
df['normal_big'].plot(kind='kde')
df.normal_big.mean()
```
We've got a normalish distribution with a mean near 1,000,000. But if we put this on the same plot as the original distributions, you can't even see the earlier columns.
```
# plot original distribution plot with larger value feature
fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8))
ax1.set_title('Original Distributions')
sns.kdeplot(df['beta'], ax=ax1)
sns.kdeplot(df['exponential'], ax=ax1)
sns.kdeplot(df['normal_p'], ax=ax1)
sns.kdeplot(df['normal_l'], ax=ax1)
sns.kdeplot(df['bimodal'], ax=ax1);
sns.kdeplot(df['normal_big'], ax=ax1);
df.describe()
```
The new, high-value distribution is way to the right. And here's a plot of the values.
```
df.plot()
```
### 2.4.3 MinMaxScaler <a id='min_max_scaler'></a>
MinMaxScaler subtracts the column minimum from each value and then divides by the range (the column maximum minus the column minimum).
[Back to Top](#Introduction)
```
mm_scaler = preprocessing.MinMaxScaler()
df_mm = mm_scaler.fit_transform(df)
df_mm = pd.DataFrame(df_mm, columns=col_names)
fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8))
ax1.set_title('After MinMaxScaler')
sns.kdeplot(df_mm['beta'], ax=ax1)
sns.kdeplot(df_mm['exponential'], ax=ax1)
sns.kdeplot(df_mm['normal_p'], ax=ax1)
sns.kdeplot(df_mm['normal_l'], ax=ax1)
sns.kdeplot(df_mm['bimodal'], ax=ax1)
sns.kdeplot(df_mm['normal_big'], ax=ax1);
df_mm.describe()
```
Notice how the shape of each distribution remains the same, but now the values are between 0 and 1. Our feature with much larger values was brought into scale with our other features.
### 2.4.4 StandardScaler <a id='standard_scaler'></a>
StandardScaler scales each column to have zero mean and unit variance.
[Back to Top](#Introduction)
```
s_scaler = preprocessing.StandardScaler()
df_s = s_scaler.fit_transform(df)
df_s = pd.DataFrame(df_s, columns=col_names)
fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8))
ax1.set_title('After StandardScaler')
sns.kdeplot(df_s['beta'], ax=ax1)
sns.kdeplot(df_s['exponential'], ax=ax1)
sns.kdeplot(df_s['normal_p'], ax=ax1)
sns.kdeplot(df_s['normal_l'], ax=ax1)
sns.kdeplot(df_s['bimodal'], ax=ax1)
sns.kdeplot(df_s['normal_big'], ax=ax1);
```
You can see that all features now have 0 mean.
```
df_s.describe()
```
### 2.4.5 RobustScaler <a id='robust_scaler'></a>
RobustScaler subtracts the column median and divides by the interquartile range.
[Back to Top](#Introduction)
```
r_scaler = preprocessing.RobustScaler()
df_r = r_scaler.fit_transform(df)
df_r = pd.DataFrame(df_r, columns=col_names)
fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8))
ax1.set_title('After RobustScaler')
sns.kdeplot(df_r['beta'], ax=ax1)
sns.kdeplot(df_r['exponential'], ax=ax1)
sns.kdeplot(df_r['normal_p'], ax=ax1)
sns.kdeplot(df_r['normal_l'], ax=ax1)
sns.kdeplot(df_r['bimodal'], ax=ax1)
sns.kdeplot(df_r['normal_big'], ax=ax1);
df_r.describe()
```
Although the range of values for each feature is much smaller than for the original features, it's larger and varies more than for MinMaxScaler. The bimodal distribution values are now compressed into two small groups. Standard and RobustScalers have pretty much the same ranges.
### 2.4.6 Normalizer <a id='normalizer'></a>
Note that Normalizer operates on the rows, not the columns. It applies l2 normalization by default.
[Back to Top](#Introduction)
```
n_scaler = preprocessing.Normalizer()
df_n = n_scaler.fit_transform(df)
df_n = pd.DataFrame(df_n, columns=col_names)
fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8))
ax1.set_title('After Normalizer')
sns.kdeplot(df_n['beta'], ax=ax1)
sns.kdeplot(df_n['exponential'], ax=ax1)
sns.kdeplot(df_n['normal_p'], ax=ax1)
sns.kdeplot(df_n['normal_l'], ax=ax1)
sns.kdeplot(df_n['bimodal'], ax=ax1)
sns.kdeplot(df_n['normal_big'], ax=ax1);
df_n.describe()
```
Normalizer also moved the features to similar scales. Notice that the range for our much larger feature's values is now extremely small and clustered around .9999999999.
### 2.4.7 Combined Plot <a id='combined_plot'></a>
Let's look at our original and transformed distributions together. We'll exclude Normalizer because you generally want to transform your features, not your samples.
[Back to Top](#Introduction)
```
# Combined plot.
fig, (ax0, ax1, ax2, ax3) = plt.subplots(ncols=4, figsize=(20, 8))
ax0.set_title('Original Distributions')
sns.kdeplot(df['beta'], ax=ax0)
sns.kdeplot(df['exponential'], ax=ax0)
sns.kdeplot(df['normal_p'], ax=ax0)
sns.kdeplot(df['normal_l'], ax=ax0)
sns.kdeplot(df['bimodal'], ax=ax0)
sns.kdeplot(df['normal_big'], ax=ax0);
ax1.set_title('After MinMaxScaler')
sns.kdeplot(df_mm['beta'], ax=ax1)
sns.kdeplot(df_mm['exponential'], ax=ax1)
sns.kdeplot(df_mm['normal_p'], ax=ax1)
sns.kdeplot(df_mm['normal_l'], ax=ax1)
sns.kdeplot(df_mm['bimodal'], ax=ax1)
sns.kdeplot(df_mm['normal_big'], ax=ax1);
ax2.set_title('After RobustScaler')
sns.kdeplot(df_r['beta'], ax=ax2)
sns.kdeplot(df_r['exponential'], ax=ax2)
sns.kdeplot(df_r['normal_p'], ax=ax2)
sns.kdeplot(df_r['normal_l'], ax=ax2)
sns.kdeplot(df_r['bimodal'], ax=ax2)
sns.kdeplot(df_r['normal_big'], ax=ax2);
ax3.set_title('After StandardScaler')
sns.kdeplot(df_s['beta'], ax=ax3)
sns.kdeplot(df_s['exponential'], ax=ax3)
sns.kdeplot(df_s['normal_p'], ax=ax3)
sns.kdeplot(df_s['normal_l'], ax=ax3)
sns.kdeplot(df_s['bimodal'], ax=ax3)
sns.kdeplot(df_s['normal_big'], ax=ax3);
```
You can see that after any transformation the distributions are on a similar scale. Also notice that MinMaxScaler doesn't distort the distances between the values in each feature.
# <p style="text-align: center;">Conclusion</p><a id='Conclusion'></a>
We have used various data scaling and preprocessing techniques in this notebook, as listed below:
- Use MinMaxScaler as your default
- Use RobustScaler if you have outliers and can handle a larger range
- Use StandardScaler if you need normalized features
- Use Normalizer sparingly - it normalizes rows, not columns
[Back to top](#Introduction)
# <p style="text-align: center;">Contribution</p><a id='Contribution'></a>
This was a fun project in which we explored the ideas of data cleaning and data preprocessing. We took inspiration from the Kaggle learning course and created our own notebook, enhancing the same idea and supplementing it with contributions from our experiences and past projects.
- Code by self: 65%
- Code from external sources: 35%
[Back to top](#Introduction)
# <p style="text-align: center;">Citation</p><a id='Citation'></a>
- https://www.kaggle.com/alexisbcook/scaling-and-normalization
- https://scikit-learn.org/stable/modules/preprocessing.html
- https://www.analyticsvidhya.com/blog/2020/04/feature-scaling-machine-learning-normalization-standardization/
- https://kharshit.github.io/blog/2018/03/23/scaling-vs-normalization
- https://www.kaggle.com/discdiver/guide-to-scaling-and-standardizing
- https://docs.google.com/spreadsheets/d/1woVi7wq13628HJ-tN6ApaRGVZ85OdmHsDBKLAf5ylaQ/edit#gid=0
- https://towardsdatascience.com/preprocessing-with-sklearn-a-complete-and-comprehensive-guide-670cb98fcfb9
- https://www.kaggle.com/rpsuraj/outlier-detection-techniques-simplified?select=insurance.csv
- https://statisticsbyjim.com/basics/remove-outliers/
- https://statisticsbyjim.com/basics/outliers/
# <p style="text-align: center;">License</p><a id='License'></a>
Copyright (c) 2020 Manali Sharma, Rushabh Nisher
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
[Back to top](#Introduction)
# Modeling the global energy budget
## Introducing the zero-dimensional Energy Balance Model
____________
<a id='section1'></a>
## 1. Recap of the global energy budget
____________
Let's look again at the observations:

____________
<a id='section1'></a>
## 2. Tuning radiative fluxes to the observations
____________
### Recap of our simple greenhouse model
Last class we introduced a very simple model for the **OLR** or Outgoing Longwave Radiation to space:
$$ \text{OLR} = \tau \sigma T_s^4 $$
where $\tau$ is the **transmissivity** of the atmosphere, a number less than 1 that represents the greenhouse effect of Earth's atmosphere.
We also tuned this model to the observations by choosing $ \tau \approx 0.61$.
More precisely:
```
OLR_obs = 238.5 # in W/m2
sigma = 5.67E-8 # S-B constant
Ts_obs = 288. # global average surface temperature
tau = OLR_obs / sigma / Ts_obs**4 # solve for tuned value of transmissivity
print(tau)
```
Let's now deal with the shortwave (solar) side of the energy budget.
### Absorbed Shortwave Radiation (ASR) and Planetary Albedo
Let's define a few terms.
#### Global mean insolation
From the observations, the area-averaged incoming solar radiation, or **insolation**, is 341.3 W m$^{-2}$.
Let's denote this quantity by $Q$.
```
Q = 341.3 # the insolation
```
#### Planetary albedo
Some of the incoming radiation is not absorbed at all but simply reflected back to space. Let's call this quantity $F_{reflected}$
From observations we have:
```
Freflected = 101.9 # reflected shortwave flux in W/m2
```
The **planetary albedo** is the fraction of $Q$ that is reflected.
We will denote the planetary albedo by $\alpha$.
From the observations:
```
alpha = Freflected / Q
print(alpha)
```
That is, about 30% of the incoming radiation is reflected back to space.
#### Absorbed Shortwave Radiation
The **Absorbed Shortwave Radiation** or ASR is the part of the incoming sunlight that is *not* reflected back to space, i.e. that part that is absorbed somewhere within the Earth system.
Mathematically we write
$$ \text{ASR} = Q - F_{reflected}\\
= Q-\alpha Q \\
= (1-\alpha) Q $$
From the observations:
```
ASRobserved = Q - Freflected
print(ASRobserved)
```
As we noted last time, this number is *just slightly greater* than the observed OLR of 238.5 W m$^{-2}$.
____________
<a id='section3'></a>
## 3. Equilibrium temperature
____________
*This is one of the central concepts in climate modeling.*
The Earth system is in **energy balance** when energy in = energy out, i.e., when
$$ \text{ASR} = \text{OLR} $$
We want to know:
- What surface temperature do we need to have this balance?
- By how much would the temperature change in response to other changes in Earth system?
- Changes in greenhouse gases
- Changes in cloudiness
- etc.
With our simple greenhouse model, we can get an **exact solution** for the equilibrium temperature.
First, write down our statement of energy balance:
$$ (1-\alpha) Q = \tau \sigma T_s^4 $$
Rearrange to solve for $T_s$:
$$ T_s^4 = \frac{(1-\alpha) Q}{\tau \sigma} $$
and take the fourth root, denoting our **equilibrium temperature** as $T_{eq}$:
$$ T_{eq} = \left( \frac{(1-\alpha) Q}{\tau \sigma} \right)^\frac{1}{4} $$
Plugging the observed values back in, we compute:
```
# define a reusable function
def equilibrium_temperature(alpha, Q, tau):
return ((1-alpha) *Q / (tau * sigma))**(1/4)
# call the function, passing arguments, and assign the return value to a new variable
Teq_obs = equilibrium_temperature(alpha, Q, tau)
print(Teq_obs)
```
And this equilibrium temperature is *just slightly warmer* than 288 K.
____________
## 4. A climate change scenario
____________
Suppose that, due to global warming (changes in atmospheric composition and subsequent changes in cloudiness):
- The longwave transmissivity decreases to $\tau = 0.57$
- The planetary albedo increases to $\alpha = 0.32$
What is the ***new equilibrium temperature***?
For this very simple model, we can work out the answer exactly:
```
Teq_new = equilibrium_temperature(0.32, Q, 0.57)
# an example of formatted print output, limiting to two or one decimal places
print('The new equilibrium temperature is {:.2f} K.'.format(Teq_new))
print('The equilibrium temperature increased by about {:.1f} K.'.format(Teq_new-Teq_obs))
```
Most climate models are more complicated mathematically, and solving directly for the equilibrium temperature will not be possible!
Instead, we will be able to use the model to calculate the terms in the energy budget (ASR and OLR).
### Python exercise
- Write **two** Python functions to calculate ASR and OLR for *arbitrary parameter values*.
- Verify the following:
- With the new parameter values but the old temperature $T = 288$ K, is ASR greater or lesser than OLR? (Use formatted strings to print your results.)
- Is the Earth gaining or losing energy?
- How does your answer change if $T = 295$ K (or any other temperature greater than 291 K)?
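One possible way to write these two functions is sketched below. The signatures are chosen so the keyword arguments match how the functions are called later in this notebook (`ASR(alpha=...)`, `OLR(T, tau=...)`); the default parameter values are the tuned observational values from earlier, and are an assumption of this sketch.

```python
sigma = 5.67E-8   # Stefan-Boltzmann constant in W/m2/K4
Q = 341.3         # global mean insolation in W/m2

def ASR(Q=Q, alpha=0.299):
    """Absorbed shortwave radiation: the part of the insolation not reflected."""
    return (1 - alpha) * Q

def OLR(T, tau=0.61):
    """Outgoing longwave radiation for surface temperature T (in kelvin)."""
    return tau * sigma * T**4

# With the new parameter values but the old temperature T = 288 K,
# ASR exceeds OLR, so the Earth system is gaining energy:
print('ASR = {:.1f} W/m2'.format(ASR(alpha=0.32)))
print('OLR = {:.1f} W/m2'.format(OLR(288., tau=0.57)))
```

Repeating the comparison with T = 295 K flips the sign: OLR then exceeds ASR and the system is losing energy, so the equilibrium lies somewhere in between.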
____________
## 5. A time-dependent Energy Balance Model
____________
The above exercise shows us that if some properties of the climate system change in such a way that the **equilibrium temperature goes up**, then the Earth system *receives more energy from the sun than it is losing to space*. The system is **no longer in energy balance**.
The temperature must then increase to get back into balance. The increase will not happen all at once! It will take time for energy to accumulate in the climate system. We want to model this **time-dependent adjustment** of the system.
In fact almost all climate models are **time-dependent**, meaning the model calculates **time derivatives** (rates of change) of climate variables.
### An energy balance **equation**
We will write the **total energy budget** of the Earth system as
\begin{align}
\frac{dE}{dt} &= \text{net energy flux in to system} \\
&= \text{flux in – flux out} \\
&= \text{ASR} - \text{OLR}
\end{align}
where $E$ is the **enthalpy** or **heat content** of the total system.
We will express the budget per unit surface area, so each term above has units W m$^{-2}$
Note: any **internal exchanges** of energy between different reservoirs (e.g. between ocean, land, ice, atmosphere) do not appear in this budget – because $E$ is the **sum of all reservoirs**.
Also note: **This is a generically true statement.** We have just defined some terms, and made the (very good) assumption that the only significant energy sources are radiative exchanges with space.
**This equation is the starting point for EVERY CLIMATE MODEL.**
But so far, we don’t actually have a MODEL. We just have a statement of a budget. To use this budget to make a model, we need to relate terms in the budget to state variables of the atmosphere-ocean system.
For now, the state variable we are most interested in is **temperature** – because it is directly connected to the physics of each term above.
### An energy balance **model**
If we now suppose that
$$ E = C T_s $$
where $T_s$ is the global mean surface temperature, and $C$ is a constant – the **effective heat capacity** of the atmosphere-ocean column.
Then our budget equation becomes:
$$ C \frac{dT_s}{dt} = \text{ASR} - \text{OLR} $$
where
- $C$ is the **heat capacity** of Earth system, in units of J m$^{-2}$ K$^{-1}$.
- $\frac{dT_s}{dt}$ is the rate of change of global average surface temperature.
By adopting this equation, we are assuming that the energy content of the Earth system (atmosphere, ocean, ice, etc.) is *proportional to surface temperature*.
Important things to think about:
- Why is this a sensible assumption?
- What determines the heat capacity $C$?
- What are some limitations of this assumption?
For our purposes here we are going to use a value of $C$ equivalent to heating 100 meters of water:
$$C = c_w \rho_w H$$
where
$c_w = 4 \times 10^3$ J kg$^{-1}$ $^\circ$C$^{-1}$ is the specific heat of water,
$\rho_w = 10^3$ kg m$^{-3}$ is the density of water, and
$H$ is an effective depth of water that is heated or cooled.
```
c_w = 4E3 # Specific heat of water in J/kg/K
rho_w = 1E3 # Density of water in kg/m3
H = 100. # Depth of water in m
C = c_w * rho_w * H # Heat capacity of the model
print('The effective heat capacity is {:.1e} J/m2/K'.format(C))
```
### Solving the energy balance model
This is a first-order ordinary differential equation (ODE) for $T_s$ as a function of time. It is also **our very first climate model!**
To solve it (i.e. see how $T_s$ evolves from some specified initial condition) we have two choices:
1. Solve it analytically
2. Solve it numerically
Option 1 (analytical) will usually not be possible because the equations will typically be too complex and non-linear. This is why computers are our best friends in the world of climate modeling.
HOWEVER it is often useful and instructive to simplify a model down to something that is analytically solvable when possible. Why? Two reasons:
1. Analysis will often yield a deeper understanding of the behavior of the system
2. It gives us a benchmark against which to test the results of our numerical solutions.
____________
## 6. Representing time derivatives on a computer
____________
Recall that the derivative is the **instantaneous rate of change**. It is defined as
$$ \frac{dT}{dt} = \lim_{\Delta t\rightarrow 0} \frac{\Delta T}{\Delta t}$$
- **On the computer there is no such thing as an instantaneous change.**
- We are always dealing with *discrete quantities*.
- So we approximate the derivative with $\Delta T/ \Delta t$.
- So long as we take the time interval $\Delta t$ "small enough", the approximation is valid and useful.
- (The meaning of "small enough" varies widely in practice. Let's not talk about it now)
So we write our model as
$$ C \frac{\Delta T}{\Delta t} \approx \text{ASR} - \text{OLR}$$
where $\Delta T$ is the **change in temperature predicted by our model** over a short time interval $\Delta t$.
We can now use this to **make a prediction**:
Given a current temperature $T_1$ at time $t_1$, what is the temperature $T_2$ at a future time $t_2$?
We can write
$$ \Delta T = T_2-T_1 $$
$$ \Delta t = t_2-t_1 $$
and so our model says
$$ C \frac{T_2-T_1}{\Delta t} = \text{ASR} - \text{OLR} $$
Which we can rearrange to **solve for the future temperature**:
$$ T_2 = T_1 + \frac{\Delta t}{C} \left( \text{ASR} - \text{OLR}(T_1) \right) $$
We now have a formula with which to make our prediction!
Notice that we have written the OLR as a *function of temperature*. We will use the current temperature $T_1$ to compute the OLR, and use that OLR to determine the future temperature.
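The prediction step can be sketched concretely. The definitions below are a minimal sketch, not the notebook's own cells: the default values `alpha=0.32` and `tau=0.57` match the calls in the next section, while the insolation $Q = 341.3$ W m$^{-2}$ and the Stefan-Boltzmann constant are assumptions consistent with standard zero-dimensional energy balance models.

```python
import numpy as np

sigma = 5.67E-8   # Stefan-Boltzmann constant in W/m2/K4
Q = 341.3         # assumed global mean insolation in W/m2

def ASR(alpha=0.32):
    """Absorbed shortwave radiation in W/m2 (temperature-independent here)."""
    return (1 - alpha) * Q

def OLR(T, tau=0.57):
    """Outgoing longwave radiation in W/m2, with transmissivity tau."""
    return tau * sigma * T**4

# One application of the prediction formula T2 = T1 + dt/C * (ASR - OLR(T1))
c_w, rho_w, H = 4E3, 1E3, 100.
C = c_w * rho_w * H                 # heat capacity in J/m2/K
dt = 60. * 60. * 24. * 365.         # one year in seconds
T1 = 288.
T2 = T1 + dt / C * (ASR() - OLR(T1))
print(T2)
```

With these assumed values, one timestep warms the model by a bit less than one degree.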
____________
## 7. Numerical solution of the Energy Balance Model
____________
The quantity $\Delta t$ is called a **timestep**. It is the smallest time interval represented in our model.
Here we're going to use a timestep of 1 year:
```
dt = 60. * 60. * 24. * 365. # one year expressed in seconds
# Try a single timestep, assuming we have working functions for ASR and OLR
T1 = 288.
T2 = T1 + dt / C * ( ASR(alpha=0.32) - OLR(T1, tau=0.57) )
# above I am passing arguments using keywords so that I don't have to remember the order
print(T2)
```
What happened? Why?
Try another timestep
```
T1 = T2
T2 = T1 + dt / C * ( ASR(alpha=0.32) - OLR(T1, tau=0.57) )
print(T2)
```
Warmed up again, but by a smaller amount.
But this is tedious typing. Time to **define a function** to make things easier and more reliable:
```
def step_forward(T):
return T + dt / C * ( ASR(alpha=0.32) - OLR(T, tau=0.57) )
```
Try it out with an arbitrary temperature:
```
step_forward(300.)
```
Notice that our function calls other functions and variables we have already defined.
***
#### Python tip
Functions can access variables and other functions defined outside of the function.
This is both very useful and occasionally confusing.
***
Now let's really harness the power of the computer by **making a loop** (and storing values in arrays).
### Python "For" Loops
Definite iteration loops are frequently referred to as **for** loops because `for` is the keyword used to introduce them in nearly all programming languages, including Python.
Definite iteration means that the number of repetitions is specified in advance. Later in the course we will introduce indefinite iteration, in which the code block executes until some condition is met.
```
# Iterate through a list of numbers and execute statements
for n in [0, 1, 2, 3, 4]:
print(n)
# Note the loop variable n takes on the value of the next element
# in the collection each time through the loop.
# Same thing, but use the built-in range() function
# range(<end>) returns an iterable that yields integers
# starting with 0, up to but not including <end>:
for n in range(5):
print(n)
```
### Numpy arrays
[NumPy](https://numpy.org/) is the fundamental package for scientific computing with Python.
The fundamental data structure of NumPy is an **n-dimensional array**. N-dimensional means it can be 1-dimensional, 2-dimensional, 3-dimensional, and so on.
As your programming skills grow, you will want to avoid for loops and instead use array programming to speed up your code. Let's not worry about that now.
To access numpy, we must first import it.
```
import numpy as np
```
The `linspace` function creates an array of numbers evenly spaced between the start and end points.
```
np.linspace(230,300,10)
```
An array can be a function argument.
```
OLR(np.linspace(230,300,10), tau=0.57)
```
The `zeros` function creates an array of zeros of the specified size.
```
np.zeros(10)
```
### Implementing For-loop and Array assignment in the Energy Balance Model
```
numsteps = 20
Tsteps = np.zeros(numsteps+1)
Years = np.zeros(numsteps+1)
Tsteps[0] = 288.
for n in range(numsteps):
Years[n+1] = n+1
#Here we are calling a function inside a loop
Tsteps[n+1] = step_forward( Tsteps[n] )
print(Tsteps)
```
What did we just do?
- Created an array of zeros
- Set the initial temperature to 288 K
- Repeated our time step 20 times
- Stored the results of each time step into the array
***
#### Python tip
Use square brackets [ ] to refer to elements of an array or list. Use round parentheses ( ) for function arguments.
***
### Plotting the result
Now let's draw a picture of our result!
```
# a special instruction for the Jupyter notebook
# Display all plots inline in the notebook
%matplotlib inline
# import the plotting package
import matplotlib.pyplot as plt
plt.plot(Years, Tsteps)
plt.xlabel('Years')
plt.ylabel('Global mean temperature (K)');
```
Note how the temperature *adjusts smoothly toward the equilibrium temperature*, that is, the temperature at which
ASR = OLR.
**If the planetary energy budget is out of balance, the temperature must change so that the OLR gets closer to the ASR!**
The adjustment is actually an *exponential decay* process: The rate of adjustment slows as the temperature approaches equilibrium.
The temperature gets very very close to equilibrium but never reaches it exactly.
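Because ASR does not depend on temperature in this model, the equilibrium temperature can be found analytically by setting ASR = OLR and solving for $T$, giving $T_{eq} = \left((1-\alpha) Q / (\tau \sigma)\right)^{1/4}$. A sketch, using the same `alpha` and `tau` defaults as the cells above; the insolation $Q = 341.3$ W m$^{-2}$ is an assumed standard value:

```python
sigma = 5.67E-8   # Stefan-Boltzmann constant in W/m2/K4
Q = 341.3         # assumed global mean insolation in W/m2
alpha, tau = 0.32, 0.57

# Equilibrium: (1 - alpha) * Q = tau * sigma * Teq**4
Teq = ((1 - alpha) * Q / (tau * sigma)) ** 0.25
print('Equilibrium temperature: {:.1f} K'.format(Teq))
```

This is the benchmark the numerical solution should decay toward: with these numbers it sits a few degrees above the 288 K starting temperature, consistent with the warming seen in the plot.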
***
#### Python tip
We can easily make simple graphs with the function `plt.plot(x,y)`, where `x` and `y` are arrays of the same size. But we must import it first.
This is actually not native Python, but uses a graphics library called [matplotlib](https://matplotlib.org). This is the workhorse of scientific plotting in Python, and we will be using it all the time!
Just about all of our notebooks will start with this:
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
```
***
____________
## 8. Summary
____________
- We looked at the flows of energy in and out of the Earth system.
- These are determined by radiation at the top of the Earth's atmosphere.
- Any imbalance between shortwave absorption (ASR) and longwave emission (OLR) drives a change in temperature
- Using this idea, we built a climate model!
- This **Zero-Dimensional Energy Balance Model** solves for the global, annual mean surface temperature $T_s$
- Two key assumptions:
- Energy content of the Earth system varies proportionally to $T_s$
- The OLR increases as $\tau \sigma T_s^4$ (our simple greenhouse model)
- Earth (or any planet) has a well-defined **equilibrium temperature** $T_{eq}$ at which ASR = OLR, because of the *temperature dependence of the outgoing longwave radiation*.
- If $T_s < T_{eq}$, the model will warm up.
- We can represent the continuous warming process on the computer using discrete timesteps.
- We can plot the result.
____________
## Credits
This notebook is part of [The Climate Laboratory](https://brian-rose.github.io/ClimateLaboratoryBook), an open-source textbook developed and maintained by [Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany. It has been modified by [Nicole Feldl](http://nicolefeldl.com), UC Santa Cruz.
It is licensed for free and open consumption under the
[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.
Development of these notes and the [climlab software](https://github.com/brian-rose/climlab) is partially supported by the National Science Foundation under award AGS-1455071 to Brian Rose. Any opinions, findings, conclusions or recommendations expressed here are mine and do not necessarily reflect the views of the National Science Foundation.
____________
```
%matplotlib inline
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=1
import os
import numpy as np
import torch
from object_pose_utils.utils import to_np, to_var
import matplotlib.pyplot as plt
import pylab
pylab.rcParams['figure.figsize'] = 20, 12
from mpl_toolkits.mplot3d import Axes3D # noqa: F401 unused import
import warnings
warnings.filterwarnings('ignore')
```
## Set location and object set for YCB Dataset
### YCB Object Indices
| Object Indices |[]()|[]()|
|---|---|---|
| __1.__ 002_master_chef_can | __8.__ 009_gelatin_box | __15.__ 035_power_drill |
| __2.__ 003_cracker_box | __9.__ 010_potted_meat_can | __16.__ 036_wood_block |
| __3.__ 004_sugar_box | __10.__ 011_banana | __17.__ 037_scissors |
| __4.__ 005_tomato_soup_can | __11.__ 019_pitcher_base | __18.__ 040_large_marker |
| __5.__ 006_mustard_bottle | __12.__ 021_bleach_cleanser | __19.__ 051_large_clamp |
| __6.__ 007_tuna_fish_can | __13.__ 024_bowl | __20.__ 052_extra_large_clamp |
| __7.__ 008_pudding_box | __14.__ 025_mug | __21.__ 061_foam_brick |
```
### Set this to the root of your YCB Dataset
dataset_root = '/media/DataDrive/ycb/YCB_Video_Dataset'
### If you want individual objects, change this to
### a list of the indices you want (see above).
object_list = list(range(1,22))
### Set this to the dataset subset you want
mode = 'test'
```
## Initialize YCB Dataset
```
from object_pose_utils.datasets.ycb_dataset import YcbDataset as YCBDataset
from object_pose_utils.datasets.image_processing import ImageNormalizer
from object_pose_utils.datasets.pose_dataset import OutputTypes as otypes
output_format = [otypes.OBJECT_LABEL,
otypes.QUATERNION,
otypes.TRANSLATION,
otypes.IMAGE_CROPPED,
otypes.DEPTH_POINTS_MASKED_AND_INDEXES,
]
dataset = YCBDataset(dataset_root, mode=mode,
object_list = object_list,
output_data = output_format,
resample_on_error = False,
add_syn_background = False,
add_syn_noise = False,
use_posecnn_data = True,
postprocessors = [ImageNormalizer()],
image_size = [640, 480], num_points=1000)
```
## Initialize Dense Fusion Pose Estimator
```
from dense_fusion.network import PoseNet, PoseNetGlobal, PoseRefineNet, PoseNetDropout
df_weights = '/home/bokorn/src/DenseFusion/trained_checkpoints/ycb/pose_model_26_0.012863246640872631.pth'
df_estimator = PoseNet(num_points = 1000, num_obj = 21)
df_estimator.load_state_dict(torch.load(df_weights, map_location=torch.device('cpu')))
df_estimator.cuda();
df_estimator.eval();
```
## Set Feature Comparison File Paths
```
### Set this to the root of your grid featurization
feature_root = '../weights/dense_fusion_features/'
### Set this to your comparison network checkpoint file path
comp_model_checkpoint = '../weights/feature_comparison_df.pth'
```
## Initialize the Feature Comparison Network
```
from se3_distributions.losses.loglik_loss import evaluateFeature
from se3_distributions.models.compare_networks import SigmoidCompareNet, SigmoidNet
from object_pose_utils.utils.interpolation import TetraInterpolation
tetra_interp = TetraInterpolation(2)
feature_key = 'feat_global'
feature_size = 1024
grid_vertices = torch.load(os.path.join(feature_root, 'grid',
'{}_vertices.pt'.format(dataset.classes[1])))
### Histogram Comparison
grid_features = {}
for object_id in object_list:
grid_features[object_id] = torch.load(os.path.join(feature_root, 'grid',
'{}_{}_features.pt'.format(feature_key, dataset.classes[object_id])))
comp_estimator = SigmoidCompareNet(feature_size, 21)
comp_estimator.load_state_dict(torch.load(comp_model_checkpoint))
comp_estimator.cuda();
comp_estimator.eval();
def histogram_comparison(img, points, choose, obj):
max_q, max_t, feat = evaluateDenseFusion(df_estimator, img, points, choose, obj)
lik_est = evaluateFeature(comp_estimator, obj, feat, grid_features)
lik_est = to_np(lik_est.flatten())
lik_est /= lik_est.sum()
return lik_est, max_q, max_t
```
## Sample Dataset and Estimate Likelihood Distribution
```
%matplotlib inline
from object_pose_utils.utils.display import torch2Img
from se3_distributions.utils.evaluation_utils import evaluateDenseFusion
index = np.random.randint(len(dataset))
obj, quat, trans, img, points, choose = dataset[index]
lik_est, q_est, t_est = histogram_comparison(img, points, choose, obj)
tetra_interp.setValues(lik_est)
lik_gt = tetra_interp.smooth(to_np(quat)).item()
lik_out = tetra_interp.smooth(q_est).item()
print('Likelihood of Ground Truth: {:0.5f}'.format(lik_gt))
print('Likelihood of Estimate: {:0.5f}'.format(lik_out))
plt.imshow(torch2Img(img, normalized=True))
plt.axis('off')
plt.show()
```
## Visualize Distribution
```
#%matplotlib notebook
from object_pose_utils.utils.display import scatterSO3, quats2Point
fig = plt.figure()
ax = fig.add_subplot(1,1,1, projection='3d')
scatterSO3(to_np(grid_vertices), lik_est, [to_np(quat)], ax=ax, alims = [0,1], s=10)
gt_pt = quats2Point([to_np(quat)])
est_pt = quats2Point([q_est])
ax.scatter(gt_pt[:,0], gt_pt[:,1], gt_pt[:,2], c='g',s=100, marker='x')
ax.scatter(est_pt[:,0], est_pt[:,1], est_pt[:,2], c='b',s=100, marker='+')
plt.show()
```
# The Ambulance Routing Problem
## Description
One potential application of reinforcement learning involves positioning a server or servers (in this case an ambulance) in an optimal way geographically to respond to incoming calls while minimizing the distance traveled by the servers. This is closely related to the [k-server problem](https://en.wikipedia.org/wiki/K-server_problem), where there are $k$ servers stationed in a space that must respond to requests arriving in that space in such a way as to minimize the total distance traveled.
The ambulance routing problem addresses this by modeling an environment where ambulances are stationed at locations, and calls come in that one of the ambulances must be sent to respond to. The goal of the agent is to minimize both the distance traveled by the ambulances between calls and the distance traveled to respond to a call, by optimally choosing the locations to station the ambulances. The ambulance environment has been implemented in two different ways: as a 1-dimensional number line $[0,1]$ along which ambulances are stationed and calls arrive, and as a graph with nodes where ambulances can be stationed and calls can arrive, and edges between the nodes that ambulances travel along.
### Line
`ambulance_metric.py` is a 1-dimensional reinforcement learning environment in the space $X = [0, 1]$. Each ambulance in the problem can be located anywhere in $X$, so the state space is $S = X^k$, where $k$ is the number of ambulances. The distance function is chosen by the user, who specifies what kind of norm to use. Calls for an ambulance can also arrive anywhere in $X$, and the nearest ambulance will respond to the call, leaving the locations of the other ambulances unchanged. Between calls the agent must choose a location to station each ambulance, with the goal of minimizing both the distance traveled between calls and to respond to a call.
The default distribution for call arrivals is $Beta(5, 2)$ over $[0,1]$, however any probability distribution defined over the interval $[0,1]$ is valid. The probability distribution can also change with each timestep.
For example, in a problem with two ambulances, imagine the ambulances are initially located at $0.4$ and $0.6$, and the distance function being used is the $\ell_1$ norm. The agent could choose to move the ambulances to $0.342$ and $0.887$. If a call arrived at $0.115$, ambulance 1, which was at $0.342$, would respond to that call, and the state at the end of the iteration would be ambulance 1 at $0.115$ and ambulance 2 at $0.887$. The agent could then choose new locations to move the ambulances to, and the cycle would repeat.
At the beginning of the iteration:
<div>
<img src="attachment:line1.jpg" width="400"/>
</div>
After the ambulances move to the locations specified by the agent:
<div>
<img src="attachment:line2.jpg" width="400"/>
</div>
After ambulance 1 responds to the call:
<div>
<img src="attachment:line3.jpg" width="395"/>
</div>
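The worked example above can be reproduced in a few lines, assuming the $\ell_1$ norm; `line_step` is a hypothetical helper written for illustration, not a function from `ambulance_metric.py`:

```python
import numpy as np

def line_step(locations, action, call):
    """One iteration of the 1-D ambulance environment under the l1 norm."""
    # ambulances move to the positions chosen by the agent
    state = np.array(action, dtype=float)
    # the nearest ambulance responds to the call; the others stay put
    nearest = np.argmin(np.abs(state - call))
    state[nearest] = call
    return state

# the two-ambulance example from the text
print(line_step([0.4, 0.6], action=[0.342, 0.887], call=0.115))
```

As in the text, ambulance 1 (moved to 0.342) is closest to the call at 0.115, so the end-of-iteration state is ambulance 1 at 0.115 and ambulance 2 at 0.887.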
### Graph
`ambulance_graph.py` is structured as a graph of nodes $V$ with edges between the nodes $E$. Each node represents a location where an ambulance could be stationed or a call could come in. The edges between nodes are undirected and have a weight representing the distance between those two nodes.
The nearest ambulance to a call is determined by computing the shortest path from each ambulance to the call, and choosing the ambulance with the minimum length path. The calls arrive according to a prespecified iid probability distribution. The default is for the probability of call arrivals to be evenly distributed over all the nodes; however, the user can also specify, for each node, the probability that a call arrives at that node. For example, in the following graph the default setting would be for each call to have a 0.25 probability of arriving at each node, but the user could instead specify that there is a 0.1 probability of a call at node 0, and a 0.3 probability of a call arriving at each of the other three nodes.
<div>
<img src="attachment:graph.jpg" width="600"/>
</div>
After each call comes in, the agent will choose where to move each ambulance in the graph. Every ambulance except the ambulance that moved to respond to the call will be at the same location where the agent moved it to on the previous iteration, and the ambulance that moved to respond to the call will be at the node where the call came in.
The graph environment is currently implemented using the [networkx package](https://networkx.org/documentation/stable/index.html).
## Model Assumptions
* New calls do not arrive while an ambulance is in transit
* There is no step for traveling to a hospital after responding to a call
## Dynamics
### State Space
#### Line
The state space for the line environment is $S = X^k$ where $X = [0, 1]$ and there are $k$ ambulances. Each ambulance can be located at any point on the line $X$.
#### Graph
The graph environment consists of nodes $V$ and edges between the nodes $E$, and each ambulance can be located at any node $v \in V$ (and multiple ambulances can be at the same node). The state space of this environment is $S = V^k$, where $k$ is the number of ambulances.
### Action space
#### Line
The agent chooses a location for each ambulance to travel to between calls. The location for each ambulance can be any point $t \in X$ where $X = [0, 1]$.
#### Graph
The agent chooses a node for each ambulance to travel to between calls. The location for any ambulance can be any node $v \in V$, so the action space $A$ will be $A = V^k$.
### Reward
The reward is $-1 \cdot (\alpha \cdot d(s, a) + (1 - \alpha) \cdot d(a, n))$ where $s$ is the previous state of the system, $a$ is the action chosen by the user, $n$ is the state of the system after the new call arrival, and $d$ is the distance function. In the case of the metric environment $d$ is the norm specified by the user, and in the graph environment $d$ is the shortest distance between two nodes. The goal of the agent is to maximize this reward, and because the reward is negative this implies getting the reward as close to $0$ as possible.
The $\alpha$ parameter allows the user to control the proportional difference in cost to move ambulances normally versus when responding to an emergency. In real world scenarios the distance traveled to respond to a call will likely be more costly than the distance traveled between calls because of the additional cost of someone having to wait a long time for an ambulance.
By collecting data on their past actions, call arrival locations, and associated rewards, an agent's goal is to learn how to most effectively position ambulances to respond to calls to minimize the distance the ambulances have to travel.
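The reward formula above can be sketched directly for the line environment; this `reward` helper is illustrative (not part of the package) and assumes the $\ell_1$ norm:

```python
import numpy as np

def reward(prev_state, action, new_state, alpha=0.25):
    """-1 * (alpha * d(s, a) + (1 - alpha) * d(a, n)) under the l1 norm."""
    move_cost = np.sum(np.abs(np.array(action) - np.array(prev_state)))
    respond_cost = np.sum(np.abs(np.array(new_state) - np.array(action)))
    return -(alpha * move_cost + (1 - alpha) * respond_cost)

# the two-ambulance line example: move from [0.4, 0.6] to [0.342, 0.887],
# then ambulance 1 responds to a call at 0.115
print(reward([0.4, 0.6], [0.342, 0.887], [0.115, 0.887], alpha=0.25))
```

A larger `alpha` would weight the repositioning distance more heavily relative to the response distance.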
### Transitions
Given an initial state at the start of the iteration $x$, an action chosen by the user $a$, and a call arrival $p$, the state at the end of the iteration will be
$\begin{align*}
x_i^{new} & = \begin{cases}
a_i \qquad & i \neq i^\star \\
p_h \qquad & i = i^\star
\end{cases} \\
\end{align*}$
for all ambulances $i \in [k]$, where $i^\star$ is the nearest ambulance to the call $p$ from the action $a$
$\begin{align*}
i^\star = \text{argmin}_{i \in [k]} |a_i - p|
\end{align*}$
## Environment
### Metric
`reset`
Returns the environment to its original state.
`step(action)`
Takes an action from the agent and returns the state of the system after the next arrival.
* `action`: a list with the location of each ambulance, where each location is a float between $0$ and $1$.
Ex. two ambulances at 0.572 and 0.473 would be `[0.572, 0.473]`
Returns:
* `state`: A list containing the locations of each ambulance
* `reward`: The reward associated with the most recent action and event
* `pContinue`:
* `info`: a dictionary containing the location where the most recent arrival occurred
- Ex. `{'arrival': 0.988}` if the most recent arrival was at 0.988
`render`
Renders an iteration by showing three pictures: where the ambulances are after moving to their action location, the location of the call arrival, and the locations of the ambulances after an ambulance moves to respond to the call.
Takes one parameter `mode`. When `mode = "rgb_array"` returns a tuple of three rgb_arrays representing the three different images that need to be rendered.
`close`
Currently unimplemented
Init parameters for the line ambulance environment, passed in using a dictionary named CONFIG
* `epLen`: the length of each episode, i.e. how many calls will come in before the episode terminates.
* `arrival_dist(timestep)`: a function that returns a sample from a probability distribution. The probability distribution can change with each timestep.
* `alpha`: a float $\in [0,1]$ that controls the proportional difference between the cost to move ambulances in between calls and the cost to move an ambulance to respond to a call.
- `alpha = 0`: no cost to move between calls
- `alpha = 1`: no cost to move to respond to a call
* `starting_state`: a list of floats $\in (0,1)$ the length of the number of ambulances. Each entry in the list corresponds to the starting location for that ambulance.
* `num_ambulance`: integer representing the number of ambulances in the system
* `norm`: an integer representing the norm to use to calculate distances; in most cases it should probably be set to 1 to be the $\ell_1$ norm
### Graph
`reset`
Returns the environment to its original state.
`step(action)`
Takes an action from the agent and returns the state of the system after the next arrival.
* `action`: a list with the location of each ambulance
Ex. two ambulances at nodes 0 and 6 would be `[0, 6]`
Returns:
* `state`: A list containing the locations of each ambulance
* `reward`: The reward associated with the most recent action and event
* `pContinue`:
* `info`: a dictionary containing the node where the most recent arrival occurred
- Ex. `{'arrival': 1}` if the most recent arrival was at node 1
`render`
Currently unimplemented
`close`
Currently unimplemented
Init parameters for the graph ambulance environment, passed in using a dictionary named CONFIG
* `epLen`: the length of each episode, i.e. how many calls will come in before the episode terminates.
* `arrival_dist(timestep, num_nodes, [arrival_data])`: a function of the timestep and the number of nodes in the graph (and a list of arrival data if `from_data = True`), returning a numpy array with an entry for each node in the graph representing the probability of an arrival occurring at that node. All the entries in the array must form a probability distribution, i.e. they must sum to 1.
- When generating arrivals from data, the arrivals might be deterministic. In this case the array generated at each timestep would have an entry of 1 at the node where the call arrives and 0 for all other nodes.
* `alpha`: controls the proportional difference between the cost to move ambulances in between calls and the cost to move an ambulance to respond to a call.
- `alpha = 0`: no cost to move between calls
- `alpha = 1`: no cost to move to respond to a call
* `from_data`: an indicator of whether or not the ambulance arrivals are being read from data
* `edges`: a list of tuples where each tuple has three entries corresponding to the starting node, the ending node, and the distance between them. The distance is a dictionary with one key, `'travel_time'`, whose value is an int representing the time required to travel between the two nodes
- Ex. `(0, 4, {'travel_time': 2})` is an edge between nodes 0 and 4 with travel time 2
- The graph is undirected and nodes are inferred from the edges
- Requires that the graph is fully connected
- Requires that the numbering of nodes is chronological and starts at 0 (ie, if you have 5 nodes they must be labeled 0, 1, 2, 3, and 4)
* `starting_state`: a list where each index corresponds to an ambulance, and the entry at that index is the node where the ambulance is located
* `num_ambulance`: integer representing the number of ambulances in the system (kind of redundant, maybe we should get rid of this?)
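A hedged sketch of what such a `CONFIG` dictionary might look like for the four-node example graph; the specific values and the `uniform_arrivals` helper are illustrative assumptions, not taken from the package:

```python
import numpy as np

def uniform_arrivals(timestep, num_nodes):
    # equal arrival probability at every node (the documented default)
    return np.ones(num_nodes) / num_nodes

CONFIG = {
    'epLen': 5,                       # calls per episode
    'arrival_dist': uniform_arrivals,
    'alpha': 0.25,
    'from_data': False,
    'edges': [(0, 1, {'travel_time': 4}), (1, 2, {'travel_time': 2}),
              (2, 3, {'travel_time': 3}), (0, 3, {'travel_time': 3})],
    'starting_state': [0, 2],         # one ambulance at node 0, one at node 2
    'num_ambulance': 2,
}
```

Note that the nodes 0 through 3 are inferred from `edges`, and `starting_state` has one entry per ambulance.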
## Heuristic Agents
### Stable Agent
The stable agent does not move any of the ambulances between calls, and the only time an ambulance moves is when responding to an incoming call. In other words, the policy $\pi$ chosen by the agent for any given state $X$ will be $\pi_h(X) = X$
### Metric Median Agent
The median agent for the metric environment takes a list of all past call arrivals sorted by arrival location, and partitions it into $k$ quantiles where $k$ is the number of ambulances. The algorithm then selects the middle data point in each quantile as the locations to station the ambulances.
### Metric K-Medoid Agent
**k-medoid is currently not included because it takes too long to run**
The k-medoid agent uses the k-medoids algorithm, where $k$ is the number of ambulances, to figure out where to station ambulances. The k-medoids algorithm attempts to find $k$ clusters of data such that the total distance from each of the data points to the center of its cluster is minimized; however, it differs from k-means in that it always chooses an element of the dataset as the center of a cluster. The k-medoid agent is implemented using the [scikit-learn k-medoids algorithm](https://scikit-learn-extra.readthedocs.io/en/latest/generated/sklearn_extra.cluster.KMedoids.html). The policy $\pi$ chosen by the agent for a state $X$ will be $\pi_h(X) = kmedoid\text{(historical call data)}$
The precise definition of the medoid $x_{\text{medoid}}$ for a set of points $\mathcal{X} := \{x_1, x_2, ..., x_n\}$ with a distance function $d$ is
$x_{\text{medoid}} = \text{arg} \text{min}_{y \in \mathcal{X}} \sum_{i=1}^n d(y, x_i)$
### Graph Median Agent
The median agent for the graph environment chooses to station the ambulances at the nodes where the minimum distance would have to be traveled to respond to all calls that have arrived in the past. The distance between each pair of nodes is calculated and put into a (symmetric) matrix, where an entry in the matrix $(i, j)$ is the length of the shortest path between nodes $i$ and $j$. This matrix is multiplied by a vector containing the number of calls that have arrived at each node in the past. The minimum total distances in the resulting matrix are chosen as the nodes at which to station the ambulances.
The following is an example calculated for the graph from the overview assuming the data of past call arrivals is:
$[0,0,3,2,0,1,1,1,0,3,3,3,2,3,3]$
<div>
<img src="attachment:graph.jpg" width="600"/>
</div>
$\begin{bmatrix}
d(0,0) & d(0,1) & d(0,2) & d(0,3)\\
d(1,0) & d(1,1) & d(1,2) & d(1,3)\\
d(2,0) & d(2,1) & d(2,2) & d(2,3)\\
d(3,0) & d(3,1) & d(3,2) & d(3,3)
\end{bmatrix} =
\begin{bmatrix}
0 & 4 & 5 & 3\\
4 & 0 & 2 & 5\\
5 & 2 & 0 & 3\\
3 & 5 & 3 & 0
\end{bmatrix}$
$\begin{bmatrix}
\sum_{x \in \text{past data}} \mathbb{1}(x = 0)\\
\sum_{x \in \text{past data}} \mathbb{1}(x = 1)\\
\sum_{x \in \text{past data}} \mathbb{1}(x = 2)\\
\sum_{x \in \text{past data}} \mathbb{1}(x = 3)
\end{bmatrix} =
\begin{bmatrix}
4\\
3\\
2\\
6
\end{bmatrix}$
$\begin{bmatrix}
0 & 4 & 5 & 3\\
4 & 0 & 2 & 5\\
5 & 2 & 0 & 3\\
3 & 5 & 3 & 0
\end{bmatrix}
\begin{bmatrix}
4\\
3\\
2\\
6
\end{bmatrix}
= \begin{bmatrix}
40\\
50\\
44\\
33
\end{bmatrix}$
The graph median agent would choose to position the first ambulance at node 3, the second ambulance at node 0, etc.
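The matrix calculation above can be reproduced directly; the distance matrix is copied from the worked example:

```python
import numpy as np

# shortest-path distances between the four nodes, from the example above
D = np.array([[0, 4, 5, 3],
              [4, 0, 2, 5],
              [5, 2, 0, 3],
              [3, 5, 3, 0]])

past_calls = [0, 0, 3, 2, 0, 1, 1, 1, 0, 3, 3, 3, 2, 3, 3]
counts = np.bincount(past_calls, minlength=4)   # calls per node: [4, 3, 2, 6]

total_distance = D @ counts                      # [40, 50, 44, 33]
station_order = np.argsort(total_distance)       # nodes ranked best-first
print(total_distance, station_order)
```

The smallest totals come first, so the agent stations the first ambulance at node 3 and the second at node 0, matching the text.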
### Graph Mode Agent
The mode agent chooses to station the ambulances at the nodes where the most calls have come in the past. The first ambulance will be stationed at the node with the most historical calls, the second ambulance at the node with the second most historical calls, etc. The policy $\pi$ chosen by the agent for a state $X$ will be $\pi_h(X) = mode\text{(historical call data)}$
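A minimal sketch of the mode policy using the same historical call data as the median example; `mode_policy` is an illustrative helper, not the package's implementation:

```python
from collections import Counter

def mode_policy(past_calls, num_ambulance):
    """Station ambulances at the most frequent historical call nodes."""
    most_common = Counter(past_calls).most_common(num_ambulance)
    return [node for node, _ in most_common]

# node 3 has 6 past calls and node 0 has 4, so two ambulances go to [3, 0]
print(mode_policy([0, 0, 3, 2, 0, 1, 1, 1, 0, 3, 3, 3, 2, 3, 3], 2))
```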
# Block A1.6 of Module A1
## A1.6.1 Summing Up
### A1. Quiz. Part 1
**A1.6.1.1 Which of the data types listed below are used in _Python_ when working with numbers?**
* int __[correct]__
* str
* float __[correct]__
**A1.6.1.2 Which of the data types listed below are used in Python when working with strings?**
* int
* str __[correct]__
* float
**A1.6.1.3 Which mathematical operators can be applied to two objects of type str? (`str` _operator_ `str`)**
* `+` __[correct]__
* `/`
* `*`
* `**`
* `%`
* `//`
**A1.6.1.4 Select the reserved word that is used in the short form of the conditional statement (_if `a`, then `b`_):**
* then
* unless
* else
* while
* if __[correct]__
* for
**A1.6.1.5 Select the reserved words that are used in the full form of the conditional statement, which checks several conditions:**
* else __[correct]__
* unless
* for
* while
* if __[correct]__
* elif __[correct]__
### More exercises, more fire
**A1.6.2.1 What errors were made in the following code block?**
```
last = 10
first = 7
if last < first or first < 0:
print("Введён неверный диапазон чисел")
else:
for i in range(first, last + 1)
if i%2 == 1 and (i%3 == 0 or i%5 == 0):
print(i)
```
* Extra colons were added
* A colon is missing __[correct]__
* The for loop is located inside the _**if**_
**A1.6.2.2 What errors were made in the following code block?**
```
while:
print('Я работаю!')
```
* Extra colons were added
* A colon is missing
* The condition of the _**while**_ loop is missing __[correct]__
* The indentation of the _**while**_ loop body is missing __[correct]__
**A1.6.2.3 What errors were made in the following code block?**
```
n = 20
fib1 = 1
fib2 = 1
if n == 0:
print(0)
elif n <= 2:
print(1)
else:
for n in range(2, n):
fib_sum = fib1 + fib2
fib1 = fib2
fib2 = fib_sum
else:
print(fib2)
```
* Extra colons were added
* A colon is missing
* The condition of the while loop is missing
* The indentation of the while loop body is missing
* There are no errors __[correct]__
### Almost done, but not quite
**A1.6.3.1 What number will be printed as a result of executing the following code block?**
```
for i in range(10):
if i>=3:
break
print(i)
```
**A1.6.3.2 What number will be printed as a result of executing the following code block?**
```
count = 1
while count < 27:
if count%2 == 0:
to_print = 12
else:
to_print = 10
count += 1
print(to_print)
```
# Build your Custom RNN using Keras
<a href="https://colab.research.google.com/github/luckykadam/adder/blob/master/rnn_full_adder.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab
</a>
<a href="https://github.com/luckykadam/adder/blob/master/rnn_full_adder.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub
</a>
## Introduction
It's rare to encounter a situation where LSTM/GRU might not be the choice of RNN cell. How hard would it be to identify and work around the problem? Read on to find out.
In this notebook, we will emulate a binary full adder using an RNN in Keras. We will see:
1. Why, in some situations, LSTM/GRU might not be the optimal choice.
2. How to write a custom RNN layer.
## Background
In the <a href="https://github.com/luckykadam/adder/blob/master/full_adder.ipynb">previous post</a> we developed a small neural network to simulate a binary full adder. We analysed all the parameters learnt, plotted decision hypersurfaces and drew the circuit. Later, we observed how much the usage pattern resembled a Recurrent Neural Network. So, let's see how to achieve the same objective using an RNN.
## Full Adder
A full adder performs an addition operation on three bits: it produces the sum of its three inputs and a carry value. The carry value can then be used as input to the next full adder.
Using this unit in repetition, two binary numbers of arbitrary length can be added.
<img height="220" src="https://upload.wikimedia.org/wikipedia/commons/thumb/6/69/Full-adder_logic_diagram.svg/800px-Full-adder_logic_diagram.svg.png">
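Before turning to the network, the target behaviour can be sanity-checked in plain Python. This sketch mirrors the least-significant-bit-first layout used for the dataset below:

```python
def full_adder(a, b, carry_in):
    # sum bit is the XOR of all three inputs
    s = a ^ b ^ carry_in
    # carry out is set when at least two of the inputs are 1
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def ripple_add(bits_a, bits_b):
    # add two equal-length bit lists, least-significant bit first,
    # by chaining full adders through the carry
    carry = 0
    out = []
    for a, b in zip(bits_a, bits_b):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out

# 3 + 5 = 8: [1, 1, 0, 0] + [1, 0, 1, 0] -> [0, 0, 0, 1] (LSB first)
print(ripple_add([1, 1, 0, 0], [1, 0, 1, 0]))
```

This is exactly the per-step behaviour we want the RNN cell to learn: output the sum bit, pass the carry along as state.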
## RNN emulation
The structure of a full adder is very similar to <a href="https://colah.github.io/posts/2015-08-Understanding-LSTMs/">how an RNN works</a>. We can exploit this similarity.
In the current context, we want an RNN cell that fulfills the following conditions:
1. Output and state should both have dimension 1.
2. Output and state should represent **independent** information.
Let's see if common choices of RNN cells satisfy these conditions:
### GRU (Gated Recurrent Unit)
<img height="240" src="img/gru.png">
<br>
This cell has output and state of the same size, but they are not independent. In fact, output and state are the same vector in a GRU. It can't satisfy condition 2, and so is not suitable here.
### LSTM (Long-Short Term Memory)
<img height="240" src="img/lstm.png">
<br>
This cell produces two states (cell state and hidden state) of different sizes, and an output. The hidden state and the output are the exact same vector, which means only the cell state is useful in the next iteration (being independent of the output). If we configure the network to have cell state and output of size 1, it might learn to ignore the redundant hidden state. But there will still be parameters corresponding to the hidden state, learning things that are eventually ignored, which doesn't look optimal.
So, it seems we will have to define our own custom RNN cell. Let's jump right in ;)
## Implementation
We are going to use Keras (`tf.keras` from TensorFlow 2.0) and its `keras.layers.RNN` API to implement our RNN.
```
# only for Google Colab compatibility
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import numpy as np
import tensorflow as tf
from tensorflow.keras import models, layers, activations
print(tf.__version__)
# set random seed to get reproducible results
np.random.seed(0)
tf.random.set_seed(1)
```
## Dataset creation
The dataset can be prepared by randomly generating two sets of numbers and adding them element-wise to get the expected results. We generate numbers with a limit on the number of bits in their binary representation: `max_bits`.
```
max_bits = 8
n_samples = 100000
# samples in decimal form
samples = np.random.randint(np.power(2, max_bits-1), size=(n_samples, 2))
summed_samples = np.sum(samples, axis=1)
# convert samples to binary representation
samples_binary_repr = [[np.binary_repr(a, width=max_bits), np.binary_repr(b, width=max_bits)] for a,b in samples]
summed_binary_repr = [np.binary_repr(c, width=max_bits) for c in summed_samples]
x_str = np.array([[list(a), list(b)] for a, b in samples_binary_repr])
y_str = np.array([list(c) for c in summed_binary_repr])
# flip binary representation to get increasing significant bit
x_flipped = np.flip(x_str, axis=-1)
y_flipped = np.flip(y_str, axis=-1)
# convert string to numbers
x = np.transpose((x_flipped == '1')*1, axes=(0, 2, 1))
y = (y_flipped == '1')*1
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.1, shuffle=True)
```
## RNN Cell
Each RNN cell will have the structure we came up with in the <a href="https://github.com/luckykadam/adder/blob/master/full_adder.ipynb">previous post</a>.
The structure has:
1. Three inputs (i<sup>th</sup> bit of the 2 numbers and previous carry).
2. One hidden layer (3 neurons).
3. One output layer (2 neurons). Out of two output bits, we want one to be a part of the answer and other to be input (carry) to the next RNN cell.
We extend `keras.layers.Layer` to define the custom RNN cell. To define any custom layer we need to follow these steps:
1. Define `__init__()` to initialize some object-level constants. Keras requires you to declare the `units` variable: the dimension of the output.
2. Define `build()` to initialize all the trainable parameters and set `built=True`.
3. Define `call()` to compute the output (and state) using input and parameters.
```
class FullAdderCell(layers.Layer):
def __init__(self, hidden_units, **kwargs):
super(FullAdderCell, self).__init__(**kwargs)
self.units = 1
self.state_size = 1
self.hidden_units = hidden_units
def build(self, input_shape):
self.hidden_kernel = self.add_weight(shape=(input_shape[-1] + self.state_size, self.hidden_units),
initializer='uniform',
name='hidden_kernel')
self.hidden_bias = self.add_weight(shape=(1, self.hidden_units),
initializer='uniform',
name='hidden_bias')
self.output_kernel = self.add_weight(shape=(self.hidden_units, self.units + self.state_size),
initializer='uniform',
name='output_kernel')
self.output_bias = self.add_weight(shape=(1, self.units + self.state_size),
initializer='uniform',
name='output_bias')
self.built = True
def call(self, inputs, states):
x = tf.concat([inputs, states[0]], axis=-1)
h = tf.keras.activations.tanh(tf.matmul(x, self.hidden_kernel) + self.hidden_bias)
o_s = tf.keras.activations.sigmoid(tf.matmul(h, self.output_kernel) + self.output_bias)
output = o_s[:, :self.units]
state = o_s[:, self.units:]
return output, [state]
```
## Model
The `Sequential` API can be used to define the model. We wrap the RNN cell with `keras.layers.RNN` to get an RNN layer. We set `return_sequences=True` because we want to collect the bit produced by the RNN cell at each step.
```
model = tf.keras.Sequential(name='full_adder')
model.add(layers.RNN(FullAdderCell(3), return_sequences=True, input_shape=(None, 2)))
model.summary()
```
## Loss function
At each step, only one bit is produced, giving an output of shape `(batch_size, max_bits, 1)`; hence we use the `binary_crossentropy` loss function.
```
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
```
## Training
```
model.fit(x_train, y_train, batch_size=32, epochs=5)
scores = model.evaluate(x_test, y_test, verbose=2)
```
## Testing
Let's generate two random numbers in the range (0, 2<sup>max_bits-1</sup>), predict their sum using our network, and compare it with the actual sum.
```
max_bits = 8
a = np.random.randint(np.power(2, max_bits-1))
b = np.random.randint(np.power(2, max_bits-1))
a_bin = np.float32(1) * (np.flip(list(np.binary_repr(a, width=max_bits)), axis=-1) == '1')
b_bin = np.float32(1) * (np.flip(list(np.binary_repr(b, width=max_bits)), axis=-1) == '1')
print('a: {}, b: {}'.format(a, b))
print('binary representations -> a: {}, b: {}'.format(a_bin, b_bin))
a_b = np.stack((a_bin, b_bin), axis=-1).reshape(1,-1,2)
print('a_b: {}'.format(a_b))
predictions = model(a_b).numpy().flatten()
summed_bin = 1 * (predictions > 0.5)
summed = np.packbits(np.flip(summed_bin, axis=-1))[0]
print('predictions: {}'.format(predictions))
print('binary representations -> summed: {}'.format(summed_bin))
print('summed: {}'.format(summed))
```
## Result
Voila! Our network worked perfectly. It's amazing how easily we created a custom RNN layer using the `keras.layers.RNN` and `keras.layers.Layer` APIs.
## Conclusion
Frameworks like Keras, TensorFlow and PyTorch give us the power to experiment with great speed and efficiency. Combine that with Python's flexibility, and you have a Swiss Army knife for all your AI needs.
## References:
1. <https://en.wikibooks.org/wiki/Digital_Electronics/Digital_Adder>
2. <https://colah.github.io/posts/2015-08-Understanding-LSTMs/>
3. <https://www.tensorflow.org/guide/keras/rnn>
4. <http://dprogrammer.org/rnn-lstm-gru>
```
import os
import sys
import cPickle
module_path = os.path.abspath(os.path.join('../../'))
sys.path.append(module_path)
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import auc, roc_curve
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
color = sns.color_palette()
%matplotlib inline
matplotlib.style.use('ggplot')
from utils import xgb_utils
from conf.configure import Configure
train = pd.read_csv(Configure.base_path + 'train/orderFuture_train.csv', encoding='utf8')
test = pd.read_csv(Configure.base_path + 'test/orderFuture_test.csv', encoding='utf8')
print train.shape, test.shape
```
## Load RF and ET prediction results for the lq dataset
```
with open('lq_dataset_et_train.pkl', "rb") as f:
lq_et_train = cPickle.load(f)
with open('lq_dataset_et_test.pkl', "rb") as f:
lq_et_test = cPickle.load(f)
with open('lq_dataset_rf_train.pkl', "rb") as f:
lq_rf_train = cPickle.load(f)
with open('lq_dataset_rf_test.pkl', "rb") as f:
lq_rf_test = cPickle.load(f)
train = pd.merge(train, lq_et_train, on='userid', how='left')
train = pd.merge(train, lq_rf_train, on='userid', how='left')
test = pd.merge(test, lq_et_test, on='userid', how='left')
test = pd.merge(test, lq_rf_test, on='userid', how='left')
print train.shape, test.shape
```
## Load AdaBoost prediction results for the lq dataset
```
with open('lq_dataset_ada_train.pkl', "rb") as f:
lq_train = cPickle.load(f)
with open('lq_dataset_ada_test.pkl', "rb") as f:
lq_test = cPickle.load(f)
train = pd.merge(train, lq_train, on='userid', how='left')
test = pd.merge(test, lq_test, on='userid', how='left')
print train.shape, test.shape
```
## Load XGB and LGBM prediction results for the lq dataset
```
train_files = os.listdir('./train')
for train_f in train_files:
if train_f.startswith('lq'):
lq_train = pd.read_csv('./train/'+train_f)
train = pd.merge(train, lq_train, on='userid', how='left')
test_files = os.listdir('./test')
for test_f in test_files:
if test_f.startswith('lq'):
lq_test = pd.read_csv('./test/'+test_f)
test = pd.merge(test, lq_test, on='userid', how='left')
print train.shape, test.shape
```
## Load CatBoost prediction results for the lq dataset
```
train_files = os.listdir('./train/catboost')
for train_f in train_files:
if train_f.startswith('lq'):
lq_train = pd.read_csv('./train/catboost/'+train_f)
train = pd.merge(train, lq_train, on='userid', how='left')
test_files = os.listdir('./test/catboost')
for test_f in test_files:
if test_f.startswith('lq'):
lq_test = pd.read_csv('./test/catboost/'+test_f)
test = pd.merge(test, lq_test, on='userid', how='left')
print train.shape, test.shape
```
## Load XGB and LGBM prediction results for the hl dataset
```
train_files = os.listdir('./train')
for train_f in train_files:
if train_f.startswith('hl'):
lq_train = pd.read_csv('./train/'+train_f)
train = pd.merge(train, lq_train, on='userid', how='left')
test_files = os.listdir('./test')
for test_f in test_files:
if test_f.startswith('hl'):
lq_test = pd.read_csv('./test/'+test_f)
test = pd.merge(test, lq_test, on='userid', how='left')
print train.shape, test.shape
```
## Load RF and ET prediction results for the hl dataset
```
with open('hl_dataset_et_train.pkl', "rb") as f:
hl_et_train = cPickle.load(f)
with open('hl_dataset_et_test.pkl', "rb") as f:
hl_et_test = cPickle.load(f)
with open('hl_dataset_rf_train.pkl', "rb") as f:
hl_rf_train = cPickle.load(f)
with open('hl_dataset_rf_test.pkl', "rb") as f:
hl_rf_test = cPickle.load(f)
train = pd.merge(train, hl_et_train, on='userid', how='left')
train = pd.merge(train, hl_rf_train, on='userid', how='left')
test = pd.merge(test, hl_et_test, on='userid', how='left')
test = pd.merge(test, hl_rf_test, on='userid', how='left')
print train.shape, test.shape
```
## Load CatBoost prediction results for the hl dataset
```
train_files = os.listdir('./train/catboost')
for train_f in train_files:
if train_f.startswith('hl'):
lq_train = pd.read_csv('./train/catboost/'+train_f)
train = pd.merge(train, lq_train, on='userid', how='left')
test_files = os.listdir('./test/catboost')
for test_f in test_files:
if test_f.startswith('hl'):
lq_test = pd.read_csv('./test/catboost/'+test_f)
test = pd.merge(test, lq_test, on='userid', how='left')
print train.shape, test.shape
```
## Load XGB and LGBM prediction results for the sqg dataset
```
train_files = os.listdir('./train')
for train_f in train_files:
if train_f.startswith('sqg'):
sqg_train = pd.read_csv('./train/'+train_f)
train = pd.merge(train, sqg_train, on='userid', how='left')
test_files = os.listdir('./test')
for test_f in test_files:
if test_f.startswith('sqg'):
sqg_test = pd.read_csv('./test/'+test_f)
test = pd.merge(test, sqg_test, on='userid', how='left')
print train.shape, test.shape
```
## Load RF and ET prediction results for the sqg dataset
```
with open('sqg_dataset_et_train.pkl', "rb") as f:
sqg_et_train = cPickle.load(f)
with open('sqg_dataset_et_test.pkl', "rb") as f:
sqg_et_test = cPickle.load(f)
with open('sqg_dataset_rf_train.pkl', "rb") as f:
sqg_rf_train = cPickle.load(f)
with open('sqg_dataset_rf_test.pkl', "rb") as f:
sqg_rf_test = cPickle.load(f)
train = pd.merge(train, sqg_et_train, on='userid', how='left')
train = pd.merge(train, sqg_rf_train, on='userid', how='left')
test = pd.merge(test, sqg_et_test, on='userid', how='left')
test = pd.merge(test, sqg_rf_test, on='userid', how='left')
print train.shape, test.shape
```
## Load CatBoost prediction results for the sqg dataset
```
train_files = os.listdir('./train/')
for train_f in train_files:
if train_f.startswith('sqg_cat'):
sqg_train = pd.read_csv('./train/'+train_f)
sqg_train.columns = ['userid', 'sqg_{}'.format(sqg_train.columns.values[1])]
train = pd.merge(train, sqg_train, on='userid', how='left')
test_files = os.listdir('./test/')
for test_f in test_files:
if test_f.startswith('sqg_cat'):
sqg_test = pd.read_csv('./test/'+test_f)
sqg_test.columns = ['userid', 'sqg_{}'.format(sqg_test.columns.values[1])]
test = pd.merge(test, sqg_test, on='userid', how='left')
print train.shape, test.shape
```
## Add the original main features
```
from get_datasets import load_datasets
with open('../train_0.97329.pkl', "rb") as f:
lq_train = cPickle.load(f)
with open('../test_0.97329.pkl', "rb") as f:
lq_test = cPickle.load(f)
used_features = ['userid', 'goodorder_vs_actiontype_1_ratio', 'isOrder', 'total_good_order_ratio',
'history_order_type_sum_lg0', 'goodorder_vs_actiontype_5_ratio', 'finalAction_4',
'action_type_511_time_delta_min', 'finalAction_8', 'action_type_511_time_delta_max',
'goodorder_vs_actiontype_6_ratio', 'type_1to4valuemean', 'histord_sum_cont4',
'age_lg90', 'three_gram_789_last_time', 'three_gram_789_time_mean',
'action_type_710_time_delta_min', 'three_gram_456_time_min',
'three_gram_action_456_ratio', 'pay_money_min_delta']
train = pd.merge(train, lq_train[used_features], on='userid', how='left')
test = pd.merge(test, lq_test[used_features], on='userid', how='left')
hl_train = pd.read_csv(Configure.base_path + 'huang_lin/train_dataHL.csv')
hl_test = pd.read_csv(Configure.base_path + 'huang_lin/test_dataHL.csv')
used_features = ['userid', 'endclosest_3_4', 'endclosest_3_3', 'actionType_recent_time_3',
'type_3to4valueamin', 'minute_last', 'rangeTime_to_end5','typeend3_4diff',
'endclosest_1_3', 'action_end1Browse', 'actionType_recent_time_4',
'actionType5_Per', 'type_4to4valuemean', 'hour_last', 'type_3to4valuemean',
'actionType_recent_time_1', 'endclosest_1_4', 'rangeTime_to_begin6',
'endclosest_4_4']
# train = pd.merge(train, hl_train[used_features], on='userid', how='left')
# test = pd.merge(test, hl_test[used_features], on='userid', how='left')
qg_train = pd.read_csv(Configure.base_path + 'sun_qian_guo/train.csv')
qg_test = pd.read_csv(Configure.base_path + 'sun_qian_guo/test.csv')
used_features = ['userid', 'recentmin5', 'lastBrowse', 'browseLastTwo',
'1To6Timemin', '5To6Timemin', 'recentmax5', 'typeDismax6',
'recentmin1', '6To5Timemin']
train = pd.merge(train, qg_train[used_features], on='userid', how='left')
test = pd.merge(test, qg_test[used_features], on='userid', how='left')
print train.shape, test.shape
# drop some features
remove_features = ['histord_sum_cont4', 'action_type_511_time_delta_min', 'finalAction_8',
'goodorder_vs_actiontype_6_ratio', 'type_1to4valuemean', 'total_good_order_ratio',
'goodorder_vs_actiontype_5_ratio']
train.drop(remove_features, axis=1, inplace=True)
test.drop(remove_features, axis=1, inplace=True)
print train.shape, test.shape
# plt.figure(figsize=(8,6))
# sns.heatmap(train.corr(), xticklabels=False, yticklabels=False)
# plt.show()
```
# Save level-1 dataset
```
with open('level1_train.pkl', "wb") as f:
cPickle.dump(train, f, -1)
with open('level1_test.pkl', "wb") as f:
cPickle.dump(test, f, -1)
```
# Level 2
```
print train.shape, test.shape
import xgboost as xgb
y_train_all = train['orderType']
submit_df = pd.DataFrame({'userid': test['userid']})
train.drop(['orderType', 'userid'], axis=1, inplace=True)
test.drop(['userid'], axis=1, inplace=True)
train = train[test.columns.values]
df_columns = train.columns.values
print('train: {}, test: {}, feature count: {}, orderType 1:0 = {:.5f}'.format(
train.shape, test.shape, len(df_columns), 1.0*sum(y_train_all) / len(y_train_all)))
1.0*sum(y_train_all) / (len(y_train_all) - sum(y_train_all))
xgb_params = {
'alpha': 0.1,
'booster': 'gbtree',
'colsample_bytree': 0.7,
'eta': 0.01,
'eval_metric': 'auc',
'gamma': 2,
'gpu_id': 2,
'lambda': 1,
'max_depth': 10,
'min_child_weight': 3,
'nthread': -1,
'objective': 'binary:logistic',
'scale_pos_weight': 1,
'silent': 1,
'subsample': 0.6,
'updater': 'grow_gpu'
}
print('---> cv train to choose best_num_boost_round')
dtrain_all = xgb.DMatrix(train.values, y_train_all, feature_names=df_columns)
dtest = xgb.DMatrix(test, feature_names=df_columns)
# with a 4-fold split, each validation fold has 10077 samples, matching the test set size
nfold = 3
cv_result = xgb.cv(dict(xgb_params),
dtrain_all,
nfold=nfold,
stratified=True,
num_boost_round=4000,
early_stopping_rounds=100,
verbose_eval=100,
show_stdv=False,
)
best_num_boost_rounds = len(cv_result)
mean_train_auc = cv_result.loc[best_num_boost_rounds-11 : best_num_boost_rounds-1, 'train-auc-mean'].mean()
mean_test_auc = cv_result.loc[best_num_boost_rounds-11 : best_num_boost_rounds-1, 'test-auc-mean'].mean()
print('best_num_boost_rounds = {}'.format(best_num_boost_rounds))
print('mean_train_auc = {:.7f} , mean_test_auc = {:.7f}\n'.format(mean_train_auc, mean_test_auc))
```
- Before tuning: mean_train_auc = 0.9907898, mean_test_auc = 0.9733552
- After tuning: mean_train_auc = 0.9810015, mean_test_auc = 0.9736901
- After removing features: mean_train_auc = 0.9799322, mean_test_auc = 0.9736705
- With 10 lq-catboost models: mean_train_auc = 0.9802385, mean_test_auc = 0.9740707
- mean_train_auc = 0.9834573 , mean_test_auc = 0.9741202
```
print('---> training on total dataset')
model = xgb.train(dict(xgb_params),
dtrain_all,
num_boost_round=best_num_boost_rounds)
import time
print('---> predict test')
y_pred = model.predict(dtest, ntree_limit=model.best_ntree_limit)
submit_df['orderType'] = y_pred
submission_path = '../../result/{}_scaleposweight_{}.csv'.format('stacking', 1)
submit_df.to_csv(submission_path, index=False, columns=['userid', 'orderType'])
print('-------- predict and valid check ------')
print('test count mean: {:.6f} , std: {:.6f}'.format(np.mean(submit_df['orderType']), np.std(submit_df['orderType'])))
print('done.')
```
```
from zipfile import ZipFile
import os
```
Testing reading from a zip file and listing its first few entries
```
z = ZipFile('/home/nfarrugi/bigdisk2/new/dropbox/SilentCitiesData/0003/Sebastien Puechmaille - 003_01.zip')
filelist = z.namelist()
filelist[:10]
```
Pipeline
--
We will do this in the following steps :
1. For each folder or dropbox, search for an existing folder on bigdisk1 or bigdisk2.
2. If the folder is not found, this is a new site.
3. Parse the list of new sites and put them on bigdisk2, extract them flat. Save the list of new sites as sites for which we will need to restart "metadata_file"
4. Parse all the other archives by checking their file list, compare with the csv of the corresponding site, copy if needed. If ANY copy is done, add the corresponding site to the list of sites to update.
5. Relaunch metadata_file.py for all the sites that have been updated
6. Relaunch metadata_site.py
7. Relaunch audio_processing.py for all the sites that have been updated
1. For each folder or dropbox, search for an existing folder on bigdisk1 or bigdisk2.
2. If the folder is not found, this is a new site.
```
bigdisk1 = '/home/nfarrugi/bigdisk1'
bigdisk2 = '/home/nfarrugi/bigdisk2'
dropboxfolder = '/home/nfarrugi/bigdisk2/new/dropbox/SilentCitiesData/'
newsites = []
bigdisk1sites = []
bigdisk2sites = []
for curdir in os.listdir(dropboxfolder):
if os.path.isdir(os.path.join(bigdisk1,'silentcities',curdir)):
print(f"{curdir} found on bigdisk1")
elif os.path.isdir(os.path.join(bigdisk2,'silentcities',curdir)):
print(f"{curdir} found on bigdisk2")
else:
newsites.append(curdir)
print(f"{curdir} is a new site")
```
3. Parse the list of new sites and put them on bigdisk2, extract them flat. Save the list of new sites as sites for which we will need to restart "metadata_file"
```
def extract_list_from_zip(zipfile,destdir,filelist=None):
### filelist is a subset (potentially all) from the list of files in zipfile
### If not, take the whole list of files from the zip
os.makedirs(destdir,exist_ok=True)
z = ZipFile(zipfile)
if filelist is None:
filelist = z.namelist()
print("Extracting all files...")
print(f"Extracting {len(filelist)} files")
for curfile in filelist:
# in case the filelist entries contain paths, keep only the filename
filenameonly = os.path.split(curfile)[1]
# extract a specific file from the zip container
with z.open(curfile) as src:
content = src.read()
# save the extracted file
with open(os.path.join(destdir, filenameonly), 'wb') as dst:
dst.write(content)
def check_folder_zip_copy(zipfile, destfolder):
# TODO: left unfinished in the original notebook
pass
```
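Step 4 of the pipeline needs to compare an archive's file list with what is already on disk. A minimal sketch of that comparison (hypothetical helper name, not tested against the real data):

```python
import os
from zipfile import ZipFile

def files_missing_from(zip_path, destdir):
    # return the zip members (compared by bare filename) not yet present in destdir
    existing = set(os.listdir(destdir)) if os.path.isdir(destdir) else set()
    with ZipFile(zip_path) as z:
        return [m for m in z.namelist()
                if os.path.split(m)[1] not in existing]
```

Any site for which this list is non-empty would then be added to the set of sites whose metadata and audio processing need to be rerun.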
## Second Assignment
#### 1) Create a function called **"even_squared"** that receives an integer value **N**, and returns a list containing, in ascending order, the square of each of the even values, from 1 to N, including N if applicable.
```
def even_squared(N):
les = [i ** 2 for i in range(1, N+1) if i % 2 == 0]
return les
even_squared(10)
```
#### 2) Using a while loop and the **input()** function, read an indefinite amount of **integers** until the number read is **-1**. After this process, print two lists on the screen: The first containing the even integers, and the second containing the odd integers. Both must be in ascending order.
```
eveni = []
oddi = []
while True:
inp = int(input('Enter a number'))
if inp == -1:
break
elif inp % 2 == 0:
eveni.append(inp)
else:
oddi.append(inp)
print(sorted(eveni), sorted(oddi), sep='\n')
```
#### 3) Create a function called **"even_account"** that receives a list of integers, counts the number of existing even elements, and returns this count.
```
def even_account(LoI):
count = len([v for v in LoI if v % 2 == 0])
return count
even_account([2, 3, 4, 5, 6, 1, 2, 4, 7])
```
#### 4) Create a function called **"squared_list"** that receives a list of integers and returns another list whose elements are the squares of the elements of the first.
```
def squared_list(LOI):
LOI1 = [v ** 2 for v in LOI]
return LOI1
squared_list([2, 3, 4, 1, 10])
```
#### 5) Create a function called **"descending"** that receives two lists of integers and returns a single list, which contains all the elements in descending order, and may include repeated elements.
```
def descending(Loi, Loi1):
Loi.extend(Loi1)
return sorted(Loi)[::-1]
print(descending([2, 4, 5], [6, 2, 3]))
```
#### 6) Create a function called **"adding"** that receives a list **A**, and an arbitrary number of integers as input. Return a new list containing the elements of **A** plus the integers passed as input, in the order in which they were given. Here is an example:
>```python
>>>> A = [10,20,30]
>>>> adding(A, 4, 10, 50, 1)
> [10, 20, 30, 4, 10, 50, 1]
```
```
def adding(A, *args):
A1 = [a for a in args]
return A + A1
A = [3,4,5,6]
adding(A, 10, 100, 200)
```
#### 7) Create a function called **"intersection"** that receives two input lists and returns another list with the values that belong to the two lists simultaneously (intersection) without repetition of values and in ascending order. Use only lists (do not use sets); loops and conditionals. See the example:
>```python
>>>> A = [-2, 0, 1, 2, 3]
>>>> B = [-1, 2, 3, 6, 8]
>>>> intersection(A,B)
> [2, 3]
```
```
def intersection(L1, L2):
L3 = [x for x in L1 if x in L2]
L4 = []
for y in L3:
if y not in L4:
L4.append(y)
return sorted(L4)
A = [3, 5, 6,]
B = [5, 5, 6, 6, 2]
intersection(A, B)
```
#### 8) Create a function called **"union"** that receives two input lists and returns another list with the union of the elements of the two received, without repetition of elements and in ascending order. Use only lists (do not use sets); loops and conditionals. See the example:
>```python
>>>> A = [-2, 0, 1, 2]
>>>> B = [-1, 1, 2, 10]
>>>> union(A,B)
> [-2, -1, 0, 1, 2, 10]
```
```
def union(LI1, LI2):
LI1.extend(LI2)
LI3 = []
for v in LI1:
if v not in LI3:
LI3.append(v)
return sorted(LI3)
A = [-2, 0, 1, 2, 10, 11]
B = [-4, -2, -1, 1, 2, 10, 11]
union(A, B)
```
#### 9) Generalize the **"intersection"** function so that it receives an indefinite number of lists and returns the intersection of all of them. Call the new function **intersection2**.
```
def intersection2(*args):
LIx = []
ct = 0
for a in args[0]:
for b in range(1, len(args)):
if a in args[b]:
ct += 1
if ct == (len(args) - 1):
LIx.append(a)
ct = 0
LIy = []
for c in LIx:
if c not in LIy:
LIy.append(c)
return sorted(LIy)
A = [-3, 4, 5, 6, 7]
B = [-3, 3, 4, 5, 6, 7]
C = [-3, 9, 3, 5, 6, 7]
D = [-3, -9, 3, 4, 5, 6, 7]
intersection2(A, B, C, D)
```
## Challenge
#### 10) Create a function named **"matrix"** that implements matrix multiplication:
Given the matrices:
$A_{m\times n}=
\left[\begin{matrix}
a_{11}&a_{12}&...&a_{1n}\\
a_{21}&a_{22}&...&a_{2n}\\
\vdots &\vdots &&\vdots\\
a_{m1}&a_{m2}&...&a_{mn}\\
\end{matrix}\right]$
We will represent them as a list of lists.
$A = [[a_{11},a_{12},...,a_{1n}],[a_{21},a_{22},...,a_{2n}], . . . ,[a_{m1},a_{m2},...,a_{mn}]]$
The **"matrix"** funtion must receive two matrices $A$ e $B$ in the specified format and return $A\times B$
```
def matrix(M1, M2):
Am = len(M1)
An = len(M1[0])
Bm = len(M2)
Bn = len(M2[0])
C = [[] for x in range(Am)]
if An != Bm:
print('The multiplication of the two matrices is not possible')
else:
c = 0
d = 0
for a in range(Am):
for b in range(Bn):
for a1 in range(An):
c = M1[a][a1] * M2[a1][b]
d += c
C[a].append(d)
c = 0
d = 0
return C
A = [[3, 2, 1], [1, 0, 2]]
B = [[1, 2], [0, 1], [4, 0]]
matrix(A, B)
```
# Multiclass Example
This example shows how to use `tsfresh` to extract and select useful features from time series in a multiclass classification setting.
We use an example dataset of human activity recognition for this.
The dataset consists of timeseries for 7352 accelerometer readings.
Each reading covers 2.56 s of accelerometer data at 50 Hz (for a total of 128 samples per reading). Furthermore, each reading corresponds to one of six activities (walking, walking upstairs, walking downstairs, sitting, standing and laying).
For more information go to https://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones
This notebook follows the example in [the first notebook](./01%20Feature%20Extraction%20and%20Selection.ipynb), so we will go quickly over the extraction and focus on the more interesting feature selection in this case.
```
%matplotlib inline
import matplotlib.pylab as plt
from tsfresh import extract_features, extract_relevant_features, select_features
from tsfresh.utilities.dataframe_functions import impute
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import pandas as pd
import numpy as np
```
## Load and visualize data
```
from tsfresh.examples.har_dataset import download_har_dataset, load_har_dataset, load_har_classes
# fetch dataset from uci
download_har_dataset()
df = load_har_dataset()
df.head()
y = load_har_classes()
```
The data is not in a typical time series format so far:
the columns are the time steps whereas each row is a measurement of a different person.
Therefore we bring it to a format where the time series of different persons are identified by an `id` and are ordered by time vertically.
```
df["id"] = df.index
df = df.melt(id_vars="id", var_name="time").sort_values(["id", "time"]).reset_index(drop=True)
df.head()
plt.title('accelerometer reading')
plt.plot(df[df["id"] == 0].set_index("time").value)
plt.show()
```
## Extract Features
```
# only use the first 500 ids to speed up the processing
X = extract_features(df[df["id"] < 500], column_id="id", column_sort="time", impute_function=impute)
X.head()
```
## Train and evaluate classifier
For later comparison, we train a decision tree on all features (without selection):
```
X_train, X_test, y_train, y_test = train_test_split(X, y[:500], test_size=.2)
classifier_full = DecisionTreeClassifier()
classifier_full.fit(X_train, y_train)
print(classification_report(y_test, classifier_full.predict(X_test)))
```
# Multiclass feature selection
We will now select a subset of relevant features using the `tsfresh` select features method.
However, it only works for binary classification or regression tasks.
For a six-label multiclass task, we therefore split the selection problem into six binary one-versus-all classification problems.
For each of them we can do a binary classification feature selection:
```
relevant_features = set()
for label in y.unique():
y_train_binary = y_train == label
X_train_filtered = select_features(X_train, y_train_binary)
print("Number of relevant features for class {}: {}/{}".format(label, X_train_filtered.shape[1], X_train.shape[1]))
relevant_features = relevant_features.union(set(X_train_filtered.columns))
len(relevant_features)
```
we keep only those features that we selected above, for both the train and test set
```
X_train_filtered = X_train[list(relevant_features)]
X_test_filtered = X_test[list(relevant_features)]
```
and train again:
```
classifier_selected = DecisionTreeClassifier()
classifier_selected.fit(X_train_filtered, y_train)
print(classification_report(y_test, classifier_selected.predict(X_test_filtered)))
```
It worked! The precision improved by removing irrelevant features.
## Improved Multiclass feature selection
We can instead specify the number of classes for which a feature should be a relevant predictor in order to pass through the filtering process. This is as simple as setting the `multiclass` parameter to `True` and setting `n_significant` to the required number of classes. We will try with a requirement of being relevant for 5 classes.
```
X_train_filtered_multi = select_features(X_train, y_train, multiclass=True, n_significant=5)
X_train_filtered_multi.shape
```
We can see that the number of relevant features is lower than the previous implementation.
```
classifier_selected_multi = DecisionTreeClassifier()
classifier_selected_multi.fit(X_train_filtered_multi, y_train)
X_test_filtered_multi = X_test[X_train_filtered_multi.columns]
print(classification_report(y_test, classifier_selected_multi.predict(X_test_filtered_multi)))
```
We now get slightly better classification performance, especially for classes where the previous classifier performed poorly. The parameter `n_significant` can be tuned for best results.
```
# for use in tutorial and development; do not include this `sys.path` change in production:
import sys ; sys.path.insert(0, "../")
```
# Data Sources
Throughout this tutorial we'll work with data in the `dat` subdirectory:
```
!ls -goh ../dat
```
In particular, we'll work with a series of *progressive examples* based on the
`dat/recipes.csv` CSV file.
This data comes from a
[Kaggle dataset](https://www.kaggle.com/shuyangli94/food-com-recipes-and-user-interactions/metadata)
that describes metadata about [Food.com](https://food.com/):
> "Food.com Recipes and Interactions"
Shuyang Li
Kaggle (2019)
<https://doi.org/10.34740/kaggle/dsv/783630>
One of the simpler recipes in that dataset is `"anytime crepes"` at <https://www.food.com/recipe/327593>
* id: 327593
* minutes: 8
* ingredients: `"['egg', 'milk', 'whole wheat flour']"`
The tutorial begins by showing how to represent the metadata for this recipe in a knowledge graph, then gradually builds up more and more information about this collection of recipes.
To start, let's load and examine the CSV data:
```
import pandas as pd
df = pd.read_csv("../dat/recipes.csv")
df.head()
```
Now let's drill down to the metadata for the `"anytime crepes"` recipe:
```
recipe_row = df[df["name"] == "anytime crepes"].iloc[0]
recipe_row
```
Given that we have a rich source of *linked data* to use, next we need to focus on *knowledge representation*.
We'll use the [FoodOn](https://foodon.org/design/foodon-relations/) ontology (see below) to represent recipes, making use of two of its *controlled vocabularies*:
* <http://purl.org/heals/food/>
* <http://purl.org/heals/ingredient/>
The first one defines an entity called `Recipe` which has the full URL of <http://purl.org/heals/food/Recipe> and we'll use that to represent our recipe data from the *Food.com* dataset.
It's a common practice to abbreviate the first part of the URL for a controlled vocabulary with a *prefix*.
In this case we'll use the prefix conventions used in previous publications related to this ontology:
| URL | prefix |
| --- | --- |
| <http://purl.org/heals/food/> | `wtm:` |
| <http://purl.org/heals/ingredient/> | `ind:` |
Now let's represent the data using this ontology, starting with the three ingredients for the **anytime crepes** recipe:
```
import ast
ingredients = ast.literal_eval(recipe_row["ingredients"])  # safer than eval() on CSV strings
ingredients
```
These ingredients become represented, respectively, as:
* `ind:ChickenEgg`
* `ind:CowMilk`
* `ind:WholeWheatFlour`
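Putting these together, the recipe's metadata maps onto a handful of triples. A plain-Python sketch, before introducing any RDF library (the `wtm:hasIngredient` and `wtm:hasCookTime` property names here are assumptions about the ontology):
```
WTM = "http://purl.org/heals/food/"        # the wtm: prefix
IND = "http://purl.org/heals/ingredient/"  # the ind: prefix

recipe = "https://www.food.com/recipe/327593"

# each (subject, predicate, object) tuple is one edge in the graph
triples = [
    (recipe, "rdf:type", WTM + "Recipe"),
    (recipe, WTM + "hasCookTime", "8"),
    (recipe, WTM + "hasIngredient", IND + "ChickenEgg"),
    (recipe, WTM + "hasIngredient", IND + "CowMilk"),
    (recipe, WTM + "hasIngredient", IND + "WholeWheatFlour"),
]
```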
## Ontology Sources
We'll use several different sources for data and ontology throughout the **kglab** tutorial, although most of it focuses on progressive examples that use [*FoodOn*](https://www.nature.com/articles/s41538-018-0032-6).
*FoodOn* – subtitled "a farm to fork ontology" – takes a comprehensive view of the data and metadata involved in our food supply, beginning with seed genomics, micronutrients, the biology of food allergies, etc.
This work is predicated on leveraging large knowledge graphs to represent the different areas of science, technology, business, public policy, etc.:
> The need to represent knowledge about food is central to many human activities including agriculture, medicine, food safety inspection, shopping patterns, and sustainable development. FoodOn is an ontology – a controlled vocabulary which can be used by both people and computers – to name all parts of animals, plants, and fungi which can bear a food role for humans and domesticated animals, as well as derived food products and the processes used to make them.
For more details, see:
* <https://foodon.org/design/foodon-relations/>
* <https://foodkg.github.io/docs/ontologyDocumentation/Ingredient/doc/index-en.html>
* <https://foodkg.github.io/foodkg.html>
* <https://github.com/foodkg/foodkg.github.io>
For primary sources, see: [[vardeman2014ceur]](https://derwen.ai/docs/kgl/biblio/#vardeman2014ceur), [[sam2014odp]](https://derwen.ai/docs/kgl/biblio/#sam2014odp), [[dooley2018npj]](https://derwen.ai/docs/kgl/biblio/#dooley2018npj), [[hitzler2018]](https://derwen.ai/docs/kgl/biblio/#hitzler2018)
We'll work through several examples of representation, although here's an example of what a full recipe in *FoodOn* would look like:
## Graph Size Comparisons
One frequently asked question is about the size of the graphs that we're using in the **kglab** tutorial.
The short answer: "No, these aren't trivial graphs."
We'll start out with small examples, to show the basics for how to construct an RDF graph.
Most of the examples here will use a knowledge graph with ~300 nodes and ~2000 edges.
This is a *non-trivial* size, especially when you start working with some graph algorithms.
Again, this tutorial has learning as its main intent, and this size of graph is ideal for running queries, validation, graph algorithms, visualization, etc., with the kinds of compute and memory resources available on contemporary laptops.
In other words, we prioritize datasets that are large enough for examples to illustrate common use cases, though small enough for learners to understand.
* graphs with 10^6 or more nodes are generally needed for deep learning
* graphs with up to 10^8 nodes can run on contemporary laptops
* larger graphs require hardware accelerators (e.g., GPUs) or cloud-based clusters
The full `recipes.tsv` dataset includes nearly 250,000 recipes. In some of the later examples, we'll work with that entire dataset – which is definitely non-trivial.
```
!pip install yfinance
!pip install GetOldTweets3
!pip install treeinterpreter
import datetime
import GetOldTweets3 as got
import yfinance as yf
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import csv
import sys
import re
import string
import json
import ast  # used by TweetCleaner.clean_tweets for byte-string tweets
import os
import nltk
nltk.download('vader_lexicon')
nltk.download('wordnet')
nltk.download('stopwords')
nltk.download('punkt')
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.sentiment import SentimentAnalyzer
from nltk.sentiment.vader import SentimentIntensityAnalyzer
import unicodedata
sentiment_i_a = SentimentIntensityAnalyzer()
from nltk.corpus import subjectivity
from nltk.sentiment.util import *
from sklearn.model_selection import train_test_split
from treeinterpreter import treeinterpreter as ti
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn import svm
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error
from math import sqrt
# Get hourly stock data
def getHourlyStocks(stockname):
company = yf.Ticker(stockname)
stockdata = company.history(interval="60m")
stockdata.to_csv('stockData_' + stockname +'.csv')
# process stock data to get date and time separately
def processStockData(stockname):
stockdata = pd.read_csv('stockData_' + stockname +'.csv',encoding='utf-8')
stockdata.head()
temp = str(stockdata['Datetime'].values)
new = temp.split("\n")
temp1 = ' '.join(str(new).split())
temp1 = temp1.replace('"', '')
temp1 = temp1.replace('\'', '')
temp1 = temp1.replace(',', '')
temp1 = temp1.replace('[', '')
temp1 = temp1.replace(']', '')
temp1 = temp1.replace('-04:00', '')
new2 = temp1.split(" ")
new2 = [x.strip() for x in new2 if x.strip()]
indx = 0
for i in range(0,len(new2),2):
stockdata.at[indx,'date'] = new2[i]
indx = indx + 1
indx = 0
for j in range(1,len(new2),2):
stockdata.at[indx,'time'] = new2[j]
indx = indx + 1
stockdata = stockdata.drop(columns='Datetime')  # positional axis arg was removed in pandas 2.0
stockdata['time'] = stockdata['time'].apply(lambda x: datetime.datetime.strptime(x,'%H:%M:%S').time())
stockdata.head()
stockdata.to_csv('processedStockData_' + stockname +'.csv')
startDate = stockdata['date'].iloc[0]
endDate = stockdata['date'].iloc[len(stockdata)-1]
return startDate, endDate
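# A more direct alternative to the string splitting above: parse the
# Datetime column once with pandas and derive date/time from it (a sketch;
# assumes the column parses with pd.to_datetime, as yfinance output does).
def split_datetime(df, col="Datetime"):
    ts = pd.to_datetime(df[col]).dt.tz_localize(None)  # drop the -04:00 offset
    out = df.drop(columns=[col]).copy()
    out["date"] = ts.dt.date
    out["time"] = ts.dt.time
    return out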
#Method for data cleaning
class TweetCleaner:
def __init__(self):
self.stop_words = set(stopwords.words('english'))
self.punc_table = str.maketrans("", "", string.punctuation) # to remove punctuation from each word in tokenize
def compound_word_split(self, compound_word):
matches = re.finditer('.+?(?:(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])|$)', compound_word)
return [m.group(0) for m in matches]
def remove_non_ascii_chars(self, text):
return ''.join([w if ord(w) < 128 else ' ' for w in text])
def remove_hyperlinks(self,text):
return ' '.join([w for w in text.split(' ') if not 'http' in w])
def get_cleaned_text(self, text):
cleaned_tweet = text.replace('\"','').replace('\'','').replace('-',' ')
cleaned_tweet = self.remove_non_ascii_chars(cleaned_tweet)
if re.match(r'RT @[_A-Za-z0-9]+:',cleaned_tweet):
cleaned_tweet = cleaned_tweet[cleaned_tweet.index(':')+2:]
cleaned_tweet = self.remove_hyperlinks(cleaned_tweet)
cleaned_tweet = cleaned_tweet.replace('#','HASHTAGSYMBOL').replace('@','ATSYMBOL') # to avoid being removed while removing punctuations
tokens = [w.translate(self.punc_table) for w in word_tokenize(cleaned_tweet)] # remove punctuations and tokenize
tokens = [nltk.WordNetLemmatizer().lemmatize(w) for w in tokens if not w.lower() in self.stop_words and len(w)>1] # remove stopwords and single length words
cleaned_tweet = ' '.join(tokens)
cleaned_tweet = cleaned_tweet.replace('HASHTAGSYMBOL','#').replace('ATSYMBOL','@')
cleaned_tweet = cleaned_tweet
return cleaned_tweet
def clean_tweets(self, tweets, is_bytes = False):
test_tweet_list = []
for tweet in tweets:
if is_bytes:
test_tweet_list.append(self.get_cleaned_text(ast.literal_eval(tweet).decode("UTF-8")))
else:
test_tweet_list.append(self.get_cleaned_text(tweet))
return test_tweet_list
def clean_single_tweet(self, tweet, is_bytes = False):
if is_bytes:
return self.get_cleaned_text(ast.literal_eval(tweet).decode("UTF-8"))
return self.get_cleaned_text(tweet)
def cleaned_file_creator(self, op_file_name, value1, value2):
csvFile = open(op_file_name, 'w+')
csvWriter = csv.writer(csvFile)
for tweet in range(len(value1)):
csvWriter.writerow([value1[tweet], value2[tweet]])
csvFile.close()
#fetch tweet data
def fetchTweets(stockname, startDate, endDate):
csvFile = open('tweets_' + stockname + '.csv', 'a',encoding="utf-8")
csvWriter = csv.writer(csvFile, lineterminator= '\n')
cleanObj = TweetCleaner()
tweetCriteria = got.manager.TweetCriteria().setQuerySearch(stockname).setSince(startDate).setUntil(endDate).setTopTweets("true")
tweets = got.manager.TweetManager.getTweets(tweetCriteria)
try:
for tweet in tweets:
tweet_text = tweet.text.encode('utf-8')
tweet_text = cleanObj.get_cleaned_text(tweet_text.decode())
tweetDate = tweet.date
csvWriter.writerow([tweetDate, tweet.text])
except BaseException as e:
print('failed on_status,',str(e))
# process tweet data to get date and time separately
def processTweetData(stockname):
columns=['Date','Tweets']
tweets = pd.read_csv('tweets_' + stockname + '.csv',encoding='utf-8', names=columns, header=None)
temp = str(tweets['Date'].values)
new = temp.split("\n")
temp1 = ' '.join(str(new).split())
temp1 = temp1.replace('"', '')
temp1 = temp1.replace('\'', '')
temp1 = temp1.replace(',', '')
temp1 = temp1.replace('[', '')
temp1 = temp1.replace(']', '')
temp1 = temp1.replace('+00:00', '')
new2 = temp1.split(" ")
new2 = [x.strip() for x in new2 if x.strip()]
indx = 0
for i in range(0,len(new2),2):
tweets.at[indx,'date'] = new2[i]
indx = indx + 1
indx = 0
for j in range(1,len(new2),2):
tweets.at[indx,'time'] = new2[j]
indx = indx + 1
tweets = tweets.drop(columns='Date')  # positional axis arg was removed in pandas 2.0
tweets.head()
tweets.to_csv('processedTweets_' + stockname + '.csv')
# process both stock and tweet data to get prices for respective date and time
def processData(stockname):
date_time_obj1 = datetime.datetime.strptime('09:30:00', '%H:%M:%S').time()
date_time_obj2 = datetime.datetime.strptime('10:30:00', '%H:%M:%S').time()
date_time_obj3 = datetime.datetime.strptime('11:30:00', '%H:%M:%S').time()
date_time_obj4 = datetime.datetime.strptime('12:30:00', '%H:%M:%S').time()
date_time_obj5 = datetime.datetime.strptime('13:30:00', '%H:%M:%S').time()
date_time_obj6 = datetime.datetime.strptime('14:30:00', '%H:%M:%S').time()
date_time_obj7 = datetime.datetime.strptime('15:30:00', '%H:%M:%S').time()
column_names = ["date", "time", "Tweets"]
df = pd.DataFrame(columns = column_names)
tweets = pd.read_csv('processedTweets_' + stockname + '.csv',encoding='utf-8')
tweets['time'] = tweets['time'].apply(lambda x: datetime.datetime.strptime(x,'%H:%M:%S').time())
readStockData = pd.read_csv('processedStockData_' + stockname +'.csv')
indx1 = 0
indx2 = 1
indx3 = 2
indx4 = 3
indx5 = 4
indx6 = 5
get_tweet1 = ""
get_tweet2 = ""
get_tweet3 = ""
get_tweet4 = ""
get_tweet5 = ""
get_tweet6 = ""
# mapping hourly tweets
for i in range(0,len(tweets)-1):
get_date= tweets.date.iloc[i]
next_date= tweets.date.iloc[i+1]
if(str(get_date)==str(next_date)):
# check time
if tweets.time.iloc[i] > date_time_obj1 and tweets.time.iloc[i] < date_time_obj2:
get_tweet1 = get_tweet1 + tweets.Tweets.iloc[i] + " "
if tweets.time.iloc[i] > date_time_obj2 and tweets.time.iloc[i] < date_time_obj3:
get_tweet2 = get_tweet2 + tweets.Tweets.iloc[i] + " "
if tweets.time.iloc[i] > date_time_obj3 and tweets.time.iloc[i] < date_time_obj4:
get_tweet3 = get_tweet3 + tweets.Tweets.iloc[i] + " "
if tweets.time.iloc[i] > date_time_obj4 and tweets.time.iloc[i] < date_time_obj5:
get_tweet4 = get_tweet4 + tweets.Tweets.iloc[i] + " "
if tweets.time.iloc[i] > date_time_obj5 and tweets.time.iloc[i] < date_time_obj6:
get_tweet5 = get_tweet5 + tweets.Tweets.iloc[i] + " "
if tweets.time.iloc[i] > date_time_obj6 and tweets.time.iloc[i] < date_time_obj7:
get_tweet6 = get_tweet6 + tweets.Tweets.iloc[i] + " "
if(str(get_date)!=str(next_date)):
df.at[indx1,'date'] = get_date
df.at[indx1,'time'] = date_time_obj2
df.at[indx1,'Tweets'] = get_tweet1
df.at[indx2,'date'] = get_date
df.at[indx2,'time'] = date_time_obj3
df.at[indx2,'Tweets'] = get_tweet2
df.at[indx3,'date'] = get_date
df.at[indx3,'time'] = date_time_obj4
df.at[indx3,'Tweets'] = get_tweet3
df.at[indx4,'date'] = get_date
df.at[indx4,'time'] = date_time_obj5
df.at[indx4,'Tweets'] = get_tweet4
df.at[indx5,'date'] = get_date
df.at[indx5,'time'] = date_time_obj6
df.at[indx5,'Tweets'] = get_tweet5
df.at[indx6,'date'] = get_date
df.at[indx6,'time'] = date_time_obj7
df.at[indx6,'Tweets'] = get_tweet6
indx1 = indx1 + 6
indx2 = indx2 + 6
indx3 = indx3 + 6
indx4 = indx4 + 6
indx5 = indx5 + 6
indx6 = indx6 + 6
get_tweet1 = ""
get_tweet2 = ""
get_tweet3 = ""
get_tweet4 = ""
get_tweet5 = ""
get_tweet6 = ""
# drop rows if tweets are not present
df['Tweets'].replace('', np.nan, inplace=True)
df.dropna(subset=['Tweets'], inplace=True)
df.reset_index(drop=True, inplace=True)
df.head()
# map prices for respective date and time
df['Prices']=""
for i in range (0,len(df)):
for j in range (0,len(readStockData)):
get_tweet_date = df.date.iloc[i]
get_tweet_time = df.time.iloc[i]
get_stock_date = readStockData.date.iloc[j]
get_stock_time = readStockData.time.iloc[j]
if(str(get_stock_date)==str(get_tweet_date)):
if(str(get_tweet_time) == str(get_stock_time)):
df.at[i,'Prices'] = int(readStockData.Close[j])
break
# dropping rows if prices are not available
df['Prices'].replace('', np.nan, inplace=True)
df.dropna(subset=['Prices'], inplace=True)
df.reset_index(drop=True, inplace=True)
df['Prices'] = df['Prices'].apply(np.int64)
df.to_csv('processedData_' + stockname +'.csv')
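# The hourly windows above can be sketched with a groupby instead of the
# hand-rolled date_time_obj1..7 comparisons; this version aligns buckets to
# clock hours with dt.floor, an approximation of the 09:30-10:30 windows.
def hourly_tweet_buckets(tweets):
    ts = pd.to_datetime(tweets["date"].astype(str) + " " + tweets["time"].astype(str))
    return (tweets.assign(bucket=ts.dt.floor("h"))
            .groupby("bucket")["Tweets"]
            .apply(" ".join))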
# performing sentiment analysis
def sentimentAnalysis(stockname):
df = pd.read_csv('processedData_' + stockname +'.csv')
df["Comp"] = ''
df["Negative"] = ''
df["Neutral"] = ''
df["Positive"] = ''
for indexx, row in df.T.items():  # iteritems() was removed in pandas 2.0
try:
sentence_i = unicodedata.normalize('NFKD', df.loc[indexx, 'Tweets'])
sentence_sentiment = sentiment_i_a.polarity_scores(sentence_i)
df.at[indexx, 'Comp'] = sentence_sentiment['compound']
df.at[indexx, 'Negative'] = sentence_sentiment['neg']
df.at[indexx, 'Neutral'] = sentence_sentiment['neu']
df.at[indexx, 'Positive'] = sentence_sentiment['pos']
except TypeError as e:
print('failed on_status,', str(e))
print(df.head())
df.to_csv('sentimentAnalysis_' + stockname +'.csv')
posi=0
nega=0
neutral = 0
for i in range (0,len(df)):
get_val=df.Comp[i]
if(float(get_val)<(0)):
nega=nega+1
if(float(get_val) > 0):
posi=posi+1
if(float(get_val)==(0)):
neutral=neutral+1
posper=(posi/(len(df)))*100
negper=(nega/(len(df)))*100
neutralper=(neutral/(len(df)))*100
print("% of positive tweets= ",posper)
print("% of negative tweets= ",negper)
print("% of neutral tweets= ",neutralper)
arr=np.asarray([posper,negper,neutralper], dtype=int)
plt.figure()
plt.pie(arr,labels=['positive','negative', 'neutral'])
plt.plot()
# Predicting stock prices using Random Forest model
def RandomForestModel(stockname):
df = pd.read_csv('sentimentAnalysis_' + stockname +'.csv')
train, test = train_test_split(df, shuffle=False, test_size=0.2)
print(train.size)
print(test.size)
sentiment_score_list_train = []
for date, row in train.T.items():
sentiment_score = np.asarray([df.loc[date, 'Negative'], df.loc[date, 'Neutral'], df.loc[date, 'Positive']])
sentiment_score_list_train.append(sentiment_score)
numpy_df_train = np.asarray(sentiment_score_list_train)
sentiment_score_list_test = []
for date, row in test.T.items():
sentiment_score = np.asarray([df.loc[date, 'Negative'], df.loc[date, 'Neutral'], df.loc[date, 'Positive']])
sentiment_score_list_test.append(sentiment_score)
numpy_df_test = np.asarray(sentiment_score_list_test)
y_train = pd.DataFrame(train['Prices'])
y_test = pd.DataFrame(test['Prices'])
rf = RandomForestRegressor()
rf.fit(numpy_df_train, y_train.values.ravel())  # 1-D target avoids a DataConversionWarning
prediction, bias, contributions = ti.predict(rf, numpy_df_test)
print("\n\n")
plt.figure()
plt.plot(test['Prices'].iloc[:].values)
plt.plot(prediction.flatten())
plt.title('Random Forest predicted prices')
plt.ylabel('Stock Prices')
plt.xlabel('Days')
plt.legend(['actual', 'predicted'])
plt.show()
print("\n\n")
print("RMSE value for Random Forest Model : ")
rmse = sqrt(mean_squared_error(y_test, prediction.flatten()))
print(rmse)
print("\n\n")
# Predicting stock prices using Support Vector Regression model
def SVRModel(stockname):
df = pd.read_csv('sentimentAnalysis_' + stockname +'.csv')
train, test = train_test_split(df, shuffle=False, test_size=0.2)
print(train.size)
print(test.size)
sentiment_score_list_train = []
for date, row in train.T.items():
sentiment_score = np.asarray([df.loc[date, 'Negative'], df.loc[date, 'Neutral'], df.loc[date, 'Positive']])
sentiment_score_list_train.append(sentiment_score)
numpy_df_train = np.asarray(sentiment_score_list_train)
sentiment_score_list_test = []
for date, row in test.T.items():
sentiment_score = np.asarray([df.loc[date, 'Negative'], df.loc[date, 'Neutral'], df.loc[date, 'Positive']])
sentiment_score_list_test.append(sentiment_score)
numpy_df_test = np.asarray(sentiment_score_list_test)
y_train = pd.DataFrame(train['Prices'])
y_test = pd.DataFrame(test['Prices'])
svr_rbf = SVR(kernel='rbf', C=1e6, gamma=0.1)
svr_rbf.fit(numpy_df_train, y_train.values.flatten())
output_test_svm = svr_rbf.predict(numpy_df_test)
plt.figure()
plt.plot(test['Prices'].iloc[:].values)
plt.plot(output_test_svm)
plt.title('SVM predicted prices')
plt.ylabel('Stock Prices')
plt.xlabel('Days')
plt.legend(['actual', 'predicted'])
plt.show()
print("\n\n")
print("RMSE value for Support Vector Regression Model : ")
rmse = sqrt(mean_squared_error(y_test, output_test_svm))
print(rmse)
print("\n\n")
def main():
name = input("Enter a valid STOCKNAME of the Corporation: ") #enter the name of the company
if(len(name) > 0):
STOCKNAME = name
else:
STOCKNAME = "AAPL"
#Get Stock Details and get date and time
print("------------------------------ Getting Stock details and processing it -----------------------------")
getHourlyStocks(STOCKNAME)
startDate, endDate = processStockData(STOCKNAME)
print("Stock Details fetched! \n")
#Fetching tweets and get date and time
print("------------------------------ Fetching Tweets and processing it-----------------------------")
fetchTweets(STOCKNAME, startDate, endDate)
processTweetData(STOCKNAME)
print("Tweets fetched! \n")
# Process data by fetching Tweets and prices for respective date and hour
print("------------- Process data by fetching Tweets and prices for respective date and hour ----------------------")
processData(STOCKNAME)
print("Completed Data Processing! \n")
# Sentiment analysis of tweets on hourly basis
print("------------------------------ Sentiment analysis of tweets on hourly basis-----------------------------")
sentimentAnalysis(STOCKNAME)
print("Completed sentiment Analysis! \n")
# Predicting stock prices using Random Forest model
print("------------------------------ Predicting stock prices using Random Forest model-----------------------------")
RandomForestModel(STOCKNAME)
print("Completed Random Forest prediction! \n")
# Predicting stock prices using Support Vector Regression model
print("------------------------------ Predicting stock prices using Support Vector Regression model-----------------------------")
SVRModel(STOCKNAME)
print("Completed Support Vector Regression prediction! \n")
main()
```
```
import pandas as pd
# load languages.txt tab separated
df = pd.read_csv("webapp/data/languages.txt", sep="\t")
df
# 🇦 🇧 🇨 🇩 🇪 🇫 🇬 🇭 🇮 🇯 🇰 🇱 🇲 🇳 🇴 🇵 🇶 🇷 🇸 🇹 🇺 🇻 🇼 🇽 🇾 🇿
#
ascii2lang = {
"a": "🇦",
"b": "🇧",
"c": "🇨",
"d": "🇩",
"e": "🇪",
"f": "🇫",
"g": "🇬",
"h": "🇭",
"i": "🇮",
"j": "🇯",
"k": "🇰",
"l": "🇱",
"m": "🇲",
"n": "🇳",
"o": "🇴",
"p": "🇵",
"q": "🇶",
"r": "🇷",
"s": "🇸",
"t": "🇹",
"u": "🇺",
"v": "🇻",
"w": "🇼",
"x": "🇽",
"y": "🇾",
"z": "🇿",
}
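# The table above can be computed rather than typed out: the regional
# indicator letters form a contiguous Unicode block starting at U+1F1E6
# ('A'), so each ASCII letter maps to its indicator by a fixed offset.
def flag_emoji(code):
    return "".join(chr(0x1F1E6 + ord(c) - ord("a")) for c in code.lower())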
languages = {}
for _, row in df.iterrows():
# column '639-1' is the language code
language_code = row["639-1"]
language_name = row["ISO language name"].split(",")[0].split("(")[0].strip()
language_name_native = (
row["Native name (endonym)"].split(",")[0].split("(")[0].strip()
)
language_emoji = ascii2lang[language_code[0]] + ascii2lang[language_code[1]]
languages[language_code] = {
"language_code": language_code,
"language_name": language_name,
"language_name_native": language_name_native,
"language_emoji": language_emoji,
}
print(language_emoji, language_name, language_name_native)
import json
# save to JSON
with open("webapp/data/languages.json", "w") as f:
json.dump(languages, f)
# load arabic.json
import json
with open("arabic.json", "r") as f:
arabic = json.load(f)
arabic
import datetime
def get_todays_idx():
# 18992 - 195 = 18797 days after 1970-01-01 is 2021-06-19, the date of
# puzzle #0, so this counts days since the puzzle epoch
n_days = (datetime.datetime.utcnow() - datetime.datetime(1970, 1, 1)).days
idx = n_days - 18992 + 195
return idx
get_todays_idx()
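# Equivalent date-based form: 18992 - 195 = 18797 is the number of days
# from 1970-01-01 to 2021-06-19, the assumed puzzle-zero date, so the
# index is simply the days elapsed since that date.
def wordle_idx(d):
    return (d - datetime.date(2021, 6, 19)).days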
words = [
"cigar",
"rebut",
"sissy",
"humph",
"awake",
"blush",
"focal",
"evade",
"naval",
"serve",
"heath",
"dwarf",
"model",
"karma",
"stink",
"grade",
"quiet",
"bench",
"abate",
"feign",
"major",
"death",
"fresh",
"crust",
"stool",
"colon",
"abase",
"marry",
"react",
"batty",
"pride",
"floss",
"helix",
"croak",
"staff",
"paper",
"unfed",
"whelp",
"trawl",
"outdo",
"adobe",
"crazy",
"sower",
"repay",
"digit",
"crate",
"cluck",
"spike",
"mimic",
"pound",
"maxim",
"linen",
"unmet",
"flesh",
"booby",
"forth",
"first",
"stand",
"belly",
"ivory",
"seedy",
"print",
"yearn",
"drain",
"bribe",
"stout",
"panel",
"crass",
"flume",
"offal",
"agree",
"error",
"swirl",
"argue",
"bleed",
"delta",
"flick",
"totem",
"wooer",
"front",
"shrub",
"parry",
"biome",
"lapel",
"start",
"greet",
"goner",
"golem",
"lusty",
"loopy",
"round",
"audit",
"lying",
"gamma",
"labor",
"islet",
"civic",
"forge",
"corny",
"moult",
"basic",
"salad",
"agate",
"spicy",
"spray",
"essay",
"fjord",
"spend",
"kebab",
"guild",
"aback",
"motor",
"alone",
"hatch",
"hyper",
"thumb",
"dowry",
"ought",
"belch",
"dutch",
"pilot",
"tweed",
"comet",
"jaunt",
"enema",
"steed",
"abyss",
"growl",
"fling",
"dozen",
"boozy",
"erode",
"world",
"gouge",
"click",
"briar",
"great",
"altar",
"pulpy",
"blurt",
"coast",
"duchy",
"groin",
"fixer",
"group",
"rogue",
"badly",
"smart",
"pithy",
"gaudy",
"chill",
"heron",
"vodka",
"finer",
"surer",
"radio",
"rouge",
"perch",
"retch",
"wrote",
"clock",
"tilde",
"store",
"prove",
"bring",
"solve",
"cheat",
"grime",
"exult",
"usher",
"epoch",
"triad",
"break",
"rhino",
"viral",
"conic",
"masse",
"sonic",
"vital",
"trace",
"using",
"peach",
"champ",
"baton",
"brake",
"pluck",
"craze",
"gripe",
"weary",
"picky",
"acute",
"ferry",
"aside",
"tapir",
"troll",
"unify",
"rebus",
"boost",
"truss",
"siege",
"tiger",
"banal",
"slump",
"crank",
"gorge",
"query",
"drink",
"favor",
"abbey",
"tangy",
"panic",
"solar",
"shire",
"proxy",
"point",
"robot",
"prick",
"wince",
"crimp",
"knoll",
"sugar",
"whack",
"mount",
"perky",
"could",
"wrung",
"light",
"those",
"moist",
"shard",
"pleat",
"aloft",
"skill",
"elder",
"frame",
"humor",
"pause",
"ulcer",
"ultra",
"robin",
"cynic",
"aroma",
"caulk",
"shake",
"dodge",
"swill",
"tacit",
"other",
"thorn",
"trove",
"bloke",
"vivid",
"spill",
"chant",
"choke",
"rupee",
"nasty",
"mourn",
"ahead",
"brine",
"cloth",
"hoard",
"sweet",
"month",
"lapse",
"watch",
"today",
"focus",
"smelt",
"tease",
"cater",
"movie",
"saute",
"allow",
"renew",
"their",
"slosh",
"purge",
"chest",
"depot",
"epoxy",
"nymph",
"found",
"shall",
"stove",
"lowly",
"snout",
"trope",
"fewer",
"shawl",
"natal",
"comma",
"foray",
"scare",
"stair",
"black",
"squad",
"royal",
"chunk",
"mince",
"shame",
"cheek",
"ample",
"flair",
"foyer",
"cargo",
"oxide",
"plant",
"olive",
"inert",
"askew",
"heist",
"shown",
"zesty",
"trash",
"larva",
"forgo",
"story",
"hairy",
"train",
"homer",
"badge",
"midst",
"canny",
"fetus",
"butch",
"farce",
"slung",
"tipsy",
"metal",
"yield",
"delve",
"being",
"scour",
"glass",
"gamer",
"scrap",
"money",
"hinge",
"album",
"vouch",
"asset",
"tiara",
"crept",
"bayou",
"atoll",
"manor",
"creak",
"showy",
"phase",
"froth",
"depth",
"gloom",
"flood",
"trait",
"girth",
"piety",
"goose",
"float",
"donor",
"atone",
"primo",
"apron",
"blown",
"cacao",
"loser",
"input",
"gloat",
"awful",
"brink",
"smite",
"beady",
"rusty",
"retro",
"droll",
"gawky",
"hutch",
"pinto",
"egret",
"lilac",
"sever",
"field",
"fluff",
"flack",
"agape",
"voice",
"stead",
"stalk",
"berth",
"madam",
"night",
"bland",
"liver",
"wedge",
"augur",
"roomy",
"wacky",
"flock",
"angry",
"trite",
"aphid",
"tryst",
"midge",
"power",
"elope",
"cinch",
"motto",
"stomp",
"upset",
"bluff",
"cramp",
"quart",
"coyly",
"youth",
"rhyme",
"buggy",
"alien",
"smear",
"unfit",
"patty",
"cling",
"glean",
"label",
"hunky",
"khaki",
"poker",
"gruel",
"twice",
"twang",
"shrug",
"treat",
"waste",
"merit",
"woven",
"needy",
"clown",
"widow",
"irony",
"ruder",
"gauze",
"chief",
"onset",
"prize",
"fungi",
"charm",
"gully",
"inter",
"whoop",
"taunt",
"leery",
"class",
"theme",
"lofty",
"tibia",
"booze",
"alpha",
"thyme",
"doubt",
"parer",
"chute",
"stick",
"trice",
"alike",
"recap",
"saint",
"glory",
"grate",
"admit",
"brisk",
"soggy",
"usurp",
"scald",
"scorn",
"leave",
"twine",
"sting",
"bough",
"marsh",
"sloth",
"dandy",
"vigor",
"howdy",
"enjoy",
"valid",
"ionic",
"equal",
"floor",
"catch",
"spade",
"stein",
"exist",
"quirk",
"denim",
"grove",
"spiel",
"mummy",
"fault",
"foggy",
"flout",
"carry",
"sneak",
"libel",
"waltz",
"aptly",
"piney",
"inept",
"aloud",
"photo",
"dream",
"stale",
"unite",
"snarl",
"baker",
"there",
"glyph",
"pooch",
"hippy",
"spell",
"folly",
"louse",
"gulch",
"vault",
"godly",
"threw",
"fleet",
"grave",
"inane",
"shock",
"crave",
"spite",
"valve",
"skimp",
"claim",
"rainy",
"musty",
"pique",
"daddy",
"quasi",
"arise",
"aging",
"valet",
"opium",
"avert",
"stuck",
"recut",
"mulch",
"genre",
"plume",
"rifle",
"count",
"incur",
"total",
"wrest",
"mocha",
"deter",
"study",
"lover",
"safer",
"rivet",
"funny",
"smoke",
"mound",
"undue",
"sedan",
"pagan",
"swine",
"guile",
"gusty",
"equip",
"tough",
"canoe",
"chaos",
"covet",
"human",
"udder",
"lunch",
"blast",
"stray",
"manga",
"melee",
"lefty",
"quick",
"paste",
"given",
"octet",
"risen",
"groan",
"leaky",
"grind",
"carve",
"loose",
"sadly",
"spilt",
"apple",
"slack",
"honey",
"final",
"sheen",
"eerie",
"minty",
"slick",
"derby",
"wharf",
"spelt",
"coach",
"erupt",
"singe",
"price",
"spawn",
"fairy",
"jiffy",
"filmy",
"stack",
"chose",
"sleep",
"ardor",
"nanny",
"niece",
"woozy",
"handy",
"grace",
"ditto",
"stank",
"cream",
"usual",
"diode",
"valor",
"angle",
"ninja",
"muddy",
"chase",
"reply",
"prone",
"spoil",
"heart",
"shade",
"diner",
"arson",
"onion",
"sleet",
"dowel",
"couch",
"palsy",
"bowel",
"smile",
"evoke",
"creek",
"lance",
"eagle",
"idiot",
"siren",
"built",
"embed",
"award",
"dross",
"annul",
"goody",
"frown",
"patio",
"laden",
"humid",
"elite",
"lymph",
"edify",
"might",
"reset",
"visit",
"gusto",
"purse",
"vapor",
"crock",
"write",
"sunny",
"loath",
"chaff",
"slide",
"queer",
"venom",
"stamp",
"sorry",
"still",
"acorn",
"aping",
"pushy",
"tamer",
"hater",
"mania",
"awoke",
"brawn",
"swift",
"exile",
"birch",
"lucky",
"freer",
"risky",
"ghost",
"plier",
"lunar",
"winch",
"snare",
"nurse",
"house",
"borax",
"nicer",
"lurch",
"exalt",
"about",
"savvy",
"toxin",
"tunic",
"pried",
"inlay",
"chump",
"lanky",
"cress",
"eater",
"elude",
"cycle",
"kitty",
"boule",
"moron",
"tenet",
"place",
"lobby",
"plush",
"vigil",
"index",
"blink",
"clung",
"qualm",
"croup",
"clink",
"juicy",
"stage",
"decay",
"nerve",
"flier",
"shaft",
"crook",
"clean",
"china",
"ridge",
"vowel",
"gnome",
"snuck",
"icing",
"spiny",
"rigor",
"snail",
"flown",
"rabid",
"prose",
"thank",
"poppy",
"budge",
"fiber",
"moldy",
"dowdy",
"kneel",
"track",
"caddy",
"quell",
"dumpy",
"paler",
"swore",
"rebar",
"scuba",
"splat",
"flyer",
"horny",
"mason",
"doing",
"ozone",
"amply",
"molar",
"ovary",
"beset",
"queue",
"cliff",
"magic",
"truce",
"sport",
"fritz",
"edict",
"twirl",
"verse",
"llama",
"eaten",
"range",
"whisk",
"hovel",
"rehab",
"macaw",
"sigma",
"spout",
"verve",
"sushi",
"dying",
"fetid",
"brain",
"buddy",
"thump",
"scion",
"candy",
"chord",
"basin",
"march",
"crowd",
"arbor",
"gayly",
"musky",
"stain",
"dally",
"bless",
"bravo",
"stung",
"title",
"ruler",
"kiosk",
"blond",
"ennui",
"layer",
"fluid",
"tatty",
"score",
"cutie",
"zebra",
"barge",
"matey",
"bluer",
"aider",
"shook",
"river",
"privy",
"betel",
"frisk",
"bongo",
"begun",
"azure",
"weave",
"genie",
"sound",
"glove",
"braid",
"scope",
"wryly",
"rover",
"assay",
"ocean",
"bloom",
"irate",
"later",
"woken",
"silky",
"wreck",
"dwelt",
"slate",
"smack",
"solid",
"amaze",
"hazel",
"wrist",
"jolly",
"globe",
"flint",
"rouse",
"civil",
"vista",
"relax",
"cover",
"alive",
"beech",
"jetty",
"bliss",
"vocal",
"often",
"dolly",
"eight",
"joker",
"since",
"event",
"ensue",
"shunt",
"diver",
"poser",
"worst",
"sweep",
"alley",
"creed",
"anime",
"leafy",
"bosom",
"dunce",
"stare",
"pudgy",
"waive",
"choir",
"stood",
"spoke",
"outgo",
"delay",
"bilge",
"ideal",
"clasp",
"seize",
"hotly",
"laugh",
"sieve",
"block",
"meant",
"grape",
"noose",
"hardy",
"shied",
"drawl",
"daisy",
"putty",
"strut",
"burnt",
"tulip",
"crick",
"idyll",
"vixen",
"furor",
"geeky",
"cough",
"naive",
"shoal",
"stork",
"bathe",
"aunty",
"check",
"prime",
"brass",
"outer",
"furry",
"razor",
"elect",
"evict",
"imply",
"demur",
"quota",
"haven",
"cavil",
"swear",
"crump",
"dough",
"gavel",
"wagon",
"salon",
"nudge",
"harem",
"pitch",
"sworn",
"pupil",
"excel",
"stony",
"cabin",
"unzip",
"queen",
"trout",
"polyp",
"earth",
"storm",
"until",
"taper",
"enter",
"child",
"adopt",
"minor",
"fatty",
"husky",
"brave",
"filet",
"slime",
"glint",
"tread",
"steal",
"regal",
"guest",
"every",
"murky",
"share",
"spore",
"hoist",
"buxom",
"inner",
"otter",
"dimly",
"level",
"sumac",
"donut",
"stilt",
"arena",
"sheet",
"scrub",
"fancy",
"slimy",
"pearl",
"silly",
"porch",
"dingo",
"sepia",
"amble",
"shady",
"bread",
"friar",
"reign",
"dairy",
"quill",
"cross",
"brood",
"tuber",
"shear",
"posit",
"blank",
"villa",
"shank",
"piggy",
"freak",
"which",
"among",
"fecal",
"shell",
"would",
"algae",
"large",
"rabbi",
"agony",
"amuse",
"bushy",
"copse",
"swoon",
"knife",
"pouch",
"ascot",
"plane",
"crown",
"urban",
"snide",
"relay",
"abide",
"viola",
"rajah",
"straw",
"dilly",
"crash",
"amass",
"third",
"trick",
"tutor",
"woody",
"blurb",
"grief",
"disco",
"where",
"sassy",
"beach",
"sauna",
"comic",
"clued",
"creep",
"caste",
"graze",
"snuff",
"frock",
"gonad",
"drunk",
"prong",
"lurid",
"steel",
"halve",
"buyer",
"vinyl",
"utile",
"smell",
"adage",
"worry",
"tasty",
"local",
"trade",
"finch",
"ashen",
"modal",
"gaunt",
"clove",
"enact",
"adorn",
"roast",
"speck",
"sheik",
"missy",
"grunt",
"snoop",
"party",
"touch",
"mafia",
"emcee",
"array",
"south",
"vapid",
"jelly",
"skulk",
"angst",
"tubal",
"lower",
"crest",
"sweat",
"cyber",
"adore",
"tardy",
"swami",
"notch",
"groom",
"roach",
"hitch",
"young",
"align",
"ready",
"frond",
"strap",
"puree",
"realm",
"venue",
"swarm",
"offer",
"seven",
"dryer",
"diary",
"dryly",
"drank",
"acrid",
"heady",
"theta",
"junto",
"pixie",
"quoth",
"bonus",
"shalt",
"penne",
"amend",
"datum",
"build",
"piano",
"shelf",
"lodge",
"suing",
"rearm",
"coral",
"ramen",
"worth",
"psalm",
"infer",
"overt",
"mayor",
"ovoid",
"glide",
"usage",
"poise",
"randy",
"chuck",
"prank",
"fishy",
"tooth",
"ether",
"drove",
"idler",
"swath",
"stint",
"while",
"begat",
"apply",
"slang",
"tarot",
"radar",
"credo",
"aware",
"canon",
"shift",
"timer",
"bylaw",
"serum",
"three",
"steak",
"iliac",
"shirk",
"blunt",
"puppy",
"penal",
"joist",
"bunny",
"shape",
"beget",
"wheel",
"adept",
"stunt",
"stole",
"topaz",
"chore",
"fluke",
"afoot",
"bloat",
"bully",
"dense",
"caper",
"sneer",
"boxer",
"jumbo",
"lunge",
"space",
"avail",
"short",
"slurp",
"loyal",
"flirt",
"pizza",
"conch",
"tempo",
"droop",
"plate",
"bible",
"plunk",
"afoul",
"savoy",
"steep",
"agile",
"stake",
"dwell",
"knave",
"beard",
"arose",
"motif",
"smash",
"broil",
"glare",
"shove",
"baggy",
"mammy",
"swamp",
"along",
"rugby",
"wager",
"quack",
"squat",
"snaky",
"debit",
"mange",
"skate",
"ninth",
"joust",
"tramp",
"spurn",
"medal",
"micro",
"rebel",
"flank",
"learn",
"nadir",
"maple",
"comfy",
"remit",
"gruff",
"ester",
"least",
"mogul",
"fetch",
"cause",
"oaken",
"aglow",
"meaty",
"gaffe",
"shyly",
"racer",
"prowl",
"thief",
"stern",
"poesy",
"rocky",
"tweet",
"waist",
"spire",
"grope",
"havoc",
"patsy",
"truly",
"forty",
"deity",
"uncle",
"swish",
"giver",
"preen",
"bevel",
"lemur",
"draft",
"slope",
"annoy",
"lingo",
"bleak",
"ditty",
"curly",
"cedar",
"dirge",
"grown",
"horde",
"drool",
"shuck",
"crypt",
"cumin",
"stock",
"gravy",
"locus",
"wider",
"breed",
"quite",
"chafe",
"cache",
"blimp",
"deign",
"fiend",
"logic",
"cheap",
"elide",
"rigid",
"false",
"renal",
"pence",
"rowdy",
"shoot",
"blaze",
"envoy",
"posse",
"brief",
"never",
"abort",
"mouse",
"mucky",
"sulky",
"fiery",
"media",
"trunk",
"yeast",
"clear",
"skunk",
"scalp",
"bitty",
"cider",
"koala",
"duvet",
"segue",
"creme",
"super",
"grill",
"after",
"owner",
"ember",
"reach",
"nobly",
"empty",
"speed",
"gipsy",
"recur",
"smock",
"dread",
"merge",
"burst",
"kappa",
"amity",
"shaky",
"hover",
"carol",
"snort",
"synod",
"faint",
"haunt",
"flour",
"chair",
"detox",
"shrew",
"tense",
"plied",
"quark",
"burly",
"novel",
"waxen",
"stoic",
"jerky",
"blitz",
"beefy",
"lyric",
"hussy",
"towel",
"quilt",
"below",
"bingo",
"wispy",
"brash",
"scone",
"toast",
"easel",
"saucy",
"value",
"spice",
"honor",
"route",
"sharp",
"bawdy",
"radii",
"skull",
"phony",
"issue",
"lager",
"swell",
"urine",
"gassy",
"trial",
"flora",
"upper",
"latch",
"wight",
"brick",
"retry",
"holly",
"decal",
"grass",
"shack",
"dogma",
"mover",
"defer",
"sober",
"optic",
"crier",
"vying",
"nomad",
"flute",
"hippo",
"shark",
"drier",
"obese",
"bugle",
"tawny",
"chalk",
"feast",
"ruddy",
"pedal",
"scarf",
"cruel",
"bleat",
"tidal",
"slush",
"semen",
"windy",
"dusty",
"sally",
"igloo",
"nerdy",
"jewel",
"shone",
"whale",
"hymen",
"abuse",
"fugue",
"elbow",
"crumb",
"pansy",
"welsh",
"syrup",
"terse",
"suave",
"gamut",
"swung",
"drake",
"freed",
"afire",
"shirt",
"grout",
"oddly",
"tithe",
"plaid",
"dummy",
"broom",
"blind",
"torch",
"enemy",
"again",
"tying",
"pesky",
"alter",
"gazer",
"noble",
"ethos",
"bride",
"extol",
"decor",
"hobby",
"beast",
"idiom",
"utter",
"these",
"sixth",
"alarm",
"erase",
"elegy",
"spunk",
"piper",
"scaly",
"scold",
"hefty",
"chick",
"sooty",
"canal",
"whiny",
"slash",
"quake",
"joint",
"swept",
"prude",
"heavy",
"wield",
"femme",
"lasso",
"maize",
"shale",
"screw",
"spree",
"smoky",
"whiff",
"scent",
"glade",
"spent",
"prism",
"stoke",
"riper",
"orbit",
"cocoa",
"guilt",
"humus",
"shush",
"table",
"smirk",
"wrong",
"noisy",
"alert",
"shiny",
"elate",
"resin",
"whole",
"hunch",
"pixel",
"polar",
"hotel",
"sword",
"cleat",
"mango",
"rumba",
"puffy",
"filly",
"billy",
"leash",
"clout",
"dance",
"ovate",
"facet",
"chili",
"paint",
"liner",
"curio",
"salty",
"audio",
"snake",
"fable",
"cloak",
"navel",
"spurt",
"pesto",
"balmy",
"flash",
"unwed",
"early",
"churn",
"weedy",
"stump",
"lease",
"witty",
"wimpy",
"spoof",
"saner",
"blend",
"salsa",
"thick",
"warty",
"manic",
"blare",
"squib",
"spoon",
"probe",
"crepe",
"knack",
"force",
"debut",
"order",
"haste",
"teeth",
"agent",
"widen",
"icily",
"slice",
"ingot",
"clash",
"juror",
"blood",
"abode",
"throw",
"unity",
"pivot",
"slept",
"troop",
"spare",
"sewer",
"parse",
"morph",
"cacti",
"tacky",
"spool",
"demon",
"moody",
"annex",
"begin",
"fuzzy",
"patch",
"water",
"lumpy",
"admin",
"omega",
"limit",
"tabby",
"macho",
"aisle",
"skiff",
"basis",
"plank",
"verge",
"botch",
"crawl",
"lousy",
"slain",
"cubic",
"raise",
"wrack",
"guide",
"foist",
"cameo",
"under",
"actor",
"revue",
"fraud",
"harpy",
"scoop",
"climb",
"refer",
"olden",
"clerk",
"debar",
"tally",
"ethic",
"cairn",
"tulle",
"ghoul",
"hilly",
"crude",
"apart",
"scale",
"older",
"plain",
"sperm",
"briny",
"abbot",
"rerun",
"quest",
"crisp",
"bound",
"befit",
"drawn",
"suite",
"itchy",
"cheer",
"bagel",
"guess",
"broad",
"axiom",
"chard",
"caput",
"leant",
"harsh",
"curse",
"proud",
"swing",
"opine",
"taste",
"lupus",
"gumbo",
"miner",
"green",
"chasm",
"lipid",
"topic",
"armor",
"brush",
"crane",
"mural",
"abled",
"habit",
"bossy",
"maker",
"dusky",
"dizzy",
"lithe",
"brook",
"jazzy",
"fifty",
"sense",
"giant",
"surly",
"legal",
"fatal",
"flunk",
"began",
"prune",
"small",
"slant",
"scoff",
"torus",
"ninny",
"covey",
"viper",
"taken",
"moral",
"vogue",
"owing",
"token",
"entry",
"booth",
"voter",
"chide",
"elfin",
"ebony",
"neigh",
"minim",
"melon",
"kneed",
"decoy",
"voila",
"ankle",
"arrow",
"mushy",
"tribe",
"cease",
"eager",
"birth",
"graph",
"odder",
"terra",
"weird",
"tried",
"clack",
"color",
"rough",
"weigh",
"uncut",
"ladle",
"strip",
"craft",
"minus",
"dicey",
"titan",
"lucid",
"vicar",
"dress",
"ditch",
"gypsy",
"pasta",
"taffy",
"flame",
"swoop",
"aloof",
"sight",
"broke",
"teary",
"chart",
"sixty",
"wordy",
"sheer",
"leper",
"nosey",
"bulge",
"savor",
"clamp",
"funky",
"foamy",
"toxic",
"brand",
"plumb",
"dingy",
"butte",
"drill",
"tripe",
"bicep",
"tenor",
"krill",
"worse",
"drama",
"hyena",
"think",
"ratio",
"cobra",
"basil",
"scrum",
"bused",
"phone",
"court",
"camel",
"proof",
"heard",
"angel",
"petal",
"pouty",
"throb",
"maybe",
"fetal",
"sprig",
"spine",
"shout",
"cadet",
"macro",
"dodgy",
"satyr",
"rarer",
"binge",
"trend",
"nutty",
"leapt",
"amiss",
"split",
"myrrh",
"width",
"sonar",
"tower",
"baron",
"fever",
"waver",
"spark",
"belie",
"sloop",
"expel",
"smote",
"baler",
"above",
"north",
"wafer",
"scant",
"frill",
"awash",
"snack",
"scowl",
"frail",
"drift",
"limbo",
"fence",
"motel",
"ounce",
"wreak",
"revel",
"talon",
"prior",
"knelt",
"cello",
"flake",
"debug",
"anode",
"crime",
"salve",
"scout",
"imbue",
"pinky",
"stave",
"vague",
"chock",
"fight",
"video",
"stone",
"teach",
"cleft",
"frost",
"prawn",
"booty",
"twist",
"apnea",
"stiff",
"plaza",
"ledge",
"tweak",
"board",
"grant",
"medic",
"bacon",
"cable",
"brawl",
"slunk",
"raspy",
"forum",
"drone",
"women",
"mucus",
"boast",
"toddy",
"coven",
"tumor",
"truer",
"wrath",
"stall",
"steam",
"axial",
"purer",
"daily",
"trail",
"niche",
"mealy",
"juice",
"nylon",
"plump",
"merry",
"flail",
"papal",
"wheat",
"berry",
"cower",
"erect",
"brute",
"leggy",
"snipe",
"sinew",
"skier",
"penny",
"jumpy",
"rally",
"umbra",
"scary",
"modem",
"gross",
"avian",
"greed",
"satin",
"tonic",
"parka",
"sniff",
"livid",
"stark",
"trump",
"giddy",
"reuse",
"taboo",
"avoid",
"quote",
"devil",
"liken",
"gloss",
"gayer",
"beret",
"noise",
"gland",
"dealt",
"sling",
"rumor",
"opera",
"thigh",
"tonga",
"flare",
"wound",
"white",
"bulky",
"etude",
"horse",
"circa",
"paddy",
"inbox",
"fizzy",
"grain",
"exert",
"surge",
"gleam",
"belle",
"salvo",
"crush",
"fruit",
"sappy",
"taker",
"tract",
"ovine",
"spiky",
"frank",
"reedy",
"filth",
"spasm",
"heave",
"mambo",
"right",
"clank",
"trust",
"lumen",
"borne",
"spook",
"sauce",
"amber",
"lathe",
"carat",
"corer",
"dirty",
"slyly",
"affix",
"alloy",
"taint",
"sheep",
"kinky",
"wooly",
"mauve",
"flung",
"yacht",
"fried",
"quail",
"brunt",
"grimy",
"curvy",
"cagey",
"rinse",
"deuce",
"state",
"grasp",
"milky",
"bison",
"graft",
"sandy",
"baste",
"flask",
"hedge",
"girly",
"swash",
"boney",
"coupe",
"endow",
"abhor",
"welch",
"blade",
"tight",
"geese",
"miser",
"mirth",
"cloud",
"cabal",
"leech",
"close",
"tenth",
"pecan",
"droit",
"grail",
"clone",
"guise",
"ralph",
"tango",
"biddy",
"smith",
"mower",
"payee",
"serif",
"drape",
"fifth",
"spank",
"glaze",
"allot",
"truck",
"kayak",
"virus",
"testy",
"tepee",
"fully",
"zonal",
"metro",
"curry",
"grand",
"banjo",
"axion",
"bezel",
"occur",
"chain",
"nasal",
"gooey",
"filer",
"brace",
"allay",
"pubic",
"raven",
"plead",
"gnash",
"flaky",
"munch",
"dully",
"eking",
"thing",
"slink",
"hurry",
"theft",
"shorn",
"pygmy",
"ranch",
"wring",
"lemon",
"shore",
"mamma",
"froze",
"newer",
"style",
"moose",
"antic",
"drown",
"vegan",
"chess",
"guppy",
"union",
"lever",
"lorry",
"image",
"cabby",
"druid",
"exact",
"truth",
"dopey",
"spear",
"cried",
"chime",
"crony",
"stunk",
"timid",
"batch",
"gauge",
"rotor",
"crack",
"curve",
"latte",
"witch",
"bunch",
"repel",
"anvil",
"soapy",
"meter",
"broth",
"madly",
"dried",
"scene",
"known",
"magma",
"roost",
"woman",
"thong",
"punch",
"pasty",
"downy",
"knead",
"whirl",
"rapid",
"clang",
"anger",
"drive",
"goofy",
"email",
"music",
"stuff",
"bleep",
"rider",
"mecca",
"folio",
"setup",
"verso",
"quash",
"fauna",
"gummy",
"happy",
"newly",
"fussy",
"relic",
"guava",
"ratty",
"fudge",
"femur",
"chirp",
"forte",
"alibi",
"whine",
"petty",
"golly",
"plait",
"fleck",
"felon",
"gourd",
"brown",
"thrum",
"ficus",
"stash",
"decry",
"wiser",
"junta",
"visor",
"daunt",
"scree",
"impel",
"await",
"press",
"whose",
"turbo",
"stoop",
"speak",
"mangy",
"eying",
"inlet",
"crone",
"pulse",
"mossy",
"staid",
"hence",
"pinch",
"teddy",
"sully",
"snore",
"ripen",
"snowy",
"attic",
"going",
"leach",
"mouth",
"hound",
"clump",
"tonal",
"bigot",
"peril",
"piece",
"blame",
"haute",
"spied",
"undid",
"intro",
"basal",
"shine",
"gecko",
"rodeo",
"guard",
"steer",
"loamy",
"scamp",
"scram",
"manly",
"hello",
"vaunt",
"organ",
"feral",
"knock",
"extra",
"condo",
"adapt",
"willy",
"polka",
"rayon",
"skirt",
"faith",
"torso",
"match",
"mercy",
"tepid",
"sleek",
"riser",
"twixt",
"peace",
"flush",
"catty",
"login",
"eject",
"roger",
"rival",
"untie",
"refit",
"aorta",
"adult",
"judge",
"rower",
"artsy",
"rural",
"shave",
"bobby",
"eclat",
"fella",
"gaily",
"harry",
"hasty",
"hydro",
"liege",
"octal",
"ombre",
"payer",
"sooth",
"unset",
"unlit",
"vomit",
"fanny",
]
print(len(words))
# save words to ../webapp/data/languages/en/en_5words_new.txt
with open("../webapp/data/languages/en/en_5words_new.txt", "w") as f:
for word in words:
f.write(word + "\n")
```
### Drill 1: Question Design
For each of the following, indicate the best question type to use and why.
##### Age
A rating scale would be most appropriate, but it can bore some people if there are too many numbers to scroll through. An alternative is a write-in response, but that runs the risk of people mistyping or writing random words. However, those who type in random words are probably not reading the questions anyway, and it may be possible to disregard their whole survey.
##### Gender
Multiple choice, as there are a set number of genders.
##### Income
Income ranges can be quite large. There are two logical choices here. The first is to give a range with multiple choice answers. The second is a write in response, but again this runs the risk of people writing random words.
##### Opinions about dish soap
This really depends on what the survey is looking for. If the question is a simple 'do you like this brand?', then a simple yes/no multiple choice will suffice. If it is more complicated, with multiple statements about the brand such as 'this brand is cheap', 'this brand is easy to find', and 'I have heard of this brand', then a multiple-choice question where respondents can select multiple answers is more appropriate.
##### Brand of dish soap used
This can be a written answer, but it is probably best to list the 5-10 most popular brands plus a final 'other' option with a write-in.
##### Preference for dish soap brand
If the previous answer allowed for multiple answers (and it should), then for the brands the user indicates they have used, let them rank each one based on their preference.
##### Positive vs negative feelings about dish soap
If there are recurrent feelings about dish soap that was discovered in previous surveys or a focus group, then there can be multiple choice answers (multiple answers) plus a write in. An alternative is just a plain write in.
### Drill 2: Fix the flaw
Choose 5 questions you believe can be improved from the Mainstream Media Accountability Survey sent in February 2017. Describe the flaw you see and write an improved version of the question.
##### 1) What percentage do you believe is an accurate representation of President Trump’s positive news coverage by the mainstream media?
If you want a more accurate picture of media representation, then the question should first ask what percentage is an accurate representation of Trump's news coverage. A follow up question can be if the accurate representation is mostly positive, neutral, or negative.
##### 2) Do you believe that the media sensationalizes and exaggerates stories in order to paint President Trump and conservatives in a bad light?
This is a loaded question that assumes the user already thinks the media sensationalizes and exaggerates stories. The better approach is to first ask whether the media sensationalizes stories about Trump. If the user says yes, a follow-up can ask whether the stories are sensationalized to paint Trump in a positive, neutral, or negative light.
##### 3) What television source do you primarily get your news from?
This is flawed because many people, including myself, do not get news information from tv. A better way to do this would be first to ask how the user gets their news (mobile, pc, tv, doesn't watch news). If the intent is focused on television news sources, those who picked tv can choose their primary source from the list.
##### 4) Do you believe that the media purposely tries to divide Republicans in order to help elect Democrats?
Again, this is a loaded question and assumes the user thinks the media is trying to divide anyone/is biased. First we must determine if the user believes the media is trying to divide viewers based on their political views. If they believe the media is trying to divide viewers, then the survey can ask if the media tends to put one party in a more favorable light than the others. If the user answers yes, then the question can ask which party the media favors.
##### 5) Do you believe that the media unfairly reported on President Trump’s executive order granting Americans the freedom to buy health care across state lines?
This is a framing problem, designed to suggest that the media is being unfair regarding healthcare. A better question might be: 'What is your opinion on the media's reporting of President Trump's executive order granting Americans the freedom to buy health care across state lines?'
# Reading R0/R1 Waveforms
If you instead want to do some very specific analysis of the waveforms themselves that does not fit in the reduction scheme described in the previous tutorials, CHECLabPy contains some classes that simplify the reading of .tio files in Python.
For this tutorial you need TargetDriver, TargetIO and TargetCalib installed.
## Setup
Prepare your machine and environment by following the instructions at: https://forge.in2p3.fr/projects/gct/wiki/Installing_CHEC_Software
If you do not wish to install the TARGET libraries as you will only be reading DL1 files, you can skip this tutorial.
Check the installation was successful by running these lines:
```
import target_driver
import target_io
import target_calib
```
## Files
To run this tutorial you must download a reference dataset (using the username and password Rich has sent around in emails/Slack). The file required for this tutorial is a calibrated R1 file; this run corresponds to a ~50 p.e. illumination run.
```
username = '***'
pw = '***'
r0_url = 'https://www.mpi-hd.mpg.de/personalhomes/white/checs/data/d0000_ReferenceData/Run17473_r0.tio'
r1_url = 'https://www.mpi-hd.mpg.de/personalhomes/white/checs/data/d0000_ReferenceData/Run17473_r1.tio'
!mkdir refdata
!wget --user $username --password $pw -P refdata $r0_url
!wget --user $username --password $pw -P refdata $r1_url
r0_path = "refdata/Run17473_r0.tio"
r1_path = "refdata/Run17473_r1.tio"
```
## Reading the file
The class to read TIO files is called `TIOReader`:
```
from CHECLabPy.core.io import TIOReader
reader = TIOReader(r1_path)
```
### Metadata
With this reader you can find out a lot of information about the file:
```
print("R1 Calibrated: ", reader.is_r1)
print("N_events: ", reader.n_events)
print("N_pixels: ", reader.n_pixels)
print("N_samples", reader.n_samples)
print("Camera Version: ", reader.camera_version)
```
### Mapping
The pixel mapping for the file can automatically be obtained. This includes the TargetCalib Mapping class, and the CHECLabPy mapping DataFrame. The latter is generated from the former.
```
reader.tc_mapping
reader.mapping
```
### Indexing
The reader can be indexed to extract the waveforms of an event:
```
wf = reader[0] # Obtain the waveforms for every pixel for the first event
print(wf.shape)
wf = reader[-1] # Obtain the last event in the file
wfs = reader[:10] # Obtain the first 10 events in the file
print(wfs.shape)
```
Once an event has been extracted, further information about the event can be obtained:
```
wf = reader[10]
print("Time of event: ", wf.t_cpu)
print("Event TACK: ", wf.t_tack)
print("TARGET ASIC Cell ID for first sample in event: ", wf.first_cell_id[0])
print("Is event stale? ", bool(wf.stale[0]))
```
### Looping Over Events
It is also possible to iterate over all the events in the file with a loop:
```
# Loop over events in file
for wf in reader:
break
```
## TIOReader Subclasses
If you wish to force a script to only allow either R0 or R1 files to be read, you can use the `ReaderR0` and `ReaderR1` subclasses:
```
from CHECLabPy.core.io import ReaderR0, ReaderR1
reader_r0 = ReaderR0(r0_path) # Works
print("n_events = ", reader_r0.n_events)
reader_r0 = ReaderR0(r1_path) # Doesn't work - wrong file!
```
## Analysis Example
With the reading of waveforms into a numpy array automatically by the `TIOReader` class, it is very simple to create a plot of all waveforms in an event:
```
%matplotlib inline
from matplotlib import pyplot as plt
from CHECLabPy.core.io import TIOReader
reader = TIOReader(r1_path)
wf = reader[10]
plt.plot(wf.T) # Transpose the waveform so the dimensions are more sensible for plotting
plt.show()
```
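Since `reader[i]` hands back a NumPy array of shape `(n_pixels, n_samples)` (as the transpose in the plotting example suggests), waveform analysis beyond plotting reduces to ordinary array operations. Below is a hedged sketch of per-pixel pulse extraction run on a synthetic array standing in for a real event — the baseline window and pulse shape are invented for illustration and are not CHECLabPy conventions:

```python
import numpy as np

def extract_pulse(wfs, baseline_window=4):
    """Per-pixel pulse amplitude and peak sample index from an
    (n_pixels, n_samples) waveform array."""
    # estimate a per-pixel baseline from the first few samples
    baseline = wfs[:, :baseline_window].mean(axis=1, keepdims=True)
    corrected = wfs - baseline
    amplitude = corrected.max(axis=1)        # pulse height per pixel
    peak_sample = corrected.argmax(axis=1)   # sample index of the peak
    return amplitude, peak_sample

# Synthetic stand-in for `reader[i]`: 3 pixels, 16 samples,
# a pulse of height 50 at sample 8 on top of a flat baseline of 10.
wfs = np.full((3, 16), 10.0)
wfs[:, 8] += 50.0
amp, peak = extract_pulse(wfs)
```

The same function would apply unchanged to a real event array extracted from the reader.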
```
import matplotlib.pyplot as plt
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="1"
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, LabelBinarizer, Normalizer
from sklearn.model_selection import StratifiedKFold
import tensorflow as tf
import pickle
import numpy as np
import pandas as pd
from random import choice
from annsa.template_sampling import *
from hyperparameter_models import make_dense_model as make_model
import tensorflow.contrib.eager as tfe
tf.enable_eager_execution()
```
#### Import model, training function
```
from annsa.model_classes import (dnn_model_features,
DNN,
save_model,
train_earlystop)
```
## Load testing dataset
```
dataset = np.load('../dataset_generation/testing_dataset_full_200keV_1000.npy')
all_spectra = np.add(dataset.item()['sources'], dataset.item()['backgrounds'])
all_keys = dataset.item()['keys']
mlb = LabelBinarizer()
all_keys_binarized = mlb.fit_transform(all_keys)
```
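`LabelBinarizer` one-hot encodes the source keys into the target vectors the network trains on. A minimal stdlib sketch of the same transform, using made-up isotope labels rather than the real dataset keys:

```python
def binarize(labels):
    """One-hot encode labels, mirroring what LabelBinarizer does for
    more than two classes (columns ordered by sorted class name)."""
    classes = sorted(set(labels))
    index = {c: i for i, c in enumerate(classes)}
    rows = []
    for label in labels:
        row = [0] * len(classes)
        row[index[label]] = 1
        rows.append(row)
    return classes, rows

classes, onehot = binarize(['Cs137', 'Co60', 'Cs137', 'U238'])
```

The inverse mapping (`mlb.inverse_transform`) simply reads off the column with the 1 in each row.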
# Train network
### Define hyperparameters
```
number_hyperparameters_to_search = 256
earlystop_errors_test = []
model_id='DNN-kfoldsfull-updated-5'
```
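Each search iteration draws a fresh model from `make_model`; the sampling itself amounts to picking one value per hyperparameter at random. A stdlib sketch with a hypothetical search space — the names and value grids here are illustrative, not annsa's actual space:

```python
import random

# hypothetical search space, for illustration only
search_space = {
    'learning_rate': [1e-4, 1e-3, 1e-2],
    'dense_nodes': [64, 128, 256, 512],
    'dropout_probability': [0.0, 0.25, 0.5],
}

def sample_hyperparameters(space, seed=None):
    """Draw one candidate configuration uniformly from each grid."""
    rng = random.Random(seed)
    return {name: rng.choice(values) for name, values in space.items()}

params = sample_hyperparameters(search_space, seed=0)
```

Repeating this `number_hyperparameters_to_search` times and keeping the configuration with the lowest cross-validated error is the whole of random search.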
### Search hyperparameters
```
def save_features(model_features,
                  model_id,
                  hyperparameter_index):
    with open('./hyperparameter-search-results/' + model_id + '_' +
              str(hyperparameter_index) + '_dae_features', 'wb+') as f:
        pickle.dump(model_features, f)
    return None
skf = StratifiedKFold(n_splits=5, random_state=5)
testing_errors = []
all_kf_errors = []
for network_id in range(number_hyperparameters_to_search):
    print(network_id)
    model, model_features = make_model(all_keys_binarized)
    save_features(model_features,
                  model_id,
                  network_id)
    k_folds_errors = []
    for train_index, test_index in skf.split(all_spectra, all_keys):
        # reset model on each iteration
        model = DNN(model_features)
        optimizer = tf.train.AdamOptimizer(model_features.learining_rate)
        costfunction_errors_tmp, earlystop_errors_tmp = train_earlystop(
            training_data=all_spectra[train_index],
            training_keys=all_keys_binarized[train_index],
            testing_data=all_spectra[test_index],
            testing_keys=all_keys_binarized[test_index],
            model=model,
            optimizer=optimizer,
            num_epochs=200,
            obj_cost=model.cross_entropy,
            earlystop_cost_fn=model.f1_error,
            earlystop_patience=10,
            not_learning_patience=10,
            not_learning_threshold=0.9,
            verbose=True,
            fit_batch_verbose=10,
            data_augmentation=model.default_data_augmentation)
        k_folds_errors.append(earlystop_errors_tmp)
        all_kf_errors.append(earlystop_errors_tmp)
    testing_errors.append(np.average(k_folds_errors))
    np.save('./final-models/final_test_errors_' + model_id, testing_errors)
    np.save('./final-models/final_kf_errors_' + model_id, all_kf_errors)
    # model.save_weights('./final-models/'+model_id+'_checkpoint_'+str(network_id))
```
# Binomial Tree
discrete-time models
- Single Period Binomial model
- Multiperiod Binomial model$\DeclareMathOperator*{\argmin}{argmin}
\DeclareMathOperator*{\argmax}{argmax}
\DeclareMathOperator*{\plim}{plim}
\newcommand{\using}[1]{\stackrel{\mathrm{#1}}{=}}
\newcommand{\ffrac}{\displaystyle \frac}
\newcommand{\asim}{\overset{\text{a}}{\sim}}
\newcommand{\space}{\text{ }}
\newcommand{\bspace}{\;\;\;\;}
\newcommand{\QQQ}{\boxed{?\:}}
\newcommand{\void}{\left.\right.}
\newcommand{\Tran}[1]{{#1}^{\mathrm{T}}}
\newcommand{\d}[1]{\displaystyle{#1}}
\newcommand{\CB}[1]{\left\{ #1 \right\}}
\newcommand{\SB}[1]{\left[ #1 \right]}
\newcommand{\P}[1]{\left( #1 \right)}
\newcommand{\abs}[1]{\left| #1 \right|}
\newcommand{\norm}[1]{\left\| #1 \right\|}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\Exp}{\mathrm{E}}
\newcommand{\RR}{\mathbb{R}}
\newcommand{\EE}{\mathbb{E}}
\newcommand{\II}{\mathbb{I}}
\newcommand{\NN}{\mathbb{N}}
\newcommand{\ZZ}{\mathbb{Z}}
\newcommand{\QQ}{\mathbb{Q}}
\newcommand{\PP}{\mathbb{P}}
\newcommand{\AcA}{\mathcal{A}}
\newcommand{\FcF}{\mathcal{F}}
\newcommand{\AsA}{\mathscr{A}}
\newcommand{\FsF}{\mathscr{F}}
\newcommand{\Var}[2][\,\!]{\mathrm{Var}_{#1}\left[#2\right]}
\newcommand{\Avar}[2][\,\!]{\mathrm{Avar}_{#1}\left[#2\right]}
\newcommand{\Cov}[2][\,\!]{\mathrm{Cov}_{#1}\left(#2\right)}
\newcommand{\Corr}[2][\,\!]{\mathrm{Corr}_{#1}\left(#2\right)}
\newcommand{\I}[1]{\mathrm{I}\left( #1 \right)}
\newcommand{\N}[1]{\mathcal{N} \left( #1 \right)}
\newcommand{\ow}{\text{otherwise}}
\newcommand{\FSD}{\text{FSD}}$
## Single Period Binomial model
Suppose the market to be observed at just two times: $0$ and $T$, where it has two possible states at time $T$:
- $P\CB{S_T = S_0 \cdot u} = p$
- $P\CB{S_T = S_0 \cdot d} = 1-p$
Here, $p$ is the ***objective (actual) probability***.
$Remark$ Portfolio and Arbitrage
>Using two instruments: a bond and a stock; to build the portfolio, denoted as a vector $h\P{x,y}$ where $x$ and $y$ denote the number of bonds and number of units of the stock in the portfolio, respectively.
>
>$Def$ ***Value Process***
>
>The value process of the portfolio is $V_t^h = x \cdot B_t + y \cdot S_t$, where $B_t$ is the bond price at time $t$ and we let $B_t = e^{rt}$
>
>$Def$ **Arbitrage Portfolio**
>
>An arbitrage portfolio is a portfolio $h$ that satisfies
>
>- $V_0^h = 0$. At time $0$ there's nothing in your hand.
>- $V_T^h\P{\omega} \geq 0$. $\omega$ is the possible states at time $T$.
>- $\Exp\SB{V_T^h\P{\omega}} > 0$. $\Exp\SB\cdot$ is the expectation under the actual probability measure.
And in the **Binomial Model**, $S_T = S_0u$ and $S_T = S_0d$ are the two states for $\omega$, with actual probabilities $p$ and $1-p$ respectively, so $\Exp\SB{V_T^h\P{\omega}} = p\cdot V_T^h\P{S_T = S_0u} + \P{1-p}\cdot V_T^h\P{S_T = S_0d}$
$Lemma$
The single period binomial model is *free of arbitrage* $iff$ $d < e^{rT} < u$
$Proof$
> $\Rightarrow)$: Here we're gonna first suppose that $e^{rT}$ is *not* between $d$ and $u$ and derive that the model is *not* free of arbitrage. Skipped, too easy man. The arbitrage strategy we're gonna construct is $h\P{S_0,-1}$ or $h\P{-S_0,1}$
>
>$\Leftarrow)$: Assume that there's an arbitrage portfolio $h \P{x,y}$ such that $V_0^h = 0$ then, $x = -yS_0$. Then at time $T$, we have
>$$\bspace \begin{align}
V_T^h &= -yS_0e^{rT} + yS_T \\[0.6em]
&= \begin{cases}
yS_0 \P{u - e^{rT}}, &\text{if } S_T = S_0 \cdot u \\[0.6em]
yS_0 \P{d - e^{rT}}, &\text{if } S_T = S_0 \cdot d
\end{cases}
\end{align}$$
>
>And no matter whether $y$ is greater or less than $0$, since $u$ and $d$ lie on different sides of $e^{rT}$, $V_T^h$ takes one positive and one negative value (for $y \neq 0$), so no arbitrage portfolio exists and the model is free of arbitrage.
$Def$ **contingent claim**
A ***contingent claim (or financial derivative)*** is any stochastic variable $U$ of the form $\Phi(Z)$ (nonlinear or linear function), where $Z$ can be the stochastic variable driving the stock price process.
$Remark$
>$Z$ is a *stochastic variable* because it has two periods of time, $0$ and $T$, and at each time period the price can be treated as a $r.v.$; at time $T$ it actually is one, taking the value $S_T = S_0\cdot u$ with probability $p$ and $S_0\cdot d$ with probability $1-p$.
Say, for an european call option, we have $U = \P{S_T - K}^+ = \Phi\P{S_T}$
A **claim** $U$ at time $T$ is ***attainable*** if it can be ***hedged/replicated***; that is, there exists a portfolio $h$ whose value at time $T$ is *exactly* $U$ ($i.e.$ $V_T^h = U$ $wp1$). Then $h$ is called a ***hedging portfolio*** or a ***replicating portfolio***.
There is NO difference between holding the claim and holding the portfolio.
$Proposition$
Suppose that a claim $U$ at time $T$ is attainable with replicating portfolio $h$. Denote the price of the claim at time $t$ by $\Pi\P{t, U}$, then $\Pi\P{t,U} = V_t^h$.
## No-arbitrage for options pricing
**e.g.**
A stock price is currently $20$, and it will be either $22$ or $18$ at the end of the next $3$ months. The risk-free interest rate is $12\%$ per annum. What is the price of a European call option to buy the stock for $21$ in $3$ months?
> We try to replicate the option with $x$ bonds and $y$ shares of stock. Then:
>
>$$V_t^h = x\cdot e^{rt} + y \cdot S_t\\[0.6em]
V_T^h = \begin{cases}
x \cdot e^{0.12*3/12} + 22\cdot y = \P{22-21}^+, & \text{state } 1\\
x \cdot e^{0.12*3/12} + 18\cdot y = \P{18-21}^+, & \text{state } 2
\end{cases}\\[1em]
\Longrightarrow x = -4.5 e^{-0.03}, y=0.25$$
>
>Thus we price the option as: $c = V_0^h = x + y\cdot S_0 = -4.5 e^{-0.03} + 0.25 \times 20 = 0.633$
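The two-equation system above is easy to verify numerically; a quick sketch reproducing the $0.633$ price:

```python
import math

S0, K, r, T = 20.0, 21.0, 0.12, 0.25   # 3 months = 0.25 years
Su, Sd = 22.0, 18.0                     # the two possible prices at T

# solve x*e^{rT} + y*S_T = (S_T - K)^+ in both states
y = (max(Su - K, 0.0) - max(Sd - K, 0.0)) / (Su - Sd)   # shares of stock
x = (max(Sd - K, 0.0) - y * Sd) * math.exp(-r * T)      # bond position
c = x + y * S0                                          # option price today
```

Here $y = 0.25$ and $x = -4.5e^{-0.03}$, matching the hand calculation.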
$Remark$
>Solving this set of two equations is an implication of the **binomial tree** model: the two states give two equations in the two unknowns, the bond position and the stock position.
**e.g.** **binomial option pricing formula**
Using the ***riskless hedging principle*** to derive the **formula**. The portfolio consists of a short position of $\Delta$ shares of stock and a long position in one call option.
> First, what does the *risk* mean here? The portfolio is riskless when its value has NO variance, i.e. it changes by the same amount in both states. Then we have (following the last example):
>
>$\begin{cases}
\P{uS_0 - 21}^+ - \Delta u S_0 = \P{c - \Delta S_0} e^{rT}\\
\P{dS_0 - 21}^+ - \Delta d S_0 = \P{c - \Delta S_0} e^{rT}
\end{cases}$
>
>Solve it and that's all.
$Remark$
>Easy to see that this equation set is the same as the one before, by letting $\Delta = y$.
>
> Mathematically, the ***Delta*** of stock option is the ratio of the change in the price of the stock option to the change in the price of the underlying stock. And financially, it's also the number of units of the stock we hold for each option shorted in order to create a riskless portfolio (Solving this **Delta** is called ***delta hedging***).
***
**Summary**
> let $c$ denote a European call option price at time $0$; and $c_u$ ($c_d$) the call option price at time $T$ corresponding to the up-move (down-move) of the asset price.
>
>Then we construct the portfolio $h$ that replicates the long position of a call option:
>
>$$\begin{cases}
x\cdot e^{rT} + y \cdot uS_0 = c_u = \P{uS_0 - K}^+\\
x\cdot e^{rT} + y \cdot dS_0 = c_d = \P{dS_0 - K}^+
\end{cases} \\[1em]
\Longrightarrow \boxed{x = \ffrac{u\cdot c_d - d\cdot c_u} {\P{u-d}e^{rT}}, y = \ffrac{c_u - c_d} {\P{u-d}S_0}}$$
$Remark$
>An interesting fact about the solution is that $x \leq 0$ and $y \geq 0$. Prove them by writing $c_d = \max\P{dS_0 - K,0}$ and $c_u = \max\P{uS_0 - K,0}$
>
> Thus we substitute the value back and obtain:
>
>$$c = x + yS_0 = e^{-rT}\SB{q c_u + \P{1-q}c_d}, q = \ffrac{e^{rT}-d} {u-d}$$
>
>And this also connects with the previous lemma: since $d < e^{rT} < u$, we have $0 < q < 1$, so $q$ can be considered a probability and $c$ an expectation.
$Remark$
>To understand the relation between $p$ and $q$: $p$ only signals which states the stock CAN reach, while $q$ is simply a new probability measure on those same states.
>
>Then we have $c = e^{-rT}\SB{q c_u + \P{1-q}c_d} = e^{-rT} \Exp^Q\SB{\P{S_T-K}^+}$. Here $\Exp^Q$ is the expectation with respect to the probability measure $Q$.
>
>Also, we can calculate the expectation under $Q$ of $S_T$: $\Exp^Q\SB{S_T} = qS_0u + \P{1-q}S_0d$; then we substitute $q$ back, and then, Boom!
>
>$$S_0 = e^{-rT} \Exp^Q\SB{S_T}$$
>
>Thus, a probability measure $Q$ is called a ***martingale measure*** (or ***risk-neutral measure***) if $S_0 = e^{-rT} \Exp^Q\SB{S_T}$.
>
>And $e^{-rt}S_t$ is a martingale, since $\Exp^Q\SB{e^{-rT}S_T} = e^{-r\cdot0}S_0$ holds, true!
$Remark$
>In a risk-neutral world, the expected return on all securities is the risk-free interest rate, $r$. If you wonder why money put into the stock does not simply grow by the factor $e^{rT}$: under $Q$ it does, in expectation: $\Exp^Q\SB{S_T} = S_0\cdot e^{rT}$
## Risk-neutral valuation
Assume the existence of the risk-neutral probability values $q_u$ and $q_d$ corresponding to up and down movements of the stock price. Since $S_0 = e^{-rT} \Exp^Q\SB{S_T} = e^{-rT}\P{q_uS_0u + q_dS_0d}$ and $q_u + q_d = 1$, if we let $q_u = q$, then we have:
$$c = e^{-rT} \Exp^Q\SB{\P{S_T - K}^+} = e^{-rT}\SB{qc_u + \P{1-q}c_d}$$
**e.g.** the same example
> $20 = e^{-0.12\times 3/12}\P{22q_u + 18q_d}$ and $q_u + q_d = 1$ then $q_u = 0.6523$. Thus:
>
> $\bspace c = e^{-0.12\times 3/12}\SB{0.6523 \times 1 + \P{1-0.6523}\times 0} = 0.633$
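The same numbers drop out of the risk-neutral formula directly; a sketch that also checks the martingale property:

```python
import math

S0, K, r, T = 20.0, 21.0, 0.12, 0.25
u, d = 22.0 / S0, 18.0 / S0             # 1.1 and 0.9

q = (math.exp(r * T) - d) / (u - d)     # risk-neutral up probability
c = math.exp(-r * T) * (q * max(u * S0 - K, 0.0)
                        + (1 - q) * max(d * S0 - K, 0.0))

# martingale check: S0 = e^{-rT} E^Q[S_T]
discounted_expectation = math.exp(-r * T) * (q * u * S0 + (1 - q) * d * S0)
```

Both routes — replication and risk-neutral expectation — give the identical price $0.633$, as the propositions promise.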
$Remark$
If *all* claims can be replicated, then the market is ***complete***.
$Proposition.4$
Assume that the general binomial model is free of arbitrage. Then it is also complete.
$Proof$
>For a claim $U$ at time $T$, let its value be $U_u$ if the asset moves up and $U_d$ if it moves down. Construct a portfolio $h(x, y)$ that replicates any claim $U$:
>
>$$xe^{rT} + yuS_0 = U_u,\; xe^{rT}+ydS_0 = U_d \\[0.6em]
\Longrightarrow x = \ffrac{uU_d - dU_u} {\P{u-d}e^{rT}},\; y = \ffrac{U_u - U_d} {\P{u-d}S_0}$$
$Proposition.5$
If the binomial model is free of arbitrage, then the arbitrage-free price of a contingent claim $U$ at time zero is given by
$$\Pi\P{0,U} = e^{-rT}\Exp^Q\SB{U}$$
Here the martingale measure $Q$ is uniquely determined by the relation $S_0 = e^{-rT}\Exp^Q\SB{S_T}$
**Interpretation**
Price of the claim $U$: expected value of the discount payoff under the martingale measure.
For **forward**
$\begin{align}
0 &= \Pi\P{0,\P{S_T-K}} = e^{-rT}\Exp^Q\SB{S_T - K} \\
&\using{martingale} e^{-rT} \P{e^{rT}S_0 - K} \\
&= S_0 - Ke^{-rT}
\end{align}$
so that $K = S_0 e^{rT}$ is the price of the forward, verified!
***
For **European options**:
$\begin{align}
c = \Pi\P{0,\P{S_T-K}^+} = e^{-rT}\Exp^Q\SB{\P{S_T-K}^+} = e^{-rT}\SB{qc_u + \P{1-q}c_d}
\end{align}$
so that $c = e^{-rT}\SB{qc_u + \P{1-q}c_d}$ is the price of the european call, verified!
***
$Remark$
>This formula can also be applied to calculate the price of an American call, as shown later.
## Multiperiod Binomial model (this will be on the exam!)
**e.g.**

Suppose that the stock prices are given by the tree above. The stock follows the binomial model over each time period $[t, t + 1]$, where $t = 0, 1, 2$. If the interest rates are zero, what is the cost of an option to buy the stock at price $100$ at time $3$?
>We will calculate from the last column to the first column. We first need to calculate $q$, **risk-neutral probability** by the stock price change. Since $e^{-rT}\Exp^Q\SB{S_3} = S_2$ we have
>
>$$e^{-0\times 1} \P{q\times160 + \P{1-q}\times 120} = 140 \Longrightarrow q = 0.5$$
>
>And since the interest rates are zero, $q = \ffrac{e^{rT} - d} {u-d}$ stays constant at every node!
>
> So for the payoff at that node: $e^{-0\times 1} \SB{0.5\times\P{160-100}^+ + 0.5\times \P{120 - 100}^+} = 40$, and so on backwards through the tree.
>
>
>
>***
>
>Alternative method: Path probabilities
>
>We can also let $V_t\P{k}$ denote the option price at the node $\P{t,k}$, where $k$ denotes the number of up-moves that have occurred and $k = 0 , 1 ,\dots,t$. Thus:
>
>$\bspace \begin{align}
c &= e^{-rT} \Exp^Q\SB{\P{S_T-K}^+}\\
&= e^{-0 \times 3}\SB{\binom{3}{3}q^3\P{160-100}^+ + \binom{3}{2}q^2\P{1-q}\P{120 - 100}^+ + \binom{3}{1}q\P{1-q}^2\P{80 - 100}^+ +\cdots} \\
&\using{q=0.5} 15
\end{align}$
$Remark$
>Here the $u$ and $d$ are calculated by $\ffrac{S_t - S_{t-1}}{ S_{t-1}}+1$. You can check that only when $r=0$ does $q$ stay constant while $u$ and $d$ change from node to node.
## Replication
Construct a portfolio that exactly replicates the previous claim at time $3$. Write $\P{x_t\P{k},y_t\P{k}}$ for the amounts of bond and stock held in the portfolio at node $(t, k)$ over the time interval $[t, t + 1)$.
An intuitive thought: since we are replicating the final payoff, we work forward from the beginning. Suppose the price moves up, up, then down:
At time $0$, we calculate $y_0\P{0} = \ffrac{25-5} {120-80} = 0.5$. We receive $15$ for selling the option, so we buy $0.5$ units of stock, which costs $50$, and borrow $35$ in cash bonds, giving $x_0\P{0} = -35$.
At time $1$, suppose that $S_1 = 120$. $y_1\P{1} = \ffrac{40-10} {140-100} = 0.75$, $x_1\P{1} = -35 - 0.25\times 120 = -65$ or $x_1\P{1} = 25 - 0.75\times 120 = -65$
At time $2$, suppose that $S_2 = 140$. $y_2\P{2} = \ffrac{60-20} {160-120} = 1$, $x_2\P{2} = -65 - 0.25\times 140 = -100$ or $x_2\P{2} = 40 - 1\times 140 = -100$
Finally, at time $3$, suppose that $S_3 = 120$. Our replicating portfolio holds one share of stock and $-100$ in bonds. We hand over the unit of stock for the strike price $100$, which is exactly enough to cancel our bond debt.
$Remark$
No extra input or output of cash, ***self-financing***.
$Proposition.6$
Consider a claim $U = \phi\P{S_T}$. Then this claim can be replicated using a self-financing portfolio. If $V_t\P{k}$ denotes the option price at the node $\P{t,k}$, where $k$ is the number of up-moves that have occurred, then $V_t\P{k}$ can be computed recursively by the scheme:
$$\begin{align}
V_T\P{k} &= \phi\P{S_T} \\
V_t\P{k} &= e^{-r \times 1}\SB{q V_{t+1}\P{k+1} + \P{1-q} V_{t+1}\P{k}}
\end{align}$$
and to hedge this, we construct the portfolio as
$$y_t\P{k} = \ffrac{V_{t+1}\P{k+1} - V_{t+1}\P{k}} {\P{u-d}S_t}, x_t\P{k} = V_{t}\P{k}-\ffrac{V_{t+1}\P{k+1} - V_{t+1}\P{k}} {u-d}$$
where $\P{x_t\P{k},y_t\P{k}}$ denotes the amounts of bond and stock held in the portfolio at node $(t, k)$ over the time interval $[t, t + 1)$.
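The recursion in Proposition 6 is easy to code. The following sketch prices a call on an illustrative recombining tree (the parameters are mine, not taken from the figure above) and also computes the time-0 hedge from the proposition's formulas:

```python
import math

# Illustrative parameters (assumed, not taken from the tree in the figure):
S0, u, d, r, K, T = 100.0, 1.2, 0.8, 0.0, 100.0, 3
q = (math.exp(r) - d) / (u - d)  # risk-neutral up-probability per step

# Terminal payoffs V_T(k) = (S_T - K)^+ for k = 0..T up-moves
V = {T: [max(S0 * u**k * d**(T - k) - K, 0.0) for k in range(T + 1)]}

# Backward recursion V_t(k) = e^{-r}[q V_{t+1}(k+1) + (1-q) V_{t+1}(k)]
for t in range(T - 1, -1, -1):
    V[t] = [math.exp(-r) * (q * V[t + 1][k + 1] + (1 - q) * V[t + 1][k])
            for k in range(t + 1)]

price = V[0][0]
# Time-0 hedge: stock and bond holdings from the proposition's formulas
y0 = (V[1][1] - V[1][0]) / ((u - d) * S0)
x0 = V[0][0] - (V[1][1] - V[1][0]) / (u - d)
print(price, y0, x0)  # ≈ 14.8, 0.55, -40.2
```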
**e.g.** Pricing American options under multiperiod binomial models
We need not consider American call options (on a non-dividend-paying stock, early exercise of an American call is never optimal). Consider an American put option with strike price $52$ on a stock whose current price is $50$. Suppose that there are two time steps of $1$ year each, and in each time step the stock price either moves up by $20\%$ or down by $20\%$. The risk-free interest rate $r$ is $5\%$. What is the American put option price at time $0$?
> Strategy: Work back through the tree from the end to the beginning, testing at each node to see whether early exercise is optimal.
>
>At earlier nodes $t_i$, the option price is the greater of
>1. the value given by $e^{-r\Delta_t}\Exp^Q\SB{V\P{t_{i+1}}}$, the continuation value
>2. the payoff from early exercise
>
>First find the $q$: from $S_0 = e^{-r\times 1}\Exp^Q\SB{S_1} \Longrightarrow q = \ffrac{e^{r\Delta_t}-d} {u-d} = \ffrac{e^{0.05}-0.8} {1.2-0.8} = 0.6282$ and $1-q = 0.3718$
>
>
>Then, at node $B$, the continuation value is $e^{-0.05}\P{0.6282 \times 0 + 0.3718 \times 4} = 1.4147$, while the early exercise payoff is $52-60 = -8$. Thus the option price is $1.4147$
>
>At node $C$, the continuation value is $e^{-0.05}\P{0.6282 \times 4 + 0.3718 \times 20} = 9.4636$, while the early exercise payoff is $52-40 =12$. Thus the option price is $12$
>
>Finally, at node $A$. The continuation value is $e^{-0.05}\P{0.6282 \times 1.4147 + 0.3718 \times 12} = 5.0894$, while the early exercise payoff is $52-50 =2$. Thus the option price is $5.0894$.
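The whole worked example can be reproduced in a few lines (a sketch; node labels follow the text, and the last digits differ slightly from the text because the text rounds $q$ to $0.6282$):

```python
import math

S0, u, d, r, K = 50.0, 1.2, 0.8, 0.05, 52.0
q = (math.exp(r) - d) / (u - d)  # risk-neutral probability, ≈ 0.6282

def payoff(S):
    return max(K - S, 0.0)       # put payoff

# Terminal put payoffs at t = 2, for k = 0, 1, 2 up-moves: S in {32, 48, 72}
V2 = [payoff(S0 * u**k * d**(2 - k)) for k in range(3)]  # [20, 4, 0]

# At each earlier node take max(continuation value, early-exercise payoff)
VB = max(math.exp(-r) * (q * V2[2] + (1 - q) * V2[1]), payoff(S0 * u))  # node B, S=60
VC = max(math.exp(-r) * (q * V2[1] + (1 - q) * V2[0]), payoff(S0 * d))  # node C, S=40
VA = max(math.exp(-r) * (q * VB + (1 - q) * VC), payoff(S0))            # node A, S=50
print(round(VB, 4), round(VC, 1), round(VA, 4))
```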
## Matching volatility with $u$ and $d$
Suppose that the expected return on a stock in the real world is $\mu$ and its volatility is $\sigma$, with time steps of length $\Delta_t$. Then, matching the expected return on the stock with the binomial tree's parameters, we have
$$\Exp\SB{S_{\Delta_t}} = pS_0 u + \P{1-p}S_0 d = S_0 e^{\mu \Delta_t} \Longrightarrow p = \ffrac{e^{\mu\Delta_t} - d} {u - d}$$
Matching the stock price volatility with the tree's parameters:
$$pu^2 + \P{1-p}d^2 - \SB{pu + \P{1-p}d}^2 = \Var{\ffrac{S_{\Delta_t}} {S_0}}\approx\sigma^2\Delta_t$$
Substituting $p = \ffrac{e^{\mu\Delta_t} - d} {u - d}$ gives
$$e^{\mu\Delta_t} \P{u+d} -ud -e^{2\mu\Delta_t} = \sigma^2 \Delta_t$$
And an extra condition: ***tree-symmetry condition***: $ud = 1$.
Then we can solve the equation and obtain that:
$$\begin{align}
u &= 1 + \sigma \sqrt{\Delta_t} + \ffrac{\sigma^2} {2} \Delta_t + \cdots = e^{\sigma\sqrt{\Delta_t}}\\
d &= e^{-\sigma\sqrt{\Delta_t}}
\end{align}$$
However in risk-neutral world, $\Exp^Q\SB{S_{\Delta_t}} = qS_0u + \P{1-q} S_0d = S_0 e^{r\Delta_t} \Rightarrow q = \ffrac{e^{r\Delta_t} -d} {u-d}$. And the volatility:
$$qu^2 + \P{1-q}d^2 -\SB{qu + \P{1-q}d}^2 = e^{r\Delta_t} \P{u+d} -ud -e^{2r\Delta_t} = \sigma^2\Delta_t + \P{ \text{higher powers of }\Delta_t}$$
$Remark$
When we move from the real world to the risk-neutral world, the expected return on the stock changes, but its volatility remains the same (at least in the limit as $\Delta_t$ tends to zero).
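A quick numerical check of this remark (a sketch; the values of $\sigma$, $r$, $\Delta_t$ are arbitrary): with $u = e^{\sigma\sqrt{\Delta_t}}$ and $d = 1/u$, the risk-neutral one-step variance matches $\sigma^2\Delta_t$ up to higher-order terms.

```python
import math

sigma, r, dt = 0.3, 0.05, 0.01
u = math.exp(sigma * math.sqrt(dt))   # tree-symmetry condition: ud = 1
d = 1 / u
q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral probability

mean = q * u + (1 - q) * d            # equals e^{r dt} by construction of q
var = q * u**2 + (1 - q) * d**2 - mean**2

print(var, sigma**2 * dt)  # both ≈ 9e-4
```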
## Question
$A$ and $B$ are gambling, and both have unlimited money. However, $A$ has a higher chance of losing each round; find a strategy by which $A$ can nevertheless win at least $\$1$ from $B$.
>Double the stake after every loss, starting from $\$1$. At the first win (which eventually happens with probability one, provided a single-round win is not impossible), the win of $2^k$ cancels the accumulated losses $1+2+\cdots+2^{k-1}=2^k-1$ and leaves $A$ exactly $\$1$ ahead.
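This is the classic martingale (doubling) betting strategy. A small simulation (a sketch, with an assumed win probability of $0.4$ for $A$) confirms the net gain is always exactly $1$ once $A$ finally wins:

```python
import random

def play_until_first_win(p_win, rng):
    """Double the stake after every loss; stop at the first win."""
    stake, net = 1, 0
    while True:
        if rng.random() < p_win:   # A wins this round
            return net + stake     # the win cancels all past losses, plus 1
        net -= stake               # A loses the stake
        stake *= 2                 # and doubles it for the next round

rng = random.Random(0)
profits = [play_until_first_win(0.4, rng) for _ in range(1000)]
print(set(profits))  # {1} — always exactly one dollar
```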
# The Store - Caching and (re-)using simulator results with SWYFT
The caching and (re-)use of simulator results is central to the working of SWYFT, with reuse possible both within a single inference problem and between different experiments -- provided the simulator used (including **all** its settings) is the same.
**It is the responsibility of the user to ensure the employed simulator is consistent between experiments using the same store.**
To this end SWYFT incorporates a `Store` class with two main implementations: a **memory store**, which holds data in the main memory, and a **directory store**, which saves data in files written to disk. Here we demonstrate the use of these stores.
```
%load_ext autoreload
%autoreload 2
# DON'T FORGET TO ACTIVATE THE GPU when on google colab (Edit > Notebook settings)
from os import environ
GOOGLE_COLAB = True if "COLAB_GPU" in environ else False
if GOOGLE_COLAB:
!pip install git+https://github.com/undark-lab/swyft.git
import numpy as np
import torch
import pylab as plt
import os
import swyft
```
We again begin by defining some parameters, a toy simulator, and a prior.
```
# Set randomness
np.random.seed(25)
torch.manual_seed(25)
# cwd
cwd = os.getcwd()
# swyft
device = 'cpu'
n_training_samples = 3000
n_parameters = 2
observation_key = "x"
def model(v, sigma = 0.01):
x = v + np.random.randn(n_parameters)*sigma
return {observation_key: x}
v_o = np.zeros(n_parameters)
observation_o = model(v_o, sigma = 0.)
n_observation_features = observation_o[observation_key].shape[0]
observation_shapes = {key: value.shape for key, value in observation_o.items()}
simulator = swyft.Simulator(
model,
n_parameters,
sim_shapes=observation_shapes,
)
low = -1 * np.ones(n_parameters)
high = 1 * np.ones(n_parameters)
prior = swyft.get_uniform_prior(low, high)
store = swyft.Store.memory_store(simulator)
# drawing samples from the store is Poisson distributed. Simulating slightly more than we need avoids attempting to draw more than we have.
store.add(n_training_samples + 0.01 * n_training_samples, prior)
store.simulate()
```
## The memory store
The memory store, which holds all results in main memory (backed by `zarr`), is `SWYFT`'s simplest store option.
An empty store can be instantiated as follows, requiring only the specification of an associated simulator.
```
store = swyft.Store.memory_store(simulator)
```
Subsequently, parameters, drawn according to the specified prior, can be added to the store as
NOTE: the store ADDS a Poisson-distributed number of samples with parameter `n_training_samples`. When samples are drawn FROM the store, that amount is also Poisson-distributed. When more samples are drawn than exist within the store, an error is thrown. To avoid this issue, add more samples to the store than you intend to draw from it.
```
# Drawing samples from the store is Poisson distributed.
# Simulating slightly more than we need avoids attempting to draw more than we have.
store.add(n_training_samples + 0.01 * n_training_samples, prior=prior)
```
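The buffer added above can be motivated with a quick numpy sketch (an illustration, independent of `swyft`): the number of samples drawn is Poisson-distributed around `n_training_samples`, so a store holding exactly `n` samples is exceeded by a draw roughly half the time, while a 1% buffer makes this much rarer:

```python
import numpy as np

n = 3000
rng = np.random.default_rng(0)
draws = rng.poisson(lam=n, size=100_000)  # simulated per-draw sample counts

p_exact = (draws > n).mean()                    # store holds exactly n samples
p_buffer = (draws > n + int(0.01 * n)).mean()   # store holds n + 1% samples
print(p_exact, p_buffer)  # roughly 0.49 vs 0.29
```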
and it is possible to check whether entries in the store require simulator runs using
```
needs_sim = store.requires_sim()
needs_sim
```
Similarly, an overview of the exact simulation status of all entries can be obtained using
```
store.get_simulation_status()
```
Where a value of 0 corresponds to "not yet simulated".
The required simulations can then be run using the store's `simulate` method.
```
store.simulate()
```
Afterwards, all simulations have been run, and their status in the store has been updated (2 corresponds to successfully simulated).
```
store.requires_sim()
store.get_simulation_status()
```
### Sample re-use and coverage
`SWYFT`'s store enables reuse of simulations. To check which fraction of a required number of samples can be reused, the store's coverage for the desired prior, i.e. the fraction of the desired number of samples that is already available in the store, can be inspected as follows.
```
store.coverage(2*n_training_samples, prior=prior)
```
Adding a specified number of samples to the store then becomes a question of adding the missing number.
```
store.add(2*n_training_samples, prior=prior)
```
These, however, do not yet have associated simulation results.
```
store.requires_sim()
store._get_indices_to_simulate()
```
#### Saving and loading
A memory store can also be saved, i.e. serialized to disk as a directory store, using the `save` method which takes the desired path as an argument,
```
store.save(cwd+'/SavedStore')
```
and be loaded into memory by specifying the path to a directory store and a simulator
```
store2 = swyft.Store.load(cwd+'/SavedStore', simulator=simulator).to_memory()
store2._get_indices_to_simulate()
```
## The directory store
In many cases, running an instance of a simulator may be quite computationally expensive. For such simulators `SWYFT`'s ability to support reuse of simulations across different experiments is of paramount importance.
`SWYFT` provides this capability in the form of the directory store, which serializes the store to disk using `zarr` and keeps it up to date with regard to requested samples and parameters.
A directory store can be instantiated via the `Store.directory_store()` convenience method by providing a path and a simulator as arguments. In order to open an existing store, `Store.load()` can be employed.
```
dirStore = swyft.Store.load(cwd+'/SavedStore')
```
While it is necessary to specify the simulator associated with a directory store upon instantiation via the `simulator` keyword, an existing store can be loaded without specifying a simulator, with the simulator set afterwards.
```
dirStore.set_simulator(simulator)
```
### Updating on disk
We now briefly demonstrate the difference between a directory store and a memory store which has been loaded from an existing directory store.
In the example above, both `dirStore` and `store2` currently have equivalent content. In `dirStore` we will now run simulations for half of the samples that still lack results,
```
all_to_sim = dirStore._get_indices_to_simulate()
sim_now = all_to_sim[0:int(len(all_to_sim)/2)]
dirStore.simulate(sim_now)
```
Here we have made use of the ability to explicitly specify the indices of the samples to be simulated.
The remaining samples lacking simulation results in the `dirStore` are now
```
dirStore._get_indices_to_simulate()
```
i.e. the store has been updated on disk, while in comparison the samples lacking simulation results in `store2` are still
```
store2._get_indices_to_simulate()
```
## Asynchronous usage
In contrast to the memory store, the directory store also supports asynchronous usage: when simulations are requested, control returns immediately, with the simulations and the updating of the store happening in the background.
This is particularly relevant for long-running simulators and parallelization using Dask, as is showcased in a separate notebook.
Here, as a small example, we simply add further samples to the store and then execute the associated simulations without waiting for the results.
```
dirStore.add(5*n_training_samples,prior=prior)
dirStore.simulate(wait_for_results=False)
print('control returned')
```
# Hour 3
# Building a learning machine with filters (continued)
Now let's stack convolutional layers to build a deep convolutional neural network (Deep Convolutional Neural Network).
Basically, this amounts to repeating the `model` part.
In the previous hour we built a convolutional network as a model of the *primary visual cortex*; to model the higher visual areas,
we simply add more structures of the same kind as the primary visual cortex.
```
# Boilerplate imports for now
import numpy as np
import matplotlib.pylab as plt
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
```
## Preparing the data
As before, we use MNIST.
```
from keras.datasets import mnist
(x_train, y_train_label), (x_test, y_test_label) = mnist.load_data()
# Rescale the image data to values in [0, 1] for easier handling.
x_train = x_train.astype('float32').reshape(60000, 28, 28, 1)
x_test = x_test.astype('float32').reshape(10000, 28, 28, 1)
x_train /= 255
x_test /= 255
# Represent the labels as 10-element one-hot vectors.
y_train = keras.utils.to_categorical(y_train_label)
y_test = keras.utils.to_categorical(y_test_label)
```
## Building a vision model
We model the higher visual areas by repeatedly applying the primary-visual-cortex structure.
Since a large model would take a long time to train, here we repeat the primary-visual-cortex structure twice.
```
model = Sequential()
# The primary-visual-cortex part is the same as the earlier model
model.add(Conv2D(16, kernel_size=(3, 3), input_shape=(28, 28, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Higher visual areas: repeat the primary-visual-cortex structure above
model.add(Conv2D(16, kernel_size=(3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
```
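As a sanity check on the architecture, here is the shape arithmetic for each layer (a sketch in plain Python; 'valid' convolutions shrink the map by kernel size minus one, and 2x2 pooling halves it with flooring):

```python
def conv_out(size, kernel):   # 'valid' convolution output size
    return size - kernel + 1

def pool_out(size, pool):     # max-pooling output size (floor division)
    return size // pool

s = 28                    # MNIST input is 28x28
s = conv_out(s, 3)        # Conv2D(16, 3x3)   -> 26x26x16
s = pool_out(s, 2)        # MaxPooling2D(2x2) -> 13x13x16
s = conv_out(s, 3)        # Conv2D(16, 3x3)   -> 11x11x16
s = pool_out(s, 2)        # MaxPooling2D(2x2) -> 5x5x16
print(s, s * s * 16)      # 5 400  (Flatten produces a 400-dim vector)
```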
Let's regard the network so far as a model of the higher visual areas.
Feeding this representation into a classifier works the same way as in the previous model.
```
# Classifier; for now, treat this as boilerplate
model.add(Flatten()) # flatten the representation above into a single vector
model.add(Dense(10, activation='softmax')) # and build the classifier
```
## Training the model
The model is now defined, but the filter coefficients are not yet determined. We determine them by training.
Training works by minimizing a certain function (called the loss function).
Here we use `categorical_crossentropy` as the loss function.
```
from keras.optimizers import Adam
# Compile the model
model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])
# and train it
history = model.fit(x_train, y_train, batch_size=128, epochs=20, verbose=1)
# Loss values
plt.figure()
plt.plot(history.history['loss'])
plt.title('loss value')
plt.grid()
# Accuracy on the training set
plt.figure()
plt.plot(history.history['acc'])
plt.title('Accuracy (For training set)')
plt.grid()
```
You should see roughly the same classification accuracy as before.
Next, let's evaluate on the test set.
```
# Evaluate on the test set
score = model.evaluate(x_test, y_test, verbose=1)
print('test loss: ', score[0])
print('test accuracy: ', score[1])
```
The accuracy should be about 98%, which is fairly decent.
As a bonus, let's pick out the misclassified examples.
```
# Pick out the misclassified examples
y_pred_label = model.predict_classes(x_test)
idx = (y_pred_label != y_test_label) # indices where the prediction differs from the true label
x_failed = x_test[idx]
y_true_failed = y_test_label[idx]
y_pred_failed = y_pred_label[idx]
plt.figure(figsize=(10, 10))
for i in range(9):
plt.subplot(3, 3, i+1)
plt.imshow(x_failed[i, :, :, 0], cmap='gray')
plt.title('True: %d, Pred: %d' % (y_true_failed[i], y_pred_failed[i]))
```
```
## DeepAR Model - Predict Bike Rental with Dynamic Features
Note: This dataset is not a true time series, as it has a lot of gaps.
We have data only for the first 20 days of each month, and the model needs to predict the rentals for
the remaining days of the month. The dataset consists of two years of data. DeepAR shines with a true multiple-time-series dataset, such as the electricity example mentioned below.
```
import time
import numpy as np
import pandas as pd
import json
import matplotlib.pyplot as plt
import datetime
import boto3
import sagemaker
from sagemaker import get_execution_role
# Provide endpoint
with_categories = False
# ***TODO: You would need to update the endpoint name to point to your endpoint***
endpoint_name = 'deepar-biketrain-with-dynamic-feat-2021-06-15-16-39-30-403'
freq='H' # Timeseries consists Hourly Data and we need to predict hourly rental count
# how far in the future predictions can be made
# 12 days worth of hourly forecast
prediction_length = 288
# aws recommends setting context same as prediction length as a starting point.
# This controls how far in the past the network can see
context_length = 288
dt_predict_max = pd.Timestamp("2012-12-31 23:00:00", freq=freq) # 2012-12-31 23:00 alt way..pd.datetime(2012,12,31,23,0,0)
dt_dataset_start_time = pd.Timestamp("2011-01-01 00:00:00", freq=freq)
dt_dataset_end_time = pd.Timestamp("2012-12-19 23:00:00", freq=freq)
# use for model training
# Start time is the first row provided by kaggle
# Training TS end time ensures some data is withheld for model testing
# 12 days worth of training data is withheld for testing
dt_train_range = (dt_dataset_start_time,
dt_dataset_end_time - datetime.timedelta(hours=12*24) )
# Use entire data for testing
# We can compare predicted values vs actual (i.e. last 12 days is withheld for testing and model hasn't seen that data)
dt_test_range = (dt_dataset_start_time,
dt_dataset_end_time)
sagemaker_session = sagemaker.Session()
role = get_execution_role()
def encode_target(ts):
return [x if np.isfinite(x) else "NaN" for x in ts]
def encode_dynamic_feat(dynamic_feat):
l = []
for col in dynamic_feat:
assert (not dynamic_feat[col].isna().any()), col + ' has NaN'
l.append(dynamic_feat[col].tolist())
return l
def series_to_obj(ts, cat=None, dynamic_feat=None):
obj = {"start": str(ts.index[0]), "target": encode_target(ts)}
if cat is not None:
obj["cat"] = cat
if dynamic_feat is not None:
obj["dynamic_feat"] = encode_dynamic_feat(dynamic_feat)
return obj
def series_to_jsonline(ts, cat=None, dynamic_feat=None):
return json.dumps(series_to_obj(ts, cat, dynamic_feat))
# SDK 2. RealTimePredictor renamed to Predictor
class DeepARPredictor(sagemaker.predictor.Predictor):
def set_prediction_parameters(self, freq, prediction_length):
"""Set the time frequency and prediction length parameters. This method **must** be called
before being able to use `predict`.
Parameters:
freq -- string indicating the time frequency
prediction_length -- integer, number of predicted time points
Return value: none.
"""
self.freq = freq
self.prediction_length = prediction_length
def predict(self, ts, cat=None, dynamic_feat=None,
encoding="utf-8", num_samples=100, quantiles=["0.1", "0.5", "0.9"]):
"""Requests the prediction of for the time series listed in `ts`, each with the (optional)
corresponding category listed in `cat`.
Parameters:
ts -- list of `pandas.Series` objects, the time series to predict
cat -- list of integers (default: None)
encoding -- string, encoding to use for the request (default: "utf-8")
num_samples -- integer, number of samples to compute at prediction time (default: 100)
quantiles -- list of strings specifying the quantiles to compute (default: ["0.1", "0.5", "0.9"])
Return value: list of `pandas.DataFrame` objects, each containing the predictions
"""
#prediction_times = [x.index[-1]+1 for x in ts]
prediction_times = [x.index[-1] + datetime.timedelta(hours=1) for x in ts]
req = self.__encode_request(ts, cat, dynamic_feat, encoding, num_samples, quantiles)
res = super(DeepARPredictor, self).predict(req)
return self.__decode_response(res, prediction_times, encoding)
def __encode_request(self, ts, cat, dynamic_feat, encoding, num_samples, quantiles):
instances = [series_to_obj(ts[k],
cat[k] if cat else None,
dynamic_feat)
for k in range(len(ts))]
configuration = {"num_samples": num_samples, "output_types": ["quantiles"], "quantiles": quantiles}
http_request_data = {"instances": instances, "configuration": configuration}
return json.dumps(http_request_data).encode(encoding)
def __decode_response(self, response, prediction_times, encoding):
response_data = json.loads(response.decode(encoding))
list_of_df = []
for k in range(len(prediction_times)):
#prediction_index = pd.DatetimeIndex(start=prediction_times[k], freq=self.freq, periods=self.prediction_length)
prediction_index = pd.date_range(start=prediction_times[k], freq=self.freq, periods=self.prediction_length)
list_of_df.append(pd.DataFrame(data=response_data['predictions'][k]['quantiles'], index=prediction_index))
return list_of_df
# SDK 2 parameter name endpoint_name, content_type is specified as part of the serializer
predictor = DeepARPredictor(
endpoint_name=endpoint_name,
sagemaker_session=sagemaker_session
)
predictor.set_prediction_parameters(freq, prediction_length)
predictor.serializer.content_type = "application/json"
df = pd.read_csv('all_data_dynamic_feat.csv', parse_dates=['datetime'],index_col=0)
df_test = pd.read_csv('test.csv', parse_dates=['datetime'],index_col=0) # data points to be predicted for submission
df = df.resample('1h').mean()
dynamic_features = ['season', 'holiday', 'workingday', 'weather', 'temp', 'atemp', 'humidity', 'windspeed']
target_values = ['count','registered','casual']
time_series_test = []
time_series_training = []
for t in target_values:
time_series_test.append(df[dt_test_range[0]:dt_test_range[1]][t])
time_series_training.append(df[dt_train_range[0]:dt_train_range[1]][t])
df_dynamic_feat = df[dynamic_features]
dynamic_features_test = df_dynamic_feat [dt_test_range[0]:dt_test_range[1]]
dynamic_features_training = df_dynamic_feat[dt_train_range[0]:dt_train_range[1]]
df_dynamic_feat.head()
dynamic_features_test.shape
# Provide 0 based index for categories
list_of_df = predictor.predict(time_series_training,
cat=[[0],[1],[2]] if with_categories else None,
dynamic_feat=dynamic_features_test)
for i in range(len(list_of_df)):
print(len(list_of_df[i]))
```
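For reference, the request body produced by `__encode_request` above follows DeepAR's JSON inference format: a list of `instances` (each with `start`, `target`, and optional `cat`/`dynamic_feat`) plus a `configuration`. A minimal standalone sketch of one such payload (toy values, not taken from the bike dataset):

```python
import json

instance = {
    "start": "2012-12-08 00:00:00",       # timestamp of the first target value
    "target": [134.0, 120.0, 98.0],       # observed history to condition on
    "dynamic_feat": [[0, 0, 1], [13.2, 12.8, 12.1]],  # one list per feature
}
configuration = {
    "num_samples": 100,
    "output_types": ["quantiles"],
    "quantiles": ["0.1", "0.5", "0.9"],
}
payload = json.dumps({"instances": [instance], "configuration": configuration})
print(payload[:40])
```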
### Predict total count, registered, casual - We can also predict only on the total count time series
```
for k in range(len(list_of_df)):
# print (-prediction_length-context_length) #120 = 72+48
plt.figure(figsize=(12,6))
time_series_test[k][-prediction_length-context_length:].plot(label='target')
p10 = list_of_df[k]['0.1']
p90 = list_of_df[k]['0.9']
plt.fill_between(p10.index, p10, p90, color='y', alpha=0.5, label='80% confidence interval')
list_of_df[k]['0.5'].plot(label='prediction median')
plt.legend()
plt.show()
predict_window = []
for i,x in df_test.groupby([df_test.index.year,df_test.index.month]):
predict_window.append(x.index.min()-datetime.timedelta(hours=1))
for t in target_values:
df_test[t] = np.nan
df_test.head()
for window in predict_window:
print(window)
# If trained with categories, we need to send corresponding category for each training set
# In this case
for i in range(len(target_values)):
list_of_df = predictor.predict([time_series_test[i][:window]],
cat=[i] if with_categories else None,
dynamic_feat= df_dynamic_feat[:window +
datetime.timedelta(hours=prediction_length)])
df_tmp = list_of_df[0]
df_tmp.index.name = 'datetime'
df_tmp.columns = ['0.1',target_values[i],'0.9']
df_test.update(df_tmp[target_values[i]])
df_test.head()
df_test.tail()
def adjust_count(x):
if x < 0:
return 0
else:
return x
df_test['count'] = df_test['count'].map(adjust_count)
df_test['registered'] = df_test['registered'].map(adjust_count)
df_test['casual'] = df_test['casual'].map(adjust_count)
df_reg_cas = pd.DataFrame(df_test['registered'] + df_test['casual'])
df_reg_cas.columns = ['count']
df_reg_cas.head()
# Store the results
df_test[['count']].to_csv('prediction-with-dynamic-features.csv',index=True,index_label='datetime')
df_reg_cas[['count']].to_csv('prediction-with-dynamic-features-reg-cas.csv',index=True,index_label='datetime')
# Delete the endpoint after completing the demo...otherwise, your account will accumulate hourly charges
predictor.delete_endpoint()
# Don't forget to terminate the end point after completing the demo
# Otherwise, you account will accumulate hourly charges
# you can delete from sagemaker management console or through command line or throught code
# predictor.delete_endpoint()
```
<h1>Neural Machine Translation for French to English</h1>
```
import pandas as pd
import tensorflow as tf
import numpy as np
import gzip
import codecs
import re
import time
from tensorflow.python.ops.rnn_cell_impl import _zero_state_tensors
from tensorflow.python.layers.core import Dense
from tensorflow.contrib.seq2seq import TrainingHelper, GreedyEmbeddingHelper, BasicDecoder, dynamic_decode
from tensorflow.contrib.seq2seq import BahdanauAttention, AttentionWrapper, sequence_loss
from tensorflow.contrib.rnn import GRUCell, DropoutWrapper
TOKEN_GO = '<GO>'
TOKEN_EOS = '<EOS>'
TOKEN_PAD = '<PAD>'
TOKEN_UNK = '<UNK>'
frdata=[]
endata=[]
with open('data/train_fr_lines.txt') as frfile:
for li in frfile:
frdata.append(li)
with open('data/train_en_lines.txt') as enfile:
for li in enfile:
endata.append(li)
mtdata = pd.DataFrame({'FR':frdata,'EN':endata})
mtdata['FR_len'] = mtdata['FR'].apply(lambda x: len(x.split(' ')))
mtdata['EN_len'] = mtdata['EN'].apply(lambda x: len(x.split(' ')))
print(mtdata['FR'].head(2).values)
print(mtdata['EN'].head(2).values)
mtdata_fr = []
for fr in mtdata.FR:
mtdata_fr.append(fr)
mtdata_en = []
for en in mtdata.EN:
mtdata_en.append(en)
def count_words(words_dict, text):
for sentence in text:
for word in sentence.split():
if word not in words_dict:
words_dict[word] = 1
else:
words_dict[word] += 1
word_counts_dict_fr = {}
word_counts_dict_en = {}
count_words(word_counts_dict_fr, mtdata_fr)
count_words(word_counts_dict_en, mtdata_en)
print("Total French words in Vocabulary:", len(word_counts_dict_fr))
print("Total English words in Vocabulary", len(word_counts_dict_en))
def build_word_vector_matrix(vector_file):
embedding_index = {}
with codecs.open(vector_file, 'r', 'utf-8') as f:
for i, line in enumerate(f):
sr = line.split()
if(len(sr)<26):
continue
word = sr[0]
embedding = np.asarray(sr[1:], dtype='float32')
embedding_index[word] = embedding
return embedding_index
embeddings_index = build_word_vector_matrix('/Users/i346047/prs/temp/glove.6B.50d.txt')
def build_word2id_mapping(word_counts_dict):
word2int = {}
count_threshold = 20
value = 0
for word, count in word_counts_dict.items():
if count >= count_threshold or word in embeddings_index:
word2int[word] = value
value += 1
special_codes = [TOKEN_UNK,TOKEN_PAD,TOKEN_EOS,TOKEN_GO]
for code in special_codes:
word2int[code] = len(word2int)
int2word = {}
for word, value in word2int.items():
int2word[value] = word
return word2int,int2word
def build_embeddings(word2int):
embedding_dim = 50
nwords = len(word2int)
word_emb_matrix = np.zeros((nwords, embedding_dim), dtype=np.float32)
for word, i in word2int.items():
if word in embeddings_index:
word_emb_matrix[i] = embeddings_index[word]
else:
new_embedding = np.array(np.random.uniform(-1.0, 1.0, embedding_dim))
word_emb_matrix[i] = new_embedding
return word_emb_matrix
fr_word2int,fr_int2word = build_word2id_mapping(word_counts_dict_fr)
en_word2int,en_int2word = build_word2id_mapping(word_counts_dict_en)
fr_embeddings_matrix = build_embeddings(fr_word2int)
en_embeddings_matrix = build_embeddings(en_word2int)
print("Length of french word embeddings: ", len(fr_embeddings_matrix))
print("Length of english word embeddings: ", len(en_embeddings_matrix))
def convert_sentence_to_ids(text, word2int, eos=False):
wordints = []
word_count = 0
for sentence in text:
sentence2ints = []
for word in sentence.split():
word_count += 1
if word in word2int:
sentence2ints.append(word2int[word])
else:
sentence2ints.append(word2int[TOKEN_UNK])
if eos:
sentence2ints.append(word2int[TOKEN_EOS])
wordints.append(sentence2ints)
return wordints, word_count
id_fr, word_count_fr = convert_sentence_to_ids(mtdata_fr, fr_word2int)
id_en, word_count_en = convert_sentence_to_ids(mtdata_en, en_word2int, eos=True)
def unknown_tokens(sentence, word2int):
unk_token_count = 0
for word in sentence:
if word == word2int[TOKEN_UNK]:
unk_token_count += 1
return unk_token_count
en_filtered = []
fr_filtered = []
max_en_length = int(mtdata.EN_len.max())
max_fr_length = int(mtdata.FR_len.max())
min_length = 4
unknown_token_en_limit = 10
unknown_token_fr_limit = 10
for count,text in enumerate(id_en):
unknown_token_en = unknown_tokens(id_en[count],en_word2int)
unknown_token_fr = unknown_tokens(id_fr[count],fr_word2int)
en_len = len(id_en[count])
fr_len = len(id_fr[count])
if( (unknown_token_en>unknown_token_en_limit) or (unknown_token_fr>unknown_token_fr_limit) or
(en_len<min_length) or (fr_len<min_length) ):
continue
fr_filtered.append(id_fr[count])
en_filtered.append(id_en[count])
print("Length of filtered french/english sentences: ", len(fr_filtered), len(en_filtered) )
def model_inputs():
inputs_data = tf.placeholder(tf.int32, [None, None], name='input_data')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
dropout_probs = tf.placeholder(tf.float32, name='dropout_probs')
en_len = tf.placeholder(tf.int32, (None,), name='en_len')
max_en_len = tf.reduce_max(en_len, name='max_en_len')
fr_len = tf.placeholder(tf.int32, (None,), name='fr_len')
return inputs_data, targets, learning_rate, dropout_probs, en_len, max_en_len, fr_len
def process_encoding_input(target_data, word2int, batch_size):
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
decoding_input = tf.concat([tf.fill([batch_size, 1], word2int[TOKEN_GO]), ending], 1)
return decoding_input
def get_rnn_cell(rnn_cell_size,dropout_prob):
rnn_c = GRUCell(rnn_cell_size)
rnn_c = DropoutWrapper(rnn_c, input_keep_prob = dropout_prob)
return rnn_c
def encoding_layer(rnn_cell_size, sequence_len, n_layers, rnn_inputs, dropout_prob):
for l in range(n_layers):
with tf.variable_scope('encoding_l_{}'.format(l)):
rnn_fw = get_rnn_cell(rnn_cell_size,dropout_prob)
rnn_bw = get_rnn_cell(rnn_cell_size,dropout_prob)
encoding_output, encoding_state = tf.nn.bidirectional_dynamic_rnn(rnn_fw, rnn_bw,
rnn_inputs,
sequence_len,
dtype=tf.float32)
encoding_output = tf.concat(encoding_output,2)
return encoding_output, encoding_state
def training_decoding_layer(decoding_embed_input, en_len, decoding_cell, initial_state, op_layer,
v_size, max_en_len):
helper = TrainingHelper(inputs=decoding_embed_input,sequence_length=en_len, time_major=False)
dec = BasicDecoder(decoding_cell,helper,initial_state,op_layer)
logits, _, _ = dynamic_decode(dec,output_time_major=False,impute_finished=True,
maximum_iterations=max_en_len)
return logits
def inference_decoding_layer(embeddings, start_token, end_token, decoding_cell, initial_state, op_layer,
max_en_len, batch_size):
start_tokens = tf.tile(tf.constant([start_token], dtype=tf.int32), [batch_size], name='start_tokens')
inf_helper = GreedyEmbeddingHelper(embeddings,start_tokens,end_token)
inf_decoder = BasicDecoder(decoding_cell,inf_helper,initial_state,op_layer)
inf_logits, _, _ = dynamic_decode(inf_decoder,output_time_major=False,impute_finished=True,
maximum_iterations=max_en_len)
return inf_logits
def decoding_layer(decoding_embed_inp, embeddings, encoding_op, encoding_st, v_size, fr_len,
en_len,max_en_len, rnn_cell_size, word2int, dropout_prob, batch_size, n_layers):
for l in range(n_layers):
with tf.variable_scope('dec_rnn_layer_{}'.format(l)):
            gru = tf.contrib.rnn.GRUCell(rnn_cell_size)
decoding_cell = tf.contrib.rnn.DropoutWrapper(gru,input_keep_prob = dropout_prob)
out_l = Dense(v_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
attention = BahdanauAttention(rnn_cell_size, encoding_op,fr_len,
normalize=False,
name='BahdanauAttention')
    decoding_cell = AttentionWrapper(decoding_cell,attention,rnn_cell_size)
attention_zero_state = decoding_cell.zero_state(batch_size , tf.float32 )
attention_zero_state = attention_zero_state.clone(cell_state = encoding_st[0])
with tf.variable_scope("decoding_layer"):
logits_tr = training_decoding_layer(decoding_embed_inp,
en_len,
decoding_cell,
attention_zero_state,
out_l,
v_size,
max_en_len)
with tf.variable_scope("decoding_layer", reuse=True):
logits_inf = inference_decoding_layer(embeddings,
word2int[TOKEN_GO],
word2int[TOKEN_EOS],
decoding_cell,
attention_zero_state,
out_l,
max_en_len,
batch_size)
return logits_tr, logits_inf
def seq2seq_model(input_data, target_en_data, dropout_prob, fr_len, en_len, max_en_len,
v_size, rnn_cell_size, n_layers, word2int_en, batch_size):
input_word_embeddings = tf.Variable(fr_embeddings_matrix, name="input_word_embeddings")
encoding_embed_input = tf.nn.embedding_lookup(input_word_embeddings, input_data)
encoding_op, encoding_st = encoding_layer(rnn_cell_size, fr_len, n_layers, encoding_embed_input, dropout_prob)
decoding_input = process_encoding_input(target_en_data, word2int_en, batch_size)
decoding_embed_input = tf.nn.embedding_lookup(en_embeddings_matrix, decoding_input)
tr_logits, inf_logits = decoding_layer(decoding_embed_input,
en_embeddings_matrix,
encoding_op,
encoding_st,
v_size,
fr_len,
en_len,
max_en_len,
rnn_cell_size,
word2int_en,
dropout_prob,
batch_size,
n_layers)
return tr_logits, inf_logits
def pad_sentences(sentences_batch,word2int):
max_sentence = max([len(sentence) for sentence in sentences_batch])
return [sentence + [word2int[TOKEN_PAD]] * (max_sentence - len(sentence)) for sentence in sentences_batch]
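# Toy illustration (hypothetical ids): with PAD id 0, the batch [[5, 6, 7], [8]]
# is padded to [[5, 6, 7], [8, 0, 0]], so every row has the batch's max length.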
def get_batches(en_text, fr_text, batch_size):
for batch_idx in range(0, len(fr_text)//batch_size):
start_idx = batch_idx * batch_size
en_batch = en_text[start_idx:start_idx + batch_size]
fr_batch = fr_text[start_idx:start_idx + batch_size]
pad_en_batch = np.array(pad_sentences(en_batch, en_word2int))
pad_fr_batch = np.array(pad_sentences(fr_batch,fr_word2int))
pad_en_lens = []
for en_b in pad_en_batch:
pad_en_lens.append(len(en_b))
pad_fr_lens = []
for fr_b in pad_fr_batch:
pad_fr_lens.append(len(fr_b))
yield pad_en_batch, pad_fr_batch, pad_en_lens, pad_fr_lens
epochs = 20
batch_size = 64
rnn_len = 256
n_layers = 2
lr = 0.005
dr_prob = 0.75
logs_path='/tmp/models/'
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, learning_rate, dropout_probs, en_len, max_en_len, fr_len = model_inputs()
logits_tr, logits_inf = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
dropout_probs,
fr_len,
en_len,
max_en_len,
len(en_word2int)+1,
rnn_len,
n_layers,
en_word2int,
batch_size)
logits_tr = tf.identity(logits_tr.rnn_output, 'logits_tr')
logits_inf = tf.identity(logits_inf.sample_id, name='predictions')
seq_masks = tf.sequence_mask(en_len, max_en_len, dtype=tf.float32, name='masks')
with tf.name_scope("optimizer"):
tr_cost = sequence_loss(logits_tr,targets,seq_masks)
optimizer = tf.train.AdamOptimizer(learning_rate)
gradients = optimizer.compute_gradients(tr_cost)
capped_gradients = [(tf.clip_by_value(gradient, -5., 5.), var) for gradient, var in gradients
if gradient is not None]
train_op = optimizer.apply_gradients(capped_gradients)
tf.summary.scalar("cost", tr_cost)
print("Graph created.")
min_learning_rate = 0.0006
display_step = 20
stop_early_count = 0
stop_early_max_count = 3
per_epoch = 3
update_loss = 0
batch_loss = 0
summary_update_loss = []
en_train = en_filtered[0:30000]
fr_train = fr_filtered[0:30000]
update_check = (len(fr_train)//batch_size//per_epoch)-1
checkpoint = logs_path + 'best_so_far_model.ckpt'
with tf.Session(graph=train_graph) as sess:
tf_summary_writer = tf.summary.FileWriter(logs_path, graph=train_graph)
merged_summary_op = tf.summary.merge_all()
sess.run(tf.global_variables_initializer())
for epoch_i in range(1, epochs+1):
update_loss = 0
batch_loss = 0
for batch_i, (en_batch, fr_batch, en_text_len, fr_text_len) in enumerate(
get_batches(en_train, fr_train, batch_size)):
before = time.time()
_,loss,summary = sess.run(
[train_op, tr_cost,merged_summary_op],
{input_data: fr_batch,
targets: en_batch,
learning_rate: lr,
en_len: en_text_len,
fr_len: fr_text_len,
dropout_probs: dr_prob})
batch_loss += loss
update_loss += loss
after = time.time()
batch_time = after - before
            tf_summary_writer.add_summary(summary, (epoch_i - 1) * (len(fr_train) // batch_size) + batch_i)  # global step counted in batches
if batch_i % display_step == 0 and batch_i > 0:
print('** Epoch {:>3}/{} Batch {:>4}/{} - Batch Loss: {:>6.3f}, seconds: {:>4.2f}'
.format(epoch_i,
epochs,
batch_i,
                      len(fr_train) // batch_size,
batch_loss / display_step,
batch_time*display_step))
batch_loss = 0
if batch_i % update_check == 0 and batch_i > 0:
print("Average loss:", round(update_loss/update_check,3))
summary_update_loss.append(update_loss)
if update_loss <= min(summary_update_loss):
print('Saving model')
stop_early_count = 0
saver = tf.train.Saver()
saver.save(sess, checkpoint)
else:
print("No Improvement.")
stop_early_count += 1
if stop_early_count == stop_early_max_count:
break
update_loss = 0
if stop_early_count == stop_early_max_count:
print("Stopping Training.")
break
#random = np.random.randint(3000,len(fr_filtered))
random = np.random.randint(0,3000)
fr_text = fr_filtered[random]
checkpoint = logs_path + 'best_so_far_model.ckpt'
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
loader = tf.train.import_meta_graph(checkpoint + '.meta')
loader.restore(sess, checkpoint)
input_data = loaded_graph.get_tensor_by_name('input_data:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
fr_length = loaded_graph.get_tensor_by_name('fr_len:0')
en_length = loaded_graph.get_tensor_by_name('en_len:0')
dropout_prob = loaded_graph.get_tensor_by_name('dropout_probs:0')
result_logits = sess.run(logits, {input_data: [fr_text]*batch_size,
en_length: [len(fr_text)],
fr_length: [len(fr_text)]*batch_size,
dropout_prob: 1.0})[0]
pad = en_word2int[TOKEN_PAD]
#print('\nOriginal Text:', input_sentence)
print('\nFrench Text')
print(' Word Ids: {}'.format([i for i in fr_text]))
print(' Input Words: {}'.format(" ".join( [fr_int2word[i] for i in fr_text ] )))
print('\nEnglish Text')
print(' Word Ids: {}'.format([i for i in result_logits if i != pad]))
print(' Response Words: {}'.format(" ".join( [en_int2word[i]for i in result_logits if i!=pad] )))
print(' Ground Truth: {}'.format(" ".join( [en_int2word[i] for i in en_filtered[random]] )))
fr_int2word[0]
```
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).*
*The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!*
*No changes were made to the contents of this notebook from the original.*
<!--NAVIGATION-->
< [Visualization with Matplotlib](04.00-Introduction-To-Matplotlib.ipynb) | [Contents](Index.ipynb) | [Simple Scatter Plots](04.02-Simple-Scatter-Plots.ipynb) >
# Simple Line Plots
Perhaps the simplest of all plots is the visualization of a single function $y = f(x)$.
Here we will take a first look at creating a simple plot of this type.
As with all the following sections, we'll start by setting up the notebook for plotting and importing the packages we will use:
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np
```
For all Matplotlib plots, we start by creating a figure and an axes.
In their simplest form, a figure and axes can be created as follows:
```
fig = plt.figure()
ax = plt.axes()
```
In Matplotlib, the *figure* (an instance of the class ``plt.Figure``) can be thought of as a single container that contains all the objects representing axes, graphics, text, and labels.
The *axes* (an instance of the class ``plt.Axes``) is what we see above: a bounding box with ticks and labels, which will eventually contain the plot elements that make up our visualization.
Throughout this book, we'll commonly use the variable name ``fig`` to refer to a figure instance, and ``ax`` to refer to an axes instance or group of axes instances.
Once we have created an axes, we can use the ``ax.plot`` function to plot some data. Let's start with a simple sinusoid:
```
fig = plt.figure()
ax = plt.axes()
x = np.linspace(0, 10, 1000)
ax.plot(x, np.sin(x));
```
Alternatively, we can use the pylab interface and let the figure and axes be created for us in the background
(see [Two Interfaces for the Price of One](04.00-Introduction-To-Matplotlib.ipynb#Two-Interfaces-for-the-Price-of-One) for a discussion of these two interfaces):
```
plt.plot(x, np.sin(x));
```
If we want to create a single figure with multiple lines, we can simply call the ``plot`` function multiple times:
```
plt.plot(x, np.sin(x))
plt.plot(x, np.cos(x));
```
That's all there is to plotting simple functions in Matplotlib!
We'll now dive into some more details about how to control the appearance of the axes and lines.
## Adjusting the Plot: Line Colors and Styles
The first adjustment you might wish to make to a plot is to control the line colors and styles.
The ``plt.plot()`` function takes additional arguments that can be used to specify these.
To adjust the color, you can use the ``color`` keyword, which accepts a string argument representing virtually any imaginable color.
The color can be specified in a variety of ways:
```
plt.plot(x, np.sin(x - 0), color='blue') # specify color by name
plt.plot(x, np.sin(x - 1), color='g') # short color code (rgbcmyk)
plt.plot(x, np.sin(x - 2), color='0.75') # Grayscale between 0 and 1
plt.plot(x, np.sin(x - 3), color='#FFDD44') # Hex code (RRGGBB from 00 to FF)
plt.plot(x, np.sin(x - 4), color=(1.0,0.2,0.3)) # RGB tuple, values 0 to 1
plt.plot(x, np.sin(x - 5), color='chartreuse'); # all HTML color names supported
```
If no color is specified, Matplotlib will automatically cycle through a set of default colors for multiple lines.
Similarly, the line style can be adjusted using the ``linestyle`` keyword:
```
plt.plot(x, x + 0, linestyle='solid')
plt.plot(x, x + 1, linestyle='dashed')
plt.plot(x, x + 2, linestyle='dashdot')
plt.plot(x, x + 3, linestyle='dotted');
# For short, you can use the following codes:
plt.plot(x, x + 4, linestyle='-') # solid
plt.plot(x, x + 5, linestyle='--') # dashed
plt.plot(x, x + 6, linestyle='-.') # dashdot
plt.plot(x, x + 7, linestyle=':'); # dotted
```
If you would like to be extremely terse, these ``linestyle`` and ``color`` codes can be combined into a single non-keyword argument to the ``plt.plot()`` function:
```
plt.plot(x, x + 0, '-g') # solid green
plt.plot(x, x + 1, '--c') # dashed cyan
plt.plot(x, x + 2, '-.k') # dashdot black
plt.plot(x, x + 3, ':r'); # dotted red
```
These single-character color codes reflect the standard abbreviations in the RGB (Red/Green/Blue) and CMYK (Cyan/Magenta/Yellow/blacK) color systems, commonly used for digital color graphics.
There are many other keyword arguments that can be used to fine-tune the appearance of the plot; for more details, I'd suggest viewing the docstring of the ``plt.plot()`` function using IPython's help tools (See [Help and Documentation in IPython](01.01-Help-And-Documentation.ipynb)).
## Adjusting the Plot: Axes Limits
Matplotlib does a decent job of choosing default axes limits for your plot, but sometimes it's nice to have finer control.
The most basic way to adjust axis limits is to use the ``plt.xlim()`` and ``plt.ylim()`` methods:
```
plt.plot(x, np.sin(x))
plt.xlim(-1, 11)
plt.ylim(-1.5, 1.5);
```
If for some reason you'd like either axis to be displayed in reverse, you can simply reverse the order of the arguments:
```
plt.plot(x, np.sin(x))
plt.xlim(10, 0)
plt.ylim(1.2, -1.2);
```
A useful related method is ``plt.axis()`` (note here the potential confusion between *axes* with an *e*, and *axis* with an *i*).
The ``plt.axis()`` method allows you to set the ``x`` and ``y`` limits with a single call, by passing a list which specifies ``[xmin, xmax, ymin, ymax]``:
```
plt.plot(x, np.sin(x))
plt.axis([-1, 11, -1.5, 1.5]);
```
The ``plt.axis()`` method goes even beyond this, allowing you to do things like automatically tighten the bounds around the current plot:
```
plt.plot(x, np.sin(x))
plt.axis('tight');
```
It allows even higher-level specifications, such as ensuring an equal aspect ratio so that on your screen, one unit in ``x`` is equal to one unit in ``y``:
```
plt.plot(x, np.sin(x))
plt.axis('equal');
```
For more information on axis limits and the other capabilities of the ``plt.axis`` method, refer to the ``plt.axis`` docstring.
## Labeling Plots
As the last piece of this section, we'll briefly look at the labeling of plots: titles, axis labels, and simple legends.
Titles and axis labels are the simplest such labels—there are methods that can be used to quickly set them:
```
plt.plot(x, np.sin(x))
plt.title("A Sine Curve")
plt.xlabel("x")
plt.ylabel("sin(x)");
```
The position, size, and style of these labels can be adjusted using optional arguments to the function.
For more information, see the Matplotlib documentation and the docstrings of each of these functions.
When multiple lines are being shown within a single axes, it can be useful to create a plot legend that labels each line type.
Again, Matplotlib has a built-in way of quickly creating such a legend.
It is done via the (you guessed it) ``plt.legend()`` method.
Though there are several valid ways of using this, I find it easiest to specify the label of each line using the ``label`` keyword of the plot function:
```
plt.plot(x, np.sin(x), '-g', label='sin(x)')
plt.plot(x, np.cos(x), ':b', label='cos(x)')
plt.axis('equal')
plt.legend();
```
As you can see, the ``plt.legend()`` function keeps track of the line style and color, and matches these with the correct label.
More information on specifying and formatting plot legends can be found in the ``plt.legend`` docstring; additionally, we will cover some more advanced legend options in [Customizing Plot Legends](04.06-Customizing-Legends.ipynb).
## Aside: Matplotlib Gotchas
While most ``plt`` functions translate directly to ``ax`` methods (such as ``plt.plot()`` → ``ax.plot()``, ``plt.legend()`` → ``ax.legend()``, etc.), this is not the case for all commands.
In particular, functions to set limits, labels, and titles are slightly modified.
For transitioning between MATLAB-style functions and object-oriented methods, make the following changes:
- ``plt.xlabel()`` → ``ax.set_xlabel()``
- ``plt.ylabel()`` → ``ax.set_ylabel()``
- ``plt.xlim()`` → ``ax.set_xlim()``
- ``plt.ylim()`` → ``ax.set_ylim()``
- ``plt.title()`` → ``ax.set_title()``
In the object-oriented interface to plotting, rather than calling these functions individually, it is often more convenient to use the ``ax.set()`` method to set all these properties at once:
```
ax = plt.axes()
ax.plot(x, np.sin(x))
ax.set(xlim=(0, 10), ylim=(-2, 2),
xlabel='x', ylabel='sin(x)',
title='A Simple Plot');
```
<!--NAVIGATION-->
< [Visualization with Matplotlib](04.00-Introduction-To-Matplotlib.ipynb) | [Contents](Index.ipynb) | [Simple Scatter Plots](04.02-Simple-Scatter-Plots.ipynb) >
# Inductive node classification and representation learning using GraphSAGE
<table><tr><td>Run the latest release of this notebook:</td><td><a href="https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/node-classification/graphsage-inductive-node-classification.ipynb" alt="Open In Binder" target="_parent"><img src="https://mybinder.org/badge_logo.svg"/></a></td><td><a href="https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/node-classification/graphsage-inductive-node-classification.ipynb" alt="Open In Colab" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg"/></a></td></tr></table>
This notebook demonstrates inductive representation learning and node classification using the GraphSAGE [1] algorithm applied to inferring the subject of papers in a citation network.
To demonstrate inductive representation learning, we train a GraphSAGE model on a subgraph of the Pubmed-Diabetes citation network. Next, we use the trained model to predict the subject of nodes that were excluded from the subgraph used for model training.
We remove 20 percent of the network nodes (including all the edges from these nodes to any other nodes in the network) and then train a GraphSAGE model on the network composed of the remaining 80 percent of nodes. For training, we use only 5 percent of the labeled data.
After training the model, we use it to predict the labels, i.e., paper subjects, of the nodes originally held out after re-inserting them in the network. For prediction, we do not retrain the GraphSAGE model.
**References**
[1] Inductive Representation Learning on Large Graphs. W.L. Hamilton, R. Ying, and J. Leskovec arXiv:1706.02216 [cs.SI], 2017.
```
# install StellarGraph if running on Google Colab
import sys
if 'google.colab' in sys.modules:
%pip install -q stellargraph[demos]==1.3.0b
# verify that we're using the correct version of StellarGraph for this notebook
import stellargraph as sg
try:
sg.utils.validate_notebook_version("1.3.0b")
except AttributeError:
raise ValueError(
f"This notebook requires StellarGraph version 1.3.0b, but a different version {sg.__version__} is installed. Please see <https://github.com/stellargraph/stellargraph/issues/1172>."
) from None
import networkx as nx
import pandas as pd
import numpy as np
import itertools
import os
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import stellargraph as sg
from stellargraph import globalvar
from stellargraph.mapper import GraphSAGENodeGenerator
from stellargraph.layer import GraphSAGE
from tensorflow.keras import layers, optimizers, losses, metrics, Model
from sklearn import preprocessing, feature_extraction, model_selection
from stellargraph import datasets
from IPython.display import display, HTML
import matplotlib.pyplot as plt
%matplotlib inline
```
## Loading the dataset
(See [the "Loading from Pandas" demo](../basics/loading-pandas.ipynb) for details on how data can be loaded.)
```
dataset = datasets.PubMedDiabetes()
display(HTML(dataset.description))
graph_full, labels = dataset.load()
print(graph_full.info())
```
We aim to train a graph-ML model that will predict the "label" attribute on the nodes. These labels are one of 3 categories:
```
set(labels)
```
We are going to **remove 20 percent of the nodes from the graph**. Then, we are going to train a GraphSAGE model on the reduced graph with the remaining 80 percent of the nodes from the original graph. Later, we are going to re-introduce the removed nodes and try to predict their labels without re-training the GraphSAGE model.
```
len(labels)
labels_sampled = labels.sample(frac=0.8, replace=False, random_state=101)
```
`labels_sampled` is a Series that holds the node IDs (the series index) and the associated label of each of the nodes in the subgraph we are going to use for training the GraphSAGE model.
```
len(labels_sampled)
```
Now, we are going to extract the subgraph corresponding to the sampled nodes.
```
graph_sampled = graph_full.subgraph(labels_sampled.index)
print(graph_sampled.info())
```
Note above that both the number of nodes and edges have been reduced after removing 20 percent of the nodes in the original graph.
### Splitting the data
For machine learning we want to take a subset of the nodes for training, and use the rest for testing. We'll use scikit-learn to do this.
We are going to use 5 percent of the data for training; of the remaining nodes, we are going to use 20 percent as the validation set and treat the other 80 percent as a test set.
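As a quick sanity check (plain arithmetic, independent of the notebook's data), the two-stage split corresponds to these fractions of the sampled nodes:

```python
# Two-stage split: 5% of the sampled nodes for training; of the remaining 95%,
# 20% for validation and 80% for testing.
train_frac = 0.05
val_frac = (1 - train_frac) * 0.2   # ~0.19 of all sampled nodes
test_frac = (1 - train_frac) * 0.8  # ~0.76 of all sampled nodes
print(round(train_frac, 2), round(val_frac, 2), round(test_frac, 2))  # 0.05 0.19 0.76
```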
```
train_labels, test_labels = model_selection.train_test_split(
labels_sampled,
train_size=0.05,
test_size=None,
stratify=labels_sampled,
random_state=42,
)
val_labels, test_labels = model_selection.train_test_split(
test_labels, train_size=0.2, test_size=None, stratify=test_labels, random_state=100,
)
```
Note that using stratified sampling gives the following counts:
```
from collections import Counter
Counter(train_labels)
```
The training set has a class imbalance that might need to be compensated for, e.g., by using a weighted cross-entropy loss in model training, with class weights inversely proportional to class support. However, we will ignore the class imbalance in this example, for simplicity.
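As a sketch of that compensation (not applied in this notebook), class weights inversely proportional to support can be derived from the label counts; the counts below are made up:

```python
from collections import Counter

# Hypothetical label counts, standing in for Counter(train_labels) above.
counts = Counter({1: 50, 2: 30, 3: 20})

# Weight each class inversely to its support, normalised so that the
# average weight over all training samples equals 1.
n_samples = sum(counts.values())
n_classes = len(counts)
class_weight = {c: n_samples / (n_classes * n) for c, n in counts.items()}
print(class_weight)
```

In Keras, such a dictionary can be passed as `model.fit(..., class_weight=class_weight)` so that the minority classes contribute more to the loss.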
### Converting to numeric arrays
For our categorical target, we will use one-hot vectors that will be fed into a soft-max Keras layer during training.
```
target_encoding = preprocessing.LabelBinarizer()
train_targets = target_encoding.fit_transform(train_labels)
val_targets = target_encoding.transform(val_labels)
test_targets = target_encoding.transform(test_labels)
```
## Creating the GraphSAGE model in Keras
To feed data from the graph to the Keras model we need a generator. The generators are specialized to the model and the learning task so we choose the `GraphSAGENodeGenerator` as we are predicting node attributes with a GraphSAGE model.
We need two other parameters, the `batch_size` to use for training and the number of nodes to sample at each level of the model. Here we choose a two-layer model with 10 nodes sampled in the first layer, and 10 in the second.
```
batch_size = 50
num_samples = [10, 10]
```
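With `num_samples = [10, 10]`, the sampled neighbourhood of each head node is a small tree; a quick count of its maximum size:

```python
# Each head node samples up to 10 first-hop neighbours and, for each of
# those, up to 10 second-hop neighbours: 1 + 10 + 10*10 nodes in total.
num_samples = [10, 10]
level_sizes = [1]
for s in num_samples:
    level_sizes.append(level_sizes[-1] * s)
print(level_sizes, sum(level_sizes))  # [1, 10, 100] 111
```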
A `GraphSAGENodeGenerator` object is required to send the node features in sampled subgraphs to Keras:
```
generator = GraphSAGENodeGenerator(graph_sampled, batch_size, num_samples)
```
Using the `generator.flow()` method, we can create iterators over nodes that should be used to train, validate, or evaluate the model. For training we use only the training nodes returned from our splitter and the target values. The `shuffle=True` argument is given to the `flow` method to improve training.
```
train_gen = generator.flow(train_labels.index, train_targets, shuffle=True)
```
Now we can specify our machine learning model; we need a few more parameters for this:
* the `layer_sizes` is a list of hidden feature sizes of each layer in the model. In this example we use 32-dimensional hidden node features at each layer.
* The `bias` and `dropout` are internal parameters of the model.
```
graphsage_model = GraphSAGE(
layer_sizes=[32, 32], generator=generator, bias=True, dropout=0.5,
)
```
Now we create a model to predict the 3 categories using a Keras softmax layer. Note that the number of categories in the data is given by `train_targets.shape[1]`.
```
x_inp, x_out = graphsage_model.in_out_tensors()
prediction = layers.Dense(units=train_targets.shape[1], activation="softmax")(x_out)
prediction.shape
```
### Training the model
Now let's create the actual Keras model with the graph inputs `x_inp` provided by the `graphsage_model`, and outputs being the predictions from the softmax layer:
```
model = Model(inputs=x_inp, outputs=prediction)
model.compile(
optimizer=optimizers.Adam(lr=0.005),
loss=losses.categorical_crossentropy,
metrics=["acc"],
)
```
Train the model, keeping track of its loss and accuracy on the training set, and its generalisation performance on the validation set (we need to create another generator over the validation data for this)
```
val_gen = generator.flow(val_labels.index, val_targets)
history = model.fit(
train_gen, epochs=15, validation_data=val_gen, verbose=0, shuffle=False
)
sg.utils.plot_history(history)
```
Now we have trained the model we can evaluate on the test set.
```
test_gen = generator.flow(test_labels.index, test_targets)
test_metrics = model.evaluate(test_gen)
print("\nTest Set Metrics:")
for name, val in zip(model.metrics_names, test_metrics):
print("\t{}: {:0.4f}".format(name, val))
```
### Making predictions with the model
We want to use the trained model to predict the nodes we put aside earlier. For this, we must use the original StellarGraph object and a new node generator.
The new generator feeds data from this full graph into the model trained on the sampled partial graph.
```
generator = GraphSAGENodeGenerator(graph_full, batch_size, num_samples)
```
Now let's get the predictions themselves for all nodes in the hold out set. We are going to use an iterator `generator.flow()` over the hold-out nodes for this.
```
hold_out_nodes = labels.index.difference(labels_sampled.index)
labels_hold_out = labels[hold_out_nodes]
len(hold_out_nodes)
hold_out_targets = target_encoding.transform(labels_hold_out)
hold_out_gen = generator.flow(hold_out_nodes, hold_out_targets)
```
Now that we have a generator for our hold out data, we can use our trained model to make predictions for them.
```
hold_out_predictions = model.predict(hold_out_gen)
```
These predictions will be the output of the softmax layer, so to get final categories we'll use the `inverse_transform` method of our target attribute specification to turn these values back to the original categories
```
hold_out_predictions = target_encoding.inverse_transform(hold_out_predictions)
len(hold_out_predictions)
```
Let's have a look at a few:
```
results = pd.Series(hold_out_predictions, index=hold_out_nodes)
df = pd.DataFrame({"Predicted": results, "True": labels_hold_out})
df.head(10)
hold_out_metrics = model.evaluate(hold_out_gen)
print("\nHold Out Set Metrics:")
for name, val in zip(model.metrics_names, hold_out_metrics):
print("\t{}: {:0.4f}".format(name, val))
```
We see that the inductive performance of the model on the hold-out nodes (not present in the graph during training) is on par with its performance on the test set of nodes that were present in the graph during training but whose labels were concealed. This demonstrates the good inductive performance of the GraphSAGE model.
## Node embeddings for hold out nodes
We are going to extract node embeddings as activations of the output of GraphSAGE layer stack, and visualise them, coloring nodes by their subject label.
The GraphSAGE embeddings are the output of the GraphSAGE layers, namely the `x_out` variable. Let's create a new model with the same inputs as we used previously `x_inp` but now the output is the embeddings rather than the predicted class. Additionally note that the weights trained previously are kept in the new model.
```
embedding_model = Model(inputs=x_inp, outputs=x_out)
emb = embedding_model.predict(hold_out_gen)
emb.shape
```
Project the embeddings to 2d using either TSNE or PCA transform, and visualise, coloring nodes by their subject label
```
X = emb
y = np.argmax(target_encoding.transform(labels_hold_out), axis=1)
if X.shape[1] > 2:
transform = TSNE # PCA
trans = transform(n_components=2)
emb_transformed = pd.DataFrame(trans.fit_transform(X), index=hold_out_nodes)
emb_transformed["label"] = y
else:
emb_transformed = pd.DataFrame(X, index=hold_out_nodes)
emb_transformed = emb_transformed.rename(columns={"0": 0, "1": 1})
emb_transformed["label"] = y
alpha = 0.7
fig, ax = plt.subplots(figsize=(10, 8))
ax.scatter(
emb_transformed[0],
emb_transformed[1],
c=emb_transformed["label"].astype("category"),
cmap="jet",
alpha=alpha,
)
ax.set(aspect="equal", xlabel="$X_1$", ylabel="$X_2$")
plt.title(
"{} visualization of GraphSAGE embeddings of hold out nodes for pubmed dataset".format(
transform.__name__
)
)
plt.show()
```
This notebook demonstrated inductive representation learning and node classification using the GraphSAGE algorithm.
More specifically, the notebook demonstrated how to use the `stellargraph` library to train a GraphSAGE model using one network and then use this trained model to predict node attributes for a different (but similar) network.
Classification accuracy on the latter network was on par with classification accuracy on a set of training network test nodes with hidden labels.
<table><tr><td>Run the latest release of this notebook:</td><td><a href="https://mybinder.org/v2/gh/stellargraph/stellargraph/master?urlpath=lab/tree/demos/node-classification/graphsage-inductive-node-classification.ipynb" alt="Open In Binder" target="_parent"><img src="https://mybinder.org/badge_logo.svg"/></a></td><td><a href="https://colab.research.google.com/github/stellargraph/stellargraph/blob/master/demos/node-classification/graphsage-inductive-node-classification.ipynb" alt="Open In Colab" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg"/></a></td></tr></table>
# Æco
***Calc module for Mistral***
<br/><br/>
✈️♻️
This code is part of the Master's Thesis "Streamlined Life Cycle Assessment Method for Aircraft Eco-Design", by Giácomo Parolin.
This module is used to calculate Life-Cycle Inventories and Life-Cycle Impact Assessment results and export them to a `.xlsx` file.
***Log***
**December 23, 2019: Æco, calc module**
* Separates calculations from data visualization
* Outputs results to .csv file
**December 18, 2019: Æco, gamma version**
* Tidied up LCI results
**December 08, 2019: Æco, beta version**
* Tidied up reading of Input and UP files
* Implemented Sensitivity Analysis
* Full translation to english
**December 08, 2019: Æco, alpha version**
* All UPs are now backed by the ecoinvent 3.6 consequential database
* Method updated with the modifications proposed at the qualification exam
**November 10, 2019: Monte Carlo Try-out**
* All inputs come from a single Excel file.
* Inputs are tied to probability density functions
**October 24, 2019: Code_Johanning_v2**
* Switched from _numpy_ to _pandas_ for all (or most of) the vectors and matrices.
* EFs are now read from an Excel spreadsheet.
# Initial Definitions
**Please enter the following information:**
```
aircraft_type = 'cargo' #pax or cargo
input_path = '.\\data\\mistral_whatif.xlsx'
input_sheet = 'Composite+'
output_path = '.\\results\\Mistral_' + input_sheet + '_outputs.xlsx'
iterations = 2000
```
**Reference Unit**: Final results of midpoint and endpoint indicators are expressed in terms of *impact per pkm* or *impact per tkm*.
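As an illustration with made-up numbers (the real values come from the input file), normalising a total life-cycle impact to the reference unit is a plain division by transport work:

```python
# Hypothetical values -- none of these come from the Mistral inputs.
total_impact = 1.2e9     # e.g. kg CO2-eq over the whole life cycle
payload_t = 20.0         # average payload per flight, tonnes (cargo case)
km_per_flight = 1000.0   # average stage length, km
flights = 60000          # flights over the aircraft's life

tkm = payload_t * km_per_flight * flights  # total transport work in tonne-km
impact_per_tkm = total_impact / tkm        # impact per tkm
print(impact_per_tkm)  # 1.0
```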
**Uncertainty and Sensitivity Analysis**: Monte-Carlo Simulation is used. Input varies according to a Beta-PERT probability distribution, built from most-likely, maximum and minimum values.
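A minimal standalone sketch of the Beta-PERT construction (the same parameterisation the `pert` helper below uses; the three values are made up):

```python
import numpy as np

# Beta-PERT from (pessimistic, nominal/most-likely, optimistic) values.
pes, nom, opt = 1.0, 2.0, 4.0
mean = (pes + 4 * nom + opt) / 6        # PERT mean
alpha = 6 * (mean - pes) / (opt - pes)
beta = 6 * (opt - mean) / (opt - pes)

rng = np.random.default_rng(0)
sample = rng.beta(alpha, beta, 100_000) * (opt - pes) + pes

# Samples stay within [pes, opt] and the sample mean approaches the PERT mean.
print(sample.min() >= pes, sample.max() <= opt)  # True True
```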
The first part of the code loads some python packages and defines some functions which will be used later. The file containing the inputs is loaded into the code and some initial calculations are made. The unit process (UP) and characterization factor (CF) data used in the code are read from an excel file ("database.xlsx"). Most of the data was obtained from the [ecoinvent database](https://www.ecoinvent.org/).
```
#packages
import numpy as np
import pandas as pd
from scipy.stats import spearmanr as spear
import warnings
#functions
def pert(row, it):
    ''' Draws a sample from a Beta-PERT distribution
    nom : float
        nominal (most-likely) value of the variable
    pes : float
        pessimistic (minimum) value of the variable
    opt : float
        optimistic (maximum) value of the variable
    sample : list
        shape (number of values) of the sampled output
    '''
    sample = [1,it] #array shape for sampling
    df = inputs.loc[[row]] #locate the row of the variable in question
    nom = df['nom'].iloc[0] #nominal value
    pes = df['pes'].iloc[0] #pessimistic value
    opt = df['opt'].iloc[0] #optimistic value
if pes == opt and pes == nom:
alpha = nom
beta = nom
mean = nom
sample_PERT = (np.random.beta(alpha, beta, sample) * (opt - pes)) + pes
elif pes > nom or opt < nom:
        raise ValueError("The 'nominal' value must equal or lie between the 'pessimistic' and 'optimistic' values")
else:
mean = (pes + 4*nom + opt)/6
alpha = 6*((mean - pes)/(opt - pes))
beta = 6*((opt - mean)/(opt - pes))
sample_PERT = (np.random.beta(alpha, beta, sample) * (opt - pes)) + pes
return sample_PERT.ravel()
def unitproc(col):
    ''' Gets a UP from the database as a pd.Series with a MultiIndex. col is a string.
    '''
Col = Unit_Processes[['compartment','subcompartment', col]]
Col.set_index(['compartment','subcompartment'], append=True, inplace=True)
#Col.sort_index(inplace=True)
Col = Col.squeeze()
#Col.index = Col.index.set_levels(Col.index.levels[1].str.lower(), level=1)
return Col.fillna(0)
def mtp(series, array):
    ''' Multiplies a UP pd.Series by a PERT array to produce an LCI pd.Series
    '''
return series.apply(lambda x: x*array)
def charact_factors(category):
    ''' Returns a pd.Series with the CFs of an impact category. category is a string.
    '''
series = MP.loc[category]
series.set_index(['name','compartment','subcompartment'], inplace=True)
return series.squeeze()
def div(series, array):
    ''' Divides a UP pd.Series by a PERT array to produce an LCI pd.Series
    '''
return series.apply(lambda x: x/array)
def avg(series):
    ''' Computes the mean of each row of a pd.Series composed of np.arrays
    '''
return series.apply(lambda x: np.mean(x))
def electricity(E):
    ''' Computes the LCI of electricity consumption with the Brazilian grid mix
    '''
E_wind = E * inp['grid_wind']
E_gas = E * inp['grid_gas']
E_hydro = E * inp['grid_hydro']
LCI_E = mtp(UP['Elec_wind'], E_wind) + mtp(UP['Elec_gas'], E_gas) \
+ mtp(UP['Elec_hydro'], E_hydro)
return LCI_E
def database(series):
    ''' Turns a pd.Series of lists into a pd.DataFrame
    '''
db = pd.DataFrame.from_records(zip(*series.values))
db.columns = series.index
return db
#Read the inputs from the input spreadsheet
with pd.ExcelFile(input_path) as xlsx:
inputs_full = pd.read_excel(xlsx, input_sheet, header = 2, index_col = 0, na_values="", usecols="B:G")
inputs_full = inputs_full.rename(columns = {"nominal": "nom",
"low":"pes",
"high":"opt"}) #renomear título das colunas
inputs_unit = inputs_full["unit"] #unidades dos inputs
inputs_description = inputs_full["description"] #descrição dos inputs
inputs = inputs_full.drop(columns = {"unit", "description"}) #valores dos inputs
inp = {}
for variable in inputs.index:
inp[variable] = pert(variable, iterations)
pert_eye = np.ones(iterations) #ones vector for Monte Carlo
inputs_full = None
inputs = None
variable = None
#Read the UPs and CFs from the database spreadsheet
with pd.ExcelFile('.\\data\\database_mistral.xlsx') as xlsx:
Unit_Processes = pd.read_excel(xlsx, 'UP', header = 5, index_col = 0, na_values=0)
UP = {}
for column in Unit_Processes.columns.values[3:]:
UP[column] = unitproc(column)
Unit_Processes = None
#Inventory definition
LCI_columns = [['DEV','DEV','DEV','DEV','DEV','MFG','MFG','MFG','MFG','OP','OP','OP','OP','OP','EOL','EOL','EOL'],\
['Office','Prototype','Construction','Certification','Capital','Material','Factory','Logistics','Sustaining',\
'LTO','CCD','Maintenance','Airport','Fuel','Recycling','Landfill','Incineration']]
LCI_index = UP['Aluminium'].index
LCI = pd.DataFrame(index = LCI_index, columns = pd.MultiIndex.from_arrays(LCI_columns, names=['Phase', 'Subphase']))
LCI_index = None
LCI_columns = None
#initial pkm or tkm definitions
if aircraft_type == "cargo":
    pkm_flight = inp['payload'] * inp['d_f'] * inp['loadfactor'] #tkm per flight
elif aircraft_type == "pax":
    npax = inp['seat_max'] * inp['p_seat'] * inp['loadfactor'] #passengers per flight
    pkm_flight = npax * inp['d_f'] #pkm per flight
else:
    raise Exception("Aircraft type must be 'pax' or 'cargo'")
pkm_year = pkm_flight * inp['flights_year'] #pkm per year
pkm_life = pkm_year * inp['lifetime'] #pkm over the aircraft's lifetime
pkm_fleet = pkm_life * inp['fleet'] #pkm of the whole fleet
```
# LCI
The aircraft's life cycle was divided into four parts:
1. Development and Engineering
2. Manufacturing and Assembly
3. Operation
4. End-of-Life
## Development
### Use of Office Buildings
Impacts of the daily work of people developing the aircraft.
#### Office building electricity consumption
```
LCI_E_office = electricity(inp['E_office']) #per month
LCI_E_office = mtp(LCI_E_office,inp['devmonths']) #per development
```
#### Office building water consumption
```
LCI_water_office = mtp(UP['Water'],inp['water_office']) \
+ mtp(UP['Wastewater'],inp['wastewater_office']) #per month
LCI_water_office = mtp(LCI_water_office,inp['devmonths']) #per development
```
#### Commuting and Business Travel
```
travel = 18470 / 12 * inp['developers'] * inp['devmonths'] #km
LCI_travel = mtp(UP['Car'],travel * 0.1) + mtp(UP['Airplane'],travel * 0.9) #per development
```
#### Use of office supplies
```
LCI_paper = mtp(UP['Paper'],(inp['developers'] * inp['paper_use'])) #per year
LCI_paper = mtp(LCI_paper,(inp['devmonths']/12)) #per development
```
Total LCI for "Use of Office Building":
```
LCI['DEV','Office'] = div((LCI_E_office + LCI_water_office + LCI_paper + \
+ LCI_travel),pkm_fleet)
```
### Prototype Manufacturing
Impacts of manufacturing the prototype aircraft used during development.
*Calculated only after running the [Manufacturing and Assembly section](#mfg).*
### Certification Campaign
Impacts of the flight hours performed during development and certification.
*Calculated only after running the [Operation: Flights section](#op).*
### OEM Infrastructure Preparation
Impacts that may exist if new buildings must be constructed in order to manufacture the aircraft.
```
LCI['DEV','Construction'] = div(mtp(UP['Facilities'],inp['new_factory']/2.74e5),pkm_fleet)
```
### Capital goods manufacturing
Impacts surrounding the acquisition of machines and jigs.
```
new_jigs = inp['OEW'] * 500 #50t of jigs per 100kg of product
UP_capital = UP['Steel'].add(UP['Jigs'],fill_value=0) #material plus transformation
LCI['DEV','Capital'] = div(mtp(UP_capital,new_jigs)+mtp(UP['Machine'],inp['new_machine'])\
,pkm_fleet)
```
## Manufacturing
<a id='mfg'></a>
### Material Extraction and Transformation
Impacts surrounding the extraction and transformation of the raw materials that are used to manufacture the aircraft.
Since Mistral reuses aircraft, this section had to be adjusted accordingly.
```
Al = inp['p_Al'] * inp['b2f_Al'] * inp['OEW'] * inp['reuse']
steel = inp['p_steel'] * inp['b2f_steel'] * inp['OEW'] * inp['reuse']
Ti = inp['p_Ti'] * inp['b2f_Ti'] * inp['OEW'] * inp['reuse']
inconel = inp['p_inconel'] * inp['b2f_inconel'] * inp['OEW'] * inp['reuse']
GFRP = inp['p_GFRP'] * inp['b2f_GFRP'] * inp['OEW'] * inp['reuse']
CFRP = inp['p_CFRP'] * inp['b2f_CFRP'] * inp['OEW'] * inp['reuse']
#Aluminium
LCI_Al = mtp(UP['Aluminium'], Al)
#Steel
LCI_steel = mtp(UP['Steel'], steel)
#Titanium
LCI_Ti = mtp(UP['Titanium'], Ti)
#Inconel
LCI_inconel = mtp(UP['Inconel'], inconel)
#GFRP
LCI_GFRP = mtp(UP['GFRP'], GFRP)
#CFRP
LCI_CFRP = mtp(UP['CFRP'], CFRP)
#LCI Material Extraction and Transformation
LCI['MFG','Material'] = div(LCI_Al + LCI_steel + LCI_Ti + LCI_inconel + LCI_GFRP + LCI_CFRP,pkm_life)
```
### Use of Industrial Facilities
Impacts of running a factory that manufactures aircraft.
#### Electricity use of industrial facilities
```
LCI_E_factory = electricity(inp['E_factory'])
LCI_E_factory = mtp(LCI_E_factory,inp['takt']) / 30 #per aircraft
```
#### Water use of industrial facilities
```
LCI_water_factory = (mtp(UP['Water'], inp['water_factory']) \
+ mtp(UP['Wastewater'], inp['wastewater_factory'])) #per month
LCI_water_factory = mtp(LCI_water_factory,inp['takt']) / 30 #per aircraft
```
#### Lubricating oils use
```
LCI_lube = mtp(UP['Lubricant'], inp['lubricant']) #per month
LCI_lube = mtp(LCI_lube,inp['takt']) / 30 #per aircraft
```
#### Industrial facilities maintenance
```
facilities_maint = inp['OEW'] * 4.58e-10 #use per kg of product
LCI_facilities_maint = mtp(UP['Facilities'], facilities_maint) * 0.02 #per year
LCI_facilities_maint = mtp(LCI_facilities_maint,inp['takt']) / 365 #per aircraft
```
#### Total LCI for "Use of Industrial Facilities":
```
LCI['MFG','Factory'] = div(LCI_E_factory + LCI_water_factory + LCI_lube + LCI_facilities_maint,pkm_life)
```
### Logistics
Impacts of transporting parts and assemblies between productive sites.
```
lorry = inp['d_lorry'] * inp['m_lorry'] #tonne * km
sea = inp['d_sea'] * inp['m_sea'] #tonne * km
air = inp['d_air'] * inp['m_air'] #tonne * km
LCI['MFG','Logistics'] = div(mtp(UP['Lorry'],lorry) + mtp(UP['Sea'],sea) + mtp(UP['Air'],air),pkm_life)
```
### Sustaining Engineering and Development
Impacts of maintaining an engineering workforce during serial production.
```
LCI_sustaining = LCI['DEV','Office'] * 0.01 / 30 #per day
LCI['MFG','Sustaining'] = div(mtp(LCI_sustaining,inp['takt']),pkm_life)
```
## Operation
<a id='op'></a>
### Flights
```
#CCD (climb, cruise, descent) time
t_ccd = inp['FH']*60 - inp['ff_lto']
LCI['OP','LTO'] = div(mtp(UP['LTO'],(inp['ff_lto']*inp['t_lto']*60)),pkm_flight)
LCI['OP','CCD'] = div(mtp(UP['CCD'],(inp['ff_ccd']*t_ccd*60)),pkm_flight)
```
### Aircraft Maintenance
```
LCI_maint = mtp(UP['Aluminium'], inp['maint_Al']) + mtp(UP['Steel'],inp['maint_steel']) + \
    mtp(UP['Polymer'],inp['maint_pol']) + mtp(UP['Battery'],inp['maint_battery']) #per year
LCI['OP','Maintenance'] = div(div(LCI_maint,inp['flights_year']),pkm_flight)
```
### Airport Infrastructure
Impacts of building, operating and maintaining the airport and its surrounding infrastructure.
```
if aircraft_type == "cargo":
    ap_impact = 0.132 #13.2% of airport impacts are attributed to cargo
elif aircraft_type == "pax":
    ap_impact = 0.868
else:
    ap_impact = 1
f_pax_ap = inp['pax_ap']/22500000 #fraction of passengers relative to Zurich airport in 2000
#f_flights = pert('flights_year').mean() / pert('flights_ap').mean() #fraction of the aircraft's flights relative to the airport's
#LCI_E_ap = electricity(inp['E_ap'])
#LCI_ap = div(mtp(UP['Heat'],inp['heat_ap']) + mtp(UP['Water'],inp['water_ap']) \
#    + mtp(UP['Wastewater'],inp['wastewater_ap']) + LCI_E_ap, inp['flights_ap'])
LCI_ap = div(mtp(UP['Airport'],f_pax_ap/100), inp['flights_ap']) #100-year lifetime for the airport building
LCI['OP','Airport'] = div(mtp(LCI_ap,ap_impact),pkm_flight)
```
### Fuel production
```
fuel_total = inp['ff_lto']*inp['t_lto']*60 + inp['ff_ccd']*t_ccd*60
LCI['OP','Fuel']= div(mtp(UP['Kerosene'],fuel_total),pkm_flight)
```
## End-of-Life
Since Mistral reuses aircraft, this section had to be adjusted accordingly.
```
E_sort_constant = 0.4645/3.6 #kWh/kg of material, on average
E_sort = E_sort_constant * inp['OEW'] * (2 - inp['reuse'])
LCI_sort = electricity(E_sort)
#Aluminium
p_ldf_Al = inp['p_ldf_Al'] * Al * (2 - inp['reuse'])
LCI_ldf_Al = mtp(UP['Landfill'],p_ldf_Al)
p_incin_Al = inp['p_incin_Al'] * Al * (2 - inp['reuse'])
LCI_incin_Al = mtp(UP['Incineration'],p_incin_Al)
p_recycl_Al = inp['p_recycl_Al'] * Al * (2 - inp['reuse'])
LCI_recycl_Al = mtp(UP['Aluminium'],p_recycl_Al)
#Steel
p_ldf_steel = inp['p_ldf_steel'] * steel * (2 - inp['reuse'])
LCI_ldf_steel = mtp(UP['Landfill'],p_ldf_steel)
p_incin_steel = inp['p_incin_steel'] * steel * (2 - inp['reuse'])
LCI_incin_steel = mtp(UP['Incineration'],p_incin_steel)
p_recycl_steel = inp['p_recycl_steel'] * steel * (2 - inp['reuse'])
LCI_recycl_steel = mtp(UP['Steel'],p_recycl_steel)
#Titanium
p_ldf_Ti = inp['p_ldf_Ti'] * Ti * (2 - inp['reuse'])
LCI_ldf_Ti = mtp(UP['Landfill'],p_ldf_Ti)
p_incin_Ti = inp['p_incin_Ti'] * Ti * (2 - inp['reuse'])
LCI_incin_Ti = mtp(UP['Incineration'],p_incin_Ti)
p_recycl_Ti = inp['p_recycl_Ti'] * Ti * (2 - inp['reuse'])
LCI_recycl_Ti = mtp(UP['Titanium'],p_recycl_Ti)
#Inconel
p_ldf_inconel = inp['p_ldf_inconel'] * inconel * (2 - inp['reuse'])
LCI_ldf_inconel = mtp(UP['Landfill'],p_ldf_inconel)
p_incin_inconel = inp['p_incin_inconel'] * inconel * (2 - inp['reuse'])
LCI_incin_inconel = mtp(UP['Incineration'],p_incin_inconel)
p_recycl_inconel = inp['p_recycl_inconel'] * inconel * (2 - inp['reuse'])
LCI_recycl_inconel = mtp(UP['Inconel'],p_recycl_inconel)
#GFRP
p_ldf_GFRP = inp['p_ldf_GFRP'] * GFRP * (2 - inp['reuse'])
LCI_ldf_GFRP = mtp(UP['Landfill'],p_ldf_GFRP)
p_incin_GFRP = inp['p_incin_GFRP'] * GFRP * (2 - inp['reuse'])
LCI_incin_GFRP = mtp(UP['Incineration'],p_incin_GFRP)
p_recycl_GFRP = inp['p_recycl_GFRP'] * GFRP * (2 - inp['reuse'])
LCI_recycl_GFRP = mtp(UP['GFRP'],p_recycl_GFRP)
#CFRP
p_ldf_CFRP = inp['p_ldf_CFRP'] * CFRP * (2 - inp['reuse'])
LCI_ldf_CFRP = mtp(UP['Landfill'],p_ldf_CFRP)
p_incin_CFRP = inp['p_incin_CFRP'] * CFRP * (2 - inp['reuse'])
LCI_incin_CFRP = mtp(UP['Incineration'],p_incin_CFRP)
p_recycl_CFRP = inp['p_recycl_CFRP'] * CFRP * (2 - inp['reuse'])
LCI_recycl_CFRP = mtp(UP['CFRP'],p_recycl_CFRP)
LCI['EOL','Recycling'] = div(LCI_sort - (LCI_recycl_Al + LCI_recycl_Ti + LCI_recycl_steel \
+ LCI_recycl_inconel + LCI_recycl_GFRP + LCI_recycl_CFRP),pkm_life)
LCI['EOL','Incineration'] = div(LCI_incin_Al + LCI_incin_Ti + LCI_incin_steel +\
LCI_incin_inconel + LCI_incin_GFRP + LCI_incin_CFRP, pkm_life)
LCI['EOL', 'Landfill'] = div(LCI_ldf_Al + LCI_ldf_Ti + LCI_ldf_steel + LCI_ldf_inconel \
+ LCI_ldf_GFRP + LCI_ldf_CFRP, pkm_life)
```
## LCI Summary
```
#pending calculations from development section:
LCI['DEV','Prototype'] = div(mtp(LCI['MFG'].sum(axis=1),inp['prototypes']) + \
mtp(LCI['MFG'].sum(axis=1),(inp['ironbirds']*0.3)),pkm_fleet)
cert_flights = inp['test_FH'] / inp['FH']
LCI['DEV','Certification'] = div(mtp(LCI['OP','LTO'] + LCI['OP','CCD'],cert_flights),pkm_fleet) #per development
```
# LCIA
The LCIA method used here is ReCiPe 2008.
```
with pd.ExcelFile('.\\data\\database_mistral.xlsx') as xlsx:
MP = pd.read_excel(xlsx, 'MP', header = 0, index_col = 0, na_values=0)
EP = pd.read_excel(xlsx, 'EP', header = 0, index_col = 0, na_values=0)
```
## Midpoint
The midpoint impact categories are:
```
mp_categories = MP.index.unique()
mp_categories
```
The conversion factors used to transform the LCI into LCIA results are:
```
mp_factors = pd.DataFrame(index = LCI.index)
for category in mp_categories:
category_factor = charact_factors(category)
category_factor = category_factor.rename(category)
mp_factors[category] = category_factor
mp_factors.fillna(0, inplace=True)
MP = None
```
<br></br>
**LCIA midpoint results:**
```
LCIA_MP = pd.DataFrame(columns=LCI.columns)
for column in LCI:
LCIA_MP[column] = mp_factors.mul(LCI[column], axis=0).sum()
#correction of the natural land transformation values
LCIA_MP.loc['natural land transformation'] = -LCIA_MP.loc['natural land transformation']
```
## Endpoint
The endpoint impact categories are:
* Damage to Human Health
* Damage to Ecosystem Diversity
* Damage to Resource Availability
<br></br>
The conversion factors used to transform the midpoint into endpoint results are:
```
ep_factors = EP.fillna(0)
LCIA_EP = pd.DataFrame(columns=LCIA_MP.columns, index=ep_factors.columns)
for column in LCIA_EP:
LCIA_EP[column] = ep_factors.mul(LCIA_MP[column], axis=0).sum()
```
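The `mul`/`sum` pattern above amounts to a factor-weighted sum over midpoint categories; a toy illustration with invented numbers (not the actual ReCiPe factors):

```python
import pandas as pd

# Toy midpoint-to-endpoint aggregation (all values are made up for illustration):
mp_scores = pd.Series({'climate change': 2.0, 'ozone depletion': 0.5})
ep_factors = pd.DataFrame(
    {'Human Health': [1.4e-6, 2.6e-3], 'Ecosystems': [7.9e-9, 0.0]},
    index=mp_scores.index)
# Each endpoint score is the sum over midpoint categories of factor * midpoint score
ep_scores = ep_factors.mul(mp_scores, axis=0).sum()
# ep_scores['Human Health'] == 1.4e-6 * 2.0 + 2.6e-3 * 0.5
```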
## Contribution to Variance
CTV of inputs with midpoint results:
```
with warnings.catch_warnings():
    warnings.filterwarnings("ignore")
    corr = pd.DataFrame(index=inp.keys(), columns=LCIA_MP.index)
    for row in corr.index:
        for column in corr:
            #[0] is Spearman's correlation coefficient ([1] would be its p-value)
            corr.loc[row, column] = spear(inp[row], LCIA_MP.loc[column].sum())[0]
    corr_sq = corr ** 2
    corr_sq_sum = corr_sq.sum(axis=0)
    CTV_MP = corr_sq / corr_sq_sum
```
CTV of inputs with endpoint results:
```
with warnings.catch_warnings():
    warnings.filterwarnings("ignore")
    corr = pd.DataFrame(index=inp.keys(), columns=LCIA_EP.index)
    for row in corr.index:
        for column in corr:
            #[0] is Spearman's correlation coefficient ([1] would be its p-value)
            corr.loc[row, column] = spear(inp[row], LCIA_EP.loc[column].sum())[0]
    corr_sq = corr ** 2
    corr_sq_sum = corr_sq.sum(axis=0)
    CTV_EP = corr_sq / corr_sq_sum
```
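A quick synthetic sanity check of this squared-correlation approach to contribution to variance (the data here are invented for the demonstration):

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic contribution-to-variance check: the output depends strongly on x1
# and only weakly on x2, so x1 should dominate the normalized squared correlations.
rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=2000), rng.normal(size=2000)
y = 3.0 * x1 + 0.5 * x2
rho = np.array([spearmanr(x1, y)[0], spearmanr(x2, y)[0]])  # [0] = correlation coefficient
ctv = rho**2 / (rho**2).sum()                               # contributions sum to 1
```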
# Export Results
```
float_formatter = "{:.8e}".format
np.set_printoptions(threshold=iterations, linewidth = 100, formatter={'float_kind':float_formatter})
with pd.ExcelWriter(output_path) as writer:
LCIA_MP.to_excel(writer, sheet_name='MP')
LCIA_EP.to_excel(writer, sheet_name='EP')
CTV_MP.to_excel(writer, sheet_name='CTV_MP')
CTV_EP.to_excel(writer, sheet_name='CTV_EP')
print(f"LCA complete! Check output file at {output_path}")
```
```
%matplotlib inline
```
## Intro
This tutorial is based on the tutorial [Learning Phrase Representations using an RNN Encoder-Decoder for Statistical Machine Translation](https://github.com/bentrevett/pytorch-seq2seq/blob/master/2%20-%20Learning%20Phrase%20Representations%20using%20RNN%20Encoder-Decoder%20for%20Statistical%20Machine%20Translation.ipynb)
### Dependencies
* torchtext
* spacy
### Encoder-decoder architecture
Let's recall the overview of an encoder-decoder model:

We run the encoder (green) over the input sequence to produce a context vector `z` (red).
That vector is then used by a decoder (blue) and a linear layer (purple) to generate the output sequence.
In this model, we are using a multi-layer long short-term memory (`LSTM`) network:

One problem with this model is that the decoder tries to cram a lot of information into the model's hidden states. While decoding, the hidden state has to hold information about the whole input sequence encoded so far, as well as all the tokens decoded so far. That demands a lot of capacity; it would be nice to ease this compression so we can build a better model!
To do so, we will use a gated recurrent unit (`GRU`).
## Data
First, let's import some of the libraries we need to handle our data:
```
import torch
import torch.nn as nn
import torch.optim as optim
from torchtext.datasets import TranslationDataset, Multi30k
from torchtext.data import Field, BucketIterator
import spacy
import random
import math
import os
import time
```
Next, we will use the same `SEED` to ensure our results are reproducible/deterministic
```
SEED = 1
random.seed(SEED)
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
```
Finally, we will use German and English models.
**Note:** I would love for this tutorial to be German/Portuguese or English/Portuguese, but I still need to find a module/dataset with both languages
In your conda environment, run the commands below to download the models:
```bash
python -m spacy download en
python -m spacy download de
```
If successful, you will see a message similar to:
```bash
You can now load the model via spacy.load('en')
```
```
spacy_de = spacy.load('de')
spacy_en = spacy.load('en')
```
When processing the input texts, we will use a technique called [tokenization](https://en.wikipedia.org/wiki/Lexical_analysis#Tokenization). Basically, we take an input sentence and break it into lexical units, such as words.
```
def tokenize_de(text):
"""
Tokenizes German text from a string into a list of strings
"""
return [tok.text for tok in spacy_de.tokenizer(text)]
def tokenize_en(text):
"""
Tokenizes English text from a string into a list of strings
"""
return [tok.text for tok in spacy_en.tokenizer(text)]
```
We will also define two special tokens to mark the start of a sentence (`sos`) and the end of a sentence (`eos`).
We will also convert every sentence to lowercase.
```
SRC = Field(tokenize=tokenize_de, init_token='<sos>', eos_token='<eos>', lower=True)
TRG = Field(tokenize=tokenize_en, init_token='<sos>', eos_token='<eos>', lower=True)
```
Finally, let's load our data:
```
train_data, valid_data, test_data = Multi30k.splits(exts=('.de', '.en'), fields=(SRC, TRG))
```
Let's inspect an input/output sentence pair to make sure our data is correct:
```
import pprint
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(vars(train_data.examples[0]))
```
Finally, let's build our vocabularies, converting every word that appears fewer than two times into an unknown token (`<unk>`)
```
SRC.build_vocab(train_data, min_freq=2)
TRG.build_vocab(train_data, min_freq=2)
```
We will also build the iterators for our training, validation and test data.
The first line defines whether PyTorch will use a `CUDA` device (GPU) or the `CPU`
```
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
BATCH_SIZE = 128
train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
(train_data, valid_data, test_data), batch_size=BATCH_SIZE, device=device)
```
## Building our model
### Encoder
The encoder is similar to the previous one, with the multi-layer LSTM swapped for a single-layer gated recurrent unit (`GRU`). We will not pass a dropout value to the GRU, since dropout is applied between the layers of a multi-layer RNN; as our model has only a single layer, PyTorch would warn us if we did.
Another important detail about the GRU is that it takes and returns only a hidden state. There is no cell state as in an LSTM.
\begin{equation}
h_t = \text{GRU}(x_t, h_{t - 1}) \\
(h_t, c_t) = \text{LSTM}(x_t, (h_{t - 1}, c_{t - 1})) \\
h_t = \text{RNN}(x_t, h_{t - 1})
\end{equation}
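The interface difference between the three units can be checked directly in PyTorch (the dimensions below are arbitrary, chosen only for the demonstration):

```python
import torch
import torch.nn as nn

# A GRU takes and returns only a hidden state; an LSTM also carries a cell state.
x = torch.randn(5, 2, 8)             # [seq len, batch size, input dim]
gru, lstm = nn.GRU(8, 16), nn.LSTM(8, 16)
out_g, h_g = gru(x)                  # hidden state only
out_l, (h_l, c_l) = lstm(x)          # (hidden state, cell state) tuple
# h_g, h_l and c_l all have shape [n layers * n directions, batch size, hid dim] = [1, 2, 16]
```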
Judging by the equations above, a GRU does not look very different from a plain RNN. Inside the GRU, however, there are a number of gating mechanisms that control the flow of information into and out of the hidden state. For more details, see this [post](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) (in English).
**Note:** In the future, I intend to write a tutorial explaining in more detail how an LSTM works.
The rest of the encoder follows the "standard" encoder architecture. That is, the encoder receives the input sequence $X = \{x_1, x_2, ..., x_T\}$ and recurrently computes hidden states, $H = \{h_1, h_2, ..., h_T\}$. Finally, it returns the context vector, which is the last hidden state computed, $z = h_T$, where
\begin{equation}
h_t = \text{EncoderGRU}(x_t, h_{t - 1})
\end{equation}
This encoder is identical to the encoder of the previous sequence-to-sequence (`seq2seq`) model; all the "magic" happens inside the GRU (green)

```
class Encoder(nn.Module):
def __init__(self, input_dim, emb_dim, hid_dim, dropout):
super().__init__()
self.input_dim = input_dim
self.emb_dim = emb_dim
self.hid_dim = hid_dim
self.dropout = dropout
self.embedding = nn.Embedding(input_dim, emb_dim) #no dropout as only one layer!
self.rnn = nn.GRU(emb_dim, hid_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, src):
#src = [src sent len, batch size]
embedded = self.dropout(self.embedding(src))
#embedded = [src sent len, batch size, emb dim]
outputs, hidden = self.rnn(embedded) #no cell state!
#outputs = [src sent len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#outputs are always from the top hidden layer
return hidden
```
### Decoder
The decoder is where we try to relieve the information-compression problem (discussed in the introduction of this notebook).
Instead of the decoder GRU taking only the output token $y_t$ and the previous hidden state $s_{t - 1}$ as input, it also takes the context vector $z$:
\begin{equation}
s_t = \text{DecoderGRU}(y_t, s_{t - 1}, z)
\end{equation}
Note that the context vector $z$ carries no $t$ subscript: we reuse the same context vector (returned by the encoder) at every time step $t$.
To make predictions we use a linear layer, $f$. Previously this layer used only the decoder's top hidden state at that time step, $s_t$; now, to predict $\hat{y}_{t + 1}$, we also pass the current token $\hat{y}_t$ and the context vector to the linear layer:
\begin{equation}
\hat{y}_{t + 1} = f(y_t, s_t, z)
\end{equation}
Our decoder now looks something like this:

The initial hidden state, $s_0$, is the context vector $z$; so, when generating the first token of the output, we are actually feeding two identical vectors into the GRU.
**How do these changes reduce the information compression?** Technically, the hidden state $s_t$ no longer needs to hold any information about the input sequence, since that is always available as an input through the context vector. It therefore only needs to hold information about the tokens generated so far. Adding $y_t$ to the linear layer also means this layer can see directly which token was generated last, without that information having to be compressed into the hidden state.
However, this explanation is only a hypothesis; it is impossible to determine precisely how the decoder uses all the information it receives. Still, it is a good intuition for what is happening, and the results obtained with these changes suggest they are a good idea!
**Implementation**: we pass $y_t$ and the context vector $z$ to the GRU by concatenating both vectors, so the GRU's input dimension is *emb_dim + hid_dim*. The linear layer takes $y_t$, $s_t$ and $z$ concatenated, so its input dimension is *emb_dim + 2 x hid_dim*. Again we do not use dropout in the GRU, since it has only a single layer.
The *forward* function now receives an extra parameter, the context. Inside it, we concatenate $y_t$ and $z$ into *emb_con* before passing it to the GRU, and we concatenate $y_t$, $s_t$ and $z$ before passing them to the linear layer to obtain the prediction, $\hat{y}_{t + 1}$.
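This concatenation scheme can be shape-checked in isolation (the sizes `emb_dim=4`, `hid_dim=6` and batch size 2 are arbitrary, chosen only for the demonstration):

```python
import torch

# Shape check for the decoder-input concatenation described above:
embedded = torch.randn(1, 2, 4)                  # [1, batch size, emb dim]
context = torch.randn(1, 2, 6)                   # [1, batch size, hid dim]
emb_con = torch.cat((embedded, context), dim=2)  # concatenate along the feature dim
# emb_con's last dimension is emb_dim + hid_dim = 10,
# matching the GRU input dimension nn.GRU(emb_dim + hid_dim, hid_dim)
```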
```
class Decoder(nn.Module):
def __init__(self, output_dim, emb_dim, hid_dim, dropout):
super().__init__()
self.emb_dim = emb_dim
self.hid_dim = hid_dim
self.output_dim = output_dim
self.dropout = dropout
self.embedding = nn.Embedding(output_dim, emb_dim)
self.rnn = nn.GRU(emb_dim + hid_dim, hid_dim)
self.out = nn.Linear(emb_dim + hid_dim*2, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, input, hidden, context):
#input = [batch size]
#hidden = [n layers * n directions, batch size, hid dim]
#context = [n layers * n directions, batch size, hid dim]
#n layers and n directions in the decoder will both always be 1, therefore:
#hidden = [1, batch size, hid dim]
#context = [1, batch size, hid dim]
        input = input.unsqueeze(0) #adds a sequence-length dimension of 1
#input = [1, batch size]
embedded = self.dropout(self.embedding(input))
#embedded = [1, batch size, emb dim]
emb_con = torch.cat((embedded, context), dim=2)
#emb_con = [1, batch size, emb dim + hid dim]
output, hidden = self.rnn(emb_con, hidden)
#output = [sent len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#sent len, n layers and n directions will always be 1 in the decoder, therefore:
#output = [1, batch size, hid dim]
#hidden = [1, batch size, hid dim]
output = torch.cat((embedded.squeeze(0), hidden.squeeze(0), context.squeeze(0)), dim=1)
#output = [batch size, emb dim + hid dim * 2]
prediction = self.out(output)
#prediction = [batch size, output dim]
return prediction, hidden
```
### Seq2Seq
Now, let's put all the pieces of the puzzle together into a sequence-to-sequence (`seq2seq`) model.
```
class Seq2Seq(nn.Module):
def __init__(self, encoder, decoder, device):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.device = device
assert encoder.hid_dim == decoder.hid_dim, \
"Hidden dimensions of encoder and decoder must be equal!"
def forward(self, src, trg, teacher_forcing_ratio=0.5):
#src = [src sent len, batch size]
#trg = [trg sent len, batch size]
#teacher_forcing_ratio is probability to use teacher forcing
#e.g. if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time
batch_size = trg.shape[1]
max_len = trg.shape[0]
trg_vocab_size = self.decoder.output_dim
#tensor to store decoder outputs
outputs = torch.zeros(max_len, batch_size, trg_vocab_size).to(self.device)
#last hidden state of the encoder is the context
context = self.encoder(src)
#context also used as the initial hidden state of the decoder
hidden = context
#first input to the decoder is the <sos> tokens
input = trg[0,:]
for t in range(1, max_len):
output, hidden = self.decoder(input, hidden, context)
outputs[t] = output
teacher_force = random.random() < teacher_forcing_ratio
top1 = output.max(1)[1]
input = (trg[t] if teacher_force else top1)
return outputs
INPUT_DIM = len(SRC.vocab)
OUTPUT_DIM = len(TRG.vocab)
ENC_EMB_DIM = 256
DEC_EMB_DIM = 256
HID_DIM = 512
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5
enc = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, DEC_DROPOUT)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Seq2Seq(enc, dec, device).to(device)
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
optimizer = optim.Adam(model.parameters())
pad_idx = TRG.vocab.stoi['<pad>']
criterion = nn.CrossEntropyLoss(ignore_index=pad_idx)
def train(model, iterator, optimizer, criterion, clip):
model.train()
epoch_loss = 0
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
optimizer.zero_grad()
output = model(src, trg)
#trg = [trg sent len, batch size]
#output = [trg sent len, batch size, output dim]
output = output[1:].view(-1, output.shape[-1])
trg = trg[1:].view(-1)
#trg = [(trg sent len - 1) * batch size]
#output = [(trg sent len - 1) * batch size, output dim]
loss = criterion(output, trg)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / len(iterator)
def evaluate(model, iterator, criterion):
model.eval()
epoch_loss = 0
with torch.no_grad():
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
output = model(src, trg, 0) #turn off teacher forcing
#trg = [trg sent len, batch size]
#output = [trg sent len, batch size, output dim]
output = output[1:].view(-1, output.shape[-1])
trg = trg[1:].view(-1)
#trg = [(trg sent len - 1) * batch size]
#output = [(trg sent len - 1) * batch size, output dim]
loss = criterion(output, trg)
epoch_loss += loss.item()
return epoch_loss / len(iterator)
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
```
Training this model without CUDA takes 6+ hours on my old 2013 MacBook Pro
```
N_EPOCHS = 10
CLIP = 1
SAVE_DIR = 'models'
MODEL_SAVE_PATH = os.path.join(SAVE_DIR, 'tut2_model.pt')
best_valid_loss = float('inf')
if not os.path.isdir(f'{SAVE_DIR}'):
os.makedirs(f'{SAVE_DIR}')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
valid_loss = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), MODEL_SAVE_PATH)
print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
model.load_state_dict(torch.load(MODEL_SAVE_PATH))
test_loss = evaluate(model, test_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
```
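The `PPL` columns printed above are just the exponential of the cross-entropy loss. A standalone sanity check of that relationship (no model needed):

```python
import math

# Cross-entropy is the average -log(probability) the model assigns to the
# correct token, so exp(loss) is an "effective branching factor".
loss = -math.log(0.25)        # model gives every correct token probability 0.25
perplexity = math.exp(loss)   # ~4.0: as uncertain as a uniform 4-way guess
```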
---
```
from fastai.text import *
```
## Reduce original dataset to questions
```
path = Config().data_path()/'giga-fren'
```
You only need to execute the setup cells once, uncomment to run. The dataset can be downloaded [here](https://s3.amazonaws.com/fast-ai-nlp/giga-fren.tgz).
```
#! wget https://s3.amazonaws.com/fast-ai-nlp/giga-fren.tgz -P {path}
#! tar xf {path}/giga-fren.tgz -C {path}
# with open(path/'giga-fren.release2.fixed.fr') as f:
# fr = f.read().split('\n')
# with open(path/'giga-fren.release2.fixed.en') as f:
# en = f.read().split('\n')
# re_eq = re.compile('^(Wh[^?.!]+\?)')
# re_fq = re.compile('^([^?.!]+\?)')
# en_fname = path/'giga-fren.release2.fixed.en'
# fr_fname = path/'giga-fren.release2.fixed.fr'
# lines = ((re_eq.search(eq), re_fq.search(fq))
# for eq, fq in zip(open(en_fname, encoding='utf-8'), open(fr_fname, encoding='utf-8')))
# qs = [(e.group(), f.group()) for e,f in lines if e and f]
# qs = [(q1,q2) for q1,q2 in qs]
# df = pd.DataFrame({'fr': [q[1] for q in qs], 'en': [q[0] for q in qs]}, columns = ['en', 'fr'])
# df.to_csv(path/'questions_easy.csv', index=False)
# del en, fr, lines, qs, df # free RAM or restart the nb
### fastText pre-trained word vectors https://fasttext.cc/docs/en/crawl-vectors.html
#! wget https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.fr.300.bin.gz -P {path}
#! wget https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.en.300.bin.gz -P {path}
#! gzip -d {path}/cc.fr.300.bin.gz
#! gzip -d {path}/cc.en.300.bin.gz
path.ls()
```
## Put them in a DataBunch
Our questions look like this now:
```
df = pd.read_csv(path/'questions_easy.csv')
df.head()
```
To make it simple, we lowercase everything.
```
df['en'] = df['en'].apply(lambda x:x.lower())
df['fr'] = df['fr'].apply(lambda x:x.lower())
```
The first thing is that we will need to collate inputs and targets in a batch: they have different lengths, so we need to add padding to make the sequence lengths the same.
```
def seq2seq_collate(samples:BatchSamples, pad_idx:int=1, pad_first:bool=True, backwards:bool=False) -> Tuple[LongTensor, LongTensor]:
    "Function that collects samples and adds padding. Flips token order if needed"
samples = to_data(samples)
max_len_x,max_len_y = max([len(s[0]) for s in samples]),max([len(s[1]) for s in samples])
res_x = torch.zeros(len(samples), max_len_x).long() + pad_idx
res_y = torch.zeros(len(samples), max_len_y).long() + pad_idx
if backwards: pad_first = not pad_first
for i,s in enumerate(samples):
if pad_first:
res_x[i,-len(s[0]):],res_y[i,-len(s[1]):] = LongTensor(s[0]),LongTensor(s[1])
else:
res_x[i,:len(s[0]):],res_y[i,:len(s[1]):] = LongTensor(s[0]),LongTensor(s[1])
if backwards: res_x,res_y = res_x.flip(1),res_y.flip(1)
return res_x,res_y
```
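Stripped of the fastai types, the padding logic above reduces to a few lines of plain Python. This is only a sketch of the same idea (the real collate function returns `LongTensor`s and can also flip the token order for `backwards`):

```python
def pad_batch(samples, pad_idx=1, pad_first=True):
    "Pad each (src, trg) pair of token-id lists to the batch max lengths."
    max_x = max(len(s[0]) for s in samples)
    max_y = max(len(s[1]) for s in samples)
    def pad(seq, width):
        fill = [pad_idx] * (width - len(seq))
        return fill + list(seq) if pad_first else list(seq) + fill
    return ([pad(x, max_x) for x, _ in samples],
            [pad(y, max_y) for _, y in samples])

res_x, res_y = pad_batch([([2, 3, 4], [5, 6]), ([7, 8], [9, 10, 11])])
print(res_x)  # [[2, 3, 4], [1, 7, 8]]
print(res_y)  # [[1, 5, 6], [9, 10, 11]]
```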
Then we create a special `DataBunch` that uses this collate function.
```
class Seq2SeqDataBunch(TextDataBunch):
"Create a `TextDataBunch` suitable for training an RNN classifier."
@classmethod
def create(cls, train_ds, valid_ds, test_ds=None, path:PathOrStr='.', bs:int=32, val_bs:int=None, pad_idx=1,
pad_first=False, device:torch.device=None, no_check:bool=False, backwards:bool=False, **dl_kwargs) -> DataBunch:
        "Function that transforms the `datasets` into a `DataBunch` for classification. Passes `**dl_kwargs` on to `DataLoader()`"
datasets = cls._init_ds(train_ds, valid_ds, test_ds)
val_bs = ifnone(val_bs, bs)
collate_fn = partial(seq2seq_collate, pad_idx=pad_idx, pad_first=pad_first, backwards=backwards)
train_sampler = SortishSampler(datasets[0].x, key=lambda t: len(datasets[0][t][0].data), bs=bs//2)
train_dl = DataLoader(datasets[0], batch_size=bs, sampler=train_sampler, drop_last=True, **dl_kwargs)
dataloaders = [train_dl]
for ds in datasets[1:]:
lengths = [len(t) for t in ds.x.items]
sampler = SortSampler(ds.x, key=lengths.__getitem__)
dataloaders.append(DataLoader(ds, batch_size=val_bs, sampler=sampler, **dl_kwargs))
return cls(*dataloaders, path=path, device=device, collate_fn=collate_fn, no_check=no_check)
```
And a subclass of `TextList` that will use this `DataBunch` class in the call `.databunch` and will use `TextList` to label (since our targets are other texts).
```
class Seq2SeqTextList(TextList):
_bunch = Seq2SeqDataBunch
_label_cls = TextList
```
That's all we need to use the data block API!
```
src = Seq2SeqTextList.from_df(df, path = path, cols='fr').split_by_rand_pct().label_from_df(cols='en', label_cls=TextList)
np.percentile([len(o) for o in src.train.x.items] + [len(o) for o in src.valid.x.items], 90)
np.percentile([len(o) for o in src.train.y.items] + [len(o) for o in src.valid.y.items], 90)
```
We remove the items where one of the texts is more than 30 tokens long.
```
src = src.filter_by_func(lambda x,y: len(x) > 30 or len(y) > 30)
len(src.train) + len(src.valid)
data = src.databunch()
data.save()
data = load_data(path)
data.show_batch()
```
## Model
### Pretrained embeddings
To install fastText:
```
$ git clone https://github.com/facebookresearch/fastText.git
$ cd fastText
$ pip install .
```
```
# Installation: https://github.com/facebookresearch/fastText#building-fasttext-for-python
import fastText as ft
fr_vecs = ft.load_model(str((path/'cc.fr.300.bin')))
en_vecs = ft.load_model(str((path/'cc.en.300.bin')))
```
We create an embedding module with the pretrained vectors and random data for the missing parts.
```
def create_emb(vecs, itos, em_sz=300, mult=1.):
emb = nn.Embedding(len(itos), em_sz, padding_idx=1)
wgts = emb.weight.data
vec_dic = {w:vecs.get_word_vector(w) for w in vecs.get_words()}
miss = []
for i,w in enumerate(itos):
try: wgts[i] = tensor(vec_dic[w])
except: miss.append(w)
return emb
emb_enc = create_emb(fr_vecs, data.x.vocab.itos)
emb_dec = create_emb(en_vecs, data.y.vocab.itos)
torch.save(emb_enc, path/'models'/'fr_emb.pth')
torch.save(emb_dec, path/'models'/'en_emb.pth')
```
Free some RAM
```
del fr_vecs
del en_vecs
```
### QRNN seq2seq
Our model uses QRNNs at its base (you can substitute GRUs or LSTMs with a few small adaptations). Using QRNNs requires a properly installed CUDA toolkit (a version that matches your PyTorch install).
```
from fastai.text.models.qrnn import QRNN, QRNNLayer
```
The model itself consists of an encoder and a decoder.

The encoder is a (quasi-)recurrent neural net: we feed it our input sentence, producing an output (that we discard for now) and a hidden state. That hidden state is then given to the decoder (another RNN), which uses it in conjunction with its own predicted outputs to produce the translation. We loop until the decoder produces a padding token (or for at most 30 iterations, so we can't get stuck in an infinite loop at the beginning of training).
```
class Seq2SeqQRNN(nn.Module):
def __init__(self, emb_enc, emb_dec, n_hid, max_len, n_layers=2, p_inp:float=0.15, p_enc:float=0.25,
p_dec:float=0.1, p_out:float=0.35, p_hid:float=0.05, bos_idx:int=0, pad_idx:int=1):
super().__init__()
self.n_layers,self.n_hid,self.max_len,self.bos_idx,self.pad_idx = n_layers,n_hid,max_len,bos_idx,pad_idx
self.emb_enc = emb_enc
self.emb_enc_drop = nn.Dropout(p_inp)
self.encoder = QRNN(emb_enc.weight.size(1), n_hid, n_layers=n_layers, dropout=p_enc)
self.out_enc = nn.Linear(n_hid, emb_enc.weight.size(1), bias=False)
self.hid_dp = nn.Dropout(p_hid)
self.emb_dec = emb_dec
self.decoder = QRNN(emb_dec.weight.size(1), emb_dec.weight.size(1), n_layers=n_layers, dropout=p_dec)
self.out_drop = nn.Dropout(p_out)
self.out = nn.Linear(emb_dec.weight.size(1), emb_dec.weight.size(0))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
bs,sl = inp.size()
self.encoder.reset()
self.decoder.reset()
hid = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, hid = self.encoder(emb, hid)
hid = self.out_enc(self.hid_dp(hid))
dec_inp = inp.new_zeros(bs).long() + self.bos_idx
outs = []
for i in range(self.max_len):
emb = self.emb_dec(dec_inp).unsqueeze(1)
out, hid = self.decoder(emb, hid)
out = self.out(self.out_drop(out[:,0]))
outs.append(out)
dec_inp = out.max(1)[1]
if (dec_inp==self.pad_idx).all(): break
return torch.stack(outs, dim=1)
def initHidden(self, bs): return one_param(self).new_zeros(self.n_layers, bs, self.n_hid)
```
#### Loss function
The loss pads output and target so that they are of the same size before using the usual flattened version of cross entropy. We do the same for accuracy.
```
def seq2seq_loss(out, targ, pad_idx=1):
bs,targ_len = targ.size()
_,out_len,vs = out.size()
if targ_len>out_len: out = F.pad(out, (0,0,0,targ_len-out_len,0,0), value=pad_idx)
if out_len>targ_len: targ = F.pad(targ, (0,out_len-targ_len,0,0), value=pad_idx)
return CrossEntropyFlat()(out, targ)
def seq2seq_acc(out, targ, pad_idx=1):
bs,targ_len = targ.size()
_,out_len,vs = out.size()
if targ_len>out_len: out = F.pad(out, (0,0,0,targ_len-out_len,0,0), value=pad_idx)
if out_len>targ_len: targ = F.pad(targ, (0,out_len-targ_len,0,0), value=pad_idx)
out = out.argmax(2)
return (out==targ).float().mean()
```
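The shape-alignment step in `seq2seq_loss` can be sketched with NumPy. This only illustrates the padding along the sequence dimension; the names and sizes here are made up:

```python
import numpy as np

def pad_axis(a, length, axis, value):
    "Pad array `a` with `value` at the end of `axis` up to `length`."
    widths = [(0, 0)] * a.ndim
    widths[axis] = (0, length - a.shape[axis])
    return np.pad(a, widths, constant_values=value)

out = np.zeros((2, 5, 7))               # (bs, out_len, vocab) model outputs
targ = np.ones((2, 9), dtype=np.int64)  # (bs, targ_len) target token ids
if targ.shape[1] > out.shape[1]:        # decoder stopped early: pad the outputs
    out = pad_axis(out, targ.shape[1], axis=1, value=1)
print(out.shape, targ.shape)            # (2, 9, 7) (2, 9) -- lengths now match
```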
#### Bleu metric (see dedicated notebook)
In translation, the metric usually used is BLEU; see the corresponding notebook for the details.
```
class NGram():
def __init__(self, ngram, max_n=5000): self.ngram,self.max_n = ngram,max_n
def __eq__(self, other):
if len(self.ngram) != len(other.ngram): return False
return np.all(np.array(self.ngram) == np.array(other.ngram))
def __hash__(self): return int(sum([o * self.max_n**i for i,o in enumerate(self.ngram)]))
def get_grams(x, n, max_n=5000):
return x if n==1 else [NGram(x[i:i+n], max_n=max_n) for i in range(len(x)-n+1)]
def get_correct_ngrams(pred, targ, n, max_n=5000):
pred_grams,targ_grams = get_grams(pred, n, max_n=max_n),get_grams(targ, n, max_n=max_n)
pred_cnt,targ_cnt = Counter(pred_grams),Counter(targ_grams)
return sum([min(c, targ_cnt[g]) for g,c in pred_cnt.items()]),len(pred_grams)
class CorpusBLEU(Callback):
def __init__(self, vocab_sz):
self.vocab_sz = vocab_sz
self.name = 'bleu'
def on_epoch_begin(self, **kwargs):
self.pred_len,self.targ_len,self.corrects,self.counts = 0,0,[0]*4,[0]*4
def on_batch_end(self, last_output, last_target, **kwargs):
last_output = last_output.argmax(dim=-1)
for pred,targ in zip(last_output.cpu().numpy(),last_target.cpu().numpy()):
self.pred_len += len(pred)
self.targ_len += len(targ)
for i in range(4):
c,t = get_correct_ngrams(pred, targ, i+1, max_n=self.vocab_sz)
self.corrects[i] += c
self.counts[i] += t
def on_epoch_end(self, last_metrics, **kwargs):
precs = [c/t for c,t in zip(self.corrects,self.counts)]
len_penalty = exp(1 - self.targ_len/self.pred_len) if self.pred_len < self.targ_len else 1
bleu = len_penalty * ((precs[0]*precs[1]*precs[2]*precs[3]) ** 0.25)
return add_metrics(last_metrics, bleu)
```
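A quick worked example makes the clipped n-gram counting concrete. This is a simplified standalone sketch of the helpers above (using plain tuples instead of the `NGram` class):

```python
from collections import Counter

def grams(seq, n):
    "All n-grams of a token sequence, as tuples."
    return [tuple(seq[i:i+n]) for i in range(len(seq) - n + 1)]

def correct_ngrams(pred, targ, n):
    "Clipped n-gram matches (numerator) and total predicted n-grams (denominator)."
    p_grams, t_grams = grams(pred, n), grams(targ, n)
    p_cnt, t_cnt = Counter(p_grams), Counter(t_grams)
    return sum(min(c, t_cnt[g]) for g, c in p_cnt.items()), len(p_grams)

pred, targ = [1, 2, 3, 4], [2, 3, 4, 5]
print(correct_ngrams(pred, targ, 1))  # (3, 4): unigrams 2, 3, 4 match
print(correct_ngrams(pred, targ, 2))  # (2, 3): bigrams (2,3) and (3,4) match
```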
We load our pretrained embeddings to create the model.
```
emb_enc = torch.load(path/'models'/'fr_emb.pth')
emb_dec = torch.load(path/'models'/'en_emb.pth')
model = Seq2SeqQRNN(emb_enc, emb_dec, 256, 30, n_layers=2)
learn = Learner(data, model, loss_func=seq2seq_loss, metrics=[seq2seq_acc, CorpusBLEU(len(data.y.vocab.itos))])
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8, 1e-2)
```
So how good is our model? Let's see a few predictions.
```
def get_predictions(learn, ds_type=DatasetType.Valid):
learn.model.eval()
inputs, targets, outputs = [],[],[]
with torch.no_grad():
for xb,yb in progress_bar(learn.dl(ds_type)):
out = learn.model(xb)
for x,y,z in zip(xb,yb,out):
inputs.append(learn.data.train_ds.x.reconstruct(x))
targets.append(learn.data.train_ds.y.reconstruct(y))
outputs.append(learn.data.train_ds.y.reconstruct(z.argmax(1)))
return inputs, targets, outputs
inputs, targets, outputs = get_predictions(learn)
inputs[700], targets[700], outputs[700]
inputs[701], targets[701], outputs[701]
inputs[2513], targets[2513], outputs[2513]
inputs[4000], targets[4000], outputs[4000]
```
It usually begins well but falls back on easy words at the end of the question.
### Teacher forcing
One way to help training is to help the decoder by feeding it the real targets instead of its predictions (if it starts with wrong words, it's very unlikely to give us the right translation). We do that all the time at the beginning, then progressively reduce the amount of teacher forcing.
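The decay schedule used by the callback is easy to tabulate on its own — a standalone sketch of the same formula, with `end_epoch=8` as an example:

```python
def pr_force(epoch, end_epoch):
    "Probability of feeding the true target token, decayed linearly from 1.0 toward 0.5."
    return 1 - 0.5 * epoch / end_epoch

schedule = [pr_force(e, 8) for e in range(8)]
print(schedule)  # [1.0, 0.9375, 0.875, 0.8125, 0.75, 0.6875, 0.625, 0.5625]
```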
```
class TeacherForcing(LearnerCallback):
def __init__(self, learn, end_epoch):
super().__init__(learn)
self.end_epoch = end_epoch
def on_batch_begin(self, last_input, last_target, train, **kwargs):
if train: return {'last_input': [last_input, last_target]}
def on_epoch_begin(self, epoch, **kwargs):
self.learn.model.pr_force = 1 - 0.5 * epoch/self.end_epoch
class Seq2SeqQRNN(nn.Module):
def __init__(self, emb_enc, emb_dec, n_hid, max_len, n_layers=2, p_inp:float=0.15, p_enc:float=0.25,
p_dec:float=0.1, p_out:float=0.35, p_hid:float=0.05, bos_idx:int=0, pad_idx:int=1):
super().__init__()
self.n_layers,self.n_hid,self.max_len,self.bos_idx,self.pad_idx = n_layers,n_hid,max_len,bos_idx,pad_idx
self.emb_enc = emb_enc
self.emb_enc_drop = nn.Dropout(p_inp)
self.encoder = QRNN(emb_enc.weight.size(1), n_hid, n_layers=n_layers, dropout=p_enc)
self.out_enc = nn.Linear(n_hid, emb_enc.weight.size(1), bias=False)
self.hid_dp = nn.Dropout(p_hid)
self.emb_dec = emb_dec
self.decoder = QRNN(emb_dec.weight.size(1), emb_dec.weight.size(1), n_layers=n_layers, dropout=p_dec)
self.out_drop = nn.Dropout(p_out)
self.out = nn.Linear(emb_dec.weight.size(1), emb_dec.weight.size(0))
self.out.weight.data = self.emb_dec.weight.data
self.pr_force = 0.
def forward(self, inp, targ=None):
bs,sl = inp.size()
hid = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, hid = self.encoder(emb, hid)
hid = self.out_enc(self.hid_dp(hid))
dec_inp = inp.new_zeros(bs).long() + self.bos_idx
res = []
for i in range(self.max_len):
emb = self.emb_dec(dec_inp).unsqueeze(1)
outp, hid = self.decoder(emb, hid)
outp = self.out(self.out_drop(outp[:,0]))
res.append(outp)
dec_inp = outp.data.max(1)[1]
if (dec_inp==self.pad_idx).all(): break
if (targ is not None) and (random.random()<self.pr_force):
if i>=targ.shape[1]: break
dec_inp = targ[:,i]
return torch.stack(res, dim=1)
def initHidden(self, bs): return one_param(self).new_zeros(self.n_layers, bs, self.n_hid)
emb_enc = torch.load(path/'models'/'fr_emb.pth')
emb_dec = torch.load(path/'models'/'en_emb.pth')
model = Seq2SeqQRNN(emb_enc, emb_dec, 256, 30, n_layers=2)
learn = Learner(data, model, loss_func=seq2seq_loss, metrics=[seq2seq_acc, CorpusBLEU(len(data.y.vocab.itos))],
callback_fns=partial(TeacherForcing, end_epoch=8))
learn.fit_one_cycle(8, 1e-2)
inputs, targets, outputs = get_predictions(learn)
inputs[700],targets[700],outputs[700]
inputs[2513], targets[2513], outputs[2513]
inputs[4000], targets[4000], outputs[4000]
#get_bleu(learn)
```
## Bidir
A second thing that might help is to make the encoder bidirectional.
```
class Seq2SeqQRNN(nn.Module):
def __init__(self, emb_enc, emb_dec, n_hid, max_len, n_layers=2, p_inp:float=0.15, p_enc:float=0.25,
p_dec:float=0.1, p_out:float=0.35, p_hid:float=0.05, bos_idx:int=0, pad_idx:int=1):
super().__init__()
self.n_layers,self.n_hid,self.max_len,self.bos_idx,self.pad_idx = n_layers,n_hid,max_len,bos_idx,pad_idx
self.emb_enc = emb_enc
self.emb_enc_drop = nn.Dropout(p_inp)
self.encoder = QRNN(emb_enc.weight.size(1), n_hid, n_layers=n_layers, dropout=p_enc, bidirectional=True)
self.out_enc = nn.Linear(2*n_hid, emb_enc.weight.size(1), bias=False)
self.hid_dp = nn.Dropout(p_hid)
self.emb_dec = emb_dec
self.decoder = QRNN(emb_dec.weight.size(1), emb_dec.weight.size(1), n_layers=n_layers, dropout=p_dec)
self.out_drop = nn.Dropout(p_out)
self.out = nn.Linear(emb_dec.weight.size(1), emb_dec.weight.size(0))
self.out.weight.data = self.emb_dec.weight.data
self.pr_force = 0.
def forward(self, inp, targ=None):
bs,sl = inp.size()
hid = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, hid = self.encoder(emb, hid)
hid = hid.view(2,self.n_layers, bs, self.n_hid).permute(1,2,0,3).contiguous()
hid = self.out_enc(self.hid_dp(hid).view(self.n_layers, bs, 2*self.n_hid))
dec_inp = inp.new_zeros(bs).long() + self.bos_idx
res = []
for i in range(self.max_len):
emb = self.emb_dec(dec_inp).unsqueeze(1)
outp, hid = self.decoder(emb, hid)
outp = self.out(self.out_drop(outp[:,0]))
res.append(outp)
dec_inp = outp.data.max(1)[1]
if (dec_inp==self.pad_idx).all(): break
if (targ is not None) and (random.random()<self.pr_force):
if i>=targ.shape[1]: break
dec_inp = targ[:,i]
return torch.stack(res, dim=1)
def initHidden(self, bs): return one_param(self).new_zeros(2*self.n_layers, bs, self.n_hid)
emb_enc = torch.load(path/'models'/'fr_emb.pth')
emb_dec = torch.load(path/'models'/'en_emb.pth')
model = Seq2SeqQRNN(emb_enc, emb_dec, 256, 30, n_layers=2)
learn = Learner(data, model, loss_func=seq2seq_loss, metrics=[seq2seq_acc, CorpusBLEU(len(data.y.vocab.itos))],
callback_fns=partial(TeacherForcing, end_epoch=8))
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8, 1e-2)
inputs, targets, outputs = get_predictions(learn)
inputs[700], targets[700], outputs[700]
inputs[701], targets[701], outputs[701]
inputs[4001], targets[4001], outputs[4001]
#get_bleu(learn)
```
## Attention
Attention is a technique that uses the output of our encoder: instead of discarding it entirely, we use it together with our hidden state to pay attention to specific words in the input sentence when predicting each word of the output sentence. Specifically, we compute attention weights, then append to the decoder input the linear combination of the encoder outputs, weighted by those attention weights.
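One step of this additive attention can be traced in NumPy. This is a sketch with made-up sizes; `enc_att` and `hid_att` stand for the outputs of the two linear projections in the model, and `V` is the learned scoring vector:

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
bs, sl, n_hid, emb_sz = 2, 5, 4, 8
enc_out = rng.standard_normal((bs, sl, 2 * n_hid))  # bidirectional encoder outputs
enc_att = rng.standard_normal((bs, sl, emb_sz))     # encoder outputs after projection
hid_att = rng.standard_normal((bs, emb_sz))         # decoder hidden state after projection
V = rng.standard_normal(emb_sz)

u = np.tanh(enc_att + hid_att[:, None])             # (bs, sl, emb_sz) additive scores
attn_wgts = softmax(u @ V, axis=1)                  # (bs, sl), sums to 1 per sample
ctx = (attn_wgts[..., None] * enc_out).sum(1)       # (bs, 2*n_hid) context vector
```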
```
def init_param(*sz): return nn.Parameter(torch.randn(sz)/math.sqrt(sz[0]))
class Seq2SeqQRNN(nn.Module):
def __init__(self, emb_enc, emb_dec, n_hid, max_len, n_layers=2, p_inp:float=0.15, p_enc:float=0.25,
p_dec:float=0.1, p_out:float=0.35, p_hid:float=0.05, bos_idx:int=0, pad_idx:int=1):
super().__init__()
self.n_layers,self.n_hid,self.max_len,self.bos_idx,self.pad_idx = n_layers,n_hid,max_len,bos_idx,pad_idx
self.emb_enc = emb_enc
self.emb_enc_drop = nn.Dropout(p_inp)
self.encoder = QRNN(emb_enc.weight.size(1), n_hid, n_layers=n_layers, dropout=p_enc, bidirectional=True)
self.out_enc = nn.Linear(2*n_hid, emb_enc.weight.size(1), bias=False)
self.hid_dp = nn.Dropout(p_hid)
self.emb_dec = emb_dec
emb_sz = emb_dec.weight.size(1)
self.decoder = QRNN(emb_sz + 2*n_hid, emb_dec.weight.size(1), n_layers=n_layers, dropout=p_dec)
self.out_drop = nn.Dropout(p_out)
self.out = nn.Linear(emb_sz, emb_dec.weight.size(0))
self.out.weight.data = self.emb_dec.weight.data #Try tying
self.enc_att = nn.Linear(2*n_hid, emb_sz, bias=False)
self.hid_att = nn.Linear(emb_sz, emb_sz)
self.V = init_param(emb_sz)
self.pr_force = 0.
def forward(self, inp, targ=None):
bs,sl = inp.size()
hid = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, hid = self.encoder(emb, hid)
hid = hid.view(2,self.n_layers, bs, self.n_hid).permute(1,2,0,3).contiguous()
hid = self.out_enc(self.hid_dp(hid).view(self.n_layers, bs, 2*self.n_hid))
dec_inp = inp.new_zeros(bs).long() + self.bos_idx
res = []
enc_att = self.enc_att(enc_out)
for i in range(self.max_len):
hid_att = self.hid_att(hid[-1])
u = torch.tanh(enc_att + hid_att[:,None])
attn_wgts = F.softmax(u @ self.V, 1)
ctx = (attn_wgts[...,None] * enc_out).sum(1)
emb = self.emb_dec(dec_inp)
outp, hid = self.decoder(torch.cat([emb, ctx], 1)[:,None], hid)
outp = self.out(self.out_drop(outp[:,0]))
res.append(outp)
dec_inp = outp.data.max(1)[1]
if (dec_inp==self.pad_idx).all(): break
if (targ is not None) and (random.random()<self.pr_force):
if i>=targ.shape[1]: break
dec_inp = targ[:,i]
return torch.stack(res, dim=1)
def initHidden(self, bs): return one_param(self).new_zeros(2*self.n_layers, bs, self.n_hid)
emb_enc = torch.load(path/'models'/'fr_emb.pth')
emb_dec = torch.load(path/'models'/'en_emb.pth')
model = Seq2SeqQRNN(emb_enc, emb_dec, 256, 30, n_layers=2)
learn = Learner(data, model, loss_func=seq2seq_loss, metrics=[seq2seq_acc, CorpusBLEU(len(data.y.vocab.itos))],
callback_fns=partial(TeacherForcing, end_epoch=8))
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8, 3e-3)
inputs, targets, outputs = get_predictions(learn)
inputs[700], targets[700], outputs[700]
inputs[701], targets[701], outputs[701]
inputs[4002], targets[4002], outputs[4002]
```
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
```
## The forward and backward passes
```
#export
from exp.nb_01 import *
def get_data():
path = datasets.download_data(MNIST_URL, ext='.gz')
with gzip.open(path, 'rb') as f:
((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1')
return map(tensor, (x_train,y_train,x_valid,y_valid))
def normalize(x, m, s): return (x-m)/s
x_train,y_train,x_valid,y_valid = get_data()
x_train.mean(),x_train.std()
train_mean,train_std = x_train.mean(),x_train.std()
x_train = normalize(x_train, train_mean, train_std)
# NB: Use training, not validation mean for validation set
x_valid = normalize(x_valid, train_mean, train_std)
train_mean,train_std = x_train.mean(),x_train.std()
#export
def test_near_zero(a,tol=1e-3): assert a.abs()<tol, f"Near zero: {a}"
test_near_zero(x_train.mean())
test_near_zero(1-x_train.std())
n,m = x_train.shape
c = y_train.max()+1
n,m,c
```
## Foundations version
### Basic architecture
```
# num hidden
nh = 50
# simplified kaiming init / he init
w1 = torch.randn(m,nh)/math.sqrt(m)
b1 = torch.zeros(nh)
w2 = torch.randn(nh,1)/math.sqrt(nh)
b2 = torch.zeros(1)
assert w1.mean().abs()<1e-3
assert (w1.std()-1/math.sqrt(m)).abs()<1e-3
def relu(x): return x.clamp_min(0.)
def lin(x, w, b): return x@w + b
# This should be ~ (0,1) (mean,std)...
x_valid.mean(),x_valid.std()
t = lin(x_valid, w1, b1)
#...so should this, because we used kaiming init, which is designed to do this
t.mean(),t.std()
t = relu(lin(x_valid, w1, b1))
#...actually it really should be this!
t.mean(),t.std()
```
From pytorch docs: `a: the negative slope of the rectifier used after this layer (0 for ReLU by default)`
$$\text{std} = \sqrt{\frac{2}{(1 + a^2) \times \text{fan_in}}}$$
This was introduced in the paper that described the Imagenet-winning approach from *He et al*: [Delving Deep into Rectifiers](https://arxiv.org/abs/1502.01852), which was also the first paper that claimed "super-human performance" on Imagenet (and, most importantly, it introduced resnets!)
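The formula can be checked numerically: with `a = 0` and weights of std `sqrt(2/fan_in)`, a linear layer's pre-activations on unit-variance inputs should come out with std close to `sqrt(2)`. A pure-Python Monte-Carlo sketch (sizes are made up):

```python
import math, random

random.seed(42)
fan_in, trials = 256, 3000
std_w = math.sqrt(2.0 / fan_in)   # He/Kaiming std for ReLU (a = 0)

# One pre-activation = dot product of unit-variance inputs with He-init weights.
pre_acts = [sum(random.gauss(0, 1) * random.gauss(0, std_w) for _ in range(fan_in))
            for _ in range(trials)]
mean = sum(pre_acts) / trials
std = math.sqrt(sum((p - mean) ** 2 for p in pre_acts) / trials)
print(round(std, 2))  # should land close to sqrt(2) ~ 1.41
```

The ReLU then roughly halves the variance, which is exactly what the factor 2 in the numerator compensates for across layers.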
```
# kaiming init / he init for relu
w1 = torch.randn(m,nh)*math.sqrt(2/m)
w1.mean(),w1.std()
import torch.nn
w1.shape
torch.nn.Linear(m,nh).weight.shape
torch.nn.Linear??
torch.nn.functional.linear??
#export
from torch.nn import init
w1 = torch.zeros(m,nh)
init.kaiming_normal_(w1, mode='fan_out')
t = relu(lin(x_valid, w1, b1))
w1.mean(),w1.std()
t.mean(),t.std()
torch.nn.Conv2d??
torch.nn.modules.conv._ConvNd.reset_parameters??
# what if...?
def relu(x): return x.clamp_min(0.) - 0.5
# kaiming init / he init for relu
w1 = torch.randn(m,nh)*math.sqrt(2./m )
t1 = relu(lin(x_valid, w1, b1))
t1.mean(),t1.std()
def model(xb):
l1 = lin(xb, w1, b1)
l2 = relu(l1)
l3 = lin(l2, w2, b2)
return l3
%timeit -n 10 _=model(x_valid)
assert model(x_valid).shape==torch.Size([x_valid.shape[0],1])
```
### Loss function: MSE
```
model(x_valid).shape
```
We need `squeeze()` to get rid of that trailing (,1), in order to use `mse`. (Of course, `mse` is not a suitable loss function for multi-class classification; we'll use a better loss function soon. We'll use `mse` for now to keep things simple.)
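The `squeeze()` is not just cosmetic: without it, broadcasting silently turns the subtraction into an outer difference. Shown here with NumPy; the same happens with PyTorch tensors:

```python
import numpy as np

output = np.zeros((5, 1))   # model output: (n, 1)
targ = np.zeros(5)          # targets: (n,)

bad = (output - targ) ** 2            # broadcasts (5,1) - (5,) -> (5,5)!
good = (output.squeeze() - targ) ** 2 # elementwise, as intended
print(bad.shape, good.shape)          # (5, 5) (5,)
```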
```
def mse(output, targ): return (output.squeeze() - targ).pow(2).mean()
y_train,y_valid = y_train.float(),y_valid.float()
preds = model(x_train)
preds.shape
mse(preds, y_train)
```
### Gradients and backward pass
```
def mse_grad(inp, targ):
# grad of loss with respect to output
inp.g = 2. * (inp.squeeze() - targ).unsqueeze(-1) / inp.shape[0]
def relu_grad(inp, out):
# grad of relu with respect to input activations
inp.g = (inp>0).float() * out.g
def lin_grad(inp, out, w, b):
# grad of matmul with respect to input
inp.g = out.g @ w.t()
w.g = (inp.unsqueeze(-1) * out.g.unsqueeze(1)).sum(0)
b.g = out.g.sum(0)
def forward_and_backward(inp, targ):
# forward pass:
l1 = inp @ w1 + b1
l2 = relu(l1)
out = l2 @ w2 + b2
# we don't actually need the loss in backward!
loss = mse(out, targ)
# backward pass:
mse_grad(out, targ)
lin_grad(l2, out, w2, b2)
relu_grad(l1, l2)
lin_grad(inp, l1, w1, b1)
forward_and_backward(x_train, y_train)
# Save for testing against later
w1g = w1.g.clone()
w2g = w2.g.clone()
b1g = b1.g.clone()
b2g = b2.g.clone()
ig = x_train.g.clone()
```
We cheat a little bit and use PyTorch autograd to check our results.
```
xt2 = x_train.clone().requires_grad_(True)
w12 = w1.clone().requires_grad_(True)
w22 = w2.clone().requires_grad_(True)
b12 = b1.clone().requires_grad_(True)
b22 = b2.clone().requires_grad_(True)
def forward(inp, targ):
# forward pass:
l1 = inp @ w12 + b12
l2 = relu(l1)
out = l2 @ w22 + b22
# we don't actually need the loss in backward!
return mse(out, targ)
loss = forward(xt2, y_train)
loss.backward()
test_near(w22.grad, w2g)
test_near(b22.grad, b2g)
test_near(w12.grad, w1g)
test_near(b12.grad, b1g)
test_near(xt2.grad, ig )
```
## Refactor model
### Layers as classes
```
class Relu():
def __call__(self, inp):
self.inp = inp
self.out = inp.clamp_min(0.)-0.5
return self.out
def backward(self): self.inp.g = (self.inp>0).float() * self.out.g
class Lin():
def __init__(self, w, b): self.w,self.b = w,b
def __call__(self, inp):
self.inp = inp
self.out = inp@self.w + self.b
return self.out
def backward(self):
self.inp.g = self.out.g @ self.w.t()
# Creating a giant outer product, just to sum it, is inefficient!
self.w.g = (self.inp.unsqueeze(-1) * self.out.g.unsqueeze(1)).sum(0)
self.b.g = self.out.g.sum(0)
def lin_grad(inp, out, w, b):
# grad of matmul with respect to input
inp.g = out.g @ w.t()
w.g = (inp.unsqueeze(-1) * out.g.unsqueeze(1)).sum(0)
b.g = out.g.sum(0)
class Mse():
def __call__(self, inp, targ):
self.inp = inp
self.targ = targ
self.out = (inp.squeeze() - targ).pow(2).mean()
return self.out
def backward(self):
self.inp.g = 2. * (self.inp.squeeze() - self.targ).unsqueeze(-1) / self.targ.shape[0]
class Model():
def __init__(self, w1, b1, w2, b2):
self.layers = [Lin(w1,b1), Relu(), Lin(w2,b2)]
self.loss = Mse()
def __call__(self, x, targ):
for l in self.layers: x = l(x)
return self.loss(x, targ)
def backward(self):
self.loss.backward()
for l in reversed(self.layers): l.backward()
def mse_grad(inp, targ):
# grad of loss with respect to output
inp.g = 2. * (inp.squeeze() - targ).unsqueeze(-1) / targ.shape[0]
w1.g,b1.g,w2.g,b2.g = [None]*4
model = Model(w1, b1, w2, b2)
%time loss = model(x_train, y_train)
%time model.backward()
test_near(w2g, w2.g)
test_near(b2g, b2.g)
test_near(w1g, w1.g)
test_near(b1g, b1.g)
test_near(ig, x_train.g)
```
### Module.forward()
```
class Module():
def __call__(self, *args):
self.args = args
self.out = self.forward(*args)
return self.out
def forward(self): raise Exception('not implemented')
def backward(self): self.bwd(self.out, *self.args)
class Relu(Module):
def forward(self, inp): return inp.clamp_min(0.)-0.5
def bwd(self, out, inp): inp.g = (inp>0).float() * out.g
class Lin(Module):
def __init__(self, w, b): self.w,self.b = w,b
def forward(self, inp): return inp@self.w + self.b
def bwd(self, out, inp):
inp.g = out.g @ self.w.t()
self.w.g = torch.einsum("bi,bj->ij", inp, out.g)
self.b.g = out.g.sum(0)
class Mse(Module):
def forward (self, inp, targ): return (inp.squeeze() - targ).pow(2).mean()
def bwd(self, out, inp, targ): inp.g = 2*(inp.squeeze()-targ).unsqueeze(-1) / targ.shape[0]
class Model():
def __init__(self):
self.layers = [Lin(w1,b1), Relu(), Lin(w2,b2)]
self.loss = Mse()
def __call__(self, x, targ):
for l in self.layers: x = l(x)
return self.loss(x, targ)
def backward(self):
self.loss.backward()
for l in reversed(self.layers): l.backward()
w1.g,b1.g,w2.g,b2.g = [None]*4
model = Model()
%time loss = model(x_train, y_train)
%time model.backward()
test_near(w2g, w2.g)
test_near(b2g, b2.g)
test_near(w1g, w1.g)
test_near(b1g, b1.g)
test_near(ig, x_train.g)
```
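The `__call__`/`forward`/`bwd` pattern above can be shrunk to scalars to see the mechanics in isolation — a pure-Python sketch where a tiny `Box` class stands in for a tensor carrying its gradient in `.g`:

```python
class Box:
    "Holds a value and (after backward) its gradient."
    def __init__(self, data): self.data, self.g = data, 0.0

class Module():
    def __call__(self, *args):
        self.args = args
        self.out = self.forward(*args)
        return self.out
    def forward(self, *args): raise NotImplementedError
    def backward(self): self.bwd(self.out, *self.args)

class Square(Module):
    def forward(self, inp): return Box(inp.data ** 2)
    def bwd(self, out, inp): inp.g = 2 * inp.data * out.g  # chain rule

sq = Square()
x = Box(3.0)
y = sq(x)
y.g = 1.0      # seed the upstream gradient, like loss.backward() does
sq.backward()
print(x.g)     # 6.0 == d(x^2)/dx at x = 3
```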
### Without einsum
```
class Lin(Module):
def __init__(self, w, b): self.w,self.b = w,b
def forward(self, inp): return inp@self.w + self.b
def bwd(self, out, inp):
inp.g = out.g @ self.w.t()
self.w.g = inp.t() @ out.g
self.b.g = out.g.sum(0)
w1.g,b1.g,w2.g,b2.g = [None]*4
model = Model()
%time loss = model(x_train, y_train)
%time model.backward()
test_near(w2g, w2.g)
test_near(b2g, b2.g)
test_near(w1g, w1.g)
test_near(b1g, b1.g)
test_near(ig, x_train.g)
```
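The replacement is exactly equivalent: `einsum("bi,bj->ij", inp, out.g)` is just a transposed matrix product, summing the per-sample outer products over the batch. A quick NumPy check:

```python
import numpy as np

rng = np.random.default_rng(0)
inp = rng.standard_normal((4, 3))    # (batch, n_in) layer inputs
out_g = rng.standard_normal((4, 2))  # (batch, n_out) upstream gradients

w_g_einsum = np.einsum('bi,bj->ij', inp, out_g)
w_g_matmul = inp.T @ out_g           # same (n_in, n_out) weight gradient
print(np.allclose(w_g_einsum, w_g_matmul))  # True
```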
### nn.Linear and nn.Module
```
#export
from torch import nn
class Model(nn.Module):
def __init__(self, n_in, nh, n_out):
super().__init__()
self.layers = [nn.Linear(n_in,nh), nn.ReLU(), nn.Linear(nh,n_out)]
self.loss = mse
def __call__(self, x, targ):
for l in self.layers: x = l(x)
return self.loss(x.squeeze(), targ)
model = Model(m, nh, 1)
%time loss = model(x_train, y_train)
%time loss.backward()
```
## Export
```
!./notebook2script.py 02_fully_connected.ipynb
```
---
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout, Bidirectional
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
### YOUR CODE HERE
from tensorflow.keras import regularizers
###
import tensorflow.keras.utils as ku
import numpy as np
tokenizer = Tokenizer()
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/sonnets.txt \
-O /tmp/sonnets.txt
data = open('/tmp/sonnets.txt').read()
corpus = data.lower().split("\n")
tokenizer.fit_on_texts(corpus)
total_words = len(tokenizer.word_index) + 1
# create input sequences using list of tokens
input_sequences = []
for line in corpus:
token_list = tokenizer.texts_to_sequences([line])[0]
for i in range(1, len(token_list)):
n_gram_sequence = token_list[:i+1]
input_sequences.append(n_gram_sequence)
# pad sequences
max_sequence_len = max([len(x) for x in input_sequences])
input_sequences = np.array(pad_sequences(input_sequences, maxlen=max_sequence_len, padding='pre'))
# create predictors and label
predictors, label = input_sequences[:,:-1],input_sequences[:,-1]
label = ku.to_categorical(label, num_classes=total_words)
model = Sequential()
model.add(Embedding(input_dim=total_words, output_dim=100, input_length=max_sequence_len-1))
model.add(Bidirectional(LSTM(150, return_sequences=True)))
model.add(Dropout(0.2))
model.add(LSTM(100))
model.add(Dense(units=total_words//2, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
model.add(Dense(units=total_words, activation='softmax'))
# Pick an optimizer
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
history = model.fit(predictors, label, epochs=100, verbose=1)
import matplotlib.pyplot as plt
acc = history.history['accuracy']
loss = history.history['loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'b', label='Training accuracy')
plt.title('Training accuracy')
plt.figure()
plt.plot(epochs, loss, 'b', label='Training Loss')
plt.title('Training loss')
plt.legend()
plt.show()
seed_text = "Help me Obi Wan Kenobi, you're my only hope"
next_words = 100
for _ in range(next_words):
token_list = tokenizer.texts_to_sequences([seed_text])[0]
token_list = pad_sequences([token_list], maxlen=max_sequence_len-1, padding='pre')
predicted = np.argmax(model.predict(token_list, verbose=0), axis=-1)[0]  # predict_classes was removed in TF 2.6; argmax over predict() is the replacement
output_word = ""
for word, index in tokenizer.word_index.items():
if index == predicted:
output_word = word
break
seed_text += " " + output_word
print(seed_text)
```
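To make the n-gram expansion in the training-data loop above concrete, here is a toy illustration (the token values are made up; sketch only):

```
# toy illustration of the n-gram expansion used when building input_sequences
token_list = [5, 3, 8, 2]  # a pretend tokenized line
seqs = [token_list[:i + 1] for i in range(1, len(token_list))]
print(seqs)  # [[5, 3], [5, 3, 8], [5, 3, 8, 2]]
```

After padding, each prefix provides the predictors and its last token becomes the label, which is exactly how `predictors` and `label` are split above.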
```
import pickle
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
mnist = pickle.Unpickler(open('mnist.pkl', 'rb'), encoding = 'latin1').load()
(train, validation, test) = mnist
(train_images, train_labels) = train
(validation_images, validation_labels) = validation
(test_images, test_labels) = test
image_size = 28
features_size = 784
classes_count = 10
print(train_images.shape)
print(validation_images.shape)
print(test_images.shape)
for i in range(10):
plt.subplot(1, 10, i + 1)
plt.title(str(train_labels[i]))
plt.imshow(train_images[i].reshape((image_size, image_size)))
plt.axis('off')
hidden_neurons = 400
minibatch_size = 200
regularization_factor = 1e-6
learning_rate = 0.5
learning_rate_decay = 0.999
# 2 relu layers, 1 output softmax layer
# it's actually stemmed 4 layers network hence strange names of variables
W2 = np.random.uniform(high = 1.0 / features_size, size = (features_size, hidden_neurons))
b2 = np.zeros(shape = (hidden_neurons,))
W3 = np.random.uniform(high = 1.0 / hidden_neurons, size = (hidden_neurons, hidden_neurons))
b3 = np.zeros(shape = (hidden_neurons,))
W4 = np.random.uniform(high = 1.0 / hidden_neurons, size = (hidden_neurons, classes_count))
b4 = np.zeros(shape = (classes_count,))
learning_history = []
for epoch in range(5000):
choice = np.random.choice(train_images.shape[0], minibatch_size)
X = train_images[choice]
y = train_labels[choice]
# forward pass
H1 = X
H2 = np.maximum(np.dot(H1, W2) + b2, 0)
H3 = np.maximum(np.dot(H2, W3) + b3, 0)
H4 = np.dot(H3, W4) + b4
scores = H4 - np.max(H4, axis = 1, keepdims = True)
probs = np.exp(scores)
probs /= np.sum(probs, axis = 1, keepdims = True)
labels = np.argmax(probs, axis = 1)
accuracy = np.mean(labels == y)
loss = np.sum(-np.log(probs[range(minibatch_size), y])) / minibatch_size
loss += 0.5 * regularization_factor * (np.sum(W2 * W2) + np.sum(b2 * b2))
loss += 0.5 * regularization_factor * (np.sum(W3 * W3) + np.sum(b3 * b3))
loss += 0.5 * regularization_factor * (np.sum(W4 * W4) + np.sum(b4 * b4))
if epoch % 200 == 0:
H1v = validation_images
H2v = np.maximum(np.dot(H1v, W2) + b2, 0)
H3v = np.maximum(np.dot(H2v, W3) + b3, 0)
H4v = np.dot(H3v, W4) + b4
validation_pred = np.argmax(H4v, axis = 1)
validation_accuracy = np.mean(validation_labels == validation_pred)
learning_history.append((accuracy, validation_accuracy, loss))
print('epoch %d: accuracy = %f, validation_accuracy = %f, loss = %f' % (epoch, accuracy, validation_accuracy, loss))
if epoch % 500 == 0:
plt.subplot(1, 10, epoch // 500 + 1)  # integer division: subplot index must be an int in Python 3
#plt.imshow(W2[:, 100].reshape(image_size, image_size))
plt.imshow(W3[:, 100].reshape(20, 20))
plt.axis('off')
# backprop
# layer 4
dL_dH4 = np.array(probs)
dL_dH4[range(minibatch_size), y] -= 1
dL_dH4 /= minibatch_size
dH4_dW4 = np.array(H3)
dL_dW4 = np.dot(dH4_dW4.T, dL_dH4)
dL_dW4 += regularization_factor * W4
dL_db4 = np.sum(dL_dH4, axis = 0)
dL_db4 += regularization_factor * b4
# layer 3
dH3_dW3 = np.array(H2)
dH3_db3 = np.ones(shape = H2.shape[0])
dH4_dH3 = np.array(W4)
dL_dH3 = np.dot(dL_dH4, dH4_dH3.T)
dL_dH3[H3 <= 0] = 0
dL_dW3 = np.dot(dH3_dW3.T, dL_dH3)
dL_dW3 += regularization_factor * W3
dL_db3 = np.dot(dH3_db3.T, dL_dH3)
dL_db3 += regularization_factor * b3
# layer 2
dH2_dW2 = np.array(H1)
dH2_db2 = np.ones(shape = H1.shape[0])
dH3_dH2 = np.array(W3)
dL_dH2 = np.dot(dL_dH3, dH3_dH2.T)
dL_dH2[H2 <= 0] = 0
dL_dW2 = np.dot(dH2_dW2.T, dL_dH2)
dL_dW2 += regularization_factor * W2
dL_db2 = np.dot(dH2_db2.T, dL_dH2)
dL_db2 += regularization_factor * b2
# sgd step
W4 += - learning_rate * dL_dW4
b4 += - learning_rate * dL_db4
W3 += - learning_rate * dL_dW3
b3 += - learning_rate * dL_db3
W2 += - learning_rate * dL_dW2
b2 += - learning_rate * dL_db2
learning_rate *= learning_rate_decay
plt.subplot(3, 1, 1)
plt.plot([x[0] for x in learning_history])
plt.xlabel('iteration')
plt.ylabel('train accuracy')
plt.subplot(3, 1, 2)
plt.plot([x[1] for x in learning_history])
plt.xlabel('iteration')
plt.ylabel('val accuracy')
plt.subplot(3, 1, 3)
plt.plot([x[2] for x in learning_history])
plt.xlabel('iteration')
plt.ylabel('loss')
for i in range(25):
plt.subplot(5, 5, i + 1)
plt.imshow(W2[:, i * 10].reshape(image_size, image_size))
plt.axis('off')
for i in range(25):
plt.subplot(5, 5, i + 1)
plt.imshow(W3[:, i * 10].reshape(20, 20))
plt.axis('off')
H1t = test_images
H2t = np.maximum(np.dot(H1t, W2) + b2, 0)
H3t = np.maximum(np.dot(H2t, W3) + b3, 0)
H4t = np.dot(H3t, W4) + b4
test_pred = np.argmax(H4t, axis = 1)
test_accuracy = np.mean(test_labels == test_pred)
print("test set accuracy =", test_accuracy)
```
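The max-subtraction before `np.exp` in the forward pass above is the standard numerically stable softmax trick. As a standalone sketch:

```
import numpy as np

def stable_softmax(z):
    # subtracting the row-wise max leaves the probabilities unchanged
    # but keeps exp() from overflowing on large scores
    shifted = z - np.max(z, axis=1, keepdims=True)
    e = np.exp(shifted)
    return e / np.sum(e, axis=1, keepdims=True)

p = stable_softmax(np.array([[1000.0, 1001.0]]))
print(p)  # finite probabilities summing to 1, no overflow warning
```

Without the shift, `np.exp(1000.0)` overflows to `inf` and the probabilities become `nan`.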
# Pandas (continues)
```
import pandas as pd
import numpy as np
```
## Catenating datasets
We already saw in the NumPy section how we can catenate arrays along an axis: `axis=0` catenates vertically and `axis=1` catenates horizontally, and so on. With the DataFrames of Pandas it works similarly except that the row indices and the column names require extra attention. Also note a slight difference in the name: `np.concatenate` but `pd.concat`.
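As a quick refresher on the NumPy side (a minimal sketch):

```
import numpy as np

a = np.arange(4).reshape(2, 2)
print(np.concatenate([a, a], axis=0).shape)  # (4, 2) -- stacked vertically
print(np.concatenate([a, a], axis=1).shape)  # (2, 4) -- stacked horizontally
```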
Let's start by considering catenation along the axis 0, that is, vertical catenation. We will first make a helper function to easily create DataFrames for testing.
```
def makedf(cols, ind):
data = {c : [str(c) + str(i) for i in ind] for c in cols}
return pd.DataFrame(data, ind)
```
Next we will create some example DataFrames:
```
a=makedf("AB", [0,1])
a
b=makedf("AB", [2,3])
b
c=makedf("CD", [0,1])
c
d=makedf("BC", [2,3])
d
```
In the following simple case, the `concat` function works exactly as we expect it would:
```
pd.concat([a,b]) # The default axis is 0
```
The next, however, will create duplicate indices:
```
r=pd.concat([a,a])
r
r.loc[0,"A"]
```
This is not usually what we want! There are three solutions to this. Firstly, we can forbid the creation of duplicate indices by passing `verify_integrity=True` to the `concat` function:
```
try:
pd.concat([a,a], verify_integrity=True)
except ValueError as e:
import sys
print(e, file=sys.stderr)
```
Secondly, we can ask for automatic renumbering of rows:
```
pd.concat([a,a], ignore_index=True)
```
Thirdly, we can ask for *hierarchical indexing*. The indices can contain multiple levels, but on this course we don't consider hierarchical indices in detail. Hierarchical indices can make a two-dimensional array work like a higher-dimensional array.
```
r2=pd.concat([a,a], keys=['first', 'second'])
r2
r2["A"]["first"][0]
```
Everything works similarly, when we want to catenate horizontally:
```
pd.concat([a,c], axis=1)
```
We have so far assumed that when concatenating vertically the columns of both DataFrames are the same, and when joining horizontally the indices are the same. This is, however, not required:
```
pd.concat([a,d], sort=False) # sort option is used to silence a deprecation message
```
It expanded the non-existing cases with `NaN`s. This method is called an *outer join*, which forms the union of columns in the two DataFrames. The alternative is *inner join*, which forms the intersection of columns:
```
pd.concat([a,d], join="inner")
```
## Merging dataframes
Merging combines two DataFrames based on some common field.
Let's recall the earlier DataFrame about wages and ages of persons:
```
df = pd.DataFrame([[1000, "Jack", 21], [1500, "John", 29]], columns=["Wage", "Name", "Age"])
df
```
Now, create a new DataFrame with the occupations of persons:
```
df2 = pd.DataFrame({"Name" : ["John", "Jack"], "Occupation": ["Plumber", "Carpenter"]})
df2
```
The following function call will merge the two DataFrames on their common field, and, importantly, will keep the indices *aligned*. What this means is that even though the names are listed in a different order in the two frames, the merge will still give the correct result.
```
pd.merge(df, df2)
```
This was an example of a simple one-to-one merge, where the keys in the `Name` columns had 1-to-1 correspondence. Sometimes not all the keys appear in both DataFrames:
```
df3 = pd.concat([df2, pd.DataFrame({ "Name" : ["James"], "Occupation":["Painter"]})], ignore_index=True)
df3
pd.merge(df, df3) # By default an inner join is computed
pd.merge(df, df3, how="outer") # Outer join
```
Also, many-to-one and many-to-many relationships can occur in merges:
```
books = pd.DataFrame({"Title" : ["War and Peace", "Good Omens", "Good Omens"] ,
"Author" : ["Tolstoi", "Terry Pratchett", "Neil Gaiman"]})
books
collections = pd.DataFrame([["Oodi", "War and Peace"],
["Oodi", "Good Omens"],
["Pasila", "Good Omens"],
["Kallio", "War and Peace"]], columns=["Library", "Title"])
collections
```
All combinations with matching keys (`Title`) are created:
```
libraries_with_books_by = pd.merge(books, collections)
libraries_with_books_by
```
## Aggregates and groupings
Let us use again the weather dataset. First, we make the column names a bit more uniform and concise. For example the columns `Year`, `m`, and `d` are not uniformly named.
We can easily change the column names with the `rename` method of the DataFrame. Note that we cannot directly change the index `wh.columns` as it is immutable.
```
wh = pd.read_csv("https://www.cs.helsinki.fi/u/jttoivon/dap/data/fmi/kumpula-weather-2017.csv")
wh3 = wh.rename(columns={"m": "Month", "d": "Day", "Precipitation amount (mm)" : "Precipitation",
"Snow depth (cm)" : "Snow", "Air temperature (degC)" : "Temperature"})
wh3.head()
```
Pandas has an operation that splits a DataFrame into groups, performs some operation on each of the groups, and then combines the result from each group into a resulting DataFrame. This split-apply-combine functionality is a really flexible and powerful operation. In Pandas you start by calling the `groupby` method, which splits the DataFrame into groups. In the following example the rows that contain measurements from the same month belong to the same group:
```
groups = wh3.groupby("Month")
groups
```
Nothing happened yet, but the `groupby` object knows how the division into groups is done. This is called a lazy operation. We can query the number of groups in the `groupby` object:
```
len(groups)
```
We can iterate through all the groups:
```
for key, group in groups:
print(key, len(group))
groups.get_group(2) # Group with index two is February
```
The `groupby` object functions a bit like a DataFrame, so some operations which are allowed for DataFrames are also allowed for the `groupby` object. For example, we can get a subset of columns:
```
groups["Temperature"]
```
For each DataFrame corresponding to a group the Temperature column was chosen. Still nothing was shown, because we haven't applied any operation on the groups.
The common methods also include the aggregation methods. Let's try to apply the `mean` aggregation:
```
groups["Temperature"].mean()
```
Now what happened was that after the mean aggregation was performed on each group, the results were automatically combined into a resulting DataFrame. Let's try some other aggregation:
```
groups["Precipitation"].sum()
groups["Precipitation"].sum().plot(kind='bar')
```
Ok, the -1.0 values in the Precipitation field are causing trouble here, let's convert them to zeros:
```
wh4 = wh3.copy()
wh4.loc[wh4.Precipitation == -1, "Precipitation"] = 0
wh4.loc[wh4.Snow == -1, "Snow"] = 0
wh4.head()
wh4.groupby("Month")["Precipitation"].sum()
wh4.groupby("Month")["Precipitation"].sum().plot(kind='bar')
```
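Several aggregates can also be computed in one go with the `agg` method. A self-contained sketch with toy data (the column names only mimic the weather frame above):

```
import pandas as pd

toy = pd.DataFrame({"Month": [1, 1, 2, 2],
                    "Precipitation": [0.0, 2.5, 1.0, 3.0]})
summary = toy.groupby("Month")["Precipitation"].agg(["sum", "mean", "max"])
print(summary)
```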
### Other ways to operate on groups
The aggregations are not the only possible operations on groups. The other possibilities are filtering, transformation, and application.
In **filtering** some of the groups can be filtered out.
```
def myfilter(df): # The filter function must return a boolean value
return df["Precipitation"].sum() >= 150
wh4.groupby("Month").filter(myfilter) # Filter out months with total precipitation less than 150 mm
```
In a **transformation** each group's DataFrame is manipulated in a way that retains its shape. An example of centering values, so that the deviations from the monthly means are shown:
```
pd.concat([wh4.iloc[:, 0:3],
wh4.groupby("Month")[["Precipitation", "Snow", "Temperature"]].transform(lambda x : x - x.mean())],
axis=1)
```
The **apply** method is very generic and only requires that for each group's DataFrame the given function returns a DataFrame, Series, or a scalar. In the following example, we sort within each group by the temperature:
```
wh4.groupby("Month").apply(lambda df : df.sort_values("Temperature"))
```
## Time series
If a measurement is made at certain points in time, the resulting values together with their measurement times are called a time series. In Pandas, a Series whose index consists of dates/times is a time series.
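A minimal example of such a Series (with made-up values):

```
import pandas as pd

ts = pd.Series([0.1, 0.5, 0.3],
               index=pd.to_datetime(["2017-01-01", "2017-01-02", "2017-01-03"]))
print(type(ts.index).__name__)  # DatetimeIndex
```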
Let's make a copy of the DataFrame that we can mess with:
```
wh2 = wh3.copy()
wh2.columns
```
The column names `Year`, `Month`, and `Day` are now in appropriate form for the `to_datetime` function. It can convert these fields into a timestamp series, which we will add to the DataFrame.
```
wh2["Date"] = pd.to_datetime(wh2[["Year", "Month", "Day"]])
wh2.head()
```
We can now drop the useless fields:
```
wh2=wh2.drop(columns=["Year", "Month", "Day"])
wh2.head()
```
The following method call will set the Date field as the index of the DataFrame.
```
wh2 = wh2.set_index("Date")
wh2.head()
```
We can now easily get a set of rows using date slices:
```
wh2["2017-01-15":"2017-02-03"]
```
By using the `date_range` function even more complicated sets can be formed. The following gets all the Mondays of July:
```
r=pd.date_range("2017-07-01", "2017-07-31", freq="w-mon")
r
wh2.index.difference(r)
wh2.loc[r,:]
```
The following finds all the business days (Monday to Friday) of July:
```
pd.date_range("2017-07-01", "2017-07-31", freq="b")
```
We can get a general idea about the `Temperature` column by plotting it. Note how the index time series is shown nicely on the x-axis.
```
%matplotlib inline
wh2["Temperature"].plot()
```
The graph looks a bit messy at this level of detail. We can smooth it by taking averages over a sliding window of length 30 days:
```
rolling = wh2.Temperature.rolling(30, center=True)
rolling
data = pd.DataFrame({"Temperature" : wh2.Temperature, "Rolling mean" : rolling.mean()})
data.plot();
```
## Additional information
[Pandas cheat sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) Summary of most important Pandas' functions and methods.
Read the article [Tidy Data](https://www.jstatsoft.org/article/view/v059i10/v59i10.pdf). The article uses the statistical software R as an example, but the ideas are relevant in general. Pandas operations maintain data in the tidy format.
Pandas handles only one dimensional data (Series) and two dimensional data (DataFrame). While you can use [hierarchical indices](http://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html#hierarchical-indexing-multiindex) to simulate higher dimensional arrays, you should use the [xarray](http://xarray.pydata.org/en/stable/index.html) library, if you need proper higher-dimensional arrays with labels. It is basically a cross between NumPy and Pandas.