# More Python Basics

---
---

## Going Further with Python for Text-mining

This notebook builds on the [previous notebook](1-intro-to-python-and-text.ipynb) to teach you a bit more Python so you can understand the text-mining examples presented in the following notebooks. These are the fundamentals of working with strings in Python, along with other basics that Python users rely on every day.

However, this is just an introduction, and it is not expected that you will be ready to dive straight into coding after completing this course. Rather, these notebooks and the accompanying live teaching sessions are meant to give you a taster of what text-mining with Python is about. By the end of the course I hope you will come away with either an interest in learning more or, equally valid, an informed feeling that coding is not for you.

Having said this, there are many approaches to learning programming, and it is often only once you happen upon the right approach for you that you make good progress. It is worth trying different topics, teachers, media and learning styles. I tried to learn programming several times over many years and eventually found the right course that kickstarted my own coding journey.

---
---

## Recap of Strings

Welcome back! Here's a quick recap of what we learnt in [1-intro-to-python-and-text](1-intro-to-python-and-text.ipynb). Strings are the way that Python deals with text.

Create a *string* and store it with a *name*:

```
my_sentence = 'The Moon formed 4.51 billion years ago.'
my_sentence
```

_Concatenate_ strings together:

```
my_sentence + " " + "It is the fifth-largest satellite in the Solar System."
```

_Index_ a string. Remember that indexing in Python starts at 0.

```
my_sentence[16]
```

_Slice_ a string. Remember that the slice goes from the first index up to but _not_ including the second index.

```
my_sentence[0:20]
```

Transform a string with _string methods_.
Important: the original string `my_sentence` is unchanged. Instead, a string method _returns_ a new string.

```
my_sentence.swapcase()
```

Test a string with string methods:

```
my_sentence.islower()
```

Create a _list_ of strings:

```
my_list = ['The Moon formed 4.51 billion years ago', "The Moon is Earth's only permanent natural satellite", 'The Moon was first reached in September 1959']
my_list
```

_Slice_ a list. Add a _step_ to jump through a string or list by more than one. Use a step of `-1` to go backwards.

```
my_list[0:3:2]
```

---
---

## Create a List of Strings with List Comprehensions

Let's get going on some new material. We can create new lists in a quick and elegant way by using _list comprehensions_. Essentially, a list comprehension _loops_ over each item in a list, one by one, returns something for each item, and collects the results into a new list.

For example, here is a list of strings:

`['banana', 'apple', 'orange', 'kiwi']`

We could use a list comprehension to loop over this list and create a new list with every item made UPPERCASE. The resulting list would look like this:

`['BANANA', 'APPLE', 'ORANGE', 'KIWI']`

The code for doing this is below:

```
fruit = ['banana', 'apple', 'orange', 'kiwi']
fruit_u = [item.upper() for item in fruit]
fruit_u
```

The pattern is as follows:

`[return_something for each_item in list]`

First thing to say is that `for` and `in` are *keywords*, that is, they are special reserved words in Python. These must be present exactly in this order in every list comprehension. The other words (`return_something`, `each_item`, `list`) are placeholders for whatever variables (names) you are working with in your case.

![List comprehensions diagram](assets/list-comprehension.png)

> Let's look at some of the details:

* A list comprehension goes inside square brackets (`[]`), which tells Python to create a new list.
* `list` is the name of your list. It has to be a list you have already created in a previous step.
* The `each_item in list` part is the loop. `each_item` is the name you assign to each item as it is selected by the loop. The name you choose should be something descriptive that helps you remember what it is.
* The `return_something for` part is what happens each time the loop reaches an item. The `return_something` could just be the original item, or it could be something fairly complicated.

The most basic example is just to return exactly the same item each time, so every item ends up in the new list unchanged. Here is an example where we have taken our original list `my_list` and created a new list `new_list` with the exact same items:

```
new_list = [item for item in my_list]
new_list
```

Why do this? On its own there is not much point in recreating the same list, but this basic pattern is the starting point for transforming and filtering lists.

### Manipulate Lists with String Methods

By adding a string method to a list comprehension we have a powerful way to manipulate a list. We have already seen this in the `fruit` example above. Here's another example of the same thing with the 'Moon' list we've been working with. Every time Python loops over an item it transforms it to uppercase before adding it to the new list:

```
new_list_upper = [item.upper() for item in my_list]
new_list_upper

# Write code to transform every item in the list with a string method (of your choice)
```

Hint: see the [full documentation on string methods](https://docs.python.org/3.8/library/stdtypes.html#string-methods).
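One possible answer to the exercise above — any string method will do; here the `title()` method (not used elsewhere in this notebook) capitalises each word:

```python
my_list = ['The Moon formed 4.51 billion years ago',
           "The Moon is Earth's only permanent natural satellite",
           'The Moon was first reached in September 1959']

# title() returns a new string with the first letter of each word capitalised
new_list_title = [item.title() for item in my_list]
new_list_title[0]   # 'The Moon Formed 4.51 Billion Years Ago'
```

As with `upper()`, the original strings in `my_list` are unchanged; the method returns new strings.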
### Filter Lists with a Condition

We can _filter_ a list by adding a _condition_ so that only certain items are included in the new list:

```
new_list_p = [item for item in my_list if 'p' in item]
new_list_p
```

The pattern is as follows:

`[return_something for each_item in list if some_condition]`

![List comprehensions with condition diagram](assets/list-comprehension-with-condition.png)

Essentially, what we are saying here is that **if** the character "p" is **in** the item when Python loops over it, keep it and add it to the new list; otherwise ignore it and throw it away. Thus, the new list has only two of the strings in it. The first string has a "p" in "permanent"; the second has a "p" in "September".

```
# Write code to filter the list for items that include a number (of your choice)
```

---
---

## Adding New Capabilities with Imports

Python has a lot of amazing capabilities built into the language itself, like being able to manipulate strings. However, in any Python project you are likely to want to use Python code written by someone else to go beyond the built-in capabilities. Code 'written by someone else' comes in the form of a file (or files) separate from the one you are currently working on. An external Python file (or sometimes a *package* of files) is called a *module*, and in order to use one in your code, you need to *import* it. This is a simple process using the keyword `import` and the name of the module. Just make sure that you `import` something _before_ you want to use it!

The pattern is as follows:

`import module_name`

Here are a series of examples. See if you can guess what each one is doing before running it.

```
import math
math.pi

import random
random.random()

import locale
locale.getlocale()
```

The answers are: the value of the mathematical constant *pi*, a random number (different every time you run it), and the current locale that the computer thinks it is working in.
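The same `import module_name` pattern works for any module in Python's standard library. For instance — the `statistics` module here is just one more built-in module, picked for illustration and not used elsewhere in this course:

```python
import statistics

# mean() returns the average of the numbers in a list
statistics.mean([2, 4, 6])   # 4
```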
---
---

## Reusing Code with Functions

A function is a _reusable block of code_ that has been wrapped up and given a _name_. The function might have been written by someone else, or it could have been written by you. We don't cover how to write functions in this course; just how to run functions written by someone else. In order to run the code of a function, we use the name followed by parentheses `()`.

The pattern is as follows:

`name_of_function()`

We have already seen this earlier. Here is a selection of functions (or methods) we have run so far:

```
# 'lower()' is the function (aka method)
my_sentence = 'Butterflies are important as pollinators.'
my_sentence.lower()

# 'isalpha()' is the function (aka method)
my_sentence.isalpha()

# 'random()' is the function
random.random()
```

---

#### Functions and Methods

There is a technical difference between functions and methods. You don't need to worry about the distinction for our course. We will treat all functions and methods as the same. If you are interested in learning more about functions and methods try this [Datacamp Python Functions Tutorial](https://www.datacamp.com/community/tutorials/functions-python-tutorial).

---

### Functions that Take Arguments

If we need to pass particular information to a function, we put that information _in between_ the `()`. Like this:

```
math.sqrt(25)
```

The `25` is the value we want to pass to the `sqrt()` function so it can do its work. This value is called an _argument_ to the function. Functions may take any number of arguments, depending on what the function needs. Here is another function with an argument:

```
import calendar
calendar.isleap(2020)
```

Essentially, you can think of a function as a box.

![Function black box diagram](assets/function-black-box.png)

You put an input into the box (the input may be nothing), the box does something with the input, and then the box gives you back an output.
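To make the box metaphor concrete, here are a couple more boxes from the standard library (the built-in `round()` function is not used elsewhere in this notebook — it is included only to show a function taking two arguments):

```python
import calendar

# Input 2020 goes into the box; the box gives back True
calendar.isleap(2020)   # True

# Some boxes take more than one input: round() takes a number
# and the number of decimal places to keep
round(3.14159, 2)       # 3.14
```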
You generally don't need to worry about _how_ the function does what it does (unless you really want to, in which case you can look at its code). You just know that it works.

> ***Functions are the basis of how we 'get stuff done' in Python.***

For example, we can use the `requests` module to get the text of a Web page:

```
import requests
response = requests.get('https://www.wikipedia.org/')
response.text[137:267]
```

The string `'https://www.wikipedia.org/'` is the argument we pass to the `get()` function for it to open the Web page and read it for us.

Why not try your own URL? What happens if you print the whole of `response.text` instead of slicing out some of the characters?

---
---

## Summary

Here's what we've covered - how to:

* Create and manipulate a new list with a **list comprehension**.
* Filter a list with a **condition**.
* **import** a **module** to add new capabilities.
* Run a **function** with parentheses.
* Pass input **arguments** into a function.
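Putting the whole summary together in one short sketch — an import, a function call, and a list comprehension with a condition (`math.isqrt()` needs Python 3.8 or later, and the variable names here are arbitrary):

```python
import math

numbers = [1, 4, 7, 9, 12, 16]

# Keep only the perfect squares: math.isqrt() returns the integer
# square root, so n is a perfect square when isqrt(n) ** 2 == n
squares = [n for n in numbers if math.isqrt(n) ** 2 == n]
squares   # [1, 4, 9, 16]
```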
<a href="https://colab.research.google.com/github/airctic/icevision/blob/master/notebooks/getting_started_semantic_segmentation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Getting Started with Semantic Segmentation using IceVision

## Introduction to IceVision

IceVision is a framework for object detection, instance segmentation and semantic segmentation that makes it easier to prepare data, train a model, and use that model for inference.

The IceVision framework provides a layer across multiple deep learning engines, libraries, models, and datasets.

It enables you to work with multiple training engines, including [fastai](https://github.com/fastai/fastai) and [pytorch-lightning](https://github.com/PyTorchLightning/pytorch-lightning).

It enables you to work with some of the best deep learning libraries, including [mmdetection](https://arxiv.org/abs/1906.07155), [Ross Wightman's efficientdet implementation](https://github.com/rwightman/efficientdet-pytorch) and model library, [torchvision](https://pytorch.org/vision/stable/index.html), [ultralytics Yolo](https://github.com/ultralytics/yolov5), and [mmsegmentation](https://github.com/open-mmlab/mmsegmentation).

It enables you to select from many possible models and backbones from these libraries, and lets you switch between them with ease. This means that you can pick the engine, library, model, and data format that work for you now and easily change them in the future. You can experiment with them to see which ones meet your requirements.

## Getting Started with Semantic Segmentation

This notebook will walk you through training models for **semantic segmentation** - the task of classifying each pixel of an image into one of multiple classes.

In this tutorial, you will learn how to:

1. Install IceVision.
This will include the IceData package that provides easy access to several sample datasets, as well as the engines and libraries that IceVision works with.
2. Download and prepare a dataset to work with.
3. Select a library, model, and backbone.
4. Instantiate the model, and then train it with the fastai engine.
5. And finally, use the model to identify objects in images.

The notebook is set up so that you can easily select different libraries, models, and backbones to try.

## Install IceVision and IceData

The following downloads and runs a short shell script. The script installs IceVision, IceData, the MMDetection library, the MMSegmentation library and Yolo v5, as well as the fastai and pytorch lightning engines.

Install from pypi...

```
# Torch - Torchvision - IceVision - IceData - MMDetection - YOLOv5 - EfficientDet - mmsegmentation Installation
!wget https://raw.githubusercontent.com/airctic/icevision/master/icevision_install.sh

# Choose your installation target: cuda11 or cuda10 or cpu
!bash icevision_install.sh cuda11
```

... or from icevision master

```
# # Torch - Torchvision - IceVision - IceData - MMDetection - YOLOv5 - EfficientDet - mmsegmentation Installation
# !wget https://raw.githubusercontent.com/airctic/icevision/master/icevision_install.sh

# # Choose your installation target: cuda11 or cuda10 or cpu
# !bash icevision_install.sh cuda11 master

# Restart kernel after installation
import IPython
IPython.Application.instance().kernel.do_shutdown(True)
```

## Imports

All of the IceVision components can be easily imported with a single line.

```
from icevision.all import *
```

## Download and prepare a dataset

Now we can start by downloading the camvid tiny dataset. This tiny dataset contains 100 images whose pixels are classified into 33 classes, including:

- animal,
- car,
- bridge,
- building.

IceVision provides methods to load a dataset, parse annotation files, and more.
Download the camvid tiny dataset and load it using `icedata`:

```
# Download data
data_url = 'https://s3.amazonaws.com/fast-ai-sample/camvid_tiny.tgz'
data_dir = icedata.load_data(data_url, 'camvid_tiny') / 'camvid_tiny'
```

Retrieve the class codes from the dataset file and create a class map (a structure that maps a class identifier, in this case an integer, to the actual class):

```
codes = np.loadtxt(data_dir/'codes.txt', dtype=str)
class_map = ClassMap(list(codes))
```

Get the image files:

```
images_dir = data_dir/'images'
labels_dir = data_dir/'labels'

image_files = get_image_files(images_dir)
```

## Parse the dataset

A unit of data in IceVision is called a record, which contains all the information required to handle a given image (e.g. path to the image, segmentation masks, class map, etc.). Here, we build a collection of records by iterating through the image files.

```
records = RecordCollection(SemanticSegmentationRecord)

for image_file in pbar(image_files):
    record = records.get_by_record_id(image_file.stem)

    if record.is_new:
        record.set_filepath(image_file)
        record.set_img_size(get_img_size(image_file))
        record.segmentation.set_class_map(class_map)

    mask_file = SemanticMaskFile(labels_dir / f'{image_file.stem}_P.png')
    record.segmentation.set_mask(mask_file)

records = records.autofix()
train_records, valid_records = records.make_splits(RandomSplitter([0.8, 0.2]))
```

## Take a peek at records

Using `show_records`, we can preview the content of the records we created:

```
sample_records = random.choices(records, k=3)
show_records(sample_records, ncols=3)
```

## Creating datasets with augmentations and transforms

Data augmentations are essential for robust training and results on many datasets and deep learning tasks. IceVision ships with the [Albumentations](https://albumentations.ai/docs/) library for defining and executing transformations, but can be extended to use others.

For this tutorial, we apply Albumentations' default `aug_tfms` to the training set.
`aug_tfms` randomly applies broadly useful transformations including rotation, cropping, horizontal flips, and more. See the Albumentations documentation to learn how to customize each transformation more fully.

The validation set is only resized (with padding).

We then create `Datasets` for both. The dataset applies the transforms to the annotations (such as bounding boxes) and images in the data records.

```
presize, size = 512, 384
presize, size = ImgSize(presize, int(presize*.75)), ImgSize(size, int(size*.75))

aug_tfms = tfms.A.aug_tfms(presize=presize, size=size,
                           pad=None,
                           crop_fn=partial(tfms.A.RandomCrop, p=0.5),
                           shift_scale_rotate=tfms.A.ShiftScaleRotate(rotate_limit=2),
                           )

train_tfms = tfms.A.Adapter([*aug_tfms, tfms.A.Normalize()])
valid_tfms = tfms.A.Adapter([tfms.A.resize(size), tfms.A.Normalize()])

train_ds = Dataset(train_records, train_tfms)
valid_ds = Dataset(valid_records, valid_tfms)
```

### Understanding the transforms

The Dataset transforms are only applied when we grab (get) an item. Several of the default `aug_tfms` have a random element to them. For example, one might perform a rotation with probability 0.5 where the angle of rotation is randomly selected between +45 and -45 degrees.

This means that the learner sees a slightly different version of an image each time it is accessed. This effectively increases the size of the dataset and improves learning.

We can look at the result of getting the 0th image from the dataset a few times and see the differences. Each time you run the next cell, you will see different results due to the random element in applying transformations.
```
ds_samples = [train_ds[0] for _ in range(3)]
show_samples(ds_samples, ncols=3)
```

# Select a library, model, and backbone

In order to create a model, we need to:

- Choose one of the **libraries** supported by IceVision
- Choose one of the **models** supported by the library
- Choose one of the **backbones** corresponding to a chosen model

You can access any supported models by following the IceVision unified API; use code completion to explore the available models for each library.

### Creating a model

A selection only takes two simple lines of code. For example, trying the `mmsegmentation` library with the `deeplabv3` model and the `resnet50_d8` backbone could be specified by:

```python
model_type = models.mmseg.deeplabv3
backbone = model_type.backbones.resnet50_d8
```

As pretrained models are used by default, we typically leave this out of the backbone creation step.

We've selected a few of the many options below. You can easily pick which option you want to try by setting the value of `selection`. This shows you how easy it is to try new libraries, models, and backbones.

```
selection = 0

if selection == 0:
    model_type = models.fastai.unet
    backbone = model_type.backbones.resnet34(pretrained=True)
    model = model_type.model(backbone=backbone, num_classes=class_map.num_classes, img_size=size)

if selection == 1:
    model_type = models.mmseg.deeplabv3
    backbone = model_type.backbones.resnet50_d8(pretrained=True)
    model = model_type.model(backbone=backbone, num_classes=class_map.num_classes)

if selection == 2:
    model_type = models.mmseg.deeplabv3
    backbone = model_type.backbones.resnet50_d8(pretrained=True)
    model = model_type.model(backbone=backbone, num_classes=class_map.num_classes)

if selection == 3:
    model_type = models.mmseg.segformer
    backbone = model_type.backbones.mit_b0(pretrained=True)
    model = model_type.model(backbone=backbone, num_classes=class_map.num_classes)
```

## Data Loader

The data loader is specific to a `model_type`.
The job of the data loader is to get items from a dataset and batch them up in the specific format required by each model. This is why creating the data loaders is separated from creating the datasets.

We can take a look at the first batch of items from the `valid_dl`. Remember that the `valid_tfms` only resized (with padding) and normalized records, so, unlike the training set, the same images are returned each time. This is important to provide consistent validation during training.

```
# Data Loaders
train_dl = model_type.train_dl(train_ds, batch_size=8, num_workers=4, shuffle=True)
valid_dl = model_type.valid_dl(valid_ds, batch_size=8, num_workers=4, shuffle=False)

# show batch
model_type.show_batch(first(valid_dl), ncols=4)
```

## Metrics

The fastai and pytorch lightning engines collect metrics to track progress during training. IceVision provides metric classes that work across the engines and libraries. The same metrics can be used for both fastai and pytorch lightning.

As this is a segmentation problem, we are going to use two metrics: the multi-class Dice coefficient and segmentation accuracy. Note that we are ignoring "Void" when computing accuracy.

```
metrics = [MulticlassDiceCoefficient(), SegmentationAccuracy(ignore_class=class_map.get_by_name("Void"))]
```

## Training

IceVision is an engine-agnostic framework, meaning it can be plugged into deep learning engines such as [fastai2](https://github.com/fastai/fastai2) and [pytorch-lightning](https://github.com/PyTorchLightning/pytorch-lightning).

### Training using fastai

```
learn = model_type.fastai.learner(dls=[train_dl, valid_dl], model=model, metrics=metrics)
```

Because we use fastai, we get access to its features such as the learning rate finder:

```
learn.lr_find()

learn.fine_tune(10, 1e-4)
```

## Using the model - inference and showing results

The first step in reviewing the model is to show results from the validation dataset. This is easy to do with the `show_results` function.
```
model_type.show_results(model, valid_ds, num_samples=2)
```

### Prediction

Sometimes you want to have more control than `show_results` provides. You can construct an inference dataloader using `infer_dl` from any IceVision dataset, pass it to `predict_from_dl`, and use `show_preds` to look at the predictions.

A prediction is returned as a dict with keys: `scores`, `labels`, `bboxes`, and possibly `masks`.

Prediction functions that take a `keep_images` argument will only return the (tensor representation of the) image when it is `True`. In interactive environments, such as a notebook, it is helpful to see the image with bounding boxes and labels applied. In a deployment context, however, it is typically more useful (and efficient) to return the bounding boxes by themselves.

```
infer_dl = model_type.infer_dl(valid_ds, batch_size=4, shuffle=False)
preds = model_type.predict_from_dl(model, infer_dl, keep_images=True)

show_sample(preds[0].pred)
```

## Happy Learning!

If you need any assistance, feel free to join our [forum](https://discord.gg/JDBeZYK).
# Generate Images March 1, 2021 ``` import argparse import os import sys import torch import torch.nn as nn import torch.nn.parallel import torch.backends.cudnn as cudnn import torch.optim as optim import torch.utils.data from torchsummary import summary import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation from IPython.display import HTML from torch.utils.data import DataLoader, TensorDataset import time from datetime import datetime import glob import pickle import yaml import collections %matplotlib widget sys.path.append('/global/u1/v/vpa/project/jpt_notebooks/Cosmology/Cosmo_GAN/repositories/cosmogan_pytorch/code/5_3d_cgan/1_main_code/') from utils import * ``` ## Build Model structure ``` def f_load_config(config_file): with open(config_file) as f: config = yaml.load(f, Loader=yaml.SafeLoader) return config def f_manual_add_argparse(): ''' use only in jpt notebook''' args=argparse.Namespace() args.config='1_main_code/config_3d_cgan_64_cori.yaml' args.mode='fresh' args.ip_fldr='' # args.local_rank=0 args.facility='cori' args.distributed=False args.ngpu=1 # args.mode='continue' # args.ip_fldr='/global/cfs/cdirs/m3363/vayyar/cosmogan_data/results_from_other_code/pytorch/results/128sq/20201211_093818_nb_test/' return args def weights_init(m): '''custom weights initialization called on netG and netD ''' classname = m.__class__.__name__ if classname.find('Conv') != -1: nn.init.normal_(m.weight.data, 0.0, 0.02) elif classname.find('BatchNorm') != -1: nn.init.normal_(m.weight.data, 1.0, 0.02) nn.init.constant_(m.bias.data, 0) def f_init_gdict(args,gdict): ''' Create global dictionary gdict from args and config file''' ## read config file config_file=args.config with open(config_file) as f: config_dict= yaml.load(f, Loader=yaml.SafeLoader) gdict=config_dict['parameters'] args_dict=vars(args) ## Add args variables to gdict for key in args_dict.keys(): gdict[key]=args_dict[key] return gdict def 
f_gen_images(gdict,netG,optimizerG,sigma,ip_fname,op_loc,op_strg='inf_img_',op_size=500): '''Generate images for best saved models Arguments: gdict, netG, optimizerG, sigma : sigma input value ip_fname: name of input file op_strg: [string name for output file] op_size: Number of images to generate ''' nz,device=gdict['nz'],gdict['device'] try: if torch.cuda.is_available(): checkpoint=torch.load(ip_fname) else: checkpoint=torch.load(ip_fname,map_location=torch.device('cpu')) except Exception as e: print(e) print("skipping generation of images for ",ip_fname) return ## Load checkpoint if gdict['multi-gpu']: netG.module.load_state_dict(checkpoint['G_state']) else: netG.load_state_dict(checkpoint['G_state']) ## Load other stuff iters=checkpoint['iters'] epoch=checkpoint['epoch'] optimizerG.load_state_dict(checkpoint['optimizerG_state_dict']) # Generate batch of latent vectors noise = torch.randn(op_size, 1, 1, 1, nz, device=device) tnsr_cosm_params=(torch.ones(op_size,device=device)*sigma).view(op_size,1) # Generate fake image batch with G netG.eval() ## This is required before running inference with torch.no_grad(): ## This is important. 
## fails without it for multi-gpu
        gen = netG(noise,tnsr_cosm_params)
    gen_images=gen.detach().cpu().numpy()
    print(gen_images.shape)

    op_fname='%s_label-%s_epoch-%s_step-%s.npy'%(op_strg,sigma,epoch,iters)
    np.save(op_loc+op_fname,gen_images)
    print("Image saved in ",op_fname)

if __name__=="__main__":
    torch.backends.cudnn.benchmark=True
    t0=time.time()
    #################################
    args=f_manual_add_argparse()

    ### Set up ###
    # Initialize variables
    gdict={}
    gdict=f_init_gdict(args,gdict)
    print(gdict)

    ## Add args variables to gdict
    # for key in ['ngpu']:
    #     gdict[key]=vars(args)[key]

    gdict['device']=torch.device("cuda" if (torch.cuda.is_available() and gdict['ngpu'] > 0) else "cpu")
    gdict['ngpu']=torch.cuda.device_count()
    gdict['multi-gpu']=True if (gdict['device'].type == 'cuda') and (gdict['ngpu'] > 1) else False

    Generator, Discriminator=f_get_model(gdict['model'],gdict)

    print("Building GAN networks")
    # Create Generator
    netG = Generator(gdict).to(gdict['device'])
    netG.apply(weights_init)
    # print(netG)
    # summary(netG,(1,1,64))

    print("Number of GPUs used %s"%(gdict['ngpu']))
    if (gdict['multi-gpu']):
        netG = nn.DataParallel(netG, list(range(gdict['ngpu'])))

    optimizerG = optim.Adam(netG.parameters(), lr=gdict['learn_rate_g'], betas=(gdict['beta1'], 0.999),eps=1e-7)
```

## Run Inference

```
ls /global/cfs/cdirs/m3363/vayyar/cosmogan_data/results_from_other_code/pytorch/results/3d_cGAN/

# ## For single checkpoint
# main_dir='/global/cfs/cdirs/m3363/vayyar/cosmogan_data/results_from_other_code/pytorch/results/128sq/'
# fldr='20210119_134802_cgan_predict_0.65_m2'
# param_label=1.1
# op_loc=main_dir+fldr+'/images/'
# ip_fname=main_dir+fldr+'/models/checkpoint_best_spec.tar'
# f_gen_images(gdict,netG,optimizerG,param_label,ip_fname,op_loc,op_strg='inference_spec',op_size=1000)

## For multiple checkpoints
main_dir='/global/cfs/cdirs/m3363/vayyar/cosmogan_data/results_from_other_code/pytorch/results/3d_cGAN/'
fldr='20210619_224213_cgan_bs32_nodes8_lr0.0001-vary_fm50'
param_list=[0.5,0.65,0.8,1.1] op_loc=main_dir+fldr+'/images/' step_list=[37810, 33080, 34460] for step in step_list: for param_label in param_list: try: # ip_fname=main_dir+fldr+'/models/checkpoint_{0}.tar'.format(step) # f_gen_images(gdict,netG,optimizerG,param_label,ip_fname,op_loc,op_strg='inference_spec',op_size=1000) ip_fname=glob.glob(main_dir+fldr+'/models/checkpoint_*{0}.tar'.format(step))[0] print(ip_fname) f_gen_images(gdict,netG,optimizerG,param_label,ip_fname,op_loc,op_strg='inference',op_size=128) except Exception as e: print(e) print("skipping ",step) # fname=op_loc+'inference_spec_epoch-11_step-37040.npy' # a1=np.load(fname) # print(a1.shape) ```
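The commented-out lines at the end sketch how a saved batch can be reloaded for a sanity check. As a self-contained illustration of that `.npy` round trip — using dummy data and a temporary directory instead of the real checkpoint outputs, with an example filename following the same pattern:

```python
import os
import tempfile

import numpy as np

# Dummy stand-in for a generated image batch: 8 samples, 1 channel, 64^3 voxels
gen_images = np.random.randn(8, 1, 64, 64, 64).astype(np.float32)

with tempfile.TemporaryDirectory() as op_loc:
    op_fname = 'inference_label-0.65_epoch-11_step-37040.npy'
    np.save(os.path.join(op_loc, op_fname), gen_images)

    # Reload and sanity-check the shape, as in the commented-out lines above
    a1 = np.load(os.path.join(op_loc, op_fname))
    print(a1.shape)   # (8, 1, 64, 64, 64)
```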
``` import re import string import warnings import numpy as np import pandas as pd import itertools import seaborn as sns from time import sleep from pprint import pprint from itertools import chain import matplotlib.pyplot as plt from tqdm.notebook import tqdm_notebook as tqdm import gensim import preprocessor as pre import gensim.corpora as corpora from gensim.utils import simple_preprocess from gensim.models import CoherenceModel import nltk from nltk.corpus import stopwords from nltk.tokenize import word_tokenize import pyLDAvis import pyLDAvis.gensim import pyLDAvis.sklearn from IPython.core.display import HTML from sklearn.pipeline import Pipeline from sklearn.base import TransformerMixin from sklearn.feature_extraction.text import CountVectorizer,TfidfVectorizer import spacy import en_core_web_sm from wordcloud import WordCloud # Word tokenization from spacy.lang.en import English warnings.filterwarnings('ignore') warnings.simplefilter('ignore') warnings.simplefilter(action='ignore', category=FutureWarning) pd.set_option('max_colwidth',500) spacy_stopwords = spacy.lang.en.stop_words.STOP_WORDS ROOT_DATA = 'D:/Statistical Programming Projects/Social Network and Sentiment Analysis/data/' tqdm.pandas() stop_words = set(stopwords.words('english')) sns.set(rc={'figure.figsize':(9,6),'lines.linewidth': 5, 'lines.markersize': 10}) plt.style.use('seaborn-whitegrid') sns.set_context("notebook", font_scale=1.2) sns.set_style("whitegrid",{"font.family": ["Corbel"]}) pd.options.display.float_format = '{:20,.2f}'.format ``` ## A. Topic Modelling ### 1. Load tweet database ``` health_tweets = pd.read_csv(ROOT_DATA+"covid_nlp_data.csv") topic_health_tweets = health_tweets[['id','orig_tweet']] topic_health_tweets = topic_health_tweets[~topic_health_tweets.orig_tweet.isnull()] topic_health_tweets.reset_index(inplace=True,drop=True) topic_health_tweets.head(5) ``` ### 2. 
Prepare tweet column to analyse

```
# Happy emoticons
emoticons_happy = set([
    ':-)', ':)', ';)', ':o)', ':]', ':3', ':c)', ':>', '=]', '8)', '=)', ':}',
    ':^)', ':-D', ':D', '8-D', '8D', 'x-D', 'xD', 'X-D', 'XD', '=-D', '=D',
    '=-3', '=3', ':-))', ":'-)", ":')", ':*', ':^*', '>:P', ':-P', ':P', 'X-P',
    'x-p', 'xp', 'XP', ':-p', ':p', '=p', ':-b', ':b', '>:)', '>;)', '>:-)', '<3'
    ])

# Sad emoticons
emoticons_sad = set([
    ':L', ':-/', '>:/', ':S', '>:[', ':@', ':-(', ':[', ':-||', '=L', ':<',
    ':-[', ':-<', '=\\', '=/', '>:(', ':(', '>.<', ":'-(", ":'(", ':\\', ':-c',
    ':c', ':{', '>:\\', ';('
    ])

emoticons = emoticons_happy.union(emoticons_sad)

def remove_links(tweet):
    tweet = str(tweet)
    tweet = re.sub(r'#', '', tweet) ## remove hashtags from the tweet
    tweet = re.sub(r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', '',tweet) # remove links from tweet
    tweet = re.sub(r'\n', ' ', tweet) ## remove newline characters from the tweet
    tweet = re.sub(r'&amp','and',tweet)
    tweet = re.sub(r'&gt;', '', tweet)
    tweet = re.sub(r'…', '', tweet)
    tweet = re.sub(r'[(\U0001F600-\U0001F92F|\U0001F300-\U0001F5FF|\U0001F680-\U0001F6FF|\U0001F190-\U0001F1FF|\U00002702-\U000027B0|\U0001F926-\U0001FA9F|\u200d|\u2640-\u2642|\u2600-\u2B55|\u23cf|\u23e9|\u231a|\ufe0f)]+','', tweet) # Remove symbols & pictographs
    tweet = re.sub(r'@[\w]+', '', tweet).strip()
    tweet = re.sub(r'\r', '', tweet).strip()
    return tweet

def remove_numbers(tweet):
    tweet = " ".join([word for word in tweet.split() if not word.isnumeric()])
    tweet = re.sub(r'\d{1,2}:\d{2} {0,}(pm){0,} {0,}(p\.m\.){0,} {0,}(am){0,} {0,}(a\.m\.){0,}','',tweet)
    tweet = re.sub(r'(PM){1,}|(AM){1,}|(P\.M\.){1,}|(A\.M\.){1,}','',tweet) # remove time flags
    tweet = re.sub(r'EST|BST|GMT|CST|PST|UTC|EDT','',tweet) # remove timezone codes
    tweet = " ".join([re.sub(r'^[0-9].*','',word) for word in tweet.split()])
    return tweet.strip()

def clean_tweets(tweet):
    tweet = str(tweet)
    words = nltk.word_tokenize(tweet)
    new_words= [word for word
                 in words if word.isalnum()]
    return " ".join(new_words)

def split_capitalised(tweet):
    bow = tweet.split()
    words = [word if word.isupper() or len(re.findall('.[^A-Z]{1,}[a-z]+', word)) == 0
             else " ".join(re.findall('.[^A-Z]{1,}[a-z]+', word))
             for word in bow]
    return " ".join(words)

print("\nRemove links and symbols from tweet")
topic_health_tweets['clean_tweet'] = topic_health_tweets.orig_tweet.progress_apply(remove_links)

print("\nRemove numbers from tweet")
topic_health_tweets['simplified_tweet'] = topic_health_tweets.clean_tweet.progress_apply(remove_numbers)

print("\nRemove punctuation and emojis from tweet")
topic_health_tweets['filtered_tweet'] = topic_health_tweets.simplified_tweet.progress_apply(clean_tweets)

print("\nSplit capitalised words in tweet")
topic_health_tweets['sentence_tweet'] = topic_health_tweets.filtered_tweet.progress_apply(split_capitalised)

topic_health_tweets.head(10)
```

### 3. Here comes spaCy

```
# Initialize the spaCy 'en' model, keeping only the tagger component (for efficiency).
# Do lemmatization keeping only nouns, adjectives, verbs and adverbs.
nlp = spacy.load("en_core_web_sm", disable=['parser', 'ner'])

# My list of stop words.
custom_stop_list = set(['covid', 'covid19', 'coronavirus', 'cov19', 'covid-19', 'cov-19', 'covidー',
                        'corona', 'april', 'apr', 'may', 'pandemic', 'india', 'trump'])
spacy_stopwords = spacy_stopwords.union(custom_stop_list)
my_stop_words = stop_words.union(custom_stop_list)

# Create our list of punctuation marks
punctuations = string.punctuation

def remove_stopwords(texts):
    print("\nPreprocess tweet data")
    return [[word for word in simple_preprocess(str(doc)) if word not in my_stop_words]
            for doc in tqdm(texts)]

def make_bigrams(tweets):
    print("\nMake bigrams of tweet data")
    return [bigram_mod[tweet] for tweet in tqdm(tweets)]

def make_trigrams(tweets):
    print("\nMake trigrams of tweet data")
    return [trigram_mod[bigram_mod[tweet]] for tweet in tqdm(tweets)]

def lemmatization(texts, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):
    """https://spacy.io/api/annotation"""
    texts_out = []
    print("\nLemmatize tweet data")
    for sent in tqdm(texts):
        doc = nlp(" ".join(sent))
        texts_out.append([token.lemma_ for token in doc if token.pos_ in allowed_postags])
    return texts_out
```

### How many topics to use?

```
data_words = topic_health_tweets['sentence_tweet'].values.tolist()
data_words_nostops = remove_stopwords(data_words)  # Remove stop words

#data_words_bigrams = make_bigrams(data_words_nostops)  # Form bigrams
bigram = gensim.models.Phrases(data_words_nostops, min_count=3, threshold=40)  # higher threshold, fewer phrases
bigram_mod = gensim.models.phrases.Phraser(bigram)
data_words_bigrams = make_bigrams(data_words_nostops)

data_lemmatized = lemmatization(data_words_bigrams, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV'])

# Build the bigram and trigram models
# bigram = gensim.models.Phrases(tweet_corpus, min_count=5, threshold=100)  # higher threshold, fewer phrases
# trigram = gensim.models.Phrases(bigram[tweet_corpus], threshold=100)
# Faster way to get a sentence clubbed as a trigram/bigram:
# bigram_mod = gensim.models.phrases.Phraser(bigram)
# trigram_mod = gensim.models.phrases.Phraser(trigram)
# data_words_bigrams = [make_bigrams(tweet) for tweet in data_tweets_lemmatized]
# data_words_trigrams = [make_trigrams(tweet) for tweet in data_tweets_lemmatized]

# Create the dictionary
id2word = corpora.Dictionary(data_lemmatized)

# Term-document frequency
corpus = [id2word.doc2bow(tweet) for tweet in data_lemmatized]

def compute_coherence_values(corpus, dictionary, k):
    lda_model = gensim.models.LdaMulticore(corpus=corpus,
                                           id2word=id2word,
                                           num_topics=k,
                                           random_state=100,
                                           chunksize=100,
                                           passes=10,
                                           eval_every=1,
                                           #alpha='auto',
                                           #eta=b,
                                           workers=7,  # logical processors - 1
                                           minimum_probability=0,
                                           per_word_topics=True)
    coherence_model_lda = CoherenceModel(model=lda_model,
                                         texts=data_lemmatized,
                                         dictionary=id2word,
                                         coherence='c_v')
    return coherence_model_lda.get_coherence()

try:
    model_results_plt = pd.read_csv(ROOT_DATA + "lda_tuning_results.csv")
except FileNotFoundError:
    grid = {}
    grid['Validation_Set'] = {}

    # Topics range
    min_topics = 2
    max_topics = 11
    step_size = 1
    topics_range = range(min_topics, max_topics, step_size)

    # Alpha parameter
    #alpha = list(np.arange(0.01, 1, 0.19))
    #alpha.append('symmetric')
    #alpha.append('asymmetric')

    # Beta parameter
    #beta = list(np.arange(0.01, 1, 0.19))
    #beta.append('symmetric')

    # Validation sets
    num_of_docs = len(corpus)
    corpus_sets = [gensim.utils.ClippedCorpus(corpus, round(num_of_docs * 0.55)),
                   gensim.utils.ClippedCorpus(corpus, round(num_of_docs * 0.7)),
                   gensim.utils.ClippedCorpus(corpus, round(num_of_docs * 0.85)),
                   corpus]
    corpus_title = ['55% Corpus', '70% Corpus', '85% Corpus', '100% Corpus']

    model_results = {'Validation_Set': [],
                     'Topics': [],
                     #'Alpha': [],
                     #'Beta': [],
                     'Coherence': []
                     }

    # Hyperparameters
    hyper = [list(range(0, len(corpus_sets), 1)), topics_range]
    hyper = list(itertools.product(*hyper))

    # Can take a long time to run
    print("Best hyper-parameters for the model")
    for param in tqdm(hyper):
        sleep(0.01)
        cv = compute_coherence_values(corpus=corpus_sets[param[0]],
                                      dictionary=id2word,
                                      k=param[1]
                                      #a=param[2],
                                      #b=param[3]
                                      )
        # Save the model results
        model_results['Validation_Set'].append(corpus_title[param[0]])
        model_results['Topics'].append(param[1])
        #model_results['Alpha'].append(param[2])
        #model_results['Beta'].append(param[3])
        model_results['Coherence'].append(cv)

    pd.DataFrame(model_results).to_csv(ROOT_DATA + "lda_tuning_results.csv", index=False)
    model_results_plt = pd.DataFrame(model_results)

plt.rc('xtick', labelsize=16)
plt.rc('ytick', labelsize=16)
fig, axs = plt.subplots(nrows=1, ncols=1, figsize=(8, 5))
fig.suptitle("Coherence by topics in tweets", weight='bold').set_fontsize('18')
axs = sns.lineplot(x="Topics", y="Coherence", ci=None,
                   hue="Validation_Set",
                   palette=sns.color_palette("bright", 4),
                   style="Validation_Set", markers=True, markersize=10,
                   data=model_results_plt)
axs.set_xlabel('Number of Topics', fontsize=20)
axs.set_ylabel('Coherence Score', fontsize=20)
handles, labels = axs.get_legend_handles_labels()
plt.legend(handles=handles[1:], labels=labels[1:], fontsize=16, ncol=2,
           bbox_to_anchor=(.95, -0.25))
for lh in axs.legend_.legendHandles:
    lh._sizes = [100]
plt.show()

lda_model = gensim.models.LdaMulticore(corpus=corpus,
                                       id2word=id2word,
                                       num_topics=7,  # Set based on the coherence scores in the graph above
                                       #alpha='auto',
                                       #eta=0.96,
                                       random_state=100,
                                       chunksize=1000,
                                       passes=10,
                                       workers=7,  # logical cores - 1
                                       eval_every=1,
                                       minimum_probability=0,
                                       per_word_topics=True)

# Print the top topics.
for top in lda_model.print_topics():
    print(top)

clouds = []
for t in range(lda_model.num_topics):
    wordcloud = WordCloud(background_color='white').fit_words(dict(lda_model.show_topic(t, 200)))
    clouds.append(wordcloud)

fig, axes = plt.subplots(3, 3, figsize=(15, 12),
                         gridspec_kw={'wspace': .05, 'hspace': .05}, squeeze=True)
for index, axis in enumerate(axes.reshape(-1)):
    if index < 7:
        axis.imshow(clouds[index])
        axis.set_title("Topic #" + str(index + 1))
    axis.axis('off')  # hide the axes on every subplot, including the two unused ones
fig.tight_layout()

tweet_topics = {0: [], 1: [], 2: [], 3: [], 4: [], 5: [], 6: []}  # based on the number of topics included

def get_tweet_topics(i):
    top_topics = lda_model.get_document_topics(corpus[i], minimum_probability=0.0)
    top_topics = dict(top_topics)
    {key: value.append(top_topics[key]) for key, value in tweet_topics.items()}
    return True

isDone = [get_tweet_topics(tweet_row) for tweet_row in range(len(corpus))]

tweet_topics = pd.DataFrame(tweet_topics)
tweet_topics.rename(columns={0: 'ReportTestStaySafeWork',
                             1: 'DailyUpdateDeathLiveWebinar',
                             2: 'CasesPeopleDiedDetailGlobalNeed',
                             3: 'GetPublicWorkerDeathsOutbreak',
                             4: 'CasesDeathConfirmedNewUpdate',
                             5: 'WebinarJoinLiveRegisterImpactWorkLearn',
                             6: 'JoinFutureDiscussCrisisLink'
                             }, inplace=True)
tweet_topics['KeyTopic'] = tweet_topics.idxmax(axis=1)

topic_health_tweets = pd.concat([topic_health_tweets, tweet_topics], axis=1)
topic_health_tweets.tail(5)

dashboard = pyLDAvis.gensim.prepare(lda_model, corpus, id2word, mds='tsne')
viz = pyLDAvis.display(dashboard, template_type='notebook')
pyLDAvis.enable_notebook(local=True)
viz

health_tweets = pd.merge(health_tweets,
                         topic_health_tweets[['id', 'simplified_tweet', 'KeyTopic']],
                         on='id', how="outer")
#health_tweets.to_csv(ROOT_DATA + "01_covid_with_topics.csv")
health_tweets.head(5)
```
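The `KeyTopic` column above is assigned with `DataFrame.idxmax(axis=1)`, which, for each tweet, picks the name of the column holding the highest topic probability. A minimal sketch with toy probabilities (the topic names here are illustrative, not taken from the model):

```python
import pandas as pd

# Toy per-tweet topic probabilities; column names are hypothetical.
probs = pd.DataFrame({
    "TopicA": [0.1, 0.7, 0.2],
    "TopicB": [0.6, 0.2, 0.3],
    "TopicC": [0.3, 0.1, 0.5],
})

# idxmax(axis=1) returns, for each row, the name of the column with the max value.
probs["KeyTopic"] = probs[["TopicA", "TopicB", "TopicC"]].idxmax(axis=1)
print(probs["KeyTopic"].tolist())  # ['TopicB', 'TopicA', 'TopicC']
```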
# Convert TFLite model to PyTorch

This uses the model **face_detection_front.tflite** from [MediaPipe](https://github.com/google/mediapipe/tree/master/mediapipe/models).

Prerequisites:

1) Clone the MediaPipe repo:

```
git clone https://github.com/google/mediapipe.git
```

2) Install **flatbuffers**:

```
git clone https://github.com/google/flatbuffers.git
cd flatbuffers
cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release
make -j
cd python
python setup.py install
```

3) Clone the TensorFlow repo. We only need this to get the FlatBuffers schema files (I guess you could just download [schema.fbs](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema.fbs)).

```
git clone https://github.com/tensorflow/tensorflow.git
```

4) Convert the schema files to Python files using **flatc**:

```
./flatbuffers/flatc --python tensorflow/tensorflow/lite/schema/schema.fbs
```

Now we can use the Python FlatBuffer API to read the TFLite file!

```
import os
import numpy as np
from collections import OrderedDict
```

## Get the weights from the TFLite file

Load the TFLite model using the FlatBuffers library:

```
from tflite import Model

data = open("./mediapipe/mediapipe/models/face_detection_front.tflite", "rb").read()
model = Model.Model.GetRootAsModel(data, 0)

subgraph = model.Subgraphs(0)
subgraph.Name()

def get_shape(tensor):
    return [tensor.Shape(i) for i in range(tensor.ShapeLength())]
```

List all the tensors in the graph:

```
for i in range(0, subgraph.TensorsLength()):
    tensor = subgraph.Tensors(i)
    print("%3d %30s %d %2d %s" % (i, tensor.Name(), tensor.Type(), tensor.Buffer(),
                                  get_shape(subgraph.Tensors(i))))
```

Make a look-up table that lets us get the tensor index based on the tensor name:

```
tensor_dict = {(subgraph.Tensors(i).Name().decode("utf8")): i
               for i in range(subgraph.TensorsLength())}
```

Grab only the tensors that represent weights and biases.
```
parameters = {}
for i in range(subgraph.TensorsLength()):
    tensor = subgraph.Tensors(i)
    if tensor.Buffer() > 0:
        name = tensor.Name().decode("utf8")
        parameters[name] = tensor.Buffer()

len(parameters)
```

The buffers are simply arrays of bytes. As the docs say,

> The data_buffer itself is an opaque container, with the assumption that the
> target device is little-endian. In addition, all builtin operators assume
> the memory is ordered such that if `shape` is [4, 3, 2], then index
> [i, j, k] maps to `data_buffer[i*3*2 + j*2 + k]`.

For weights and biases, we need to interpret every 4 bytes as a float. On my machine, the native byte ordering is already little-endian, so we don't need to do anything special for that.

```
def get_weights(tensor_name):
    i = tensor_dict[tensor_name]
    tensor = subgraph.Tensors(i)
    buffer = tensor.Buffer()
    shape = get_shape(tensor)
    assert(tensor.Type() == 0)  # FLOAT32

    W = model.Buffers(buffer).DataAsNumpy()
    W = W.view(dtype=np.float32)
    W = W.reshape(shape)
    return W

W = get_weights("conv2d/Kernel")
b = get_weights("conv2d/Bias")
W.shape, b.shape
```

Now we can get the weights for all the layers and copy them into our PyTorch model.

## Convert the weights to PyTorch format

```
import torch
from blazeface import BlazeFace

net = BlazeFace()
net
```

Make a lookup table that maps the layer names between the two models. We're going to assume here that the tensors will be in the same order in both models. If not, we should get an error because the shapes don't match.

```
probable_names = []
for i in range(0, subgraph.TensorsLength()):
    tensor = subgraph.Tensors(i)
    if tensor.Buffer() > 0 and tensor.Type() == 0:
        probable_names.append(tensor.Name().decode("utf-8"))

probable_names[:5]

convert = {}
i = 0
for name, params in net.state_dict().items():
    convert[name] = probable_names[i]
    i += 1
```

Copy the weights into the layers. Note that the ordering of the weights is different between PyTorch and TFLite, so we need to transpose them.
Convolution weights:

    TFLite:  (out_channels, kernel_height, kernel_width, in_channels)
    PyTorch: (out_channels, in_channels, kernel_height, kernel_width)

Depthwise convolution weights:

    TFLite:  (1, kernel_height, kernel_width, channels)
    PyTorch: (channels, 1, kernel_height, kernel_width)

```
new_state_dict = OrderedDict()

for dst, src in convert.items():
    W = get_weights(src)
    print(dst, src, W.shape, net.state_dict()[dst].shape)

    if W.ndim == 4:
        if W.shape[0] == 1:
            W = W.transpose((3, 0, 1, 2))  # depthwise conv
        else:
            W = W.transpose((0, 3, 1, 2))  # regular conv

    new_state_dict[dst] = torch.from_numpy(W)

net.load_state_dict(new_state_dict, strict=True)
```

No errors? Then the conversion was successful!

## Save the checkpoint

```
torch.save(net.state_dict(), "blazeface.pth")
```
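As a self-contained sanity check of the two tricks used above — reinterpreting a raw byte buffer as float32 and transposing a TFLite `(out, H, W, in)` kernel to PyTorch's `(out, in, H, W)` layout — here is a sketch using a synthetic buffer, so no TFLite file is needed:

```python
import numpy as np

# Build a fake "buffer": 24 float32 values serialized to bytes,
# representing a conv kernel of shape (out=2, H=2, W=2, in=3).
kernel = np.arange(24, dtype=np.float32).reshape(2, 2, 2, 3)
buffer_bytes = np.frombuffer(kernel.tobytes(), dtype=np.uint8)

# Reinterpret every 4 bytes as one float32 and restore the shape,
# mirroring what get_weights() does above.
W = buffer_bytes.view(dtype=np.float32).reshape(2, 2, 2, 3)
assert np.array_equal(W, kernel)

# Regular conv: TFLite (out, H, W, in) -> PyTorch (out, in, H, W).
W_pt = W.transpose((0, 3, 1, 2))
print(W_pt.shape)  # (2, 3, 2, 2)
```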
# Create a 3D Point Cloud Labeling Job with Amazon SageMaker Ground Truth

This notebook will demonstrate how you can pre-process your 3D point cloud input data to create an [object tracking labeling job](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-object-tracking.html) and include sensor and camera data for sensor fusion.

In object tracking, you are tracking the movement of an object (e.g., a pedestrian on the sidewalk) while your point of reference (e.g., the autonomous vehicle) is moving. When performing object tracking, your data must be in a global reference coordinate system such as a [world coordinate system](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-sensor-fusion-details.html#sms-point-cloud-world-coordinate-system) because the ego vehicle itself is moving in the world. You can transform point cloud data in local coordinates to the world coordinate system by multiplying each of the points in a 3D frame with the extrinsic matrix for the LiDAR sensor.

In this notebook, you will transform 3D frames from a local coordinate system to a world coordinate system using extrinsic matrices. You will use the KITTI dataset<sup>[1](#The-Dataset-and-Input-Manifest-Files)</sup>, an open source autonomous driving dataset. The KITTI dataset provides an extrinsic matrix for each 3D point cloud frame. You will use [pykitti](https://github.com/utiasSTARS/pykitti) and the [numpy matrix multiplication function](https://numpy.org/doc/1.18/reference/generated/numpy.matmul.html) to multiply this matrix with each point in the frame to translate that point to the world coordinate system used by the KITTI dataset.

You will also include camera image data to provide workers with more visual information about the scene they are labeling. Through sensor fusion, workers will be able to adjust labels in the 3D scene and in 2D images, and label adjustments will be mirrored in the other view.
Ground Truth computes your sensor and camera extrinsic matrices for sensor fusion using sensor and camera **pose data** - position and heading. The KITTI raw dataset includes a rotation matrix and translation vectors for extrinsic transformations for each frame. This notebook will demonstrate how you can extract **position** and **heading** from KITTI rotation matrices and translation vectors using [pykitti](https://github.com/utiasSTARS/pykitti).

In summary, you will:

* Convert a dataset to a world coordinate system.
* Learn how you can extract pose data from your LiDAR and camera extrinsic matrices for sensor fusion.
* Create a sequence input manifest file for an object tracking labeling job.
* Create an object tracking labeling job.
* Preview the worker UI and tools provided by Ground Truth.

## Prerequisites

To run this notebook, you can simply execute each cell in order. To understand what's happening, you'll need:

* An S3 bucket you can write to -- please provide its name in `BUCKET`. The bucket must be in the same region as this SageMaker Notebook instance. You can also change the `EXP_NAME` to any valid S3 prefix. All the files related to this experiment will be stored in that prefix of your bucket. **Important: you must attach the CORS policy to this bucket. See the next section for more information**.
* Familiarity with the [Ground Truth 3D Point Cloud Labeling Job](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud.html).
* Familiarity with Python and [numpy](http://www.numpy.org/).
* Basic familiarity with [AWS S3](https://docs.aws.amazon.com/s3/index.html).
* Basic understanding of [AWS SageMaker](https://aws.amazon.com/sagemaker/).
* Basic familiarity with the [AWS Command Line Interface (CLI)](https://aws.amazon.com/cli/) -- ideally, you should have it set up with credentials to access the AWS account you're running this notebook from.

This notebook has only been tested on a SageMaker notebook instance. The runtimes given are approximate.
We used an `ml.t2.medium` instance in our tests. However, you can likely run it on a local instance by first executing the cell below on SageMaker and then copying the `role` string to your local copy of the notebook.

### IMPORTANT: Attach CORS policy to your bucket

You must attach the following CORS policy to your S3 bucket for the labeling task to render. To learn how to add a CORS policy to your S3 bucket, follow the instructions in [How do I add cross-domain resource sharing with CORS?](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/add-cors-configuration.html). Paste the following policy in the CORS configuration editor:

```
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <ExposeHeader>Access-Control-Allow-Origin</ExposeHeader>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
  </CORSRule>
</CORSConfiguration>
```

```
import boto3
import time
import pprint
import json
import sagemaker
from sagemaker import get_execution_role
from datetime import datetime, timezone

pp = pprint.PrettyPrinter(indent=4)
sagemaker_client = boto3.client('sagemaker')

BUCKET = ''
EXP_NAME = ''  # Any valid S3 prefix.
# Make sure the bucket is in the same region as this notebook.
sess = sagemaker.session.Session()
role = sagemaker.get_execution_role()
region = boto3.session.Session().region_name
s3 = boto3.client('s3')
bucket_region = s3.head_bucket(Bucket=BUCKET)['ResponseMetadata']['HTTPHeaders']['x-amz-bucket-region']
assert bucket_region == region, "Your S3 bucket {} and this notebook need to be in the same region.".format(BUCKET)
```

## The Dataset and Input Manifest Files

The dataset and resources used in this notebook are located in the following Amazon S3 bucket: https://aws-ml-blog.s3.amazonaws.com/artifacts/gt-point-cloud-demos/.

This bucket contains a single scene from the [KITTI datasets](http://www.cvlibs.net/datasets/kitti/raw_data.php). KITTI created datasets for computer vision and machine learning research, including for 2D and 3D object detection and object tracking. The datasets were captured by driving around the mid-size city of Karlsruhe, in rural areas and on highways. \[1\]

The KITTI dataset is subject to its own license. Please make sure that any use of the dataset conforms to the license terms and conditions.

## Download and unzip data

```
!rm -rf sample_data*
!wget https://aws-ml-blog.s3.amazonaws.com/artifacts/gt-point-cloud-demos/sample_data.zip
!unzip -o sample_data
```

Let's take a look at the sample_data folder. You'll see that we have images which can be used for sensor fusion, and point cloud data in ASCII format (.txt files). We will use a script to convert this point cloud data from the LiDAR sensor's local coordinates to a world coordinate system.

```
!ls sample_data/2011_09_26/2011_09_26_drive_0005_sync/
!ls sample_data/2011_09_26/2011_09_26_drive_0005_sync/oxts/data
```

## Use the Kitti2GT script to convert the raw data to Ground Truth format

You can use this script to do the following:

* Transform the KITTI dataset, treating the LiDAR sensor's origin in the first frame as the world coordinate system (global frame of reference), so that it can be consumed by SageMaker Ground Truth.
* Extract pose data in the world coordinate system using the camera and LiDAR extrinsic matrices. You will supply this pose data in your sequence file to enable sensor fusion.

First, the script uses the [pykitti](https://github.com/utiasSTARS/pykitti) python module to load the KITTI raw data and calibrations. Let's look at the two main data-transformation functions of the script:

### Data Transformation to a World Coordinate System

In general, multiplying a point in a LiDAR frame with a LiDAR extrinsic matrix transforms it into world coordinates. Using pykitti, `dataset.oxts[i].T_w_imu` gives the LiDAR extrinsic transform for the `i`th frame. This matrix can be multiplied with the points of the frame to convert it to a world frame using the numpy matrix multiplication function, [matmul](https://numpy.org/doc/1.18/reference/generated/numpy.matmul.html): `matmul(lidar_transform_matrix, points)`. Let's look at the function that performs this transformation:

```
# transform points from the lidar frame to the global frame using lidar_extrinsic_matrix
def generate_transformed_pcd_from_point_cloud(points, lidar_extrinsic_matrix):
    tps = []
    for point in points:
        transformed_points = np.matmul(lidar_extrinsic_matrix,
                                       np.array([point[0], point[1], point[2], 1],
                                                dtype=np.float32).reshape(4, 1)).tolist()
        if len(point) > 3 and point[3] is not None:
            tps.append([transformed_points[0][0], transformed_points[1][0],
                        transformed_points[2][0], point[3]])

    return tps
```

If your point cloud data includes more than four elements for each point, for example, (x, y, z) and r, g, b, modify the `if` statement in the function above to ensure your r, g, b values are copied.

### Extracting Pose Data from LiDAR and Camera Extrinsics for Sensor Fusion

For sensor fusion, you provide your extrinsic matrix in the form of a sensor pose: an origin position (for translation) and a heading in quaternion form (for rotation about the 3 axes). The following is an example of the pose JSON you use in the sequence file.
```
{
    "position": {
        "y": -152.77584902657554,
        "x": 311.21505956090624,
        "z": -10.854137529636024
    },
    "heading": {
        "qy": -0.7046155108831117,
        "qx": 0.034278837280808494,
        "qz": 0.7070617895701465,
        "qw": -0.04904659893885366
    }
}
```

All of the positional coordinates (x, y, z) are in meters. All the pose headings (qx, qy, qz, qw) are measured as a spatial orientation in quaternion form. Separately for each camera, you provide pose data extracted from the extrinsic of that camera.

Both LiDAR sensors and cameras have their own extrinsic matrices, and they are used by SageMaker Ground Truth to enable the sensor fusion feature. In order to project a label from the 3D point cloud to the camera image plane, Ground Truth needs to transform 3D points from the LiDAR's own coordinate system to the camera's coordinate system. This is typically done by first transforming 3D points from the LiDAR's own coordinates to a world coordinate system using the LiDAR extrinsic matrix. Then we use the camera inverse extrinsic (world to camera) to transform the 3D points from the world coordinate system obtained in the previous step into the camera image plane. If your 3D data is already transformed into the world coordinate system, then the first transformation doesn't have any impact, and label translation depends only on the camera extrinsic.

If you have a rotation matrix (made up of the axis rotations) and a translation vector (or origin) in the world coordinate system instead of a single 4x4 rigid transformation matrix, then you can directly use the rotation and translation to compute the pose.
For example:

```
import numpy as np
from scipy.spatial.transform import Rotation as R

rotation = [[ 9.96714314e-01, -8.09890350e-02,  1.16333982e-03],
            [ 8.09967396e-02,  9.96661051e-01, -1.03090934e-02],
            [-3.24531964e-04,  1.03694477e-02,  9.99946183e-01]]

origin = [1.71104606e+00, 5.80000039e-01, 9.43144935e-01]

# position is the origin
position = origin
r = R.from_matrix(np.asarray(rotation))
# heading in WCS using scipy
heading = r.as_quat()
print(f"position:{position}\nheading: {heading}")
```

If you do have a 4x4 extrinsic transformation matrix, then it has the form ```[R T; 0 0 0 1]```, where R is the rotation matrix and T is the origin translation vector. That means you can extract the rotation matrix and translation vector from the transformation matrix as follows:

```
import numpy as np
from scipy.spatial.transform import Rotation as R

transformation = [[ 9.96714314e-01, -8.09890350e-02,  1.16333982e-03,  1.71104606e+00],
                  [ 8.09967396e-02,  9.96661051e-01, -1.03090934e-02,  5.80000039e-01],
                  [-3.24531964e-04,  1.03694477e-02,  9.99946183e-01,  9.43144935e-01],
                  [0, 0, 0, 1]]

transformation = np.array(transformation)
rotation = transformation[0:3, 0:3]
origin = transformation[0:3, 3]

# position is the origin
position = origin
r = R.from_matrix(np.asarray(rotation))
# heading in WCS using scipy
heading = r.as_quat()
print(f"position:{position}\nheading: {heading}")
```

For convenience, in this notebook you will use the [pykitti](https://github.com/utiasSTARS/pykitti) development kit to load the raw data and calibrations. With pykitti you will extract the sensor pose in the world coordinate system from the KITTI extrinsics, which are provided as a rotation matrix and translation vector in the raw calibration data. You will then format this pose data using the JSON format required for the 3D point cloud sequence input manifest.

With pykitti, ```dataset.oxts[i].T_w_imu``` gives the LiDAR extrinsic matrix (`lidar_extrinsic_transform`) for the i'th frame.
Similarly, with pykitti, the camera extrinsic matrix (`camera_extrinsic_transform`) for cam0 in the i'th frame can be calculated by ```inv(matmul(dataset.calib.T_cam0_velo, inv(dataset.oxts[i].T_w_imu)))```, and this can be converted into a heading and position for cam0.

In the script, the following functions are used to extract this pose data from the LiDAR extrinsic and camera inverse extrinsic matrices.

```
# utility to convert an extrinsic matrix to a pose heading quaternion and position
def convert_extrinsic_matrix_to_trans_quaternion_mat(lidar_extrinsic_transform):
    position = lidar_extrinsic_transform[0:3, 3]
    rot = np.linalg.inv(lidar_extrinsic_transform[0:3, 0:3])
    quaternion = R.from_matrix(np.asarray(rot)).as_quat()
    trans_quaternions = {
        "translation": {
            "x": position[0],
            "y": position[1],
            "z": position[2]
        },
        "rotation": {
            "qx": quaternion[0],
            "qy": quaternion[1],
            "qz": quaternion[2],
            "qw": quaternion[3]
        }
    }
    return trans_quaternions

def convert_camera_inv_extrinsic_matrix_to_trans_quaternion_mat(camera_extrinsic_transform):
    position = camera_extrinsic_transform[0:3, 3]
    rot = np.linalg.inv(camera_extrinsic_transform[0:3, 0:3])
    quaternion = R.from_matrix(np.asarray(rot)).as_quat()
    trans_quaternions = {
        "translation": {
            "x": position[0],
            "y": position[1],
            "z": position[2]
        },
        "rotation": {
            "qx": quaternion[0],
            "qy": quaternion[1],
            "qz": quaternion[2],
            "qw": -quaternion[3]
        }
    }
    return trans_quaternions
```

### Generate a Sequence File

After you've converted your data to a world coordinate system and extracted sensor and camera pose data for sensor fusion, you can create a sequence file. This is accomplished with the function `convert_to_gt` in the python script. A **sequence** specifies a temporal series of point cloud frames. When a task is created using a sequence file, all point cloud frames in the sequence are sent to a worker to label. Your input manifest file will contain a single sequence per line.
To learn more about the sequence input manifest format, see [Create a Point Cloud Frame Sequence Input Manifest](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-multi-frame-input-data.html).

If you want to use this script to create a frame input manifest file, which is required for 3D point cloud object tracking and semantic segmentation labeling jobs, you can modify the for-loop in the function `convert_to_gt` to produce the required content for `source-ref-metadata`. To learn more about the frame input manifest format, see [Create a Point Cloud Frame Input Manifest File](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-single-frame-input-data.html).

Now, let's download the script and run it on the KITTI dataset to process the data you'll use for your labeling job.

```
!wget https://aws-ml-blog.s3.amazonaws.com/artifacts/gt-point-cloud-demos/kitti2gt.py
!pygmentize kitti2gt.py
```

### Install pykitti

```
!pip install pykitti
!pip install --upgrade scipy

from kitti2gt import *

if(EXP_NAME == ''):
    s3loc = f's3://{BUCKET}/frames/'
else:
    s3loc = f's3://{BUCKET}/{EXP_NAME}/frames/'

convert_to_gt(basedir='sample_data',
              date='2011_09_26',
              drive='0005',
              output_base='sample_data_out',
              s3prefix=s3loc)
```

The following folders contain the data you'll use for the labeling job.

```
!ls sample_data_out/
!ls sample_data_out/frames
```

Now, you'll upload the data to your bucket in S3.
```
if(EXP_NAME == ''):
    !aws s3 cp sample_data_out/kitti-gt-seq.json s3://{BUCKET}/
else:
    !aws s3 cp sample_data_out/kitti-gt-seq.json s3://{BUCKET}/{EXP_NAME}/

if(EXP_NAME == ''):
    !aws s3 sync sample_data_out/frames/ s3://{BUCKET}/frames/
else:
    !aws s3 sync sample_data_out/frames s3://{BUCKET}/{EXP_NAME}/frames/

if(EXP_NAME == ''):
    !aws s3 sync sample_data_out/images/ s3://{BUCKET}/frames/images/
else:
    !aws s3 sync sample_data_out/images s3://{BUCKET}/{EXP_NAME}/frames/images/
```

### Write and Upload Multi-Frame Input Manifest File

Now, let's create a **sequence input manifest file**. Each line in the input manifest (in this demo, there is only one) will point to a sequence file in your S3 bucket, `BUCKET/EXP_NAME`.

```
with open('manifest.json', 'w') as f:
    if(EXP_NAME == ''):
        json.dump({"source-ref": "s3://{}/kitti-gt-seq.json".format(BUCKET)}, f)
    else:
        json.dump({"source-ref": "s3://{}/{}/kitti-gt-seq.json".format(BUCKET, EXP_NAME)}, f)
```

Our manifest file is one line long, and identifies a single sequence file in your S3 bucket.

```
!cat manifest.json

if(EXP_NAME == ''):
    !aws s3 cp manifest.json s3://{BUCKET}/
    input_manifest_s3uri = f's3://{BUCKET}/manifest.json'
else:
    !aws s3 cp manifest.json s3://{BUCKET}/{EXP_NAME}/
    input_manifest_s3uri = f's3://{BUCKET}/{EXP_NAME}/manifest.json'

input_manifest_s3uri
```

## Create a Labeling Job

In the following cell, we specify object tracking as our [3D Point Cloud Task Type](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-task-types.html).

```
task_type = "3DPointCloudObjectTracking"
```

## Identify Resources for Labeling Job

### Specify Human Task UI ARN

The following will be used to identify the HumanTaskUiArn. When you create a 3D point cloud labeling job, Ground Truth provides a worker UI that is specific to your task type.
You can learn more about this UI and the assistive labeling tools that Ground Truth provides for Object Tracking on the [Object Tracking task type page](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-object-tracking.html).

```
## Set up the human_task_ui_arn map, to be used in case you chose UI_CONFIG_USE_TASK_UI_ARN
## Supported for GA
human_task_ui_arn = f'arn:aws:sagemaker:{region}:394669845002:human-task-ui/PointCloudObjectTracking'
human_task_ui_arn
```

### Label Category Configuration File

Your label category configuration file is used to specify labels, or classes, for your labeling job.

When you use the object detection or object tracking task types, you can also include **label attributes** in your [label category configuration file](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-label-category-config.html). Workers can assign one or more attributes you provide to annotations to give more information about that object. For example, you may want to use the attribute *occluded* to have workers identify when an object is partially obstructed.

Let's look at an example of the label category configuration file for an object detection or object tracking labeling job.
```
!wget https://aws-ml-blog.s3.amazonaws.com/artifacts/gt-point-cloud-demos/label-category-config/label-category.json

with open('label-category.json', 'r') as j:
    json_data = json.load(j)
print("\nA label category configuration file: \n\n", json.dumps(json_data, indent=4, sort_keys=True))

if(EXP_NAME == ''):
    !aws s3 cp label-category.json s3://{BUCKET}/label-category.json
    label_category_config_s3uri = f's3://{BUCKET}/label-category.json'
else:
    !aws s3 cp label-category.json s3://{BUCKET}/{EXP_NAME}/label-category.json
    label_category_config_s3uri = f's3://{BUCKET}/{EXP_NAME}/label-category.json'
label_category_config_s3uri
```

To learn more about the label category configuration file, see [Create a Label Category Configuration File](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-point-cloud-label-category-config.html). Run the cell above to identify the label category configuration file.

### Set up a private work team

If you want to preview the worker task UI, create a private work team and add yourself as a worker. If you have already created a private workforce, follow the instructions in [Add or Remove Workers](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-management-private-console.html#add-remove-workers-sm) to add yourself to the work team you use to create a labeling job.

#### Create a private workforce and add yourself as a worker

To create and manage your private workforce, you can use the **Labeling workforces** page in the Amazon SageMaker console. When following the instructions below, you will have the option to create a private workforce by entering worker emails or importing a pre-existing workforce from an Amazon Cognito user pool. To import a workforce, see [Create a Private Workforce (Amazon Cognito Console)](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-create-private-cognito.html).
To create a private workforce using worker emails:

* Open the Amazon SageMaker console at https://console.aws.amazon.com/sagemaker/.
* In the navigation pane, choose **Labeling workforces**.
* Choose Private, then choose **Create private team**.
* Choose **Invite new workers by email**.
* Paste or type a list of up to 50 email addresses, separated by commas, into the email addresses box.
* Enter an organization name and contact email.
* Optionally choose an SNS topic to subscribe the team to so workers are notified by email when new Ground Truth labeling jobs become available.
* Click the **Create private team** button.

After you import your private workforce, refresh the page. On the Private workforce summary page, you'll see your work team ARN. Enter this ARN in the following cell.

```
# Enter your work team ARN here
workteam_arn = ''
```

#### Task Time Limits

3D point cloud annotation jobs can take workers hours. Workers will be able to save their work as they go, and complete the task in multiple sittings. Ground Truth will also automatically save workers' annotations periodically as they work.

When you create a labeling job, you can set the total amount of time that workers can work on each task using `TaskTimeLimitInSeconds`. The maximum time you can set for workers to work on tasks is 7 days. The default value is 3 days. It is recommended that you create labeling tasks that can be completed within 12 hours.

If you set `TaskTimeLimitInSeconds` to be greater than 8 hours, you must set `MaxSessionDuration` for your IAM execution role to at least 8 hours. To update your execution role's `MaxSessionDuration`, use [UpdateRole](https://docs.aws.amazon.com/IAM/latest/APIReference/API_UpdateRole.html) or use the [IAM console](https://docs.aws.amazon.com/IAM/latest/UserGuide/roles-managingrole-editing-console.html#roles-modify_max-session-duration). You can identify the name of your role at the end of your role ARN.
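As a quick numeric sanity check of the limits just described (the one-hour value is an illustrative choice, and this is plain arithmetic, not an API call):

```python
# Toy check of the task-time-limit rules described above (all values in seconds)
task_time_limit = 3600                 # illustrative choice: 1 hour per task
max_task_time = 7 * 24 * 3600          # hard maximum: 7 days
default_task_time = 3 * 24 * 3600      # default: 3 days

# If tasks can run longer than 8 hours, the IAM role's MaxSessionDuration
# must also be raised to at least 8 hours.
needs_longer_session = task_time_limit > 8 * 3600

assert task_time_limit <= max_task_time
print(needs_longer_session)  # False for a 1-hour limit
```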
``` #See your execution role ARN. The role name is located at the end of the ARN. role ac_arn_map = {'us-west-2': '081040173940', 'us-east-1': '432418664414', 'us-east-2': '266458841044', 'eu-west-1': '568282634449', 'ap-northeast-1': '477331159723'} prehuman_arn = 'arn:aws:lambda:{}:{}:function:PRE-{}'.format(region, ac_arn_map[region],task_type) acs_arn = 'arn:aws:lambda:{}:{}:function:ACS-{}'.format(region, ac_arn_map[region],task_type) ``` ## Set Up HumanTaskConfig `HumanTaskConfig` is used to specify your work team, and configure your labeling job tasks. Modify the following cell to identify a `task_description`, `task_keywords`, `task_title`, and `job_name`. ``` from datetime import datetime ## Set up Human Task Config ## Modify the following task_description = 'add a task description here' #example keywords task_keywords = ['lidar', 'pointcloud'] #add a task title task_title = 'Add a Task Title Here - This is Displayed to Workers' #add a job name to identify your labeling job job_name = 'add-job-name' human_task_config = { "AnnotationConsolidationConfig": { "AnnotationConsolidationLambdaArn": acs_arn, }, "UiConfig": { "HumanTaskUiArn": human_task_ui_arn, }, "WorkteamArn": workteam_arn, "PreHumanTaskLambdaArn": prehuman_arn, "MaxConcurrentTaskCount": 200, # 200 images will be sent at a time to the workteam. "NumberOfHumanWorkersPerDataObject": 1, # One worker will work on each task "TaskAvailabilityLifetimeInSeconds": 18000, # Your workteam has 5 hours to complete all pending tasks. "TaskDescription": task_description, "TaskKeywords": task_keywords, "TaskTimeLimitInSeconds": 3600, # Each seq must be labeled within 1 hour. "TaskTitle": task_title } print(json.dumps(human_task_config, indent=4, sort_keys=True)) ``` ## Set up Create Labeling Request The following formats your labeling job request. For Object Tracking task types, the `LabelAttributeName` must end in `-ref`. 
```
if(EXP_NAME == ''):
    s3_output_path = f's3://{BUCKET}'
else:
    s3_output_path = f's3://{BUCKET}/{EXP_NAME}'
s3_output_path

## Set up Create Labeling Request
labelAttributeName = job_name + "-ref"

if task_type == "3DPointCloudObjectDetection" or task_type == "Adjustment3DPointCloudObjectDetection":
    labelAttributeName = job_name

ground_truth_request = {
    "InputConfig" : {
        "DataSource": {
            "S3DataSource": {
                "ManifestS3Uri": input_manifest_s3uri,
            }
        },
        "DataAttributes": {
            "ContentClassifiers": [
                "FreeOfPersonallyIdentifiableInformation",
                "FreeOfAdultContent"
            ]
        },
    },
    "OutputConfig" : {
        "S3OutputPath": s3_output_path,
    },
    "HumanTaskConfig" : human_task_config,
    "LabelingJobName": job_name,
    "RoleArn": role,
    "LabelAttributeName": labelAttributeName,
    "LabelCategoryConfigS3Uri": label_category_config_s3uri
}

print(json.dumps(ground_truth_request, indent=4, sort_keys=True))
```

## Call CreateLabelingJob

```
sagemaker_client.create_labeling_job(**ground_truth_request)
print(f'Labeling Job Name: {job_name}')
```

## Check Status of Labeling Job

```
## call describeLabelingJob
describeLabelingJob = sagemaker_client.describe_labeling_job(
    LabelingJobName=job_name
)
print(describeLabelingJob)
```

## Start Working on tasks

When you add yourself to a private work team, you receive an email invitation to access the worker portal that looks similar to this [image](https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2020/04/16/a2i-critical-documents-26.gif). Use this invitation to sign in to the portal and view your 3D point cloud annotation tasks. Tasks may take up to 10 minutes to show up in the worker portal. Once you are done working on the tasks, click **Submit**.

### View Output Data

Once you have completed all of the tasks, you can view your output data in the S3 location you specified in `OutputConfig`.
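Once the job completes, each line of the output manifest is a JSON object. As a rough sketch of how you might pick out your label attribute key from such a line (the bucket names and the `my-job-ref` key are invented placeholders; consult the Output Data documentation for the real fields):

```python
import json

# Hypothetical output-manifest line. The only safe assumption made here is
# that your label attribute name (job_name + "-ref" for object tracking)
# appears as a key alongside "source-ref"; everything else is illustrative.
line = ('{"source-ref": "s3://my-bucket/kitti-gt-seq.json", '
        '"my-job-ref": "s3://my-bucket/output/annotations.json"}')

record = json.loads(line)
# Keep keys ending in "-ref" other than the input pointer itself
label_keys = [k for k in record if k.endswith("-ref") and k != "source-ref"]
print(label_keys)  # ['my-job-ref']
```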
To read more about Ground Truth output data format for your task type, see [Output Data](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-data-output.html#sms-output-point-cloud-object-tracking). # Acknowledgments We would like to thank the KITTI team for letting us use this dataset to demonstrate how to prepare your 3D point cloud data for use in SageMaker Ground Truth.
# Regularization - l1 - l2 - elasticnet

```
import numpy as np
import pandas as pd
from sklearn import linear_model
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

# https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data
df_wine = pd.read_csv('datasets/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
                   'Alcalinity of ash', 'Magnesium', 'Total phenols',
                   'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
                   'Color intensity', 'Hue',
                   'OD280/OD315 of diluted wines', 'Proline']
print('Class labels', np.unique(df_wine['Class label']))
df_wine.head()

X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
    train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

stdsc = StandardScaler()
X_train_std = stdsc.fit_transform(X_train)
X_test_std = stdsc.transform(X_test)

# The l1 penalty needs a solver that supports it ('liblinear' or 'saga')
model_none = linear_model.LogisticRegression(penalty='l1', solver='liblinear', random_state=0)
model_none.fit(X_train_std, y_train)
y_pred = model_none.predict(X_test_std)
print('X.shape:{}'.format(X.shape))
print('np.unique(y):{}'.format(np.unique(y)))
print('accuracy_score:{}\nintercept_:\n{}\ncoef_:\n{}'.format(accuracy_score(y_test, y_pred), model_none.intercept_, model_none.coef_))

model_none = linear_model.LogisticRegression(penalty='l2', random_state=0)
model_none.fit(X_train_std, y_train)
y_pred = model_none.predict(X_test_std)
print('X.shape:{}'.format(X.shape))
print('np.unique(y):{}'.format(np.unique(y)))
print('accuracy_score:{}\nintercept_:\n{}\ncoef_:\n{}'.format(accuracy_score(y_test, y_pred), model_none.intercept_, model_none.coef_))

model_none = linear_model.LogisticRegression(multi_class='multinomial', solver='newton-cg', penalty='l2', random_state=0)
model_none.fit(X_train_std, y_train)
y_pred = model_none.predict(X_test_std)
print('X.shape:{}'.format(X.shape))
print('np.unique(y):{}'.format(np.unique(y)))
print('accuracy_score:{}\nintercept_:\n{}\ncoef_:\n{}'.format(accuracy_score(y_test, y_pred), model_none.intercept_, model_none.coef_))

model_none = linear_model.SGDClassifier(loss='modified_huber', penalty='none', random_state=0)
model_none.fit(X_train_std, y_train)
y_pred = model_none.predict(X_test_std)
print('accuracy_score:{}\nintercept_:\n{}\ncoef_:\n{}'.format(accuracy_score(y_test, y_pred), model_none.intercept_, model_none.coef_))

model_l1 = linear_model.SGDClassifier(loss='modified_huber', penalty='l1', random_state=0)
model_l1.fit(X_train_std, y_train)
y_pred = model_l1.predict(X_test_std)
print('accuracy_score:{}\nintercept_:\n{}\ncoef_:\n{}'.format(accuracy_score(y_test, y_pred), model_l1.intercept_, model_l1.coef_))

model_l2 = linear_model.SGDClassifier(loss='modified_huber', penalty='l2', random_state=0)
model_l2.fit(X_train_std, y_train)
y_pred = model_l2.predict(X_test_std)
print('accuracy_score:{}\nintercept_:\n{}\ncoef_:\n{}'.format(accuracy_score(y_test, y_pred), model_l2.intercept_, model_l2.coef_))

model = linear_model.SGDClassifier(loss='modified_huber', penalty='elasticnet', random_state=0)
model.fit(X_train_std, y_train)
y_pred = model.predict(X_test_std)
print('accuracy_score:{}\nintercept_:\n{}\ncoef_:\n{}'.format(accuracy_score(y_test, y_pred), model.intercept_, model.coef_))

np.unique(y)
```
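To see concretely what the l1 penalty does to coefficients, here is a small self-contained comparison on synthetic data. It uses `Lasso` and `Ridge` regressors rather than the classifiers above, purely because regression makes the sparsity effect easy to count; the dataset shape and alpha value are illustrative choices, not from the notebook.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# 20 features, only 5 of which actually drive the target
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=5.0, random_state=0)

lasso = Lasso(alpha=1.0, random_state=0).fit(X, y)  # l1 penalty
ridge = Ridge(alpha=1.0).fit(X, y)                  # l2 penalty

# l1 drives irrelevant coefficients exactly to zero; l2 only shrinks them
n_zero_lasso = int(np.sum(lasso.coef_ == 0))
n_zero_ridge = int(np.sum(ridge.coef_ == 0))
print(n_zero_lasso, n_zero_ridge)
```

This is why the l1 models above report sparse `coef_` arrays while the l2 ones do not: uninformative features are eliminated rather than merely shrunk.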
<img align="left" src="https://ithaka-labs.s3.amazonaws.com/static-files/images/tdm/tdmdocs/CC_BY.png"><br /> Created by [Nathan Kelber](http://nkelber.com) and Ted Lawless for [JSTOR Labs](https://labs.jstor.org/) under [Creative Commons CC BY License](https://creativecommons.org/licenses/by/4.0/)<br /> For questions/comments/improvements, email nathan.kelber@ithaka.org.<br /> ____ # Exploring Word Frequencies **Description:** This [notebook](https://docs.constellate.org/key-terms/#jupyter-notebook) shows how to find the most common words in a [dataset](https://docs.constellate.org/key-terms/#dataset). The following processes are described: * Using the `tdm_client` to create a Pandas DataFrame * Filtering based on a pre-processed ID list * Filtering based on a [stop words list](https://docs.constellate.org/key-terms/#stop-words) * Using a `Counter()` object to get the most common words **Use Case:** For Learners (Detailed explanation, not ideal for researchers) [Take me to the **Research Version** of this notebook ->](./exploring-word-frequencies-for-research.ipynb) **Difficulty:** Intermediate **Completion time:** 60 minutes **Knowledge Required:** * Python Basics ([Start Python Basics I](./python-basics-1.ipynb)) **Knowledge Recommended:** * [Working with Dataset Files](./working-with-dataset-files.ipynb) * [Pandas I](./pandas-1.ipynb) * [Counter Objects](./counter-objects.ipynb) * [Creating a Stopwords List](./creating-stopwords-list.ipynb) **Data Format:** [JSON Lines (.jsonl)](https://docs.constellate.org/key-terms/#jsonl) **Libraries Used:** * **[tdm_client](https://docs.constellate.org/key-terms/#tdm-client)** to collect, unzip, and read our dataset * **[NLTK](https://docs.constellate.org/key-terms/#nltk)** to help [clean](https://docs.constellate.org/key-terms/#clean-data) up our dataset * [Counter](https://docs.constellate.org/key-terms/#python-counter) from **Collections** to help sum up our word frequencies **Research Pipeline:** 1. Build a dataset 2. 
Create a "Pre-Processing CSV" with [Exploring Metadata](./exploring-metadata.ipynb) (Optional)
3. Create a "Custom Stopwords List" with [Creating a Stopwords List](./creating-stopwords-list.ipynb) (Optional)
4. Complete the word frequencies analysis with this notebook

___

## Import your dataset

We'll use the tdm_client library to automatically retrieve the dataset in the JSON file format. Enter a [dataset ID](https://docs.constellate.org/key-terms/#dataset-ID) in the next code cell. If you don't have a dataset ID, you can:

* Use the sample dataset ID already in the code cell
* [Create a new dataset](https://constellate.org/builder)
* [Use a dataset ID from other pre-built sample datasets](https://constellate.org/dataset/dashboard)

```
# Creating a variable `dataset_id` to hold our dataset ID
# The default dataset is Shakespeare Quarterly, 1950-present
dataset_id = "7e41317e-740f-e86a-4729-20dab492e925"
```

Next, import the `tdm_client`, passing the `dataset_id` as an argument using the `get_dataset` method.

```
# Importing your dataset with a dataset ID
import tdm_client

# Pull in the dataset that matches `dataset_id`
# in the form of a gzipped JSON lines file.
# The .get_dataset() method downloads the gzipped JSONL file
# to the /data folder and returns a string for the file name and location.
# dataset_file will be a string containing that file name and location
dataset_file = tdm_client.get_dataset(dataset_id)
```

## Apply Pre-Processing Filters (if available)

If you completed pre-processing with the "Exploring Metadata and Pre-processing" notebook, you can use your CSV file of dataset IDs to automatically filter the dataset. Your pre-processed CSV file should be in the /data directory.

```
# Import a pre-processed CSV file of filtered dataset IDs.
# If you do not have a pre-processed CSV file, the analysis
# will run on the full dataset and may take longer to complete.
import pandas as pd
import os

# Define a string that describes the path to the CSV
pre_processed_file_name = f'data/pre-processed_{dataset_id}.csv'

# Test if the path to the CSV exists
# If true, then read the IDs into filtered_id_list
if os.path.exists(pre_processed_file_name):
    df = pd.read_csv(pre_processed_file_name)
    filtered_id_list = df["id"].tolist()
    use_filtered_list = True
    print(f'Pre-Processed CSV found. Filtered dataset is {len(df)} documents.')
else:
    use_filtered_list = False
    print('No pre-processed CSV file found. Full dataset will be used.')
```

## Extract Unigram Counts from the JSON file (No cleaning)

We pulled in our dataset using a `dataset_id`. The file, which resides in the /data folder, is a compressed JSON Lines file (jsonl.gz) that contains all the metadata information found in the metadata CSV *plus* the textual data necessary for analysis, including:

* Unigram Counts
* Bigram Counts
* Trigram Counts
* Full-text (if available)

To complete our analysis, we are going to pull out the unigram counts for each document and store them in a Counter() object. We will import `Counter`, which will allow us to use Counter() objects for counting unigrams. Then we will initialize an empty Counter() object `word_frequency` to hold all of our unigram counts.

```
# Import Counter()
from collections import Counter

# Create an empty Counter object called `word_frequency`
word_frequency = Counter()
```

We can read in each document using the tdm_client.dataset_reader.
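As a toy illustration of how the next cell accumulates counts, here is the same pattern on two hand-made unigram dictionaries (the words and counts are invented, not from the dataset):

```python
from collections import Counter

# Toy per-document unigram counts, mimicking a document's "unigramCount" field
doc1_unigrams = {"the": 4, "moon": 2}
doc2_unigrams = {"the": 3, "tide": 1}

# Sum the per-document counts into a single corpus-wide Counter
word_frequency = Counter()
for unigrams in (doc1_unigrams, doc2_unigrams):
    for gram, count in unigrams.items():
        word_frequency[gram] += count

print(word_frequency.most_common(2))  # [('the', 7), ('moon', 2)]
```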
```
# Gather unigramCounts from documents in `filtered_id_list` if it is available
for document in tdm_client.dataset_reader(dataset_file):
    if use_filtered_list is True:
        document_id = document['id']
        # Skip documents not in our filtered_id_list
        if document_id not in filtered_id_list:
            continue
    unigrams = document.get("unigramCount", {})
    for gram, count in unigrams.items():
        word_frequency[gram] += count

# Print success message
if use_filtered_list is True:
    print('Unigrams have been collected only for the ' + str(len(df)) + ' documents listed in your CSV file.')
else:
    print('Unigrams have been collected for all documents without filtering from a CSV file.')
```

### Find Most Common Unigrams

Now that we have the frequency of every unigram in our corpus, we need to sort them to find which are most common.

```
for gram, count in word_frequency.most_common(25):
    print(gram.ljust(20), count)
```

### Some issues to consider

We have successfully created a word frequency list. There are a couple of small issues, however, that we still need to address:

1. There are many [function words](https://docs.constellate.org/key-terms/#function-words), words like "the", "in", and "of" that are grammatically important but do not carry as much semantic meaning as [content words](https://docs.constellate.org/key-terms/#content-words), such as nouns and verbs.
2. The words represented here are actually case-sensitive [strings](https://docs.constellate.org/key-terms/#string). That means that the string "the" is different from the string "The". You may notice this in your results above.

## Extract Unigram Counts from the JSON File (with cleaning)

To address these issues, we need to find a way to remove common [function words](https://docs.constellate.org/key-terms/#function-words) and combine [strings](https://docs.constellate.org/key-terms/#string) that may have capital letters in them. We can address these issues by:

1. Using a [stopwords](https://docs.constellate.org/key-terms/#stop-words) list to remove common [function words](https://docs.constellate.org/key-terms/#function-words)
2. Lowercasing all the characters in each string to combine our counts

### Load Stopwords List

If you have created a stopwords list in the stopwords notebook, we will import it here. (You can always modify the CSV file to add or subtract words, then reload the list.) Otherwise, we'll load the NLTK [stopwords](https://docs.constellate.org/key-terms/#stop-words) list automatically.

```
# Load a custom data/stop_words.csv if available
# Otherwise, load the nltk stopwords list in English

# Create an empty Python list to hold the stopwords
stop_words = []

# The filename of the custom data/stop_words.csv file
stopwords_list_filename = 'data/stop_words.csv'

if os.path.exists(stopwords_list_filename):
    import csv
    with open(stopwords_list_filename, 'r') as f:
        stop_words = list(csv.reader(f))[0]
    print('Custom stopwords list loaded from CSV')
else:
    # Load the NLTK stopwords list
    from nltk.corpus import stopwords
    stop_words = stopwords.words('english')
    print('NLTK stopwords list loaded')

# Preview stop words
list(stop_words)
```

### Modify Stopwords List

The following code examples can be used to modify a stopwords list. We recommend storing your stopwords in a CSV file as shown in the [Creating Stopwords List](./creating-stopwords-list.ipynb) notebook.

|code|change|
|---|---|
|`stop_words.append('word_to_add')`| Append a single word to the list|
|`stop_words = stop_words + ['word_one', 'word_two', 'word_three']`| Concatenate multiple words to the list|
|`stop_words.remove('word_to_remove')`| Delete a word from the list|

### Gather unigrams again with extra cleaning steps

In addition to using a stopwords list, we will clean up the tokens by lowercasing them all and combining the counts. This will combine tokens with different capitalization such as "quarterly" and "Quarterly."
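The two cleaning rules just described, skipping stopwords and lowercasing, can be seen in isolation on a few invented counts (toy data, not from the dataset):

```python
from collections import Counter

# Toy counts demonstrating the cleaning rules: lowercase each token,
# skip stopwords, and merge counts that differ only in capitalization
stop_words = {"the", "of"}
raw_counts = {"The": 3, "Moon": 2, "of": 5, "moon": 1}

clean_frequency = Counter()
for gram, count in raw_counts.items():
    clean_gram = gram.lower()
    if clean_gram in stop_words:
        continue
    clean_frequency[clean_gram] += count

print(dict(clean_frequency))  # {'moon': 3}
```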
We will also remove any tokens that are not alphabetic.

```
# Gather unigramCounts from documents in `filtered_id_list` if available
# and apply the processing.

word_frequency = Counter()

for document in tdm_client.dataset_reader(dataset_file):
    if use_filtered_list is True:
        document_id = document['id']
        # Skip documents not in our filtered_id_list
        if document_id not in filtered_id_list:
            continue
    unigrams = document.get("unigramCount", {})
    for gram, count in unigrams.items():
        clean_gram = gram.lower()
        if clean_gram in stop_words:
            continue
        if not clean_gram.isalpha():
            continue
        word_frequency[clean_gram] += count
```

## Display Results

Finally, we will display the 25 most common words by using the `.most_common()` method on the `Counter()` object.

```
# Print the most common processed unigrams and their counts
for gram, count in word_frequency.most_common(25):
    print(gram.ljust(20), count)
```

## Export Results to a CSV File

The word frequency data can be exported to a CSV file.

```
# Add output method to csv
import csv

with open(f'./data/word_frequencies_{dataset_id}.csv', 'w') as f:
    writer = csv.writer(f)
    writer.writerow(['unigram', 'count'])
    for gram, count in word_frequency.most_common():
        writer.writerow([gram, count])
```

## Create a Word Cloud to Visualize the Data

The following cell creates a visualization using the WordCloud library in Python. To learn more about customizing a wordcloud, [see the documentation](http://amueller.github.io/word_cloud/generated/wordcloud.WordCloud.html).
``` # Add wordcloud from wordcloud import WordCloud import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt from PIL import Image # Create a wordcloud from our data # Adding a mask shape of a cloud to your word cloud # By default, the shape will be a rectangle # You can specify any shape you like based on an image file cloud_mask = np.array(Image.open('./data/sample_cloud.png')) # Specifies the location of the mask shape cloud_mask = np.where(cloud_mask > 3, 255, cloud_mask) # this line will take all values greater than 3 and make them 255 (white) wordcloud = WordCloud( width = 800, # Change the pixel width of the image if blurry height = 600, # Change the pixel height of the image if blurry background_color = "white", # Change the background color colormap = 'viridis', # The colors of the words, see https://matplotlib.org/stable/tutorials/colors/colormaps.html max_words = 150, # Change the max number of words shown min_font_size = 4, # Do not show small text # Add a shape and outline (known as a mask) to your wordcloud contour_color = 'blue', # The outline color of your mask shape mask = cloud_mask, # contour_width = 1 ).generate_from_frequencies(word_frequency) mpl.rcParams['figure.figsize'] = (20,20) # Change the image size displayed plt.imshow(wordcloud, interpolation='bilinear') plt.axis("off") ```
# DATAFRAME ### (DataFrame) ``` using XPlot.Plotly; using System.Linq; ``` # 1. DataFrame ``` #r "nuget:ApexCode.Interactive.Formatting,0.0.1-beta.5" using ApexCode.Interactive.Formatting; using Microsoft.Data.Analysis; using Microsoft.AspNetCore.Html; Formatters.Register<DataFrame>(); ``` ### Load data into data frame ``` const string SENSORS_DATASET_PATH = "./sensors.csv"; const string LABELS_DATASET_PATH = "./classes.csv"; var df1 = DataFrame.LoadCsv(SENSORS_DATASET_PATH); var df2 = DataFrame.LoadCsv(LABELS_DATASET_PATH); // prepare sensors new df var sensorsCreatedAts = df1.Columns["CreatedAt"]; sensorsCreatedAts.SetName("SensorCreatedAt"); var sensorsCreatedAtsEnum = sensorsCreatedAts as IEnumerable<string>; var sensorsTicks = sensorsCreatedAtsEnum.ToList().Select(d => DateTime.Parse(d.ToString()).Ticks / 10000000); // prepare labels new df var labelsCreatedAts = df2.Columns["CreatedAt"]; labelsCreatedAts.SetName("LabelCreatedAt"); var labelsCreatedAtsEnum = labelsCreatedAts as IEnumerable<string>; var labelsTicks = labelsCreatedAtsEnum.ToList().Select(d => DateTime.Parse(d.ToString()).Ticks / 10000000); var ids = new PrimitiveDataFrameColumn<int>("SensorId", df1.Rows.Count); var sensorsTicksArray = new PrimitiveDataFrameColumn<long>("Id", sensorsTicks); var sensorDates = new StringDataFrameColumn("SensorDates", df1.Columns["CreatedAt"] as IEnumerable<string>); //var df11 = new DataFrame(ids.Apply<Guid>(_ => System.Guid.NewGuid()), sensorsCreatedAts, sensorDates, df1.Columns["Temperature"], df1.Columns["Luminosity"], df1.Columns["Infrared"], df1.Columns["Distance"], sensorsTicksArray); //var df11 = new DataFrame(df1.Columns["Temperature"], df1.Columns["Luminosity"], df1.Columns["Infrared"], df1.Columns["Distance"], sensorsTicksArray); var df11 = new DataFrame(df1.Columns["Temperature"], sensorsTicksArray, df1.Columns["CreatedAt"]); var labelsTicksArray = new PrimitiveDataFrameColumn<long>("Id", labelsTicks); var labelDates = new 
StringDataFrameColumn("LabelDates", df2.Columns["CreatedAt"] as IEnumerable<string>);
//var df22 = new DataFrame(labelsCreatedAts, labelDates, df2.Columns["Label"], labelsTicksArray);
//var df22 = new DataFrame(df2.Columns["Label"], df2.Columns["Score"], labelsTicksArray);
var df22 = new DataFrame(df2.Columns["CreatedAt"], df2.Columns["Label"], df2.Columns["Score"], labelsTicksArray);

var df = df11.Merge<long>(df22, "Id", "Id", "_sensor", "_label", JoinAlgorithm.FullOuter);
display(df);
```

# Various

```
var cleanedDf = df.DropNulls();
display(cleanedDf);

var sources = df2.GroupBy("Label");
display(sources.Count());

var groupBy = df1.GroupBy("Temperature");
var groupCounts = groupBy.Count();
// Alternatively, find the sum of the values in each group
var intGroupSum = groupBy.Sum("Distance");
groupCounts

display(Chart.Plot(
    new Graph.Histogram() { x = df2.Columns["Label"] }
));

for (int i = 0; i < df2.Rows.Count(); i++)
{
    if (df2.Rows[i][1].Equals("Flash"))
    {
        Console.WriteLine($"{df2.Rows[i][5]} to FlashLight");
        df2.Rows[i][1] = "FlashLight";
    }
}

display(Chart.Plot(
    new Graph.Histogram() { x = df2.Columns["Label"] }
));

intGroupSum

df.Info()

var col0 = df.Columns[2];
display(col0)

df.Info()

df.Rows.ToList() // be careful with the size of the collection! Use .Take(n) when visualizing
```
``` import numpy as np import tensorflow as tf from sklearn.utils import shuffle import re import time import collections import os import itertools from tqdm import tqdm def build_dataset(words, n_words, atleast=1): count = [['PAD', 0], ['GO', 1], ['EOS', 2], ['UNK', 3]] counter = collections.Counter(words).most_common(n_words) counter = [i for i in counter if i[1] >= atleast] count.extend(counter) dictionary = dict() for word, _ in count: dictionary[word] = len(dictionary) data = list() unk_count = 0 for word in words: index = dictionary.get(word, 0) if index == 0: unk_count += 1 data.append(index) count[0][1] = unk_count reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys())) return data, count, dictionary, reversed_dictionary import json with open('augment-normalizer-v4.json') as fopen: texts = json.load(fopen) before, after = [], [] for splitted in texts: if len(splitted) < 2: continue before.append(list(splitted[0])) after.append(list(splitted[1])) assert len(before) == len(after) concat_from = list(itertools.chain(*before)) vocabulary_size_from = len(list(set(concat_from))) data_from, count_from, dictionary_from, rev_dictionary_from = build_dataset(concat_from, vocabulary_size_from) print('vocab from size: %d'%(vocabulary_size_from)) print('Most common words', count_from[4:10]) print('Sample data', data_from[:10], [rev_dictionary_from[i] for i in data_from[:10]]) print('filtered vocab size:',len(dictionary_from)) print("% of vocab used: {}%".format(round(len(dictionary_from)/vocabulary_size_from,4)*100)) concat_to = list(itertools.chain(*after)) vocabulary_size_to = len(list(set(concat_to))) data_to, count_to, dictionary_to, rev_dictionary_to = build_dataset(concat_to, vocabulary_size_to) print('vocab from size: %d'%(vocabulary_size_to)) print('Most common words', count_to[4:10]) print('Sample data', data_to[:10], [rev_dictionary_to[i] for i in data_to[:10]]) print('filtered vocab size:',len(dictionary_to)) print("% of vocab used: 
{}%".format(round(len(dictionary_to)/vocabulary_size_to,4)*100)) GO = dictionary_from['GO'] PAD = dictionary_from['PAD'] EOS = dictionary_from['EOS'] UNK = dictionary_from['UNK'] for i in range(len(after)): after[i].append('EOS') class Stemmer: def __init__(self, size_layer, num_layers, embedded_size, from_dict_size, to_dict_size, learning_rate, dropout = 0.8, beam_width = 15): def lstm_cell(reuse=False): return tf.nn.rnn_cell.LSTMCell(size_layer, reuse=reuse) self.X = tf.placeholder(tf.int32, [None, None]) self.X_seq_len = tf.count_nonzero(self.X, 1, dtype=tf.int32) self.Y = tf.placeholder(tf.int32, [None, None]) self.Y_seq_len = tf.count_nonzero(self.Y, 1, dtype=tf.int32) batch_size = tf.shape(self.X)[0] encoder_embeddings = tf.Variable(tf.random_uniform([from_dict_size, embedded_size], -1, 1)) encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X) encoder_cells = tf.nn.rnn_cell.MultiRNNCell([lstm_cell() for _ in range(num_layers)]) self.encoder_out, self.encoder_state = tf.nn.dynamic_rnn(cell = encoder_cells, inputs = encoder_embedded, sequence_length = self.X_seq_len, dtype = tf.float32) self.encoder_state = tuple(self.encoder_state[-1] for _ in range(num_layers)) main = tf.strided_slice(self.Y, [0, 0], [batch_size, -1], [1, 1]) decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1) decoder_embeddings = tf.Variable(tf.random_uniform([to_dict_size, embedded_size], -1, 1)) dense_layer = tf.layers.Dense(to_dict_size) decoder_cells = tf.nn.rnn_cell.MultiRNNCell([lstm_cell() for _ in range(num_layers)]) with tf.variable_scope('decode'): training_helper = tf.contrib.seq2seq.ScheduledEmbeddingTrainingHelper( inputs = tf.nn.embedding_lookup(decoder_embeddings, decoder_input), sequence_length = self.Y_seq_len, embedding = decoder_embeddings, sampling_probability = 0.5, time_major = False) training_decoder = tf.contrib.seq2seq.BasicDecoder( cell = decoder_cells, helper = training_helper, initial_state = self.encoder_state, output_layer = 
dense_layer)
            training_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
                decoder = training_decoder,
                impute_finished = True,
                maximum_iterations = tf.reduce_max(self.Y_seq_len))
        # testing session
        with tf.variable_scope('decode', reuse=True):
            predicting_decoder = tf.contrib.seq2seq.BeamSearchDecoder(
                cell = decoder_cells,
                embedding = decoder_embeddings,
                start_tokens = tf.tile(tf.constant([GO], dtype=tf.int32), [batch_size]),
                end_token = EOS,
                initial_state = tf.contrib.seq2seq.tile_batch(self.encoder_state, beam_width),
                beam_width = beam_width,
                output_layer = dense_layer,
                length_penalty_weight = 0.0)
            predicting_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
                decoder = predicting_decoder,
                impute_finished = False,
                maximum_iterations = 2 * tf.reduce_max(self.X_seq_len))
        self.training_logits = training_decoder_output.rnn_output
        self.predicting_ids = tf.identity(predicting_decoder_output.predicted_ids[:, :, 0], name="logits")
        masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32)
        self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.training_logits,
                                                     targets = self.Y,
                                                     weights = masks)
        self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(self.cost)
        y_t = tf.argmax(self.training_logits, axis=2)
        y_t = tf.cast(y_t, tf.int32)
        self.prediction = tf.boolean_mask(y_t, masks)
        mask_label = tf.boolean_mask(self.Y, masks)
        correct_pred = tf.equal(self.prediction, mask_label)
        correct_index = tf.cast(correct_pred, tf.float32)
        self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

size_layer = 256
num_layers = 2
embedded_size = 128
learning_rate = 1e-3
batch_size = 128
epoch = 10

tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Stemmer(size_layer, num_layers, embedded_size,
                len(dictionary_from), len(dictionary_to), learning_rate)
sess.run(tf.global_variables_initializer())

def str_idx(corpus, dic, UNK=3):
    X = []
    for i in corpus:
        ints = []
        for k in i:
            ints.append(dic.get(k, UNK))
        X.append(ints)
    return X

X = str_idx(before, dictionary_from)
Y = str_idx(after, dictionary_to)

from sklearn.cross_validation import train_test_split
train_X, test_X, train_Y, test_Y = train_test_split(X, Y, test_size=0.1)

def pad_sentence_batch(sentence_batch, pad_int):
    padded_seqs = []
    seq_lens = []
    max_sentence_len = max([len(sentence) for sentence in sentence_batch])
    for sentence in sentence_batch:
        padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))
        seq_lens.append(len(sentence))
    return padded_seqs, seq_lens

EARLY_STOPPING, CURRENT_CHECKPOINT, CURRENT_ACC, EPOCH = 3, 0, 0, 0
while True:
    lasttime = time.time()
    if CURRENT_CHECKPOINT == EARLY_STOPPING:
        print('break epoch:%d\n' % (EPOCH))
        break
    total_loss, total_accuracy, total_loss_test, total_accuracy_test = 0, 0, 0, 0
    train_X, train_Y = shuffle(train_X, train_Y)
    test_X, test_Y = shuffle(test_X, test_Y)
    pbar = tqdm(range(0, len(train_X), batch_size), desc='train minibatch loop')
    for k in pbar:
        batch_x, _ = pad_sentence_batch(train_X[k: min(k+batch_size,len(train_X))], PAD)
        batch_y, _ = pad_sentence_batch(train_Y[k: min(k+batch_size,len(train_X))], PAD)
        acc, loss, _ = sess.run([model.accuracy, model.cost, model.optimizer],
                                feed_dict={model.X:batch_x, model.Y:batch_y})
        total_loss += loss
        total_accuracy += acc
        pbar.set_postfix(cost=loss, accuracy = acc)
    pbar = tqdm(range(0, len(test_X), batch_size), desc='test minibatch loop')
    for k in pbar:
        batch_x, _ = pad_sentence_batch(test_X[k: min(k+batch_size,len(test_X))], PAD)
        batch_y, _ = pad_sentence_batch(test_Y[k: min(k+batch_size,len(test_X))], PAD)
        acc, loss = sess.run([model.accuracy, model.cost],
                             feed_dict={model.X:batch_x, model.Y:batch_y})
        total_loss_test += loss
        total_accuracy_test += acc
        pbar.set_postfix(cost=loss, accuracy = acc)
    total_loss /= (len(train_X) / batch_size)
    total_accuracy /= (len(train_X) / batch_size)
    total_loss_test /= (len(test_X) / batch_size)
    total_accuracy_test /= (len(test_X) / batch_size)
    if total_accuracy_test > CURRENT_ACC:
        print('epoch: %d, pass acc: %f, current acc: %f' % (EPOCH, CURRENT_ACC, total_accuracy_test))
        CURRENT_ACC = total_accuracy_test
        CURRENT_CHECKPOINT = 0
    else:
        CURRENT_CHECKPOINT += 1
    print('epoch: %d, avg loss: %f, avg accuracy: %f'%(EPOCH, total_loss, total_accuracy))
    print('epoch: %d, avg loss test: %f, avg accuracy test: %f'%(EPOCH, total_loss_test, total_accuracy_test))
    EPOCH += 1

saver = tf.train.Saver(tf.trainable_variables())
saver.save(sess, "beamsearch-lstm-normalize/model.ckpt")

strings = ','.join(
    [
        n.name
        for n in tf.get_default_graph().as_graph_def().node
        if ('Variable' in n.op
            or 'Placeholder' in n.name
            or 'logits' in n.name
            or 'alphas' in n.name)
        and 'Adam' not in n.name
        and 'beta' not in n.name
        and 'OptimizeLoss' not in n.name
        and 'Global_Step' not in n.name
    ]
)

def freeze_graph(model_dir, output_node_names):
    if not tf.gfile.Exists(model_dir):
        raise AssertionError(
            "Export directory doesn't exists. Please specify an export "
            "directory: %s" % model_dir)
    checkpoint = tf.train.get_checkpoint_state(model_dir)
    input_checkpoint = checkpoint.model_checkpoint_path
    absolute_model_dir = "/".join(input_checkpoint.split('/')[:-1])
    output_graph = absolute_model_dir + "/frozen_model.pb"
    clear_devices = True
    with tf.Session(graph=tf.Graph()) as sess:
        saver = tf.train.import_meta_graph(input_checkpoint + '.meta',
                                           clear_devices=clear_devices)
        saver.restore(sess, input_checkpoint)
        output_graph_def = tf.graph_util.convert_variables_to_constants(
            sess,
            tf.get_default_graph().as_graph_def(),
            output_node_names.split(",")
        )
        with tf.gfile.GFile(output_graph, "wb") as f:
            f.write(output_graph_def.SerializeToString())
        print("%d ops in the final graph." % len(output_graph_def.node))

freeze_graph("beamsearch-lstm-normalize", strings)

def load_graph(frozen_graph_filename):
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def)
    return graph

g = load_graph('beamsearch-lstm-normalize/frozen_model.pb')
x = g.get_tensor_by_name('import/Placeholder:0')
logits = g.get_tensor_by_name('import/logits:0')
test_sess = tf.InteractiveSession(graph=g)
predicted = test_sess.run(logits, feed_dict={x: str_idx(['kecomelan'], dictionary_from)})[0]
print('PREDICTED AFTER:', ''.join([rev_dictionary_to[n] for n in predicted if n not in [0,1,2,3]]))

x = g.get_tensor_by_name('import/Placeholder:0')
logits = g.get_tensor_by_name('import/logits:0')
test_sess = tf.InteractiveSession(graph=g)
predicted = test_sess.run(logits, feed_dict={x: str_idx(['xjdi'], dictionary_from)})[0]
print('PREDICTED AFTER:', ''.join([rev_dictionary_to[n] for n in predicted if n not in [0,1,2,3]]))

predicted = test_sess.run(logits, feed_dict={x: str_idx(['xperjlnan'], dictionary_from)})[0]
print('PREDICTED AFTER:', ''.join([rev_dictionary_to[n] for n in predicted if n not in [0,1,2,3]]))

import json
with open('beamsearch-lstm-normalize.json', 'w') as fopen:
    fopen.write(json.dumps({'dictionary_from': dictionary_from,
                            'dictionary_to': dictionary_to,
                            'rev_dictionary_to': rev_dictionary_to,
                            'rev_dictionary_from': rev_dictionary_from}))
```
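The two helpers above, `str_idx` and `pad_sentence_batch`, are plain Python and can be sanity-checked without TensorFlow. The sketch below re-implements them on a made-up character dictionary — the dictionary contents and the assumption that PAD is id 0 and UNK is id 3 are illustrative, not the notebook's actual vocabulary:

```python
def str_idx(corpus, dic, UNK=3):
    # map each character of each word to its integer id, falling back to UNK
    return [[dic.get(ch, UNK) for ch in word] for word in corpus]

def pad_sentence_batch(sentence_batch, pad_int):
    # right-pad every sequence in the batch to the longest sequence's length
    max_len = max(len(s) for s in sentence_batch)
    padded = [s + [pad_int] * (max_len - len(s)) for s in sentence_batch]
    lengths = [len(s) for s in sentence_batch]
    return padded, lengths

# hypothetical character dictionary; ids 0-3 reserved for special tokens
dic = {'a': 4, 'b': 5, 'c': 6}
batch = str_idx(['ab', 'cab', 'z'], dic)        # 'z' is unknown -> UNK (3)
padded, lengths = pad_sentence_batch(batch, 0)  # PAD id assumed to be 0
print(padded)   # [[4, 5, 0], [6, 4, 5], [3, 0, 0]]
print(lengths)  # [2, 3, 1]
```

Keeping the original (unpadded) lengths alongside the padded batch is what lets the sequence mask in the loss ignore the PAD positions.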
```
#!/usr/bin/env python
# -*- encoding: utf-8

from __future__ import absolute_import, division, print_function

try:
    import wx
except ImportError:
    import sys
    sys.path += [
        "/usr/lib/python2.7/dist-packages/wx-2.8-gtk2-unicode",
        "/usr/lib/python2.7/dist-packages"
    ]
    import wx

import matplotlib
matplotlib.use('WXAgg')

from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg as FigureCanvas
from matplotlib.backends.backend_wx import NavigationToolbar2Wx
from matplotlib.figure import Figure

from bisect import bisect

import numpy as np
import pandas as pd

# unused import required to allow 'eval' of date filters
import datetime
from datetime import date

# try to get nicer plotting styles
try:
    import seaborn
    seaborn.set()
except ImportError:
    try:
        from matplotlib import pyplot as plt
        plt.style.use('ggplot')
    except AttributeError:
        pass


class ListCtrlDataFrame(wx.ListCtrl):

    # TODO: we could do something more sophisticated to come
    # TODO: up with a reasonable column width...
    DEFAULT_COLUMN_WIDTH = 100
    TMP_SELECTION_COLUMN = 'tmp_selection_column'

    def __init__(self, parent, df, status_bar_callback):
        wx.ListCtrl.__init__(
            self, parent, -1,
            style=wx.LC_REPORT | wx.LC_VIRTUAL | wx.LC_HRULES | wx.LC_VRULES | wx.LB_MULTIPLE
        )

        self.status_bar_callback = status_bar_callback

        self.df_orig = df
        self.original_columns = self.df_orig.columns[:]
        self.current_columns = self.df_orig.columns[:]

        self.sort_by_column = None

        self._reset_mask()

        # prepare attribute for alternating colors of rows
        self.attr_light_blue = wx.ListItemAttr()
        self.attr_light_blue.SetBackgroundColour("#D6EBFF")

        self.Bind(wx.EVT_LIST_COL_CLICK, self._on_col_click)
        self.Bind(wx.EVT_RIGHT_DOWN, self._on_right_click)

        self.df = pd.DataFrame({})  # init empty to force initial update
        self._update_rows()
        self._update_columns(self.original_columns)

    def _reset_mask(self):
        #self.mask = [True] * self.df_orig.shape[0]
        self.mask = pd.Series([True] * self.df_orig.shape[0], index=self.df_orig.index)

    def _update_columns(self, columns):
        self.ClearAll()
        for i, col in enumerate(columns):
            self.InsertColumn(i, col)
            self.SetColumnWidth(i, self.DEFAULT_COLUMN_WIDTH)
        # Note that we have to reset the count as well because ClearAll()
        # not only deletes columns but also the count...
        self.SetItemCount(len(self.df))

    def set_columns(self, columns_to_use):
        """
        External interface to set the column projections.
        """
        self.current_columns = columns_to_use
        self._update_rows()
        self._update_columns(columns_to_use)

    def _update_rows(self):
        old_len = len(self.df)
        self.df = self.df_orig.loc[self.mask.values, self.current_columns]
        new_len = len(self.df)
        if old_len != new_len:
            self.SetItemCount(new_len)
            self.status_bar_callback(0, "Number of rows: {}".format(new_len))

    def apply_filter(self, conditions):
        """
        External interface to set a filter.
        """
        old_mask = self.mask.copy()

        if len(conditions) == 0:
            self._reset_mask()
        else:
            self._reset_mask()  # set all to True for destructive conjunction

            no_error = True
            for column, condition in conditions:
                if condition.strip() == '':
                    continue
                condition = condition.replace("_", "self.df_orig['{}']".format(column))
                print("Evaluating condition:", condition)
                try:
                    tmp_mask = eval(condition)
                    if isinstance(tmp_mask, pd.Series) and tmp_mask.dtype == np.bool:
                        self.mask &= tmp_mask
                except Exception as e:
                    print("Failed with:", e)
                    no_error = False
                    self.status_bar_callback(
                        1, "Evaluating '{}' failed with: {}".format(condition, e)
                    )

            if no_error:
                self.status_bar_callback(1, "")

        has_changed = any(old_mask != self.mask)
        if has_changed:
            self._update_rows()

        return len(self.df), has_changed

    def get_selected_items(self):
        """
        Gets the selected items for the list control.
        Selection is returned as a list of selected indices, low to high.
        """
        selection = []
        current = -1    # start at -1 to get the first selected item
        while True:
            next = self.GetNextItem(current, wx.LIST_NEXT_ALL, wx.LIST_STATE_SELECTED)
            if next == -1:
                return selection
            else:
                selection.append(next)
                current = next

    def get_filtered_df(self):
        return self.df_orig.loc[self.mask, :]

    def _on_col_click(self, event):
        """
        Sort data frame by selected column.
        """
        # get currently selected items
        selected = self.get_selected_items()
        # append a temporary column to store the currently selected items
        self.df[self.TMP_SELECTION_COLUMN] = False
        self.df.iloc[selected, -1] = True

        # get column name to use for sorting
        col = event.GetColumn()

        # determine if ascending or descending
        if self.sort_by_column is None or self.sort_by_column[0] != col:
            ascending = True
        else:
            ascending = not self.sort_by_column[1]

        # store sort column and sort direction
        self.sort_by_column = (col, ascending)

        try:
            # pandas 0.17
            self.df.sort_values(self.df.columns[col], inplace=True, ascending=ascending)
        except AttributeError:
            # pandas 0.16 compatibility
            self.df.sort(self.df.columns[col], inplace=True, ascending=ascending)

        # deselect all previously selected
        for i in selected:
            self.Select(i, on=False)

        # determine indices of selection after sorting
        selected_bool = self.df.iloc[:, -1] == True
        selected = self.df.reset_index().index[selected_bool]

        # select corresponding rows
        for i in selected:
            self.Select(i, on=True)

        # delete temporary column
        del self.df[self.TMP_SELECTION_COLUMN]

    def _on_right_click(self, event):
        """
        Copies a cell into clipboard on right click. Unfortunately,
        determining the clicked column is not straightforward. This
        approach is inspired by the TextEditMixin in:
        /usr/lib/python2.7/dist-packages/wx-2.8-gtk2-unicode/wx/lib/mixins/listctrl.py
        More references:
        - http://wxpython-users.1045709.n5.nabble.com/Getting-row-col-of-selected-cell-in-ListCtrl-td2360831.html
        - https://groups.google.com/forum/#!topic/wxpython-users/7BNl9TA5Y5U
        - https://groups.google.com/forum/#!topic/wxpython-users/wyayJIARG8c
        """
        if self.HitTest(event.GetPosition()) != wx.NOT_FOUND:
            x, y = event.GetPosition()
            row, flags = self.HitTest((x, y))

            col_locs = [0]
            loc = 0
            for n in range(self.GetColumnCount()):
                loc = loc + self.GetColumnWidth(n)
                col_locs.append(loc)

            scroll_pos = self.GetScrollPos(wx.HORIZONTAL)
            # this is crucial step to get the scroll pixel units
            unit_x, unit_y = self.GetMainWindow().GetScrollPixelsPerUnit()

            col = bisect(col_locs, x + scroll_pos * unit_x) - 1

            value = self.df.iloc[row, col]
            # print(row, col, scroll_pos, value)

            clipdata = wx.TextDataObject()
            clipdata.SetText(str(value))
            wx.TheClipboard.Open()
            wx.TheClipboard.SetData(clipdata)
            wx.TheClipboard.Close()

    def OnGetItemText(self, item, col):
        """
        Implements the item getter for a "virtual" ListCtrl.
        """
        value = self.df.iloc[item, col]
        # print("retrieving %d %d %s" % (item, col, value))
        return str(value)

    def OnGetItemAttr(self, item):
        """
        Implements the attribute getter for a "virtual" ListCtrl.
        """
        if item % 2 == 0:
            return self.attr_light_blue
        else:
            return None


class DataframePanel(wx.Panel):
    """
    Panel providing the main data frame table view.
    """
    def __init__(self, parent, df, status_bar_callback):
        wx.Panel.__init__(self, parent)

        self.df_list_ctrl = ListCtrlDataFrame(self, df, status_bar_callback)

        sizer = wx.BoxSizer(wx.VERTICAL)
        sizer.Add(self.df_list_ctrl, 1, wx.ALL | wx.EXPAND | wx.GROW, 5)
        self.SetSizer(sizer)
        self.Show()


class ListBoxDraggable(wx.ListBox):
    """
    Helper class to provide ListBox with extended behavior.
    """
    def __init__(self, parent, size, data, *args, **kwargs):
        wx.ListBox.__init__(self, parent, size, **kwargs)

        self.data = data

        self.InsertItems(data, 0)

        self.Bind(wx.EVT_LISTBOX, self.on_selection_changed)

        self.Bind(wx.EVT_LEFT_DOWN, self.on_left_down)
        self.Bind(wx.EVT_RIGHT_DOWN, self.on_right_down)
        self.Bind(wx.EVT_RIGHT_UP, self.on_right_up)
        self.Bind(wx.EVT_MOTION, self.on_move)

        self.index_iter = range(len(self.data))

        self.selected_items = [True] * len(self.data)
        self.index_mapping = list(range(len(self.data)))

        self.drag_start_index = None

        self.update_selection()
        self.SetFocus()

    def on_left_down(self, event):
        if self.HitTest(event.GetPosition()) != wx.NOT_FOUND:
            index = self.HitTest(event.GetPosition())
            self.selected_items[index] = not self.selected_items[index]
            # doesn't really work to update selection directly (focus issues)
            # instead we wait for the EVT_LISTBOX event and fix the selection
            # there...
            # self.update_selection()
            # TODO: we could probably use wx.CallAfter
        event.Skip()

    def update_selection(self):
        # self.SetFocus()
        # print(self.selected_items)
        for i in self.index_iter:
            if self.IsSelected(i) and not self.selected_items[i]:
                #print("Deselecting", i)
                self.Deselect(i)
            elif not self.IsSelected(i) and self.selected_items[i]:
                #print("Selecting", i)
                self.Select(i)

    def on_selection_changed(self, evt):
        self.update_selection()
        evt.Skip()

    def on_right_down(self, event):
        if self.HitTest(event.GetPosition()) != wx.NOT_FOUND:
            index = self.HitTest(event.GetPosition())
            self.drag_start_index = index

    def on_right_up(self, event):
        self.drag_start_index = None
        event.Skip()

    def on_move(self, event):
        if self.drag_start_index is not None:
            if self.HitTest(event.GetPosition()) != wx.NOT_FOUND:
                index = self.HitTest(event.GetPosition())
                if self.drag_start_index != index:
                    self.swap(self.drag_start_index, index)
                    self.drag_start_index = index

    def swap(self, i, j):
        self.index_mapping[i], self.index_mapping[j] = self.index_mapping[j], self.index_mapping[i]
        self.SetString(i, self.data[self.index_mapping[i]])
        self.SetString(j, self.data[self.index_mapping[j]])
        self.selected_items[i], self.selected_items[j] = self.selected_items[j], self.selected_items[i]
        # self.update_selection()
        # print("Updated mapping:", self.index_mapping)
        new_event = wx.PyCommandEvent(wx.EVT_LISTBOX.typeId, self.GetId())
        self.GetEventHandler().ProcessEvent(new_event)

    def get_selected_data(self):
        selected = []
        for i, col in enumerate(self.data):
            if self.IsSelected(i):
                index = self.index_mapping[i]
                value = self.data[index]
                selected.append(value)
        # print("Selected data:", selected)
        return selected


class ColumnSelectionPanel(wx.Panel):
    """
    Panel for selecting and re-arranging columns.
    """
    def __init__(self, parent, columns, df_list_ctrl):
        wx.Panel.__init__(self, parent)

        self.columns = columns
        self.df_list_ctrl = df_list_ctrl

        self.list_box = ListBoxDraggable(self, -1, columns, style=wx.LB_EXTENDED)
        self.Bind(wx.EVT_LISTBOX, self.update_selected_columns)

        sizer = wx.BoxSizer(wx.VERTICAL)
        sizer.Add(self.list_box, 1, wx.ALL | wx.EXPAND | wx.GROW, 5)
        self.SetSizer(sizer)
        self.list_box.SetFocus()

    def update_selected_columns(self, evt):
        selected = self.list_box.get_selected_data()
        self.df_list_ctrl.set_columns(selected)


class FilterPanel(wx.Panel):
    """
    Panel for defining filter expressions.
    """
    def __init__(self, parent, columns, df_list_ctrl, change_callback):
        wx.Panel.__init__(self, parent)

        columns_with_neutral_selection = [''] + list(columns)
        self.columns = columns
        self.df_list_ctrl = df_list_ctrl
        self.change_callback = change_callback

        self.num_filters = 10

        self.main_sizer = wx.BoxSizer(wx.VERTICAL)

        self.combo_boxes = []
        self.text_controls = []

        for i in range(self.num_filters):
            combo_box = wx.ComboBox(self, choices=columns_with_neutral_selection, style=wx.CB_READONLY)
            text_ctrl = wx.TextCtrl(self, wx.ID_ANY, '')

            self.Bind(wx.EVT_COMBOBOX, self.on_combo_box_select)
            self.Bind(wx.EVT_TEXT, self.on_text_change)

            row_sizer = wx.BoxSizer(wx.HORIZONTAL)
            row_sizer.Add(combo_box, 0, wx.ALL, 5)
            row_sizer.Add(text_ctrl, 1, wx.ALL | wx.EXPAND | wx.ALIGN_RIGHT, 5)

            self.combo_boxes.append(combo_box)
            self.text_controls.append(text_ctrl)
            self.main_sizer.Add(row_sizer, 0, wx.EXPAND)

        self.SetSizer(self.main_sizer)

    def on_combo_box_select(self, event):
        self.update_conditions()

    def on_text_change(self, event):
        self.update_conditions()

    def update_conditions(self):
        # print("Updating conditions")
        conditions = []
        for i in range(self.num_filters):
            column_index = self.combo_boxes[i].GetSelection()
            condition = self.text_controls[i].GetValue()
            if column_index != wx.NOT_FOUND and column_index != 0:
                # since we have added a dummy column for "deselect", we have to subtract one
                column = self.columns[column_index - 1]
                conditions += [(column, condition)]
        num_matching, has_changed = self.df_list_ctrl.apply_filter(conditions)
        if has_changed:
            self.change_callback()
        # print("Num matching:", num_matching)


class HistogramPlot(wx.Panel):
    """
    Panel providing a histogram plot.
    """
    def __init__(self, parent, columns, df_list_ctrl):
        wx.Panel.__init__(self, parent)

        columns_with_neutral_selection = [''] + list(columns)
        self.columns = columns
        self.df_list_ctrl = df_list_ctrl

        self.figure = Figure(facecolor="white", figsize=(1, 1))
        self.axes = self.figure.add_subplot(111)
        self.canvas = FigureCanvas(self, -1, self.figure)

        chart_toolbar = NavigationToolbar2Wx(self.canvas)

        self.combo_box1 = wx.ComboBox(self, choices=columns_with_neutral_selection, style=wx.CB_READONLY)

        self.Bind(wx.EVT_COMBOBOX, self.on_combo_box_select)

        row_sizer = wx.BoxSizer(wx.HORIZONTAL)
        row_sizer.Add(self.combo_box1, 0, wx.ALL | wx.ALIGN_CENTER, 5)
        row_sizer.Add(chart_toolbar, 0, wx.ALL, 5)

        sizer = wx.BoxSizer(wx.VERTICAL)
        sizer.Add(self.canvas, 1, flag=wx.EXPAND, border=5)
        sizer.Add(row_sizer)
        self.SetSizer(sizer)

    def on_combo_box_select(self, event):
        self.redraw()

    def redraw(self):
        column_index1 = self.combo_box1.GetSelection()
        if column_index1 != wx.NOT_FOUND and column_index1 != 0:
            # subtract one to remove the neutral selection index
            column_index1 -= 1
            df = self.df_list_ctrl.get_filtered_df()
            if len(df) > 0:
                self.axes.clear()
                column = df.iloc[:, column_index1]
                is_string_col = column.dtype == np.object and isinstance(column.values[0], str)
                if is_string_col:
                    value_counts = column.value_counts().sort_index()
                    value_counts.plot(kind='bar', ax=self.axes)
                else:
                    self.axes.hist(column.values, bins=100)
                self.canvas.draw()


class ScatterPlot(wx.Panel):
    """
    Panel providing a scatter plot.
    """
    def __init__(self, parent, columns, df_list_ctrl):
        wx.Panel.__init__(self, parent)

        columns_with_neutral_selection = [''] + list(columns)
        self.columns = columns
        self.df_list_ctrl = df_list_ctrl

        self.figure = Figure(facecolor="white", figsize=(1, 1))
        self.axes = self.figure.add_subplot(111)
        self.canvas = FigureCanvas(self, -1, self.figure)

        chart_toolbar = NavigationToolbar2Wx(self.canvas)

        self.combo_box1 = wx.ComboBox(self, choices=columns_with_neutral_selection, style=wx.CB_READONLY)
        self.combo_box2 = wx.ComboBox(self, choices=columns_with_neutral_selection, style=wx.CB_READONLY)

        self.Bind(wx.EVT_COMBOBOX, self.on_combo_box_select)

        row_sizer = wx.BoxSizer(wx.HORIZONTAL)
        row_sizer.Add(self.combo_box1, 0, wx.ALL | wx.ALIGN_CENTER, 5)
        row_sizer.Add(self.combo_box2, 0, wx.ALL | wx.ALIGN_CENTER, 5)
        row_sizer.Add(chart_toolbar, 0, wx.ALL, 5)

        sizer = wx.BoxSizer(wx.VERTICAL)
        sizer.Add(self.canvas, 1, flag=wx.EXPAND, border=5)
        sizer.Add(row_sizer)
        self.SetSizer(sizer)

    def on_combo_box_select(self, event):
        self.redraw()

    def redraw(self):
        column_index1 = self.combo_box1.GetSelection()
        column_index2 = self.combo_box2.GetSelection()
        if column_index1 != wx.NOT_FOUND and column_index1 != 0 and \
           column_index2 != wx.NOT_FOUND and column_index2 != 0:
            # subtract one to remove the neutral selection index
            column_index1 -= 1
            column_index2 -= 1
            df = self.df_list_ctrl.get_filtered_df()

            # It looks like using pandas dataframe.plot causes something weird to
            # crash in wx internally. Therefore we use plain axes.plot functionality.
            # column_name1 = self.columns[column_index1]
            # column_name2 = self.columns[column_index2]
            # df.plot(kind='scatter', x=column_name1, y=column_name2)

            if len(df) > 0:
                self.axes.clear()
                self.axes.plot(df.iloc[:, column_index1].values,
                               df.iloc[:, column_index2].values,
                               'o', clip_on=False)
                self.canvas.draw()


class MainFrame(wx.Frame):
    """
    The main GUI window.
    """
    def __init__(self, df):
        wx.Frame.__init__(self, None, -1, "Pandas DataFrame GUI")

        # Here we create a panel and a notebook on the panel
        p = wx.Panel(self)
        nb = wx.Notebook(p)
        self.nb = nb

        columns = df.columns[:]

        self.CreateStatusBar(2, style=0)
        self.SetStatusWidths([200, -1])

        # create the page windows as children of the notebook
        self.page1 = DataframePanel(nb, df, self.status_bar_callback)
        self.page2 = ColumnSelectionPanel(nb, columns, self.page1.df_list_ctrl)
        #self.page3 = FilterPanel(nb, columns, self.page1.df_list_ctrl, self.selection_change_callback)
        #self.page4 = HistogramPlot(nb, columns, self.page1.df_list_ctrl)
        #self.page5 = ScatterPlot(nb, columns, self.page1.df_list_ctrl)

        # add the pages to the notebook with the label to show on the tab
        nb.AddPage(self.page1, "Data Frame")
        nb.AddPage(self.page2, "Columns")
        #nb.AddPage(self.page3, "Filters")
        #nb.AddPage(self.page4, "Histogram")
        #nb.AddPage(self.page5, "Scatter Plot")

        nb.Bind(wx.EVT_NOTEBOOK_PAGE_CHANGED, self.on_tab_change)

        # finally, put the notebook in a sizer for the panel to manage
        # the layout
        sizer = wx.BoxSizer()
        sizer.Add(nb, 1, wx.EXPAND)
        p.SetSizer(sizer)

        self.SetSize((800, 600))
        self.Center()

    def on_tab_change(self, event):
        self.page2.list_box.SetFocus()
        page_to_select = event.GetSelection()
        wx.CallAfter(self.fix_focus, page_to_select)
        event.Skip(True)

    def fix_focus(self, page_to_select):
        page = self.nb.GetPage(page_to_select)
        page.SetFocus()
        if isinstance(page, DataframePanel):
            self.page1.df_list_ctrl.SetFocus()
        elif isinstance(page, ColumnSelectionPanel):
            self.page2.list_box.SetFocus()

    def status_bar_callback(self, i, new_text):
        self.SetStatusText(new_text, i)

    def selection_change_callback(self):
        self.page4.redraw()
        self.page5.redraw()


def show(df):
    """
    The main function to start the data frame GUI.
    """
    app = wx.App(False)
    frame = MainFrame(df)
    frame.Show()
    app.MainLoop()


# -*- coding:utf-8 -*-
from tkinter import tix as Tix
# the following line is essential even though Tix also imports it!
from tkinter import *

# create the main window: not as with plain Tkinter, since root = Tix.Tk()
root = Tix.Tk()
root.geometry("800x600")

# create a menu (Tkinter style: fine)
barredemenu = Menu(root)
root.config(menu=barredemenu)
menu1 = Menu(barredemenu, tearoff=0)
barredemenu.add_cascade(label="Programme", menu=menu1)
menu1.add_command(label="Quitter")

# create two frames on the left and at the bottom (Tkinter style: fine)
frame_lateral = Frame(root, bg="red", width=150)
frame_en_bas = Frame(root, bg="green", height=150)

# create the notebook: Tix style, as expected:
monnotebook = Tix.NoteBook(root)
monnotebook.add("page1", label="Onglet un")
monnotebook.add("page2", label="Onglet deux")
monnotebook.add("page3", label="Onglet trois")
monnotebook.add("page4", label="C'est la fête")

p1 = monnotebook.subwidget_list["page1"]
p2 = monnotebook.subwidget_list["page2"]
p3 = monnotebook.subwidget_list["page3"]
p4 = monnotebook.subwidget_list["page4"]

cv1 = Canvas(p1, bg="tan")
cv2 = Canvas(p2, bg="white")
cv3 = Canvas(p3, bg="yellow")
fra4 = Frame(p4)

# it seems that the first containers placed inside a tab MUST
# be positioned with pack()
cv1.pack(expand=1, fill=Tix.BOTH)
cv2.pack(expand=1, fill=Tix.BOTH)
cv3.pack(expand=1, fill=Tix.BOTH)
fra4.pack()

# create two sub-frames that will be placed INSIDE fra4
# this is done Tkinter style: fine!
sous_fra1 = Frame(fra4, bg="black", width=200, height=300)
sous_fra1.grid(row=0, column=0)
sous_fra2 = Frame(fra4, bg="tan")
sous_fra2.grid(row=0, column=1)

# the children of the sub-frame sous_fra2 are placed Tkinter style: fine
lab1 = Label(sous_fra2, text="Label un", bg="tan")
lab2 = Label(sous_fra2, text="Label deux", bg="tan")
lab3 = Label(sous_fra2, text="Label trois", bg="tan")
lab4 = Label(sous_fra2, text="Label quatre", bg="tan")
ent1 = Entry(sous_fra2)
ent2 = Entry(sous_fra2)
ent3 = Entry(sous_fra2)
ent4 = Entry(sous_fra2)
lab1.grid(row=0, column=0, padx=5, pady=5)
lab2.grid(row=1, column=0, padx=5, pady=5)
lab3.grid(row=2, column=0, padx=5, pady=5)
lab4.grid(row=3, column=0, padx=5, pady=5)
ent1.grid(row=0, column=1, padx=5, pady=5)
ent2.grid(row=1, column=1, padx=5, pady=5)
ent3.grid(row=2, column=1, padx=5, pady=5)
ent4.grid(row=3, column=1, padx=5, pady=5)

# the side frame and the bottom frame are placed Tkinter style: fine
frame_en_bas.pack(side=BOTTOM, fill=X)
frame_lateral.pack(side=LEFT, fill=Y)

# the frame holding the notebook is placed with Tix: note the "fill=Tix.BOTH"
monnotebook.pack(side=LEFT, fill=Tix.BOTH, expand=1, padx=5, pady=5)

root.mainloop()
```
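The `_on_right_click` handler in `ListCtrlDataFrame` locates the clicked column by accumulating column widths and bisecting on the click's x coordinate. Stripped of the wx plumbing, the lookup is just the following (the widths, scroll position, and pixel-unit values here are made up for illustration):

```python
from bisect import bisect

def column_at(x, column_widths, scroll_pos=0, unit_x=1):
    # cumulative pixel offsets of column boundaries: [0, w0, w0+w1, ...]
    col_locs = [0]
    loc = 0
    for w in column_widths:
        loc += w
        col_locs.append(loc)
    # bisect finds the first boundary strictly to the right of the
    # (scroll-corrected) click position; subtracting 1 gives the column index
    return bisect(col_locs, x + scroll_pos * unit_x) - 1

widths = [100, 100, 100]  # three columns at the DEFAULT_COLUMN_WIDTH
print(column_at(50, widths))                   # 0: inside the first column
print(column_at(150, widths))                  # 1: inside the second column
print(column_at(30, widths, scroll_pos=120))   # 1: click lands in column 2 after scrolling
```

Correcting `x` by `scroll_pos * unit_x` is the step the original comment calls "crucial": without it, a horizontally scrolled list would report the wrong column.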
# Tutorial Exercise: Yelp reviews (Solution)

## Introduction

This exercise uses a small subset of the data from Kaggle's [Yelp Business Rating Prediction](https://www.kaggle.com/c/yelp-recsys-2013) competition.

**Description of the data:**

- **`yelp.csv`** contains the dataset. It is stored in the repository (in the **`data`** directory), so there is no need to download anything from the Kaggle website.
- Each observation (row) in this dataset is a review of a particular business by a particular user.
- The **stars** column is the number of stars (1 through 5) assigned by the reviewer to the business. (Higher stars is better.) In other words, it is the rating of the business by the person who wrote the review.
- The **text** column is the text of the review.

**Goal:** Predict the star rating of a review using **only** the review text.

**Tip:** After each task, I recommend that you check the shape and the contents of your objects, to confirm that they match your expectations.

```
# for Python 2: use print only as a function
from __future__ import print_function
```

## Task 1

Read **`yelp.csv`** into a pandas DataFrame and examine it.

```
# read yelp.csv using a relative path
import pandas as pd
path = 'data/yelp.csv'
yelp = pd.read_csv(path)

# examine the shape
yelp.shape

# examine the first row
yelp.head(1)

# examine the class distribution
yelp.stars.value_counts().sort_index()
```

## Task 2

Create a new DataFrame that only contains the **5-star** and **1-star** reviews.

- **Hint:** [How do I apply multiple filter criteria to a pandas DataFrame?](http://nbviewer.jupyter.org/github/justmarkham/pandas-videos/blob/master/pandas.ipynb#9.-How-do-I-apply-multiple-filter-criteria-to-a-pandas-DataFrame%3F-%28video%29) explains how to do this.
```
# filter the DataFrame using an OR condition
yelp_best_worst = yelp[(yelp.stars==5) | (yelp.stars==1)]

# equivalently, use the 'loc' method
yelp_best_worst = yelp.loc[(yelp.stars==5) | (yelp.stars==1), :]

# examine the shape
yelp_best_worst.shape
```

## Task 3

Define X and y from the new DataFrame, and then split X and y into training and testing sets, using the **review text** as the only feature and the **star rating** as the response.

- **Hint:** Keep in mind that X should be a pandas Series (not a DataFrame), since we will pass it to CountVectorizer in the task that follows.

```
# define X and y
X = yelp_best_worst.text
y = yelp_best_worst.stars

# split X and y into training and testing sets
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# examine the object shapes
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
```

## Task 4

Use CountVectorizer to create **document-term matrices** from X_train and X_test.

```
# import and instantiate CountVectorizer
from sklearn.feature_extraction.text import CountVectorizer
vect = CountVectorizer()

# fit and transform X_train into X_train_dtm
X_train_dtm = vect.fit_transform(X_train)
X_train_dtm.shape

# transform X_test into X_test_dtm
X_test_dtm = vect.transform(X_test)
X_test_dtm.shape
```

## Task 5

Use multinomial Naive Bayes to **predict the star rating** for the reviews in the testing set, and then **calculate the accuracy** and **print the confusion matrix**.

- **Hint:** [Evaluating a classification model](https://github.com/justmarkham/scikit-learn-videos/blob/master/09_classification_metrics.ipynb) explains how to interpret both classification accuracy and the confusion matrix.
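Before handing the work to scikit-learn, it can help to see what multinomial Naive Bayes actually computes. The following is a minimal hand-rolled sketch of the decision rule on a toy corpus — the "reviews", class labels, and counts below are made up for illustration and are not drawn from the Yelp data:

```python
import math
from collections import Counter

# Toy training corpus: (text, star rating) pairs, hypothetical data.
train = [("great food great service", 5),
         ("awful food terrible service", 1),
         ("great place", 5)]

# Count token occurrences per class, as MultinomialNB does internally
# (compare the feature_count_ and class_count_ attributes used in Task 8).
token_counts = {1: Counter(), 5: Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    token_counts[label].update(text.split())

vocab = set(w for c in token_counts.values() for w in c)

def predict(text, alpha=1.0):
    # pick the class maximizing log P(class) + sum of log P(token | class),
    # with Laplace (add-alpha) smoothing over the vocabulary
    best_label, best_score = None, -math.inf
    total = sum(class_counts.values())
    for label in class_counts:
        score = math.log(class_counts[label] / total)
        denom = sum(token_counts[label].values()) + alpha * len(vocab)
        for tok in text.split():
            score += math.log((token_counts[label][tok] + alpha) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(predict("great service"))   # 5
print(predict("terrible awful"))  # 1
```

This also makes Task 7's failure modes concrete: a review containing mostly tokens that were frequent in the *other* class will be misclassified, regardless of its true rating.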
```
# import and instantiate MultinomialNB
from sklearn.naive_bayes import MultinomialNB
nb = MultinomialNB()

# train the model using X_train_dtm
nb.fit(X_train_dtm, y_train)

# make class predictions for X_test_dtm
y_pred_class = nb.predict(X_test_dtm)

# calculate accuracy of class predictions
from sklearn import metrics
metrics.accuracy_score(y_test, y_pred_class)

# print the confusion matrix
metrics.confusion_matrix(y_test, y_pred_class)
```

## Task 6 (Challenge)

Calculate the **null accuracy**, which is the classification accuracy that could be achieved by always predicting the most frequent class.

- **Hint:** [Evaluating a classification model](https://github.com/justmarkham/scikit-learn-videos/blob/master/09_classification_metrics.ipynb) explains null accuracy and demonstrates two ways to calculate it, though only one of those ways will work in this case. Alternatively, you can come up with your own method to calculate null accuracy!

```
# examine the class distribution of the testing set
y_test.value_counts()

# calculate null accuracy
y_test.value_counts().head(1) / y_test.shape

# calculate null accuracy manually
838 / float(838 + 184)
```

## Task 7 (Challenge)

Browse through the review text of some of the **false positives** and **false negatives**. Based on your knowledge of how Naive Bayes works, do you have any ideas about why the model is incorrectly classifying these reviews?

- **Hint:** [Evaluating a classification model](https://github.com/justmarkham/scikit-learn-videos/blob/master/09_classification_metrics.ipynb) explains the definitions of "false positives" and "false negatives".
- **Hint:** Think about what a false positive means in this context, and what a false negative means in this context. What has scikit-learn defined as the "positive class"?
```
# first 10 false positives (1-star reviews incorrectly classified as 5-star reviews)
X_test[y_test < y_pred_class].head(10)

# false positive: model is reacting to the words "good", "impressive", "nice"
X_test[1781]

# false positive: model does not have enough data to work with
X_test[1919]

# first 10 false negatives (5-star reviews incorrectly classified as 1-star reviews)
X_test[y_test > y_pred_class].head(10)

# false negative: model is reacting to the words "complain", "crowds", "rushing", "pricey", "scum"
X_test[4963]
```

## Task 8 (Challenge)

Calculate which 10 tokens are the most predictive of **5-star reviews**, and which 10 tokens are the most predictive of **1-star reviews**.

- **Hint:** Naive Bayes automatically counts the number of times each token appears in each class, as well as the number of observations in each class. You can access these counts via the `feature_count_` and `class_count_` attributes of the Naive Bayes model object.

```
# store the vocabulary of X_train
X_train_tokens = vect.get_feature_names()
len(X_train_tokens)

# first row is one-star reviews, second row is five-star reviews
nb.feature_count_.shape

# store the number of times each token appears across each class
one_star_token_count = nb.feature_count_[0, :]
five_star_token_count = nb.feature_count_[1, :]

# create a DataFrame of tokens with their separate one-star and five-star counts
tokens = pd.DataFrame({'token':X_train_tokens,
                       'one_star':one_star_token_count,
                       'five_star':five_star_token_count}).set_index('token')

# add 1 to one-star and five-star counts to avoid dividing by 0
tokens['one_star'] = tokens.one_star + 1
tokens['five_star'] = tokens.five_star + 1

# first number is one-star reviews, second number is five-star reviews
nb.class_count_

# convert the one-star and five-star counts into frequencies
tokens['one_star'] = tokens.one_star / nb.class_count_[0]
tokens['five_star'] = tokens.five_star / nb.class_count_[1]

# calculate the ratio of five-star to one-star for each token
tokens['five_star_ratio'] = tokens.five_star / tokens.one_star

# sort the DataFrame by five_star_ratio (descending order), and examine the first 10 rows
# note: use sort() instead of sort_values() for pandas 0.16.2 and earlier
tokens.sort_values('five_star_ratio', ascending=False).head(10)

# sort the DataFrame by five_star_ratio (ascending order), and examine the first 10 rows
tokens.sort_values('five_star_ratio', ascending=True).head(10)
```

## Task 9 (Challenge)

Up to this point, we have framed this as a **binary classification problem** by only considering the 5-star and 1-star reviews. Now, let's repeat the model building process using all reviews, which makes this a **5-class classification problem**.

Here are the steps:

- Define X and y using the original DataFrame. (y should contain 5 different classes.)
- Split X and y into training and testing sets.
- Create document-term matrices using CountVectorizer.
- Calculate the testing accuracy of a Multinomial Naive Bayes model.
- Compare the testing accuracy with the null accuracy, and comment on the results.
- Print the confusion matrix, and comment on the results. (This [Stack Overflow answer](http://stackoverflow.com/a/30748053/1636598) explains how to read a multi-class confusion matrix.)
- Print the [classification report](http://scikit-learn.org/stable/modules/model_evaluation.html#classification-report), and comment on the results. If you are unfamiliar with the terminology it uses, research the terms, and then try to figure out how to calculate these metrics manually from the confusion matrix!
``` # define X and y using the original DataFrame X = yelp.text y = yelp.stars # check that y contains 5 different classes y.value_counts().sort_index() # split X and y into training and testing sets X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1) # create document-term matrices using CountVectorizer X_train_dtm = vect.fit_transform(X_train) X_test_dtm = vect.transform(X_test) # fit a Multinomial Naive Bayes model nb.fit(X_train_dtm, y_train) # make class predictions y_pred_class = nb.predict(X_test_dtm) # calculate the accuracy metrics.accuracy_score(y_test, y_pred_class) # calculate the null accuracy y_test.value_counts().head(1) / y_test.shape ``` **Accuracy comments:** At first glance, 47% accuracy does not seem very good, given that it is not much higher than the null accuracy. However, I would consider the 47% accuracy to be quite impressive, given that humans would also have a hard time precisely identifying the star rating for many of these reviews. ``` # print the confusion matrix metrics.confusion_matrix(y_test, y_pred_class) ``` **Confusion matrix comments:** - Nearly all 4-star and 5-star reviews are classified as 4 or 5 stars, but they are hard for the model to distinguish between. - 1-star, 2-star, and 3-star reviews are most commonly classified as 4 stars, probably because it's the predominant class in the training data. ``` # print the classification report print(metrics.classification_report(y_test, y_pred_class)) ``` **Precision** answers the question: "When a given class is predicted, how often are those predictions correct?" To calculate the precision for class 1, for example, you divide 55 by the sum of the first column of the confusion matrix. ``` # manually calculate the precision for class 1 precision = 55 / float(55 + 28 + 5 + 7 + 6) print(precision) ``` **Recall** answers the question: "When a given class is the true class, how often is that class predicted?"
To calculate the recall for class 1, for example, you divide 55 by the sum of the first row of the confusion matrix. ``` # manually calculate the recall for class 1 recall = 55 / float(55 + 14 + 24 + 65 + 27) print(recall) ``` **F1 score** is a weighted average of precision and recall. ``` # manually calculate the F1 score for class 1 f1 = 2 * (precision * recall) / (precision + recall) print(f1) ``` **Support** answers the question: "How many observations exist for which a given class is the true class?" To calculate the support for class 1, for example, you sum the first row of the confusion matrix. ``` # manually calculate the support for class 1 support = 55 + 14 + 24 + 65 + 27 print(support) ``` **Classification report comments:** - Class 1 has low recall, meaning that the model has a hard time detecting the 1-star reviews, but high precision, meaning that when the model predicts a review is 1-star, it's usually correct. - Class 5 has high recall and precision, probably because 5-star reviews have polarized language, and because the model has a lot of observations to learn from.
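The per-class calculations above generalize directly: precision for class *k* uses the *k*-th column of the confusion matrix, while recall and support use the *k*-th row. Here is a small pure-Python sketch of that logic; the 3x3 matrix is made up for illustration and is not the Yelp confusion matrix:

```python
# Rows are true classes, columns are predicted classes (illustrative numbers).
cm = [
    [50, 3, 2],   # true class 0
    [10, 30, 5],  # true class 1
    [4, 6, 40],   # true class 2
]

def class_metrics(cm, k):
    """Precision, recall, F1, and support for class k of a confusion matrix."""
    col_sum = sum(row[k] for row in cm)   # all predictions of class k
    row_sum = sum(cm[k])                  # all observations truly in class k
    precision = cm[k][k] / col_sum
    recall = cm[k][k] / row_sum
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1, row_sum

p, r, f1, support = class_metrics(cm, 0)
print(p)        # 50/64 = 0.78125
print(support)  # 55
```

The same four numbers are what `classification_report` prints per class, so this is a handy way to check your reading of the matrix.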
``` from numpy import pi from qiskit import (Aer, ClassicalRegister, QuantumCircuit, QuantumRegister, assemble, transpile) from qiskit.visualization import plot_histogram, plot_bloch_vector def create_QuantumRegister(size=1, name='q'): return QuantumRegister(size, name) def create_ClassicalRegister(size=1, name='c'): return ClassicalRegister(size, name) def generate_QuantumCircuit(quantum_register, classical_register): return QuantumCircuit(quantum_register, classical_register) ``` # Block 1: Convert integer to binary for initialization phase ``` # Integer Input num_1 = 3 num_2 = 3 def binarize(num): return bin(num)[2:] ``` # Block 2: Design a circuit ``` def adder(num_1, num_2): qr, c, q1, q2 = create_register(num_1, num_2) qc = init_circuit(num_1, num_2) init_1 = [0 for _ in range(2**len(q1))] init_2 = [0 for _ in range(2**len(q1))] init_1[num_1] = 1 init_2[num_2] = 1 qc.initialize(init_1, q1) qc.initialize(init_2, q2) qc.h(q2) qc.z(q1) for i in range(len(q1)): qc.cx(q1[i], qr[i]) qc.cx(q2[i], qr[i]) qc.toffoli(q1[i], q2[i], qr[i + 1]) qc.measure(qr, c) return qc def simulate(qc): simulator = Aer.get_backend('aer_simulator') qc = transpile(qc, simulator) # Run and get counts result = simulator.run(qc).result() counts = result.get_counts(qc) plot_histogram(counts, title='Bell-State counts', number_to_keep=len(counts)) return counts qc = adder(22, 3) counts = simulate(qc) plot_histogram(counts, number_to_keep=len(counts)) num = max(counts.items())[0] hex(0b101111) ``` # Modified ``` def bit_length_init(*num): bit_length = 0 for n in num: bit_length = max(bit_length, len(binarize(n))) return bit_length def create_register(*num): length = bit_length_init(*num) params = [] params.append(QuantumRegister(length + int((len(num) + 1) / 2), 'qr')) params.append(ClassicalRegister(length + int((len(num) + 1) / 2), 'c')) for i in range(len(num)): params.append(QuantumRegister(length, 'q{}'.format(i))) return tuple(params) def init_circuit(*num): params =
create_register(*num) print(params) return QuantumCircuit(*params) def adder(*num): qr, c, q1, q2 = create_register(*num) qc = init_circuit(*num) init_1 = [0 for _ in range(2**len(q1))] init_2 = [0 for _ in range(2**len(q1))] init_1[num[0]] = 1 init_2[num[1]] = 1 qc.initialize(init_1, q1) qc.initialize(init_2, q2) for i in range(len(q1)): qc.cx(q1[i], qr[i]) qc.cx(q2[i], qr[i]) qc.toffoli(q1[i], q2[i], qr[i + 1]) qc.measure(qr, c) return qc qc = adder(3, 2) simulate(qc) ```
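The register sizing in `create_register` rests on two small pure-Python helpers, `binarize` and `bit_length_init`. Stripped of Qiskit they can be checked on their own; for positive integers the built-in `int.bit_length()` gives the same width:

```python
def binarize(num):
    """Binary representation of a non-negative integer, without the '0b' prefix."""
    return bin(num)[2:]

def bit_length_init(*nums):
    """Bit width of the widest operand -- this sets the size of each input register."""
    return max(len(binarize(n)) for n in nums)

print(binarize(22))            # '10110'
print(bit_length_init(22, 3))  # 5
print(bit_length_init(3, 2))   # 2, matching Python's (3).bit_length()
```

This also shows why `adder(22, 3)` needs wider registers than `adder(3, 2)`: the register length tracks the larger operand.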
## Fashion Classification with Monk and Densenet ## Blog Post -- [LINK]() #### Explanation of Dense blocks and Densenets [BLOG LINK](https://towardsdatascience.com/review-densenet-image-classification-b6631a8ef803) This is an excellent read comparing Densenets with other architectures and why Dense blocks achieve better accuracy while training fewer parameters. #### Setup Monk We begin by setting up Monk and installing dependencies for Colab ``` !git clone https://github.com/Tessellate-Imaging/monk_v1 cd monk_v1 !pip install -r installation/requirements_cu10.txt cd .. ``` #### Prepare Dataset Next we grab the dataset. Credits to the original dataset -- [Kaggle](https://www.kaggle.com/paramaggarwal/fashion-product-images-small) ``` !wget https://www.dropbox.com/s/wzgyr1dx4sejo5u/dataset.zip %%capture !unzip dataset.zip ``` **Note** : Pytorch backend requires the images to have 3 channels when loading. We prepare a modified dataset for the same. ``` !mkdir mod_dataset !mkdir mod_dataset/images import cv2 import numpy as np from glob import glob from tqdm import tqdm def convert23channel(imagePath): #gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) img = cv2.imread(imagePath) img2 = np.zeros_like(img) b,g,r = cv2.split(img) img2[:,:,0] = b img2[:,:,1] = g img2[:,:,2] = r return img2 imageList = glob("./dataset/images/*.jpg") for i in tqdm(imageList): inPath = i out = convert23channel(inPath) outPath = "./mod_dataset/images/{}".format(inPath.split('/')[-1]) cv2.imwrite(outPath,out) ``` #### Data exploration [DOCUMENTATION](https://clever-noyce-f9d43f.netlify.com/#/compare_experiment) ``` import pandas as pd gt = pd.read_csv("./dataset/styles.csv",error_bad_lines=False) gt.head() ``` The dataset labels have multiple classification categories. We will train the sub category labels. Extract the sub category labels for images. The image id fields require image names with extension.
``` label_gt = gt[['id','subCategory']] label_gt['id'] = label_gt['id'].astype(str) + '.jpg' label_gt.to_csv('./mod_dataset/subCategory.csv',index=False) ``` # Pytorch with Monk ## Create an Experiment [DOCS](https://clever-noyce-f9d43f.netlify.com/#/quick_mode/quickmode_pytorch) Import Monk library ``` import os import sys sys.path.append("./monk_v1/monk/"); import psutil from pytorch_prototype import prototype ``` ## Experiment 1 with Densenet121 Create a new experiment ``` ptf = prototype(verbose=1); ptf.Prototype("fashion", "exp1"); ``` Load the training images and ground truth labels for sub category classification. We select **densenet121** as our neural architecture and set number of epochs to **5** ``` ptf.Default(dataset_path="./mod_dataset/images", path_to_csv="./mod_dataset/subCategory.csv", model_name="densenet121", freeze_base_network=True, num_epochs=5); ``` **Note** : The dataset has a few missing images. We can find the missing and corrupt images by performing EDA ## EDA documentation [DOCS](https://clever-noyce-f9d43f.netlify.com/#/aux_functions) ``` ptf.EDA(check_missing=True, check_corrupt=True); ``` #### Clean the labels file ``` corruptImageList = ['39403.jpg','39410.jpg','39401.jpg','39425.jpg','12347.jpg'] def cleanCSV(csvPath,labelColumnName,imageIdColumnName,appendExtension=False,extension = '.jpg',corruptImageList = []): gt = pd.read_csv(csvPath, error_bad_lines=False) print("LABELS\n{}".format(gt["{}".format(labelColumnName)].unique())) label_gt = gt[["{}".format(imageIdColumnName),"{}".format(labelColumnName)]] if appendExtension == True: label_gt['id'] = label_gt['id'].astype(str) + extension for i in corruptImageList: label_gt = label_gt[label_gt.id != i] print("Total images : {}".format(label_gt.shape[0])) return label_gt subCategory_gt = cleanCSV('./dataset/styles.csv','subCategory','id',True,'.jpg',corruptImageList) subCategory_gt.to_csv("./mod_dataset/subCategory_cleaned.csv",index=False) ``` ## Update the experiment 
[DOCS](https://clever-noyce-f9d43f.netlify.com/#/update_mode/update_dataset) Now that we have a clean ground truth labels file and modified images, we can update the experiment to take these as our inputs. **Note** Remember to reload the experiment after any updates. Check out the docs -- [DOCUMENTATION](https://clever-noyce-f9d43f.netlify.com/#/update_mode/update_dataset) ``` ptf.update_dataset(dataset_path="./mod_dataset/images",path_to_csv="./mod_dataset/subCategory_cleaned.csv"); ptf.Reload() ``` #### Start Training ``` ptf.Train() ``` After training for 5 epochs we reach a validation accuracy of 89%, which is quite good. Let's see if other densenet architectures can help improve this performance. ## Experiment 2 with Densenet169 ``` ptf = prototype(verbose=1); ptf.Prototype("fashion", "exp2"); ptf.Default(dataset_path="./mod_dataset/images", path_to_csv="./mod_dataset/subCategory_cleaned.csv", model_name="densenet169", freeze_base_network=True, num_epochs=5); ptf.Train() ``` We do improve the validation accuracy but not by much. Next we run the experiment with densenet201. ## Experiment 3 with Densenet201 ``` ptf = prototype(verbose=1); ptf.Prototype("fashion", "exp3"); ptf.Default(dataset_path="./mod_dataset/images", path_to_csv="./mod_dataset/subCategory_cleaned.csv", model_name="densenet201", freeze_base_network=True, num_epochs=5); ptf.Train() ``` We can see that the 3 versions of densenet give us quite similar results.
We can quickly compare the experiments to see variations in losses and training times to choose a fitting experiment. ## Compare experiments [DOCS](https://clever-noyce-f9d43f.netlify.com/#/compare_experiment) ``` from compare_prototype import compare ctf = compare(verbose=1); ctf.Comparison("Fashion_Pytorch_Densenet"); ctf.Add_Experiment("fashion", "exp1"); ctf.Add_Experiment("fashion", "exp2"); ctf.Add_Experiment("fashion", "exp3"); ctf.Generate_Statistics(); ``` # Gluon with Monk Let's repeat the same experiments using a different backend framework, **Gluon** ``` from gluon_prototype import prototype ``` #### Experiment 4 with Densenet121 ``` %%capture gtf = prototype(verbose=1); gtf.Prototype("fashion", "exp4"); gtf.Default(dataset_path="./mod_dataset/images", path_to_csv="./mod_dataset/subCategory_cleaned.csv", model_name="densenet121", freeze_base_network=True, num_epochs=5); gtf.Train() ``` #### Experiment 5 with Densenet169 ``` %%capture gtf = prototype(verbose=1); gtf.Prototype("fashion", "exp5"); gtf.Default(dataset_path="./mod_dataset/images", path_to_csv="./mod_dataset/subCategory_cleaned.csv", model_name="densenet169", freeze_base_network=True, num_epochs=5); gtf.Train() ``` #### Experiment 6 with Densenet201 ``` %%capture gtf = prototype(verbose=1); gtf.Prototype("fashion", "exp6"); gtf.Default(dataset_path="./mod_dataset/images", path_to_csv="./mod_dataset/subCategory_cleaned.csv", model_name="densenet201", freeze_base_network=True, num_epochs=5); gtf.Train() ``` Let's compare the performance of the Gluon backend and DenseNet architectures. ``` ctf = compare(verbose=1); ctf.Comparison("Fashion_Gluon_Densenet"); ctf.Add_Experiment("fashion", "exp4"); ctf.Add_Experiment("fashion", "exp5"); ctf.Add_Experiment("fashion", "exp6"); ctf.Generate_Statistics(); ``` We can also compare how Pytorch and Gluon fared with our training, but before that let's use the Keras backend to train densenets and compare all three frameworks together.
# Keras with Monk ``` from keras_prototype import prototype ``` #### Experiment 7 with Densenet121 ``` %%capture ktf = prototype(verbose=1); ktf.Prototype("fashion", "exp7"); ktf.Default(dataset_path="./mod_dataset/images", path_to_csv="./mod_dataset/subCategory_cleaned.csv", model_name="densenet121", freeze_base_network=True, num_epochs=5); ktf.Train() ``` #### Experiment 8 with Densenet169 ``` %%capture ktf = prototype(verbose=1); ktf.Prototype("fashion", "exp8"); ktf.Default(dataset_path="./mod_dataset/images", path_to_csv="./mod_dataset/subCategory_cleaned.csv", model_name="densenet169", freeze_base_network=True, num_epochs=5); ktf.Train() ``` #### Experiment 9 with Densenet201 ``` %%capture ktf = prototype(verbose=1); ktf.Prototype("fashion", "exp9"); ktf.Default(dataset_path="./mod_dataset/images", path_to_csv="./mod_dataset/subCategory_cleaned.csv", model_name="densenet201", freeze_base_network=True, num_epochs=5); ktf.Train() ``` # Compare experiments After using different architectures and backend frameworks, let's compare their performance on accuracy, losses and resource usage. ``` ctf = compare(verbose=1); ctf.Comparison("Fashion_Densenet_Compare"); ctf.Add_Experiment("fashion", "exp1"); ctf.Add_Experiment("fashion", "exp2"); ctf.Add_Experiment("fashion", "exp3"); ctf.Add_Experiment("fashion", "exp4"); ctf.Add_Experiment("fashion", "exp5"); ctf.Add_Experiment("fashion", "exp6"); ctf.Add_Experiment("fashion", "exp7"); ctf.Add_Experiment("fashion", "exp8"); ctf.Add_Experiment("fashion", "exp9"); ctf.Generate_Statistics(); ``` You can find the generated plots inside **workspace/comparison/Fashion_Densenet_Compare** Let's visualise the training accuracy and GPU utilisation plots. ``` from IPython.display import Image Image('workspace/comparison/Fashion_Densenet_Compare/train_accuracy.png') Image('workspace/comparison/Fashion_Densenet_Compare/stats_training_time.png') ```
# XGBoost Ensemble tree model: $$\hat{y}_{i} = \sum_{k=1}^{K}f_{k}(x_{i}), f_{k}\in\mathcal{F}$$ The objective function: $$\mbox{obj} = \sum_{i=1}^{n}l(y_{i}, \hat{y}_{i}) + \sum_{k=1}^{K}\Omega(f_{k})$$ where $l$ is the loss term, $\Omega$ is the regularization term. ## Second Order Objective Step by step: $$ \begin{equation} \begin{split} \mbox{obj}^{(t)} =& \sum_{i=1}^{n}l(y_{i}, \hat{y}_{i}^{(t)}) + \sum_{k=1}^{t}\Omega(f_{k}) \\ =& \sum_{i=1}^{n}l(y_{i}, \hat{y}_{i}^{(t - 1)} + f_{t}(x_{i})) + \Omega(f_{t}) + \mbox{constant} \end{split} \end{equation} $$ Take the Taylor expansion up to the second order: $$\mbox{obj}^{(t)} \approx \sum_{i=1}^{n}[l(y_{i}, \hat{y}_{i}^{(t - 1)}) + g_{i}f_{t}(x_{i}) + \frac{1}{2}h_{i}f_{t}(x_{i})^{2}] + \Omega(f_{t}) + \mbox{constant}$$ $$g_{i} = \frac{\partial l(y_{i}, \hat{y}_{i}^{(t - 1)})}{\partial \hat{y}_{i}^{(t-1)}}$$ $$h_{i} = \frac{\partial^{2} l(y_{i}, \hat{y}_{i}^{(t - 1)})}{{\partial \hat{y}_{i}^{(t-1)}}^2}$$ After removing all the constants, the specific objective at step $t$ becomes: $$\sum_{i=1}^{n}[g_{i}f_{t}(x_{i}) + \frac{1}{2}h_{i}f_{t}(x_{i})^{2}] + \Omega(f_{t})$$ ## Optimization Given Tree Structure We can define the complexity as ( $w_{j}$ is the value of the $j$-th leaf ): $$\Omega(f_{t}) = \gamma{T} + \frac{1}{2}\lambda\sum_{j=1}^{T}w_{j}^{2}$$ Rearrange the loss function: $$ \begin{equation} \begin{split} \mbox{obj}^{(t)} = & \sum_{i=1}^{n}[g_{i}f_{t}(x_{i}) + \frac{1}{2}h_{i}f_{t}(x_{i})^{2}] + \gamma{T} + \frac{1}{2}\lambda\sum_{j=1}^{T}w_{j}^{2} \\ =& \sum_{j=1}^{T}[(\sum_{i\in{I_{j}}}g_{i})w_{j} + \frac{1}{2}(\sum_{i\in{I_j}}h_{i} + \lambda)w_{j}^2] + \gamma{T} \end{split} \end{equation} $$ where $I_{j} = \{i|q_{i}=j\}$ is the set of indices of data-points assigned to the $j$-th leaf.
Denote $G_{j} = \sum_{i\in{I_{j}}}g_{i}, H_{j} = \sum_{i\in{I_j}}h_{i}$: $$\mbox{obj}^{(t)} = \sum_{j=1}^{T}[G_{j}w_{j} + \frac{1}{2}(H_{j} + \lambda)w_{j}^2] + \gamma{T}$$ Quadratic optimization result: $$w_{j}^{\ast} = -\frac{G_{j}}{H_{j} + \lambda}$$ $$\mbox{obj}^{\ast} = -\frac{1}{2}\sum_{j=1}^{T}\frac{G_{j}^2}{H_{j} + \lambda} + \gamma{T}$$ ## Learn Tree Structure When we try to split a leaf into two leaves, the score gain is: $$\mbox{Gain} = \frac{1}{2}\left [\frac{G_L^2}{H_L + \lambda} + \frac{G_R^2}{H_R + \lambda} - \frac{(G_L + G_R)^2}{H_L + H_R + \lambda}\right ] - \gamma$$ Selecting the split that maximizes this Gain builds the tree structure step by step. ## Examples ``` """quadratic dataset""" import numpy as np np.random.seed(42) X = np.random.rand(100, 1) - 0.5 y = 3 * X[:, 0] ** 2 + 0.05 * np.random.randn(100) """basic xgboost""" import xgboost reg = xgboost.XGBRegressor() reg.fit(X, y) ```
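As a numeric sanity check of the formulas above, here is a pure-Python sketch computing the optimal leaf weight $w^{\ast} = -G/(H+\lambda)$ and the split gain. The gradient/hessian sums and the regularization parameters are made up for illustration, not taken from a real dataset:

```python
def leaf_weight(G, H, lam):
    """Optimal leaf value w* = -G / (H + lambda)."""
    return -G / (H + lam)

def leaf_score(G, H, lam):
    """Contribution of one leaf to the (negated) objective: G^2 / (H + lambda)."""
    return G * G / (H + lam)

def split_gain(GL, HL, GR, HR, lam, gamma):
    """Score gain from splitting one leaf into a left and a right child."""
    return 0.5 * (leaf_score(GL, HL, lam) + leaf_score(GR, HR, lam)
                  - leaf_score(GL + GR, HL + HR, lam)) - gamma

# Illustrative sums of gradients/hessians on the two sides of a candidate split
GL, HL = -4.0, 3.0
GR, HR = 2.0, 2.0
lam, gamma = 1.0, 0.1

print(leaf_weight(GL, HL, lam))                # 1.0
print(split_gain(GL, HL, GR, HR, lam, gamma))  # ~2.2333, so this split is worth taking
```

A tree learner evaluates `split_gain` for every candidate feature threshold and keeps the split with the largest positive gain; a negative gain means the $\gamma$ penalty outweighs the improvement and the leaf should not be split.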
``` import sys, os pardir = os.path.abspath(os.path.join(os.path.dirname('__file__'), os.path.pardir)) sys.path.append(pardir) import numpy as np from scipy.io import wavfile from aubio import onset from scipy.signal import butter, lfilter import matplotlib.pyplot as plt import dsp from read_labels import read_labels %matplotlib inline rootdir = '/home/tracek/Data/Birdman/' filename = os.path.join(rootdir, 'raw/STHELENA-02_20140605_200000_1.wav') outdir = os.path.join(rootdir, 'raw/samples/') sheets = read_labels('/home/tracek/Data/Birdman/labels/sthelena_labels.xls') filename_noext = os.path.splitext(os.path.basename(filename))[0] sheet = sheets[filename_noext] # in seconds [s] signal_start_s = 0 signal_end_s = 95 min_duration_s = 0.200 max_duration_s = 0.750 win = 256 # samples hop = win // 2 onset_detector = 'hfc' # other options: complex, phase, specdiff, kl, mkl, specflux, energy onset_threshold = 0.01 onset_silence_threshold = -70 highpass_cut = 500 # Hz write_waves = False sr, signal = wavfile.read(filename) scaling_factor = 2**15 # signal.max() signal_norm = signal.astype('float32') / scaling_factor condition = (sheet['Time Start'] > signal_start_s) & (sheet['Time End'] < signal_end_s) sheet_selected = sheet[condition] signal_selected = signal_norm[int(signal_start_s * sr): int(signal_end_s * sr)] signal_filtered = dsp.highpass_filter(signal_selected, sr=sr, highcut=highpass_cut) signal_windowed = np.array_split(signal_filtered, np.arange(hop, len(signal_filtered), hop)) def onset_in_call(onset, calls_list, buffer=0): for index, call in calls_list.iterrows(): if call['Time Start'] - buffer <= onset <= call['Time End'] + buffer: return call['Species'] else: return None def plot_specgram_extra(signal, sr, nfft, hop, signal_start_s, signal_end_s, labels=None, onsets=None): x_spacing = np.linspace(signal_start_s, signal_end_s, len(signal)) fig = plt.figure(figsize=(15,5)) ax = fig.add_subplot(111) if labels is not None: for index, row in labels.iterrows(): if 
row['Species'] == 'Storm Petrel': ax.axvspan(xmin=row['Time Start'], xmax=row['Time End'], color='red', alpha=0.1) else: ax.axvspan(xmin=row['Time Start'], xmax=row['Time End'], color='green', alpha=0.2) if onsets: for onset in onsets: ax.axvline(x=onset, color='blue', alpha=0.3) if onset_in_call(onset, labels, buffer=0): plt.plot(onset, 6000, 'ro') spec = plt.specgram(signal_filtered, Fs=sr, NFFT=nfft, noverlap=hop, xextent=(signal_start_s, signal_end_s)) onsets = dsp.get_onsets(signal_filtered, sr, nfft=win, hop=hop, onset_detector_type=onset_detector, onset_threshold=onset_threshold, onset_silence_threshold=onset_silence_threshold, min_duration_s=min_duration_s) plot_specgram_extra(signal_selected, sr, win, hop, signal_start_s, signal_end_s, sheet_selected, onsets) slices_fixed = dsp.get_slices(signal_filtered, sr, nfft=win, hop=hop, onset_detector_type=onset_detector, onset_threshold=onset_threshold, onset_silence_threshold=onset_silence_threshold, min_duration_s=min_duration_s, max_duration_s=max_duration_s) print('No slices fixed: ', len(slices_fixed)) slices_dyn = dsp.get_slices(signal_filtered, sr, nfft=win, hop=hop, onset_detector_type=onset_detector, onset_threshold=onset_threshold, onset_silence_threshold=onset_silence_threshold, min_duration_s=min_duration_s, max_duration_s=None, method='dynamic') print('No slices dynamic: ', len(slices_dyn)) slices_fixed_len = [end - start for start, end in slices_fixed] slices_dyn_len = [end - start for start, end in slices_dyn] slices_fixed_len h = plt.hist(slices_dyn_len, bins=20) # sample *= 32767 if write_waves: for start, end in chunks_s[1:]: basename = os.path.basename(filename) dirname = os.path.dirname(filename) call = onset_in_call(start, sheet_selected, buffer=min_duration_s/2) if call: chunk_name = '{}_{:07.3f}_{:07.3f}_{}.wav'.format(os.path.splitext(basename)[0], start, end, call) else: chunk_name = '{}_{:07.3f}_{:07.3f}.wav'.format(os.path.splitext(basename)[0], start, end) start_sample = int(start * sr) 
end_sample = int(end * sr) chunk_path_out = os.path.join(dirname, chunk_name) wavfile.write(chunk_path_out, sr, signal[start_sample: end_sample]) ```
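The labelling above hinges on `onset_in_call`, which matches a detected onset time against the hand-labelled call intervals. Stripped of pandas, it is a simple interval lookup; a minimal sketch with made-up intervals (note that `return None` sits after the loop, so every interval is checked before giving up):

```python
# Each call is (start_s, end_s, species); the values are illustrative.
calls = [
    (1.2, 1.9, 'Storm Petrel'),
    (4.0, 4.6, 'Noddy'),
]

def onset_in_call(onset, calls, buffer=0.0):
    """Return the species whose labelled interval contains the onset, else None."""
    for start, end, species in calls:
        if start - buffer <= onset <= end + buffer:
            return species
    return None  # only after no interval matched

print(onset_in_call(1.5, calls))               # 'Storm Petrel'
print(onset_in_call(3.95, calls, buffer=0.1))  # 'Noddy' (buffer widens the interval)
print(onset_in_call(3.0, calls))               # None
```

The `buffer` argument plays the same role as the `min_duration_s/2` padding used when naming the exported wave chunks: an onset detected slightly before a labelled call still counts as that call.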
This is a reimplementation of [convolutional variational autoencoder](https://github.com/tensorflow/docs/blob/master/site/en/tutorials/generative/cvae.ipynb) using `autodiff`. ``` import gzip import numpy as np import autodiff as ad from autodiff import initializers from autodiff import optimizers train_size = 60000 batch_size = 32 latent_size = 2 def read_mnist_images(fn): with gzip.open(fn, 'rb') as f: content = f.read() num_images = int.from_bytes(content[4:8], byteorder='big') height = int.from_bytes(content[8:12], byteorder='big') width = int.from_bytes(content[12:16], byteorder='big') images = np.fromstring(content[16:], dtype=np.uint8).reshape((num_images, height, width)) return images train_images = read_mnist_images('train-images-idx3-ubyte.gz') test_images = read_mnist_images('t10k-images-idx3-ubyte.gz') def preprocess_images(images): images = images.reshape((images.shape[0], 28, 28, 1)) / 255. return np.where(images > .5, 1.0, 0.0).astype('float32') train_images = preprocess_images(train_images) test_images = preprocess_images(test_images) hui = initializers.GlorotUniformInitializer() zi = initializers.ZerosInitializer() oi = initializers.OnesInitializer() def build_encoder_variables(): conv0_weight = ad.variable((3, 3, 1, 32), hui) conv0_biases = ad.variable((32,), zi) conv1_weight = ad.variable((3, 3, 32, 64), hui) conv1_biases = ad.variable((64,), zi) dense0_weight = ad.variable((2304, latent_size*2), hui) dense0_biases = ad.variable((latent_size*2,), zi) var_list = [conv0_weight, conv0_biases, conv1_weight, conv1_biases, dense0_weight, dense0_biases] return var_list def build_encoder(images, var_list): [conv0_weight, conv0_biases, conv1_weight, conv1_biases, dense0_weight, dense0_biases] = var_list conv0 = ad.relu(ad.conv2d(images, conv0_weight, [2, 2], 'VALID') + conv0_biases) conv1 = ad.relu(ad.conv2d(conv0, conv1_weight, [2, 2], 'VALID') + conv1_biases) reshaped = ad.reshape(conv1, (batch_size, -1)) params = ad.matmul(reshaped, dense0_weight) + 
dense0_biases return params def build_decoder_variables(): dense0_weight = ad.variable((latent_size, 7*7*32), hui) dense0_biases = ad.variable((7*7*32,), zi) tconv0_weight = ad.variable((3, 3, 64, 32), hui) tconv0_biases = ad.variable((64,), zi) tconv1_weight = ad.variable((3, 3, 32, 64), hui) tconv1_biases = ad.variable((32,), zi) tconv2_weight = ad.variable((3, 3, 1, 32), hui) tconv2_biases = ad.variable((1,), zi) var_list = [dense0_weight, dense0_biases, tconv0_weight, tconv0_biases, tconv1_weight, tconv1_biases, tconv2_weight, tconv2_biases] return var_list def build_decoder(z, var_list): dense0_weight, dense0_biases, tconv0_weight, tconv0_biases, tconv1_weight, tconv1_biases, tconv2_weight, tconv2_biases = var_list dense0 = ad.relu(ad.matmul(z, dense0_weight) + dense0_biases) reshaped = ad.reshape(dense0, [-1, 7, 7, 32]) tconv0 = ad.relu(ad.conv2d_transpose(reshaped, tconv0_weight, [2, 2], 'SAME') + tconv0_biases) tconv1 = ad.relu(ad.conv2d_transpose(tconv0, tconv1_weight, [2, 2], 'SAME') + tconv1_biases) tconv2 = ad.conv2d_transpose(tconv1, tconv2_weight, [1, 1], 'SAME') + tconv2_biases return tconv2 log2pi = np.log(2. 
* np.pi).astype('float32') def log_normal_pdf(sample, mean, logvar, axis=1): return ad.reduce_sum( -.5 * ((sample - mean) * (sample - mean) * ad.exp(-logvar) + logvar + log2pi), axis=axis) graph = ad.Graph() with graph.as_default_graph(): images = ad.placeholder((batch_size, 28, 28, 1)) encoder_var_list = build_encoder_variables() params = build_encoder(images, encoder_var_list) mean = ad.slice(params, [0, 0], [batch_size, latent_size]) logvar = ad.slice(params, [0, latent_size], [batch_size, latent_size]) eps = ad.placeholder(mean.shape) z = eps * ad.exp(logvar * .5) + mean decoder_var_list = build_decoder_variables() x_logits = build_decoder(z, decoder_var_list) cross_ent = ad.sigmoid_cross_entropy_loss(images, x_logits) logpx_z = -ad.reduce_sum(cross_ent, axis=[1, 2, 3]) logpz = ad.reduce_sum(-0.5 * (z * z + log2pi), axis=1) logqz_x = log_normal_pdf(z, mean, logvar) loss = -ad.reduce_mean(logpx_z + logpz - logqz_x) graph.initialize_variables() runtime = ad.RunTime() graph.set_runtime(runtime) var_list = encoder_var_list + decoder_var_list optimizer = optimizers.AdamOptimizer(alpha=1e-4, beta1=0.9, beta2=0.999, epsilon=1e-7) epochs = 10 for i in range(epochs): indices = np.random.permutation(train_images.shape[0]) for j in range(train_images.shape[0] // batch_size): feed_dict = { images: train_images[j:j+batch_size], eps: np.random.normal(size=(batch_size, latent_size))} with runtime.forward_backward_cycle(): grads_and_vars = optimizer.compute_gradients(loss, feed_dict, var_list) loss_val = loss.forward(feed_dict) optimizer.apply_gradients(grads_and_vars) if j % 100 == 0: print('epochs', i, 'iteration', j, 'loss', loss_val) graph.save_variables('epoch_%d' % i) print() graph.save_variables('final') n = 20 digit_size = 28 norm_x = np.random.normal(size=(1000,)) norm_y = np.random.normal(size=(1000,)) grid_x = np.quantile(norm_x, np.linspace(0.05, 0.95, n)) grid_y = np.quantile(norm_y, np.linspace(0.05, 0.95, n)) image_width = digit_size*n image_height = image_width 
image = np.zeros((image_height, image_width)) test_graph = ad.Graph() with test_graph.as_default_graph(): z = ad.placeholder((1, latent_size)) encoder_var_list = build_encoder_variables() decoder_var_list = build_decoder_variables() x_logits = build_decoder(z, decoder_var_list) weights = np.load('epoch_2.npy', allow_pickle=True).item() for i, v in enumerate(decoder_var_list): v.set_val(weights[v.name]) sigmoid = lambda x: 1. / (1 + np.exp(-x)) runtime = ad.RunTime() test_graph.set_runtime(runtime) import matplotlib.pyplot as plt for i, yi in enumerate(grid_x): for j, xi in enumerate(grid_y): z_value = np.array([[xi, yi]]) feed_dict = {z: z_value} with runtime.forward_backward_cycle(): x_logits_value = x_logits.forward(feed_dict) probs_value = sigmoid(x_logits_value) digit = np.reshape(probs_value[0], (digit_size, digit_size)) image[i * digit_size: (i + 1) * digit_size, j * digit_size: (j + 1) * digit_size] = digit plt.figure(figsize=(20, 20)) plt.imshow(image, cmap='Greys_r') plt.axis('Off') ```
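The `log_normal_pdf` helper used for `logqz_x` is just the diagonal-Gaussian log-density parameterized by `mean` and `logvar`, summed over latent dimensions. A pure-Python sketch of the per-dimension term, checked against the familiar standard-normal peak value:

```python
import math

log2pi = math.log(2 * math.pi)

def log_normal_pdf_1d(x, mean, logvar):
    """Log-density of N(mean, exp(logvar)) at x, matching the notebook's formula."""
    return -0.5 * ((x - mean) ** 2 * math.exp(-logvar) + logvar + log2pi)

# At x = mean with unit variance (logvar = 0) this is log(1/sqrt(2*pi))
print(log_normal_pdf_1d(0.0, 0.0, 0.0))              # ~ -0.9189
# Doubling the variance lowers the peak log-density by 0.5*log(2)
print(log_normal_pdf_1d(0.0, 0.0, math.log(2.0)))    # ~ -1.2655
```

Writing the variance as `exp(logvar)` keeps it strictly positive without constraining the encoder's output, which is why the ELBO terms `logpz` and `logqz_x` above are expressed this way.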
To run this notebook on your own RescueTime data, go to https://exploratory.openhumans.org/notebook/161/. If you don't have an account on OpenHumans, it only takes a couple of minutes to make one and [add your RescueTime](https://www.openhumans.org/activity/rescuetime-connection/) credentials (no API knowledge necessary). Once you are done with the sign up and the one step data connection, edit the input sections (the date range is the only one you really need to modify) and run the notebook :) # Where does one's time go in a day? In this notebook, I will visualize a specified range of days using a **modified** stacked bar chart with data from `RescueTime` (supported) and `Fitbit` (not yet supported). Visualizing my data using a stacked 24-hour plot helps me quickly identify where my time is going and which activities I am spending time on. The anomalies in this chart are just as interesting as the patterns that arise. For more questions, please feel free to reach out to @paula on OpenHumans.org. ## Steps 1. Define your inputs - This is the only step that YOU need to make slight modifications to!! 2. Get the data 3. Reformat the data - Transform it to be on the minute and add tags for every minute of the day (don't worry this is all automatic). To see a more helpful guide for understanding the APIs, I recommend the `rescue-vs-step-counts` notebook. 4.
Visualize - Stacked Bar Chart --- ## Step 1: Enter Inputs #### Date Range Either enter number of days from today or enter a date range ``` USING_DAYS_FROM_TODAY = False # If True: modify the number below DAYS_FROM_TODAY = 14 # If False: modify the date range below START_DATE = '2020-05-11' END_DATE ='2020-05-26' ``` #### Data Sources Note: Currently this notebook only supports RescueTime ``` USING_RESCUETIME = True USING_FITBIT = False ``` #### Level of Detail Select granularity of data RescueTime Options: `activity`, `category`, `productivity` (Example values: youtube.com, Video, -2) ``` RESC_DETAIL = 'category' ``` #### Colors Specify colors for known tag names, all remaining unspecified tags will be assigned a random color. ``` MY_COLORS = {'asleep': '#783f04ff', 'device': '#70a1f5', 'active': '#f1c232ff', 'restless': '#ffccf2', 'awake': '#7fffd4', 'no entry': '#ffffff', 'other': '#e8e8e8'} ``` #### *Optional*: Reduce Tag List Highlight only specific tag(s), otherwise all available categories will have a unique color. Adding tag names here will result in only those tags being highlighted in the final chart. 
``` # If this list is left empty, all categories will be shown and have a unique color HIGHLIGHT_ONLY_CATEGORY_LIST = [] ``` --- ## Step 2: Get the Data **Load modules and set up api** ``` import json import os import requests from ohapi import api from datetime import date, datetime, timedelta as td import matplotlib.dates as mdates import arrow import numpy as np import pandas as pd import random import matplotlib.mlab as mlab import matplotlib.pyplot as plt import seaborn as sns sns.set(color_codes=True) ``` **Chart Size** ``` fsize = 22 params = { 'axes.labelsize': fsize, 'axes.titlesize':fsize, 'axes.titlepad': 20, 'xtick.labelsize':fsize, 'xtick.major.pad': 5, 'ytick.labelsize':fsize, 'axes.labelpad': 20, 'lines.linewidth' : 3, 'figure.titlesize': fsize, 'figure.figsize' : (16,8), 'legend.title_fontsize': fsize, 'legend.fontsize': fsize #*0.925, } plt.rcParams.update(params) plt.close('all') ``` **Date Range** ``` if USING_DAYS_FROM_TODAY == True: START_DATE = (datetime.now() - pd.DateOffset(days=DAYS_FROM_TODAY)).strftime("%Y-%m-%d") END_DATE = (datetime.now() + pd.DateOffset(days=1)).strftime("%Y-%m-%d") # If number of days is False, then the user defined date range will be used print(f'Data from {START_DATE} through {END_DATE}, not including the latest date.') ``` **Colors** ``` # Display colors my_color_schema = MY_COLORS my_color_palette = [value for (key, value) in sorted(my_color_schema.items())] my_color_categories = [key for (key, value) in sorted(my_color_schema.items())] sns.palplot(my_color_palette) ``` **Open Humans Api** ``` user = api.exchange_oauth2_member(os.environ.get('OH_ACCESS_TOKEN')) ``` ### Data Source: RescueTime ``` fileurl = '' for entry in user['data']: if entry['source'] == "direct-sharing-149": fileurl = entry['download_url'] break rescuetime_data = requests.get(fileurl).json() date = [] time_spent_seconds = [] activity = [] category = [] productivity = [] for element in rescuetime_data['rows']: date.append(element[0]) 
    time_spent_seconds.append(element[1])
    activity.append(element[3])
    category.append(element[4])
    productivity.append(element[5])

date = [datetime.strptime(dt, "%Y-%m-%dT%H:%M:%S") for dt in date]

# Note: the first column holds the timestamp of each 5-minute bucket
rt_df = pd.DataFrame(data={
    'timestamp': date,
    'Seconds': time_spent_seconds,
    'activity': activity,
    'category': category,
    'productivity': productivity
})

# Remove the time but retain the date type
rt_df['Date'] = pd.to_datetime(rt_df['timestamp'].dt.date)

# We don't need all the data; use the date range specified
rt_df = rt_df[(rt_df['Date'] >= START_DATE) & (rt_df['Date'] <= END_DATE)].copy()

# Data is recorded in 5-minute buckets, but each row carries its duration in seconds,
# so we can re-create more accurate start/end timestamps
rt_df['Seconds Sum Per 5Min'] = rt_df.groupby(['timestamp'])['Seconds'].apply(lambda x: x.cumsum())
rt_df['Date End'] = rt_df['timestamp'] + pd.to_timedelta(rt_df['Seconds Sum Per 5Min'], unit='s')
rt_df['Date Start'] = rt_df['Date End'] - pd.to_timedelta(rt_df['Seconds'], unit='s')
```

### WIP Data Source: Fitbit

Requires intraday-level data.

```
# fileurl = ''
# for entry in user['data']:
#     if entry['source'] == "direct-sharing-102":
#         fileurl = entry['download_url']
#         break

# fitbit_data = requests.get(fileurl).json()

# date = []
# steps = []

# for year in fitbit_data['tracker-steps'].keys():
#     for entry in fitbit_data['tracker-steps'][year]['activities-tracker-steps']:
#         date.append(entry['dateTime'])
#         steps.append(entry['value'])

# fitbit_steps = pd.DataFrame(data={
#     'date': date,
#     'steps': steps})

# fitbit_steps['date'] = pd.to_datetime(fitbit_steps['date'])
# fitbit_steps = fitbit_steps.set_index('date')
```

---

## Step 3: Reshape the Data

```
def time_dataframe_prep(df, start_date, end_date, start_date_column,
                        end_date_column, category_column):
    """
    Returns an exploded dataframe, with every minute labeled with the event name or 'no entry'.
    Parameters
    ----------
    df : dataframe
        A dataframe that contains tagged timestamps
    start_date : str
        Date of first entry
    end_date : str
        Date of last entry
    start_date_column : datetime
        Column that contains when the event started
    end_date_column : datetime
        Column that contains when the event ended
    category_column : str
        Column that contains the event tag name

    Returns
    -------
    df_minutes_se : dataframe
        Table with every minute tagged
    """

    ########################
    ## Step 1: Create a dataframe of just the end dates
    ########################

    df_end = df[[end_date_column]].copy()

    # Add a column for 'no entry'
    df_end[category_column] = 'no entry'

    # If there is no gap in the data (i.e. an entry immediately follows the previous one),
    # remove the record from the df_end dataframe
    start_date_pt_list = list(df[start_date_column].unique())
    df_end = df_end[~df_end[end_date_column].isin(start_date_pt_list)]

    ########################
    ## Step 2: Combine end and start dates into a single dataframe
    ########################

    # Create a two-column dataframe with the start date and the category
    df_start = df[[start_date_column, category_column]].copy()

    # Update column names to match those of df_start
    df_end.rename(columns = {end_date_column: start_date_column}, inplace = True)

    # Append the df_end dataframe to the bottom
    df_entries = pd.concat([df_start, df_end])

    ########################
    ## Step 3: Expand Dataset - Every Second
    ########################

    # Create a dataframe of one-second intervals between the two dates
    time_range = pd.date_range(start_date, end_date, freq= '1s')
    time_range_df = pd.DataFrame(time_range).rename(columns = {0: 'date_time'})

    # Convert to time
    time_range_df['date_time'] = pd.to_datetime(time_range_df['date_time'])

    ########################
    ## Step 4: Add our time stamps to the expanded time dataframe
    ########################

    df_seconds = pd.merge(time_range_df, df_entries, how = 'left',
                          left_on = 'date_time', right_on = start_date_column)

    # Find the first date_time with a category entry
    date_of_first_entry = df_seconds[(df_seconds[category_column] != 'no entry') &
                                     (~df_seconds[category_column].isna())
                                    ]['date_time'].min()

    # Find the index of the first entry
    index_of_first_entry = df_seconds.index[df_seconds['date_time'] == date_of_first_entry][0]

    # Reduce the dataframe to begin with the first entry
    df_seconds2 = df_seconds[index_of_first_entry:].copy()

    ########################
    ## Step 5: Label every minute
    ########################

    # Forward-fill the category until the next entry
    df_seconds2[category_column] = df_seconds2[category_column].ffill()
    df_seconds2[start_date_column] = df_seconds2[start_date_column].ffill()

    ########################
    ## Step 6: Pick the end of a minute entry (at 58 seconds)
    ########################

    # Expand the time stamp into the relevant time components
    # df_seconds2[['hour','minute','second']] = pd.to_timedelta(
    #     df_seconds2['date_time']).dt.components.iloc[:, 1:4]
    df_seconds2['hour'] = df_seconds2['date_time'].dt.hour
    df_seconds2['minute'] = df_seconds2['date_time'].dt.minute
    df_seconds2['second'] = df_seconds2['date_time'].dt.second

    # Select the entries at the specified second interval (otherwise the frequency is too much for the chart)
    df_minutes = df_seconds2[df_seconds2['second'] == 58].reset_index()
    df_minutes['date_time_min'] = df_minutes['date_time'].values.astype('<M8[m]')

    ########################
    ## Step 7: Add duration columns
    ########################

    df_minutes['duration_minutes'] = 1

    # Find the index of the latest entry
    latest_date = df_minutes[df_minutes[category_column] != 'no entry']['date_time'].max()
    index_of_last_entry = df_minutes.index[df_minutes['date_time'] == latest_date][0]

    # Reduce the dataframe to end with the last entry
    df_minutes_se = df_minutes[0:index_of_last_entry].copy()

    return df_minutes_se


CATEGORY_COL_NAME_1 = RESC_DETAIL

# Expand our RescueTime data, as that is the bulk of my data
res_time_data = time_dataframe_prep(df = rt_df,
                                    start_date = START_DATE,
                                    end_date = END_DATE,
                                    start_date_column = 'Date Start',
                                    end_date_column = 'Date End',
                                    category_column = CATEGORY_COL_NAME_1)

res_time_data.sort_values('date_time_min', ascending = True).head(5)
```

## Step 4: Visualize Data

```
def create_chart_xy_components(d, date_time_column, start_date, end_date, category_column):
    """
    Returns a dataframe with columns specifically for visualizing the stacked bar plot.

    Parameters
    ----------
    d : dataframe
        Exploded dataframe
    date_time_column : datetime
        A column that contains a timestamp
    start_date : str
        Date of first entry
    end_date : str
        Date of last entry
    category_column : str
        Column that contains the event tag name

    Returns
    -------
    d : dataframe
        A dataframe with additional columns for plotting
    """

    # Reduce the dataframe to the specified date range
    d = d[(d[date_time_column] >= start_date) & (d[date_time_column] <= end_date)].copy()

    # Remove time stamps (x-axis)
    d['Date Abr'] = d['date_time'].dt.date

    # Add day of week
    d['date_week'] = d['date_time'].dt.strftime('%Y-%m-%d, %a')

    # A float conversion of time (y-axis scaled for a 24-hour period)
    d['time_from_day_start'] = (d[date_time_column] - d[date_time_column].dt.normalize()).dt.total_seconds().fillna(0)/(60*60)

    return d


def organize_categories_colors(d, category_column, my_color_categories, specified_category_list = []):
    """
    Returns two lists and a dictionary: color hashes, ordered tag names, and the tag/color pairs.

    Parameters
    ----------
    d : dataframe
        Exploded dataframe with additional columns for plotting
    category_column : str
        Column that contains the event tag name
    my_color_categories : str list
        List of tag names that already have a specified color
    specified_category_list : str list
        Optional list of event tag names to keep unique colors, while the rest are all uniform and grey

    Returns
    -------
    color_palette : list
        A list of color hashes
    category_list_names_ordered : list
        A list of event tag names that is ordered for legend display to match the color order.
    color_pairs_main : dictionary
        A combination of the tags and matching color hashes.
    """

    ### Colors & Categories
    category_list = list(d[category_column].unique())

    ## Which categories have not yet been assigned a color in the my_color_schema
    unknown_category_list = list(set(category_list) - set(my_color_categories))

    # Generate a random color palette for the tags that weren't specified previously
    r = lambda: random.randint(0, 255)
    long_color_list = []
    for i in range(0, len(unknown_category_list)):
        long_color_list.append('#%02X%02X%02X' % (r(), r(), r()))
    color_list = long_color_list

    # Zip colors
    color_pairs_new = dict(zip(unknown_category_list, color_list))

    # Add the category/color pairs already defined in the my_color_schema dictionary
    # (note: my_color_schema is read from the global scope)
    known_category_list = list(set(category_list) & set(my_color_categories))
    modified_my_color_schema = {x: my_color_schema[x] for x in known_category_list}

    # Combine
    color_pairs = {**color_pairs_new, **modified_my_color_schema}

    # Focus on only a subset of categories
    if specified_category_list != []:
        # Create a list with all but the specified entries
        category_list_remaining = category_list.copy()
        [category_list_remaining.remove(x) for x in specified_category_list]

        # Convert all the non-specified entries to the same color (makes it easier to visually inspect for patterns)
        color_pairs.update(dict.fromkeys(category_list_remaining, '#e8e8e8'))

    # Ordered categories and colors
    category_list_names_ordered = [key for (key, value) in sorted(color_pairs.items())]
    color_palette = [value for (key, value) in sorted(color_pairs.items())]

    return color_palette, category_list_names_ordered, color_pairs


def daily_chart_24_hours(d, category_column, category_list_names_ordered, color_palette,
                         add_reference_lines = False, top_line = 9, bottom_line = 17,
                         legend_on = False, turn_xaxis_on = True):
    """
    Returns a multi-day 24-hour stacked bar plot.
    Parameters
    ----------
    d : dataframe
        Exploded dataframe with additional columns for plotting
    category_column : str
        Column that contains the event tag name
    category_list_names_ordered : list
        Ordered string tags that match the color palette order
    color_palette : list
        List of color hashes that correspond to the tags

    Optional Parameters
    ----------
    add_reference_lines : boolean
        Display two horizontal lines to draw your eye to particular times
    top_line : int
        Military time value for 1st horizontal line
    bottom_line : int
        Military time value for 2nd horizontal line
    legend_on : boolean
        Display legend
    turn_xaxis_on : boolean
        Display x-axis date values
    """

    plt.style.use('fivethirtyeight')

    v_val = 0
    h_val = 200
    verts = list(zip([-h_val, h_val, h_val, -h_val], [-v_val, -v_val, v_val, v_val]))

    fig, ax = plt.subplots()

    for i in range(len(category_list_names_ordered)):
        plt.scatter(d[d[category_column] == category_list_names_ordered[i]]['Date Abr'],
                    d[d[category_column] == category_list_names_ordered[i]]['time_from_day_start'],
                    s = 2000,
                    c = color_palette[i],
                    marker = (verts)
                   )

    plt.yticks(np.arange(0, 25, step=6))

    xstart = d['Date Abr'].min() - pd.DateOffset(days=1)
    xend = d['Date Abr'].max() + pd.DateOffset(days=1)
    plt.xlim(xstart, xend)

    # Add labels with the day of the week at the end, ordered
    plt.gca().xaxis.set_major_locator(mdates.DayLocator(interval=1))
    plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%b %-d, %a'))

    # Remove the extra date at the front and end
    locs, labels = plt.xticks()
    date_label_list = list(d['date_time'].dt.strftime('%b %-d, %a').unique())
    plt.xticks(np.arange(locs[0], locs[0] + len(date_label_list) + 1, step =1),
               [""] + date_label_list + [""],
               rotation=90)

    if turn_xaxis_on == False:
        ax.tick_params(labelbottom=False)

    if legend_on == True:
        leg = plt.legend(category_list_names_ordered,
                         bbox_to_anchor=(1.15, 0.5),
                         loc="center",
                         title = (r"$\bf{" + category_column + "}$"),
                         fancybox=True)
        for i in leg.legendHandles:
            i.set_linewidth(5)
    else:
        plt.legend('')
    plt.ylabel('Time of Day')
    plt.gca().invert_yaxis()
    plt.xlabel('Date')
    plt.title(r"$\bf{" + 'Recorded' + "}$" + ' ' + r"$\bf{" + 'Daily' + "}$" + ' ' + r"$\bf{" + 'Minutes' + "}$" +
              f"\nDate Range: {str(xstart.strftime('%Y-%m-%d'))} to {str(xend.strftime('%Y-%m-%d'))}")

    # Reference Lines
    if add_reference_lines == True:
        plt.axhline(y=top_line, linewidth=2, color='black', linestyle = '--')
        top_line_text = ' Start: {}'.format(top_line)
        plt.text(x=xend, y=top_line, s=top_line_text, alpha=0.7, color='#334f8d')

        bottom_line_text = ' End: {}'.format(bottom_line)
        plt.axhline(y=bottom_line, linewidth=2, color='black', linestyle = '--')
        plt.text(x=xend, y=bottom_line, s=bottom_line_text, alpha=0.7, color='#334f8d')

    plt.show()


# What level of granularity are the tags we will be displaying?
breakdown_level_col = RESC_DETAIL

data = create_chart_xy_components(d = res_time_data,
                                  date_time_column = 'date_time',
                                  start_date = START_DATE,
                                  end_date = END_DATE,
                                  category_column = breakdown_level_col)

# Color and Category Selection
(color_palette,
 category_list_names_ordered,
 color_pairs_main) = organize_categories_colors(d = data,
                                                category_column = breakdown_level_col,
                                                my_color_categories = my_color_categories,
                                                specified_category_list = HIGHLIGHT_ONLY_CATEGORY_LIST
                                               )

# Output color palette
sns.palplot(color_palette)

# Color table
# pd.DataFrame(color_pairs_main.items(), columns = ['category', 'color']).sort_values('category')

daily_chart_24_hours(d = data,
                     category_column = breakdown_level_col,
                     category_list_names_ordered = category_list_names_ordered,
                     color_palette = color_palette,
                     add_reference_lines = True,
                     top_line = 9,
                     bottom_line = 17,
                     legend_on = False,
                     turn_xaxis_on = True
                    )
```
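The heart of `time_dataframe_prep` above is a generic pandas pattern: explode a time range into fixed intervals, left-join the tagged start times, and forward-fill the tag. A minimal sketch of that pattern, using made-up event data rather than the RescueTime export:

```python
import pandas as pd

# Two hypothetical tagged events, for illustration only
events = pd.DataFrame({
    'start': pd.to_datetime(['2020-05-11 00:00:00', '2020-05-11 00:03:00']),
    'tag': ['asleep', 'device'],
})

# Explode the covered range into one row per minute
minutes = pd.DataFrame({
    'date_time': pd.date_range('2020-05-11 00:00', '2020-05-11 00:05', freq='1min')
})

# Attach tags at the minutes where an event starts, then forward-fill
labeled = minutes.merge(events, how='left', left_on='date_time', right_on='start')
labeled['tag'] = labeled['tag'].ffill()

print(labeled[['date_time', 'tag']])
# minutes 00:00-00:02 carry 'asleep', 00:03 onward carry 'device'
```

In the full function the same idea runs at one-second resolution, and it also injects 'no entry' rows at event end times so that gaps in the data are not filled with the previous tag.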
# Supervised Learning

## SVM

In this tutorial we will work with the SVM algorithm. This material is based on the following links:

* **SVM Tutorial:** [https://www.svm-tutorial.com/](https://www.svm-tutorial.com/)
* **Series of 3 tutorials based on the Udacity course:** [https://justinmaes.wordpress.com/category/udacity-intro-to-machine-learning/support-vector-machine-svm/](https://justinmaes.wordpress.com/category/udacity-intro-to-machine-learning/support-vector-machine-svm/)
* **Udacity's Machine Learning course:** [https://br.udacity.com/course/intro-to-machine-learning--ud120/](https://br.udacity.com/course/intro-to-machine-learning--ud120/)

The SVM is another classification method; in this case, more specifically, one for problems with two classes. It can be applied to problems with more than 2 classes, but then the algorithm has a high computational cost, and that kind of classifier is not covered in this tutorial.

To start understanding the SVM, let's begin with an example.

### Example 1

Consider the following dataset. *Example taken from [this link](http://scikit-learn.org/stable/auto_examples/svm/plot_separating_hyperplane.html)*

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import svm
from IPython.display import YouTubeVideo, Image

%matplotlib inline

# we create 40 separable points
np.random.seed(0)
X = np.r_[np.random.randn(20, 2) - [2, 2], np.random.randn(20, 2) + [2, 2]]
Y = [0] * 20 + [1] * 20

plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired)
```

The goal of the SVM is to find a hyperplane capable of separating the data of each class. More precisely, among all possible separating hyperplanes, it finds the one that best separates the classes. In our example, since we are working in two dimensions, the hyperplane is a line. Let's train an SVM model and plot the hyperplane for the data in the previous chart.
```
# fit the model
clf = svm.SVC(kernel='linear')
clf.fit(X, Y)

# Plot the hyperplane
# get the separating hyperplane
w = clf.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(-5, 5)
yy = a * xx - (clf.intercept_[0]) / w[1]

# plot the parallels to the separating hyperplane that pass through the
# support vectors
b = clf.support_vectors_[0]
yy_down = a * xx + (b[1] - a * b[0])
b = clf.support_vectors_[-1]
yy_up = a * xx + (b[1] - a * b[0])

# plot the line, the points, and the nearest vectors to the plane
plt.plot(xx, yy, 'k-')
plt.plot(xx, yy_down, 'k--')
plt.plot(xx, yy_up, 'k--')

plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
            s=80, facecolors='none')
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired)

plt.axis('tight')
plt.show()
```

The previous image shows the hyperplane (the line) created to separate the data. The dashed lines represent the boundaries of each class; they pass through the points that are the *support vectors* of each class. The solid line is the separating hyperplane. The goal is to maximize the distance between the classes as delimited by the dashed hyperplanes.

A mathematical deep dive into how the SVM works can be found at [this link](https://www.svm-tutorial.com/2014/11/svm-understanding-math-part-1/). It is well worth a look; some of the points covered below were taken from that tutorial series. In these first tutorials, however, the goal is not to go deep into all the mathematical details of the models.

## Defining the SVM

> A support vector machine (SVM) is a concept in computer science for a set of supervised learning methods that analyze data and recognize patterns, used for classification and regression analysis.
> The standard SVM takes a dataset as input and predicts, for each given input, which of two possible classes the input belongs to, which makes the SVM a non-probabilistic binary linear classifier.
>> Source: [Wikipedia](https://pt.wikipedia.org/wiki/M%C3%A1quina_de_vetores_de_suporte)

So the goal of the SVM is to find an optimal separating hyperplane that maximizes the margins on the training data. However, it is easy to see that there are several hyperplanes that can separate the data. Let's look at some examples that illustrate this. Images taken from the tutorial: [https://www.svm-tutorial.com/2014/11/svm-understanding-math-part-1/](https://www.svm-tutorial.com/2014/11/svm-understanding-math-part-1/)

Consider the following image:

```
from IPython.display import Image
from IPython.display import display

Image("https://www.svm-tutorial.com/wp-content/uploads/2014/11/01_svm-dataset1.png", width=500)
```

It is easy to see that there are several hyperplanes that separate this dataset.

```
Image("https://www.svm-tutorial.com/wp-content/uploads/2014/11/01_svm-dataset1-separated-2.png", width=500)
```

The hyperplane chosen by the SVM should generalize to classifying data not yet seen, so it is important to select the right hyperplane. Suppose the SVM chose the green hyperplane and used it to classify new data:

```
Image("https://www.svm-tutorial.com/wp-content/uploads/2014/11/01_svm-dataset1-separated-bad.png", width=500)
```

Notice that it misclassifies some instances of the *woman* class. This happens because it picked a hyperplane that is very close to the boundary of the *woman* class, so the probability of misclassifying instances that should be *woman* increases. The same would happen with a hyperplane very close to the *man* class: in that case, the probability of misclassifying *man* instances would be higher.
Thus, ideally the SVM should select a hyperplane that is as far as possible from each class, as the following image shows:

```
Image("https://www.svm-tutorial.com/wp-content/uploads/2014/11/07_withMidpointsAndSeparator1.png", width=500)
```

In other words, it wants to maximize what we call the *margin*. In general, when the SVM optimizes the choice of hyperplane, it guarantees that the training data is classified correctly and selects the hyperplane that best generalizes to unseen data. We will see later, however, that this also depends on other factors, such as some parameters we will work with. Note that the SVM gives priority to hyperplanes that correctly classify the training data, and only then applies the criterion of maximizing the margin between the classes.

To make these concepts clearer, let's analyze the following image, taken from: [https://justinmaes.wordpress.com/category/udacity-intro-to-machine-learning/support-vector-machine-svm/](https://justinmaes.wordpress.com/category/udacity-intro-to-machine-learning/support-vector-machine-svm/)

```
Image("https://justinmaes.files.wordpress.com/2016/10/screen-shot-2016-10-09-at-5-00-24-am.png", width=500)
```

The line marked with **o** better maximizes the distance between the classes, but it misclassifies one training instance. The line marked with **x**, even though the distance between the boundaries is smaller, guarantees the correct classification of all training instances. So the SVM first takes into account correctly classifying the training set, and only then the distance between the boundaries. If this were not the case, the vertical line at the side of the chart would be a good solution, since it maximizes the distance between the two classes.
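The margin being maximized can be read directly off a fitted model: for `svm.SVC(kernel='linear')`, the width of the band between the two dashed boundary lines is $2 / \|w\|$. A minimal check on a hand-built, perfectly separable dataset (the points below are chosen so that the optimal margin is exactly 2):

```python
import numpy as np
from sklearn import svm

# Two classes separated along x: the closest points sit at x=0 and x=2,
# so the optimal hyperplane is x=1 and the maximal margin width is 2
X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 0.0], [2.0, 1.0]])
y = [0, 0, 1, 1]

clf = svm.SVC(kernel='linear', C=1e6)  # a large C approximates a hard margin
clf.fit(X, y)

# Margin width recovered from the learned weight vector
margin = 2 / np.linalg.norm(clf.coef_[0])
print(round(margin, 3))
```

Shrinking C widens the band but allows training points to fall inside it, which is exactly the tradeoff explored in the section on the C parameter.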
## Separating Non-Linear Data

Let's look at the following dataset:

```
data = pd.read_csv("http://www.data2learning.com/machinelearning/datasets/toyexample1.csv", sep=";")

colors = {0:'red', 1:'blue'}

# Plot the training set
data.plot(kind='scatter', x='A', y='B', c=data['Classe'].apply(lambda x: colors[x]), grid=True)
```

Can these data be separated linearly? In other words, is it possible to define a hyperplane capable of separating the data of the two classes? Clearly, it is not.

Now let's compute a new *feature* for the chart. This *feature*, called Z, is $x^2 + y^2$ (the squared distance of a point from the origin).

```
import math

def new_value(row):
    return math.pow(row['A'], 2) + math.pow(row['B'], 2)

data['Z'] = data.apply(lambda row: new_value(row), axis=1)
```

Now let's plot the new dimension against the A axis. What happened? Can the data be separated linearly?

```
data.plot(kind='scatter', x='A', y='Z', c=data['Classe'].apply(lambda x: colors[x]), grid=True)
```

Yes: with a hyperplane we can separate what is red from what is blue.

```
data.plot(kind='scatter', x='A', y='Z', c=data['Classe'].apply(lambda x: colors[x]), grid=True)

xx = [-5, 5]
yy = [9, 9]
plt.plot(xx, yy, 'k-')
```

In the original chart, this separation corresponds to a circle that separates the two sets of data.
```
data.plot(kind='scatter', x='A', y='B', c=data['Classe'].apply(lambda x: colors[x]), grid=True)

circle1 = plt.Circle((0, 0.5), 2.5, color='k', fill=False)
fig = plt.gcf()
ax = fig.gca()
ax.add_artist(circle1)
```

The following video illustrates this process:

```
YouTubeVideo("9NrALgHFwTo")
```

Let's consider another example:

```
data = pd.read_csv("http://www.data2learning.com/machinelearning/datasets/toyexample2.csv", sep=";")
data

colors = {0:'red', 1:'blue'}

# Plot the training set
data.plot(kind='scatter', x='A', y='B', c=data['Classe'].apply(lambda x: colors[x]), grid=True)
```

Once again it is easy to see that this dataset cannot be separated linearly. So let's compute a new *feature* for the chart: in this case, a *feature* Z defined as |x| (the absolute value of x).

```
import math

def new_value(row):
    return math.fabs(row['A'])

data['Z'] = data.apply(lambda row: new_value(row), axis=1)
```

Plotting the new feature, we get the following chart:

```
data.plot(kind='scatter', x='Z', y='B', c=data['Classe'].apply(lambda x: colors[x]), grid=True)
```

Which can be separated linearly.

```
data.plot(kind='scatter', x='Z', y='B', c=data['Classe'].apply(lambda x: colors[x]), grid=True)

xx = [0, 5]
yy = [-4, 6]
plt.plot(xx, yy, 'k-')
```

In the original chart, this separation corresponds to two lines that separate the data.

```
data.plot(kind='scatter', x='A', y='B', c=data['Classe'].apply(lambda x: colors[x]), grid=True)

xx = [0, -5]
yy = [-4, 6]
xx_ = [0, 5]
yy_ = [-4, 6]

plt.plot(xx, yy, 'k-')
plt.plot(xx_, yy_, 'k-')
```

These two examples give a general idea of how changing the dimension of the data can help with the goal of separating it. Adding a new *feature* like this is a very useful trick. But how do we know which *feature* to use? The advantage is that we don't need to know, thanks to what is called the *kernel trick*.
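The explicit-feature approach from the circle example can be verified end to end: a linear SVM cannot fully separate points on a ring from points clustered inside it, but after adding $z = x^2 + y^2$ it separates them perfectly. The toy data here is generated on the spot for illustration, rather than loaded from the tutorial's CSV files:

```python
import numpy as np
from sklearn import svm

rng = np.random.RandomState(0)

# Class 0: a tight cluster near the origin; class 1: a ring around it.
# Not linearly separable in the original 2D space.
angles = rng.uniform(0, 2 * np.pi, 40)
inner = np.c_[0.5 * np.cos(angles[:20]), 0.5 * np.sin(angles[:20])]
outer = np.c_[3.0 * np.cos(angles[20:]), 3.0 * np.sin(angles[20:])]
X = np.vstack([inner, outer])
y = [0] * 20 + [1] * 20

# Lift to 3D with z = x^2 + y^2: class 0 has z = 0.25, class 1 has z = 9
Z = np.c_[X, (X ** 2).sum(axis=1)]

flat = svm.SVC(kernel='linear').fit(X, y).score(X, y)
lifted = svm.SVC(kernel='linear').fit(Z, y).score(Z, y)
print(flat, lifted)  # the lifted accuracy is 1.0; the flat one is below 1.0
```

The RBF kernel explored later in this tutorial does essentially this implicitly, without ever materializing the extra coordinate.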
A brief (1-minute) explanation of this can be found in the following video. *Turn on the Portuguese subtitles.*

```
YouTubeVideo("3Xw6FKYP7e4")
```

A kernel function takes a low-dimensional input (feature) space and maps it to a much higher-dimensional space. In this way, what was previously not linearly separable becomes linearly separable using the SVM. Finally, the solution is brought back to the original space, which results in a non-linear separation.

## Understanding the SVM Parameters

All the learning methods we have studied have a series of parameters that must be tuned as tests are run. Later on we will study a method that optimizes these parameters: [GridSearch](http://scikit-learn.org/stable/modules/grid_search.html). For now we will just change some parameters and study how these changes affect the model's accuracy.

Let's return to the example shown at the start of the tutorial.

```
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm

# we create 40 separable points
np.random.seed(0)
X = np.r_[np.random.randn(20, 2) - [2, 2], np.random.randn(20, 2) + [2, 2]]
Y = [0] * 20 + [1] * 20

plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired)
```

Let's vary the **C** parameter. This parameter controls the *tradeoff* between a smooth decision boundary and one that classifies all points correctly. A low value of C makes the decision boundary smooth, while high values of C push the model to classify all training data correctly. The latter case can make the model suffer from *overfitting*. In some cases, then, a model that is smoother on the training data but generalizes on the test data is preferable.

To illustrate, let's analyze the effect of the C parameter on the hyperplane that separates the data. Four values of C were chosen: 0.0001, 0.001, 0.1 and 1.
To do this, we train 4 linear models, one for each value of C.

```
clf_0001 = svm.SVC(kernel="linear", C=0.0001)
clf_001 = svm.SVC(kernel="linear", C=0.001)
clf_01 = svm.SVC(kernel="linear", C=0.1)
clf_1 = svm.SVC(kernel="linear", C=1)

clf_0001.fit(X, Y)
clf_001.fit(X, Y)
clf_01.fit(X, Y)
clf_1.fit(X, Y)
```

Next, we plot the hyperplanes.

```
# Plot the hyperplane
def get_hyperplane(classifier):
    # get the separating hyperplane
    w = classifier.coef_[0]
    a = -w[0] / w[1]
    xx = np.linspace(-5, 5)
    yy = a * xx - (classifier.intercept_[0]) / w[1]

    # plot the parallels to the separating hyperplane that pass through the
    # support vectors
    b = classifier.support_vectors_[0]
    yy_down = a * xx + (b[1] - a * b[0])
    b = classifier.support_vectors_[-1]
    yy_up = a * xx + (b[1] - a * b[0])

    return xx, yy, yy_down, yy_up, classifier.support_vectors_[:, 0], classifier.support_vectors_[:, 1]

x_0001, y_0001, ydown_0001, yup_0001, sv0001_0, sv0001_1 = get_hyperplane(clf_0001)
x_001, y_001, ydown_001, yup_001, sv001_0, sv001_1 = get_hyperplane(clf_001)
x_01, y_01, ydown_01, yup_01, sv01_0, sv01_1 = get_hyperplane(clf_01)
x_1, y_1, ydown_1, yup_1, sv1_0, sv1_1 = get_hyperplane(clf_1)

plt.clf()
plt.figure(figsize=(15, 10))

plt.subplot(221)
# plot the line, the points, and the nearest vectors to the plane
plt.plot(x_0001, y_0001, 'k-')
plt.plot(x_0001, ydown_0001, 'k--')
plt.plot(x_0001, yup_0001, 'k--')
plt.scatter(sv0001_0, sv0001_1, s=80, facecolors='none')
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired)
plt.axis('tight')

plt.subplot(222)
plt.plot(x_001, y_001, 'k-')
plt.plot(x_001, ydown_001, 'k--')
plt.plot(x_001, yup_001, 'k--')
plt.scatter(sv001_0, sv001_1, s=80, facecolors='none')
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired)
plt.axis('tight')

plt.subplot(223)
plt.plot(x_01, y_01, 'k-')
plt.plot(x_01, ydown_01, 'k--')
plt.plot(x_01, yup_01, 'k--')
plt.scatter(sv01_0, sv01_1, s=80, facecolors='none')
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired)
plt.axis('tight')

plt.subplot(224)
plt.plot(x_1, y_1, 'k-')
plt.plot(x_1, ydown_1, 'k--')
plt.plot(x_1, yup_1, 'k--')
plt.scatter(sv1_0, sv1_1, s=80, facecolors='none')  # fixed: was plotting the C=0.1 support vectors here
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired)
plt.axis('tight')

plt.show()
```

Notice that the smaller the value of C, the less rigid the separation of the data becomes. The models with smaller values of C have lower accuracy on the training set.

```
print("C = 0.0001", "%0.6f" % clf_0001.score(X, Y))
print("C = 0.001", "%0.6f" % clf_001.score(X, Y))
print("C = 0.1", "%0.6f" % clf_01.score(X, Y))
print("C = 1", "%0.6f" % clf_1.score(X, Y))
```

Another parameter we can change is the *kernel*. Very loosely, we can define a kernel as a function that transforms data from an **n**-dimensional space into an **m**-dimensional one, where **m** is usually much larger than **n**. That is exactly what we did in the transformations shown at the beginning of this tutorial. The big problem is that finding this new dimension is relatively expensive; the *kernel* gives us a shortcut so that we can perform these computations more simply and efficiently.

A good discussion of what a *kernel* is can be found in [this Quora link](https://www.quora.com/What-are-Kernels-in-Machine-Learning-and-SVM). I particularly like the following answer:

> by Lili Jiang, Data Scientist at Quora, updated Oct 9, 2016

> **Briefly speaking**, a kernel is a shortcut that helps us do certain calculations faster which otherwise would involve computations in higher dimensional space.

> **Mathematical definition:** $K(x, y) = <f(x), f(y)>$. Here $K$ is the kernel function, $x$, $y$ are $n$ dimensional inputs. $f$ is a map from n-dimension to m-dimension space. $<x,y>$ denotes the dot product. Usually $m$ is much larger than $n$.

> **Intuition:** normally calculating $<f(x), f(y)>$ requires us to calculate $f(x)$, $f(y)$ first, and then do the dot product.
> These two computation steps can be quite expensive as they involve manipulations in $m$ dimensional space, where $m$ can be a large number. But after all the trouble of going to the high dimensional space, the result of the dot product is really a scalar: we come back to one-dimensional space again! Now, the question we have is: do we really need to go through all the trouble to get this one number? Do we really have to go to the m-dimensional space? The answer is no, if you find a clever kernel.

> **Simple Example:** $x = (x_1, x_2, x_3); y = (y_1, y_2, y_3)$. Then for the function $f(x) = (x_1x_1, x_1x_2, x_1x_3, x_2x_1, x_2x_2, x_2x_3, x_3x_1, x_3x_2, x_3x_3)$, the kernel is $K(x, y) = (<x, y>)^2$.

> Let's plug in some numbers to make this more intuitive: suppose $x = (1, 2, 3); y = (4, 5, 6)$. Then:

> $f(x) = (1, 2, 3, 2, 4, 6, 3, 6, 9)$

> $f(y) = (16, 20, 24, 20, 25, 30, 24, 30, 36)$

> $<f(x), f(y)> = 16 + 40 + 72 + 40 + 100 + 180 + 72 + 180 + 324 = 1024$

> A lot of algebra. Mainly because f is a mapping from 3-dimensional to 9-dimensional space.

> Now let us use the kernel instead:

> $K(x, y) = (4 + 10 + 18)^2 = 32^2 = 1024$

> Same result, but this calculation is so much easier.

> **Additional beauty of Kernel:** kernels allow us to do stuff in infinite dimensions! Sometimes going to higher dimension is not just computationally expensive, but also impossible. f(x) can be a mapping from n dimension to infinite dimension which we may have little idea of how to deal with. Then kernel gives us a wonderful shortcut.

> **Relation to SVM:** now how is this related to SVM? The idea of SVM is that $y = w \phi(x)+b$, where $w$ is the weight, $\phi$ is the feature vector, and $b$ is the bias. if $y > 0$, then we classify datum to class 1, else to class 0. We want to find a set of weight and bias such that the margin is maximized. Previous answers mention that kernel makes data linearly separable for SVM. I think a more precise way to put this is, kernels do not make the data linearly separable.
The feature map $\phi(x)$ is what makes the data linearly separable. The kernel is there to make the calculation faster and easier, especially when the feature vector $\phi$ is of very high dimension (for example, all monomials of the inputs: $x_1, x_2, \ldots, x_D, x_1^2, x_2^2, \ldots, x_D^2, \ldots$).

> **Why it can also be understood as a measure of similarity:** if we put the definition of the kernel above, $\langle f(x), f(y)\rangle$, in the context of SVMs and feature vectors, it becomes $\langle \phi(x), \phi(y)\rangle$. The inner product measures the projection of $\phi(x)$ onto $\phi(y)$, or colloquially, how much overlap $x$ and $y$ have in their feature space. In other words, it measures how similar they are.

scikit-learn provides several kernel functions: *'linear', 'poly', 'rbf', 'sigmoid', 'precomputed' or a callable*. Let's examine the linear kernel and the RBF kernel (recommended for data that is not linearly separable). Consider the dataset below and analyze the influence of the different kernels on the model's final accuracy.

```
data = pd.read_csv("http://www.data2learning.com/machinelearning/datasets/manwoman.csv", sep=";")
data.head()
```

Plotting the data:

```
colors = {0:'red', 1:'blue'}

# scatter plot of the training data
data.plot(kind='scatter', x='weight', y='height', c=data['male'].apply(lambda x: colors[x]))

X = data[['weight','height']]
y = data.male
```

### Linear Kernel versus RBF Kernel

```
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score

# fit the model with a linear kernel
clf = svm.SVC(kernel='linear')
scores = cross_val_score(clf, X, y, cv=10, scoring='accuracy')
print("10-fold accuracy, linear kernel: ", scores.mean())

# fit the model with an RBF kernel
clf = svm.SVC(kernel='rbf')
scores = cross_val_score(clf, X, y, cv=10, scoring='accuracy')
print("10-fold accuracy, RBF kernel: ", scores.mean())
```

It is easy to see that the data we are working with is not linearly separable. For that reason, a kernel suited to this kind of data, such as the RBF kernel, is the better choice.
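The arithmetic in the quoted kernel example is easy to verify with NumPy: compute the inner product once in the explicit 9-dimensional feature space, and once with the kernel shortcut in the original 3 dimensions.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# explicit feature map f(v) = all pairwise products v_i * v_j (9 dimensions)
def f(v):
    return np.outer(v, v).ravel()

explicit = f(x) @ f(y)    # inner product computed in the 9-dimensional space
shortcut = (x @ y) ** 2   # kernel K(x, y) = <x, y>^2, computed in 3 dimensions

print(explicit, shortcut)  # both give 1024.0
```

Both routes give the same number, but the kernel never leaves the input space, which is the whole point.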
### Relation between C and Train/Test Accuracy

The purpose of a smoother decision boundary is to make the model less "rigid" and, most likely, more general when tested on held-out data. The following example shows a bit of this.

**First with C = 1:**

```
# fit the model with C = 1
clf = svm.SVC(kernel='rbf', C=1, random_state=4)
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.2, random_state=4)
clf.fit(X_train, Y_train)
print("Train acc: ", clf.score(X_train, Y_train))
print("Test acc: ", clf.score(X_test, Y_test))

# fit the model with C = 100
clf = svm.SVC(kernel='rbf', C=100, random_state=4)
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.2, random_state=4)
clf.fit(X_train, Y_train)
print("Train acc: ", clf.score(X_train, Y_train))
print("Test acc: ", clf.score(X_test, Y_test))

# fit the model with C = 0.2
clf = svm.SVC(kernel='rbf', C=0.2, random_state=4)
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.2, random_state=4)
clf.fit(X_train, Y_train)
print("Train acc: ", clf.score(X_train, Y_train))
print("Test acc: ", clf.score(X_test, Y_test))
```

**Summing up:**

| C | Train Acc | Test Acc |
|------|------|------|
| 1 | 0.868965517241 | 0.715596330275 |
| 100 | 0.983908045977 | 0.651376146789 |
| 0.2 | 0.836781609195 | 0.752293577982 |

Note that with a very high C the training accuracy is very high, but the test accuracy does not keep up. With a smaller C the training accuracy drops while the test accuracy rises. A smoother decision boundary keeps the model from becoming too specific to the training set, which would cost it generalization on the test set.

That's it for now. Next, apply these concepts in the corresponding exercise.

**See you in the next tutorial ;)**
# Use this notebook to train supervised models

```
from __future__ import print_function

import tensorflow as tf
import numpy as np

from utils import *
from VDSH_S import *
from VDSH_SP import *

filename = 'dataset/ng20.tfidf.mat'
data = Load_Dataset(filename)

latent_dim = 32
sess = get_session("1", 0.10)  # helper from utils: GPU id and memory fraction
model = VDSH_S(sess, latent_dim, data.n_feas, data.n_tags, use_cross_entropy=True)

# create an optimizer with an exponentially decaying learning rate
learning_rate = 0.001
decay_rate = 0.96
step = tf.Variable(0, trainable=False)
lr = tf.train.exponential_decay(learning_rate, step, 10000, decay_rate, staircase=True, name="lr")
my_optimizer = tf.train.AdamOptimizer(learning_rate=lr) \
                       .minimize(model.cost, global_step=step)

init = tf.global_variables_initializer()
model.sess.run(init)

total_epoch = 15
kl_weight = 0.
kl_inc = 1 / 5000.  # set the annealing rate for the KL loss
pred_weight = 0.
pred_inc = 0.1
max_pred_weight = 150.

for epoch in range(total_epoch):
    epoch_loss = []
    for i in range(len(data.train)):
        # get one document and its labels
        doc = data.train[i]
        labels = data.gnd_train[i]
        word_indice = np.where(doc > 0)[0]  # indices of the words present in the doc

        opt, loss = model.sess.run((my_optimizer, model.cost),
                                   feed_dict={model.input_bow: doc.reshape((-1, data.n_feas)),
                                              model.input_bow_idx: word_indice,
                                              model.labels: labels.reshape((-1, data.n_tags)),
                                              model.kl_weight: kl_weight,
                                              model.tag_weight: pred_weight,
                                              model.keep_prob: 0.9})

        kl_weight = min(kl_weight + kl_inc, 1.0)
        pred_weight = min(pred_weight + pred_inc, max_pred_weight)
        epoch_loss.append(loss[0])

        if i % 50 == 0:
            print("\rEpoch:{}/{} {}/{}: Loss:{:.3f} AvgLoss:{:.3f}".format(
                epoch+1, total_epoch, i, data.n_trains, loss[0], np.mean(epoch_loss)), end='')

# run the experiment on the CV dataset
zTrain = model.transform(data.train)
zCV = model.transform(data.cv)
zTrain = np.array(zTrain)
zCV = np.array(zCV)

medHash = MedianHashing()
cbTrain = medHash.fit_transform(zTrain)
cbCV = medHash.transform(zCV)

TopK = 100
print('Retrieve Top{} candidates using hamming distance'.format(TopK))
results = run_topK_retrieval_experiment(cbTrain, cbCV, data.gnd_train, data.gnd_cv, TopK)

# run the same experiment on the test set
zTrain = model.transform(data.train)
zTest = model.transform(data.test)
zTrain = np.array(zTrain)
zTest = np.array(zTest)

medHash = MedianHashing()
cbTrain = medHash.fit_transform(zTrain)
cbTest = medHash.transform(zTest)

TopK = 100
print('Retrieve Top{} candidates using hamming distance'.format(TopK))
results = run_topK_retrieval_experiment(cbTrain, cbTest, data.gnd_train, data.gnd_test, TopK)
```
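`MedianHashing` comes from the accompanying helper modules, which are not shown here. As a sketch of what median-based binarization of latent codes typically looks like (this is an assumption about the helper, not its actual source), each latent dimension is thresholded at the median estimated on the training codes:

```python
import numpy as np

class MedianHashing:
    """Binarize latent vectors: bit i is 1 when z[i] exceeds the
    median of dimension i estimated on the training codes."""

    def fit_transform(self, z_train):
        # per-dimension medians of the training codes
        self.median = np.median(z_train, axis=0)
        return (z_train > self.median).astype(np.int8)

    def transform(self, z):
        # reuse the medians estimated on the training codes
        return (z > self.median).astype(np.int8)
```

The Hamming distance between two such binary codes is then just the number of differing bits, which is what the Top-K retrieval step ranks candidates by.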
# Miscellaneous

In this notebook we briefly discuss a few specific techniques.

## Face detection

The classic method ([Viola & Jones, 2001](https://en.wikipedia.org/wiki/Viola%E2%80%93Jones_object_detection_framework)) is based on [AdaBoost](https://en.wikipedia.org/wiki/AdaBoost), a very important machine-learning technique that combines "weak" features into a high-precision classifier. OpenCV includes a good implementation. It detects the face and, within it, other regions such as the eyes, nose, etc., as shown below.

Nowadays, however, the face detector available in the DLIB package, which we already used in the *face landmarks* exercise, is the recommended choice. We will use it again in the Deep Learning chapter to support recognizing people's identities. For its historical interest, here we show how to use the AdaBoost implementation available in OpenCV.

```
import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt
%matplotlib inline

def readrgb(file):
    return cv.cvtColor( cv.imread('../images/'+file), cv.COLOR_BGR2RGB)

def rgb2gray(x):
    return cv.cvtColor(x,cv.COLOR_RGB2GRAY)

# try to locate the cascades (the pre-trained classifiers) automatically
# ! conda info
import os
conda_prefix = os.getenv("CONDA_PREFIX")
print(conda_prefix)

cpath = conda_prefix + "/share/OpenCV/haarcascades/"

# check that the detectors loaded correctly
face_cascade = cv.CascadeClassifier(cpath+'/haarcascade_frontalface_default.xml')
print(not face_cascade.empty())
eye_cascade = cv.CascadeClassifier(cpath+'haarcascade_eye.xml')
print(not eye_cascade.empty())
```

We try it on a few images from the repository. It is also easy to run it on the webcam.

```
img = readrgb('monty-python1.jpg')
plt.imshow(img);

faces = face_cascade.detectMultiScale(img)
```

It returns a collection of rectangles.

```
faces

out = img.copy()
for (x,y,w,h) in faces:
    cv.rectangle(out,(x,y),(x+w,y+h),(255,0,0),2)
plt.imshow(out);

img = readrgb('monty-python2.jpg')
faces = face_cascade.detectMultiScale(img)
print(faces)
out = img.copy()
for (x,y,w,h) in faces:
    cv.rectangle(out,(x,y),(x+w,y+h),(255,0,0),2)
plt.imshow(out);

img = readrgb('scarlett.jpg')
faces = face_cascade.detectMultiScale(img)
print(faces)
out = img.copy()
for (x,y,w,h) in faces:
    cv.rectangle(out,(x,y),(x+w,y+h),(255,0,0),2)
plt.imshow(out);
```

Within each face we detect the eyes with the eye cascade. (In the Scarlett photo my version of OpenCV detects three eyes.)

```
out = img.copy()
for (x,y,w,h) in faces:
    cv.rectangle(out,(x,y),(x+w,y+h),(255,0,0),2)
    roi = out[y:y+h, x:x+w]
    eyes = eye_cascade.detectMultiScale(roi)
    for (ex,ey,ew,eh) in eyes:
        cv.rectangle(roi,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)
plt.imshow(out);
```

As an exercise you can informally compare the precision and computation time of this detector with the one available in DLIB.

## Optical Character Recognition (OCR)

Recognizing printed characters in high-resolution images, scanned with good quality and without noise, can be tackled by simple comparison against templates of each letter. But in more realistic settings, with noise and lower resolution, a more elaborate attack is needed. The problem gets even harder if, instead of scanning the text, we capture it with a camera.

The following image is a text scanned (some time ago) in binary mode:

<img src="../images/texto.png" width="800px"/>

When we zoom in, we see that many letters are independent connected components, which in principle can be recognized easily. But there are also cases of joined letters, such as the sequence "arru", where comparing the blobs against the templates requires splitting them correctly into pieces.
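The observation that isolated letters show up as independent connected components is easy to operationalize. Here is a pure-Python flood-fill sketch that counts 4-connected blobs in a tiny binary bitmap; a real OCR would run something like this (e.g. OpenCV's `connectedComponents`) on the thresholded page:

```python
def connected_components(grid):
    """Count 4-connected components of 1-pixels in a binary grid (iterative flood fill)."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and (r, c) not in seen:
                count += 1                      # found the seed of a new blob
                stack = [(r, c)]
                while stack:
                    i, j = stack.pop()
                    if (i, j) in seen or not (0 <= i < rows and 0 <= j < cols) or not grid[i][j]:
                        continue
                    seen.add((i, j))
                    stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return count

# two separate "letters" in a 3x5 bitmap
page = [[1, 1, 0, 0, 1],
        [1, 0, 0, 0, 1],
        [0, 0, 0, 0, 1]]
print(connected_components(page))  # 2
```

Each blob would then be matched against the letter templates; the joined-letter cases are exactly the ones where one component must be split into several letters.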
This leads to a combinatorial problem that in the end can only be solved probabilistically, with a dictionary of valid words. If the ink is faint, some letters may come out split into several connected components, which increases the combinatorics even further.

<img src="../images/demos/texto-detalle.png" width="500px"/>

Developing a simple OCR can be tackled as an optional assignment. If you are interested, ask the instructor.

### Tesseract

There are many commercial OCRs. Among the free-software options we highlight the [tesseract](https://github.com/tesseract-ocr) package. It provides an executable and a library that can be used programmatically from Python and other languages.

Let's look at the result the executable achieves on the test text above (available in the repository).

**> tesseract texto.png resul**

<pre>Quizé sea éste uno de los articulos més ilusos que uno pueda escribir hoy en d1’a en una socicdad tan autocomplaciente y autoindulgente como la espafiola actual, y eso que tengo conciencia de haber ya publicado unos cuantos dc esa indole —i1usa, quiero decir-. Porque si para algo no estzi la superficialidad ambiente es para atender, a estas alturas, a asuntos que ni siquiera sé cémo calificar si no es con anticuadas palabras, casi arrumbadas; y desde luego no deseo recurrir a la ya vacua -por estrujada- "ética": Lasuntos que atafien a la rectitud? g,A1o venial y a lo grave? g,A1as conductas? (,A1a dignidad? Si, todo suena ya trasnochado.</pre>

Many errors occur because we did not specify the language. When we do, the result is practically perfect:

**> tesseract texto.png resul -l spa**

<pre>Quizá sea éste uno de los artículos más ilusos que uno pueda escribir hoy en día en una sociedad tan autocomplaciente y autoindulgente como la española actual, y eso que tengo conciencia de haber ya publicado unos cuantos de esa índole —ilusa, quiero decir-. Porque si para algo no está la superficialidad ambiente es para atender, a estas alturas, a asuntos que ni siquiera sé cómo calificar si no es con anticuadas palabras, casi arrumbadas; y desde luego no deseo recurrir a la ya vacua -por estrujada- "ética": ¿asuntos que atañen ala rectitud? ¿A lo venial y a lo grave? ¿A las conductas? ¿Ala dignidad? Sí, todo suena ya trasnochado.</pre>

Curiously, it introduces [ligatures](https://en.wikipedia.org/wiki/Orthographic_ligature) (super<b>fi</b>cialidad, cali<b>fi</b>car), and since it does not understand the meaning it has merged "a la" into "ala".

Tesseract's engine can be used from Python programs through several packages that provide more or less convenient interfaces. A well-known one is `pytesseract`, but it has the drawback that it actually makes plain calls to the executable through intermediate files. This is very slow when making successive calls on live images or on many crops of one image.

My recommendation is the [tesserocr](https://pypi.org/project/tesserocr/) package. It exposes the tesseract API, so it can be used much more efficiently. It installs easily with pip, but needs some system packages:

    sudo apt install tesseract-ocr tesseract-ocr-spa libtesseract-dev

    pip install tesserocr

In the lab session we will look at a very simple example showing how to use this module on live images.

![ocr1](../images/demos/ocr1.png)

It tolerates small rotations,

![ocr2](../images/demos/ocr2.png)

different fonts,

![ocr3](../images/demos/ocr3.png)

and blur:

![ocr](../images/demos/ocr4.png)

## Barcodes

Barcodes, two-dimensional QR codes and other variants, such as the color codes we see at tram stops, are artificial objects designed expressly to be easy to distinguish.
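Part of what makes these codes machine-friendly is built-in redundancy. The last digit of an EAN-13 code, for example, is a check digit that a reader verifies before accepting a scan; the validation is simple enough to sketch in pure Python:

```python
def ean13_valid(code: str) -> bool:
    """Check the EAN-13 check digit: digits 1-12 are weighted 1,3,1,3,...
    and the check digit brings the weighted sum up to a multiple of 10."""
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    return (10 - total % 10) % 10 == digits[12]

print(ean13_valid("9789813083714"))  # True  (one of the codes scanned below)
print(ean13_valid("9789813083715"))  # False (corrupted last digit)
```

This is why a decoder like zbar can reject most misreads on its own, without any knowledge of what the code is supposed to contain.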
As an advanced exercise, one could tackle a prototype reader for simple codes under favorable conditions. Due to time constraints, we will limit ourselves to the free-software package [zbar](http://zbar.sourceforge.net/), which can read several types of codes.

### zbar

You need to install the system package `libzbar-dev`. The `zbarimg` executable accepts image files:

<img src="../images/demos/barcode.jpg" width="400px"/>

<pre>$ zbarimg barcode.jpg
EAN-13:9789813083714
scanned 1 barcode symbols from 1 images in 0.35 seconds</pre>

The `zbarcam` utility works with live images from the webcam:

<img src="../images/demos/barcode1.png" width="500px"/>

<img src="../images/demos/barcode2.png" width="500px"/>

<pre>$ zbarcam --prescale=640x480
EAN-13:3134375261920
EAN-13:9780201814149
EAN-13:9780801854149
EAN-13:3134375261920
EAN-13:3134375261920
EAN-13:9780801854149</pre>

![image](../images/demos/barcode3.png)

In the cases above it reports the code and the standard used, here [EAN-13](https://en.wikipedia.org/wiki/International_Article_Number). In this last screenshot it also detects a "CODE-128".

### QR codes

It can also read QR codes. The following screenshot shows the decoding of the [example QR code](../images/qrcode.png) available in the repository, which contains as its text the installation command for the umucv module.

![image](../images/demos/qrcode.png)

### zbar in Python

The decoding engine can easily be used from Python through the `pyzbar` package. In the lab session we will look at the example code [zbardemo.py](../code/zbardemo.py), which decodes the codes appearing in any image sequence.

![barcode](../images/demos/barcode.png)

For QR codes we also get the corners with precision, which is very useful in some visual-geometry applications.
![qr](../images/demos/qr2.png)

## GrabCut

[Graph-cut][graphcuts] algorithms have applications in [several problems](https://en.wikipedia.org/wiki/Graph_cuts_in_computer_vision) of computer vision. For example, in image segmentation (separating objects from the background) one defines a graph whose vertices are the pixels, adds edges between neighboring pixels with weights indicating how similar they are, and looks for the smallest cut of the graph (edges to remove) that maximizes the separation between object and background.

Here is a short OpenCV [tutorial][tutorial]. The source code is in the repository: [code/grabcut.py](../code/grabcut.py). It is an interactive procedure that we will discuss in a lab session.

[graphcuts]: https://en.wikipedia.org/wiki/Cut_(graph_theory)
[tutorial]: https://docs.opencv.org/3.4/d8/d83/tutorial_py_grabcut.html

## Ellipse detection

We will explain this in detail in a lab session.
# Graphs - correction

Corrections for the plotting exercises with [matplotlib](http://matplotlib.org/). To have the plots included in the notebook, add this line and run it first:

```
%matplotlib inline
```

We add a clickable table of contents to the notebook:

```
from jyquickhelper import add_notebook_menu
add_notebook_menu()
```

## Data

### elections

All the examples that follow use the results of the [2012 presidential election](https://www.data.gouv.fr/fr/datasets/election-presidentielle-2012-resultats-572124/). If you do not have the [actuariat_python](http://www.xavierdupre.fr/app/actuariat_python/helpsphinx/index.html) module, just copy the code of the [elections_presidentielles](http://www.xavierdupre.fr/app/actuariat_python/helpsphinx/_modules/actuariat_python/data/elections.html#elections_presidentielles) function, which uses [read_excel](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html):

```
from actuariat_python.data import elections_presidentielles
dict_df = elections_presidentielles(local=True, agg="dep")

def cleandep(s):
    if isinstance(s, str):
        r = s.lstrip('0')
    else:
        r = str(s)
    return r

dict_df["dep1"]["Code du département"] = dict_df["dep1"]["Code du département"].apply(cleandep)
dict_df["dep2"]["Code du département"] = dict_df["dep2"]["Code du département"].apply(cleandep)
deps = dict_df["dep1"].merge(dict_df["dep2"], on="Code du département", suffixes=("T1", "T2"))

deps["rHollandeT1"] = deps['François HOLLANDE (PS)T1'] / (deps["VotantsT1"] - deps["Blancs et nulsT1"])
deps["rSarkozyT1"] = deps['Nicolas SARKOZY (UMP)T1'] / (deps["VotantsT1"] - deps["Blancs et nulsT1"])
deps["rNulT1"] = deps["Blancs et nulsT1"] / deps["VotantsT1"]
deps["rHollandeT2"] = deps["François HOLLANDE (PS)T2"] / (deps["VotantsT2"] - deps["Blancs et nulsT2"])
deps["rSarkozyT2"] = deps['Nicolas SARKOZY (UMP)T2'] / (deps["VotantsT2"] - deps["Blancs et nulsT2"])
deps["rNulT2"] = deps["Blancs et nulsT2"] / deps["VotantsT2"]

data = deps[["Code du département", "Libellé du départementT1",
             "VotantsT1", "rHollandeT1", "rSarkozyT1", "rNulT1",
             "VotantsT2", "rHollandeT2", "rSarkozyT2", "rNulT2"]]
data_elections = data  # data is sometimes overwritten later on
data.head()
```

### city locations

```
from pyensae.datasource import download_data
download_data("villes_france.csv", url="http://sql.sh/ressources/sql-villes-france/")

cols = ["ncommune", "numero_dep", "slug", "nom", "nom_simple", "nom_reel", "nom_soundex", "nom_metaphone",
        "code_postal", "numero_commune", "code_commune", "arrondissement", "canton", "pop2010", "pop1999",
        "pop2012", "densite2010", "surface", "superficie", "dlong", "dlat", "glong", "glat", "slong", "slat",
        "alt_min", "alt_max"]

import pandas
villes = pandas.read_csv("villes_france.csv", header=None, low_memory=False, names=cols)
```

## exercise 1: center the map on France

We re-center the map. The only modification is ``[-5, 10, 38, 52]``.

```
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature

fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(1, 1, 1, projection=ccrs.PlateCarree())
ax.set_extent([-5, 10, 38, 52])

ax.add_feature(cfeature.OCEAN.with_scale('50m'))
ax.add_feature(cfeature.RIVERS.with_scale('50m'))
ax.add_feature(cfeature.BORDERS.with_scale('50m'), linestyle=':')
ax.set_title('France');
```

## exercise 2: place the biggest French cities on the map

We reuse the ``carte_france`` function given in the exercise statement, modified with the result of the previous question.
```
def carte_france(figsize=(7, 7)):
    fig = plt.figure(figsize=figsize)
    ax = fig.add_subplot(1, 1, 1, projection=ccrs.PlateCarree())
    ax.set_extent([-5, 10, 38, 52])
    ax.add_feature(cfeature.OCEAN.with_scale('50m'))
    ax.add_feature(cfeature.RIVERS.with_scale('50m'))
    ax.add_feature(cfeature.BORDERS.with_scale('50m'), linestyle=':')
    ax.set_title('France');
    return ax

carte_france();
```

We keep only the cities with more than 100,000 inhabitants. Not all of them are in metropolitan France:

```
grosses_villes = villes[villes.pop2012 > 100000][["dlong","dlat","nom", "pop2012"]]
grosses_villes.describe()
grosses_villes.sort_values("dlat").head()
```

Saint-Denis is on Réunion island. We remove it from the set:

```
grosses_villes = villes[(villes.pop2012 > 100000) & (villes.dlat > 40)] \
                       [["dlong","dlat","nom", "pop2012"]]
```

We draw the requested map, adding for each city a marker whose area depends on the number of inhabitants: its size must be proportional to the square root of the population.

```
import matplotlib.pyplot as plt
ax = carte_france()

def affiche_ville(ax, x, y, nom, pop):
    ax.plot(x, y, 'ro', markersize=pop**0.5/50)
    ax.text(x, y, nom)

for lon, lat, nom, pop in zip(grosses_villes["dlong"], grosses_villes["dlat"],
                              grosses_villes["nom"], grosses_villes["pop2012"]):
    affiche_ville(ax, lon, lat, nom, pop)
ax;
```

**reminder: the zip function**

The [zip](https://docs.python.org/3.4/library/functions.html#zip) function *glues* two sequences together.
```
list(zip([1,2,3], ["a", "b", "c"]))
```

It is often used like this:

```
for a, b in zip([1,2,3], ["a", "b", "c"]):
    # do something with a and b
    print(a, b)
```

Without the [zip](https://docs.python.org/3.4/library/functions.html#zip) function:

```
ax = carte_france()

def affiche_ville(ax, x, y, nom, pop):
    ax.plot(x, y, 'ro', markersize=pop**0.5/50)
    ax.text(x, y, nom)

def affiche_row(ax, row):
    affiche_ville(ax, row["dlong"], row["dlat"], row["nom"], row["pop2012"])

grosses_villes.apply(lambda row: affiche_row(ax, row), axis=1)
ax;
```

Or again:

```
import matplotlib.pyplot as plt
ax = carte_france()

def affiche_ville(ax, x, y, nom, pop):
    ax.plot(x, y, 'ro', markersize=pop**0.5/50)
    ax.text(x, y, nom)

for i in range(0, grosses_villes.shape[0]):
    ind = grosses_villes.index[i]
    # important here: the rows keep their original index labels;
    # since the rows were filtered to keep only the big cities,
    # you must either use reset_index or retrieve the row's label
    lon, lat = grosses_villes.loc[ind, "dlong"], grosses_villes.loc[ind, "dlat"]
    nom, pop = grosses_villes.loc[ind, "nom"], grosses_villes.loc[ind, "pop2012"]
    affiche_ville(ax, lon, lat, nom, pop)
ax;
```

## exercise 3: election results by department

From the election results we first build a dictionary ``{ department: winner }``.

```
data_elections.shape, data_elections[data_elections.rHollandeT2 > data_elections.rSarkozyT2].shape
```

Hollande wins in 63 departments.
```
hollande_gagnant = dict(zip(data_elections["Libellé du départementT1"],
                            data_elections.rHollandeT2 > data_elections.rSarkozyT2))
list(hollande_gagnant.items())[:5]
```

We retrieve the shape of each department:

```
from pyensae.datasource import download_data
try:
    download_data("GEOFLA_2-1_DEPARTEMENT_SHP_LAMB93_FXX_2015-12-01.7z",
                  website="https://wxs-telechargement.ign.fr/oikr5jryiph0iwhw36053ptm/telechargement/inspire/" + \
                          "GEOFLA_THEME-DEPARTEMENTS_2015_2$GEOFLA_2-1_DEPARTEMENT_SHP_LAMB93_FXX_2015-12-01/file/")
except Exception as e:
    # in case the site is not reachable
    download_data("GEOFLA_2-1_DEPARTEMENT_SHP_LAMB93_FXX_2015-12-01.7z", website="xd")

from pyquickhelper.filehelper import un7zip_files
try:
    un7zip_files("GEOFLA_2-1_DEPARTEMENT_SHP_LAMB93_FXX_2015-12-01.7z", where_to="shapefiles")
    departements = 'shapefiles/GEOFLA_2-1_DEPARTEMENT_SHP_LAMB93_FXX_2015-12-01/GEOFLA/1_DONNEES_LIVRAISON_2015/' + \
                   'GEOFLA_2-1_SHP_LAMB93_FR-ED152/DEPARTEMENT/DEPARTEMENT.shp'
except FileNotFoundError as e:
    # This instruction may fail. In that case, we use a copy of the file.
    import warnings
    warnings.warn("Plan B because of " + str(e))
    download_data("DEPARTEMENT.zip")
    departements = "DEPARTEMENT.shp"

import os
if not os.path.exists(departements):
    raise FileNotFoundError("Unable to find '{0}'".format(departements))

import shapefile
shp = departements
r = shapefile.Reader(shp)
shapes = r.shapes()
records = r.records()
records[0]
```

The problem is that these codes are hard to link to the election results. The [Wikipedia page for Bas-Rhin](https://fr.wikipedia.org/wiki/Bas-Rhin) associates it with code 67, but Bas-Rhin is spelled ``BAS RHIN`` in the results list, and the department code does not appear in the downloaded *shapefiles*. We have to match on the department name: we lowercase everything and remove spaces and hyphens.
```
hollande_gagnant_clean = { k.lower().replace("-", "").replace(" ", ""): v
                           for k, v in hollande_gagnant.items()}
list(hollande_gagnant_clean.items())[:5]
```

And since accents must be replaced too, we take inspiration from the [remove_diacritic](http://www.xavierdupre.fr/app/pyquickhelper/helpsphinx//pyquickhelper/texthelper/diacritic_helper.html#module-pyquickhelper.texthelper.diacritic_helper) function:

```
import unicodedata

def retourne_vainqueur(nom_dep):
    s = nom_dep.lower().replace("-", "").replace(" ", "")
    nkfd_form = unicodedata.normalize('NFKD', s)
    only_ascii = nkfd_form.encode('ASCII', 'ignore')
    s = only_ascii.decode("utf8")
    if s in hollande_gagnant_clean:
        return hollande_gagnant_clean[s]
    else:
        keys = list(sorted(hollande_gagnant_clean.keys()))
        keys = [_ for _ in keys if _[0].lower() == s[0].lower()]
        print("cannot decide for ", nom_dep, "*", s, "*", " --- ", keys[:5])
        return None

import math

def lambert932WGPS(lambertE, lambertN):
    class constantes:
        GRS80E = 0.081819191042816
        LONG_0 = 3
        XS = 700000
        YS = 12655612.0499
        n = 0.7256077650532670
        C = 11754255.4261

    delX = lambertE - constantes.XS
    delY = lambertN - constantes.YS
    gamma = math.atan(-delX / delY)
    R = math.sqrt(delX * delX + delY * delY)
    latiso = math.log(constantes.C / R) / constantes.n
    sinPhiit0 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * math.sin(1)))
    sinPhiit1 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit0))
    sinPhiit2 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit1))
    sinPhiit3 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit2))
    sinPhiit4 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit3))
    sinPhiit5 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit4))
    sinPhiit6 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit5))
    latRad = math.asin(sinPhiit6)
    lonRad = gamma / constantes.n + constantes.LONG_0 / 180 * math.pi
    longitude = lonRad / math.pi * 180
    latitude = latRad / math.pi * 180
    return longitude, latitude

lambert932WGPS(99217.1, 6049646.300000001), lambert932WGPS(1242417.2, 7110480.100000001)
```

Then we use the code from the exercise statement, changing only the colors. No color marks the departments for which we could not decide.

```
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
ax = carte_france((8,8))

from matplotlib.collections import LineCollection
import shapefile
import geopandas
from shapely.geometry import Polygon
from shapely.ops import unary_union

shp = departements
r = shapefile.Reader(shp)
shapes = r.shapes()
records = r.records()

polys = []
colors = []
for i, (record, shape) in enumerate(zip(records, shapes)):
    # winner
    dep = retourne_vainqueur(record[2])
    if dep is not None:
        couleur = "red" if dep else "blue"
    else:
        couleur = "gray"
    # the coordinates are in Lambert 93
    if i == 0:
        print(record, shape.parts, couleur)
    geo_points = [lambert932WGPS(x, y) for x, y in shape.points]
    if len(shape.parts) == 1:
        # a single polygon
        poly = Polygon(geo_points)
    else:
        # several parts must be merged
        ind = list(shape.parts) + [len(shape.points)]
        pols = [Polygon(geo_points[ind[i]:ind[i+1]]) for i in range(0, len(shape.parts))]
        try:
            poly = unary_union(pols)
        except Exception as e:
            print("Cannot merge: ", record)
            print([_.length for _ in pols], ind)
            poly = Polygon(geo_points)
    polys.append(poly)
    colors.append(couleur)

data = geopandas.GeoDataFrame(dict(geometry=polys, colors=colors))
geopandas.plotting.plot_polygon_collection(ax, data['geometry'],
                                           facecolor=data['colors'], values=None, edgecolor='black');
```

The function still makes a mistake for Corse-du-Sud... I leave it in as an example.

## exercise 3 with the etalab shapefiles

The data is available at [GEOFLA® Départements](https://www.data.gouv.fr/fr/datasets/geofla-departements-30383060/), but you can reuse the code above to download it.
```
# here, the data must be unzipped manually
# to be completed
```

## exercise 4: same code, different widget

We use checkboxes to enable or disable each of the two candidates.

```
import matplotlib.pyplot as plt
from ipywidgets import interact, Checkbox

def plot(candh, cands):
    fig, axes = plt.subplots(1, 1, figsize=(14,5), sharey=True)
    if candh:
        data_elections.plot(x="rHollandeT1", y="rHollandeT2", kind="scatter", label="H", ax=axes)
    if cands:
        data_elections.plot(x="rSarkozyT1", y="rSarkozyT2", kind="scatter", label="S", ax=axes, c="red")
    axes.plot([0.2,0.7], [0.2,0.7], "g--")
    return axes

candh = Checkbox(description='Hollande', value=True)
cands = Checkbox(description='Sarkozy', value=True)
interact(plot, candh=candh, cands=cands);
```

If no checkbox appears, refer to the installation instructions for the [ipywidgets](http://ipywidgets.readthedocs.io/en/stable/user_install.html) module. The most likely cause is forgetting the second line of

```
pip install ipywidgets
jupyter nbextension enable --py widgetsnbextension
```

which enables the extension. The notebook server must be restarted after running it from the command line.
# Image Analysis with the Computer Vision Service

<p style='text-align:center'><img src='./images/computer_vision.jpg' alt='A robot holding a picture'/></p>

*Computer vision* is a branch of artificial intelligence (AI) that aims to build AI systems that can "see" the world, either in real time through a camera or by analyzing images and video. This is possible because a digital image is fundamentally just an array of numeric pixel values, and those pixel values can be used as *features* to train machine-learning models.

## Using the Computer Vision Cognitive Service

Microsoft Azure includes a number of *cognitive services* that encapsulate common AI capabilities, including some that can help you build computer-vision solutions.

The *Computer Vision* cognitive service is a natural starting point for exploring computer vision in Azure. It uses pre-trained machine-learning models to analyze images and extract information about them.

For example, suppose Northwind Traders has decided to implement a "smart store". By using the Computer Vision service, images captured by cameras placed throughout the store can be analyzed to produce meaningful descriptions of what they depict.

First, let's create a **Cognitive Services** resource in your Azure subscription:

1. In another browser tab, open the Azure portal at https://portal.azure.com and sign in with your Microsoft account.
2. Click the **&#65291;Create a resource** button, search for *Cognitive Services*, and create a **Cognitive Services** resource with the following settings:
    - **Name**: *enter a unique name*.
    - **Subscription**: *your Azure subscription*.
    - **Location**: *choose any available region*.
    - **Pricing tier**: S0
    - **Resource group**: *create a resource group with a unique name*.
3. Wait for the deployment to finish. Then go to your Cognitive Services resource and, on the **Quick start** page, note the key and endpoint. You will need these to connect to the Cognitive Services resource from client applications.
4. Copy the **Key1** of your resource and paste it into the code below, replacing **YOUR_COG_KEY**.
5. Copy the **endpoint** of your resource and paste it into the code below, replacing **YOUR_COG_ENDPOINT**.
6. Run the cell below by clicking the green <span style="color:green">&#9655</span> button at the top left of the cell.

```
cog_key = 'YOUR_COG_KEY'
cog_endpoint = 'YOUR_COG_ENDPOINT'

print('Using key {}, ready to use Cognitive Services at {}.'.format(cog_key, cog_endpoint))
```

Now that the key and endpoint are set up, you need to install the Azure Cognitive Services package before moving on. Run the following cell:

```
! pip install azure-cognitiveservices-vision-computervision
```

With the setup complete, you can use the Computer Vision service to analyze images.

Run the cell below to get a description of the image in the /data/vision/store_cam1.jpg file.

```
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials
from python_code import vision
import os
%matplotlib inline

# get the path to the image file
image_path = os.path.join('data', 'vision', 'store_cam1.jpg')

# initialize a client for the Computer Vision service
computervision_client = ComputerVisionClient(cog_endpoint, CognitiveServicesCredentials(cog_key))

# get a description from the Computer Vision service
image_stream = open(image_path, "rb")
description = computervision_client.describe_image_in_stream(image_stream)

# display the image and caption (the code is in helper_scripts/vision.py)
vision.show_image_caption(image_path, description)
```

That seems reasonably accurate. Let's try another image.

```
# get the path to the image file
image_path = os.path.join('data', 'vision', 'store_cam2.jpg')

# get a description from the Computer Vision service
image_stream = open(image_path, "rb")
description = computervision_client.describe_image_in_stream(image_stream)

# display the image and caption (the code is in helper_scripts/vision.py)
vision.show_image_caption(image_path, description)
```

Again, the suggested caption seems quite accurate.

## Analyzing image features

So far we have used the Computer Vision service to generate descriptive captions for a couple of images, but it can do much more. The Computer Vision service provides analysis capabilities that can extract detailed information such as:

- The locations of common types of objects detected in the image.
- The location, approximate age, and gender of any faces in the image.
- Whether the image contains "adult", "racy", or "gory" content.
- Relevant tags that could be associated with the image in a database to make it easy to find.

Run the following code to analyze an image of a shopper.

```
# get the path to the image file
image_path = os.path.join('data', 'vision', 'store_cam1.jpg')

# specify the features we want to analyze
features = ['Description', 'Tags', 'Adult', 'Objects', 'Faces']

# get an analysis from the Computer Vision service
image_stream = open(image_path, "rb")
analysis = computervision_client.analyze_image_in_stream(image_stream, visual_features=features)

# display the analysis results (the code is in helper_scripts/vision.py)
vision.show_image_analysis(image_path, analysis)
```

## Learn More
In addition to the capabilities explored in this notebook, the Computer Vision cognitive service can also:

- Identify celebrities in images.
- Detect brand logos in images.
- Perform optical character recognition (OCR) to read text in images.

To learn more about the Computer Vision cognitive service, see the [Computer Vision documentation](https://docs.microsoft.com/azure/cognitive-services/computer-vision/).
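The introduction above noted that a digital image is fundamentally an array of numeric pixel values that can serve as features. A small numpy sketch, entirely independent of the Azure service, makes that concrete; the tiny 2x2 image here is made up for illustration.

```python
import numpy as np

# A tiny 2x2 "image" with 3 colour channels (RGB), values 0-255.
image = np.array([
    [[255, 0, 0], [0, 255, 0]],
    [[0, 0, 255], [255, 255, 255]],
], dtype=np.uint8)

# Shape is (height, width, channels) -- these raw numbers are what a model sees.
print(image.shape)  # (2, 2, 3)

# A simple hand-crafted feature: mean intensity per channel.
mean_per_channel = image.reshape(-1, 3).mean(axis=0)
print(mean_per_channel.tolist())  # [127.5, 127.5, 127.5]
```

Services like Computer Vision train large models on exactly this kind of numeric representation, just at far greater scale.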
github_jupyter
Since a recipe is linked to an ontology, it can be used to make changes to labels and attributes. When the recipe is set as the default one for a dataset, the same applies to the dataset entity: it can be used to change the labels and attributes that are ultimately linked to it through the recipe and its ontology.

## Working With Recipes

```
# Get a recipe from a list
recipe = dataset.recipes.list()[0]

# Get a recipe by ID - the ID can be retrieved from the page URL when opening the recipe in the platform
recipe = dataset.recipes.get(recipe_id='your-recipe-id')

# Delete a recipe - applies only to recipes of deleted datasets
dataset.recipes.get(recipe_id='your-recipe-id').delete()
```

## Cloning Recipes

When you want to create a new recipe that is only slightly different from an existing one, it can be easier to clone the original recipe and then make changes to the clone.

`shallow`: if `True`, the clone links to the existing ontology; if `False`, all ontologies linked to the recipe are cloned as well.

```
dataset = project.datasets.get(dataset_name="myDataSet")
recipe = dataset.recipes.get(recipe_id="recipe_id")
recipe2 = recipe.clone(shallow=False)
```

## View Dataset Labels

```
# as objects
labels = dataset.labels

# as instance map
labels = dataset.instance_map
```

## Add Labels by Dataset

Labels can be added to a dataset one by one or as a list. The Dataset entity documentation details all label options - read <a href="https://console.dataloop.ai/sdk-docs/dtlpy.entities.html#dtlpy.entities.dataset.Dataset.add_label" target="_blank">here</a>.
```
# Add multiple labels
dataset.add_labels(label_list=['person', 'animal', 'object'])

# Add a single label with a specific color
dataset.add_label(label_name='person', color=(34, 6, 231))

# Add a single label with a thumbnail/icon
dataset.add_label(label_name='person', icon_path='/home/project/images/icon.jpg')
```

## Add Labels Using the Label Object

```
# Create a list of labels using the Label object
labels = [
    dl.Label(tag='Donkey', color=(255, 100, 0)),
    dl.Label(tag='Mammoth', color=(34, 56, 7)),
    dl.Label(tag='Bird', color=(100, 14, 150))
]

# Add the labels to the dataset
dataset.add_labels(label_list=labels)

# or create a recipe from the label list
recipe = dataset.recipes.create(recipe_name='My-Recipe-name', labels=labels)
```

## Add a Label and Sub-Labels

```
label = dl.Label(tag='Fish', color=(34, 6, 231),
                 children=[dl.Label(tag='Shark', color=(34, 6, 231)),
                           dl.Label(tag='Salmon', color=(34, 6, 231))])
dataset.add_labels(label_list=label)

# or create a recipe from the label (note the list wrapper)
recipe = dataset.recipes.create(recipe_name='My-Recipe-name', labels=[label])
```

## Add Hierarchical Labels with Nesting

There are different options for creating a label hierarchy.
```
# Option A
# add the parent label
labels = dataset.add_label(label_name="animal", color=(123, 134, 64))
# add a child label
labels = dataset.add_label(label_name="animal.Dog", color=(45, 34, 164))
# add a grandchild label
labels = dataset.add_label(label_name="animal.Dog.poodle")

# Option B: only if you don't have attributes
# the parent and grandparent (animal and Dog) will be generated automatically
labels = dataset.add_label(label_name="animal.Dog.poodle")

# Option C: with a big nested dict
nested_labels = [
    {'label_name': 'animal.Dog',
     'color': '#220605',
     'children': [{'label_name': 'poodle', 'color': '#298345'},
                  {'label_name': 'labrador', 'color': '#298651'}]},
    {'label_name': 'animal.cat',
     'color': '#287605',
     'children': [{'label_name': 'Persian', 'color': '#298345'},
                  {'label_name': 'Balinese', 'color': '#298651'}]}
]

# Add the labels to the dataset
labels = dataset.add_labels(label_list=nested_labels)
```

## Delete Labels by Dataset

```
dataset.delete_labels(label_names=['Cat', 'Dog'])
```

## Update Label Features

```
# update an existing label; fails if it does not exist
dataset.update_label(label_name='Cat', color="#000080")

# update a label, adding it if it does not exist
dataset.update_label(label_name='Cat', color="#fcba03", upsert=True)
```
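The dot notation used in the hierarchy options above ("animal.Dog.poodle") encodes a parent/child chain. As a plain-Python illustration, independent of the Dataloop SDK, here is how such names can be expanded into a nested structure (`nest_labels` is a hypothetical helper, not an SDK function):

```python
def nest_labels(label_names):
    """Expand dot-separated label names into a nested dict hierarchy."""
    tree = {}
    for name in label_names:
        node = tree
        for part in name.split("."):
            # Walk down the tree, creating missing levels as we go.
            node = node.setdefault(part, {})
    return tree

hierarchy = nest_labels(["animal.Dog.poodle", "animal.Dog.labrador", "animal.cat.Persian"])
print(hierarchy)
# {'animal': {'Dog': {'poodle': {}, 'labrador': {}}, 'cat': {'Persian': {}}}}
```

This is roughly what Option B relies on: naming the leaf implies all of its ancestors.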
github_jupyter
# Deep Learning & Art: Neural Style Transfer Welcome to the second assignment of this week. In this assignment, you will learn about Neural Style Transfer. This algorithm was created by Gatys et al. (2015) (https://arxiv.org/abs/1508.06576). **In this assignment, you will:** - Implement the neural style transfer algorithm - Generate novel artistic images using your algorithm Most of the algorithms you've studied optimize a cost function to get a set of parameter values. In Neural Style Transfer, you'll optimize a cost function to get pixel values! ``` import os import sys import scipy.io import scipy.misc import matplotlib.pyplot as plt from matplotlib.pyplot import imshow from PIL import Image from nst_utils import * import numpy as np import tensorflow as tf %matplotlib inline ``` ## 1 - Problem Statement Neural Style Transfer (NST) is one of the most fun techniques in deep learning. As seen below, it merges two images, namely, a "content" image (C) and a "style" image (S), to create a "generated" image (G). The generated image G combines the "content" of the image C with the "style" of image S. In this example, you are going to generate an image of the Louvre museum in Paris (content image C), mixed with a painting by Claude Monet, a leader of the impressionist movement (style image S). <img src="images/louvre_generated.png" style="width:750px;height:200px;"> Let's see how you can do this. ## 2 - Transfer Learning Neural Style Transfer (NST) uses a previously trained convolutional network, and builds on top of that. The idea of using a network trained on a different task and applying it to a new task is called transfer learning. Following the original NST paper (https://arxiv.org/abs/1508.06576), we will use the VGG network. Specifically, we'll use VGG-19, a 19-layer version of the VGG network. 
This model has already been trained on the very large ImageNet database, and thus has learned to recognize a variety of low level features (at the earlier layers) and high level features (at the deeper layers). Run the following code to load parameters from the VGG model. This may take a few seconds. ``` model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat") print(model) ``` The model is stored in a python dictionary where each variable name is the key and the corresponding value is a tensor containing that variable's value. To run an image through this network, you just have to feed the image to the model. In TensorFlow, you can do so using the [tf.assign](https://www.tensorflow.org/api_docs/python/tf/assign) function. In particular, you will use the assign function like this: ```python model["input"].assign(image) ``` This assigns the image as an input to the model. After this, if you want to access the activations of a particular layer, say layer `4_2` when the network is run on this image, you would run a TensorFlow session on the correct tensor `conv4_2`, as follows: ```python sess.run(model["conv4_2"]) ``` ## 3 - Neural Style Transfer We will build the NST algorithm in three steps: - Build the content cost function $J_{content}(C,G)$ - Build the style cost function $J_{style}(S,G)$ - Put it together to get $J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$. ### 3.1 - Computing the content cost In our running example, the content image C will be the picture of the Louvre Museum in Paris. Run the code below to see a picture of the Louvre. ``` content_image = scipy.misc.imread("images/louvre.jpg") imshow(content_image) ``` The content image (C) shows the Louvre museum's pyramid surrounded by old Paris buildings, against a sunny sky with a few clouds. 
**3.1.1 - How do you ensure the generated image G matches the content of the image C?**

As we saw in lecture, the earlier (shallower) layers of a ConvNet tend to detect lower-level features such as edges and simple textures, and the later (deeper) layers tend to detect higher-level features such as more complex textures as well as object classes.

We would like the "generated" image G to have similar content as the input image C. Suppose you have chosen some layer's activations to represent the content of an image. In practice, you'll get the most visually pleasing results if you choose a layer in the middle of the network--neither too shallow nor too deep. (After you have finished this exercise, feel free to come back and experiment with using different layers, to see how the results vary.)

So, suppose you have picked one particular hidden layer to use. Now, set the image C as the input to the pretrained VGG network, and run forward propagation. Let $a^{(C)}$ be the hidden layer activations in the layer you had chosen. (In lecture, we had written this as $a^{[l](C)}$, but here we'll drop the superscript $[l]$ to simplify the notation.) This will be an $n_H \times n_W \times n_C$ tensor. Repeat this process with the image G: set G as the input, and run forward propagation. Let $a^{(G)}$ be the corresponding hidden layer activation. We will define the content cost function as:

$$J_{content}(C,G) = \frac{1}{4 \times n_H \times n_W \times n_C}\sum _{ \text{all entries}} (a^{(C)} - a^{(G)})^2\tag{1} $$

Here, $n_H, n_W$ and $n_C$ are the height, width and number of channels of the hidden layer you have chosen, and appear in a normalization term in the cost. For clarity, note that $a^{(C)}$ and $a^{(G)}$ are the volumes corresponding to a hidden layer's activations. In order to compute the cost $J_{content}(C,G)$, it might also be convenient to unroll these 3D volumes into a 2D matrix, as shown below.
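Equation (1) is easy to sanity-check with plain numpy before tackling the TensorFlow exercise below. This is a sketch with random stand-in activations, not the graded solution:

```python
import numpy as np

rng = np.random.default_rng(0)
n_H, n_W, n_C = 4, 4, 3

# Random stand-ins for the chosen hidden layer's activations on C and G.
a_C = rng.standard_normal((n_H, n_W, n_C))
a_G = rng.standard_normal((n_H, n_W, n_C))

# Equation (1): sum of squared differences over all entries,
# scaled by 1 / (4 * n_H * n_W * n_C).
J_content = np.sum((a_C - a_G) ** 2) / (4 * n_H * n_W * n_C)

# Identical activations give zero cost, as expected.
assert np.sum((a_C - a_C) ** 2) / (4 * n_H * n_W * n_C) == 0.0
print(J_content)
```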
(Technically this unrolling step isn't needed to compute $J_{content}$, but it will be good practice for when you do need to carry out a similar operation later for computing the style cost $J_{style}$.)

<img src="images/NST_LOSS.png" style="width:800px;height:400px;">

**Exercise:** Compute the "content cost" using TensorFlow.

**Instructions**: The 3 steps to implement this function are:
1. Retrieve dimensions from a_G:
    - To retrieve dimensions from a tensor X, use: `X.get_shape().as_list()`
2. Unroll a_C and a_G as explained in the picture above
    - If you are stuck, take a look at [Hint1](https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/transpose) and [Hint2](https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/reshape).
3. Compute the content cost:
    - If you are stuck, take a look at [Hint3](https://www.tensorflow.org/api_docs/python/tf/reduce_sum), [Hint4](https://www.tensorflow.org/api_docs/python/tf/square) and [Hint5](https://www.tensorflow.org/api_docs/python/tf/subtract).

```
# GRADED FUNCTION: compute_content_cost

def compute_content_cost(a_C, a_G):
    """
    Computes the content cost

    Arguments:
    a_C -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image C
    a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image G

    Returns:
    J_content -- scalar that you compute using equation 1 above.
""" ### START CODE HERE ### # Retrieve dimensions from a_G (≈1 line) m, n_H, n_W, n_C = None # Reshape a_C and a_G (≈2 lines) a_C_unrolled = None a_G_unrolled = None # compute the cost with tensorflow (≈1 line) J_content = None ### END CODE HERE ### return J_content tf.reset_default_graph() with tf.Session() as test: tf.set_random_seed(1) a_C = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4) a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4) J_content = compute_content_cost(a_C, a_G) print("J_content = " + str(J_content.eval())) ``` **Expected Output**: <table> <tr> <td> **J_content** </td> <td> 6.76559 </td> </tr> </table> <font color='blue'> **What you should remember**: - The content cost takes a hidden layer activation of the neural network, and measures how different $a^{(C)}$ and $a^{(G)}$ are. - When we minimize the content cost later, this will help make sure $G$ has similar content as $C$. ### 3.2 - Computing the style cost For our running example, we will use the following style image: ``` style_image = scipy.misc.imread("images/monet_800600.jpg") imshow(style_image) ``` This painting was painted in the style of *[impressionism](https://en.wikipedia.org/wiki/Impressionism)*. Lets see how you can now define a "style" const function $J_{style}(S,G)$. ### 3.2.1 - Style matrix The style matrix is also called a "Gram matrix." In linear algebra, the Gram matrix G of a set of vectors $(v_{1},\dots ,v_{n})$ is the matrix of dot products, whose entries are ${\displaystyle G_{ij} = v_{i}^T v_{j} = np.dot(v_{i}, v_{j}) }$. In other words, $G_{ij}$ compares how similar $v_i$ is to $v_j$: If they are highly similar, you would expect them to have a large dot product, and thus for $G_{ij}$ to be large. Note that there is an unfortunate collision in the variable names used here. We are following common terminology used in the literature, but $G$ is used to denote the Style matrix (or Gram matrix) as well as to denote the generated image $G$. 
We will try to make sure which $G$ we are referring to is always clear from the context.

In NST, you can compute the Style matrix by multiplying the "unrolled" filter matrix with its transpose:

<img src="images/NST_GM.png" style="width:900px;height:300px;">

The result is a matrix of dimension $(n_C,n_C)$ where $n_C$ is the number of filters. The value $G_{ij}$ measures how similar the activations of filter $i$ are to the activations of filter $j$.

One important part of the Gram matrix is that the diagonal elements, such as $G_{ii}$, also measure how active filter $i$ is. For example, suppose filter $i$ is detecting vertical textures in the image. Then $G_{ii}$ measures how common vertical textures are in the image as a whole: if $G_{ii}$ is large, this means that the image has a lot of vertical texture.

By capturing the prevalence of different types of features ($G_{ii}$), as well as how much different features occur together ($G_{ij}$), the Style matrix $G$ measures the style of an image.

**Exercise**: Using TensorFlow, implement a function that computes the Gram matrix of a matrix A. The formula is: the Gram matrix of A is $G_A = AA^T$. If you are stuck, take a look at [Hint 1](https://www.tensorflow.org/api_docs/python/tf/matmul) and [Hint 2](https://www.tensorflow.org/api_docs/python/tf/transpose).
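As a quick sanity check of the formula, in plain numpy rather than the graded TensorFlow solution, the Gram matrix is just the unrolled filter matrix multiplied by its own transpose:

```python
import numpy as np

# Unrolled activations: one row per filter (n_C = 2), one column per pixel position (n_H*n_W = 3).
A = np.array([[1., 2., 3.],
              [0., 1., 0.]])

# Gram matrix: the unrolled matrix times its own transpose, shape (n_C, n_C).
G = A @ A.T

# G[i, i] measures how active filter i is; G[i, j] how correlated filters i and j are.
print(G.tolist())  # [[14.0, 2.0], [2.0, 1.0]]
```

Note the diagonal entry 14 (filter 0 is very active) versus 1 (filter 1 is barely active), matching the interpretation of $G_{ii}$ above.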
```
# GRADED FUNCTION: gram_matrix

def gram_matrix(A):
    """
    Argument:
    A -- matrix of shape (n_C, n_H*n_W)

    Returns:
    GA -- Gram matrix of A, of shape (n_C, n_C)
    """

    ### START CODE HERE ### (≈1 line)
    GA = None
    ### END CODE HERE ###

    return GA

tf.reset_default_graph()

with tf.Session() as test:
    tf.set_random_seed(1)
    A = tf.random_normal([3, 2*1], mean=1, stddev=4)
    GA = gram_matrix(A)
    print("GA = " + str(GA.eval()))
```

**Expected Output**:
<table>
    <tr>
        <td>
            **GA**
        </td>
        <td>
            [[ 6.42230511 -4.42912197 -2.09668207] <br>
            [ -4.42912197 19.46583748 19.56387138] <br>
            [ -2.09668207 19.56387138 20.6864624 ]]
        </td>
    </tr>
</table>

### 3.2.2 - Style cost

After generating the Style matrix (Gram matrix), your goal will be to minimize the distance between the Gram matrix of the "style" image S and that of the "generated" image G. For now, we are using only a single hidden layer $a^{[l]}$, and the corresponding style cost for this layer is defined as:

$$J_{style}^{[l]}(S,G) = \frac{1}{4 \times {n_C}^2 \times (n_H \times n_W)^2} \sum _{i=1}^{n_C}\sum_{j=1}^{n_C}(G^{(S)}_{ij} - G^{(G)}_{ij})^2\tag{2} $$

where $G^{(S)}$ and $G^{(G)}$ are respectively the Gram matrices of the "style" image and the "generated" image, computed using the hidden layer activations for a particular hidden layer in the network.

**Exercise**: Compute the style cost for a single layer.

**Instructions**: The 4 steps to implement this function are:
1. Retrieve dimensions from the hidden layer activations a_G:
    - To retrieve dimensions from a tensor X, use: `X.get_shape().as_list()`
2. Unroll the hidden layer activations a_S and a_G into 2D matrices, as explained in the picture above.
    - You may find [Hint1](https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/transpose) and [Hint2](https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/reshape) useful.
3. Compute the Style matrix of the images S and G. (Use the function you had previously written.)
4.
Compute the Style cost: - You may find [Hint3](https://www.tensorflow.org/api_docs/python/tf/reduce_sum), [Hint4](https://www.tensorflow.org/api_docs/python/tf/square) and [Hint5](https://www.tensorflow.org/api_docs/python/tf/subtract) useful. ``` # GRADED FUNCTION: compute_layer_style_cost def compute_layer_style_cost(a_S, a_G): """ Arguments: a_S -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image S a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image G Returns: J_style_layer -- tensor representing a scalar value, style cost defined above by equation (2) """ ### START CODE HERE ### # Retrieve dimensions from a_G (≈1 line) m, n_H, n_W, n_C = None # Reshape the images to have them of shape (n_C, n_H*n_W) (≈2 lines) a_S = None a_G = None # Computing gram_matrices for both images S and G (≈2 lines) GS = None GG = None # Computing the loss (≈1 line) J_style_layer = None ### END CODE HERE ### return J_style_layer tf.reset_default_graph() with tf.Session() as test: tf.set_random_seed(1) a_S = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4) a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4) J_style_layer = compute_layer_style_cost(a_S, a_G) print("J_style_layer = " + str(J_style_layer.eval())) ``` **Expected Output**: <table> <tr> <td> **J_style_layer** </td> <td> 9.19028 </td> </tr> </table> ### 3.2.3 Style Weights So far you have captured the style from only one layer. We'll get better results if we "merge" style costs from several different layers. After completing this exercise, feel free to come back and experiment with different weights to see how it changes the generated image $G$. 
But for now, this is a pretty reasonable default: ``` STYLE_LAYERS = [ ('conv1_1', 0.2), ('conv2_1', 0.2), ('conv3_1', 0.2), ('conv4_1', 0.2), ('conv5_1', 0.2)] ``` You can combine the style costs for different layers as follows: $$J_{style}(S,G) = \sum_{l} \lambda^{[l]} J^{[l]}_{style}(S,G)$$ where the values for $\lambda^{[l]}$ are given in `STYLE_LAYERS`. We've implemented a compute_style_cost(...) function. It simply calls your `compute_layer_style_cost(...)` several times, and weights their results using the values in `STYLE_LAYERS`. Read over it to make sure you understand what it's doing. <!-- 2. Loop over (layer_name, coeff) from STYLE_LAYERS: a. Select the output tensor of the current layer. As an example, to call the tensor from the "conv1_1" layer you would do: out = model["conv1_1"] b. Get the style of the style image from the current layer by running the session on the tensor "out" c. Get a tensor representing the style of the generated image from the current layer. It is just "out". d. Now that you have both styles. Use the function you've implemented above to compute the style_cost for the current layer e. Add (style_cost x coeff) of the current layer to overall style cost (J_style) 3. Return J_style, which should now be the sum of the (style_cost x coeff) for each layer. 
!--> ``` def compute_style_cost(model, STYLE_LAYERS): """ Computes the overall style cost from several chosen layers Arguments: model -- our tensorflow model STYLE_LAYERS -- A python list containing: - the names of the layers we would like to extract style from - a coefficient for each of them Returns: J_style -- tensor representing a scalar value, style cost defined above by equation (2) """ # initialize the overall style cost J_style = 0 for layer_name, coeff in STYLE_LAYERS: # Select the output tensor of the currently selected layer out = model[layer_name] # Set a_S to be the hidden layer activation from the layer we have selected, by running the session on out a_S = sess.run(out) # Set a_G to be the hidden layer activation from same layer. Here, a_G references model[layer_name] # and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that # when we run the session, this will be the activations drawn from the appropriate layer, with G as input. a_G = out # Compute style_cost for the current layer J_style_layer = compute_layer_style_cost(a_S, a_G) # Add coeff * J_style_layer of this layer to overall style cost J_style += coeff * J_style_layer return J_style ``` **Note**: In the inner-loop of the for-loop above, `a_G` is a tensor and hasn't been evaluated yet. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below. <!-- How do you choose the coefficients for each layer? The deeper layers capture higher-level concepts, and the features in the deeper layers are less localized in the image relative to each other. So if you want the generated image to softly follow the style image, try choosing larger weights for deeper layers and smaller weights for the first layers. 
In contrast, if you want the generated image to strongly follow the style image, try choosing smaller weights for deeper layers and larger weights for the first layers.
!-->

<font color='blue'>
**What you should remember**:
- The style of an image can be represented using the Gram matrix of a hidden layer's activations. However, we get even better results by combining this representation from multiple different layers. This is in contrast to the content representation, where usually using just a single hidden layer is sufficient.
- Minimizing the style cost will cause the image $G$ to follow the style of the image $S$.
</font>

### 3.3 - Defining the total cost to optimize

Finally, let's create a cost function that combines both the style and the content costs. The formula is:

$$J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$$

**Exercise**: Implement the total cost function which includes both the content cost and the style cost.

```
# GRADED FUNCTION: total_cost

def total_cost(J_content, J_style, alpha = 10, beta = 40):
    """
    Computes the total cost function

    Arguments:
    J_content -- content cost coded above
    J_style -- style cost coded above
    alpha -- hyperparameter weighting the importance of the content cost
    beta -- hyperparameter weighting the importance of the style cost

    Returns:
    J -- total cost as defined by the formula above.
""" ### START CODE HERE ### (≈1 line) J = None ### END CODE HERE ### return J tf.reset_default_graph() with tf.Session() as test: np.random.seed(3) J_content = np.random.randn() J_style = np.random.randn() J = total_cost(J_content, J_style) print("J = " + str(J)) ``` **Expected Output**: <table> <tr> <td> **J** </td> <td> 35.34667875478276 </td> </tr> </table> <font color='blue'> **What you should remember**: - The total cost is a linear combination of the content cost $J_{content}(C,G)$ and the style cost $J_{style}(S,G)$ - $\alpha$ and $\beta$ are hyperparameters that control the relative weighting between content and style ## 4 - Solving the optimization problem Finally, let's put everything together to implement Neural Style Transfer! Here's what the program will have to do: <font color='purple'> 1. Create an Interactive Session 2. Load the content image 3. Load the style image 4. Randomly initialize the image to be generated 5. Load the VGG16 model 7. Build the TensorFlow graph: - Run the content image through the VGG16 model and compute the content cost - Run the style image through the VGG16 model and compute the style cost - Compute the total cost - Define the optimizer and the learning rate 8. Initialize the TensorFlow graph and run it for a large number of iterations, updating the generated image at every step. </font> Lets go through the individual steps in detail. You've previously implemented the overall cost $J(G)$. We'll now set up TensorFlow to optimize this with respect to $G$. To do so, your program has to reset the graph and use an "[Interactive Session](https://www.tensorflow.org/api_docs/python/tf/InteractiveSession)". Unlike a regular session, the "Interactive Session" installs itself as the default session to build a graph. This allows you to run variables without constantly needing to refer to the session object, which simplifies the code. Lets start the interactive session. 
```
# Reset the graph
tf.reset_default_graph()

# Start interactive session
sess = tf.InteractiveSession()
```

Let's load, reshape, and normalize our "content" image (the Louvre museum picture):

```
content_image = scipy.misc.imread("images/louvre_small.jpg")
content_image = reshape_and_normalize_image(content_image)
```

Let's load, reshape and normalize our "style" image (Claude Monet's painting):

```
style_image = scipy.misc.imread("images/monet.jpg")
style_image = reshape_and_normalize_image(style_image)
```

Now, we initialize the "generated" image as a noisy image created from the content_image. Initializing the pixels of the generated image to be mostly noise but still slightly correlated with the content image helps the content of the "generated" image more rapidly match the content of the "content" image. (Feel free to look in `nst_utils.py` to see the details of `generate_noise_image(...)`; to do so, click "File-->Open..." at the upper-left corner of this Jupyter notebook.)

```
generated_image = generate_noise_image(content_image)
imshow(generated_image[0])
```

Next, as explained in part (2), let's load the VGG-19 model.

```
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
```

To get the program to compute the content cost, we will now assign `a_C` and `a_G` to be the appropriate hidden layer activations. We will use layer `conv4_2` to compute the content cost. The code below does the following:

1. Assign the content image to be the input to the VGG model.
2. Set a_C to be the tensor giving the hidden layer activation for layer "conv4_2".
3. Set a_G to be the tensor giving the hidden layer activation for the same layer.
4. Compute the content cost using a_C and a_G.

```
# Assign the content image to be the input of the VGG model.
sess.run(model['input'].assign(content_image))

# Select the output tensor of layer conv4_2
out = model['conv4_2']

# Set a_C to be the hidden layer activation from the layer we have selected
a_C = sess.run(out)

# Set a_G to be the hidden layer activation from same layer. Here, a_G references model['conv4_2']
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out

# Compute the content cost
J_content = compute_content_cost(a_C, a_G)
```

**Note**: At this point, a_G is a tensor and hasn't been evaluated. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below.

```
# Assign the input of the model to be the "style" image
sess.run(model['input'].assign(style_image))

# Compute the style cost
J_style = compute_style_cost(model, STYLE_LAYERS)
```

**Exercise**: Now that you have J_content and J_style, compute the total cost J by calling `total_cost()`. Use `alpha = 10` and `beta = 40`.

```
### START CODE HERE ### (1 line)
J = None
### END CODE HERE ###
```

You'd previously learned how to set up the Adam optimizer in TensorFlow. Let's do that here, using a learning rate of 2.0. [See reference](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer)

```
# define optimizer (1 line)
optimizer = tf.train.AdamOptimizer(2.0)

# define train_step (1 line)
train_step = optimizer.minimize(J)
```

**Exercise**: Implement the model_nn() function, which initializes the variables of the TensorFlow graph, assigns the input image (the initial generated image) as the input of the VGG-19 model, and runs the train_step for a large number of steps.
```
def model_nn(sess, input_image, num_iterations = 200):

    # Initialize global variables (you need to run the session on the initializer)
    ### START CODE HERE ### (1 line)
    None
    ### END CODE HERE ###

    # Run the noisy input image (initial generated image) through the model. Use assign().
    ### START CODE HERE ### (1 line)
    None
    ### END CODE HERE ###

    for i in range(num_iterations):

        # Run the session on the train_step to minimize the total cost
        ### START CODE HERE ### (1 line)
        None
        ### END CODE HERE ###

        # Compute the generated image by running the session on the current model['input']
        ### START CODE HERE ### (1 line)
        generated_image = None
        ### END CODE HERE ###

        # Print every 20 iterations.
        if i%20 == 0:
            Jt, Jc, Js = sess.run([J, J_content, J_style])
            print("Iteration " + str(i) + " :")
            print("total cost = " + str(Jt))
            print("content cost = " + str(Jc))
            print("style cost = " + str(Js))

            # save current generated image in the "/output" directory
            save_image("output/" + str(i) + ".png", generated_image)

    # save last generated image
    save_image('output/generated_image.jpg', generated_image)

    return generated_image
```

Run the following cell to generate an artistic image. It should take about 3 minutes on CPU for every 20 iterations, but you start observing attractive results after ≈140 iterations. Neural Style Transfer is generally trained using GPUs.

```
model_nn(sess, generated_image)
```

**Expected Output**:
<table>
    <tr>
        <td>
            **Iteration 0 :**
        </td>
        <td>
            total cost = 5.05035e+09 <br>
            content cost = 7877.67 <br>
            style cost = 1.26257e+08
        </td>
    </tr>
</table>

You're done! After running this, in the upper bar of the notebook click on "File" and then "Open". Go to the "/output" directory to see all the saved images. Open "generated_image" to see the generated image!
:)

You should see something like the image presented below on the right:

<img src="images/louvre_generated.png" style="width:800px;height:300px;">

We didn't want you to wait too long to see an initial result, and so had set the hyperparameters accordingly. To get the best-looking results, running the optimization algorithm longer (and perhaps with a smaller learning rate) might work better. After completing and submitting this assignment, we encourage you to come back and play more with this notebook, and see if you can generate even better-looking images.

Here are a few other examples:

- The beautiful ruins of the ancient city of Persepolis (Iran) with the style of Van Gogh (The Starry Night)
<img src="images/perspolis_vangogh.png" style="width:750px;height:300px;">

- The tomb of Cyrus the Great in Pasargadae with the style of a Ceramic Kashi from Ispahan.
<img src="images/pasargad_kashi.png" style="width:750px;height:300px;">

- A scientific study of a turbulent fluid with the style of an abstract blue fluid painting.
<img src="images/circle_abstract.png" style="width:750px;height:300px;">

## 5 - Test with your own image (Optional/Ungraded)

Finally, you can also rerun the algorithm on your own images!

To do so, go back to part 4 and change the content image and style image with your own pictures. In detail, here's what you should do:

1. Click on "File -> Open" in the upper tab of the notebook
2. Go to "/images" and upload your images (requirement: (WIDTH = 300, HEIGHT = 225)), and rename them "my_content.png" and "my_style.png", for example.
3. Change the code in part (3.4) from:
```python
content_image = scipy.misc.imread("images/louvre.jpg")
style_image = scipy.misc.imread("images/claude-monet.jpg")
```
to:
```python
content_image = scipy.misc.imread("images/my_content.jpg")
style_image = scipy.misc.imread("images/my_style.jpg")
```
4. Rerun the cells (you may need to restart the Kernel in the upper tab of the notebook).
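Step 2 above requires images of WIDTH = 300 and HEIGHT = 225. Here is a small Pillow sketch of one way to resize a picture; the in-memory image stands in for your own photo, and the filenames are only examples:

```python
from PIL import Image

# Stand-in for your own photo; in practice you would use Image.open("images/my_content.jpg").
photo = Image.new("RGB", (1024, 768), color=(120, 160, 200))

# Resize to the dimensions the notebook expects: width 300, height 225.
resized = photo.resize((300, 225))
print(resized.size)  # (300, 225)
```

After resizing a real photo, you would save it into the notebook's images folder, e.g. `resized.save("images/my_content.jpg")`.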
You can also tune your hyperparameters:
- Which layers are responsible for representing the style? STYLE_LAYERS
- How many iterations do you want to run the algorithm? num_iterations
- What is the relative weighting between content and style? alpha/beta

## 6 - Conclusion

Great job on completing this assignment! You are now able to use Neural Style Transfer to generate artistic images. This is also your first time building a model in which the optimization algorithm updates the pixel values rather than the neural network's parameters. Deep learning has many different types of models and this is only one of them!

<font color='blue'>
What you should remember:
- Neural Style Transfer is an algorithm that, given a content image C and a style image S, can generate an artistic image
- It uses representations (hidden layer activations) based on a pretrained ConvNet.
- The content cost function is computed using one hidden layer's activations.
- The style cost function for one layer is computed using the Gram matrix of that layer's activations. The overall style cost function is obtained using several hidden layers.
- Optimizing the total cost function results in synthesizing new images.

This was the final programming exercise of this course. Congratulations--you've finished all the programming exercises of this course on Convolutional Networks! We hope to also see you in Course 5, on Sequence models!

### References:

The Neural Style Transfer algorithm was due to Gatys et al. (2015). Harish Narayanan and Github user "log0" also have highly readable write-ups from which we drew inspiration. The pre-trained network used in this implementation is a VGG network, which is due to Simonyan and Zisserman (2015). Pre-trained weights were from the work of the MatConvNet team.

- Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, (2015). A Neural Algorithm of Artistic Style (https://arxiv.org/abs/1508.06576)
- Harish Narayanan, Convolutional neural networks for artistic style transfer.
  https://harishnarayanan.org/writing/artistic-style-transfer/
- Log0, TensorFlow Implementation of "A Neural Algorithm of Artistic Style". http://www.chioka.in/tensorflow-implementation-neural-algorithm-of-artistic-style
- Karen Simonyan and Andrew Zisserman (2015). Very deep convolutional networks for large-scale image recognition (https://arxiv.org/pdf/1409.1556.pdf)
- MatConvNet. http://www.vlfeat.org/matconvnet/pretrained/
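For reference, the alpha/beta weighting mentioned in the hyperparameter notes boils down to a weighted sum of the two costs; a toy sketch (the alpha and beta values here are illustrative, not prescribed by the assignment):

```python
def total_cost(J_content, J_style, alpha=10, beta=40):
    # J(G) = alpha * J_content(C, G) + beta * J_style(S, G)
    return alpha * J_content + beta * J_style

# With costs of the magnitude shown in the expected output above,
# the style term dominates unless alpha/beta is increased.
J = total_cost(7877.67, 1.26257e8)
print(J)
```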
```
cd ../..
import pandas as pd
import numpy as np
from geostacks.utils import bearing, representation
%pylab inline

ls8 = pd.read_excel('./LS8_cornerPts.xlsx')
ls8
```
# Calculate Bearing per Row as Average

Note that the `asending` and `dsending` variables refer to the direction of the moving window with regard to the bearing calculation, ***not*** ascending/descending paths of the satellite orbit.

```
asending = bearing(ls8.lat_CTR[248:497].values, ls8.lon_CTR[248:497].values,
                   ls8.lat_CTR[247:496].values, ls8.lon_CTR[247:496].values)

# 180 degree offset
dsending = bearing(ls8.lat_CTR[247:496].values, ls8.lon_CTR[247:496].values,
                   ls8.lat_CTR[248:497].values, ls8.lon_CTR[248:497].values) + 180.

means = np.mean([asending[0:-1], dsending[1:]], axis=0)
# Replace invalid first value with non-averaged valid value
means[0] = dsending[1]
# Same for last, but on other array
means[-1] = asending[-2]
len(means)

plot(bearing(ls8.lat_CTR[1:248].values, ls8.lon_CTR[1:248].values,
             ls8.lat_CTR[0:247].values, ls8.lon_CTR[0:247].values))
plot(means)
```
# Comparing Landsat footprints
```
def get_corners(i):
    corners = np.zeros((4,2))
    row = ls8[i:i+1]
    corners[0,1] = row.lat_UL.values
    corners[1,1] = row.lat_UR.values
    corners[2,1] = row.lat_LL.values
    corners[3,1] = row.lat_LR.values
    corners[0,0] = row.lon_UL.values
    corners[1,0] = row.lon_UR.values
    corners[2,0] = row.lon_LL.values
    corners[3,0] = row.lon_LR.values
    return corners

ls8[20:21]
get_corners(20)

def compare(idx):
    ref = get_corners(idx)
    scene = ls8[idx:idx+1]
    corners, contains, _, _, _ = representation(radians(scene.lon_CTR.values)[0],
                                                radians(scene.lat_CTR.values)[0],
                                                means[scene.row.values -1][0], 185, 180)
    calc = np.degrees(corners)
    lat_error, lon_error = np.mean(np.abs(ref[ref[:,0].argsort()] - calc[calc[:,0].argsort()]), axis=0)
    return lat_error, lon_error

lats_err, lons_err = [], []
for i in range(19,100):
    lat, lon = compare(i)
    lats_err.append(lat), lons_err.append(lon)

(np.array(lats_err) < 0.01).all() == True
plot(lats_err)
plot(lons_err)

# Swapping swath dimensions...
scene = ls8[20:21] # Change scene index above... need i and i+1 since pandas expects slice
corners1, contains, _, _, _ = representation(radians(scene.lon_CTR.values)[0],
                                             radians(scene.lat_CTR.values)[0],
                                             means[scene.row.values -1][0],
                                             len_lon=185, len_lat=180)
np.degrees(corners1)
plot(get_corners(20)[:,0],get_corners(20)[:,1],'k*')
plot(np.degrees(corners1[:,0]), np.degrees(corners1[:,1]), 'b*')

# Swapping swath dimensions...
scene = ls8[50:51] # Change scene index above... need i and i+1 since pandas expects slice
corners1, contains, _, _, _ = representation(radians(scene.lon_CTR.values)[0],
                                             radians(scene.lat_CTR.values)[0],
                                             means[scene.row.values -1][0],
                                             len_lon=185, len_lat=180)
np.degrees(corners1)
plot(get_corners(50)[:,0],get_corners(50)[:,1],'k*')
plot(np.degrees(corners1[:,0]), np.degrees(corners1[:,1]), 'b*')
```
# Jonathan's Contains Code

Let $C$ be the center point and $\theta$ the tilt angle; there is a corresponding "unit tangent vector" to the sphere with that tilt. Call this vector $v$. To move $C$ along $v$ a distance $D$ on a sphere of radius $R$ is something like
$$ P_1 = \cos(A) \cdot C + R\cdot \sin (A) \cdot v $$
where $A$ corresponds to 90km in radians. This is `midpt_1` in the code below.

Moving in the direction $-v$ yields
$$ P_2 = \cos(A) \cdot C - R\cdot \sin (A) \cdot v $$
which is referred to as `midpt_2` below.
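Before looking at the implementation, the claim that $P_1 = \cos(A)\,C + R\sin(A)\,v$ stays on the sphere is easy to verify numerically. A quick check with toy values ($v$ must be a unit tangent, i.e. orthogonal to $C$; the point chosen here is the pole for simplicity):

```python
import numpy as np

R = 6371.0
C = np.array([0.0, 0.0, R])      # a point on the sphere (the pole)
v = np.array([1.0, 0.0, 0.0])    # unit vector tangent to the sphere at C
A = 90.0 / R                     # 90 km expressed as an angle in radians

P1 = np.cos(A) * C + R * np.sin(A) * v
# |P1| == R: the moved point is still on the sphere,
# and the chord |P1 - C| is very close to 90 km.
print(np.linalg.norm(P1), np.linalg.norm(P1 - C))
```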
```
import numpy as np
import functools

lat_deg, lon_deg = 77.875, -20.975
lat, lon, R, theta = lat_deg*(2*np.pi)/360, lon_deg*(2*np.pi)/360, 6371, -70 * 2 * np.pi / 360
boulder_lat, boulder_lon = lat, lon
x, y, z = (R * np.cos(lat) * np.sin(lon),
           R * np.cos(lat) * np.cos(lon),
           R * np.sin(lat))
C = np.array([x,y,z])
```
## Computing $v$ from $\theta$

At a point $C=[x,y,z]$, a tilt can be thought of as moving through lat and lon along a line with direction vector $d=(d_{lon}, d_{lat})$, so we have in the parameter $t$
$$
x(t), y(t), z(t) = \bigl(R \cos(lat_0 + t\,d_{lat}) \cos(lon_0 + t\,d_{lon}),\ R \cos(lat_0 + t\,d_{lat}) \sin(lon_0 + t\,d_{lon}),\ R \sin(lat_0 + t\,d_{lat})\bigr)
$$
Differentiating with respect to $t$ (ignoring the $R$ scaling as we want normalized $v$) we see $v$ is parallel to
$$
R \cdot \bigl(-\sin(lat_0) \cos(lon_0)\,d_{lat} - \cos(lat_0) \sin(lon_0)\,d_{lon},\ -\sin(lat_0) \sin(lon_0)\,d_{lat} + \cos(lat_0) \cos(lon_0)\,d_{lon},\ \cos(lat_0)\,d_{lat}\bigr)
$$

```
dlat, dlon = np.sin(theta), np.cos(theta)
v = np.array([-np.sin(lat) * np.sin(lon) * dlat + np.cos(lat) * np.cos(lon) * dlon,
              -np.sin(lat) * np.cos(lon) * dlat - np.cos(lat) * np.sin(lon) * dlon,
              np.cos(lat) * dlat])
v /= np.linalg.norm(v)
np.sum(v*C)
```
The angle $A$ is
$$ \frac{A}{2\pi} = \frac{90\,km}{2 \pi \cdot 6371\,km} $$

```
A = 90/R
A

midpt_1 = np.cos(A) * C + R * np.sin(A) * v
np.linalg.norm(midpt_1 - C), np.dot(midpt_1, C) / R**2, np.cos(A)
```
To find the next corner, we move $\perp$ to $v$. That direction can be found by
$$ v \times P_1. $$
Let $v^{\perp}$ be the unit vector in this direction.
```
v_perp = np.cross(midpt_1, v) # == np.cross(C, v)
v_perp /= np.linalg.norm(v_perp)
v_perp
```
We will then move 92.5km from $P_1$ to the corner
$$ Q_1 = \cos(B) \cdot P_1 + R \cdot \sin(B) \cdot v^{\perp} $$
where
$$ \frac{B}{2\pi} = \frac{92.5\,km}{2\pi \cdot 6371\,km} $$

```
B = 92.5/6371
corners = [np.cos(B) * midpt_1 + R * np.sin(B) * v_perp]
corners.append(np.cos(B) * midpt_1 - R * np.sin(B) * v_perp)

v_perp = np.cross(midpt_1, v) # == np.cross(C, v)
v_perp /= np.linalg.norm(v_perp)
v_perp

midpt_2 = np.cos(A) * C - R * np.sin(A) * v
corners.append(np.cos(B) * midpt_2 + R * np.sin(B) * v_perp)
corners.append(np.cos(B) * midpt_2 - R * np.sin(B) * v_perp)
corners

[np.linalg.norm(corner) for corner in corners]
```
We can find another corner
$$ Q_2 = \cos(B) \cdot P_1 - R \cdot \sin(B) \cdot v^{\perp} $$
and similarly the other corners.

### Now convert back to lat lon

```
lat_degs = [np.arcsin(z_ / R) / (2 * np.pi) * 360 for x_, y_, z_ in corners]
lat_degs
lon_degs = [np.arctan2(x_ / R, y_ / R) / (2 * np.pi) * 360 for x_, y_, z_ in corners]
lon_degs

%matplotlib inline
import matplotlib.pyplot as plt
plt.scatter(lon_degs, lat_degs)
plt.scatter([lon_deg], [lat_deg])
```
### A representation of the scene that implements `contains`

```
def representation(center_lon,      # in radians
                   center_lat,      # in radians
                   instrument_tilt, # in degrees, rotation clockwise
                   len_lon=180,     # extent in km
                   len_lat=185,     # extent in km
                   R=6371):         # "radius" of earth

    tilt_deg = instrument_tilt * 2 * np.pi / 360
    x, y, z = (R * np.cos(center_lat) * np.sin(center_lon),
               R * np.cos(center_lat) * np.cos(center_lon),
               R * np.sin(center_lat))
    C = np.array([x,y,z]) # center of scene

    dlat, dlon = np.sin(-tilt_deg), np.cos(-tilt_deg)
    dir_lon = np.array([-np.sin(center_lat) * np.sin(center_lon) * dlat + np.cos(center_lat) * np.cos(center_lon) * dlon,
                        -np.sin(center_lat) * np.cos(center_lon) * dlat - np.cos(center_lat) * np.sin(center_lon) * dlon,
                        np.cos(center_lat) * dlat])
    dir_lon /= np.linalg.norm(dir_lon)

    A = len_lon / 2 / R
    midpt_1 = np.cos(A) * C + R * np.sin(A) * dir_lon
    dir_lat = np.cross(midpt_1, dir_lon)
    dir_lat /= np.linalg.norm(dir_lat)

    B = len_lat/ 2 / R
    corners = [np.cos(B) * midpt_1 + R * np.sin(B) * dir_lat]
    corners.append(np.cos(B) * midpt_1 - R * np.sin(B) * dir_lat)
    midpt_2 = np.cos(A) * C - R * np.sin(A) * dir_lon
    corners.append(np.cos(B) * midpt_2 + R * np.sin(B) * dir_lat)
    corners.append(np.cos(B) * midpt_2 - R * np.sin(B) * dir_lat)
    corners = np.array(corners)
    corners_lon_lat = np.array([(np.arctan2(x_ / R, y_ / R), np.arcsin(z_ / R)) for x_, y_, z_ in corners])

    # now work out halfspaces
    # these are the edge segments in lon/lat space
    supports = [corners_lon_lat[0]-corners_lon_lat[1],
                corners_lon_lat[0]-corners_lon_lat[2],
                corners_lon_lat[1]-corners_lon_lat[3],
                corners_lon_lat[2]-corners_lon_lat[3]]
    # normals to each edge segment
    normals = np.array([(s[1],-s[0]) for s in supports])
    pts = [corners_lon_lat[0], # a point within each edge
           corners_lon_lat[0],
           corners_lon_lat[1],
           corners_lon_lat[3]]
    bdry_values = np.array([np.sum(n * p) for n, p in zip(normals, pts)])
    center_values = [np.sum(n * [center_lon, center_lat]) for n in normals]
    center_signs = np.sign(center_values - bdry_values)

    def _check(normals, center_signs, bdry_values, lon_lat_vals):
        normal_mul = np.asarray(lon_lat_vals).dot(normals.T)
        values_ = normal_mul - bdry_values[None,:]
        signs_ = np.sign(values_) * center_signs[None,:]
        return np.squeeze(np.all(signs_ == 1, 1))

    _check = functools.partial(_check, normals, center_signs, bdry_values)

    return corners_lon_lat, _check, normals, bdry_values, center_signs
```
### What needs to be stored

- We need to store `normals`, `bdry_values` and `center_signs` for each scene.

```
corners, contains, normals, bdry_values, center_signs = representation(radians(-10.647337),
                                                                       radians(79.129883),
                                                                       49.27267632,
                                                                       len_lat=200, len_lon=200)
```
### How `contains` is determined

- The function can check several query points at once....
```
def _check(normals, center_signs, bdry_values, lon_lat_vals):
    normal_mul = np.asarray(lon_lat_vals).dot(normals.T)
    values_ = normal_mul - bdry_values[None,:]
    signs_ = np.sign(values_) * center_signs[None,:]
    return np.squeeze(np.all(signs_ == 1, 1))

import functools
contains = functools.partial(_check, normals, center_signs, bdry_values)
```
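The sign test in `_check` is the familiar point-in-convex-polygon test: a point is inside iff it lies on the same side of every edge as the center. A minimal 2D sketch with a hypothetical unit square shows the idea (toy normals and boundary values, unrelated to the Landsat data above):

```python
import numpy as np

# Edges of the unit square [0,1]x[0,1]: each is described by a normal n
# and a boundary value n·p for some point p on that edge.
normals = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
bdry_values = np.array([0.0, -1.0, 0.0, -1.0])
center = np.array([0.5, 0.5])
center_signs = np.sign(normals @ center - bdry_values)

def contains(pts):
    # Same sign comparison as _check: inside iff every edge agrees
    # with the sign the center produces.
    signs = np.sign(np.asarray(pts) @ normals.T - bdry_values) * center_signs
    return np.squeeze(np.all(signs == 1, axis=-1))

print(contains([[0.5, 0.5], [2.0, 0.5]]))  # [ True False]
```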
#### Removing words using a pre-defined list

```
noise_removal=['a','the','i','he','she', 'am','is', 'this']
type(noise_removal)
[c for c in noise_removal if c not in ['a'] ]
" ".join(noise_removal)

def f_noise_removal(text1):
    text=text1.split()
    cleaned_text= [c_text for c_text in text if c_text.lower() not in noise_removal]
    cleaned_text=' '.join(cleaned_text)
    return cleaned_text

t='This is a #sample text'
t.split()
f_noise_removal(t)
```
#### Removing words based on a pattern

```
import re

def def_remove_regex(pattern, text):
    # Remove every match of the pattern from the text
    for match in re.finditer(pattern, text):
        text = re.sub(re.escape(match.group().strip()), '', text)
    return text

def_remove_regex('#[\w]*',t)
```
#### Stemming and Lemmatization

```
from nltk.stem.wordnet import WordNetLemmatizer
lem = WordNetLemmatizer()
lem.lemmatize('playing','v')

from nltk.stem.porter import PorterStemmer
stem = PorterStemmer()
stem.stem('bravely')

from nltk.stem.lancaster import LancasterStemmer
lstem = LancasterStemmer()
lstem.stem('bravely')

from nltk.stem.snowball import SnowballStemmer
sbstem = SnowballStemmer("english")
sbstem.stem('bravely')
```
#### Object standardization

```
lookup_dict = {'awsm':'awesome', 'rt':'Retweet', 'dm':'direct message','luv' :'love'}

def _lookup_words(input_text):
    words = input_text.split()
    new_words = []
    for word in words:
        if word.lower() in lookup_dict:
            word = lookup_dict[word.lower()]
        new_words.append(word)
    new_text = ' '.join(new_words)
    return new_text

_lookup_words('She is awsm rt he')

lookup_dict = {'awsm':'awesome', 'rt':'Retweet', 'dm':'direct message','luv' :'love'}
new_words = []
new_words.append('there')
new_words
```
#### Escaping HTML characters

Data obtained from the web usually contains a lot of HTML entities such as &lt; &gt; &amp; which get embedded in the original data. It is thus necessary to get rid of these entities. One approach is to directly remove them by the use of specific regular expressions.
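For the regex-based approach, Python's standard library is enough; a minimal sketch that strips named and numeric entities (the pattern here is illustrative, not exhaustive):

```python
import re

tweet = "I luv my &lt;3 iphone &amp; you are awsm"
# Matches named entities (&lt; &amp;) and numeric ones (&#39;)
stripped = re.sub(r'&#?\w+;', '', tweet)
print(stripped)
```

Note that removal simply deletes the entity; the conversion approach described next preserves the character it stands for.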
Another approach is to use appropriate packages and modules (for example, Python's `html` module), which can convert these entities to the standard characters they represent. For example: &lt; is converted to "<" and &amp; is converted to "&"

```
original_tweet = "I luv my &lt;3 iphone &amp; you’re awsm apple. DisplayIsAwesome, sooo happppppy http://www.apple.com"

import html
tweet = html.unescape(original_tweet)
print(tweet)
```
#### Decoding data

This is the process of transforming information from complex symbols to simple and easier to understand characters. Text data may be subject to different forms of decoding like "Latin", "UTF8" etc. Therefore, for better analysis, it is necessary to keep the complete data in a standard encoding format. UTF-8 encoding is widely accepted and is recommended.

```
tweet = original_tweet.encode('utf-8','ignore')
tweet
```
#### Part of speech tagging

Benefits of POS tagging:
1. Word sense disambiguation
2. Improving word-based features
3. Normalization and Lemmatization
4. Efficient stopword removal

```
from nltk import word_tokenize, pos_tag
text= "I am learning Natural Language Processing on Analytics Vidhya"
tokens = word_tokenize(text)
pos_tokens = pos_tag(tokens)
pos_tokens
```
### Entity Extraction

#### 1. Named Entity Recognition (NER)

```
import nltk
doc = '''Andrew Yan-Tak Ng is a Chinese American computer scientist.
He is the former chief scientist at Baidu, where he led the company's
Artificial Intelligence Group. He is an adjunct professor (formerly associate
professor) at Stanford University. Ng is also the co-founder and chairman at
Coursera, an online education platform. Andrew was born in the UK in 1976.
His parents were both from Hong Kong.'''

tokens = nltk.word_tokenize(doc)
pos_tokens = nltk.pos_tag(tokens)
ne_chunks = nltk.ne_chunk(pos_tokens)

# extract all named entities
named_entities = []
for tagged_tree in ne_chunks:
    if hasattr(tagged_tree, 'label'):
        entity_name = ' '.join(c[0] for c in tagged_tree.leaves())
        entity_type = tagged_tree.label() # get NE category
        named_entities.append((entity_name, entity_type))
print(named_entities)

ne_chunks
```
#### Topic Modelling

```
nltk.ne_chunk(pos_tokens)
```
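Topic models such as LDA typically start from a document-term matrix (one row per document, one column per vocabulary term). A pure-Python sketch of that first step, with made-up toy documents:

```python
from collections import Counter

docs = ["machine learning is fun",
        "deep learning learning models",
        "cooking recipes and food"]

# Vocabulary: the sorted set of all tokens across documents
vocab = sorted({w for d in docs for w in d.split()})

# Document-term matrix: raw counts per document
counts = [Counter(d.split()) for d in docs]
dtm = [[c[w] for w in vocab] for c in counts]

print(dtm[1][vocab.index("learning")])  # 2
```

A topic model would then factor this matrix into document-topic and topic-term distributions.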
# Get shapefiles from OpenStreetMap with OSMnx

- [Overview of OSMnx](http://geoffboeing.com/2016/11/osmnx-python-street-networks/)
- [GitHub repo](https://github.com/gboeing/osmnx)
- [Examples, demos, tutorials](https://github.com/gboeing/osmnx-examples)
- [Documentation](https://osmnx.readthedocs.io/en/stable/)
- [Journal article/citation](http://geoffboeing.com/publications/osmnx-complex-street-networks/)

```
import osmnx as ox
%matplotlib inline
ox.config(log_file=True, log_console=True, use_cache=True)
```
## Get the shapefile for one city, project it, display it, and save it

```
# from some place name, create a GeoDataFrame containing the geometry of the place
city = ox.gdf_from_place('Walnut Creek, California, USA')
city

# save the retrieved data as a shapefile
ox.save_gdf_shapefile(city)

# project the geometry to the appropriate UTM zone (calculated automatically) then plot it
city = ox.project_gdf(city)
fig, ax = ox.plot_shape(city)
```
## Create a shapefile for multiple cities, project it, display it, and save it

```
# define a list of place names
place_names = ['Berkeley, California, USA',
               'Oakland, California, USA',
               'Piedmont, California, USA',
               'Emeryville, California, USA',
               'Alameda, Alameda County, CA, USA']

# create a GeoDataFrame with rows for each place in the list
east_bay = ox.gdf_from_places(place_names, gdf_name='east_bay_cities')
east_bay

# project the geometry to the appropriate UTM zone then plot it
east_bay = ox.project_gdf(east_bay)
fig, ax = ox.plot_shape(east_bay)

# save the retrieved and projected data as a shapefile
ox.save_gdf_shapefile(east_bay)
```
## You can also construct buffered spatial geometries

```
# pass in buffer_dist in meters
city_buffered = ox.gdf_from_place('Walnut Creek, California, USA', buffer_dist=250)
fig, ax = ox.plot_shape(city_buffered)

# you can buffer multiple places in a single query
east_bay_buffered = ox.gdf_from_places(place_names, gdf_name='east_bay_cities', buffer_dist=250)
fig, ax =
ox.plot_shape(east_bay_buffered, alpha=0.7)
```
## You can download boroughs, counties, states, or countries too

Notice the polygon geometries represent political boundaries, not physical/land boundaries.

```
gdf = ox.gdf_from_place('Manhattan Island, New York, New York, USA')
gdf = ox.project_gdf(gdf)
fig, ax = ox.plot_shape(gdf)

gdf = ox.gdf_from_place('Cook County, Illinois, United States')
gdf = ox.project_gdf(gdf)
fig, ax = ox.plot_shape(gdf)

gdf = ox.gdf_from_place('Iowa')
gdf = ox.project_gdf(gdf)
fig, ax = ox.plot_shape(gdf)

gdf = ox.gdf_from_places(['United Kingdom', 'Ireland'])
gdf = ox.project_gdf(gdf)
fig, ax = ox.plot_shape(gdf)
```
## Be careful to pass the right place name that OSM needs

Be specific and explicit, and sanity check the results. The function logs a warning if you get a point returned instead of a polygon.

In the first example below, OSM resolves 'Melbourne, Victoria, Australia' to a single point at the center of the city. In the second example below, OSM correctly resolves 'City of Melbourne, Victoria, Australia' to the entire city and returns its polygon geometry.

```
melbourne = ox.gdf_from_place('Melbourne, Victoria, Australia')
melbourne = ox.project_gdf(melbourne)
type(melbourne['geometry'].iloc[0])

melbourne = ox.gdf_from_place('City of Melbourne, Victoria, Australia')
melbourne = ox.project_gdf(melbourne)
fig, ax = ox.plot_shape(melbourne)
```
## Specify you want a *country* if it resolves to a *city* of the same name

OSM resolves 'Mexico' to Mexico City and returns a single point at the center of the city. Instead we have a couple of options:

1. We can pass a dict containing a structured query to specify that we want Mexico the country instead of Mexico the city.
2. We can also get multiple countries by passing a list of queries. These can be a mixture of strings and dicts.
```
mexico = ox.gdf_from_place('Mexico')
mexico = ox.project_gdf(mexico)
type(mexico['geometry'].iloc[0])

# instead of a string, you can pass a dict containing a structured query
mexico = ox.gdf_from_place({'country':'Mexico'})
mexico = ox.project_gdf(mexico)
fig, ax = ox.plot_shape(mexico)

# you can pass multiple queries with mixed types (dicts and strings)
mx_gt_tx = ox.gdf_from_places(queries=[{'country':'Mexico'}, 'Guatemala', {'state':'Texas'}])
mx_gt_tx = ox.project_gdf(mx_gt_tx)
fig, ax = ox.plot_shape(mx_gt_tx)
```
## You can request a specific result number

By default, we only request 1 result from OSM. But, we can pass an optional `which_result` parameter to query OSM for *n* results and then process/return the *n*th.

If you query 'France', OSM returns the country with all its overseas territories as result #1 and European France alone as result #2. Querying for 'France' returns just the first result (and thus all of France's overseas territories), but passing `which_result=2` instead retrieves the top 2 results from OSM and processes/returns the 2nd one (which is European France). You could have also done this to retrieve Mexico the country instead of Mexico City above.

```
france = ox.gdf_from_place('France')
france = ox.project_gdf(france)
fig, ax = ox.plot_shape(france)

france = ox.gdf_from_place('France', which_result=2)
france = ox.project_gdf(france)
fig, ax = ox.plot_shape(france)
```
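`ox.project_gdf` chooses the UTM zone automatically from the geometry. The zone number itself follows a simple longitude rule, sketched below for illustration (this is the standard 6-degree rule, not osmnx's actual implementation, and it ignores the Norway/Svalbard exceptions):

```python
def utm_zone(lon):
    # UTM zones are 6 degrees wide, numbered 1..60 starting at 180°W.
    return int((lon + 180) // 6) + 1

print(utm_zone(-122.27))  # Berkeley, California -> zone 10
print(utm_zone(144.96))   # Melbourne, Australia -> zone 55
```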
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mravanba/comp551-notebooks/blob/master/CurseOfDimensionality.ipynb)

# Curse of Dimensionality

Some learning algorithms use a distance function to measure the similarity or dissimilarity of instances. Having many features corresponds to the instance "living" in a high-dimensional space. High-dimensional data introduce difficulties, the so-called curse of dimensionality. However, by making some strong assumptions about the data, machine learning methods can overcome these difficulties.

First, let's see the problems associated with high-dimensional data:

1. To "fill" a high-dimensional space we need exponentially more samples.
   - in one dimension, it is enough to have 9 labeled examples in the range (0,10), to make sure any new observation will be at a distance of `1` to our training data.
   - in two dimensions, we need more than $9 \times 9$ labeled examples to satisfy the same condition.
   - the number of required samples is $\mathcal{O}(9^D)$, where $D$ is the dimension.
2. In high dimensions, it becomes harder to identify close neighbours.

Let us demonstrate this with an example. We will generate random instances in various dimensions and measure the pairwise distance between these examples.

```
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.core.debugger import set_trace

np.random.seed(1234)
N = 1000 #number of randomly generated points
Ds = [1, 2, 8, 32, 128, 784] #array for different dimensions we'll consider
fig, axes = plt.subplots(ncols=len(Ds), nrows=1, constrained_layout=True, figsize=(len(Ds)*3, 3))
for i, D in enumerate(Ds):
    #generate random samples in a D dimensional space
    x = np.random.rand(N,D)
    #next compute the pairwise euclidean distance between the generated points (remember broadcasting?)
    dist = np.sqrt(np.sum((x[None,:,:] - x[:,None,:])**2, -1))
    axes[i].hist(dist.ravel(), bins=100) #to plot a histogram
    axes[i].set_xlabel("pairwise distance")
    axes[i].set_ylabel("frequency")
    axes[i].set_title(f'{D} dimensions')
plt.show()
```
It is evident from this plot that, very unintuitively, in high dimensions most points in the space have similar distances from each other! If our datasets had a similar behaviour, this could undermine learning in high dimensions: if all instances are more or less similar to each other, on what basis can we label them differently?

Let's create similar plots for the `MNIST` dataset. For each dimension `D` we randomly select a subset of dimensions from the `28 x 28 = 784` dimensions, and use them to measure the pairwise distance between examples.

```
#load the MNIST dataset
from sklearn.datasets import fetch_openml
x_org, y = fetch_openml('mnist_784', version=1, return_X_y=True)
```
Let's see what the data looks like

```
from mpl_toolkits.axes_grid1 import ImageGrid

def plot_digits(data):
    num_plots = data.shape[0]
    fig = plt.figure(figsize=(num_plots, 10.*num_plots))
    grid = ImageGrid(fig, 111, nrows_ncols=(1, num_plots), axes_pad=0.1) #makes a grid of images
    for i in range(num_plots):
        grid[i].imshow(data[i].reshape((28,28)))
    plt.show()

plot_digits(x_org[:20])
```
Now let's do the same measurement of pairwise distance, this time between instances in our dataset

```
N = 1000 # number of examples to use (for faster computation)
x = np.reshape(x_org[:N], (N, -1)) #this flattens the images
x = x / np.max(x) #normalize
Ds = [32, 128, 784]
fig, axes = plt.subplots(ncols=len(Ds), nrows=1, constrained_layout=True, figsize=(len(Ds)*3, 3))
dim_inds = np.random.permutation(x.shape[1])
for i, D in enumerate(Ds):
    #randomly slice D dimensions from the data
    x_D = x[:,dim_inds[:D]]
    #compute the pairwise euclidean distance between the N data samples
    dist = np.sqrt(np.sum((x_D[None,:,:] - x_D[:,None,:])**2, -1))
    axes[i].hist(dist.ravel(), bins=100)
    axes[i].set_xlabel("pairwise distance")
    axes[i].set_ylabel("frequency")
    axes[i].set_title(f'{D} dimensions')
plt.show()
```
This difference between real-world data and random data is explained by the **manifold hypothesis**. This hypothesis postulates that real-world data often reside close to a low-dimensional *manifold* embedded in the high-dimensional space. Dimensionality reduction methods (aka manifold learning) try to estimate these low-dimensional encodings of the data. However, the point we wanted to show here is that because of this special behaviour, methods such as KNN continue to work with high-dimensional data.

## Another demonstration

We put one hypercube inside another hypercube of unit side length, such that they have one common corner. We then place a large number of random points inside the larger hypercube. Here again we change the dimension of the hypercubes and plot the portion of the points in the large hypercube that are also in the small hypercube (think of this as the portion of their volume) as we change the length of the side of the smaller hypercube.

```
Ds = [1, 2, 8, 128, 512, 784]
N = 10000
grid_size = 1000
#generate grid points for the side lengths
grid_points = np.linspace(0,1,grid_size, endpoint=True)
result = np.zeros((len(Ds), grid_size))
for j, D in enumerate(Ds):
    x = np.random.rand(N, D)
    for i, g in enumerate(grid_points):
        result[j,i] = np.sum(np.all(x < g, axis=1)) / N #fraction of samples that fall in the cube of side g
    plt.plot(grid_points, result[j,:], label=f'D={D}')
plt.xlabel('side of the cube')
plt.ylabel('portion of the points inside')
plt.legend()
plt.show()
```
That is, to get 1% of the points inside the inner cube (for $D=784$) we need a side of length 0.993!
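The empirical value closely matches the closed form: a sub-cube of side $s$ covers a fraction $s^D$ of the unit hypercube's volume, so 1% occupancy needs $s = 0.01^{1/D}$. A quick check:

```python
# Fraction of a unit hypercube covered by a sub-cube of side s is s**D,
# so the side needed to enclose 1% of the volume is 0.01**(1/D).
for D in (1, 2, 128, 784):
    print(D, round(0.01 ** (1 / D), 3))
```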
This is another expression of the idea that in high dimensions, *the mass is mostly at the corners!*

```
#get the length of the sides
side_length = np.argmin(np.abs(result - .01), axis=1)/grid_size
plt.plot(Ds, side_length)
plt.xlabel('dimension')
plt.ylabel('side of the sub-cube with 1% of the data')

print(f'to get 1% of points inside the inner cube for D=784 we need the side of the cube to be of length {side_length[-1]}')
```
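The concentration effect from the first set of histograms can also be summarised by a single number: the relative spread std/mean of the pairwise distances, which shrinks roughly like $1/\sqrt{D}$ for uniform random points. A sketch using the algebraic identity $\|a-b\|^2 = \|a\|^2 + \|b\|^2 - 2\,a{\cdot}b$ to avoid the large broadcast array:

```python
import numpy as np

rng = np.random.default_rng(0)

def rel_spread(D, N=400):
    x = rng.random((N, D))
    sq = (x ** 2).sum(axis=1)
    # pairwise squared distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2 * x @ x.T, 0)
    d = np.sqrt(d2[np.triu_indices(N, k=1)])
    return d.std() / d.mean()

for D in (2, 128, 784):
    print(D, rel_spread(D))  # the ratio shrinks as D grows
```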
# Using Interrupts and asyncio for Buttons and Switches

This notebook provides a simple example of using asyncio to interact asynchronously with multiple input devices. A task is created for each input device and coroutines are used to process the results. To demonstrate, we recreate the flashing-LEDs example from the getting started notebook, but using interrupts to avoid polling the GPIO devices. The aim is to have holding a button result in the corresponding LED flashing.

## Initialising the Environment

First we import and instantiate all required classes to interact with the buttons, switches and LEDs, and ensure the base overlay is loaded.

```
from pynq import Overlay, PL
from pynq.board import LED, Switch, Button

Overlay('base.bit').download()
buttons = [Button(i) for i in range(4)]
leds = [LED(i) for i in range(4)]
switches = [Switch(i) for i in range(2)]
```
## Define the flash LED task

The next step is to create a task that waits for the button to be pressed and flashes the LED until the button is released. The `while True` loop ensures that the coroutine keeps running until cancelled, so that multiple presses of the same button can be handled.

```
import asyncio

@asyncio.coroutine
def flash_led(num):
    while True:
        yield from buttons[num].wait_for_value_async(1)
        while buttons[num].read():
            leds[num].toggle()
            yield from asyncio.sleep(0.1)
        leds[num].off()
```
## Create the task

As there are four buttons we want to check, we create four tasks. The function `asyncio.ensure_future` is used to convert the coroutine to a task and schedule it in the event loop. The tasks are stored in an array so they can be referred to later when we want to cancel them.

```
tasks = [asyncio.ensure_future(flash_led(i)) for i in range(4)]
```
## Monitoring the CPU Usage

One of the advantages of interrupt-based I/O is minimised CPU usage while waiting for events.
To see how CPU usage is impacted by the flashing-LED tasks, we create another task that prints out the current CPU utilisation every 3 seconds.

```
import psutil

@asyncio.coroutine
def print_cpu_usage():
    # Calculate the CPU utilisation by the amount of idle time
    # each CPU has had in three second intervals
    last_idle = [c.idle for c in psutil.cpu_times(percpu=True)]
    while True:
        yield from asyncio.sleep(3)
        next_idle = [c.idle for c in psutil.cpu_times(percpu=True)]
        usage = [(1-(c2-c1)/3) * 100 for c1,c2 in zip(last_idle, next_idle)]
        print("CPU Usage: {0:3.2f}%, {1:3.2f}%".format(*usage))
        last_idle = next_idle

tasks.append(asyncio.ensure_future(print_cpu_usage()))
```
## Run the event loop

All of the blocking wait_for commands will run the event loop until the condition is met. All that is needed is to call the blocking `wait_for_value` method on the switch we are using as the termination condition. While waiting for switch 0 to go high, users can press any push button on the board to flash the corresponding LED. While this loop is running, try opening a terminal and running `top` to see that python is consuming no CPU cycles while waiting for peripherals.

As this code runs until switch 0 is high, make sure it is low before running the example.

```
if switches[0].read():
    print("Please set switch 0 low before running")
else:
    switches[0].wait_for_value(1)
```
## Clean up

Even though the event loop has stopped running, the tasks are still active and will run again when the event loop is next used. To avoid this, the tasks should be cancelled when they are no longer needed.

```
[t.cancel() for t in tasks]
```
Now if we re-run the event loop, nothing will happen when we press the buttons. The process will block until the switch is set back down to the low position.

```
switches[0].wait_for_value(0)
```
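The same create-then-cancel pattern can be tried without the board, using plain `asyncio` (modern `async`/`await` syntax rather than the `@asyncio.coroutine` style above; here an `asyncio.Event` stands in for a button press, and the names are illustrative):

```python
import asyncio

async def flash(num, event, log):
    # Stand-in for flash_led: wait for a "press", record it, then wait again.
    while True:
        await event.wait()
        log.append(num)
        event.clear()

async def main():
    log = []
    events = [asyncio.Event() for _ in range(4)]
    tasks = [asyncio.ensure_future(flash(i, events[i], log)) for i in range(4)]
    events[2].set()              # simulate pressing button 2
    await asyncio.sleep(0.01)    # give the event loop a chance to run the task
    for t in tasks:
        t.cancel()               # same clean-up step as in the notebook
    return log

log = asyncio.run(main())
print(log)  # [2]
```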
```
# If you don't have a local copy of sidh-optimizer, run
#
#!pip install https://github.com/sidh-crypto/sidh-optimizer.git

from sidh_optimizer.formulas import *
sympy.init_printing()
```
## A quick tour

The `formulas` module contains symbolic representations of elliptic curve formulas. These representations can be visualized as `sympy` expressions, but also know about their computational cost.

We start with the classic Edwards addition formula. Here an Edwards curve is a curve of the form

$$E(x^2 + y^2) = E + Dx^2y^2.$$

Dividing both sides by $E$ gives the usual definition of Edwards curves. Here are the addition formulas in projective coordinates for this generalized Edwards curve.

```
X, Y, Z = ed_add(Var('X1'), Var('Y1'), Var('Z1'),
                 Var('X2'), Var('Y2'), Var('Z2'),
                 Var('D'), Var('E'))
X, Y, Z
```
The formulas know about their computational cost.

```
cost(X, Y, Z)
```
Note that the above function shares common subexpressions. If we compute the costs individually, we obtain a higher operation count.

```
X.cost(), Y.cost(), Z.cost()
```
We can have a better look at the formulas by using `sympy`

```
[f.formula.simplify() for f in (X,Y,Z)]
```
This might be a little hard to check, so let's fix some variables so that we obtain the classic affine formula

```
X, Y, Z = ed_add(Var('X1'), Var('Y1'), Const(1),
                 Var('X2'), Var('Y2'), Const(1),
                 Var('d'), Const(1))
[f.formula.simplify() for f in (X,Y,Z)]
```
We have the same for doubling (the formula does not depend on $D,E$).

```
X, Y, Z = ed_double(Var('X'), Var('Y'), Var('Z'))
[f.formula.simplify() for f in (X,Y,Z)]
```
## Isogenies

Let's move to the interesting stuff.
The isogeny formula from Section 4.3 of https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/edwardsisogenies.pdf reads $$ψ(x,y) = \left(x\prod_{i=1}^s\frac{β_i^2 x^2 - α_i^2y^2}{β_i^2 - d^2α_i^2β_i^4x^2y^2}, y\prod_{i=1}^s\frac{β_i^2 y^2 - α_i^2x^2}{β_i^2 - d^2α_i^2β_i^4x^2y^2}\right)$$ where $(±α_i,β_i)$ are the coordinates of the kernel of $ψ$, and $d=D/E$. Here is a projective version (both for the point coordinates and the curve invariant) of the formula. We separate the computations that depend upon $x,y$ (the *isogeny evaluation*) from those that are independent (the *next curve*); the function returns - The invariant of the image curve; - Constants independent of $x,y$ needed for the isogeny evaluation; - The evaluation formula. ``` curve, consts, isog = edwards(3, Var('D'), Var('E'), Var('α'), Var('β'), Var('γ'), Var('X'), Var('Y'), Var('Z')) [c.formula for c in curve] [[c.formula for c in cs] for cs in consts] [f.formula.simplify() for f in isog] ``` In the latter formula, $A_0, B_0, D_0$ represent the pre-computed values stored in `consts`. Let's substitute them and check that the formula is correct. ``` isog2 = [f.formula.simplify().subs({ 'A0':consts[0][0].formula, 'B0':consts[0][1].formula, 'D0':consts[0][2].formula, }) for f in isog] isog2 ``` These formulas look correct, but to be even more convinced, let's substitute the curve equation $$X^2+Y^2-Z^2 = \frac{D X^2 Y^2}{E Z^2},$$ divide through by $Eγ^6Z^3$, and replace affine coordinates. ``` X, Y, Z, a, b, c, D, E = sympy.symbols('X Y Z α β γ D E') [(i.subs({X**2+Y**2-Z**2: D*X**2*Y**2/(E*Z**2)}) / (E*c**6*Z**3)).expand().subs({ a/c: a, b/c: b, X/Z: X, Y/Z: Y, D/E: D, }).simplify() for i in isog2] ``` Let's now move to the costs.
The *next curve* computation contains both the curve invariant and the pre-computed constants ``` cost(*curve, *sum(consts, ())) ``` While the *isogeny evaluation* only comprises the latter formulas ``` cost(*isog) ``` Obviously, these costs must be taken as upper bounds on the cost of isogeny evaluation. There are indeed (much) better formulas known for 3-isogenies of Edwards curves. Finally, let's find out how the costs increase with the degree. ``` def get_costs(ell): E, c, I = edwards(ell) pre, consts, isog = cost(*E, *sum(c, ())), cost(*sum(c,())), cost(*I) return consts, pre-consts, isog prev = [Cost()]*3 for ell in range(3, 20, 2): next = get_costs(ell) print([n-p for n, p in zip(next, prev)]) prev = next ``` We see that the cost of the *next curve* operation for $ℓ=2s+1$ is $(18s-13)M + (9s-4)S$ plus a (small) variable number of multiplications and squarings coming from the computation of the next invariant (the variability comes from the way $d^ℓ$ is computed). The cost for the *isogeny evaluation* is $8sM + 3S$. ## Montgomery curves Using the formulas from https://eprint.iacr.org/2017/504.pdf, https://eprint.iacr.org/2017/1198.pdf.
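As an aside, the closed-form counts quoted above for $ℓ = 2s+1$ can be tabulated directly. A small pure-Python sketch (the formulas are the empirical upper bounds read off the loop output above, not independently derived):

```python
def next_curve_cost(ell):
    # (18s - 13) multiplications and (9s - 4) squarings for ell = 2s + 1,
    # excluding the variable cost of computing the next invariant.
    s = (ell - 1) // 2
    return (18 * s - 13, 9 * s - 4)

def isogeny_eval_cost(ell):
    # 8s multiplications plus 3 squarings per point evaluation.
    s = (ell - 1) // 2
    return (8 * s, 3)

for ell in (3, 5, 7, 19):
    print(ell, next_curve_cost(ell), isogeny_eval_cost(ell))
```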
github_jupyter
## Load the breast cancer dataset ``` from sklearn.datasets import load_breast_cancer # Load the breast cancer dataset X_dataset, y_dataset = load_breast_cancer(return_X_y=True) ``` ## Preprocess the data ``` from sklearn.preprocessing import MinMaxScaler from sklearn.model_selection import train_test_split # Split X_dataset into X_train and X_test, # and y_dataset into y_train and y_test X_train, X_test, y_train, y_test = train_test_split( X_dataset, y_dataset, test_size=0.2, random_state=42) # Scale the data to the range 0-1 scaler = MinMaxScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.transform(X_test) # Convert the labels to one-hot format from tensorflow.python import keras y_train = keras.utils.to_categorical(y_train, 2) y_test = keras.utils.to_categorical(y_test, 2) ``` ## Build a model with Keras ``` from tensorflow.python.keras.models import Sequential # Stack layers linearly with the Sequential model model = Sequential() from tensorflow.python.keras.layers import Dense model.add(Dense(units=4, activation='relu', input_dim=30)) model.add(Dense(units=4, activation='relu')) model.add(Dense(units=2, activation='softmax')) model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(), metrics=['accuracy']) ``` ## Train the model ``` model.fit(X_train, y_train, epochs=50, batch_size=32) ``` ## Compute the accuracy ``` # Compute the accuracy score = model.evaluate(X_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) ``` ## Load the Fashion-MNIST data ``` try: # up to tensorflow v1.8 from tensorflow.python.keras._impl.keras.datasets import fashion_mnist except: # tensorflow v1.9 and later from tensorflow.python.keras.datasets import fashion_mnist (X_train, y_train), (X_test, y_test) = fashion_mnist.load_data() ``` ## Visualize the data ``` import matplotlib.pyplot as plt plt.axis('off') plt.set_cmap('gray_r') plt.imshow(X_train[0]) ``` ## Preprocess the data ``` from tensorflow.python import keras # Convert the labels to one-hot format y_train = keras.utils.to_categorical(y_train, 10) y_test = keras.utils.to_categorical(y_test, 10) # Reshape to 28x28 pixels x 1 channel (grayscale) X_train = X_train.reshape(X_train.shape[0],
28, 28, 1) X_test = X_test.reshape(X_test.shape[0], 28, 28, 1) # Convert pixel values from the 0-255 range to the 0-1 range X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train /= 255 X_test /= 255 print('X_train shape:', X_train.shape) print(X_train.shape[0], 'train samples') print(X_test.shape[0], 'test samples') ``` ## Build and train a CNN Note: training takes a very long time without a GPU ``` from tensorflow.python.keras.models import Sequential from tensorflow.python.keras.layers import Dense, Dropout, Flatten from tensorflow.python.keras.layers import Conv2D, MaxPooling2D model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1))) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(64, kernel_size=(3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(1024, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(10, activation='softmax')) model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(), metrics=['accuracy']) # Train model.fit(X_train, y_train, batch_size=128, epochs=12, verbose=1, validation_data=(X_test, y_test)) ``` ## Compute the accuracy ``` # Compute the accuracy score = model.evaluate(X_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) ```
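The shapes flowing through the CNN above can be checked by hand: Keras' default 'valid', stride-1 convolutions shrink each spatial side by `kernel - 1`, and 2×2 max-pooling halves it. A quick sketch of the size arithmetic:

```python
def conv_out(size, kernel):
    # 'valid' convolution, stride 1
    return size - kernel + 1

def pool_out(size, pool):
    # non-overlapping pooling
    return size // pool

s = 28
s = conv_out(s, 3)  # Conv2D(32, 3x3) -> 26
s = pool_out(s, 2)  # MaxPooling2D    -> 13
s = conv_out(s, 3)  # Conv2D(64, 3x3) -> 11
s = pool_out(s, 2)  # MaxPooling2D    -> 5
flat = s * s * 64   # Flatten: 5 * 5 * 64 = 1600
print(s, flat)
```

So the Flatten layer feeds 1600 features into the Dense(1024) layer.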
github_jupyter
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns #from seaborn.linearmodels import corrplot,symmatplot from sklearn.linear_model import LinearRegression from sklearn.model_selection import KFold from sklearn.metrics import mean_squared_error, r2_score from sklearn.preprocessing import PolynomialFeatures from sklearn.pipeline import Pipeline DATA_DIR = '../data/' df_population = pd.read_csv(DATA_DIR + 'df_population_census.csv') #df_growth = df_population.groupby(['Year']).sum() #pop2016 = (df_growth['Year'] == 2016).sum() pop2000 = df_population.loc[df_population['Year'] == 2000, 'Population'].sum() pop2010 = df_population.loc[df_population['Year'] == 2010, 'Population'].sum() pop2016 = df_population.loc[df_population['Year'] == 2016, 'Population'].sum() growth2016 = pop2016/pop2010 growth2010 = pop2010/pop2000 print (growth2016,growth2010) df_2016 = df_population[df_population['Year'] == 2016] Year = [] Population = [] Block_Group = [] Population_x = [] for index, row in df_2016.iterrows(): Year.append(2010) Block_Group.append(row['Block_Group']) Population_x.append(row['Population']) Population.append(df_population.loc[(df_population['Year'] == 2010) & (df_population['Block_Group'] == row['Block_Group']), 'Population'].sum()) # create new dataframe for 2010 df_2010 = pd.DataFrame() df_2010['Block_Group'] = Block_Group df_2010['Population'] = Population df_2010['Year'] = Year df_2010['Population_x'] = Population_x df_2010['Population'] = np.where(df_2010['Population'] == 0, df_2010['Population_x']/growth2016, df_2010['Population']) df_2010['Population'] = df_2010['Population'] .astype(int) df_2010 = df_2010.drop(columns=['Population_x']) Year = [] Population = [] Block_Group = [] Population_x = [] for index, row in df_2010.iterrows(): Year.append(2000) Block_Group.append(row['Block_Group']) Population_x.append(row['Population']) Population.append(df_population.loc[(df_population['Year'] == 2000)
& (df_population['Block_Group'] == row['Block_Group']), 'Population'].sum()) # create new dataframe for 2000 df_2000 = pd.DataFrame() df_2000['Block_Group'] = Block_Group df_2000['Population'] = Population df_2000['Year'] = Year df_2000['Population_x'] = Population_x df_2000['Population'] = np.where(df_2000['Population'] == 0, df_2000['Population_x']/growth2010, df_2000['Population']) df_2000['Population'] = df_2000['Population'] .astype(int) df_2000 = df_2000.drop(columns=['Population_x']) print (df_2016.head()) print (df_2010.head()) print (df_2000.head()) df_population = pd.concat([df_2016,df_2010,df_2000]) # Adjust ACS totals to match official census population df_population['Population'] = np.where(df_population['Year'] < 2010, df_population['Population'] * 1.09, df_population['Population']) df_population['Population'] = np.where(df_population['Year'] == 2010, df_population['Population'] * .9940, df_population['Population']) df_population['Population'] = np.where(df_population['Year'] == 2016, df_population['Population'] * 1.0203, df_population['Population']) print (df_population.head()) # If there is no year, calculate the average year Years = [] for i in range(21): Years.append(int(2000 + i)) df_years = pd.DataFrame({'Year':Years}) #print (df_years) Geographies = set(df_population["Block_Group"].tolist()) Block_Group = [] Population = [] Year = [] MSE = [] Status = [] df_predictions_full = pd.DataFrame() for geography in Geographies: df_data = df_population[df_population["Block_Group"] == geography] #print (df_data) for index, row in df_data.iterrows(): Block_Group.append(geography) Year.append(row["Year"]) Population.append(row["Population"]) Status.append("actual") if row["Year"] == 2016: X = df_data[["Year"]] y = df_data['Population'] # Instantiate the model #model = LinearRegression() model = Pipeline([('poly', PolynomialFeatures(degree=3)), ('linear', LinearRegression(fit_intercept=False))]) # Fit model to features and target model.fit(X,y) predictionsX 
= model.predict(X) mse = mean_squared_error(y, predictionsX) # Make predictions pred = df_years[["Year"]] predictions = model.predict(df_years) for i, name in enumerate(predictions): Block_Group.append(geography) Year.append(Years[i]) Population.append(predictions[i]) Status.append("prediction") df_predictions_full['Block_Group'] = Block_Group df_predictions_full['Year'] = Year df_predictions_full['Population'] = Population df_predictions_full['Status'] = Status # Manually fudge predictions to match State estimates #2011 0.9757 0.9722 0.9735 0.9857 0.9999 1.0161 1.0163 df_predictions_full['Population'] = np.where(df_predictions_full['Year'] == 2010, df_predictions_full['Population'] * 1.000, df_predictions_full['Population']) df_predictions_full['Population'] = np.where(df_predictions_full['Year'] == 2011, df_predictions_full['Population'] * 0.9874, df_predictions_full['Population']) df_predictions_full['Population'] = np.where(df_predictions_full['Year'] == 2012, df_predictions_full['Population'] * 0.9757, df_predictions_full['Population']) df_predictions_full['Population'] = np.where(df_predictions_full['Year'] == 2013, df_predictions_full['Population'] * 0.9722, df_predictions_full['Population']) df_predictions_full['Population'] = np.where(df_predictions_full['Year'] == 2014, df_predictions_full['Population'] * 0.9735, df_predictions_full['Population']) df_predictions_full['Population'] = np.where(df_predictions_full['Year'] == 2015, df_predictions_full['Population'] * 0.9857, df_predictions_full['Population']) df_predictions_full['Population'] = np.where(df_predictions_full['Year'] == 2016, df_predictions_full['Population'] * 1.0, df_predictions_full['Population']) df_predictions_full['Population'] = np.where(df_predictions_full['Year'] == 2017, df_predictions_full['Population'] * 1.0161, df_predictions_full['Population']) df_predictions_full['Population'] = np.where(df_predictions_full['Year'] == 2018, df_predictions_full['Population'] * 1.0163, 
df_predictions_full['Population']) df_predictions_full['Population'] = np.where(df_predictions_full['Year'] == 2019, df_predictions_full['Population'] * 1.0163, df_predictions_full['Population']) df_predictions_full['Population'] = np.where(df_predictions_full['Year'] == 2020, df_predictions_full['Population'] * 1.0163, df_predictions_full['Population']) df_predictions_full['Population'] = df_predictions_full['Population'] .astype(int) #print (df_predictions_full) df_predictions_full.to_csv(DATA_DIR + 'Population_Predictions.csv', mode='w', header=True, index=False) pop2000 = df_predictions_full.loc[(df_predictions_full['Year'] == 2000) & (df_predictions_full['Status'] == "prediction"), 'Population'].sum() pop2018 = df_predictions_full.loc[(df_predictions_full['Year'] == 2018) & (df_predictions_full['Status'] == "prediction"), 'Population'].sum() print (pop2018/pop2000) # Calculate mse #mse = mean_squared_error(y, predictions) #variance = r2_score(y, predictions) #df_population['predictions'] = predictions #print (df_population) df_predictions_full.to_csv(DATA_DIR + 'Population_Predictions.csv', mode='w', header=True, index=False) print (df_predictions_full) ```
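The long chain of `np.where` calls above rescales each year's predictions by a fixed factor; the same mapping can be expressed as a single dictionary lookup. A sketch (factors copied from the code above; plain Python, applied per value rather than per column):

```python
# Adjustment factors by year, mirroring the np.where cascade above.
ADJUST = {2011: 0.9874, 2012: 0.9757, 2013: 0.9722, 2014: 0.9735,
          2015: 0.9857, 2017: 1.0161, 2018: 1.0163, 2019: 1.0163, 2020: 1.0163}

def adjust_population(year, population):
    # Years absent from the table (e.g. 2010 and 2016, which use factor 1.0)
    # are left unchanged.
    return population * ADJUST.get(year, 1.0)

print(adjust_population(2012, 1000))
print(adjust_population(2016, 1000))
```

With pandas available, the same idea is a one-liner along the lines of `df['Population'] * df['Year'].map(ADJUST).fillna(1.0)` (illustrative, untested here), which avoids nine full passes over the column.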
github_jupyter
# Exploring Quantum Chemistry with GDB1k Most of the tutorials we've walked you through so far have focused on applications to the drug discovery realm, but DeepChem's tool suite works for molecular design problems generally. In this tutorial, we're going to walk through an example of how to train a simple machine learning model for the task of predicting the atomization energy of a molecule. (Remember that the atomization energy is the energy required to form 1 mol of gaseous atoms from 1 mol of the molecule in its standard state under standard conditions.) ## Colab This tutorial and the rest in this sequence can be done in Google Colab. If you'd like to open this notebook in Colab, you can use the following link. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepchem/deepchem/blob/master/examples/tutorials/Exploring_Quantum_Chemistry_with_GDB1k.ipynb) ``` !pip install --pre deepchem import deepchem deepchem.__version__ ``` With our setup in place, let's do a few standard imports to get the ball rolling. ``` import deepchem as dc from sklearn.ensemble import RandomForestRegressor from sklearn.kernel_ridge import KernelRidge ``` The next step is to load our dataset. We're using a small dataset we've prepared that's pulled out of the larger GDB benchmarks. The dataset contains the atomization energies for 1K small molecules. ``` tasks = ["atomization_energy"] dataset_file = "../../datasets/gdb1k.sdf" smiles_field = "smiles" mol_field = "mol" ``` We now need a way to transform molecules that is useful for prediction of atomization energy. This representation draws on foundational work [1] that represents a molecule's 3D electrostatic structure as a 2D matrix $C$ of distances scaled by charges, where the $ij$-th element is given by the following expression.
$C_{ij} = \frac{q_i q_j}{r_{ij}^2}$ If you're observing carefully, you might ask, wait doesn't this mean that molecules with different numbers of atoms generate matrices of different sizes? In practice the trick to get around this is that the matrices are "zero-padded." That is, if you're making coulomb matrices for a set of molecules, you pick a maximum number of atoms $N$, make the matrices $N\times N$ and set to zero all the extra entries for this molecule. (There's a couple extra tricks that are done under the hood beyond this. Check out reference [1] or read the source code in DeepChem!) DeepChem has a built in featurization class `dc.feat.CoulombMatrixEig` that can generate these featurizations for you. ``` featurizer = dc.feat.CoulombMatrixEig(23, remove_hydrogens=False) ``` Note that in this case, we set the maximum number of atoms to $N = 23$. Let's now load our dataset file into DeepChem. As in the previous tutorials, we use a `Loader` class, in particular `dc.data.SDFLoader` to load our `.sdf` file into DeepChem. The following snippet shows how we do this: ``` loader = dc.data.SDFLoader( tasks=["atomization_energy"], featurizer=featurizer) dataset = loader.create_dataset(dataset_file) ``` For the purposes of this tutorial, we're going to do a random split of the dataset into training, validation, and test. In general, this split is weak and will considerably overestimate the accuracy of our models, but for now in this simple tutorial isn't a bad place to get started. ``` random_splitter = dc.splits.RandomSplitter() train_dataset, valid_dataset, test_dataset = random_splitter.train_valid_test_split(dataset) ``` One issue that Coulomb matrix featurizations have is that the range of entries in the matrix $C$ can be large. The charge $q_1q_2/r^2$ term can range very widely. In general, a wide range of values for inputs can throw off learning for the neural network. 
For this, a common fix is to normalize the input values so that they fall into a more standard range. Recall that the normalization transform applies to each feature $X_i$ of datapoint $X$ $\hat{X_i} = \frac{X_i - \mu_i}{\sigma_i}$ where $\mu_i$ and $\sigma_i$ are the mean and standard deviation of the $i$-th feature. This transformation enables the learning to proceed smoothly. A second point is that the atomization energies also fall across a wide range. So we apply an analogous normalization transformation to the output to scale the energies better. We use DeepChem's transformation API to make this happen: ``` transformers = [ dc.trans.NormalizationTransformer(transform_X=True, dataset=train_dataset), dc.trans.NormalizationTransformer(transform_y=True, dataset=train_dataset)] for dataset in [train_dataset, valid_dataset, test_dataset]: for transformer in transformers: dataset = transformer.transform(dataset) ``` Now that we have the data cleanly transformed, let's do some simple machine learning. We'll start by constructing a random forest on top of the data. We'll use DeepChem's hyperparameter tuning module to do this. ``` def rf_model_builder(model_dir, **model_params): sklearn_model = RandomForestRegressor(**model_params) return dc.models.SklearnModel(sklearn_model, model_dir) params_dict = { "n_estimators": [10, 100], "max_features": ["auto", "sqrt", "log2", None], } metric = dc.metrics.Metric(dc.metrics.mean_absolute_error) optimizer = dc.hyper.GridHyperparamOpt(rf_model_builder) best_rf, best_rf_hyperparams, all_rf_results = optimizer.hyperparam_search( params_dict, train_dataset, valid_dataset, output_transformers=transformers, metric=metric, use_max=False) for key, value in all_rf_results.items(): print(f'{key}: {value}') print('Best hyperparams:', best_rf_hyperparams) ``` Let's build one more model, a kernel ridge regression, on top of this raw data.
``` def krr_model_builder(model_dir, **model_params): sklearn_model = KernelRidge(**model_params) return dc.models.SklearnModel(sklearn_model, model_dir) params_dict = { "kernel": ["laplacian"], "alpha": [0.0001], "gamma": [0.0001] } metric = dc.metrics.Metric(dc.metrics.mean_absolute_error) optimizer = dc.hyper.GridHyperparamOpt(krr_model_builder) best_krr, best_krr_hyperparams, all_krr_results = optimizer.hyperparam_search( params_dict, train_dataset, valid_dataset, output_transformers=transformers, metric=metric, use_max=False) for key, value in all_krr_results.items(): print(f'{key}: {value}') print('Best hyperparams:', best_krr_hyperparams) ``` # Congratulations! Time to join the Community! Congratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways: ## Star DeepChem on [GitHub](https://github.com/deepchem/deepchem) This helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build. ## Join the DeepChem Gitter The DeepChem [Gitter](https://gitter.im/deepchem/Lobby) hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation! # Bibliography: [1] https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.98.146401
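As a footnote to the featurization section above: the zero-padding trick is handled internally by `dc.feat.CoulombMatrixEig`, but the idea itself is simple. A plain-Python sketch (illustrative, not DeepChem's actual implementation):

```python
def zero_pad(mat, n):
    # Embed a k x k Coulomb matrix (list of lists) into an n x n zero
    # matrix, so molecules with different atom counts share one shape.
    k = len(mat)
    padded = [[0.0] * n for _ in range(n)]
    for i in range(k):
        for j in range(k):
            padded[i][j] = mat[i][j]
    return padded

small = [[1.0, 0.5], [0.5, 2.0]]  # a hypothetical 2-atom "molecule"
print(zero_pad(small, 3))
```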
github_jupyter
For our Sierra Leone example, we have IRI data for some of the roads, as collected by Road Lab Pro. Here, we match this data onto our much more detailed (and topologically correct) OSM road network. ``` import os, sys import pandas as pd import geopandas as gpd import networkx as nx from shapely.geometry import Point, MultiPoint from shapely.wkt import loads from scipy import spatial from functools import partial import pyproj from shapely.ops import transform sys.path.append(r'C:\Users\charl\Documents\GitHub\GOST_PublicGoods\GOSTNets\GOSTNets') import GOSTnet as gn ``` Set the EPSG code for the projection. This will be the projection where real world distances are measured ``` code = 2161 ``` Set paths to your graph object and import ``` net_pth = r'C:\Users\charl\Documents\GOST\SierraLeone\RoadNet' net_name = r'largest_G.pickle' G = nx.read_gpickle(os.path.join(net_pth, net_name)) ``` Set paths to your IRI dataset and import. Ensure the projection is WGS 84 ``` iri_pth = r'C:\Users\charl\Documents\GOST\SierraLeone\IRI_data' iri_name = r'road_network_condition_vCombo.shp' iri_df = gpd.read_file(os.path.join(iri_pth, iri_name)) iri_df = iri_df.to_crs({'init':'epsg:4326'}) ``` Remove any records in the IRI dataframe which are equal to 0 - we only want to match on valid information ``` iri_df = iri_df.loc[iri_df.Avg_iri > 0] ``` Convert the LineString to a list object of the constituent point coordinates. We do this because linestring to linestring intersections are slow, painful and unpredictable. It is easier to conceptualize the intersection as line to point, or polygon to point.
We pursue the latter here ``` iri_df['point_bag'] = iri_df.geometry.apply(lambda x: list(x.coords)) ``` Create a dictionary object of IRI:list(points) objects ``` bag = {} for index, row in iri_df.iterrows(): bag[row.Avg_iri] = MultiPoint(row['point_bag']) ``` Iterate out the points into their own list, with corresponding IRI list ``` points = [] iris = [] for b in bag: for c in bag[b].geoms: points.append(c) iris.append(b) ``` Generate a new dataframe composed only of geometry:IRI pairs ``` points_df = pd.DataFrame({'IRIs':iris, 'Points':points}) ``` Convert to GeoDataFrame using the known projection (WGS 84) ``` points_gdf = gpd.GeoDataFrame(points_df, crs = {'init':'epsg:4326'}, geometry = 'Points') ``` Project to metres to allow binding onto the graph ``` points_gdf_proj = points_gdf.to_crs({'init':'epsg:%s' % code}) ``` Save down as required ``` #points_gdf.to_file(os.path.join(net_pth, 'IRIpoints.shp'), driver = 'ESRI Shapefile') ``` Generate a spatial index. This will allow us to do faster intersections later ``` sindex = points_gdf_proj.sindex ``` Define the projection method. This will be called many times in the next loop ``` source_crs = 'epsg:4326' target_crs = 'epsg:%s' % code project_WGS_to_UTM = partial( pyproj.transform, pyproj.Proj(init=source_crs), pyproj.Proj(init=target_crs)) ``` Iterate over all graph edges, perform fast spatial intersection, add on IRI data ``` # define a counter c = 0 # iterate over the edges in the graph for u, v, data in G.edges(data = True): # convert string object to shapely object if type(data['Wkt']) == str: polygon = loads(data['Wkt']) # if geometry appears to be a list, unbundle it first.
elif type(data['Wkt']) == list: data['Wkt'] = gn.unbundle_geometry(data['Wkt']) polygon = data['Wkt'] # project shapely object to UTM zone of choice polygon_proj = transform(project_WGS_to_UTM, polygon) # buffer by 10 metres to capture nearby points polygon_proj = polygon_proj.buffer(10) # generate the list of possible matches - the index of the points that intersects the # boundary of the projected polygon possible_matches_index = list(sindex.intersection(polygon_proj.bounds)) # use this to .iloc the actual points GeoDataFrame possible_matches = points_gdf_proj.iloc[possible_matches_index] # intersect this smaller dataframe with the actual geometry to get an accurate intersection precise_matches = possible_matches[possible_matches.intersects(polygon_proj)] # match on mean IRI as a data dictionary object if more than 3 points detected if len(precise_matches) > 3: data['iri'] = precise_matches.IRIs.mean() else: data['iri'] = 0 c+=1 if c % 10000 == 0: print('edges completed: ',c) ``` Save down the new graph ``` gn.save(G, 'IRI_adj', net_pth, pickle = True, nodes = False, edges = False) ```
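The edge loop above relies on the classic two-stage pattern: a cheap bounding-box query against the spatial index (`sindex.intersection`) first, then an exact `intersects` test on the few surviving candidates. A library-free sketch of that idea (names are illustrative, not the geopandas API):

```python
def bbox(points):
    # Axis-aligned bounding box (minx, miny, maxx, maxy) of a point list.
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def bbox_overlaps(a, b):
    # Two boxes overlap iff they overlap on both the x and y axes.
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def candidates(query_box, boxes):
    # Stage 1: cheap rectangle test, standing in for sindex.intersection().
    # Stage 2 (the exact geometry test) would run only on these survivors.
    return [i for i, b in enumerate(boxes) if bbox_overlaps(query_box, b)]

pts = [(0, 0), (5, 5), (20, 20)]
hits = candidates((0, 0, 10, 10), [bbox([p]) for p in pts])
print(hits)
```

The point of the two stages is that the rectangle test is O(1) per candidate (and indexed in the real `sindex`), while exact geometry intersection is far more expensive.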
github_jupyter
<a href="https://colab.research.google.com/github/AnkurMali/IST597_Spring_2022/blob/main/IST597_SP22_RNN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # IST597: Recurrent Neural Networks (RNNs) for sequence classification Thanks to @googleAI, @Keras, @madlalina, @nvidia_research ---- We will be building an RNN for sentiment analysis on IMDB movie reviews ([stanford_imdb](https://ai.stanford.edu/~amaas/data/sentiment/)). ``` import tensorflow as tf import pandas as pd import pickle import matplotlib.pyplot as plt %matplotlib inline tf.random.set_seed(1234) import sys sys.path.insert(1,'/content/') from data_utils import parse_imdb_sequence length_reviews = pickle.load(open('/content/length_reviews.pkl', 'rb')) pd.DataFrame(length_reviews, columns=['Length reviews']).hist(bins=100, color='blue'); plt.grid(False); train_dataset = tf.data.TFRecordDataset('/content/train.tfrecords') train_dataset = train_dataset.map(parse_imdb_sequence).shuffle(buffer_size=10000) train_dataset = train_dataset.padded_batch(512, padded_shapes=([None],[],[])) test_dataset = tf.data.TFRecordDataset('/content/test.tfrecords') test_dataset = test_dataset.map(parse_imdb_sequence).shuffle(buffer_size=10000) test_dataset = test_dataset.padded_batch(512, padded_shapes=([None],[],[])) # Read the word vocabulary word2idx = pickle.load(open('/content/word2idx.pkl', 'rb')) ``` ## RNN model for sequence classification, compatible with Eager API ---- In the cell below, you can find the class that I have created for the RNN model. The API is very similar to the one I created in the previous tutorial, except that now we track the accuracy of the model instead of the loss. The idea of the network is very simple. We simply take each word in the review, select its corresponding word embedding (initialized randomly in the beginning), and pass it through the RNN cell.
We then take the output of the RNN cell at the end of the sequence and pass it through a dense layer (with ReLU activation) to obtain the final predictions. As usual, the network inherits from tf.keras.Model in order to keep track of all variables and save/restore them easily. ![img](tutorials_graphics/rnn_imdb.png) ``` class RNNModel(tf.keras.Model): def __init__(self, embedding_size=100, cell_size=64, dense_size=128, num_classes=2, vocabulary_size=None, rnn_cell='lstm', device='cpu:0', checkpoint_directory=None): ''' Define the parameterized layers used during forward-pass, the device where you would like to run the computation on and the checkpoint directory. Additionally, you can also modify the default size of the network. Args: embedding_size: the size of the word embedding. cell_size: RNN cell size. dense_size: the size of the dense layer. num_classes: the number of labels in the network. vocabulary_size: the size of the word vocabulary. rnn_cell: string, either 'lstm' or 'ugrnn'. device: string, 'cpu:n' or 'gpu:n' (n can vary). Default, 'cpu:0'. checkpoint_directory: the directory where you would like to save or restore a model.
''' super(RNNModel, self).__init__() # Weights initializer function w_initializer = tf.compat.v1.keras.initializers.glorot_uniform() # Biases initializer function b_initializer = tf.zeros_initializer() # Initialize weights for word embeddings self.embeddings = tf.keras.layers.Embedding(vocabulary_size, embedding_size, embeddings_initializer=w_initializer) # Dense layer initialization self.dense_layer = tf.keras.layers.Dense(dense_size, activation=tf.nn.relu, kernel_initializer=w_initializer, bias_initializer=b_initializer) # Predictions layer initialization self.pred_layer = tf.keras.layers.Dense(num_classes, activation=None, kernel_initializer=w_initializer, bias_initializer=b_initializer) # Basic LSTM cell if rnn_cell=='lstm': self.rnn_cell = tf.compat.v1.nn.rnn_cell.BasicLSTMCell(cell_size) # Else RNN cell else: self.rnn_cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(cell_size) # Define the device self.device = device # Define the checkpoint directory self.checkpoint_directory = checkpoint_directory def predict(self, X, seq_length, is_training): ''' Predicts the probability of each class, based on the input sample. Args: X: 2D tensor of shape (batch_size, time_steps). seq_length: the length of each sequence in the batch. is_training: Boolean. Either the network is predicting in training mode or not. 
''' # Get the number of samples within a batch num_samples = tf.shape(X)[0] # Initialize LSTM cell state with zeros state = self.rnn_cell.zero_state(num_samples, dtype=tf.float32) # Get the embedding of each word in the sequence embedded_words = self.embeddings(X) # Unstack the embeddings unstacked_embeddings = tf.unstack(embedded_words, axis=1) # Iterate through each timestep and append the predictions outputs = [] for input_step in unstacked_embeddings: output, state = self.rnn_cell(input_step, state) outputs.append(output) # Stack outputs to (batch_size, time_steps, cell_size) outputs = tf.stack(outputs, axis=1) # Extract the output of the last time step, of each sample idxs_last_output = tf.stack([tf.range(num_samples), tf.cast(seq_length-1, tf.int32)], axis=1) final_output = tf.gather_nd(outputs, idxs_last_output) # Add dropout for regularization #dropped_output = tf.compat.v1.layers.Dropout(final_output, rate=0.3, training=is_training) # Pass the last cell state through a dense layer (ReLU activation) dense = self.dense_layer(final_output) # Compute the unnormalized log probabilities logits = self.pred_layer(dense) return logits def loss_fn(self, X, y, seq_length, is_training): """ Defines the loss function used during training. """ preds = self.predict(X, seq_length, is_training) loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=preds) return loss def grads_fn(self, X, y, seq_length, is_training): """ Dynamically computes the gradients of the loss value with respect to the parameters of the model, in each forward pass. """ with tf.GradientTape() as tape: loss = self.loss_fn(X, y, seq_length, is_training) return tape.gradient(loss, self.variables) def restore_model(self): """ Function to restore trained model. 
""" with tf.device(self.device): # Run the model once to initialize variables dummy_input = tf.constant(tf.zeros((1,1))) dummy_length = tf.constant(1, shape=(1,)) dummy_pred = self.predict(dummy_input, dummy_length, False) # Restore the variables of the model saver = tf.compat.v1.train.Saver(self.variables) saver.restore(tf.train.latest_checkpoint (self.checkpoint_directory)) def save_model(self, global_step=0): """ Function to save trained model. """ tf.compat.v1.train.Saver(self.variables).save(save_path=self.checkpoint_directory, global_step=global_step) def fit(self, training_data, eval_data, optimizer, num_epochs=500, early_stopping_rounds=10, verbose=10, train_from_scratch=False): """ Function to train the model, using the selected optimizer and for the desired number of epochs. You can either train from scratch or load the latest model trained. Early stopping is used in order to mitigate the risk of overfitting the network. Args: training_data: the data you would like to train the model on. Must be in the tf.data.Dataset format. eval_data: the data you would like to evaluate the model on. Must be in the tf.data.Dataset format. optimizer: the optimizer used during training. num_epochs: the maximum number of iterations you would like to train the model. early_stopping_rounds: stop training if the accuracy on the eval dataset does not increase after n epochs. verbose: int. Specify how often to print the loss value of the network. train_from_scratch: boolean. Whether to initialize variables of the the last trained model or initialize them randomly. """ if train_from_scratch==False: self.restore_model() # Initialize best_acc. This variable will store the highest accuracy # on the eval dataset. 
best_acc = 0 # Initialize classes to update the mean accuracy of train and eval train_acc = tf.keras.metrics.Accuracy('train_acc') eval_acc = tf.keras.metrics.Accuracy('eval_acc') # Initialize dictionary to store the accuracy history self.history = {} self.history['train_acc'] = [] self.history['eval_acc'] = [] # Begin training with tf.device(self.device): for i in range(num_epochs): # Training with gradient descent for step, (X, y, seq_length) in enumerate(training_data): grads = self.grads_fn(X, y, seq_length, True) optimizer.apply_gradients(zip(grads, self.variables)) # Check accuracy train dataset for step, (X, y, seq_length) in enumerate(training_data): logits = self.predict(X, seq_length, False) preds = tf.argmax(logits, axis=1) train_acc(preds, y) self.history['train_acc'].append(train_acc.result().numpy()) # Reset metrics train_acc.reset_states() # Check accuracy eval dataset for step, (X, y, seq_length) in enumerate(eval_data): logits = self.predict(X, seq_length, False) preds = tf.argmax(logits, axis=1) eval_acc(preds, y) self.history['eval_acc'].append(eval_acc.result().numpy()) # Reset metrics eval_acc.reset_states() # Print train and eval accuracy if (i==0) | ((i+1)%verbose==0): print('Train accuracy at epoch %d: ' %(i+1), self.history['train_acc'][-1]) print('Eval accuracy at epoch %d: ' %(i+1), self.history['eval_acc'][-1]) # Check for early stopping if self.history['eval_acc'][-1]>best_acc: best_acc = self.history['eval_acc'][-1] count = early_stopping_rounds else: count -= 1 if count==0: break ``` ## Train model with gradient descent and early stopping ---- ### Model training with simple LSTM cells ---- ``` # Specify the path where you want to save/restore the trained variables. checkpoint_directory = 'models_checkpoints/ImdbRNN/' # Use the GPU if available. device = 'gpu:0' # Define optimizer. optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=1e-4) # Instantiate model. This doesn't initialize the variables yet. 
lstm_model = RNNModel(vocabulary_size=len(word2idx), device=device, checkpoint_directory=checkpoint_directory) # Train model lstm_model.fit(train_dataset, test_dataset, optimizer, num_epochs=10, early_stopping_rounds=5, verbose=1, train_from_scratch=True) #lstm_model.save_model() checkpoint = tf.train.Checkpoint(lstm_model) save_path = checkpoint.save(checkpoint_directory) ``` ### Model training with RNN cells --- ``` # Define optimizer. optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=1e-4) # Instantiate model. This doesn't initialize the variables yet. ugrnn_model = RNNModel(vocabulary_size=len(word2idx), rnn_cell='ugrnn', device=device, checkpoint_directory=checkpoint_directory) # Train model ugrnn_model.fit(train_dataset, test_dataset, optimizer, num_epochs=50, early_stopping_rounds=5, verbose=1, train_from_scratch=True) ``` ### Performance comparison --- ``` f, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(10, 4)) ax1.plot(range(len(lstm_model.history['train_acc'])), lstm_model.history['train_acc'], label='LSTM Train Accuracy'); ax1.plot(range(len(lstm_model.history['eval_acc'])), lstm_model.history['eval_acc'], label='LSTM Test Accuracy'); ax2.plot(range(len(ugrnn_model.history['train_acc'])), ugrnn_model.history['train_acc'], label='UGRNN Train Accuracy'); ax2.plot(range(len(ugrnn_model.history['eval_acc'])), ugrnn_model.history['eval_acc'], label='UGRNN Test Accuracy'); ax1.legend(); ax2.legend(); ``` ## Test network on new samples --- ``` ############################################################### # Import/download necessary libraries to process new sequences ############################################################### import nltk try: nltk.data.find('tokenizers/punkt') except LookupError: nltk.download('punkt') from nltk.tokenize import word_tokenize import re def process_new_review(review): '''Function to process a new review. Args: review: original text review, string. 
Returns: indexed_review: sequence of integers, words correspondence from word2idx. seq_length: the length of the review. ''' indexed_review = re.sub(r'<[^>]+>', ' ', review) indexed_review = word_tokenize(indexed_review) indexed_review = [word2idx[word] if word in list(word2idx.keys()) else word2idx['Unknown_token'] for word in indexed_review] indexed_review = indexed_review + [word2idx['End_token']] seq_length = len(indexed_review) return indexed_review, seq_length sent_dict = {0: 'negative', 1: 'positive'} review_score_10 = "I think Bad Apples is a great time and I recommend! I enjoyed the opening, which gave way for the rest of the movie to occur. The main couple was very likable and I believed all of their interactions. They had great onscreen chemistry and made me laugh quite a few times! Keeping the girls in the masks but seeing them in action was something I loved. It kept a mystery to them throughout. I think the dialogue was great. The kills were fun. And the special surprise gore effect at the end was AWESOME!! I won't spoil that part ;) I also enjoyed how the movie wrapped up. It gave a very urban legends type feel of \"did you ever hear the story...\". Plus is leaves the door open for another film which I wouldn't mind at all. Long story short, I think if you take the film for what it is; a fun little horror flick, then you won't be disappointed! HaPpY eArLy HaLLoWeEn!" review_score_4 = "A young couple comes to a small town, where the husband get a job working in a hospital. The wife which you instantly hate or dislike works home, at the same time a horrible murders takes place in this small town by two masked killers. Bad Apples is just your tipical B-horror movie with average acting (I give them that. Altough you may get the idea that some of the actors are crazy-convervative Christians), but the script is just bad, and that's what destroys the film." 
review_score_1 = "When you first start watching this movie, you can tell its going to be a painful ride. the audio is poor...the attacks by the \"girls\" are like going back in time, to watching the old rocky films, were blows never touched. the editing is poor with it aswell, example the actress in is the bath when her husband comes home, clearly you see her wearing a flesh coloured bra in the bath. no hints or spoilers, just wait till you find it in a bargain basket of cheap dvds in a couple of weeks" new_reviews = [review_score_10, review_score_4, review_score_1] scores = [10, 4, 1] with tf.device(device): for original_review, score in zip(new_reviews, scores): indexed_review, seq_length = process_new_review(original_review) indexed_review = tf.reshape(tf.constant(indexed_review), (1,-1)) seq_length = tf.reshape(tf.constant(seq_length), (1,)) logits = lstm_model.predict(indexed_review, seq_length, False) pred = tf.argmax(logits, axis=1).numpy()[0] print('The sentiment for the review with score %d was found to be %s' %(score, sent_dict[pred])) ```
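The early-stopping rule inside `fit` above — keep a patience counter, reset it whenever the eval accuracy improves, and stop when it hits zero — can be sketched in isolation. This is a minimal plain-Python sketch with made-up accuracy values, independent of TensorFlow:

```python
def train_with_early_stopping(eval_accuracies, early_stopping_rounds=5):
    """Return the number of epochs actually run, given the eval accuracy
    observed at each epoch (a stand-in for real training)."""
    best_acc = 0
    count = early_stopping_rounds
    for epoch, acc in enumerate(eval_accuracies, start=1):
        if acc > best_acc:
            best_acc = acc                  # improvement: remember it...
            count = early_stopping_rounds   # ...and reset the patience counter
        else:
            count -= 1                      # no improvement: burn one round of patience
            if count == 0:
                break                       # patience exhausted: stop training
    return epoch

# Accuracy improves for 3 epochs, then plateaus: training stops
# early_stopping_rounds epochs after the last improvement.
print(train_with_early_stopping([0.5, 0.6, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7], 5))  # 8
```

With `early_stopping_rounds=5`, the last improvement at epoch 3 means training halts at epoch 8, which is exactly the behaviour of the `count` variable in `fit`.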
# Neural networks with PyTorch

Deep learning networks tend to be massive with dozens or hundreds of layers; that's where the term "deep" comes from. You can build one of these deep networks using only weight matrices as we did in the previous notebook, but in general it's very cumbersome and difficult to implement. PyTorch has a module, `nn`, that provides an efficient way to build large neural networks.

```
# Import necessary packages

%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import numpy as np
import torch

import helper

import matplotlib.pyplot as plt
```

Now we're going to build a larger network that can solve a (formerly) difficult problem: identifying text in an image. Here we'll use the MNIST dataset, which consists of greyscale handwritten digits. Each image is 28x28 pixels; you can see a sample below

<img src='assets/mnist.png'>

Our goal is to build a neural network that can take one of these images and predict the digit in the image.

First up, we need to get our dataset. This is provided through the `torchvision` package. The code below will download the MNIST dataset and create a training dataset for us. Don't worry too much about the details here, you'll learn more about this later.

```
### Run this cell

from torchvision import datasets, transforms

# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,)),
                              ])

# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
```

We have the training data loaded into `trainloader` and we make that an iterator with `iter(trainloader)`.
Later, we'll use this to loop through the dataset for training, like

```python
for image, label in trainloader:
    ## do things with images and labels
```

You'll notice I created the `trainloader` with a batch size of 64, and `shuffle=True`. The batch size is the number of images we get in one iteration from the data loader and pass through our network, often called a *batch*. And `shuffle=True` tells it to shuffle the dataset every time we start going through the data loader again. But here I'm just grabbing the first batch so we can check out the data. We can see below that `images` is just a tensor with size `(64, 1, 28, 28)`. So, 64 images per batch, 1 color channel, and 28x28 images.

```
dataiter = iter(trainloader)
images, labels = next(dataiter)
print(type(images))
print(images.shape)
print(labels.shape)
```

This is what one of the images looks like.

```
plt.imshow(images[1].numpy().squeeze(), cmap='Greys_r');
```

First, let's try to build a simple network for this dataset using weight matrices and matrix multiplications. Then, we'll see how to do it using PyTorch's `nn` module which provides a much more convenient and powerful method for defining network architectures.

The networks you've seen so far are called *fully-connected* or *dense* networks. Each unit in one layer is connected to each unit in the next layer. In fully-connected networks, the input to each layer must be a one-dimensional vector (which can be stacked into a 2D tensor as a batch of multiple examples). However, our images are 28x28 2D tensors, so we need to convert them into 1D vectors. Thinking about sizes, we need to convert the batch of images with shape `(64, 1, 28, 28)` to have a shape of `(64, 784)`; 784 is 28 times 28. This is typically called *flattening*: we flatten the 2D images into 1D vectors.

Previously you built a network with one output unit. Here we need 10 output units, one for each digit.
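The flattening step described above is just a reshape. Here is a minimal NumPy sketch of the same shape change (the notebook itself does it on a torch tensor with `images.view(images.shape[0], -1)`, which behaves like `reshape` here):

```python
import numpy as np

# A stand-in batch shaped like the MNIST batches above:
# 64 images, 1 color channel, 28x28 pixels.
batch = np.zeros((64, 1, 28, 28))

# Flatten everything except the batch dimension; -1 lets the
# library infer 1 * 28 * 28 = 784.
flat = batch.reshape(batch.shape[0], -1)
print(flat.shape)  # (64, 784)
```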
We want our network to predict the digit shown in an image, so what we'll do is calculate probabilities that the image is of any one digit or class. This ends up being a discrete probability distribution over the classes (digits) that tells us the most likely class for the image. That means we need 10 output units for the 10 classes (digits). We'll see how to convert the network output into a probability distribution next.

> **Exercise:** Flatten the batch of images `images`. Then build a multi-layer network with 784 input units, 256 hidden units, and 10 output units using random tensors for the weights and biases. For now, use a sigmoid activation for the hidden layer. Leave the output layer without an activation; we'll add one that gives us a probability distribution next.

```
def activation(x):
    return 1/(1+torch.exp(-x))

# Flatten the input images
inputs = images.view(images.shape[0], -1)

# Create parameters
w1 = torch.randn(784, 256)
b1 = torch.randn(256)

w2 = torch.randn(256, 10)
b2 = torch.randn(10)

h = activation(torch.mm(inputs, w1) + b1)

out = torch.mm(h, w2) + b2
```

Now we have 10 outputs for our network. We want to pass in an image to our network and get out a probability distribution over the classes that tells us the likely class(es) the image belongs to. Something that looks like this:

<img src='assets/image_distribution.png' width=500px>

Here we see that the probability for each class is roughly the same. This is representing an untrained network: it hasn't seen any data yet, so it just returns a uniform distribution with equal probabilities for each class.

To calculate this probability distribution, we often use the [**softmax** function](https://en.wikipedia.org/wiki/Softmax_function). Mathematically this looks like

$$
\Large \sigma(x_i) = \cfrac{e^{x_i}}{\sum_k^K{e^{x_k}}}
$$

What this does is squish each input $x_i$ between 0 and 1 and normalizes the values to give you a proper probability distribution where the probabilities sum up to one.
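As a quick numerical check, the softmax formula above can be evaluated directly with NumPy. This is only an illustration of the math (the exercise that follows asks for a PyTorch version); `keepdims=True` plays the same role as reshaping the sums to a column:

```python
import numpy as np

def np_softmax(x):
    # Exponentiate, then divide each row by its own sum. keepdims=True
    # keeps the sums shaped (batch, 1), so broadcasting divides row-wise.
    e = np.exp(x)
    return e / e.sum(axis=1, keepdims=True)

scores = np.array([[1.0, 2.0, 3.0],
                   [0.0, 0.0, 0.0]])
probs = np_softmax(scores)
print(probs.sum(axis=1))  # each row sums to 1
```

Note that the second row has identical scores, so its probabilities come out uniform, just like the untrained-network picture above.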
> **Exercise:** Implement a function `softmax` that performs the softmax calculation and returns probability distributions for each example in the batch. Note that you'll need to pay attention to the shapes when doing this. If you have a tensor `a` with shape `(64, 10)` and a tensor `b` with shape `(64,)`, doing `a/b` will give you an error because PyTorch will try to do the division across the columns (called broadcasting) but you'll get a size mismatch. The way to think about this is for each of the 64 examples, you only want to divide by one value, the sum in the denominator. So you need `b` to have a shape of `(64, 1)`. This way PyTorch will divide the 10 values in each row of `a` by the one value in each row of `b`. Pay attention to how you take the sum as well. You'll need to define the `dim` keyword in `torch.sum`. Setting `dim=0` takes the sum across the rows while `dim=1` takes the sum across the columns. ``` def softmax(x): return torch.exp(x)/torch.sum(torch.exp(x), dim=1).view(-1, 1) probabilities = softmax(out) # Does it have the right shape? Should be (64, 10) print(probabilities.shape) # Does it sum to 1? print(probabilities.sum(dim=1)) ``` ## Building networks with PyTorch PyTorch provides a module `nn` that makes building networks much simpler. Here I'll show you how to build the same one as above with 784 inputs, 256 hidden units, 10 output units and a softmax output. ``` from torch import nn class Network(nn.Module): def __init__(self): super().__init__() # Inputs to hidden layer linear transformation self.hidden = nn.Linear(784, 256) # Output layer, 10 units - one for each digit self.output = nn.Linear(256, 10) # Define sigmoid activation and softmax output self.sigmoid = nn.Sigmoid() self.softmax = nn.Softmax(dim=1) def forward(self, x): # Pass the input tensor through each of our operations x = self.hidden(x) x = self.sigmoid(x) x = self.output(x) x = self.softmax(x) return x ``` Let's go through this bit by bit. 
```python
class Network(nn.Module):
```

Here we're inheriting from `nn.Module`. Combined with `super().__init__()` this creates a class that tracks the architecture and provides a lot of useful methods and attributes. It is mandatory to inherit from `nn.Module` when you're creating a class for your network. The name of the class itself can be anything.

```python
self.hidden = nn.Linear(784, 256)
```

This line creates a module for a linear transformation, $x\mathbf{W} + b$, with 784 inputs and 256 outputs and assigns it to `self.hidden`. The module automatically creates the weight and bias tensors which we'll use in the `forward` method. You can access the weight and bias tensors once the network (`net`) is created with `net.hidden.weight` and `net.hidden.bias`.

```python
self.output = nn.Linear(256, 10)
```

Similarly, this creates another linear transformation with 256 inputs and 10 outputs.

```python
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
```

Here I defined operations for the sigmoid activation and softmax output. Setting `dim=1` in `nn.Softmax(dim=1)` calculates softmax across the columns.

```python
def forward(self, x):
```

PyTorch networks created with `nn.Module` must have a `forward` method defined. It takes in a tensor `x` and passes it through the operations you defined in the `__init__` method.

```python
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
```

Here the input tensor `x` is passed through each operation and reassigned to `x`. We can see that the input tensor goes through the hidden layer, then a sigmoid function, then the output layer, and finally the softmax function. It doesn't matter what you name the variables here, as long as the inputs and outputs of the operations match the network architecture you want to build. The order in which you define things in the `__init__` method doesn't matter, but you'll need to sequence the operations correctly in the `forward` method.
Now we can create a `Network` object.

```
# Create the network and look at its text representation
model = Network()
model
```

You can define the network somewhat more concisely and clearly using the `torch.nn.functional` module. This is the most common way you'll see networks defined as many operations are simple element-wise functions. We normally import this module as `F`, `import torch.nn.functional as F`.

```
import torch.nn.functional as F

class Network(nn.Module):
    def __init__(self):
        super().__init__()
        # Inputs to hidden layer linear transformation
        self.hidden = nn.Linear(784, 256)
        # Output layer, 10 units - one for each digit
        self.output = nn.Linear(256, 10)
        
    def forward(self, x):
        # Hidden layer with sigmoid activation
        x = F.sigmoid(self.hidden(x))
        # Output layer with softmax activation
        x = F.softmax(self.output(x), dim=1)
        
        return x
```

### Activation functions

So far we've only been looking at the softmax activation, but in general any function can be used as an activation function. The only requirement is that for a network to approximate a non-linear function, the activation functions must be non-linear. Here are a few more examples of common activation functions: Tanh (hyperbolic tangent), and ReLU (rectified linear unit).

<img src="assets/activation.png" width=700px>

In practice, the ReLU function is used almost exclusively as the activation function for hidden layers.

### Your Turn to Build a Network

<img src="assets/mlp_mnist.png" width=600px>

> **Exercise:** Create a network with 784 input units, a hidden layer with 128 units and a ReLU activation, then a hidden layer with 64 units and a ReLU activation, and finally an output layer with a softmax activation as shown above. You can use a ReLU activation with the `nn.ReLU` module or `F.relu` function.
```
class Network(nn.Module):
    def __init__(self):
        super().__init__()
        # Defining the layers, 128, 64, 10 units each
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 64)
        # Output layer, 10 units - one for each digit
        self.fc3 = nn.Linear(64, 10)
        
    def forward(self, x):
        ''' Forward pass through the network, returns the output logits '''
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        x = F.relu(x)
        x = self.fc3(x)
        x = F.softmax(x, dim=1)
        
        return x

model = Network()
model
```

### Initializing weights and biases

The weights and such are automatically initialized for you, but it's possible to customize how they are initialized. The weights and biases are tensors attached to the layer you defined; you can get them with `model.fc1.weight` for instance.

```
print(model.fc1.weight)
print(model.fc1.bias)
```

For custom initialization, we want to modify these tensors in place. These are actually autograd *Variables*, so we need to get back the actual tensors with `model.fc1.weight.data`. Once we have the tensors, we can fill them with zeros (for biases) or random normal values.

```
# Set biases to all zeros
model.fc1.bias.data.fill_(0)

# sample from random normal with standard dev = 0.01
model.fc1.weight.data.normal_(std=0.01)
```

### Forward pass

Now that we have a network, let's see what happens when we pass in an image.

```
# Grab some data 
dataiter = iter(trainloader)
images, labels = next(dataiter)

# Resize images into a 1D vector, new shape is (batch size, color channels, image pixels) 
images.resize_(64, 1, 784)
# or images.resize_(images.shape[0], 1, 784) to automatically get batch size

# Forward pass through the network
img_idx = 0
ps = model.forward(images[img_idx,:])

img = images[img_idx]
helper.view_classify(img.view(1, 28, 28), ps)
```

As you can see above, our network has basically no idea what this digit is. It's because we haven't trained it yet, all the weights are random!
### Using `nn.Sequential`

PyTorch provides a convenient way to build networks like this where a tensor is passed sequentially through operations, `nn.Sequential` ([documentation](https://pytorch.org/docs/master/nn.html#torch.nn.Sequential)). Using this to build the equivalent network:

```
# Hyperparameters for our network
input_size = 784
hidden_sizes = [128, 64]
output_size = 10

# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[0], hidden_sizes[1]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[1], output_size),
                      nn.Softmax(dim=1))
print(model)

# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0,:])
helper.view_classify(images[0].view(1, 28, 28), ps)
```

The operations are available by passing in the appropriate index. For example, if you want to get the first Linear operation and look at the weights, you'd use `model[0]`.

```
print(model[0])
model[0].weight
```

You can also pass in an `OrderedDict` to name the individual layers and operations, instead of using incremental integers. Note that dictionary keys must be unique, so _each operation must have a different name_.

```
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
                      ('fc1', nn.Linear(input_size, hidden_sizes[0])),
                      ('relu1', nn.ReLU()),
                      ('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
                      ('relu2', nn.ReLU()),
                      ('output', nn.Linear(hidden_sizes[1], output_size)),
                      ('softmax', nn.Softmax(dim=1))]))
model
```

Now you can access layers either by integer or by name.

```
print(model[0])
print(model.fc1)
```

In the next notebook, we'll see how we can train a neural network to accurately predict the numbers appearing in the MNIST images.
## AI for Medicine Course 1 Week 1 lecture exercises

# Data Exploration

In the first assignment of this course, you will work with chest x-ray images taken from the public [ChestX-ray8 dataset](https://arxiv.org/abs/1705.02315). In this notebook, you'll get a chance to explore this dataset and familiarize yourself with some of the techniques you'll use in the first graded assignment.

<img src="xray-image.png" alt="U-net Image" width="300" align="middle"/>

The first step before jumping into writing code for any machine learning project is to explore your data. A standard Python package for analyzing and manipulating data is [pandas](https://pandas.pydata.org/docs/#). With the next two code cells, you'll import `pandas` and a package called `numpy` for numerical manipulation, then use `pandas` to read a csv file into a dataframe and print out the first few rows of data.

```
# Import necessary packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import os
import seaborn as sns
sns.set()

# Read csv file containing training data
train_df = pd.read_csv("nih/train-small.csv")
# Print first 5 rows
print(f'There are {train_df.shape[0]} rows and {train_df.shape[1]} columns in this data frame')
train_df.head()
```

Have a look at the various columns in this csv file. The file contains the names of chest x-ray images ("Image" column) and the columns filled with ones and zeros identify which diagnoses were given based on each x-ray image.

### Data types and null values check

Run the next cell to explore the data types present in each column and whether any null values exist in the data.

```
# Look at the data type of each column and whether null values are present
train_df.info()
```

### Unique IDs check

"PatientId" has an identification number for each patient.
One thing you'd like to know about a medical dataset like this is if you're looking at repeated data for certain patients or whether each image represents a different person.

```
print(f"The total patient ids are {train_df['PatientId'].count()}, from those the unique ids are {train_df['PatientId'].value_counts().shape[0]} ")
```

As you can see, the number of unique patients in the dataset is less than the total number so there must be some overlap. For patients with multiple records, you'll want to make sure they do not show up in both training and test sets in order to avoid data leakage (covered later in this week's lectures).

### Explore data labels

Run the next two code cells to create a list of the names of each patient condition or disease.

```
columns = train_df.keys()
columns = list(columns)
print(columns)

# Remove unnecessary elements
columns.remove('Image')
columns.remove('PatientId')
# Get the total classes
print(f"There are {len(columns)} columns of labels for these conditions: {columns}")
```

Run the next cell to print out the number of positive labels (1's) for each condition.

```
# Print out the number of positive labels for each class
for column in columns:
    print(f"The class {column} has {train_df[column].sum()} samples")
```

Have a look at the counts for the labels in each class above. Does this look like a balanced dataset?

### Data Visualization

Using the image names listed in the csv file, you can retrieve the image associated with each row of data in your dataframe. Run the cell below to visualize a random selection of images from the dataset.
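Before moving on to visualization, the data-leakage point above can be made concrete: one common way to keep a patient's records from straddling the train/test boundary is to split the *unique* patient IDs first, and only then select rows. A minimal pandas sketch with made-up IDs (the graded assignment supplies its own split):

```python
import pandas as pd

# Toy dataframe standing in for train_df: patients 1 and 3 have repeated records.
df = pd.DataFrame({
    "PatientId": [1, 1, 2, 3, 3, 3, 4, 5],
    "Image": [f"img{i}.png" for i in range(8)],
})

# Shuffle the unique patient IDs, then carve off ~20% of *patients* for the test set.
ids = df["PatientId"].drop_duplicates().sample(frac=1, random_state=0)
n_test = max(1, int(0.2 * len(ids)))
test_ids = set(ids.iloc[:n_test])

train_part = df[~df["PatientId"].isin(test_ids)]
test_part = df[df["PatientId"].isin(test_ids)]

# Every record of a given patient lands on exactly one side, so no leakage.
print(set(train_part["PatientId"]) & set(test_part["PatientId"]))  # set()
```

Splitting rows at random instead would almost certainly put some images of patient 1 or 3 on both sides.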
``` # Extract numpy values from Image column in data frame images = train_df['Image'].values # Extract 9 random images from it random_images = [np.random.choice(images) for i in range(9)] # Location of the image dir img_dir = 'nih/images-small/' print('Display Random Images') # Adjust the size of your images plt.figure(figsize=(20,10)) # Iterate and plot random images for i in range(9): plt.subplot(3, 3, i + 1) img = plt.imread(os.path.join(img_dir, random_images[i])) plt.imshow(img, cmap='gray') plt.axis('off') # Adjust subplot parameters to give specified padding plt.tight_layout() ``` ### Investigate a single image Run the cell below to look at the first image in the dataset and print out some details of the image contents. ``` # Get the first image that was listed in the train_df dataframe sample_img = train_df.Image[0] raw_image = plt.imread(os.path.join(img_dir, sample_img)) plt.imshow(raw_image, cmap='gray') plt.colorbar() plt.title('Raw Chest X Ray Image') print(f"The dimensions of the image are {raw_image.shape[0]} pixels width and {raw_image.shape[1]} pixels height, one single color channel") print(f"The maximum pixel value is {raw_image.max():.4f} and the minimum is {raw_image.min():.4f}") print(f"The mean value of the pixels is {raw_image.mean():.4f} and the standard deviation is {raw_image.std():.4f}") ``` ### Investigate pixel value distribution Run the cell below to plot up the distribution of pixel values in the image shown above. ``` # Plot a histogram of the distribution of the pixels sns.distplot(raw_image.ravel(), label=f'Pixel Mean {np.mean(raw_image):.4f} & Standard Deviation {np.std(raw_image):.4f}', kde=False) plt.legend(loc='upper center') plt.title('Distribution of Pixel Intensities in the Image') plt.xlabel('Pixel Intensity') plt.ylabel('# Pixels in Image') ``` <a name="image-processing"></a> # Image Preprocessing in Keras Before training, you'll first modify your images to be better suited for training a convolutional neural network. 
For this task you'll use the Keras [ImageDataGenerator](https://keras.io/preprocessing/image/) function to perform data preprocessing and data augmentation. Run the next two cells to import this function and create an image generator for preprocessing. ``` # Import data generator from keras from keras.preprocessing.image import ImageDataGenerator # Normalize images image_generator = ImageDataGenerator( samplewise_center=True, #Set each sample mean to 0. samplewise_std_normalization= True # Divide each input by its standard deviation ) ``` ### Standardization The `image_generator` you created above will act to adjust your image data such that the new mean of the data will be zero, and the standard deviation of the data will be 1. In other words, the generator will replace each pixel value in the image with a new value calculated by subtracting the mean and dividing by the standard deviation. $$\frac{x_i - \mu}{\sigma}$$ Run the next cell to pre-process your data using the `image_generator`. In this step you will also be reducing the image size down to 320x320 pixels. 
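As a quick aside before running it, the standardization formula above is easy to verify by hand. This NumPy sketch applies the same per-sample arithmetic the generator performs, on a synthetic stand-in image:

```python
import numpy as np

rng = np.random.default_rng(0)
raw = rng.uniform(0, 255, size=(320, 320))  # a stand-in single-channel image

# (x - mean) / std, computed over the whole sample
standardized = (raw - raw.mean()) / raw.std()

# After standardization the sample has mean ~0 and standard deviation ~1.
print(standardized.mean(), standardized.std())
```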
``` # Flow from directory with specified batch size and target image size generator = image_generator.flow_from_dataframe( dataframe=train_df, directory="nih/images-small/", x_col="Image", # features y_col= ['Mass'], # labels class_mode="raw", # 'Mass' column should be in train_df batch_size= 1, # images per batch shuffle=False, # shuffle the rows or not target_size=(320,320) # width and height of output image ) ``` Run the next cell to plot up an example of a pre-processed image ``` # Plot a processed image sns.set_style("white") generated_image, label = generator.__getitem__(0) plt.imshow(generated_image[0], cmap='gray') plt.colorbar() plt.title('Raw Chest X Ray Image') print(f"The dimensions of the image are {generated_image.shape[1]} pixels width and {generated_image.shape[2]} pixels height") print(f"The maximum pixel value is {generated_image.max():.4f} and the minimum is {generated_image.min():.4f}") print(f"The mean value of the pixels is {generated_image.mean():.4f} and the standard deviation is {generated_image.std():.4f}") ``` Run the cell below to see a comparison of the distribution of pixel values in the new pre-processed image versus the raw image. 
```
# Include a histogram of the distribution of the pixels
sns.set()
plt.figure(figsize=(10, 7))

# Plot histogram for original image
sns.distplot(raw_image.ravel(),
             label=f'Original Image: mean {np.mean(raw_image):.4f} - Standard Deviation {np.std(raw_image):.4f} \n '
             f'Min pixel value {np.min(raw_image):.4} - Max pixel value {np.max(raw_image):.4}',
             color='blue',
             kde=False)

# Plot histogram for generated image
sns.distplot(generated_image[0].ravel(),
             label=f'Generated Image: mean {np.mean(generated_image[0]):.4f} - Standard Deviation {np.std(generated_image[0]):.4f} \n'
             f'Min pixel value {np.min(generated_image[0]):.4} - Max pixel value {np.max(generated_image[0]):.4}',
             color='red',
             kde=False)

# Place legends
plt.legend()
plt.title('Distribution of Pixel Intensities in the Image')
plt.xlabel('Pixel Intensity')
plt.ylabel('# Pixel')
```

#### That's it for this exercise, you should now be a bit more familiar with the dataset you'll be using in this week's assignment!
``` """ LICENSE MIT 2020 Guillaume Rozier Website : http://www.guillaumerozier.fr Mail : guillaume.rozier@telecomnancy.net README:s This file contains script that generate France maps and GIFs. Single images are exported to folders in 'charts/image/france'. GIFs are exported to 'charts/image/france'. I'm currently cleaning this file, please ask me is something is not clear enough! Requirements: please see the imports below (use pip3 to install them). """ from multiprocessing import Pool import requests import cv2 import pandas as pd import math import plotly.graph_objects as go import plotly.express as px import plotly from plotly.subplots import make_subplots from datetime import datetime from datetime import timedelta from tqdm import tqdm import imageio import json import locale import france_data_management as data import numpy as np import plotly.figure_factory as ff locale.setlocale(locale.LC_ALL, 'fr_FR.UTF-8') now = datetime.now() PATH = "../../" df_metro = data.import_data_metropoles() df_metro_65 = df_metro[df_metro["clage_65"] == 65] df_metro_0 = df_metro[df_metro["clage_65"] == 0] nb_last_days=40 metropoles = df_metro_0[df_metro_0["semaine_glissante"]==df_metro_0["semaine_glissante"].max()].sort_values(by=["ti"], ascending=False)["Metropole"].values metropoles_couvre_feu = ["Paris", "Saint Etienne", "Grenoble", "Montpellier", "Rouen", "Toulouse", "Lille", "Lyon", "Marseille"] metropoles_couvre_feu_sorted = [m for m in metropoles if m in metropoles_couvre_feu] fig=go.Figure() for i,metro in enumerate(metropoles_couvre_feu_sorted): #list(dict.fromkeys(list(df_metro['Metropole'].values)) y=df_metro_0[df_metro_0["Metropole"]==metro] y=y[len(y)-nb_last_days:] fig.add_trace(go.Scatter( x = [d[-10:] for d in y["semaine_glissante"].values], y = y["ti"], name = str(i+1) + ".<b> " + metro + "</b>" + "<br> Incidence : " + str(math.trunc(y["ti"].values[-1])) + "", line_width=5, marker_size=10, mode="markers+lines", opacity=1)) fig.update_yaxes(range=[0, 
df_metro_0["ti"].max()]) fig.update_layout( title={ 'text': "<b>Taux d'incidence du Covid19 dans les métropoles [avec couvre-feu le 17/10]<br></b>{}, nombre de cas sur 7 j. pour 100k. hab.".format("covidtracker.fr"), 'y':0.95, 'x':0.5, 'xanchor': 'center', 'yanchor': 'top'}, titlefont = dict( size=20), ) name_fig = "line_metropole_avec_couvre_feu" fig.write_image(PATH+"images/charts/france/{}.jpeg".format(name_fig), scale=2, width=800, height=800) plotly.offline.plot(fig, filename = PATH+'images/html_exports/france/{}.html'.format(name_fig), auto_open=False) fig=go.Figure() metropoles = df_metro_0[df_metro_0["semaine_glissante"]==df_metro_0["semaine_glissante"].max()].sort_values(by=["ti"], ascending=False)["Metropole"].values for i,metro in enumerate([m for m in metropoles if m not in metropoles_couvre_feu]): #list(dict.fromkeys(list(df_metro['Metropole'].values)) y=df_metro_0[df_metro_0["Metropole"]==metro] #y=y[len(y)-nb_last_days:] fig.add_trace(go.Scatter( x = [d[-10:] for d in y["semaine_glissante"].values], y = y["ti"], name = str(i+1) + ".<b> " + metro + "</b>" + "<br> Incidence : " + str(math.trunc(y["ti"].values[-1])) + "", line_width=5, marker_size=10, mode="markers+lines", opacity=1)) fig.update_yaxes(range=[0, df_metro_0["ti"].max()]) fig.update_layout( title={ 'text': "<b>Taux d'incidence du Covid19 dans les métropoles <b>[sans couvre-feu le 17/10]</b><br></b>{}, nombre de cas sur 7 j. pour 100k. 
hab.".format("covidtracker.fr"), 'y':0.95, 'x':0.5, 'xanchor': 'center', 'yanchor': 'top'}, titlefont = dict( size=20), ) name_fig = "line_metropoles_sans_couvre_feu" fig.write_image(PATH+"images/charts/france/{}.jpeg".format(name_fig), scale=2, width=800, height=800) plotly.offline.plot(fig, filename = PATH+'images/html_exports/france/{}.html'.format(name_fig), auto_open=False) im1 = cv2.imread(PATH+'images/charts/france/line_metropole_avec_couvre_feu.jpeg') im2 = cv2.imread(PATH+'images/charts/france/line_metropoles_sans_couvre_feu.jpeg') im3 = cv2.hconcat([im1, im2]) cv2.imwrite(PATH+'images/charts/france/line_metropoles_comp_couvre_feu.jpeg', im3) nb_last_days=25 for (title, df_temp, name) in [("Tous âges", df_metro_0, "0"), ("> 65 ans", df_metro_65, "65")]: metros = list(dict.fromkeys(list(df_temp['Metropole'].values))) metros_ordered = df_temp[df_temp['semaine_glissante'] == df_temp['semaine_glissante'].max()].sort_values(by=["ti"], ascending=True)["Metropole"].values dates_heatmap = list(dict.fromkeys(list(df_temp['semaine_glissante'].values))) array_incidence=[] for idx, metro in enumerate(metros_ordered): #deps_tests.drop("975", "976", "977", "978") values = df_temp[df_temp["Metropole"] == metro]['ti'].values.astype(int) values = values[len(values)-nb_last_days:] array_incidence += [values] #dates_heatmap=df_metro[df_metro["Metropole"] == metro]["semaine_glissante"].values.astype(str) fig = ff.create_annotated_heatmap( z=array_incidence, #df_tests_rolling[data].to_numpy() x=[("<b>" + a[-2:] + "/" + a[-5:-3] + "</b>") for a in dates_heatmap[-nb_last_days:]], #date[:10] for date in dates_heatmap y=[str(22-idx) + ". 
<b>" + metro[:9] +"</b>" for idx, metro in enumerate(metros_ordered)], showscale=True, font_colors=["white", "white"], coloraxis="coloraxis", #text=df_tests_rolling[data], #annotation_text = array_incidence ) annot = [] fig.update_xaxes(side="bottom", tickfont=dict(size=10)) fig.update_yaxes(tickfont=dict(size=15)) fig.update_layout( title={ 'text': "<b>Taux d'incidence du Covid19 dans les 22 métropoles<br>{}</b>, nombre de cas sur 7 j. pour 100k. hab.".format(title), 'y':0.97, 'x':0.5, 'xanchor': 'center', 'yanchor': 'top'}, titlefont = dict( size=20), coloraxis=dict( cmin=0, cmax=800, colorscale = [[0, "green"], [0.08, "#ffcc66"], [0.25, "#f50000"], [0.5, "#b30000"], [1, "#3d0000"]], #color_continuous_scale=["green", "red"], colorbar=dict( #title="{}<br>du Covid19<br> &#8205;".format(title), thicknessmode="pixels", thickness=12, lenmode="pixels", len=300, yanchor="middle", y=0.5, tickfont=dict(size=9), ticks="outside", ticksuffix="{}".format(" cas"), ) ), margin=dict( l=0, r=0, b=70, t=70, pad=0 ), ) annotations = annot + [ dict( x=0.5, y=-0.08, xref='paper', yref='paper', xanchor='center', opacity=0.6, font=dict(color="black", size=12), text='Lecture : une case correspond à l\'incidence pour chaque métropole (à lire à gauche) et à une date donnée (à lire en bas).<br>Du rouge correspond à une incidence élevée. 
<i>Date : {} - Source : covidtracker.fr - Données : Santé publique France</i>'.format(title.lower().replace("<br>", " "), title.lower().replace("<br>", " "), now.strftime('%d %B')), showarrow = False ), ] for i in range(len(fig.layout.annotations)): fig.layout.annotations[i].font.size = 10 fig.layout.annotations[i].text = "<b>"+fig.layout.annotations[i].text+"</b>" for annot in annotations: fig.add_annotation(annot) name_fig = "heatmaps_metropoles_" + name fig.write_image(PATH+"images/charts/france/{}.jpeg".format(name_fig), scale=2, width=1000, height=1000) fig.write_image(PATH+"images/charts/france/{}_SD.jpeg".format(name_fig), scale=0.5, width=900, height=900) #fig.show() plotly.offline.plot(fig, filename = PATH+'images/html_exports/france/{}.html'.format(name_fig), auto_open=False) ```
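The heatmap cell above builds its x-axis labels by slicing date strings: `d[-10:]` keeps the end date of the rolling week, then `a[-2:]` extracts the day and `a[-5:-3]` the month. A standalone sketch of that transform, using hypothetical sample values for `semaine_glissante` (the real column's exact format is assumed here, not taken from the data):

```python
# Hypothetical 'semaine_glissante' values: assumed to end with an ISO end date.
dates_heatmap = ["2020-10-11_2020-10-17", "2020-10-12_2020-10-18"]

# First keep the last 10 characters (the end date of the rolling week) ...
end_dates = [d[-10:] for d in dates_heatmap]
print(end_dates)  # ['2020-10-17', '2020-10-18']

# ... then format as bold day/month, exactly as in the heatmap cell above.
labels = ["<b>" + a[-2:] + "/" + a[-5:-3] + "</b>" for a in end_dates]
print(labels)  # ['<b>17/10</b>', '<b>18/10</b>']
```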
github_jupyter
```
%pylab inline
from numpy import linalg as LA
import glob
from tqdm import tqdm
import os
import sklearn.preprocessing as prep
import pickle
import joblib
import tensorflow as tf
import pandas as pd

#os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
#os.environ['CUDA_VISIBLE_DEVICES'] = '1'

IMAGE_WIDTH = 256
IMAGE_HEIGHT = 256
IMAGE_CHANNELS = 3
BATCH_SIZE = 8
LEARNING_RATE = 1e-4
N_EPOCHS = 100
N_LATENT = 100
CHECKPOINT_DIR = '/Z/personal-folders/interns/saket/vae_patches_train_valid_nlatent100'
os.makedirs(CHECKPOINT_DIR, exist_ok=True)
INPUT_DIM = IMAGE_CHANNELS*IMAGE_WIDTH*IMAGE_HEIGHT

def min_max_scale(X):
    preprocessor = prep.MinMaxScaler().fit(X)
    X_scaled = preprocessor.transform(X)
    return X_scaled

#config = tf.ConfigProto(
#    device_count = {'GPU': 0}
#)
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.3
config.gpu_options.visible_device_list = '1'

class VAE(object):
    def __init__(self, input_dim, learning_rate=0.01, n_latent=8, batch_size=50):
        self.learning_rate = learning_rate
        self.n_latent = n_latent
        self.batch_size = batch_size
        self.input_dim = input_dim
        self._build_network()
        self._create_loss_optimizer()
        init = tf.global_variables_initializer()
        # Launch the session
        self.session = tf.Session(config=config)
        self.session.run(init)
        self.saver = tf.train.Saver(tf.global_variables())

    def close_session(self):
        self.session.close()

    def _build_network(self):
        self.x = tf.placeholder(tf.float32, [None, self.input_dim])
        dense1 = tf.layers.dense(activation=tf.nn.elu, inputs=self.x, units=256)
        dense2 = tf.layers.dense(activation=tf.nn.elu, inputs=dense1, units=256)
        dense3 = tf.layers.dense(activation=tf.nn.elu, inputs=dense2, units=256)
        dense4 = tf.layers.dense(activation=None, inputs=dense3, units=self.n_latent * 2)
        self.mu = dense4[:, :self.n_latent]
        self.sigma = tf.nn.softplus(dense4[:, self.n_latent:])
        eps = tf.random_normal(shape=tf.shape(self.sigma), mean=0, stddev=1, dtype=tf.float32)
        self.z = self.mu + self.sigma * eps
        ddense1 = tf.layers.dense(activation=tf.nn.elu, inputs=self.z, units=256)
        ddense2 = tf.layers.dense(activation=tf.nn.elu, inputs=ddense1, units=256)
        ddense3 = tf.layers.dense(activation=tf.nn.elu, inputs=ddense2, units=256)
        self.reconstructed = tf.layers.dense(activation=tf.nn.sigmoid, inputs=ddense3, units=self.input_dim)

    def _create_loss_optimizer(self):
        epsilon = 1e-10
        reconstruction_loss = -tf.reduce_sum(
            self.x * tf.log(epsilon+self.reconstructed) +
            (1-self.x) * tf.log(epsilon+1-self.reconstructed),
            axis=1
        )
        self.reconstruction_loss = tf.reduce_mean(reconstruction_loss)
        latent_loss = -0.5 * tf.reduce_sum(
            1 + tf.log(epsilon+self.sigma) - tf.square(self.mu) - tf.square(self.sigma),
            axis=1)
        latent_loss = tf.reduce_mean(latent_loss)
        self.latent_loss = latent_loss
        self.cost = tf.reduce_mean(self.reconstruction_loss + self.latent_loss)
        # ADAM optimizer
        self.optimizer = \
            tf.train.AdamOptimizer(learning_rate=self.learning_rate).minimize(self.cost)

    def fit_minibatch(self, batch):
        _, cost, reconstruction_loss, latent_loss = self.session.run(
            [self.optimizer, self.cost, self.reconstruction_loss, self.latent_loss],
            feed_dict={self.x: batch})
        return cost, reconstruction_loss, latent_loss

    def reconstruct(self, x):
        return self.session.run([self.reconstructed], feed_dict={self.x: x})

    def decoder(self, z):
        return self.session.run([self.reconstructed], feed_dict={self.z: z})

    def encoder(self, x):
        return self.session.run([self.z], feed_dict={self.x: x})

    def save_model(self, checkpoint_path, epoch):
        self.saver.save(self.session, checkpoint_path, global_step=epoch)

    def load_model(self, checkpoint_dir):
        ckpt = tf.train.get_checkpoint_state(checkpoint_dir=checkpoint_dir, latest_filename='checkpoint')
        print('loading model: {}'.format(ckpt.model_checkpoint_path))
        self.saver.restore(self.session, ckpt.model_checkpoint_path)

train_df_file = '/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/train_df_with_mask.tsv'
valid_df_file = '/Z/personal-folders/interns/saket/github/pyvirchow/data/patch_df/validate_df_with_mask.tsv'
train_df = pd.read_table(train_df_file)
train_df.columns

def preprocess(image):
    return image/255.0 - 0.5

def _read_py_function(label, filename):
    image_decoded = joblib.load(filename)
    image_decoded = preprocess(image_decoded)
    return int32(eval(label)), image_decoded.astype(np.float32)

def _resize_function(label, image_decoded):
    image_resized = tf.reshape(image_decoded, (-1, INPUT_DIM))
    image_resized = tf.cast(image_resized, tf.float32)
    return tf.cast(label, tf.int32), image_resized

def make_dataset(df):
    record_defaults = [[''], ['']]
    select_cols = [1, 7]
    dataset = tf.contrib.data.CsvDataset(df, record_defaults, header=True,
                                         field_delim='\t', select_cols=select_cols)
    dataset = dataset.map(
        lambda is_tumor, img_path: tuple(tf.py_func(
            _read_py_function, [is_tumor, img_path], [np.int32, np.float32])))
    dataset = dataset.map(_resize_function)
    dataset = dataset.shuffle(buffer_size=10000)
    dataset = dataset.batch(BATCH_SIZE)
    return dataset

training_dataset = make_dataset(train_df_file)
validation_dataset = make_dataset(valid_df_file)
training_iterator = training_dataset.make_one_shot_iterator()
iterator = tf.data.Iterator.from_structure(training_dataset.output_types,
                                           training_dataset.output_shapes)
training_init_op = iterator.make_initializer(training_dataset)
#validation_init_op = iterator.make_initializer(validation_dataset)

model = VAE(input_dim=INPUT_DIM, learning_rate=LEARNING_RATE,
            n_latent=N_LATENT, batch_size=BATCH_SIZE)
total_losses = []
reconstruction_losses = []
latent_losses = []
sess = model.session
training_next_batch = iterator.get_next()

for epoch in range(N_EPOCHS):
    sess.run(training_init_op)
    while True:
        try:
            training_label_batch, training_image_batch = sess.run(training_next_batch)
        except tf.errors.OutOfRangeError:
            break
        input_batch = training_image_batch
        total_loss, reconstruction_loss, latent_loss = model.fit_minibatch(input_batch)
        latent_losses.append(latent_loss)
        reconstruction_losses.append(reconstruction_loss)
        total_losses.append(total_loss)
    total_losses_path = os.path.join(CHECKPOINT_DIR, 'total_losses.pickle')
    latent_losses_path = os.path.join(CHECKPOINT_DIR, 'latent_losses.pickle')
    reconstruction_losses_path = os.path.join(CHECKPOINT_DIR, 'reconstruction_losses.pickle')
    joblib.dump(total_losses, total_losses_path)
    joblib.dump(latent_losses, latent_losses_path)
    joblib.dump(reconstruction_losses, reconstruction_losses_path)
    if epoch % 5 == 0:
        print('[Epoch {}] Loss: {}, Recon loss: {}, Latent loss: {}'.format(
            epoch, total_loss, reconstruction_loss, latent_loss))
        checkpoint_path = os.path.join(CHECKPOINT_DIR, 'model.ckpt')
        model.save_model(checkpoint_path, epoch)
        print("model saved to {}".format(checkpoint_path))
print('Done!')
#return model, reconstruction_losses, lat

training_image_batch
```
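The VAE cost above sums a Bernoulli reconstruction loss and an analytic penalty (`latent_loss`) between the encoder's diagonal Gaussian and a standard normal prior. As a sanity check, here is a small NumPy sketch (not part of the original notebook) of the same `latent_loss` formula — note the notebook writes `log(sigma)` where the textbook KL term uses `log(sigma**2)`, and this sketch mirrors the code as written:

```python
import numpy as np

def latent_loss(mu, sigma, epsilon=1e-10):
    """Latent penalty exactly as written in _create_loss_optimizer above
    (note: log(sigma), not the textbook log(sigma**2))."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    return -0.5 * np.sum(1 + np.log(epsilon + sigma) - mu**2 - sigma**2, axis=-1)

# At mu=0, sigma=1 the penalty vanishes: the encoder matches the prior.
print(latent_loss(np.zeros(4), np.ones(4)))   # ~0.0
# Moving the posterior mean away from the prior makes the penalty positive.
print(latent_loss(2*np.ones(4), np.ones(4)))  # 8.0
```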
github_jupyter
**This notebook is an exercise in the [Computer Vision](https://www.kaggle.com/learn/computer-vision) course. You can reference the tutorial at [this link](https://www.kaggle.com/ryanholbrook/custom-convnets).**

---

# Introduction #

In these exercises, you'll build a custom convnet with performance competitive to the VGG16 model from Lesson 1.

Get started by running the code cell below.

```
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.computer_vision.ex5 import *

# Imports
import os, warnings
import matplotlib.pyplot as plt
from matplotlib import gridspec
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory

# Reproducibility
def set_seed(seed=31415):
    np.random.seed(seed)
    tf.random.set_seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    os.environ['TF_DETERMINISTIC_OPS'] = '1'
set_seed()

# Set Matplotlib defaults
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
       titleweight='bold', titlesize=18, titlepad=10)
plt.rc('image', cmap='magma')
warnings.filterwarnings("ignore")  # to clean up output cells

# Load training and validation sets
ds_train_ = image_dataset_from_directory(
    '../input/car-or-truck/train',
    labels='inferred',
    label_mode='binary',
    image_size=[128, 128],
    interpolation='nearest',
    batch_size=64,
    shuffle=True,
)
ds_valid_ = image_dataset_from_directory(
    '../input/car-or-truck/valid',
    labels='inferred',
    label_mode='binary',
    image_size=[128, 128],
    interpolation='nearest',
    batch_size=64,
    shuffle=False,
)

# Data Pipeline
def convert_to_float(image, label):
    image = tf.image.convert_image_dtype(image, dtype=tf.float32)
    return image, label

AUTOTUNE = tf.data.experimental.AUTOTUNE
ds_train = (
    ds_train_
    .map(convert_to_float)
    .cache()
    .prefetch(buffer_size=AUTOTUNE)
)
ds_valid = (
    ds_valid_
    .map(convert_to_float)
    .cache()
    .prefetch(buffer_size=AUTOTUNE)
)
```

# Design a Convnet #

Let's design a convolutional network with a block architecture like we saw in the tutorial. The model from the example had three blocks, each with a single convolutional layer. Its performance on the "Car or Truck" problem was okay, but far from what the pretrained VGG16 could achieve. It might be that our simple network lacks the ability to extract sufficiently complex features. We could try improving the model either by adding more blocks or by adding convolutions to the blocks we have.

Let's go with the second approach. We'll keep the three-block structure, but increase the number of `Conv2D` layers in the second block to two, and in the third block to three.

<figure>
<!-- <img src="./images/2-convmodel-2.png" width="250" alt="Diagram of a convolutional model."> -->
<img src="https://i.imgur.com/Vko6nCK.png" width="250" alt="Diagram of a convolutional model.">
</figure>

# 1) Define Model #

Given the diagram above, complete the model by defining the layers of the third block.

```
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    # Block One
    layers.Conv2D(filters=32, kernel_size=3, activation='relu', padding='same',
                  input_shape=[128, 128, 3]),
    layers.MaxPool2D(),

    # Block Two
    layers.Conv2D(filters=64, kernel_size=3, activation='relu', padding='same'),
    layers.MaxPool2D(),

    # Block Three
    # YOUR CODE HERE
    # ____,
    layers.Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'),
    layers.Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'),
    layers.MaxPool2D(),

    # Head
    layers.Flatten(),
    layers.Dense(6, activation='relu'),
    layers.Dropout(0.2),
    layers.Dense(1, activation='sigmoid'),
])

# Check your answer
q_1.check()

# Lines below will give you a hint or solution code
#q_1.hint()
#q_1.solution()

model.summary()
```

# 2) Compile #

To prepare for training, compile the model with an appropriate loss and accuracy metric for the "Car or Truck" dataset.

```
model.compile(
    optimizer=tf.keras.optimizers.Adam(epsilon=0.01),
    # YOUR CODE HERE: Add loss and metric
    loss='binary_crossentropy',
    metrics=['binary_accuracy']
)

# Check your answer
q_2.check()

model.compile(
    optimizer=tf.keras.optimizers.Adam(epsilon=0.01),
    loss='binary_crossentropy',
    metrics=['binary_accuracy'],
)
q_2.assert_check_passed()

# Lines below will give you a hint or solution code
#q_2.hint()
#q_2.solution()
```

Finally, let's test the performance of this new model. First run this cell to fit the model to the training set.

```
history = model.fit(
    ds_train,
    validation_data=ds_valid,
    epochs=50,
)
```

And now run the cell below to plot the loss and metric curves for this training run.

```
import pandas as pd

history_frame = pd.DataFrame(history.history)
history_frame.loc[:, ['loss', 'val_loss']].plot()
history_frame.loc[:, ['binary_accuracy', 'val_binary_accuracy']].plot();
```

# 3) Train the Model #

How would you interpret these training curves? Did this model improve upon the model from the tutorial?

```
# View the solution (Run this code cell to receive credit!)
q_3.check()
```

# Conclusion #

These exercises showed you how to design a custom convolutional network to solve a specific classification problem. Though most models these days will be built on top of a pretrained base, in certain circumstances a smaller custom convnet might still be preferable -- such as with a smaller or unusual dataset, or when computing resources are very limited. As you saw here, for certain problems they can perform just as well as a pretrained model.

# Keep Going #

Continue on to [**Lesson 6**](https://www.kaggle.com/ryanholbrook/data-augmentation), where you'll learn a widely-used technique that can give a boost to your training data: **data augmentation**.

---

*Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/196537) to chat with other Learners.*
github_jupyter
```
import sys
sys.path.append('../scripts/')
from ideal_robot import *
from scipy.stats import expon, norm, uniform

class Robot(IdealRobot):
    def __init__(self, pose, agent=None, sensor=None, color="black",
                 noise_per_meter=5, noise_std=math.pi/60, bias_rate_stds=(0.1, 0.1),
                 expected_stuck_time=1e100, expected_escape_time=1e-100,
                 expected_kidnap_time=1e100, kidnap_range_x=(-5.0, 5.0), kidnap_range_y=(-5.0, 5.0)):  # added
        super().__init__(pose, agent, sensor, color)
        self.noise_pdf = expon(scale=1.0/(1e-100 + noise_per_meter))
        self.distance_until_noise = self.noise_pdf.rvs()
        self.theta_noise = norm(scale=noise_std)
        self.bias_rate_nu = norm.rvs(loc=1.0, scale=bias_rate_stds[0])
        self.bias_rate_omega = norm.rvs(loc=1.0, scale=bias_rate_stds[1])
        self.stuck_pdf = expon(scale=expected_stuck_time)
        self.escape_pdf = expon(scale=expected_escape_time)
        self.is_stuck = False
        self.time_until_stuck = self.stuck_pdf.rvs()
        self.time_until_escape = self.escape_pdf.rvs()
        self.kidnap_pdf = expon(scale=expected_kidnap_time)
        self.time_until_kidnap = self.kidnap_pdf.rvs()
        rx, ry = kidnap_range_x, kidnap_range_y
        self.kidnap_dist = uniform(loc=(rx[0], ry[0], 0.0),
                                   scale=(rx[1]-rx[0], ry[1]-ry[0], 2*math.pi))

    def noise(self, pose, nu, omega, time_interval):
        self.distance_until_noise -= abs(nu)*time_interval + self.r*omega*time_interval
        if self.distance_until_noise <= 0.0:
            self.distance_until_noise += self.noise_pdf.rvs()
            pose[2] += self.theta_noise.rvs()
        return pose

    def bias(self, nu, omega):
        return nu*self.bias_rate_nu, omega*self.bias_rate_omega

    def stuck(self, nu, omega, time_interval):
        if self.is_stuck:
            self.time_until_escape -= time_interval
            if self.time_until_escape <= 0.0:
                self.time_until_escape += self.escape_pdf.rvs()
                self.is_stuck = False
        else:
            self.time_until_stuck -= time_interval
            if self.time_until_stuck <= 0.0:
                self.time_until_stuck += self.stuck_pdf.rvs()
                self.is_stuck = True
        return nu*(not self.is_stuck), omega*(not self.is_stuck)

    def kidnap(self, pose, time_interval):
        self.time_until_kidnap -= time_interval
        if self.time_until_kidnap <= 0.0:
            self.time_until_kidnap += self.kidnap_pdf.rvs()
            return np.array(self.kidnap_dist.rvs()).T
        else:
            return pose

    def one_step(self, time_interval):
        if not self.agent: return
        obs = self.sensor.data(self.pose) if self.sensor else None
        nu, omega = self.agent.decision(obs)
        nu, omega = self.bias(nu, omega)
        nu, omega = self.stuck(nu, omega, time_interval)
        self.pose = self.state_transition(nu, omega, time_interval, self.pose)
        self.pose = self.noise(self.pose, nu, omega, time_interval)
        self.pose = self.kidnap(self.pose, time_interval)

class Camera(IdealCamera):  ###camera_second### (__init__ omitted)
    def __init__(self, env_map, distance_range=(0.5, 6.0),
                 direction_range=(-math.pi/3, math.pi/3),
                 distance_noise_rate=0.1, direction_noise=math.pi/90):
        super().__init__(env_map, distance_range, direction_range)
        self.distance_noise_rate = distance_noise_rate
        self.direction_noise = direction_noise

    def noise(self, relpos):  # added
        ell = norm.rvs(loc=relpos[0], scale=relpos[0]*self.distance_noise_rate)
        phi = norm.rvs(loc=relpos[1], scale=self.direction_noise)
        return np.array([ell, phi]).T

    def data(self, cam_pose):
        observed = []
        for lm in self.map.landmarks:
            z = self.observation_function(cam_pose, lm.pos)
            if self.visible(z):
                z = self.noise(z)  # added
                observed.append((z, lm.id))
        self.lastdata = observed
        return observed

world = World(30, 0.1)

### Create a map and add three landmarks ###
m = Map()
m.append_landmark(Landmark(-4, 2))
m.append_landmark(Landmark(2, -3))
m.append_landmark(Landmark(3, 3))
world.append(m)

### Create the robot ###
circling = Agent(0.2, 10.0/180*math.pi)
r = Robot(np.array([0, 0, math.pi/6]).T, sensor=Camera(m), agent=circling)
world.append(r)

### Run the animation ###
world.draw()
```
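In the `Robot` class above, the distance the robot travels before its heading receives a random kick is drawn from an exponential distribution with `scale=1/noise_per_meter`, so with the default `noise_per_meter=5` a kick arrives on average every 0.2 m. A minimal sketch (outside the notebook) of that distribution:

```python
from scipy.stats import expon

noise_per_meter = 5  # default used by the Robot class above
noise_pdf = expon(scale=1.0/(1e-100 + noise_per_meter))

# The mean of an exponential distribution equals its scale parameter,
# so a heading kick arrives on average every 0.2 m of travel.
print(noise_pdf.mean())  # 0.2

# Draws are reproducible when a random_state is fixed.
samples = noise_pdf.rvs(size=1000, random_state=0)
print(samples.mean())  # close to 0.2
```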
github_jupyter
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import xgboost as xgb
from sklearn.model_selection import train_test_split, TimeSeriesSplit
from sklearn import metrics
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_regression
from nested_cv import NestedCV

def prepareData(lag):
    # all data
    prec = pd.read_csv('../data/MH25_vaisalawxt520prec_2017.csv')
    wind = pd.read_csv('../data/MH25_vaisalawxt520windpth_2017.csv')
    temp = pd.read_csv('../data/MH30_temperature_rock_2017.csv')
    radio = pd.read_csv('../data/MH15_radiometer__conv_2017.csv')

    # join all data by time
    temp0 = pd.merge(left=temp, right=prec, left_on='time', right_on='time')
    temp0 = pd.merge(left=temp0, right=wind, left_on='time', right_on='time')
    temp0 = pd.merge(left=temp0, right=radio, left_on='time', right_on='time')

    # format season, remove datetime and select columns
    temp0['time'] = pd.to_datetime(temp0['time'])
    temp0['season'] = np.round(pd.DatetimeIndex(temp0['time']).month/3)
    del temp0['time']
    temp0 = temp0[['temperature_5cm [°C]', 'temperature_10cm [°C]', 'temperature_20cm [°C]',
                   'temperature_30cm [°C]', 'temperature_50cm [°C]', 'temperature_100cm [°C]',
                   'wind_speed_average [km/h]', 'net_radiation [Wm^-2]', 'season']]

    # difference between ground and deepest temperature --> objective
    diff = abs(temp0['temperature_100cm [°C]'] - temp0['temperature_5cm [°C]'])
    temp0.insert(0, 'delta_t', diff)

    # shift
    temp0.head()
    temp0['delta_t'] = temp0['delta_t'].shift(lag)
    temp0.head()

    # show correlation (within same time period)
    #print(abs(temp0.corr()['delta_t']).sort_values(ascending=False))

    # clean column names and remove inf/nas
    temp0.columns = temp0.columns.str.replace("[", "_")
    temp0.columns = temp0.columns.str.replace("]", "_")
    temp0 = temp0.fillna(0)
    return temp0

shift = -24
temp0 = prepareData(shift)

# create test and train samples
n = temp0.shape[0]
sep = int(np.round(0.7 * n))
X, y = temp0.iloc[:, 1:], temp0.iloc[:, 0]
y_train = y.iloc[1:sep]
y_test = y.iloc[sep+1:n]
X_train = X.iloc[1:sep, :]
X_test = X.iloc[sep+1:n, :]

# train model and predict
model = sm.OLS(y_train, X_train).fit()
preds = model.predict(X_test)
print(model.summary())
rmse = np.sqrt(metrics.mean_squared_error(y_test, preds))
print("The RMSE is " + str(rmse))

regressor = RandomForestRegressor(n_estimators=20, random_state=0)
regressor.fit(X_train, y_train)
preds = regressor.predict(X_test)
rmse = np.sqrt(metrics.mean_squared_error(y_test, preds))
print("The RMSE is " + str(rmse))

regressor = RandomForestRegressor(n_estimators=20, random_state=0)
tscv = TimeSeriesSplit(n_splits=12)
rmse = []
for train_index, test_index in tscv.split(X.iloc[:, 1]):
    regressor.fit(X.iloc[train_index, :], y.iloc[train_index])
    preds = regressor.predict(X.iloc[test_index, :])
    rmse.append(np.sqrt(metrics.mean_squared_error(y[test_index], preds)))
print(rmse)

xg_reg = xgb.XGBRegressor(objective='reg:squarederror', colsample_bytree=0.3,
                          learning_rate=0.1, max_depth=5, alpha=10, n_estimators=10)
xg_reg.fit(X_train, y_train)
preds = xg_reg.predict(X_test)
rmse = np.sqrt(metrics.mean_squared_error(y_test, preds))
print("The RMSE is " + str(rmse))
```
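The last cross-validation loop above uses `TimeSeriesSplit`, which — unlike a shuffled split — always places every training index strictly before every test index, so the model is never evaluated on observations that precede its training data. A minimal sketch on dummy data:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X_dummy = np.arange(20).reshape(-1, 1)
tscv = TimeSeriesSplit(n_splits=4)

for train_index, test_index in tscv.split(X_dummy):
    # In every fold the training window ends strictly before the test
    # window, so no "future" observations leak into training.
    assert train_index.max() < test_index.min()
    print(len(train_index), "train rows ->", list(test_index))
```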
github_jupyter
``` # setup notebook if it is run on Google Colab, cwd = notebook file location try: # change notebook_path if this notebook is in a different subfolder of Google Drive notebook_path = "Projects/QuantumFlow/notebooks" import os from google.colab import drive drive.mount('/content/gdrive') os.chdir("/content/gdrive/My Drive/" + notebook_path) %tensorflow_version 2.x !pip install -q ruamel.yaml %load_ext tensorboard except: pass # imports import tensorflow as tf import numpy as np import matplotlib.pyplot as plt %matplotlib inline import ipywidgets as widgets from IPython.display import display # setup paths and variables for shared code (../quantumflow) and data (../data) import sys sys.path.append('../') data_dir = "../data" # import shared code, must run 0_create_shared_project_files.ipynb first! from quantumflow.utils import load_hyperparameters, train, build_model, QFDataset experiment = 'models' base_dir = os.path.join(data_dir, experiment) figures_dir = os.path.join(data_dir, 'models/figures') if not os.path.exists(figures_dir): os.makedirs(figures_dir) figsize = (10, 3) dpi=None %tensorboard --logdir=$base_dir models = ['cnn', 'resnet', 'resnet_vW_N2', 'models_large/resnet_vW_N2_1000', 'models_large/resnet_vW_N2_10000', 'models_large/resnet_vW_N2_100000'] model_params = {model: load_hyperparameters(os.path.join(base_dir, 'models_large'*model.startswith('models_large') , "hyperparams.config"), run_name=model.replace('models_large/', '')) for model in models} {model: params['model_kwargs']['l2_regularisation'] for model, params in model_params.items()} from tensorflow.python.summary.summary_iterator import summary_iterator import pandas as pd def load_summaries(event_dir): summaries = {} assert os.path.exists(event_dir), event_dir event_files = [os.path.join(event_dir, file) for file in os.listdir(event_dir) if '.tfevents' in file] assert len(event_files) > 0, event_dir def create_or_append(tag, step, wall_time, keys, values): try: if step not in 
summaries[tag]['step']: summaries[tag]['step'].append(step) #summaries[tag]['wall_time'].append(wall_time) if isinstance(keys, list): for key, value in zip(keys, values): summaries[tag][key].append(value) else: summaries[tag][keys].append(values) except KeyError: summaries[tag] = {'step': [step]}#, 'wall_time': [wall_time]} if isinstance(keys, list): for key, value in zip(keys, values): summaries[tag][key] = [value] else: summaries[tag][keys] = [values] for event_file in event_files: for summary in summary_iterator(event_file): if summary.summary.value.__len__() == 0: continue for entry in summary.summary.value: if entry.tag == 'keras': continue # model config elif 'bias' in entry.tag or 'kernel' in entry.tag: if 'image' in entry.tag: create_or_append('image/' + entry.tag.replace('image/', ''), summary.step, summary.wall_time, entry.tag.replace('image/', ''), entry.image.encoded_image_string) else: continue #histograms else: # metrics create_or_append(entry.tag, summary.step, summary.wall_time, 'simple_value', entry.simple_value) for key in summaries.keys(): summaries[key] = pd.DataFrame(data=summaries[key]).set_index('step') return summaries train_tfevents_dir = {model: os.path.join(base_dir, model, 'train') for model in models} model_summaries = {model: load_summaries(tfevents_dir) for model, tfevents_dir in train_tfevents_dir.items()} model_summaries['resnet'].keys() model_labels = {model: model.replace('models_large/', '').replace('_1', '').replace('0', '') + ' (M: {}, {} epochs)'.format(100 if model_params[model]['fit_kwargs']['epochs'] == 100000 else int(30000000/model_params[model]['fit_kwargs']['epochs']), model_params[model]['fit_kwargs']['epochs']) for model in models} plt.figure(figsize=figsize, dpi=dpi) for model, summary in model_summaries.items(): plt.plot(summary['learning_rate']['simple_value'].index/model_params[model]['fit_kwargs']['epochs']*(100000 if model_params[model]['fit_kwargs']['epochs'] == 100000 else 300000), summary['learning_rate'], 
label=model_labels[model]) plt.grid() plt.yscale('log') plt.legend() plt.ylabel('learning rate') plt.xlabel('training step') plt.savefig(os.path.join(figures_dir, 'learning_rate.eps'), format='eps', bbox_inches='tight') plt.show() plt.figure(figsize=figsize, dpi=dpi) for model, summary in model_summaries.items(): plt.plot(summary['learning_rate']['simple_value'].index/model_params[model]['fit_kwargs']['epochs'], summary['loss'], label=model_labels[model]) plt.yscale('log') plt.grid() plt.legend() plt.ylabel('loss') plt.xlabel('training progress') plt.savefig(os.path.join(figures_dir, 'loss.eps'), format='eps', bbox_inches='tight') plt.show() plt.figure(figsize=figsize, dpi=dpi) for model, summary in model_summaries.items(): plt.plot(summary['learning_rate']['simple_value'].index/model_params[model]['fit_kwargs']['epochs'], summary['mean_absolute_error/kinetic_energy'], label=model_labels[model]) plt.yscale('log') plt.grid() plt.legend() plt.ylabel('mean absolute error / hartree') plt.xlabel('training progress') plt.savefig(os.path.join(figures_dir, 'kinetic_energy.eps'), format='eps', bbox_inches='tight') plt.show() plt.figure(figsize=figsize, dpi=dpi) for model, summary in model_summaries.items(): plt.plot(summary['learning_rate']['simple_value'].index/model_params[model]['fit_kwargs']['epochs'], summary['mean_absolute_error/derivative'], label=model_labels[model]) plt.yscale('log') plt.grid() plt.legend() plt.ylabel('mean absolute error / hartree') plt.xlabel('training progress') plt.savefig(os.path.join(figures_dir, 'derivative.eps'), format='eps', bbox_inches='tight') plt.show() plt.figure(figsize=figsize, dpi=dpi) for model, summary in model_summaries.items(): if 'mean_absolute_error/kinetic_energy_density' in summary: plt.plot(summary['learning_rate']['simple_value'].index/model_params[model]['fit_kwargs']['epochs'], summary['mean_absolute_error/kinetic_energy_density'], label=model_labels[model]) else: plt.plot([], [], label='') plt.yscale('log') plt.grid() 
plt.legend() plt.ylabel('mean absolute error / hartree') plt.xlabel('training progress') plt.savefig(os.path.join(figures_dir, 'kinetic_energy_density.eps'), format='eps', bbox_inches='tight') plt.show() ``` ### Model Structure
github_jupyter
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import tensorflow as tf from tqdm.auto import tqdm tqdm.pandas() pd.options.display.max_colwidth = None sns.set_style('darkgrid') dtrain = pd.read_csv('../input/quora-question-pairs/train.csv.zip') print(dtrain.shape) dtrain.head() dtest = pd.read_csv('../input/quora-question-pairs/test.csv') print(dtest.shape) dtest.head() ``` # Text Cleaning ``` %%time all_ques = pd.read_csv('../input/qqp-cleaned/quora-ques-pair-all-ques.csv') all_ques.head() %%time text_map = {x:y for x, y in zip(all_ques['RawText'].values, all_ques['CleanedText'].values)} dtrain['question1'] = dtrain['question1'].apply(lambda x: text_map[x]) dtrain['question2'] = dtrain['question2'].apply(lambda x: text_map[x]) dtest['question1'] = dtest['question1'].apply(lambda x: text_map[x]) dtest['question2'] = dtest['question2'].apply(lambda x: text_map[x]) del text_map ``` # Cross Validation ``` from sklearn.model_selection import StratifiedShuffleSplit sss = StratifiedShuffleSplit(n_splits=1, test_size=0.01, random_state=19) train_index, valid_index = list(sss.split(dtrain[['question1', 'question2']].values, dtrain['is_duplicate']))[0] from sklearn.metrics import confusion_matrix, classification_report def evaluate_model(model, x_train, x_valid, y_train, y_valid): print('Train Set:') print() print(classification_report(y_train, model.predict(x_train))) print() print() print('Validation Set:') print() print(classification_report(y_valid, model.predict(x_valid))) ``` # Vectorization ``` %%time from sklearn.feature_extraction.text import TfidfVectorizer vectorizer = TfidfVectorizer( ngram_range=(1, 1), min_df=1, max_df=1.0, sublinear_tf=True ).fit(all_ques['CleanedText'].fillna('').values) x_train = dtrain[['question1', 'question2']].iloc[train_index].reset_index(drop=True) x_valid = dtrain[['question1', 'question2']].iloc[valid_index].reset_index(drop=True) y_train = 
dtrain['is_duplicate'].iloc[train_index].reset_index(drop=True).values y_valid = dtrain['is_duplicate'].iloc[valid_index].reset_index(drop=True).values del all_ques y_train.shape, y_valid.shape def sparse_tensor(X): coo = X.tocoo() indices = np.mat([coo.row, coo.col]).transpose() return tf.sparse.reorder(tf.SparseTensor(indices, coo.data, coo.shape)) %%time from scipy import sparse x_train_1 = vectorizer.transform(x_train['question1'].fillna('')) x_train_2 = vectorizer.transform(x_train['question2'].fillna('')) x_valid_1 = vectorizer.transform(x_valid['question1'].fillna('')) x_valid_2 = vectorizer.transform(x_valid['question2'].fillna('')) x_test_1 = vectorizer.transform(dtest['question1'].fillna('')) x_test_2 = vectorizer.transform(dtest['question2'].fillna('')) x_train = [sparse_tensor(x_train_1), sparse_tensor(x_train_2)] x_valid = [sparse_tensor(x_valid_1), sparse_tensor(x_valid_2)] x_test = [sparse_tensor(x_test_1), sparse_tensor(x_test_2)] x = [ sparse_tensor(sparse.vstack([x_train_1, x_valid_1])), sparse_tensor(sparse.vstack([x_train_2, x_valid_2])) ] y = np.concatenate([y_train, y_valid]) del x_train_1, x_train_2, x_valid_1, x_valid_2, x_test_1, x_test_2 x_train[0].shape, x_valid[0].shape, x_test[0].shape ``` # Modelling ``` import tensorflow.keras.backend as K from tensorflow.keras.models import Model, Sequential from tensorflow.keras import layers, utils, callbacks, optimizers, regularizers def euclidean_distance(vectors): (featsA, featsB) = vectors sumSquared = K.sum(K.square(featsA - featsB), axis=1, keepdims=True) return K.sqrt(K.maximum(sumSquared, K.epsilon())) def cosine_similarity(vectors): (featsA, featsB) = vectors featsA = K.l2_normalize(featsA, axis=-1) featsB = K.l2_normalize(featsB, axis=-1) return K.mean(featsA * featsB, axis=-1, keepdims=True) class SiameseNetwork(Model): def __init__(self, inputShape, featExtractorConfig): super(SiameseNetwork, self).__init__() inpA = layers.Input(shape=inputShape) inpB = layers.Input(shape=inputShape) 
featureExtractor = self.build_feature_extractor(inputShape, featExtractorConfig) featsA = featureExtractor(inpA) featsB = featureExtractor(inpB) distance = layers.Concatenate()([featsA, featsB]) outputs = layers.Dense(1, activation="sigmoid")(distance) self.model = Model(inputs=[inpA, inpB], outputs=outputs) def build_feature_extractor(self, inputShape, featExtractorConfig): layers_config = [layers.Input(inputShape)] for i, n_units in enumerate(featExtractorConfig): layers_config.append(layers.Dense(n_units)) layers_config.append(layers.Dropout(0.5)) layers_config.append(layers.BatchNormalization()) layers_config.append(layers.Activation('relu')) model = Sequential(layers_config, name='feature_extractor') return model def call(self, x): return self.model(x) model = SiameseNetwork(inputShape=x_train[0].shape[1], featExtractorConfig=[100]) model.compile( loss="binary_crossentropy", optimizer=optimizers.Adam(learning_rate=0.0001), metrics=["accuracy"] ) model.model.layers[2].summary() model.model.summary() utils.plot_model(model.model, show_shapes=True, expand_nested=True) es = callbacks.EarlyStopping( monitor='val_loss', patience=5, verbose=1, restore_best_weights=True ) rlp = callbacks.ReduceLROnPlateau( monitor='val_loss', factor=0.1, patience=2, min_lr=1e-10, mode='min', verbose=1 ) history = model.fit( x_train, y_train, validation_data=(x_valid, y_valid), batch_size=32, epochs=100, callbacks=[es, rlp] ) fig, ax = plt.subplots(2, 1, figsize=(20, 8)) df = pd.DataFrame(history.history) df[['accuracy', 'val_accuracy']].plot(ax=ax[0]) df[['loss', 'val_loss']].plot(ax=ax[1]) ax[0].set_title('Model Accuracy', fontsize=12) ax[1].set_title('Model Loss', fontsize=12) fig.suptitle('Siamese Network: Learning Curve', fontsize=18); %%time submission = pd.DataFrame({ 'test_id': dtest.test_id.values, 'is_duplicate': np.ravel(model.predict(x_test, batch_size=32)) }) submission.to_csv('submission.csv', index=False) ```
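The `euclidean_distance` helper above clamps the summed squares at `K.epsilon()` before taking the square root, so the gradient stays finite when two embeddings coincide (as defined, the model actually concatenates the two feature vectors rather than calling these helpers, but they are the standard Siamese distance heads). The same arithmetic can be mirrored in plain NumPy to sanity-check it outside TensorFlow — a sketch, with `EPS` standing in for Keras' default epsilon:

```python
import numpy as np

EPS = 1e-7  # stand-in for K.epsilon()

def euclidean_distance_np(a, b):
    # Row-wise Euclidean distance; the epsilon clamp keeps the
    # gradient finite when the two embeddings coincide.
    sum_squared = np.sum(np.square(a - b), axis=1, keepdims=True)
    return np.sqrt(np.maximum(sum_squared, EPS))

def cosine_similarity_np(a, b):
    # L2-normalise, then take the mean of the elementwise products,
    # mirroring the Keras helper (note: mean, not sum, so the result
    # is the cosine similarity divided by the feature dimension).
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return np.mean(a * b, axis=-1, keepdims=True)

pair_a = np.array([[3.0, 4.0]])
pair_b = np.array([[0.0, 0.0]])
print(euclidean_distance_np(pair_a, pair_b))  # [[5.]]
```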
``` import sys sys.path.append('../') import os import torch import torch.nn as nn import accimage from PIL import Image from imageio import imread from torch.utils.data import Dataset, DataLoader from torchvision import datasets, models, transforms, set_image_backend, get_image_backend import data_utils import train_utils import numpy as np import pandas as pd import pickle import torch.nn.functional as F from collections import Counter %reload_ext autoreload %autoreload 2 # https://github.com/pytorch/accimage set_image_backend('accimage') get_image_backend() # set root dir for images root_dir = '/n/mounted-data-drive/COAD/' # normalize and tensorify jpegs normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) transform = transforms.Compose([transforms.ToTensor(), normalize]) sa_train, sa_val = data_utils.process_MSI_data() train_set = data_utils.TCGADataset_tiles(sa_train, root_dir, transform=transform) val_set = data_utils.TCGADataset_tiles(sa_val, root_dir, transform=transform) # set weights for random sampling of tiles such that batches are class balanced counts = [c[1] for c in sorted(Counter(train_set.all_labels).items())] weights = 1.0 / np.array(counts, dtype=float) * 1e3 reciprocal_weights = [] for index in range(len(train_set)): reciprocal_weights.append(weights[train_set.all_labels[index]]) batch_size = 256 sampler = torch.utils.data.sampler.WeightedRandomSampler(reciprocal_weights, len(reciprocal_weights), replacement=False) train_loader = DataLoader(train_set, batch_size=batch_size, pin_memory=True, sampler=sampler, num_workers=12) #len(train_set) / batch_size valid_loader = DataLoader(val_set, batch_size=batch_size, pin_memory=True, num_workers=12) #len(val_set) / batch_size ``` ## Fat Network ``` resnet = models.resnet18(pretrained=True) resnet.fc = nn.Linear(512, 2, bias=True)  # fc in_features: resnet18: 512, resnet50: 2048, resnet152: 2048 resnet.cuda() learning_rate = 1e-2 criterion_train = nn.BCEWithLogitsLoss(reduction = 'mean')
criterion_val = nn.BCEWithLogitsLoss(reduction = 'none') optimizer = torch.optim.Adam(resnet.parameters(), lr = learning_rate) scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=100, min_lr=1e-6) jpg_to_sample = val_set.jpg_to_sample for e in range(1): if e % 1 == 0: print('---------- LR: {0:0.5f} ----------'.format(optimizer.state_dict()['param_groups'][0]['lr'])) #train_utils.embedding_training_loop(e, train_loader, resnet, criterion_train, optimizer) val_loss = train_utils.embedding_validation_loop(e, valid_loader, resnet, criterion_val, jpg_to_sample, dataset='Val', scheduler=scheduler) #torch.save(resnet.state_dict(),'test.pt') ```
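The sampler setup earlier in this notebook weights each tile by the reciprocal of its class frequency so that batches come out roughly class-balanced. That weight computation can be illustrated standalone — a sketch with made-up labels; the `1e3` scale factor from the notebook rescales all weights uniformly and does not change the relative sampling probabilities:

```python
import numpy as np
from collections import Counter

# Toy tile labels: three of class 0, one of class 1 (made up for illustration).
all_labels = [0, 0, 0, 1]

# Per-class counts in class order, then reciprocal weights, as in the notebook.
counts = [c[1] for c in sorted(Counter(all_labels).items())]
weights = 1.0 / np.array(counts, dtype=float) * 1e3

# One weight per sample: the minority-class tile is ~3x as likely to be drawn.
reciprocal_weights = [weights[label] for label in all_labels]
print(counts, reciprocal_weights)
```

In the notebook these per-sample weights feed `WeightedRandomSampler`, which treats them as relative draw probabilities.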
# Mount Drive & Login to Wandb ``` from google.colab import drive from getpass import getpass import urllib import os # Mount drive drive.mount('/content/drive') !pip install wandb -qqq !wandb login ``` # Install dependencies ``` !rm -r pearl !git clone https://github.com/PAL-ML/PEARL_v1.git pearl %cd pearl !pip install -r requirements.txt %cd .. !pip install git+git://github.com/ankeshanand/pytorch-a2c-ppo-acktr-gail !pip install git+git://github.com/mila-iqia/atari-representation-learning.git !pip install torch==1.7.1{torch_version_suffix} torchvision==0.8.2{torch_version_suffix} -f https://download.pytorch.org/whl/torch_stable.html ftfy regex !pip install ftfy regex tqdm !pip install git+https://github.com/openai/CLIP.git !pip install git+git://github.com/openai/baselines ! wget http://www.atarimania.com/roms/Roms.rar ! unrar x Roms.rar ! unzip ROMS.zip ! python -m atari_py.import_roms /content/ROMS !git clone https://github.com/princeton-vl/RAFT.git ``` # Imports ``` # ML libraries import torch.nn as nn import torch import pearl.src.benchmark.colab_data_preprocess as data_utils from pearl.src.benchmark.probe_training_wrapper import run_probe_training from pearl.src.benchmark.encoder_training_wrapper import run_encoder_training from pearl.src.methods.encoders import LinearRepEncoder, ClipEncoder, LinearCPCEncoder from pearl.src.benchmark.utils import appendabledict, load_encoder_from_checkpoint # Models import clip # Data processing from PIL import Image from torchvision.transforms import Compose, Resize, Normalize # Misc import numpy as np import wandb import os from tqdm import tqdm # Plotting import seaborn as sns import matplotlib.pyplot as plt ``` # Helper function(s) ``` def get_trained_encoder(rep_encoder, tr_episodes, val_episodes, config, wandb, method=None, train_encoder=False): if train_encoder: trained_encoder = run_encoder_training(rep_encoder, tr_episodes, val_episodes, config, wandb, method=method) else: trained_encoder =
load_encoder_from_checkpoint(encoder_models_dir, config['model_name'], LinearCPCEncoder, input_size=rep_encoder.input_size, output_size=rep_encoder.feature_size, to_train=False) return trained_encoder ``` # Initialization & constants Saving parameters ``` # Run the train image masking in separate batches if the notebook crashes. tr_start_ind_ratio = 0 # Start index on a scale of 0-1 over the total number of episodes tr_end_ind_ratio = 1 # End index on a scale of 0-1 over the total number of episodes ``` General ``` env_name = "BreakoutNoFrameskip-v4" collect_mode = "random_agent" # random_agent or ppo_agent steps = 50000 training_input = "images" # embeddings or images ``` Encoder params ``` ''' input_size = 512 feature_size = 512 e_lr = 3e-4 e_batch_size = 64 e_num_epochs = 200 e_patience = 15 gru_layers = 2 gru_size = 256 seq_len = 100 steps_start = 0 steps_end = 99 steps_step = 4 ''' encoder_training_method = "optical_flow" encoder_type = "raft" model_name = encoder_training_method + '-' + encoder_type train_encoder = False use_encoder = True ``` Probe Params ``` p_lr = 3e-4 p_batch_size = 64 p_num_epochs = 100 p_patience = 15 probe_type = "linear" probe_name_suffix = "_OpticalFlowBlackMask" probe_name = encoder_training_method + "-" + encoder_type + "-" + probe_type + probe_name_suffix ``` Paths ``` data_path_suffix = "-latest-04-05-2021" probe_models_dir = os.path.join("drive/MyDrive/PAL_HILL_2021/Atari-RL/Models/probes", probe_name, env_name) encoder_models_dir = os.path.join("drive/MyDrive/PAL_HILL_2021/Atari-RL/Models/encoders", encoder_type, env_name) data_dir = os.path.join("/content/drive/MyDrive/PAL_HILL_2021/Atari-RL/Images_Labels_Clip_embeddings", env_name + data_path_suffix) if not os.path.exists(probe_models_dir): os.makedirs(probe_models_dir) if not os.path.exists(encoder_models_dir): os.makedirs(encoder_models_dir) ``` Wandb ``` wandb.init(project='atari-clip') config = wandb.config config.game = 
"{}-optical-flow-raft-linear-probe".format(env_name.replace("NoFrameskip-v4", "")) wandb.run.name = "{}_optical_flow_raft_linear".format(env_name.replace("NoFrameskip-v4", "")) ``` # Get episode data ``` tr_episodes, val_episodes,\ tr_labels, val_labels,\ test_episodes, test_labels = data_utils.get_data(training_input, data_dir, env_name=env_name, steps=steps, collect_mode=collect_mode, color=True) len(tr_episodes) ``` # Get masked episodes ## RAFT Initialisation ``` %cd RAFT !wget https://www.dropbox.com/s/4j4z58wuv8o0mfz/models.zip !unzip models.zip import sys sys.path.append('core') import argparse import os import cv2 import glob import numpy as np import torch from PIL import Image from raft import RAFT from utils import flow_viz from utils.utils import InputPadder from tqdm import tqdm DEVICE = 'cuda' ``` ## Function definitions ``` def flow_to_mask(flow_uv, mask_type="norm", clip_flow=None, convert_to_bgr=False): assert flow_uv.ndim == 3, 'input flow must have three dimensions' assert flow_uv.shape[2] == 2, 'input flow must have shape [H,W,2]' if clip_flow is not None: flow_uv = np.clip(flow_uv, 0, clip_flow) u = flow_uv[:,:,0] v = flow_uv[:,:,1] rad = np.sqrt(np.square(u) + np.square(v)) if mask_type == "norm": mask = rad / np.max(rad) elif mask_type == "clip": mask = np.clip(rad, 0, 1) return mask def mask_image(image, mask, add_background_noise=False): mask_3c = np.stack([mask for _ in range(3)]) inv_mask_3c = 1 - mask_3c masked_image = image * mask_3c if add_background_noise: if np.max(image) > 1: noise = np.random.randint(0, np.max(image), masked_image.shape) else: noise = np.random.random(masked_image.shape) masked_image = masked_image + noise*inv_mask_3c return masked_image def convert_episodes_to_optical_flow_masked_episodes( episodes, args, return_flow_output=False, add_background_noise=False, data_dir=None, converted_episodes_filename=None, save_per_episodes=None ): model = torch.nn.DataParallel(RAFT(args)) 
model.load_state_dict(torch.load(args.model)) model = model.module model.to(DEVICE) model.eval() if converted_episodes_filename: converted_episodes_filepath = os.path.join(data_dir, converted_episodes_filename) try: optical_flow_episodes = data_utils.load_npy(converted_episodes_filepath) optical_flow_episodes = list(optical_flow_episodes) except: optical_flow_episodes = [] else: optical_flow_episodes = [] num_episodes_done = len(optical_flow_episodes) with torch.no_grad(): for idx_ep, ep in tqdm(enumerate(episodes)): if idx_ep < num_episodes_done: continue images = ep converted_episode = [] for im1, im2 in zip(images[:-1], images[1:]): image1 = im1.float().unsqueeze(0).cuda() image2 = im2.float().unsqueeze(0).cuda() padder = InputPadder(image1.shape) image1, image2 = padder.pad(image1, image2) flow_low, flow_up = model(image1, image2, iters=20, test_mode=True) if return_flow_output: flow_output = flow_viz.flow_to_image(flow_up[0].permute(1,2,0).cpu().numpy()) converted_image = flow_output.transpose(2,0,1).astype(np.uint8) else: flow_numpy = flow_up[0].permute(1,2,0).cpu().numpy() image_numpy = image1[0].cpu().numpy() mask = flow_to_mask(flow_numpy) masked_image = mask_image(image_numpy, mask, add_background_noise=add_background_noise) converted_image = masked_image.astype(np.uint8) converted_image_tensor = torch.from_numpy(converted_image) converted_episode.append(converted_image_tensor) optical_flow_episodes.append(converted_episode) if converted_episodes_filename: if (idx_ep+1) % save_per_episodes == 0 or idx_ep == len(episodes)-1: # data_utils.save_npy(converted_episodes_filepath, np.array(optical_flow_episodes)) np.savez(converted_episodes_filepath, optical_flow_episodes) del converted_episode return optical_flow_episodes ``` ## Convert episodes ``` parser = argparse.ArgumentParser() parser.add_argument('--model', default="models/raft-things.pth", help="restore checkpoint") # parser.add_argument('--path', default="demo-frames", help="dataset for evaluation") 
parser.add_argument('--small', action='store_true', help='use small model') parser.add_argument('--mixed_precision', action='store_true', help='use mixed precision') parser.add_argument('--alternate_corr', action='store_true', help='use efficent correlation implementation') args = parser.parse_args(args=["--model=models/raft-things.pth"]) add_background_noise = False save_per_episodes = 20 start_ind = int(tr_start_ind_ratio*len(tr_episodes)) end_ind = int(tr_end_ind_ratio*len(tr_episodes)) converted_tr_episodes_filename = "optical_flow_black_mask_train_eps_" + str(start_ind) + "_" + str(end_ind-1) + ".npz" converted_tr_episodes = convert_episodes_to_optical_flow_masked_episodes(tr_episodes[start_ind:end_ind], args, add_background_noise=add_background_noise, data_dir=data_dir, converted_episodes_filename=converted_tr_episodes_filename, save_per_episodes=save_per_episodes) # converted_tr_episodes_filename = "optical_flow_black_mask_train_eps.npz" # converted_tr_episodes = convert_episodes_to_optical_flow_masked_episodes(tr_episodes, args, add_background_noise=add_background_noise, data_dir=data_dir, converted_episodes_filename=converted_tr_episodes_filename, save_per_episodes=save_per_episodes) add_background_noise = False save_per_episodes = 20 converted_val_episodes_filename = "optical_flow_black_mask_val_eps.npz" converted_val_episodes = convert_episodes_to_optical_flow_masked_episodes(val_episodes, args, add_background_noise=add_background_noise, data_dir=data_dir, converted_episodes_filename=converted_val_episodes_filename, save_per_episodes=save_per_episodes) converted_test_episodes_filename = "optical_flow_black_mask_test_eps.npz" converted_test_episodes = convert_episodes_to_optical_flow_masked_episodes(test_episodes, args, add_background_noise=add_background_noise, data_dir=data_dir, converted_episodes_filename=converted_test_episodes_filename, save_per_episodes=save_per_episodes) for i, item in enumerate(converted_val_episodes[0]): 
plt.imshow(item.permute(1,2,0).numpy()) plt.show() if i == 30: break ```
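`flow_to_mask` above reduces a two-channel flow field to a per-pixel magnitude mask, and `mask_image` multiplies that mask into every image channel. The core arithmetic can be demonstrated on a tiny array — a sketch reusing the same formulas; the shapes and flow values here are made up for illustration:

```python
import numpy as np

# A 2x2 flow field with (u, v) channels; values made up for illustration.
flow_uv = np.zeros((2, 2, 2))
flow_uv[0, 0] = (3.0, 4.0)   # magnitude 5.0
flow_uv[1, 1] = (0.0, 2.5)   # magnitude 2.5

# "norm" mask type: per-pixel flow magnitude scaled to [0, 1].
u, v = flow_uv[:, :, 0], flow_uv[:, :, 1]
rad = np.sqrt(np.square(u) + np.square(v))
mask = rad / np.max(rad)

# mask_image: replicate the mask across the 3 image channels and multiply,
# so static pixels (zero flow) are blacked out.
image = np.ones((3, 2, 2))
masked_image = image * np.stack([mask for _ in range(3)])
print(mask)
```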
# April 2019 Python Users Group - Manchester, New Hampshire, USA Author: Larry M. (Larz60+) *** *** ## MIT License Copyright (c) 2018 Larz60+ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. *** ## Introduction Originally, I wanted to present examples of three types of scrapers: one using lxml, another using BeautifulSoup and finally one using Selenium. I don't think I could present all three in one evening and still have good coverage of each section, so I'll leave lxml for another presentation. Instead, I'll be scraping a site that requires both Selenium and BeautifulSoup. A good candidate for this is the Connecticut Secretary of State Business Entity Search site. The modules needed to accomplish this are: ### Set-up * CreateCityFile.py - Creates a city dictionary, saved as a JSON file, by scraping 'https://ctstatelibrary.org/cttowns/counties' using Selenium and BeautifulSoup. Will explain how to capture an XPath using Inspect Element. The page will be captured using Selenium, which will execute the JavaScript.
The parsing will be done using BeautifulSoup. * CreateDict.py - Module providing a class for creating dictionaries, adding nodes, and adding cells. (reusable) * PrettifyPage.py - Copied from another application; used to create formatted pretty pages (uses BeautifulSoup4 prettify, but modifies spacing to make it easier to read). * BusinessPaths.py - Creates a Paths class that contains entries for, and creates directories (if necessary) for, all modules in the package. Included are all file names and locations. Having a file like this saves a lot of time trying to chase down where a particular file is located. It is used by all modules, and because of this, any file name and/or directory can be modified and immediately propagated to all modules in the package. Uses pathlib. ### Analysis * PreviewSearchPage.py - Runs a test using Selenium to see if there's a hidden API somewhere, which would greatly simplify scraping. ### Scrape * ScrapeConnecticut.py - Captures all summary pages searching by City, which is the best method for getting total coverage of all businesses. Downloading is done using concurrent.futures.ProcessPoolExecutor. These files are stored in ./data/html and named City_pagex.html, example: Andover_page12.html * ExtractBusinessEntities.py - Extracts business information from the summary files and saves it as preliminary information in ./data/json/CompanyMaster.json; includes basic company information as well as the URL of the detail information and the associated filename. * AddEntityDetail.py - Using the information gathered in ExtractBusinessEntities.py, downloads all detail files and adds detail information on each business to the ./data/json/CompanyMaster.json file. ### Process * CreateDatabase.py - Creates an sqlite3 database located at ./data/database/CompanyMaster.db. Populated from CompanyMain.json, CompanyDetail.json, CompanyPrincipals.json, CompanyAgents.json and CompanyFilings.json, which are generated from CompanyMaster.json.
Calls CreateTables.py * CreateTables.py - Creates the database and all tables in the Company sqlite3 database * Future - Create a GUI to access the scraped data. (Might actually have something by meeting time). * The website used to scrape the above business data is in the process of changing. Will need to rewrite all modules in the Scrape section. ``` %load_ext autoreload %autoreload 2 ``` *** ## BusinessPaths.py This module creates all data directories (only if they don't already exist), filenames and paths (once in a while I will create tmp files on the fly, but will almost always use paths from this module). I have been doing this for some time now and have never regretted it. Advantages: * Can add new files, which instantly become available to every program in the package. * Never have to wonder where a file is stored, as only its name is necessary for access. * The entire directory can be renamed with 100% transparency. * Most of the directory names become standard to all the code that I write, and users know where to look for a certain type of file. Probably many other advantages that I can't think of at the moment. The very first statement in each and every one of these files is an anchor that establishes the relationship of all files and directories, and the modules themselves. The anchor is based on the __file__ variable associated with a module, and is written as: os.chdir( os.path.abspath( os.path.dirname( __file__ ))) Initializes all paths, URLs and files (other than temporary files). It is imported into all modules. Creates the ./data directory and all sub directories if not already created; does not destroy existing directories or data, so it is safe to run. The script should be run by itself once on a new system install. ``` # BusinessPaths.py from pathlib import Path import os class BusinessPaths: def __init__(self): os.chdir(os.path.abspath(os.path.dirname(__file__))) self.homepath = Path('.') self.rootpath = self.homepath / '..'
self.datapath = self.rootpath / 'data' self.datapath.mkdir(exist_ok=True) self.dbpath = self.datapath / 'database' self.dbpath.mkdir(exist_ok=True) self.htmlpath = self.datapath / 'html' self.htmlpath.mkdir(exist_ok=True) self.idpath = self.datapath / 'Idfiles' self.idpath.mkdir(exist_ok=True) self.jsonpath = self.datapath / 'json' self.jsonpath.mkdir(exist_ok=True) self.prettypath = self.datapath / 'pretty' self.prettypath.mkdir(exist_ok=True) self.textpath = self.datapath / 'text' self.textpath.mkdir(exist_ok=True) self.tmppath = self.datapath / 'tmp' self.tmppath.mkdir(exist_ok=True) self.base_url = 'http://searchctbusiness.ctdata.org/' self.cities_json = self.jsonpath / 'cities.json' self.city_list_url = 'https://ctstatelibrary.org/cttowns/counties' self.raw_city_file = self.tmppath / 'raw_city.html' # self.cities_text = self.textpath / 'cities.txt' self.company_master_json = self.jsonpath / 'CompanyMaster.json' self.CompanyMasterDb = self.dbpath / 'CompanyMaster.db' self.company_main = self.jsonpath / 'CompanyMain.json' self.company_detail = self.jsonpath / 'CompanyDetail.json' self.company_principals = self.jsonpath / 'CompanyPrincipals.json' self.company_agents = self.jsonpath / 'CompanyAgents.json' self.company_filings = self.jsonpath / 'CompanyFilings.json' if __name__ == '__main__': BusinessPaths() ``` *** ## CreateDict.py I got tired of trying to remember exactly how (and where) to add new dictionaries, nodes or cells, so I wrote this module. ### new_dict(self, dictname) * Creates a new dictionary instance named dictname. I usually create the dictionary manually in the __init__ method of a class, but this can be used as an alternative. ### add_node(self, parent, nodename) * Adds a new nested node to the parent node, named nodename. ### add_cell(self, nodename, cellname, value) * Adds a cell named cellname to node nodename and assigns value to it.
### display_dict(self, dictname, level=0) * Displays a dictionary (or node of a dictionary) in a nicely formatted and properly indented manner. * level is the indent level, and never supplied by the caller; it's used for determining the indent level for recursive calls. * testit is a demo of usage. - Results of running testit: - CityList Dictionary Boston Resturants Spoke Wine Bar Addr1: 89 Holland St City: Sommerville ZipCode: 02144 Phone: 617-718-9463 Highland Kitchen Addr1: 150 Highland Ave City: Sommerville ZipCode: 02144 Phone: 617-625-1131 raw data: {'Boston': {'Resturants': {'Spoke Wine Bar': {'Addr1': '89 Holland St', 'City': 'Sommerville', 'ZipCode': '02144', 'Phone': '617-718-9463'}, 'Highland Kitchen': {'Addr1': '150 Highland Ave', 'City': 'Sommerville', 'ZipCode': '02144', 'Phone': '617-625-1131'}}}} ``` # CreateDict.py import os class CreateDict: def __init__(self): os.chdir(os.path.abspath(os.path.dirname(__file__))) def new_dict(self, dictname): setattr(self, dictname, {}) def add_node(self, parent, nodename): node = parent[nodename] = {} return node def add_cell(self, nodename, cellname, value): cell = nodename[cellname] = value return cell def display_dict(self, dictname, level=0): indent = " " * (4 * level) for key, value in dictname.items(): if isinstance(value, dict): print(f'\n{indent}{key}') level += 1 self.display_dict(value, level) else: print(f'{indent}{key}: {value}') if level > 0: level -= 1 def testit(): # instantiate class cd = CreateDict() # create new dictionary named CityList cd.new_dict('CityList') # add node Boston boston = cd.add_node(cd.CityList, 'Boston') # add sub node Resturants bos_resturants = cd.add_node(boston, 'Resturants') # Add subnode 'Spoke Wine Bar' to parent bos_resturants spoke = cd.add_node(bos_resturants, 'Spoke Wine Bar') cd.add_cell(spoke, 'Addr1', '89 Holland St') cd.add_cell(spoke, 'City', 'Sommerville') cd.add_cell(spoke, 'ZipCode', '02144') cd.add_cell(spoke, 'Phone', '617-718-9463') # Add subnode 'Highland Kitchen' to parent bos_resturants highland =
cd.add_node(bos_resturants, 'Highland Kitchen') cd.add_cell(highland, 'Addr1', '150 Highland Ave') cd.add_cell(highland, 'City', 'Sommerville') cd.add_cell(highland, 'ZipCode', '02144') cd.add_cell(highland, 'Phone', '617-625-1131') # display dictionary print(f'\nCityList Dictionary') cd.display_dict(cd.CityList) print(f'\nraw data: {cd.CityList}') if __name__ == '__main__': testit() ``` *** ## CreateCityFile.py This script creates a json file that will be used to control the order and scope of page retrieval. This method was chosen over searching by name alphabetically because: * The Connecticut search by Name requires a minimum of two letters * It doesn't care where within the name those two letters occur * This means a search for 'AA' will find 'VEREIT AA STRATFORD CT, LLC' and 'AA CLEANING LLC' but not 'AAA AFFORDABLE GLASS, INC.' * Because of this, the number of pages needed for full coverage is massive, and probably impractical to retrieve. Coverage by city has no page limit (at time of writing) * For example, a city search on Bridgeport brings up 741 pages. * This allows for full coverage of all companies. The URL I chose to get city information is https://ctstatelibrary.org/cttowns/counties. This page contains more information than is needed for this package. Nevertheless, I'll save everything ... It may, with high probability, prove useful in some future package. To start, capture the city page using Selenium. This will capture the JavaScript city table in the browser page. * Show Inspect Element to get the CSS selector for the header * Show how to determine the table layout using print statements * Then uncomment the code and run again, showing the dictionary that's written to the json file (data/json/cities.json).
* Also demonstrate use of the Python json formatter to create a text file of the dictionary Each city has the following format: * "Andover": { - "Town name": "Andover", - "County": "Tolland", - "Year Established": "1848", - "Parent Town": "Coventry, Hebron", - "History of incorporation": "May 18, 1848; taken from Hebron and Coventry", - "ContiguousCityName": "Andover" } ``` # CreateCityFile.py from selenium import webdriver from selenium.webdriver.common.by import By from bs4 import BeautifulSoup import BusinessPaths import time import PrettifyPage import CreateDict import json import sys class CreateCityFile: def __init__(self): self.bpath = BusinessPaths.BusinessPaths() self.pp = PrettifyPage.PrettifyPage() self.cd = CreateDict.CreateDict() self.header = [] self.city_info = {} self.get_city_info() def start_browser(self): caps = webdriver.DesiredCapabilities().FIREFOX caps["marionette"] = True self.browser = webdriver.Firefox(capabilities=caps) def stop_browser(self): self.browser.close() def get_city_info(self): if not self.bpath.cities_json.exists(): if self.bpath.raw_city_file.exists(): with self.bpath.raw_city_file.open() as fp: page = fp.read() soup = BeautifulSoup(page, "lxml") else: self.start_browser() self.browser.get(self.bpath.city_list_url) time.sleep(2) page = self.browser.page_source # save working copy and pretty copy for analysis in temp with self.bpath.raw_city_file.open('w') as fp: fp.write(page) soup = BeautifulSoup(page, "lxml") prettyfile = self.bpath.prettypath / 'raw_city_file_pretty.html' with prettyfile.open('w') as fp: fp.write(f'{self.pp.prettify(soup, 2)}') self.stop_browser() table = soup.find('table', {'summary': 'This table displays Connecticut towns and the year of their establishment.'}) trs = table.tbody.find_all('tr') # Create Node to separate Connecticut - May contain multiple states later masternode = self.cd.add_node(self.city_info, 'Connecticut') citynode = None contigname = 'Unspecified' for n, tr in enumerate(trs): if n == 0:
self.header = [] for td in self.get_td(tr): self.header.append(td.p.b.i.text.strip()) self.cd.add_cell(masternode, 'Header', self.header) else: for n1, td in enumerate(self.get_td(tr)): # print(f'==================================== tr {n}, td: {n1} ====================================') # print(f'{self.pp.prettify(td, 2)}') if n1 == 0: citynode = self.cd.add_node(masternode, f'{td.p.text.strip()}') value = td.p if td.p is None: value = 'Unspecified' else: value = td.p.text.strip() if value == '—-': value ='No parent town' self.cd.add_cell(citynode, self.header[n1], value.strip()) if self.header[n1] == 'Town name': contigname = value.strip().replace(' ', '') self.cd.add_cell(citynode, 'ContiguousCityName', contigname) self.cd.display_dict(self.city_info) # Create json file with self.bpath.cities_json.open('w') as fp: json.dump(self.city_info, fp) def get_td(self, tr): tds = tr.find_all('td') for td in tds: yield td if __name__ == '__main__': CreateCityFile() from IPython.display import IFrame IFrame(src='http://searchctbusiness.ctdata.org/', width=1450, height=900) ``` *** ## PreviewSearchPage.py Take a look at the search page above. At this point in time, it's the go to page for any Business Entity Search for Connecticut. Connecticut is considering publishing an API for Business Entity search, but does not yet do that. I have found that many (if not most) searches do indeed have an underlying API. It only has to be discovered. The purpose of this script is to expose that API if indeed it does exist. The method used to do this is to scrape a page using Selenium, using a city that is known to contain several pages of data. What is garnered here? * URL for Main page, just as a warm and fuzzy (makes sure I am using proper and full URL) * URL for initial search page for selected city. * URL for 2nd page for selected city * URL for Detail information of just one company. If there is an API, it should be exposed by step 3. 
But need to see step 2 results to see if the first page needs to be treated in a special way. ``` # PreviewSearchPage.py from selenium import webdriver from selenium.webdriver.common.by import By from bs4 import BeautifulSoup import BusinessPaths import time import PrettifyPage import GetPage import CreateDict import json import sys class PreviewSearchPage: def __init__(self): self.bpath = BusinessPaths.BusinessPaths() self.pp = PrettifyPage.PrettifyPage() self.cd = CreateDict.CreateDict() self.gp = GetPage.GetPage() self.getpage = self.gp.get_page self.analyze_page() def start_browser(self): caps = webdriver.DesiredCapabilities().FIREFOX caps["marionette"] = True self.browser = webdriver.Firefox(capabilities=caps) def stop_browser(self): self.browser.close() def save_page(self, filename): soup = BeautifulSoup(self.browser.page_source, "lxml") with filename.open('w') as fp: fp.write(self.pp.prettify(soup, 2)) def analyze_page(self): self.start_browser() self.get_search_page('Andover') self.stop_browser() def get_search_page(self, searchitem): # pick city with multiple pages url = 'http://searchctbusiness.ctdata.org' # Get the main page self.browser.get(url) time.sleep(2) # Even though we already know what the URL for the first page is, display the captured URL # in order to avoid any surprises.
print(f'Main Page URL: {self.browser.current_url}') # Find the 'Business City' option and select it self.browser.find_element(By.XPATH, '/html/body/div[2]/div[4]/div/form/div/div/span[1]/select/option[3]').click() # Finde the search box, make sure it's cleared, and insert our test page 'Andover' searchbox = self.browser.find_element(By.XPATH, '//*[@id="query"]') searchbox.clear() searchbox.send_keys(searchitem) # Find the select button and click it to go to First page self.browser.find_element(By.XPATH, '/html/body/div[2]/div[4]/div/form/div/div/span[3]/button').click() time.sleep(2) # Display first results page URL print(f'Results Page 1 URL: {self.browser.current_url}') # find total number of pages, we don't need it now, but can start thinking about how to use this to # help with page navigation later on when scraping all pages. pages = self.browser.find_element(By.XPATH, '/html/body/div[2]/div/div[2]/div[3]/div[2]/div/span[2]') print(pages.get_attribute('innerHTML')) # get page 2 by locating next page button and clickcking self.browser.find_element(By.XPATH, '/html/body/div[2]/div/div[2]/div[3]/div[2]/div/span[1]/a/icon').click() time.sleep(2) page2savefile = self.bpath.prettypath / 'Page2Source.html' self.save_page(page2savefile) # Display second results page URL print(f'Results Page 2 URL: {self.browser.current_url}') # Get detail page for first company in list self.browser.find_element(By.XPATH, '/html/body/div[2]/div/div[2]/table/tbody/tr[1]/td[1]/a').click() time.sleep(2) # Display detail page URL print(f'Detail Page URL: {self.browser.current_url}') if __name__ == '__main__': PreviewSearchPage() ``` You can clearly see by the URL's that are gathered from the various pages that an API does exist. Although we don't have any documentation for this API (bacause it's not published) We can break down the fields to discover how to get all the pages for a city. 
Here's what was gathered:

* Main Page URL: http://searchctbusiness.ctdata.org/
* Results Page 1 URL: http://searchctbusiness.ctdata.org/search_results?query=Andover&index_field=place_of_business_city&sort_by=nm_name&sort_order=asc&page=1
* 1 of 18
* Results Page 2 URL: http://searchctbusiness.ctdata.org/search_results?page=2&start_date=1900-01-01&end_date=2019-04-01&query=Andover&index_field=place_of_business_city&sort_by=nm_name&sort_order=asc
* Detail Page URL: http://searchctbusiness.ctdata.org/business/0589152

The interesting thing here is the inconsistency between page 1 and page 2: page 2 exposes much more of the API. If you break down the page 2 URL, here are the components:

* base url: http://searchctbusiness.ctdata.org/
* page selector: search_results?page=2
* &start_date=1900-01-01
* &end_date=2019-04-01
* &query=Andover
* &index_field=place_of_business_city
* &sort_by=nm_name
* &sort_order=asc

Pretty straightforward; we can't see all of the options here, but that's enough. We can try the page 2 URL modified for page 18 (the last page).

```
from IPython.display import IFrame
IFrame(src='http://searchctbusiness.ctdata.org/search_results?page=18&start_date=1900-01-01&end_date=2019-04-01&query=Andover&index_field=place_of_business_city&sort_by=nm_name&sort_order=asc', width=1450, height=900)
```

And it works just fine. I also tried it with page 1, and it works too. So now this can be used to scrape all of the pages using requests and beautifulsoup4.

***

## ScrapeConnecticutBusiness.py

The actual scraping script. Note that the get_url method uses the results produced by PreviewSearchPage.py to build API access to the summary pages; this allows downloading all of the summary files using requests rather than Selenium, which is much faster. The steps are:

### Setup

* Create a list (self.citylist) with each entry being a sublist containing the city name, the filename where results will be saved, and the URL. This will be used to speed up the feed for concurrent.futures.
* Set a counter to show progress (self.pages_to_download)
* Create an empty list (self.numpages) in which to save the future results (the number of summary pages for a given city)

### Initial scrape

* Using concurrent.futures.ProcessPoolExecutor (pure multiprocessing), the get_numpages method fetches page 1 for each city in self.citylist.
- Each process calls the parse method, passing city, filename, and url.
- parse caches web pages as they are downloaded: the cache is checked for the particular file, and the file is loaded from the cache if it exists; otherwise it is downloaded using requests.
- The page is parsed with BeautifulSoup, extracting paginate-info. It finishes by returning city and numpages.
- concurrent.futures.as_completed is used to capture the results from parse (city and numpages).

### Second to final scrape

* After getting the first page of each city, numpages is used to add entries to self.citylist so the remaining pages can be downloaded. This is the job of the add_new_pages method. Note that the method is called from a loop in the dispatch method, in groups separated by letter of the alphabet. This gives the server in Connecticut a break, but mostly it's for diabolical reasons.
- It may seem as if this entire process could be done in the get_numpages method, but I intentionally prepare the list up front so that the multiprocessing part of the program doesn't contain any bottlenecks.
- Starting with page 2, create a new self.citylist for all remaining pages of each city.
- Call get_numpages for each group.

### Notes

This entire process is blazingly fast. Please note that max_workers should be changed to the number of cores available in your CPU + 1. I add one because, right or wrong, I get the feeling the script's own process will share one of the cores as a thread.
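The URL components broken down earlier can be assembled with the standard library instead of hand-built f-strings; here's a minimal sketch (the parameter names come straight from the captured page-2 URL, while `BASE_URL`, the function name, and the dict ordering are mine):

```python
# Sketch: rebuild the discovered search-results URL from its query parameters.
from urllib.parse import urlencode

BASE_URL = 'http://searchctbusiness.ctdata.org/'

def search_url(city, page=1):
    """Build the summary-page URL for a given city and page number."""
    params = {
        'page': page,
        'start_date': '1900-01-01',
        'end_date': '2019-04-01',
        'query': city.upper(),          # the site stores city names uppercased
        'index_field': 'place_of_business_city',
        'sort_by': 'nm_name',
        'sort_order': 'asc',
    }
    # urlencode handles percent-escaping for city names containing spaces
    return f'{BASE_URL}search_results?{urlencode(params)}'

print(search_url('Andover', page=18))
```

One advantage over string concatenation: a city name like "East Haddam" is percent-encoded automatically.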
```
# ScrapeConnecticut.py
import BusinessPaths
import concurrent.futures
from pathlib import Path
from bs4 import BeautifulSoup
import requests
import string
import time
import json
import os
import sys

class ScrapeConnecticut:
    def __init__(self):
        self.bpath = BusinessPaths.BusinessPaths()
        self.citydict = {}
        with self.bpath.cities_json.open() as fp:
            self.citydict = json.load(fp)
        self.citylist = []
        for city, info in self.citydict['Connecticut'].items():
            if city == 'Header':
                continue
            cityn = city.replace(' ', '')
            self.citylist.append([city, self.bpath.htmlpath / f'{cityn}_page1.html', self.get_url(city)])
        self.pages_to_download = len(self.citylist)
        self.numpages = []
        # self.test_parse()
        self.dispatch()

    def get_url(self, city, page='1'):
        return f'{self.bpath.base_url}search_results?page={page}' \
               f'&start_date=1900-01-01&end_date=2019-04-01&query={city.upper()}' \
               f'&index_field=place_of_business_city&sort_by=nm_name&sort_order=asc'

    def dispatch(self):
        self.get_numpages()
        for letter in string.ascii_uppercase:
            self.add_new_pages(letter)
            self.get_numpages()

    def add_new_pages(self, letter):
        self.citylist = []
        findstart = True
        for city, npages in self.numpages:
            if findstart and not city.startswith(letter):
                print(f'findstart city: {city}')
                continue
            findstart = False
            if not city.startswith(letter):
                print(f'for break, city: {city}')
                break
            # Remove spaces from city name.
            cityn = city.replace(' ', '')
            currentpage = 2
            finalpage = int(npages)
            while True:
                # Check if done
                if currentpage > finalpage:
                    break
                url = self.get_url(city, page=str(currentpage))
                entry = [
                    city,
                    self.bpath.htmlpath / f'{cityn}_page{currentpage}.html',
                    url
                ]
                self.citylist.append(entry)
                currentpage += 1
        self.pages_to_download = len(self.citylist)
        # for debugging:
        # filename = self.bpath.tmppath / 'citylist.text'
        # with filename.open('w') as fp:
        #     for entry in self.citylist:
        #         fp.write(f'{entry}\n')
        print(f'Length citylist for {city}: {self.pages_to_download}')

    def parse(self, city, filename, url):
        # Will skip files already downloaded
        print(f'fetching {filename.name}')
        if filename.exists():
            with filename.open('rb') as fp:
                page = fp.read()
        else:
            response = requests.get(url)
            if response.status_code == 200:
                time.sleep(.25)
                page = response.content
                with filename.open('wb') as fp:
                    fp.write(page)
            else:
                print(f"can't download: {url}")
                return False
        soup = BeautifulSoup(page, 'lxml')
        numpages = soup.find('span', {'class': "paginate-info"}).text.split()[2]
        return city, numpages

    def test_parse(self):
        for city, filename, url in self.citylist:
            city, numpages = self.parse(city, filename, url)
            print(f'{city}: {numpages}')

    def get_numpages(self):
        countdown = self.pages_to_download
        with concurrent.futures.ProcessPoolExecutor(max_workers=5) as executor:
            city_pages = [
                executor.submit(self.parse, city, filename, url)
                for city, filename, url in self.citylist
            ]
            for future in concurrent.futures.as_completed(city_pages):
                countdown -= 1
                try:
                    city, numpages = future.result()
                    print(f'{city}: {numpages}')
                except TypeError as exc:
                    print(f'TypeError exception: {exc}')
                else:
                    self.numpages.append([city, numpages])
                print(f'Remaining files: {countdown}')

if __name__ == '__main__':
    ScrapeConnecticut()
```

***

## ExtractBusinessEntities.py

This module extracts business information from the files gathered by ScrapeConnecticut.py.
It creates an intermediate dictionary which is saved as the JSON file /data/json/CompanyMaster.json. Each cell of the dictionary contains the following information:

* "0667703": {
- "BusinessId": "0667703",
- "BusinessName": "119 SPENCER STREET, L.L.C.",
- "DetailUrl": "http://searchctbusiness.ctdata.org/business/0667703",
- "Filename": "../data/Idfiles/Id0667703.html",
- "DateFormed": "12/05/2000",
- "Status": "Active",
- "Address": "218 LAKE ROAD, ANDOVER, CT 06232",
- "CityState": "ANDOVER, CT",
- "PrincipalNames": "EDWARD A. HATEM",
- "AgentNames": "EDWARD A. HATEM"
- }

The key is the BusinessId (also included in the body for use later when creating the SQL database). The other fields are self-explanatory.

### Setup

* Create a summary file list (self.summary_file_list), which is just a file list from /data/html. It uses 'page' in the file name to identify the proper files.

### extract_company_info(self)

* This method extracts the business data from each file using BeautifulSoup. For each file in summary_file_list:
- read the file into the variable page
- convert it to soup, using the 'lxml' parser
- find the 'table' tag, which contains all of the business information
- find all 'th' tags, which contain the header
- replace various items in each header element (see self.header_replacepairs)
- save the result in the self.header list
- extract all 'tr' tags and pass them to the strip_business method

### strip_business(self, trs)

* A continuation of extract_company_info; pulls the data from the table's tr tags
- for each element in trs:
- extract all td tags.
- if it is the first td tag:
- there is a link to the detail page; extract that
- extract the BusinessId from the URL
- extract the business name from the link text
- create the filename where the detail information will be saved
- save all of this to the BusinessInfo dictionary
- otherwise:
- extract the td text, match it to the header, and save it to the BusinessInfo dictionary

### Save the BusinessInfo dictionary as a JSON file in /data/json/CompanyMaster.json

```
# ExtractBusinessEntities.py
import BusinessPaths
from pathlib import Path
from bs4 import BeautifulSoup
import PrettifyPage
import CreateDict
import concurrent.futures
import requests
import string
import time
import json
import os
import sys

class ExtractBusinessEntities:
    def __init__(self):
        self.bpath = BusinessPaths.BusinessPaths()
        self.pp = PrettifyPage.PrettifyPage()
        self.cd = CreateDict.CreateDict()
        self.summary_file_list = []
        self.BusinessInfo = {}
        self.city_node = {}
        self.current_node = {}
        self.header_replacepairs = [
            ('(', ''), (')', ''), ('/', ''), (' ', ''), ('MMDDYYYY', '')
        ]
        self.header = []
        self.create_business_info()

    def create_business_info(self):
        self.get_summary_file_list()
        self.extract_company_info()
        self.save_as_json()

    # |++++++|+++++++++|+++++++++| Section 1 - Setup |+++++++++|+++++++++|+++++++++|

    def get_summary_file_list(self):
        path = self.bpath.htmlpath
        self.summary_file_list = [
            filename for filename in path.iterdir()
            if filename.is_file() and 'page' in filename.stem
        ]
        self.summary_file_list.sort()

    # |++++++|+++++++++|+++ Section 2 - Parse Summary Pages +++|+++++++++|+++++++++|

    def extract_company_info(self):
        self.header = []
        for file in self.summary_file_list:
            print(f'Processing {file.name}')
            city = str(file.stem).split('_')[0]
            with file.open('rb') as fp:
                page = fp.read()
            soup = BeautifulSoup(page, 'lxml')
            table = soup.find('table')
            head = table.thead.find_all('th')
            for element in head:
                if not len(element):
                    continue
                item = element.text.strip()
                for a, b in self.header_replacepairs:
                    item = item.replace(a, b)
                self.header.append(item)
            trs = table.tbody.find_all('tr')
            self.strip_business(trs)

    def strip_business(self, trs):
        base_url = 'http://searchctbusiness.ctdata.org'
        for tr in trs:
            tds = tr.find_all('td')
            for n1, td in enumerate(tds):
                if n1 == 0:
                    detail_url = f"{base_url}{td.a.get('href')}"
                    business_id = detail_url.split('/')[-1]
                    business_name = td.a.text.strip()
                    detail_filename = self.bpath.idpath / f'Id{business_id}.html'
                    self.current_node = self.cd.add_node(self.BusinessInfo, business_id)
                    self.cd.add_cell(self.current_node, 'BusinessId', business_id)
                    self.cd.add_cell(self.current_node, 'BusinessName', business_name)
                    self.cd.add_cell(self.current_node, 'DetailUrl', detail_url)
                    self.cd.add_cell(self.current_node, 'Filename', os.fspath(detail_filename))
                else:
                    self.cd.add_cell(self.current_node, self.header[n1], td.text.strip())

    # |++++++|+++++++++| Section 3 - Get and Parse Detail Pages +++++++++|+++++++++|

    def save_as_json(self):
        with self.bpath.company_master_json.open('w') as fp:
            json.dump(self.BusinessInfo, fp)

if __name__ == '__main__':
    ExtractBusinessEntities()
```

***

## AddEntityDetail.py

This module does all of the remaining scraping work. It uses the information already stored in CompanyMaster.json to download and parse the detailed company information. All of the downloaded files are stored in the /data/Idfiles directory, and each bears the filename Idxxxxxxx.html, where xxxxxxx is the BusinessId. There are close to 400,000 files in this group.

The process uses concurrent.futures.ProcessPoolExecutor. Since I started this project, download speed has improved greatly: at first I was getting times of 1.3 seconds per page, but at some point, I believe, someone at the database location added an index that greatly increased the download speed; it is now about .125 seconds or less per page.

### method add_entity_detail(self)

This method controls the entire process: setup, download, parse, and re-writing the updated CompanyMaster.json file.
When done, each entry in the master will contain the main information gathered by ExtractBusinessEntities.py, with the addition of company detail, principal information, agent information, and a list of all recorded filings. Here's what a typical entry looks like:

* "1169720": {
- "BusinessId": "1169720",
- "BusinessName": "182 WHEELING ROAD, LLC",
- "DetailUrl": "http://searchctbusiness.ctdata.org/business/1169720",
- "Filename": "../data/Idfiles/Id1169720.html",
- "DateFormed": "03/10/2015",
- "Status": "Active",
- "Address": "182 WHEELING ROAD, ANDOVER, CT 06232",
- "CityState": "ANDOVER, CT",
- "PrincipalNames": "HEATHER L. MORTIMER",
- "AgentNames": "GLENN T. TERK, ESQ.",
- "BusinessDetail": {
- "BusinessName": "182 WHEELING ROAD, LLC",
- "CitizenshipStateInc": "Domestic / CT",
- "LatestReport": "No Reports Found",
- "BusinessAddress": "182 WHEELING ROAD, ANDOVER, CT 06232",
- "BusinessType": "Unspecified",
- "MailingAddress": "C/O LAZ PARKING, 15 LEWIS STREET, HARTFORD, CT 06103",
- "BusinessStatus": "Active",
- "DateIncRegistration": "Mar 10, 2015",
- "Unused": "Unspecified"
- },
- "PrincipalsDetail": {
- "NameTitle": "HEATHER L. MORTIMER MANAGER",
- "BusinessAddress": "C/O LAZ PARKING, 15 LEWIS STREET, HARTFORD, CT 06103",
- "ResidenceAddress": "28 JONES STREET, AMSTON, CT 06231"
- },
- "AgentDetail": {
- "AgentName": "GLENN T. TERK, ESQ.",
- "AgentBusinessAddress": "15 LEWIS STREET, HARTFORD, CT 06103",
- "AgentResidenceAddress": "449 OLD RESERVOIR ROAD, WETHERSFIELD, CT 06109"
- },
- "FilingsDetail": {
- "1": {}, (a harmless software feature, so called because I didn't have enough time to find what causes it)
- "2": {
- "FilingID": "0005294665",
- "FilingType": "ORGANIZATION",
- "DateofFiling": "03/10/2015",
- "VolumeType": "B",
- "Volume": "02044",
- "StartPage": "0685",
- "Pages": "2"
- }
- }
* }

Parsing one of these files is different from parsing the summary files.
Each section is divided into a thead section and a tbody section. The thead section only identifies what is to follow in the tbody section, for example:

`<th class='table-name' colspan="4">Business Details ... </th>`

and does not contain headers for the actual tbody content. Instead, the tbody section is made up of tr elements, each containing pairs of td elements: the odd one holds the element name and the even one holds the element value.

The filings section does not conform to this structure. Instead, it has a true heading contained in the thead element, followed by a tbody element containing one or more tr elements, each containing as many td elements as there are th elements in the thead section.

There are 4 parts to each business:

* Business Details
* Principals Details
* Agent Summary
* Filings Details

### The process:

***

### Setup

* create the variables and lists used in the __init__ method
* load CompanyMaster.json into the self.BusinessInfo dictionary
* create the detail download list (self.download_list) containing filename, url pairs for the concurrent.futures download

### Download

method: download_detail()

* Calls fetch_url using all 4 CPU cores (change max_workers to #cores + 1 for your processor). Files are cached, so this process can be interrupted and started again at a later time without data loss. With about 400,000 pages for the entire state, the total download time can be as long as 55.5 hours, though I have experienced much faster times in recent runs (I expect someone added an index to the DB).

### Parse detail files

method: parse_detail()

* For each file in self.filelist:
- Extract the BusinessId from the stem of the filename.
- Read the existing business data from CompanyMaster (the BusinessInfo dictionary)
- Read the file and convert its contents to soup
- Get the node from the BusinessInfo dictionary; if it is not found (this shouldn't be the case, but it is possible if some files were manually downloaded), add it to the missing-link list.
- Now there are 4 sections of the file that need to be parsed: Business Details, principal info, agent info and filing info. All are dispatched the same way:
- try:
- `discard = self.current_node['BusinessDetail']`
- except KeyError:
- `bnode = self.cd.add_node(self.current_node, 'BusinessDetail')`
- `self.parse_business_details(soup, bnode)`
- This is trick code used to bypass already loaded nodes. The try will fail if the node doesn't exist in the dictionary, causing a new node to be created and then populated with a call to the parse_business_details(soup, bnode) method. Note that the node is named differently for each section: bnode, pnode, anode, or fnode. The 1st letter is the same as the 1st letter of the section name (bnode = Business Details), so when looking at the code, you know which section of data you are dealing with.
- After all files have been processed (400,000 for the full boat), save the missing list to /data/json (it can be used later to retry the missing files).

### parse_business_details(self, soup, bnode)

* Parser for the Business Details html
- Find the table tag with class = 'detail-table'
- Find all 'tr' tags.
- For each tr tag:
- Find all 'td' tags. These are arranged as pairs: the title in the odd td's, the value in the even td's.
- Set the pair flag to odd.
- If an element is not present, there will be a pair of empty td cells; this is what the skipeven flag is used for. Usually I will provide an empty node with the proper title and a value of 'Unspecified'.
- For each td:
- If odd:
- extract the title, fill in some empty-tag conditions, and replace special characters (the kind that will give SQL inserts the fits).
- Set the pair flag to even
- If even:
- if the element is empty, set the value to 'Unspecified'
- There is a duplicate BusinessId tag in this section; bypass it when found, as it is redundant.
- Add the cell to the dictionary
- set the pair flag to odd

### parse_principals(self, soup, pnode)

* Parser for principal information
- The html for principals is different from Business Details. Rather than using the odd/even td approach, it uses a header to get the title information.
- Find the table tag with id = 'principals'
- Find the 'tr' tag in the child node 'thead' (there is only one)
- find all 'th' tags
- for each, extract the text, replace special characters and save it in the pheader list
- find all 'tr' tags in the child node 'tbody' (many)
- for each 'tr' node:
- find all 'td' tags
- if a 'td' element is empty, set the value to 'Unspecified'
- otherwise extract the text value
- save the cell to the dictionary

### parse_agents(self, soup, anode)

* Parser for agent information. This is the same format as principals; the only difference here is the 'id' of the table element.
- Find the table tag with id = 'agents'
- the remainder is the same as for the parse_principals method

### parse_filings(self, soup, fnode)

* Parser for company filings. This table is similar to principals and agents, with the exception that there may be none, or many. To compensate for this, each filing is given a separate node in the dictionary, with a generated sequence number as the key.
- Find the table tag with id = 'filings'
- Header extraction is the same as for principals and agents.
- The tbody section has a separate tr for each filing, and contains the following elements:
- Filing ID
- Filing Type
- Date of Filing (MM/DD/YYYY)
- Volume Type
- Volume
- Start Page
- Pages #
- Parsing is the same as for principals and agents, with the addition of a sequence node.

### Save results

Rewrite the CompanyMaster.json file with the new information.
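The odd/even td pairing described above can be sketched without BeautifulSoup: once the cell texts are extracted, the detail table is just a flat title/value list. This is a minimal, dependency-free illustration (the `pair_cells` helper and sample data are mine, not from the module):

```python
# Sketch: turn a flat [title, value, title, value, ...] cell list into a
# dict, applying the same title clean-up (strip trailing ':', drop '/' and
# spaces) and the 'Unspecified' fallback for empty value cells.
def pair_cells(cells):
    details = {}
    # cells[0::2] are the odd (title) cells, cells[1::2] the even (value) cells
    for title, value in zip(cells[0::2], cells[1::2]):
        title = title.rstrip(':').replace('/', '').replace(' ', '')
        details[title] = value.strip() if value.strip() else 'Unspecified'
    return details

cells = ['Business Name:', '182 WHEELING ROAD, LLC',
         'Business Type:', '',            # empty value cell
         'Business Status:', 'Active']
print(pair_cells(cells))
```

The real parse_business_details does the same thing with flags (`odd`, `skipeven`) because it walks the td tags one at a time; the pairing logic is identical.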
```
# AddEntityDetail.py
import BusinessPaths
from pathlib import Path
from bs4 import BeautifulSoup
import PrettifyPage
import CreateDict
import json
import concurrent.futures
import os
import requests
import time
import sys

# use Id0667703.html for testing, has all fields

class AddEntityDetail:
    def __init__(self):
        self.bpath = BusinessPaths.BusinessPaths()
        self.pp = PrettifyPage.PrettifyPage()
        self.cd = CreateDict.CreateDict()
        self.header_replacepairs = [
            ('(', ''), (')', ''), ('/', ''), (' ', ''), ('MMDDYYYY', '')
        ]
        self.missing = []
        self.BusinessInfo = {}
        self.new_business_info = {}
        self.current_node = {}
        self.download_list = []
        self.filelist = []
        self.filecount = 0
        self.add_entity_detail()

    def load_business_info(self):
        with self.bpath.company_master_json.open() as fp:
            self.BusinessInfo = json.load(fp)
        for BusinessId in self.BusinessInfo.keys():
            url = self.BusinessInfo[BusinessId]['DetailUrl']
            filename = Path(self.BusinessInfo[BusinessId]['Filename'])
            self.download_list.append([filename, url])
            self.filecount += 1
        self.download_list.sort()
        # self.cd.display_dict(self.BusinessInfo)

    def add_entity_detail(self):
        self.load_business_info()
        # self.download_detail()
        self.parse_detail()
        self.save_as_json()

    def download_detail(self):
        print('starting download')
        countdown = self.filecount
        with concurrent.futures.ProcessPoolExecutor(max_workers=5) as executor:
            detail_info = [
                executor.submit(self.fetch_url, filename, url)
                for filename, url in self.download_list if not filename.exists()
            ]
            for future in concurrent.futures.as_completed(detail_info):
                countdown -= 1
                print(f'countdown: {countdown}')
                filename = future.result()

    def fetch_url(self, filename, url):
        print(f'fetching {filename.name}')
        response = requests.get(url)
        if response.status_code == 200:
            # time.sleep(.25)
            with filename.open('wb') as fp:
                fp.write(response.content)
        else:
            print(f"can't download: {url}")
            return False
        return filename

    def create_file_list(self):
        self.filelist = [
            filename for filename in self.bpath.idpath.iterdir()
            if filename.is_file() and filename.stem.startswith('Id')
        ]
        self.filelist.sort()

    def parse_detail(self):
        self.create_file_list()
        for filename in self.filelist:
            print(f'processing {filename.name}')
            # Get existing dictionary node
            business_id = filename.stem[2:]
            try:
                self.current_node = self.BusinessInfo[business_id]
            except KeyError:
                self.missing.append(os.fspath(filename))
                continue
            # read file
            with filename.open() as fp:
                page = fp.read()
            soup = BeautifulSoup(page, 'lxml')
            # Add Business Detail
            try:
                discard = self.current_node['BusinessDetail']
            except KeyError:
                bnode = self.cd.add_node(self.current_node, 'BusinessDetail')
                self.parse_business_details(soup, bnode)
            # Add Principals Detail
            try:
                discard = self.current_node['PrincipalsDetail']
            except KeyError:
                pnode = self.cd.add_node(self.current_node, 'PrincipalsDetail')
                self.parse_principals(soup, pnode)
            # Add Agent Detail
            try:
                discard = self.current_node['AgentDetail']
            except KeyError:
                anode = self.cd.add_node(self.current_node, 'AgentDetail')
                self.parse_agents(soup, anode)
            # Add Filings Detail
            try:
                discard = self.current_node['FilingsDetail']
            except KeyError:
                fnode = self.cd.add_node(self.current_node, 'FilingsDetail')
                self.parse_filings(soup, fnode)
        missingfiles = self.bpath.jsonpath / 'missing.json'
        with missingfiles.open('w') as fp:
            json.dump(self.missing, fp)
        # verify
        # self.cd.display_dict(self.current_node)
        # print(self.current_node)

    def parse_business_details(self, soup, bnode):
        table = soup.find('table', {'class': 'detail-table'})
        trs = table.tbody.find_all('tr')
        for n, tr in enumerate(trs):
            tds = tr.find_all('td')
            odd = True
            skipeven = False
            for n1, td in enumerate(tds):
                # print(f'\n======================== tr_{n}, tr_ {n1} ========================')
                # print(f'{self.pp.prettify(td, 2)}')
                if skipeven:
                    skipeven = False
                    continue
                if odd:
                    if not len(td):
                        if n == 2 and n1 == 2:
                            title = 'BusinessType'
                            value = 'Unspecified'
                            self.cd.add_cell(bnode, title, value)
                        elif n == 4 and n1 == 2:
                            title = 'Unused'
                            value = 'Unspecified'
                            self.cd.add_cell(bnode, title, value)
                        skipeven = True
                        continue
                    title = td.text.strip()
                    if title[-1] == ':':
                        title = title[:-1]
                    title = title.replace('/', '')
                    title = title.replace(' ', '')
                    odd = False
                else:
                    if len(td):
                        value = td.text.strip()
                    else:
                        value = 'Unspecified'
                    # Already have business id, don't need twice
                    if title == 'BusinessID':
                        odd = True
                        continue
                    self.cd.add_cell(bnode, title, value)
                    odd = True

    def parse_principals(self, soup, pnode):
        # Get header
        principals = soup.find('table', {'id': 'principals'})
        if principals:
            phead = principals.thead.find('tr')
            pheader = []
            ths = phead.find_all('th')
            for th in ths:
                item = th.text.strip()
                for a, b in self.header_replacepairs:
                    item = item.replace(a, b)
                pheader.append(item)
            trs = principals.tbody.find_all('tr')
            for tr in trs:
                tds = tr.find_all('td')
                for n1, td in enumerate(tds):
                    if len(td):
                        self.cd.add_cell(pnode, pheader[n1], td.text.strip())
                    else:
                        self.cd.add_cell(pnode, pheader[n1], 'Unspecified')

    def parse_agents(self, soup, anode):
        agents = soup.find('table', {'id': 'agents'})
        if agents:
            aheader = []
            ahead = agents.thead.find('tr')
            ths = ahead.find_all('th')
            for th in ths:
                item = th.text.strip()
                for a, b in self.header_replacepairs:
                    item = item.replace(a, b)
                aheader.append(item)
            trs = agents.tbody.find_all('tr')
            for tr in trs:
                tds = tr.find_all('td')
                for n1, td in enumerate(tds):
                    if len(td):
                        self.cd.add_cell(anode, aheader[n1], td.text.strip())
                    else:
                        self.cd.add_cell(anode, aheader[n1], 'Unspecified')

    def parse_filings(self, soup, fnode):
        filings = soup.find('table', {'id': 'filings'})
        if filings:
            fheader = []
            fhead = filings.thead.find('tr')
            ths = fhead.find_all('th')
            for th in ths:
                title = th.text.strip()
                if '#' in title:
                    title = title[:-2]
                for a, b in self.header_replacepairs:
                    title = title.replace(a, b)
                fheader.append(title)
            trs = filings.find_all('tr')
            seq = 1
            for tr in trs:
                tds = tr.find_all('td')
                fitem = self.cd.add_node(fnode, str(seq))
                for n1, td in enumerate(tds):
                    if len(td):
                        self.cd.add_cell(fitem, fheader[n1], td.text.strip())
                    else:
                        self.cd.add_cell(fitem, fheader[n1], 'Unspecified')
                seq += 1

    def save_as_json(self):
        with self.bpath.company_master_json.open('w') as fp:
            json.dump(self.BusinessInfo, fp)

if __name__ == '__main__':
    AddEntityDetail()
```

***

## Database creation and load

There are two modules involved: CreateTables.py and CreateDatabase.py.

***

### CreateTables.py

This is a simple module that creates a new sqlite3 database (it can be simply modified for PostgreSQL, Oracle, Sybase or others). It also builds insert queries when passed a table name and a list of values (from the calling module). Explained on the fly.

```
# CreateTables.py
import BusinessPaths
import sqlite3
import sys

class CreateTables:
    def __init__(self):
        self.bpath = BusinessPaths.BusinessPaths()
        self.insert_statements = {}
        self.dbcon = None
        self.dbcur = None

    def db_connect(self):
        try:
            self.dbcon = sqlite3.connect(self.bpath.CompanyMasterDb)
            self.dbcur = self.dbcon.cursor()
        except sqlite3.Error as e:
            print(e)

    def create_tables(self):
        self.db_connect()
        company = [
            'BusinessId', 'BusinessName', 'DetailURL', 'Filename',
            'DateFormed', 'Status', 'Address', 'CityState',
            'PrincipalNames', 'AgentNames'
        ]
        self.insert_statements['Company'] = self.create_table(company, 'Company')
        details = [
            'BusinessId', 'BusinessName', 'CitizenshipStateInc', 'LatestReport',
            'BusinessAddress', 'BusinessType', 'MailingAddress', 'BusinessStatus',
            'DateIncRegistration', 'Unused'
        ]
        self.insert_statements['Details'] = self.create_table(
            details, 'Details')
        # need to assign principalId
        principals = [
            'BusinessId', 'NameTitle', 'BusinessAddress', 'ResidenceAddress'
        ]
        self.insert_statements['Principals'] = self.create_table(
            principals, 'Principals')
        agents = [
            'BusinessId', 'AgentName', 'AgentBusinessAddress', 'AgentResidenceAddress'
        ]
        self.insert_statements['Agents'] = self.create_table(
            agents, 'Agents')
        filings = [
            'BusinessId', 'SeqNo', 'FilingId', 'FilingType', 'DateofFiling',
            'VolumeType', 'Volume', 'StartPage', 'Pages'
        ]
        self.insert_statements['Filings'] = self.create_table(
            filings, 'Filings')

    def create_table(self, header, tablename='tname'):
        """
        Create CorpTable from header record of self.bpath.corporation_master
        """
        qmarks = (f"?, " * len(header))[:-2]
        base_insert = f"INSERT INTO {tablename} VALUES "
        columns = ', '.join(header)
        sqlstr = f'DROP TABLE IF EXISTS {tablename};'
        self.dbcur.execute(sqlstr)
        sqlstr = f'CREATE TABLE IF NOT EXISTS {tablename} ({columns});'
        # print(sqlstr)
        self.dbcur.execute(sqlstr)
        self.db_commit()
        return base_insert

    def insert_data(self, tablename, columns):
        # print(f'\n{tablename}: {columns}')
        dbcolumns = None
        try:
            dbcolumns = '('
            for item in columns:
                dbcolumns = f"{dbcolumns}'{item}', "
            dbcolumns = f"{dbcolumns[:-2]});"
            sqlstr = f'{self.insert_statements[tablename]}{dbcolumns}'
            # print(f'\n{tablename}: {sqlstr}')
            self.dbcur.execute(sqlstr)
        except sqlite3.OperationalError:
            print(f'OperationalError:\n{sqlstr}')
            sys.exit(0)
        except sqlite3.IntegrityError:
            print(f'IntegrityError:\n{sqlstr}')
            sys.exit(0)

    def db_close(self, rollback=False):
        if rollback:
            self.dbcon.rollback()
        else:
            self.dbcon.commit()
        self.dbcon.close()

    def db_commit(self):
        self.dbcon.commit()

if __name__ == '__main__':
    ct = CreateTables()
    ct.create_tables()
```

***

### CreateDatabase.py

This module creates the CompanyMaster.db database in /data/database.
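Before the full module, a note on the insert mechanics: sqlite3's qmark placeholders can do the quoting and escaping that the loader below does by hand with string formatting. A minimal sketch (the table and column names here are illustrative only, borrowed from the Company layout above):

```python
# Sketch: creating a table and inserting a row with qmark placeholders.
# sqlite3 escapes embedded quotes itself, so no manual "''" doubling is needed.
import sqlite3

header = ['BusinessId', 'BusinessName', 'Status']
con = sqlite3.connect(':memory:')   # throwaway in-memory DB for the demo
cur = con.cursor()
cur.execute(f"CREATE TABLE Company ({', '.join(header)});")

qmarks = ', '.join('?' * len(header))           # "?, ?, ?"
insert = f"INSERT INTO Company VALUES ({qmarks});"
cur.execute(insert, ('0589152', "O'BRIEN'S, L.L.C.", 'Active'))
con.commit()
print(cur.execute('SELECT BusinessName FROM Company').fetchone()[0])
```

Business names full of apostrophes are exactly the values that trip up hand-built INSERT strings, which is why the modules below escape quotes before formatting.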
``` # CreateDatabase.py import BusinessPaths import CreateTables import sqlite3 import json import sys class CreateDatabase: def __init__(self): self.bpath = BusinessPaths.BusinessPaths() self.cretab = CreateTables.CreateTables() with self.bpath.company_master_json.open() as fp: self.master = json.load(fp) self.dbcon = None self.dbcur = None self.CompanyMain = {} self.Detail = {} self.Principals = {} self.Agents = {} self.Filings = {} self.insert_statements = {} self.db_connect() self.makedb() self.db_close() def db_connect(self): try: self.dbcon = sqlite3.connect(self.bpath.CompanyMasterDb) self.dbcur = self.dbcon.cursor() except sqlite3.Error as e: print(e) def db_close(self, rollback=False): if rollback: self.dbcon.rollback() else: self.dbcon.commit() self.dbcon.close() def db_commit(self): self.dbcon.commit() def insert_business_id(self, business_id, oldvalue): newvalue = {} newvalue['BusinessId'] = business_id for key, dvalue in oldvalue.items(): newvalue[key] = dvalue return newvalue def split_dict(self): if not self.bpath.company_main.exists(): for business_id, details in self.master.items(): firstcompany = True print(f'Business_id: {business_id}') for key, value in details.items(): if key == 'BusinessDetail': if len(value): newvalue = self.insert_business_id(business_id, value) self.Detail[business_id] = newvalue elif key == 'PrincipalsDetail': if len(value): newvalue = self.insert_business_id(business_id, value) self.Principals[business_id] = newvalue elif key == 'AgentDetail': if len(value): newvalue = self.insert_business_id(business_id, value) self.Agents[business_id] = newvalue elif key == 'FilingsDetail': if len(value): newvalue = self.insert_business_id(business_id, value) self.Filings[business_id] = newvalue else: if firstcompany: cnode = self.CompanyMain[business_id] = {} firstcompany = False cnode[key] = value self.save_split_files() else: self.load_split_files() def save_split_files(self): with self.bpath.company_main.open('w') as fp: 
json.dump(self.CompanyMain, fp) with self.bpath.company_detail.open('w') as fp: json.dump(self.Detail, fp) with self.bpath.company_principals.open('w') as fp: json.dump(self.Principals, fp) with self.bpath.company_agents.open('w') as fp: json.dump(self.Agents, fp) with self.bpath.company_filings.open('w') as fp: json.dump(self.Filings, fp) def load_split_files(self): with self.bpath.company_main.open() as fp: self.CompanyMain = json.load(fp) with self.bpath.company_detail.open() as fp: self.Detail = json.load(fp) with self.bpath.company_principals.open() as fp: self.Principals = json.load(fp) with self.bpath.company_agents.open() as fp: self.Agents = json.load(fp) with self.bpath.company_filings.open() as fp: self.Filings = json.load(fp) def makedb(self): self.cretab.create_tables() self.split_dict() self.insert_data(self.CompanyMain, 'Company') self.insert_data(self.Detail, 'Details') self.insert_data(self.Principals, 'Principals') self.insert_data(self.Agents, 'Agents') self.insert_data(self.Filings, 'Filings') def insert_data(self, data_dict, tablename): print(f'Loading {tablename} table') self.base_insert = f"INSERT INTO {tablename} VALUES " keys = list(data_dict.keys()) for key in keys: try: data = data_dict[key] # print(f'data: {data}') columns = f"(" for item in data: value = data_dict[key][item] if not len(value): continue if isinstance(value, dict): columns = f"{columns}'{item}', " for key1, subitem in value.items(): subitem = subitem.replace("'", "''") columns = f"{columns}'{subitem}', " break value = value.replace("'", "''") columns = f"{columns}'{value}', " columns = f"{columns[:-2]});" sqlstr = f'{self.base_insert}{columns}' # print(f'sqlstr: {sqlstr}') self.dbcon.execute(sqlstr) except sqlite3.OperationalError: print(f'sqlite3.OperationalError\ndata: {data}\nsqlstr: {sqlstr}') sys.exit(0) except AttributeError: print(f'AttributeError\nkey: {key}, item: {item}, value: {value}, data: {data}') sys.exit(0) self.db_commit() if __name__ == '__main__': 
CreateDatabase() ```

# How to back up the data, including all 18 thousand summary pages and 400 thousand detail pages

* Use rsync, which is dramatically faster than cp (and also works over a network)
* Get a list of the pages:
  - `find ./Idfiles/ -name '*.html' > ./tmp/Idlist.txt`
* Run the backup from the makerProjectApril2019 directory:
  - `rsync -avz ./data/ backup_directory/data/`
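The same backup can be scripted in Python when rsync is unavailable; below is a minimal sketch using `shutil` as a portable (slower) stand-in. The paths are illustrative only — rsync remains the right tool for repeated syncs:

```python
import shutil
import tempfile
from pathlib import Path

# Illustrative stand-in for the project tree: data/Idfiles/*.html
root = Path(tempfile.mkdtemp())
src = root / "data"
(src / "Idfiles").mkdir(parents=True)
(src / "Idfiles" / "00001.html").write_text("<html></html>")

# Step 1: build the file list (the `find ... > Idlist.txt` step)
idlist = sorted(str(p) for p in src.rglob("*.html"))

# Step 2: mirror the tree (the `rsync -avz ./data/ backup_directory/data/` step)
dest = root / "backup_directory" / "data"
shutil.copytree(src, dest)

print(len(idlist), (dest / "Idfiles" / "00001.html").exists())
```

Unlike rsync, `copytree` copies everything on every run; for incremental backups rsync's delta transfer is the reason it is the recommended tool above.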
```
import pickle
import json
import os
import math
from gensim.models import Word2Vec

data = []
for i in range(100):
    with open('processed/sentences/sen' + str(i), 'rb') as f:
        data += pickle.load(f)
len(data)

with open('processed/entities.pkl', 'rb') as f:
    entities = pickle.load(f)
```

## Relation Annotation

```
# Replace all relations according to the rules below
tgs = {
    "per": ["人物", "歌手", "演员", "作家"],
    "org": ["机构", "企业", "公司", "学校", "部门", "大学"],
    "pl": ["地点", "地理", "城市", "国家", "地区"]
}
config = {
    # person: 9 relation types
    "per2per_family_members": ["父亲", "母亲", "丈夫", "妻子", "儿子", "女儿", "哥哥", "妹妹", "姐姐", "弟弟", "孙子", "孙女", "爷爷", "奶奶", "外婆", "外公", "家人", "家庭成员", "夫人", "对象", "夫君"],
    "per2per_social_members": ["朋友", "好友", "同学", "合作", "搭档", "经纪人", "师从"],
    "per2pl_birth_place": ["出生地", "出生于", "来自", "歌手出生地", "作者出生地", "出生在", "作者出生地", "出生"],
    "per2pl_live_place": ["居住地", "主要居住地", "居住", "现居住", "目前居住地", "现居住于", "居住地点", "居住于"],
    "per2pl_country": ["国籍", "国家"],
    "per2org_graduate_from": ["毕业院校", "毕业于", "毕业学院", "本科毕业院校", "最后毕业院校", "毕业高中", "毕业地点", "本科毕业学校", "知名校友"],
    "per2org_belong_to": ["隶属单位", "经纪公司", "隶属关系", "行政隶属", "隶属学校", "隶属大学", "隶属地区", "所属公司", "签约公司", "任职公司", "工作单位", "所属"],
    "per2oth_profession": ['职业'],
    "per2oth_nation": ['民族'],
    # organization: 9 relation types
    "org2per_owner": ["拥有", "拥有者"],
    "org2per_founder": ["创始人", "创始", "主要创始人", "集团创始人"],
    "org2per_school_leader": ["校长", "现任校长", "学校校长", "总校长"],
    "org2per_leader": ["领导", "现任领导", "领导单位", "主要领导", "领导人", "主要领导人"],
    "org2org_surroundings": ["周围景观", "周边景点"],
    "org2pl_location": ["所属地区", "国家", "地区", "地理位置", "位于", "区域", "地点", "总部地点", "所在地", "所在区域", "位于城市", "总部位于", "酒店位于", "学校位于", "最早位于", "地址", "所在城市", "城市", "主要城市", "坐落于"],
    # place: 7 relation types
    "pl2per_main_character": ["相关人物", "知名人物", "历史人物"],
    "pl2pl_location": ["所属地区", "所属国", "所属洲", "所属州", "所属国家", "最大城市", "地区", "地理位置", "位于", "区域", "地点", "总部地点", "所在地", "所在区域", "位于城市", "总部位于", "酒店位于", "学校位于", "最早位于", "地址", "所在城市", "城市", "主要城市", "坐落于"],
    "pl2pl_adjacement": ["毗邻", "东邻", "邻近行政区", "相邻", "紧邻", "邻近", "北邻", "南邻", "邻国"],
    "pl2pl_contains": ["包含", "包含国家", "包含人物", "下辖地区", "下属"],
    "pl2pl_captial":
["首都"],
    "pl2org_sights": ["著名景点", "主要景点", "旅游景点", "特色景点"],
    "pl2oth_climate": ["气候类型", "气候条件", "气候", "气候带"],
}
name = {
    "per2per_family_members": "/人物/人物/家庭成员",
    "per2per_social_members": "/人物/人物/社交关系",
    "per2pl_birth_place": "/人物/地点/出生地",
    "per2pl_live_place": "/人物/地点/居住地",
    "per2pl_country": "/人物/地点/国籍",
    "per2org_graduate_from": "/人物/组织/毕业于",
    "per2org_belong_to": "/人物/组织/属于",
    "per2oth_profession": "/人物/其它/职业",
    "per2oth_nation": "/人物/其它/民族",
    "org2per_owner": "/组织/人物/拥有者",
    "org2per_founder": "/组织/人物/创始人",
    "org2per_school_leader": "/组织/人物/校长",
    "org2per_leader": "/组织/人物/领导人",
    "org2org_surroundings": "/组织/组织/周边",
    "org2pl_location": "/组织/地点/位于",
    "pl2per_main_character": "/地点/人物/相关人物",
    "pl2pl_location": "/地点/地点/位于",
    "pl2pl_adjacement": "/地点/地点/毗邻",
    "pl2pl_contains": "/地点/地点/包含",
    "pl2pl_captial": "/地点/地点/首都",
    "pl2org_sights": "/地点/组织/景点",
    "pl2oth_climate": "/地点/其它/气候"
}

def check(string, tgs):
    for t in tgs:
        if t in string:
            return True
    return False

# Annotate each candidate with a relation label
processed_data = []
for can in data:
    # can = [sentence, head, tail, segment]
    sentence, head, tail, segment = can
    if tail not in entities[can[1]].values():
        can.append('NA')
        processed_data.append(can)
        continue
    rel = ''
    for key, value in entities[can[1]].items():
        if value == tail:
            rel = key
    for key, value in config.items():
        if rel in value:
            tp = key.split('_')[0].split('2')
            if check(entities[can[1]].get('BaiduTAG', ""), tgs[tp[0]]):
                if tp[1] == 'oth':
                    can.append(name[key])
                    processed_data.append(can)
                elif check(entities[can[2]].get('BaiduTAG', ""), tgs[tp[1]]):
                    can.append(name[key])
                    processed_data.append(can)
            break
len(processed_data)

e2id = {}
count = 0
e_set = set()
for i in processed_data:
    e_set.add(i[1])
    e_set.add(i[2])
for e in e_set:
    e2id[e] = count
    count += 1

total_data = []
for d in processed_data:
    total_data.append({
        'head': {'word': d[1], 'id': str(e2id[d[1]])},
        'relation': d[-1],
        'tail': {'word': d[2], 'id': str(e2id[d[2]])},
        'sentence': ' '.join(d[-2]),
        'ori_sen': d[0],
        'sen_seg': d[-2]
    })
rl = {}
for i in total_data:
    rl[i['relation']] = rl.get(i['relation'], 0) + 1

# Hold out roughly 25% of each relation (at least one instance) for testing
trl = {}
record = {}
for k, v in rl.items():
    trl[k] = int(max(math.floor(v * 0.25), 1))
    record[k] = 0
train = []
test = []
for i in total_data:
    if record[i['relation']] <= trl[i['relation']]:
        test.append(i)
        record[i['relation']] += 1
    else:
        train.append(i)

if not os.path.isdir('../data'):
    os.mkdir('../data')
if not os.path.isdir('../data/chinese'):
    os.mkdir('../data/chinese')
with open('../data/chinese/train.json', 'w', encoding='utf8') as f:
    json.dump(train, f)
with open('../data/chinese/test.json', 'w', encoding='utf8') as f:
    json.dump(test, f)

count = 0
r2id = {}
for k in list(rl.keys()):
    r2id[k] = count
    count += 1
with open('../data/chinese/rel2id.json', 'w', encoding='utf8') as f:
    json.dump(r2id, f)
```

## Training Word Vectors

```
senlist = []
for d in data:
    senlist.append(d[-1])

# sg=1 selects the skip-gram model (the original code passed sg=5, which
# gensim also treats as skip-gram, since the flag is only tested for truth)
model = Word2Vec(senlist, sg=1, min_count=1, size=50, workers=4)

w2v = {}
for i in model.wv.index2word:
    w2v[i] = model.wv[i]
len(w2v)

new_w2v = []
for word, vec in w2v.items():
    new_w2v.append({
        'word': word,
        'vec': [float(i) for i in vec]
    })
with open('../data/chinese/word_vec.json', 'w', encoding='utf8') as f:
    json.dump(new_w2v, f)
```
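The per-relation hold-out above (roughly a quarter of each relation's instances, with a floor of one) can be sketched as a standalone function. `split_by_relation` and the sample records are hypothetical names for illustration; the sketch uses `<` rather than the notebook's `<=` so the quota is exact:

```python
import math
from collections import Counter

def split_by_relation(records, test_frac=0.25):
    """Hold out ~test_frac of each relation's instances (at least one) for the test set."""
    counts = Counter(r['relation'] for r in records)
    quota = {rel: max(math.floor(n * test_frac), 1) for rel, n in counts.items()}
    taken = Counter()
    train, test = [], []
    for r in records:
        rel = r['relation']
        if taken[rel] < quota[rel]:  # fill the test quota first, in input order
            test.append(r)
            taken[rel] += 1
        else:
            train.append(r)
    return train, test

sample = [{'relation': 'NA'}] * 8 + [{'relation': '/地点/地点/首都'}] * 2
train, test = split_by_relation(sample)
print(len(train), len(test))  # → 7 3
```

Taking the test instances in input order keeps the split deterministic; shuffling `records` first would give a random per-relation sample instead.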
``` import numpy as np import pandas as pd import sys print('Python version: ',sys.version) print('NumPy version: \t',np.__version__) print('Pandas version:\t',pd.__version__) ``` **This notebook is based on Python 3.5.2, NumPy 1.12.0 and Pandas 0.20.1; if your versions do not match exactly, some examples may behave differently.** # <span id = "SETTING UP">SETTING UP</span> ## <span id = "What is NumPy">1.1 What is NumPy</span> At the core of the NumPy package is the *ndarray* object. ### Difference between NumPy arrays and Python sequences - NumPy arrays have a fixed size at creation, unlike Python lists, which can grow dynamically. Changing the size of an *ndarray* will create a new array and delete the original. - The elements in a NumPy array are all required to be of the same data type, and thus will be the same size in memory. **The exception:** one can have arrays of (Python, including NumPy) objects, thereby allowing for arrays of different sized elements. - NumPy arrays facilitate advanced mathematical and other types of operations on large numbers of data. Typically, such operations are executed more efficiently and with less code than is possible using Python's built-in sequences. - A growing plethora of scientific and mathematical Python-based packages are built on NumPy arrays, so NumPy is becoming more and more important. The points about **sequence size and speed** are particularly important in scientific computing. As a simple example, consider the case of multiplying each element in a 1-D sequence with the corresponding element in another sequence of the same length. If the data are stored in two Python lists, `a` and `b`, we could iterate over each element: ``` c = [] for i in range(len(a)): c.append(a[i]*b[i]) ``` This produces the correct answer, but if `a` and `b` each contain millions of numbers, we will pay the price for the inefficiencies of looping in Python.
We could accomplish the same task much more quickly in C by writing (for clarity we neglect variable declarations and initializations, memory allocation, etc.) ``` for (i = 0; i < rows; i++) { c[i] = a[i]*b[i]; } ``` This saves all the overhead involved in interpreting the Python code and manipulating Python objects, but at the expense of the benefits gained from coding in Python. Furthermore, the coding work required increases with the dimensionality of our data. In the case of a 2-D array, for example, the C code (abridged as before) expands to ``` for (i = 0; i < rows; i++) { for (j = 0; j < columns; j++) { c[i][j] = a[i][j]*b[i][j]; } } ``` NumPy gives us the best of both worlds: element-by-element operations are the "default mode" when an *ndarray* is involved, but the element-by-element operation is speedily executed by pre-compiled C code. In NumPy ``` c = a * b ``` does what the earlier examples do, at **near-C speeds**, but with the code **simplicity** we expect from something based on Python. Indeed, the NumPy idiom is even simpler! This last example illustrates two of NumPy's features which are the basis of much of its power: **vectorization** and **broadcasting**. **Vectorization** describes the absence of any explicit looping, indexing, etc., in the code - these things are taking place, of course, just "behind the scenes" in optimized, pre-compiled C code. Vectorized code has many advantages, among which are
Without vectorization, our code would be littered with inefficient and difficult to read `for` loops *** **Broadcasting** is the term used to describe the implicit element-by-element behavior of operations; generally speaking, in NumPy all operations, not just arithmetic operations, but logical, bit-wise, functional, etc., behave in this implicit element-by-element fashion, i.e., the broadcast. Moreover, in the example above, `a` and `b` could be multidimensional arrays of the same shape, or a scalar and no array, or even two arrays, or even two arrays of with different shapes, provided that the smaller array is "expandable" to the shape of the larger in such a way that the resulting broadcasting is unambiguous. For detailed "rules" of broadcasting see [numpy.doc.broadcasting](#Broadcasting) *** Numpy fully supports an **object-oriented approach**, startinng, once again, with *ndarray*. For example, *ndarray* is a class, possessing numerous methods and attributes. Many of its methods mirror functions in the outer-most (最外层的) NumPy namespace, giving the programmer complete freedom to code in whichever paradigm (范式) she prefers and/or which seems most approprite to the task at hand. ## <span id = "Installing NumPy">1.2 What is Numpy</span> In most use cases the best way to install NumPy on your system is by using an pre-built package for your operating system. Please see http://scipy.org/install.html for links to available options. For instructions on building for source package, see [*Building from source*](#BUILDING FROM SOURCE). This information is useful mainly for advanced users. # <span id="QUICKSTART TUTORIAL">QUICKSTART TUTORIAL</span> ## <span id="Prerequisites">2.1 Prerequisites</span> Before reading this tutorial you shold know a bit of Python. If you would like to refresh your memory, take a look at the [Python tutorial](http://docs.python.org/3/tutorial/ "Link to tutorial"). 
If you wish to work the examples in this tutorial, you must also have some software installed on your computer. Please see http://scipy.org/install.html for instructions. ## <span id="The Basics">2.2 The Basics</span> NumPy's main object is the homogeneous multidimensional array. It is a table of elements (usually numbers), all of the same type, indexed by a tuple of positive integers. In NumPy dimensions are called *axes*. The number of axes is the *rank*. For example, the coordinates of a point in 3D space `[1,2,1]` is an array of rank 1, because it has one axis. That axis has a length of 3. In the example pictured below, the array has rank 2 (it is 2-dimensional). The first dimension (axis) has a length of 2, the second dimension has a length of 3. ``` [[1.,0.,0.], [0.,1.,2.]] ``` NumPy's array class is called `ndarray`. It is also known by the alias `array`. Note that `numpy.array` is **not the same as the Standard Python Library class** `array.array`, which **only handles one-dimensional arrays** and offers less functionality. The more important attributes of an `ndarray` object are: - **ndarray.ndim** the number of axes (dimensions) of the array. In the Python world, the number of dimensions is referred to as the *rank*. - **ndarray.shape** the dimensions of the array. This is a tuple of integers indicating the size of the array in each dimension. For a matrix with *n* rows and *m* columns, `shape` will be `(n,m)`. The length of the `shape` tuple is therefore the rank, or number of dimensions, `ndim`. - **ndarray.size** the total number of elements of the array. This is equal to the product of the elements of `shape`. - **ndarray.dtype** an object describing the type of the elements in the array. One can create or specify dtype's using standard Python types. Additionally NumPy provides types of its own. numpy.int32, numpy.int16, and numpy.float64 are some examples. - **ndarray.itemsize** the size in bytes of each element of the array.
For example, an array of elements of type `float64` has `itemsize` 8 (=64/8), while one of type `complex32` has `itemsize` 4 (=32/8). It is equivalent to `ndarray.dtype.itemsize`. - **ndarray.data** the buffer containing the actual elements of the array. Normally, we won't need to use this attribute because we will access the elements in an array using indexing facilities. ### <span id = "2.2.1 An example">2.2.1 An example</span> ``` a = np.arange(15).reshape(3,5) a print(a.shape) print(a.ndim) print(a.dtype.name) print(a.itemsize) print(a.size) print(type(a)) b = np.array([6,7,8]) print(b) b print(type(b)) type(b) ``` ### <span id = "2.2.2 Array Creation">2.2.2 Array Creation</span> There are several ways to create arrays. For example, you can create an array from a regular Python *list* or *tuple* using the `array` function. The type of the resulting array is deduced from the type of the elements in the sequences. ``` # import numpy as np c = np.array([2,3,4]) c c.dtype d = np.array([2.2,3.3,4.4]) d.dtype ``` *** **Important** A frequent error consists in calling `array` with multiple numeric arguments, rather than providing a single list of numbers as an argument. *** `array` transforms sequences of sequences into two-dimensional arrays, sequences of sequences of sequences into three-dimensional arrays, and so on. ``` b = np.array([(1.5,2,3), (4,5,6)]) b ``` The type of the array can also be explicitly specified at creation time: ``` c = np.array( [ [1,2], [3,4] ], dtype=complex) c ``` Often, the elements of an array are originally unknown, but its size is known. Hence, NumPy offers several functions to create arrays with initial placeholder content. These minimize the necessity of growing arrays, an **expensive operation.** The function `zeros` creates an array full of zeros, the function `ones` creates an array full of ones, and the function `empty` creates an array whose initial content is random and depends on the state of the memory.
By default, the dtype of the created array is **`float64`**. ``` np.zeros( (3,4) ) np.ones( (2,3,4), dtype=np.int16) #dtype can also be specified np.empty( (2,3) ) # uninitialized, output may vary ``` To create sequences of numbers, NumPy provides a function analogous to `range` that returns **arrays** instead of **lists**. ``` np.arange( 10, 30, 5 ) np.arange( 0, 2, 0.3 ) ``` When `arange` is used with floating point arguments, it is generally not possible to predict the number of elements obtained, due to the finite floating point precision. For this reason, it is usually better to use the function `linspace` that receives as an argument the number of elements that we want, instead of the step: ``` from numpy import pi np.linspace( 0, 2, 9 ) # 9 numbers from 0 to 2 x = np.linspace( 0, 2*pi, 100) f = np.sin(x) ``` **See also:** array, zeros, zeros_like, ones, ones_like, empty, empty_like, arange, linspace, numpy.random.rand, numpy.random.randn, fromfunction, fromfile ### <span id = "2.2.3 Printing Arrays">2.2.3 Printing Arrays</span> When you print an array, NumPy displays it in a similar way to nested lists, but with the following layout: * the last axis is printed from left to right, * the second-to-last is printed from top to bottom, * the rest are also printed from top to bottom, with each slice separated from the next by an empty line. One-dimensional arrays are then printed as rows, two-dimensional arrays as matrices and three-dimensional arrays as lists of matrices. ``` a = np.arange(6) print(a , '\n---a---') b = np.arange(12).reshape(4,3) print(b , '\n---b---') c = np.arange(24).reshape(2,3,4) print(c , '\n---c---') ``` See [*below*](#2.3 Shape Manipulation) to get more details on `reshape`.
If an array is too *large* to be printed, NumPy automatically **skips** the *central part* of the array and **only** prints the *corners*: ``` print(np.arange(10000),'\n- - - - - - -\n',np.arange(10000).reshape(100,100)) ``` *** **Important** To disable this behaviour and force NumPy to print the entire array, you can change the printing options using **`set_printoptions`**. `>>> np.set_printoptions(threshold=np.nan)` *** ### <span id = "2.2.4 Basic Operations">2.2.4 Basic Operations</span> Arithmetic operations on arrays apply ***elementwise***. A new array is created and filled with the result. ``` a = np.array( [20,30,40,50]) b = np.arange(4) b c = a-b c c**2 10*np.sin(a) a<35 ``` Unlike in many matrix languages, the product operator * operates elementwise in NumPy arrays. The matrix product can be performed using the `dot` function or method: ``` A = np.array( [[1,1], [0,1]] ) B = np.array( [[2,0], [3,4]] ) print(A) print(B) print("\nTheir elementwise product is:\n",A*B) print("\nTheir matrix product is:\n", A.dot(B)) print("\nOr we can write np.dot(A, B):\n", np.dot(A, B)) ``` Some operations, such as **`+=`** and **`*=`**, act in place to *modify* an existing array **rather than** create a *new* one ``` a = np.ones((2,3), dtype = int) b = np.random.random((2,3)) a *= 3 print('- - a - -\n', a) b += a print('- - b - -\n', b) ``` *** **Important** When operating with arrays of different types, the type of the resulting array corresponds to the more general or precise one (a behaviour known as upcasting). In-place operations like `+=`, however, cannot upcast their target array: ``` >>> a += b Traceback (most recent call last): ...
TypeError: Cannot cast ufunc add output from dtype('float64') to dtype('int32') with casting rule 'same_kind' ``` See the next cell for more information *** ``` a = np.ones(3, dtype=np.int32) b = np.linspace(0,pi,3) print('>>> b.dtype.name\n', b.dtype.name) c = a+b print('>>> c\n', c) print('>>> c.dtype.name\n', c.dtype.name) d = np.exp(c*1j) # use 'j' to represent the imaginary unit print('>>> d\n', d) print('>>> d.dtype.name\n', d.dtype.name) ``` Many unary operations, such as computing the sum of all the elements in the array, are implemented as methods of the `ndarray` class. ``` a = np.random.random((2,3)) print('>>> a\n', a) print('>>> a.sum()\n', a.sum()) print('>>> a.min()\n', a.min()) print('>>> a.max()\n', a.max()) ``` By default, these operations apply to the array as though it were a list of numbers, regardless of its shape. However, by specifying the **`axis`** parameter you can apply an operation along the specified axis of an array: *Here, **axis=1** means calculation along each **row**, and **axis=0** along each **column*** ``` b = np.arange(12).reshape(3,4) print('>>> b\n', b) print('>>> b.sum(axis=0)\n', b.sum(axis=0)) print('>>> b.min(axis=1)\n', b.min(axis=1)) print('>>> b.cumsum(axis=1)\n', b.cumsum(axis=1)) ``` ### <span id = "2.2.5 Universal Functions">2.2.5 Universal Functions</span> NumPy provides familiar mathematical functions such as *sin*, *cos*, and *exp*. In NumPy, these are called "universal functions" (`ufunc`). Within NumPy, these functions operate *elementwise* on an array, producing an *array* as **output**.
``` B = np.arange(3) print(">>> B\n", B) print(">>> np.exp(B)\n", np.exp(B)) print(">>> np.sqrt(B)\n", np.sqrt(B)) C = np.array([2., -1., 4.]) print(">>> np.add(B, C)\n", np.add(B, C)) ``` **See also:** all, any, apply_along_axis, argmax, argmin, argsort, average, bincount, ceil, clip, conj, corrcoef, cov, cross, cumprod, cumsum, diff, dot, floor, inner, *inv*, lexsort, [max](https://docs.python.org/dev/library/functions.html#max "Link to max"), maximum, mean, median, [min](https://docs.python.org/dev/library/functions.html#min "Link to min"), minimum, nonzero, outer, prod, [re](https://docs.python.org/dev/library/re.html#module-re "Link to re"), [round](https://docs.python.org/dev/library/functions.html#round "Link to round"), sort, std, sum, trace, transpose, var, vdot, vectorize, where ### <span id = "2.2.6 Indexing, Slicing and Iterating">2.2.6 Indexing, Slicing and Iterating</span> **One-dimensional** arrays can be indexed, sliced and iterated over, much like [lists](https://docs.python.org/3/tutorial/introduction.html#lists "Link to lists") and other Python sequences. ``` a = np.arange(10)**3 print(">>> a\n", a) print(">>> a[2]\n", a[2]) print(">>> a[2:5]\n", a[2:5]) a[:6:2] = -1000 # equivalent to a[0:6:2] = -1000; from the start up to position 6, exclusive, set every second element to -1000 print(">>> a\n", a) print(">>> a[ : :-1]\n", a[ : :-1]) # a, reversed print('- - cube root of elements in a - -') for i in a: print(i**(1/3.)) ``` *** *Why do the negative elements produce a warning here?* Because `**(1/3.)` is a fractional power, not a true cube root: for a negative base the real-valued result is undefined, so NumPy returns `nan` (with a warning) rather than the negative real root. *** *** **Multidimensional** arrays can have one index per axis.
These indices are given in a tuple separated by commas: ``` def f(x,y): return 10*x+y b = np.fromfunction(f,(5,4),dtype=int) print(">>> b\n", b) print(">>> b[2,3]\n", b[2,3]) print(">>> b[0:5,1]\n", b[0:5,1]) print(">>> b[ : ,1]\n", b[ : ,1]) print(">>> b[1:3, : ]\n", b[1:3, : ]) ``` When fewer indices are provided than the number of axes, the missing indices are considered complete slices; here `b[-1]` is equivalent to `b[-1, :]` ``` print(">>> b[-1]\n", b[-1]) ``` The expression within brackets in `b[i]` is treated as an `i` followed by as many instances of `:` as needed to represent the remaining axes. NumPy also allows you to write this using dots as `b[i,...]`. The **dots** (...) represent as many colons as needed to produce a complete indexing tuple. For instance, if `x` is a rank 5 array (i.e., it has 5 axes), then - `x[1,2,...]` is equivalent to `x[1,2,:,:,:]`, - `x[...,3]` to `x[:,:,:,:,3]` and, - `x[4,...,5,:]` to `x[4,:,:,5,:]`. ``` c = np.array( [[[ 0, 1, 2], [ 10, 12, 13]], [[100,101,102], [110,112,113]]]) print(">>> c.shape\n", c.shape) print(">>> c[1,...]\n", c[1,...]) # same as c[1,:,:] or c[1] print(">>> c[...,2]\n", c[...,2]) # same as c[:,:,2] ``` *** **Iterating** over multidimensional arrays is done with respect to the first axis: ``` for row in b: print(row) ``` However, if one wants to perform an operation on **each element** in the array, one can use the **`flat`** attribute which is an [iterator](https://docs.python.org/2/tutorial/classes.html#iterators "Link to iterator") over all the elements of the array: ``` for ele in b.flat: print(ele) ``` **See also:** [*Indexing*](#3.4 Indexing), *arrays.indexing* (reference), newaxis, ndenumerate, indices ## <span id = "2.3 Shape Manipulation">2.3 Shape Manipulation</span> ### <span id = "2.3.1 Changing the shape of an array">2.3.1 Changing the shape of an array</span> An array has a shape given by the number of elements along each axis ``` a = np.floor(10*np.random.random((3,4)))
print(a) print(">>> a.shape\n", a.shape) ``` The shape of an array can be changed with various commands: ``` print(">>> a.ravel()\n", a.ravel()) a.shape = (6, 2) print('- - After setting the shape to (6,2) - -') print(a) print('- - After a transpose operation - -') print(a.T) ``` The order of the elements in the array resulting from ravel() is **normally "C-style", that is, the rightmost index "changes the fastest"**, so the element after a[0,0] is a[0,1]; that is to say, *we move from the first element in the first row to the second element in the first row, if it exists, not to the first element in the second row.* If the array is reshaped to some other shape, again the array is treated as "C-style". NumPy normally creates arrays stored in this order, so ravel() will usually not need to copy its argument, but if the array was made by taking slices of another array or created with unusual options, it may need to be copied. The functions ravel() and reshape() can also be instructed, using an optional argument, to use FORTRAN-style arrays, in which the leftmost index changes the fastest.
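The C-style versus FORTRAN-style ordering described above is easiest to see on a small array; a quick sketch:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)   # [[0 1 2], [3 4 5]]
print(a.ravel())                 # C order: rightmost index changes fastest → [0 1 2 3 4 5]
print(a.ravel(order='F'))        # FORTRAN order: leftmost index changes fastest → [0 3 1 4 2 5]
```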
*** The `reshape` function returns its argument with a modified shape, whereas the `ndarray.resize` method modifies the array itself: ``` a a.reshape((2,6)) a a.resize((2,6)) a ``` If a dimension is given as -1 in a reshaping operation, the other dimensions are automatically calculated: ``` print(a.reshape((2,-1))) ``` **Important** *ValueError: negative dimensions not allowed*, if you use -1 in a resizing operation like this: a.resize((2,-1)) **See also:** ndarray.shape, reshape, resize, ravel ### <span id = "2.3.2 Stacking together different arrays">2.3.2 Stacking together different arrays</span> Several arrays can be stacked together along different axes: ``` a = np.floor(10*np.random.random((2,2))) a b = np.floor(10*np.random.random((2,2))) b np.vstack((a,b)) # stack them vertically np.hstack((a,b)) # stack them horizontally ``` The function `column_stack` stacks 1D arrays as columns into a 2D array. It is equivalent to `hstack` only for 2D arrays: ``` np.column_stack((a,b)) # with 2D array a = np.array([4., 2.]) b = np.array([2., 8.]) from numpy import newaxis a[:,newaxis] ``` For arrays with more than two dimensions, `hstack` stacks along their second axes, `vstack` stacks along their first axes, and `concatenate` allows for an optional argument giving the number of the axis along which the concatenation should happen. *** **Note** In complex cases, `r_` and `c_` are useful for creating arrays by stacking numbers along one axis. They allow the use of range literals (":") **Important** Despite their names suggesting "row" and "column", `r_` translates slice objects to concatenation along the first axis, and `c_` translates slice objects to concatenation along the second axis. For more uses see the documentation.
``` print(np.r_[1:4, 6,8,3:9]) print(np.c_[1:4, 5:8, 12:15]) ``` When used with arrays as arguments, `r_` and `c_` are similar to `vstack` and `hstack` in their default behavior, but allow for an optional argument giving the number of the axis along which to concatenate. **See also:** hstack, vstack, column_stack, concatenate, c_, r_ ### <span id = "2.3.3 Splitting one array into several smaller ones">2.3.3 Splitting one array into several smaller ones</span> Using `hsplit`, you can split an array along its horizontal axis, either by specifying the number of equally shaped arrays to return, or by specifying the columns after which the division should occur: ``` a = np.floor(10*np.random.random((2,12))) print('>>> a\n', a) print('>>> np.hsplit(a,3)\n', np.hsplit(a,3)) # split a into 3 equally sized parts print('>>> np.hsplit(a,(3,4))\n', np.hsplit(a,(3,4))) # split a after the third and the fourth column ``` ## <span id = "2.4 Copies and Views">2.4 Copies and Views</span> While operating and manipulating arrays, their data is sometimes *copied into a new array* and sometimes not. There are **3 cases:** ### <span id = "2.4.1 No Copy at All">2.4.1 No Copy at All</span> - Simple assignments make no copy of array objects or of their data ``` a = np.arange(12) b = a print('>>> b is a\n', b is a) b.shape = 3,4 print('>>> b.shape = 3,4') print('>>> a.shape\n', a.shape) ``` - Function calls make no copy, for Python passes mutable objects as references ``` def f(x): print(id(x)) print(id(a)) # id is a unique identifier of an object f(a) # f prints the id itself; it has no return value ``` ### <span id = "2.4.2 View or Shallow Copy">2.4.2 View or Shallow Copy</span> Different array objects can share the same data. The `view` method creates a new array object that looks at the same data.
``` c = a.view() c is a c.base is a # c is a view of the data owned by a c.flags.owndata c.shape = 2,6 a.shape # a's shape does not change with c's c[0,4] = 1236454465 a # but a's data changes with c's ``` Slicing an array returns a view of it! ``` s = a[:,1:3] s # Now we change the data through the slice, which changes a as well s[:]=10 a ``` *** **Important** A view shares its data (though not its shape) with the base array. Changing data through the view therefore changes the original array's data too. ### <span id = "2.4.3 Deep Copy">2.4.3 Deep Copy</span> The `copy` method makes a complete copy of the array and its data. ``` d = a.copy() print('d is a', d is a) print('d.base is a', d.base is a) d[0,0]=9999 a ``` *** **Important** This copied array shares nothing with the original array ### <span id = "2.4.4 Function and Methods Overview">2.4.4 Function and Methods Overview</span> Here is a list of some useful NumPy functions and methods names ordered in categories. See *routines* for the full list.
**Array Creation** ``` arange, array, copy, empty, empty_like, eye, fromfile, fromfunction, identity, linspace, logspace, mgrid, ogrid, ones, ones_like, r, zeros, zeros_like ``` **Conversions** ``` ndarray.astype, atleast_1d, atleast_2d, atleast_3d, mat ``` **Manipulations** ``` array_split, column_stack, concatenate, diagonal, dsplit, dstack, hsplit, hstack, ndarray.item, newaxis, ravel, repeat, reshape, resize, squeeze, swapaxes, take, transpose, vsplit, vstack ``` **Questions** ``` all, any, nonzero, where ``` **Ordering** ``` argmax, argmin, argsort, max, min, ptp, searchsorted, sort ``` **Operations** ``` choose, compress, cumprod, cumsum, inner, ndarray.fill, imag, prod, put, putmask, real, sum ``` **Basic Statistics** ``` cov, mean, std, var ``` **Basic Linear Algebra** ``` cross, dot, outer, linalg.svd, vdot ``` ## <span id = "2.5 Less Basic">2.5 Less Basic</span> ### <span id = "2.5.1 Broadcasting rules">2.5.1 Broadcasting rules</span> Broadcasting allows universal functions to deal in a meaningful way with inputs that do not have exactly the same shape. **The first rule of broadcasting** is that if all input arrays do not have the same number of dimensions, a "1" will be repeatedly prepended to the shapes of the smaller arrays until all the arrays have the same number of dimensions. **The second rule of broadcasting** ensures that arrays with a size of 1 along a particular dimension act as if they had the size of the array with the largest shape along that dimension. The value of the array element is assumed to be the same along that dimension for the "broadcast" array. After application of the broadcasting rules, the sizes of all arrays must match. More details can be found in [*Broadcasting*](#3.5 Broadcasting).
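The two rules above can be seen together in a small sketch: a shape-(3,1) column combined with a shape-(4,) row. Rule one pads (4,) to (1,4); rule two stretches both operands to (3,4):

```python
import numpy as np

col = np.arange(3).reshape(3, 1)   # shape (3, 1)
row = np.arange(4) * 10            # shape (4,) → treated as (1, 4) by the first rule
table = col + row                  # both stretched to (3, 4) by the second rule
print(table.shape)                 # (3, 4)
print(table)
```

Each entry of `table` is `col[i] + row[j]`, so broadcasting here builds an addition table without any explicit loop.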
## <span id = "2.6 Fancy indexing and index tricks">2.6 Fancy indexing and index tricks</span>

NumPy offers more indexing facilities than regular Python sequences. In addition to indexing by integers and slices, as we saw before, arrays can be indexed by **arrays of integers or booleans**.

### <span id = "2.6.1 Indexing with Arrays of Indices">2.6.1 Indexing with Arrays of Indices</span>

```
a = np.arange(12)**2
i = np.array([1,1,3,8,5])
j = np.array([[3,4],[9,7]])
print('a[i]: \n', a[i])
print('a[j]: \n', a[j])
```

When the indexed array is *multidimensional*, a single array of indices refers to the first dimension of `a`. The following example shows this behavior by converting an image of labels into a color image using a palette.

```
palette = np.array( [ [0,0,0], [255,0,0], [0,255,0], [0,0,255], [255,255,255] ] )
image = np.array( [ [0,1,2,0], [0,3,4,0] ] )
palette[image]
```

***
**Important** The returned array has the shape of the index array, with each index replaced by the corresponding row of `palette` — here shape (2, 4, 3).
***

We can also give indices for more than one dimension. The arrays of indices for each dimension must have the same shape.

```
a = np.arange(12).reshape(3,4)
i = np.array( [ [0,1], [1,2] ] )
j = np.array( [ [2,1], [3,3] ] )
print('a[i,j]:\n', a[i,j])
print('a[:,j]:\n', a[:,j])
print('a[i,:]:\n', a[i,:])
```

***
**Important** The index arrays must all have the same shape.
***

Another important thing to notice is shown below.

```
# i and j together can be seen as a tuple of two elements, each element
# giving the indices for one dimension
l = [i,j]
a[tuple(l)]  # recent NumPy versions require a tuple here, not a list
s = np.array( [i,j] )
print('s:\n', s)
print('l:\n', l)
print('THEY ARE NOT THE SAME ARRAY!!!')
a[tuple(s)]
```

We cannot use `a[s]`, because that array would be interpreted as indexing only the first dimension of `a` — obviously out of range!
***
Another common use of indexing with arrays is the search for the maximum value of time-dependent series:

```
time = np.linspace(20,145,5)
data = np.sin(np.arange(20).reshape(5,4))
print('time:\n', time)
print('data:\n', data)
ind = data.argmax(axis = 0)
print('ind:\n', ind)
time_max = time[ind]
data_max = data[ind, range(data.shape[1])]
print('time_max:\n', time_max)
print('data_max:\n', data_max)
np.all(data_max == data.max(axis=0))
```

`data[ind, range(data.shape[1])]` is equivalent to `[data[ind[0],0], data[ind[1],1], data[ind[2],2], data[ind[3],3]]`. All of this can be replaced by `data.max(axis=0)`.

You can also use indexing with **arrays as a target to assign** to:

```
a = np.arange(5)
a[[1,3,4]] = 0
a
```

If the list of indices contains the **same index more than once**, the assignment will finally leave **the last value** assigned to that position.

```
a = np.arange(5)
a[[1,1,1,1,1,1]] = [1,2,3,4,5,6]
a
```

With Python's `+=` construct, however, things may not be what you expect:

```
a = np.arange(5)
a[[1,1,1,1,1,1]] += [1,2,3,4,5,6]
a
```

Even though index 1 occurs six times, it is only incremented once: `a[i] += b` is evaluated as `a[i] = a[i] + b`, so the repeated index again just keeps the last value (here `1 + 6 = 7`).

### <span id = "2.6.2 Indexing with Boolean Arrays">2.6.2 Indexing with Boolean Arrays</span>

## <span id = "3.4 Indexing">3.4 Indexing</span>
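The boolean-indexing section above is left without examples in this notebook; as a minimal sketch of what it covers:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
b = a > 4              # boolean array with the same shape as a
print(a[b])            # 1-D array of the selected elements: [ 5  6  7  8  9 10 11]
a[b] = 0               # assignment through a boolean mask
print(a)
```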
```
import pandas as pd

df = pd.read_csv("https://api.featurelabs.com/datasets/online-retail-logs-2018-08-28.csv")
df['order_product_id'] = range(df.shape[0])
df.head(5)
```

As you can see, this is a dataframe containing several different data types, including dates, categorical values, numeric values, and natural language descriptions. Next, initialize Woodwork on this DataFrame.

## Initializing Woodwork on a DataFrame

Importing Woodwork creates a special namespace on your DataFrames, `DataFrame.ww`, that can be used to set or update the typing information for the DataFrame. As long as Woodwork has been imported, initializing Woodwork on a DataFrame is as simple as calling `.ww.init()` on the DataFrame of interest. An optional name parameter can be specified to label the data.

```
import woodwork as ww

df.ww.init(name="retail")
df.ww
```

Using just this simple call, Woodwork was able to infer the logical types present in the data by analyzing the DataFrame dtypes as well as the information contained in the columns. In addition, Woodwork also added semantic tags to some of the columns based on the logical types that were inferred.

All Woodwork methods and properties can be accessed through the `ww` namespace on the DataFrame. DataFrame methods called from the Woodwork namespace will be passed to the DataFrame, and whenever possible, Woodwork will be initialized on the returned object, assuming it is a Series or a DataFrame.

As an example, use the `head` method to create a new DataFrame containing the first 5 rows of the original data, with Woodwork typing information retained.

```
head_df = df.ww.head(5)
head_df.ww
head_df
```

## Updating Logical Types

If the initial inference was not to our liking, the logical type can be changed to a more appropriate value. Let's change some of the columns to a different logical type to illustrate this process.
In this case, set the logical type for the `order_product_id` and `country` columns to be `Categorical` and set `customer_name` to have a logical type of `PersonFullName`.

```
df.ww.set_types(logical_types={
    'customer_name': 'PersonFullName',
    'country': 'Categorical',
    'order_product_id': 'Categorical'
})
df.ww.types
```

Inspect the information in the `types` output. There, you can see that the logical type for the three columns has been updated with the logical types you specified.

## Selecting Columns

Now that you've prepared logical types, you can select a subset of the columns based on their logical types. Select only the columns that have a logical type of `Integer` or `Double`.

```
numeric_df = df.ww.select(['Integer', 'Double'])
numeric_df.ww
```

This selection process has returned a new Woodwork DataFrame containing only the columns that match the logical types you specified. After you have selected the columns you want, you can use the DataFrame containing just those columns as you normally would for any additional analysis.

```
numeric_df
```

## Adding Semantic Tags

Next, let's add semantic tags to some of the columns. Add the tag `product_details` to the `description` column, and tag the `total` column with `currency`.

```
df.ww.set_types(semantic_tags={'description':'product_details', 'total': 'currency'})
df.ww
```

Select columns based on a semantic tag. Only select the columns tagged with `category`.

```
category_df = df.ww.select('category')
category_df.ww
```

Select columns using multiple semantic tags, or a mixture of semantic tags and logical types.

```
category_numeric_df = df.ww.select(['numeric', 'category'])
category_numeric_df.ww

mixed_df = df.ww.select(['Boolean', 'product_details'])
mixed_df.ww
```

To select an individual column, specify the column name. Woodwork will be initialized on the returned Series and you can use the Series for additional analysis as needed.
```
total = df.ww['total']
total.ww
total
```

Select multiple columns by supplying a list of column names.

```
multiple_cols_df = df.ww[['product_id', 'total', 'unit_price']]
multiple_cols_df.ww
```

## Removing Semantic Tags

Remove specific semantic tags from a column if they are no longer needed. In this example, remove the `product_details` tag from the `description` column.

```
df.ww.remove_semantic_tags({'description':'product_details'})
df.ww
```

Notice how the ``product_details`` tag has been removed from the ``description`` column. If you want to remove all user-added semantic tags from all columns, you can do that, too.

```
df.ww.reset_semantic_tags()
df.ww
```

## Set Index and Time Index

At any point, you can designate certain columns as the Woodwork `index` or `time_index` with the methods [set_index](generated/woodwork.table_accessor.WoodworkTableAccessor.set_index.rst) and [set_time_index](generated/woodwork.table_schema.TableSchema.set_time_index.rst). These methods can be used to assign these columns for the first time or to change the column being used as the index or time index.

Index and time index columns contain `index` and `time_index` semantic tags, respectively.

```
df.ww.set_index('order_product_id')
df.ww.index

df.ww.set_time_index('order_date')
df.ww.time_index

df.ww
```

## Using Woodwork with a Series

Woodwork can also be initialized on a pandas Series:

```
series = pd.Series([1, 2, 3], dtype='int64')
series.ww.init(logical_type='Integer')
series.ww
```

In the example above, we specified the `Integer` LogicalType for the Series. Because `Integer` has a physical type of `int64` and this matches the dtype used to create the Series, no Series dtype conversion was needed and the initialization succeeds.

In cases where the LogicalType requires the Series dtype to change, a helper function `ww.init_series` must be used. This function will return a new Series object with Woodwork initialized and the dtype of the series changed to match the physical type of the LogicalType.
To demonstrate this case, first create a Series with a `string` dtype. Then, initialize a Woodwork Series with a `Categorical` logical type using the `init_series` function. Because `Categorical` uses a physical type of `category`, the dtype of the Series must be changed, and that is why we must use the `init_series` function here. The series that is returned will have Woodwork initialized with the LogicalType set to `Categorical` as expected, with the expected dtype of `category`.

```
string_series = pd.Series(['a', 'b', 'a'], dtype='string')
ww_series = ww.init_series(string_series, logical_type='Categorical')
ww_series.ww
```

As with DataFrames, Woodwork provides several methods that can be used to update or change the typing information associated with the series. As an example, add a new semantic tag to the series.

```
series.ww.add_semantic_tags('new_tag')
series.ww
```

As you can see from the output above, the specified tag has been added to the semantic tags for the series.

You can also access Series properties and methods through the Woodwork namespace. When possible, Woodwork typing information will be retained on the value returned. As an example, you can access the Series `shape` property through Woodwork.

```
series.ww.shape
```

You can also call Series methods such as `sample`. In this case, Woodwork typing information is retained on the Series returned by the `sample` method.

```
sample_series = series.ww.sample(2)
sample_series.ww
sample_series
```

## List Logical Types

Retrieve all the logical types present in Woodwork. These can be useful for understanding the logical types, as well as how they are interpreted.

```
from woodwork.type_sys.utils import list_logical_types

list_logical_types()
```
# <p style="text-align: center;"> Part Three: Outliers </p>

![title](Images\Outlier.jpg)

```
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
  if (code_show){
    $('div.input').hide();
  } else {
    $('div.input').show();
  }
  code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
The raw code for this IPython notebook is by default hidden for easier reading.
To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.''')
```

# <p style="text-align: center;"> Table of Contents </p>

- ## 1. [Introduction](#Introduction)
- ### 1.1 [Abstract](#abstract)
- ### 1.2 [Importing Libraries](#importing_libraries)
- ## 2. [What are Outliers?](#outliers)
- ### 2.1 [Types of Outliers](#outlier_types)
- ### 2.2 [Impact of Outliers on a Dataset](#outlier_impact)
- ### 2.3 [Outliers Detection Techniques](#outlier_detection)
- ### 2.4 [Implementation](#implementation)
- ### 2.4.1 [Inter Quartile Range](#iqr)
- ### 2.4.2 [Standard Deviation](#sd)
- ### 2.4.3 [Z-Score](#z_score)
- ### 2.4.4 [Isolation Forest](#isolation_forest)
- ## 3. [Conclusion](#Conclusion)
- ## 4. [Contribution](#Contribution)
- ## 5. [Citation](#Citation)
- ## 6. [License](#License)

# <p style="text-align: center;"> 1.0 Introduction </p> <a id='Introduction'></a>

# 1.1 Abstract <a id='abstract'></a>

Contrary to what most data science courses would have you believe, not every dataset is a perfectly curated group of observations with no missing values or outliers (for example, the mtcars and iris datasets). Real-world data is messy, which means we need to clean and wrangle it into an acceptable format before we can even start the analysis. Data cleaning is an unglamorous, but necessary, part of most actual data science problems.
In this notebook, I will try to explain what outliers are and their types, how to detect outliers, and remedial measures for handling them.

[Back to top](#Introduction)

# 1.2 Importing Libraries <a id='importing_libraries'></a>

This is the official start to any Data Science or Machine Learning project. A Python library is a reusable chunk of code that you may want to include in your programs/projects. In this step we import a few libraries that are required in our program. Some major libraries that are used are NumPy, Pandas, Matplotlib, Seaborn, Sklearn, etc.

[Back to top](#Introduction)

```
# modules we'll use
import pandas as pd
import numpy as np
# from sklearn import preprocessing

# plotting modules
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
from astropy.table import Table, Column

import warnings
warnings.filterwarnings('ignore')

%matplotlib inline
matplotlib.style.use('ggplot')
np.random.seed(34)
```

# 2.0 What are Outliers? <a id='outliers'></a>

An outlier is an observation that is unlike the other observations. It is rare, or distinct, or does not fit in some way. Outliers are also called anomalies. Outliers can have many causes, such as:

- Measurement or input error.
- Data corruption.
- True outlier observation.

There is no precise way to define and identify outliers in general because of the specifics of each dataset. Instead, you, or a domain expert, must interpret the raw observations and decide whether a value is an outlier or not. Nevertheless, we can use statistical methods to identify observations that appear to be rare or unlikely given the available data. This does not mean that the values identified are outliers and should be removed. A good tip is to consider plotting the identified outlier values, perhaps in the context of non-outlier values, to see if there are any systematic relationships or patterns to the outliers.
If there are, perhaps they are not outliers and can be explained, or perhaps the outliers themselves can be identified more systematically.

[Back to Top](#Introduction)

## 2.1 Types of Outliers <a id='outlier_types'></a>

Outliers can be of two types:

1. Univariate
2. Multivariate

Univariate outliers can be found when we look at the distribution of a single variable. Multivariate outliers are outliers in an n-dimensional space. In order to find them, you have to look at distributions in multiple dimensions.

Let us understand this with an example. Say we are studying the relationship between height and weight. Below, we have the univariate and bivariate distributions for height and weight. Take a look at the box plot: we do not have any outlier (above or below 1.5*IQR, the most common method). Now look at the scatter plot: here, we have two values below and one above the average in a specific segment of weight and height.

![title](Images\outlier_types.png)

[Back to Top](#Introduction)

## 2.2 Impact of Outliers on a Dataset <a id='outlier_impact'></a>

Outliers can drastically change the results of data analysis and statistical modeling. There are numerous unfavourable impacts of outliers on a data set:

- They increase the error variance and reduce the power of statistical tests.
- If the outliers are non-randomly distributed, they can decrease normality.
- They can bias or influence estimates that may be of substantive interest.
- They can also violate the basic assumptions of regression, ANOVA and other statistical models.

To understand the impact deeply, let's take an example to check what happens to a data set with and without outliers.

![title](Images\outlier_impact.png)

As you can see, the data set with outliers has a significantly different mean and standard deviation. In the first scenario, we would say that the average is 5.45. But with the outlier, the average soars to 30. This would change the estimate completely.
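The effect can be reproduced with a small sketch (illustrative numbers, not the exact data from the figure above):

```python
import numpy as np

data = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])
data_with_outlier = np.append(data, 300)

print(data.mean(), data.std())                             # 5.0 and a modest spread
print(data_with_outlier.mean(), data_with_outlier.std())   # a single outlier inflates both
```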
[Back to Top](#Introduction)

## 2.3 Outliers Detection Techniques <a id='outlier_detection'></a>

### 1. Interquartile Range Method

The concept of the Interquartile Range (IQR) is used to build boxplot graphs. The IQR is a concept in statistics that is used to measure statistical dispersion and data variability by dividing the dataset into quartiles. In simple words, any dataset or any set of observations is divided into four defined intervals based upon the values of the data and how they compare to the entire dataset. Quartiles divide the data at three points into four intervals. The IQR is the difference between the third quartile and the first quartile (IQR = Q3 - Q1).

Outliers in this case are defined as the observations that are below (Q1 - 1.5 × IQR), the boxplot lower whisker, or above (Q3 + 1.5 × IQR), the boxplot upper whisker. This can be visually represented by the box plot.

![title](Images\IQR.png)

### 2. Standard Deviation Method

Standard deviation is a metric of variance, i.e. how much the individual data points are spread out from the mean. In statistics, if a data distribution is approximately normal, then about 68% of the data values lie within one standard deviation of the mean, about 95% within two standard deviations, and about 99.7% within three standard deviations.

![title](Images\Standard_Deviation.png)

### 3. Z-Score Method

The Z-score is the signed number of standard deviations by which the value of an observation or data point is above the mean value of what is being observed or measured. The intuition behind the Z-score is to describe any data point by finding its relationship with the standard deviation and mean of the group of data points. The Z-score re-expresses the data in a distribution where the mean is 0 and the standard deviation is 1, i.e. the standard normal distribution.

You must be wondering: how does this help in identifying the outliers?
Well, while calculating the Z-score we re-scale and center the data and look for data points which are too far from zero. Data points that are too far from zero are treated as outliers. In most cases a threshold of 3 or -3 is used, i.e. if the Z-score value is greater than 3 or less than -3, that data point is identified as an outlier.

This technique assumes a Gaussian distribution of the data. The outliers are the data points that are in the tails of the distribution and therefore far from the mean. How far depends on a set threshold Z_thr for the normalized data points z_i, calculated with the formula:

> Z_score = (X_i - mean) / standard deviation

where X_i is a data point, 'mean' is the mean of all X and 'standard deviation' is the standard deviation of all X.

An outlier is then a normalized data point whose absolute value is greater than Z_thr. That is:

> |Z_score| > Z_thr

Commonly used Z_thr values are 2.5, 3.0 and 3.5. Here we will be using 3.0.

### 4. Isolation Forest

Isolation Forest is an algorithm to detect outliers. It partitions the data using a set of trees and provides an anomaly score measuring how isolated a point is in the resulting structure. The anomaly score is then used to tell outliers apart from normal observations.

An important concept in this method is the isolation number: the number of splits needed to isolate a data point. This number of splits is ascertained by following these steps:

- A point "a" to isolate is selected randomly.
- A random data point "b" is selected that is between the minimum and maximum value and different from "a".
- If the value of "b" is lower than the value of "a", the value of "b" becomes the new lower limit.
- If the value of "b" is greater than the value of "a", the value of "b" becomes the new upper limit.
- This procedure is repeated as long as there are data points other than "a" between the upper and the lower limit.
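The splitting procedure above can be sketched for one-dimensional data (a toy illustration of the isolation number, not the tree-based Isolation Forest algorithm itself):

```python
import numpy as np

def isolation_number(data, a, rng, max_iter=10_000):
    """Count the random splits needed to isolate the value `a` in 1-D `data`."""
    lo, hi = data.min(), data.max()
    for splits in range(max_iter):
        # any point other than `a` still inside the current limits?
        inside = data[(data >= lo) & (data <= hi) & (data != a)]
        if inside.size == 0:
            return splits
        b = rng.uniform(lo, hi)   # random split point between the limits
        if b < a:
            lo = b                # b becomes the new lower limit
        else:
            hi = b                # b becomes the new upper limit
    return max_iter

data = np.array([1.0, 2.0, 2.5, 3.0, 3.5, 100.0])

# average over repetitions: the outlier (100.0) needs fewer splits
n_outlier = np.mean([isolation_number(data, 100.0, np.random.default_rng(s)) for s in range(100)])
n_inlier  = np.mean([isolation_number(data, 2.5,   np.random.default_rng(s)) for s in range(100)])
print(n_outlier, n_inlier)  # the outlier's isolation number is smaller on average
```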
It requires fewer splits to isolate an outlier than to isolate a non-outlier, i.e. an outlier has a lower isolation number than a non-outlier point. A data point is therefore defined as an outlier if its isolation number is lower than a threshold. The threshold is defined based on the estimated percentage of outliers in the data, which is the starting point of this outlier detection algorithm.

![title](Images\Isolation_Forest.png)

[Back to Top](#Introduction)

## 2.4 Implementation <a id='implementation'></a>

Let's load a few datasets to see the implementation of each of the methods above.

## 2.4.1 Inter Quartile Range <a id='iqr'></a>

```
df_1 = pd.read_csv("Datasets/heart.csv")
df_1.head()
```

### Basic Preprocessing checks

```
df_1.isnull().values.any()
df_1.describe()
```

Let's consider the serum cholesterol in mg/dl column, i.e. "chol", for our analysis. I'll plot a simple box plot, which is one of the best visualizations for detecting outliers.

```
plt.figure(figsize = (4,8))
sns.boxplot(y = df_1.chol)
```

From the above box plot, we can surely observe that there are outliers in it! Let's define a function to find the IQR and the lower and upper whiskers.

```
def out_iqr(df, column):
    global lower, upper
    q25, q75 = np.quantile(df[column], 0.25), np.quantile(df[column], 0.75)
    # calculate the IQR
    iqr = q75 - q25
    # calculate the outlier cutoff
    cut_off = iqr * 1.5
    # calculate the lower and upper bound values
    lower, upper = q25 - cut_off, q75 + cut_off
    print('The IQR is', iqr)
    print('The lower bound value is', lower)
    print('The upper bound value is', upper)
    # count the records above and below the bound values
    df1 = df[df[column] > upper]
    df2 = df[df[column] < lower]
    return print('Total number of outliers are', df1.shape[0] + df2.shape[0])

out_iqr(df_1, 'chol')  # input the dataset and the required column
```

As per the IQR method, there are 5 outliers.
**Visual representation:**

```
plt.figure(figsize = (10,6))
sns.distplot(df_1.chol, kde=False, color="b")
plt.axvspan(xmin = lower, xmax = df_1.chol.min(), alpha=0.2, color='red')
plt.axvspan(xmin = upper, xmax = df_1.chol.max(), alpha=0.2, color='red')
```

Here the red zone represents the outlier zone! The records present in that zone are considered outliers.

**Remedial measure:** Remove the records above the upper bound value and below the lower bound value!

[Back to Top](#Introduction)

```
# data frame without outliers: keep only records within the whiskers
# (note the & -- with | every record would pass the filter)
df_new_1 = df_1[(df_1['chol'] < upper) & (df_1['chol'] > lower)]
```

## 2.4.2 Standard Deviation <a id='sd'></a>

```
df_2 = pd.read_csv("Datasets/StudentsPerformance.csv")
df_2.head()
```

### Basic Preprocessing checks

```
df_2.isnull().values.any()
df_2.describe()
```

Let's consider the "writing score" for inspection. I'll plot a simple density plot, which is also one of the best visualizations for detecting outliers.

```
plt.figure(figsize = (10,5))
sns.distplot(df_2['writing score'], color="b")
```

By the looks of it, it is left tailed and it surely has outliers.
Let's define a function to find the lower and upper bounds using the standard deviation method:

```
def out_std(df, column):
    global lower, upper
    # calculate the mean and standard deviation of the column
    data_mean, data_std = df[column].mean(), df[column].std()
    # calculate the cutoff value
    cut_off = data_std * 3
    # calculate the lower and upper bound values
    lower, upper = data_mean - cut_off, data_mean + cut_off
    print('The lower bound value is', lower)
    print('The upper bound value is', upper)
    # count the records above and below the bound values
    df1 = df[df[column] > upper]
    df2 = df[df[column] < lower]
    return print('Total number of outliers are', df1.shape[0] + df2.shape[0])

out_std(df_2, 'writing score')
```

So as per the standard deviation method, there are 4 outliers.

**Visual representation:**

```
plt.figure(figsize = (10,5))
sns.distplot(df_2['writing score'], kde=False, color="b")
plt.axvspan(xmin = lower, xmax = df_2['writing score'].min(), alpha=0.2, color='red')
plt.axvspan(xmin = upper, xmax = df_2['writing score'].max(), alpha=0.2, color='red')
```

Here the red zone represents the outlier zone! The records present in that zone are considered outliers.

**Remedial measure:** Remove the records above the upper bound value and below the lower bound value!

[Back to Top](#Introduction)

```
# data frame without outliers: keep only records within the bounds
df_new_2 = df_2[(df_2['writing score'] < upper) & (df_2['writing score'] > lower)]
```

## 2.4.3 Z-Score <a id='z_score'></a>

```
df_3 = pd.read_csv("Datasets/insurance.csv")
df_3.head()
```

### Basic Preprocessing checks

```
df_3.isnull().values.any()
df_3.describe()
```

Let's consider the "charges" for inspection. I'll plot a simple density plot, which is one of the best visualizations for detecting outliers.

```
plt.figure(figsize = (10,5))
sns.distplot(df_3['charges'], color="b")
```

By the looks of it, it is right tailed and it surely has outliers.
Let's define a function to find the outliers using the Z-score method:

```
def out_zscore(data):
    global outliers, zscore
    outliers = []
    zscore = []
    threshold = 3
    mean = np.mean(data)
    std = np.std(data)
    for i in data:
        z_score = (i - mean)/std
        zscore.append(z_score)
        if np.abs(z_score) > threshold:
            outliers.append(i)
    return print("Total number of outliers are", len(outliers))

out_zscore(df_3.charges)
```

According to the Z-score method, there are 7 outliers.

**Visual representation:**

```
plt.figure(figsize = (10,5))
sns.distplot(zscore, color="b")
plt.axvspan(xmin = 3, xmax = max(zscore), alpha=0.2, color='red')
```

Here the red zone represents the outlier zone! The records present in that zone are considered outliers.

**Remedial measure:** Remove the records whose absolute Z-score exceeds the threshold!

[Back to Top](#Introduction)

```
# data frame without outliers: filter on the z-scores, not on the raw charges
df_new_3 = df_3[np.abs(np.array(zscore)) < 3]
```

## 2.4.4 Isolation Forest <a id='isolation_forest'></a>

```
# import the necessary estimator
from sklearn.ensemble import IsolationForest

# the required columns
cols = ['writing score', 'reading score', 'math score']

# plot the sub plots
fig, axs = plt.subplots(1, 3, figsize=(20, 5), facecolor='w', edgecolor='k')
axs = axs.ravel()

for i, column in enumerate(cols):
    isolation_forest = IsolationForest()
    isolation_forest.fit(df_2[column].values.reshape(-1,1))

    xx = np.linspace(df_2[column].min(), df_2[column].max(), len(df_2)).reshape(-1,1)
    anomaly_score = isolation_forest.decision_function(xx)
    outlier = isolation_forest.predict(xx)

    axs[i].plot(xx, anomaly_score, label='anomaly score', color="b")
    axs[i].fill_between(xx.T[0], np.min(anomaly_score), np.max(anomaly_score),
                        where=outlier==-1, color='r', alpha=.4, label='outlier region')
    axs[i].legend()
    axs[i].set_title(column)
```

In the snippet above, we have trained an IsolationForest on the data, computed the anomaly score for each observation, and
classified each observation as an outlier or non-outlier. The chart shows the anomaly scores and the regions where the outliers are. As expected, the anomaly score reflects the shape of the underlying distribution, and the outlier regions correspond to low-probability areas.

[Back to Top](#Introduction)

# <p style="text-align: center;">Conclusion<p><a id='Conclusion'></a>

While outlier removal forms an essential part of dataset normalization, it's important to ensure zero errors in the assumptions that influence outlier removal. Even data with a significant number of outliers may not be bad data, and a rigorous investigation of the dataset itself is often warranted, but overlooked, by data scientists in their processes.

[Back to top](#Introduction)

# <p style="text-align: center;">Contribution<p><a id='Contribution'></a>

This was a fun project in which we explore the idea of data cleaning and data preprocessing. We took inspiration from a Kaggle learning course and created our own notebook enhancing the same idea and supplementing it with our own contributions from our experiences and past projects.
- Code by self : 65%
- Code from external sources : 35%

[Back to top](#Introduction)

# <p style="text-align: center;">Citation<p><a id='Citation'></a>

- https://towardsdatascience.com/preprocessing-with-sklearn-a-complete-and-comprehensive-guide-670cb98fcfb9
- https://www.kaggle.com/rpsuraj/outlier-detection-techniques-simplified?select=insurance.csv
- https://statisticsbyjim.com/basics/remove-outliers/
- https://statisticsbyjim.com/basics/outliers/

# <p style="text-align: center;">License<p><a id='License'></a>

Copyright (c) 2020 Manali Sharma, Rushabh Nisher

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

[Back to top](#Introduction)
# Boundary and Initial data

```
#r "BoSSSpad.dll"
using System;
using System.Collections.Generic;
using System.Linq;
using ilPSP;
using ilPSP.Utils;
using BoSSS.Platform;
using BoSSS.Foundation;
using BoSSS.Foundation.Grid;
using BoSSS.Foundation.Grid.Classic;
using BoSSS.Foundation.IO;
using BoSSS.Solution;
using BoSSS.Solution.Control;
using BoSSS.Solution.GridImport;
using BoSSS.Solution.Statistic;
using BoSSS.Solution.Utils;
using BoSSS.Solution.Gnuplot;
using BoSSS.Application.BoSSSpad;
using BoSSS.Application.XNSE_Solver;
using static BoSSS.Application.BoSSSpad.BoSSSshell;
Init();
```

This tutorial demonstrates the **definition**, resp. the **import**, of data for **boundary and initial values**. In order to demonstrate the usage, we employ the exemplary **Poisson solver**.

```
using BoSSS.Application.SipPoisson;
```

We use a **temporary database** for this tutorial:

```
var tempDb = CreateTempDatabase();
```

We use the following helper function to create a **template for the multiple solver runs**.

```
Func<SipControl> PreDefinedControl = delegate() {
    SipControl c = new SipControl();
    c.SetDGdegree(2);
    c.GridFunc = delegate() {
        // define a grid of 10x10 cells
        double[] nodes = GenericBlas.Linspace(-1, 1, 11);
        var grd = Grid2D.Cartesian2DGrid(nodes, nodes);
        // set the entire boundary to Dirichlet b.c.
        grd.DefineEdgeTags(delegate (double[] X) {
            return BoundaryType.Dirichlet.ToString();
        });
        return grd;
    };
    c.SetDatabase(tempDb);
    c.savetodb = true;
    return c;
};
```

Again, we are using the **workflow management**:

```
BoSSSshell.WorkflowMgm.Init("Demo_BoundaryAndInitialData");
```

## Textual and Embedded formulas

```
SipControl c1 = PreDefinedControl();
```

Provide **initial data** as text:

```
c1.AddInitialValue("RHS","X => Math.Sin(X[0])*Math.Cos(X[1])", TimeDependent:false);
```

Finally, all initial data is stored in the ***AppControl.InitialValues*** dictionary and all boundary data is stored in the ***AppControl.BoundaryValues*** dictionary.
The common interface for all variants of specifying boundary and initial data is ***IBoundaryAndInitialData***. The snippet above is only a shortcut to add a ***Formula*** object, which implements the ***IBoundaryAndInitialData*** interface.

```
c1.InitialValues
c1.InitialValues["RHS"]
```

In **BoSSSpad**, such objects can also be extracted from static methods of classes; note that these should not depend on any other object in the worksheet.

```
Formula BndyFormula = new Formula(
    "BndyValue.BndyFunction",
    false,
    "static class BndyValue {"+
    "    public static double BndyFunction(double[] X) {"+
    "        return 1.0;"+
    "    }"+
    "}");
c1.AddBoundaryValue(BoundaryType.Dirichlet.ToString(), "T", BndyFormula);
```

This creates a job named **J1** and runs it:

```
var J1 = c1.RunBatch();
```

The next line blocks until the job **J1** terminates:

```
BoSSSshell.WorkflowMgm.BlockUntilAllJobsTerminate(3600*4);
```

We can print the status of the job **J1**:

```
J1.Status
```

We can also check via a method whether the job **J1** has truly finished:

```
NUnit.Framework.Assert.IsTrue(J1.Status == JobStatus.FinishedSuccessful);
```

## 1D Splines

**Splines** can be used to interpolate nodal data onto a DG field; currently, only 1D is supported.

```
SipControl c2 = PreDefinedControl();

// create test data for the spline
double[] xNodes = GenericBlas.Linspace(-2,2,13);
double[] yNodes = xNodes.Select(x => x*0.4).ToArray();
var rhsSpline = new Spline1D(xNodes, yNodes, 0, Spline1D.OutOfBoundsBehave.Extrapolate);

/// BoSSScmdSilent
double err = 0;
// test the spline: a line must be interpolated exactly.
foreach(double xtst in GenericBlas.Linspace(-3,3,77)) {
    double sVal = rhsSpline.Evaluate(new double[] {xtst, 0, 0}, 0.0);
    double rVal = xtst*0.4;
    err += Math.Abs(sVal - rVal);
}
NUnit.Framework.Assert.Less(err, 1.0e-10, "Spline implementation failed.");

c2.AddInitialValue("RHS", rhsSpline);
var J2 = c2.RunBatch();
BoSSSshell.WorkflowMgm.BlockUntilAllJobsTerminate(3600*4);
J2.Status

/// BoSSScmdSilent
NUnit.Framework.Assert.IsTrue(J2.Status == JobStatus.FinishedSuccessful);
```

## Interpolating values from other Calculations

For demonstration purposes, we use the result (i.e. the last time-step) of a previous calculation as a right-hand side for the next calculation.

```
var j2Sess = J2.LatestSession;
j2Sess
j2Sess.Timesteps
var lastTimeStep = j2Sess.Timesteps.Last();
```

We encapsulate the value **T** in the **ForeignGridValue** object, which allows interpolation between different meshes:

```
var newForeignMesh = new ForeignGridValue(lastTimeStep, "T");

/// Use a different mesh in the control file:
SipControl c3 = PreDefinedControl();

c3.GridFunc = delegate() {
    // define a grid of *triangle* cells
    double[] nodes = GenericBlas.Linspace(-1, 1, 11);
    var grd = Grid2D.UnstructuredTriangleGrid(nodes, nodes);
    // set the entire boundary to Dirichlet b.c.
    grd.DefineEdgeTags(delegate (double[] X) {
        return BoundaryType.Dirichlet.ToString();
    });
    return grd;
};

// we also save the RHS in the database
c3.AddFieldOption("RHS", SaveOpt: FieldOpts.SaveToDBOpt.TRUE);

/// finally, we define the RHS:
c3.AddInitialValue("RHS", newForeignMesh);

/// BoSSScmdSilent
double orgProbe = newForeignMesh.Evaluate(new double[] {0.5,0.5}, 0.0);
double newProbe = lastTimeStep.GetField("T").ProbeAt(new double[] {0.5,0.5});
NUnit.Framework.Assert.Less(Math.Abs(orgProbe - newProbe), 1.0e-10, "Check (1) on ForeignGridValue failed");

var J3 = c3.RunBatch();
BoSSSshell.WorkflowMgm.BlockUntilAllJobsTerminate(3600*4);
J3.Status

/// BoSSScmdSilent
NUnit.Framework.Assert.IsTrue(J3.Status == JobStatus.FinishedSuccessful);
```

Since the quadrilateral mesh used for the original right-hand side is geometrically embedded in the triangular mesh, the **interpolation error should be zero** (up to machine precision).

```
var firstTimeStep = J3.LatestSession.Timesteps.First();
DGField RhsOnTriangles = firstTimeStep.GetField("rhs"); // case-insensitive!
DGField RhsOriginal = lastTimeStep.GetField("T");

// note: we have to cast DGField to ConventionalDGField in order to use
// the 'L2Distance' function:
((ConventionalDGField)RhsOnTriangles).L2Distance((ConventionalDGField)RhsOriginal)

/// BoSSScmdSilent
var H1err = ((ConventionalDGField)RhsOnTriangles).H1Distance((ConventionalDGField)RhsOriginal);
NUnit.Framework.Assert.Less(H1err, 1.0e-10, "Check (2) on ForeignGridValue failed.");
```

## Restart from Dummy-Sessions

Dummy sessions are a kind of fake solver run, whose only purpose is to serve as a restart point.
```
DGField RHSforRestart = firstTimeStep.GetField("RHS");
```

We save the DG field ***RHSforRestart*** in the database; this automatically creates a timestep and a session which host the DG field:

```
var RestartTimestep = tempDb.SaveTimestep(RHSforRestart);
RestartTimestep
RestartTimestep.Session
```

This time step can be used as a restart value:

```
var c4 = PreDefinedControl();
c4.InitialValues.Clear();
c4.SetRestart(RestartTimestep);

var J4 = c4.RunBatch();
BoSSSshell.WorkflowMgm.BlockUntilAllJobsTerminate(3600*4);
J4.Status

/// BoSSScmdSilent
NUnit.Framework.Assert.IsTrue(J4.Status == JobStatus.FinishedSuccessful);
```

### Note

Since no mesh interpolation is performed for the restart, it is much faster than **ForeignGridValue**, but less flexible (a restart is always performed on the same mesh). To avoid multiple mesh interpolations (e.g. when multiple runs are required), one could therefore speed up the process by doing the mesh interpolation once (using ***ProjectFromForeignGrid*** in BoSSSpad) and saving the interpolation in a dummy session.
# Week 4 - Flow Control with Conditionals and Error Handling

## The following play critical roles:
### 1. Conditional Statements - adding logic to our statements.
### 2. Error Handling - getting around problems our code may encounter.

## 1. Conditional Statements

```
## create a list of 10 random numbers anywhere from -100 to 100
## name the list numbers
import random
numbers = random.sample(range(-100, 100), 10)
numbers

## create conditional statements that tell us if the last number and the penultimate number
## are positive or negative
## print a sentence that reads:
## "The last number [what is it?] is [positive or negative] while
## the penultimate number [what is it?] is [negative or positive]."

if numbers[-2] > 0:
    almost_last_status = "positive"
else:
    almost_last_status = "negative"

if numbers[-1] > 0:
    last_status = "positive"
else:
    last_status = "negative"

print(f"The last number ({numbers[-1]}) is {last_status} while \
the penultimate number ({numbers[-2]}) is {almost_last_status}.")
```

## Conditionals at work:

<img src="support_files/scraped-etf.PNG" style="width: 50%;">

To produce the chart above, we had to scrape the numbers from text published in articles. The numbers, however, were in string format, with words like "million", "billion" and "trillion" describing their magnitude:

<img src="support_files/magnitude.png" style="width: 100%;">

In order to run calculations on them, we had to convert them to floating point numbers. Take the excerpt list below, called big_numbers, and convert each item in the list to a floating point number. You will have to use basic math and conditional statements.
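Before working through the cell below, it can help to see the shape of the conversion in miniature. Here is one possible sketch — not the notebook's official solution; the `MAGNITUDES` dictionary and the `to_float` helper are our own names, used only for illustration:

```python
# One possible approach: a lookup dictionary instead of if/elif branches.
# MAGNITUDES and to_float are our own helper names, not part of the exercise.
MAGNITUDES = {
    "million": 1_000_000,
    "billion": 1_000_000_000,
    "trillion": 1_000_000_000_000,
}

def to_float(text):
    """Convert a string like '130.4 million' (or '$1.3 billion') to a float."""
    value, unit = text.lstrip("$").split()
    return float(value) * MAGNITUDES[unit]

print(to_float("2 million"))     # 2000000.0
print(to_float("$1.3 billion"))
```

The `lstrip("$")` call also handles the dollar signs that appear later in the scraped data, so the same helper works for both lists.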
```
big_numbers = [
    "130.4 million",
    "67.2 million",
    "125.4 million",
    "5.04 million",
    "1.3 billion",
    "2.2 trillion",
    "7.2 million",
    "3.1 billion"
]
big_numbers

## Time for conditionals:
for each_number in big_numbers:
    items = each_number.split()
    print(items)
    if items[1] == "million":
        number = float(items[0]) * 1_000_000
        print(number)
    elif items[1] == "billion":
        number = float(items[0]) * 1_000_000_000
        print(number)
    elif items[1] == "trillion":
        number = float(items[0]) * 1_000_000_000_000
        print(number)
    else:
        print("not millions, billions or trillions")
```

### The scraped text actually included the dollar sign along with the number. We'll have to remove the dollar sign to successfully run calculations on the numbers.

```
dollar_numbers = [
    "$130.4 million",
    "$67.2 million",
    "$125.4 million",
    "$5.04 million",
    "$1.3 billion",
    "$2.2 trillion",
    "$7.2 million",
    "$3.1 billion"
]
dollar_numbers

## Tweak the previous conditional we wrote to also remove the $ signs and
## return only the numbers:

```

# DRY
### Coding Ethos
### "Don't Repeat Yourself!"

```
dollar_numbers = ["$130.4 million", "$67.2 million", "$125.4 million",
                  "$5.04 million", "$1.3 billion", "$2.2 trillion",
                  "$7.2 million", "$3.1 billion"]
dollar_numbers

## Let's make the code a little DRYer.
## You already have the skills!
```

### The previous examples result only in printed numbers -- not numbers stored in memory.

```
## Let's tweak the code to store the floating point numbers in a list.

## Confirm the numbers in the list are actually floats
```

## 2. Error Handling

```
## We have a list of numbers.
## We want to square each number in our calculation.
mylist = [1, 2, 3, "four", 5]

## Write a for loop that squares each number in the list.

## How do we get around the code breaking every time it hits an error?
## You can even tell it what type of error it might encounter ## Store each square into an updated list called updated_numbers ``` ### We need to track which items in the list were NOT correctly processed ``` moreNumbers = [1, 2, 3, "four", 5, "six", 7, 8, "nine"] ## write some code that returns an updated_numbers list AND ## also a failed list called incomplete ```
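The prompts above can be answered with a `try`/`except` block. A minimal sketch — the variable names follow the prompts, while the choice to catch `TypeError` specifically is ours (that is the error a string raises under `**`):

```python
moreNumbers = [1, 2, 3, "four", 5, "six", 7, 8, "nine"]

updated_numbers = []  # squares of the items that processed cleanly
incomplete = []       # items that raised an error

for item in moreNumbers:
    try:
        updated_numbers.append(item ** 2)
    except TypeError:  # raised when item is a string like "four"
        incomplete.append(item)

print(updated_numbers)  # [1, 4, 9, 25, 49, 64]
print(incomplete)       # ['four', 'six', 'nine']
```

Catching a named exception type rather than a bare `except:` keeps genuinely unexpected errors visible instead of silently swallowing them.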
# <center>Preprocessing Code for the </center> # <center>Department of Homeland Security </center> # <center>Passenger Screening Algorithm Challenge.</center> # General imports and initializations ``` import math import os import pdb import numpy as np import scipy as sp import matplotlib import matplotlib.pyplot as plt import datetime import csv from scipy import ndimage from scipy.signal import medfilt from scipy.signal import savgol_filter from scipy.signal import resample from scipy.ndimage.filters import gaussian_filter from scipy.ndimage.filters import median_filter from scipy.ndimage.morphology import binary_fill_holes from scipy.ndimage.morphology import binary_dilation from scipy.misc import imsave from scipy.ndimage import imread from copy import deepcopy from scipy import linalg from scipy.interpolate import interp1d from skimage.transform import resize from sklearn.preprocessing import binarize !pip3 install Cython %load_ext cython %matplotlib inline %pdb off plt.style.use('classic') ``` # Project-specific imports and initializations ``` # Get project functions from CompetitionFileIOFunctions import initRootFolders, initLog, log, filenames, loadFile # Init root folders inCloud = True if inCloud: # Working in the cloud. 
# Read scans from the bucket and save results locally: initRootFolders(bucketName='kaggle_passenger_screening123407', localIOPath='') else: # Working on a desktop # Don't bother with the bucket, but set a constant local IO path to sidestep versioning initRootFolders( bucketName='', localIOPath='/media/qwerty/science/science data/2017-10-18 Kaggle passenger screening/' ) # Name the input/output folders # cloud/ and local/ refer to locations defined in bucketName and localIOPath (above) scanDir1 = "cloud/stage1_a3d/" scanDir2 = "cloud/stage2_a3d/" embeddedDir1 = "local/embedded2D/stage1/" embeddedDir2 = "local/embedded2D/stage2/" logDir = "local/log/" # Initialize log file initLog(logDir, 'embed') # Threshold for finding body region in 3D threshold3D = .0002 ``` # Read file data ``` inputFiles = filenames(scanDir1) fileNum = 1 inputFile = inputFiles[fileNum] #################################################### log('+++++ Read file ' + str(fileNum) + ' +++++') #################################################### # Note: title comments like the one above indicate # notebook cells that are copy/pasted to the main # processing loop at the bottom of the notebook. 
data = loadFile(inputFile) ``` # Threshold ``` #################################################### log('Threshold') #################################################### thresholded = np.clip(data, threshold3D, 100000) - threshold3D print('Total reflectivity: ', np.sum(thresholded)) ``` # Average along axii to get 2D mean images ``` #################################################### log('Average along axii to get 2D mean images') #################################################### # Summed images will have yz, xz, xy axes, respectively means2D = list([thresholded.mean(axis=i) for i in range(3)]) mean2DAxii = [('y', 'z'), ('x','z'), ('x','y')] fig, ax = plt.subplots(3, figsize=(30,30)) for i in range(3): ax[i].imshow(means2D[i], cmap = 'viridis', interpolation = 'nearest') ``` # Get center of mass ``` #################################################### log('Get center of mass') #################################################### # Could do this on the 3D image, but it's much faster on 2D images xCenter, zCenter = np.round(ndimage.measurements.center_of_mass(means2D[1])).astype(int) yCenter, dmy = np.round(ndimage.measurements.center_of_mass(means2D[0])).astype(int) print('xCenter:', xCenter, ' yCenter:', yCenter, ' zCenter:', zCenter) ``` # Get end of trunk z-range (top of shoulders) ``` def firstOn(dat, threshold=0.33): filt = medfilt(dat, 11) thresh = threshold * max(filt) binarized = filt>thresh first = np.argmax(binarized) return first def firstOff(dat, threshold=0.33): filt = medfilt(dat, 11) thresh = threshold * max(filt) binarized = filt<=thresh first = np.argmax(binarized) return first #################################################### log('Get end of trunk z-range (top of shoulders)') #################################################### zStart, zEnd = (int(zCenter * 1.2), int(zCenter * 1.7)) if zEnd > data.shape[2]: zEnd = data.shape[2] xStart = int(xCenter - .25 * zCenter) xEnd = int(xCenter + .25 * zCenter) midStrip = means2D[1][xStart:xEnd, 
zStart:zEnd] off = midStrip < 0.00000001 numOff = np.sum(off, axis=0) trunkEnd = firstOn(numOff, .15) trunkEnd += zStart fig, ax = plt.subplots(1, figsize=(3,3),facecolor='white') ax.imshow(off, cmap = 'viridis', interpolation = 'nearest') ax.set_xlabel('z') ax.set_ylabel('x') fig, ax = plt.subplots(1,facecolor='white') ax.plot(numOff) ax.set_xlabel('z') ax.set_ylabel('Number of empty pixels') print(trunkEnd) ``` # Get start of trunk z-range (bottom of groin) ``` #################################################### log('Get start of trunk z-range (bottom of groin)') #################################################### zStart = int(0.5 * zCenter) zStrip = np.mean(thresholded[:, yCenter:, zStart:zCenter], axis=0) zStrip = gaussian_filter(zStrip, (5,5)) buttProfile = np.array([firstOff(x, .05) for x in np.transpose(zStrip)]) buttProfile = median_filter(buttProfile, 5) buttProfileDer = savgol_filter(buttProfile, 11, 2, deriv=1, mode='nearest') trunkStart = zStart + np.argmax(buttProfileDer) fig, ax = plt.subplots(1, figsize=(5,5)) ax.imshow(zStrip, cmap = 'viridis', interpolation = 'nearest') fig, ax = plt.subplots(1, figsize=(12,3.5), facecolor='white') ax.plot(buttProfile) ax.set_xlabel('z') ax.set_ylabel('Buttock profile') fig, ax = plt.subplots(1, figsize=(12,3.5), facecolor='white') ax.plot(buttProfileDer) ax.set_xlabel('z') ax.set_ylabel('Derivative of buttock profile w.r.t. 
z') print(trunkStart) ``` # Erase head ``` %%cython #--annotate cimport cython import math import numpy as np from cython.view cimport array as cvarray cdef extern from "math.h": int floor(float x) @cython.boundscheck(False) # turn off bounds-checking for entire function @cython.wraparound(False) # turn off negative index wrapping for entire function @cython.cdivision(True) def eraseHead(float [:,:,:] arr, float centerX, float centerZ, float radiusX, float radiusZ): """Replace head region with zeros""" cdef int nx = arr.shape[0] cdef int ny = arr.shape[1] cdef int nz = arr.shape[2] cdef int x, y, z cdef float [:,:,:] out = np.array(arr, dtype=np.float32) cdef float radiusX2 = radiusX**2 cdef float radiusZ2 = radiusZ**2 cdef float r cdef int xStart = floor(centerX - radiusX) cdef int xEnd = floor(centerX + radiusX) cdef int zStart = floor(centerZ - radiusZ) cdef int zEnd = floor(centerZ + radiusZ) xStart = 0 if xStart<0 else xStart xEnd = nx if xEnd>nx else xEnd zStart = 0 if zStart<0 else zStart zEnd = nz if zEnd>nz else zEnd for x in range(xStart, xEnd): for z in range(zStart, zEnd): r = (x - centerX)**2 / radiusX2 + (z - centerZ)**2 / radiusZ2 if r < 1: out[x,:,z] = 0. 
return np.array(out, dtype=np.float32) #################################################### log('Erase head') #################################################### # Head position, size headCenterZ = trunkEnd * 1.12 headHeight = trunkEnd * .15 headWidth = trunkEnd * .13 # Get center X for head, which may be shifted relative to body headStripStartZ = math.floor(trunkEnd * 1.05) headStripEndZ = math.floor(trunkEnd * 1.1) headStripStartX = math.floor(xCenter - trunkEnd*.18) headStripEndX = math.floor(xCenter + trunkEnd*.18) headStripMean = np.mean(thresholded[headStripStartX:headStripEndX, :, headStripStartZ:headStripEndZ], axis=(1,2)) middle = math.floor(len(headStripMean)/2) rightMost = firstOff(headStripMean[middle:0:-1], .05) leftMost = firstOff(headStripMean[middle:-1], .05) headOffsetX = math.floor((leftMost-rightMost)/2) # Erase head by deleting cylindrical region thresholded = eraseHead(thresholded, xCenter + headOffsetX, headCenterZ, headWidth, headHeight) data = eraseHead(data, xCenter + headOffsetX, headCenterZ, headWidth, headHeight) # Recalculate averages along axii means2D = list([thresholded.mean(axis=i) for i in range(3)]) mean2DAxii = [('y', 'z'), ('x','z'), ('x','y')] print(headStripStartZ, headStripEndZ, headStripStartX, headStripEndX) plt.plot(headStripMean) fig, ax = plt.subplots(1, figsize=(30,30)) ax.imshow(np.mean(thresholded, axis=1), cmap = 'viridis', interpolation = 'nearest') ``` # Get waffle stacks ``` %%cython #--annotate cimport cython import math import numpy as np from cython.view cimport array as cvarray cdef extern from "math.h": int floor(float x) @cython.boundscheck(False) # turn off bounds-checking for entire function @cython.wraparound(False) # turn off negative index wrapping for entire function @cython.cdivision(True) def rotateArray( float [:,:,:] arrayXYZ, int [:] xRange, int [:] yRange, int [:] zRange, float [:] origin, float [:,:] basis, int radius, int length, float threshold): """ Given a volumetric scan, rotates it so 
that the given basis becomes the standard basis. Input: arrayXYZ: The volumetric scan to be rotated xRange, yRange, zRange: The ranges of the input array to be copied origin: The origin for the rotation basis: An orthogonal basis for the rotation. Row vectors become standard basis axii after the rotation. radius: The output array's height, width and depth will be 2*radius+1 length: Voxels will be skipped if their projection on the last basis vector is greater then this threshold: Voxels will be skipped if they are less than this value. Output: Rotated array """ cdef float [:,:,:,:] out = np.zeros(shape=(radius*2, radius*2, length, 2), dtype=np.float32) cdef int xStart = min(xRange) cdef int xEnd = max(xRange) cdef int yStart = min(yRange) cdef int yEnd = max(yRange) cdef int zStart = min(zRange) cdef int zEnd = max(zRange) if zEnd > arrayXYZ.shape[2]: zEnd = arrayXYZ.shape[2] # Basis matrix: rows (u,v,w), columns (x,y,z) # It's stored as a bunch of floats to maximize the chance they'll end up in CPU registers # I'm avoiding Numpy since vectorization won't help much with tiny matrices and vectors cdef float bux = basis[0,0] cdef float buy = basis[0,1] cdef float buz = basis[0,2] cdef float bvx = basis[1,0] cdef float bvy = basis[1,1] cdef float bvz = basis[1,2] cdef float bwx = basis[2,0] cdef float bwy = basis[2,1] cdef float bwz = basis[2,2] # xyMask is used to skip vertical slices that are empty (I.e. 
below threshold) cdef float [:,:] xyMask = np.mean(arrayXYZ, axis=2) cdef float xyThreshold = threshold / 100 cdef int x,y,z,ou,ov,ow cdef float val, ox, oy, oz, norm # Loop over voxels, accumulate 2D embedding for x in range(xStart, xEnd): for y in range(yStart, yEnd): # Skip empty z-slices if xyMask[x,y] < xyThreshold: continue for z in range(zStart, zEnd): # Skip empty voxels val = arrayXYZ[x,y,z] if val < threshold: continue ox, oy, oz = (x-origin[0], y-origin[1], z-origin[2]) # Transform (ox,oy,oz) to uvw basis ou, ov, ow = ( floor(bux*ox + buy*oy + buz*oz) + radius, floor(bvx*ox + bvy*oy + bvz*oz) + radius, floor(bwx*ox + bwy*oy + bwz*oz)) if ou>=0 and ou<radius*2 and ov>=0 and ov<radius*2 and ow>=0 and ow<length: out[ou,ov,ow,0] += 1 out[ou,ov,ow,1] += val # Normalize by number of voxels for ou in range(0, 2* radius): for ov in range(0, 2* radius): for ow in range(0, length): norm = out[ou, ov, ow, 0] if norm > .00000001: out[ou, ov, ow, 1] /= norm return np.array(out[:,:,:,1], dtype=np.float32) def getCenterAndRadius(xIn, thresholdIn): """ Finds the left and right edge of an object in xIn. Returns the center and radius. """ x = medfilt(xIn, 11) threshold = np.max(x)*thresholdIn binarized = x>threshold first = np.argmax(binarized) last = len(x) - np.argmax(np.flipud(binarized)) return (math.floor((first+last)/2), (last-first)/2) def makeWaffles(arrayXYZ, xRange, yRange, zRange, startIn, endIn, basis, radius, threshold, filterWidth): """ Given a volumetric scan arrayXYZ that contains a tube-like surface (a leg, for instance), this function finds a stack of ellipses that approximates the surface. Input: xRange, yRange, zRange: Voxels outside these ranges are ignored startIn: A pre-estimate of the center of the first ellipse endIn: A pre-estimate of the center of the last ellipse basis: An orthogonal basis for the cylindrical coordinate transformation. The last row vector corresponds to the axial coordinate. 
The first two row vectors define the plane for the ellipses. The ellipse axii will be aligned to the first two row vectors (not rotated in the plane). radius: radius for the array rotation, if required threshold: threshold for the array rotation, if required filterWidth: Used when filtering the lists of ellipse centers and axii lengths Output: A "waffle stack": a dictionary containing a starting point, basis, ellipse parameters, etc. """ if type(startIn)==type(0): # The start and end positions are z positions. # Get x and y positions # This should only be done when the basis is the standard basis # and waffles are stacked along z. zSlice = arrayXYZ[xRange[0]:xRange[1], yRange[0]:yRange[1], startIn] startX, dmy = getCenterAndRadius(np.mean(zSlice, axis=1), 0.1) startY, dmy = getCenterAndRadius(np.mean(zSlice, axis=0), 0.1) start = np.array([startX+xRange[0], startY+yRange[0], startIn], dtype=np.int32) zSlice = arrayXYZ[xRange[0]:xRange[1], yRange[0]:yRange[1], endIn] endX, dmy = getCenterAndRadius(np.mean(zSlice, axis=1), 0.1) endY, dmy = getCenterAndRadius(np.mean(zSlice, axis=0), 0.1) end = np.array([endX+xRange[0], endY+yRange[0], endIn], dtype=np.int32) length = endIn - startIn else: # We already have x,y,z coordinates for start and end positions. 
start, end = (startIn, endIn) length = int(math.floor(np.linalg.norm(end-start))) # Rotate scane into specified UVW basis arrayUVW = rotateArray( arrayXYZ, np.array(xRange, dtype=np.int32), np.array(yRange, dtype=np.int32), np.array(zRange, dtype=np.int32), np.array(start, dtype=np.float32), np.array(basis, dtype=np.float32), radius, length, threshold ) #return arrayUVW # Initialize waffle offsets and widths uOffsets = np.empty(length, np.float64) uWidths = uOffsets.copy() vOffsets = uOffsets.copy() vWidths = uOffsets.copy() # Get waffle widths and offsets from center for i in range(0, length): zSlice = arrayUVW[:,:,i] uOffsets[i], uWidths[i] = getCenterAndRadius(zSlice.mean(axis=1), 0.05) vOffsets[i], vWidths[i] = getCenterAndRadius(zSlice.mean(axis=0), 0.05) uOffsets -= radius vOffsets -= radius #plt.plot(uOffsets) # Reduce impulse noise uOffsets = medfilt(uOffsets, 5) uWidths = medfilt(uWidths, 5) vOffsets = medfilt(vOffsets, 5) vWidths = medfilt(vWidths, 5) # Smooth filtLength = math.floor(length * filterWidth/2)*2+1 filtLength = filtLength if filtLength>=5 else 5 uOffsets = savgol_filter(uOffsets, filtLength, 1) uWidths = savgol_filter(uWidths, filtLength, 1) vOffsets = savgol_filter(vOffsets, filtLength, 1) vWidths = savgol_filter(vWidths, filtLength, 1) # Return waffle stack return { "rangeXYZ": np.array([xRange, yRange, zRange], dtype=np.int32), "basis": np.array(basis, dtype=np.float32), "start": np.array(start, dtype=np.float32), "uOffsets": np.array(uOffsets, dtype=np.float32), "uWidths": np.array(uWidths, dtype=np.float32), "vOffsets": np.array(vOffsets, dtype=np.float32), "vWidths": np.array(vWidths, dtype=np.float32), "butter": 'LandOfLakes', "syrup": 'Maple' } ``` # Get leg and trunk waffle stacks ``` #################################################### log('Get leg and trunk waffle stacks') #################################################### # Right leg end = int(1.27 * trunkStart) rightLeg = makeWaffles( thresholded, (0, xCenter), (0, 
thresholded.shape[1]), (0, end), 0, end, np.eye(3), 200, 0.00001, .16 ) # Left leg end = int(1.27 * trunkStart) leftLeg = makeWaffles( thresholded, (xCenter, thresholded.shape[0]), (0, thresholded.shape[1]), (0, end), 0, end, np.eye(3), 200, 0.00001, .16 ) # Trunk start = int(.95 * trunkStart) end = int(trunkEnd) trunk = makeWaffles( thresholded, (0, thresholded.shape[0]), (0, thresholded.shape[1]), (start, end), start, end, np.eye(3), 250, 0.00001, .16 ) ``` # Get arm waffles The following function is a dirty kitchen sink. There's also a problem with the elbows: the way they're done now, waffles near an elbow are 90 degrees from the direction they should be. The ugly code and bad elbow embedding could be fixed by implementing curvy waffle stacks (waffles not parallel). This would require changing all the 2D embedding code - about 1 week of work. Elbow embedding would improve and the following function could be much cleaner. ``` def getArmJointCoordinates(thresholdedIn, means2DIn, xCenter, armpitZ, leftSide): #################################### # Get data for left or right side #################################### if leftSide: thresholded = np.flipud(thresholdedIn[xCenter:-1]) xzSlice = np.flipud(means2DIn[1][xCenter:-1,armpitZ:thresholded.shape[2]]) else: thresholded = thresholdedIn[0:xCenter] xzSlice = means2D[1][0:xCenter,armpitZ:thresholded.shape[2]] #################################### # Get fingertip height #################################### zStrip = np.mean(xzSlice, axis=0) fingertipZ = int(firstOn(np.flipud(zStrip), .05)) fingertipZ = thresholded.shape[2] - fingertipZ - 1 #################################### # Get arm waffles #################################### arm = makeWaffles( thresholded, (0, thresholded.shape[0]), (0, thresholded.shape[1]), (armpitZ, fingertipZ), armpitZ, fingertipZ, np.eye(3), 200, .00001, .1 ) #################################### # Get shoulder position #################################### shoulderPos = arm["start"] 
shoulderPos[0] -= armpitZ * .05 #################################### # Estimate elbow position #################################### armLength = len(arm["uOffsets"]) elbowZ = math.floor(0.4 * armLength) # Get second derivative filterWidth = math.floor(0.15 * armLength) d2x = gaussian_filter(arm["uOffsets"], (filterWidth), order=2, mode="nearest") # Improve estimate of elbow height if arm is bent if d2x[elbowZ] > .01: elbowIndicator = -arm["uOffsets"] + 25*70*d2x middleThird = [math.floor(armLength/3), math.floor(armLength*2/3)] elbowZ = np.argmax(elbowIndicator[middleThird[0]:middleThird[1]]) elbowZ = elbowZ + middleThird[0] # Get full-scan coordinates elbowX = math.floor(arm['start'][0] + arm['uOffsets'][elbowZ]) elbowY = math.floor(arm['start'][1] + arm['vOffsets'][elbowZ]) elbowZ += armpitZ elbowPos = np.array([elbowX, elbowY, elbowZ]) #################################### # Estimate wrist position #################################### numBins = 50 wristAt = 0.55 wristDel = 0.1 # Get full-scan XYZ coordinates of voxels in the forearm region forearmImg = thresholded[:, :,elbowZ:fingertipZ] forearmCoords = np.argwhere(forearmImg>0.00000001).astype(np.float32) forearmCoords[:,2] += elbowZ # Get vector from elbow to forearm center of mass forearmCoM = np.mean(forearmCoords, axis=0) forearmDirection = forearmCoM - elbowPos forearmLength = np.linalg.norm(forearmDirection) forearmDirection /= forearmLength # Project forearm coordinates on forearm vector forearmCoords -= elbowPos forearmW = np.dot(forearmCoords, forearmDirection) forearmR = np.linalg.norm(forearmCoords - np.outer(forearmW, forearmDirection), axis=1) # Build trajectory in wr space orderW = np.argsort(forearmW) trajectory = np.zeros(shape=(numBins,2), dtype=np.float32) mass = np.zeros(shape=numBins, dtype=np.float32) binNumberFromW = numBins / (3 * forearmLength) for i in range(len(orderW)): w = forearmW[orderW[i]] r = forearmR[orderW[i]] binIdx = math.floor(w * binNumberFromW) if binIdx > numBins-1 or 
binIdx < 0: continue mass[binIdx] += 1 trajectory[binIdx] += [w,r] # Normalize by mass and delete bins without mass endBin = -1 gotOne = False for i in range(numBins): if mass[i] > 0.00000001: trajectory[i] /= mass[i] gotOne = True elif gotOne: endBin = i break trajectory = trajectory[0:endBin] mass = mass[0:endBin] # Get length of trajectory trajectoryDels = np.linalg.norm(trajectory[1:-1] - trajectory[0:-2], axis=1) trajectoryLength = sum(trajectoryDels) # Get middle of trajectory wristPos = trajectoryLength * wristAt trajectoryAccum = 0 for i in range(len(trajectory)): trajectoryAccum += trajectoryDels[i] if trajectoryAccum > wristPos: wristPos = i break wristW = trajectory[wristPos, 0] # Get mean XYZ position of voxels near wrist startW = wristW - wristDel * forearmLength endW = wristW + wristDel * forearmLength wristPos = np.array([0,0,0], dtype=np.float32) wristMass = 0 for i in range(len(forearmCoords)): if forearmW[i] > startW and forearmW[i] < endW: wristPos += forearmCoords[i] wristMass += 1 wristPos /= wristMass wristPos += elbowPos #################################### # Return results #################################### # Flip x-coordinate if leftSide: shoulderPos[0] = thresholdedIn.shape[0] - shoulderPos[0] elbowPos[0] = thresholdedIn.shape[0] - elbowPos[0] wristPos[0] = thresholdedIn.shape[0] - wristPos[0] return (shoulderPos, elbowPos, wristPos, fingertipZ) def getArmSegmentBasis(start, end): w = end - start length = np.linalg.norm(w) w /= length u = np.cross(w, [0,0,1]) u /= np.linalg.norm(u) v = np.cross(u, w) v /= np.linalg.norm(v) return np.array([u,v,w], dtype=np.float32) #################################################### log('Get arm waffle stacks') #################################################### # Guess armpit height armpitZ = int(trunkStart + 0.9 * (trunkEnd - trunkStart)) # Right arm shoulderPos, elbowPos, wristPos, fingertipZ = getArmJointCoordinates(thresholded, means2D, xCenter, armpitZ, False) # Right bicep rightBicep = 
makeWaffles( thresholded, (0, xCenter), (0, thresholded.shape[1]), (math.floor(shoulderPos[2]*.9), elbowPos[2]+20), shoulderPos, elbowPos, getArmSegmentBasis(shoulderPos, elbowPos), 80, 0.00001, .7 ) # Right forearm rightForearm = makeWaffles( thresholded, (0, xCenter), (0, thresholded.shape[1]), (elbowPos[2]-20, wristPos[2]+20), elbowPos, wristPos, getArmSegmentBasis(elbowPos, wristPos), 60, 0.00001, .9 ) # Left arm shoulderPos, elbowPos, wristPos, fingertipZ = getArmJointCoordinates(thresholded, means2D, xCenter, armpitZ, True) # Left bicep leftBicep = makeWaffles( thresholded, (xCenter, thresholded.shape[0]), (0, thresholded.shape[1]), (math.floor(shoulderPos[2]*.9), elbowPos[2]+20), shoulderPos, elbowPos, getArmSegmentBasis(shoulderPos, elbowPos), 80, 0.00001, .7 ) # Left forearm leftForearm = makeWaffles( thresholded, (xCenter, thresholded.shape[0]), (0, thresholded.shape[1]), (elbowPos[2]-20, wristPos[2]+20), elbowPos, wristPos, getArmSegmentBasis(elbowPos, wristPos), 60, 0.00001, .9 ) ``` # Plot segment waffle stacks This does not go in the main processing loop. It's only used here, in the notebook, for debugging and development. 
``` def drawSegment(array, segment): length = len(segment["uOffsets"]) start = segment["start"] basis = segment["basis"] out = np.array(array) pos = [] wNorm = np.linalg.norm(basis[2]) for w in range(length): uo = segment["uOffsets"][w] uw = segment["uWidths"][w] vo = segment["vOffsets"][w] vw = segment["vWidths"][w] wPos = (w / wNorm**2) pos.append(start + np.dot([uo, vo, wPos], basis)) pos.append(start + np.dot([uo+uw, vo, wPos], basis)) pos.append(start + np.dot([uo-uw, vo, wPos], basis)) pos.append(start + np.dot([uo, vo+vw, wPos], basis)) pos.append(start + np.dot([uo, vo-vw, wPos], basis)) pos = np.array(np.floor(pos), dtype=np.int32) out[pos[:,0], pos[:,1], pos[:,2]] = .005 return out s = thresholded s = drawSegment(s, trunk) s = drawSegment(s, rightLeg) s = drawSegment(s, leftLeg) s = drawSegment(s, rightBicep) s = drawSegment(s, leftBicep) s = drawSegment(s, rightForearm) s = drawSegment(s, leftForearm) # Front xz = np.mean(s, axis=1) fig, ax = plt.subplots(1, figsize=(30,30)) ax.imshow(xz, cmap = 'viridis', interpolation = 'nearest') # Right yz = np.mean(s[0:xCenter+20,:,:], axis=0) fig, ax = plt.subplots(1, figsize=(30,30)) ax.imshow(yz, cmap = 'viridis', interpolation = 'nearest') # Left yz = np.mean(s[xCenter-20:-1,:,:], axis=0) fig, ax = plt.subplots(1, figsize=(30,30)) ax.imshow(yz, cmap = 'viridis', interpolation = 'nearest') ``` # Forearm may be bad due to movement during scan ``` fig, ax = plt.subplots(1, figsize=(30,30)) ax.imshow(thresholded[:,:,560], cmap = 'viridis', interpolation = 'nearest') ``` # Embed in cylindrical coordinates This section and the next few sections contain code that will be placed in the function getSegmentImage(), instead of going directly in the main processing loop. 
``` %%cython #--annotate cimport cython import math import numpy as np from cython.view cimport array as cvarray cdef extern from "math.h": float atan2(float x, float y) cdef extern from "math.h": int floor(float x) cdef extern from "math.h": float sqrt(float x) cdef extern from "math.h": float fabs(float x) @cython.boundscheck(False) # turn off bounds-checking for entire function @cython.wraparound(False) # turn off negative index wrapping for entire function @cython.cdivision(True) def toCylindricalCoordinates( float threshold, float [:,:,:] arrayXYZ, int [:,:] rangeXYZ, float [:,:] basis, float [:] startXYZ, float [:] uOffsets, float [:] uWidths, float [:] vOffsets, float [:] vWidths, int nTheta, int nRadius, float [:] radiusRange ): """ Get a cylindrical coordinate transformation of a portion of the input arrayXYZ. The input array should have a tube-like object (a leg for instance). The output array will contain the tube-like object, "unwrapped" by a cylindrical coordinate transformation. Notes on geometry: Argument 'startXYZ' is the origin of the ray along which waffles are stacked. It's in the standard basis. Argument 'basis' contains basis vectors u^hat, v^hat, w^hat. Arguments 'uOffsets' and 'vOffsets' contain offsets for the waffles, in the u+v subspace. Algorithm steps: 1: Subtract startXYZ from a voxel's xyz coordinates 2: Transform xyz coordinates to uvw coordinates 3: Subtract the offset in u+v space and rescale by uWidth, vWidth. This yields cylU and cylV. 4: Get cylindrical coordinates: cylR = sqrt(cylU**2 + cylV**2) cylTheta = atan2(cylU, -cylV) cylW = w coordinate from step 2 5: Add voxel's value to the output bin at (cylR, cylTheta, cylW) The output array is indexed by spatial coordinates (cylR, cylTheta, cylW). """ # wLen is the number of waffles used in the cylindrical coordinate embedding cdef int wLen = len(uOffsets) # Variable out is a 3D histogram with 2 channels: # Channel 0: voxel count # Channel 1: accumulated reflectivity (normalized to a mean before returning) # It is indexed by spatial coordinates (cylR, cylTheta, cylW) cdef float [:,:,:,:] out = cvarray(shape=(nRadius, nTheta, wLen, 2), itemsize=sizeof(float), format="f") out[:,:,:] = 0. # xyMask is used to skip vertical slices that are empty (i.e. below threshold) cdef float [:,:] xyMask = np.mean(arrayXYZ, axis=2) cdef float xyThreshold = threshold / 100 # Basis matrix: rows (u,v,w), columns (x,y,z) # It's stored as single floats to maximize the chance they'll end up in CPU registers # I'm avoiding Numpy since vectorization won't help much with tiny matrices and vectors cdef float bux = basis[0,0] cdef float buy = basis[0,1] cdef float buz = basis[0,2] cdef float bvx = basis[1,0] cdef float bvy = basis[1,1] cdef float bvz = basis[1,2] cdef float bwx = basis[2,0] cdef float bwy = basis[2,1] cdef float bwz = basis[2,2] # Voxel coordinates cdef int x,y,z # Reflectivity at voxel cdef float val # Cylindrical coordinates cdef float cylU, cylV, cylTheta, cylR cdef int cylW, cylThetaIdx, cylRIdx # Offset of voxel from the waffle stack's starting point, in xyz basis cdef float ox, oy, oz # Offset of voxel from waffle center, in uv basis cdef float ou, ov cdef float radiusRescale = (nRadius-1) / (radiusRange[1]-radiusRange[0]) cdef float norm cdef int xStart = min(rangeXYZ[0]) cdef int xEnd = max(rangeXYZ[0]) cdef int yStart = min(rangeXYZ[1]) cdef int yEnd = max(rangeXYZ[1]) cdef int zStart = min(rangeXYZ[2]) cdef int zEnd = max(rangeXYZ[2]) # Loop over voxels, accumulate the cylindrical embedding for x in range(xStart, xEnd): for y in range(yStart, yEnd): # Skip empty z-slices if xyMask[x,y] < xyThreshold: continue for z in range(zStart, zEnd): # Skip empty voxels val = arrayXYZ[x,y,z] if val < threshold: continue # Get voxel's offset from starting point, in xyz basis ox, oy, oz = (x-startXYZ[0], y-startXYZ[1], z-startXYZ[2]) # Get axial cylindrical coordinate cylW = floor(bwx*ox +
bwy*oy + bwz*oz) # Ensure that voxel is within the output image's w-range if cylW >= 0 and cylW < wLen: # Transform (ox,oy,oz) to uv basis ou, ov = ( bux*ox + buy*oy + buz*oz, bvx*ox + bvy*oy + bvz*oz) # Get voxel's offset from waffle center, in uv basis # An axis-aligned scaling transform is also applied cylU = (ou - uOffsets[cylW]) / (uWidths[cylW]) cylV = (ov - vOffsets[cylW]) / (vWidths[cylW]) # Get radial cylindrical coordinate cylR = sqrt(cylU**2 + cylV**2) # Ensure that voxel is not far from waffle center if cylR >= radiusRange[0] and cylR <= radiusRange[1]: # Get angular cylindrical coordinate cylTheta = atan2(cylU, -cylV) # Get array index from theta cylThetaIdx = floor(nTheta/2 + cylTheta * (nTheta-1) / 2 / 3.14159) cylThetaIdx = 0 if cylThetaIdx < 0 else cylThetaIdx cylThetaIdx = nTheta-1 if cylThetaIdx >= nTheta else cylThetaIdx # Get array index from radius cylRIdx = floor((cylR - radiusRange[0])*radiusRescale) # Accumulate in bin out[cylRIdx, cylThetaIdx, cylW, 0] += 1 out[cylRIdx, cylThetaIdx, cylW, 1] += val # Normalize by number of voxels for cylRIdx in range(nRadius): for cylThetaIdx in range(nTheta): for cylW in range(wLen): norm = out[cylRIdx, cylThetaIdx, cylW, 0] if norm > .00000001: out[cylRIdx, cylThetaIdx, cylW, 1] /= norm return np.array(out[:,:,:,1], dtype=np.float32) trunk['tLen'] = 250 trunk['rLen'] = 120 trunk['highpassRadius'] = 5 trunk['rescaleRadius'] = 0.1 trunk['rescaleReflec'] = 500 trunk['threshold'] = .00015 rightLeg['tLen'] = 80 rightLeg['rLen'] = 40 rightLeg['highpassRadius'] = 5 rightLeg['rescaleRadius'] = 0.1 rightLeg['rescaleReflec'] = 500 rightLeg['threshold'] = .0001 leftLeg['tLen'] = 80 leftLeg['rLen'] = 40 leftLeg['highpassRadius'] = 5 leftLeg['rescaleRadius'] = 0.1 leftLeg['rescaleReflec'] = 500 leftLeg['threshold'] = .0001 rightBicep['tLen'] = 60 rightBicep['rLen'] = 40 rightBicep['highpassRadius'] = (2,5) rightBicep['rescaleRadius'] = 0.1 rightBicep['rescaleReflec'] = 500 rightBicep['threshold'] = .00001 
leftBicep['tLen'] = 60 leftBicep['rLen'] = 40 leftBicep['highpassRadius'] = (2,5) leftBicep['rescaleRadius'] = 0.1 leftBicep['rescaleReflec'] = 500 leftBicep['threshold'] = .00001 rightForearm['tLen'] = 30 rightForearm['rLen'] = 35 rightForearm['highpassRadius'] = (2,5) rightForearm['rescaleRadius'] = 0.1 rightForearm['rescaleReflec'] = 500 rightForearm['threshold'] = .00001 leftForearm['tLen'] = 30 leftForearm['rLen'] = 35 leftForearm['highpassRadius'] = (2,5) leftForearm['rescaleRadius'] = 0.1 leftForearm['rescaleReflec'] = 500 leftForearm['threshold'] = .00001 # Simulated arguments to getSegmentImage() segment = rightForearm dat = data rRange = np.array([0.2,2], dtype=np.float32) highPassRadius = segment['highpassRadius'] threshold = segment['threshold'] rescaleRadius = segment['rescaleRadius'] rescaleReflec = segment['rescaleReflec'] tLen = segment['tLen'] rLen = segment['rLen'] #################################################### # Embed segment in cylindrical coords. (Place in getSegmentImage) #################################################### #%time \ cylindrical = toCylindricalCoordinates( \ threshold, dat, \ segment['rangeXYZ'], segment['basis'], segment['start'], \ segment['uOffsets'], segment['uWidths'], \ segment['vOffsets'], segment['vWidths'], \ tLen, rLen, \ np.array(rRange, dtype=np.float32) \ ) fig, ax = plt.subplots(3, figsize=(30,30)) for i in range(3): ax[i].imshow(np.take(cylindrical, math.floor(cylindrical.shape[i]/2), axis=i), cmap = 'viridis', interpolation = 'nearest') ``` # Pad image on theta axis ``` def pad(arr, portionOfCircle): nTheta = arr.shape[0] nPad = math.floor(nTheta * portionOfCircle) out = np.array(arr, dtype=np.float32) out = np.concatenate([out[nTheta-nPad:nTheta], out, out[0:nPad]], axis=0) return out #################################################### # Pad image on theta axis. 
(Place in getSegmentImage) #################################################### cylindrical = np.transpose(cylindrical, axes=(1,0,2)) cylindrical = pad(cylindrical, .25) cylindrical = np.transpose(cylindrical, axes=(1,0,2)) fig, ax = plt.subplots(3, figsize=(30,30)) for i in range(3): ax[i].imshow(np.take(cylindrical, math.floor(cylindrical.shape[i]/2), axis=i), cmap = 'viridis', interpolation = 'nearest') ``` # Smooth and inpaint to remove Moire gaps ``` %%cython #--annotate cimport cython import math import numpy as np from cython.view cimport array as cvarray @cython.boundscheck(False) # turn off bounds-checking for entire function @cython.wraparound(False) # turn off negative index wrapping for entire function @cython.cdivision(True) def boxFilterNonzero(float [:,:,:] array): """ Convolutional filter that gets the average of non-zero neighbors """ cdef int nr = array.shape[0] cdef int nt = array.shape[1] cdef int nw = array.shape[2] cdef float [:,:,:] out = array.copy() cdef int r, t, w, dr, dt, dw, n cdef float dest, source, total for r in range(1, nr-1): for t in range(1, nt-1): for w in range(1, nw-1): dest = array[r,t,w] if dest < 0.00000001: n = 0 total = 0 for dr in range(-1,1): for dt in range(-1,1): for dw in range(-1,1): source = array[r+dr,t+dt,w+dw] if source > 0.00000001: n += 1 total += source if n > 0: total /= n out[r,t,w] = total return np.array(out, dtype=np.float32) %%cython #--annotate cimport cython import math import numpy as np from cython.view cimport array as cvarray @cython.boundscheck(False) # turn off bounds-checking for entire function @cython.wraparound(False) # turn off negative index wrapping for entire function @cython.cdivision(True) def inpaint(float [:,:,:] unsmoothed, float [:,:,:] smoothed, float [:,:,:] smoothedPresence): """ Inpaints an image by replacing zeros in the image with values taken from a smoothed image. It's much faster than scipy's inpainting function. 
""" cdef int nr = unsmoothed.shape[0] cdef int nt = unsmoothed.shape[1] cdef int nw = unsmoothed.shape[2] cdef float [:,:,:] out = unsmoothed.copy() cdef int r, t, w for r in range(nr): for t in range(nt): for w in range(nw): if unsmoothed[r,t,w] < .00000001 and smoothedPresence[r,t,w] > 0.01: out[r,t,w] = smoothed[r,t,w]/smoothedPresence[r,t,w] return out #################################################### # Filter and inpaint to remove Moire gaps. (Place in getSegmentImage) #################################################### boxFiltered = boxFilterNonzero(cylindrical) smoothed = sp.ndimage.filters.gaussian_filter(boxFiltered, (1,2,2)) presence = np.array(boxFiltered > 0.00000001, dtype=np.float32) smoothedPresence = sp.ndimage.filters.gaussian_filter(presence, (1,2,2)) inpainted = inpaint(boxFiltered, smoothed, smoothedPresence) fig, ax = plt.subplots(3, figsize=(30,30)) for i in range(3): ax[i].imshow(np.take(inpainted, math.floor(inpainted.shape[i]/2), axis=i), cmap = 'viridis', interpolation = 'nearest') ``` smoothed = sp.ndimage.filters.median_filter(cylindrical, (2,4,4)) smoothed = sp.ndimage.filters.gaussian_filter(smoothed, (1,2,2)) inpainted = inpaint(cylindrical, smoothed) # Get surface moments ``` %%cython #--annotate cimport cython import math import numpy as np from cython.view cimport array as cvarray cdef extern from "math.h": int floor(float x) @cython.boundscheck(False) # turn off bounds-checking for entire function @cython.wraparound(False) # turn off negative index wrapping for entire function @cython.cdivision(True) def surfaceMoments(float [:,:,:] array, float thresholdPortion): """ Given an array with dimensions (r, theta, w), this function calculates the peak moments in the r lists, and returns a 2+1 dimensional array with moments 0, 1, and 2 as image channels. 
""" cdef int nr = array.shape[0] cdef int nt = array.shape[1] cdef int nw = array.shape[2] cdef float [:,:,:] out = np.zeros(shape=(nt,nw,3), dtype=np.float32) cdef float val, maxVal, maxPos cdef int r, windowStart, windowEnd cdef float total0, total1, total2 cdef float threshold, subtract for w in range(nw): for t in range(nt): # Get maximum maxVal = -1 maxPos = 0 for r in range(nr): val = array[r,t,w] if val > maxVal: maxVal = val maxPos = r # Improve peak position windowStart = floor(maxPos) - 2 windowStart = 0 if windowStart<0 else windowStart windowEnd = windowStart + 4 windowEnd = nr if windowEnd>nr else windowEnd total0 = 0 total1 = 0 for r in range(windowStart, windowEnd): total0 += array[r,t,w] total1 += array[r,t,w] * r if total0 > .00000001: maxPos = total1 / total0 # Get a threshold for the next part threshold = maxVal * thresholdPortion subtract = threshold # set this to zero for flatter mass distribution # Get spatial moment 0, using thresholded masses total0 = 0 total1 = 0 for r in range(nr): val = array[r,t,w] if val >= threshold: val -= subtract total0 += val if total0 > 0.00000001: # Get spatial moment 2, using thresholded masses total2 = 0 for r in range(nr): val = array[r,t,w] if val >= threshold: val -= subtract total2 += val * (r - maxPos) ** 2 total2 = (total2 / total0) ** 0.5 out[t,w,0] = maxVal out[t,w,1] = maxPos out[t,w,2] = total2 return np.array(out, dtype=np.float32) #################################################### # Get surface moments. 
(Place in getSegmentImage) #################################################### moments = surfaceMoments(inpainted, 0.5) fig, ax = plt.subplots(3, figsize=(30,30)) for i in range(3): ax[i].imshow(moments[:,:,i]) ``` # Rescale image channels ``` def imageAdjust(img, scale): out = img.copy() for i in range(img.shape[2]): median = np.median(img[:,:,i]) out[:,:,i] = out[:,:,i] - median out[:,:,i] *= scale[i] out[:,:,i] += 0.5 out[:,:,i] = np.clip(out[:,:,i], 0, 1) return out #################################################### # Rescale moments to (0,1). (Place in getSegmentImage) #################################################### img = imageAdjust(moments, (1000, 0.01, 0.05)) fig, ax = plt.subplots(1, figsize=(30,30)) ax.imshow(img) ``` # Depad/repad image ``` def depad(arr, portion): nTheta = len(arr) nPad = math.floor(nTheta * portion) out = arr[nPad:-nPad] return out #################################################### # Depad/repad image. (Place in getSegmentImage) #################################################### img = depad(img, .1666) img = pad(img, .25) fig, ax = plt.subplots(1, figsize=(30,30)) ax.imshow(img) ``` # Get image for a segment ``` def getSegmentImage( threshold, dat, segment, tLen, rLen, rRange, highpassRadius, rescaleRadius, rescaleReflec, rescaleThickness): global threshold3D #################################################### # Embed segment in cylindrical coords. (Place in getSegmentImage) #################################################### cylindrical = toCylindricalCoordinates( \ threshold, dat, \ segment['rangeXYZ'], segment['basis'], segment['start'], \ segment['uOffsets'], segment['uWidths'], \ segment['vOffsets'], segment['vWidths'], \ tLen, rLen, \ np.array(rRange, dtype=np.float32) \ ) #################################################### # Pad image on theta axis. 
(Place in getSegmentImage) #################################################### cylindrical = np.transpose(cylindrical, axes=(1,0,2)) cylindrical = pad(cylindrical, .25) cylindrical = np.transpose(cylindrical, axes=(1,0,2)) #################################################### # Filter and inpaint to remove Moire gaps. (Place in getSegmentImage) #################################################### boxFiltered = boxFilterNonzero(cylindrical) smoothed = sp.ndimage.filters.gaussian_filter(boxFiltered, (1,2,2)) presence = np.array(boxFiltered > 0.00000001, dtype=np.float32) smoothedPresence = sp.ndimage.filters.gaussian_filter(presence, (1,2,2)) inpainted = inpaint(boxFiltered, smoothed, smoothedPresence) #################################################### # Get surface moments. (Place in getSegmentImage) #################################################### moments = surfaceMoments(inpainted, 0.5) #################################################### # Rescale moments to (0,1). (Place in getSegmentImage) #################################################### img = imageAdjust(moments, (rescaleReflec, rescaleRadius, rescaleThickness)) #################################################### # Depad/repad image. (Place in getSegmentImage) #################################################### img = depad(img, .1666) img = pad(img, .25) return img ``` # Get trunk image The remaining sections are to be placed in the main loop, not in getSegmentImage() like the last few sections. 
``` #################################################### log('Get trunk image') #################################################### trunkImg = getSegmentImage( threshold = .00015, dat = data, segment = trunk, tLen = 250, rLen = 120, rRange = (0.2,2), highpassRadius = 5, rescaleReflec = 1500, rescaleRadius = .03, rescaleThickness = .13 ) fig, ax = plt.subplots(1, figsize=(30,30)) ax.imshow(trunkImg) ``` # Right leg ``` #################################################### log('Get right leg image') #################################################### rightLegImg = getSegmentImage( threshold = .0001, dat = data, segment = rightLeg, tLen = 100, rLen = 80, rRange = (0.2,2), highpassRadius = 5, rescaleReflec = 1500, rescaleRadius = .03, rescaleThickness = .133 ) fig, ax = plt.subplots(1, figsize=(30,30)) ax.imshow(rightLegImg) ``` # Left leg ``` #################################################### log('Get left leg image') #################################################### leftLegImg = getSegmentImage( threshold = .0001, dat = data, segment = leftLeg, tLen = 100, rLen = 80, rRange = (0.2,2), highpassRadius = 5, rescaleReflec = 1500, rescaleRadius = .03, rescaleThickness = .133 ) leftLegImg = np.flipud(leftLegImg) fig, ax = plt.subplots(1, figsize=(30,30)) ax.imshow(leftLegImg) ``` # Right bicep ``` #################################################### log('Get right bicep image') #################################################### rightBicepImg = getSegmentImage( threshold = .00001, dat = data, segment = rightBicep, tLen = 60, rLen = 40, rRange = (0.2,3), highpassRadius = (2,5), rescaleReflec = 1500, rescaleRadius = .03, rescaleThickness = .133 ) fig, ax = plt.subplots(1, figsize=(30,30)) ax.imshow(rightBicepImg) ``` # Left bicep ``` #################################################### log('Get left bicep image') #################################################### leftBicepImg = getSegmentImage( threshold = .00001, dat = data, segment = leftBicep, tLen = 60, rLen = 
40, rRange = (0.2,3), highpassRadius = (2,5), rescaleReflec = 1500, rescaleRadius = .03, rescaleThickness = .133 ) leftBicepImg = np.flipud(leftBicepImg) fig, ax = plt.subplots(1, figsize=(30,30)) ax.imshow(leftBicepImg) ``` # Right forearm ``` #################################################### log('Get right forearm image') #################################################### rightForearmImg = getSegmentImage( threshold = .00001, dat = data, segment = rightForearm, tLen = 50, rLen = 35, rRange = (0.2,3), highpassRadius = (2,5), rescaleReflec = 1500, rescaleRadius = .02, rescaleThickness = .133 ) fig, ax = plt.subplots(1, figsize=(30,30)) ax.imshow(rightForearmImg) ``` # Left forearm ``` #################################################### log('Get left forearm image') #################################################### leftForearmImg = getSegmentImage( threshold = .00001, dat = data, segment = leftForearm, tLen = 50, rLen = 35, rRange = (0.2,3), highpassRadius = (2,5), rescaleReflec = 1500, rescaleRadius = .02, rescaleThickness = .133 ) leftForearmImg = np.flipud(leftForearmImg) fig, ax = plt.subplots(1, figsize=(30,30)) ax.imshow(leftForearmImg) ``` # Combine images ``` #################################################### log('Combine images') #################################################### trunkResized = resize(trunkImg, (360,256), mode='edge') rightLegResized = resize(rightLegImg, (180,256), mode='edge') leftLegResized = resize(leftLegImg, (180,256), mode='edge') rightBicepResized = resize(rightBicepImg, (90,128), mode='edge') leftBicepResized = resize(leftBicepImg, (90,128), mode='edge') rightForearmResized = resize(rightForearmImg, (70,128), mode='edge') leftForearmResized = resize(leftForearmImg, (70,128), mode='edge') forearmEmpty = np.full(shape=(20,128,3), fill_value=.5) rightForearmResized = np.concatenate([rightForearmResized, forearmEmpty], axis=0) leftForearmResized = np.concatenate([leftForearmResized, forearmEmpty], axis=0) rightArm = 
np.concatenate([rightBicepResized, rightForearmResized], axis=1) leftArm = np.concatenate([leftBicepResized, leftForearmResized], axis=1) bodyImg = np.concatenate([ trunkResized, rightLegResized, leftLegResized, rightArm, leftArm ], axis=0) fig, ax = plt.subplots(1, figsize=(30,30)) ax.imshow(bodyImg) ``` # Save image ``` outputDir = embeddedDir1 #################################################### log('Save image') #################################################### base = inputFile.split('/')[-1] base = base.split('.')[-2] outputFile = outputDir[6:] + base + '.png' imsave(outputFile, bodyImg) ``` # Function to process a scan (all of the above copy/pasted here) ``` def processScan(inputFile, outputDir): #################################################### log('+++++ Read file ' + inputFile + ' +++++') #################################################### # Note: title comments like the one above indicate # notebook cells that are copy/pasted to the main # processing loop at the bottom of the notebook. 
data = loadFile(inputFile) #################################################### log('Threshold') #################################################### thresholded = np.clip(data, threshold3D, 100000) - threshold3D #################################################### log('Average along axii to get 2D mean images') #################################################### # Summed images will have yz, xz, xy axes, respectively means2D = list([thresholded.mean(axis=i) for i in range(3)]) mean2DAxii = [('y', 'z'), ('x','z'), ('x','y')] #################################################### log('Get center of mass') #################################################### # Could do this on the 3D image, but it's much faster on 2D images xCenter, zCenter = np.round(ndimage.measurements.center_of_mass(means2D[1])).astype(int) yCenter, dmy = np.round(ndimage.measurements.center_of_mass(means2D[0])).astype(int) #################################################### log('Get end of trunk z-range (top of shoulders)') #################################################### zStart, zEnd = (int(zCenter * 1.2), int(zCenter * 1.7)) if zEnd > data.shape[2]: zEnd = data.shape[2] xStart = int(xCenter - .25 * zCenter) xEnd = int(xCenter + .25 * zCenter) midStrip = means2D[1][xStart:xEnd, zStart:zEnd] off = midStrip < 0.00000001 numOff = np.sum(off, axis=0) trunkEnd = firstOn(numOff, .15) trunkEnd += zStart #################################################### log('Get start of trunk z-range (bottom of groin)') #################################################### zStart = int(0.5 * zCenter) zStrip = np.mean(thresholded[:, yCenter:, zStart:zCenter], axis=0) zStrip = gaussian_filter(zStrip, (5,5)) buttProfile = np.array([firstOff(x, .05) for x in np.transpose(zStrip)]) buttProfile = median_filter(buttProfile, 5) buttProfile = savgol_filter(buttProfile, 11, 2, deriv=1, mode='nearest') trunkStart = zStart + np.argmax(buttProfile) #################################################### log('Erase head') 
#################################################### # Head position, size headCenterZ = trunkEnd * 1.12 headHeight = trunkEnd * .15 headWidth = trunkEnd * .13 # Get center X for head, which may be shifted relative to body headStripStartZ = math.floor(trunkEnd * 1.05) headStripEndZ = math.floor(trunkEnd * 1.1) headStripStartX = math.floor(xCenter - trunkEnd*.18) headStripEndX = math.floor(xCenter + trunkEnd*.18) headStripMean = np.mean(thresholded[headStripStartX:headStripEndX, :, headStripStartZ:headStripEndZ], axis=(1,2)) middle = math.floor(len(headStripMean)/2) rightMost = firstOff(headStripMean[middle:0:-1], .05) leftMost = firstOff(headStripMean[middle:-1], .05) headOffsetX = math.floor((leftMost-rightMost)/2) # Erase head by deleting cylindrical region thresholded = eraseHead(thresholded, xCenter + headOffsetX, headCenterZ, headWidth, headHeight) data = eraseHead(data, xCenter + headOffsetX, headCenterZ, headWidth, headHeight) # Recalculate averages along axii means2D = list([thresholded.mean(axis=i) for i in range(3)]) mean2DAxii = [('y', 'z'), ('x','z'), ('x','y')] #################################################### log('Get leg and trunk waffle stacks') #################################################### # Right leg end = int(1.27 * trunkStart) rightLeg = makeWaffles( thresholded, (0, xCenter), (0, thresholded.shape[1]), (0, end), 0, end, np.eye(3), 200, 0.00001, .16 ) # Left leg end = int(1.27 * trunkStart) leftLeg = makeWaffles( thresholded, (xCenter, thresholded.shape[0]), (0, thresholded.shape[1]), (0, end), 0, end, np.eye(3), 200, 0.00001, .16 ) # Trunk start = int(.95 * trunkStart) end = int(trunkEnd) trunk = makeWaffles( thresholded, (0, thresholded.shape[0]), (0, thresholded.shape[1]), (start, end), start, end, np.eye(3), 250, 0.00001, .16 ) #################################################### log('Get arm waffle stacks') #################################################### # Guess armpit height armpitZ = int(trunkStart + 0.9 * (trunkEnd - 
trunkStart)) # Right arm shoulderPos, elbowPos, wristPos, fingertipZ = getArmJointCoordinates(thresholded, means2D, xCenter, armpitZ, False) # Right bicep rightBicep = makeWaffles( thresholded, (0, xCenter), (0, thresholded.shape[1]), (math.floor(shoulderPos[2]*.9), elbowPos[2]+20), shoulderPos, elbowPos, getArmSegmentBasis(shoulderPos, elbowPos), 80, 0.00001, .7 ) # Right forearm rightForearm = makeWaffles( thresholded, (0, xCenter), (0, thresholded.shape[1]), (elbowPos[2]-20, wristPos[2]+20), elbowPos, wristPos, getArmSegmentBasis(elbowPos, wristPos), 60, 0.00001, .9 ) # Left arm shoulderPos, elbowPos, wristPos, fingertipZ = getArmJointCoordinates(thresholded, means2D, xCenter, armpitZ, True) # Left bicep leftBicep = makeWaffles( thresholded, (xCenter, thresholded.shape[0]), (0, thresholded.shape[1]), (math.floor(shoulderPos[2]*.9), elbowPos[2]+20), shoulderPos, elbowPos, getArmSegmentBasis(shoulderPos, elbowPos), 80, 0.00001, .7 ) # Left forearm leftForearm = makeWaffles( thresholded, (xCenter, thresholded.shape[0]), (0, thresholded.shape[1]), (elbowPos[2]-20, wristPos[2]+20), elbowPos, wristPos, getArmSegmentBasis(elbowPos, wristPos), 60, 0.00001, .9 ) #################################################### log('Get trunk image') #################################################### trunkImg = getSegmentImage( threshold = .00015, dat = data, segment = trunk, tLen = 250, rLen = 120, rRange = (0.2,2), highpassRadius = 5, rescaleReflec = 1500, rescaleRadius = .03, rescaleThickness = .13 ) #################################################### log('Get right leg image') #################################################### rightLegImg = getSegmentImage( threshold = .0001, dat = data, segment = rightLeg, tLen = 100, rLen = 80, rRange = (0.2,2), highpassRadius = 5, rescaleReflec = 1500, rescaleRadius = .03, rescaleThickness = .133 ) #################################################### log('Get left leg image') #################################################### leftLegImg 
= getSegmentImage( threshold = .0001, dat = data, segment = leftLeg, tLen = 100, rLen = 80, rRange = (0.2,2), highpassRadius = 5, rescaleReflec = 1500, rescaleRadius = .03, rescaleThickness = .133 ) leftLegImg = np.flipud(leftLegImg) #################################################### log('Get right bicep image') #################################################### rightBicepImg = getSegmentImage( threshold = .00001, dat = data, segment = rightBicep, tLen = 60, rLen = 40, rRange = (0.2,3), highpassRadius = (2,5), rescaleReflec = 1500, rescaleRadius = .03, rescaleThickness = .133 ) #################################################### log('Get left bicep image') #################################################### leftBicepImg = getSegmentImage( threshold = .00001, dat = data, segment = leftBicep, tLen = 60, rLen = 40, rRange = (0.2,3), highpassRadius = (2,5), rescaleReflec = 1500, rescaleRadius = .03, rescaleThickness = .133 ) leftBicepImg = np.flipud(leftBicepImg) #################################################### log('Get right forearm image') #################################################### rightForearmImg = getSegmentImage( threshold = .00001, dat = data, segment = rightForearm, tLen = 50, rLen = 35, rRange = (0.2,3), highpassRadius = (2,5), rescaleReflec = 1500, rescaleRadius = .02, rescaleThickness = .133 ) #################################################### log('Get left forearm image') #################################################### leftForearmImg = getSegmentImage( threshold = .00001, dat = data, segment = leftForearm, tLen = 50, rLen = 35, rRange = (0.2,3), highpassRadius = (2,5), rescaleReflec = 1500, rescaleRadius = .02, rescaleThickness = .133 ) leftForearmImg = np.flipud(leftForearmImg) #################################################### log('Combine images') #################################################### trunkResized = resize(trunkImg, (360,256), mode='edge') rightLegResized = resize(rightLegImg, (180,256), mode='edge') 
leftLegResized = resize(leftLegImg, (180,256), mode='edge') rightBicepResized = resize(rightBicepImg, (90,128), mode='edge') leftBicepResized = resize(leftBicepImg, (90,128), mode='edge') rightForearmResized = resize(rightForearmImg, (70,128), mode='edge') leftForearmResized = resize(leftForearmImg, (70,128), mode='edge') forearmEmpty = np.full(shape=(20,128,3), fill_value=.5) rightForearmResized = np.concatenate([rightForearmResized, forearmEmpty], axis=0) leftForearmResized = np.concatenate([leftForearmResized, forearmEmpty], axis=0) rightArm = np.concatenate([rightBicepResized, rightForearmResized], axis=1) leftArm = np.concatenate([leftBicepResized, leftForearmResized], axis=1) bodyImg = np.concatenate([ trunkResized, rightLegResized, leftLegResized, rightArm, leftArm ], axis=0) #################################################### log('Save image') #################################################### base = inputFile.split('/')[-1] base = base.split('.')[-2] outputFile = outputDir[6:] + base + '.png' imsave(outputFile, bodyImg) ``` # Function to process a folder of scans ``` def processFolder(inputFolder, outputFolder, startFileNum=0): inputFiles = filenames(inputFolder) for fileNum in range(startFileNum, len(inputFiles)): # Print file number (don't just freeze for hours) print(str(datetime.datetime.now()) + " Processing file " + str(fileNum) + " of " + str(len(inputFiles))) try: processScan(inputFiles[fileNum], outputFolder) except Exception as exc: errorStr = 'Error: ' + str(exc) log(errorStr) print(errorStr) break log("Finished") print(str(datetime.datetime.now()) + " Finished") ``` # Process scans Manually clear all output now so that autosave doesn't waste CPU cycles. I didn't find a way to do this in code. ``` # Process stage 1 scans # Takes about 15 hours processFolder(scanDir1, embeddedDir1, 0) # Process stage 2 scans # Takes about 15 hours processFolder(scanDir2, embeddedDir2, 0) ``` # Do we have them all? 
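The completeness check in the next cell compares the stems of the input and output filenames as sets; `symmetric_difference` returns the items that appear in exactly one of the two sets, so an empty result means every scan produced an embedding. A toy illustration with made-up names:

```python
# Hypothetical file stems for inputs (scans) and outputs (embeddings)
inputs = {"scan_001", "scan_002", "scan_003"}
outputs = {"scan_001", "scan_003"}

# Items present in exactly one set: scans with no embedding,
# or embeddings with no matching scan
unmatched = inputs.symmetric_difference(outputs)
```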
``` inputFiles = filenames(scanDir1) inputFiles = set([x[17:-4] for x in inputFiles]) outputFiles = filenames(embeddedDir1) outputFiles = set([x[24:-4] for x in outputFiles if x[-3:] == 'png']) print("length: ", len(outputFiles), len(inputFiles)) print("same length? ", len(outputFiles)==len(inputFiles)) print("difference: ", outputFiles.symmetric_difference(inputFiles)) ``` # Compare scans and embeddings ``` inputFile = filenames(scanDir1)[100] # Load scan data = loadFile(inputFile) thresholded = np.clip(data, threshold3D, 100000) - threshold3D # Plot scan fig, ax = plt.subplots(3, figsize=(30,30)) for i in range(3): ax[i].imshow(np.mean(thresholded, axis=i), cmap = 'viridis', interpolation = 'nearest') # Load embedding png = loadFile(embeddedDir1 + inputFile[17:-4] + '.png') # Plot embedding fig, ax = plt.subplots(1, figsize=(30,30)) ax.imshow(png) ```
```
import torch
from tqdm.notebook import tqdm
from transformers import BertTokenizer
from torch.utils.data import TensorDataset
from transformers import BertForSequenceClassification
import pandas as pd

df = pd.read_csv("data/mn_data.csv")
df.head()

df["Conference"].value_counts()

possible_labels = df.Conference.unique()

label_dict = {}
for index, possible_label in enumerate(possible_labels):
    label_dict[possible_label] = index
label_dict

df['label'] = df.Conference.replace(label_dict)

from sklearn.model_selection import train_test_split

X_train, X_val, y_train, y_val = train_test_split(df.index.values,
                                                  df.label.values,
                                                  test_size=0.15,
                                                  random_state=42,
                                                  stratify=df.label.values)

df['data_type'] = ['not_set']*df.shape[0]

df.loc[X_train, 'data_type'] = 'train'
df.loc[X_val, 'data_type'] = 'val'

df.groupby(['Conference', 'label', 'data_type']).count()

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased',
                                          do_lower_case=True)

# Tokenizer Parameters
# model_max_length (int, optional) – The maximum length (in number of tokens) for the inputs to the transformer model. When the tokenizer is loaded with from_pretrained(), this will be set to the value stored for the associated model in max_model_input_sizes (see above). If no value is provided, will default to VERY_LARGE_INTEGER (int(1e30)).
# padding_side – (str, optional): The side on which the model should have padding applied. Should be selected between [‘right’, ‘left’]. Default value is picked from the class attribute of the same name.
# model_input_names (List[string], optional) – The list of inputs accepted by the forward pass of the model (like "token_type_ids" or "attention_mask"). Default value is picked from the class attribute of the same name.
# bos_token (str or tokenizers.AddedToken, optional) – A special token representing the beginning of a sentence. Will be associated to self.bos_token and self.bos_token_id.
# eos_token (str or tokenizers.AddedToken, optional) – A special token representing the end of a sentence. Will be associated to self.eos_token and self.eos_token_id.
# unk_token (str or tokenizers.AddedToken, optional) – A special token representing an out-of-vocabulary token. Will be associated to self.unk_token and self.unk_token_id.
# sep_token (str or tokenizers.AddedToken, optional) – A special token separating two different sentences in the same input (used by BERT for instance). Will be associated to self.sep_token and self.sep_token_id.
# pad_token (str or tokenizers.AddedToken, optional) – A special token used to make arrays of tokens the same size for batching purpose. Will then be ignored by attention mechanisms or loss computation. Will be associated to self.pad_token and self.pad_token_id.
# cls_token (str or tokenizers.AddedToken, optional) – A special token representing the class of the input (used by BERT for instance). Will be associated to self.cls_token and self.cls_token_id.
# mask_token (str or tokenizers.AddedToken, optional) – A special token representing a masked token (used by masked-language modeling pretraining objectives, like BERT). Will be associated to self.mask_token and self.mask_token_id.
# additional_special_tokens (tuple or list of str or tokenizers.AddedToken, optional) – A tuple or a list of additional special tokens. Add them here to ensure they won’t be split by the tokenization process. Will be associated to self.additional_special_tokens and self.additional_special_tokens_ids.
encoded_data_train = tokenizer.batch_encode_plus(
    df[df.data_type=='train'].Title.values,
    add_special_tokens=True,
    return_attention_mask=True,
    padding='max_length',  # pad_to_max_length=True is deprecated
    max_length=512,
    truncation=True,
    return_tensors='pt'
)

encoded_data_val = tokenizer.batch_encode_plus(
    df[df.data_type=='val'].Title.values,
    add_special_tokens=True,
    return_attention_mask=True,
    padding='max_length',  # pad_to_max_length=True is deprecated
    max_length=512,
    truncation=True,
    return_tensors='pt'
)

input_ids_train = encoded_data_train['input_ids']
attention_masks_train = encoded_data_train['attention_mask']
labels_train = torch.tensor(df[df.data_type=='train'].label.values)

input_ids_val = encoded_data_val['input_ids']
attention_masks_val = encoded_data_val['attention_mask']
labels_val = torch.tensor(df[df.data_type=='val'].label.values)

dataset_train = TensorDataset(input_ids_train, attention_masks_train, labels_train)
dataset_val = TensorDataset(input_ids_val, attention_masks_val, labels_val)

model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                      num_labels=len(label_dict),
                                                      output_attentions=False,
                                                      output_hidden_states=False)

from torch.utils.data import DataLoader, RandomSampler, SequentialSampler

batch_size = 3

dataloader_train = DataLoader(dataset_train,
                              sampler=RandomSampler(dataset_train),
                              batch_size=batch_size)

dataloader_validation = DataLoader(dataset_val,
                                   sampler=SequentialSampler(dataset_val),
                                   batch_size=batch_size)

from transformers import AdamW, get_linear_schedule_with_warmup

# optimizers:
# AdamW
# params (Iterable[torch.nn.parameter.Parameter]) – Iterable of parameters to optimize or dictionaries defining parameter groups.
# lr (float, optional, defaults to 1e-3) – The learning rate to use.
# betas (Tuple[float,float], optional, defaults to (0.9, 0.999)) – Adam’s betas parameters (b1, b2).
# eps (float, optional, defaults to 1e-6) – Adam’s epsilon for numerical stability.
# weight_decay (float, optional, defaults to 0) – Decoupled weight decay to apply.
# correct_bias (bool, optional, defaults to True) – Whether or not to correct bias in Adam (for instance, in the BERT TF repository they use False).

# AdaFactor
# params (Iterable[torch.nn.parameter.Parameter]) – Iterable of parameters to optimize or dictionaries defining parameter groups.
# lr (float, optional) – The external learning rate.
# eps (Tuple[float, float], optional, defaults to (1e-30, 1e-3)) – Regularization constants for square gradient and parameter scale respectively.
# clip_threshold (float, optional, defaults to 1.0) – Threshold of root mean square of final gradient update.
# decay_rate (float, optional, defaults to -0.8) – Coefficient used to compute running averages of square.
# beta1 (float, optional) – Coefficient used for computing running averages of gradient.
# weight_decay (float, optional, defaults to 0) – Weight decay (L2 penalty).
# scale_parameter (bool, optional, defaults to True) – If True, learning rate is scaled by root mean square.
# relative_step (bool, optional, defaults to True) – If True, time-dependent learning rate is computed instead of external learning rate.
# warmup_init (bool, optional, defaults to False) – Time-dependent learning rate computation depends on whether warm-up initialization is being used.

optimizer = AdamW(model.parameters(),
                  lr=1e-5,
                  eps=1e-8)

epochs = 5

scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=0,
                                            num_training_steps=len(dataloader_train)*epochs)

from sklearn.metrics import f1_score

def f1_score_func(preds, labels):
    preds_flat = np.argmax(preds, axis=1).flatten()
    labels_flat = labels.flatten()
    return f1_score(labels_flat, preds_flat, average='weighted')

def accuracy_per_class(preds, labels):
    label_dict_inverse = {v: k for k, v in label_dict.items()}

    preds_flat = np.argmax(preds, axis=1).flatten()
    labels_flat = labels.flatten()

    for label in np.unique(labels_flat):
        y_preds = preds_flat[labels_flat==label]
        y_true = labels_flat[labels_flat==label]
        print(f'Class: {label_dict_inverse[label]}')
        print(f'Accuracy: {len(y_preds[y_preds==label])}/{len(y_true)}\n')

import random
import numpy as np

seed_val = 17
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)

def evaluate(dataloader_val):

    model.eval()

    loss_val_total = 0
    predictions, true_vals = [], []

    for batch in dataloader_val:

        batch = tuple(b.to(device) for b in batch)

        inputs = {'input_ids':      batch[0],
                  'attention_mask': batch[1],
                  'labels':         batch[2],
                 }

        with torch.no_grad():
            outputs = model(**inputs)

        loss = outputs[0]
        logits = outputs[1]
        loss_val_total += loss.item()

        logits = logits.detach().cpu().numpy()
        label_ids = inputs['labels'].cpu().numpy()
        predictions.append(logits)
        true_vals.append(label_ids)

    loss_val_avg = loss_val_total/len(dataloader_val)

    predictions = np.concatenate(predictions, axis=0)
    true_vals = np.concatenate(true_vals, axis=0)

    return loss_val_avg, predictions, true_vals

device = "cpu"
# cpu,
# cuda, for nvidia gpus
# mkldnn, not supported by pytorch
# opengl, not supported by pytorch
# opencl, not supported by pytorch
# ideep, not supported by pytorch
# hip, not supported by pytorch
# msnpu, not supported by pytorch
# xla, not supported by pytorch

for epoch in tqdm(range(1, epochs+1)):

    model.train()

    loss_train_total = 0

    progress_bar = tqdm(dataloader_train, desc='Epoch {:1d}'.format(epoch), leave=False, disable=False)
    for batch in progress_bar:

        model.zero_grad()

        batch = tuple(b.to(device) for b in batch)

        inputs = {'input_ids':      batch[0],
                  'attention_mask': batch[1],
                  'labels':         batch[2],
                 }

        outputs = model(**inputs)

        loss = outputs[0]
        loss_train_total += loss.item()
        loss.backward()

        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)

        optimizer.step()
        scheduler.step()

        progress_bar.set_postfix({'training_loss': '{:.3f}'.format(loss.item()/len(batch))})

    torch.save(model.state_dict(), f'modeling/finetuned_BERT_epoch_{epoch}.model')

    tqdm.write(f'\nEpoch {epoch}')

    loss_train_avg = loss_train_total/len(dataloader_train)
    tqdm.write(f'Training loss: {loss_train_avg}')

    val_loss, predictions, true_vals = evaluate(dataloader_validation)
    val_f1 = f1_score_func(predictions, true_vals)
    tqdm.write(f'Validation loss: {val_loss}')
    tqdm.write(f'F1 Score (Weighted): {val_f1}')
```
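As a side note on the metric reported above: the weighted F1 score averages the per-class F1 values, weighting each class by its support (number of true examples). A minimal sketch of that calculation with made-up per-class counts (not values from the run above):

```
def f1(tp, fp, fn):
    # Harmonic mean of precision and recall for one class
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Invented confusion counts: class A has support 8, class B has support 2
f1_a = f1(7, 1, 1)   # 0.875
f1_b = f1(1, 1, 1)   # 0.5

# Weighted average by support, as in f1_score(..., average='weighted')
weighted = (8 * f1_a + 2 * f1_b) / 10
print(weighted)  # 0.8
```

Because of the support weighting, a poor score on a small class drags the weighted F1 down much less than a poor score on a large class would.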
# Learning

This notebook serves as supporting material for topics covered in **Chapter 18 - Learning from Examples**, **Chapter 19 - Knowledge in Learning** and **Chapter 20 - Learning Probabilistic Models** from the book *Artificial Intelligence: A Modern Approach*. This notebook uses implementations from [learning.py](https://github.com/aimacode/aima-python/blob/master/learning.py). Let's start by importing everything from the module:

```
from learning import *
```

## Contents

* Machine Learning Overview
* Datasets
* Distance Functions
* Plurality Learner
* k-Nearest Neighbours
* Naive Bayes Learner
* Perceptron
* MNIST Handwritten Digits
* Loading and Visualising
* Testing
* kNN Classifier

## Machine Learning Overview

In this notebook, we learn about agents that can improve their behavior through diligent study of their own experiences. An agent is **learning** if it improves its performance on future tasks after making observations about the world.

There are three types of feedback that determine the three main types of learning:

* **Supervised Learning**: In Supervised Learning the agent observes some example input-output pairs and learns a function that maps from input to output.

  **Example**: Let's think of an agent to classify images containing cats or dogs. If we provide an image containing a cat or a dog, this agent should output a string "cat" or "dog" for that particular image. To teach this agent, we will give a lot of input-output pairs like {cat image-"cat"}, {dog image-"dog"} to the agent. The agent then learns a function that maps from an input image to one of those strings.

* **Unsupervised Learning**: In Unsupervised Learning the agent learns patterns in the input even though no explicit feedback is supplied. The most common type is **clustering**: detecting potentially useful clusters of input examples.

  **Example**: A taxi agent would develop a concept of *good traffic days* and *bad traffic days* without ever being given labeled examples.
* **Reinforcement Learning**: In Reinforcement Learning the agent learns from a series of reinforcements—rewards or punishments.

  **Example**: Let's talk about an agent to play the popular Atari game—[Pong](http://www.ponggame.org). We will reward a point for every correct move and deduct a point for every wrong move from the agent. Eventually, the agent will figure out which of its actions prior to the reinforcement were most responsible for it.

## Datasets

For the following tutorials we will use a range of datasets, to better showcase the strengths and weaknesses of the algorithms. The datasets are the following:

* [Fisher's Iris](https://github.com/aimacode/aima-data/blob/a21fc108f52ad551344e947b0eb97df82f8d2b2b/iris.csv): Each item represents a flower, with four measurements: the length and the width of the sepals and petals. Each item/flower is categorized into one of three species: Setosa, Versicolor and Virginica.

* [Zoo](https://github.com/aimacode/aima-data/blob/a21fc108f52ad551344e947b0eb97df82f8d2b2b/zoo.csv): The dataset holds different animals and their classification as "mammal", "fish", etc. The new animal we want to classify has the following measurements: 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 4, 1, 0, 1 (don't concern yourself with what the measurements mean).

To make using the datasets easier, we have written a class, `DataSet`, in `learning.py`. The tutorials found here make use of this class. Let's have a look at how it works before we get started with the algorithms.

### Intro

A lot of the datasets we will work with are .csv files (although other formats are supported too). We have a collection of sample datasets ready to use [on aima-data](https://github.com/aimacode/aima-data/tree/a21fc108f52ad551344e947b0eb97df82f8d2b2b). Two examples are the datasets mentioned above (*iris.csv* and *zoo.csv*). You can find plenty of datasets online, and a good repository of such datasets is the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets.html).
In such files, each line corresponds to one item/measurement. Each individual value in a line represents a *feature* and usually there is a value denoting the *class* of the item.

You can find the code for the dataset here:

```
%psource DataSet
```

### Class Attributes

* **examples**: Holds the items of the dataset. Each item is a list of values.
* **attrs**: The indexes of the features (by default in the range of [0,f), where *f* is the number of features). For example, `item[i]` returns the feature at index *i* of *item*.
* **attrnames**: An optional list with attribute names. For example, `item[s]`, where *s* is a feature name, returns the feature of name *s* in *item*.
* **target**: The attribute a learning algorithm will try to predict. By default the last attribute.
* **inputs**: This is the list of attributes without the target.
* **values**: A list of lists which holds the set of possible values for the corresponding attribute/feature. If initially `None`, it gets computed (by the function `setproblem`) from the examples.
* **distance**: The distance function used in the learner to calculate the distance between two items. By default `mean_boolean_error`.
* **name**: Name of the dataset.
* **source**: The source of the dataset (url or other). Not used in the code.
* **exclude**: A list of indexes to exclude from `inputs`. The list can include either attribute indexes (attrs) or names (attrnames).

### Class Helper Functions

These functions help modify a `DataSet` object to your needs.

* **sanitize**: Takes as input an example and returns it with non-input (target) attributes replaced by `None`. Useful for testing. Keep in mind that the example given is not itself sanitized, but instead a sanitized copy is returned.
* **classes_to_numbers**: Maps the class names of a dataset to numbers. If the class names are not given, they are computed from the dataset values. Useful for classifiers that return a numerical value instead of a string.
* **remove_examples**: Removes examples containing a given value. Useful for removing examples with missing values, or for removing classes (needed for binary classifiers).

### Importing a Dataset

#### Importing from aima-data

Datasets uploaded on aima-data can be imported with the following line:

```
iris = DataSet(name="iris")
```

To check that we imported the correct dataset, we can do the following:

```
print(iris.examples[0])
print(iris.inputs)
```

Which correctly prints the first line in the csv file and the list of attribute indexes.

When importing a dataset, we can specify to exclude an attribute (for example, at index 1) by setting the parameter `exclude` to the attribute index or name.

```
iris2 = DataSet(name="iris", exclude=[1])
print(iris2.inputs)
```

### Attributes

Here we showcase the attributes.

First we will print the first three items/examples in the dataset.

```
print(iris.examples[:3])
```

Then we will print `attrs`, `attrnames`, `target`, `input`. Notice how `attrs` holds values in [0,4], but since the fourth attribute is the target, `inputs` holds values in [0,3].

```
print("attrs:", iris.attrs)
print("attrnames (by default same as attrs):", iris.attrnames)
print("target:", iris.target)
print("inputs:", iris.inputs)
```

Now we will print all the possible values for the first feature/attribute.

```
print(iris.values[0])
```

Finally we will print the dataset's name and source. Keep in mind that we have not set a source for the dataset, so in this case it is empty.

```
print("name:", iris.name)
print("source:", iris.source)
```

A useful combination of the above is `dataset.values[dataset.target]` which returns the possible values of the target. For classification problems, this will return all the possible classes. Let's try it:

```
print(iris.values[iris.target])
```

### Helper Functions

We will now take a look at the auxiliary functions found in the class.
First we will take a look at the `sanitize` function, which sets the non-input values of the given example to `None`. In this case we want to hide the class of the first example, so we will sanitize it.

Note that the function doesn't actually change the given example; it returns a sanitized *copy* of it.

```
print("Sanitized:", iris.sanitize(iris.examples[0]))
print("Original:", iris.examples[0])
```

Currently the `iris` dataset has three classes, setosa, virginica and versicolor. We want though to convert it to a binary class dataset (a dataset with two classes). The class we want to remove is "virginica". To accomplish that we will utilize the helper function `remove_examples`.

```
iris2 = DataSet(name="iris")
iris2.remove_examples("virginica")
print(iris2.values[iris2.target])
```

We also have `classes_to_numbers`. For a lot of the classifiers in the module (like the Neural Network), classes should have numerical values. With this function we map string class names to numbers.

```
print("Class of first example:", iris2.examples[0][iris2.target])
iris2.classes_to_numbers()
print("Class of first example:", iris2.examples[0][iris2.target])
```

As you can see, "setosa" was mapped to 0.

Finally, we take a look at `find_means_and_deviations`. It finds the means and standard deviations of the features for each class.

```
means, deviations = iris.find_means_and_deviations()

print("Setosa feature means:", means["setosa"])
print("Versicolor mean for first feature:", means["versicolor"][0])

print("Setosa feature deviations:", deviations["setosa"])
print("Virginica deviation for second feature:", deviations["virginica"][1])
```

## Distance Functions

In a lot of algorithms (like the *k-Nearest Neighbors* algorithm), there is a need to compare items, finding how *similar* or *close* they are. For that we have many different functions at our disposal.
Below are the functions implemented in the module:

### Manhattan Distance (`manhattan_distance`)

One of the simplest distance functions. It calculates the difference between the coordinates/features of two items. To understand how it works, imagine a 2D grid with coordinates *x* and *y*. In that grid we have two items, at the squares positioned at `(1,2)` and `(3,4)`. The difference between their two coordinates is `3-1=2` and `4-2=2`. If we sum these up we get `4`. That means to get from `(1,2)` to `(3,4)` we need four moves; two to the right and two more up. The function works similarly for n-dimensional grids.

```
def manhattan_distance(X, Y):
    return sum([abs(x - y) for x, y in zip(X, Y)])

distance = manhattan_distance([1,2], [3,4])
print("Manhattan Distance between (1,2) and (3,4) is", distance)
```

### Euclidean Distance (`euclidean_distance`)

Probably the most popular distance function. It returns the square root of the sum of the squared differences between individual elements of two items.

```
def euclidean_distance(X, Y):
    return math.sqrt(sum([(x - y)**2 for x, y in zip(X, Y)]))

distance = euclidean_distance([1,2], [3,4])
print("Euclidean Distance between (1,2) and (3,4) is", distance)
```

### Hamming Distance (`hamming_distance`)

This function counts the number of differences between single elements in two items. For example, if we have two binary strings "111" and "011" the function will return 1, since the two strings only differ at the first element. The function works the same way for non-binary strings too.

```
def hamming_distance(X, Y):
    return sum(x != y for x, y in zip(X, Y))

distance = hamming_distance(['a','b','c'], ['a','b','b'])
print("Hamming Distance between 'abc' and 'abb' is", distance)
```

### Mean Boolean Error (`mean_boolean_error`)

To calculate this distance, we find the ratio of different elements over all elements of two items.
For example, if the two items are `(1,2,3)` and `(1,4,5)`, the ratio of different/all elements is 2/3, since they differ in two out of three elements.

```
def mean_boolean_error(X, Y):
    return mean(int(x != y) for x, y in zip(X, Y))

distance = mean_boolean_error([1,2,3], [1,4,5])
print("Mean Boolean Error Distance between (1,2,3) and (1,4,5) is", distance)
```

### Mean Error (`mean_error`)

This function finds the mean difference of single elements between two items. For example, if the two items are `(1,0,5)` and `(3,10,5)`, their error distance is `(3-1) + (10-0) + (5-5) = 2 + 10 + 0 = 12`. The mean error distance therefore is `12/3=4`.

```
def mean_error(X, Y):
    return mean([abs(x - y) for x, y in zip(X, Y)])

distance = mean_error([1,0,5], [3,10,5])
print("Mean Error Distance between (1,0,5) and (3,10,5) is", distance)
```

### Mean Square Error (`ms_error`)

This is very similar to the `Mean Error`, but instead of calculating the difference between elements, we are calculating the *square* of the differences.

```
def ms_error(X, Y):
    return mean([(x - y)**2 for x, y in zip(X, Y)])

distance = ms_error([1,0,5], [3,10,5])
print("Mean Square Distance between (1,0,5) and (3,10,5) is", distance)
```

### Root of Mean Square Error (`rms_error`)

This is the square root of `Mean Square Error`.

```
def rms_error(X, Y):
    return math.sqrt(ms_error(X, Y))

distance = rms_error([1,0,5], [3,10,5])
print("Root of Mean Error Distance between (1,0,5) and (3,10,5) is", distance)
```

## Plurality Learner Classifier

### Overview

The Plurality Learner is a simple algorithm, used mainly as a baseline comparison for other algorithms. It finds the most popular class in the dataset and classifies any subsequent item to that class. Essentially, it classifies every new item to the same class. For that reason, it is not used very often, instead opting for more complicated algorithms when we want accurate classification.
![pL plot](images/pluralityLearner_plot.png)

Let's see how the classifier works with the plot above. There are three classes: **Class A** (orange-colored dots), **Class B** (blue-colored dots) and **Class C** (green-colored dots). Every point in this plot has two **features** (i.e. X<sub>1</sub>, X<sub>2</sub>). Now, let's say we have a new point, a red star, and we want to know which class this red star belongs to. Solving this problem by predicting the class of this new red star is our current classification problem.

The Plurality Learner will find the class most represented in the plot. ***Class A*** has four items, ***Class B*** has three and ***Class C*** has seven. The most popular class is ***Class C***. Therefore, the item will get classified in ***Class C***, despite the fact that it is closer to the other two classes.

### Implementation

Below follows the implementation of the PluralityLearner algorithm:

```
def PluralityLearner(dataset):
    """A very dumb algorithm: always pick the result
    that was most popular in the training data. Makes
    a baseline for comparison."""
    most_popular = mode([e[dataset.target] for e in dataset.examples])

    def predict(example):
        "Always return same result: the most popular from the training set."
        return most_popular
    return predict
```

It takes as input a dataset and returns a function. We can later call this function with the item we want to classify as the argument and it returns the class it should be classified in.

The function first finds the most popular class in the dataset and then each time we call its "predict" function, it returns it. Note that the input ("example") does not matter. The function always returns the same class.

### Example

For this example, we will not use the Iris dataset, since each class is represented the same. This will throw an error. Instead we will use the zoo dataset.
```
zoo = DataSet(name="zoo")

pL = PluralityLearner(zoo)
print(pL([1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 4, 1, 0, 1]))
```

The output for the above code is "mammal", since that is the most popular and common class in the dataset.

## k-Nearest Neighbours (kNN) Classifier

### Overview

The k-Nearest Neighbors algorithm is a non-parametric method used for classification and regression. We are going to use this to classify Iris flowers. More about kNN on [Scholarpedia](http://www.scholarpedia.org/article/K-nearest_neighbor).

![kNN plot](images/knn_plot.png)

Let's see how kNN works with the simple plot shown in the above picture. We have coordinates (we call them **features** in Machine Learning) of this red star and we need to predict its class using the kNN algorithm. In this algorithm, the value of **k** is arbitrary. **k** is one of the **hyperparameters** of the kNN algorithm. We choose this number based on our dataset, and choosing a particular number is known as **hyperparameter tuning/optimising**. We will learn more about this in coming topics.

Let's put **k = 3**. This means we need to find the 3 nearest neighbors of the red star and classify this new point into the majority class. Observe the smaller circle, which contains three points other than the **test point** (the red star). As there are two violet points, which form the majority, we predict the class of the red star as **violet - Class B**.

Similarly, if we put **k = 5**, you can observe that there are four yellow points, which form the majority. So, we classify our test point as **yellow - Class A**.

In practical tasks, we iterate through a bunch of values for k (like [1, 3, 5, 10, 20, 50, 100]), see how it performs and select the best one.
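That k-selection loop can be sketched in plain Python. The toy 1-D training data, the held-out points and the simple `knn_predict` helper below are all invented for illustration; they are not part of `learning.py`:

```
from collections import Counter

def knn_predict(train, query, k):
    """Classify `query` by majority vote among the k nearest training points."""
    nearest = sorted(train, key=lambda item: abs(item[0] - query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Made-up 1-D data: two well-separated classes
train = [(1.0, 'A'), (1.2, 'A'), (1.4, 'A'), (5.0, 'B'), (5.2, 'B'), (5.4, 'B')]
held_out = [(1.1, 'A'), (1.3, 'A'), (5.1, 'B'), (5.3, 'B')]

# Score a few candidate values of k on the held-out set
for k in [1, 3, 5]:
    correct = sum(knn_predict(train, x, k) == y for x, y in held_out)
    print(k, correct / len(held_out))
```

On real data you would score each candidate k with cross-validation rather than a single small held-out set, and then keep the best-scoring value.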
### Implementation

Below follows the implementation of the kNN algorithm:

```
def NearestNeighborLearner(dataset, k=1):
    """k-NearestNeighbor: the k nearest neighbors vote."""
    def predict(example):
        """Find the k closest items, and have them vote for the best."""
        best = heapq.nsmallest(k, ((dataset.distance(e, example), e)
                                   for e in dataset.examples))
        return mode(e[dataset.target] for (d, e) in best)
    return predict
```

It takes as input a dataset and k (default value is 1) and it returns a function, which we can later use to classify a new item.

To accomplish that, the function uses a heap-queue, where the items of the dataset are sorted according to their distance from *example* (the item to classify). We then take the k smallest elements from the heap-queue and we find the majority class. We classify the item to this class.

### Example

We measured a new flower with the following values: 5.1, 3.0, 1.1, 0.1. We want to classify that item/flower in a class. To do that, we write the following:

```
iris = DataSet(name="iris")
kNN = NearestNeighborLearner(iris, k=3)
print(kNN([5.1, 3.0, 1.1, 0.1]))
```

The output of the above code is "setosa", which means the flower with the above measurements is of the "setosa" species.

## Naive Bayes Learner

### Overview

#### Theory of Probabilities

The Naive Bayes algorithm is a probabilistic classifier, making use of [Bayes' Theorem](https://en.wikipedia.org/wiki/Bayes%27_theorem). The theorem states that the conditional probability of **A** given **B** equals the conditional probability of **B** given **A** multiplied by the probability of **A**, divided by the probability of **B**.

$$P(A|B) = \dfrac{P(B|A)*P(A)}{P(B)}$$

From the theory of Probabilities we have the Multiplication Rule; if the events *X* are independent the following is true:

$$P(X_{1} \cap X_{2} \cap ...
\cap X_{n}) = P(X_{1})*P(X_{2})*...*P(X_{n})$$

For conditional probabilities this becomes:

$$P(X_{1}, X_{2}, ..., X_{n}|Y) = P(X_{1}|Y)*P(X_{2}|Y)*...*P(X_{n}|Y)$$

#### Classifying an Item

How can we use the above to classify an item though?

We have a dataset with a set of classes (**C**) and we want to classify an item with a set of features (**F**). Essentially what we want to do is predict the class of an item given the features.

For a specific class, **Class**, we will find the conditional probability given the item features:

$$P(Class|F) = \dfrac{P(F|Class)*P(Class)}{P(F)}$$

We will do this for every class and we will pick the maximum. This will be the class the item is classified in.

The features though are a vector with many elements. We need to break the probabilities up using the multiplication rule. Thus the above equation becomes:

$$P(Class|F) = \dfrac{P(Class)*P(F_{1}|Class)*P(F_{2}|Class)*...*P(F_{n}|Class)}{P(F_{1})*P(F_{2})*...*P(F_{n})}$$

The calculation of the conditional probability then depends on the calculation of the following:

*a)* The probability of **Class** in the dataset.

*b)* The conditional probability of each feature occurring in an item classified in **Class**.

*c)* The probabilities of each individual feature.

For *a)*, we will count how many times **Class** occurs in the dataset (aka how many items are classified in a particular class).

For *b)*, if the feature values are discrete ('Blue', '3', 'Tall', etc.), we will count how many times a feature value occurs in items of each class. If the feature values are not discrete, we will go a different route. We will use a distribution function to calculate the probability of values for a given class and feature. If we know the distribution function of the dataset, then great, we will use it to compute the probabilities. If we don't know the function, we can assume the dataset follows the normal (Gaussian) distribution without much loss of accuracy.
In fact, it can be proven that any distribution tends to the Gaussian the larger the population gets (see [Central Limit Theorem](https://en.wikipedia.org/wiki/Central_limit_theorem)).

*NOTE:* If the values are continuous but we use the discrete approach, there might be issues if we are not lucky. For one, if we have two values, '5.0' and '5.1', with the discrete approach they will be two completely different values, despite being so close. Second, if we are trying to classify an item with a feature value of '5.15' and the value does not appear for the feature, its probability will be 0. This might lead to misclassification. Generally, the continuous approach is more accurate and more useful, despite the overhead of calculating the distribution function.

The last one, *c)*, is tricky. If feature values are discrete, we can count how many times they occur in the dataset. But what if the feature values are continuous? Imagine a dataset with a height feature. Is it worth it to count how many times each value occurs? Most of the time it is not, since there can be miscellaneous differences in the values (for example, 1.7 meters and 1.700001 meters are practically equal, but they count as different values).

So, as we cannot calculate the feature value probabilities, what are we going to do?

Let's take a step back and rethink exactly what we are doing. We are essentially comparing conditional probabilities of all the classes. For two classes, **A** and **B**, we want to know which one is greater:

$$\dfrac{P(F|A)*P(A)}{P(F)} vs. \dfrac{P(F|B)*P(B)}{P(F)}$$

Wait, **P(F)** is the same for both classes! In fact, it is the same for every combination of classes. That is because **P(F)** does not depend on a class, thus being independent of the classes.

So, for *c)*, we actually don't need to calculate it at all.

#### Wrapping It Up

Classifying an item to a class then becomes a matter of calculating the conditional probabilities of feature values and the probabilities of classes.
This is both conceptually simple and computationally cheap. Remember, though, that all of the above holds because we assumed the features are independent. In most real-world cases that is not true. Is that an issue here? Fret not, for the algorithm performs well even under that assumption, which is why it is called the **Naive** Bayes Classifier: we (naively) assume that the features are independent to make the computations easier.

### Implementation

The implementation of the Naive Bayes Classifier is split in two: Discrete and Continuous. The user can choose between them with the argument `continuous`.

#### Discrete

The implementation for discrete values counts how many times each feature value occurs for each class, and how many times each class occurs. The results are stored in a `CountingProbDist` object. With the code below you can see the probability of the class "Setosa" appearing in the dataset and the probability of the first feature (at index 0) of the same class having a value of 5. Notice that the second probability is relatively small, even though if we inspect the dataset we will find that a lot of values are around 5. The issue arises because the features in the Iris dataset are continuous, and we are assuming they are discrete. If the features were truly discrete (for example, "Tall", "3", etc.) this probably wouldn't have been the case and we would see a much nicer probability distribution.
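Before looking at the actual code, here is a rough sketch of what such a counting distribution does. This is a simplified stand-in, not the real `CountingProbDist` (which, as far as I know, also applies smoothing so unseen values do not get probability exactly 0):

```
from collections import Counter

class SimpleCountingDist:
    """Simplified counting distribution: P(value) = count / total.
    Unlike the library version, unseen values get probability 0."""
    def __init__(self, observations=()):
        self.counts = Counter(observations)
        self.total = sum(self.counts.values())

    def add(self, value):
        """Record one more observation of `value`."""
        self.counts[value] += 1
        self.total += 1

    def __getitem__(self, value):
        return self.counts[value] / self.total if self.total else 0.0

d = SimpleCountingDist(['setosa', 'setosa', 'versicolor', 'virginica'])
print(d['setosa'])  # 0.5
```

Adding more observations with `add` updates the estimate, which is exactly how the implementation below accumulates counts while iterating over the dataset.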
```
dataset = iris

target_vals = dataset.values[dataset.target]
target_dist = CountingProbDist(target_vals)
attr_dists = {(gv, attr): CountingProbDist(dataset.values[attr])
              for gv in target_vals
              for attr in dataset.inputs}

for example in dataset.examples:
    targetval = example[dataset.target]
    target_dist.add(targetval)
    for attr in dataset.inputs:
        attr_dists[targetval, attr].add(example[attr])

print(target_dist['setosa'])
print(attr_dists['setosa', 0][5.0])
```

First we found the different values for the classes (called targets here) and calculated their distribution. Next we initialized a dictionary of `CountingProbDist` objects, one for each class and feature. Finally, we iterated through the examples in the dataset and calculated the needed probabilities.

Having calculated the different probabilities, we can move on to the predicting function. It receives an item as input and outputs the most likely class. Using the above formula, it multiplies the probability of the class appearing by the probability of each feature value appearing for that class, and returns the class with the maximum result.

```
def predict(example):
    def class_probability(targetval):
        return (target_dist[targetval] *
                product(attr_dists[targetval, attr][example[attr]]
                        for attr in dataset.inputs))

    return argmax(target_vals, key=class_probability)

print(predict([5, 3, 1, 0.1]))
```

You can view the complete code by executing the next line:

```
%psource NaiveBayesDiscrete
```

#### Continuous

In this implementation we use the Gaussian/Normal distribution function. To make it work, we need to find the means and standard deviations of the features for each class. We make use of the `find_means_and_deviations` Dataset function. On top of that, we will also calculate the class probabilities as we did with the Discrete approach.
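For reference, the Gaussian density that the continuous variant relies on can be written out directly. This is a sketch of the standard formula, assumed to match what the `gaussian` helper in the library computes:

```
import math

def gaussian_pdf(mean, st_dev, x):
    """Density of the normal distribution N(mean, st_dev**2) at x."""
    return (1.0 / (st_dev * math.sqrt(2 * math.pi))) * \
           math.exp(-((x - mean) ** 2) / (2 * st_dev ** 2))

# At the mean, the density peaks at 1 / (st_dev * sqrt(2*pi)).
print(round(gaussian_pdf(0.0, 1.0, 0.0), 4))  # 0.3989
```

Note that a density can exceed 1 for small standard deviations; since we only compare products of these values across classes, that is not a problem.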
```
means, deviations = dataset.find_means_and_deviations()

target_vals = dataset.values[dataset.target]
target_dist = CountingProbDist(target_vals)

print(means["setosa"])
print(deviations["versicolor"])
```

You can see the means of the features for the "Setosa" class and the deviations for "Versicolor". The prediction function works similarly to the discrete algorithm. It multiplies the probability of the class occurring by the conditional probabilities of the feature values for that class. Since we are using the Gaussian distribution, we feed each feature value into the Gaussian function, together with the mean and deviation of that feature. This returns the probability of the particular feature value for the given class. We repeat for each class and pick the maximum value.

```
def predict(example):
    def class_probability(targetval):
        prob = target_dist[targetval]
        for attr in dataset.inputs:
            prob *= gaussian(means[targetval][attr],
                             deviations[targetval][attr],
                             example[attr])
        return prob

    return argmax(target_vals, key=class_probability)

print(predict([5, 3, 1, 0.1]))
```

The complete code of the continuous algorithm:

```
%psource NaiveBayesContinuous
```

### Examples

We will now use the Naive Bayes Classifier (Discrete and Continuous) to classify items:

```
nBD = NaiveBayesLearner(iris, continuous=False)
print("Discrete Classifier")
print(nBD([5, 3, 1, 0.1]))
print(nBD([6, 5, 3, 1.5]))
print(nBD([7, 3, 6.5, 2]))

nBC = NaiveBayesLearner(iris, continuous=True)
print("\nContinuous Classifier")
print(nBC([5, 3, 1, 0.1]))
print(nBC([6, 5, 3, 1.5]))
print(nBC([7, 3, 6.5, 2]))
```

Notice how the Discrete Classifier misclassified the second item, while the Continuous one had no problem.

## Perceptron Classifier

### Overview

The Perceptron is a linear classifier. It works the same way as a neural network with no hidden layers (just input and output).
First it trains its weights on a dataset, and then it can classify a new item by running it through the network. Its input layer consists of the item features, while the output layer consists of nodes (also called neurons). Each node in the output layer has *n* synapses (one for each item feature), each with its own weight. The nodes compute the dot product of the item features and the synapse weights, and these values then pass through an activation function (usually a sigmoid). Finally, we pick the largest of the values and return its index. Note that in classification problems each node represents a class, so the final classification is the class/node with the maximum output value.

Below you can see a single node/neuron in the outer layer. With *f* we denote the item features, with *w* the synapse weights; inside the node we have the dot product and the activation function, *g*.

![perceptron](images/perceptron.png)

### Implementation

First, we train (calculate) the weights on a dataset, using the `BackPropagationLearner` function of `learning.py`. We then return a function, `predict`, which we will later use to classify a new item. The function computes the (algebraic) dot product of the item with the calculated weights for each node in the outer layer, then picks the greatest value and classifies the item in the corresponding class.

```
%psource PerceptronLearner
```

Note that the Perceptron is a one-layer neural network, without any hidden layers. So, in `BackPropagationLearner`, we pass no hidden layers. From that function we get our network, which is just one layer, with the weights already calculated. The `predict` function passes the input/example through the network, calculating the dot product of the input and the weights for each node, and returns the class with the maximum dot product.

### Example

We will train the Perceptron on the iris dataset.
However, because `BackPropagationLearner` works with integer indexes rather than strings, we first need to convert the class names to integers. Then, we will try to classify the item/flower with measurements of 5, 3, 1, 0.1.

```
iris = DataSet(name="iris")
iris.classes_to_numbers()

perceptron = PerceptronLearner(iris)
print(perceptron([5, 3, 1, 0.1]))
```

The output is 0, which means the item is classified in the first class, "Setosa". This is indeed correct. Note that the Perceptron algorithm is not perfect and may produce false classifications.

## MNIST Handwritten Digits Classification

The MNIST database, available from [this page](http://yann.lecun.com/exdb/mnist/), is a large database of handwritten digits that is commonly used for training and testing/validation in machine learning. The dataset has **60,000 training images** and **10,000 testing images**, each of size 28x28 pixels, with labels. In this section, we will use this database to compare the performance of different learning algorithms. It is estimated that humans have an error rate of about **0.2%** on this problem. Let's see how our algorithms perform!

NOTE: We will be using external libraries to load and visualize the dataset smoothly ([numpy](http://www.numpy.org/) for loading and [matplotlib](http://matplotlib.org/) for visualization). You do not need previous experience with these libraries to follow along.

### Loading MNIST digits data

Let's start by loading the MNIST data into numpy arrays.
```
import os, struct
import array
import numpy as np
import matplotlib.pyplot as plt
from collections import Counter

%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

def load_MNIST(path="aima-data/MNIST"):
    "helper function to load MNIST data"
    train_img_file = open(os.path.join(path, "train-images-idx3-ubyte"), "rb")
    train_lbl_file = open(os.path.join(path, "train-labels-idx1-ubyte"), "rb")
    test_img_file = open(os.path.join(path, "t10k-images-idx3-ubyte"), "rb")
    test_lbl_file = open(os.path.join(path, "t10k-labels-idx1-ubyte"), "rb")

    magic_nr, tr_size, tr_rows, tr_cols = struct.unpack(">IIII", train_img_file.read(16))
    tr_img = array.array("B", train_img_file.read())
    train_img_file.close()
    magic_nr, tr_size = struct.unpack(">II", train_lbl_file.read(8))
    tr_lbl = array.array("b", train_lbl_file.read())
    train_lbl_file.close()

    magic_nr, te_size, te_rows, te_cols = struct.unpack(">IIII", test_img_file.read(16))
    te_img = array.array("B", test_img_file.read())
    test_img_file.close()
    magic_nr, te_size = struct.unpack(">II", test_lbl_file.read(8))
    te_lbl = array.array("b", test_lbl_file.read())
    test_lbl_file.close()

    # print(len(tr_img), len(tr_lbl), tr_size)
    # print(len(te_img), len(te_lbl), te_size)

    train_img = np.zeros((tr_size, tr_rows*tr_cols), dtype=np.int16)
    train_lbl = np.zeros((tr_size,), dtype=np.int8)
    for i in range(tr_size):
        train_img[i] = np.array(tr_img[i*tr_rows*tr_cols : (i+1)*tr_rows*tr_cols]).reshape((tr_rows*tr_cols))
        train_lbl[i] = tr_lbl[i]

    test_img = np.zeros((te_size, te_rows*te_cols), dtype=np.int16)
    test_lbl = np.zeros((te_size,), dtype=np.int8)
    for i in range(te_size):
        test_img[i] = np.array(te_img[i*te_rows*te_cols : (i+1)*te_rows*te_cols]).reshape((te_rows*te_cols))
        test_lbl[i] = te_lbl[i]

    return(train_img, train_lbl, test_img, test_lbl)
```

The function `load_MNIST()` loads MNIST data from files saved in `aima-data/MNIST`.
It returns four numpy arrays that we are going to use to train and classify hand-written digits in various learning approaches.

```
train_img, train_lbl, test_img, test_lbl = load_MNIST()
```

Check the shape of these NumPy arrays to make sure we have loaded the database correctly. Each 28x28 pixel image is flattened to a 784x1 array, and we should have 60,000 of them in the training data. Similarly, we should have 10,000 of those 784x1 arrays in the testing data.

```
print("Training images size:", train_img.shape)
print("Training labels size:", train_lbl.shape)
print("Testing images size:", test_img.shape)
print("Testing labels size:", test_lbl.shape)
```

### Visualizing MNIST digits data

To get a better understanding of the dataset, let's visualize some random images for each class from the training and testing datasets.

```
classes = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]
num_classes = len(classes)

def show_MNIST(dataset, samples=8):
    if dataset == "training":
        labels = train_lbl
        images = train_img
    elif dataset == "testing":
        labels = test_lbl
        images = test_img
    else:
        raise ValueError("dataset must be 'testing' or 'training'!")

    for y, cls in enumerate(classes):
        idxs = np.nonzero([i == y for i in labels])
        idxs = np.random.choice(idxs[0], samples, replace=False)
        for i, idx in enumerate(idxs):
            plt_idx = i * num_classes + y + 1
            plt.subplot(samples, num_classes, plt_idx)
            plt.imshow(images[idx].reshape((28, 28)))
            plt.axis("off")
            if i == 0:
                plt.title(cls)
    plt.show()

# takes 5-10 seconds to execute this
show_MNIST("training")

# takes 5-10 seconds to execute this
show_MNIST("testing")
```

Let's have a look at the average of all the images of the training and testing data.
```
classes = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]
num_classes = len(classes)

def show_ave_MNIST(dataset):
    if dataset == "training":
        print("Average of all images in training dataset.")
        labels = train_lbl
        images = train_img
    elif dataset == "testing":
        print("Average of all images in testing dataset.")
        labels = test_lbl
        images = test_img
    else:
        raise ValueError("dataset must be 'testing' or 'training'!")

    for y, cls in enumerate(classes):
        idxs = np.nonzero([i == y for i in labels])
        print("Digit", y, ":", len(idxs[0]), "images.")
        ave_img = np.mean(np.vstack([images[i] for i in idxs[0]]), axis=0)
        # print(ave_img.shape)
        plt.subplot(1, num_classes, y+1)
        plt.imshow(ave_img.reshape((28, 28)))
        plt.axis("off")
        plt.title(cls)
    plt.show()

show_ave_MNIST("training")
show_ave_MNIST("testing")
```

## Testing

Now, let us convert this raw data into `DataSet.examples` to run the algorithms defined in `learning.py`. Every image is represented by 784 numbers (28x28 pixels), and we append its label or class so the examples work with the implementations in our learning module.

```
print(train_img.shape, train_lbl.shape)

temp_train_lbl = train_lbl.reshape((60000, 1))
training_examples = np.hstack((train_img, temp_train_lbl))
print(training_examples.shape)
```

Now, we will initialize a DataSet with our training examples, so we can use it in our algorithms.

```
# takes ~8 seconds to execute this
MNIST_DataSet = DataSet(examples=training_examples, distance=manhattan_distance)
```

Moving forward, we can use `MNIST_DataSet` to test our algorithms.

### k-Nearest Neighbors

We will now try to classify a random image from the dataset using the kNN classifier. First, we choose an index from 0 to 9999 (here 211) and predict the class of that test image.

```
from learning import NearestNeighborLearner

# takes ~20 seconds to execute this
kNN = NearestNeighborLearner(MNIST_DataSet, k=3)
print(kNN(test_img[211]))
```

To make sure the output we got is correct, let's plot that image along with its label.

```
print("Actual class of test image:", test_lbl[211])
plt.imshow(test_img[211].reshape((28, 28)))
```

Hurray! We got it correct. Don't worry if the algorithm predicted a wrong class: with this technique we only get about 97% accuracy on this dataset. Let's try a different test image and hope we get it right this time. You might have noticed that the algorithm took about 20 seconds to predict a single image. How would we ever predict all 10,000 test images? The implementations in our learning module are not optimized to run on this particular dataset, as they are written with readability in mind rather than efficiency.
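To get a feel for what an optimized version could look like, here is a small vectorized nearest-neighbour sketch in plain numpy. This is illustrative only, with made-up data, and is not the learning-module implementation:

```
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (Manhattan distance), computed in one vectorized pass."""
    dists = np.abs(train_X - x).sum(axis=1)   # Manhattan distances to all points
    nearest = np.argsort(dists)[:k]           # indices of the k closest points
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy data: two well-separated classes.
X = np.array([[0, 0], [0, 1], [5, 5], [6, 5]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([1, 0])))  # 0
```

Computing all distances as one array operation is the kind of change that turns a 20-second per-image loop into something far faster, at the cost of the didactic clarity the module aims for.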
---
``` %load_ext autoreload %autoreload 2 import yaml import time from pathlib import Path import matplotlib as mpl import seaborn as sns import numpy as np plt = mpl.pyplot import sys sys.path.append("../") import ethologger.utils.auxiliary as aux plt.style.use('ggplot') SMALL_SIZE = 16 MEDIUM_SIZE = 18 BIGGER_SIZE = 22 GIGANTIC_SIZE = 16 plt.rc('font', size=SMALL_SIZE) # controls default text sizes plt.rc('axes', titlesize=BIGGER_SIZE) # fontsize of the axes title plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels plt.rc('legend', fontsize=MEDIUM_SIZE) # legend fontsize plt.rc('figure', titlesize=GIGANTIC_SIZE) # fontsize of the figure title FPS = 30 main_cfg_path = "/home/bo/tmp/presentation-project-attempt-SD/main_cfg.yaml" with open(main_cfg_path, "r") as stream: try: main_cfg = yaml.safe_load(stream) except yaml.YAMLError as exc: raise exc sleep_deprived_expt_names = ["FlyF19-11152020170647SD", "FlyF23-11192020162303SD"] #, "FlyF26-11232020173416SD"] old_expt_names = ["FlyF1-03082020164520", "FlyF2-03092020175006", "FlyM4-03062020153616"] #, "FlyF5-03072020150452"] expt_names = list(main_cfg["experiment_data_paths"].keys()) cluster_labels_dict = {} for expt_name in expt_names: cluster_lbls = np.load(Path(main_cfg["project_path"]) / expt_name / "cluster_labels.npy") if expt_name in old_expt_names: cluster_labels_dict[expt_name] = cluster_lbls[2*FPS*3600:] else: cluster_labels_dict[expt_name] = cluster_lbls new_expt_names = [expt_name for expt_name in expt_names if expt_name not in old_expt_names and expt_name not in sleep_deprived_expt_names] # Cluster you're interested in. 
cluster_lbls = [6, 7]
lbl_name = "Haltere-switch"

cluster_occurence_dict = {}
cluster_occupation_dict = {}
for expt_name in expt_names:
    cluster_labels = cluster_labels_dict[expt_name]
    mask_lbl = np.full(cluster_labels.shape[0], False)
    for lbl in cluster_lbls:
        mask_lbl[cluster_labels == lbl] = True
    lbl_frame_idx = np.nonzero(mask_lbl)[0]
    cluster_occurence_dict[expt_name] = lbl_frame_idx
    cluster_occupation_dict[expt_name] = len(lbl_frame_idx)

fly_expt_names = list(cluster_occupation_dict.keys())
cluster_occupation_durations = np.array(list(cluster_occupation_dict.values()), dtype=object) / FPS

fig, ax = plt.subplots(figsize=(10, 6))
sns.barplot(x=cluster_occupation_durations, y=fly_expt_names, orient="h", palette="tab20", ax=ax)
formatter = mpl.ticker.FuncFormatter(lambda sec, x: time.strftime("%M", time.gmtime(sec)))
ax.xaxis.set_major_formatter(formatter)
ax.set_title(f"{lbl_name} Total Time Spent", fontsize=15)
ax.set_xlabel("Time Spent (minute)")
ax.set_axisbelow(True)
ax.grid(alpha=1)

fly_expt_names = list(cluster_occupation_dict.keys())
cluster_occurence_times = np.array(list(cluster_occurence_dict.values()), dtype=object) / FPS

fig, ax = plt.subplots(figsize=(15, 10))
ax = sns.boxplot(data=cluster_occurence_times, orient="h", showfliers=False, palette="tab20", ax=ax)
ax.set_yticklabels(fly_expt_names)
formatter = mpl.ticker.FuncFormatter(lambda sec, x: time.strftime("%H:%M", time.gmtime(sec)))
ax.xaxis.set_major_formatter(formatter)
ax.set_title(f"{lbl_name} Occurrence Times")
ax.set_axisbelow(True)
ax.grid(alpha=1)
loc = mpl.ticker.MultipleLocator(base=2*3600)  # this locator puts ticks at regular intervals
ax.xaxis.set_major_locator(loc)

selected_expts = ["FlyF1-03082020164520", "FlyM4-03062020153616", "FlyF14-08172021175459"]
extended_selected_expts = selected_expts + ["FlyF15-08182021173222", ]  # "FlyM8-08112021174107"]#, "FlyM9-08122021171549"]
expt_names = selected_expts

cluster_occurence_dict = {expt_name: cluster_occurence_dict[expt_name] for expt_name in
expt_names}
num_expt = len(cluster_occurence_dict)
palette = sns.color_palette("tab10", num_expt)
width = 3/4

fig, ax = plt.subplots(figsize=(16, 10))
for i, (expt_name, occurences) in enumerate(cluster_occurence_dict.items()):
    ax.scatter(
        [occ for occ in occurences],
        [num_expt - i + width / 2 for occ in occurences],
        facecolors=palette[i],
        s=5,
    )
ax.set_yticks([num_expt - i + width / 2 for i in range(num_expt)])
ax.set_yticklabels(expt_names)
ax.set_title(f"Ethogram of {lbl_name} Events")
formatter = mpl.ticker.FuncFormatter(lambda sec, x: time.strftime("%H:%M", time.gmtime(sec/30)))
ax.xaxis.set_major_formatter(formatter)
ax.set_axisbelow(True)
ax.grid(alpha=1)
loc = mpl.ticker.MultipleLocator(base=2*FPS*3600)  # this locator puts ticks at regular intervals
ax.xaxis.set_major_locator(loc)

fig, ax = plt.subplots(figsize=(16, 10))
for i, (expt_name, occurences) in enumerate(cluster_occurence_dict.items()):
    frame_labels = np.zeros(cluster_labels_dict[expt_name].shape[0], dtype=int)
    frame_labels[occurences] = 1
    change_points = aux.cont_intvls(frame_labels)

    ranges_ = []
    for j in range(1, change_points.shape[0]):
        label = frame_labels[change_points[j - 1]]
        if label == 1:
            diff_time = change_points[j] - change_points[j - 1]
            x_range = max(diff_time, 2500)
            ranges_.append((change_points[j - 1], x_range))

    ax.broken_barh(
        ranges_,
        (num_expt - i, width),
        facecolors=palette[i],
    )

ax.set_ylim(+width, num_expt + width)
ax.set_xlim(0)
ax.set_yticks([num_expt - i + width/2 for i in range(num_expt)])
ax.set_yticklabels(expt_names)
formatter = mpl.ticker.FuncFormatter(lambda sec, x: time.strftime("%H:%M", time.gmtime(sec/30)))
ax.xaxis.set_major_formatter(formatter)
ax.set_axisbelow(True)
ax.set_title(f"Ethogram of {lbl_name} Events")
ax.grid(alpha=1)
loc = mpl.ticker.MultipleLocator(base=2*FPS*3600)  # this locator puts ticks at regular intervals
ax.xaxis.set_major_locator(loc)
fig.savefig("haltere_switch_selected.svg", bbox_inches='tight') ```
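The `aux.cont_intvls` helper used above is project-specific; from the way its result is consumed, it appears to return the change points of a label sequence. A minimal equivalent in plain numpy (an assumption about the helper's behaviour, not its actual code) could look like:

```
import numpy as np

def change_points(labels):
    """Indices where a 1-D label sequence changes value, including the
    start and end, so consecutive pairs delimit constant runs."""
    labels = np.asarray(labels)
    interior = np.flatnonzero(labels[1:] != labels[:-1]) + 1
    return np.concatenate(([0], interior, [labels.size]))

lbls = np.array([0, 0, 1, 1, 1, 0])
print(change_points(lbls))  # [0 2 5 6] -> runs [0,2), [2,5), [5,6)
```

Pairing consecutive change points and checking the label at the left endpoint is exactly how the ethogram code above builds its `ranges_` for `broken_barh`.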
---
``` import sys sys.path.append("..") import pandas as pd import numpy as np import json import seaborn as sns sns.set(style="darkgrid") import matplotlib.pyplot as plt #from project import data_preprocessing from project.model import models from sklearn.preprocessing import MinMaxScaler from sklearn.model_selection import KFold from sklearn.model_selection import train_test_split from sklearn.model_selection import GridSearchCV %matplotlib inline from project import config ``` #### Ready to feed the ANN ``` xvals = [90, 180, 365, 730, 1095, 1460, 1825, 2190, 2555, 2920, 3285, 3650, 4015, 4380] def loadData(trainingPercentage = 0.8): path_to_cleaned_data_SX5E = config.path_to_cleaned_data_SX5E path_to_cleaned_data_NKY = config.path_to_cleaned_data_NKY path_to_cleaned_data_SPX = config.path_to_cleaned_data_SPX path_to_cleaned_data_UKX = config.path_to_cleaned_data_UKX with open(path_to_cleaned_data_SX5E) as json_file: dictionary_SX5E = json.load(json_file) with open(path_to_cleaned_data_NKY) as json_file: dictionary_NKY = json.load(json_file) with open(path_to_cleaned_data_SPX) as json_file: dictionary_SPX = json.load(json_file) with open(path_to_cleaned_data_UKX) as json_file: dictionary_UKX = json.load(json_file) # Interpolating the data input_vector_SX5E = [] input_vector_NKY = [] input_vector_SPX = [] input_vector_UKX = [] input_vector_SX5E_orig = [] input_vector_NKY_orig = [] input_vector_SPX_orig = [] input_vector_UKX_orig = [] for key in dictionary_SX5E.keys(): x = dictionary_SX5E[key][0] y = dictionary_SX5E[key][1] yinterp = np.interp(xvals, x, y) #computing new interpolated values dictionary_SX5E[key][0] = xvals dictionary_SX5E[key][1] = yinterp input_vector_SX5E.append(yinterp.tolist()) input_vector_SX5E_orig.append(pd.Series(y,index=x)) for key in dictionary_NKY.keys(): x = dictionary_NKY[key][0] y = dictionary_NKY[key][1] yinterp = np.interp(xvals, x, y) #computing new interpolated values dictionary_NKY[key][0] = xvals dictionary_NKY[key][1] = yinterp 
input_vector_NKY.append(yinterp.tolist()) input_vector_NKY_orig.append(pd.Series(y,index=x)) for key in dictionary_SPX.keys(): x = dictionary_SPX[key][0] y = dictionary_SPX[key][1] yinterp = np.interp(xvals, x, y) #computing new interpolated values dictionary_SPX[key][0] = xvals dictionary_SPX[key][1] = yinterp input_vector_SPX.append(yinterp.tolist()) input_vector_SPX_orig.append(pd.Series(y,index=x)) for key in dictionary_UKX.keys(): x = dictionary_UKX[key][0] y = dictionary_UKX[key][1] yinterp = np.interp(xvals, x, y) #computing new interpolated values dictionary_UKX[key][0] = xvals dictionary_UKX[key][1] = yinterp input_vector_UKX.append(yinterp.tolist()) input_vector_UKX_orig.append(pd.Series(y,index=x)) # Computing the input_training_set and input_validation_set stop_NKY = int(len(input_vector_NKY)*trainingPercentage) stop_SPX = int(len(input_vector_SPX)*trainingPercentage) stop_SX5E = int(len(input_vector_SX5E)*trainingPercentage) stop_UKX= int(len(input_vector_UKX)*trainingPercentage) input_training_set_NKY = input_vector_NKY[:stop_NKY] input_validation_set_NKY = input_vector_NKY[stop_NKY:] input_training_set_SPX = input_vector_SPX[:stop_SPX] input_validation_set_SPX = input_vector_SPX[stop_SPX:] input_training_set_SX5E = input_vector_SX5E[:stop_SX5E] input_validation_set_SX5E = input_vector_SX5E[stop_SX5E:] input_training_set_UKX = input_vector_UKX[:stop_UKX] input_validation_set_UKX = input_vector_UKX[stop_UKX:] input_training_set_NKY_orig = input_vector_NKY_orig[:stop_NKY] input_validation_set_NKY_orig = input_vector_NKY_orig[stop_NKY:] input_training_set_SPX_orig = input_vector_SPX_orig[:stop_SPX] input_validation_set_SPX_orig = input_vector_SPX_orig[stop_SPX:] input_training_set_SX5E_orig = input_vector_SX5E_orig[:stop_SX5E] input_validation_set_SX5E_orig = input_vector_SX5E_orig[stop_SX5E:] input_training_set_UKX_orig = input_vector_UKX_orig[:stop_UKX] input_validation_set_UKX_orig = input_vector_UKX_orig[stop_UKX:] input_training_set = 
np.vstack((input_training_set_NKY, input_training_set_SPX, input_training_set_SX5E, input_training_set_UKX)) input_validation_set = (input_validation_set_NKY + input_validation_set_SPX + input_validation_set_SX5E + input_validation_set_UKX) input_training_set_orig = (input_training_set_NKY_orig + input_training_set_SPX_orig + input_training_set_SX5E_orig + input_training_set_UKX_orig) input_validation_set_orig = (input_validation_set_NKY_orig + input_validation_set_SPX_orig + input_validation_set_SX5E_orig + input_validation_set_UKX_orig) return input_training_set, input_validation_set, input_training_set_orig, input_validation_set_orig #input_training_set, input_testing_set = data_preprocessing.preprocessed_data_autoencoder_repo(0.8) input_training_set, input_testing_set, input_training_set_orig, input_testing_set_orig = loadData(0.8) scaler_train = MinMaxScaler() df_train = pd.DataFrame(input_training_set).T scaler_train.fit(df_train) df_train = pd.DataFrame(scaler_train.transform(df_train)).T input_training_set_normalized = np.asarray(df_train) from sklearn.preprocessing import PolynomialFeatures kf = KFold(n_splits=3, shuffle=True, random_state=42) def MSE(y, ypred): size = y.size return np.sum((y-ypred)**2)/size input_dim = input_training_set.shape[1] autoencoder, encoder, decoder = models.autoencoder_model_repo(input_dim,l1_=0.0001) x_train = input_training_set_normalized y_train = input_training_set_normalized e = 500 b = 50 losses = [] val_losses = [] for train_index, val_index in kf.split(x_train, y_train): x = x_train[train_index] y = y_train[train_index] hist = autoencoder.fit(x, y, epochs = e, batch_size = b, validation_data=(x_train[val_index], y_train[val_index])) losses.extend(hist.history['loss']) val_losses.extend(hist.history['val_loss']) autoencoder.summary() autoencoder.predict(np.ones((10,14))) epochs = range(1500) plt.plot(epochs, losses) plt.plot(epochs, val_losses) plt.title("CV Losses and Val_loss for Forwards") plt.xlabel("epochs") 
plt.ylabel("loss") ``` ### Prediction ``` scaler_test = MinMaxScaler() df_test = pd.DataFrame(input_testing_set).T scaler_test.fit(df_test) df_test = pd.DataFrame(scaler_test.transform(df_test)).T input_testing_set_normalized = np.asarray(df_test) predicted_set_CV_normalized = autoencoder.predict(input_testing_set_normalized) predicted_set_CV = np.asarray(pd.DataFrame(scaler_test.inverse_transform(pd.DataFrame(predicted_set_CV_normalized).T)).T) import seaborn as sns MSE_CV = np.mean((predicted_set_CV-input_testing_set)**2, axis = 1) sns.distplot(MSE_CV, bins = 70) plt.xlim(0,np.max(MSE_CV)) plt.title('Cross Testing MSE for Repo') autoencoder_json_CV = autoencoder.to_json() encoder_json_CV = encoder.to_json() decoder_json_CV = decoder.to_json() with open("../project/model/saved_model/autoencoder_CV.json", "w") as json_file: json.dump(autoencoder_json_CV, json_file) with open("../project/model/saved_model/encoder_CV.json", "w") as json_file: json.dump(encoder_json_CV, json_file) with open("../project/model/saved_model/decoder_CV.json", "w") as json_file: json.dump(decoder_json_CV, json_file) autoencoder.save('../project/model/saved_model/autoencoder_CV.h5') encoder.save('../project/model/saved_model/encoder_CV.h5') decoder.save('../project/model/saved_model/decoder_CV.h5') input_testing_set_normalized.shape predicted_set_CV_normalized.shape plt.plot(predicted_set_CV_normalized[0]) plt.plot(input_testing_set_normalized[0]) plt.plot(np.mean((predicted_set_CV_normalized-input_testing_set_normalized)**2, axis = 1)) np.argmax(np.mean((predicted_set_CV_normalized-input_testing_set_normalized)**2, axis = 1)) scaler_test.fit(df_test) threshold = 0.006 def scaleObservation(interpolatedObs, nonInterpolatedObs): commonMax = np.amax(interpolatedObs)#max(np.amax(interpolatedObs),nonInterpolatedObs.max()) commonMin = np.amin(interpolatedObs)#min(np.amin(interpolatedObs),nonInterpolatedObs.min()) return (nonInterpolatedObs - commonMin)/(commonMax - commonMin) funcScaleObservation 
= lambda x: scaleObservation(x[0], x[1])
input_testing_set_orig_scaled = list(map(funcScaleObservation, list(zip(input_testing_set, input_testing_set_orig))))

def plotOutlier(threshold):
    indexList = np.where(np.mean((predicted_set_CV_normalized-input_testing_set_normalized)**2, axis=1) > threshold)[0]
    unscaledValue = scaler_test.inverse_transform(input_testing_set_normalized)
    unscaledPredictedValue = scaler_test.inverse_transform(predicted_set_CV_normalized)
    for i in indexList:
        print(i)
        plt.plot(xvals, predicted_set_CV_normalized[i])
        plt.plot(xvals, input_testing_set_normalized[i], '-')
        plt.plot(input_testing_set_orig_scaled[i], 'o')
        plt.show()
    return

plotOutlier(threshold)
```

The original data cannot be used directly, because its format varies while the autoencoder expects a fixed format with a fixed number of input units. We can therefore only compare the interpolation with the prediction of the interpolation. This may give a biased definition of outliers: the interpolation can produce observations quite different from those actually recorded. Although there are many points in the preceding plots, the interpolated curve does not pass through them at all, and the observation is flagged as an outlier. These observations would probably be considered inliers under the functional approach (to be verified). Be careful to select only one specific market.

```
input_testing_set
input_testing_set_normalized[0]
predicted_set_CV_normalized[0]
```
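The four near-identical interpolation loops in `loadData` above could be collapsed into a single helper. A sketch, reusing the notebook's `xvals` tenor grid and assuming the `{key: [tenors, values]}` dictionary layout seen in the code:

```
import numpy as np

def interpolate_curves(curve_dict, grid):
    """Interpolate every curve stored as {key: [tenors, values]} onto a
    common tenor grid and stack the results into one matrix."""
    return np.array([np.interp(grid, tenors, values)
                     for tenors, values in curve_dict.values()])

# Toy curves on two different tenor grids, interpolated onto one.
grid = [90, 180, 365]
curves = {"a": [[90, 365], [1.0, 2.0]],
          "b": [[90, 180, 365], [3.0, 3.5, 4.0]]}
mat = interpolate_curves(curves, grid)
print(mat.shape)  # (2, 3)
```

Calling this once per index (`dictionary_SX5E`, `dictionary_NKY`, etc.) would replace the four copy-pasted loops, making it harder for the four code paths to drift apart.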
---
``` import numpy as np path_to_gaussian_result = '../bayes_implicit_solvent/continuous_parameter_experiments/freesolv_mh_jax_march28_gaussian_ll.npz' path_to_student_t_result = '../bayes_implicit_solvent/continuous_parameter_experiments/freesolv_mh_jax_march28_student-t_ll.npz' gaussian_result = np.load(path_to_gaussian_result) list(gaussian_result.keys()) train_cids = gaussian_result['cids'] ``` Get the molecules and a function for computing predictions ``` import numpy as np from bayes_implicit_solvent.molecule import Molecule from simtk import unit def sample_path_to_unitted_snapshots(path_to_npy_samples): xyz = np.load(path_to_npy_samples) traj = [snapshot * unit.nanometer for snapshot in xyz] return traj from glob import glob from pkg_resources import resource_filename path_to_vacuum_samples = resource_filename('bayes_implicit_solvent', 'vacuum_samples/vacuum_samples_*.npy') paths_to_samples = glob(path_to_vacuum_samples) #np.random.seed(0) #np.random.shuffle(paths_to_samples) #paths_to_samples = paths_to_samples[::2] print('number of molecules being considered: {}'.format(len(paths_to_samples))) def extract_cid_key(path): i = path.find('mobley_') j = path.find('.npy') return path[i:j] cids = list(map(extract_cid_key, paths_to_samples)) print('first few CIDs', cids[:5]) mols = [] n_configuration_samples = 50 from bayes_implicit_solvent.freesolv import cid_to_smiles from bayes_implicit_solvent.constants import beta def unreduce(value): """Input value is in units of kB T, turn it into units of kilocalorie_per_mole""" return value / (beta * unit.kilocalorie_per_mole) for path in paths_to_samples: cid = extract_cid_key(path) smiles = cid_to_smiles[cid] vacuum_samples = sample_path_to_unitted_snapshots(path) thinning = int(len(vacuum_samples) / n_configuration_samples) mol = Molecule(smiles, vacuum_samples=vacuum_samples[::thinning]) if (unreduce(mol.experimental_value) > -15) and (unreduce(mol.experimental_value) < 5): mols.append(mol) else: print('discarding {} 
({}) because its free energy was outside of the range [-15, +5] kcal/mol'.format(smiles, cid)) import numpy as np element_inds = [] all_elements = ['S', 'Cl', 'F', 'C', 'I', 'N', 'Br', 'H', 'P', 'O'] N = len(all_elements) element_dict = dict(zip(all_elements, range(len(all_elements)))) initial_radius_dict = dict(H=0.12, C=0.17, N=0.155, O=0.15, F=0.15, P=0.185, S=0.18, Cl=0.17, Br=0.15, I=0.15) initial_scaling_factor_dict = dict(H=0.85, C=0.72, N=0.79, O=0.85, F=0.88, P=0.86, S=0.96, Cl=0.80, Br=0.80, I=0.80) for mol in mols: element_inds.append(np.array([element_dict[a.element.symbol] for a in list(mol.top.atoms())])) from jax import jit, vmap from bayes_implicit_solvent.gb_models.jax_gb_models import compute_OBC_energy_vectorized from bayes_implicit_solvent.solvation_free_energy import kj_mol_to_kT, one_sided_exp @jit def predict_solvation_free_energy_jax(theta, distance_matrices, charges, element_ind_array): radii_, scaling_factors_ = theta[:N], theta[N:] radii = radii_[element_ind_array] scaling_factors = scaling_factors_[element_ind_array] @jit def compute_component(distance_matrix): return compute_OBC_energy_vectorized(distance_matrix, radii, scaling_factors, charges) W_F = vmap(compute_component)(distance_matrices) w_F = W_F * kj_mol_to_kT return one_sided_exp(w_F) distance_matrices = [mol.distance_matrices for mol in mols] charges = [mol.charges for mol in mols] expt_means = unreduce(np.array([mol.experimental_value for mol in mols])) expt_uncs = unreduce(np.array([mol.experimental_uncertainty for mol in mols])) ``` # Split properly into train and test... 
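An aside on the cell above: `predict_solvation_free_energy_jax` turns the per-element parameter vector `theta` into per-atom radii and scaling factors by fancy-indexing with `element_ind_array`. A minimal numpy sketch of that pattern, with made-up numbers and hypothetical names:

```python
import numpy as np

# Per-element parameters, ordered the same way as an all_elements list
elements = ['H', 'C', 'O']
element_dict = {e: i for i, e in enumerate(elements)}
radii_per_element = np.array([0.12, 0.17, 0.15])

# A water-like molecule: map atom symbols -> element indices -> per-atom radii
atom_symbols = ['O', 'H', 'H']
element_inds = np.array([element_dict[s] for s in atom_symbols])
per_atom_radii = radii_per_element[element_inds]
```

The same `element_inds` array can be reused for every parameter vector proposed during sampling, which is why it is precomputed once per molecule above.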
``` gaussian_traj = gaussian_result['traj'] theta = gaussian_traj[0] predict_solvation_free_energy_jax(theta, distance_matrices[0], charges[0], element_inds[0]) def get_predictions(theta): return np.array([predict_solvation_free_energy_jax(theta, distance_matrices[i], charges[i], element_inds[i]) for i in range(len(charges))]) predictions = get_predictions(theta) from tqdm import tqdm thinning = 100 gaussian_prediction_traj = np.array([get_predictions(theta) for theta in tqdm(gaussian_traj[::thinning])]) student_t_result = np.load(path_to_student_t_result) student_t_traj = student_t_result['traj'] student_t_prediction_traj = np.array([get_predictions(theta) for theta in tqdm(student_t_traj[::thinning])]) # unreduce! reduced_student_t_prediction_traj = np.array(student_t_prediction_traj) reduced_gaussian_prediction_traj = np.array(gaussian_prediction_traj) student_t_prediction_traj = unreduce(reduced_student_t_prediction_traj) gaussian_prediction_traj = unreduce(gaussian_prediction_traj) def rmse(x, y): return np.sqrt(np.mean((x-y)**2)) print(np.max(student_t_prediction_traj[10:].std(0)), np.min(student_t_prediction_traj[10:].std(0))) print(np.max(gaussian_prediction_traj[10:].std(0)), np.min(gaussian_prediction_traj[10:].std(0))) rmse(student_t_prediction_traj[-1], expt_means), rmse(gaussian_prediction_traj[-1], expt_means) mol = mols[0] from bayes_implicit_solvent.freesolv import smiles_to_cid smiles_to_cid[mol.smiles] cids[0] len(train_cids) train_inds = [] test_inds = [] for i in range(len(mols)): if smiles_to_cid[mols[i].smiles] in train_cids: train_inds.append(i) else: test_inds.append(i) train_inds = np.array(train_inds) test_inds = np.array(test_inds) test_inds rmse(student_t_prediction_traj[-1][train_inds], expt_means[train_inds]), rmse(gaussian_prediction_traj[-1][train_inds], expt_means[train_inds]) rmse(student_t_prediction_traj[-1][test_inds], expt_means[test_inds]), rmse(gaussian_prediction_traj[-1][test_inds], expt_means[test_inds]) def 
train_rmse(preds): return rmse(preds[train_inds], expt_means[train_inds]) def test_rmse(preds): return rmse(preds[test_inds], expt_means[test_inds]) student_t_train_rmse_traj = np.array(list(map(train_rmse, student_t_prediction_traj))) student_t_test_rmse_traj = np.array(list(map(test_rmse, student_t_prediction_traj))) gaussian_train_rmse_traj = np.array(list(map(train_rmse, gaussian_prediction_traj))) gaussian_test_rmse_traj = np.array(list(map(test_rmse, gaussian_prediction_traj))) x = np.arange(len(student_t_prediction_traj)) * thinning train_color = 'lightblue' test_color = 'green' train_style = '--' test_style = '-' import matplotlib.pyplot as plt %matplotlib inline from bayes_implicit_solvent.utils import remove_top_right_spines plt.figure(figsize=(8,4)) ax = plt.subplot(1,2,2) remove_top_right_spines(ax) plt.plot(x, student_t_train_rmse_traj, label='train', c=train_color, linestyle=train_style) plt.plot(x, student_t_test_rmse_traj, label='test', c=test_color, linestyle=test_style) plt.legend() plt.title('student-t likelihood') plt.ylabel('RMSE') plt.xlabel('RW-MH iteration') ax = plt.subplot(1,2,1, sharey=ax) remove_top_right_spines(ax) plt.plot(x, gaussian_train_rmse_traj, label='train', c=train_color, linestyle=train_style) plt.plot(x, gaussian_test_rmse_traj, label='test', c=test_color, linestyle=test_style) plt.legend() plt.title('gaussian likelihood') plt.ylabel('RMSE') plt.xlabel('RW-MH iteration') plt.tight_layout() plt.savefig('RW-MH-RMSE.png', dpi=300, bbox_inches='tight') # okay cool! now let's try to estimate the mixing time? # maybe later... 
plt.plot(student_t_prediction_traj[:,0]) plt.plot(gaussian_prediction_traj[:,0]) plt.plot(student_t_prediction_traj[:,1]) plt.hist(student_t_prediction_traj[200:,0], bins=50, density=True, alpha=0.5); plt.hist(student_t_prediction_traj[:,0], bins=50, density=True, alpha=0.5); i = test_inds[0] i plt.hist(gaussian_prediction_traj[:,i], bins=50, density=True, alpha=0.5); plt.hist(student_t_prediction_traj[:,i], bins=50, density=True, alpha=0.5); from scipy.stats import norm for _ in range(10): min_x, max_x = -15, 5 i = test_inds[np.random.randint(len(test_inds))] plt.figure() ax = plt.subplot(1,1,1) remove_top_right_spines(ax) plt.title(mols[i].smiles) x_ = np.linspace(min_x, max_x, 10000) y_ = norm.pdf(x_, loc=expt_means[i], scale=expt_uncs[i]) plt.plot(x_, y_, label='experimental') plt.fill_between(x_, y_, alpha=0.5) plt.hist(gaussian_prediction_traj[:,i], bins=50, density=True, alpha=0.5, label='gaussian PPD'); plt.hist(student_t_prediction_traj[:,i], bins=50, density=True, alpha=0.5, label='student-t PPD'); plt.xlim(min_x, max_x) plt.legend() plt.xlabel('hydration free energy (kcal/mol)') plt.ylabel('probability density') plt.yticks([]) ``` # Create a function that makes plots like these, and then make all of them, including a picture of the molecule! # Then sort them from best to worst? # Oh, compare also with the statistical uncertainty only!
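Sorting molecules "from best to worst" (the to-do above) is just an argsort over absolute errors of the posterior-mean predictions. A sketch with made-up numbers:

```python
import numpy as np

# Hypothetical posterior-mean predictions and experimental values (kcal/mol)
pred_means = np.array([-3.1, 0.4, -7.8])
expt = np.array([-3.0, 1.5, -8.0])

# Molecule indices ordered from smallest to largest absolute error
order = np.argsort(np.abs(pred_means - expt))
```

Indexing `mols`, `expt_means`, and the prediction trajectories by `order` would then yield the plots in best-to-worst sequence.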
# Summarize into some reliability curves ``` def expt_unc_contained(preds, i, desired_coverage=0.95): alpha = 100 * ((1 - desired_coverage) / 2) upper, lower = norm.cdf(np.percentile(preds, q=[100 - alpha, alpha]), loc=expt_means[i], scale=expt_uncs[i]) return upper - lower desired_coverages = np.linspace(0,1) actual_coverages = np.vstack([np.array([expt_unc_contained(student_t_prediction_traj[:,i], i, desired_coverage=p) for i in range(len(mols))]) for p in desired_coverages]) actual_coverages_gaussian = np.vstack([np.array([expt_unc_contained(gaussian_prediction_traj[:,i], i, desired_coverage=p) for i in range(len(mols))]) for p in desired_coverages]) actual_coverages.shape ax = plt.subplot(1,1,1) remove_top_right_spines(ax) plt.plot(desired_coverages, desired_coverages, c='grey', linestyle='--') plt.plot(desired_coverages, np.mean(actual_coverages, 1), label='student-t') plt.fill_between(desired_coverages, np.mean(actual_coverages, 1), alpha=0.25) plt.plot(desired_coverages, np.mean(actual_coverages_gaussian, 1), label='gaussian') plt.fill_between(desired_coverages, np.mean(actual_coverages_gaussian, 1), alpha=0.25) plt.legend() plt.xlabel('desired coverage probability') plt.ylabel('actual coverage probability') plt.xlim(0,1) plt.ylim(0,1) np.trapz(np.mean(actual_coverages_gaussian, 1), desired_coverages) ``` # what's a better way to show this... A limitation is that this plot isn't sensitive to over-confidence in the tails: if the MCMC predictive sits far from the experimental distribution, the curve barely distinguishes being super-confident in the wrong place from being merely unconfident...
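`expt_unc_contained` above asks how much experimental probability mass falls inside a central credible interval of the predictive samples. The same calculation can be sketched with only the standard library (the notebook uses `scipy.stats.norm` and `np.percentile` instead; the names here are hypothetical):

```python
import math

def norm_cdf(x, loc=0.0, scale=1.0):
    # Normal CDF via the error function
    return 0.5 * (1.0 + math.erf((x - loc) / (scale * math.sqrt(2.0))))

def coverage(preds, mu, sigma, desired=0.95):
    """Experimental N(mu, sigma) mass falling inside the central
    `desired` credible interval of the predictive samples `preds`."""
    preds = sorted(preds)
    alpha = (1.0 - desired) / 2.0
    lo = preds[int(round(alpha * (len(preds) - 1)))]
    hi = preds[int(round((1.0 - alpha) * (len(preds) - 1)))]
    return norm_cdf(hi, mu, sigma) - norm_cdf(lo, mu, sigma)
```

A predictive much wider than the experimental distribution covers essentially all of its mass; a point-mass predictive covers none, which is the over-confident failure mode the curves above are probing.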
``` actual_coverages.shape for i in np.random.randint(0,len(mols),20): plt.plot(desired_coverages, actual_coverages[:,i]) plt.plot(desired_coverages, desired_coverages, c='grey', linestyle='--') plt.hist([np.trapz(actual_coverages[:,i] - desired_coverages, desired_coverages) for i in range(len(mols))], bins=50); plt.hist([np.trapz(actual_coverages_gaussian[:,i] - desired_coverages, desired_coverages) for i in range(len(mols))], bins=50); np.mean([expt_unc_contained(gaussian_prediction_traj[:,i], i, desired_coverage=0.95) for i in range(len(mols))]), np.mean([expt_unc_contained(student_t_prediction_traj[:,i], i, desired_coverage=0.95) for i in range(len(mols))]) # what if I just scale up the intervals? plt.hist(actual_coverages) alphas = np.sum(norm.logpdf(0.5 * np.ones(10))) np.mean(theta[:int(len(theta)/2)]), np.mean(theta[int(len(theta)/2):]) student_t_result['log_prob_traj'][15000] np.max(student_t_result['log_prob_traj']) student_t_result['log_prob_traj'][-1] plt.plot(student_t_result['log_prob_traj']) plt.hlines(-822,0,len(student_t_result['log_prob_traj'])) plt.plot(student_t_result['log_prob_traj'][:100]) np.argmax(student_t_result['log_prob_traj'] > -1303)/26 plt.plot(student_t_result['log_prob_traj'][:100]) -822 - np.max(student_t_result['log_prob_traj']) plt.plot(gaussian_result['log_prob_traj']) plt.hlines(-2136,0,len(gaussian_result['log_prob_traj'])) min(gaussian_result['log_prob_traj']) (-2136 - max(gaussian_result['log_prob_traj'])) traj = student_t_result['traj'] N = int(len(traj[0]) / 2) plt.plot(traj[:,:N]) plt.plot(traj[:,N:]) # can I already compare the marginal likelihood of the student-t vs. gaussian model?
from bayes_implicit_solvent.typers import AtomSpecificationProposal np.random.seed(0) from bayes_implicit_solvent.gb_models.obc2_parameters import mbondi_model initial_tree = mbondi_model if '[#14]' in initial_tree.nodes: initial_tree.remove_node('[#14]') # otherwise everything is -inf, because this type will be empty initial_tree.proposal_sigmas['radius'] = 1e-2 * unit.nanometer initial_tree.proposal_sigmas['scale_factor'] = 1e-2 # add one more parameter per element appearing in FreeSolv but not specified in obc2 parameter set to initial tree for i in [17, 35, 53]: smirks = '[#{}]'.format(i) initial_tree.add_child(smirks, '*') initial_tree.un_delete_able_types.add(smirks) specifiers = ['X1', 'X2', 'X3', 'X4', 'a', 'A', '-1', '+0', '+1', '+2'] atom_specification_proposal = AtomSpecificationProposal(atomic_specifiers=specifiers) smirks_elaboration_proposal = atom_specification_proposal np.array([np.random.randn(np.random.randint(2,5)) for _ in range(10)]) np.array([initial_tree, initial_tree]) smirks_elaboration_proposal(initial_tree.sample_decorate_able_node_uniformly_at_random()) initial_tree.sample_create_delete_proposal(smirks_elaboration_proposal) theta = student_t_traj[-1] radii, scales = theta[:N], theta[N:] radii all_elements = ['S', 'Cl', 'F', 'C', 'I', 'N', 'Br', 'H', 'P', 'O'] element_to_num_dict = { 'S': 16, 'Cl': 17, 'F': 9, 'C': 6, 'I': 53, 'N': 7, 'Br': 35, 'H': 1, 'P': 15, 'O': 8, } initial_radii_dict = {} initial_scale_factor_dict = {} for i in range(len(all_elements)): e = all_elements[i] n = element_to_num_dict[e] s = '[#{}]'.format(n) initial_radii_dict[s] = radii[i] initial_scale_factor_dict[s] = scales[i] initial_radii_dict initial_scale_factor_dict element_dict np.max(student_t_result['log_prob_traj']) ``` # Move some of these figure-generating scripts from notebook into package
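For reference, the `one_sided_exp` used inside `predict_solvation_free_energy_jax` above is the Zwanzig one-sided exponential-averaging estimator, ΔF = -ln⟨exp(-w)⟩. A minimal numpy sketch (hypothetical name; the packaged version should be preferred):

```python
import numpy as np

def one_sided_exp_sketch(w):
    """Zwanzig estimator dF = -log <exp(-w)>, with a log-sum-exp style
    shift so that large work values don't underflow exp()."""
    w = np.asarray(w, dtype=float)
    shift = w.min()
    return shift - np.log(np.mean(np.exp(-(w - shift))))
```

By Jensen's inequality the estimate is never larger than the mean work, which is a quick sanity check on any implementation.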
# Visualizing water depth into the Chesapeake and Delaware Bays Many natural phenomena exhibit wide differences in the amount they vary across space, with properties varying quickly in some regions of space, and very slowly in others. These differences can make it challenging to make feasible, accurate models using a fixed sampling grid. For instance, for hydrology modelling, areas that are largely flat and uniform can be approximated with just a few samples, while areas of sharp elevation change need many samples per unit area to have a faithful representation of how water will flow. [Datashader](http://datashader.org)'s support for irregular triangular meshes allows datasets from such simulations to be rendered onscreen efficiently. This notebook shows an example of rendering a dataset from the Chesapeake and Delaware Bay off the US Eastern coast, using data downloaded from the [Datashader examples](https://raw.githubusercontent.com/bokeh/datashader/master/examples/README.md). Once Datashader and the dataset are installed, you can run this notebook yourself to get a live version with interactive zooming for the plots that support it. First, let's load the data file and take a look at it: ``` import datashader as ds, datashader.transfer_functions as tf, datashader.utils as du, pandas as pd fpath = '../data/Chesapeake_and_Delaware_Bays.3dm' df = pd.read_table(fpath, delim_whitespace=True, header=None, skiprows=1, names=('row_type', 'cmp1', 'cmp2', 'cmp3', 'val'), index_col=1) print(len(df)) tf.Images(df.head(), df.tail()) ``` Here we have 1.6 million rows of data, some marked 'ND' (vertices defined as lon,lat,elevation) and others marked 'E3T' (triangles specified as indexes into the provided vertices, in order starting with 1 (i.e. like Matlab or Fortran, not typical Python)). 
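Since the E3T rows index vertices starting at 1, the only conversion numpy needs is a uniform shift. A tiny sketch of that convention change:

```python
import numpy as np

# Two triangles in the file's 1-based (Matlab/Fortran-style) convention
e3t_1based = np.array([[1, 2, 3],
                       [2, 3, 4]])

# Shift to Python's 0-based convention before building the mesh
e3t_0based = e3t_1based - 1
```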
We can extract the separate triangle and vertex arrays we need for Datashader: ``` e3t = df[df['row_type'] == 'E3T'][['cmp1', 'cmp2', 'cmp3']].values.astype(int) - 1 nd = df[df['row_type'] == 'ND' ][['cmp1', 'cmp2', 'cmp3']].values.astype(float) nd[:, 2] *= -1 # Make depth increasing verts = pd.DataFrame(nd, columns=['x', 'y', 'z']) tris = pd.DataFrame(e3t, columns=['v0', 'v1', 'v2']) print('vertices:', len(verts), 'triangles:', len(tris)) ``` We'll also precompute the combined mesh data structure, which doesn't matter much for this 1-million-triangle mesh, but would save plotting time for much larger meshes: ``` %time mesh = du.mesh(verts,tris) ``` We can now visualize the average depth at each location covered by this mesh (darker colors indicate deeper areas, i.e. higher z values, since we inverted z above): ``` cvs = ds.Canvas(plot_height=900, plot_width=900) %time agg = cvs.trimesh(verts, tris, mesh=mesh) tf.shade(agg) ``` When working with irregular grids, it's important to understand and optimize the properties of the mesh, not just the final rendered data. Datashader makes it easy to see these properties using different aggregation functions: ``` cvs = ds.Canvas(plot_height=400, plot_width=400) tf.Images(tf.shade(cvs.trimesh(verts, tris, mesh=mesh, agg=ds.mean('z')), name="mean"), tf.shade(cvs.trimesh(verts, tris, mesh=mesh, agg=ds.any()), name="any"), tf.shade(cvs.trimesh(verts, tris, mesh=mesh, agg=ds.count()), name="count", how='linear'), tf.shade(cvs.trimesh(verts, tris, mesh=mesh, agg=ds.std('z')), name="std")).cols(2) ``` Here "any" shows the areas of the plane that are covered by the mesh (for constructing a raster mask), "count" shows how many triangles are used in the calculation of each pixel (with few triangles in the offshore area in this case, and more around the complex inland geometry), and "std" shows how much the data varies in each pixel (highlighting regions of steep change).
"max" and "min" can also be useful for finding unexpected areas in the mesh or simulation results, e.g. deep troughs too thin to show up in the plot directly. Note that before calling ``tf.shade()``, the result of ``cvs.trimesh`` is just an [xarray](http://xarray.pydata.org) array, and so you can run any algorithm you want on the aggregate to do automatic checking or transformation as part of a processing pipeline. By default, the results are bilinearly interpolated between values at each vertex, but if interpolation is turned off, the average of the values at each vertex is used for the entire triangle: ``` cvs = ds.Canvas(plot_height=420, plot_width=420, x_range=(-76.56, -76.46), y_range=(38.78, 38.902)) from colorcet import bmy as c tf.Images(tf.shade(cvs.trimesh(verts, tris, mesh=mesh, interp=True), cmap=c, name="Interpolated"), tf.shade(cvs.trimesh(verts, tris, mesh=mesh, interp=False), cmap=c, name="Raw triangles")) ``` ## Interactivity and overlaying Datashader only generates arrays and images, but it is designed to be integrated with plotting libraries to provide axes, interactivity, and overlaying with other data. Visualizing the mesh as a wireframe or colored surface is simple with [HoloViews](http://holoviews.org): ``` import holoviews as hv import geoviews as gv from holoviews import opts from holoviews.operation.datashader import datashade hv.extension("bokeh") opts.defaults( opts.Image(width=450, height=450), opts.RGB(width=450, height=450)) wireframe = datashade(hv.TriMesh((tris,verts), label="Wireframe")) trimesh = datashade(hv.TriMesh((tris,hv.Points(verts, vdims='z')), label="TriMesh"), aggregator=ds.mean('z')) wireframe + trimesh ``` Here the underlying wireframe and triangles will be revealed if you enable the wheel-zoom tool and zoom in to either plot. As you can see, HoloViews will reveal the lon,lat coordinates associated with the trimesh. 
However, HoloViews does not know how to reproject the data into another space, which is crucial if you want to overlay it on a geographic map in a different coordinate system. [GeoViews](http://geo.holoviews.org) provides this projection capability if you need it: ``` opts.defaults(opts.WMTS(width=500, height=500)) tiles = gv.WMTS('https://maps.wikimedia.org/osm-intl/{Z}/{X}/{Y}@2x.png') %time points = gv.operation.project_points(gv.Points(verts, vdims=['z'])) tiles * datashade(hv.TriMesh((tris, points)), aggregator=ds.mean('z'), precompute=True) ``` Here you can enable the wheel-zoom tool in the toolbar, then use scrolling and your pointer to zoom and pan in the plot. As always with datashader, the data is provided to the browser in only one resolution to start with, and it will be updated when you zoom in only if you have a running Python process, and are not just viewing this on a static web page.
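`project_points` here converts lon/lat degrees into the Web Mercator coordinates the tile source uses. The underlying formula is simple enough to sketch directly (spherical Mercator, radius 6378137 m; hypothetical function name):

```python
import math

def to_web_mercator(lon, lat):
    """Project lon/lat degrees onto spherical Web Mercator meters."""
    r = 6378137.0
    x = math.radians(lon) * r
    y = r * math.log(math.tan(math.pi / 4.0 + math.radians(lat) / 2.0))
    return x, y
```

GeoViews handles this (and many other coordinate reference systems) via cartopy, so the sketch is only to show why reprojection is needed before overlaying on web map tiles.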
``` # Import dependencies import os import glob import time import datetime import re from re import search import copy # might not need this import pandas as pd from splinter import Browser import requests from bs4 import BeautifulSoup as soup import random #import pymongo def return_dt(): global current_date current_date = str(datetime.datetime.now()).replace(':','.').replace(' ','_')[:-7] return current_date #return_dt() executable_path = {'executable_path': './chromedriver.exe'} url = input("Step 1) Please copy and paste your laptop query that you want to webscrape, and press enter: ") # Step 2) # Function to ask users if they want to watch the Bot (headless = False) work OR not (headless = True) # Lastly, will take you directly to the webpage that was inputted head = '' browser ='' def head_on_off(executable_path): # Have moved two preset variables, head and browser that are both " = '' " # assigning these as global variables enables us to reference them outside and inside the function global head global browser # options bounds the accepted answers options = [1, 2] #executable_path = {'executable_path': './chromedriver.exe'} # for all cases where users enter a value that is not valid while head not in options: head = int(input('Do you want to watch the bot work on the desktop? Enter a number: 1 - YES | 2 - NO . Your Answer: ')) if head not in options: print("That was not a valid answer. Please try again. ") # For cases where users enter in valid options: if head == options[0]: print('Head is activated. Please view only the new automated Google Chrome web browser. ') print('Do not make any adjustments to this automated window while the program runs, as it may produce errors or undesired outputs. ') browser = Browser('chrome', **executable_path, headless=False) if head == options[1]: print('Headless mode activated. No web browser will pop up. Please proceed.
') browser = Browser('chrome', **executable_path, headless=True) # visit the target site browser.visit(url) global current_url current_url = browser.url #print(current_url) return current_url #head_on_off(executable_path) #time.sleep(5) # Step 3) # Use Splinter to grab the current url, to setup request to pull URL #current_url = browser.url #+ '&Page=' #+ str(turn_page) # Use Request.get() to pull the current url current_url = browser.url print(current_url) response = requests.get(current_url) response # Step 4) # Use BeautifulSoup to grab all the HTML using the htmlparser current_page_soup = soup(response.text, 'html.parser') current_page_soup #current_page_soup.find_all('a', class_="item-title")[0]['href'] current_page_soup.find_all('div', class_="nav-x-body-top-bar fix")[0].text.split('\n')[5] current_page_soup.find_all('h1', class_="page-title-text")[0].text #current_page_soup.find_all("a", class_="item-title")[0].text #current_page_soup.find_all("div", class_="item-container") #example = current_page_soup.find_all('a', class_="item-title")[0]['href'].split('p/')[1].split('?')[0] try: example1 = current_page_soup.find_all('a', class_="item-title")[0]['href'].split('p/')[1].split('?')[0] print("example 1") print(example1) except (IndexError) as e: example2 = current_page_soup.find_all('a', class_="item-title")[0]['href'].split('p/')[1] print("example 2") print(example2) #bool(example.split('?')) # if bool(example.split('?')) == True: # #example.split('?') # if re.search('?', example) == True: # print(True) example1.split('?')[0] current_page_soup.find_all('a', class_="item-title")[0]['href'].split('p/')[1] # Step 5) Are there scrapable item-containers on the page?
List first, last and count, also how many pages def scrappable_y_n(current_page_soup): global containers containers = current_page_soup.find_all("div", class_="item-container") # print first and last objects so users can understand what the output will be print("Preview: expect these scrapped off this page, and for every other total results pages, if there's more than one: ") print("="*35) # max items should be 36 counter = 0 for con in containers: try: counter += 1 product_details = con.find_all("a", class_="item-title")[0].text product_price = con.find_all("li", class_="price-current")[0].text.split()[0] print(f'{counter}) {product_details} | Price: {product_price}') print("-"*35) except (IndexError) as e: print(f"{counter}) This item was not scrappable. Skipped. ") print("-"*35) print("="*60) if counter == 0: print("Unable to scrap this link. ") else: print(f"{len(containers)} Scrappable Objects on the page. ") #return containers #scrappable_y_n(current_page_soup) # Create basic classes here and then have the function create product objects AND export out to CSV class Product_catalog: all_prod_count = 0 def __init__(self, general_category): # computer systems self.general_category = general_category Product_catalog.all_prod_count += 1 def count_prod(self): return int(self.all_prod_count) #return '{}'.format(self.general_category) class Sub_category(Product_catalog): # laptops/notebooks, gaming sub_category_ct = 0 def __init__(self, general_category, sub_categ, item_num, brand, price, img_link, prod_link, model_specifications, current_promotions): super().__init__(general_category) Sub_category.sub_category_ct += 1 self.sub_categ = sub_categ self.item_num = item_num self.brand = brand self.price = price self.img_link = img_link self.prod_link = prod_link self.model_specifications = model_specifications self.current_promotions = current_promotions # TEST CREATING OBJECTS AND IT WORKS lptp_1 = Sub_category( "Computer_Systems", "Laptops/Notebooks", 
'Item=9SIA7AB8D73120', "HP", 1449.00, "//c1.neweggimages.com/ProductImageCompressAll300/A7AB_1_201811092013621813.jpg", 'https://www.newegg.com/p/1TS-000D-032G0?Item=9SIA7AB8D73120', 'HP EliteBook 840 G5 Premium School and Business Laptop (Intel 8th Gen i7-8550U Quad-Core, 16GB RAM, 256GB PCIe SSD, 14" FHD 1920x1080 Sure View Display, Thunderbolt3, NFC, Fingerprint, Win 10 Pro)', 'Free Expedited Shipping') #Product_catalog.all_prod_count # learning purposes Sub_category.__dict__ #Product_catalog.__dict__ print(Product_catalog.count_prod) # need to pass in the Sub_category for it to know what to count Product_catalog.count_prod(Sub_category) lptp_1.count_prod() Sub_category.sub_category_ct keys_list = list(lptp_1.__dict__.keys()) keys_list lptp_1.__dict__.values() dict_values_list = list(lptp_1.__dict__.values()) dict_values_list # handy trick - to see all data from the object. #print(prod_1.__dict__) #prod_1 lptp_1.__dict__ # reenabled item_number def newegg_page_scraper(containers, turn_page): #before: (containers, turn_page) images = [] product_brands = [] product_models = [] product_links = [] item_numbers = [] general_category = [] product_categories = [] promotions = [] prices = [] shipping_terms = [] page_nums = [] for con in containers: try: page_counter = turn_page page_nums.append(int(turn_page)) gen_category = con.find_all('div', class_="nav-x-body-top-bar fix")[0].text.split('\n')[5] general_category.append(gen_category) prod_category =
con.find_all('h1', class_="page-title-text")[0].text product_categories.append(prod_category) image = con.a.img["src"] #print(image) images.append(image) prd_title = con.find_all('a', class_="item-title")[0].text product_models.append(prd_title) product_link = con.find_all('a', class_="item-title")[0]['href'] product_links.append(product_link) shipping = con.find_all('li', class_='price-ship')[0].text.strip().split()[0] if shipping != "Free": shipping = shipping.replace('$', '') shipping_terms.append(shipping) else: shipping = 0.00 shipping_terms.append(shipping) brand_name = con.find_all('a', class_="item-brand")[0].img["title"] product_brands.append(brand_name) except (IndexError, ValueError) as e: # if there's no item_brand container, take the Brand from product details product_brands.append(con.find_all('a', class_="item-title")[0].text.split()[0]) #print(f"{e} block 1") try: current_promo = con.find_all("p", class_="item-promo")[0].text promotions.append(current_promo) except: promotions.append('null') #print(f"{e} block 2") try: price = con.find_all('li', class_="price-current")[0].text.split()[0].replace('$','').replace(',', '') prices.append(price) except (IndexError, ValueError) as e: prices.append('null') #print(f"{e} block 3") try: item_num = current_page_soup.find_all('a', class_="item-title")[0]['href'].split('p/')[1].split('?')[0] item_numbers.append(item_num) except (IndexError) as e: item_num = current_page_soup.find_all('a', class_="item-title")[0]['href'].split('p/')[1] item_numbers.append(item_num) df = pd.DataFrame({ 'item_number': item_numbers, 'general_category': general_category, 'product_category': product_categories, 'brand': product_brands, 'model_specifications': product_models, 'price': prices, 'current_promotions': promotions, 'shipping': shipping_terms, 'page_number': page_nums, 'product_links': product_links, 'image_link': images }) # df['general_category'] = current_page_soup.find_all('div', class_="nav-x-body-top-bar
fix")[0].text.split('\n')[5] # df['product_category'] = current_page_soup.find_all('h1', class_="page-title-text")[0].text # rearrange columns df = df[['item_number', 'general_category','product_category', 'page_number' ,'brand','model_specifications' ,'current_promotions' ,'price' ,'shipping' ,'product_links','image_link']] global product_category product_category = df['product_category'].unique()[0] # eliminate special characters in a string if it exists product_category = ''.join(e for e in product_category if e.isalnum()) #return_list.append(product_category) global items_scraped items_scraped = len(df['model_specifications']) df.to_csv(f'./processing/{current_date}_{product_category}_{items_scraped}_scraped_page{turn_page}.csv') return items_scraped, product_category #df.head() #newegg_page_scraper(containers, turn_page) #print(containers[1].find_all('a', class_="item-brand")[0].img["title"]) ####################################################################################### # create a function to return results pages, if exists, otherwise just scrape one page def results_pages(): # Use BeautifulSoup to extract the total results page number results_pages = current_page_soup.find_all('span', class_="list-tool-pagination-text")[0].text.strip() #print(results_pages) # Find and extract total pages + and add 1 to ensure proper length of total pages global total_results_pages total_results_pages = int(re.split("/", results_pages)[1]) #+ 2 # need to add 2 b/c 'range(inclusive, exclusive)' #========================================= need to remember to +2, and remove -30 #print(total_results_pages) #total_results_pages = total_results_pages ## ### ## ### return total_results_pages #results_pages() # Working def concatenate(total_results_pages): path = f'./processing\\' scraped_pages = glob.glob(path + "/*.csv") concatenate_pages = [] counter = 0 for page in scraped_pages: df = pd.read_csv(page, index_col=0, header=0) concatenate_pages.append(df) compiled_data = 
pd.concat(concatenate_pages, axis=0, ignore_index=True) total_items_scraped = len(compiled_data['brand']) # can replace this counter by creating class objects every time it scrapes concatenated_output = compiled_data.to_csv(f"./finished_outputs/{current_date}_{total_items_scraped}_scraped_{total_results_pages}_pages_.csv") return concatenated_output # total_results_pages = 4 # concatenate(total_results_pages) # This is working def clean_processing_fldr(): # delete all files in the 'processing folder' path = f'./processing\\' scraped_pages = glob.glob(path + "/*.csv") if len(scraped_pages) < 1: print("There are no files in the folder to clear. ") else: print(f"Clearing out a total of {len(scraped_pages)} scraped pages in the processing folder... ") clear_processing_files = [] for page in scraped_pages: os.remove(page) print('Clearing of "Processing" folder complete. ') # webscrape first page, then run page turner, then scraper for every page thereafter # # learning lesson is you can't call a function within a function def page_turner(total_results_pages): # This is "NEXT PAGE BUTTON CLICK" - This loops thru the total amount of pages by clicking the next page button for turn_page in range(1, total_results_pages): # set the current url as the target page (aiming the boomerang) target_url = browser.url # Use Request.get() - throw the boomerang at the target, retrieve the info, & return back to requestor response_target = requests.get(target_url) #response # Use BeautifulSoup to grab all the HTML using html.parser target_page_soup = soup(response_target.text, 'html.parser') # Use BeautifulSoup to extract the total results page number #results_pages = current_page_soup.find_all('span', class_="list-tool-pagination-text")[0].text.strip() results_pages = target_page_soup.find_all('span', class_="list-tool-pagination-text")[0].text.strip() #========================================================= containers = target_page_soup.find_all("div",
class_="item-container") newegg_page_scraper(containers, turn_page) #for i in range(total_results_pages): x = random.randint(3, 25) print(f"{turn_page}) | SLEEPING FOR SECONDS: {x} ") time.sleep(x) browser.find_by_xpath('//*[@id="bodyArea"]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[1]/div[2]/div/div[2]/button').click() browser.quit() # concatenate(total_results_pages) # clean_processing_fldr() # # clear out processing folder function here - as delete everything to prevent clutter # print(f'WebScraping Complete! All {total_results_pages} have been scraped and saved as {current_date}_{product_category}_scraped_{total_results_pages}_pages_.csv in the "finished_outputs" folder') # print('Thank you and hope you found this useful!') # scrape_again = 'y' # while scrape_again =='y': # print("=== NewEgg.Com WebScraper Beta ===") # print("=="*30) # print("Instructions:") # print("(1) Go to www.newegg.com, search for your laptop requirements (e.g. brand and specifications) ") # print("(2) Copy and paste the url from your exact search ") # print('(3) Activate or Disable the "Head View", webscraper bots point of view ') # print('(4) Check the "final_output folder when the webscraper bot is done scraping "') # print("") # # NEED TO CREATE LAST INPUT TO LOOP THIS WHOLE THING AGAIN. # #Maybe use pandas to provide summary of data # #Export images of box and whisker plots using statistics by brand return_dt() executable_path = {'executable_path': './chromedriver.exe'} url = input("Step 1) Please copy and paste your laptop query that you want to webscrape, and press enter: ") head = '' browser ='' head_on_off(executable_path) response = requests.get(current_url) #response current_page_soup = soup(response.text, 'html.parser') current_page_soup.find_all("div", class_="item-container") scrappable_y_n(current_page_soup) # Are there any pop ups / safe to proceed? safe_proceed_y_n = input(f'The Break Pedal: Answer any robot queries by NewEgg. 
Enter "y" when you are ready to proceed. ') if safe_proceed_y_n == 'y': print(f'Proceeding with webscrape... ') else: print("Quitting browser. You will need to press ctrl + c to quit, and then restart the program to try again. ") browser.quit() #newegg_page_scraper(containers) # will need to UNCOMMENT AFTER results_pages() #page_turner(total_results_pages) #total_results_pages = 5 for turn_page in range(1, total_results_pages): # set the current url as the target page (aiming the boomerang) target_url = browser.url # Use requests.get() - throw the boomerang at the target, retrieve the info, & return back to the requestor response_target = requests.get(target_url) #response # Use BeautifulSoup to parse all the HTML with html.parser target_page_soup = soup(response_target.text, 'html.parser') # Use BeautifulSoup to extract the total results page number #results_pages = current_page_soup.find_all('span', class_="list-tool-pagination-text")[0].text.strip() results_pages = target_page_soup.find_all('span', class_="list-tool-pagination-text")[0].text.strip() #========================================================= containers = target_page_soup.find_all("div", class_="item-container") newegg_page_scraper(containers, turn_page) #for i in range(total_results_pages): x = random.randint(3, 25) print(f"{turn_page}) | SLEEPING FOR {x} SECONDS ") time.sleep(x) browser.find_by_xpath('//*[@id="bodyArea"]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[1]/div[2]/div/div[2]/button').click() browser.quit() ################################################################## concat_y_n = input(f'All {total_results_pages} pages have been saved in the "processing" folder (1 page = 1 csv file). Would you like for us to concatenate all the files into one? Enter "y", if so. Otherwise, enter any key to exit the program. ') if concat_y_n == 'y': concatenate(total_results_pages) print(f'WebScraping Complete! All {total_results_pages} pages have been scraped and saved as {current_date}_{product_category}_scraped_{total_results_pages}_pages_.csv in the "finished_outputs" folder') # clear out processing folder function here - delete everything to prevent clutter clear_processing_y_n = input(f'The "processing" folder has {total_results_pages} csv files, one for each page that was scraped. Would you like to clear the files? Enter "y", if so. Otherwise, enter any key to exit the program. ') if clear_processing_y_n == 'y': clean_processing_fldr() print('Thank you and hope you found this useful!') ```
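The page-turning loop above mixes fetching, parsing and throttling in one block. As a rough sketch (not part of the original scraper; the helper name and parameters are invented), the randomized delay it uses can be factored into a small, testable function:

```python
import random
import time

def jittered_sleep(min_s=3, max_s=25, sleep=time.sleep):
    """Pause a random whole number of seconds in [min_s, max_s].

    Returns the chosen delay so the caller can log it, mirroring the
    'SLEEPING FOR {x} SECONDS' message in the loop above. The sleep
    function is injectable so tests can skip the real wait.
    """
    x = random.randint(min_s, max_s)
    sleep(x)
    return x

# Example: log the delay without actually waiting
delay = jittered_sleep(sleep=lambda s: None)
print(f"would sleep for {delay} seconds")
```

Keeping the throttle in one place makes it easy to tune (or disable in tests) without touching the scraping logic.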
github_jupyter
<h1> AWS Summit 2017 - Seoul: MXNet MNIST CNN Example ``` import math import time import mxnet as mx import mxnet.ndarray as nd import numpy as np import random from sklearn.datasets import fetch_mldata # set the seeds. However, this does not guarantee that the result will always be the same since CUDNN is non-deterministic np.random.seed(777) mx.random.seed(77) random.seed(7777) ``` <h3> 1. Load the data </h3> Load the MNIST dataset. We use 55000 images for training, 5000 images for validation and 10000 images for testing. ``` mnist = fetch_mldata(dataname='MNIST original') X, y = mnist.data, mnist.target X = X.astype(np.float32) / 255.0 X_train, X_valid, X_test = X[:55000].reshape((-1, 1, 28, 28)),\ X[55000:60000].reshape((-1, 1, 28, 28)),\ X[60000:].reshape((-1, 1, 28, 28)) y_train, y_valid, y_test = y[:55000], y[55000:60000], y[60000:] # hyper parameters learning_rate = 0.001 training_epochs = 15 batch_size = 100 drop_out_prob = 0.3 # The keep probability is 0.7 ``` <h3> 2. Build the symbol </h3> Next we will build the symbol, which is used to determine the data flow. 
``` data = mx.sym.var(name="data") label = mx.sym.var(name="label") L1 = mx.sym.Convolution(data=data, kernel=(3, 3), pad=(1, 1), num_filter=32, name='L1_conv') L1 = mx.sym.Activation(data=L1, act_type='relu', name='L1_relu') L1 = mx.sym.Pooling(data=L1, kernel=(2, 2), stride=(2, 2), pool_type='max', name='L1_pool') L1 = mx.sym.Dropout(L1, p=drop_out_prob, name="L1_dropout") L2 = mx.sym.Convolution(data=L1, kernel=(3, 3), pad=(1, 1), num_filter=64, name='L2_conv') L2 = mx.sym.Activation(data=L2, act_type='relu', name='L2_relu') L2 = mx.sym.Pooling(data=L2, kernel=(2, 2), stride=(2, 2), pool_type='max', name='L2_pool') L2 = mx.sym.Dropout(L2, p=drop_out_prob, name="L2_dropout") L3 = mx.sym.Convolution(data=L2, kernel=(3, 3), pad=(1, 1), num_filter=128, name='L3_conv') L3 = mx.sym.Activation(data=L3, act_type='relu', name='L3_relu') L3 = mx.sym.Pooling(data=L3, kernel=(2, 2), stride=(2, 2), pad=(1, 1), pool_type='max', name='L3_pool') L3 = mx.sym.flatten(L3) L3 = mx.sym.Dropout(L3, p=drop_out_prob, name="L3_dropout") L4 = mx.sym.FullyConnected(data=L3, num_hidden=625, name='L4_fc') L4 = mx.sym.Dropout(L4, p=drop_out_prob) logits = mx.sym.FullyConnected(data=L4, num_hidden=10, name='logits') loss = mx.sym.mean(-mx.sym.pick(mx.sym.log_softmax(logits), label, axis=-1)) loss = mx.sym.make_loss(loss) ``` <h3> 3. Construct the Module </h3> We will construct the Module object based on the symbol. Module will be used for training and testing. Also, the testing executor will try to reuse the allocated memory space of the training executor. 
``` data_desc = mx.io.DataDesc(name='data', shape=(batch_size, 1, 28, 28), layout='NCHW') label_desc = mx.io.DataDesc(name='label', shape=(batch_size, ), layout='N') net = mx.mod.Module(symbol=loss, data_names=[data_desc.name], label_names=[label_desc.name], context=mx.gpu()) net.bind(data_shapes=[data_desc], label_shapes=[label_desc]) net.init_params(initializer=mx.init.Xavier()) net.init_optimizer(optimizer="adam", optimizer_params={'learning_rate': learning_rate, 'rescale_grad': 1.0}, kvstore=None) # We build another testing network that outputs the logits. test_net = mx.mod.Module(symbol=logits, data_names=[data_desc.name], label_names=None, context=mx.gpu()) # Setting the `shared_module` to ensure that the test network shares the same parameters and # allocated memory of the training network test_net.bind(data_shapes=[data_desc], label_shapes=None, for_training=False, grad_req='null', shared_module=net) ``` <h3> 4. Training </h3> We can fit the training set now. ``` for epoch in range(training_epochs): begin = time.time() avg_cost = 0 total_batch = int(math.ceil(X_train.shape[0] / batch_size)) shuffle_ind = np.random.permutation(np.arange(X_train.shape[0])) X_train = X_train[shuffle_ind, :] y_train = y_train[shuffle_ind] for i in range(total_batch): # Slice the data batch and label batch. # Note that we use np.take to ensure that the batch will be padded correctly. data_npy = np.take(X_train, indices=np.arange(i * batch_size, (i+1) * batch_size), axis=0, mode="clip") label_npy = np.take(y_train, indices=np.arange(i * batch_size, (i + 1) * batch_size), axis=0, mode="clip") net.forward(data_batch=mx.io.DataBatch(data=[nd.array(data_npy)], label=[nd.array(label_npy)]), is_train=True) loss_nd = net.get_outputs()[0] net.backward() net.update() avg_cost += loss_nd.asnumpy()[0] / total_batch end = time.time() print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.9f}'.format(avg_cost), 'time spent=%gs' %(end-begin)) print('Learning Finished!') ``` <h3> 5. 
Testing </h3> Let's test the model on the test set. ``` total_batch = int(np.ceil(X_test.shape[0] / batch_size)) correct_count = 0 total_num = 0 for i in range(total_batch): num_valid = batch_size if (i + 1) * batch_size <= X_test.shape[0]\ else X_test.shape[0] - i * batch_size data_npy = np.take(X_test, indices=np.arange(i * batch_size, (i + 1) * batch_size), axis=0, mode="clip") label_npy = np.take(y_test, indices=np.arange(i * batch_size, (i + 1) * batch_size), axis=0, mode="clip") test_net.forward(data_batch=mx.io.DataBatch(data=[nd.array(data_npy)], label=None), is_train=False) logits_nd = test_net.get_outputs()[0] pred_cls = nd.argmax(logits_nd, axis=-1).asnumpy() correct_count += (pred_cls[:num_valid] == label_npy[:num_valid]).sum() acc = correct_count / float(X_test.shape[0]) print('Accuracy:', acc) ``` <h3> 6. Get one and predict </h3> We can predict the label of a single sample ``` test_net.reshape(data_shapes=[mx.io.DataDesc(name='data', shape=(1, 1, 28, 28), layout='NCHW')], label_shapes=None) r = np.random.randint(0, X_test.shape[0]) test_net.forward(data_batch=mx.io.DataBatch(data=[nd.array(X_test[r:r+1])], label=None)) logits_nd = test_net.get_outputs()[0] print("Label: ", int(y_test[r])) print("Prediction: ", int(nd.argmax(logits_nd, axis=1).asnumpy()[0])) ```
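The batching in the training loop depends on `np.take(..., mode="clip")`, which clamps out-of-range indices to the last valid index, so the final short batch is padded by repeating the last sample instead of raising an IndexError. A minimal stand-alone illustration:

```python
import numpy as np

X = np.arange(5)      # pretend dataset of 5 samples
batch_size = 3

# The second batch asks for indices 3, 4, 5, but index 5 is out of
# range; mode="clip" clamps it to the last valid index (4).
batch = np.take(X, indices=np.arange(1 * batch_size, 2 * batch_size),
                axis=0, mode="clip")
print(batch)  # [3 4 4]
```

The duplicated trailing samples are harmless for accuracy counting as long as only the first `num_valid` predictions are scored, which is exactly what the testing cell below does.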
#### New to Plotly? Plotly's Python library is free and open source! [Get started](https://plotly.com/python/getting-started/) by downloading the client and [reading the primer](https://plotly.com/python/getting-started/). <br>You can set up Plotly to work in [online](https://plotly.com/python/getting-started/#initialization-for-online-plotting) or [offline](https://plotly.com/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plotly.com/python/getting-started/#start-plotting-online). <br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started! #### Version Check Note: exponential fits are available in version <b>1.9.2+</b><br> Run `pip install plotly --upgrade` to update your Plotly version ``` import plotly plotly.__version__ ``` ### Exponential Fit ``` # Learn about API authentication here: https://plotly.com/python/getting-started # Find your api_key here: https://plotly.com/settings/api import plotly.plotly as py import plotly.graph_objs as go # Scientific libraries import numpy as np from scipy.optimize import curve_fit x = np.array([399.75, 989.25, 1578.75, 2168.25, 2757.75, 3347.25, 3936.75, 4526.25, 5115.75, 5705.25]) y = np.array([109,62,39,13,10,4,2,0,1,2]) def exponenial_func(x, a, b, c): return a*np.exp(-b*x)+c popt, pcov = curve_fit(exponenial_func, x, y, p0=(1, 1e-6, 1)) xx = np.linspace(300, 6000, 1000) yy = exponenial_func(xx, *popt) # Creating the dataset, and generating the plot trace1 = go.Scatter( x=x, y=y, mode='markers', marker=go.Marker(color='rgb(255, 127, 14)'), name='Data' ) trace2 = go.Scatter( x=xx, y=yy, mode='lines', marker=go.Marker(color='rgb(31, 119, 180)'), name='Fit' ) annotation = go.Annotation( x=2000, y=100, text='$\textbf{Fit}: 163.56e^{-0.00097x} - 1.16$', showarrow=False ) layout = go.Layout( title='Exponential Fit in Python', plot_bgcolor='rgb(229, 229, 229)', 
xaxis=go.XAxis(zerolinecolor='rgb(255,255,255)', gridcolor='rgb(255,255,255)'), yaxis=go.YAxis(zerolinecolor='rgb(255,255,255)', gridcolor='rgb(255,255,255)'), annotations=[annotation] ) data = [trace1, trace2] fig = go.Figure(data=data, layout=layout) py.plot(fig, filename='Exponential-Fit-in-python') from IPython.display import display, HTML display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />')) display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">')) ! pip install git+https://github.com/plotly/publisher.git --upgrade import publisher publisher.publish( 'Exponential-fits.ipynb', 'python/exponential-fits/', 'Exponential Fit', 'Create a exponential fit / regression in Python and add a line of best fit to your chart.', title = 'Exponential Fit', name = 'Exponential Fit', has_thumbnail='true', thumbnail='thumbnail/exponential_fit.jpg', language='python', page_type='example_index', display_as='statistics', order=11, ipynb= '~notebook_demo/135') ```
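The fitting step itself is independent of Plotly. As a self-contained sketch (the synthetic data and parameter values here are invented for illustration), the same `scipy.optimize.curve_fit` call recovers known exponential parameters from noiseless data:

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential(x, a, b, c):
    # Same model family as the cell above: a * exp(-b * x) + c
    return a * np.exp(-b * x) + c

x = np.linspace(0, 4, 50)
y = exponential(x, 2.5, 1.3, 0.5)  # noiseless synthetic data

# Starting guesses matter for exponentials; (1, 1, 1) is close enough here
popt, pcov = curve_fit(exponential, x, y, p0=(1, 1, 1))
print(np.round(popt, 3))  # should be approximately [2.5, 1.3, 0.5]
```

On real, noisy data the recovered parameters carry uncertainty; the diagonal of `pcov` gives their variances.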
``` # This notebook assumes to be running from your FireCARES VM (eg. python manage.py shell_plus --notebook --no-browser) import sys import os import time import pandas as pd import numpy as np import psycopg2 import django sys.path.insert(0, os.path.realpath('..')) import folium django.setup() from IPython.display import display from django.contrib.gis.geos import GEOSGeometry pd.set_option("display.max_rows",100) def display_geom(geom): _map = folium.Map(location=[geom.centroid.y, geom.centroid.x], tiles='Stamen Toner') _map.choropleth(geo_str=geom.geojson, line_weight=0, fill_opacity=0.2, fill_color='green') ll = geom.extent[1::-1] ur = geom.extent[3:1:-1] _map.fit_bounds([ll, ur]) return _map nfirs = psycopg2.connect('service=nfirs') fc = psycopg2.connect('service=firecares') parcels = psycopg2.connect('service=parcels') ``` ### Total departments/stations in FireCARES ``` q = """ select * from (select count(1) as total_departments from firestation_firedepartment) total_departments, (select count(1) as total_stations from firestation_firestation) total_stations """ pd.read_sql_query(q, fc) ``` ### Covered population ``` q = """ select sum(fd.population) as total_population from firestation_firedepartment fd """ pd.read_sql_query(q, fc) ``` ### Covered area ``` from firecares.firestation.models import FireDepartment deps = FireDepartment.objects.filter(population__isnull=False).exclude(population=0) areas = filter(lambda x: x[1] is not None, map(lambda x: (x.id, x.geom_area), deps)) sum(a[1] for a in areas) len(areas) ``` ### Total incidents and building fires ``` display(pd.read_sql_query("select count(1) as buildingfires from buildingfires", nfirs)) display(pd.read_sql_query("select count(1) as incidents from incidentaddress", nfirs)) ``` ### Coverage of departments based on department-owned census geometry that have populations ``` # Number of departments with owned census tracts q = """ select *, (null_tracts_count + not_null_tracts_count) as total from 
(select count(1) as null_tracts_count from firestation_firedepartment where owned_tracts_geom is null) null_owned_tracts, (select count(1) as not_null_tracts_count from firestation_firedepartment where owned_tracts_geom is not null) not_null_owned_tracts """ print "Owned tracts geom" display(pd.read_sql_query(q, fc)) q = """ select *, (null_jurisdictions + not_null_jurisdictions) as total from (select count(1) as null_jurisdictions from firestation_firedepartment where geom is null) null_jurisdiction, (select count(1) as not_null_jurisdictions from firestation_firedepartment where geom is not null) not_null_jurisdiction """ print "Jurisdictions" display(pd.read_sql_query(q, fc)) %%time q = """ select ST_Union(ST_Union(fd.owned_tracts_geom), ST_Union(fd.geom)) from firestation_firedepartment fd where fd.state = 'MO' """ df = pd.read_sql_query(q, fc) poly = GEOSGeometry(df.values[0][0]) display(display_geom(poly.simplify())) %%time # Total square meters in the USA (territories excluded) us = 9147593069344. 
pd.set_option('display.float_format', lambda x: '%f' % x) q = """ select ST_Area(geography(ST_Union(ST_Union(ST_MakeValid(fd.owned_tracts_geom)), ST_Union(ST_MakeValid(fd.geom))))) from firestation_firedepartment fd where fd.population is not null and fd.population != 0 and fd.state = 'CT' """ display(pd.read_sql_query(q, fc).values[0][0] / us) ``` ### Invalid geometries ``` %%time q = """ select count(1), state from firestation_firedepartment fd where not ST_IsValid(fd.geom) group by fd.state """ display(pd.read_sql_query(q, fc)) %%time q = """ select count(1), state from firestation_firedepartment fd where not ST_IsValid(fd.owned_tracts_geom) group by fd.state """ display(pd.read_sql_query(q, fc)) ``` ### Bad address geocodes ``` %%time q = """ select id, ST_SRID(a.geom), ST_X(a.geom), ST_Y(a.geom), a.state_province from firecares_core_address a where ST_X(a.geom) > 180 or ST_X(a.geom) < -180 or ST_Y(a.geom) > 180 or ST_Y(a.geom) < -180 """ display(pd.read_sql_query(q, fc)) # Fix bad addresses %%time q = """ update firecares_core_address set geom = ST_Transform(ST_SetSRID(geom, 3857), 4326) where ST_X(geom) > 180 or ST_X(geom) < -180 or ST_Y(geom) > 180 or ST_Y(geom) < -180 """ c = fc.cursor() c.execute(q) fc.commit() ``` ### Bad addresses (missing country/etc) ``` %%time q = """ select count(1) from firecares_core_address where address_line2 = 'None' """ display(pd.read_sql_query(q, fc)) ``` ### Verify station lat/lons ``` from firecares.firestation.models import FireStation map(lambda x: (x.get('geom').coords, x.get('name')), FireStation.objects.filter(department_id=97963).values('geom', 'name')) ``` ### Jurisdiction boundary statistics ``` %%time from firecares.firestation.models import FireDepartment from firecares.utils import lenient_summation import json import os d = [] fname = '/firecares/outf.json' if os.path.exists(fname): d = json.load(open(fname)) ids = map(lambda x: x.get('fcid'), d) for idx, fd in 
enumerate(FireDepartment.objects.defer('owned_tracts_geom').filter(archived=False)): if fd.id in ids: continue if idx % 10 == 0: print idx with open(fname, 'w') as f: json.dump(d, f) d.append({'fcid': fd.id, 'name': fd.name, 'state': fd.state, 'boundary': True if fd.geom else False, 'population': fd.population, 'region': fd.region, 'fires_count': lenient_summation(*map(lambda x: x.get('count'), fd.metrics.residential_structure_fire_counts['all'])), 'casualty_count': fd.metrics.nfirs_deaths_and_injuries_sum['all'], 'station_count': fd.firestation_set.count()}) print 'done' deduped = [] for i in d: if i.get('fcid') in map(lambda x: x.get('fcid'), deduped): continue deduped.append(i) pd.DataFrame(deduped).to_csv('/tmp/stations.csv') ```
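The de-duplication loop at the end rescans `deduped` for every record, which is quadratic in the number of departments. A linear-time sketch of the same idea (standalone illustration, not FireCARES code), keeping the first record seen for each `fcid`:

```python
def dedupe_by_key(records, key='fcid'):
    """Keep the first record for each key value, preserving order."""
    seen = set()
    out = []
    for rec in records:
        k = rec.get(key)
        if k in seen:
            continue  # later duplicates are dropped
        seen.add(k)
        out.append(rec)
    return out

rows = [{'fcid': 1, 'name': 'a'},
        {'fcid': 2, 'name': 'b'},
        {'fcid': 1, 'name': 'c'}]
print(dedupe_by_key(rows))  # first occurrence of fcid=1 wins
```

Set membership is O(1) on average, so the whole pass is O(n) instead of O(n²).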
# Feature Engineering for Model 2 For building up a second model M2, new features are designed. These new data sets are combined with `..\data\train_M1` (resp. `..\data\train_M2`) and stored under `../data/test_M2.csv` (resp. `..\data\test_M2`). ## import libraries / declare functions ``` import pandas as pd import numpy as np import dask.dataframe as dd from IPython.display import Markdown, display from sklearn import preprocessing import warnings, sys if not sys.warnoptions: warnings.simplefilter("ignore") def convert_types(df): # Convert data types to reduce memory for c in df: col_type = str(df[c].dtypes) numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64'] # Convert objects to category if col_type == 'object': df[c] = df[c].astype('category') # numerics elif col_type in numerics: c_min = df[c].min() c_max = df[c].max() if col_type[:3] == 'int': if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max: df[c] = df[c].astype(np.int8) elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max: df[c] = df[c].astype(np.int16) elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max: df[c] = df[c].astype(np.int32) elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max: df[c] = df[c].astype(np.int64) else: if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max: df[c] = df[c].astype(np.float16) elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max: df[c] = df[c].astype(np.float32) else: df[c] = df[c].astype(np.float64) return df ``` ## Import Data ``` file = '../data/train_clean.csv' ddf = dd.read_csv(file) train = ddf.compute() train = convert_types(train) train.shape file = '../data/test_clean.csv' ddf = dd.read_csv(file) test = ddf.compute() test = convert_types(test) test.shape # target column target = 'HasDetections' # id from data set data_id = 'MachineIdentifier' ``` ## Feature engineering ``` # List for new features to encoding method and model
parameter new_features_labelencode = [] new_features_category = [] # function to create new features def featureengineering (df): df['primary_drive_c_ratio'] = df['Census_SystemVolumeTotalCapacity']/ df['Census_PrimaryDiskTotalCapacity'] df['primary_drive_c_ratio'] = df['primary_drive_c_ratio'].fillna(-1) df['non_primary_drive_MB'] = df['Census_PrimaryDiskTotalCapacity'] - df['Census_SystemVolumeTotalCapacity'] df['non_primary_drive_MB'] = df['non_primary_drive_MB'].fillna(-1) df['aspect_ratio'] = df['Census_InternalPrimaryDisplayResolutionHorizontal']/ df['Census_InternalPrimaryDisplayResolutionVertical'] df['aspect_ratio'] = df['aspect_ratio'].fillna(-1) df['monitor_dims'] = df['Census_InternalPrimaryDisplayResolutionHorizontal'].astype(str) + '*' + df['Census_InternalPrimaryDisplayResolutionVertical'].astype('str') df['monitor_dims'] = df['monitor_dims'].fillna('unknown') df['monitor_dims'] = df['monitor_dims'].astype('category') df['Screen_Area'] = (df['aspect_ratio']* (df['Census_InternalPrimaryDiagonalDisplaySizeInInches']**2))/(df['aspect_ratio']**2 + 1) df['Screen_Area'] = df['Screen_Area'].fillna(-1) df['ram_per_processor'] = df['Census_TotalPhysicalRAM']/ df['Census_ProcessorCoreCount'] df['ram_per_processor'] = df['ram_per_processor'].fillna(-1) df['ProcessorCoreCount_DisplaySizeInInches'] = df['Census_ProcessorCoreCount'] * df['Census_InternalPrimaryDiagonalDisplaySizeInInches'] df['ProcessorCoreCount_DisplaySizeInInches'] = df['ProcessorCoreCount_DisplaySizeInInches'].fillna(-1) df['SmartScreen'] = df['SmartScreen'].astype(str) df['AVProductsInstalled'] = df['AVProductsInstalled'].astype(str) df['SmartScreen_AVProductsInstalled'] = df['SmartScreen'] + df['AVProductsInstalled'] df['SmartScreen_AVProductsInstalled'] = df['SmartScreen_AVProductsInstalled'].fillna('unknown') df['SmartScreen_AVProductsInstalled'] = df['SmartScreen_AVProductsInstalled'].astype('category') df['SmartScreen'] = df['SmartScreen'].astype('category') df['AVProductsInstalled'] = 
df['AVProductsInstalled'].astype('category') return(df) # add feature to encoding list new_features_labelencode.append('monitor_dims') # add feature to parameter list new_features_category.append('monitor_dims') new_features_labelencode.append('SmartScreen_AVProductsInstalled') new_features_category.append('SmartScreen_AVProductsInstalled') # create features for train data train = featureengineering(train) # create features for test data test = featureengineering(test) new_features_labelencode new_features_category ``` ## save Data sets ``` train.to_csv('../data/train_featureengineering_M2.csv', index = False) test.to_csv('../data/test_featureengineering_M2.csv', index = False) ```
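`convert_types` picks the smallest dtype by comparing each column's min/max by hand. The same memory-saving idea can be sketched with pandas' built-in `downcast` option (the toy column names here are made up):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'small_int': [1, 2, 3],
                   'ratio': [0.5, 1.5, 2.5]})

for col in df:
    if pd.api.types.is_integer_dtype(df[col]):
        # Chooses the narrowest integer dtype that holds the values
        df[col] = pd.to_numeric(df[col], downcast='integer')
    elif pd.api.types.is_float_dtype(df[col]):
        # Floats downcast to float32 at most
        df[col] = pd.to_numeric(df[col], downcast='float')

print(df.dtypes)  # small_int -> int8, ratio -> float32
```

The hand-rolled version above additionally downcasts floats to float16 and converts object columns to `category`, which `pd.to_numeric` does not do; that's a reasonable motivation for keeping the custom function.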
# Get sparse firing from a dictionary of STRFs *Nhat Le, Oct 2017* ``` import numpy as np import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import scipy.io.wavfile import pickle import scipy.signal from sklearn.decomposition import PCA %matplotlib inline ``` Import the STRFs and read the sound file ``` strfs_all = np.genfromtxt('./STRFs/convolution_2017BBLB1_42968_51352408_8_21_14_15_52_150neurons.csv', delimiter=',') firings_all = np.genfromtxt('./STRFs/convolution_background_subtract_2017BBLB1_42968_51352408_8_21_14_15_52_150neurons.csv', delimiter=',') firings_all.shape plt.figure(figsize=(15,15)) plt.imshow(firings_all, cmap='jet', aspect='auto') plt.savefig('image.pdf') plt.plot(firings_all[9,:1000], '.') strfs_all.shape plt.plot(strfs_all[0,:]) # Read song fs, song = scipy.io.wavfile.read('./Finch_songs/2017BBLB1_42968.51352408_8_21_14_15_52.wav') # Read strfs strfs_all = np.genfromtxt('./STRFs/STRFarr_finch_kail_150_neurons_171010.csv', delimiter=',') # Remove first column (index) strfs_all = strfs_all[:,1:] # Read the pca parameters f = open('./pca_finch_kail_150neurons_200pcs.pckl', 'rb') pca = pickle.load(f) f.close() def reconstruct_from_A_s(A, S, pca, shape, segment_width, segment_len, Y=None): '''Reconstruct the spectrogram, given a dictionary A of receptive fields, and s: the activity of neurons''' Y = np.dot(A, S) Y_inverted = pca.inverse_transform(Y.T).T recons = np.zeros(shape) for ncol in range(Y.shape[1]): segment = np.reshape(Y_inverted[:,ncol], (segment_width, segment_len)) start = ncol * segment_step recons[:,start:(start + segment_len)] += segment return recons def reconstruct_from_pca(Y, pca, shape, segment_width, segment_len): '''Reconstruct the spectrogram, given its pca projection''' recons = np.zeros(shape) Y_inverted = pca.inverse_transform(Y.T).T #Y_inverted = np.dot(comp.T, Y) for ncol in range(Y.shape[1]): segment = np.reshape(Y_inverted[:,ncol], (segment_width, segment_len)) start = ncol * segment_step 
recons[:,start:(start + segment_len)] += segment return recons def make_single_strf(A, index, pca_used, segment_width, segment_len): '''Make the strf from a dictionary A's column given by index ''' A_col = A[index,:] #A_col_inv = pca_used.inverse_transform(A_col) A_inv_im = np.reshape(A_col, (segment_width, segment_len)) A_inv_im /= np.max(np.abs(A_inv_im)) # Invert if skew is negative skew = scipy.stats.skew(A_inv_im.ravel()) if skew < 0: A_inv_im *= -1 return A_inv_im def make_all_strfs(A, segment_width, segment_len): '''Make the strf from a dictionary A's column given by index ''' A_col = A[index,:] #A_col_inv = pca_used.inverse_transform(A_col) A_inv_im = np.reshape(A_col, (segment_width, segment_len)) A_inv_im /= np.max(np.abs(A_inv_im)) # Invert if skew is negative skew = scipy.stats.skew(A_inv_im.ravel()) if skew < 0: A_inv_im *= -1 return A_inv_im strfs_all.shape s = make_single_strf(strfs_all, 0, pca, 256, 22) plt.imshow(np.flipud(s), cmap = 'jet') nrows = 10 ncols = 15 fig, ax = plt.subplots(nrows, ncols, figsize=(ncols,2 * nrows)) fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None) for i in range(nrows): for j in range(ncols): A_inv_im = make_single_strf(strfs_all, ncols * i + j, pca, 256, 22) ax[i][j].grid(False) ax[i][j].imshow(np.flipud(A_inv_im), cmap='jet', aspect='auto', extent=[0, 200, 100, 4000]) ax[i][j].set_ylim([100, 4000]) if i != nrows - 1 or j != 0: ax[i][j].set(xticklabels=[], yticklabels=[]) else: ax[i][j].set_xlabel('Time (ms)') ax[i][j].set_ylabel('Frequency (Hz)') ax[i][j].yaxis.set_major_locator(plt.FixedLocator([100, 4000])) #ax[i][j].yaxis.set_major_formatter(plt.FuncFormatter(format_func_pcs)) def multitaper_spec(X, fs, nadvance=100, slepian_len=500, nperseg=4096*4, NW=1): '''Perform a dpss multitapering Returns the spectrogram, frequency (f) and time ticks (t)''' # Use slepian window of width 0.3 #window = scipy.signal.slepian(M=slepian_len, width=NW / slepian_len) window = 
scipy.signal.blackman(slepian_len) window = np.lib.pad(window, (0, nperseg - slepian_len), 'constant', constant_values=(0, 0)) noverlap = len(window) - nadvance f,t,spectrogram = scipy.signal.spectrogram(X, fs, window=window, noverlap=noverlap) spectrogram = 20 * np.log10(spectrogram) #Units: dB return (f, t, spectrogram) def sample_logspec(spectrogram, f, nsamples=256, fmin=100.0, fmax=4000.0): '''Sample the spectrogram to collect the required number of samples in frequency from fmin to fmax, logarithmically spaced Returns the spectrogram and the frequencies sampled''' # Frequencies to sample spacing = (np.log(fmax) - np.log(fmin)) / (nsamples - 1) logfreqs = np.log(fmin) + np.arange(nsamples) * spacing freqs = np.exp(logfreqs) # Sampled version row_ids = [] for freq in freqs: row_ids.append(len(f[f < freq])) logspec = spectrogram[row_ids,:] return (logspec, freqs) def format_func_spec(value, tick_number): '''Format tick marks for spectrogram plot''' if value >= len(freqs): label = max(freqs) else: label = freqs[int(value)] return int(np.round(label, -2)) def format_func_pcs(value, tick_number): '''Formatting axis ticks for the PC plot''' return value # Compute the spectrogram and log spectrogram f,t,spectrogram = multitaper_spec(song, fs=fs, nadvance=100, nperseg=4096*4) logspec, freqs = sample_logspec(spectrogram, f, fmin=300.0, fmax=15000) plt.plot(song[340000:360000]) plt.hist(song) plt.hist(logspec.ravel()) # Time and frequency ranges for plotting fmin = 0 fmax = np.max(f) #Hz tmin = 0 tmax = max(t) #s f_start = len(f[f<fmin]) f_end = len(f[f<fmax]) t_start = len(t[t<tmin]) t_end = len(t[t<tmax]) # Plot the spectrogram and log spectrogram fig, ax = plt.subplots(2, 1, figsize=(20,10)) ax[0].imshow(np.flipud(spectrogram[f_start:f_end,t_start:t_end]), cmap='jet', aspect='auto', extent=[tmin, tmax, fmin, fmax]) ax[0].grid(False) ax[0].set_title('Song spectrogram', fontsize=20) ax[0].set_xlabel('Time (s)', fontsize=20) ax[0].set_ylabel('Frequency (Hz)', 
fontsize=20); ax[0].tick_params(axis='both', which='major', labelsize=20) # For plotting ax[1].imshow(np.flipud(logspec[:,t_start:t_end]), cmap='jet', aspect='auto', extent=[tmin, tmax, 0, len(freqs)]) ax[1].grid(False) ax[1].set_title('Song spectrogram, log sampled', fontsize=20) ax[1].set_xlabel('Time (s)', fontsize=20) ax[1].set_ylabel('Frequency (Hz)', fontsize=20); ax[1].yaxis.set_major_formatter(plt.FuncFormatter(format_func_spec)) ax[1].tick_params(axis='both', which='major', labelsize=20) nadvance = 100 n_comp_pca = 200 segment_len_ms = 50 segment_len = int(segment_len_ms / 1000 / nadvance * fs) #samples spectrogram = logspec segment_width = spectrogram.shape[0] segment_step = int(segment_len / 10) #samples dt = segment_len * nadvance / fs * 1000 #length of each window in ms segments_lst = [] for t_start in np.arange(0, spectrogram.shape[1] - segment_len, segment_step): segments_lst.append(spectrogram[:,t_start:(t_start + segment_len)]) reconstruct_from_pca? Y2 = pca.transform(X1.T).T X_recons = reconstruct_from_pca(Y2, pca, logspec.shape, 256, 22) # Do pca on each segment X = np.zeros((segments_lst[0].shape[0] * segments_lst[0].shape[1], len(segments_lst))) for idx, segment in enumerate(segments_lst): X[:,idx] = segment.ravel() # Mean subtract Xmean = np.mean(X, axis=1) Xmeans = np.tile(Xmean, (X.shape[1], 1)) X1 = X - Xmeans.T # Perform pca with whitening #pca = PCA(n_components=n_comp_pca, whiten=True) #pca.fit(X1.T) Y = pca.transform(X1.T).T print(' Number of segments:', Y.shape[1]) ``` ## Perform the convolution ``` firings_all = np.zeros(0) for i in range(strfs_all.shape[1]): A_inv_im = make_single_strf(strfs_all, i, pca, 256, 22) firing = scipy.signal.convolve2d(X_recons, A_inv_im, mode='valid').ravel() #firing -= np.mean(firing) firings_all = np.concatenate((firings_all, firing.T)) firings_all = firings_all.reshape(strfs_all.shape[1], len(firing)) plt.plot(firings_all[10,:1000]) plt.imshow(firings_all, aspect='auto', cmap='hot') plt.grid(False) 
plt.plot(firings_all[0,:]) s = scipy.signal.convolve2d(logspec, A_inv_im, mode='valid') s.shape plt.plot(s.ravel()) s = scipy.signal.convolve2d(logspec, A_inv_im, mode='valid') ```
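`scipy.signal.convolve2d(..., mode='valid')` keeps only positions where the STRF fits entirely inside the spectrogram, so an H x W spectrogram convolved with an h x w STRF yields an output of shape (H-h+1) x (W-w+1); with a full-height 256 x 22 STRF that collapses to a single time trace, which is why `.ravel()` above gives one firing-rate vector per neuron. A toy shape check:

```python
import numpy as np
from scipy.signal import convolve2d

spec = np.ones((256, 100))  # stand-in log-spectrogram (freq x time)
strf = np.ones((256, 22))   # stand-in STRF spanning the full frequency axis

firing = convolve2d(spec, strf, mode='valid')
print(firing.shape)  # (1, 79): one row, 100 - 22 + 1 time bins
```

If the STRF were shorter than the spectrogram in frequency, 'valid' mode would instead return multiple rows, one per vertical offset.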
<a href="https://colab.research.google.com/github/DiffSharp/DiffSharp/blob/dev/notebooks/debug/NativeCudaLoadLinux.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> **Important note:** You should always work on a duplicate of the course notebook. On the page you used to open this, tick the box next to the name of the notebook and click duplicate to easily create a new version of this notebook. You will get errors each time you try to update your course repository if you don't do this, and your changes will end up being erased by the original course version. # Debugging CUDA package problems on Linux using CUDA-enabled machines on Google CoLab Google Colab offers free CUDA-enabled GPU machines which are very useful for debugging problems with CUDA packages on Linux. This notebook started with the investigations in https://github.com/xamarin/TorchSharp/issues/169 and is being kept for future times we need to investigate similar failures. ### Investigate GLIB and GLIBCXX dependencies available on this system One reason Linux native binaries fail to load is that they were built on a later Linux system, e.g. built on Ubuntu 20.04 while you're trying to load them on Ubuntu 18.04. This can even happen from 18.04 (Build VM) to 18.04 (CoLab). One particular pair of dependencies is "GLIB" (libc) and "GLIBCXX" (libstdc++). Failures for these cause messages like "GLIBCXX_3.4.14 missing" - this is a version symbol in the native binaries. So it's important to determine the maximum GLIB and GLIBCXX symbols available on both the build machine and the target machine.
Here's an example: ``` # Investigate GLIB and GLIBCXX dependencies available on this system !ldd --version !/sbin/ldconfig -p | grep stdc++ !strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep LIBCXX ``` ## Install .NET SDK To run F# code on CoLab you need to install the .NET SDK on the CoLab VM: ``` # Install dotnet !wget https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb && sudo dpkg -i packages-microsoft-prod.deb && sudo apt-get update && sudo apt-get install -y apt-transport-https && sudo apt-get update && sudo apt-get install -y dotnet-sdk-5.0 !dotnet --version ``` ## Restore packages (libtorch-cpu) Native loading for TorchSharp.dll ultimately wants to bind to the P/Invoke library "LibTorchSharp", which in turn has a bunch of dependencies on `torch.dll` or `libtorch.so` plus a whole lot of other things. First we want to acquire the packages we want to load: ``` # Restore packages (libtorch-cpu) !echo "printfn \"phase0\"" > foo.fsx !echo "#r \"nuget: TorchSharp, 0.91.52515\"" >> foo.fsx !echo "#r \"nuget: libtorch-cpu, 1.9.0.1\"" >> foo.fsx !echo "printfn \"done\"" >> foo.fsx !cat foo.fsx !echo "-------" !dotnet fsi foo.fsx ``` Next we look around the dependencies of libLibTorchSharp. ``` # Look around packages and dependencies (libtorch-cpu) !
/lib/x86_64-linux-gnu/libc.so.6 --version !ls /root/.nuget/packages/libtorch-cpu/1.9.0.1/runtimes/linux-x64/native !ls /root/.nuget/packages/torchsharp/0.91.52515/runtimes/linux-x64/native/ !echo LD_LIBRARY_PATH=$LD_LIBRARY_PATH !ldd --version !ldd /root/.nuget/packages/torchsharp/0.91.52515/runtimes/linux-x64/native/libLibTorchSharp.so ``` Next we run some F# code by creating a script and calling `TorchSharp.Torch.IsCudaAvailable` ``` # Run some code with workaround (libtorch-cpu) !echo "printfn \"phase0\"" > foo.fsx !echo "#r \"nuget: TorchSharp, 0.91.52515\"" >> foo.fsx !echo "#r \"nuget: libtorch-cpu, 1.9.0.1\"" >> foo.fsx #!echo "printfn \"phase1\"" >> foo.fsx #!echo "open System.Runtime.InteropServices" >> foo.fsx #!echo "NativeLibrary.Load(\"/root/.nuget/packages/libtorch-cpu/1.9.0.1/runtimes/linux-x64/native/libtorch.so\") |> printfn \"%A\"" >> foo.fsx !echo "printfn \"phase2\"" >> foo.fsx !echo "TorchSharp.Torch.IsCudaAvailable() |> printfn \"%A\"" >> foo.fsx !cat foo.fsx !dotnet fsi foo.fsx ``` This should report "false" since we're using the LibTorch CPU binaries and they can't connect to the GPU resource. ## Restore packages (libtorch-cuda) Next we do a similar process for libtorch-cuda-11.1-linux-x64. > This will take a long time as the packages are huge > It will be faster on the next iteration ``` # Restore packages (libtorch-cuda) !echo "printfn \"phase0\"" > foo.fsx !echo "#i \"nuget: https://donsyme.pkgs.visualstudio.com/TorchSharp/_packaging/packages2%40Local/nuget/v3/index.json\"" >> foo.fsx !echo "#r \"nuget: TorchSharp, 0.91.52515\";;" >> foo.fsx !echo "#r \"nuget: libtorch-cuda-11.1-linux-x64, 1.9.0.1\";;" >> foo.fsx !echo "printfn \"done\"" >> foo.fsx !cat foo.fsx !echo "-------" !dotnet fsi foo.fsx ``` Next we look around the dependencies of libLibTorchSharp. ``` # Look around packages and dependencies (libtorch-cuda) ! 
/lib/x86_64-linux-gnu/libc.so.6 --version !ls /root/.nuget/packages/libtorch-cuda-11.1-linux-x64/1.9.0.1/ !echo LD_LIBRARY_PATH=$LD_LIBRARY_PATH !ldd --version !ls /root/.nuget/packages/torchsharp/0.91.52515/runtimes/linux-x64/native/ #!ldd /root/.nuget/packages/torchsharp/0.91.52515/runtimes/linux-x64/native/libLibTorchSharp.so !ls /root/.nuget/packages/torchsharp/0.91.52515/lib/netcoreapp3.1/cuda-11.1/ !ldd /root/.nuget/packages/torchsharp/0.91.52515/lib/netcoreapp3.1/cuda-11.1/libLibTorchSharp.so # Run some code with workaround (libtorch-cpu) !echo "printfn \"phase0\"" > foo.fsx !echo "#r \"nuget: TorchSharp, 0.91.52515\"" >> foo.fsx !echo "#r \"nuget: libtorch-cuda-11.1-linux-x64, 1.9.0.1\"" >> foo.fsx #!echo "printfn \"phase1\"" >> foo.fsx #!echo "open System.Runtime.InteropServices" >> foo.fsx #!echo "NativeLibrary.Load(\"/root/.nuget/packages/libtorch-cpu/1.9.0.1/runtimes/linux-x64/native/libtorch.so\") |> printfn \"%A\"" >> foo.fsx !echo "printfn \"phase2\"" >> foo.fsx !echo "TorchSharp.Torch.IsCudaAvailable() |> printfn \"%A\"" >> foo.fsx !cat foo.fsx !dotnet fsi foo.fsx ``` This should report "true" since we're using the LibTorch CPU binaries and they can't connect to the GPU resource.
<h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Objectives" data-toc-modified-id="Objectives-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Objectives</a></span></li><li><span><a href="#Concept-of-the-$k$-Nearest-Neighbors-Algorithm" data-toc-modified-id="Concept-of-the-$k$-Nearest-Neighbors-Algorithm-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Concept of the $k$-Nearest Neighbors Algorithm</a></span><ul class="toc-item"><li><span><a href="#Who's-Nearby?" data-toc-modified-id="Who's-Nearby?-2.1"><span class="toc-item-num">2.1&nbsp;&nbsp;</span>Who's Nearby?</a></span></li><li><span><a href="#Summary-of-$k$NN" data-toc-modified-id="Summary-of-$k$NN-2.2"><span class="toc-item-num">2.2&nbsp;&nbsp;</span>Summary of $k$NN</a></span></li><li><span><a href="#Implementing-in-Scikit-Learn" data-toc-modified-id="Implementing-in-Scikit-Learn-2.3"><span class="toc-item-num">2.3&nbsp;&nbsp;</span>Implementing in Scikit-Learn</a></span><ul class="toc-item"><li><span><a href="#Training-the-KNN" data-toc-modified-id="Training-the-KNN-2.3.1"><span class="toc-item-num">2.3.1&nbsp;&nbsp;</span>Training the KNN</a></span></li><li><span><a href="#Make-Some-Predictions" data-toc-modified-id="Make-Some-Predictions-2.3.2"><span class="toc-item-num">2.3.2&nbsp;&nbsp;</span>Make Some Predictions</a></span></li></ul></li></ul></li><li><span><a href="#The-Pros-and-Cons" data-toc-modified-id="The-Pros-and-Cons-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>The Pros and Cons</a></span><ul class="toc-item"><li><span><a href="#Advantages" data-toc-modified-id="Advantages-3.1"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>Advantages</a></span></li><li><span><a href="#Disadvantages" data-toc-modified-id="Disadvantages-3.2"><span class="toc-item-num">3.2&nbsp;&nbsp;</span>Disadvantages</a></span></li></ul></li><li><span><a href="#Classification-with-sklearn.neighbors" 
data-toc-modified-id="Classification-with-sklearn.neighbors-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>Classification with <code>sklearn.neighbors</code></a></span><ul class="toc-item"><li><span><a href="#Train-Test-Split" data-toc-modified-id="Train-Test-Split-4.1"><span class="toc-item-num">4.1&nbsp;&nbsp;</span>Train-Test Split</a></span></li><li><span><a href="#Validation-Split" data-toc-modified-id="Validation-Split-4.2"><span class="toc-item-num">4.2&nbsp;&nbsp;</span>Validation Split</a></span></li><li><span><a href="#Different-$k$-Values" data-toc-modified-id="Different-$k$-Values-4.3"><span class="toc-item-num">4.3&nbsp;&nbsp;</span>Different $k$ Values</a></span><ul class="toc-item"><li><span><a href="#$k=1$" data-toc-modified-id="$k=1$-4.3.1"><span class="toc-item-num">4.3.1&nbsp;&nbsp;</span>$k=1$</a></span></li><li><span><a href="#$k=3$" data-toc-modified-id="$k=3$-4.3.2"><span class="toc-item-num">4.3.2&nbsp;&nbsp;</span>$k=3$</a></span></li><li><span><a href="#$k=5$" data-toc-modified-id="$k=5$-4.3.3"><span class="toc-item-num">4.3.3&nbsp;&nbsp;</span>$k=5$</a></span></li><li><span><a href="#Observing-Different-$k$-Values" data-toc-modified-id="Observing-Different-$k$-Values-4.3.4"><span class="toc-item-num">4.3.4&nbsp;&nbsp;</span>Observing Different $k$ Values</a></span></li></ul></li><li><span><a href="#Scaling" data-toc-modified-id="Scaling-4.4"><span class="toc-item-num">4.4&nbsp;&nbsp;</span>Scaling</a></span><ul class="toc-item"><li><span><a href="#More-Resources-on-Scaling" data-toc-modified-id="More-Resources-on-Scaling-4.4.1"><span class="toc-item-num">4.4.1&nbsp;&nbsp;</span>More Resources on Scaling</a></span></li></ul></li></ul></li><li><span><a href="#$k$-and-the-Bias-Variance-Tradeoff" data-toc-modified-id="$k$-and-the-Bias-Variance-Tradeoff-5"><span class="toc-item-num">5&nbsp;&nbsp;</span>$k$ and the Bias-Variance Tradeoff</a></span><ul class="toc-item"><li><span><a href="#The-Relation-Between-$k$-and-Bias/Variance" 
data-toc-modified-id="The-Relation-Between-$k$-and-Bias/Variance-5.1"><span class="toc-item-num">5.1&nbsp;&nbsp;</span>The Relation Between $k$ and Bias/Variance</a></span></li></ul></li><li><span><a href="#Level-Up:-Distance-Metrics" data-toc-modified-id="Level-Up:-Distance-Metrics-6"><span class="toc-item-num">6&nbsp;&nbsp;</span>Level Up: Distance Metrics</a></span></li></ul></div>

![wilson](img/wilson.jpg)

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

from sklearn.metrics import f1_score, confusion_matrix,\
    recall_score, precision_score, accuracy_score

from src.confusion import plot_confusion_matrix
from src.k_classify import predict_one
from src.plot_train import *
from src.euclid import *

from sklearn import datasets
from sklearn.preprocessing import StandardScaler, MinMaxScaler, LabelEncoder
from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors
from sklearn.model_selection import train_test_split, KFold
```

# Objectives

- Describe the $k$-nearest neighbors algorithm
- Identify multiple common distance metrics
- Tune $k$ appropriately in response to models with high bias or variance

# Concept of the $k$-Nearest Neighbors Algorithm

```
# KNN can be used with more than one target.
```

First, let's recall what **supervised learning** is.

> In **supervised learning** we use example data (_training data_) to inform our predictions of future data

Note that this definition includes _classification_ and _regression_ problems. And there are a variety of ways we can make predictions from past data.

$k$-nearest neighbors is one such method of making predictions.

## Who's Nearby?

One strategy for making predictions on new data is to just look at what _similar_ data points are like.

![](img/best_k_fs.png)

We can say _nearby_ points are _similar_ to one another. There are a few different ways to determine how "close" data points are to one another.
Check out the [Level Up section on distance metrics](#Level-Up:-Distance-Metrics) for some more detail.

## Summary of $k$NN

![](img/knn-process.png)

## Implementing in Scikit-Learn

> [`KNeighborsClassifier`](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html) & [`KNeighborsRegressor`](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html)

Let's try doing some basic classification on some data using the kNN algorithm.

```
iris = sns.load_dataset('iris')
display(iris)

# Let's convert this over to a NumPy array
X = iris.iloc[:,:2].to_numpy()

# Let's convert classes to numerical values
y = LabelEncoder().fit_transform(iris['species'])
y

f, ax = plt.subplots()
sns.scatterplot(x=X[:,0], y=X[:,1], ax=ax, hue=y, palette='colorblind')
ax.get_legend().remove()
```

### Training the KNN

```
neigh = KNeighborsClassifier(n_neighbors=3, metric='euclidean')
neigh.fit(X, y)
```

### Make Some Predictions

```
# Made up data points
pred_pts = np.array([
    [7.0, 3.0],
    [8.0, 3.5],
    [7.0, 4.0],
    [4.0, 3.0],
    [5.0, 3.0],
    [5.5, 4.0],
    [5.0, 2.0],
    [6.0, 2.5],
    [5.8, 3.5],
])
```

Let's see these new points against the training data. Think about how they'll be classified.

```
f, ax = plt.subplots()
sns.scatterplot(x=X[:,0], y=X[:,1], ax=ax, hue=y, palette='colorblind')
sns.scatterplot(x=pred_pts[:,0], ax=ax, y=pred_pts[:,1], marker="*", s=200, edgecolor='black', color='magenta')
ax.get_legend().remove()

# Make predictions
pred_y = neigh.predict(pred_pts)
print(pred_y)

# Probabilities for KNN (how they voted)
for p, prob in zip(pred_y, neigh.predict_proba(pred_pts)):
    print(f'{p}: {prob}')

f, ax = plt.subplots()
sns.scatterplot(x=X[:,0], y=X[:,1], ax=ax, hue=y, palette='colorblind')
sns.scatterplot(x=pred_pts[:,0], ax=ax, y=pred_pts[:,1], hue=pred_y, palette='colorblind', marker="*", s=200, edgecolor='black')
ax.get_legend().remove()
```

Let's see those predictions plotted with the other points after the classification.
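To make the voting mechanics concrete, here's a minimal from-scratch sketch of a kNN classifier (purely illustrative; `X_demo`/`y_demo` are made-up toy data, and this is not how `KNeighborsClassifier` is implemented internally):

```python
from collections import Counter
import numpy as np

def knn_predict(X, y, new_point, k=3):
    """Classify new_point by majority vote among its k nearest training points."""
    # Euclidean distance from new_point to every training point
    distances = np.sqrt(((X - new_point) ** 2).sum(axis=1))
    # Indices of the k smallest distances
    nearest = np.argsort(distances)[:k]
    # Majority vote among the neighbors' labels
    return Counter(y[nearest]).most_common(1)[0][0]

# Two tiny made-up clusters: class 0 near (1, 1), class 1 near (5, 5)
X_demo = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.8], [4.9, 5.1]])
y_demo = np.array([0, 0, 1, 1, 1])

print(knn_predict(X_demo, y_demo, np.array([5.0, 4.9]), k=3))  # 1
print(knn_predict(X_demo, y_demo, np.array([1.1, 0.9]), k=3))  # 0
```

The point to notice is that there is no "training" beyond storing `X_demo` and `y_demo`; all the work happens at prediction time.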
# The Pros and Cons

Models have different use cases, and it helps to understand their strengths and weaknesses.

## Advantages

- Lazy learning (no training phase)
- Simple algorithm to understand and implement

## Disadvantages

- The whole training set has to be kept in memory (best suited to small data with few features)
- Not robust; doesn't generalize well
- Soft boundaries are troublesome
- "Curse of Dimensionality"

# Classification with `sklearn.neighbors`

$k$-Nearest Neighbors is a modeling technique that works for both regression and classification problems. Here we'll apply it to a version of the Titanic dataset.

```
titanic = pd.read_csv('data/cleaned_titanic.csv')
titanic = titanic.iloc[:, :-2]
titanic.head()
```

**For visualization purposes, we will use only two features for our first model.**

```
X = titanic[['Age', 'Fare']]
y = titanic['Survived']
y.value_counts()
```

## Train-Test Split

This dataset of course presents a binary classification problem, with our target being the `Survived` feature.

```
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size=0.25)
```

## Validation Split

```
X_t, X_val, y_t, y_val = train_test_split(X_train, y_train, random_state=42, test_size=0.25)

knn = KNeighborsClassifier()
knn.fit(X_t, y_t)
print(f"training accuracy: {knn.score(X_t, y_t)}")
print(f"validation accuracy: {knn.score(X_val, y_val)}")

y_hat = knn.predict(X_val)
plot_confusion_matrix(confusion_matrix(y_val, y_hat), classes=['Perished', 'Survived'])

X_for_viz = X_t.sample(15, random_state=40)
y_for_viz = y_t[X_for_viz.index]

fig, ax = plt.subplots(figsize=(10, 10))
sns.scatterplot(x=X_for_viz['Age'], y=X_for_viz['Fare'], hue=y_for_viz, palette={0: 'red', 1: 'green'}, s=200, ax=ax)

ax.set_xlim(0, 80)
ax.set_ylim(0, 80)
plt.legend()
plt.title('Subsample of Training Data');
```

The $k$-NN algorithm works by simply storing the training set in memory, then measuring the distance from the training points to a new point.
Let's drop a point from our validation set into the plot above.

```
X_for_viz = X_t.sample(15, random_state=40)
y_for_viz = y_t[X_for_viz.index]

fig, ax = plt.subplots(figsize=(10, 10))
sns.scatterplot(x=X_for_viz['Age'], y=X_for_viz['Fare'], hue=y_for_viz, palette={0: 'red', 1: 'green'}, s=200, ax=ax)
plt.legend()

#################^^^Old code^^^##############
####################New code#################

# Let's take one sample from our validation set and plot it
new_x = pd.DataFrame(X_val.loc[484]).T
new_y = y_val[new_x.index]

sns.scatterplot(x=new_x['Age'], y=new_x['Fare'], color='blue', s=200, ax=ax, label='New', marker='P')

ax.set_xlim(0, 100)
ax.set_ylim(0, 100);

new_x
```

Then, $k$-NN finds the $k$ nearest points. $k$ corresponds to the `n_neighbors` parameter defined when we instantiate the classifier object.

**If $k$ = 1, then the prediction for a point will simply be the value of the target for the nearest point.**

## Different $k$ Values

A big factor in this algorithm is choosing $k$.

![](img/k_vs_errors.png)

### $k=1$

```
knn = KNeighborsClassifier(n_neighbors=1)
```

Let's fit our training data, then predict what our validation point will be based on the (one) closest neighbor.

```
knn.fit(X_for_viz, y_for_viz)
knn.predict(new_x)
```

**When we raise the value of $k$, $k$-NN will act democratically: It will find the $k$ closest points, and take a vote based on the labels.**

### $k=3$

Let's raise $k$ to 3.

```
knn3 = KNeighborsClassifier(n_neighbors=3)
knn3.fit(X_for_viz, y_for_viz)
knn3.predict(new_x)
```

It's not easy to tell which points are closest by eye. Let's update our plot to add indices.
```
X_for_viz = X_t.sample(15, random_state=40)
y_for_viz = y_t[X_for_viz.index]

fig, ax = plt.subplots(figsize=(10,10))
sns.scatterplot(x=X_for_viz['Age'], y=X_for_viz['Fare'], hue=y_for_viz, palette={0: 'red', 1: 'green'}, s=200, ax=ax)

# Now let's take another sample
# new_x = X_val.sample(1, random_state=33)
new_x = pd.DataFrame(X_val.loc[484]).T
new_x.columns = ['Age', 'Fare']
new_y = y_val[new_x.index]
print(new_x)

sns.scatterplot(x=new_x['Age'], y=new_x['Fare'], color='blue', s=200, ax=ax, label='New', marker='P')

ax.set_xlim(0, 100)
ax.set_ylim(0, 100)
plt.legend()

#################^^^Old code^^^##############
####################New code#################

# add annotations one by one with a loop
for index in X_for_viz.index:
    ax.text(X_for_viz.Age[index]+0.7, X_for_viz.Fare[index], s=index,
            horizontalalignment='left', size='medium', color='black', weight='semibold')
```

We can use `sklearn`'s NearestNeighbors object to see the exact calculations.

```
df_for_viz = pd.merge(X_for_viz, y_for_viz, left_index=True, right_index=True)

neighbor = NearestNeighbors(n_neighbors=3)
neighbor.fit(X_for_viz)
nearest = neighbor.kneighbors(new_x)
nearest

df_for_viz.iloc[nearest[1][0]]

new_x

# Use Euclidean distance to see how close they are to this point
print(((29-24)**2 + (33-25.4667)**2)**0.5)
print(((26-24)**2 + (16.1-25.4667)**2)**0.5)
print(((20-24)**2 + (15.7417-25.4667)**2)**0.5)
```

### $k=5$

And with five neighbors?

```
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_for_viz, y_for_viz)
knn.predict(new_x)
```

### Observing Different $k$ Values

Let's iterate through the odd values of $k$ from 1 through 9 and see the predictions.

```
for k in range(1, 10, 2):
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_for_viz, y_for_viz)
    print(f'k={k}', knn.predict(new_x))
```

Which models were correct?

```
new_y
```

## Scaling

You may have suspected that we were leaving something out. For any distance-based algorithms, scaling is very important.
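To see the effect numerically before re-running the model, here is a tiny illustration with made-up (Age, Fare) values (the numbers below are hypothetical, not taken from the Titanic data). Unscaled, the fare axis dominates the Euclidean distance simply because fares span a wider numeric range than ages; after standardizing each column (which is what `StandardScaler` does), both features contribute on an equal footing:

```python
import numpy as np

# Three made-up passengers as (Age, Fare) points
a = np.array([8.0, 70.0])     # young, cheap ticket
b = np.array([60.0, 80.0])    # much older, similar fare to a
c = np.array([9.0, 200.0])    # nearly the same age as a, expensive ticket

data = np.vstack([a, b, c])

# Unscaled: c looks much farther from a than b does,
# even though b differs from a by 52 years of age
print(np.linalg.norm(a - b))   # ~53
print(np.linalg.norm(a - c))   # ~130

# Standardize each column, then re-measure the distances
scaled = (data - data.mean(axis=0)) / data.std(axis=0)
print(np.linalg.norm(scaled[0] - scaled[1]))  # now comparable:
print(np.linalg.norm(scaled[0] - scaled[2]))  # age and fare gaps count equally
```

After scaling, each feature is measured in standard deviations, so a large age gap is no longer drowned out by a large fare gap.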
Look at how the shape of the array changes before and after scaling. ![non-normal](img/nonnormal.png) ![normal](img/normalized.png) Let's look at our data_for_viz dataset: ``` X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size=0.25) X_t, X_val, y_t, y_val = train_test_split(X_train, y_train, random_state=42, test_size=0.25) knn = KNeighborsClassifier(n_neighbors=5) ss = StandardScaler() X_ind = X_t.index X_col = X_t.columns X_t_s = pd.DataFrame(ss.fit_transform(X_t)) X_t_s.index = X_ind X_t_s.columns = X_col X_v_ind = X_val.index X_val_s = pd.DataFrame(ss.transform(X_val)) X_val_s.index = X_v_ind X_val_s.columns = X_col knn.fit(X_t_s, y_t) print(f"training accuracy: {knn.score(X_t_s, y_t)}") print(f"Val accuracy: {knn.score(X_val_s, y_val)}") y_hat = knn.predict(X_val_s) # The plot_train() function just does what we did above. plot_train(X_t, y_t, X_val, y_val) plot_train(X_t_s, y_t, X_val_s, y_val, -2, 2, text_pos=0.1 ) ``` Look at how much that changes things. Look at points 166 and 150. Look at the group 621, 143, and 191. Now let's run our classifier on scaled data and compare to unscaled. ``` X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size=0.25) X_t, X_val, y_t, y_val = train_test_split(X_train, y_train, random_state=42, test_size=0.25) # The predict_one() function prints predictions on a given point # (#484) for k-nn models with k ranging from 1 to 10. 
predict_one(X_t, X_val, y_t, y_val) mm = MinMaxScaler() X_t_s = pd.DataFrame(mm.fit_transform(X_t)) X_t_s.index = X_t.index X_t_s.columns = X_t.columns X_val_s = pd.DataFrame(mm.transform(X_val)) X_val_s.index = X_val.index X_val_s.columns = X_val.columns predict_one(X_t_s, X_val_s, y_t, y_val) ``` ### More Resources on Scaling https://sebastianraschka.com/Articles/2014_about_feature_scaling.html http://datareality.blogspot.com/2016/11/scaling-normalizing-standardizing-which.html # $k$ and the Bias-Variance Tradeoff ``` X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size=0.25) # Let's slowly increase k and see what happens to our accuracy scores. kf = KFold(n_splits=5) k_scores_train = {} k_scores_val = {} for k in range(1, 20): knn = KNeighborsClassifier(n_neighbors=k) accuracy_score_t = [] accuracy_score_v = [] for train_ind, val_ind in kf.split(X_train, y_train): X_t, y_t = X_train.iloc[train_ind], y_train.iloc[train_ind] X_v, y_v = X_train.iloc[val_ind], y_train.iloc[val_ind] mm = MinMaxScaler() X_t_ind = X_t.index X_v_ind = X_v.index X_t = pd.DataFrame(mm.fit_transform(X_t)) X_t.index = X_t_ind X_v = pd.DataFrame(mm.transform(X_v)) X_v.index = X_v_ind knn.fit(X_t, y_t) y_pred_t = knn.predict(X_t) y_pred_v = knn.predict(X_v) accuracy_score_t.append(accuracy_score(y_t, y_pred_t)) accuracy_score_v.append(accuracy_score(y_v, y_pred_v)) k_scores_train[k] = np.mean(accuracy_score_t) k_scores_val[k] = np.mean(accuracy_score_v) k_scores_train k_scores_val fig, ax = plt.subplots(figsize=(15, 15)) ax.plot(list(k_scores_train.keys()), list(k_scores_train.values()), color='red', linestyle='dashed', marker='o', markerfacecolor='blue', markersize=10, label='Train') ax.plot(list(k_scores_val.keys()), list(k_scores_val.values()), color='green', linestyle='dashed', marker='o', markerfacecolor='blue', markersize=10, label='Val') ax.set_xlabel('k') ax.set_ylabel('Accuracy') plt.legend(); ``` ## The Relation Between $k$ and Bias/Variance > Small 
$k$ values lead to overfitting, while larger $k$ values tend towards underfitting.

![alt text](img/K-NN_Neighborhood_Size_print.png)

> From [Machine Learning Flashcards](https://machinelearningflashcards.com/) by Chris Albon

```
mm = MinMaxScaler()

X_train_ind = X_train.index
X_train = pd.DataFrame(mm.fit_transform(X_train))
X_train.index = X_train_ind

X_test_ind = X_test.index
X_test = pd.DataFrame(mm.transform(X_test))
X_test.index = X_test_ind

knn = KNeighborsClassifier(n_neighbors=9)
knn.fit(X_train, y_train)
print(f"training accuracy: {knn.score(X_train, y_train)}")
print(f"Test accuracy: {knn.score(X_test, y_test)}")

y_hat = knn.predict(X_test)
plot_confusion_matrix(confusion_matrix(y_test, y_hat), classes=['Perished', 'Survived'])

recall_score(y_test, y_hat)
precision_score(y_test, y_hat)
```

# Level Up: Distance Metrics

> The "closeness" of data points → proxy for similarity

![](img/distances.png)

**Minkowski Distance**:

$$dist(A,B) = (\sum_{k=1}^{N} |a_k - b_k|^c)^\frac{1}{c} $$

```
# Minkowski is a generalization. With a value of 2 it is the same as Euclidean.
```

Special cases of Minkowski distance are:

- Manhattan: $dist(A,B) = \sum_{k=1}^{N} |a_k - b_k|$
- Euclidean: $dist(A,B) = \sqrt{ \sum_{k=1}^{N} (a_k - b_k)^2 }$

There are quite a few distance metrics built into scikit-learn: https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.DistanceMetric.html
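Those special cases can be checked numerically. A small sketch using plain NumPy (rather than scikit-learn's `DistanceMetric`), with made-up points `p` and `q`:

```python
import numpy as np

def minkowski(a, b, c):
    """Minkowski distance: (sum_k |a_k - b_k|^c)^(1/c)."""
    return float((np.abs(a - b) ** c).sum() ** (1 / c))

p = np.array([1.0, 2.0, 3.0])
q = np.array([4.0, 6.0, 3.0])

print(minkowski(p, q, 1))     # 7.0 -> Manhattan: |3| + |4| + |0|
print(minkowski(p, q, 2))     # 5.0 -> Euclidean: sqrt(9 + 16 + 0)
print(np.linalg.norm(p - q))  # 5.0, matches the c=2 case
```

Setting `c=1` recovers Manhattan distance and `c=2` recovers Euclidean, exactly as the formulas above state.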
# RRT-star Motion Planning Tutorial

We'll show RRT-star motion planning on a Panda robot. If you want to see a simulation demo, check out the examples directory.

```
import sys, os
import yaml
import trimesh
%matplotlib inline

parent_dir = os.path.dirname(os.getcwd())
pykin_path = parent_dir
sys.path.append(pykin_path)

from pykin.robots.single_arm import SingleArm
from pykin.planners.rrt_star_planner import RRTStarPlanner
from pykin.collision.collision_manager import CollisionManager
from pykin.kinematics.transform import Transform
from pykin.utils.object_utils import ObjectManager
from pykin.utils import plot_utils as plt

file_path = '../asset/urdf/panda/panda.urdf'
mesh_path = pykin_path+"/asset/urdf/panda/"
yaml_path = '../asset/config/panda_init_params.yaml'

with open(yaml_path) as f:
    controller_config = yaml.safe_load(f)

robot = SingleArm(file_path, Transform(rot=[0.0, 0.0, 0], pos=[0, 0, 0]))
robot.setup_link_name("panda_link_0", "panda_right_hand")

init_qpos = controller_config["init_qpos"]
init_fk = robot.forward_kin(init_qpos)
goal_eef_pose = controller_config["goal_pose"]
```

### Apply to robot using CollisionManager

```
c_manager = CollisionManager(mesh_path)
c_manager.setup_robot_collision(robot, init_fk)

milk_path = pykin_path+"/asset/objects/meshes/milk.stl"
milk_mesh = trimesh.load_mesh(milk_path)
```

### Apply to Object using CollisionManager

```
obs = ObjectManager()
o_manager = CollisionManager(milk_path)
for i in range(6):
    name = "milk_" + str(i)
    obs_pos = [0.6, -0.2+i*0.1, 0.3]
    o_manager.add_object(name, gtype="mesh", gparam=milk_mesh, transform=Transform(pos=obs_pos).h_mat)
    obs(name=name, gtype="mesh", gparam=milk_mesh, transform=Transform(pos=obs_pos).h_mat)
```

### Use RRTStarPlanner

- delta_distance(float): distance between nearest vertex and new vertex
- epsilon(float): 1-epsilon is probability of random sampling
- gamma_RRT_star(int): factor used for search radius
- max_iter(int): maximum number of iterations
- dimension(int): robot arm's
dof
- n_step(int): for n equal divisions between waypoints

```
planner = RRTStarPlanner(
    robot=robot,
    self_collision_manager=c_manager,
    object_collision_manager=o_manager,
    delta_distance=0.5,
    epsilon=0.4,
    max_iter=600,
    gamma_RRT_star=1,
    dimension=7,
    n_step=10
)
```

- interpolated_path: joint path divided equally into n_step segments between waypoints
- joint_path: the actual joint path

```
interpolated_path, joint_path = planner.get_path_in_joinst_space(cur_q=init_qpos, goal_pose=goal_eef_pose, resolution=0.3)

if joint_path is None:
    print("Cannot Visualize Path")
    exit()

joint_trajectory = []
eef_poses = []
for step, joint in enumerate(interpolated_path):
    transformations = robot.forward_kin(joint)
    joint_trajectory.append(transformations)
    eef_poses.append(transformations[robot.eef_name].pos)
```

### Visualization

```
fig, ax = plt.init_3d_figure(figsize=(10,6), dpi=100)
plt.plot_trajectories(ax, eef_poses, size=1)
plt.plot_robot(
    robot,
    transformations=joint_trajectory[0],
    ax=ax,
    visible_text=False)
plt.plot_robot(
    robot,
    transformations=joint_trajectory[-1],
    ax=ax,
    visible_text=False,
    visible_basis=False)
plt.plot_objects(
    ax,
    objects=obs
)
plt.show_figure()
```
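As a side note on `epsilon`: the parameter description above says 1-epsilon is the probability of random sampling, i.e. with probability `epsilon` the planner samples the goal configuration itself to bias the tree's growth toward it. A rough, hypothetical sketch of that idea (not pykin's actual code; the joint limits below are placeholders):

```python
import random

def sample_configuration(goal, joint_limits, epsilon=0.4):
    """Goal-biased sampling: return the goal configuration with probability
    epsilon, otherwise a uniform random configuration within the limits."""
    if random.random() < epsilon:
        return list(goal)
    return [random.uniform(lo, hi) for lo, hi in joint_limits]

random.seed(0)
limits = [(-3.14, 3.14)] * 7   # placeholder limits for a 7-dof arm
goal = [0.0] * 7

samples = [sample_configuration(goal, limits, epsilon=0.4) for _ in range(1000)]
print(sum(s == goal for s in samples))  # roughly 400 of the 1000 samples
```

Higher `epsilon` pulls the tree toward the goal faster but explores less of the configuration space, which is why it is kept well below 1.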
```
# from https://github.com/LukeTonin/simple-deep-learning/blob/main/semantic_segmentation.ipynb
!poetry run python -m pip install git+https://github.com/LukeTonin/simple-deep-learning

import tensorflow as tf
print(tf.__version__)
import numpy as np
print(np.__version__)
import matplotlib
from matplotlib import pyplot as plt
print(matplotlib.__version__)

from simple_deep_learning.mnist_extended.semantic_segmentation import create_semantic_segmentation_dataset

np.random.seed(1)
train_x, train_y, test_x, test_y = create_semantic_segmentation_dataset(num_train_samples=1000,
                                                                        num_test_samples=200,
                                                                        image_shape=(60, 60),
                                                                        max_num_digits_per_image=4,
                                                                        num_classes=3)
train_x.shape

# Grab one example to inspect (train_x/train_y are NumPy arrays, so index them directly)
sample_image, sample_mask = train_x[0], train_y[0]

import numpy as np
from simple_deep_learning.mnist_extended.semantic_segmentation import display_grayscale_array, plot_class_masks

print(train_x.shape, train_y.shape)
i = np.random.randint(len(train_x))
display_grayscale_array(array=train_x[i])
plot_class_masks(train_y[i])

import tensorflow as tf
from tensorflow.keras import datasets, layers, models

tf.keras.backend.clear_session()
model = models.Sequential()
model.add(layers.Conv2D(filters=16, kernel_size=(3, 3), activation='relu', input_shape=train_x.shape[1:], padding='same'))
model.add(layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(layers.UpSampling2D(size=(2, 2)))
model.add(layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu',
padding='same'))
model.add(layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(layers.UpSampling2D(size=(2, 2)))
model.add(layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(layers.Conv2D(filters=16, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(layers.Conv2D(filters=train_y.shape[-1], kernel_size=(3, 3), activation='sigmoid', padding='same'))

model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(), tf.keras.metrics.Recall(), tf.keras.metrics.Precision()])

model.summary()

history = model.fit(train_x, train_y, epochs=20, validation_data=(test_x, test_y))

test_y_predicted = model.predict(test_x)

from simple_deep_learning.mnist_extended.semantic_segmentation import display_segmented_image

np.random.seed(6)
for _ in range(3):
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
    i = np.random.randint(len(test_y_predicted))
    print(f'Example {i}')
    display_grayscale_array(test_x[i], ax=ax1, title='Input image')
    display_segmented_image(test_y_predicted[i], ax=ax2, title='Segmented image')

plot_class_masks(test_y[i], test_y_predicted[i], title='y target and y predicted sliced along the channel axis')
```
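A quick sanity check on the spatial dimensions of this architecture: every `Conv2D` uses `padding='same'`, so only the two pooling layers (each halving the size) and the two upsampling layers (each doubling it) change the spatial shape, and the 60×60 input maps back to a 60×60 per-pixel output. Traced by hand:

```python
def trace_size(size, ops):
    """Track spatial size through 'same'-padded convs, 2x pools and 2x upsamples."""
    for op in ops:
        if op == 'pool':   # MaxPooling2D(pool_size=(2, 2)) halves the size
            size //= 2
        elif op == 'up':   # UpSampling2D(size=(2, 2)) doubles it
            size *= 2
        # 'conv' with padding='same' leaves the size unchanged
    return size

# Layer sequence of the model above: 60 -> 30 -> 15 -> 30 -> 60
ops = ['conv', 'conv', 'pool', 'conv', 'conv', 'pool',
       'conv', 'conv', 'up', 'conv', 'conv', 'up',
       'conv', 'conv', 'conv']
print(trace_size(60, ops))  # 60
```

This is why the final sigmoid layer can emit one probability per pixel per class without any cropping or resizing.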
``` # look at tools/set_up_magics.ipynb yandex_metrica_allowed = True ; get_ipython().run_cell('# one_liner_str\n\nget_ipython().run_cell_magic(\'javascript\', \'\', \n \'// setup cpp code highlighting\\n\'\n \'IPython.CodeCell.options_default.highlight_modes["text/x-c++src"] = {\\\'reg\\\':[/^%%cpp/]} ;\'\n \'IPython.CodeCell.options_default.highlight_modes["text/x-cmake"] = {\\\'reg\\\':[/^%%cmake/]} ;\'\n)\n\n# creating magics\nfrom IPython.core.magic import register_cell_magic, register_line_magic\nfrom IPython.display import display, Markdown, HTML\nimport argparse\nfrom subprocess import Popen, PIPE\nimport random\nimport sys\nimport os\nimport re\nimport signal\nimport shutil\nimport shlex\nimport glob\nimport time\n\n@register_cell_magic\ndef save_file(args_str, cell, line_comment_start="#"):\n parser = argparse.ArgumentParser()\n parser.add_argument("fname")\n parser.add_argument("--ejudge-style", action="store_true")\n args = parser.parse_args(args_str.split())\n \n cell = cell if cell[-1] == \'\\n\' or args.no_eof_newline else cell + "\\n"\n cmds = []\n with open(args.fname, "w") as f:\n f.write(line_comment_start + " %%cpp " + args_str + "\\n")\n for line in cell.split("\\n"):\n line_to_write = (line if not args.ejudge_style else line.rstrip()) + "\\n"\n if line.startswith("%"):\n run_prefix = "%run "\n if line.startswith(run_prefix):\n cmds.append(line[len(run_prefix):].strip())\n f.write(line_comment_start + " " + line_to_write)\n continue\n if line.startswith("%" + line_comment_start + " "):\n f.write(line_comment_start + " " + line_to_write)\n continue\n raise Exception("Unknown %%save_file subcommand: \'%s\'" % line)\n else:\n f.write(line_to_write)\n f.write("" if not args.ejudge_style else line_comment_start + r" line without \\n")\n for cmd in cmds:\n display(Markdown("Run: `%s`" % cmd))\n get_ipython().system(cmd)\n\n@register_cell_magic\ndef cpp(fname, cell):\n save_file(fname, cell, "//")\n \n@register_cell_magic\ndef cmake(fname, cell):\n 
```

# A poll for everyone who lands on this page

It is not scary — actually, no: it is not scary, there are only two required questions, each a choice of one option out of three. Sorry about its size, but unfortunately students tend to ignore polls :| I am trying to compensate :)

<a href="https://docs.google.com/forms/d/e/1FAIpQLSdUnBAae8nwdSduZieZv7uatWPOMv9jujCM4meBZcHlTikeXg/viewform?usp=sf_link"><img src="poll.png" width="100%" align="left" alt="Poll"></a>

# Working with time in C/C++

Let's talk about the time types in C/C++ and the functions for getting the current time, parsing it from strings, and serializing it to strings.

I have always been deeply annoyed by the absence of a single good time type, by times living in different time zones, and by the pile of different serialization formats. I will try to gather the useful information in one place, to make life a bit easier.
<table width=100% > <tr>
<th width=15%> <b>Seminar video &rarr; </b> </th>
<th> <a href="???"><img src="video.jpg" width="320" height="160" align="left" alt="Seminar video"></a> </th>
<th> </th>
</tr> </table>

On today's agenda:

* <a href="#types_c" style="color:#856024"> Time types in C </a>
* <a href="#funcs_c" style="color:#856024"> Functions for working with time in C </a>
* <a href="#types_cpp" style="color:#856024"> Time types in C++ </a>
* <a href="#funcs_cpp" style="color:#856024"> Functions for working with time in C++ </a>
<br><br>
* <a href="#clocks_and_cpu" style="color:#856024"> Different clocks and CPU time </a>
* <a href="#benchmarking" style="color:#856024"> Timing for benchmarks </a>
<br><br>
* <a href="#sleep" style="color:#856024"> How to sleep? </a>
<br><br>
* <a href="#problems" style="color:#856024"> Problems to solve on your own </a>

## <a name="types_c"></a> Time types in C

What do we have? First, the time types themselves:

* `time_t` - an integer type that stores the number of seconds since the epoch; basically a timestamp in seconds.
[man](https://www.opennet.ru/man.shtml?topic=time&category=2)
* `struct tm` - a structure that stores year, month, ..., second [man](https://www.opennet.ru/cgi-bin/opennet/man.cgi?topic=ctime&category=3)
* `struct timeval` - a (seconds, microseconds) pair (counted from the epoch when used as a point in time) [man](https://www.opennet.ru/cgi-bin/opennet/man.cgi?topic=gettimeofday&category=2)
* `struct timespec` - a (seconds, nanoseconds) pair [man](https://www.opennet.ru/man.shtml?topic=select&category=2&russian=)
* `struct timeb` - seconds, milliseconds, time zone + daylight saving info [man](https://www.opennet.ru/cgi-bin/opennet/man.cgi?topic=ftime&category=3) (I have never run into it, but it exists)

The time zone:

* `struct timezone` - [man](https://www.opennet.ru/cgi-bin/opennet/man.cgi?topic=gettimeofday&category=2)

## <a name="funcs_c"></a> Functions for working with time in C

Before anything else, a reminder: many functions in C are not thread-safe (unless their names end in `_r`, which stands for reentrant and, here, implies thread safety). So check the documentation before using them.
Conversion:

<table>
<tr> <th>From \ To</th> <th>time_t</th> <th>struct tm</th> <th>struct timeval</th> <th>struct timespec</th> </tr>
<tr> <td>time_t</td> <td>=</td> <td><a href="https://www.opennet.ru/cgi-bin/opennet/man.cgi?topic=ctime&category=3"><code>gmtime_r</code></a>/<a href="https://www.opennet.ru/cgi-bin/opennet/man.cgi?topic=ctime&category=3"><code>localtime_r</code></a></td> <td>{.tv_sec = x}</td> <td>{.tv_sec = x}</td> </tr>
<tr> <td>struct tm</td> <td><a href="https://www.opennet.ru/cgi-bin/opennet/man.cgi?topic=ctime&category=3"><code>mktime</code></a> [1]</td> <td>=</td> <td>via time_t</td> <td>via time_t</td> </tr>
<tr> <td>struct timeval</td> <td>x.tv_sec</td> <td>via time_t</td> <td>=</td> <td>{.tv_sec = x.tv_sec, .tv_nsec = x.tv_usec * 1000}</td> </tr>
<tr> <td>struct timespec</td> <td>x.tv_sec</td> <td>via time_t</td> <td>{.tv_sec = x.tv_sec, .tv_usec = x.tv_nsec / 1000}</td> <td>=</td> </tr>
</table>

[1] - `mktime` behaves inadequately when your time is not local. The details, and how to live with this, are in the examples. https://stackoverflow.com/questions/530519/stdmktime-and-timezone-info

Getting the time:

* `time` - get the time as `time_t` [man](https://www.opennet.ru/man.shtml?topic=time&category=2)
* `clock_gettime` - get the time as `struct timespec` [man](https://www.opennet.ru/man.shtml?topic=clock_gettime&category=3&russian=2)
* `gettimeofday` - get the time as `struct timeval` [man](https://www.opennet.ru/cgi-bin/opennet/man.cgi?topic=settimeofday&category=2)

Parsing:

* If it is a timestamp, just read it as a number.
* `strptime` [man](https://www.opennet.ru/man.shtml?topic=strptime&category=3&russian=0) Does not handle time zones; it always assumes the local one.
* `getdate` [man](https://opennet.ru/man.shtml?topic=getdate&category=3) Not recommended; not a very smart function.

Serialization:

* You can always just write out the timestamp in seconds/milliseconds.
* `strftime` - turns a `struct tm` into a string using a printf-like format string [man](https://www.opennet.ru/man.shtml?topic=strftime&category=3)

Arithmetic operations:

* There are none; everything by hand?

Working with time zones:

First of all, a note: within this seminar we consider GMT time = UTC time.

* Serializing a timestamp as local or UTC time - `localtime_r`/`gmtime_r`.
* Parsing local time - `strptime`.
* Other time zones, and parsing human-readable strings with a given time zone, only work via setting locales and environment variables. In short, avoid this.

```
# Python offers roughly the same things as C
import time
print("* Timestamp (time_t): ", time.time())
print("* Date (struct tm): ", time.localtime(time.time()))
print("* Date (struct tm): ", time.gmtime(time.time()), "(note the time-zone difference)")
print("* tm_gmtoff for local:", time.localtime(time.time()).tm_gmtoff,
      "and for gm: ", time.gmtime(time.time()).tm_gmtoff,
      "(a hidden field, but it is used :) )")
print("* Human-readable date (local): ", time.strftime("%Y.%m.%d %H:%M:%S %z", time.localtime(time.time())))
print("* Human-readable date (gmt): ", time.strftime("%Y.%m.%d %H:%M:%S %z", time.gmtime(time.time())))
```

```
%%cpp time.c
%run gcc -fsanitize=address time.c -lpthread -o time_c.exe
%run ./time_c.exe

#define _DEFAULT_SOURCE
#define _GNU_SOURCE // for strptime
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/types.h>
#include <sys/time.h>
#include <assert.h>
#include <string.h>

// I am not sure this is an OK way to do it
time_t as_utc_timestamp(struct tm timeTm) {
    time_t timestamp = mktime(&timeTm); // mktime parses it as local time, even if tm_gmtoff is reset to 0
    // ↓↓↓ a hack to get a proper UTC timestamp
    return timestamp + timeTm.tm_gmtoff; // mktime sets tm_gmtoff according to the current time zone
}

int main() {
    { // (1)
        struct timespec spec = {0};
        clock_gettime(CLOCK_REALTIME, &spec);
        time_t timestamp = spec.tv_sec;
        struct tm local_tm = {0};
        localtime_r(&timestamp, &local_tm);
        char time_str[100];
        size_t time_len = strftime(time_str, sizeof(time_str), "%Y.%m.%d %H:%M:%S", &local_tm);
        time_len += snprintf(time_str + time_len, sizeof(time_str) - time_len, ".%09ld", spec.tv_nsec);
        time_len += strftime(time_str + time_len, sizeof(time_str) - time_len, " %Z", &local_tm);
        printf("(1) Current time: %s\n", time_str);
    }
    { // (2)
        const char* utc_time = "2020.08.15 12:48:06";
        struct tm local_tm = {0};
        strptime(utc_time, "%Y.%m.%d %H:%M:%S", &local_tm); // parses it as local time
        time_t timestamp = as_utc_timestamp(local_tm);
        localtime_r(&timestamp, &local_tm);
        char time_str[100];
        size_t time_len = strftime(time_str, sizeof(time_str), "%Y.%m.%d %H:%M:%S%z", &local_tm);
        printf("(2) Recovered time by strptime: %s (given utc time: %s)\n", time_str, utc_time);
    }
    { // (3)
        time_t timestamps[] = {1589227667, 840124800, -1};
        for (time_t* timestamp = timestamps; *timestamp != -1; ++timestamp) {
            struct tm local_time = {0};
            localtime_r(timestamp, &local_time);
            char time_str[100];
            size_t time_len = strftime(time_str, sizeof(time_str), "%Y.%m.%d %H:%M:%S", &local_time);
            printf("(3) Timestamp %ld -> %s\n", *timestamp, time_str);
        }
    }
    return 0;
}
```

## <a name="types_cpp"></a> Time types in C++

To begin with, everything from C is still available.

New time types:

* `std::tm = struct tm`, `std::time_t = time_t` - the types are old, but the way of spelling them is new :)
* `std::chrono::time_point` [doc](https://en.cppreference.com/w/cpp/chrono/time_point)
* `std::chrono::duration` [doc](https://en.cppreference.com/w/cpp/chrono/duration)

Frankly, the added types are not the most convenient ones. The only thing done well is time arithmetic.
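The `as_utc_timestamp` trick in the C examples has a direct counterpart in Python's standard library: `time.mktime`, like C's `mktime`, interprets a broken-down time as local, while `calendar.timegm` interprets it as UTC. A minimal sketch:

```python
import calendar
import time

ts = 1589227667  # one of the timestamps from example (3) above

# time.mktime, like C's mktime, treats the struct_time as *local* time ...
local_roundtrip = int(time.mktime(time.localtime(ts)))

# ... while calendar.timegm treats it as UTC - the "UTC mktime"
# that as_utc_timestamp() has to emulate by hand in the C code.
utc_roundtrip = calendar.timegm(time.gmtime(ts))

print(local_roundtrip == ts, utc_roundtrip == ts)  # both round-trips recover ts
```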
## <a name="funcs_cpp"></a> Functions for working with time in C++

Conversion:

* `std::chrono::system_clock::to_time_t`, `std::chrono::system_clock::from_time_t`

Serialization and parsing:

* `std::get_time` / `std::put_time` - roughly the same as `strftime` and `strptime` in C; they work with `std::tm`. [doc](https://en.cppreference.com/w/cpp/io/manip/get_time)

Arithmetic operations:

* Out of the box, with the usual +/-

```
%%cpp time.cpp
%run clang++ -std=c++14 -fsanitize=address time.cpp -lpthread -o time_cpp.exe
%run ./time_cpp.exe

#include <iostream>
#include <sstream>
#include <locale>
#include <iomanip>
#include <chrono>
#include <time.h> // localtime_r

// I am not sure this is an OK way to do it
time_t as_utc_timestamp(struct tm t) {
    time_t timestamp = mktime(&t); // mktime parses it as local time, even if tm_gmtoff is reset to 0
    // ↓↓↓ a hack to get a proper UTC timestamp
    return timestamp + t.tm_gmtoff; // mktime sets tm_gmtoff according to the current time zone
}

int main() {
    { // (0)
        using namespace std::literals;
        auto nowChrono = std::chrono::system_clock::now();
        std::time_t timestamp = std::chrono::system_clock::to_time_t(nowChrono);
        std::tm timeTm = {};
        localtime_r(&timestamp, &timeTm);
        uint64_t nowMs = (nowChrono.time_since_epoch() % 1s) / 1ms;
        std::cout << "(0) Current time: " << std::put_time(&timeTm, "%Y.%m.%d %H:%M:%S") << "."
                  << std::setfill('0') << std::setw(3) << nowMs
                  << " " << std::put_time(&timeTm, "%z") << " "
                  << ", timestamp = " << timestamp << "'\n";
    }
    { // (1)
        std::string timeStr = "2011-Jan-18 23:12:34";
        std::tm timeTm = {};
        std::istringstream timeStrStream{timeStr};
        timeStrStream.imbue(std::locale("en_US.utf-8"));
        timeStrStream >> std::get_time(&timeTm, "%Y-%b-%d %H:%M:%S");
        if (timeStrStream.fail()) {
            std::cout << "(1) Parse failed\n";
        } else {
            std::cout << "(1) Parsed time '" << std::put_time(&timeTm, "%Y.%m.%d %H:%M:%S %z") << "'"
                      << " from '" << timeStr << "''\n";
        }
    }
    { // (2)
        using namespace std::literals;
        auto nowChrono = std::chrono::system_clock::now();
        for (int i = 0; i < 2; ++i, nowChrono += 23h + 55min) {
            std::time_t nowTimestamp = std::chrono::system_clock::to_time_t(nowChrono);
            std::tm localTm = {};
            localtime_r(&nowTimestamp, &localTm); // C++ itself seems to have no thread-safe alternative
            std::cout << "(2) Composed time: " << std::put_time(&localTm, "%Y.%m.%d %H:%M:%S %z") << "\n";
        }
    }
    { // (3)
        using namespace std::literals;
        std::string timeStr = "1977.01.11 22:35:22";
        std::tm timeTm = {};
        std::istringstream timeStrStream{timeStr};
        timeStrStream >> std::get_time(&timeTm, "%Y.%m.%d %H:%M:%S"); // read as UTC/GMT time
        std::cout << "(3) Original time: " << std::put_time(&timeTm, "%Y.%m.%d %H:%M:%S %z") << "\n";
        if (timeStrStream.fail()) {
            std::cout << "(3) Parse failed\n";
        } else {
            std::time_t timestamp = as_utc_timestamp(timeTm);
            auto instantChrono = std::chrono::system_clock::from_time_t(timestamp);
            instantChrono += 23h + 55min;
            std::time_t anotherTimestamp = std::chrono::system_clock::to_time_t(instantChrono);
            std::tm localTm = {};
            gmtime_r(&timestamp, &localTm); // gmtime_r breaks the timestamp down as UTC
            std::tm anotherLocalTm = {};
            gmtime_r(&anotherTimestamp, &anotherLocalTm);
            std::cout << "(3) Take '" << std::put_time(&localTm, "%Y.%m.%d %H:%M:%S %z")
                      << "', add 23:55, and get '" << std::put_time(&anotherLocalTm, "%Y.%m.%d %H:%M:%S %z") << "'\n";
        }
    }
    return 0;
}
```

It is worth noting that C++ does not force the local time zone on you when parsing time. Whether that is good or bad, I do not know.

## <a name="clocks_and_cpu"></a> Different clocks and CPU time

[The year 2038 problem](https://ru.wikipedia.org/wiki/Проблема_2038_года), caused by the overflow of a 32-bit time_t. I am just pointing out that it exists.

[iana](https://www.iana.org/time-zones) - the time zone database.

The hardware clock. An ordinary quartz clock, powered by its own battery on the motherboard. It is not very accurate. Moreover, different operating systems may store the time in it differently, which is why, when rebooting between Ubuntu and Windows, the time can jump by 3 hours (if Moscow time is selected).

```
-> sudo hwclock
Пт 24 апр 2020 00:28:52 .356966 seconds
-> date
Пн май 4 14:28:24 MSK 2020
```

CPU time:

* [C/C++: how to measure CPU time / Habr (in Russian)](https://habr.com/ru/post/282301/)
* `clock_t clock(void);` - the CPU time spent executing the thread/program. Measured in obscure units that relate to seconds via CLOCKS_PER_SEC. [man](https://www.opennet.ru/cgi-bin/opennet/man.cgi?topic=clock&category=3)
* `clock_gettime` with the `CLOCK_PROCESS_CPUTIME_ID` and `CLOCK_THREAD_CPUTIME_ID` parameters - the CPU time of the program and of the thread.
* Clock types
  * `clockid_t` - the clock type [man](https://www.opennet.ru/cgi-bin/opennet/man.cgi?topic=clock_gettime&category=3)
  * `CLOCK_MONOTONIC` - a clock type worth singling out. It is a monotonic clock: the time it reports always increases, no matter how the system time is adjusted. It is the right clock for measuring time intervals.

```
for time_type in (time.CLOCK_REALTIME, time.CLOCK_MONOTONIC, time.CLOCK_PROCESS_CPUTIME_ID):
    print(time.clock_gettime(time_type))
```

## <a name="benchmarking"></a> Timing for benchmarks

#### What to measure?

You should measure CPU time.
Depending on whether the measured part of the program makes system calls, it makes sense to measure either user time only, or user and system time together.

#### How to measure?

For the measurements to be as accurate as possible, minimize the influence of the environment and maximize the stability of the measurements. What are the ways to improve stability?

0. Repeat the measurement as many times as you can afford, and average the results properly.
1. Increase the minimum time slice that the scheduler guarantees a process that does not yield on its own. It can be raised up to 1 s.
2. Run the benchmark on a dedicated core. That is, forbid the scheduler from running anything else on the core where the benchmark runs, and on its hyperthreading sibling.

Now in more detail:

0. My empirical rule: out of all the repeated measurements, throw away the largest 25% of the values, then average the rest. This nicely reduces the estimated variance of the averaged value.
1. `sudo sysctl -w kernel.sched_min_granularity_ns='999999999'` - crank up the scheduler's time slice. (A debatable optimization, actually. Once I saw a good positive effect from it, and once a slight negative one.)
2. In the grub config (`/etc/default/grub`) add `isolcpus=2,3` (on my machine that is the second physical core) to the kernel command line. <br> Regenerate grub: `sudo grub-mkconfig`, `sudo grub-mkconfig -o /boot/grub/grub.cfg`. Reboot the system. <br> Now run the benchmark as `taskset 0x4 ./my_benchmark`. (4 == 1 << 2, where 2 is the number of the virtual core the process runs on.)

#### What to measure with?

* perf stat

perf is a very powerful tool in general; besides benchmarking it can also profile a program and show which functions take how much time.
It is installed like this:

```bash
$ sudo apt install linux-tools-$(uname -r) linux-tools-generic
$ echo -1 > /proc/sys/kernel/perf_event_paranoid  # under `sudo -i`
```

* time

```
%%bash
exec 2>&1 ; set -o xtrace
perf stat sleep 1
time sleep 1
```

`<not supported>` entries may appear because you are running inside a virtual machine.

## <a name="sleep"></a> How to sleep?

`sleep`, `nanosleep` - just sleep. <s>In practice</s> In good production projects such functions are rarely needed, because these waits cannot be correctly interrupted by an external event. In reality, of course, they are used all the time.

`timerfd` - lets you create timers that, when they fire, deliver records that can be read from a file descriptor.

`select`, `epoll_wait` - wait simultaneously on a timeout and on file descriptors.

`pthread_cond_timedwait` - wait simultaneously on a timeout and a condition variable.

`sigtimedwait` - wait simultaneously on a timeout and a signal. (Still, it is better to reduce signal handling to reading from a file descriptor and not use this.)

## <a name="compiletime"></a> Compilation time

1) `-ftime-report` - a compiler option that shows how much time is spent in each compilation stage.
2) You can also measure the compiler's running time with perf stat.
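Coming back to the benchmarking section: the empirical "throw away the largest 25% of the samples, then average" rule can be sketched in Python (the function and parameter names here are my own, not part of the seminar):

```python
import time

def bench(func, repeats=100, drop_fraction=0.25):
    """Time func repeats times, drop the largest drop_fraction of the
    samples, and average the rest (the empirical rule described above)."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()  # wall clock, monotonic and high-resolution
        func()
        samples.append(time.perf_counter() - start)
    samples.sort()
    kept = samples[:max(1, int(len(samples) * (1 - drop_fraction)))]
    return sum(kept) / len(kept)

print(f"avg of best 75%: {bench(lambda: sum(range(10000))):.6f} s")
```

To measure CPU time, as recommended above, `time.process_time` can be substituted for `time.perf_counter`.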
<table width="100%"> <tr> <td style="background-color:#ffffff;"> <a href="http://qworld.lu.lv" target="_blank"><img src="../images/qworld.jpg" width="35%" align="left"> </a></td> <td style="background-color:#ffffff;vertical-align:bottom;text-align:right;"> prepared by Abuzer Yakaryilmaz (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>) <br> updated by Özlem Salehi | December 2, 2019 </td> </tr></table> <table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table> $ \newcommand{\bra}[1]{\langle #1|} $ $ \newcommand{\ket}[1]{|#1\rangle} $ $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $ $ \newcommand{\dot}[2]{ #1 \cdot #2} $ $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $ $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $ $ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $ $ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $ $ \newcommand{\mypar}[1]{\left( #1 \right)} $ $ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $ $ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $ $ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $ $ \newcommand{\onehalf}{\frac{1}{2}} $ $ \newcommand{\donehalf}{\dfrac{1}{2}} $ $ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $ $ \newcommand{\vzero}{\myvector{1\\0}} $ $ \newcommand{\vone}{\myvector{0\\1}} $ $ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $ $ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $ $ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $ $ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $ $ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $ $ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} 
& -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $

<h2>One Qubit</h2>

<h3> Motivation </h3>

An electron has a property called spin that can take the values up or down. Similarly, a photon can have a vertical or a horizontal polarization. Until it is observed, there is a nonzero chance of finding the system in either state; this is the quantum mechanical phenomenon of superposition.

Remember that a bit is the smallest unit of information for classical computation. A transistor may be used to realize a classical bit. Likewise, an electron or a photon can realize a <b>qubit</b> (quantum bit), which is the smallest unit of information for quantum computation.

<h3> Mathematical definition </h3>

Two possible states for a qubit are $ \ket{0} $ and $ \ket{1} $, which correspond to the classical states 0 and 1.

<i>Note: $ \ket{\cdot} $ is called ket-notation. Ket-notation is used to represent a column vector in quantum mechanics. For a given column vector $ \ket{v} $, its conjugate transpose is a row vector represented as $ \bra{v} $ (bra-notation). </i>

A qubit can be in a linear combination of these states, which is called <font color="blue"> superposition </font>. A general quantum state is given as $\ket {\psi}=\alpha \ket{0} + \beta \ket {1}$, where $\alpha$ and $\beta$ are complex numbers and $|\alpha|^2 + |\beta|^2=1$. $\alpha$ and $\beta$ are called the <font color="blue"> amplitudes </font> of the states $\ket{0}$ and $ \ket {1}$, respectively.

<i> Note: In Bronze, we will assume that $\alpha$ and $\beta$ are real numbers. </i>

<h3> Vector representation </h3>

Like the classical and probabilistic states 0 and 1, $ \ket{0} = \myvector{1 \\ 0} $ and $ \ket{1} = \myvector{0\\ 1} $.
Hence a general state $\ket {\psi}=\alpha \ket{0} + \beta \ket {1}$ is represented by $\alpha \myvector{1\\0} + \beta \myvector{0\\1} = \myvector{\alpha \\ \beta}$.

<h3> Superposition </h3>

There is no classical counterpart of the concept "superposition". But we can still use a classical analogy that might help us build some intuition.

Suppose that Asja starts in $ \myvector{1\\0} $ and secretly applies the probabilistic operator $ \mymatrix{cc}{ 0.3 & 0.6 \\ 0.7 & 0.4 } $. Because she applies her operator secretly, our information about her state is probabilistic, and is calculated as

$$ \myvector{0.3 \\ 0.7} = \mymatrix{cc}{ 0.3 & 0.6 \\ 0.7 & 0.4 } \myvector{1\\0}. $$

Asja is either in state 0 or in state 1. However, from our point of view, Asja is in state 0 with probability $ 0.3 $ and in state 1 with probability $ 0.7 $. We can say that Asja is in a probability distribution over the states 0 and 1.

On the other hand, if we observe Asja's state, then our information about Asja becomes deterministic: either $ \myvector{1 \\ 0} $ or $ \myvector{0 \\ 1} $. We can say that, after observing Asja's state, the probabilistic state $ \myvector{0.3 \\ 0.7} $ collapses to either $ \myvector{1 \\ 0} $ or $ \myvector{0 \\ 1} $.

<h3> Measurement </h3>

We can measure a quantum system, and then the system is observed in one of its states. This is the most basic type of measurement in quantum computing. (There are more generic measurement operators, but we will not discuss them.)

<font color = blue> The probability of observing the system in a specified state is the square of that state's amplitude. </font>

<ul>
<li> If the amplitude of a state is zero, then this state cannot be observed. </li>
<li> If the amplitude of a state is nonzero, then this state can be observed.
</li> </ul> For example, if the system is in quantum state $$ - \frac{\sqrt{2}}{\sqrt{3}} \ket{0} + \frac{1}{\sqrt{3}} \ket{1} = \myrvector{ -\frac{\sqrt{2}}{\sqrt{3}} \\ \frac{1}{\sqrt{3}} }, $$ then, after a measurement, we can observe the system in state $\ket{0} $ with probability $ \frac{2}{3} $ and in state $\ket{1}$ with probability $ \frac{1}{3} $. <h4> Collapsing </h4> After the measurement, the system collapses to the observed state, and so the system is no longer in a superposition. Thus, the information kept in a superposition is lost. In the above example, when the system is observed in state $\ket{0}$, then the new state becomes $ \myvector{1 \\ 0} $. If it is observed in state $\ket{1}$, then the new state becomes $ \myvector{0 \\ 1} $. <h3> Task 1 </h3> What are the probabilities of observing the states $ \ket{0} $ and $ \ket{1} $ if the system is in $ \myvector{-\frac{3}{5} \\ - \frac{4}{5}} $ or $ \myvector{\frac{3}{5} \\ \frac{4}{5}} $ or $ \myrvector{\frac{1}{\sqrt{3}} \\ - \frac{\sqrt{2}}{\sqrt{3}}} $? ``` # # your solution is here # ``` <i>What do we know at this point?</i> <ul> <li> A quantum state can be represented by a vector, in which each entry can be zero, a positive value, or a negative value. </li> <li> We can also say that the amplitude of any state can be zero, a positive value, or a negative value. </li> <li> The probability of observing one state after measurement is the square of its amplitude. </li> </ul> <i>What else can we say?</i>
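As a quick numerical sanity check of the measurement rule above, here is a small generic helper (my own addition, not part of the original notebook) that squares the amplitudes of a real-amplitude state and verifies normalization:

```python
from math import isclose, sqrt

def observation_probabilities(state):
    """Squared amplitudes of a quantum state with real amplitudes."""
    probs = [amplitude * amplitude for amplitude in state]
    assert isclose(sum(probs), 1.0), "amplitudes must satisfy |a|^2 + |b|^2 = 1"
    return probs

# the state from the measurement example above
state = [-sqrt(2) / sqrt(3), 1 / sqrt(3)]
print(observation_probabilities(state))  # approximately [2/3, 1/3]
```

The same helper can be applied to the three states of Task 1.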
In this notebook we will demonstrate aspect-based sentiment analysis using [VADER](https://github.com/cjhutto/vaderSentiment) and [Stanford Core NLP](https://stanfordnlp.github.io/CoreNLP/index.html).<br>
<br>**VADER Sentiment Analysis**: VADER (Valence Aware Dictionary and sEntiment Reasoner) is a lexicon and rule-based sentiment analysis tool that is specifically attuned to sentiments expressed in social media, and works well on texts from other domains. (source: [github](https://github.com/cjhutto/vaderSentiment))<br>
Stanford NLP has a live demo of aspect-based sentiment analysis [here](http://nlp.stanford.edu:8080/sentiment/rntnDemo.html).<br><br>
**Stanford Core NLP**: "Most sentiment prediction systems work just by looking at words in isolation, giving positive points for positive words and negative points for negative words and then summing up these points. That way, the order of words is ignored and important information is lost. In contrast, our new deep learning model actually builds up a representation of whole sentences based on the sentence structure. It computes the sentiment based on how words compose the meaning of longer phrases. This way, the model is not as easily fooled as previous models." (source: [Stanford Core NLP](https://nlp.stanford.edu/sentiment/index.html))

```
!pip install vaderSentiment
!pip install pycorenlp
```

### Importing the necessary packages

```
from pprint import pprint
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
import re
import string
import nltk
nltk.download('punkt')
nltk.download('vader_lexicon')
from nltk.tokenize import word_tokenize, RegexpTokenizer
from pycorenlp import StanfordCoreNLP
```

Let's analyze these three sentences.

```
positive = "This fried chicken tastes very good. It is juicy and perfectly cooked."
negative = "This fried chicken tasted bad. It is dry and overcooked."
ambiguous = "Except the amazing fried chicken everything else at the restaurant tastes very bad."
```

### VADER Sentiment

It scores from -1 to 1, with -1 being most negative and 1 most positive.

```
def sentiment_analyzer_scores(text):
    sentiment_analyzer = SentimentIntensityAnalyzer()
    score = sentiment_analyzer.polarity_scores(text)
    pprint(text)
    pprint(score)
    print("-"*30)

print("Positive:")
sentiment_analyzer_scores(positive)
print("Negative:")
sentiment_analyzer_scores(negative)
print("Ambiguous:")
sentiment_analyzer_scores(ambiguous)
```

As expected, the sentiment analyzer performed well on the positive and negative cases. For the ambiguous sentence, however, it calculated the compound sentiment to be close to 0, i.e., neutral.<br>
But it reads like a negative comment.

```
def get_word_sentiment(text):
    sentiment_analyzer = SentimentIntensityAnalyzer()
    tokenized_text = nltk.word_tokenize(text)

    positive_words = []
    neutral_words = []
    negative_words = []
    for word in tokenized_text:
        if (sentiment_analyzer.polarity_scores(word)['compound']) >= 0.1:
            positive_words.append(word)
        elif (sentiment_analyzer.polarity_scores(word)['compound']) <= -0.1:
            negative_words.append(word)
        else:
            neutral_words.append(word)

    print(text)
    print('Positive:', positive_words)
    print('Negative:', negative_words)
    print('Neutral:', neutral_words)
    print("-"*30)

get_word_sentiment(positive)
get_word_sentiment(negative)
get_word_sentiment(ambiguous)
```

### Stanford Core NLP

Before moving on to execute the code, we need to start the Stanford Core NLP server on our local machine.<br>
To do that, follow the steps below (tested on Debian; should work fine for other distributions too):

1. Download the Stanford Core NLP model from [here](https://stanfordnlp.github.io/CoreNLP/#download).
2. Unzip the folder
3. cd into the folder<br> ```cd stanford-corenlp-4.0.0/```
4.
Start the server using this command:<br> ```java -mx5g -cp "./*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -timeout 10000``` <br><br> If you do not have Java installed on your system, please install it from the official [Oracle](https://www.oracle.com/in/java/technologies/javase-downloads.html) page. <br><br>

```
nlp = StanfordCoreNLP('http://localhost:9000')

def get_sentiment(text):
    res = nlp.annotate(text,
                       properties={'annotators': 'sentiment',
                                   'outputFormat': 'json',
                                   'timeout': 1000,
                                   })
    print(text)
    print('Sentiment:', res['sentences'][0]['sentiment'])
    print('Sentiment score:', res['sentences'][0]['sentimentValue'])
    print('Sentiment distribution (0-v. negative, 5-v. positive):', res['sentences'][0]['sentimentDistribution'])
    print("-"*30)

get_sentiment(positive)
get_sentiment(negative)
get_sentiment(ambiguous)
```

Here you see that the model correctly predicts the ambiguous sentence that VADER failed to classify.<br>
The code in this notebook has been adapted from this [article](https://towardsdatascience.com/sentiment-analysis-beyond-words-6ca17a6c1b54).
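For reference, the ±0.1 bucketing used in `get_word_sentiment` earlier can be written as a small stand-alone function; the scores below are illustrative stand-ins for real `SentimentIntensityAnalyzer` output, not values I have verified:

```python
def classify_word(compound_score, threshold=0.1):
    """Replicates the bucketing used in get_word_sentiment above:
    compound >= +0.1 -> positive, <= -0.1 -> negative, else neutral."""
    if compound_score >= threshold:
        return "positive"
    if compound_score <= -threshold:
        return "negative"
    return "neutral"

# stub scores standing in for SentimentIntensityAnalyzer output
for word, score in [("good", 0.44), ("bad", -0.54), ("chicken", 0.0)]:
    print(word, "->", classify_word(score))
```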
Precursors!

```
import os, subprocess

if not os.path.isdir('models/heart'):
    os.mkdir('models/heart')
if not os.path.isfile('models/heart/model_best.tf.meta'):
    subprocess.call('curl -o models/heart/model_best.tf.index https://storage.googleapis.com/basenji_tutorial_data/model_best.tf.index', shell=True)
    subprocess.call('curl -o models/heart/model_best.tf.meta https://storage.googleapis.com/basenji_tutorial_data/model_best.tf.meta', shell=True)
    subprocess.call('curl -o models/heart/model_best.tf.data-00000-of-00001 https://storage.googleapis.com/basenji_tutorial_data/model_best.tf.data-00000-of-00001', shell=True)
```

Saturation mutagenesis is a powerful tool both for dissecting a specific sequence of interest and for understanding what the model learned. [basenji_sat.py](https://github.com/calico/basenji/blob/master/bin/basenji_sat.py) enables this analysis from a test set of data. [basenji_sat_vcf.py](https://github.com/calico/basenji/blob/master/bin/basenji_sat_vcf.py) lets you provide a VCF file for variant-centered mutagenesis.

To do this, you'll need
 * Trained model
 * Input file (FASTA or HDF5 with test_in/test_out)

First, you can either train your own model in the [Train/test tutorial](https://github.com/calico/basenji/blob/master/tutorials/train_test.ipynb) or use one that I pre-trained from the models subdirectory.

We'll bash the GATA4 promoter to see what motifs drive its expression. I placed a 131 kb FASTA file surrounding the GATA4 TSS in data/gata4.fa, so we'll use [basenji_sat.py](https://github.com/calico/basenji/blob/master/bin/basenji_sat.py).

The most relevant options are:

| Option/Argument | Value | Note |
|:---|:---|:---|
| -g | | Plot the nucleotides proportional to the gain score, too. |
| -f | 20 | Figure width, which I usually scale to 10x the saturation mutagenesis region |
| -l | 200 | Saturation mutagenesis region in the center of the given sequence(s) |
| -o | gata4_sat | Output plot directory. |
| --rc | | Predict forward and reverse complement versions and average the results. |
| -t | 0,1,2 | Target indexes to analyze. |
| params_file | models/params_small.txt | Table of parameters to set up the model architecture and optimization parameters. |
| model_file | models/heart/model_best.tf | Trained saved model prefix. |
| input_file | data/gata4.fa | Either FASTA or HDF5 with test_in/test_out keys. |

```
! basenji_sat.py -g -f 20 -l 200 -o output/gata4_sat --rc -t 0,1,2 models/params_small.txt models/heart/model_best.tf data/gata4.fa
```

The saturated mutagenesis heatmaps go into output/gata4_sat

```
from IPython.display import IFrame
IFrame('output/gata4_sat/seq0_t0.pdf', width=1200, height=400)
```
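Conceptually, saturation mutagenesis scores every possible single-nucleotide substitution by comparing the model's prediction for the mutated sequence against the reference. Here is a toy sketch of that loop, with a trivial stand-in scoring function in place of the trained Basenji model (which batches these predictions on GPU):

```python
# Toy saturation mutagenesis: score each single-base substitution.
# `score` is a hypothetical stand-in for the trained model's prediction;
# the real pipeline would run the Basenji network on each mutated sequence.
BASES = "ACGT"

def score(seq):
    # Hypothetical scalar "activity": here, simply the GC count.
    return sum(1 for b in seq if b in "GC")

def saturation_mutagenesis(seq):
    """Return {(position, alt_base): score delta vs. the reference}."""
    ref = score(seq)
    deltas = {}
    for i, base in enumerate(seq):
        for alt in BASES:
            if alt == base:
                continue
            mutated = seq[:i] + alt + seq[i + 1:]
            deltas[(i, alt)] = score(mutated) - ref
    return deltas

deltas = saturation_mutagenesis("ACGT")
print(deltas[(0, "G")])  # mutating A->G adds one GC base: 1
```

The heatmaps produced above visualize exactly these per-position, per-base deltas, just with the network's predicted coverage in place of the toy score.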
# Face Age Prediction

Age prediction means automatically recognizing the age of a person from an image. The technology has many applications, such as video surveillance, product recommendation, human-computer interaction, market analysis, user profiling, and age progression.

In this case study, we detect faces in an image and predict their ages. We first use the `MTCNN` model to detect face regions, then feed each face region to the `SSR-Net` model to predict age.

This case study covers:
* Implementing and using the `MTCNN` model
* Understanding and using the `SSR-Net` model

### Entering ModelArts

Open the following link: https://www.huaweicloud.com/product/modelarts.html to reach the ModelArts home page. Click "Use Now", enter your username and password to log in, and enter the ModelArts console.

### Creating a ModelArts notebook

Next, we create a notebook development environment in ModelArts. ModelArts notebooks provide a web-based Python development environment where you can conveniently write and run code and view the results.

Step 1: On the ModelArts main page, click "DevEnviron" and then "Create".

![create_nb_create_button](./img/create_nb_create_button.png)

Step 2: Fill in the notebook parameters:

|Item|Suggested value|
|-|-|
|Name|A custom environment name|
|Work environment|Python3|
|Resource pool|The "public resource pool" is sufficient|
|Type|GPU|
|Flavor|v100|
|Storage|EVS|
|Disk size|5GB|

Step 3: After configuring the notebook parameters, click Next to preview the notebook settings. After confirming them, click "Create Now".

![create_nb_creation_summary](./img/create_nb_creation_summary.png)

Step 4: After creation, return to the development environment page, wait for the notebook to finish creating, then open it for the next step.

![modelarts_notebook_index](./img/modelarts_notebook_index.png)

### Creating a development environment in ModelArts

Next, we create the actual development environment used for the following experiment steps.

Step 1: Click the "Open" button shown below to enter the notebook you just created.

![inter_dev_env](img/enter_dev_env.png)

Step 2: Create a Python3 notebook. Click "New" in the upper right corner and select the TensorFlow 1.13.1 environment.

Step 3: Click the file name "Untitled" in the upper left and enter a name related to this experiment, such as "age_prediction".

![notebook_untitled_filename](./img/notebook_untitled_filename.png)

![notebook_name_the_ipynb](./img/notebook_name_the_ipynb.png)

### Writing and running code in the notebook

In the notebook, enter a simple print statement and click the Run button above to see the result of executing it:

![run_helloworld](./img/run_helloworld.png)

The development environment is ready; now we can start coding!
# Case Content

## Introduction to the MTCNN model

[MTCNN (Multi-task Convolutional Neural Network)](https://kpzhang93.github.io/MTCNN_face_detection_alignment/) can perform both face-region detection and face alignment. Face detection has to cope with many difficulties: occlusion, tilted poses, and so on. Traditional approaches mostly rely on classical machine learning, whereas MTCNN combines deep learning with NMS and bounding-box regression to predict both face-region coordinates and facial-landmark coordinates, and handles these difficult cases better than classical methods.

For a detailed description of MTCNN, see: https://kpzhang93.github.io/MTCNN_face_detection_alignment .

### Downloading the data and code

Run the following code to download and extract the data and code:

```
from modelarts.session import Session
sess = Session()

if sess.region_name == 'cn-north-1':
    bucket_path="modelarts-labs/notebook/DL_face_age_prediction/ssr.tar"
elif sess.region_name == 'cn-north-4':
    bucket_path="modelarts-labs-bj4/notebook/DL_face_age_prediction/ssr.tar"
else:
    print("Please switch your region to Beijing-1 or Beijing-4")

sess.download_data(bucket_path=bucket_path, path="./ssr.tar")

!tar -xf ssr.tar
```

### Installing `mtcnn`

```
!pip install mtcnn==0.0.8
```

### Detecting faces with the mtcnn library

Using the `mtcnn` library to detect faces is straightforward, but it does not allow you to fine-tune the library's built-in face-detection model.

```
import numpy as np
import cv2
import tensorflow as tf
import random
from PIL import Image
```

You can upload a test image through the notebook's `upload` feature; the image path is stored in `image_path`. A test image is provided here, and you can also upload your own image to test.

```
image_path = "./test.jpeg"
img = Image.open(image_path)
img = np.array(img)
```

Call the mtcnn library to detect the face region and show the result:

```
from mtcnn.mtcnn import MTCNN as mtcnn
detector = mtcnn()
detected = detector.detect_faces(img)

# print the detection result
detected
```

Draw the detection result on the image:

```
# drawing
box = detected[0]["box"]
res_img = cv2.rectangle(img, (box[0],box[1]),(box[0]+box[2],box[1]+box[3]), 0, 1)
keypoints = detected[0]["keypoints"]
res_img = cv2.circle(res_img, keypoints['left_eye'], 1, 255, 4)
res_img = cv2.circle(res_img, keypoints['right_eye'], 1, 255, 4)
res_img = cv2.circle(res_img, keypoints['nose'], 1, 255, 4)
res_img = cv2.circle(res_img, keypoints['mouth_left'], 1, 255, 4)
res_img = cv2.circle(res_img, keypoints['mouth_right'], 1, 255, 4)
res_img = Image.fromarray(res_img)
res_img
```

### Implementing the MTCNN model

#### MTCNN pipeline overview

The MTCNN network consists of three parts: PNet, RNet, and ONet.

![MTCNN-backbone](./img/M-nw.png)

The convolutional networks produce three kinds of output: a face/non-face classification result, face bounding boxes, and facial-landmark positions.
The data passes through PNet, RNet, and ONet in turn; after each network, NMS and bounding-box regression are applied, and the final detection result, the face-region coordinates and facial-landmark coordinates, is obtained from the ONet output.

> NMS (non-maximum suppression): when detecting faces, a single face region may receive several bounding-box predictions. Even though all of them may have high confidence, we only need the one with the highest confidence, so we suppress the locally non-maximal predictions to filter the bounding boxes. NMS is used in many object-detection models, such as R-CNN, Faster R-CNN, and Mask R-CNN.

Next, we build the `MTCNN` network structure in code. We present the implementation in the order **PNet**, **RNet**, **ONet**; each part covers the model structure and the effect of running it.

```
from src.align.detect_face import Network
from src.align.detect_face import rerec, pad
from src.align.detect_face import nms
from src.align.detect_face import imresample
from src.align.detect_face import generateBoundingBox
```

#### PNet

We use a fully convolutional network, the Proposal Network (PNet), to generate candidate face boxes, which are then corrected by bounding-box regression. After correction, NMS is applied to filter out highly overlapping candidates.

The PNet is built as follows:

```
class PNet(Network):
    def setup(self):
        (self.feed('data')
         .conv(3, 3, 10, 1, 1, padding='VALID', relu=False, name='conv1')
         .prelu(name='PReLU1')
         .max_pool(2, 2, 2, 2, name='pool1')
         .conv(3, 3, 16, 1, 1, padding='VALID', relu=False, name='conv2')
         .prelu(name='PReLU2')
         .conv(3, 3, 32, 1, 1, padding='VALID', relu=False, name='conv3')
         .prelu(name='PReLU3')
         .conv(1, 1, 2, 1, 1, relu=False, name='conv4-1')
         .softmax(3,name='prob1'))

        (self.feed('PReLU3')
         .conv(1, 1, 4, 1, 1, relu=False, name='conv4-2'))
```

#### RNet

All face candidates produced by PNet are fed into another convolutional network, the Refine Network (RNet). RNet rejects a large number of false candidates, and again applies bounding-box regression for correction and NMS for filtering.

The RNet is built as follows:

```
class RNet(Network):
    def setup(self):
        (self.feed('data') #pylint: disable=no-value-for-parameter, no-member
         .conv(3, 3, 28, 1, 1, padding='VALID', relu=False, name='conv1')
         .prelu(name='prelu1')
         .max_pool(3, 3, 2, 2, name='pool1')
         .conv(3, 3, 48, 1, 1, padding='VALID', relu=False, name='conv2')
         .prelu(name='prelu2')
         .max_pool(3, 3, 2, 2, padding='VALID', name='pool2')
         .conv(2, 2, 64, 1, 1, padding='VALID', relu=False, name='conv3')
         .prelu(name='prelu3')
         .fc(128, relu=False, name='conv4')
         .prelu(name='prelu4')
         .fc(2, relu=False, name='conv5-1')
         .softmax(1,name='prob1'))

        (self.feed('prelu4') #pylint: disable=no-value-for-parameter
         .fc(4, relu=False, name='conv5-2'))
```

#### ONet

ONet is similar to RNet, but it additionally outputs five facial-landmark positions. Its full name is the Output Network; as the final network in the cascade, it outputs both the face-region coordinates and the facial-landmark coordinates.

The ONet is built as follows:

```
class ONet(Network):
    def setup(self):
        (self.feed('data') #pylint: disable=no-value-for-parameter, no-member
         .conv(3, 3, 32, 1, 1, padding='VALID', relu=False, name='conv1')
         .prelu(name='prelu1')
         .max_pool(3, 3, 2, 2, name='pool1')
         .conv(3, 3, 64, 1, 1, padding='VALID', relu=False, name='conv2')
         .prelu(name='prelu2')
         .max_pool(3, 3, 2, 2, padding='VALID', name='pool2')
         .conv(3, 3, 64, 1, 1, padding='VALID', relu=False, name='conv3')
         .prelu(name='prelu3')
         .max_pool(2, 2, 2, 2, name='pool3')
         .conv(2, 2, 128, 1, 1, padding='VALID', relu=False, name='conv4')
         .prelu(name='prelu4')
         .fc(256, relu=False, name='conv5')
         .prelu(name='prelu5')
         .fc(2, relu=False, name='conv6-1')
         .softmax(1, name='prob1'))

        (self.feed('prelu5') #pylint: disable=no-value-for-parameter
         .fc(4, relu=False, name='conv6-2'))

        (self.feed('prelu5') #pylint: disable=no-value-for-parameter
         .fc(10, relu=False, name='conv6-3'))
```

### Data preparation

```
# open the original image
test_img = Image.open(image_path)
test_img
```

Image preprocessing:

```
# preprocess the image
test_img = np.array(test_img)
img_size = np.asarray(test_img.shape)[0:2]

factor_count=0
minsize = 20
total_boxes=np.empty((0,9))
points=np.empty(0)
h=test_img.shape[0]   # h=410
w=test_img.shape[1]   # w=599
minl=np.amin([h, w])  # minl = min(410, 599) = 410
m=12.0/minsize        # m=12/20
minl=minl*m           # minl = 410*12/20 = 410*0.6
factor = 0.709
scales=[]
while minl>=12:
    scales += [m*np.power(factor, factor_count)]
    minl = minl*factor
    factor_count += 1

# first stage
for scale in scales:
    hs=int(np.ceil(h*scale))  # smallest integer >= h*scale
    ws=int(np.ceil(w*scale))
    im_data = cv2.resize(test_img, (ws, hs), interpolation=cv2.INTER_AREA)
    im_data = (im_data-127.5)*0.0078125
    img_x = np.expand_dims(im_data, 0)
    img_y = np.transpose(img_x, (0,2,1,3))
```

### Running PNet

Run PNet and load the pre-trained weights:

```
with tf.Graph().as_default():
    with tf.Session() as sess:
        with tf.variable_scope('pnet'):
            data = tf.placeholder(tf.float32, shape=(None, None,
None, 3), name="input")
            pnet = PNet({'data':data})
            pnet.load("./src/align/PNet.npy", sess)

        out = sess.run(('pnet/conv4-2/BiasAdd:0', 'pnet/prob1:0'),
                       feed_dict={'pnet/input:0':img_y})

# bounding-box regression output
out0 = np.transpose(out[0], (0,2,1,3))
# face classification output
out1 = np.transpose(out[1], (0,2,1,3))

threshold = 0.5
boxes, reg = generateBoundingBox(out1[0,:,:,1].copy(), out0[0,:,:,:].copy(), scale, threshold)
print("PNet output shape: "+str(boxes.shape))
total_boxes = boxes.copy()

# bounding-box drawing function
def draw_bboxes(img, total_boxes):
    for i in range(total_boxes.shape[0]):
        r = random.randint(0, 255)
        g = random.randint(0, 255)
        b = random.randint(0, 255)
        x1 = int(total_boxes[:,0][i])
        y1 = int(total_boxes[:,1][i])
        x2 = int(total_boxes[:,2][i])
        y2 = int(total_boxes[:,3][i])
        img = cv2.rectangle(img,(x1,y1),(x2,y2), (r,g,b), 2)
    return img
```

Filter the PNet predictions, apply bounding-box regression, and draw the results on the image:

```
img = Image.open(image_path)
img = np.array(img)
Image.fromarray(draw_bboxes(img,total_boxes))

total_boxes=np.empty((0,9))
pick = nms(boxes.copy(), 0.7, 'Union')
if boxes.size>0 and pick.size>0:
    boxes = boxes[pick,:]
    total_boxes = np.append(total_boxes, boxes, axis=0)
print("After filtering: "+str(total_boxes.shape))

# draw the filtered bounding boxes
img = Image.open(image_path)
img = np.array(img)

# run NMS
pick = nms(total_boxes.copy(), 0.6, 'Union')
total_boxes = total_boxes[pick,:]
print(total_boxes.shape)

# bounding-box regression
regw = total_boxes[:,2]-total_boxes[:,0]
regh = total_boxes[:,3]-total_boxes[:,1]
qq1 = total_boxes[:,0]+total_boxes[:,5]*regw
qq2 = total_boxes[:,1]+total_boxes[:,6]*regh
qq3 = total_boxes[:,2]+total_boxes[:,7]*regw
qq4 = total_boxes[:,3]+total_boxes[:,8]*regh
total_boxes = np.transpose(np.vstack([qq1, qq2, qq3, qq4, total_boxes[:,4]]))
print(total_boxes.shape)

img = Image.open(image_path)
img = np.array(img)

# convert the boxes to squares
total_boxes = rerec(total_boxes.copy())
print(total_boxes)

# round the box coordinates to integers
total_boxes[:,0:4] = np.fix(total_boxes[:,0:4]).astype(np.int32)
print(total_boxes)

dy, edy, dx, edx, y, ey, x, ex, tmpw, tmph = pad(total_boxes.copy(), w, h)

img = Image.open(image_path)
img = np.array(img)
Image.fromarray(draw_bboxes(img,total_boxes))
```

### Running RNet

After MTCNN's PNet stage, a number of candidate bounding boxes have been predicted. Next we run RNet; after the RNet stage the boxes become more accurate.

```
numbox = total_boxes.shape[0]
tempimg = np.zeros((24,24,3,numbox))
for k in range(0,numbox):
    tmp = np.zeros((int(tmph[k]),int(tmpw[k]),3))
    tmp[dy[k]-1:edy[k],dx[k]-1:edx[k],:] = img[y[k]-1:ey[k],x[k]-1:ex[k],:]
    if tmp.shape[0]>0 and tmp.shape[1]>0 or tmp.shape[0]==0 and tmp.shape[1]==0:
        tempimg[:,:,:,k] = imresample(tmp, (24, 24))
    else:
        print(0)
tempimg = (tempimg-127.5)*0.0078125
tempimg1 = np.transpose(tempimg, (3,1,0,2))

with tf.Graph().as_default():
    with tf.Session() as sess:
        with tf.variable_scope('rnet'):
            data = tf.placeholder(tf.float32, shape=(None, 24, 24, 3), name="input")
            rnet = RNet({'data':data})
            rnet.load("./src/align/RNet.npy", sess)
        out = sess.run(('rnet/conv5-2/conv5-2:0', 'rnet/prob1:0'),
                       feed_dict={'rnet/input:0':tempimg1})

# detected face coordinates
out0 = np.transpose(out[0])
out1 = np.transpose(out[1])
score = out1[1,:]

threshold = 0.7
ipass = np.where(score>0.2)
total_boxes = np.hstack([total_boxes[ipass[0],0:4].copy(), np.expand_dims(score[ipass].copy(),1)])
mv = out0[:,ipass[0]]
if total_boxes.shape[0]>0:
    pick = nms(total_boxes, threshold, 'Union')
    total_boxes = total_boxes[pick,:]
print(total_boxes)

img = Image.open(image_path)
img = np.array(img)

from src.align.detect_face import bbreg

# bounding-box regression
total_boxes = bbreg(total_boxes.copy(), np.transpose(mv[:,pick]))
print(total_boxes)

# convert the boxes to squares
total_boxes = rerec(total_boxes.copy())
print(total_boxes)

img = Image.open(image_path)
img = np.array(img)
Image.fromarray(draw_bboxes(img,total_boxes))
```

### Running ONet

Finally, we run ONet, which not only makes the face bounding boxes more accurate but also detects the facial landmarks.

```
numbox = total_boxes.shape[0]
total_boxes = np.fix(total_boxes).astype(np.int32)
dy, edy, dx, edx, y, ey, x, ex, tmpw, tmph = pad(total_boxes.copy(), w, h)

tempimg = np.zeros((48,48,3,numbox))
for k in range(0,numbox):
    tmp = np.zeros((int(tmph[k]),int(tmpw[k]),3))
    tmp[dy[k]-1:edy[k],dx[k]-1:edx[k],:] = img[y[k]-1:ey[k],x[k]-1:ex[k],:]
    if tmp.shape[0]>0 and tmp.shape[1]>0 or tmp.shape[0]==0 and tmp.shape[1]==0:
        tempimg[:,:,:,k] = imresample(tmp, (48, 48))
    else:
        print(0)
tempimg = (tempimg-127.5)*0.0078125
tempimg1 = np.transpose(tempimg, (3,1,0,2))

with tf.Graph().as_default():
    with tf.Session() as sess:
        with tf.variable_scope('onet'):
            data = tf.placeholder(tf.float32, shape=(None, 48, 48, 3), name="input")
            onet = ONet({'data':data})
            onet.load("./src/align/ONet.npy", sess)
        out = sess.run(('onet/conv6-2/conv6-2:0', 'onet/conv6-3/conv6-3:0', 'onet/prob1:0'),
                       feed_dict={'onet/input:0':tempimg1})

# predicted face bounding boxes
out0 = np.transpose(out[0])
# predicted facial landmarks
out1 = np.transpose(out[1])
# face-region confidence
out2 = np.transpose(out[2])

score = out2[1,:]
points = out1
# threshold = 0.7
ipass = np.where(score>0.7)
points = points[:,ipass[0]]
total_boxes = np.hstack([total_boxes[ipass[0],0:4].copy(), np.expand_dims(score[ipass].copy(),1)])
mv = out0[:,ipass[0]]

w = total_boxes[:,2]-total_boxes[:,0]+1
h = total_boxes[:,3]-total_boxes[:,1]+1
points[0:5,:] = np.tile(w,(5, 1))*points[0:5,:] + np.tile(total_boxes[:,0],(5, 1))-1
points[5:10,:] = np.tile(h,(5, 1))*points[5:10,:] + np.tile(total_boxes[:,1],(5, 1))-1
if total_boxes.shape[0]>0:
    total_boxes = bbreg(total_boxes.copy(), np.transpose(mv))
    pick = nms(total_boxes.copy(), 0.7, 'Min')
    total_boxes = total_boxes[pick,:]
    points = points[:,pick]

img = Image.open(image_path)
img = np.array(img)

r = random.randint(0, 255)
g = random.randint(0, 255)
b = random.randint(0, 255)
point_color = (r, g, b)
for i in range(5):
    cv2.circle(img,(int(points[i]),int(points[i+5])),1, point_color, 4)
Image.fromarray(draw_bboxes(img,total_boxes))
```

## Age prediction

We use the `SSR-Net` model to predict age; the paper is available at [this link](https://www.ijcai.org/proceedings/2018/0150.pdf).

### Loading the model

First we load the model structure and weights. The path of the pre-trained model is stored in `weight_file`.

```
from SSRNET_model import SSR_net

weight_file = "./ssrnet_3_3_3_64_1.0_1.0.h5"
img_size = 64
stage_num = [3,3,3]
lambda_local = 1
lambda_d = 1
model = SSR_net(img_size,stage_num, lambda_local, lambda_d)()
model.load_weights(weight_file)
```

Model architecture summary:

```
model.summary()
```

Prepare the input data:

```
faces = np.empty((len(detected), img_size, img_size, 3))
faces.shape
```

### Cropping and resizing the face regions

Crop and resize the detected face regions:

```
ad = 0.4
img_h, img_w, _ = np.shape(img)
for i,d in enumerate(detected):
    if d['confidence'] >=0.95 :
        x1,y1,w,h = d['box']
        x2 = x1 + w
        y2 = y1 + h
        xw1 = max(int(x1 - ad * w), 0)
        yw1 = max(int(y1 - ad * h), 0)
        xw2 = min(int(x2 + ad * w), img_w - 1)
        yw2 = min(int(y2 + ad * h), img_h - 1)
        img = cv2.resize(img[yw1:yw2+1, xw1:xw2+1, :], (img_size, img_size))
        faces[i,:,:,:] = img

res_img = Image.fromarray(img)
res_img
```

### Predicting age

Feed the face crops into the model to obtain the prediction:

```
res = model.predict(faces)
print("Predicted age: "+str(int(res[0])))
```

## Summary

In this exercise, we walked through and implemented the `MTCNN` face-detection model in detail, then showed how to use the `SSR-Net` model to predict age.
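Two recurring steps in the pipeline above, non-maximum suppression and margin-expanded face cropping, can be isolated as small pure-Python helpers. These are illustrative sketches only: the notebook itself uses the `nms` helper from `src.align.detect_face` and the inline cropping arithmetic shown earlier.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2, score) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def simple_nms(boxes, threshold=0.5):
    """Keep the highest-scoring box in each cluster of overlapping boxes."""
    kept = []
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):
        if all(iou(box, k) < threshold for k in kept):
            kept.append(box)
    return kept

def expand_box(box, img_w, img_h, ad=0.4):
    """Expand an (x, y, w, h) face box by a margin and clamp to the image."""
    x1, y1, w, h = box
    x2, y2 = x1 + w, y1 + h
    xw1 = max(int(x1 - ad * w), 0)
    yw1 = max(int(y1 - ad * h), 0)
    xw2 = min(int(x2 + ad * w), img_w - 1)
    yw2 = min(int(y2 + ad * h), img_h - 1)
    return xw1, yw1, xw2, yw2

boxes = [(0, 0, 10, 10, 0.9), (1, 1, 11, 11, 0.8), (50, 50, 60, 60, 0.7)]
print(simple_nms(boxes))  # the 0.8 box overlaps the 0.9 box and is suppressed
print(expand_box((100, 100, 50, 50), 640, 480))  # (80, 80, 170, 170)
```

`simple_nms` is the greedy 'Union' variant used throughout the cascade; the margin expansion mirrors the `ad = 0.4` cropping loop above.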
# Implementing a Neural Network - SVM combo

```
# A bit of a setup

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats as st
from cs231n.classifiers.two_layer_neural_net import TwoLayerNet
from cs231n.classifiers.neural_net import ThreeLayerNet
from cs231n.data_utils import load_CIFAR10

%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2

def rel_error(x, y):
    """ returns relative error """
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
```

## Load the data (CIFAR-10 dataset)

On the following lines of code, we will separate the data into a training set, a validation set, and a testing set. The training set contains 49000 images, the validation set contains 1000 images, and the testing set contains 10000 images. Then, we will do some light preprocessing, subtracting the mean image and reshaping the data into row format (the original data are given in tensor format).

```
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=10000):
    """
    Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
    it for the two-layer neural net classifier. These are the same steps as
    we used for the SVM, but condensed to a single function.
""" # Load the raw CIFAR-10 data cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # Subsample the data mask = range(num_training, num_training + num_validation) X_val = X_train[mask] y_val = y_train[mask] mask = range(num_training) X_train = X_train[mask] y_train = y_train[mask] mask = range(num_test) X_test = X_test[mask] y_test = y_test[mask] # Normalize the data: subtract the mean image mean_image = np.mean(X_train, axis=0) X_train -= mean_image X_val -= mean_image X_test -= mean_image # Reshape data to rows X_train = X_train.reshape(num_training, -1) X_val = X_val.reshape(num_validation, -1) X_test = X_test.reshape(num_test, -1) return X_train, y_train, X_val, y_val, X_test, y_test ``` Invoke the function and do a sanity check to see the shapes of the matrices. ``` # Invoke the above function to get our data. #X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data() print 'Train data shape: ', X_train.shape print 'Train labels shape: ', y_train.shape print 'Validation data shape: ', X_val.shape print 'Validation labels shape: ', y_val.shape print 'Test data shape: ', X_test.shape print 'Test labels shape: ', y_test.shape ``` ## Train a network To train our network we will use SGD with momentum (toDo: add the other optimization blocks used in the training). In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate. 
```
#input_size = 32 * 32 * 3
input_size = 28 * 28
hidden_size = 500 # number of neurons in the hidden layer
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)

# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
                  learning_rate=5.43e-5, learning_rate_decay=0.98,
                  reg=0.3, num_iters=1000, optimizer='Adam', p = 0.7,
                  batch_size=256, verbose=True, svm=True)

# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print 'Validation accuracy: ', val_acc
```

## Debug the training

One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.

Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.

```
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')

plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()

from cs231n.vis_utils import visualize_grid

# Visualize the weights of the network
def show_net_weights(net):
    W1 = net.params['W1']
    W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
    plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
    plt.gca().axis('off')
    plt.show()

show_net_weights(net)
```

## Tune your hyperparameters

```
best_val = -1
best_net = None
best_stats = None
best_nets_ensamble = []
values_ensamble = []
best_stats_ensamble = []

results = {}

for i in xrange(20):
    neurons = np.random.randint(800, 1200)
    lr = np.random.uniform(3e-5, 8e-5)
    rs = np.random.uniform(0.05, 0.2)
    iters = np.random.randint(3000, 5000)
    learning_rate_decay = np.random.uniform(0.95, 1)
    p = np.random.uniform(0.4, 0.6)
    opt = np.random.choice(['Nesterov', 'Adagrad', 'RMSProp', 'Adam'])
    net = np.random.choice([TwoLayerNet(input_size, neurons, num_classes),
                            ThreeLayerNet(input_size, neurons, num_classes)])

    # Train the network
    stats = net.train(X_train, y_train, X_val, y_val,
                      lr, learning_rate_decay, rs, iters, opt, p, 256, False, True)

    y_train_pred = net.predict(X_train)
    acc_train = np.mean(y_train == y_train_pred)
    y_val_pred = net.predict(X_val)
    acc_val = np.mean(y_val == y_val_pred)

    num_layers = 0
    if type(net) == TwoLayerNet:
        num_layers = 2
    else:
        num_layers = 3

    optimizers = {'Nesterov': 1, 'Adagrad': 2, 'RMSProp': 3, 'Adam': 4}
    results[(lr, rs, neurons, learning_rate_decay, iters, num_layers, optimizers[opt], p)] = (acc_train, acc_val)

    accepted_accuracy = 0.95
    if acc_val > accepted_accuracy:
        best_nets_ensamble.append(net)
        values_ensamble.append(acc_val)
        best_stats_ensamble.append(stats)

    number_of_ensemblers = len(best_nets_ensamble)

    if best_val < acc_val:
        best_stats = stats
        best_val = acc_val
        best_net = net

    print values_ensamble
    print i

# Print out results.
for lr, reg, neurons, learning_rate_decay, iters, num_layers, opts, p in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg, neurons, learning_rate_decay, iters, num_layers, opts, p)]
    print 'lr %e reg %e neur %e lrd %e iters %e num_layers %e opt %e p %e train accuracy: %f val accuracy: %f' % (
                lr, reg, neurons, learning_rate_decay, iters, num_layers, opts, p, train_accuracy, val_accuracy)

print 'best validation accuracy achieved during cross-validation: %f' % best_val

test_acc = (best_net.predict(X_test) == y_test).mean()

print number_of_ensemblers
if number_of_ensemblers > 0:
    ensemble_net = np.zeros((number_of_ensemblers, len(y_test)))
    i = 0
    for net in best_nets_ensamble:
        pred = net.predict(X_test)
        ensemble_net[i,:] = pred
        i += 1

    pred_final = st.mode(ensemble_net)
    pred_final = pred_final[0]
    accuracy_ensembler = (pred_final == y_test).mean()

print 'Test accuracy - best model: ', test_acc
print 'Test accuracy - ensembler: ', accuracy_ensembler

show_net_weights(best_net)
```
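The ensembling step above keeps, for each test example, the most common prediction across the selected networks (via `scipy.stats.mode`). The same majority vote can be sketched with the standard library; `preds` here plays the role of the notebook's `ensemble_net` matrix:

```python
from collections import Counter

def majority_vote(preds):
    """preds: one list of class predictions per model.
    Returns the most common class per example across models."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*preds)]

# Three models voting on three test examples.
preds = [[0, 1, 2],
         [0, 1, 1],
         [1, 1, 2]]
print(majority_vote(preds))  # [0, 1, 2]
```

Majority voting helps when the individual networks make uncorrelated errors, which is why the hyperparameter search keeps several diverse high-accuracy models rather than just the single best one.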
# Week 2 lecture notes

From Coursera course, Understanding and Visualizing Data.

Patricia Schuster, University of Michigan

# Categorical data: Tables, Bar Charts and Pie Charts

Categorical data classifies individuals or items into different groups. The most common way to summarize the data is to group by category and assemble a frequency table of counts and percentages. The most common way to visualize it is with a bar chart, sometimes arranged in descending order.

Don't recommend pie charts: labels can overlap, and it's hard to compare slice sizes.

# Quantitative data: Histogram

Great first look at the data. Allows you to see the distribution of data in a compact way. Each rectangle is called a bin. Python will automatically set bin edges.

4 aspects:
* Shape: Overall appearance. Can be symmetric, bell-shaped, left skewed, right skewed, etc. Unimodal = one peak. "Right skewed" means it leans to the left and has a tail to the right.
* Center: Mean or median position
* Spread: How far our data spreads. Range, interquartile range (IQR), standard deviation, variance
* Outliers: Data points that fall far from the bulk of the data

Example summary of the features of a distribution, covering each of those four aspects: "The distribution of adult male heights is roughly bell shaped with a center of about 68 inches, a range of 13 inches (62 to 75) and no apparent outliers."

# Quantitative data: Numerical summaries

5 number summary:
* Min (0%)
* 1st quartile (25%)
* Median (50%)
* 3rd quartile (75%)
* Max (100%)

Others
* Mean
* IQR: Interquartile range (25-75%), useful when the data is left or right skewed.
* Standard deviation: Average size that points fall away from the mean
* Sample size

# Standard score (Empirical Rule)

For a normal distribution:
* 68% of data fall within 1-sigma of the mean
* 95% of data fall within 2-sigma of the mean
* 99.7% of data fall within 3-sigma of the mean

This allows us to get an idea of how unusual a data point is.
Calculate a standard score: $ z = (x-\mu)/\sigma$

A negative value means $x$ is below the mean.

# Quantitative data: box plots

Gives a visual representation of the 5-number summary.

![Boxplot](fig/boxplot.PNG)

Often called a box and whisker plot. The 1st and 3rd quartiles define the box. The median is the line through the box. The min and max are the whiskers.

Boxplots have a separate function for determining outliers, and they are plotted as individual data points outside the min and max.

Limitation: Hides some shape, which a histogram displays better.

# Tables, histograms, and boxplots in Python

```
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
```

Load a tips dataset, which is a default dataset in seaborn

```
tips_data = sns.load_dataset("tips")
```

View it in table form, just the first five lines.

```
tips_data.head()
```

Describe the table to me. Default behavior is only to describe quantitative metrics in numerical columns. I can use the `include` parameter to explicitly add additional columns

```
tips_data.dtypes
tips_data.describe()
tips_data.describe(include=['float','int64','category'])
```

## Create a histogram

Look at the total bill.

```
# Plot a histogram of the total bill amount
sns.distplot(tips_data['total_bill'], kde=False).set_title("Histogram of total bill")
plt.show()

# Plot a histogram of the tips
# This time leave kde on... the smoothing curve
sns.distplot(tips_data['tip']).set_title("Histogram of tips")
plt.show()

# Plot both together.
sns.distplot(tips_data['total_bill'], kde=False)
sns.distplot(tips_data['tip'], kde=False).set_title("Histogram of total bill")
plt.show()
```

## Create a boxplot

```
sns.boxplot(tips_data["total_bill"]).set_title("Box plot of total bill")
plt.show()
```

By default, if we want to plot them both together, it will pile separate plots on top of each other.
Let's see:

```
sns.boxplot(tips_data["total_bill"])
sns.boxplot(tips_data["tip"]).set_title("Box plot of total bill")
plt.show()
```

We don't want these on the same axis. How do we plot them separately?

## Plot by groups

Separate by smokers and non-smokers. All we have to do is provide the category by which we are splitting the data as the y-axis.

```
tips_data.head()

sns.boxplot(x = tips_data["tip"], y=tips_data["smoker"])
plt.show()
```

The quartile ranges look similar for these, but are the shapes of the data the same? Look at the histograms separately with the function `sns.FacetGrid`. This automatically matches the axis ranges so they are directly comparable.

```
g = sns.FacetGrid(tips_data, row = "smoker")
g = g.map(plt.hist, "tip")
```

Look at tips grouped by time of day.

```
sns.boxplot(x = 'tip', y = 'time', data = tips_data)
plt.show()
```

Dinner generates higher tips than lunch. Look at a histogram.

```
g = sns.FacetGrid(tips_data, row = "time")
g = g.map(plt.hist, "tip")
```

Observation: There are a lot more diners at dinner than lunch, and tips are higher then, so if I were a server I would want to work during dinner.

Invert and look vs. day.

```
sns.boxplot(x="day", y="tip", data = tips_data)
plt.show()
```

How would you decide what day to work? A few observations:
* Sunday has the largest tip on average, Thursday has the worst tip on average
* Saturday has the most high outliers
* Friday has the smallest range

Now look at the shapes.

```
g = sns.FacetGrid(tips_data, row = "day")
g = g.map(plt.hist, "tip")
```

Now compare smokers and non-smokers within each day. You can do this by adding `smoker` as a `hue`.

```
sns.boxplot(x=tips_data["day"], y = tips_data["tip"], hue=tips_data["smoker"])
plt.show()
```

Another way to input all of this is just with the column names, and then provide the name of the dataframe as the `data` input parameter. This simplifies things.
```
sns.boxplot(x= "day", y = "tip", hue= "smoker", data = tips_data)
plt.savefig('tips_data.png')
plt.show()
```

It looks like we can break out the FacetGrid into eight plots to see the distributions for these eight boxes.

```
g = sns.FacetGrid(tips_data, row="day", col="smoker")
g = g.map(plt.hist, "tip")
```

My first observation is that there are generally fewer smokers than non-smokers, except on Saturday when it appears to be roughly even.
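The numerical summaries from the start of these notes, the five-number summary, the IQR, and the standard score, can also be computed directly, without seaborn. A quick sketch on a small made-up sample (`statistics.quantiles` with `n=4` returns the three quartiles):

```python
import statistics

data = list(range(1, 12))  # an illustrative sample: 1..11

# Five-number summary and IQR
q1, median, q3 = statistics.quantiles(data, n=4)
print(min(data), q1, median, q3, max(data))  # 1 3.0 6.0 9.0 11
print(q3 - q1)                               # IQR: 6.0

# Standard score: how many standard deviations x lies from the mean
def z_score(x, mu, sigma):
    return (x - mu) / sigma

# Height example from above: mean 68 inches, with an assumed sigma of 3 inches
print(z_score(74, 68, 3))  # 2.0 -> beyond the central 68%, within 95%
```

By the Empirical Rule, a z-score of 2 already puts an observation in the outer 5% of a normal distribution, which is how the standard score quantifies "unusual".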
##### Copyright 2019 The TensorFlow Authors.

```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```

# Distributed training with Keras

<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://www.tensorflow.org/beta/tutorials/distribute/keras"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/distribute/keras.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/distribute/keras.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/r2/tutorials/distribute/keras.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
  </td>
</table>

## Overview

The `tf.distribute.Strategy` API provides an abstraction for distributing your training across multiple processing units. The goal is to allow users to enable distributed training using existing models and training code, with minimal changes.

This tutorial uses the `tf.distribute.MirroredStrategy`, which does in-graph replication with synchronous training on many GPUs on one machine.
Essentially, it copies all of the model's variables to each processor. Then, it uses [all-reduce](http://mpitutorial.com/tutorials/mpi-reduce-and-allreduce/) to combine the gradients from all processors and applies the combined value to all copies of the model.

`MirroredStrategy` is one of several distribution strategies available in TensorFlow core. You can read about more strategies in the [distribution strategy guide](../../guide/distribute_strategy.ipynb).

### Keras API

This example uses the `tf.keras` API to build the model and training loop. For custom training loops, see the [tf.distribute.Strategy with training loops](training_loops.ipynb) tutorial.

## Import dependencies

```
from __future__ import absolute_import, division, print_function, unicode_literals

# Import TensorFlow and TensorFlow Datasets
try:
  # %tensorflow_version only exists in Colab.
  %tensorflow_version 2.x
except Exception:
  pass
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()

import os
```

## Download the dataset

Download the MNIST dataset and load it from [TensorFlow Datasets](https://www.tensorflow.org/datasets). This returns a dataset in `tf.data` format.

Setting `with_info` to `True` includes the metadata for the entire dataset, which is being saved here to `info`. Among other things, this metadata object includes the number of train and test examples.

```
datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)

mnist_train, mnist_test = datasets['train'], datasets['test']
```

## Define distribution strategy

Create a `MirroredStrategy` object. This will handle distribution, and provides a context manager (`tf.distribute.MirroredStrategy.scope`) to build your model inside.

```
strategy = tf.distribute.MirroredStrategy()

print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
```

## Setup input pipeline

When training a model with multiple GPUs, you can use the extra computing power effectively by increasing the batch size.
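Under `MirroredStrategy` the effective global batch size is the per-replica batch size multiplied by the number of replicas, and a common rule of thumb for the accompanying learning-rate tuning (a heuristic, not part of this tutorial's code) is to scale the rate linearly with the replica count. The arithmetic, as a sketch:

```python
def global_batch_and_lr(per_replica_batch, num_replicas, base_lr):
    """Global batch size, plus a linearly scaled learning rate (heuristic)."""
    global_batch = per_replica_batch * num_replicas
    scaled_lr = base_lr * num_replicas  # linear-scaling rule of thumb
    return global_batch, scaled_lr

# Hypothetical 4-replica setup with the tutorial's per-replica batch of 64
print(global_batch_and_lr(64, 4, 1e-3))
```

This is exactly what the `BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync` line below computes for the batch size.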
In general, use the largest batch size that fits the GPU memory, and tune the learning rate accordingly. ``` # You can also do info.splits.total_num_examples to get the total # number of examples in the dataset. num_train_examples = info.splits['train'].num_examples num_test_examples = info.splits['test'].num_examples BUFFER_SIZE = 10000 BATCH_SIZE_PER_REPLICA = 64 BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync ``` Pixel values, which are 0-255, [have to be normalized to the 0-1 range](https://en.wikipedia.org/wiki/Feature_scaling). Define this scale in a function. ``` def scale(image, label): image = tf.cast(image, tf.float32) image /= 255 return image, label ``` Apply this function to the training and test data, shuffle the training data, and [batch it for training](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch). ``` train_dataset = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE) eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE) ``` ## Create the model Create and compile the Keras model in the context of `strategy.scope`. ``` with strategy.scope(): model = tf.keras.Sequential([ tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(loss='sparse_categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(), metrics=['accuracy']) ``` ## Define the callbacks The callbacks used here are: * *TensorBoard*: This callback writes a log for TensorBoard which allows you to visualize the graphs. * *Model Checkpoint*: This callback saves the model after every epoch. * *Learning Rate Scheduler*: Using this callback, you can schedule the learning rate to change after every epoch/batch. For illustrative purposes, add a print callback to display the *learning rate* in the notebook. 
```
# Define the checkpoint directory to store the checkpoints
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")

# Function for decaying the learning rate.
# You can define any decay function you need.
def decay(epoch):
  if epoch < 3:
    return 1e-3
  elif epoch >= 3 and epoch < 7:
    return 1e-4
  else:
    return 1e-5

# Callback for printing the LR at the end of each epoch.
class PrintLR(tf.keras.callbacks.Callback):
  def on_epoch_end(self, epoch, logs=None):
    print('\nLearning rate for epoch {} is {}'.format(epoch + 1, model.optimizer.lr.numpy()))

callbacks = [
    tf.keras.callbacks.TensorBoard(log_dir='./logs'),
    tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix,
                                       save_weights_only=True),
    tf.keras.callbacks.LearningRateScheduler(decay),
    PrintLR()
]
```

## Train and evaluate

Now, train the model in the usual way, calling `fit` on the model and passing in the dataset created at the beginning of the tutorial. This step is the same whether you are distributing the training or not.

```
model.fit(train_dataset, epochs=12, callbacks=callbacks)
```

As you can see below, the checkpoints are being saved.

```
# check the checkpoint directory
!ls {checkpoint_dir}
```

To see how the model performs, load the latest checkpoint and call `evaluate` on the test data.

```
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))

eval_loss, eval_acc = model.evaluate(eval_dataset)

print('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))
```

To see the output, you can download and view the TensorBoard logs at the terminal.

```
$ tensorboard --logdir=path/to/log-directory
```

```
!ls -sh ./logs
```

## Export to SavedModel

Export the graph and the variables to the platform-agnostic SavedModel format. After your model is saved, you can load it with or without the scope.
```
path = 'saved_model/'

tf.keras.experimental.export_saved_model(model, path)
```

Load the model without `strategy.scope`.

```
unreplicated_model = tf.keras.experimental.load_from_saved_model(path)

unreplicated_model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer=tf.keras.optimizers.Adam(),
    metrics=['accuracy'])

eval_loss, eval_acc = unreplicated_model.evaluate(eval_dataset)
print('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))
```

Load the model with `strategy.scope`.

```
with strategy.scope():
  replicated_model = tf.keras.experimental.load_from_saved_model(path)
  replicated_model.compile(loss='sparse_categorical_crossentropy',
                           optimizer=tf.keras.optimizers.Adam(),
                           metrics=['accuracy'])

  eval_loss, eval_acc = replicated_model.evaluate(eval_dataset)
  print('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))
```

### Examples and Tutorials

Here are some examples for using distribution strategy with Keras `fit`/`compile`:

1. [Transformer](https://github.com/tensorflow/models/blob/master/official/transformer/v2/transformer_main.py) example trained using `tf.distribute.MirroredStrategy`.
2. [NCF](https://github.com/tensorflow/models/blob/master/official/recommendation/ncf_keras_main.py) example trained using `tf.distribute.MirroredStrategy`.

More examples are listed in the [Distribution strategy guide](../../guide/distribute_strategy.ipynb#examples_and_tutorials).

## Next steps

* Read the [distribution strategy guide](../../guide/distribute_strategy.ipynb).
* Read the [Distributed Training with Custom Training Loops](training_loops.ipynb) tutorial.

Note: `tf.distribute.Strategy` is actively under development and we will be adding more examples and tutorials in the near future. Please give it a try. We welcome your feedback via [issues on GitHub](https://github.com/tensorflow/tensorflow/issues/new).
# Character level language model - Dinosaurus Island Welcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go berserk, so choose wisely! <table> <td> <img src="images/dino.jpg" style="width:250;height:300px;"> </td> </table> Luckily you have learned some deep learning and you will use it to save the day. Your assistant has collected a list of all the dinosaur names they could find, and compiled them into this [dataset](dinos.txt). (Feel free to take a look by clicking the previous link.) To create new dinosaur names, you will build a character level language model to generate new names. Your algorithm will learn the different name patterns, and randomly generate new names. Hopefully this algorithm will keep you and your team safe from the dinosaurs' wrath! By completing this assignment you will learn: - How to store text data for processing using an RNN - How to synthesize data, by sampling predictions at each time step and passing it to the next RNN-cell unit - How to build a character-level text generation recurrent neural network - Why clipping the gradients is important We will begin by loading in some functions that we have provided for you in `rnn_utils`. Specifically, you have access to functions such as `rnn_forward` and `rnn_backward` which are equivalent to those you've implemented in the previous assignment. ## <font color='darkblue'>Updates</font> #### If you were working on the notebook before this update... * The current notebook is version "3a". * You can find your original work saved in the notebook with the previous version name ("v3") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. 
#### List of updates * Sort and print `chars` list of characters. * Import and use pretty print * `clip`: - Additional details on why we need to use the "out" parameter. - Modified for loop to have students fill in the correct items to loop through. - Added a test case to check for hard-coding error. * `sample` - additional hints added to steps 1,2,3,4. - "Using 2D arrays instead of 1D arrays". - explanation of numpy.ravel(). - fixed expected output. - clarified comments in the code. * "training the model" - Replaced the sample code with explanations for how to set the index, X and Y (for a better learning experience). * Spelling, grammar and wording corrections. ``` import numpy as np from utils import * import random import pprint ``` ## 1 - Problem Statement ### 1.1 - Dataset and Preprocessing Run the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size. ``` data = open('dinos.txt', 'r').read() data= data.lower() chars = list(set(data)) data_size, vocab_size = len(data), len(chars) print('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size)) ``` * The characters are a-z (26 characters) plus the "\n" (or newline character). * In this assignment, the newline character "\n" plays a role similar to the `<EOS>` (or "End of sentence") token we had discussed in lecture. - Here, "\n" indicates the end of the dinosaur name rather than the end of a sentence. * `char_to_ix`: In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26. * `ix_to_char`: We also create a second python dictionary that maps each index back to the corresponding character. - This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer. 
``` chars = sorted(chars) print(chars) char_to_ix = { ch:i for i,ch in enumerate(chars) } ix_to_char = { i:ch for i,ch in enumerate(chars) } pp = pprint.PrettyPrinter(indent=4) pp.pprint(ix_to_char) ``` ### 1.2 - Overview of the model Your model will have the following structure: - Initialize parameters - Run the optimization loop - Forward propagation to compute the loss function - Backward propagation to compute the gradients with respect to the loss function - Clip the gradients to avoid exploding gradients - Using the gradients, update your parameters with the gradient descent update rule. - Return the learned parameters <img src="images/rnn.png" style="width:450;height:300px;"> <caption><center> **Figure 1**: Recurrent Neural Network, similar to what you had built in the previous notebook "Building a Recurrent Neural Network - Step by Step". </center></caption> * At each time-step, the RNN tries to predict what is the next character given the previous characters. * The dataset $\mathbf{X} = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is a list of characters in the training set. * $\mathbf{Y} = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$ is the same list of characters but shifted one character forward. * At every time-step $t$, $y^{\langle t \rangle} = x^{\langle t+1 \rangle}$. The prediction at time $t$ is the same as the input at time $t + 1$. ## 2 - Building blocks of the model In this part, you will build two important blocks of the overall model: - Gradient clipping: to avoid exploding gradients - Sampling: a technique used to generate characters You will then apply these two functions to build the model. ### 2.1 - Clipping the gradients in the optimization loop In this section you will implement the `clip` function that you will call inside of your optimization loop. #### Exploding gradients * When gradients are very large, they're called "exploding gradients." 
* Exploding gradients make the training process more difficult, because the updates may be so large that they "overshoot" the optimal values during back propagation.

Recall that your overall loop structure usually consists of:
* forward pass,
* cost computation,
* backward pass,
* parameter update.

Before updating the parameters, you will perform gradient clipping to make sure that your gradients are not "exploding."

#### Gradient clipping

In the exercise below, you will implement a function `clip` that takes in a dictionary of gradients and returns a clipped version of the gradients if needed.

* There are different ways to clip gradients.
* We will use a simple element-wise clipping procedure, in which every element of the gradient vector is clipped to lie within some range [-N, N].
* For example, if N=10:
    - The range is [-10, 10].
    - If any component of the gradient vector is greater than 10, it is set to 10.
    - If any component of the gradient vector is less than -10, it is set to -10.
    - If any components are between -10 and 10, they keep their original values.

<img src="images/clip.png" style="width:400;height:150px;">
<caption><center> **Figure 2**: Visualization of gradient descent with and without gradient clipping, in a case where the network is running into "exploding gradient" problems. </center></caption>

**Exercise**: Implement the function below to return the clipped gradients of your dictionary `gradients`.
* Your function takes in a maximum threshold and returns the clipped versions of the gradients.
* You can check out [numpy.clip](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.clip.html).
    - You will need to use the argument "`out = ...`".
    - Using the "`out`" parameter allows you to update a variable "in-place".
    - If you don't use the "`out`" argument, the clipped values are stored in a new array, and the gradient variables `dWax`, `dWaa`, `dWya`, `db`, `dby` are not updated.
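To make the effect of the `out` argument concrete, here is a minimal sketch; the array `grad` is just an illustrative stand-in for one of your gradient arrays, not part of the assignment:

```python
import numpy as np

grad = np.array([-12.0, 3.0, 15.0])

# Without out=..., np.clip returns a NEW array and leaves grad unchanged
clipped = np.clip(grad, -10, 10)
print(clipped.tolist())  # [-10.0, 3.0, 10.0]
print(grad.tolist())     # [-12.0, 3.0, 15.0] -- not modified!

# With out=grad, the clipping is done in-place, modifying grad itself
np.clip(grad, -10, 10, out=grad)
print(grad.tolist())     # [-10.0, 3.0, 10.0]
```

This is why the exercise asks you to pass `out=gradient` inside the loop: it ensures the arrays stored in the `gradients` dictionary are themselves updated.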
``` ### GRADED FUNCTION: clip def clip(gradients, maxValue): ''' Clips the gradients' values between minimum and maximum. Arguments: gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby" maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue Returns: gradients -- a dictionary with the clipped gradients. ''' dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby'] ### START CODE HERE ### # clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines) for gradient in [dWax, dWaa, dWya, db, dby]: np.clip(gradient, -maxValue, maxValue, out=gradient) ### END CODE HERE ### gradients = {"dWaa": dWaa, "dWax": dWax, "dWya": dWya, "db": db, "dby": dby} return gradients # Test with a maxvalue of 10 maxValue = 10 np.random.seed(3) dWax = np.random.randn(5,3)*10 dWaa = np.random.randn(5,5)*10 dWya = np.random.randn(2,5)*10 db = np.random.randn(5,1)*10 dby = np.random.randn(2,1)*10 gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby} gradients = clip(gradients, maxValue) print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2]) print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1]) print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2]) print("gradients[\"db\"][4] =", gradients["db"][4]) print("gradients[\"dby\"][1] =", gradients["dby"][1]) ``` ** Expected output:** ```Python gradients["dWaa"][1][2] = 10.0 gradients["dWax"][3][1] = -10.0 gradients["dWya"][1][2] = 0.29713815361 gradients["db"][4] = [ 10.] 
gradients["dby"][1] = [ 8.45833407] ``` ``` # Test with a maxValue of 5 maxValue = 5 np.random.seed(3) dWax = np.random.randn(5,3)*10 dWaa = np.random.randn(5,5)*10 dWya = np.random.randn(2,5)*10 db = np.random.randn(5,1)*10 dby = np.random.randn(2,1)*10 gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby} gradients = clip(gradients, maxValue) print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2]) print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1]) print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2]) print("gradients[\"db\"][4] =", gradients["db"][4]) print("gradients[\"dby\"][1] =", gradients["dby"][1]) ``` ** Expected Output: ** ```Python gradients["dWaa"][1][2] = 5.0 gradients["dWax"][3][1] = -5.0 gradients["dWya"][1][2] = 0.29713815361 gradients["db"][4] = [ 5.] gradients["dby"][1] = [ 5.] ``` ### 2.2 - Sampling Now assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below: <img src="images/dinos3.png" style="width:500;height:300px;"> <caption><center> **Figure 3**: In this picture, we assume the model is already trained. We pass in $x^{\langle 1\rangle} = \vec{0}$ at the first time step, and have the network sample one character at a time. </center></caption> **Exercise**: Implement the `sample` function below to sample characters. You need to carry out 4 steps: - **Step 1**: Input the "dummy" vector of zeros $x^{\langle 1 \rangle} = \vec{0}$. - This is the default input before we've generated any characters. We also set $a^{\langle 0 \rangle} = \vec{0}$ - **Step 2**: Run one step of forward propagation to get $a^{\langle 1 \rangle}$ and $\hat{y}^{\langle 1 \rangle}$. 
Here are the equations:

hidden state:
$$ a^{\langle t+1 \rangle} = \tanh(W_{ax} x^{\langle t+1 \rangle } + W_{aa} a^{\langle t \rangle } + b)\tag{1}$$

activation:
$$ z^{\langle t + 1 \rangle } = W_{ya} a^{\langle t + 1 \rangle } + b_y \tag{2}$$

prediction:
$$ \hat{y}^{\langle t+1 \rangle } = softmax(z^{\langle t + 1 \rangle })\tag{3}$$

- Details about $\hat{y}^{\langle t+1 \rangle }$:
   - Note that $\hat{y}^{\langle t+1 \rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1).
   - $\hat{y}^{\langle t+1 \rangle}_i$ represents the probability that the character indexed by "i" is the next character.
   - We have provided a `softmax()` function that you can use.

#### Additional Hints

- $x^{\langle 1 \rangle}$ is `x` in the code. When creating the one-hot vector, make a numpy array of zeros, with the number of rows equal to the number of unique characters, and the number of columns equal to one. It's a 2D and not a 1D array.
- $a^{\langle 0 \rangle}$ is `a_prev` in the code. It is a numpy array of zeros, where the number of rows is $n_{a}$, and the number of columns is 1. It is a 2D array as well. $n_{a}$ is retrieved by getting the number of columns in $W_{aa}$ (the numbers need to match in order for the matrix multiplication $W_{aa}a^{\langle t \rangle}$ to work).
- [numpy.dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html)
- [numpy.tanh](https://docs.scipy.org/doc/numpy/reference/generated/numpy.tanh.html)

#### Using 2D arrays instead of 1D arrays

* You may be wondering why we emphasize that $x^{\langle 1 \rangle}$ and $a^{\langle 0 \rangle}$ are 2D arrays and not 1D vectors.
* For matrix multiplication in numpy, if we multiply a 2D matrix with a 1D vector, we end up with a 1D array.
* This becomes a problem when we add two arrays where we expected them to have the same shape.
* When two arrays with a different number of dimensions are added together, Python "broadcasts" one across the other.
* Here is some sample code that shows the difference between using a 1D and 2D array. ``` import numpy as np matrix1 = np.array([[1,1],[2,2],[3,3]]) # (3,2) matrix2 = np.array([[0],[0],[0]]) # (3,1) vector1D = np.array([1,1]) # (2,) vector2D = np.array([[1],[1]]) # (2,1) print("matrix1 \n", matrix1,"\n") print("matrix2 \n", matrix2,"\n") print("vector1D \n", vector1D,"\n") print("vector2D \n", vector2D) print("Multiply 2D and 1D arrays: result is a 1D array\n", np.dot(matrix1,vector1D)) print("Multiply 2D and 2D arrays: result is a 2D array\n", np.dot(matrix1,vector2D)) print("Adding (3 x 1) vector to a (3 x 1) vector is a (3 x 1) vector\n", "This is what we want here!\n", np.dot(matrix1,vector2D) + matrix2) print("Adding a (3,) vector to a (3 x 1) vector\n", "broadcasts the 1D array across the second dimension\n", "Not what we want here!\n", np.dot(matrix1,vector1D) + matrix2 ) ``` - **Step 3**: Sampling: - Now that we have $y^{\langle t+1 \rangle}$, we want to select the next letter in the dinosaur name. If we select the most probable, the model will always generate the same result given a starting letter. - To make the results more interesting, we will use np.random.choice to select a next letter that is likely, but not always the same. - Sampling is the selection of a value from a group of values, where each value has a probability of being picked. - Sampling allows us to generate random sequences of values. - Pick the next character's index according to the probability distribution specified by $\hat{y}^{\langle t+1 \rangle }$. - This means that if $\hat{y}^{\langle t+1 \rangle }_i = 0.16$, you will pick the index "i" with 16% probability. - You can use [np.random.choice](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.choice.html). 
Example of how to use `np.random.choice()`:
```python
np.random.seed(0)
probs = np.array([0.1, 0.0, 0.7, 0.2])
idx = np.random.choice([0, 1, 2, 3], p=probs)
```
- This means that you will pick the index (`idx`) according to the distribution:
$P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$.
- Note that the value passed to `p` should be a 1D vector.
- Also notice that $\hat{y}^{\langle t+1 \rangle}$, which is `y` in the code, is a 2D array.

##### Additional Hints
- [range](https://docs.python.org/3/library/functions.html#func-range)
- [numpy.ravel](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel.html) takes a multi-dimensional array and returns its contents inside of a 1D vector.
```Python
arr = np.array([[1,2],[3,4]])
print("arr")
print(arr)
print("arr.ravel()")
print(arr.ravel())
```
Output:
```Python
arr
[[1 2]
 [3 4]]
arr.ravel()
[1 2 3 4]
```
- Note that `append` is an "in-place" operation. In other words, don't do this:
```Python
fun_hobbies = fun_hobbies.append('learning')  ## Doesn't give you what you want
```

- **Step 4**: Update to $x^{\langle t \rangle }$
    - The last step to implement in `sample()` is to update the variable `x`, which currently stores $x^{\langle t \rangle }$, with the value of $x^{\langle t + 1 \rangle }$.
    - You will represent $x^{\langle t + 1 \rangle }$ by creating a one-hot vector corresponding to the character that you have chosen as your prediction.
    - You will then forward propagate $x^{\langle t + 1 \rangle }$ in Step 1 and keep repeating the process until you get a "\n" character, indicating that you have reached the end of the dinosaur name.

##### Additional Hints
- In order to reset `x` before setting it to the new one-hot vector, you'll want to set all the values to zero.
- You can either create a new numpy array: [numpy.zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html) - Or fill all values with a single number: [numpy.ndarray.fill](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.fill.html) ``` # GRADED FUNCTION: sample def sample(parameters, char_to_ix, seed): """ Sample a sequence of characters according to a sequence of probability distributions output of the RNN Arguments: parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b. char_to_ix -- python dictionary mapping each character to an index. seed -- used for grading purposes. Do not worry about it. Returns: indices -- a list of length n containing the indices of the sampled characters. """ # Retrieve parameters and relevant shapes from "parameters" dictionary Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b'] vocab_size = by.shape[0] n_a = Waa.shape[1] ### START CODE HERE ### # Step 1: Create the a zero vector x that can be used as the one-hot vector # representing the first character (initializing the sequence generation). (≈1 line) x = np.zeros((vocab_size, 1)) # Step 1': Initialize a_prev as zeros (≈1 line) a_prev = np.zeros((n_a, 1)) # Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line) indices = [] # idx is the index of the one-hot vector x that is set to 1 # All other positions in x are zero. # We will initialize idx to -1 idx = -1 # Loop over time-steps t. At each time-step: # sample a character from a probability distribution # and append its index (`idx`) to the list "indices". # We'll stop if we reach 50 characters # (which should be very unlikely with a well trained model). # Setting the maximum number of characters helps with debugging and prevents infinite loops. 
counter = 0 newline_character = char_to_ix['\n'] while (idx != newline_character and counter != 50): # Step 2: Forward propagate x using the equations (1), (2) and (3) a = np.tanh(np.dot(Wax, x) + np.dot(Waa, a_prev) + b) z = np.dot(Wya, a) + by y = softmax(z) # for grading purposes np.random.seed(counter + seed) # Step 3: Sample the index of a character within the vocabulary from the probability distribution y # (see additional hints above) idx = np.random.choice(list(range(vocab_size)), p=y.ravel()) # Append the index to "indices" indices.append(idx) # Step 4: Overwrite the input character as the one corresponding to the sampled index. x = np.zeros((vocab_size, 1)) x[idx] = 1 # Update "a_prev" to be "a" a_prev = a # for grading purposes seed += 1 counter +=1 ### END CODE HERE ### if (counter == 50): indices.append(char_to_ix['\n']) return indices np.random.seed(2) _, n_a = 20, 100 Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a) b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1) parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by} indices = sample(parameters, char_to_ix, 0) print("Sampling:") print("list of sampled indices:\n", indices) print("list of sampled characters:\n", [ix_to_char[i] for i in indices]) ``` ** Expected output:** ```Python Sampling: list of sampled indices: [12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 17, 24, 12, 13, 24, 0] list of sampled characters: ['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'q', 'x', 'l', 'm', 'x', '\n'] ``` * Please note that over time, if there are updates to the back-end of the Coursera platform (that may update the version of numpy), 
the actual list of sampled indices and sampled characters may change. * If you follow the instructions given above and get an output without errors, it's possible the routine is correct even if your output doesn't match the expected output. Submit your assignment to the grader to verify its correctness. ## 3 - Building the language model It is time to build the character-level language model for text generation. ### 3.1 - Gradient descent * In this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients). * You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent. As a reminder, here are the steps of a common optimization loop for an RNN: - Forward propagate through the RNN to compute the loss - Backward propagate through time to compute the gradients of the loss with respect to the parameters - Clip the gradients - Update the parameters using gradient descent **Exercise**: Implement the optimization process (one step of stochastic gradient descent). The following functions are provided: ```python def rnn_forward(X, Y, a_prev, parameters): """ Performs the forward propagation through the RNN and computes the cross-entropy loss. It returns the loss' value as well as a "cache" storing values to be used in backpropagation.""" .... return loss, cache def rnn_backward(X, Y, parameters, cache): """ Performs the backward propagation through time to compute the gradients of the loss with respect to the parameters. It returns also all the hidden states.""" ... return gradients, a def update_parameters(parameters, gradients, learning_rate): """ Updates parameters using the Gradient Descent Update Rule.""" ... return parameters ``` Recall that you previously implemented the `clip` function: ```Python def clip(gradients, maxValue) """Clips the gradients' values between minimum and maximum.""" ... 
return gradients
```

#### Parameters

* Note that the weights and biases inside the `parameters` dictionary are being updated by the optimization, even though `parameters` is not one of the returned values of the `optimize` function. The `parameters` dictionary is passed by reference into the function, so changes made to this dictionary inside the function are visible outside the function as well.
* Python dictionaries and lists are "passed by reference", which means that if you pass a dictionary into a function and modify the dictionary within the function, this changes that same dictionary (it's not a copy of the dictionary).

```
# GRADED FUNCTION: optimize

def optimize(X, Y, a_prev, parameters, learning_rate = 0.01):
    """
    Execute one step of the optimization to train the model.

    Arguments:
    X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
    Y -- list of integers, exactly the same as X but shifted one index to the left.
    a_prev -- previous hidden state.
    parameters -- python dictionary containing:
                        Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
                        Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
                        Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                        b -- Bias, numpy array of shape (n_a, 1)
                        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
    learning_rate -- learning rate for the model.
Returns: loss -- value of the loss function (cross-entropy) gradients -- python dictionary containing: dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x) dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a) dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a) db -- Gradients of bias vector, of shape (n_a, 1) dby -- Gradients of output bias vector, of shape (n_y, 1) a[len(X)-1] -- the last hidden state, of shape (n_a, 1) """ ### START CODE HERE ### # Forward propagate through time (≈1 line) loss, cache = rnn_forward(X, Y, a_prev, parameters) # Backpropagate through time (≈1 line) gradients, a = rnn_backward(X, Y, parameters, cache) # Clip your gradients between -5 (min) and 5 (max) (≈1 line) gradients = clip(gradients, 5) # Update parameters (≈1 line) parameters = update_parameters(parameters, gradients, learning_rate) ### END CODE HERE ### return loss, gradients, a[len(X)-1] np.random.seed(1) vocab_size, n_a = 27, 100 a_prev = np.random.randn(n_a, 1) Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a) b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1) parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by} X = [12,3,5,11,22,3] Y = [4,14,11,22,25, 26] loss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01) print("Loss =", loss) print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2]) print("np.argmax(gradients[\"dWax\"]) =", np.argmax(gradients["dWax"])) print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2]) print("gradients[\"db\"][4] =", gradients["db"][4]) print("gradients[\"dby\"][1] =", gradients["dby"][1]) print("a_last[4] =", a_last[4]) ``` ** Expected output:** ```Python Loss = 126.503975722 gradients["dWaa"][1][2] = 0.194709315347 np.argmax(gradients["dWax"]) = 93 gradients["dWya"][1][2] = -0.007773876032 gradients["db"][4] = [-0.06809825] gradients["dby"][1] = [ 0.01538192] a_last[4] = [-1.] 
```

### 3.2 - Training the model

* Given the dataset of dinosaur names, we use each line of the dataset (one name) as one training example.
* Every 2000 iterations of stochastic gradient descent, you will sample 7 names to see how the algorithm is doing.
* Remember to shuffle the dataset, so that stochastic gradient descent visits the examples in random order.

**Exercise**: Follow the instructions below to implement `model()`, creating one training example (X, Y) from the dinosaur name (a string) stored in `examples[index]`.

##### Set the index `idx` into the list of examples

* Using the for-loop, walk through the shuffled list of dinosaur names in the list "examples".
* If there are 100 examples, and the for-loop increments the index to 100 onwards, think of how you would make the index cycle back to 0, so that we can continue feeding the examples into the model when j is 100, 101, etc.
* Hint: 101 divided by 100 is 1 with a remainder of 1, so `101 % 100` equals `1`.
* `%` is the modulus operator in python.

##### Extract a single example from the list of examples
* `single_example`: use the `idx` index that you set previously to get one word from the list of examples.

##### Convert a string into a list of characters: `single_example_chars`
* `single_example_chars`: A string is a sequence of characters.
* You can use a list comprehension (recommended over for-loops) to generate a list of characters.
```Python
sentence = 'I love learning'
list_of_chars = [c for c in sentence]
print(list_of_chars)
```

```
['I', ' ', 'l', 'o', 'v', 'e', ' ', 'l', 'e', 'a', 'r', 'n', 'i', 'n', 'g']
```

##### Convert the list of characters to a list of integers: `single_example_ix`
* Create a list that contains the index numbers associated with each character.
* Use the dictionary `char_to_ix`.
* You can combine this with the list comprehension that is used to get a list of characters from a string.
* This is a separate line of code below, to help learners clarify each step in the function.
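As a quick sketch of the index-cycling and string-conversion hints above, here is a toy example; `examples_demo` and `char_to_ix_demo` are hypothetical stand-ins (a three-character vocabulary, not the real 27-character one built earlier):

```python
examples_demo = ['aba', 'ba', 'ab']          # hypothetical tiny training set
char_to_ix_demo = {'\n': 0, 'a': 1, 'b': 2}  # hypothetical tiny vocabulary

# The modulus operator cycles the index back to 0 once j reaches len(examples)
cycled = [j % len(examples_demo) for j in range(5)]
print(cycled)  # [0, 1, 2, 0, 1]

# Turn one name into a list of characters, then into a list of integers
single_example = examples_demo[0]                  # 'aba'
single_example_chars = [c for c in single_example] # ['a', 'b', 'a']
single_example_ix = [char_to_ix_demo[c] for c in single_example_chars]
print(single_example_ix)  # [1, 2, 1]
```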
##### Create the list of input characters: `X` * `rnn_forward` uses the `None` value as a flag to set the input vector as a zero-vector. * Prepend the `None` value in front of the list of input characters. * There is more than one way to prepend a value to a list. One way is to add two lists together: `['a'] + ['b']` ##### Get the integer representation of the newline character `ix_newline` * `ix_newline`: The newline character signals the end of the dinosaur name. - get the integer representation of the newline character `'\n'`. - Use `char_to_ix` ##### Set the list of labels (integer representation of the characters): `Y` * The goal is to train the RNN to predict the next letter in the name, so the labels are the list of characters that are one time step ahead of the characters in the input `X`. - For example, `Y[0]` contains the same value as `X[1]` * The RNN should predict a newline at the last letter so add ix_newline to the end of the labels. - Append the integer representation of the newline character to the end of `Y`. - Note that `append` is an in-place operation. - It might be easier for you to add two lists together. ``` # GRADED FUNCTION: model def model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27): """ Trains the model and generates dinosaur names. Arguments: data -- text corpus ix_to_char -- dictionary that maps the index to a character char_to_ix -- dictionary that maps a character to an index num_iterations -- number of iterations to train the model for n_a -- number of units of the RNN cell dino_names -- number of dinosaur names you want to sample at each iteration. 
    vocab_size -- number of unique characters found in the text (size of the vocabulary)

    Returns:
    parameters -- learned parameters
    """

    # Retrieve n_x and n_y from vocab_size
    n_x, n_y = vocab_size, vocab_size

    # Initialize parameters
    parameters = initialize_parameters(n_a, n_x, n_y)

    # Initialize loss (this is required because we want to smooth our loss)
    loss = get_initial_loss(vocab_size, dino_names)

    # Build list of all dinosaur names (training examples).
    with open("dinos.txt") as f:
        examples = f.readlines()
    examples = [x.lower().strip() for x in examples]

    # Shuffle list of all dinosaur names
    np.random.seed(0)
    np.random.shuffle(examples)

    # Initialize the hidden state of your RNN
    a_prev = np.zeros((n_a, 1))

    # Optimization loop
    for j in range(num_iterations):

        ### START CODE HERE ###

        # Use the hint above to define one training example (X,Y) (≈ 2 lines)
        index = j % len(examples)
        X = [None] + [char_to_ix[ch] for ch in examples[index]]
        Y = X[1:] + [char_to_ix["\n"]]

        # Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters
        # Choose a learning rate of 0.01
        curr_loss, gradients, a_prev = optimize(X, Y, a_prev, parameters)

        ### END CODE HERE ###

        # Smooth the loss with a running average; this keeps the printed learning curve readable.
        loss = smooth(loss, curr_loss)

        # Every 2000 iterations, generate "n" characters with sample() to check that the model is learning properly
        if j % 2000 == 0:

            print('Iteration: %d, Loss: %f' % (j, loss) + '\n')

            # The number of dinosaur names to print
            seed = 0
            for name in range(dino_names):

                # Sample indices and print them
                sampled_indices = sample(parameters, char_to_ix, seed)
                print_sample(sampled_indices, ix_to_char)

                seed += 1  # To get the same result (for grading purposes), increment the seed by one.

            print('\n')

    return parameters
```

Run the following cell; you should observe your model outputting random-looking characters at the first iteration.
After a few thousand iterations, your model should learn to generate reasonable-looking names.

```
parameters = model(data, ix_to_char, char_to_ix)
```

**Expected Output**

The output of your model may look different, but it will look something like this:

```Python
Iteration: 34000, Loss: 22.447230

Onyxipaledisons
Kiabaeropa
Lussiamang
Pacaeptabalsaurus
Xosalong
Eiacoteg
Troia
```

## Conclusion

You can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implementation generated some really cool names like `maconucon`, `marloralus` and `macingsersaurus`. Your model hopefully also learned that dinosaur names tend to end in `saurus`, `don`, `aura`, `tor`, etc. If your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, `dromaeosauroides` is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest!

This assignment used a relatively small dataset, so that you could train an RNN quickly on a CPU. Training a model of the English language requires a much bigger dataset, usually needs much more computation, and could run for many hours on GPUs. We ran our dinosaur name model for quite some time, and so far our favorite name is the great, undefeatable, and fierce: Mangosaurus!

<img src="images/mangosaurus.jpeg" style="width:250px;height:300px;">

## 4 - Writing like Shakespeare

The rest of this notebook is optional and is not graded, but we hope you'll do it anyway since it's quite fun and informative. A similar (but more complicated) task is to generate Shakespeare poems.
Instead of learning from a dataset of dinosaur names, you can use a collection of Shakespearian poems. Using LSTM cells, you can learn longer-term dependencies that span many characters in the text--e.g., where a character appearing somewhere in a sequence can influence what should be a different character much later in the sequence. These long-term dependencies were less important with dinosaur names, since the names were quite short.

<img src="images/shakespeare.jpg" style="width:500px;height:400px;">
<caption><center> Let's become poets! </center></caption>

We have implemented a Shakespeare poem generator with Keras. Run the following cell to load the required packages and models. This may take a few minutes.

```
from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from shakespeare_utils import *
import sys
import io
```

To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called [*"The Sonnets"*](shakespeare.txt).

Let's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run `generate_output`, which will prompt you for an input of fewer than 40 characters. The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try "Forsooth this maketh no sense " (don't enter the quotation marks). Depending on whether you include the space at the end, your results might also differ--try it both ways, and try other inputs as well.
```
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)

model.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])

# Run this cell to try with different inputs without having to re-train the model
generate_output()
```

The RNN-Shakespeare model is very similar to the one you have built for dinosaur names. The only major differences are:
- LSTMs instead of the basic RNN to capture longer-range dependencies
- The model is a deeper, stacked LSTM model (2 layers)
- Using Keras instead of hand-written numpy code to simplify the implementation

If you want to learn more, you can also check out the Keras Team's text generation implementation on GitHub: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py.

Congratulations on finishing this notebook!

**References**:
- This exercise took inspiration from Andrej Karpathy's implementation: https://gist.github.com/karpathy/d4dee566867f8291f086. To learn more about text generation, also check out Karpathy's [blog post](http://karpathy.github.io/2015/05/21/rnn-effectiveness/).
- For the Shakespearian poem generator, our implementation was based on the implementation of an LSTM text generator by the Keras team: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py
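Character-level generators like these typically pick the next character by *sampling* from the softmax distribution over the vocabulary rather than always taking the argmax. A minimal numpy sketch of that idea, with hypothetical scores for a 3-character vocabulary (this is not the graded `sample()` function or the Keras callback itself):

```python
import numpy as np

np.random.seed(0)

# Hypothetical unnormalized scores for a 3-character vocabulary.
logits = np.array([2.0, 1.0, 0.1])

# Softmax turns the scores into a valid probability distribution.
probs = np.exp(logits) / np.sum(np.exp(logits))

# Draw the next character index at random, weighted by those probabilities.
idx = np.random.choice(len(probs), p=probs)
print(probs, idx)
```

Sampling (instead of taking the argmax) is what makes each generated name or poem different from run to run.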
```
import pandas as pd
import os
import networkx as nx
import numpy as np

pd.__version__
```

# Defining the different data paths

```
# Data path containing all of the raw and processed data
data = '../Data/'

# Result path containing all the results from the analysis
resultpath = '../Results/'

# ID for the PPI, indicating which year it's from
PPI_ID = "2018_08"
```

# Loading all the tables

## Loading the raw counts

```
# Load the primary raw screen counts
raw_hipo_fec = pd.read_csv(os.path.join(data,'Screen', 'Raw', 'Raw_EggLaying_HpoRNAi.csv'))
raw_hipo_ova = pd.read_csv(os.path.join(data,'Screen', 'Raw', 'Raw_Ova_HpoRNAi.csv'))
raw_xRNAi_fec = pd.read_csv(os.path.join(data,'Screen', 'Raw', 'Raw_EggLaying.csv'))

# Load the prediction results
raw_hipo_fec_pred = pd.read_csv(os.path.join(data,'Screen', 'Raw', 'Raw_EggLaying_HpoRNAi_Pred.csv'))
raw_hipo_ova_pred = pd.read_csv(os.path.join(data,'Screen', 'Raw', 'Raw_Ova_HpoRNAi_Pred.csv'))
raw_xRNAi_fec_pred = pd.read_csv(os.path.join(data,'Screen', 'Raw', 'Raw_EggLaying_Pred.csv'))

# Remove Control from dataset
raw_hipo_fec = raw_hipo_fec[raw_hipo_fec['FbID'] != 'Control']
raw_hipo_ova = raw_hipo_ova[raw_hipo_ova['FbID'] != 'Control']
raw_xRNAi_fec = raw_xRNAi_fec[raw_xRNAi_fec['FbID'] != 'Control']

# Same on the prediction dataset
raw_hipo_fec_pred = raw_hipo_fec_pred[raw_hipo_fec_pred['FbID'] != 'Control']
raw_hipo_ova_pred = raw_hipo_ova_pred[raw_hipo_ova_pred['FbID'] != 'Control']
raw_xRNAi_fec_pred = raw_xRNAi_fec_pred[raw_xRNAi_fec_pred['FbID'] != 'Control']

# # Append the prediction count to the primary screen results
# raw_hipo_fec = raw_hipo_fec.append(raw_hipo_fec_pred)
# raw_hipo_ova = raw_hipo_ova.append(raw_hipo_ova_pred)
# raw_xRNAi_fec = raw_xRNAi_fec.append(raw_xRNAi_fec_pred)

# # Remove any duplicate
# raw_hipo_fec.drop_duplicates(inplace=True)
# raw_hipo_ova.drop_duplicates(inplace=True)
# raw_xRNAi_fec.drop_duplicates(inplace=True)

hipo_ova = pd.read_csv(os.path.join(data,'Screen', 'hipo_ova_clean.csv'))
xRNAi_fec = pd.read_csv(os.path.join(data,'Screen', 'xRNAi_fec_clean.csv')) hipo_fec = pd.read_csv(os.path.join(data,'Screen', 'hipo_fec_clean.csv')) hipo_ova_pred = pd.read_csv(os.path.join(data,'Screen', 'hipo_ova_clean_pred.csv')) xRNAi_fec_pred = pd.read_csv(os.path.join(data,'Screen', 'xRNAi_fec_clean_pred.csv')) hipo_fec_pred = pd.read_csv(os.path.join(data,'Screen', 'hipo_fec_clean_pred.csv')) hipo_ova = hipo_ova[hipo_ova['FbID'] != 'Control'] xRNAi_fec = xRNAi_fec[xRNAi_fec['FbID'] != 'Control'] hipo_fec = hipo_fec[hipo_fec['FbID'] != 'Control'] hipo_ova_pred = hipo_ova_pred[hipo_ova_pred['FbID'] != 'Control'] xRNAi_fec_pred = xRNAi_fec_pred[xRNAi_fec_pred['FbID'] != 'Control'] hipo_fec_pred = hipo_fec_pred[hipo_fec_pred['FbID'] != 'Control'] # Calculate the mean for all datasets mean_ova_gene = hipo_ova.groupby(['FbID'], as_index=False).mean() mean_fec_gene = hipo_fec.groupby(['FbID', 'Condition'], as_index=False).mean() mean_xRNAi_gene = xRNAi_fec.groupby(['FbID', 'Condition'], as_index=False).mean() mean_ova_gene_pred = hipo_ova_pred.groupby(['FbID'], as_index=False).mean() mean_fec_gene_pred = hipo_fec_pred.groupby(['FbID', 'Condition'], as_index=False).mean() mean_xRNAi_gene_pred = xRNAi_fec_pred.groupby(['FbID', 'Condition'], as_index=False).mean() # Calculate the std for ovariole number (only because the other datasets have only 1 measurement) std_ova_gene = hipo_ova.groupby(['FbID']).std().reset_index() std_ova_gene_pred = hipo_ova_pred.groupby(['FbID']).std().reset_index() # mean_ova_gene = mean_ova_gene.append(mean_ova_gene_pred) # mean_fec_gene = mean_fec_gene.append(mean_fec_gene_pred) # mean_xRNAi_gene = mean_xRNAi_gene.append(mean_xRNAi_gene_pred) # Here we select all the genes that were tested in the screen, # because the first screen was Hipo RNAi EggLaying measurement, this dataset contains all the tested genes screen_genes = mean_fec_gene['FbID'].unique() screen_genes_pred = list(mean_fec_gene_pred['FbID'].unique()) modules = 
pd.read_csv(os.path.join(resultpath, "Modules_Table_{}.csv".format(PPI_ID)))
for fbid in modules[modules['SeedStatus'].str.contains('Connector')]['FbID'].values:
    if fbid not in screen_genes_pred:
        screen_genes_pred.append(fbid)

assert(len(mean_fec_gene['FbID'].unique()) == 463)
assert(len(mean_fec_gene_pred['FbID'].unique()) == 42)
```

## Loading gene names

```
names = pd.read_table(os.path.join(data,'GeneName.csv'))
```

## Loading the signaling pathway metadata

```
signaling = pd.read_csv(os.path.join(data,'signaling.csv'))
```

## Loading the PPI network

```
G = nx.read_graphml(os.path.join(data, 'PPIs', 'PPI_{}.graphml'.format(PPI_ID)))
```

## Loading the network modules

```
# Modules computed in the notebook file: Seed-Connector
ova_module_G = nx.read_graphml(os.path.join(resultpath,'Ova_module_{}.graphml'.format(PPI_ID)))
fec_module_G = nx.read_graphml(os.path.join(resultpath,'Hpo_EggL_module_{}.graphml'.format(PPI_ID)))
xRNAi_module_G = nx.read_graphml(os.path.join(resultpath,'EggL_module_{}.graphml'.format(PPI_ID)))
core_module_G = nx.read_graphml(os.path.join(resultpath,'Core_module_{}.graphml'.format(PPI_ID)))

# The list of connector genes
connectors = pd.read_csv(os.path.join(resultpath,"ConnectorGeneList_{}.csv".format(PPI_ID)))

# Grab the list of genes for each module
ova_module = ova_module_G.nodes()
fec_module = fec_module_G.nodes()
xRNAi_module = xRNAi_module_G.nodes()
core_module = core_module_G.nodes()
```

## Loading the network metrics

```
betweenness = pd.read_csv(os.path.join(data, "ScreenPPI_Betweenness.csv"))
closeness = pd.read_csv(os.path.join(data, "ScreenPPI_Closeness.csv"))
eigenvector = pd.read_csv(os.path.join(data, "ScreenPPI_Eigenvector.csv"))
degrees_cen = pd.read_csv(os.path.join(data, "ScreenPPI_DegreeCentrality.csv"))
```

# Creating the table

## Step 1: Make the list of genes

```
table = pd.DataFrame(screen_genes, columns=['FbID'])
table_pred = pd.DataFrame(screen_genes_pred, columns=['FbID'])

table = 
table.merge(raw_hipo_fec[['FbID', 'Condition']], how='left', on='FbID')
table = table.rename(columns={'Condition':'CG number'})

table_pred = table_pred.merge(raw_hipo_fec_pred[['FbID', 'Condition']], how='left', on='FbID')
table_pred = table_pred.rename(columns={'Condition':'CG number'})

table = table.merge(names, how='left', on='FbID')
table_pred = table_pred.merge(names, how='left', on='FbID')
```

## Step 2: Add the screen data for each gene

### Hippo RNAi Egg Laying

```
# Hippo RNAi Egg Laying screen egg counts
# First we merge the existing table using the FbID column for 1-to-1 matching.
# Then we rename that column to a unique name in the global database table.
# Rinse and repeat for all values.
table = table.merge(mean_fec_gene[mean_fec_gene['Condition'] == 'Day 1'][['FbID', 'Count', 'Z']], how='left', on='FbID')
table = table.rename(columns={'Z':'HippoRNAi_EggL_Day_1_Egg_Zscore', 'Count':'HippoRNAi_EggL_Day_1_Egg_Count'})

table = table.merge(mean_fec_gene[mean_fec_gene['Condition'] == 'Day 2 '][['FbID', 'Count', 'Z']], how='left', on='FbID')
table = table.rename(columns={'Z':'HippoRNAi_EggL_Day_2_Egg_Zscore', 'Count':'HippoRNAi_EggL_Day_2_Egg_Count'})

table = table.merge(mean_fec_gene[mean_fec_gene['Condition'] == 'Day 3'][['FbID', 'Count', 'Z']], how='left', on='FbID')
table = table.rename(columns={'Z':'HippoRNAi_EggL_Day_3_Egg_Zscore', 'Count':'HippoRNAi_EggL_Day_3_Egg_Count'})

table = table.merge(mean_fec_gene[mean_fec_gene['Condition'] == 'Day 4 '][['FbID', 'Count', 'Z']], how='left', on='FbID')
table = table.rename(columns={'Z':'HippoRNAi_EggL_Day_4_Egg_Zscore', 'Count':'HippoRNAi_EggL_Day_4_Egg_Count'})

table = table.merge(mean_fec_gene[mean_fec_gene['Condition'] == 'Day 5'][['FbID', 'Count', 'Z']], how='left', on='FbID')
table = table.rename(columns={'Z':'HippoRNAi_EggL_Day_5_Egg_Zscore', 'Count':'HippoRNAi_EggL_Day_5_Egg_Count'})

table = table.merge(mean_fec_gene[mean_fec_gene['Condition'] == 'Sum'][['FbID', 'Count', 'Z']], how='left', on='FbID')
table = table.rename(columns={'Z':'HippoRNAi_EggL_All_Days_Egg_Sum_Zscore', 'Count':'HippoRNAi_EggL_All_Days_Egg_Sum_Count'}) # table = table.merge(mean_fec_gene[mean_fec_gene['Condition'] == 'Sum'][['FbID', 'Batch']], how='left', on='FbID') # table = table.rename(columns={'Batch':'HippoRNAi_EggL_Batch'}) table_pred = table_pred.merge(mean_fec_gene_pred[mean_fec_gene_pred['Condition'] == 'Day 1'][['FbID', 'Count', 'Z']], how='left', on='FbID') table_pred = table_pred.rename(columns={'Z':'HippoRNAi_EggL_Day_1_Egg_Zscore', 'Count':'HippoRNAi_EggL_Day_1_Egg_Count'}) table_pred = table_pred.merge(mean_fec_gene_pred[mean_fec_gene_pred['Condition'] == 'Day 2 '][['FbID', 'Count', 'Z']], how='left', on='FbID') table_pred = table_pred.rename(columns={'Z':'HippoRNAi_EggL_Day_2_Egg_Zscore', 'Count':'HippoRNAi_EggL_Day_2_Egg_Count'}) table_pred = table_pred.merge(mean_fec_gene_pred[mean_fec_gene_pred['Condition'] == 'Day 3'][['FbID', 'Count', 'Z']], how='left', on='FbID') table_pred = table_pred.rename(columns={'Z':'HippoRNAi_EggL_Day_3_Egg_Zscore', 'Count':'HippoRNAi_EggL_Day_3_Egg_Count'}) table_pred = table_pred.merge(mean_fec_gene_pred[mean_fec_gene_pred['Condition'] == 'Day 4 '][['FbID', 'Count', 'Z']], how='left', on='FbID') table_pred = table_pred.rename(columns={'Z':'HippoRNAi_EggL_Day_4_Egg_Zscore', 'Count':'HippoRNAi_EggL_Day_4_Egg_Count'}) table_pred = table_pred.merge(mean_fec_gene_pred[mean_fec_gene_pred['Condition'] == 'Day 5'][['FbID', 'Count', 'Z']], how='left', on='FbID') table_pred = table_pred.rename(columns={'Z':'HippoRNAi_EggL_Day_5_Egg_Zscore', 'Count':'HippoRNAi_EggL_Day_5_Egg_Count'}) table_pred = table_pred.merge(mean_fec_gene_pred[mean_fec_gene_pred['Condition'] == 'Sum'][['FbID', 'Count', 'Z']], how='left', on='FbID') table_pred = table_pred.rename(columns={'Z':'HippoRNAi_EggL_All_Days_Egg_Sum_Zscore', 'Count':'HippoRNAi_EggL_All_Days_Egg_Sum_Count'}) # table_pred = table_pred.merge(mean_fec_gene_pred[mean_fec_gene_pred['Condition'] == 
'Sum'][['FbID', 'Batch']], how='left', on='FbID') # table_pred = table_pred.rename(columns={'Batch':'HippoRNAi_EggL_Batch'}) ``` ### Egg Laying ``` # Egg Laying screen egg counts # We use the same technic as above table = table.merge(mean_xRNAi_gene[mean_xRNAi_gene['Condition'] == 'Day 1'][['FbID', 'Count', 'Z']], how='left', on='FbID') table = table.rename(columns={'Z':'EggL_Day_1_Egg_Zscore', 'Count':'EggL_Day_1_Egg_Count'}) table = table.merge(mean_xRNAi_gene[mean_xRNAi_gene['Condition'] == 'Day 2 '][['FbID', 'Count', 'Z']], how='left', on='FbID') table = table.rename(columns={'Z':'EggL_Day_2_Egg_Zscore', 'Count':'EggL_Day_2_Egg_Count'}) table = table.merge(mean_xRNAi_gene[mean_xRNAi_gene['Condition'] == 'Day 3'][['FbID', 'Count', 'Z']], how='left', on='FbID') table = table.rename(columns={'Z':'EggL_Day_3_Egg_Zscore', 'Count':'EggL_Day_3_Egg_Count'}) table = table.merge(mean_xRNAi_gene[mean_xRNAi_gene['Condition'] == 'Day 4 '][['FbID', 'Count', 'Z']], how='left', on='FbID') table = table.rename(columns={'Z':'EggL_Day_4_Egg_Zscore', 'Count':'EggL_Day_4_Egg_Count'}) table = table.merge(mean_xRNAi_gene[mean_xRNAi_gene['Condition'] == 'Day 5'][['FbID', 'Count', 'Z']], how='left', on='FbID') table = table.rename(columns={'Z':'EggL_Day_5_Egg_Zscore', 'Count':'EggL_Day_5_Egg_Count'}) table = table.merge(mean_xRNAi_gene[mean_xRNAi_gene['Condition'] == 'Sum'][['FbID', 'Count', 'Z']], how='left', on='FbID') table = table.rename(columns={'Z':'EggL_All_Days_Egg_Sum_Zscore', 'Count':'EggL_All_Days_Egg_Sum_Count'}) # table = table.merge(mean_xRNAi_gene[mean_xRNAi_gene['Condition'] == 'Sum'][['FbID','Batch']], how='left', on='FbID') # table = table.rename(columns={'Batch':'EggL_Batch'}) # Egg Laying screen egg counts # We use the same technic as above table_pred = table_pred.merge(mean_xRNAi_gene_pred[mean_xRNAi_gene_pred['Condition'] == 'Day 1'][['FbID', 'Count', 'Z']], how='left', on='FbID') table_pred = table_pred.rename(columns={'Z':'EggL_Day_1_Egg_Zscore', 
'Count':'EggL_Day_1_Egg_Count'}) table_pred = table_pred.merge(mean_xRNAi_gene_pred[mean_xRNAi_gene_pred['Condition'] == 'Day 2 '][['FbID', 'Count', 'Z']], how='left', on='FbID') table_pred = table_pred.rename(columns={'Z':'EggL_Day_2_Egg_Zscore', 'Count':'EggL_Day_2_Egg_Count'}) table_pred = table_pred.merge(mean_xRNAi_gene_pred[mean_xRNAi_gene_pred['Condition'] == 'Day 3'][['FbID', 'Count', 'Z']], how='left', on='FbID') table_pred = table_pred.rename(columns={'Z':'EggL_Day_3_Egg_Zscore', 'Count':'EggL_Day_3_Egg_Count'}) table_pred = table_pred.merge(mean_xRNAi_gene_pred[mean_xRNAi_gene_pred['Condition'] == 'Day 4 '][['FbID', 'Count', 'Z']], how='left', on='FbID') table_pred = table_pred.rename(columns={'Z':'EggL_Day_4_Egg_Zscore', 'Count':'EggL_Day_4_Egg_Count'}) table_pred = table_pred.merge(mean_xRNAi_gene_pred[mean_xRNAi_gene_pred['Condition'] == 'Day 5'][['FbID', 'Count', 'Z']], how='left', on='FbID') table_pred = table_pred.rename(columns={'Z':'EggL_Day_5_Egg_Zscore', 'Count':'EggL_Day_5_Egg_Count'}) table_pred = table_pred.merge(mean_xRNAi_gene_pred[mean_xRNAi_gene_pred['Condition'] == 'Sum'][['FbID', 'Count', 'Z']], how='left', on='FbID') table_pred = table_pred.rename(columns={'Z':'EggL_All_Days_Egg_Sum_Zscore', 'Count':'EggL_All_Days_Egg_Sum_Count'}) # table_pred = table_pred.merge(mean_xRNAi_gene_pred[mean_xRNAi_gene_pred['Condition'] == 'Sum'][['FbID','Batch']], how='left', on='FbID') # table_pred = table_pred.rename(columns={'Batch':'EggL_Batch'}) ``` ### Ovariole Number ``` # Hippo RNAi Ovariole number counts screen # We can merge all the data directly because we do not have many conditions in this table table = table.merge(mean_ova_gene[['FbID', 'OvarioleNb', 'Z']], how='left', on='FbID') table = table.rename(columns={'Batch':'HippoRNAi_Ova_Batch', 'OvarioleNb':'HippoRNAi_Ova_OvarioleNb_Mean_Count', 'Z':'HippoRNAi_Ova_OvarioleNb_Mean_Zscore'}) table = table.merge(std_ova_gene[['FbID', 'OvarioleNb']], how='left', on='FbID') table = 
table.rename(columns={'OvarioleNb':'HippoRNAi_Ova_OvarioleNb_Std_Count'}) table_pred = table_pred.merge(mean_ova_gene_pred[['FbID', 'OvarioleNb', 'Z']], how='left', on='FbID') table_pred = table_pred.rename(columns={'Batch':'HippoRNAi_Ova_Batch', 'OvarioleNb':'HippoRNAi_Ova_OvarioleNb_Mean_Count', 'Z':'HippoRNAi_Ova_OvarioleNb_Mean_Zscore'}) table_pred = table_pred.merge(std_ova_gene_pred[['FbID', 'OvarioleNb']], how='left', on='FbID') table_pred = table_pred.rename(columns={'OvarioleNb':'HippoRNAi_Ova_OvarioleNb_Std_Count'}) # But we want to add the raw data to the table, so we extract the 20 columns from the raw counts raw = raw_hipo_ova[raw_hipo_ova['FbID'].notnull()][['Fly 1', 'Fly 1.1', 'Fly 2', 'Fly 2.1', 'Fly 3', 'Fly 3.1', 'Fly 4', 'Fly 4.1', 'Fly 5', 'Fly 5.1', 'Fly 6', 'Fly 6.1', 'Fly 7', 'Fly 7.1', 'Fly 8', 'Fly 8.1', 'Fly 9','Fly 9.1', 'Fly 10', 'Fly 10.1', 'FbID']].groupby(['FbID']).mean().reset_index() raw_pred = raw_hipo_ova_pred[raw_hipo_ova_pred['FbID'].notnull()][['Fly 1', 'Fly 1.1', 'Fly 2', 'Fly 2.1', 'Fly 3', 'Fly 3.1', 'Fly 4', 'Fly 4.1', 'Fly 5', 'Fly 5.1', 'Fly 6', 'Fly 6.1', 'Fly 7', 'Fly 7.1', 'Fly 8', 'Fly 8.1', 'Fly 9','Fly 9.1', 'Fly 10', 'Fly 10.1', 'FbID']].groupby(['FbID']).mean().reset_index() # We merge them table = table.merge(raw, how='left', on='FbID') table_pred = table_pred.merge(raw_pred, how='left', on='FbID') # And then we rename them to follow the naming scheme table = table.rename(columns={'Fly 1' : "HippoRNAi_Ova_OvarioleNb_Fly_1.1_Count", 'Fly 1.1' : "HippoRNAi_Ova_OvarioleNb_Fly_1.2_Count", 'Fly 2' : "HippoRNAi_Ova_OvarioleNb_Fly_2.1_Count", 'Fly 2.1' : "HippoRNAi_Ova_OvarioleNb_Fly_2.2_Count", 'Fly 3' : "HippoRNAi_Ova_OvarioleNb_Fly_3.1_Count", 'Fly 3.1' : "HippoRNAi_Ova_OvarioleNb_Fly_3.2_Count", 'Fly 4' : "HippoRNAi_Ova_OvarioleNb_Fly_4.1_Count", 'Fly 4.1' : "HippoRNAi_Ova_OvarioleNb_Fly_4.2_Count", 'Fly 5' : "HippoRNAi_Ova_OvarioleNb_Fly_5.1_Count", 'Fly 5.1' : "HippoRNAi_Ova_OvarioleNb_Fly_5.2_Count", 'Fly 6' : 
"HippoRNAi_Ova_OvarioleNb_Fly_6.1_Count", 'Fly 6.1' : "HippoRNAi_Ova_OvarioleNb_Fly_6.2_Count", 'Fly 7' : "HippoRNAi_Ova_OvarioleNb_Fly_7.1_Count", 'Fly 7.1' : "HippoRNAi_Ova_OvarioleNb_Fly_7.2_Count", 'Fly 8' : "HippoRNAi_Ova_OvarioleNb_Fly_8.1_Count", 'Fly 8.1' : "HippoRNAi_Ova_OvarioleNb_Fly_8.2_Count", 'Fly 9' : "HippoRNAi_Ova_OvarioleNb_Fly_9.1_Count", 'Fly 9.1' : "HippoRNAi_Ova_OvarioleNb_Fly_9.2_Count", 'Fly 10' : "HippoRNAi_Ova_OvarioleNb_Fly_10.1_Count", 'Fly 10.1': "HippoRNAi_Ova_OvarioleNb_Fly_10.2_Count" }) # And then we rename them to follow the naming scheme table_pred = table_pred.rename(columns={'Fly 1' : "HippoRNAi_Ova_OvarioleNb_Fly_1.1_Count", 'Fly 1.1' : "HippoRNAi_Ova_OvarioleNb_Fly_1.2_Count", 'Fly 2' : "HippoRNAi_Ova_OvarioleNb_Fly_2.1_Count", 'Fly 2.1' : "HippoRNAi_Ova_OvarioleNb_Fly_2.2_Count", 'Fly 3' : "HippoRNAi_Ova_OvarioleNb_Fly_3.1_Count", 'Fly 3.1' : "HippoRNAi_Ova_OvarioleNb_Fly_3.2_Count", 'Fly 4' : "HippoRNAi_Ova_OvarioleNb_Fly_4.1_Count", 'Fly 4.1' : "HippoRNAi_Ova_OvarioleNb_Fly_4.2_Count", 'Fly 5' : "HippoRNAi_Ova_OvarioleNb_Fly_5.1_Count", 'Fly 5.1' : "HippoRNAi_Ova_OvarioleNb_Fly_5.2_Count", 'Fly 6' : "HippoRNAi_Ova_OvarioleNb_Fly_6.1_Count", 'Fly 6.1' : "HippoRNAi_Ova_OvarioleNb_Fly_6.2_Count", 'Fly 7' : "HippoRNAi_Ova_OvarioleNb_Fly_7.1_Count", 'Fly 7.1' : "HippoRNAi_Ova_OvarioleNb_Fly_7.2_Count", 'Fly 8' : "HippoRNAi_Ova_OvarioleNb_Fly_8.1_Count", 'Fly 8.1' : "HippoRNAi_Ova_OvarioleNb_Fly_8.2_Count", 'Fly 9' : "HippoRNAi_Ova_OvarioleNb_Fly_9.1_Count", 'Fly 9.1' : "HippoRNAi_Ova_OvarioleNb_Fly_9.2_Count", 'Fly 10' : "HippoRNAi_Ova_OvarioleNb_Fly_10.1_Count", 'Fly 10.1': "HippoRNAi_Ova_OvarioleNb_Fly_10.2_Count" }) # Finally we finished entering the raw data into the table so we append both tables together prior to adding the metadata. 
table = table.append(table_pred)
```

## Step 3: Adding the network metrics

```
# Merge and rename as above
table = table.merge(betweenness, how='left', on='FbID')
table = table.rename(columns={'Betweeness':'PIN_betweenness_centrality'})

table = table.merge(closeness, how='left', on='FbID')
table = table.rename(columns={'Closeness':'PIN_closeness_centrality'})

table = table.merge(eigenvector, how='left', on='FbID')
table = table.rename(columns={'EigenVector':'PIN_eigenvector_centrality'})

table = table.merge(degrees_cen, how='left', on='FbID')
table = table.rename(columns={'DegreeC':'PIN_degree_centrality'})
```

## Step 4: Adding the network modules

```
# We create 4 columns: the value is 1 if a gene is found in a module, 0 if not
# np.where is key here: np.where(condition, X, Y) yields X where the condition is true, else Y
table['HippoRNAi_Ova_Module'] = np.where(table['FbID'].isin(ova_module), 1, 0)
table['HippoRNAi_EggL_Module'] = np.where(table['FbID'].isin(fec_module), 1, 0)
table['EggL_Module'] = np.where(table['FbID'].isin(xRNAi_module), 1, 0)
table['Core_Module'] = np.where(table['FbID'].isin(core_module), 1, 0)

meta_modules = np.array([np.where(table['FbID'].isin(ova_module), 1, 0),
                         np.where(table['FbID'].isin(fec_module), 1, 0),
                         np.where(table['FbID'].isin(xRNAi_module), 1, 0)])

modules = ["001","100","010","111","011","101","110","000"]
modules_leg = ["I","II","III","IV","V","VI","VII",""]

res = []
for el in meta_modules.T:
    i = modules.index(''.join([str(s) for s in el]))
    res.append(modules_leg[i])

table['Meta_Module'] = res

table.columns
```

## Step 5: Adding Signaling pathways

```
# We need to make this tidy data, so we add one column per signaling pathway, with a 0 or a 1
# We iterate over all signaling pathways and add a column for each with 1s and 0s using the same np.where technique as above
# But we first need to make the list of FbID that have this signaling pathway,
# that is: pathway_genes = signaling[signaling['Sig'] == pathway]['FbID']
for pathway in
signaling['Sig'].unique(): pathway_genes = signaling[signaling['Sig'] == pathway]['FbID'] table['{}_pathway'.format(pathway)] = np.where(table['FbID'].isin(pathway_genes), 1, 0) ``` ## Step 6: Adding Connector genes ``` table['HpoOva_Connector'] = table['FbID'].isin(connectors[connectors['Module'] == 'Ova']['FbID'].values) table['HpoEggL_Connector'] = table['FbID'].isin(connectors[connectors['Module'] == 'HpoFec']['FbID'].values) table['EggL_Connector'] = table['FbID'].isin(connectors[connectors['Module'] == 'xRNAiFec']['FbID'].values) table['Core_Connector'] = table['FbID'].isin(connectors[connectors['Module'] == 'Core']['FbID'].values) ``` ## Step 7: Cleaning up ``` final_order = ['FbID', 'CG number', 'NAME', 'SYMBOL', 'HippoRNAi_EggL_Day_1_Egg_Count', 'HippoRNAi_EggL_Day_1_Egg_Zscore', 'HippoRNAi_EggL_Day_2_Egg_Count', 'HippoRNAi_EggL_Day_2_Egg_Zscore', 'HippoRNAi_EggL_Day_3_Egg_Count', 'HippoRNAi_EggL_Day_3_Egg_Zscore', 'HippoRNAi_EggL_Day_4_Egg_Count', 'HippoRNAi_EggL_Day_4_Egg_Zscore', 'HippoRNAi_EggL_Day_5_Egg_Count', 'HippoRNAi_EggL_Day_5_Egg_Zscore', 'HippoRNAi_EggL_All_Days_Egg_Sum_Count', 'HippoRNAi_EggL_All_Days_Egg_Sum_Zscore', 'EggL_Day_1_Egg_Count', 'EggL_Day_1_Egg_Zscore', 'EggL_Day_2_Egg_Count', 'EggL_Day_2_Egg_Zscore', 'EggL_Day_3_Egg_Count', 'EggL_Day_3_Egg_Zscore', 'EggL_Day_4_Egg_Count', 'EggL_Day_4_Egg_Zscore', 'EggL_Day_5_Egg_Count', 'EggL_Day_5_Egg_Zscore', 'EggL_All_Days_Egg_Sum_Count', 'EggL_All_Days_Egg_Sum_Zscore', 'HippoRNAi_Ova_OvarioleNb_Fly_1.1_Count', 'HippoRNAi_Ova_OvarioleNb_Fly_1.2_Count', 'HippoRNAi_Ova_OvarioleNb_Fly_2.1_Count', 'HippoRNAi_Ova_OvarioleNb_Fly_2.2_Count', 'HippoRNAi_Ova_OvarioleNb_Fly_3.1_Count', 'HippoRNAi_Ova_OvarioleNb_Fly_3.2_Count', 'HippoRNAi_Ova_OvarioleNb_Fly_4.1_Count', 'HippoRNAi_Ova_OvarioleNb_Fly_4.2_Count', 'HippoRNAi_Ova_OvarioleNb_Fly_5.1_Count', 'HippoRNAi_Ova_OvarioleNb_Fly_5.2_Count', 'HippoRNAi_Ova_OvarioleNb_Fly_6.1_Count', 'HippoRNAi_Ova_OvarioleNb_Fly_6.2_Count', 
'HippoRNAi_Ova_OvarioleNb_Fly_7.1_Count', 'HippoRNAi_Ova_OvarioleNb_Fly_7.2_Count',
               'HippoRNAi_Ova_OvarioleNb_Fly_8.1_Count', 'HippoRNAi_Ova_OvarioleNb_Fly_8.2_Count',
               'HippoRNAi_Ova_OvarioleNb_Fly_9.1_Count', 'HippoRNAi_Ova_OvarioleNb_Fly_9.2_Count',
               'HippoRNAi_Ova_OvarioleNb_Fly_10.1_Count', 'HippoRNAi_Ova_OvarioleNb_Fly_10.2_Count',
               'HippoRNAi_Ova_OvarioleNb_Mean_Count', 'HippoRNAi_Ova_OvarioleNb_Std_Count',
               'HippoRNAi_Ova_OvarioleNb_Mean_Zscore',
               'PIN_betweenness_centrality', 'PIN_closeness_centrality',
               'PIN_eigenvector_centrality', 'PIN_degree_centrality',
               'HippoRNAi_Ova_Module', 'HippoRNAi_EggL_Module', 'EggL_Module', 'Core_Module', 'Meta_Module',
               'HpoOva_Connector', 'HpoEggL_Connector', 'EggL_Connector', 'Core_Connector',
               'FGF_pathway', 'VEGF_pathway', 'Toll_pathway', 'JNK_pathway', 'EGF_pathway',
               'mTOR_pathway', 'FOXO_pathway', 'SHH_pathway', 'Hippo_pathway', 'JAK/STAT_pathway',
               'Wnt_pathway', 'Notch_pathway', 'TGF B_pathway', 'MAPK_pathway']

table = table[final_order]

def round_centrality(x):
    return round(x, 10)

# The centrality columns are named 'PIN_...', so filter on that prefix
for col in [c for c in table.columns if 'PIN_' in c]:
    table[col] = table[col].apply(round_centrality)

for col in [c for c in table.columns if 'PIN_' in c]:
    table[col] = table[col].fillna('NotInPPI')

for col in [c for c in table.columns if '_Module' in c]:
    if not 'Meta' in col:
        PPIcheck = table['PIN_closeness_centrality'].values
        tmp = table[col].values.astype(str)
        for i in range(len(tmp)):
            if PPIcheck[i] == 'NotInPPI':
                tmp[i] = 'NotInPPI'
            else:
                if tmp[i] == '0':
                    tmp[i] = 'False'
                else:
                    tmp[i] = 'True'
        table[col] = tmp

for col in [c for c in table.columns if '_Connector' in c]:
    PPIcheck = table['PIN_closeness_centrality'].values
    tmp = table[col].values.astype(object)
    for i in range(len(tmp)):
        if PPIcheck[i] == 'NotInPPI':
            tmp[i] = "NotInPPI"
    table[col] = tmp

def round_zscore(x):
    return round(x, 4)

for col in [c for c in table.columns if '_Zscore' in c]:
    table[col] = table[col].apply(round_zscore)

# Adding the CG number for the missing connector
table.at[table[table['FbID'] == 'FBgn0027619'].index, 'CG number'] = "CG12131" ``` # Asserting that the data is correctly entered ``` # Test that the number of genes in the database is equal to the number of genes screened in the primary screen assert(len(table) == len(screen_genes) + len(screen_genes_pred)) # -1 to get rid of the control ID total_datapoints = len(screen_genes) + len(screen_genes_pred) # Test that the number of values in each screen corresponds to the raw table for each column # We iterate over all the columns # For each column we define a test # try the assertion # if wrong then print an error message for column in table.columns: if 'Module' in column: try: assert(len(table[table[column].notna()]) == total_datapoints) except AssertionError: print("Discrepancy in column: {}".format(column)) elif '_pathway' in column: try: assert(len(table[table[column].notna()]) == total_datapoints) except AssertionError: print("Discrepancy in column: {}".format(column)) elif 'HippoRNAi_EggL' in column: try: assert(len(table[table[column].notna()]) == total_datapoints - 1) except AssertionError: print("Discrepancy in column: {}".format(column)) elif 'EggL' in column and "Connector" not in column: try: assert(len(table[table[column].notna()]) == len(mean_xRNAi_gene['FbID'].unique()) + len(screen_genes_pred) - 1) except AssertionError: print("Discrepancy in column: {}".format(column)) elif 'HippoRNAi_Ova' in column: try: assert(len(table[table[column].notna()]) == len(mean_ova_gene['FbID'].unique()) + len(screen_genes_pred) - 1) except AssertionError: print("Discrepancy in column: {}".format(column)) elif 'Connector_Gene' == column: try: assert(len(table[table[column] == 1]) == len(connectors['FbID'].unique())) except AssertionError: print("Discrepancy in column: {}".format(column)) elif 'PIN_' in column: try: assert(len(table[table[column] != 'NotInPPI']) == len(table[table['FbID'].isin(G.nodes())])) except AssertionError: print("Discrepancy in column: {}".format(column)) elif "CG" in column: try: assert(len(table[table[column].notna()]) == total_datapoints) except AssertionError:
print("Discrepancy in column: {}".format(column)) ``` # Saving the table ``` table.to_csv(os.path.join(resultpath, "MasterTable.csv"), index=False) ```
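The column-validation pattern used above (iterate over columns, assert an expected non-null count, report any discrepancy) can be exercised in miniature. The sketch below uses invented toy data and a hypothetical `check_nonnull` helper, not the notebook's actual master table:

```python
# Toy stand-in for the master table: column name -> list of values (None = missing).
toy = {
    'EggL_Module': [1, 0, None],   # one missing value on purpose
    'FGF_pathway': [0, 1, 0],
}

def check_nonnull(table, column, expected):
    """Report a discrepancy when the non-missing count differs from `expected`."""
    try:
        assert sum(v is not None for v in table[column]) == expected
        return True
    except AssertionError:
        print("Discrepancy in column: {}".format(column))
        return False

results = {col: check_nonnull(toy, col, 3) for col in toy}
print(results)  # EggL_Module fails (only 2 non-missing values), FGF_pathway passes
```

Returning a boolean per column, rather than letting the assertion raise, keeps the loop running so every discrepancy is reported in one pass, which is the same reason the notebook wraps each `assert` in a `try`/`except`.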
Deep Learning Models -- A collection of various deep learning architectures, models, and tips for TensorFlow and PyTorch in Jupyter Notebooks. - Author: Sebastian Raschka - GitHub Repository: https://github.com/rasbt/deeplearning-models ``` %load_ext watermark %watermark -a 'Sebastian Raschka' -v -p torch ``` - Runs on CPU or GPU (if available) # Model Zoo -- Conditional Variational Autoencoder ## (with labels in reconstruction loss) A simple conditional variational autoencoder that compresses 784-pixel MNIST images down to a 35-dimensional latent vector representation. This implementation concatenates the inputs with the class labels when computing the reconstruction loss, as is commonly done in non-convolutional conditional variational autoencoders. This leads to slightly better results compared to the implementation that does NOT concatenate the labels with the inputs to compute the reconstruction loss. For reference, see the implementation [./autoencoder-cvae_no-out-concat.ipynb](./autoencoder-cvae_no-out-concat.ipynb) ## Imports ``` import time import numpy as np import torch import torch.nn.functional as F from torch.utils.data import DataLoader from torchvision import datasets from torchvision import transforms if torch.cuda.is_available(): torch.backends.cudnn.deterministic = True ########################## ### SETTINGS ########################## # Device device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu") print('Device:', device) # Hyperparameters random_seed = 0 learning_rate = 0.001 num_epochs = 50 batch_size = 128 # Architecture num_classes = 10 num_features = 784 num_hidden_1 = 500 num_latent = 35 ########################## ### MNIST DATASET ########################## # Note transforms.ToTensor() scales input images # to 0-1 range train_dataset = datasets.MNIST(root='data', train=True, transform=transforms.ToTensor(), download=True) test_dataset = datasets.MNIST(root='data', train=False, transform=transforms.ToTensor()) train_loader =
DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True) test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False) # Checking the dataset for images, labels in train_loader: print('Image batch dimensions:', images.shape) print('Image label dimensions:', labels.shape) break ``` ## Model ``` ########################## ### MODEL ########################## def to_onehot(labels, num_classes, device): labels_onehot = torch.zeros(labels.size()[0], num_classes).to(device) labels_onehot.scatter_(1, labels.view(-1, 1), 1) return labels_onehot class ConditionalVariationalAutoencoder(torch.nn.Module): def __init__(self, num_features, num_hidden_1, num_latent, num_classes): super(ConditionalVariationalAutoencoder, self).__init__() self.num_classes = num_classes ### ENCODER self.hidden_1 = torch.nn.Linear(num_features+num_classes, num_hidden_1) self.z_mean = torch.nn.Linear(num_hidden_1, num_latent) # in the original paper (Kingma & Welling, 2014), the latent # vector has a z_mean and a z_var component, but the problem is that # the z_var can be negative, which would cause issues # in the log later. Hence we assume that the latent vector # has a z_mean and a z_log_var component, and when we need # the regular variance or std_dev, we simply apply # an exponential function self.z_log_var = torch.nn.Linear(num_hidden_1, num_latent) ### DECODER self.linear_3 = torch.nn.Linear(num_latent+num_classes, num_hidden_1) self.linear_4 = torch.nn.Linear(num_hidden_1, num_features+num_classes) def reparameterize(self, z_mu, z_log_var): # Sample epsilon from standard normal distribution eps = torch.randn(z_mu.size(0), z_mu.size(1)).to(device) # note that log(x^2) = 2*log(x); hence divide by 2 to get std_dev # i.e., std_dev = exp(log(std_dev^2)/2) = exp(log(var)/2) z = z_mu + eps * torch.exp(z_log_var/2.)
return z def encoder(self, features, targets): ### Add condition onehot_targets = to_onehot(targets, self.num_classes, device) x = torch.cat((features, onehot_targets), dim=1) ### ENCODER x = self.hidden_1(x) x = F.leaky_relu(x) z_mean = self.z_mean(x) z_log_var = self.z_log_var(x) encoded = self.reparameterize(z_mean, z_log_var) return z_mean, z_log_var, encoded def decoder(self, encoded, targets): ### Add condition onehot_targets = to_onehot(targets, self.num_classes, device) encoded = torch.cat((encoded, onehot_targets), dim=1) ### DECODER x = self.linear_3(encoded) x = F.leaky_relu(x) x = self.linear_4(x) decoded = torch.sigmoid(x) return decoded def forward(self, features, targets): z_mean, z_log_var, encoded = self.encoder(features, targets) decoded = self.decoder(encoded, targets) return z_mean, z_log_var, encoded, decoded torch.manual_seed(random_seed) model = ConditionalVariationalAutoencoder(num_features, num_hidden_1, num_latent, num_classes) model = model.to(device) ########################## ### COST AND OPTIMIZER ########################## optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) ``` ## Training ``` start_time = time.time() for epoch in range(num_epochs): for batch_idx, (features, targets) in enumerate(train_loader): features = features.view(-1, 28*28).to(device) targets = targets.to(device) ### FORWARD AND BACK PROP z_mean, z_log_var, encoded, decoded = model(features, targets) # cost = reconstruction loss + Kullback-Leibler divergence kl_divergence = (0.5 * (z_mean**2 + torch.exp(z_log_var) - z_log_var - 1)).sum() # add condition x_con = torch.cat((features, to_onehot(targets, num_classes, device)), dim=1) pixelwise_bce = F.binary_cross_entropy(decoded, x_con, reduction='sum') cost = kl_divergence + pixelwise_bce optimizer.zero_grad() cost.backward() ### UPDATE MODEL PARAMETERS optimizer.step() ### LOGGING if not batch_idx % 50: print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f' %(epoch+1, num_epochs, batch_idx, 
len(train_loader), cost)) print('Time elapsed: %.2f min' % ((time.time() - start_time)/60)) print('Total Training Time: %.2f min' % ((time.time() - start_time)/60)) ``` # Evaluation ### Reconstruction ``` %matplotlib inline import matplotlib.pyplot as plt ########################## ### VISUALIZATION ########################## n_images = 15 image_width = 28 fig, axes = plt.subplots(nrows=2, ncols=n_images, sharex=True, sharey=True, figsize=(20, 2.5)) orig_images = features[:n_images] decoded_images = decoded[:n_images][:, :-num_classes] for i in range(n_images): for ax, img in zip(axes, [orig_images, decoded_images]): curr_img = img[i].detach().to(torch.device('cpu')) ax[i].imshow(curr_img.view((image_width, image_width)), cmap='binary') ``` ### New random-conditional images ``` for i in range(10): ########################## ### RANDOM SAMPLE ########################## labels = torch.tensor([i]*10).to(device) n_images = labels.size()[0] rand_features = torch.randn(n_images, num_latent).to(device) new_images = model.decoder(rand_features, labels) ########################## ### VISUALIZATION ########################## image_width = 28 fig, axes = plt.subplots(nrows=1, ncols=n_images, figsize=(10, 2.5), sharey=True) decoded_images = new_images[:n_images][:, :-num_classes] print('Class Label %d' % i) for ax, img in zip(axes, decoded_images): curr_img = img.detach().to(torch.device('cpu')) ax.imshow(curr_img.view((image_width, image_width)), cmap='binary') plt.show() %watermark -iv ```
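The `kl_divergence` term in the training loop is the closed-form KL divergence between the approximate posterior N(μ, σ²) and the standard normal prior, summed over latent dimensions: KL = ½ Σ (μ² + exp(log σ²) − log σ² − 1). A dependency-free sketch of the same formula (toy numbers, plain Python rather than PyTorch tensors):

```python
import math

def kl_standard_normal(z_mean, z_log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, 1) ), summed over dimensions.

    Mirrors the tensor expression used in the training loop:
    0.5 * (z_mean**2 + exp(z_log_var) - z_log_var - 1)
    """
    return 0.5 * sum(m * m + math.exp(lv) - lv - 1.0
                     for m, lv in zip(z_mean, z_log_var))

# When the posterior already equals the prior, the penalty vanishes:
print(kl_standard_normal([0.0, 0.0], [0.0, 0.0]))  # 0.0
# Shifting one mean away from zero is penalised quadratically:
print(kl_standard_normal([1.0, 0.0], [0.0, 0.0]))  # 0.5
```

This is why the cost above is simply `kl_divergence + pixelwise_bce`: the BCE term rewards faithful reconstruction while the KL term pulls each latent dimension toward the N(0, 1) prior.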
# Glycowork > Glycans are a fundamental biological sequence, similar to DNA, RNA, or proteins. Glycans are complex carbohydrates that can form branched structures comprising monosaccharides and linkages as constituents. Despite being conspicuously absent from most research, glycans are ubiquitous in biology. They decorate most proteins and lipids and direct the stability and functions of biomolecules, cells, and organisms. This also makes glycans relevant to every human disease. The analysis of glycans is made difficult by their nonlinearity and their astounding diversity, given the large number of monosaccharides and types of linkages. Glycowork is a Python package designed to process and analyze glycan sequences, with a special emphasis on glycan-focused machine learning. Next to various functions to work with glycans, Glycowork also contains glycan data that can be used for glycan alignments, model pre-training, motif comparisons, etc. Where possible, glycowork uses the IUPAC-condensed nomenclature for glycans. An example for this would be `Man(a1-3)[Man(a1-6)]Man(b1-4)GlcNAc(b1-4)GlcNAc`, in which parentheses enclose the linkage description and brackets indicate branches. Ideally, the longest branch should constitute the main chain and side branches should be ordered ascendingly based on the last linkage of the branch (yet all functions in glycowork can deal with violations of these guidelines). Where possible/known, monosaccharide modifications are indicated by the ring position to which they are attached (e.g., Glc6Me), while the type of connection is given if the exact position is unknown (e.g., GlcOMe). Linkage uncertainty is usually expressed with a 'bond' placeholder. As suggested in IUPAC-condensed, we omit the D-/L-prefix for the "usual" case and only add it for the rarer option (e.g., Glc vs. L-Glc).
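As a rough illustration of the notation just described, a short regex can pull the monosaccharides and linkages out of an IUPAC-condensed string. Note this `tokenize_iupac` helper is a hypothetical sketch for readability only, not glycowork's own parser (`glycan_to_graph` handles the real conversion, branches included):

```python
import re

def tokenize_iupac(glycan):
    """Very simplified tokenizer: linkages in (), branch brackets, monosaccharides."""
    return re.findall(r'\([^)]*\)|\[|\]|[^\[\]()]+', glycan)

tokens = tokenize_iupac('Man(a1-3)[Man(a1-6)]Man(b1-4)GlcNAc(b1-4)GlcNAc')
print(tokens)
# Everything that is neither a bracket nor a parenthesized linkage is a monosaccharide:
monosaccharides = [t for t in tokens if t not in ('[', ']') and not t.startswith('(')]
print(monosaccharides)  # ['Man', 'Man', 'Man', 'GlcNAc', 'GlcNAc']
```

Even this toy tokenizer makes the grammar visible: parentheses carry the anomeric configuration and linkage positions, while square brackets delimit the side branches.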
However, it should be noted that nearly all functions in glycowork operate on pure graphs (facilitated by `glycan_to_graph`), avoiding all ambiguity of the IUPAC nomenclature. We use the IUPAC nomenclature here as a human-readable glycan nomenclature that can be readily used for motif interpretation and user-friendly usage. The conversion to graphs in all end-user functions, and operations such as motif annotation via subgraph isomorphism tests, are handled behind the scenes, allowing you to work with glycans in a human-readable format. ``` #hide import warnings warnings.filterwarnings("ignore") from nbdev.showdoc import * import copy from IPython.display import HTML import pandas as pd from glycowork.glycan_data.loader import df_species, glycan_emb from glycowork.motif.analysis import plot_embeddings, make_heatmap, characterize_monosaccharide from glycowork.motif.processing import presence_to_matrix from glycowork.motif.query import get_insight %load_ext autoreload %autoreload 2 ``` All modules in glycowork, **except** for `ml`, can be run on any machine. For most parts of `ml`, however, a GPU is needed to load and run torch_geometric. If you want to run `glycowork.ml` in a Jupyter Notebook (in the cloud or wherever you have a GPU), just run this prior to importing glycowork: ``` !pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.10.0+cu111.html !pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-1.10.0+cu111.html !pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-1.10.0+cu111.html !pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-1.10.0+cu111.html !pip install torch-geometric==2.0.2 ``` Okay, how about an example! (For more example workflows, head over to the `examples` section) <br> Let's say you're interested in glycans of the plant order Fabales, which includes the legumes. Luckily, we've got you covered and `df_species` has ~500 glycan sequences that stem from Fabales.
So the first step is easy: import `df_species` from `glycowork.glycan_data.loader` and filter for Fabales glycans. ``` df_fabales = df_species[df_species.Order == 'Fabales'].reset_index(drop = True) #hide_input df_fabales2 = copy.deepcopy(df_fabales) df_fabales2.index = df_fabales2.target.values.tolist() df_fabales2.drop(['target'], axis = 1, inplace = True) HTML(df_fabales2.head().style.set_properties(**{'font-size': '11pt', 'font-family': 'Helvetica','border-collapse': 'collapse','border': '1px solid black'}).render()) ``` Now, if you're not a plant glycobiologist, that may seem a bit overwhelming to you. But fear not, we can just **plot** glycans. How? Easy: Glycan representations! <br> Representations or embeddings are really just a bunch of numbers learned by a machine learning model that express similarity. Meaning that if two glycans are similar (in sequence/properties/etc.), the difference between their representations should be small. Or, in other words, they should cluster in a plot. <br> Now, you might not be in possession of a GPU and/or the wherewithal to train a glycan-focused deep learning model. That's why `glycowork` has both trained models and **already calculated** glycan representations waiting for you! And the best thing: you don't even need to worry about them. You just pass glycan sequences to our functions and `glycowork` will handle glycan representations behind the scenes. <br> So, for our Fabales example, we pass our glycan sequences to `plot_embeddings` (from `glycowork.motif.analysis`), which will retrieve their learned representations, transform them into two dimensions via t-distributed stochastic neighbor embedding (t-SNE), and plot them! If you want, it can even color the resulting scatterplots by a label, for instance the taxonomic family within Fabales. ``` plot_embeddings(df_fabales.target.values.tolist(), label_list = df_fabales.Family.values.tolist()) ``` Hmm, interesting. 
We definitely see clear glycan clusters (remember, every point is one glycan sequence). What's even cooler is that the families Polygalaceae and Quillajaceae cluster in narrow regions of the whole plot. I wonder what differentiates their glycans from Fabaceae? <br> If you're also curious, proceed to the next function. First, we need to convert our data into a count table, with counts of every glycan in every family of Fabales. For this, you can use `presence_to_matrix` from the `glycowork.motif.processing` module, which was designed to work with tables such as `df_species`. Changing `label_col_name` to 'genus' will for instance generate a count table for all genera in Fabales. ``` df_map = presence_to_matrix(df_fabales, label_col_name = 'Family') #hide_input HTML(df_map.style.set_properties(**{'font-size': '11pt', 'font-family': 'Helvetica','border-collapse': 'collapse','border': '1px solid black'}).render()) ``` Okay, we're ready for some heatmaps! In `glycowork.motif.analysis`, we have a function titled `make_heatmap`, which...makes heatmaps.<br> Specifically, it will generate clustermaps, which aim to cluster samples based on their glycan profiles. Calling `make_heatmap` with `df_map` without any other arguments will try to cluster the samples based on their full glycan sequences. But we also have the `motif` mode, which extracts motifs from these glycans and clusters samples based on those. For instance, you can use the `known` (curated glycan motifs) and the `exhaustive` (all observed monosaccharides and disaccharides) options. Check out the documentation for more options! <br> We can immediately see that we have a lot more *N*-linked glycans in Fabaceae, for instance based on the prominence of Trimannosylcore and Fuc(a1-3)GlcNAc.
Glycans from Polygalaceae and Quillajaceae in `df_species` seem to mainly contain small molecule- and lipid-linked glycans, which of course are quite different from *N*-linked glycans and are more similar to each other than to glycans from Fabaceae in `df_species`. ``` make_heatmap(df_map, mode = 'motif', feature_set = ['exhaustive'], datatype = 'presence', yticklabels = 1) ``` Let's say you're interested in a specific glycan from Fabales, `GlcNAc(b1-2)Man(a1-3)[GlcNAc(b1-2)Man(a1-6)][Xyl(b1-2)]Man(b1-4)GlcNAc(b1-4)[Fuc(a1-3)]GlcNAc`. Finding out more about it could be annoying (try to Google anything more complex than a short motif and you run into problems). That's why we have `get_insight` in `glycowork.motif.query`, a function that returns some metadata about that glycan. The nice thing is that you don't need to bother with database IDs or notational ambiguity, as `glycowork` will convert that glycan into a graph behind the scenes and search our databases. Let's say you develop an acute interest in the distribution of xylose in plant glycans. In such a situation, one option for further inquiry would be `characterize_monosaccharide` from `glycowork.motif.analysis`. This neat little function allows you to get a sense of what a particular monosaccharide is typically directly connected to and which modifications it can acquire. Further, you can filter the results to any taxonomic level you're interested in! (as long as it's in `df_species` or your dataset of interest of course.) <br> So, for xylose in plants, we immediately see Man (in M3X/M3FX structures) and Glc/Xyl (Xyloglucans) as prominent neighbors. Additionally, we note that xylose in plant glycans can be methylated, which is a rather typical modification in plant glycans. <br> There is quite a bit more that you can do with `characterize_monosaccharide`, so have a look at its documentation for all the options!
``` characterize_monosaccharide('Xyl', rank = 'Kingdom', focus = 'Plantae', modifications = True) ``` To show you the true power of `characterize_monosaccharide`, we'll switch gears to bacteria and have a look at how they handle their glucose connections. This is a lot more diverse than with xylose! Note how we set `thresh` to 50 (from its default value of 10), as otherwise the plot would become illegible with all the rare modifications and pairings. ``` characterize_monosaccharide('Glc', rank = 'Kingdom', mode = 'sugarbond', focus = 'Bacteria', modifications = True, thresh = 50) ``` And that's just a super-brief glimpse at the really cool functions contained in `glycowork`! The rest of the documentation describes all functions and what they're good for. Be sure to check out more, suggest improvements/fixes, and maybe even contribute something yourself. We are looking forward to your discoveries! ``` #hide from nbdev.export import notebook2script; notebook2script() ```
# Some analysis of the decay data in ICRP Publication 107 This notebook checks the consistency of the <a href="http://www.icrp.org/publication.asp?id=ICRP%20Publication%20107">ICRP Publication 107</a> decay data. ### Branching fractions The branching fractions for the progeny of one radionuclide should sum to unity for physical reasons. However, in practice the totals may differ from unity as the branching fractions are only quoted to a maximum of 5 significant figures of precision in the ICRP 107 data files. This notebook calculates the sum of the branching fractions from each radionuclide in ICRP 107 and analyzes the results. First read in a CSV file containing the ICRP 107 decay data into a DataFrame: ``` import pandas as pd import numpy as np icrp = pd.read_csv('icrp.csv', index_col='Radionuclide') icrp.head() ``` Find all radionuclides where the sum of the branching fractions does not equal one and add them to a DataFrame: ``` nonunity = [] for rn in icrp.index.to_list(): if np.isinf(icrp.loc[rn, 'Half_life']): continue total = 0.0 # avoid shadowing the built-in sum() i = 1 while True: if np.isnan(icrp.loc[rn, 'Fraction_'+str(i)]): break total += icrp.loc[rn, 'Fraction_'+str(i)] i += 1 if i > 4: break if total != 1.0: nonunity.append([rn, total]) nonunity = pd.DataFrame(nonunity, columns=['Radionuclide','Total_branching_fraction']).set_index('Radionuclide') print('There are '+str(len(nonunity))+' radionuclides with a non-unity total for the sum of the branching fractions.') ``` Sort the DataFrame so that the sum of the branching fractions is increasing: ``` nonunity.sort_values(by=['Total_branching_fraction'], inplace=True) nonunity ``` Considering the radionuclides where the sum of the branching fractions is greater than one first, we see that for all these cases the deviation from unity occurs in the
Therefore these deviations are all consistent with rounding errors given that the branching fractions are recorded in ICRP 107 with up to 5 significant figures of precision. Cross-checking the radionuclides with the largest sum totals for the branching fractions (Tb-151, Dy-153 etc.) with the original ICRP 107 data file (ICRP-07.NDX), round-off errors are clearly the most reasonable explanation. This is because for all these radionuclides there are one or two main decay pathways where the branching fractions add to one, and a minor decay pathway with branching fraction typically of O(10<sup>-5</sup>). Note the number of radionuclides with total branching fraction greater than one is: ``` len(nonunity[(nonunity['Total_branching_fraction'] > 1.0)]) ``` Now look at the cases where the sum of the branching fractions is less than one and the deviation cannot be assumed to be due to round-off errors. The radionuclides for which this applies are: ``` nonunity[(nonunity['Total_branching_fraction'] < 0.99999)] ``` Cross-checking the original ICRP 107 data file (ICRP-07.NDX) and another decay data source (<a href="https://www.nndc.bnl.gov/nudat2/chartNuc.jsp">NuDat2</a>), these radionuclides appear to fall into two classes: 1. Round-off of branching fractions in ICRP-07.NDX to the 4<sup>th</sup> or 3<sup>rd</sup> significant figures, which explains the deviation from one (specifically Ag-115, Hf-182m, Np-236, Po-205). 2. Some minor decay pathways not having been included in the ICRP 107 dataset for the other radionuclides.
The missing minor decay pathways and their approximate branching fractions are: - Ac-223 electron capture decay (~0.01) - Am-244m electron capture decay (~0.0004) - At-217 beta- decay (~0.00012) - At-219 beta- decay (~0.03) - Cm-240 electron capture decay (~0.003) - Es-250 alpha decay (~0.015) - Es-254m IT decay (~0.01559) - Po-212m IT decay (~0.0007) - U-228 electron capture decay (~0.025) Note that the <a href="https://doi.org/10.11484/jaeri-1347">JAERI 1347</a> and <a href="https://doi.org/10.11484/jaea-data-code-2007-021">JAEA-Data/Code 2007-021</a> reports discuss why these decay pathways were not included in ICRP 107. In short, there were no Evaluated Nuclear Structure Data Files (ENSDF) entries for these decay modes when the ICRP 107 dataset was put together. These reports also confirm that the reason for branching fractions not summing to one for other radionuclides is uncertainty in the underlying nuclear data. ### Radioactive progeny not present in ICRP Publication 107 There are some instances where the radioactive progeny resulting from a decay pathway that exists in ICRP 107 are not themselves included in ICRP 107. This code finds the affected radionuclides. <a href="https://pyne.io/">PyNE</a> is used to check whether progeny not present in ICRP 107 are radioactive or not. ``` from pyne import data, nucname radionuclides = icrp[np.isfinite(icrp.Half_life)].index.to_list() missing = {} for nuc in radionuclides: for i in range(1, icrp.loc[nuc, 'Num_decay_modes']+1): prog = icrp.loc[nuc, 'Progeny_'+str(i)] if prog == 'SF' or prog in radionuclides: continue hl = data.half_life(nucname.id(prog)) if np.isinf(hl) or np.isnan(hl): continue missing[prog] = "half-life = " + str(hl) + " s, parent is " + nuc missing ``` So there are 16 progeny not contained within ICRP 107 which are radioactive according to the decay data in PyNE v0.7.5. All these radionuclides have half-lives that are greater than 10,000 billion years, i.e.
orders of magnitude longer than the age of the universe. These were therefore considered as effectively stable nuclides by ICRP 107.
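The round-off explanation given above can be checked with a toy calculation: fractions that sum to exactly 1 no longer do so once each is quoted to 5 significant figures, with the residual appearing in the 6th figure — exactly the pattern observed in the ICRP 107 totals. The numbers below are illustrative only, not taken from ICRP 107:

```python
def round_sig(x, sig=5):
    """Round x to `sig` significant figures, as the ICRP 107 files quote fractions."""
    return float('%.{}e'.format(sig - 1) % x)

# Hypothetical decay: two main pathways plus one minor O(1e-5) pathway.
# The exact fractions sum to 1 by construction:
exact = [0.6789066, 0.3210794, 0.0000140]
quoted = [round_sig(f) for f in exact]
total = sum(quoted)
print(quoted, total)  # the quoted total deviates from 1 in the 6th significant figure
```

Here `0.6789066` rounds up to `0.67891` and `0.3210794` rounds up to `0.32108`, so both roundings push the total above 1 by a few parts in 10⁶ — the same order of deviation the notebook attributes to rounding.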
# Ready, Steady, Go AI (*Tutorial*) This tutorial is a supplement to the paper, **Ready, Steady, Go AI: A Practical Tutorial on Fundamentals of Artificial Intelligence and Its Applications in Phenomics Image Analysis** (*Patterns, 2021*) by Farid Nakhle and Antoine Harfouche Read the accompanying paper [here](https://doi.org/10.1016/j.patter.2021.100323). # Table of contents * **1. Background** * **2. Downloading Segmented Images** * **3. Downsampling the Yellow Leaf Curl Class** # 1. Background **Why do we need to balance a dataset?** Data imbalance refers to an unequal distribution of classes within a dataset. In such a scenario, a classification model could become biased and inaccurate and might produce unsatisfactory results. Therefore, we balance the dataset either by oversampling the minority class or undersampling the majority classes. To demonstrate the two scenarios, both oversampling and undersampling will be applied. Here, we will downsample the yellow leaf curl class in the training set using K-nearest neighbors (KNN). **What is KNN?** Like oversampling, undersampling is also designed to balance the class distribution in an imbalanced dataset. However, in contrast to oversampling, undersampling techniques delete data from the majority classes to balance the distribution. KNN is an ML algorithm that uses feature similarity within its training data to predict values for new, previously unseen data. When given a new input, KNN finds the k (a user-predefined number) most resembling data points (nearest neighbors) using similarity metrics, such as the Euclidean distance. Based on the majority class of those similar cases, the algorithm classifies the new input. We will use KNN to discover similarities of leaves in the yellow leaf curl virus class, deleting images with redundant features, ultimately undersampling it to 1500 images. # 2.
Downloading Segmented Images As a reminder, we are working with the PlantVillage dataset, originally obtained from [here](http://dx.doi.org/10.17632/tywbtsjrjv.1). For this tutorial, we will be working with a subset of PlantVillage, where we will choose the tomato classes only. We have made the subset available [here](http://dx.doi.org/10.17632/4g7k9wptyd.1). The next code will automatically download the dataset segmented with SegNet. **It is important to note that Colab deletes all unsaved data once the instance is recycled. Therefore, remember to download your results once you run the code.** ``` import requests import os import zipfile ## FEEL FREE TO CHANGE THESE PARAMETERS dataset_url = "http://faridnakhle.com/pv/tomato-split-cropped-segmented.zip" save_data_to = "/content/dataset/tomato-segmented/" dataset_file_name = "tomato-segmented.zip" ####################################### if not os.path.exists(save_data_to): os.makedirs(save_data_to) r = requests.get(dataset_url, stream = True, headers={"User-Agent": "Ready, Steady, Go AI"}) print("Downloading dataset...") with open(save_data_to + dataset_file_name, "wb") as file: for block in r.iter_content(chunk_size = 1024): if block: file.write(block) ## Extract downloaded zip dataset file print("Dataset downloaded") print("Extracting files...") with zipfile.ZipFile(save_data_to + dataset_file_name, 'r') as zip_dataset: zip_dataset.extractall(save_data_to) ## Delete the zip file as we no longer need it os.remove(save_data_to + dataset_file_name) print("All done!") ``` # 3. 
Downsampling the Yellow Leaf Curl Class ``` from sklearn.neighbors import NearestNeighbors from glob import glob import numpy as np import scipy.sparse as sp from keras.applications import VGG19 from keras.applications.vgg19 import preprocess_input from keras.engine import Model from keras.preprocessing import image import numpy as np import os img_dir = "/content/dataset/tomato-segmented/train/Tomato___Tomato_Yellow_Leaf_Curl_Virus/*" targetLimit = 1500 deleteImages = True k = 3 def SaveFile(arr, filename): with open(filename, 'w') as filehandle: for listitem in arr: filehandle.write(str(listitem) + "\n") def vectorize_all(files, model, px=224, n_dims=512, batch_size=512): min_idx = 0 max_idx = min_idx + batch_size total_max = len(files) if (max_idx > total_max): max_idx = total_max preds = sp.lil_matrix((len(files), n_dims)) print("Total: {}".format(len(files))) while min_idx < total_max - 1: print(min_idx) X = np.zeros(((max_idx - min_idx), px, px, 3)) # For each file in batch, # load as row into X i = 0 for i in range(min_idx, max_idx): file = files[i] try: img = image.load_img(file, target_size=(px, px)) img_array = image.img_to_array(img) X[i - min_idx, :, :, :] = img_array except Exception as e: print(e) max_idx = i X = preprocess_input(X) these_preds = model.predict(X) shp = ((max_idx - min_idx) + 1, n_dims) preds[min_idx:max_idx + 1, :] = these_preds.reshape(shp) min_idx = max_idx max_idx = np.min((max_idx + batch_size, total_max)) return preds def vectorizeOne(path, model): img = image.load_img(path, target_size=(224, 224)) x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = preprocess_input(x) pred = model.predict(x) return pred.ravel() def findSimilar(vec, knn, filenames, n_neighbors=6): if n_neighbors >= len(filenames): print("Error. 
number of neighbours should be less than the number of images.") else: n_neighbors = n_neighbors + 1 dist, indices = knn.kneighbors(vec.reshape(1, -1), n_neighbors=n_neighbors) dist, indices = dist.flatten(), indices.flatten() similarList = [(filenames[indices[i]], dist[i]) for i in range(len(indices))] del similarList[0] #similarImages.sort(reverse=True, key=lambda tup: tup[1]) return similarList files = glob(img_dir) nbrOfImages2Delete = len(files) - targetLimit if (nbrOfImages2Delete > 0): imgToSearchFor = files[0] base_model = VGG19(weights='imagenet') model = Model(inputs=base_model.input, outputs=base_model.get_layer('fc1').output) vecs = vectorize_all(files, model, n_dims=4096) knn = NearestNeighbors(metric='cosine', algorithm='brute') knn.fit(vecs) vec = vectorizeOne(imgToSearchFor, model) similarImages = findSimilar(vec, knn, files, nbrOfImages2Delete) print(similarImages) SaveFile(similarImages, "deletedImages.txt") if deleteImages: for i in range(0, len(similarImages)): if os.path.exists(similarImages[i][0]): os.remove(similarImages[i][0]) print("Balancing done. A list of deleted images can be found in deletedImages.txt") else: print("nothing to delete") ``` Let's re-count the files in the folder ``` files = glob(img_dir) print(len(files)) ```
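The `NearestNeighbors(metric='cosine', algorithm='brute')` lookup that drives the undersampling above can be reduced to its essentials without scikit-learn or a GPU. The sketch below is a brute-force cosine-distance search over invented toy 2-D vectors, standing in for the 4096-dimensional VGG19 features the notebook actually uses:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity, the metric used by the notebook's NearestNeighbors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / norm

def k_nearest(query, vectors, k):
    """Brute-force k-NN: rank all candidates by cosine distance to the query."""
    ranked = sorted(range(len(vectors)),
                    key=lambda i: cosine_distance(query, vectors[i]))
    return ranked[:k]

# Toy "feature vectors": index 1 points the same way as the query, index 2 is orthogonal.
vectors = [[1.0, 0.0], [2.0, 0.1], [0.0, 1.0]]
print(k_nearest([1.0, 0.05], vectors, k=2))  # [1, 0]
```

Because cosine distance ignores vector magnitude, `[2.0, 0.1]` (a scaled copy of the query direction) is the closest neighbour; images whose deep features point the same way are treated as redundant, which is exactly why the notebook deletes the nearest neighbours of a reference image.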
<img src='./img/LogoWekeo_Copernicus_RGB_0.png' alt='' align='centre' width='30%'></img> *** # COPERNICUS MARINE BIO MEDITERRANEAN SEA TRAINING <div style="text-align: right"><i> 07-03-BIO </i></div> License: This code is offered as open source and free-to-use in the public domain, with no warranty, under the MIT license associated with this code repository. *** <center><h1> Time evolution of nutrients, chlorophyll, oxygen and CO2 in the Mediterranean Sea </h1></center> *** **General Note 1**: Execute each cell through the <button class="btn btn-default btn-xs"><i class="icon-play fa fa-play"></i></button> button from the top MENU (or keyboard shortcut `Shift` + `Enter`).<br> <br> **General Note 2**: If, for any reason, the kernel is not working anymore, in the top MENU, click on the <button class="btn btn-default btn-xs"><i class="fa fa-repeat icon-repeat"></i></button> button. Then, in the top MENU, click on "Cell" and select "Run All Above Selected Cell".<br> *** # Table of contents - [1. Introduction](#1.-Introduction) - [2. About the data](#2.-About-the-data) - [3. Required Python modules](#3.-Required-Python-modules) - [4. Download data with HDA](#4.-Download-data-with-HDA) - [5. Exercise: Time series](#5.-Exercise-n.4:-Time-serie) - [6. Conclusion](#6.-Conclusion) *** # 1. Introduction [Go back to the "Table of contents"](#Table-of-contents) The objective of this exercise is to use the Copernicus Marine (CMEMS) BIOgeochemical products to visualize some typical coastal biogeochemical features in the Mediterranean Sea. In particular, we will display: - time evolution of the partial pressure of CO2 and CO2 air-to-sea exchanges We will use the near-real time (NRT) products as they already use the latest CMEMS name conventions. After July 2020, the multi-year (MY) product will also use the same conventions. *** # 2.
About the data [Go back to the "Table of contents"](#Table-of-contents) ## Model description ### This example is based on the product: [MEDSEA_REANALYSIS_BIO_006_008](https://moi.wekeo.eu/data?view=dataset&dataset=EO%3AMO%3ADAT%3AMEDSEA_REANALYSIS_BIO_006_008) **MEDSEA_REANALYSIS_BIO_006_008** is a reanalysis of Mediterranean Sea biogeochemistry at 1/16 degree, using the OGSTM-BFM biogeochemical model and data assimilation of surface chlorophyll concentration. OGSTM-BFM was driven by physical forcing fields produced as output by the Med-Currents model. The ESA-CCI database of surface chlorophyll concentration estimated by satellite and delivered within CMEMS-OCTAC was used for data assimilation. This reanalysis provides monthly means of 3D fields of chlorophyll, nutrients (phosphate and nitrate) and dissolved oxygen concentrations, net primary production, phytoplankton biomass, ocean pH and ocean pCO2. * Product Citation: Please refer to our Technical FAQ for citing products: http://marine.copernicus.eu/faq/cite-cmems-products-cmems-credit/?idpage=169 * **DOI (Product)**: https://doi.org/10.25423/MEDSEA_REANALYSIS_BIO_006_008 * **Citation**: Teruzzi A., Cossarini G., Lazzari P., Salon S., Bolzon G., Crise A., Solidoro C. (2016). "Mediterranean Sea biogeochemical reanalysis (CMEMS MED REA-Biogeochemistry 1999-2015)". [Data set]. Copernicus Monitoring Environment Marine Service. DOI: https://doi.org/10.25423 <img src="https://wekeo-broker.apps.mercator.dpi.wekeo.eu/previews/EO_MO_DAT_MEDSEA_REANALYSIS_BIO_006_008_sv03-med-ogs-pft-rean-m.png"> ## Get more info on the product You can find more info on this product and access to the download services in the [products viewer on Wekeo](https://moi.wekeo.eu/data?view=dataset&dataset=EO%3AMO%3ADAT%3AMEDSEA_REANALYSIS_BIO_006_008).
<br><br> ## Parameters used for downloading the data | Parameter | Value | | :---: | :---| | **Product** | MEDSEA_REANALYSIS_BIO_006_008 | | **Datasets** | sv03-med-ogs-car-rean-m | | **Frequency** | monthly | | **Lat min** | 42 | | **Lat max** | 44 | | **Lon min** | 2.5 | | **Lon max** | 6.5 | | **Timesteps** | from 2010-01-01 to 2015-12-31 | | **Service for downloading** | HDA (SUBS) | | **Files total dimension** | ~43 MB | <div class="alert alert-block alert-warning"> <b>Get the WEkEO User credentials</b> <hr> If you want to download the data to use this notebook, you will need WEkEO User credentials. If you do not have these, you can register <a href="https://www.wekeo.eu/web/guest/user-registration" target="_blank">here</a>. </div> *** # 3. Required Python modules [Go back to the "Table of contents"](#Table-of-contents) Here you can find the Python modules imported for running the notebook's code. They are common modules for handling scientific data. | Module name | Description | | :---: | :---| | **os** | [Miscellaneous operating system interfaces](https://docs.python.org/3.7/library/os.html) for managing paths, creating directories, ... | | **numpy** | [NumPy](https://numpy.org/) is the fundamental package for scientific computing with Python and for managing ND-arrays | | **xarray** | [Xarray](http://xarray.pydata.org/en/stable/) introduces labels in the form of dimensions, coordinates and attributes on top of raw NumPy-like arrays, which allows for a more intuitive, more concise, and less error-prone developer experience.
| | **matplotlib** | [Matplotlib](https://matplotlib.org/) is a Python 2D plotting library which produces publication-quality figures | ### Code cells allow you to enter and run Python code Run a code cell using <code>Shift-Enter</code> or pressing the <button class="btn btn-default btn-xs"><i class="icon-play fa fa-play"></i></button> button in the toolbar above: ## Import the modules To avoid warning messages during execution and installation, suppress them first: ``` import warnings warnings.filterwarnings('ignore') import os import sys import numpy as np import xarray as xr import matplotlib.pyplot as plt ``` If you don't have the right module, please install it with the command: ``` conda install module_name ``` and then re-run the cell to import it. **Please install the modules one by one**. *** # 4. Download data with HDA ``` # HDA API tools sys.path.append(os.path.join(os.path.dirname(os.path.dirname(os.getcwd())),'wekeo-hda')) import hda_api_functions as hapi # this is the PYTHON version ``` WEkEO provides access to a huge number of datasets through its **'harmonised-data-access'** API. This allows us to query the full data catalogue and download data quickly and directly onto the Jupyter Lab. You can search for what data is available <a href="https://wekeo.eu/data?view=catalogue">here</a>. In order to use the HDA-API we need to provide some authentication credentials, which come in the form of an API key and API token. In this notebook we have provided functions so you can retrieve the API key and token you need directly. You can find out more about this process in the notebook on HDA access (wekeo_harmonized_data_access_api.ipynb) that can be found in the **wekeo-hda** folder on your Jupyterlab.
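As background: the API key used by the HDA helper functions is, at the time of writing, an HTTP Basic-auth style token, i.e. a Base64 encoding of `username:password`. The sketch below only illustrates that assumption; for real requests, use the provided `hapi.generate_api_key` helper.

```python
import base64

def generate_api_key_sketch(user_name, password):
    # assumption: the HDA key is base64("user:password"), as in HTTP Basic auth
    token = base64.b64encode(f"{user_name}:{password}".encode("utf-8"))
    return token.decode("utf-8")

print(generate_api_key_sketch("USERNAME", "PASSWORD"))
# → VVNFUk5BTUU6UEFTU1dPUkQ=
```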
``` # your WEkEO API username and password (needs to be in ' ') user_name = 'USERNAME' password = 'PASSWORD' # Generate an API key api_key = hapi.generate_api_key(user_name, password) print('Your API key is: '+api_key) # where the data should be downloaded to: download_dir_path = os.path.join(os.getcwd(),'products', '06-08') # make the output directory if required if not os.path.exists(download_dir_path): os.makedirs(download_dir_path) ``` Now we are ready to get our data. ``` dataset_id = "EO:MO:DAT:MEDSEA_REANALYSIS_BIO_006_008" ``` We provide here the parameters of the requests as described in previous section. You can prepare this request thanks to the data access feature of WEkEO data viewer. ``` query = { "datasetId": "EO:MO:DAT:MEDSEA_REANALYSIS_BIO_006_008:sv03-med-ogs-car-rean-m", "boundingBoxValues": [ { "name": "bbox", "bbox": [ 2.5, 42, 6.5, 44 ] } ], "dateRangeSelectValues": [ { "name": "position", "start": "2010-01-01T00:00:00.000Z", "end": "2015-12-31T23:30:00.000Z" } ], "multiStringSelectValues": [ { "name": "variable", "value": [ "pco" ] } ], "stringChoiceValues": [ { "name": "service", "value": "MEDSEA_REANALYSIS_BIO_006_008-TDS" }, { "name": "product", "value": "sv03-med-ogs-car-rean-m" }, { "name": "startDepth", "value": "1.4721" }, { "name": "endDepth", "value": "5334.64795" } ] } ``` Now we have a query, we need to launch it to WEkEO to get our data. The box below takes care of this through the following steps: 1. initialise our HDA-API 2. get an access token for our data 3. accepts the WEkEO terms and conditions 4. loads our JSON query into memory 5. launches our search 6. waits for our search results 7. gets our result list 8. downloads our data This is quite a complex process, so much of the functionality has been buried 'behind the scenes'. If you want more information, you can check out the <a href="./wekeo-hda">Harmonised Data Access API </a></span> tutorials. The code below will report some information as it runs. 
At the end, it should tell you that one product has been downloaded. ``` HAPI_dict = hapi.init(dataset_id, api_key, download_dir_path) HAPI_dict = hapi.get_access_token(HAPI_dict) HAPI_dict = hapi.acceptTandC(HAPI_dict) # launch job print('Launching job...') HAPI_dict = hapi.get_job_id(HAPI_dict, query) #check results print('Getting results...') HAPI_dict = hapi.get_results_list(HAPI_dict) HAPI_dict = hapi.get_order_ids(HAPI_dict) # download data print('Downloading data...') HAPI_dict = hapi.download_data(HAPI_dict, user_filename='co2_f') ``` *** # 5. Exercise: Time series [Go back to the "Table of contents"](#Table-of-contents) ### In this exercise, we will look at the temporal evolution of the CO2 flux at the air-sea interface and at depth For this exercise we will use a 3D dataset: sv03-med-ogs-car-rean-m It consists of one variable: pCO2 values at different depth levels. CMEMS provides the partial pressure of carbon dioxide in seawater, expressed in Pascal [Pa]. However, pCO2 is usually reported in [μatm]: the conversion is 1 atm = 101.325 kPa, so 1 μatm ≈ 0.1013 Pa. The current atmospheric partial pressure of CO2 is about 407 μatm, which equals about 41.2 Pa. Attention must be paid to unit conversion. The figure shows that the marine environment acts as a sink of the atmospheric CO2 during winter and as a source during summer following the opposite cycle of solubility. The presence of peaks of CO2 flux is related to the effect of wind on the air-sea interface transport: a cold and windy day can contribute much to the annual budget. ## Parameters used for downloading the data | Parameter | Value | | :---: | :---| | **Product** | MEDSEA_REANALYSIS_BIO_006_008 | | **Datasets** | sv03-med-ogs-car-rean-m | | **Frequency** | monthly | | **Lat min** | 42 | | **Lat max** | 44 | | **Lon min** | 2.5 | | **Lon max** | 6.5 | | **Timesteps** | from 2010-01-01 to 2015-12-31 | | **Service for downloading** | HDA (SUBS) | | **Files total dimension** | ~43 MB | ## 5.1.
Access the data ``` # Input netcdf file # #co2_f = "bs-ulg-co2-an-fc-m_1588235316133.nc" # Build the complete nc path co2_nc = os.path.join(download_dir_path, 'co2_f') # Open the nc dataset co2_ds = xr.open_dataset(co2_nc) co2_ds.data_vars co2_ds.pco # Set the coordinate names (used later for accessing the data) lon_name = "longitude" lat_name = "latitude" time_name = "time" depth_name = "depth" ``` ## 5.2. Plot the time series ### 5.2.1. Plot the averaged time series ``` # dataset mean spco2_mean = co2_ds.mean(dim=(lat_name, lon_name, depth_name), skipna=True) # Plot configuration width_inch = 16 height_inch = 8 title_fontstyle = { "fontsize": "14", "fontstyle": "italic", "fontweight": "bold", "pad": 30 } label_fontstyle = { "fontsize": "12", "labelpad": 30 } def checkDir(outPath): if not os.path.exists(outPath): os.makedirs(outPath) fig, ax1 = plt.subplots(figsize=(width_inch, height_inch)) timesteps = spco2_mean.time sig1 = spco2_mean.pco color1 = 'r' # plot time evolution of partial pressure of CO2 in surface water ax1.plot(timesteps, sig1, color1) ax1.set_ylabel('spco2', fontsize=14, color=color1) ax1.set_xlabel("Time", fontsize=14) ax1.tick_params(axis='y', labelcolor=color1) title = "spco2: partial pressure of CO2 [Pa]" plt.title(title, **title_fontstyle) plt.grid() # output file out_path = os.path.join(os.getcwd(), 'out', '06-08') checkDir(out_path) output_file = os.path.join(out_path,title.replace(' ','_')) + ".png" # save the output file plt.savefig(output_file) plt.show() #sig2 = co2_mean.fpco2 ``` ## 5.2.2. 
Plot the time series for a single point ``` # selected latitude and longitude point lat_sel, lon_sel, depth_sel = 43, 4.5, 0 ## dataset extract for that point co2_sel = co2_ds.sel(latitude=lat_sel, longitude=lon_sel,depth=depth_sel, method="nearest") co2_sel.pco fig, ax1 = plt.subplots(figsize=(width_inch, height_inch)) timesteps = co2_sel.time sig1 = co2_sel.pco color1 = 'r' # plot time evolution of partial pressure of CO2 in surface water ax1.plot(timesteps, sig1, color1) ax1.set_ylabel('spco2', fontsize=14, color=color1) ax1.set_xlabel("Time", fontsize=14) ax1.tick_params(axis='y', labelcolor=color1) title = "spco2: surface partial pressure of CO2 [{:s}] @ ({:.2f},{:.2f})".format(co2_sel.pco.units,lon_sel,lat_sel) plt.title(title, **title_fontstyle) plt.grid() # output file output_file = os.path.join(out_path,title.replace(' ','_')) + ".png" # save the output file plt.savefig(output_file) plt.show() plt.close() ``` *** # 6. Conclusion [Go back to the "Table of contents"](#Table-of-contents) <div class="alert alert-block alert-success"> <b>CONGRATULATIONS</b><br> --- #### And thank you for your attention! :) We hope you enjoyed this training on the Mediterranean Biogeochemical model data provided through WEkEO by Copernicus Marine Service, for free, thanks to the European Commission. </div> #### Now let's try to download new data and variables and to access and visualize them... you can try to make new maps and plots... and don't forget to try the other notebooks available during this training. <img src='./img/all_partners_wekeo.png' alt='' align='center' width='75%'></img> <p style="text-align:left;">This project is licensed under the <a href="./LICENSE">MIT License</a> <span style="float:right;"><a href="https://github.com/wekeo/wekeo-jupyter-lab">View on GitHub</a> | <a href="https://www.wekeo.eu/">WEkEO Website</a> | <a href=mailto:support@wekeo.eu>Contact</a></span></p>
author: leezeeyee date: 2021/4/15 link: [github](https://github.com/easilylazy/pattern-recognition) ``` import numpy as np import torch from torch import nn from torch.utils.data import DataLoader from torchvision import datasets from torchvision.transforms import ToTensor, Lambda, Compose import matplotlib.pyplot as plt ``` ## Letter Recognition Exercise 2: represent each letter by a 3×3 matrix, i.e. a 9-dimensional feature vector. Suppose the vector (0,1,0,0,1,0,0,1,0)^T represents the letter "I", the vector (1,1,1,0,1,0,0,1,0)^T represents the letter "T", and the vector (1,0,1,1,0,1,1,1,1)^T represents the letter "U". Design a neural network to recognize these three letters, and discuss the experimental results for different numbers of hidden layers, nodes per layer, learning rates and activation functions. ## Issues and Observations 1. With a very low learning rate, training did not converge. It was unclear whether the algorithm was at fault; results improved after changing the data, so the network structure needed adjusting. 2. Increasing the number of hidden-layer nodes increases the convergence speed. 3. Increasing the learning rate improves the convergence speed of SGD. Note that this training can be viewed as deliberate overfitting, since there is no test set for validation. ### Data ``` string='1,0,1,1,0,1,1,1,1' string.replace(',',',') D=np.asarray([[0,1,0,0,1,0,0,1,0],[1,1,1,0,1,0,0,1,0],[1,0,1,1,0,1,1,1,1]]) Y=np.array([0,1,2],dtype=np.int64) D = D.astype(np.float32) D_np = torch.from_numpy(D) # D_np Y_np=torch.tensor(Y) # Y_np train_data=[] for i in range(len(D)): train_data.append([D_np[i],Y_np[i]]) batch_size = 1 # Create data loaders. train_dataloader = DataLoader(train_data,batch_size=batch_size,shuffle=True) # test_dataloader = DataLoader(test_data, batch_size=batch_size) for X in train_dataloader: # print("Shape of X [N, C, H, W]: ", X.shape) print(X) # print("Shape of y: ", y.shape, y.dtype) break # Display image and label. train_features, train_labels = next(iter(train_dataloader)) print(f"Feature batch shape: {train_features.size()}") print(f"Labels batch shape: {train_labels.size()}") # Get cpu or gpu device for training.
device = "cuda" if torch.cuda.is_available() else "cpu" print("Using {} device".format(device)) # Define model class NeuralNetwork(nn.Module): def __init__(self): super(NeuralNetwork, self).__init__() self.flatten = nn.Flatten() self.linear_relu_stack = nn.Sequential( nn.ReLU(), nn.Linear(9, 18), nn.ReLU(), # nn.Linear(8, 3), nn.Linear(18, 5), nn.ReLU(), nn.Linear(5, 3), ) def forward(self, x): x = self.flatten(x) logits = self.linear_relu_stack(x) return logits for batch, (X, y) in enumerate(train_dataloader): X, y = X.to(device), y.to(device) print(X) print(y) def train(dataloader, model, loss_fn, optimizer): size = len(dataloader.dataset) for batch, (X, y) in enumerate(dataloader): X, y = X.to(device), y.to(device) # Compute prediction error pred = model(X) # y=torch.tensor(y,dtype=torch.long) # print(pred,y) # print('sf') # print(pred[0][y[0]]) loss = loss_fn(pred, y) # loss=(pred-y)**2 # print(pred.argmax(1)==y) # print(loss) # print(loss.grad_fn) # Backpropagation optimizer.zero_grad() loss.backward() optimizer.step() # if batch % 8 == 0: # loss, current = loss.item(), batch * len(X) # print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]") def test(dataloader, model, epoch): size = len(dataloader.dataset) model.eval() test_loss, correct = 0, 0 with torch.no_grad(): for X, y in dataloader: X, y = X.to(device), y.to(device) pred = model(X) test_loss += loss_fn(pred, y).item() # print(pred) # print(pred.argmax(1)) # print(y) correct += (pred.argmax(1) == y).type(torch.float).sum().item() test_loss /= size correct /= size # if epoch%10==0: # print(f"Epoch {t+1}\n-------------------------------") # print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n") return test_loss,correct model = NeuralNetwork().to(device) loss_fn = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=1e-3) print(model) epochs = 1000 # fig=plt.figure() maxCorrect=0 for t in range(epochs): train(train_dataloader, model, loss_fn, optimizer) 
loss,correct=test(train_dataloader, model,t) if correct>maxCorrect: print(loss) print(correct) maxCorrect=correct if abs(maxCorrect-1)<1e-6: break # plt.plot(t,loss,'o') print("Done!") model = NeuralNetwork().to(device) loss_fn = nn.CrossEntropyLoss() optimizer = torch.optim.SGD(model.parameters(), lr=1e-2) epochs = 1000 fig=plt.figure() maxCorrect=0 for t in range(epochs): train(train_dataloader, model, loss_fn, optimizer) loss,correct=test(train_dataloader, model,t) if correct>maxCorrect: print(loss) print(correct) maxCorrect=correct if abs(maxCorrect-1)<1e-6: break plt.plot(t,loss,'o',color='r') ```
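The 9-dimensional input vectors used above are just flattened 3×3 pixel grids; reshaping them makes the letter shapes visible:

```python
import numpy as np

letters = {
    "I": np.array([0, 1, 0, 0, 1, 0, 0, 1, 0]),
    "T": np.array([1, 1, 1, 0, 1, 0, 0, 1, 0]),
    "U": np.array([1, 0, 1, 1, 0, 1, 1, 1, 1]),
}

for name, vec in letters.items():
    grid = vec.reshape(3, 3)  # back to the 3x3 pixel layout
    print(name)
    for row in grid:
        print("".join("#" if p else "." for p in row))
```

For example, "T" renders as `###` / `.#.` / `.#.`, and "U" as `#.#` / `#.#` / `###`.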
# Code for converting an observation to solar coordinates ## Step 1: Run the pipeline on the data to get mode06 files with the correct status bit setting. Note that as of nustardas version 1.6.0 you can now set the "runsplitsc" keyword to automatically split the CHU combinations for mode06 into separate data files. These files will be stored in the event_cl output directory and have filenames like: nu20201001001A06_chu2_N_cl.evt ### Optional: Check and see how much exposure is in each file. Use the [Observation Report Notebook](Observation_Report.ipynb) example to see how to do this. ## Step 2: Convert the data to heliocentric coordinates. Below is a step-by-step example of what happens. This is also contained in a single python script called "nustar_convert_to_solar.py" in this directory. That script can be invoked directly on an input filename using the syntax: ./nustar_convert_to_solar.py -i PATH/TO/FILE/nu20201001001A06_chu123_N_cl.evt ...like the walkthrough below, this will produce a new file in the same directory as the input file: PATH/TO/FILE/nu20201001001A06_chu123_N_cl_sunpos.evt Load the python libraries that we're going to use: ``` import sys from os.path import * import os from astropy.io import fits import matplotlib.pyplot as plt import matplotlib.colors as colors from matplotlib.colors import LogNorm from pylab import figure, cm import astropy.time import astropy.units as u from astropy.coordinates import get_sun import sunpy.map from sunpy import sun import numpy as np %matplotlib inline ``` ### Get the data from the FITS file. Here we loop over the header keywords to get the correct columns for the X/Y coordinates. We also parse the FITS header to get the data we need to project the X/Y values (which are integers from 0-->1000) into RA/dec coordinates.
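The projection uses the linear world-coordinate keywords `TCRVL` (reference value), `TCRPX` (reference pixel) and `TCDLT` (pixel scale). A toy check of the same formulas, with made-up header values rather than a real NuSTAR file:

```python
import numpy as np

# hypothetical WCS keywords: reference value [deg], reference pixel, scale [deg/pixel]
ra_ref, x0, delx = 150.0, 500.0, -2.0e-4
dec_ref, y0, dely = 2.0, 500.0, 2.0e-4

x, y = 600.0, 450.0  # an event's X/Y pixel coordinates

# same linear mapping as in the notebook code below
ra = ra_ref + (x - x0) * delx / np.cos(np.deg2rad(dec_ref))
dec = dec_ref + (y - y0) * dely

print(round(ra, 4), round(dec, 4))  # → 149.98 1.99
```

The `cos(dec_ref)` factor stretches the RA axis so that pixel steps correspond to true angular distances on the sky.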
``` infile = 'data/Sol_16208/20201001001/event_cl/nu20201001001B06_chu3_N_cl.evt' hdulist = fits.open(infile) evtdata = hdulist[1].data x = np.array([]) y = np.array([]) hdr=hdulist[1].header for field in hdr.keys(): if field.find('TYPE') != -1: if hdr[field] == 'X': print(field[5:8]) xval = field[5:8] if hdr[field] == 'Y': print(field[5:8]) yval = field[5:8] ra_ref = hdr['TCRVL'+xval]*u.deg x0 = hdr['TCRPX'+xval] delx = hdr['TCDLT'+xval] * u.deg dec_ref = hdr['TCRVL'+yval]*u.deg y0 = hdr['TCRPX'+yval] dely = hdr['TCDLT'+yval]*u.deg x = evtdata['X'] y = evtdata['Y'] pi =evtdata['PI'] met = evtdata['TIME']*u.s # Convert the NuSTAR epoch times to astropy datetime objects mjdref=hdulist[1].header['MJDREFI'] time = astropy.time.Time(mjdref*u.d+met, format = 'mjd') # Convert X and Y to RA/DEC ra_x = ra_ref + (x - x0) * delx / np.cos(dec_ref) dec_y = dec_ref + (y - y0) * dely print("Loaded: ", len(ra_x), " counts.") hdulist.close() ``` ### Rotate to solar coordinates: This is a variation on what we did to set up the pointing. Note that this can take a little bit of time to run (~a minute or two). The important option here is how frequently one wants to recompute the position of the Sun. The default is once every 5 seconds. ``` def get_sun_pos(last_met): sun_time = astropy.time.Time(mjdref*u.d+last_met, format = 'mjd') astro_sun_pos = get_sun(sun_time) # Get the center of the Sun, and assign it degrees. sun_pos = np.array([astro_sun_pos.ra.deg, astro_sun_pos.dec.deg])* u.deg # Solar NP roll angle: sun_np=sun.solar_north(t=sun_time).cgs return sun_pos, sun_np; # How often you want to update the solar ephemeris: tstep = 5. * u.s last_met = met[0] - tstep * 2.
last_i = 0 sun_x = np.zeros_like(ra_x) sun_y = np.zeros_like(dec_y) # tic() # timing helper from the original script; not defined in this notebook for i in np.arange(len(ra_x)): if ( (met[i] - last_met) > tstep ): (sun_pos, sun_np) = get_sun_pos(last_met) last_met = met[i] # Rotation matrix for a counter-clockwise rotation since we're going # back to celestial north from solar north rotMatrix = np.array([[np.cos(sun_np), np.sin(sun_np)], [-np.sin(sun_np), np.cos(sun_np)]]) # Diagnostics # di = (i -last_i) # print("Updating Sun position...") # if di > 0: # print(i, di) # dt = toc() # tic() # last_i = i # print("Time per event: ",dt / float(di) ) # From here on we do things for every photon: ph_pos = np.array([ra_x[i].value, dec_y[i].value]) * u.deg offset = sun_pos - ph_pos # Account for East->West conversion for +X direction in heliophysics coords offset = offset*[-1., 1.] # Project the offset onto the Sun delta_offset = ((np.dot(offset, rotMatrix)).to(u.arcsec)) sun_x[i] = delta_offset[0] sun_y[i] = delta_offset[1] print("Processed: ", i, " of ", len(ra_x)) ``` ### Write the output to a new FITS file. Below keeps the RAWX, RAWY, DET_ID, GRADE, and PI columns from the original file. It replaces the X/Y columns with the new sun_x, sun_y columns. ``` hdulist = fits.open(infile) tbldata=hdulist[1].data hdr=hdulist[1].header hdulist.close() # change to 0-3000 pixels: maxX = 3000 maxY = 3000 x0 = maxX / 2. y0 = maxY / 2. # Header keywords for field in hdr.keys(): if field.find('TYPE') != -1: if hdr[field] == 'X': print(field[5:8]) xval = field[5:8] if hdr[field] == 'Y': print(field[5:8]) yval = field[5:8] delx = hdr['TCDLT'+xval] * u.deg dely = hdr['TCDLT'+yval]*u.deg out_sun_x=1.0*(sun_x / delx) + x0 out_sun_y=(sun_y / dely) + y0 newdelx = delx.to(u.arcsec).value newdely = dely.to(u.arcsec).value tbldata['X'] = out_sun_x tbldata['Y'] = out_sun_y hdr['TCRVL'+xval] = 0. hdr['TCRPX'+xval] = x0 hdr['TCDLT'+xval] = 1.0*delx.to(u.arcsec).value hdr['TLMAX'+xval] = maxX hdr['TCRVL'+yval] = 0.
hdr['TCRPX'+yval] = y0 hdr['TCDLT'+yval] = dely.to(u.arcsec).value hdr['TLMAX'+yval] = maxY # # Make the new filename: (sfile, ext)=splitext(infile) outfile=sfile+'_sunpos.evt' # Remove output file if necessary if isfile(outfile): print(outfile, 'exists! Removing old version...') os.remove(outfile) fits.writeto(outfile, tbldata, hdr) ```
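As a quick sanity check on the rotation convention used in the per-photon loop earlier, the same matrix can be applied to a toy offset with a hypothetical 90° solar north-pole angle:

```python
import numpy as np

def rotate_offset(offset, roll_rad):
    # same counter-clockwise rotation matrix as in the event loop
    rot = np.array([[np.cos(roll_rad),  np.sin(roll_rad)],
                    [-np.sin(roll_rad), np.cos(roll_rad)]])
    return np.dot(offset, rot)  # row vector times matrix, as in the notebook

# a 90-degree roll sends an offset along +x to +y
out = rotate_offset(np.array([1.0, 0.0]), np.pi / 2)
print(np.round(out, 6))  # → [0. 1.]
```

A zero roll angle leaves the offset unchanged, as expected for an observation already aligned with solar north.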
# PyMVPD Tutorial This is a tutorial on how to perform MultiVariate Pattern Dependence (MVPD) analysis using the PyMVPD toolbox. The tutorial will walk you through analysis specification, data loading and finally analysis execution. ## Requirement Verification Before using PyMVPD, make sure that you have PyTorch installed. To verify the installation of PyTorch, you can run the following sample PyTorch code to construct a randomly initialized tensor. The output should be something similar to "tensor([0.3277])". ``` import torch x = torch.rand(1) print(x) ``` A variety of pre-implemented MVPD models are provided in PyMVPD, including linear regression models and artificial neural networks. To train the MVPD neural network models efficiently, we highly recommend using GPUs, which handle the many parallel computations involved far more efficiently. Otherwise it might take a fairly long time to get the training results, especially when your neural network is large, with a huge number of parameters.
To check if your system has an Nvidia GPU and if it is CUDA enabled, test with the following code: ``` device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') print('Using device:', device) if device.type == 'cuda': print(torch.cuda.get_device_name(0)) else: print("No CUDA GPU detected!") ``` ## MVPD Analysis ``` import os, sys sys.path.append("..") from mvpd import data_loading, model_exec ``` ## Step 1 - Analysis Specification The first step of MVPD analysis is to specify the following details: the participant whose data are to be analyzed ('sub'); the total number of experimental runs ('total_run'); the paths to the directories containing processed functional data (‘filepath_func’); the paths to the directories containing the predictor ROI mask (‘filepath_mask1’) and the target ROI mask (‘filepath_mask2’); the path to the directory where the extracted functional data will be saved (‘roidata_save_dir’); the path to the directory where the results will be saved (‘results_save_dir’); the type of model to be used (‘model_type’); the option to save predicted timecourses (‘save_prediction’); the model hyperparameters. 
``` # Subject/Participant sub='sub-01' # Total number of experimental runs total_run=8 # Functional Data filepath_func=[] filepath_func+=['./testdata/sub-01/sub-01_movie_bold_space-MNI152NLin2009cAsym_preproc_denoised_run1.nii.gz'] filepath_func+=['./testdata/sub-01/sub-01_movie_bold_space-MNI152NLin2009cAsym_preproc_denoised_run2.nii.gz'] filepath_func+=['./testdata/sub-01/sub-01_movie_bold_space-MNI152NLin2009cAsym_preproc_denoised_run3.nii.gz'] filepath_func+=['./testdata/sub-01/sub-01_movie_bold_space-MNI152NLin2009cAsym_preproc_denoised_run4.nii.gz'] filepath_func+=['./testdata/sub-01/sub-01_movie_bold_space-MNI152NLin2009cAsym_preproc_denoised_run5.nii.gz'] filepath_func+=['./testdata/sub-01/sub-01_movie_bold_space-MNI152NLin2009cAsym_preproc_denoised_run6.nii.gz'] filepath_func+=['./testdata/sub-01/sub-01_movie_bold_space-MNI152NLin2009cAsym_preproc_denoised_run7.nii.gz'] filepath_func+=['./testdata/sub-01/sub-01_movie_bold_space-MNI152NLin2009cAsym_preproc_denoised_run8.nii.gz'] # Predictor ROI Mask filepath_mask1='./testdata/sub-01/sub-01_FFA_80vox_bin.nii.gz' # Target ROI Mask filepath_mask2='./testdata/GM_thr0.1_bin.nii.gz' base1=os.path.basename(filepath_mask1) base2=os.path.basename(filepath_mask2) roi_1_name=base1.split('.nii')[0] roi_2_name=base2.split('.nii')[0] # Output Directory roidata_save_dir='./testdata/roi_data/' results_save_dir='./results/' # MVPD Model model_type='L2_LR' # ['PCA_LR', 'L2_LR', 'NN_1layer', 'NN_5layer', 'NN_5layer_dense'] # WARNING: If you choose a neural network model, it will take 1-2 hours to get the complete results. 
# only for PCA_LR num_pc=3 # number of principal components used # only for L2_LR alpha=0.01 # regularization strength # only for neural networks (NN_1layer, NN_5layer, NN_5layer_dense) input_size=80 # size of predictor ROI output_size=53539 # size of target ROI hidden_size=100 # number of units per hidden layer num_epochs=5000 # number of epochs for training save_freq=1000 # checkpoint saving frequency print_freq=100 # results printing out frequency batch_size=32 learning_rate=1e-3 momentum_factor=0.9 w_decay=0 # weight decay (L2 penalty) # Save Data save_prediction=False # default ``` ## Step 2 - Data Loading ``` data_loading.load_data(sub, total_run, roi_1_name, roi_2_name, filepath_func, filepath_mask1, filepath_mask2, roidata_save_dir) ``` ## Step 3 - Analysis Execution ``` model_exec.MVPD_exec(model_type, sub, total_run, alpha, num_pc, # reg params input_size, output_size, hidden_size, num_epochs, save_freq, print_freq, batch_size, learning_rate, momentum_factor, w_decay, # nn params roidata_save_dir, roi_1_name, roi_2_name, filepath_func, filepath_mask1, filepath_mask2, results_save_dir, save_prediction) ``` ## Visualizing MVPD results After running the MVPD model, you can visualize the results (e.g. average variance explained map across experimental runs) using a variety of visualization tools (e.g. Connectome Workbench, Mango, FSLeyes) and perform further statistical analyses. 
``` import nibabel as nib import matplotlib.pyplot as plt from IPython.display import display from PIL import Image def show_slices(slices): """ Function to display row of image slices """ fig, axes = plt.subplots(1, len(slices)) for i, slice in enumerate(slices): axes[i].imshow(slice.T, cmap="gray", origin="lower") var_expl_img = nib.load(results_save_dir+sub+'_var_expl_map_'+model_type+'_avgruns.nii.gz') var_expl_data = var_expl_img.get_fdata() var_expl_shape = var_expl_data.shape; x_center = var_expl_shape[0]//2; y_center = var_expl_shape[1]//2; z_center = var_expl_shape[2]//2; show_slices([var_expl_data[x_center, :, :], var_expl_data[:, y_center, :], var_expl_data[:, :, z_center]]) plt.suptitle("Center slices for "+sub+" "+model_type+" variance explained map") plt.show() ``` Here are the example figures generated from our test runs (predictor ROI: FFA, target ROI: GM). ``` img_path='./example_viz_figures.png' display(Image.open(img_path)) ```
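As a small example of the "further statistical analyses" mentioned above, summary statistics can be computed directly on the variance-explained array; the toy array below is a hypothetical stand-in for `var_expl_data`:

```python
import numpy as np

# hypothetical stand-in for the map loaded from the NIfTI file above;
# NaN marks a voxel outside the grey-matter mask
var_expl_data = np.array([[0.1, 0.3], [np.nan, 0.2]])

mean_ve = np.nanmean(var_expl_data)  # mean over in-mask voxels only
peak_ve = np.nanmax(var_expl_data)
print(round(float(mean_ve), 3), float(peak_ve))  # → 0.2 0.3
```

Using the `nan`-aware reductions keeps out-of-mask voxels from dragging the summary statistics down.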
# Auto Encoder Example. This document is an introductory, hands-on tutorial for implementing deep learning with TensorFlow. * Using an auto encoder on MNIST handwritten digits. * References: ``` Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. "Gradient-based learning applied to document recognition." Proceedings of the IEEE, 86(11):2278-2324, November 1998. ``` * Links: - [MNIST Dataset] http://yann.lecun.com/exdb/mnist/ * The code and comments are written by NamJungGu <nowage@gmail.com> <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. ``` %matplotlib inline #from __future__ import division, print_function, absolute_import import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import matplotlib # matplotlib.use('TkAgg') # # Import MNIST data # from tensorflow.examples.tutorials.mnist import input_data # mnist = input_data.read_data_sets("MNIST_data", one_hot=True) from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('./MNIST_data', one_hot=True) train_data = mnist.train.images # Returns np.array train_labels = np.asarray(mnist.train.labels, dtype=np.int32) eval_data = mnist.test.images # Returns np.array eval_labels = np.asarray(mnist.test.labels, dtype=np.int32) # Parameters learning_rate = 0.01 training_epochs = 20 batch_size = 256 display_step = 1 examples_to_show = 10 # Network Parameters n_hidden_1 = 256 # 1st layer num features n_hidden_2 = 128 # 2nd layer num features n_input = 784 # MNIST data input (img shape: 28*28) # tf Graph input (only pictures) X = tf.placeholder("float", [None, n_input]) weights = { 'encoder_h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])), 'encoder_h2':
tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])), 'decoder_h1': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_1])), 'decoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_input])), } biases = { 'encoder_b1': tf.Variable(tf.random_normal([n_hidden_1])), 'encoder_b2': tf.Variable(tf.random_normal([n_hidden_2])), 'decoder_b1': tf.Variable(tf.random_normal([n_hidden_1])), 'decoder_b2': tf.Variable(tf.random_normal([n_input])), } # Building the encoder def encoder(x): # Encoder Hidden layer with sigmoid activation #1 layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['encoder_h1']), biases['encoder_b1'])) # Decoder Hidden layer with sigmoid activation #2 layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['encoder_h2']), biases['encoder_b2'])) return layer_2 # Building the decoder def decoder(x): # Encoder Hidden layer with sigmoid activation #1 layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['decoder_h1']), biases['decoder_b1'])) # Decoder Hidden layer with sigmoid activation #2 layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['decoder_h2']), biases['decoder_b2'])) return layer_2 # Construct model encoder_op = encoder(X) decoder_op = decoder(encoder_op) # Prediction y_pred = decoder_op # Targets (Labels) are the input data. 
y_true = X # Define loss and optimizer, minimize the squared error cost = tf.reduce_mean(tf.pow(y_true - y_pred, 2)) optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost) # Initializing the variables init = tf.global_variables_initializer() # Launch the graph with tf.Session() as sess: sess.run(init) total_batch = int(mnist.train.num_examples/batch_size) # Training cycle for epoch in range(training_epochs): # Loop over all batches for i in range(total_batch): batch_xs, batch_ys = mnist.train.next_batch(batch_size) # Run optimization op (backprop) and cost op (to get loss value) _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs}) # Display logs per epoch step if epoch % display_step == 0: print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(c)) print("Optimization Finished!") # Applying encode and decode over test set encode_decode = sess.run( y_pred, feed_dict={X: mnist.test.images[:examples_to_show]}) # Compare original images with their reconstructions f, a = plt.subplots(2, 10, figsize=(10, 2)) for i in range(examples_to_show): a[0][i].imshow(np.reshape(mnist.test.images[i], (28, 28))) a[1][i].imshow(np.reshape(encode_decode[i], (28, 28))) ```
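The cost minimized above, `tf.reduce_mean(tf.pow(y_true - y_pred, 2))`, is an ordinary mean-squared reconstruction error; a NumPy sketch on toy data:

```python
import numpy as np

def reconstruction_cost(y_true, y_pred):
    # mean of squared pixel differences over all examples, as tf.reduce_mean does
    return float(np.mean((y_true - y_pred) ** 2))

y_true = np.array([[0.0, 1.0], [1.0, 0.0]])
y_pred = np.array([[0.0, 0.5], [1.0, 0.0]])
print(reconstruction_cost(y_true, y_pred))  # → 0.0625
```

Only one of the four "pixels" is off (by 0.5), so the cost is 0.25 / 4 = 0.0625; a perfect reconstruction would give 0.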
---
```bash
mentalist download_pubmlst -k 31 -o campy_mlst_fasta_files -s 29 --db campy_mlst.db
```

```
%run ../../multibench.py
from inspect import isfunction
import os
import sys
import shutil
import glob
import pathlib
from shutil import copyfile
import matplotlib.pyplot as plt
import asciitable
import numpy as np

# Summarize numpy array if it has more than 10 elements
np.set_printoptions(threshold=10)

def clean_if_exists(path):
    if os.path.exists(path):
        if os.path.isfile(path):
            os.remove(path)
        else:
            shutil.rmtree(path)
            os.mkdir(path)

def get_last_n_lines(string, n):
    return "\n".join(string.split("\n")[-n:])

def create_folder_if_doesnt_exist(path):
    if not os.path.exists(path):
        os.makedirs(path)

input_samples = [os.path.basename(f) for f in glob.glob('input/*_1.fastq.gz')]
input_samples = [f.replace('_1.fastq.gz', '') for f in input_samples]

# Prepare the gzipped concatenation of forward and backward reads
for input_sample in input_samples:
    os.system('cat input/%s_1.fastq.gz input/%s_2.fastq.gz > input/%s.fastq.gz'
              % (input_sample, input_sample, input_sample))

print(input_samples)

sample_sizes = list(range(1, 20, 3))
sample_sizes

def reset_func():
    # for file in glob.glob("output/*.txt"):
    #     clean_if_exists(file)
    pass

def benchmark_list_to_results(benchmark_firsts_list):
    return {
        "memory": max(list(map(lambda result: result.memory.max, benchmark_firsts_list))),
        "disk_read": max(list(map(lambda result: result.disk.read_chars, benchmark_firsts_list))),
        "disk_write": max(list(map(lambda result: result.disk.write_chars, benchmark_firsts_list))),
        "runtime": sum(list(map(lambda result: result.process.execution_time, benchmark_firsts_list)))
    }

def sampling_func(sample_size):
    samples = input_samples[:sample_size]
    return samples

assemble_command = {
    "command": "mentalist call -o output/%.txt -s % --db campy_mlst.db input/%.fastq.gz",
    "parallel_args": "-j 1 -I%"
}

# active_output_print: prints stdout and stderr on every iteration
multibench_results, debug_str = multi_cmdbench(
    {"call": [assemble_command]},
    reset_func=reset_func,
    iterations=2,
    sampling_func=sampling_func,
    sample_sizes=sample_sizes,
    benchmark_list_to_results=benchmark_list_to_results,
    active_output_print=False,
    progress_bar=True)

save_path = "multibench_results.txt"
samples_per_sample_size = []
for sample_size in sample_sizes:
    samples_per_sample_size.append(input_samples[:sample_size])
save_multibench_results(multibench_results, samples_per_sample_size, save_path)

read_path = "multibench_results.txt"
multibench_results, samples_per_sample_size = read_multibench_results(read_path)
print(samples_per_sample_size)

%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
from pylab import rcParams
rcParams['figure.figsize'] = 15, 3

# Indexing plots
plot_resources(multibench_results, sample_sizes, "call")
```
---
___
<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
___
<center>*Copyright Pierian Data 2017*</center>
<center>*For more information, visit us at www.pieriandata.com*</center>

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

airline = pd.read_csv('airline_passengers.csv', index_col="Month")
airline.dropna(inplace=True)
airline.index = pd.to_datetime(airline.index)
airline.head()
```

# SMA
## Simple Moving Average

We've already shown how to create a simple moving average; for a quick review:

```
airline['6-month-SMA'] = airline['Thousands of Passengers'].rolling(window=6).mean()
airline['12-month-SMA'] = airline['Thousands of Passengers'].rolling(window=12).mean()
airline.head()
airline.plot(figsize=(12, 8))
```

# EWMA
## Exponentially-weighted moving average

We just showed how to calculate the SMA based on some window. However, basic SMA has some "weaknesses":

* Smaller windows will lead to more noise, rather than signal.
* It will always lag by the size of the window.
* It will never reach the full peak or valley of the data due to the averaging.
* It does not really inform you about possible future behaviour; all it really does is describe trends in your data.
* Extreme historical values can skew your SMA significantly.

To help fix some of these issues, we can use an EWMA (exponentially-weighted moving average). EWMA will allow us to reduce the lag effect from SMA and it will put more weight on values that occurred more recently (by applying more weight to the more recent values, thus the name). The amount of weight applied to the most recent values will depend on the actual parameters used in the EWMA and the number of periods given a window size. [Full details on the mathematics behind this can be found here](http://pandas.pydata.org/pandas-docs/stable/computation.html#exponentially-weighted-windows). Here is the shorter version of the explanation behind EWMA.
The formula for EWMA is:

$ y_t = \frac{\sum\limits_{i=0}^t w_i x_{t-i}}{\sum\limits_{i=0}^t w_i} $

where $x_t$ is the input value, $w_i$ is the applied weight (note how it can change from $i=0$ to $t$), and $y_t$ is the output.

Now the question is, how do we define the weight term $w_i$? This depends on the `adjust` parameter you provide to the `.ewm()` method. When `adjust=True` (the default), weighted averages are calculated using the weights:

### $y_t = \frac{x_t + (1 - \alpha)x_{t-1} + (1 - \alpha)^2 x_{t-2} + ... + (1 - \alpha)^t x_{0}}{1 + (1 - \alpha) + (1 - \alpha)^2 + ... + (1 - \alpha)^t}$

When `adjust=False` is specified, moving averages are calculated as:

### $\begin{split}y_0 &= x_0 \\ y_t &= (1 - \alpha) y_{t-1} + \alpha x_t,\end{split}$

which is equivalent to using the weights:

\begin{split}w_i = \begin{cases} \alpha (1 - \alpha)^i & \text{if } i < t \\ (1 - \alpha)^i & \text{if } i = t. \end{cases}\end{split}

When `adjust=False` we have $y_0 = x_0$ and, from the last representation above, $y_t = \alpha x_t + (1 - \alpha) y_{t-1}$; therefore there is an assumption that $x_0$ is not an ordinary value but rather an exponentially weighted moment of the infinite series up to that point.

One must have $0 < \alpha \leq 1$, and while since pandas version 0.18.0 it has been possible to pass $\alpha$ directly, it's often easier to think about either the **span**, **center of mass** (com) or **half-life** of an EW moment:

\begin{split}\alpha = \begin{cases} \frac{2}{s + 1}, & \text{for span}\ s \geq 1\\ \frac{1}{1 + c}, & \text{for center of mass}\ c \geq 0\\ 1 - \exp\left(\frac{\log 0.5}{h}\right), & \text{for half-life}\ h > 0 \end{cases}\end{split}

* **Span** corresponds to what is commonly called an "N-day EW moving average".
* **Center of mass** has a more physical interpretation and can be thought of in terms of span: $c = (s - 1)/2$.
* **Half-life** is the period of time for the exponential weight to reduce to one half.
* **Alpha** specifies the smoothing factor directly.
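As a quick sanity check (a sketch of ours, not part of the original notebook), we can compute the `adjust=True` weighted average by hand on a made-up toy series and confirm it matches what pandas returns:

```python
import numpy as np
import pandas as pd

# Made-up toy series and smoothing factor for the check
x = pd.Series([1.0, 2.0, 3.0, 4.0])
alpha = 0.5

def ewma_adjust_true(values, alpha):
    # y_t = sum_{i=0}^{t} (1-alpha)^i x_{t-i} / sum_{i=0}^{t} (1-alpha)^i
    out = []
    for t in range(len(values)):
        w = (1 - alpha) ** np.arange(t, -1, -1)  # weight (1-alpha)^(t-j) on x_j
        out.append(np.sum(w * values[:t + 1]) / w.sum())
    return pd.Series(out)

manual = ewma_adjust_true(x.to_numpy(), alpha)
print(np.allclose(manual, x.ewm(alpha=alpha, adjust=True).mean()))  # True
```

For example, the second value is $(x_1 + 0.5 x_0)/1.5 = 2.5/1.5 \approx 1.667$, exactly what `.ewm()` produces.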
```
airline['EWMA12'] = airline['Thousands of Passengers'].ewm(span=12).mean()
airline[['Thousands of Passengers', 'EWMA12']].plot(figsize=(12, 8))
```

Great! That is all for now, let's move on to ARIMA modeling!
---
# Classifying Poses

## Goal
Take the key point output of a pose estimator and classify poses without manually defining threshold checks.

## High-Level Workflow
1. Gather image dataset
1. Perform pose estimation and save keypoints
1. Load data
1. Clean and normalize data to be used as input to SVM
1. Choose a classifier
1. Train classifier
1. Test classifier

Let's begin by defining the class labels and key points to be used:

```
POSES = {
    "Tree_Pose_or_Vrksasana_": 0,
    "Extended_Revolved_Triangle_Pose_or_Utthita_Trikonasana_": 1,
    "Warrior_I_Pose_or_Virabhadrasana_I_": 2,
    "Warrior_II_Pose_or_Virabhadrasana_II_": 3,
    "Warrior_III_Pose_or_Virabhadrasana_III_": 4
}

KEY_POINTS = [
    'Neck',
    'Right Shoulder', 'Right Elbow', 'Right Wrist',
    'Left Shoulder', 'Left Elbow', 'Left Wrist',
    'Right Hip', 'Right Knee', 'Right Ankle',
    'Left Hip', 'Left Knee', 'Left Ankle']
```

## 3. Load Data

```
import pandas as pd
import os

data = {}
for pose, class_id in POSES.items():
    data[class_id] = pd.read_csv(os.path.join('in', '{}.csv'.format(pose)), index_col=0)

print(data[0][['Neck y', 'Right Wrist y', 'Left Wrist y']].head())
print(data[1][['Neck y', 'Right Wrist y', 'Left Wrist y']].head())
```

## 4. Clean and Normalize Data

```
normalized_data = {}
for pose, class_id in POSES.items():
    df = data[class_id].copy()

    # Remove all rows with missing key points
    df = df[(df.T != -1).any()]

    # Center all points around the neck
    for kp in KEY_POINTS[1:]:
        df['{} x'.format(kp)] = df['{} x'.format(kp)] - df['Neck x']
        df['{} y'.format(kp)] = df['{} y'.format(kp)] - df['Neck y']

    # Remove the neck columns since they are now [0, 0]
    df = df.drop(columns=['Neck x', 'Neck y'])

    # Standardize the coordinates to zero mean and unit variance
    pose_mean = df.stack().mean()
    pose_std = df.stack().std()
    df = (df - pose_mean) / pose_std

    normalized_data[class_id] = df
    df.to_csv(os.path.join('normalized', '{}.csv'.format(pose)), index=False)

print(normalized_data[0][['Right Wrist y', 'Left Wrist y']].head())
print(normalized_data[1][['Right Wrist y', 'Left Wrist y']].head())
```

## 5. Choose Your Classifier

We'll be using Scikit-Learn: https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html

## 6. Train Classifier

```
import numpy as np
from sklearn.svm import LinearSVC

NUM_TEST = 10

X_train = None
y_train = None
X_test = None
y_test = None

for pose, class_id in POSES.items():
    df = normalized_data[class_id]
    X_pose = df.to_numpy()
    y_pose = [class_id] * df.shape[0]
    print('X shape for {}:'.format(pose), X_pose.shape)
    print('y length:', len(y_pose))

    # Hold out the last NUM_TEST samples of each pose for testing
    X_pose_train = X_pose[:-NUM_TEST][:]
    y_pose_train = y_pose[:-NUM_TEST]
    X_pose_test = X_pose[-NUM_TEST:][:]
    y_pose_test = y_pose[-NUM_TEST:]

    if X_train is None:
        X_train = X_pose_train
    else:
        X_train = np.concatenate((X_train, X_pose_train), axis=0)
    if y_train is None:
        y_train = y_pose_train
    else:
        y_train = np.concatenate((y_train, y_pose_train))
    if X_test is None:
        X_test = X_pose_test
    else:
        X_test = np.concatenate((X_test, X_pose_test), axis=0)
    if y_test is None:
        y_test = y_pose_test
    else:
        y_test = np.concatenate((y_test, y_pose_test))

clf = LinearSVC(C=1.0)
clf.fit(X_train, y_train)
```

## 7. Test Classifier

```
tests = list(zip(clf.predict(X_test), y_test))
print('Results:\n', tests)

incorrect = [element for element in tests if element[0] != element[1]]
print('Wrongly Classified:\n', incorrect)
print('Ratio correct:', 1 - (len(incorrect) / len(tests)))
```
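The "ratio correct" computed above can equivalently be obtained with scikit-learn's built-in metrics. A small sketch with hypothetical labels standing in for `y_test` and `clf.predict(X_test)`:

```python
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical true labels and predictions (stand-ins for y_test and
# clf.predict(X_test) from the notebook above)
y_true = [0, 0, 1, 2, 2, 3]
y_pred = [0, 1, 1, 2, 2, 3]

print(accuracy_score(y_true, y_pred))    # fraction classified correctly
print(confusion_matrix(y_true, y_pred))  # rows = true class, columns = predicted class
```

The confusion matrix is often more informative than a single accuracy number, since it shows which poses get confused with which.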
---
```
# Importing packages
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.feature_selection import VarianceThreshold
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn import preprocessing
from sklearn import metrics

# Importing the dataset. As mentioned by the providers of the dataset, all
# missing values are labelled with a '?' symbol, so we set na_values accordingly.
prognosis_dataset = pd.read_csv('Dataset/wpbc.data', header=None, na_values=['?'])

# Setting names for each column of the dataset (the columns in the raw file
# have no names)
prognosis_dataset.columns = ["id", "outcome", "radius_mean", "texture_mean",
    "perimeter_mean", "area_mean", "smoothness_mean", "compactness_mean",
    "concavity_mean", "concave points_mean", "symmetry_mean",
    "fractal_dimension_mean", "radius_se", "texture_se", "perimeter_se",
    "area_se", "smoothness_se", "compactness_se", "concavity_se",
    "concave points_se", "symmetry_se", "fractal_dimension_se",
    "radius_worst", "texture_worst", "perimeter_worst", "area_worst",
    "smoothness_worst", "compactness_worst", "concavity_worst",
    "concave points_worst", "symmetry_worst", "fractal_dimension_worst",
    "time", "tumor_size", "positive_axillary_lymph_node"]

# Deleting the 'id' column because it is only used to uniquely identify each row
del prognosis_dataset["id"]

prognosis_dataset.T

# Getting the total number of null values in each column
prognosis_dataset.isnull().sum()

# Setting the total number of null values
null_value_length = prognosis_dataset['positive_axillary_lymph_node'].isnull().sum()

# Imputing missing values in the dataset with the rounded column mean.
# Note: we assign back to the column, not the whole DataFrame, so the rest
# of the dataset is preserved.
if null_value_length > 0:
    mean_positive_axillary_lymph_node = round(prognosis_dataset['positive_axillary_lymph_node'].mean())
    prognosis_dataset['positive_axillary_lymph_node'] = \
        prognosis_dataset['positive_axillary_lymph_node'].fillna(mean_positive_axillary_lymph_node)
    null_value_length = 0

prognosis_dataset.isnull().sum()

# Finding out the number of outcomes for non-recur and recur - N = 148, R = 47
ax = sns.countplot(prognosis_dataset.outcome, label="Count")
prognosis_dataset.outcome.value_counts()

# 'id' was already dropped above, so only 'outcome' needs to be removed here
train_features, test_features, train_labels, test_labels = train_test_split(
    prognosis_dataset.drop(['outcome'], axis=1),
    prognosis_dataset[['outcome']],
    test_size=0.2, random_state=41)

corrMatrix = train_features.corr()
f, ax = plt.subplots(figsize=(18, 18))
sns.heatmap(corrMatrix, annot=True, ax=ax)
plt.show()
```
---
# Data Bootcamp: Code Practice A

Optional Code Practice A: Jupyter basics and Python's **[graphics tools](https://davebackus.gitbooks.io/test/content/graphs1.html)** (the Matplotlib package). The goals are to become familiar with Jupyter and Matplotlib and to explore some **new datasets**. The data management part of this goes beyond what we've done in class. We recommend you just run the code provided and focus on the graphs for now.

This notebook was written by Dave Backus for the NYU Stern course [Data Bootcamp](http://databootcamp.nyuecon.com/).

**Check Jupyter before we start.** Run the code below and make sure it works.

```
# to make sure things are working, run this
import pandas as pd
print('Pandas version: ', pd.__version__)
```

If you get something like "Pandas version: 0.17.1" you're fine. If you get an error, bring your computer by and ask for help. If you're unusually brave, go to [StackOverflow](http://stackoverflow.com/a/19961403/804513) and read the instructions. Then come ask for help. (This has to do with how your computer processes unicode. When you hear that word -- unicode -- you should run away at high speed.)

## Question 1. Setup

Import packages, arrange for graphs to display in the notebook.

```
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
%matplotlib inline
```

**Remind yourself:**

* What does the `pandas` package do?
* What does the `matplotlib` package do?
* What does `%matplotlib inline` do?

## Question 2. Jupyter basics

* We refer to the cell that's highlighted as the **current cell**.
* Clicking once on any cell makes it the current cell. Clicking again allows you to edit it.
* The + in the toolbar at the top creates a new cell below the current cell.
* Change a cell from Code to Markdown (in other words, text) with the dropdown menu in the toolbar.
* To run a cell, hit shift-enter or click on the run-cell icon in the toolbar (sideways triangle and vertical line).
* For more information, click on Help at the top. User Interface Tour is a good place to start.

Practice with the following:

* Make this cell the current cell.
* Add an empty cell below it.
* Add text to the new cell: your name and the date, for example.
* *Optional:* Add a link to your LinkedIn or Facebook page. *Hint:* Look at the text in the top cell to find an example of a link.
* Run the cell.

## Question 3. Winner take all and the long tail in the US beer industry

The internet has produced some interesting market behavior, music being a great example. Among them:

* Winner take all. The large producers (Beyonce, for example) take larger shares of the market than they had in the past.
* The long tail. At the same time, small producers in aggregate increase their share.

Curiously enough, we see the same thing in the US beer industry:

* Scale economies and a reduction in transportation costs (the interstate highway system was built in the 1950s and 60s) led to consolidation, with the large firms getting larger, and the small ones either selling out or going bankrupt. (How many beer brands can you think of that no longer exist?)
* Starting in the 1980s, we saw a significant increase in the market share of small firms ("craft brewers") overall, even though each of them remains small.

We illustrate this with data from Victor and Carol Tremblay that describe the output of the top 100 US beer producers from 1947 to 2004. This is background data from their book, [The US Brewing Industry](http://www.amazon.com/The-US-Brewing-Industry-Economic/dp/0262512637), MIT Press, 2004. See [here](http://people.oregonstate.edu/~tremblac/pdf/Appendix%20A%20Weinberg%20Data.pdf) for the names of the brewers. Output is measured in thousands of 31-gallon barrels.

**Data manipulation.** The data manipulation goes beyond what we've done in class. You're free to ignore it, but here's the idea.

* The spreadsheet contains output by firms ranked 1 to 100 in size. Each row refers to a specific year and includes the outputs of firms in order of size. We don't have their names.
* We transpose this so that the columns are years and include output for the top-100 firms. The row labels are the size rank of the firm.
* We then plot the size against the rank for four years to see how it has changed.

```
url = 'http://pages.stern.nyu.edu/~dbackus/Data/beer_production_1947-2004.xlsx'
beer = pd.read_excel(url, skiprows=12, index_col=0)
print('Dimensions:', beer.shape)
beer[list(range(1, 11))].head(3)

vars = list(range(1, 101))    # extract top 100 firms
pdf = beer[vars].T            # transpose (flip rows and columns)
pdf[[1947, 1967, 1987, 2004]].head()
```

**Question.** Can you see consolidation here?

```
# a basic plot
fig, ax = plt.subplots()
pdf[1947].plot(ax=ax, logy=True)
pdf[1967].plot(ax=ax, logy=True)
pdf[1987].plot(ax=ax, logy=True)
pdf[2004].plot(ax=ax, logy=True)
ax.legend()
```

**Answer these questions below.** Code is sufficient, but it's often helpful to add comments to remind yourself what you did, and why.

* Get help for the `set_title` method by typing `ax.set_title?` in a new cell and running it. Note that you can open the documentation this produces in a separate tab with the icon in the upper right (hover text = "Open the pager in an external window").
* Add a title with `ax.set_title('Your title')`.
* Change the fontsize of the title to 14.
* What happens if we add the argument/parameter `lw=2` to the `ax.plot()` statements?
* Add a label to the x axis with `ax.set_xlabel()`.
* Add a label to the y axis.
* Why did we use a log scale (`logy=True`)? What happens if we don't?
* Use the `color` argument/parameter to choose a more effective set of colors.
* In what sense do you see "winner takes all"? A "long tail"?

## Question 4. Japan's aging population

Populations are getting older throughout the world, but Japan is a striking example.
One of our favorite quotes:

> Last year, for the first time, sales of adult diapers in Japan exceeded those for babies.

Let's see what the numbers look like using projections from the [United Nations' Population Division](http://esa.un.org/unpd/wpp/Download/Standard/Population/). They have several projections; we use what they call the "medium variant."

We have a similar issue with the data: population by age for a given country and date goes across rows, not down columns. So we choose the ones we want and transpose them. Again, more than we've done so far.

```
# data input (takes about 20 seconds on a wireless network)
url1 = 'http://esa.un.org/unpd/wpp/DVD/Files/'
url2 = '1_Indicators%20(Standard)/EXCEL_FILES/1_Population/'
url3 = 'WPP2015_POP_F07_1_POPULATION_BY_AGE_BOTH_SEXES.XLS'
url = url1 + url2 + url3
cols = [2, 4, 5] + list(range(6, 28))
prj = pd.read_excel(url, sheetname=1, skiprows=16, parse_cols=cols,
                    na_values=['…'])
print('Dimensions: ', prj.shape)
print('Column labels: ', prj.columns)

# rename some variables
pop = prj
pop = pop.rename(columns={'Reference date (as of 1 July)': 'Year',
                          'Major area, region, country or area *': 'Country',
                          'Country code': 'Code'})

# select Japan and years
countries = ['Japan']
years = [2015, 2035, 2055, 2075, 2095]
pop = pop[pop['Country'].isin(countries) & pop['Year'].isin(years)]
pop = pop.drop(['Country', 'Code'], axis=1)
pop = pop.set_index('Year').T
pop = pop/1000    # convert population from thousands to millions
pop.head()
pop.tail()
```

**Comment.** Now we have the number of people in any five-year age group running down columns. The column labels are the years.

With the dataframe `pop`:

* Plot the current age distribution with `pop[[2015]].plot()`. Note that `2015` here does not have quotes around it: it's an unusual case of integer column labels.
* Plot the current age distribution as a bar chart. Which do you think looks better?
* Create figure and axis objects.
* Use the axis object to plot the age distribution for all the years in the dataframe.
* Add titles and axis labels.
* Plot the age distribution for each date in a separate subplot. Which argument/parameter does this? *Bonus points:* Change the size of the figure to accommodate the subplots.

## Question 5. Dynamics of the yield curve

One of our favorite topics is the yield curve: a plot of the yield to maturity on a bond against the bond's maturity. The foundation here is yields on zero-coupon bonds, which are simpler objects than yields on coupon bonds.

We often refer to bond yields rising or falling, but in fact the yield curve often does different things at different maturities. We will see that here. For several years, short yields have been stuck at zero, yet yields for bonds with maturities of two years and above have varied quite a bit.

We use the Fed's well-known [Gurkaynak, Sack, and Wright data](http://www.federalreserve.gov/pubs/feds/2006/200628/200628abs.html), which provides daily data on US Treasury yields from 1961 to the present. The Fed posts the data, but it's in an unfriendly format. So we saved it as a csv file, which we read in below. The variables are yields: `SVENYnn` is the yield for maturity `nn` years.

```
# data input (takes about 20 seconds on a wireless network)
url = 'http://pages.stern.nyu.edu/~dbackus/Data/feds200628.csv'
gsw = pd.read_csv(url, skiprows=9, index_col=0, usecols=list(range(11)),
                  parse_dates=True)
print('Dimensions: ', gsw.shape)
print('Column labels: ', gsw.columns)
print('Row labels: ', gsw.index)

# grab recent data
df = gsw[gsw.index >= dt.datetime(2010, 1, 1)]
# convert to annual, last day of year
df = df.resample('A', how='last').sort_index()
df.head()

df.columns = list(range(1, 11))
ylds = df.T
ylds.head(3)
```

With the dataframe `ylds`:

* Create figure and axis objects.
* Use the axis object to plot the yield curve for all the years in the dataframe.
* Add titles and axis labels.
* Explain what you see: What happened to the yield curve over the past six years? * **Challenging.** Compute the mean yield for each maturity. Plot them on the same graph in black.
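A minimal sketch of the challenging part, using a made-up frame in the same shape as `ylds` (maturities down the rows, one column per year-end): the mean across columns gives the average yield at each maturity, which can then be drawn in black on the same axes.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Made-up yields standing in for `ylds`: maturities 1-5 down the rows,
# one column per year
ylds = pd.DataFrame({2014: [0.1, 0.5, 1.0, 1.5, 1.9],
                     2015: [0.3, 0.8, 1.2, 1.6, 2.0]},
                    index=range(1, 6))

mean_yields = ylds.mean(axis=1)    # mean yield at each maturity

fig, ax = plt.subplots()
ylds.plot(ax=ax)                                         # one curve per year
mean_yields.plot(ax=ax, color='black', lw=3, label='mean')
ax.set_xlabel('Maturity (years)')
ax.set_ylabel('Yield (%)')
ax.legend()
```

With the real `ylds` from above, the same two lines (`ylds.mean(axis=1)` and a black plot on the shared axis) answer the question.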
---
____
__Universidad Tecnológica Nacional, Buenos Aires__\
__Ingeniería Industrial__\
__Cátedra de Investigación Operativa__\
__Author: Rodrigo Maranzana__
____

# Multi-item with a space constraint

## Introduction:

We have $m$ items subject to a total volume constraint $S$; that is, the items compete for a finite space. The model representing this situation is the following:

$ min_{q_i} \quad Z = CTE(q_1, q_2, ..., q_m)$

$ s.t.$

$ \quad q_1s_1 + q_2s_2 + ... + q_ms_m \leq S $

We decide to apply the Lagrangian relaxation method in order to solve multiple simpler problems and find the optimum among all of them. Each problem is weighted by a Lagrange multiplier $\lambda$.
With:

$f = CTE(q_1, q_2, ..., q_m)$

$g = (q_1s_1 + q_2s_2 + ... + q_ms_m - S)$

the relaxed model is the following:

$ L(\lambda) = min_{q_i} \quad Z_{relax} = f + \lambda g $

As we said before, there is one lambda for each relaxed problem. Not every $\lambda$ yields valid results, since this multiplier penalizes the violation of the constraint that is now part of the objective function. The idea is to find the $\lambda$ that optimizes the primal function while satisfying the feasible region.

The original problem is quadratic, but it is easily shown to be convex and therefore has a global optimum. We look for a $\lambda$ that maximizes $L$, keeping in mind that $L$ must be minimized with respect to the quantities, i.e., we look for the optimal quantity. Therefore:

$max_{\lambda} \ L(\lambda)$

$L(\lambda) = min_{q_i} \ Z_{relax}$

$Z_{relax} = f + \lambda g $

The model above, expressed on a single line, is:

$max_{\lambda} min_{q_i} f + \lambda g $

## Application in Python:

We initialize the required libraries and load the functions associated with the expressions introduced in the previous section. For $g$ we choose to compute the matrix product with the '@' operator, which is the dot product; in Python these operations are much faster than the classic for loop. For $f$, as a teaching device, we show how to vectorize a function. Of course, both $g$ and $f$ could be written with for loops and, despite the loss of performance, in these introductory examples the difference would not be noticeable. We do it this way in order to show a different, practical way of approaching solutions with Python.
```
import math
import numpy as np
import matplotlib.pyplot as plt

# Optimal quantity:
def calcular_qopt(K, D, T, c1, s_i, lmbd):
    return math.sqrt((2 * K * D) / (T * c1 + 2 * lmbd * s_i))

# Constraint:
def calcular_g(vect_s, vect_q, S):
    return vect_s @ vect_q - S

def calcular_f_i(b_i, d_i, q_i, c1_i, t, k_i):
    return b_i * d_i + 0.5 * q_i * c1_i * t + k_i * (d_i / q_i)

# Primal objective:
def calcular_f(vect_b, vect_d, vect_q, vect_c1, t, vect_k):
    # Vectorize the function:
    calcular_f_i_vectorizada = np.vectorize(calcular_f_i)
    # Compute each f_i with the vectorized function: same inputs as
    # calcular_f_i, but as vectors holding all the values.
    vector_f_i = calcular_f_i_vectorizada(vect_b, vect_d, vect_q, vect_c1, t, vect_k)
    # It returns a vector with every f_i, which we sum and return:
    return np.sum(vector_f_i)

# Lagrangian:
def calcular_L(f_q, g_q, lmbd):
    return f_q + lmbd * g_q
```

## Example

The following example considers two simple products. It can easily be extended to $n$ products without major changes.
```
# Example:
# S = 150
S = 10
diasmes = 30
t = 1            # analysis period
interes = 0.1    # annual

# Product 1 data:
b_1 = 30                    # cost per product
alquiler_1 = 30             # rent, per day
compra_1 = 100              # purchase, per unit
calidadrecepcion_1 = 200    # quality/reception, per order
demanda_1 = 3000            # per year

k_1 = calidadrecepcion_1 + compra_1                  # ordering cost
d_1 = demanda_1                                      # demand
c1_1 = b_1 * interes + (alquiler_1 * diasmes * 12)   # unit cost
s_1 = 10

# Product 2 data:
b_2 = 40                    # cost per product
alquiler_2 = 40             # rent, per day
compra_2 = 150              # purchase, per unit
calidadrecepcion_2 = 250    # quality/reception, per order
demanda_2 = 4300            # per year

k_2 = calidadrecepcion_2 + compra_2                  # ordering cost
d_2 = demanda_2                                      # demand
c1_2 = b_2 * interes + (alquiler_2 * diasmes * 12)   # unit cost
s_2 = 15

# Building the data vectors:
vect_s = np.array([s_1, s_2])
vect_b = np.array([b_1, b_2])
vect_d = np.array([d_1, d_2])
vect_c1 = np.array([c1_1, c1_2])
vect_k = np.array([k_1, k_2])
```

### Brute-force solution over a set of lambdas

We create an array of equally spaced lambdas and solve one $L(\lambda)$ for each $\lambda$. Among all of them we select the largest, which corresponds to the minimum of the primal model for the lambda that enforces the constraint. In a for loop, we iterate over each $\lambda$ and solve for the optimal quantity and $L(\lambda)$.
```
lmbds = np.array(range(0, 1_000_000, 1000))
L = np.zeros(lmbds.shape)

for i, lmbd_i in enumerate(lmbds):
    # Compute each optimal quantity:
    q1_opt = calcular_qopt(k_1, d_1, t, c1_1, s_1, lmbd_i)
    q2_opt = calcular_qopt(k_2, d_2, t, c1_2, s_2, lmbd_i)

    # Build the array of q:
    vect_q_opt = np.array([q1_opt, q2_opt])

    # Compute g and f:
    g = calcular_g(vect_q_opt, vect_s, S)
    f = calcular_f(vect_b, vect_d, vect_q_opt, vect_c1, t, vect_k)

    # Compute the Lagrangian:
    L[i] = calcular_L(f, g, lmbd_i)

# Maximum:
max_index = np.argmax(L)
L_max = L[max_index]
lmbd_max = lmbds[max_index]

# Results:
print('RESULTS:')
print(f'The optimal lambda is: {lmbd_max:.2f}')

q1_opt = calcular_qopt(k_1, d_1, t, c1_1, s_1, lmbd_max)
q2_opt = calcular_qopt(k_2, d_2, t, c1_2, s_2, lmbd_max)
vect_q_opt = np.array([q1_opt, q2_opt])
print(f'The optimal quantities are: {q1_opt:.2f}, {q2_opt:.2f}')

f = calcular_f(vect_b, vect_d, vect_q_opt, vect_c1, t, vect_k)
print(f'The optimal CTE is: {f:.2f}')
```

We then plot the results as a curve showing $L$ as a function of $\lambda$, marking the optimal point.

```
# Subplots:
fig, ax = plt.subplots(figsize=(10, 7))
ax.plot(lmbds, L, 'm--', label='Lagrangian for each $\lambda$')
ax.set_xlabel('$\lambda$')
ax.set_ylabel('Lagrangian')
ax.plot(lmbd_max, L_max, 'r', marker='o')    # optimal point
ax.plot([0, lmbd_max], [L_max, L_max], 'k:')
ax.plot([lmbd_max, lmbd_max], [L_max * (1 - 0.2), L_max], 'k:')
ax.annotate(f'({lmbd_max:.2f}, {L_max:.2f})',
            (lmbd_max * (1 + 0.1), L_max * (1 - 0.1)))
plt.show()
```

### Gradient method to find $\lambda$

We now avoid searching for the multiplier by brute force. To that end, we apply the gradient method in its simplest form, i.e., under many assumptions and with a constant step size. We know the problem has a single global optimum over a convex search space.
Starting from an initial search point, we move in the direction of the gradient of the Lagrangian $L(\lambda)$ with respect to $\lambda$ to update successive values of $\lambda$. Once two consecutive values of $\lambda$ show no change, the procedure ends.

The algorithm is as follows:

- Initialize $\lambda$. Since we had already found an approximate value of lambda above, we use a nearby value to start the search; this is not mandatory, though.
- Compute the value of the Lagrangian for the current $\lambda$.
- Compute the gradient of the Lagrangian, $\nabla{L(\lambda)}$. It turns out to equal $g$.
- Update the value of $\lambda$ for the next iteration: $\lambda_{i+1} = \lambda_i + step \cdot \nabla{L(\lambda)}$
- Compute the difference between successive lambdas: $\Delta \lambda = | \lambda_{i+1} - \lambda_{i} |$
- If the difference is smaller than a given tolerance, stop the algorithm.

In this case the search function is differentiable, so we could omit the tolerance and look for a difference of exactly zero between successive $\lambda$. However, it is preferable to keep the stopping rule under control and not depend on a convergence that may take very long.

*Note: if the function is not differentiable, the method can still be applied, but in that case the subgradient is computed. It is the same procedure, except that the gradient is valid on each piece of the function. Differences of $\lambda$ exactly equal to zero cannot be found, so a tolerance must be checked.*

```
def gradiente_L(vect_s, vect_q, S):
    return vect_s @ vect_q - S

lmbd_0 = 0
step = 10      # fixed step; this only works in simple cases like this one.
               # See "adaptive step-size methods".
tol = 10e-2

i = 0
lmbd_i = lmbd_0
lmbd_array = []
L = []
diff = np.inf
qopt_array = []

while diff > tol:
    # Compute each optimal quantity:
    q1_opt = calcular_qopt(k_1, d_1, t, c1_1, s_1, lmbd_i)
    q2_opt = calcular_qopt(k_2, d_2, t, c1_2, s_2, lmbd_i)

    # Build the array of q:
    vect_q_opt = np.array([q1_opt, q2_opt])
    qopt_array.append(vect_q_opt)

    # Compute g and f:
    g = calcular_g(vect_q_opt, vect_s, S)
    f = calcular_f(vect_b, vect_d, vect_q_opt, vect_c1, t, vect_k)

    # Compute the Lagrangian:
    L.append(calcular_L(f, g, lmbd_i))

    # New lambda:
    lmbd_old = lmbd_i
    lmbd_array.append(lmbd_old)
    lmbd_i = max(lmbd_i + step * gradiente_L(vect_q_opt, vect_s, S), 0)  # keep lambda non-negative

    # Convergence check:
    diff = abs(lmbd_i - lmbd_old)

    i += 1

# Results:
print('RESULTS:')
print(f'The optimal lambda is: {lmbd_i:.2f}')
print(f'The optimal quantities are: {q1_opt:.2f}, {q2_opt:.2f}')
print(f'The optimal CTE is: {f:.2f}')
```

We plot the successive $\lambda$ against the iteration number.
```
fig, ax = plt.subplots(figsize=(10, 7))
ax.plot(range(0, i), lmbd_array, 'b', label=r'$\lambda$ at each iteration')
ax.set_xlabel('iteration')
ax.set_ylabel(r'$\lambda$')
ax.plot([0, i], [lmbd_i, lmbd_i], 'k:')
ax.annotate(f'(iteration: {i}, $\\lambda *$: {lmbd_i:.2f})', (i - 30, lmbd_i + 1))
plt.show()
```

We now plot the space utilization of each product for every $\lambda_i$.

```
# Build two lists from the list of solution tuples:
q_opts = list(zip(*qopt_array))

# Build and plot the figure:
fig2, ax2 = plt.subplots(figsize=(10, 7))
xx = list(np.array(range(1, i, 1000)).astype(int))
q2pq1graph = [q_opt1*s_1 + q_opt2*s_2 for (q_opt1, q_opt2) in [qopt_array[i] for i in xx]]
q1graph = [q_opts[0][i]*s_1 for i in xx]
ax2.plot(xx, q2pq1graph, color='r', label='Product 2 + Product 1')
ax2.plot(xx, q1graph, color='b', label='Product 1')
ax2.plot(xx, [S]*len(xx), linestyle='--', label='Capacity')
ax2.legend()
ax2.set_ylim((0, 15))
ax2.set_xlabel('iteration')
ax2.set_ylabel('Space consumed')
plt.show()

xx
q2pq1graph
```
``` %load_ext autoreload %autoreload 2 import sys sys.path.append("../title_maker_pro") sys.path.append("../website") import re import stanza from collections import Counter import itertools import datasets import pickle import torch from transformers import AutoModelWithLMHead, AutoTokenizer stanza.download('en') def print_words(words, f): for word in words: word_str = [word.word] if word.pos: word_str.append(f"/{word.pos}/") if word.topic: word_str.append(f"[{word.topic}]") print(" ".join(word_str), file=f) print(f"\t{word.definition}{' |n| ' if word.example is None else ''}", file=f) if word.example: print(f"\t\"{word.example}\"", file=f) print("", file=f) nlp = stanza.Pipeline(lang='en', processors='tokenize,mwt,pos') tokenizer = AutoTokenizer.from_pretrained("gpt2") tokenizer.add_special_tokens(datasets.SpecialTokens.special_tokens_dict()) blacklist = datasets.Blacklist.load("../build/blacklist.pickle") model = AutoModelWithLMHead.from_pretrained("/mnt/evo/projects/title-maker-pro/models/en_dictionary_parsed_lr_00001_creativity/checkpoint-120000/").to("cuda:0") %timeit datasets.ParsedDictionaryDefinitionDataset.evaluate_creativity(tokenizer, model, blacklist, 100, 50, max_length=512) def no_weird(w): return ( w.word[-1] != "-" and "<|" not in w.definition and "<|" not in w.example and (not w.pos or "<|" not in w.pos) and len(w.word.split()) <= 3 and len(w.definition.split()) >= 3 and len(w.example.split()) >= 3 ) def go(**kwargs): return datasets.ParsedDictionaryDefinitionDataset.generate_words( tokenizer, model, num=1, max_iterations=2, blacklist=blacklist, example_match_pos_pipeline=nlp, generation_args=dict( top_k=200, num_return_sequences=50, max_length=375, do_sample=True, ), filter_proper_nouns=True, user_filter=no_weird, **kwargs ) # words, stats = go() # print(stats) # print() # print_words(words, sys.stdout) words, stats = go(use_custom_generate=True) print(stats) print_words(words, sys.stdout) blacklist.contains("foolage") 
len(blacklist.blacklist_set) print_words(words[:50], sys.stdout) import math from transformers import activations import transformers def gelu_new(x): return 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0)))) activations.ACT2FN['gelu_new'] = gelu_new model = AutoModelWithLMHead.from_pretrained("../build/forward-dictionary-model-v1").to("cpu") quantized_model = torch.quantization.quantize_dynamic( model, {torch.nn.Linear, torch.nn.Embedding, transformers.modeling_utils.Conv1D}, dtype=torch.qint8 ) a = go2() print(tokenizer.decode(a[0])) %timeit go2() from words import WordIndex, Word def clean_example(w, example): return re.sub(re.escape(w), w, example, flags=re.IGNORECASE) from hyphen import Hyphenator h_en = Hyphenator('en_US') wi = WordIndex( [ Word( word=w.word, definition=w.definition, pos=w.pos, topic=w.topic, example=clean_example(w.word, w.example), syllables=h_en.syllables(w.word), ) for w in words ] ) wi.dump("../website/data/words2.json") h_en.syllables('fancccwe') wi2 = WordIndex.load("../website/data/words.json") wi_p = WordIndex( [ Word( word=w.word, definition=w.definition, pos=w.pos, topic=w.topic, example=clean_example(w.word, w.example), syllables=h_en.syllables(w.word) ) for w in wi2.words ] ) wi_p.dump("../website/data/words.json") ```
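The `clean_example` helper above rewrites every case-variant of the generated word back to its canonical casing; isolated as a standalone sketch:

```python
import re

def clean_example(word, example):
    # Replace case-insensitive occurrences of `word` with its canonical
    # casing; re.escape protects any regex metacharacters in the word.
    return re.sub(re.escape(word), word, example, flags=re.IGNORECASE)

print(clean_example("foolage", "FOOLAGE is everywhere; foOlAge indeed."))
# -> foolage is everywhere; foolage indeed.
```

This matters because the generator may emit examples with arbitrary capitalization of the headword, while the site wants it to match the entry exactly.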
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import tensorflow as tf
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Dense

# Accumulate inputs (5 columns) and outputs (7 columns) across nodes and months:
A1 = np.empty((0, 5), dtype='float32')
U1 = np.empty((0, 7), dtype='float32')
node = ['150', '149', '147', '144', '142', '140', '136', '61']
mon = ['Apr', 'Mar', 'Aug', 'Jun', 'Jul', 'Sep', 'May', 'Oct']
for j in node:
    for i in mon:
        inp = pd.read_csv('data_gkv/AT510_Node_' + str(j) + '_' + str(i) + '19_OutputFile.csv', usecols=[1, 2, 3, 15, 16])
        out = pd.read_csv('data_gkv/AT510_Node_' + str(j) + '_' + str(i) + '19_OutputFile.csv', usecols=[5, 6, 7, 8, 17, 18, 19])
        inp = np.array(inp, dtype='float32')
        out = np.array(out, dtype='float32')
        A1 = np.append(A1, inp, axis=0)
        U1 = np.append(U1, out, axis=0)

print(A1)
print(U1)

from sklearn.preprocessing import MinMaxScaler
import warnings

# Use one scaler per array: refitting a single shared scaler would overwrite
# its state, so any later inverse_transform would use the wrong range.
x_scaler = MinMaxScaler()
y_scaler = MinMaxScaler()
X1 = x_scaler.fit_transform(A1)
Y1 = y_scaler.fit_transform(U1)
warnings.filterwarnings(action='ignore', category=UserWarning)

X1 = X1[:, np.newaxis, :]
Y1 = Y1[:, np.newaxis, :]

from keras import backend as K

def rmse(y_true, y_pred):
    return K.sqrt(K.mean(K.square(y_pred - y_true), axis=-1))

model = Sequential()
model.add(keras.Input(shape=(1, 5)))
model.add(tf.keras.layers.GRU(14, activation="relu", use_bias=True,
                              kernel_initializer="glorot_uniform", bias_initializer="zeros",
                              kernel_regularizer=keras.regularizers.l1_l2(l1=1e-5, l2=1e-4),
                              bias_regularizer=keras.regularizers.l2(1e-4),
                              activity_regularizer=keras.regularizers.l2(1e-5)))
model.add(keras.layers.Dropout(.1))
model.add(Dense(7))
model.add(keras.layers.BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True,
                                          beta_initializer="zeros", gamma_initializer="ones",
                                          moving_mean_initializer="zeros", moving_variance_initializer="ones",
                                          trainable=True))
model.add(keras.layers.ReLU())

def coeff_determination(y_true, y_pred):
    # R^2 metric, needed below when re-compiling the loaded model.
    ss_res = K.sum(K.square(y_true - y_pred))
    ss_tot = K.sum(K.square(y_true - K.mean(y_true)))
    return 1 - ss_res / (ss_tot + K.epsilon())

model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-5), loss='mse',
              metrics=['accuracy', 'mse', 'mae', rmse])
model.summary()

from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(X1, Y1, test_size=0.25, random_state=42)
history2 = model.fit(x_train, y_train, batch_size=2048, epochs=300, validation_split=0.1)

# Serialize the architecture to JSON and the weights to HDF5:
model_json = model.to_json()
with open("Model_File/gru_relu.json", "w") as json_file:
    json_file.write(model_json)
model.save_weights("Model_File/gru_relu.h5")
print("Saved model to disk")

from keras.models import model_from_json

json_file = open('Model_File/gru_relu.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# Load the weights from the HDF5 file (not the JSON architecture file):
loaded_model.load_weights("Model_File/gru_relu.h5")
print("Loaded model from disk")

loaded_model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                     loss='mse', metrics=['accuracy', 'mse', 'mae', rmse, coeff_determination])
print(loaded_model.evaluate(x_train, y_train, verbose=0))

model.evaluate(x_test, y_test)
model.evaluate(x_train, y_train)

plt.plot(history2.history['loss'])
plt.plot(history2.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()

plt.plot(history2.history['accuracy'])
plt.plot(history2.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
```
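A note on the scaling step above: reusing one `MinMaxScaler` for both inputs and targets means the second `fit_transform` overwrites the fitted range, so a later `inverse_transform` would map into the wrong units. The class below is a hand-rolled, one-dimensional stand-in for sklearn's scaler that makes the pitfall concrete:

```python
class MinMaxScaler1D:
    """Hand-rolled stand-in for sklearn's MinMaxScaler, for a flat list."""
    def fit_transform(self, xs):
        self.lo, self.hi = min(xs), max(xs)
        return [(x - self.lo) / (self.hi - self.lo) for x in xs]
    def inverse_transform(self, xs):
        return [x * (self.hi - self.lo) + self.lo for x in xs]

# Pitfall: one shared scaler, refit on the targets, forgets the input range.
shared = MinMaxScaler1D()
scaled_inputs = shared.fit_transform([0.0, 5.0, 10.0])
scaled_targets = shared.fit_transform([100.0, 200.0])  # refit overwrites lo/hi
wrong = shared.inverse_transform(scaled_inputs)         # now in *target* units

# Correct: one scaler per array, so each inverse_transform is meaningful.
x_scaler = MinMaxScaler1D()
scaled_inputs = x_scaler.fit_transform([0.0, 5.0, 10.0])
right = x_scaler.inverse_transform(scaled_inputs)       # original units recovered

print(wrong)  # [100.0, 150.0, 200.0] -- not the inputs we scaled
print(right)  # [0.0, 5.0, 10.0]
```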
## Fourier methods

The Fourier transform (FT) of a well-behaved function $f$ is defined as:

$$f(k) = \int e^{-ikx} f(x) ~dx$$

The inverse FT is then

$$f(x) = \frac{1}{2\pi} \int e^{ikx} f(k) ~dk$$

## Discrete Fourier transforms (DFTs)

If the function is periodic in real space, $f(x+L) = f(x)$, then the Fourier space is discrete with spacing $\frac{2\pi}{L}$. Moreover, if the real space is discrete as well as periodic, with spacing $h$, then the Fourier space is discrete as well as bounded:

$$f(x) = \sum_k e^{ikx} f(k), \qquad \text{where } k \in \Big[ -\frac{\pi}{h}, \frac{\pi}{h}\Big],~ \text{with spacing } \frac{2\pi}{L}$$

This is very much in line with crystallography, with $[ -\frac{\pi}{h}, \frac{\pi}{h} ]$ being the first Brillouin zone. So there is a maximum wavenumber $k_{max}=\frac{\pi}{h}$; we will get back to this later in the notes.

In computations we usually need the FT of a discrete function rather than of a well-defined analytic function. Since the real space is discrete and periodic, the Fourier space is also discrete and bounded (the Fourier space is continuous only if the real space is unbounded). If the function is defined at $N$ points in real space and one wants the function at $N$ points in Fourier space, the **DFT** is defined as

$$f_k = \sum_{n=0}^{N-1} f_n ~ e^{-i\frac{2\pi~n~k}{N}}$$

while its inverse transform is

$$f_n = \frac1N \sum_{k=0}^{N-1} f_k ~ e^{~i\frac{2\pi~n~k}{N}}$$

To calculate each $f_k$ one needs $N$ operations, and this has to be done $N$ times, i.e., the algorithm is $\mathcal{O}(N^2)$. It can be implemented numerically as a matrix multiplication, $f_k = M\cdot f_n$, where $M$ is an $N\times N$ matrix.

## Fast Fourier transforms (FFTs)

The discussion here is based on the Cooley-Tukey algorithm. The FFT improves on the DFT by exploiting its symmetries.
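The naive $\mathcal{O}(N^2)$ DFT above can be coded directly from the definition. A standalone sketch in pure Python (plain lists and `cmath`, rather than the numpy matrix form), serving as the baseline the FFT improves on:

```python
import cmath

def dft(f):
    # Naive O(N^2) DFT: one N-term sum per output coefficient.
    N = len(f)
    return [sum(f[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def idft(F):
    # Inverse transform with the 1/N normalization used in the notes.
    N = len(F)
    return [sum(F[k] * cmath.exp(2j * cmath.pi * n * k / N) for k in range(N)) / N
            for n in range(N)]

f = [0.0, 1.0, 2.0, 3.0]
roundtrip = idft(dft(f))
assert all(abs(a - b) < 1e-12 for a, b in zip(roundtrip, f))
```

As a sanity check, the DFT of a delta function ($f_0 = 1$, all other samples zero) is flat: every coefficient equals 1.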
$$\begin{align}
f_k &= \sum_{n=0}^{N-1} f_n\, e^{-i\frac{2\pi k n}{N}} \\
&= \sum_{n=0}^{N/2-1} f_{2n}\, e^{-i\frac{2\pi k (2n)}{N}} + \sum_{n=0}^{N/2-1} f_{2n+1}\, e^{-i\frac{2\pi k (2n+1)}{N}} \\
&= \sum_{n=0}^{N/2-1} f_{2n}\, e^{-i\frac{2\pi k n}{N/2}} + e^{-i\frac{2\pi k}{N}} \sum_{n=0}^{N/2-1} f_{2n+1}\, e^{-i\frac{2\pi k n}{N/2}} \\
&\;\;\vdots
\end{align}$$

We can use the periodicity property $f_{N+k} = f_k$, which follows from the definition. Because of the tree structure of this splitting, there are $\log_2 N$ stages in the calculation. By recursively splitting the computation into two halves, the complexity becomes $\mathcal{O}(N \log N)$, whereas the naive algorithm is $\mathcal{O}(N^2)$. FFTs are available in standard Python packages such as numpy and scipy. We will now use PyGL to explore their usage in solving physics problems.

```
import pygl
import numpy as np
import matplotlib.pyplot as plt

dim, Nx, Ny = 2, 128, 128
grid = {"dim": dim, "Nx": Nx, "Ny": Ny}

# now construct the spectral solver
ss = pygl.dms.FourierSpectral(grid)
```

#### We first demonstrate the translation of a blob using the momentum operator

```
# The momentum operator, e^{-ikr}, generates translation!
f = plt.figure(figsize=(20, 5), dpi=80)
L, N = 128, 128
x, y = np.meshgrid(np.linspace(0, L, N), np.linspace(0, L, N))
rr = np.sqrt(((x-L/2)*(x-L/2) + (y-L/2)*(y-L/2))*.5)
sig = np.fft.fft2(np.exp(-0.1*rr))

def plotFirst(x, y, sig, n_):
    sp = f.add_subplot(1, 3, n_)
    plt.pcolormesh(x, y, sig, cmap=plt.cm.Blues)
    plt.axis('off')

xx = [0, -L/4, -L/2]
yy = [0, -L/3, L/2]
for i in range(3):
    kdotr = ss.kx*xx[i] + ss.ky*yy[i]
    sig = sig*np.exp(-1j*kdotr)
    plotFirst(x, y, np.real(np.fft.ifftn(sig)), i+1)
```

### Sampling: Aliasing error

We saw that because of the smallest length scale $h$ in real space, there is a corresponding largest wave-vector $k_{max}$ in Fourier space.
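This bound can be checked numerically: two sines whose wavenumbers differ by a multiple of $2\pi/h$ take identical values at every grid point. A standalone pure-Python sketch, using the same $N=16$, $L=1$ grid as the plot in this section:

```python
import math

N, L = 16, 1.0
h = L / N
xs = [n * h for n in range(N)]

s1 = [math.sin(2 * math.pi * x) for x in xs]   # k1 = 2*pi
s2 = [math.sin(34 * math.pi * x) for x in xs]  # k2 = k1 + 2*pi/h = 34*pi

# k2 - k1 is a multiple of 2*pi/h, so the samples are indistinguishable:
assert max(abs(a - b) for a, b in zip(s1, s2)) < 1e-9
```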
The error arises because of this $k_{max}$: a signal with $k > k_{max}$ cannot be distinguished on the grid. In the example below we see that with 16 sample points in real space one cannot distinguish $\sin(2\pi x/L)$ from $\sin(34\pi x/L)$. In general, $\sin(k_1 x)$ and $\sin(k_2 x)$ cannot be distinguished if $k_1 - k_2$ is a multiple of $\frac{2\pi}{h}$. This is a manifestation of the sampling theorem, stated on Wikipedia as: if a function $x(t)$ contains no frequencies higher than $B$ hertz, it is completely determined by giving its ordinates at a series of points spaced $1/(2B)$ seconds apart.

```
L, N = 1, 16
x = np.arange(0, L, L/512)
xx = np.arange(0, L, L/N)

def ff(k, x):
    return np.sin(k*x)

f = plt.figure(figsize=(17, 6), dpi=80)
plt.plot(x, ff(2*np.pi, x), color="#A60628", linewidth=2)
plt.plot(x, ff(34*np.pi, x), color="#348ABD", linewidth=2)
plt.plot(xx, ff(2*np.pi, xx), 'o', color="#020e3e", markersize=8)
plt.xlabel('x', fontsize=15)
plt.ylabel('y(x)', fontsize=15)
plt.title(r'Aliasing in sampling of $\sin(2\pi x/L)$ and $\sin(34 \pi x/L)$', fontsize=24)
plt.axis('off')
```

To avoid this error, we truncate the higher modes in PyGL using `ss.dealias`.

### Differentiation

We now show the usage of PyGL to compute differentiation matrices. We first compute the second derivative of $\cos x$.

```
import pygl
import numpy as np
import matplotlib.pyplot as plt

dim, Nx, Ny = 1, 32, 32
grid = {"dim": dim, "Nx": Nx, "Ny": Ny}
ss = pygl.dms.FourierSpectral(grid)

def f1(kk, x):
    return np.cos(kk*x)

f = plt.figure(figsize=(10, 5), dpi=80)
L, N = Nx, Nx
x = np.arange(0, N)
fac = 2*np.pi/L

k = ss.kx
fk = np.fft.fft(f1(fac, x))
f1_kk = -k*k*fk                      # multiply by (ik)^2 in Fourier space
f1_xx = np.real(np.fft.ifft(f1_kk))  # back to real space

plt.plot(x, -f1(fac, x)*fac*fac, color="#348ABD", label='analytical', linewidth=2)
plt.plot(x, f1_xx, 'o', color="#A60628", label='numerical', markersize=6)
plt.legend(loc='best')
plt.xlabel('x', fontsize=15)
```
```
import syft as sy
import numpy as np
from syft.core.adp.entity import Entity

# Instantiate Domain Object & Entity
ishan = Entity(name="Ishan")
uk = sy.login(email="info@openmined.org", password="changethis", port=8081)
uk.store.pandas

data = np.random.rand(1, 3)

# Identify the first thing that breaks the deserialization:
basic_tensor = sy.Tensor(data).tag("basic")
print(basic_tensor)
basic_ptr = basic_tensor.send(uk)
print(basic_ptr.id_at_location)
uk.store.pandas

private_tensor = sy.Tensor(data).private(0, 10, entity=ishan).tag("private")
print(private_tensor)
private_ptr = private_tensor.send(uk)
print(private_ptr.id_at_location)

ser = sy.serialize(private_tensor)
sy.deserialize(ser)
```

['tensor', 'autodp', 'single_entity_phi', 'SingleEntityPhiTensor']

celeryworker_1 | [2021-08-10 15:27:16,202: WARNING/ForkPoolWorker-1] {'__name__': 'syft.core.tensor', '__package__': 'syft.core.tensor', '__file__': '/app/syft/src/syft/core/tensor/__init__.py', ...} (remainder of the worker's dump of the module globals and the Python builtins elided)
```
uk.store.pandas
autograd_tensor = sy.Tensor(data).autograd(requires_grad=True).tag("autograd")
print(autograd_tensor)
autograd_ptr = autograd_tensor.send(uk)
print(autograd_ptr.id_at_location)
uk.store.pandas
private_tensor.gamma
basic_tensor.id
basic_ptr.id_at_location
autograd_ptr.id_at_location
uk.store.pandas
uk.store.store
autograd_ptr.id_at_location
private_autograd_tensor = sy.Tensor(data).private(0, 1, entity=ishan).autograd(requires_grad=True).tag("priv_autograd")
private_autograd_tensor
private_autograd_tensor.id
private_autograd_ptr = private_autograd_tensor.send(uk)
print(private_autograd_ptr.id_at_location)
uk.store.pandas
y1 = private_autograd_ptr + private_autograd_ptr
y2 = private_autograd_tensor + private_autograd_tensor
public_y2 = y2.publish(acc=uk.acc, sigma=0.1)
private_tensor_ptr = uk.store['private']
public_tensor = private_autograd_ptr.get()
public_tensor
y = public_tensor + public_tensor
y
public_y = y.publish(acc=uk.acc, sigma=0.1)
type(uk)
new_acc = sy.core.adp.adversarial_accountant.AdversarialAccountant()
new_acc.print_ledger()
public_y = y.publish(acc=new_acc, sigma=0.1)
```

## Error spotted on backend_1 when trying to send a Tensor to the object store: apparently the syft modules (autograd, autodp) aren't importing?
2021-08-09T19:19:10.642462+0000][DEBUG][logger]][47] Serializing <class 'syft.core.common.group._create_VERIFYALL.<locals>.VerifyAll'> [2021-08-09T19:19:10.644311+0000][DEBUG][logger]][47] Serializing <class 'syft.lib.python._SyNone'> [2021-08-09 19:19:10,858: INFO/ForkPoolWorker-1] Task app.worker.msg_without_reply[e8dc5efb-4860-481e-ad5d-df08a64c6efb] succeeded in 0.36394500299866195s: None [2021-08-09 19:19:24,061: INFO/MainProcess] Received task: app.worker.msg_without_reply[40b42308-9c0a-43f8-8950-1441c6e96260] [2021-08-09 19:19:24,166: ERROR/ForkPoolWorker-1] Task app.worker.msg_without_reply[40b42308-9c0a-43f8-8950-1441c6e96260] raised unexpected: KeyError('autodp') Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/celery/app/trace.py", line 412, in trace_task R = retval = fun(*args, **kwargs) File "/usr/local/lib/python3.7/site-packages/celery/app/trace.py", line 704, in __protected_call__ return self.run(*args, **kwargs) File "/app/app/worker.py", line 20, in msg_without_reply obj_msg = deserialize(blob=msg_bytes, from_bytes=True) File "/app/syft/src/syft/core/common/serde/deserialize.py", line 132, in _deserialize res = _proto2object(proto=blob) File "/app/syft/src/syft/core/common/message.py", line 183, in _proto2object _deserialize(blob=proto.message, from_bytes=True), SyftMessage File "/app/syft/src/syft/core/common/serde/deserialize.py", line 132, in _deserialize res = _proto2object(proto=blob) File "/app/syft/src/syft/core/node/common/action/save_object_action.py", line 59, in _proto2object obj = _deserialize(blob=proto.obj) File "/app/syft/src/syft/core/common/serde/deserialize.py", line 132, in _deserialize res = _proto2object(proto=blob) File "/app/syft/src/syft/core/store/storeable_object.py", line 185, in _proto2object data = data_type._proto2object(proto=data) # type: ignore File "/app/syft/src/syft/core/common/serde/recursive.py", line 53, in _proto2object attrs = dict(deserialize(proto.data, from_bytes=True)) 
File "/app/syft/src/syft/core/common/serde/deserialize.py", line 132, in _deserialize res = _proto2object(proto=blob) File "/app/syft/src/syft/lib/python/dict.py", line 239, in _proto2object for element in proto.values File "/app/syft/src/syft/lib/python/dict.py", line 239, in <listcomp> for element in proto.values File "/app/syft/src/syft/core/common/serde/deserialize.py", line 62, in _deserialize obj_type = index_syft_by_module_name(fully_qualified_name=data_message.obj_type) File "/app/syft/src/syft/util.py", line 123, in index_syft_by_module_name return index_modules(a_dict=globals()["syft"], keys=attr_list[1:]) File "/app/syft/src/syft/util.py", line 87, in index_modules return index_modules(a_dict=a_dict.__dict__[keys[0]], keys=keys[1:]) File "/app/syft/src/syft/util.py", line 87, in index_modules return index_modules(a_dict=a_dict.__dict__[keys[0]], keys=keys[1:]) File "/app/syft/src/syft/util.py", line 87, in index_modules return index_modules(a_dict=a_dict.__dict__[keys[0]], keys=keys[1:]) KeyError: 'autodp' [2021-08-09 19:20:11,214: INFO/MainProcess] Received task: app.worker.msg_without_reply[5e66d116-fc32-482b-b535-c2d6178a9e1c] [2021-08-09 19:20:11,357: ERROR/ForkPoolWorker-1] Task app.worker.msg_without_reply[5e66d116-fc32-482b-b535-c2d6178a9e1c] raised unexpected: KeyError('autograd') Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/celery/app/trace.py", line 412, in trace_task R = retval = fun(*args, **kwargs) File "/usr/local/lib/python3.7/site-packages/celery/app/trace.py", line 704, in __protected_call__ return self.run(*args, **kwargs) File "/app/app/worker.py", line 20, in msg_without_reply obj_msg = deserialize(blob=msg_bytes, from_bytes=True) File "/app/syft/src/syft/core/common/serde/deserialize.py", line 132, in _deserialize res = _proto2object(proto=blob) File "/app/syft/src/syft/core/common/message.py", line 183, in _proto2object _deserialize(blob=proto.message, from_bytes=True), SyftMessage File 
"/app/syft/src/syft/core/common/serde/deserialize.py", line 132, in _deserialize res = _proto2object(proto=blob) File "/app/syft/src/syft/core/node/common/action/save_object_action.py", line 59, in _proto2object obj = _deserialize(blob=proto.obj) File "/app/syft/src/syft/core/common/serde/deserialize.py", line 132, in _deserialize res = _proto2object(proto=blob) File "/app/syft/src/syft/core/store/storeable_object.py", line 185, in _proto2object data = data_type._proto2object(proto=data) # type: ignore File "/app/syft/src/syft/core/common/serde/recursive.py", line 53, in _proto2object attrs = dict(deserialize(proto.data, from_bytes=True)) File "/app/syft/src/syft/core/common/serde/deserialize.py", line 132, in _deserialize res = _proto2object(proto=blob) File "/app/syft/src/syft/lib/python/dict.py", line 239, in _proto2object for element in proto.values File "/app/syft/src/syft/lib/python/dict.py", line 239, in <listcomp> for element in proto.values File "/app/syft/src/syft/core/common/serde/deserialize.py", line 62, in _deserialize obj_type = index_syft_by_module_name(fully_qualified_name=data_message.obj_type) File "/app/syft/src/syft/util.py", line 123, in index_syft_by_module_name return index_modules(a_dict=globals()["syft"], keys=attr_list[1:]) File "/app/syft/src/syft/util.py", line 87, in index_modules return index_modules(a_dict=a_dict.__dict__[keys[0]], keys=keys[1:]) File "/app/syft/src/syft/util.py", line 87, in index_modules return index_modules(a_dict=a_dict.__dict__[keys[0]], keys=keys[1:]) File "/app/syft/src/syft/util.py", line 87, in index_modules return index_modules(a_dict=a_dict.__dict__[keys[0]], keys=keys[1:]) KeyError: 'autograd' ## The code in question ``` def index_modules(a_dict: object, keys: List[str]) -> object: """Recursively find a syft module from its path This is the recursive inner function of index_syft_by_module_name. See that method for a full description. 
Args: a_dict: a module we're traversing keys: the list of string attributes we're using to traverse the module Returns: a reference to the final object """ if len(keys) == 0: return a_dict return index_modules(a_dict=a_dict.__dict__[keys[0]], keys=keys[1:]) def index_syft_by_module_name(fully_qualified_name: str) -> object: """Look up a Syft class/module/function from full path and name Sometimes we want to use the fully qualified name (such as one generated from the 'get_fully_qualified_name' method below) to fetch an actual reference. This is most commonly used in deserialization so that we can have generic protobuf objects which just have a string representation of the specific object it is meant to deserialize to. Args: fully_qualified_name: the name in str of a module, class, or function Returns: a reference to the actual object at that string path """ attr_list = fully_qualified_name.split(".") # we deal with VerifyAll differently, because we don't it be imported and used by users if attr_list[-1] == "VerifyAll": return type(syft.core.common.group.VERIFYALL) if attr_list[0] != "syft": raise ReferenceError(f"Reference don't match: {attr_list[0]}") if ( attr_list[1] != "core" and attr_list[1] != "lib" and attr_list[1] != "grid" and attr_list[1] != "wrappers" ): raise ReferenceError(f"Reference don't match: {attr_list[1]}") return index_modules(a_dict=globals()["syft"], keys=attr_list[1:]) import syft as sy import numpy as np ishan = sy.core.adp.entity.Entity(name='Ishan') data = np.random.rand(1, 3) private_tensor = sy.core.tensor.Tensor(data).private(0, 1, entity=ishan) uk = sy.login(email="info@openmined.org", password="changethis", port=8081) uk.store.pandas private_ptr = private_tensor.send(uk) print(private_ptr.id_at_location) uk.store.pandas autograd_tensor = sy.Tensor(data).autograd(requires_grad=True) autograd_tensor autograd_ptr = autograd_tensor.send(uk) ```
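The recursive lookup in `index_modules` is, at bottom, attribute traversal from a root module. A generic standalone sketch, iterative via `functools.reduce` and without syft's root-name validation (the real code indexes `__dict__` directly, which is why a missing submodule surfaces as a `KeyError` rather than an `AttributeError`):

```python
import functools
import importlib

def resolve(fully_qualified_name):
    # Import the root module, then walk the remaining dotted parts as
    # attributes -- the same traversal index_modules performs recursively.
    root, *attrs = fully_qualified_name.split(".")
    return functools.reduce(getattr, attrs, importlib.import_module(root))

print(resolve("math.pi"))  # the float math.pi
```

Seen this way, the `KeyError: 'autodp'` in the traceback simply means the worker's `syft` package had no `core.tensor.autodp` attribute at deserialization time, i.e., the submodule was never imported in that process.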