# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Train your own skip-thoughts model
#
# This notebook walks you through training a skip-thoughts model. It was used with semi-success to train a skip-thoughts model on a gpu-backed machine with the Pythia kernel against the entire stackexchange corpus. It was prepared over the course of 8 days, and isn't perfect.
#
# The first section requires the user to set several data paths.
#
# The second section imports several modules, including pythia- and skip-thoughts-specific modules, which depend on python paths being set correctly. It may need some tweaking.
#
# Since the Jupyter notebook on the gpu-backed machines was unreliable, I moved some sections of this notebook into small dirty python files in a folder called skip-thoughts_training.
#
# There is some trickiness about what a skip-thoughts model is. The model released with the skip-thoughts paper is actually a concatenation of three training runs: one model trained on the corpus at full dimension, one trained at half as many dimensions, and one trained on the corpus with sentence order reversed, again at half as many dimensions. The results are then concatenated.
#
# We don't attempt this. We merely train a single "uni-skip" model.
#
# The encode function found in the file skipthoughts.py insists on two models, uni-skip and bi-skip. To encode with a single uni-skip model, the encode function found in the tools.py file is needed. Unfortunately, it tends to die, and die badly, when I use it.
#
# The last part of the code is supposed to be validation. Since my encoding is broken, validation is not tested.
#
# Because things kept dying on me, I found it convenient to write to disk near constantly. This slows things down, but makes them more robust against dying computers.
#
# The cells thus tend to use little memory and instead read and write in a streaming fashion, using a lot of time instead.
#
# ## Hard-coding data paths
#
# Mostly this notebook should just run. It does, however, require the user to deal with one of the next 2 cells. The following locations will not work out of the box and are just suggestions.
#
# #### Location of the data
#
# `sample_location` should be the path to a directory which contains your data. Each file should contain json-parsable lines. The directory can have subdirectories. The code will recursively find the files. There should be no `.json` files anywhere in the directory except those the code wishes to parse.
#
# `path_to_word2vec` is a `.bin` word2vec file the code depends on, e.g. the Google News model found at https://code.google.com/archive/p/word2vec/
#
# #### Where to put output
#
# `parsed_data_location` is a directory of `.csv` files the code will create, structured the same as `sample_location`, but where the sentences have been normalized and tokenized, and where each file represents a post.
#
# `training_data_location` is the name of a file that will store the sentences in a single file, one per line, with null characters separating blog posts.
#
# `vocab_location` should be the name of a pickle file (including path), which will store information about the words in the corpus.
#
# `model_location` should be the name of a .npz (zipped numpy) file (including path), which will store the model itself as a numpy array. The code will also create a .npz.pkl file with the same name containing some metadata.
sample_location = 'pythia/data/stackexchange/all'
path_to_word2vec = 'outside-data/stackexchange/models/word2vecAnime.bin'

parsed_data_location = 'outside-results/testing'
training_data_location = 'outside-results/testing/training.txt'
vocab_location = 'outside-data/stackexchange/models/vocab.pickle'
model_location = 'outside-data/stackexchange/models/corpus.npz'

# ## Let's import some modules!

# Import auxiliary modules
import os
import json
import numpy
import csv
import sys
import random

# Import theano
import theano
import theano.tensor as tensor

# May need to set the flag if your .theanorc isn't correct.
# If you want to run on gpu, you should fix your .theanorc
# and make this cell irrelevant.
theano.config.floatX = 'float32'

# Double check that floatX is float32.
# device should be either cpu or gpu, as desired.
print(theano.config.floatX)
print(theano.config.device)

# So this next cell is maybe bad. The notebook only runs if your paths are all configured right. You may need to adjust the below cell to import pythia/skipthoughts modules.
#
# The commented-out lines were what I used to make this work on my own computer without any adjustments to my notebook kernel. I *hope* this will just work with the pythia kernel installed.

# Import skipthoughts modules
#sys.path.append('/Users/chrisn/mad-science/pythia/src/featurizers/skipthoughts')
#from training import vocab, train, tools
#import skipthoughts
from src.featurizers.skipthoughts import skipthoughts
from src.featurizers.skipthoughts.training import vocab, train, tools

# Import pythia modules
#sys.path.append('/Users/chrisn/mad-science/pythia/')
from src.utils import normalize, tokenize

# For evaluation purposes, import some sklearn modules
from sklearn.linear_model import LinearRegression
# Note: on sklearn >= 0.18 this lives in sklearn.model_selection;
# sklearn.cross_validation was removed in 0.20.
from sklearn.cross_validation import train_test_split
import pandas

import warnings
warnings.filterwarnings('ignore')
# Because there were a lot of annoying warnings.
# The Beautiful Soup module as used in the pythia normalization is mad about something,
# and the skip-thoughts code is full of deprecation warnings about how numpy works.
# The warnings can crash my system.

# ## Tokenization and normalization
#
# Who knows the best way to do this? I tried to match the expectations of both the skip-thoughts code and the pythia codebase as best I could.
#
# For each document:
#
# 1) Make a list of sentences. We use utils.tokenize.punkt_sentences
#
# 2) Normalize each sentence. Remove html and make everything lower-case. We use utils.normalize.xml_normalize
#
# 3) Tokenize each sentence. Now each sentence is a string of space-separated tokens. We use utils.tokenize.word_punct_tokens and rejoin the tokens.
#
# Because I had so many difficulties with things crashing, I was happy whenever I got anything done and wanted to save where I was. I also became gun-shy about using memory. The solution below is thus entirely streaming. This slows it down because of file i/o.
#
# The output of this section run on the entire stackexchange corpus can be found in <...>/stackexchange_models/se_posts_parsed.tar.gz.
#
# (Well, the tarring was done in the shell. This cell just creates a directory.)
#
# This section requires previously set variables: `sample_location` for the input and `parsed_data_location` for the output.
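# The three steps above can be sketched end-to-end. The helpers below are simplified stand-ins for pythia's `utils.tokenize` and `utils.normalize` functions (which wrap NLTK's Punkt tokenizer); the regexes here are my own rough approximations, not the pythia implementations:

```python
import re
import html

def punkt_sentences(text):
    # Stand-in for utils.tokenize.punkt_sentences (NLTK Punkt):
    # naively split after sentence-ending punctuation followed by whitespace.
    return [s for s in re.split(r'(?<=[.!?])\s+', text.strip()) if s]

def xml_normalize(sentence):
    # Stand-in for utils.normalize.xml_normalize: strip tags, lower-case.
    return html.unescape(re.sub(r'<[^>]+>', ' ', sentence)).lower().strip()

def word_punct_tokens(sentence):
    # Stand-in for utils.tokenize.word_punct_tokens:
    # split into runs of word characters and runs of punctuation.
    return re.findall(r"\w+|[^\w\s]+", sentence)

doc = "Hello there! This is the <b>second</b> sentence."
sentences = punkt_sentences(doc)
normal = [xml_normalize(s) for s in sentences]
tokens = [' '.join(word_punct_tokens(s)) for s in normal]
print(tokens)   # ['hello there !', 'this is the second sentence .']
```

# The real Punkt model handles abbreviations and many edge cases this sketch does not; it only illustrates the shape of the data flowing through the pipeline.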
file_extension = ".json"

# Instead of trying to parse in memory, we parse line by line and write to disk.
fieldnames = ["body_text", "post_id", "cluster_id", "order", "novelty"]
for root, dirs, files in os.walk(sample_location):
    for doc in files:
        if doc.endswith(file_extension):  # Recursively find all .json files
            # Note: root already includes sample_location, so join root and doc only.
            for line in open(os.path.join(root, doc)):
                temp_dict = json.loads(line)
                post_id = temp_dict['post_id']
                text = temp_dict['body_text']
                sentences = tokenize.punkt_sentences(text)
                normal = [normalize.xml_normalize(sentence) for sentence in sentences]
                tokens = [' '.join(tokenize.word_punct_tokens(sentence)) for sentence in normal]
                base_doc = doc.split('.')[0]
                output_filename = "{}_{}.csv".format(base_doc, post_id)
                # Creates one output file per line of input file.
                # Output file includes post id in name:
                # {clusterid}_{postid}.csv
                rel_path = os.path.relpath(root, sample_location)
                output_path = os.path.join(parsed_data_location, rel_path, output_filename)
                os.makedirs(os.path.dirname(output_path), exist_ok=True)
                with open(output_path, 'w') as token_file:
                    #print(parsed_data_location, rel_path, output_filename)
                    writer = csv.DictWriter(token_file, fieldnames)
                    writer.writeheader()
                    output_dict = temp_dict
                    for token in tokens:
                        output_dict['body_text'] = token
                        writer.writerow(output_dict)

# ## Reformat to match skip-thoughts code input
#
# `tokenized` is now a list of lists. Each inner list represents a document as a list of strings, where each string represents a sentence.
#
# ### An annoying issue
#
# The trainer expects a flat list of sentences. To match expectations, those inner brackets need to disappear.
#
# However, this then looks like we have one long document where the documents have been smashed together in arbitrary order. And the training will mistake the first sentence of one document as being part of the context of the last sentence of another. For sufficiently long documents, you can argue this is just noise.
# For documents that are themselves only a few sentences, this seems like too much noise.
#
# My kludgy fix is to introduce a sentence consisting of a single null character `'\0'` and add this sentence between every document when concatenating. This may have unintended side-effects.
#
# As above, this notebook doesn't depend on much memory. The next cell does not assume you have `tokenized` stored and thus asks you to read it back in. I found this more convenient in the end.
#
# The cell depends on previously defined variables `parsed_data_location` and `training_data_location` for input and output respectively.

doc_separator = '\0'

# This cell does three things:
# 1. Writes sentences to a text file, one line per sentence, with the null character separating documents.
# 2. Stores all sentences into a list.
# 3. Stores the cluster_ids into a numpy array. Each sentence gets the cluster_id of its post,
#    so the list and the numpy array are the same length.
sentences = []
cluster_ids = []
with open(training_data_location, 'w') as outfile:
    for root, dirs, files in os.walk(parsed_data_location):
        for doc in files:
            if doc.endswith('.csv'):
                for line in csv.DictReader(open(os.path.join(root, doc))):
                    outfile.write(line['body_text'] + '\n')
                    sentences.append(line['body_text'])
                    cluster_ids.append(int(line['cluster_id']))
                outfile.write(doc_separator + '\n')
                sentences.append(doc_separator)  # keep the list aligned with the file and the ids
                cluster_ids.append(-1)
cluster_ids = numpy.array(cluster_ids)

# ## Build the skip-thoughts training dictionaries
#
# These are pretty basic things about the whole corpus required by the skip-thoughts code.
#
# wordcount is a dictionary of word counts, keyed in the order the words appear in the sentences. worddict maps the same words to their rank in the count, ordered by rank.
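# For intuition, here is a minimal sketch of what `vocab.build_dictionary` produces. The real implementation lives in the skip-thoughts training code; the detail that ranks start at 2 (with 0 and 1 reserved for end-of-sentence and unknown words) is an assumption based on how the trainer uses the dictionary.

```python
from collections import OrderedDict

def build_dictionary_sketch(sentences):
    # wordcount: occurrence counts, keyed in order of first appearance.
    wordcount = OrderedDict()
    for sentence in sentences:
        for word in sentence.split():
            wordcount[word] = wordcount.get(word, 0) + 1
    # worddict: words mapped to frequency rank. sorted() is stable,
    # so ties keep first-appearance order. Ranks start at 2 because
    # 0 and 1 are assumed reserved by the trainer.
    ranked = sorted(wordcount, key=wordcount.get, reverse=True)
    worddict = OrderedDict((w, i + 2) for i, w in enumerate(ranked))
    return worddict, wordcount

wd, wc = build_dictionary_sketch(["the cat sat", "the cat"])
print(dict(wc))   # {'the': 2, 'cat': 2, 'sat': 1}
print(dict(wd))   # {'the': 2, 'cat': 3, 'sat': 4}
```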
# Can skip this cell if sentences is still in memory.
sentences = [x.strip() for x in open(training_data_location).readlines()]
len(sentences)

# wordcount: the count of words, ordered by appearance in the text
# worddict: the same words mapped to their frequency rank
worddict, wordcount = vocab.build_dictionary(sentences)
vocab.save_dictionary(worddict, wordcount, vocab_location)

# ## Training a model
#
# #### First set parameters
#
# Definitely set:
# * saveto: a path where the model will be periodically saved
# * dictionary: where the dictionary is.
#
# Both of these should have been previously set as `model_location` and `vocab_location` respectively.
#
# Consider tuning:
# * dim_word: the dimensionality of the RNN word embeddings (Default 620)
# * dim: the size of the hidden state (Default 2400)
# * max_epochs: the total number of training epochs (Default 5)
#
# * decay_c: weight decay hyperparameter (Default 0, i.e. ignored)
# * grad_clip: gradient clipping hyperparameter (Default 5)
# * n_words: the size of the decoder vocabulary (Default 20000)
# * maxlen_w: the max number of words per sentence. Sentences longer than this will be ignored (Default 30)
# * batch_size: size of each training minibatch (roughly) (Default 64)
# * saveFreq: save the model after this many weight updates (Default 1000)
#
# Other options:
# * displayFreq: display progress after this many weight updates (Default 1)
# * reload_: whether to reload a previously saved model (Default False)
#
# #### Some observations on parameters
#
# The default displayFreq is 1, which seems low: it means every iteration prints something. That seems excessive. I suggest 100.
#
# As long as the computer can handle it in memory, a bigger batch size seems better all around. I am trying 256.
#
# A good chunk of stackexchange sentences seemed to be at least 30 tokens long. I am changing that setting to 40.
# Using a small set of parameters for testing
params = dict(
    saveto = model_location,
    dictionary = vocab_location,
    n_words = 1000,
    dim_word = 100,
    dim = 500,
    max_epochs = 1,
    saveFreq = 100,
)

train.trainer(sentences, **params)

# ## Encoding sentences
#
# The model created doesn't quite fit into the pipeline, because it is a "uni-skip" model, not a "combine-skip" model. The pipeline uses skipthoughts.encode, which requires very particularly formatted models.
#
# The model built above instead works with the encode function found in the tools module.
#
# Except that this function often breaks.
#
# I have not trained a "combine-skip" model. The model here is the equivalent of `utable.npy`.
#
# One would still need to train a `btable.npy` equivalent. A btable is created by training a model with half the dimension, reversing the sentences and training again, then concatenating the two models into btable. I have not done this and may be missing some subtlety.

# This cell requires hardcoded paths in tools.py to be changed. It should perhaps also be fixed to not depend
# on hardcoded paths.
embed_map = tools.load_googlenews_vectors(path_to_word2vec)
model = tools.load_model(embed_map)

# Having a lot of trouble getting this line to not crash. It causes a "floating point exception".
tools.encode(model, sentences)

# ## How to evaluate?
#
# Supervised task. Apply cluster_id as a label to each sentence. Run a regression. Evaluate performance.
#
# Since there is so much stackexchange data, a random sample may be sufficient. So choose a percentage of the data to sample from. That sample will then get divided into training and testing.

# +
evaluation_percent = 1  # Choose a subsample of the data
holdout_percent = 0.5   # Of that subsample, make this amount training data
                        # and the rest testing data

# e.g. 1,000,000 sentences. evaluation_percent = 0.1, holdout_percent = 0.8 --
# Choose 100,000 sentences.
# Then choose 80,000 of those for training
# and 20,000 of those for testing.
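# As a quick arithmetic check of the worked example in the comments above (the numbers are hypothetical, not the real corpus size; names are underscored to avoid clobbering the notebook's variables):

```python
n_total = 1_000_000                   # hypothetical corpus size
eval_pct, train_pct = 0.1, 0.8        # the example percentages above

n_sample = int(eval_pct * n_total)    # sentences sampled for evaluation
n_train = int(train_pct * n_sample)   # of those, used for training
n_test = n_sample - n_train           # remainder used for testing
print(n_sample, n_train, n_test)      # 100000 80000 20000
```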
# -

# Read in the sentences if they are not already in memory.
sentences = [x.strip() for x in open(training_data_location).readlines()]
num_sentences = len(sentences)

# Read in cluster ids, again if not already in memory.
cluster_ids = []
for root, dirs, files in os.walk(parsed_data_location):
    for doc in files:
        if doc.endswith('.csv'):
            for line in csv.DictReader(open(os.path.join(root, doc))):
                cluster_ids.append(int(line['cluster_id']))
            cluster_ids.append(-1)
cluster_ids = numpy.array(cluster_ids)

# Sanity check. Should be true.
num_sentences == len(cluster_ids)

# Sample a percentage of your data, specified above as evaluation_percent.
indices = numpy.arange(num_sentences)
num_samples = int(evaluation_percent * num_sentences)
index_sample = numpy.sort(numpy.random.choice(indices, size=num_samples, replace=False))
sample_sentences = [sentences[i] for i in index_sample]
sample_clusters = cluster_ids[index_sample]

# Broken!!!
# This section requires the encodings of the previous section. But...
#encodings = tools.encode(model, sample_sentences)
encodings = numpy.random.rand(num_samples, 10)
# Since I can't get encodings to actually work, have some random numbers.

# From this point forward, the code is not well-tested because I couldn't get the encode function to work.
# Note: holdout_percent is the *training* fraction, so pass it as train_size,
# not test_size (with 0.5 the two happen to coincide).
encoding_train, encoding_test, cluster_train, cluster_test = train_test_split(
    encodings, sample_clusters, train_size=holdout_percent)

regression = LinearRegression()
regression.fit(encoding_train, cluster_train)
regression.predict(encoding_test)
regression.score(encoding_test, cluster_test)

# ## The end.
#
# This is the end of the notebook. Below is an alternative approach. Not as well tested.

# ## An in-memory approach.
#
# Because everything kept crashing on me, I ultimately found it most convenient to do everything in a streaming fashion with a lot of writing to disk at every stage. This is obviously slower than desirable.
# Basically I do a thing, write out the results, read the results back in, then do the next thing.
#
# Below is an in-memory approach that reads everything into memory and pushes forward, still sometimes saving key steps to disk, but without any rereading. Because of various technical issues, this code has never been tested at scale. It works on the anime dataset.

doc_dicts = [json.loads(line)
             for root, dirs, files in os.walk(sample_location)
             for doc in files if doc.endswith('.json')
             for line in open(os.path.join(root, doc))]
# doc_dicts is a list of dictionaries, each containing document data.
# In the anime sample, the text is labeled 'body_text'.
# There is a field cluster_id which we will use as the categorical label.

cluster_ids = [d['cluster_id'] for d in doc_dicts]
docs = [d['body_text'] for d in doc_dicts]
del(doc_dicts)  # For efficiency

# Make a list of sentences for each doc
sentenced = [tokenize.punkt_sentences(doc) for doc in docs]

# Normalize each sentence
normalized = [[normalize.xml_normalize(sentence) for sentence in doc] for doc in sentenced]
del(sentenced)  # If you're done with it

# Tokenize each sentence
tokenized = [[' '.join(tokenize.word_punct_tokens(sentence)) for sentence in doc] for doc in normalized]

# Flatten, with a null-character "sentence" between documents.
separated = sum(zip(tokenized, [[doc_separator]] * len(tokenized)), tuple())
sentences = sum(separated, [])

# This leaves you with the sentences object in memory, leaving you ready to build the skip-thoughts training dictionaries.
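# The `sum(zip(...))` flattening trick above is compact but opaque. A toy example of what it does, using fresh variable names so it doesn't clobber the real data:

```python
sep = '\0'
toy_docs = [["a b", "c d"], ["e f"]]   # two toy "documents" of tokenized sentences

# Pair each document with a one-sentence separator "document", then flatten
# twice: once over the (doc, separator) pairs, once over the sentence lists.
toy_pairs = sum(zip(toy_docs, [[sep]] * len(toy_docs)), tuple())
toy_sentences = sum(toy_pairs, [])
print(toy_sentences)   # ['a b', 'c d', '\x00', 'e f', '\x00']
```

# Note that a trailing separator is left after the last document; the trainer sees it as one more (meaningless) one-token sentence.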
# Source file: src/examples/train-skip-thoughts.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # 02 - Introduction to Python for Data Analysis
#
# by [<NAME>](albahnsen.com/)
#
# version 0.2, May 2016
#
# ## Part of the class [Machine Learning for Risk Management](https://github.com/albahnsen/ML_RiskManagement)
#
# This notebook is licensed under a [Creative Commons Attribution-ShareAlike 3.0 Unported License](http://creativecommons.org/licenses/by-sa/3.0/deed.en_US). Special thanks goes to [<NAME>er](http://www.cs.sandia.gov/~rmuller/), Sandia National Laboratories

# ## Why Python?
# Python is the programming language of choice for many scientists, to a large degree because it offers a great deal of power for analyzing and modeling scientific data with relatively little overhead in terms of learning, installation, or development time. It is a language you can pick up in a weekend and use for the rest of your life.
#
# The [Python Tutorial](http://docs.python.org/3/tutorial/) is a great place to start getting a feel for the language. To complement this material, I taught a [Python Short Course](http://www.wag.caltech.edu/home/rpm/python_course/) years ago to a group of computational chemists, during a time when I was worried the field was moving too much in the direction of using canned software rather than developing one's own methods. I wanted to focus on what working scientists needed to be more productive: parsing output of other programs, building simple models, experimenting with object oriented programming, extending the language with C, and simple GUIs.
#
# I'm trying to do something very similar here, to cut to the chase and focus on what scientists need. In the last year or so, the [Jupyter Project](http://jupyter.org) has put together a notebook interface that I have found incredibly valuable.
# A large number of people have released very good IPython Notebooks that I have taken a huge amount of pleasure reading through. Some that I particularly like include:
#
# * <NAME>'s [A Crash Course in Python for Scientists](http://nbviewer.jupyter.org/gist/rpmuller/5920182)
# * <NAME>'s [excellent notebooks](http://jrjohansson.github.io/), including [Scientific Computing with Python](https://github.com/jrjohansson/scientific-python-lectures) and [Computational Quantum Physics with QuTiP](https://github.com/jrjohansson/qutip-lectures) lectures;
# * [XKCD style graphs in matplotlib](http://nbviewer.ipython.org/url/jakevdp.github.com/downloads/notebooks/XKCD_plots.ipynb);
# * [A collection of Notebooks for using IPython effectively](https://github.com/ipython/ipython/tree/master/examples/notebooks#a-collection-of-notebooks-for-using-ipython-effectively)
# * [A gallery of interesting IPython Notebooks](https://github.com/ipython/ipython/wiki/A-gallery-of-interesting-IPython-Notebooks)
#
# I find Jupyter notebooks an easy way both to get important work done in my everyday job and to communicate what I've done, how I've done it, and why it matters to my coworkers. In the interest of putting more notebooks out into the wild for other people to use and enjoy, I thought I would try to recreate some of what I was trying to get across in the original Python Short Course, updated by 15 years of Python, Numpy, Scipy, Pandas, Matplotlib, and IPython development, as well as my own experience using Python almost every day during this time.

# ## Why Python for Data Analysis?
#
# - Python is great for scripting and applications.
# - The `pandas` library offers improved library support.
# - Scraping, web APIs
# - Strong high-performance computation support
#   - Load balancing tasks
#   - MPI, GPU
#   - MapReduce
# - Strong support for abstraction
#   - Intel MKL
#   - HDF5
# - Environment

# ## But we already know R
#
# ...Which is better?
# Hard to answer.
#
# http://www.kdnuggets.com/2015/05/r-vs-python-data-science.html
#
# http://www.kdnuggets.com/2015/03/the-grammar-data-science-python-vs-r.html
#
# https://www.datacamp.com/community/tutorials/r-or-python-for-data-analysis
#
# https://www.dataquest.io/blog/python-vs-r/
#
# http://www.dataschool.io/python-or-r-for-data-science/

# ## What You Need to Install
#
# There are two branches of current releases in Python: the older-syntax Python 2, and the newer-syntax Python 3. This split is largely intentional: when it became clear that some non-backwards-compatible changes to the language were necessary, the Python dev-team decided to go through a five-year (or so) transition, during which the new language features would be introduced and the old language would still be actively maintained, to make such a transition as easy as possible.
#
# Nonetheless, I'm going to write these notes with Python 3 in mind, since this is the version of the language that I use in my day-to-day job and am most comfortable with.
#
# With this in mind, these notes assume you have a Python distribution that includes:
#
# * [Python](http://www.python.org) version 3.5;
# * [Numpy](http://www.numpy.org), the core numerical extensions for linear algebra and multidimensional arrays;
# * [Scipy](http://www.scipy.org), additional libraries for scientific programming;
# * [Matplotlib](http://matplotlib.sf.net), excellent plotting and graphing libraries;
# * [IPython](http://ipython.org), with the additional libraries required for the notebook interface;
# * [Pandas](http://pandas.pydata.org/), a Python version of the R dataframe;
# * [scikit-learn](http://scikit-learn.org), a machine learning library!
#
# A good, easy-to-install option that supports Mac, Windows, and Linux, and that has all of these packages (and much more), is [Anaconda](https://www.continuum.io/).
# ### Checking your installation
#
# You can run the following code to check the versions of the packages on your system:
#
# (in IPython notebook, press `shift` and `return` together to execute the contents of a cell)

# +
import sys
print('Python version:', sys.version)

import IPython
print('IPython:', IPython.__version__)

import numpy
print('numpy:', numpy.__version__)

import scipy
print('scipy:', scipy.__version__)

import matplotlib
print('matplotlib:', matplotlib.__version__)

import pandas
print('pandas:', pandas.__version__)

import sklearn
print('scikit-learn:', sklearn.__version__)
# -

# # I. Python Overview
# This is a quick introduction to Python. There are lots of other places to learn the language more thoroughly. I have collected a list of useful links, including ones to other learning resources, at the end of this notebook. If you want a little more depth, the [Python Tutorial](http://docs.python.org/2/tutorial/) is a great place to start, as is Zed Shaw's [Learn Python the Hard Way](http://learnpythonthehardway.org/book/).
#
# The lessons that follow make use of IPython notebooks. There's a good introduction to notebooks [in the IPython notebook documentation](http://ipython.org/notebook.html) that even has a [nice video](http://www.youtube.com/watch?v=H6dLGQw9yFQ#!) on how to use the notebooks. You should probably also flip through the [IPython tutorial](http://ipython.org/ipython-doc/dev/interactive/tutorial.html) in your copious free time.
#
# Briefly, notebooks have code cells (that are generally followed by result cells) and text cells. The text cells are the stuff that you're reading now. The code cells start with "In []:" with some number generally in the brackets. If you put your cursor in the code cell and hit Shift-Enter, the code will run in the Python interpreter and the result will print out in the output cell. You can then change things around and see whether you understand what's going on.
# If you need to know more, see the [IPython notebook documentation](http://ipython.org/notebook.html) or the [IPython tutorial](http://ipython.org/ipython-doc/dev/interactive/tutorial.html).

# ## Using Python as a Calculator
# Many of the things I used to use a calculator for, I now use Python for:

2+2

(50-5*6)/4

# (If you're typing this into an IPython notebook, or otherwise using a notebook file, you hit shift-Enter to evaluate a cell.)

# In the last few lines, we have sped by a lot of things that we should stop for a moment and explore a little more fully. We've seen, however briefly, two different data types: **integers**, also known as *whole numbers* to the non-programming world, and **floating point numbers**, also known (incorrectly) as *decimal numbers* to the rest of the world.
#
# We've also seen the first instance of an **import** statement. Python has a huge number of libraries included with the distribution. To keep things simple, most of these variables and functions are not accessible from a normal Python interactive session. Instead, you have to import the name. For example, there is a **math** module containing many useful functions. To access, say, the square root function, you can either first
#
#     from math import sqrt
#
# and then sqrt(81)

from math import sqrt
sqrt(81)

# or you can simply import the math library itself

import math
math.sqrt(81)

# You can define variables using the equals (=) sign:

radius = 20
pi = math.pi
area = pi * radius ** 2
area

# If you try to access a variable that you haven't yet defined, you get an error:

volume

# and you need to define it:

volume = 4/3*pi*radius**3
volume

# You can name a variable *almost* anything you want. It needs to start with an alphabetical character or "\_", and can contain alphanumeric characters plus underscores ("\_").
# Certain words, however, are reserved for the language:
#
#     and, as, assert, break, class, continue, def, del, elif, else, except,
#     exec, finally, for, from, global, if, import, in, is, lambda, not, or,
#     pass, print, raise, return, try, while, with, yield
#
# Trying to define a variable using one of these will result in a syntax error:

return = 0

# The [Python Tutorial](http://docs.python.org/2/tutorial/introduction.html#using-python-as-a-calculator) has more on using Python as an interactive shell. The [IPython tutorial](http://ipython.org/ipython-doc/dev/interactive/tutorial.html) makes a nice complement to this, since IPython has a much more sophisticated interactive shell.

# ## Strings
# Strings are sequences of printable characters, and can be defined using either single quotes

'Hello, World!'

# or double quotes

"Hello, World!"

# But not both at the same time, unless you want one of the symbols to be part of the string.

"He's a Rebel"

'She asked, "How are you today?"'

# Just like the other two data objects we're familiar with (ints and floats), you can assign a string to a variable

greeting = "Hello, World!"

# The **print** statement is often used for printing character strings:

print(greeting)

# But it can only concatenate strings with strings; printing a number alongside text requires converting the number first:

print("The area is " + area)        # raises TypeError: area must be converted to str

print("The area is " + str(area))

# In the above snippet, the number stored in the variable "area" is converted into a string before being printed out.

# You can use the + operator to concatenate strings together:

statement = "Hello," + "World!"
print(statement)

# Don't forget the space between the strings, if you want one there.

statement = "Hello, " + "World!"
print(statement)

# You can use + to concatenate multiple strings in a single statement:

print("This " + "is " + "a " + "longer " + "statement.")

# If you have a lot of words to concatenate together, there are other, more efficient ways to do this. But this is fine for linking a few strings together.
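# One of those more efficient ways (an addition here, not part of the original course text) is the string `join` method, which concatenates a list of strings with a chosen separator:

```python
words = ["This", "is", "a", "longer", "statement."]
sentence = " ".join(words)   # join the pieces with single spaces
print(sentence)              # This is a longer statement.
```

# Unlike repeated `+`, `join` builds the result in one pass, which matters when you have thousands of pieces.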
# ## Lists
# Very often in a programming language, one wants to keep a group of similar items together. Python does this using a data type called **lists**.

days_of_the_week = ["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"]

# You can access members of the list using the **index** of that item:

days_of_the_week[2]

# Python lists, like C, but unlike Fortran, use 0 as the index of the first element of a list. Thus, in this example, the 0 element is "Sunday", 1 is "Monday", and so on. If you need to access the *n*th element from the end of the list, you can use a negative index. For example, the -1 element of a list is the last element:

days_of_the_week[-1]

# You can add additional items to the list using the .append() method:

languages = ["Fortran","C","C++"]
languages.append("Python")
print(languages)

# The **range()** function is a convenient way to make sequential lists of numbers:

list(range(10))

# Note that range(n) starts at 0 and gives the sequential list of integers less than n. If you want to start at a different number, use range(start, stop)

list(range(2,8))

# The lists created above with range have a *step* of 1 between elements. You can also give a fixed step size via a third argument:

evens = list(range(0,20,2))
evens

evens[3]

# Lists do not have to hold the same data type. For example,

["Today",7,99.3,""]

# However, it's good (but not essential) to use lists for similar objects that are somehow logically connected. If you want to group different data types together into a composite data object, it's best to use **tuples**, which we will learn about below.
#
# You can find out how long a list is using the **len()** function:

help(len)

len(evens)

# ## Iteration, Indentation, and Blocks
# One of the most useful things you can do with lists is to *iterate* through them, i.e. to go through each element one at a time.
# To do this in Python, we use the **for** statement:

for day in days_of_the_week:
    print(day)

# This code snippet goes through each element of the list called **days_of_the_week** and assigns it to the variable **day**. It then executes everything in the indented block (in this case only one line of code, the print statement) using those variable assignments. When the program has gone through every element of the list, it exits the block.
#
# (Almost) every programming language defines blocks of code in some way. In Fortran, one uses END statements (ENDDO, ENDIF, etc.) to define code blocks. In C, C++, and Perl, one uses curly braces {} to define these blocks.
#
# Python uses a colon (":"), followed by indentation level, to define code blocks. Everything at a higher level of indentation is taken to be in the same block. In the above example the block was only a single line, but we could have had longer blocks as well:

for day in days_of_the_week:
    statement = "Today is " + day
    print(statement)

# The **range()** function is particularly useful with the **for** statement to execute loops of a specified length:

for i in range(20):
    print("The square of ",i," is ",i*i)

# ## Slicing
# Lists and strings have something in common that you might not suspect: they can both be treated as sequences. You already know that you can iterate through the elements of a list. You can also iterate through the letters in a string:

for letter in "Sunday":
    print(letter)

# This is only occasionally useful. Slightly more useful is the *slicing* operation, which you can also use on any sequence.
# We already know that we can use *indexing* to get the first element of a list:

days_of_the_week[0]

# If we want the list containing the first two elements of a list, we can do this via

days_of_the_week[0:2]

# or simply

days_of_the_week[:2]

# If we want the last items of the list, we can do this with negative slicing:

days_of_the_week[-2:]

# which is somewhat logically consistent with negative indices accessing the last elements of the list.
#
# You can do:

workdays = days_of_the_week[1:6]
print(workdays)

# Since strings are sequences, you can also do this to them:

today = "Sunday"
abbreviation = today[:3]
print(abbreviation)

# If we really want to get fancy, we can pass a third element into the slice, which specifies a step length (just like a third argument to the **range()** function specifies the step):

numbers = list(range(0,40))
evens = numbers[2::2]
evens

# Note that in this example I was even able to omit the second argument, so that the slice started at 2, went to the end of the list, and took every second element, to generate the list of even numbers less than 40.

# ## Booleans and Truth Testing
# We have now learned a few data types. We have integers and floating point numbers, strings, and lists to contain them. We have learned to print things out, and to iterate over items in lists. We will now learn about **boolean** variables that can be either True or False.
#
# We invariably need some concept of *conditions* in programming to control branching behavior, to allow a program to react differently to different situations. If it's Monday, I'll go to work, but if it's Sunday, I'll sleep in. To do this in Python, we use a combination of **boolean** variables, which evaluate to either True or False, and **if** statements, that control branching based on boolean values.
# For example:

if day == "Sunday":
    print("Sleep in")
else:
    print("Go to work")

# (Quick quiz: why did the snippet print "Go to work" here? What is the variable "day" set to?)
#
# Let's take the snippet apart to see what happened. First, note the statement

day == "Sunday"

# If we evaluate it by itself, as we just did, we see that it returns a boolean value, False. The "==" operator performs *equality testing*. If the two items are equal, it returns True, otherwise it returns False. In this case, it is comparing two variables, the string "Sunday", and whatever is stored in the variable "day", which, in this case, is the other string "Saturday". Since the two strings are not equal to each other, the truth test has the false value.

# The if statement that contains the truth test is followed by a code block (a colon followed by an indented block of code). If the boolean is true, it executes the code in that block. Since it is false in the above example, we don't see that code executed.
#
# The first block of code is followed by an **else** statement, which is executed if nothing else in the above if statement is true. Since the value was false, this code is executed, which is why we see "Go to work".
#
# You can compare any data types in Python:

1 == 2

50 == 2*25

3 < 3.14159

1 == 1.0

1 != 0

1 <= 2

1 >= 1

# We see a few other boolean operators here, all of which should be self-explanatory. Less than, equality, non-equality, and so on.
#
# Particularly interesting is the 1 == 1.0 test, which is true, since even though the two objects are different data types (integer and floating point number), they have the same *value*.
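# One caveat about comparing values across numeric types (an aside of ours, not from the original notes): floating point numbers are stored with finite binary precision, so values that are mathematically equal can still compare unequal.

```python
# 0.1, 0.2, and 0.3 have no exact binary representation, so the sum
# picks up a tiny rounding error:
print(0.1 + 0.2 == 0.3)               # False
print(0.1 + 0.2)                      # 0.30000000000000004

# The usual fix is to compare against a small tolerance instead of using ==:
print(abs((0.1 + 0.2) - 0.3) < 1e-9)  # True
```

# This is why exact equality tests between computed floating point numbers are almost always a bug.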
There is another boolean operator **is**, that tests whether two objects are the same object: 1 is 1.0 # We can do boolean tests on lists as well: [1,2,3] == [1,2,4] [1,2,3] < [1,2,4] # Finally, note that you can also string multiple comparisons together, which can result in very intuitive tests: hours = 5 0 < hours < 24 # If statements can have **elif** parts ("else if"), in addition to if/else parts. For example: if day == "Sunday": print("Sleep in") elif day == "Saturday": print("Do chores") else: print("Go to work") # Of course we can combine if statements with for loops, to make a snippet that is almost interesting: for day in days_of_the_week: statement = "Today is " + day print(statement) if day == "Sunday": print(" Sleep in") elif day == "Saturday": print(" Do chores") else: print(" Go to work") # This is something of an advanced topic, but ordinary data types have boolean values associated with them, and, indeed, in early versions of Python there was not a separate boolean object. Essentially, anything that was a 0 value (the integer or floating point 0, an empty string "", or an empty list []) was False, and everything else was true. You can see the boolean value of any data object using the **bool()** function. bool(1) bool(0) bool(["This "," is "," a "," list"]) # ## Code Example: The Fibonacci Sequence # The [Fibonacci sequence](http://en.wikipedia.org/wiki/Fibonacci_number) is a sequence in math that starts with 0 and 1, and then each successive entry is the sum of the previous two. Thus, the sequence goes 0,1,1,2,3,5,8,13,21,34,55,89,... # # A very common exercise in programming books is to compute the Fibonacci sequence up to some number **n**. First I'll show the code, then I'll discuss what it is doing. n = 10 sequence = [0,1] for i in range(2,n): # This is going to be a problem if we ever set n <= 2! sequence.append(sequence[i-1]+sequence[i-2]) print(sequence) # Let's go through this line by line. 
# First, we define the variable **n**, and set it to the integer 10. **n** is the length of the sequence we're going to form, and should probably have a better variable name. We then create a variable called **sequence**, and initialize it to the list with the integers 0 and 1 in it, the first two elements of the Fibonacci sequence. We have to create these elements "by hand", since the iterative part of the sequence requires two previous elements.
#
# We then have a for loop over the list of integers from 2 (the next element of the list) to **n** (the length of the sequence). After the colon, we see a hash tag "#", and then a **comment** that if we had set **n** to some number less than 2 we would have a problem. Comments in Python start with #, and are good ways to make notes to yourself or to a user of your code explaining why you did what you did. Better than the comment here would be to test to make sure the value of **n** is valid, and to complain if it isn't; we'll try this later.
#
# In the body of the loop, we append to the list an integer equal to the sum of the two previous elements of the list.
#
# After exiting the loop (ending the indentation) we then print out the whole list. That's it!

# ## Functions
# We might want to use the Fibonacci snippet with different sequence lengths. We could cut and paste the code into another cell, changing the value of **n**, but it's easier and more useful to make a function out of the code.
# We do this with the **def** statement in Python:

def fibonacci(sequence_length):
    "Return the Fibonacci sequence of length *sequence_length*"
    sequence = [0,1]
    if sequence_length < 1:
        print("Fibonacci sequence only defined for length 1 or greater")
        return
    if 0 < sequence_length < 3:
        return sequence[:sequence_length]
    for i in range(2,sequence_length):
        sequence.append(sequence[i-1]+sequence[i-2])
    return sequence

# We can now call **fibonacci()** for different sequence_lengths:

fibonacci(2)

fibonacci(12)

# We've introduced several new features here. First, note that the function itself is defined as a code block (a colon followed by an indented block). This is the standard way that Python delimits things. Next, note that the first line of the function is a single string. This is called a **docstring**, and is a special kind of comment that is often available to people using the function through the python command line:

help(fibonacci)

# If you define a docstring for all of your functions, it makes it easier for other people to use them, since they can get help on the arguments and return values of the function.
#
# Next, note that rather than putting a comment in about what input values lead to errors, we have some testing of these values, followed by a warning if the value is invalid, and some conditional code to handle special cases.

# ## Two More Data Structures: Tuples and Dictionaries
# Before we end the Python overview, I wanted to touch on two more data structures that are very useful (and thus very common) in Python programs.
#
# A **tuple** is a sequence object like a list or a string.
# It's constructed by grouping a sequence of objects together with commas, either without brackets, or with parentheses:

t = (1,2,'hi',9.0)
t

# Tuples are like lists, in that you can access the elements using indices:

t[1]

# However, tuples are *immutable*, you can't append to them or change the elements of them:

t.append(7)

t[1]=77

# Tuples are useful anytime you want to group different pieces of data together in an object, but don't want to create a full-fledged class (see below) for them. For example, let's say you want the Cartesian coordinates of some objects in your program. Tuples are a good way to do this:

('Bob',0.0,21.0)

# Again, it's not a necessary distinction, but one way to distinguish tuples and lists is that tuples are a collection of different things, here a name, and x and y coordinates, whereas a list is a collection of similar things, like if we wanted a list of those coordinates:

positions = [
    ('Bob',0.0,21.0),
    ('Cat',2.5,13.1),
    ('Dog',33.0,1.2)
]

# Tuples can be used when functions return more than one value. Say we wanted to compute the smallest x- and y-coordinates of the above list of objects. We could write:

# +
def minmax(objects):
    minx = 1e20 # These are set to really big numbers
    miny = 1e20
    for obj in objects:
        name,x,y = obj
        if x < minx:
            minx = x
        if y < miny:
            miny = y
    return minx,miny

x,y = minmax(positions)
print(x,y)
# -

# **Dictionaries** are objects called "mappings" or "associative arrays" in other languages. Whereas a list associates an integer index with a set of objects:

mylist = [1,2,9,21]

# The index in a dictionary is called the *key*, and the corresponding dictionary entry is the *value*. A dictionary can use (almost) anything as the key. Whereas lists are formed with square brackets [], dictionaries use curly brackets {}:

ages = {"Rick": 46, "Bob": 86, "Fred": 21}
print("Rick's age is ",ages["Rick"])

# There's also a convenient way to create dictionaries without having to quote the keys.
dict(Rick=46,Bob=86,Fred=21)

# The **len()** command works on both tuples and dictionaries:

len(t)

len(ages)

# ## Conclusion of the Python Overview
# There is, of course, much more to the language than I've covered here. I've tried to keep this brief enough so that you can jump in and start using Python to simplify your life and work. My own experience in learning new things is that the information doesn't "stick" unless you try and use it for something in real life.
#
# You will no doubt need to learn more as you go. I've listed several other good references, including the [Python Tutorial](http://docs.python.org/2/tutorial/) and [Learn Python the Hard Way](http://learnpythonthehardway.org/book/). Additionally, now is a good time to start familiarizing yourself with the [Python Documentation](http://docs.python.org/2.7/), and, in particular, the [Python Language Reference](http://docs.python.org/2.7/reference/index.html).
#
# <NAME>, one of the earliest and most prolific Python contributors, wrote the "Zen of Python", which can be accessed via the "import this" command:

import this

# No matter how experienced a programmer you are, these are words to meditate on.

# # II. Numpy and Scipy
#
# [Numpy](http://numpy.org) contains core routines for doing fast vector, matrix, and linear algebra-type operations in Python. [Scipy](http://scipy.org/) contains additional routines for optimization, special functions, and so on. Both contain modules written in C and Fortran so that they're as fast as possible. Together, they give Python roughly the same capability that the [Matlab](http://www.mathworks.com/products/matlab/) program offers. (In fact, if you're an experienced Matlab user, there's a [guide to Numpy for Matlab users](http://www.scipy.org/NumPy_for_Matlab_Users) just for you.)

# ## Making vectors and matrices
# Fundamental to both Numpy and Scipy is the ability to work with vectors and matrices.
# You can create vectors from lists using the **array** command:

import numpy as np
import scipy as sp

array = np.array([1,2,3,4,5,6])
array

# size of the array
array.shape

# To build matrices, you can either use the array command with lists of lists:

mat = np.array([[0,1],[1,0]])
mat

# Add a column of ones to mat
mat2 = np.c_[mat, np.ones(2)]
mat2

# size of a matrix
mat2.shape

# You can also form empty (zero) matrices of arbitrary shape (including vectors, which Numpy treats as matrices with one row), using the **zeros** command:

np.zeros((3,3))

# There's also an **identity** command that behaves as you'd expect:

np.identity(4)

# as well as a **ones** command.

# ## Linspace, matrix functions, and plotting
# The **linspace** command makes a linear array of points from a starting to an ending value.

np.linspace(0,1)

# If you provide a third argument, it takes that as the number of points in the space. If you don't provide the argument, it gives a length 50 linear space.

np.linspace(0,1,11)

# **linspace** is an easy way to make coordinates for plotting. Functions in the numpy library (accessed here through the np prefix) can act on an entire vector (or even a matrix) of points at once. Thus,

x = np.linspace(0,2*np.pi)
np.sin(x)

# In conjunction with **matplotlib**, this is a nice way to plot things:

# %matplotlib inline
import matplotlib.pyplot as plt
plt.plot(x,np.sin(x))

# ## Matrix operations
# Matrix objects act sensibly when multiplied by scalars:

0.125*np.identity(3)

# as well as when you add two matrices together. (However, the matrices have to be the same shape.)
np.identity(2) + np.array([[1,1],[1,2]])

# Something that confuses Matlab users is that the times (*) operator gives element-wise multiplication rather than matrix multiplication:

np.identity(2)*np.ones((2,2))

# To get matrix multiplication, you need the **dot** command:

np.dot(np.identity(2),np.ones((2,2)))

# **dot** can also do dot products (duh!):

v = np.array([3,4])
np.sqrt(np.dot(v,v))

# as well as matrix-vector products.

# There are **determinant**, **inverse**, and **transpose** functions that act as you would suppose. Transpose can be abbreviated with ".T" at the end of a matrix object:

m = np.array([[1,2],[3,4]])
m.T

np.linalg.inv(m)

# There's also a **diag()** function that takes a list or a vector and puts it along the diagonal of a square matrix.

np.diag([1,2,3,4,5])

# We'll find this useful later on.

# ## Least squares fitting
# Very often we deal with some data that we want to fit to some sort of expected behavior. Say we have the following:

raw_data = """\
3.1905781584582433,0.028208609537968457
4.346895074946466,0.007160804747670053
5.374732334047101,0.0046962988461934805
8.201284796573875,0.0004614473299618756
10.899357601713055,0.00005038370219939726
16.295503211991434,4.377451812785309e-7
21.82012847965739,3.0799922117601088e-9
32.48394004282656,1.524776208284536e-13
43.53319057815846,5.5012073588707224e-18"""

# There's a section below on parsing CSV data. We'll steal the parser from that. For an explanation, skip ahead to that section. Otherwise, just assume that this is a way to parse that text into a numpy array that we can plot and do other analyses with.

data = []
for line in raw_data.splitlines():
    words = line.split(',')
    data.append(words)
data = np.array(data, dtype=float)  # np.float was removed in newer numpy versions; plain float works everywhere
data

data[:, 0]

plt.title("Raw Data")
plt.xlabel("Distance")
plt.plot(data[:,0],data[:,1],'bo')

# Since we expect the data to have an exponential decay, we can plot it using a semi-log plot.
plt.title("Raw Data")
plt.xlabel("Distance")
plt.semilogy(data[:,0],data[:,1],'bo')

# For a pure exponential decay like this, we can fit the log of the data to a straight line. The above plot suggests this is a good approximation. Given a function
# $$ y = Ae^{-ax} $$
# $$ \log(y) = \log(A) - ax$$
# Thus, if we fit the log of the data versus x, we should get a straight line with slope $-a$, and an intercept that gives the constant $A$.
#
# There's a numpy function called **polyfit** that will fit data to a polynomial form. We'll use this to fit to a straight line (a polynomial of order 1)

params = np.polyfit(data[:,0],np.log(data[:,1]),1)  # polyfit now lives in numpy; sp.polyfit was removed from scipy
a = params[0]
A = np.exp(params[1])

# Let's see whether this curve fits the data.

x = np.linspace(1,45)
plt.title("Raw Data")
plt.xlabel("Distance")
plt.semilogy(data[:,0],data[:,1],'bo')
plt.semilogy(x,A*np.exp(a*x),'b-')

# If we have more complicated functions, we may not be able to get away with fitting to a simple polynomial. Consider the following data:

# +
gauss_data = """\
-0.9902286902286903,1.4065274110372852e-19
-0.7566104566104566,2.2504438576596563e-18
-0.5117810117810118,1.9459459459459454
-0.31887271887271884,10.621621621621626
-0.250997150997151,15.891891891891893
-0.1463309463309464,23.756756756756754
-0.07267267267267263,28.135135135135133
-0.04426734426734419,29.02702702702703
-0.0015939015939017698,29.675675675675677
0.04689304689304685,29.10810810810811
0.0840994840994842,27.324324324324326
0.1700546700546699,22.216216216216214
0.370878570878571,7.540540540540545
0.5338338338338338,1.621621621621618
0.722014322014322,0.08108108108108068
0.9926849926849926,-0.08108108108108646"""

data = []
for line in gauss_data.splitlines():
    words = line.split(',')
    data.append(words)
data = np.array(data, dtype=float)  # again, plain float instead of the removed np.float

plt.plot(data[:,0],data[:,1],'bo')
# -

# This data looks more Gaussian than exponential.
If we wanted to, we could use **polyfit** for this as well, but let's use the **curve_fit** function from Scipy, which can fit to arbitrary functions. You can learn more using help(curve_fit). # # First define a general Gaussian function to fit to. def gauss(x,A,a): return A*np.exp(a*x**2) # Now fit to it using **curve_fit**: # + from scipy.optimize import curve_fit params,conv = curve_fit(gauss,data[:,0],data[:,1]) x = np.linspace(-1,1) plt.plot(data[:,0],data[:,1],'bo') A,a = params plt.plot(x,gauss(x,A,a),'b-') # - # The **curve_fit** routine we just used is built on top of a very good general **minimization** capability in Scipy. You can learn more [at the scipy documentation pages](http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html). # ## Monte Carlo and random numbers # Many methods in scientific computing rely on Monte Carlo integration, where a sequence of (pseudo) random numbers are used to approximate the integral of a function. Python has good random number generators in the standard library. The **random()** function gives pseudorandom numbers uniformly distributed between 0 and 1: from random import random rands = [] for i in range(100): rands.append(random()) plt.plot(rands) # **random()** uses the [Mersenne Twister](http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/emt.html) algorithm, which is a highly regarded pseudorandom number generator. There are also functions to generate random integers, to randomly shuffle a list, and functions to pick random numbers from a particular distribution, like the normal distribution: from random import gauss grands = [] for i in range(100): grands.append(gauss(0,1)) plt.plot(grands) # It is generally more efficient to generate a list of random numbers all at once, particularly if you're drawing from a non-uniform distribution. Numpy has functions to generate vectors and matrices of particular types of random distributions. 
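# As a quick illustration of that last point (an example of ours, not from the original notes), numpy can fill a whole matrix with draws from a chosen distribution in a single call:

```python
import numpy as np

# A 3x3 block of draws from a normal distribution with mean 0 and
# standard deviation 1, generated in one call rather than a Python loop:
samples = np.random.normal(loc=0.0, scale=1.0, size=(3, 3))
print(samples.shape)  # (3, 3)
```

# The `size` argument accepts any shape tuple, so the same call scales from a single number to a large matrix.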
plt.plot(np.random.rand(100)) # ## Slicing numpy arrays and matrices data.shape # Select second column data[:, 1] # Select the first 5 rows data[:5, :] # Select the second row and the last column data[1, -1] # # III. Intermediate Python # # ## Output Parsing # As more and more of our day-to-day work is being done on and through computers, we increasingly have output that one program writes, often in a text file, that we need to analyze in one way or another, and potentially feed that output into another file. # # Suppose we have the following output: myoutput = """\ @ Step Energy Delta E Gmax Grms Xrms Xmax Walltime @ ---- ---------------- -------- -------- -------- -------- -------- -------- @ 0 -6095.12544083 0.0D+00 0.03686 0.00936 0.00000 0.00000 1391.5 @ 1 -6095.25762870 -1.3D-01 0.00732 0.00168 0.32456 0.84140 10468.0 @ 2 -6095.26325979 -5.6D-03 0.00233 0.00056 0.06294 0.14009 11963.5 @ 3 -6095.26428124 -1.0D-03 0.00109 0.00024 0.03245 0.10269 13331.9 @ 4 -6095.26463203 -3.5D-04 0.00057 0.00013 0.02737 0.09112 14710.8 @ 5 -6095.26477615 -1.4D-04 0.00043 0.00009 0.02259 0.08615 20211.1 @ 6 -6095.26482624 -5.0D-05 0.00015 0.00002 0.00831 0.03147 21726.1 @ 7 -6095.26483584 -9.6D-06 0.00021 0.00004 0.01473 0.05265 24890.5 @ 8 -6095.26484405 -8.2D-06 0.00005 0.00001 0.00555 0.01929 26448.7 @ 9 -6095.26484599 -1.9D-06 0.00003 0.00001 0.00164 0.00564 27258.1 @ 10 -6095.26484676 -7.7D-07 0.00003 0.00001 0.00161 0.00553 28155.3 @ 11 -6095.26484693 -1.8D-07 0.00002 0.00000 0.00054 0.00151 28981.7 @ 11 -6095.26484693 -1.8D-07 0.00002 0.00000 0.00054 0.00151 28981.7""" # This output actually came from a geometry optimization of a Silicon cluster using the [NWChem](http://www.nwchem-sw.org/index.php/Main_Page) quantum chemistry suite. At every step the program computes the energy of the molecular geometry, and then changes the geometry to minimize the computed forces, until the energy converges. 
# I obtained this output via the unix command
#
#     % grep @ nwchem.out
#
# since NWChem is nice enough to precede the lines that you need to monitor job progress with the '@' symbol.
#
# We could do the entire analysis in Python; I'll show how to do this later on, but first let's focus on turning this output into a usable Python object that we can plot.
#
# First, note that the data is entered into a multi-line string. When Python sees three quote marks """ or ''' it treats everything following as part of a single string, including newlines, tabs, and anything else, until it sees the same three quote marks (""" has to be followed by another """, and ''' has to be followed by another ''') again. This is a convenient way to quickly dump data into Python, and it also reinforces the important idea that you don't have to open a file and deal with it one line at a time. You can read everything in, and deal with it as one big chunk.
#
# The first thing we'll do, though, is to split the big string into a list of strings, since each line corresponds to a separate piece of data. We will use the **splitlines()** function on the big myoutput string to break it into a new element every time it sees a newline (\n) character:

lines = myoutput.splitlines()
lines

# Splitting is a big concept in text processing. We used **splitlines()** here, and we will use the more general **split()** function below to split each line into whitespace-delimited words.
#
# We now want to do three things:
#
# * Skip over the lines that don't carry any information
# * Break apart each line that does carry information and grab the pieces we want
# * Turn the resulting data into something that we can plot.
#
# For this data, we really only want the Energy column, the Gmax column (which contains the maximum gradient at each step), and perhaps the Walltime column.
#
# Since the data is now in a list of lines, we can iterate over it:

for line in lines[2:]:
    # do something with each line
    words = line.split()

# Let's examine what we just did: first, we used a **for** loop to iterate over each line. However, we skipped the first two (the lines[2:] only takes the lines starting from index 2), since lines[0] contained the title information, and lines[1] contained the dashed separator.
#
# We then split each line into chunks (which we're calling "words", even though in most cases they're numbers) using the string **split()** command. Here's what split does:

lines[2].split()

# This is almost exactly what we want. We just have to now pick the fields we want:

for line in lines[2:]:
    # do something with each line
    words = line.split()
    energy = words[2]
    gmax = words[4]
    time = words[8]
    print(energy,gmax,time)

# This is fine for printing things out, but if we want to do something with the data, either make a calculation with it or pass it into a plotting routine, we need to convert the strings into regular floating point numbers. We can use the **float()** command for this. We also need to save it in some form.
# I'll do this as follows:

data = []
for line in lines[2:]:
    # do something with each line
    words = line.split()
    energy = float(words[2])
    gmax = float(words[4])
    time = float(words[8])
    data.append((energy,gmax,time))
data = np.array(data)

# We now have our data in a numpy array, so we can choose columns to print:

plt.plot(data[:,0])
plt.xlabel('step')
plt.ylabel('Energy (hartrees)')
plt.title('Convergence of NWChem geometry optimization for Si cluster')

energies = data[:,0]
minE = min(energies)
energies_eV = 27.211*(energies-minE)
plt.plot(energies_eV)
plt.xlabel('step')
plt.ylabel('Energy (eV)')
plt.title('Convergence of NWChem geometry optimization for Si cluster')

# This gives us the output in a form that we can think about: 4 eV is a fairly substantial energy change (chemical bonds are roughly this magnitude of energy), and most of the energy decrease was obtained in the first geometry iteration.

# We mentioned earlier that we don't have to rely on **grep** to pull out the relevant lines for us. String objects have a lot of useful methods we can use for this. Among them is the **startswith** method. For example:

# +
lines = """\
                 ----------------------------------------
                 |  WALL  |       0.45   |     443.61   |
                 ----------------------------------------
@ Step       Energy      Delta E   Gmax     Grms     Xrms     Xmax   Walltime
@ ---- ---------------- -------- -------- -------- -------- -------- --------
@    0   -6095.12544083  0.0D+00  0.03686  0.00936  0.00000  0.00000   1391.5
    ok    ok
                         Z-matrix (autoz)
                         --------
""".splitlines()

for line in lines:
    if line.startswith('@'):
        print(line)
# -

# and we've successfully grabbed all of the lines that begin with the @ symbol.

# The real value in a language like Python is that it makes it easy to take additional steps to analyze data in this fashion, which means you are thinking more about your data, and are more likely to see important patterns.
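# To tie the pieces of this section together, here is one way (a sketch of ours, not from the original notes) to combine **startswith**, **split()**, and **float()** into a single reusable parser. The try/except guards against the header and separator rows, which also start with '@' but don't contain numbers:

```python
import numpy as np

def parse_progress(text):
    """Return an array of (energy, gmax, walltime) rows from '@'-flagged lines."""
    rows = []
    for line in text.splitlines():
        if not line.startswith('@'):
            continue                      # skip everything grep would have skipped
        words = line.split()
        try:
            rows.append((float(words[2]), float(words[4]), float(words[8])))
        except (IndexError, ValueError):
            continue                      # header and dashed separator lines aren't numeric
    return np.array(rows)

sample = """junk line
@ Step       Energy      Delta E   Gmax     Grms     Xrms     Xmax   Walltime
@ ---- ---------------- -------- -------- -------- -------- -------- --------
@    0   -6095.12544083  0.0D+00  0.03686  0.00936  0.00000  0.00000   1391.5"""

print(parse_progress(sample))  # one row of (energy, gmax, walltime)
```

# Wrapping the parsing in a function means the same code works whether the text came from a triple-quoted string, as here, or from reading a whole file at once.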
# ## Optional arguments
# You will recall that the **linspace** function can take either two arguments (for the starting and ending points):

np.linspace(0,1)

# or it can take three arguments, for the starting point, the ending point, and the number of points:

np.linspace(0,1,5)

# You can also pass in keywords to exclude the endpoint:

np.linspace(0,1,5,endpoint=False)

# Right now, we only know how to specify functions that have a fixed number of arguments. We'll learn how to do the more general cases here.
#
# If we're defining a simple version of linspace, we would start with:

def my_linspace(start,end):
    npoints = 50
    v = []
    d = (end-start)/float(npoints-1)
    for i in range(npoints):
        v.append(start + i*d)
    return v

my_linspace(0,1)

# We can add an optional argument by specifying a default value in the argument list:

def my_linspace(start,end,npoints = 50):
    v = []
    d = (end-start)/float(npoints-1)
    for i in range(npoints):
        v.append(start + i*d)
    return v

# This gives exactly the same result if we don't specify anything:

my_linspace(0,1)

# But it also lets us override the default value with a third argument:

my_linspace(0,1,5)

# We can add arbitrary keyword arguments to the function definition by putting a keyword argument \*\*kwargs handle in:

def my_linspace(start,end,npoints=50,**kwargs):
    endpoint = kwargs.get('endpoint',True)
    v = []
    if endpoint:
        d = (end-start)/float(npoints-1)
    else:
        d = (end-start)/float(npoints)
    for i in range(npoints):
        v.append(start + i*d)
    return v

my_linspace(0,1,5,endpoint=False)

# What the keyword argument construction does is to take any additional keyword arguments (i.e. arguments specified by name, like "endpoint=False"), and stick them into a dictionary called "kwargs" (you can call it anything you like, but it has to be preceded by two stars). You can then grab items out of the dictionary using the **get** command, which also lets you specify a default value.
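# A stripped-down example (ours, for illustration) may make the **kwargs pattern easier to see in isolation:

```python
def style(**kwargs):
    # Any keyword arguments land in the dict 'kwargs'; get() supplies defaults
    color = kwargs.get('color', 'black')
    width = kwargs.get('width', 1)
    return color, width

print(style())                        # ('black', 1)
print(style(color='red'))             # ('red', 1)
print(style(width=3, color='blue'))   # ('blue', 3)
```

# Note that the caller never has to know which keywords the function understands ahead of time; unknown ones are simply ignored here, which is both the convenience and the danger of this pattern.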
# I realize it takes a little getting used to, but it is a common construction in Python code, and you should be able to recognize it.
#
# There's an analogous \*args that dumps any additional arguments into a list called "args". Think about the **range** function: it can take one (the endpoint), two (starting and ending points), or three (starting, ending, and step) arguments. How would we define this?

def my_range(*args):
    start = 0
    step = 1
    if len(args) == 1:
        end = args[0]
    elif len(args) == 2:
        start,end = args
    elif len(args) == 3:
        start,end,step = args
    else:
        raise Exception("Unable to parse arguments")
    v = []
    value = start
    while True:
        if value >= end:  # test before appending, so the endpoint is excluded, just like range()
            break
        v.append(value)
        value += step
    return v

# Note that we have defined a few new things you haven't seen before: a **break** statement, that allows us to exit a loop if some conditions are met, and a **raise** statement, that causes the interpreter to exit with an error message. For example:

my_range()

# ## List Comprehensions and Generators
# List comprehensions are a streamlined way to make lists. They look something like a list definition, with some logic thrown in. For example:

evens1 = [2*i for i in range(10)]
print(evens1)

# You can also put some boolean testing into the construct:

odds = [i for i in range(20) if i%2==1]
odds

# Here i%2 is the remainder when i is divided by 2, so that i%2==1 is true if the number is odd. Even though this is a relatively new addition to the language, it is now fairly common since it's so convenient.

# **iterators** are a way of making virtual sequence objects. Consider if we had the nested loop structure:
#
#     for i in range(1000000):
#         for j in range(1000000):
#
# Inside the main loop, we make a list of 1,000,000 integers, just to loop over them one at a time. We don't need any of the additional things that a list gives us, like slicing or random access, we just need to go through the numbers one at a time. And we're making 1,000,000 of them.
#
# **iterators** are a way around this. In Python 2, the **xrange** function was the iterator version of range: it simply makes a counter that is looped through in sequence, without ever building the full list. In Python 3, which we are using here, **range** itself already behaves this way, so the loop
#
#     for i in range(1000000):
#         for j in range(1000000):
#
# never builds a million-element list; if you actually want the list, you say list(range(1000000)) explicitly, as we did earlier.
#
# We can define our own iterators using the **yield** statement:

# +
def evens_below(n):
    for i in range(n):
        if i%2 == 0:
            yield i
    return

for i in evens_below(9):
    print(i)
# -

# We can always turn an iterator into a list using the **list** command:

list(evens_below(9))

# There's a special syntax called a **generator expression** that looks a lot like a list comprehension:

evens_gen = (i for i in range(9) if i%2==0)
for i in evens_gen:
    print(i)
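# As a closing illustration (our example, not from the original notes), generator expressions shine when you only need the items once, for instance feeding them straight into **sum()**, because no intermediate list is ever built:

```python
import sys

# Sum of the first million squares, computed lazily, one term at a time:
total = sum(i*i for i in range(1_000_000))
print(total)

# The generator object itself is tiny, no matter how many items it will
# eventually yield, whereas even a 1,000-element list is far larger:
gen = (i*i for i in range(1_000_000))
lst = [i*i for i in range(1_000)]
print(sys.getsizeof(gen) < sys.getsizeof(lst))  # True
```

# The only trade-off is that a generator can be consumed once and does not support indexing or slicing; if you need those, build the list.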
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.8 ('base') # language: python # name: python3 # --- # # Network Concatenation # # The results of OPICS Networks/Circuits can be concatenated together to build large networks. In this notebook, we will explore this use-case in more-depth using the example of a two-stage lattice filter. import opics as op from opics.libraries import ebeam # Let's create a mach-zehnder interferometer circuit, and call it `stage_1`. # + circuit1 = op.Network(network_id="stage_1") circuit1.add_component(ebeam.BDC, component_id="bdc1") circuit1.add_component(ebeam.BDC, component_id="bdc2") circuit1.add_component(ebeam.Waveguide, params=dict( length=10e-6), component_id="wg1") circuit1.add_component(ebeam.Waveguide, params=dict( length=9.93e-6), component_id="wg2") circuit1.connect("bdc1", 2, "wg1", 0) circuit1.connect("bdc1", 3, "wg2", 0) circuit1.connect("bdc2", 0, "wg1", 1) circuit1.connect("bdc2", 1, "wg2", 1) circuit1 = circuit1.simulate_network() # - # Let's create another mach-zehnder interferometer circuit, and call it `stage_2`. circuit2 = op.Network(network_id="stage_2") circuit2.add_component(ebeam.BDC, component_id="bdc1") circuit2.add_component(ebeam.BDC, component_id="bdc2") circuit2.add_component(ebeam.Waveguide, params=dict( length=10e-6), component_id="wg1") circuit2.add_component(ebeam.Waveguide, params=dict( length=10.08e-6), component_id="wg2") circuit2.connect("bdc1", 2, "wg1", 0) circuit2.connect("bdc1", 3, "wg2", 0) circuit2.connect("bdc2", 0, "wg1", 1) circuit2.connect("bdc2", 1, "wg2", 1) circuit2 = circuit2.simulate_network() # Let's create a root circuit and concatenate both networks # ``` # .-------. .-------. .____. 
# --|stage_1|---------|stage_2|---------|BDC | -- # --|_______|-- wg1 --|_______|-- wg2 --|____| -- # ``` # + root = op.Network(network_id="root") root.add_component(circuit1, circuit1.component_id) root.add_component(circuit2, circuit2.component_id) root.add_component(ebeam.Waveguide, params=dict( length=100.125e-6), component_id="wg1") root.add_component(ebeam.Waveguide, params=dict( length=50e-6), component_id="wg2") root.add_component(ebeam.BDC, component_id="bdc") root.connect("stage_1", 2, "stage_2", 0) root.connect("stage_1", 3, "wg1", 0) root.connect("stage_2", 1, "wg1", 1) root.connect("stage_2", 2, "bdc", 0) root.connect("stage_2", 3, "wg2", 0) root.connect("bdc", 1, "wg2", 1) root.simulate_network() # - root.sim_result.plot_sparameters(show_freq=False, ports=[[2,0], [3,0]], interactive=True)
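As a sanity check on what each stage does, the power transfer of one ideal MZI stage can be derived by hand from 2×2 matrices. This is a generic sketch, independent of the OPICS API; the ideal 50/50 couplers and lossless arms are simplifying assumptions.

```python
import cmath
import math

def mzi_powers(dphi):
    # Ideal MZI: 50/50 coupler -> arms with relative phase dphi -> 50/50 coupler.
    # With coupler matrix (1/sqrt(2)) [[1, 1j], [1j, 1]], the product
    # C @ diag(exp(1j*dphi), 1) @ C gives the amplitudes below for input port 0.
    half = 0.5                             # (1/sqrt(2)) from each coupler, squared
    e = cmath.exp(1j * dphi)
    bar = half * (e - 1.0)                 # same-side output port
    cross = half * 1j * (e + 1.0)          # opposite output port
    return abs(bar) ** 2, abs(cross) ** 2  # sin^2(dphi/2), cos^2(dphi/2)

for dphi in (0.0, math.pi / 2, math.pi):
    bar, cross = mzi_powers(dphi)
    print(f"dphi={dphi:.3f}  bar={bar:.3f}  cross={cross:.3f}")
```

The path-length difference between `wg1` and `wg2` in each stage sets dphi = 2π·n_eff·ΔL/λ, which is what makes the bar/cross split wavelength-dependent and the two cascaded stages a lattice filter.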
docs/source/notebooks/04-Network_concatenation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from datascience import * import numpy as np # %matplotlib inline import matplotlib.pyplot as plots plots.style.use('fivethirtyeight') # - # ## Lecture 10 ## # ## Apply staff = Table().with_columns( 'Employee', make_array('Jim', 'Dwight', 'Michael', 'Creed'), 'Birth Year', make_array(1985, 1988, 1967, 1904) ) staff def greeting(person): return '<NAME>, this is ' + person greeting('Pam') greeting('Erin') staff.apply(greeting, 'Employee') def name_and_age(name, year): age = 2019 - year return name + ' is ' + str(age) staff.apply(name_and_age, 'Employee', 'Birth Year') # ## Prediction ## galton = Table.read_table('galton.csv') galton galton.scatter('midparentHeight', 'childHeight') galton.scatter('midparentHeight', 'childHeight') plots.plot([67.5, 67.5], [50, 85], color='red', lw=2) plots.plot([68.5, 68.5], [50, 85], color='red', lw=2); nearby = galton.where('midparentHeight', are.between(67.5, 68.5)) nearby_mean = nearby.column('childHeight').mean() nearby_mean galton.scatter('midparentHeight', 'childHeight') plots.plot([67.5, 67.5], [50, 85], color='red', lw=2) plots.plot([68.5, 68.5], [50, 85], color='red', lw=2) plots.scatter(68, nearby_mean, color='red', s=50); def predict(h): nearby = galton.where('midparentHeight', are.between(h - 1/2, h + 1/2)) return nearby.column('childHeight').mean() predict(68) predict(70) predict(73) predicted_heights = galton.apply(predict, 'midparentHeight') predicted_heights galton = galton.with_column('predictedHeight', predicted_heights) galton.select( 'midparentHeight', 'childHeight', 'predictedHeight').scatter('midparentHeight') # ## Prediction Accuracy ## def difference(x, y): return x - y pred_errs = galton.apply(difference, 'predictedHeight', 'childHeight') pred_errs galton = 
galton.with_column('errors',pred_errs) galton galton.hist('errors') galton.hist('errors', group='gender') # # Discussion Question def predict_smarter(h, g): nearby = galton.where('midparentHeight', are.between(h - 1/2, h + 1/2)) nearby_same_gender = nearby.where('gender', g) return nearby_same_gender.column('childHeight').mean() predict_smarter(68, 'female') predict_smarter(68, 'male') smarter_predicted_heights = galton.apply(predict_smarter, 'midparentHeight', 'gender') galton = galton.with_column('smartPredictedHeight', smarter_predicted_heights) smarter_pred_errs = galton.apply(difference, 'childHeight', 'smartPredictedHeight') galton = galton.with_column('smartErrors', smarter_pred_errs) galton.hist('smartErrors', group='gender') # ## Grouping by One Column ## cones = Table.read_table('cones.csv') cones cones.group('Flavor') cones.drop('Color').group('Flavor', np.average) cones.drop('Color').group('Flavor', min) # ## Grouping By One Column: Welcome Survey ## survey = Table.read_table('welcome_survey_v2.csv') survey.group('Year', np.average) by_extra = survey.group('Extraversion', np.average) by_extra by_extra.select(0,2,3).plot('Extraversion') # Drop the 'Years average' column by_extra.select(0,3).plot('Extraversion') # ## Lists [1, 5, 'hello', 5.0] [1, 5, 'hello', 5.0, make_array(1,2,3)] # ## Grouping by Two Columns ## survey = Table.read_table('welcome_survey_v3.csv') survey.group(['Handedness','Sleep position']).show() # ## Pivot Tables survey.pivot('Sleep position', 'Handedness') survey.pivot('Sleep position', 'Handedness', values='Extraversion', collect=np.average) survey.group('Handedness', np.average)
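The `predict` function above is just a sliding-window average. The same idea can be sketched without the `datascience` library, on a handful of made-up (midparent, child) pairs rather than the real Galton data:

```python
# Toy stand-in for the Galton table: (midparent height, child height) pairs.
heights = [(66.0, 64.0), (67.2, 66.5), (67.8, 67.0),
           (68.1, 69.0), (68.4, 68.0), (70.0, 71.0)]

def predict(h, window=0.5):
    # Average the child heights whose midparent height lies within +/- window of h.
    nearby = [child for parent, child in heights if abs(parent - h) <= window]
    return sum(nearby) / len(nearby)

print(predict(68))  # averages 67.0, 69.0, 68.0 -> 68.0
```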
lec/lec10.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Using EKF tweaks with Sigma Point Kalman Filters # # In Sigma Point Kalman Filters (SPKF, see [**[Merwe2004]**](#merwe)) the Weighted Statistical Linear Regression (WSLR) technique is used to approximate nonlinear process and measurement functions: # # $\mathbf{y} = g(\mathbf{x}) = \mathbf{A} \mathbf{x} + \mathbf{b} + \mathbf{e}$, # # $\mathbf{P}_{ee} = \mathbf{P}_{yy} - \mathbf{A} \mathbf{P}_{xx} \mathbf{A}^{\top}$ # # where: # # $\mathbf{e}$ is the approximation error, # # $\mathbf{A} = \mathbf{P}_{xy}^{\top} \mathbf{P}_{xx}^{-1}$, # # $\mathbf{b} = \mathbf{\bar{y}} - \mathbf{A} \mathbf{\bar{x}}$, # # $\mathbf{P}_{xx} = \displaystyle\sum_{i} {w}_{ci} \left( \mathbf{\chi}_{i} - \mathbf{\bar{x}} \right) \left( \mathbf{\chi}_{i} - \mathbf{\bar{x}} \right)^{\top}$, # # $\mathbf{P}_{yy} = \displaystyle\sum_{i} {w}_{ci} \left( \mathbf{\gamma}_{i} - \mathbf{\bar{y}} \right) \left( \mathbf{\gamma}_{i} - \mathbf{\bar{y}} \right)^{\top}$, # # $\mathbf{P}_{xy} = \displaystyle\sum_{i} {w}_{ci} \left( \mathbf{\chi}_{i} - \mathbf{\bar{x}} \right) \left( \mathbf{\gamma}_{i} - \mathbf{\bar{y}} \right)^{\top}$, # # $\mathbf{\gamma}_{i} = g(\mathbf{\chi}_{i})$, # # $\mathbf{\bar{x}} = \displaystyle\sum_{i} {w}_{mi} \mathbf{\chi}_{i}$, # # $\mathbf{\bar{y}} = \displaystyle\sum_{i} {w}_{mi} \mathbf{\gamma}_{i}$, # # ${w}_{ci}$ are covariance weights and ${w}_{mi}$ are mean weights. 
# # This means that the measurement approximation error may be treated as part of the additive noise, so in SPKF we can use the following approximation of the innovation covariance $\mathbf{S}_{k}$: # # $\mathbf{S}_{k} = \mathbf{H}_{k} \mathbf{P}_{k|k-1} \mathbf{H}_{k}^{\top} + \mathbf{\tilde{R}}_{k}$, # # where # # $\mathbf{H}_{k} = \mathbf{P}_{xz, k}^{\top} \mathbf{P}_{xx, k}^{-1}$, # # $\mathbf{\tilde{R}}_{k} = \mathbf{R}_{k} + \mathbf{P}_{ee, k}$. # # This enables us to use EKF tricks such as adaptive correction or generalized linear models with SPKF. # # References # # <a name="merwe"></a>**\[Merwe2004\]** <NAME>, "Sigma-Point Kalman Filters for Probabilistic Inference in Dynamic State-Space Models", PhD Thesis, OGI School of Science & Engineering, Oregon Health & Science University, USA #
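The WSLR quantities above can be checked numerically in one dimension. The sketch below applies the classic unscented transform to g(x) = x² (the test function, x̄, Pxx and κ are our illustrative choices, not from this note; mean and covariance weights are taken equal). For a quadratic the transform is exact, and the regression slope A comes out equal to g′(x̄):

```python
import math

# 1-D unscented transform of g(x) = x**2 around xbar with variance Pxx.
g = lambda x: x * x
xbar, Pxx, kappa, n = 1.0, 0.25, 2.0, 1

spread = math.sqrt((n + kappa) * Pxx)
chi = [xbar, xbar + spread, xbar - spread]                       # sigma points
w = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]  # w_m = w_c here
gamma = [g(c) for c in chi]

ybar = sum(wi * yi for wi, yi in zip(w, gamma))
Pyy = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, gamma))
Pxy = sum(wi * (ci - xbar) * (yi - ybar) for wi, ci, yi in zip(w, chi, gamma))

A = Pxy / Pxx              # statistical regression slope, matches g'(xbar) = 2
b = ybar - A * xbar
Pee = Pyy - A * Pxx * A    # regression error covariance, must be non-negative

print(round(ybar, 6), round(A, 6), round(Pee, 6))  # 1.25 2.0 0.125
```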
doc/src/UsingEKFTricksWithSPKF.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] colab_type="text" id="Ndo4ERqnwQOU" # ##### Copyright 2018 The TensorFlow Authors. # + cellView="form" colab={} colab_type="code" id="MTKwbguKwT4R" #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # + [markdown] colab_type="text" id="xfNT-mlFwxVM" # # Convolutional Variational Autoencoder # + [markdown] colab_type="text" id="0TD5ZrvEMbhZ" # <table class="tfo-notebook-buttons" align="left"> # <td> # <a target="_blank" href="https://www.tensorflow.org/beta/tutorials/generative/cvae"> # <img src="https://www.tensorflow.org/images/tf_logo_32px.png" /> # View on TensorFlow.org</a> # </td> # <td> # <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/generative/cvae.ipynb"> # <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> # Run in Google Colab</a> # </td> # <td> # <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/generative/cvae.ipynb"> # <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> # View source on GitHub</a> # </td> # <td> # <a href="https://storage.googleapis.com/tensorflow_docs/site/en/r2/tutorials/generative/cvae.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> # </td> 
# </table> # + [markdown] colab_type="text" id="ITZuApL56Mny" # ![evolution of output during training](https://tensorflow.org/images/autoencoders/cvae.gif) # # This notebook demonstrates how to generate images of handwritten digits by training a Variational Autoencoder ([1](https://arxiv.org/abs/1312.6114), [2](https://arxiv.org/abs/1401.4082)). # # # + colab={} colab_type="code" id="P-JuIu2N_SQf" # to generate gifs # !pip install imageio # + [markdown] colab_type="text" id="e1_Y75QXJS6h" # ## Import TensorFlow and other libraries # + colab={} colab_type="code" id="YfIk2es3hJEd" from __future__ import absolute_import, division, print_function, unicode_literals # !pip install tensorflow-gpu==2.0.0-beta0 import tensorflow as tf import os import time import numpy as np import glob import matplotlib.pyplot as plt import PIL import imageio from IPython import display # + [markdown] colab_type="text" id="iYn4MdZnKCey" # ## Load the MNIST dataset # Each MNIST image is originally a vector of 784 integers, each of which is between 0-255 and represents the intensity of a pixel. We model each pixel with a Bernoulli distribution in our model, and we statically binarize the dataset. # + colab={} colab_type="code" id="a4fYMGxGhrna" (train_images, _), (test_images, _) = tf.keras.datasets.mnist.load_data() # + colab={} colab_type="code" id="NFC2ghIdiZYE" train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32') test_images = test_images.reshape(test_images.shape[0], 28, 28, 1).astype('float32') # Normalizing the images to the range of [0., 1.] train_images /= 255. test_images /= 255. # Binarization train_images[train_images >= .5] = 1. train_images[train_images < .5] = 0. test_images[test_images >= .5] = 1. test_images[test_images < .5] = 0. 
# + colab={} colab_type="code" id="S4PIDhoDLbsZ" TRAIN_BUF = 60000 BATCH_SIZE = 100 TEST_BUF = 10000 # + [markdown] colab_type="text" id="PIGN6ouoQxt3" # ## Use *tf.data* to create batches and shuffle the dataset # + colab={} colab_type="code" id="-yKCCQOoJ7cn" train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(TRAIN_BUF).batch(BATCH_SIZE) test_dataset = tf.data.Dataset.from_tensor_slices(test_images).shuffle(TEST_BUF).batch(BATCH_SIZE) # + [markdown] colab_type="text" id="THY-sZMiQ4UV" # ## Wire up the generative and inference network with *tf.keras.Sequential* # # In our VAE example, we use two small ConvNets for the generative and inference network. Since these neural nets are small, we use `tf.keras.Sequential` to simplify our code. Let $x$ and $z$ denote the observation and latent variable respectively in the following descriptions. # # ### Generative Network # This defines the generative model which takes a latent encoding as input, and outputs the parameters for a conditional distribution of the observation, i.e. $p(x|z)$. Additionally, we use a unit Gaussian prior $p(z)$ for the latent variable. # # ### Inference Network # This defines an approximate posterior distribution $q(z|x)$, which takes as input an observation and outputs a set of parameters for the conditional distribution of the latent representation. In this example, we simply model this distribution as a diagonal Gaussian. In this case, the inference network outputs the mean and log-variance parameters of a factorized Gaussian (log-variance instead of the variance directly is for numerical stability). # # ### Reparameterization Trick # During optimization, we can sample from $q(z|x)$ by first sampling from a unit Gaussian, and then multiplying by the standard deviation and adding the mean. This ensures the gradients could pass through the sample to the inference network parameters. 
# # ### Network architecture # For the inference network, we use two convolutional layers followed by a fully-connected layer. In the generative network, we mirror this architecture by using a fully-connected layer followed by three convolution transpose layers (a.k.a. deconvolutional layers in some contexts). Note, it's common practice to avoid using batch normalization when training VAEs, since the additional stochasticity due to using mini-batches may aggravate instability on top of the stochasticity from sampling. # + colab={} colab_type="code" id="VGLbvBEmjK0a" class CVAE(tf.keras.Model): def __init__(self, latent_dim): super(CVAE, self).__init__() self.latent_dim = latent_dim self.inference_net = tf.keras.Sequential( [ tf.keras.layers.InputLayer(input_shape=(28, 28, 1)), tf.keras.layers.Conv2D( filters=32, kernel_size=3, strides=(2, 2), activation='relu'), tf.keras.layers.Conv2D( filters=64, kernel_size=3, strides=(2, 2), activation='relu'), tf.keras.layers.Flatten(), # No activation tf.keras.layers.Dense(latent_dim + latent_dim), ] ) self.generative_net = tf.keras.Sequential( [ tf.keras.layers.InputLayer(input_shape=(latent_dim,)), tf.keras.layers.Dense(units=7*7*32, activation=tf.nn.relu), tf.keras.layers.Reshape(target_shape=(7, 7, 32)), tf.keras.layers.Conv2DTranspose( filters=64, kernel_size=3, strides=(2, 2), padding="SAME", activation='relu'), tf.keras.layers.Conv2DTranspose( filters=32, kernel_size=3, strides=(2, 2), padding="SAME", activation='relu'), # No activation tf.keras.layers.Conv2DTranspose( filters=1, kernel_size=3, strides=(1, 1), padding="SAME"), ] ) def sample(self, eps=None): if eps is None: eps = tf.random.normal(shape=(100, self.latent_dim)) return self.decode(eps, apply_sigmoid=True) def encode(self, x): mean, logvar = tf.split(self.inference_net(x), num_or_size_splits=2, axis=1) return mean, logvar def reparameterize(self, mean, logvar): eps = tf.random.normal(shape=mean.shape) return eps * tf.exp(logvar * .5) + mean def decode(self, 
z, apply_sigmoid=False): logits = self.generative_net(z) if apply_sigmoid: probs = tf.sigmoid(logits) return probs return logits # + [markdown] colab_type="text" id="0FMYgY_mPfTi" # ## Define the loss function and the optimizer # # VAEs train by maximizing the evidence lower bound (ELBO) on the marginal log-likelihood: # # $$\log p(x) \ge \text{ELBO} = \mathbb{E}_{q(z|x)}\left[\log \frac{p(x, z)}{q(z|x)}\right].$$ # # In practice, we optimize the single sample Monte Carlo estimate of this expectation: # # $$\log p(x| z) + \log p(z) - \log q(z|x),$$ # where $z$ is sampled from $q(z|x)$. # # **Note**: we could also analytically compute the KL term, but here we incorporate all three terms in the Monte Carlo estimator for simplicity. # + colab={} colab_type="code" id="iWCn_PVdEJZ7" optimizer = tf.keras.optimizers.Adam(1e-4) def log_normal_pdf(sample, mean, logvar, raxis=1): log2pi = tf.math.log(2. * np.pi) return tf.reduce_sum( -.5 * ((sample - mean) ** 2. * tf.exp(-logvar) + logvar + log2pi), axis=raxis) def compute_loss(model, x): mean, logvar = model.encode(x) z = model.reparameterize(mean, logvar) x_logit = model.decode(z) cross_ent = tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=x) logpx_z = -tf.reduce_sum(cross_ent, axis=[1, 2, 3]) logpz = log_normal_pdf(z, 0., 0.) 
logqz_x = log_normal_pdf(z, mean, logvar) return -tf.reduce_mean(logpx_z + logpz - logqz_x) def compute_gradients(model, x): with tf.GradientTape() as tape: loss = compute_loss(model, x) return tape.gradient(loss, model.trainable_variables), loss def apply_gradients(optimizer, gradients, variables): optimizer.apply_gradients(zip(gradients, variables)) # + [markdown] colab_type="text" id="Rw1fkAczTQYh" # ## Training # # * We start by iterating over the dataset # * During each iteration, we pass the image to the encoder to obtain a set of mean and log-variance parameters of the approximate posterior $q(z|x)$ # * We then apply the *reparameterization trick* to sample from $q(z|x)$ # * Finally, we pass the reparameterized samples to the decoder to obtain the logits of the generative distribution $p(x|z)$ # * **Note:** Since we use the dataset loaded by keras with 60k datapoints in the training set and 10k datapoints in the test set, our resulting ELBO on the test set is slightly higher than reported results in the literature which uses dynamic binarization of Larochelle's MNIST. # # ## Generate Images # # * After training, it is time to generate some images # * We start by sampling a set of latent vectors from the unit Gaussian prior distribution $p(z)$ # * The generator will then convert the latent sample $z$ to logits of the observation, giving a distribution $p(x|z)$ # * Here we plot the probabilities of Bernoulli distributions # # + colab={} colab_type="code" id="NS2GWywBbAWo" epochs = 100 latent_dim = 50 num_examples_to_generate = 16 # keeping the random vector constant for generation (prediction) so # it will be easier to see the improvement. 
random_vector_for_generation = tf.random.normal( shape=[num_examples_to_generate, latent_dim]) model = CVAE(latent_dim) # + colab={} colab_type="code" id="RmdVsmvhPxyy" def generate_and_save_images(model, epoch, test_input): predictions = model.sample(test_input) fig = plt.figure(figsize=(4,4)) for i in range(predictions.shape[0]): plt.subplot(4, 4, i+1) plt.imshow(predictions[i, :, :, 0], cmap='gray') plt.axis('off') # tight_layout minimizes the overlap between 2 sub-plots plt.savefig('image_at_epoch_{:04d}.png'.format(epoch)) plt.show() # + colab={} colab_type="code" id="2M7LmLtGEMQJ" generate_and_save_images(model, 0, random_vector_for_generation) for epoch in range(1, epochs + 1): start_time = time.time() for train_x in train_dataset: gradients, loss = compute_gradients(model, train_x) apply_gradients(optimizer, gradients, model.trainable_variables) end_time = time.time() if epoch % 1 == 0: loss = tf.keras.metrics.Mean() for test_x in test_dataset: loss(compute_loss(model, test_x)) elbo = -loss.result() display.clear_output(wait=False) print('Epoch: {}, Test set ELBO: {}, ' 'time elapse for current epoch {}'.format(epoch, elbo, end_time - start_time)) generate_and_save_images( model, epoch, random_vector_for_generation) # + [markdown] colab_type="text" id="P4M_vIbUi7c0" # ### Display an image using the epoch number # + colab={} colab_type="code" id="WfO5wCdclHGL" def display_image(epoch_no): return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no)) # + colab={} colab_type="code" id="5x3q9_Oe5q0A" plt.imshow(display_image(epochs)) plt.axis('off')# Display images # + [markdown] colab_type="text" id="NywiH3nL8guF" # ### Generate a GIF of all the saved images. 
# + colab={} colab_type="code" id="IGKQgENQ8lEI" anim_file = 'cvae.gif' with imageio.get_writer(anim_file, mode='I') as writer: filenames = glob.glob('image*.png') filenames = sorted(filenames) last = -1 for i,filename in enumerate(filenames): frame = 2*(i**0.5) if round(frame) > round(last): last = frame else: continue image = imageio.imread(filename) writer.append_data(image) image = imageio.imread(filename) writer.append_data(image) import IPython if IPython.version_info >= (6,2,0,''): display.Image(filename=anim_file) # + [markdown] colab_type="text" id="yQXO_dlXkKsT" # If you're working in Colab you can download the animation with the code below: # + colab={} colab_type="code" id="4fSJS3m5HLFM" try: from google.colab import files except ImportError: pass else: files.download(anim_file)
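The loss note above mentions that the KL term could be computed analytically instead of folded into the Monte Carlo estimate. For a diagonal Gaussian posterior and a unit Gaussian prior the closed form is one line; a hedged sketch (the helper name `analytic_kl` is ours, and it is not wired into the training loop above):

```python
import math

def analytic_kl(mean, logvar):
    # KL( N(mean, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions.
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mean, logvar))

print(analytic_kl([1.0], [0.0]))  # 0.5
```

When q equals the prior (zero mean, unit variance) the KL is zero, and it is non-negative otherwise, which makes it a useful regularizer diagnostic during training.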
site/en/r2/tutorials/generative/cvae.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="bFqstK5Hegki" # # **Step 1: Loading modules** # Before we start scraping the target website, we need to import some necessary modules from the system library. # * “requests” includes the modules for sending HTTP requests to websites, the core step for web scraping. # * “bs4/BeautifulSoup” includes the required APIs for cleaning and formatting the data collected by the web scraper. # * “pandas” includes some essential functionalities for data analytics, allowing users to quickly manipulate and analyse the data. # --- # # + id="DeSCpDTmtesQ" import requests from bs4 import BeautifulSoup import pandas as pd # + [markdown] id="2Z2wLcWuhPuk" # # **Step 2: Naïve Scraping Method (Scraping the Whole Page)** # We will now introduce the simplest way to scrape the data from a website. # * Define a Python "list" for every column you identified in the stock price table from Yahoo! Finance. # * Add the URL of the target website in the code. # * Observe the stock price table and identify the column data that will be useful. Then, use the "Inspect" feature from Chrome to show the HTML content. # * Use a for-loop to format the data collected from BeautifulSoup. # # # **Discussions** # 1. Discuss the advantages and disadvantages of the method above. # 2. If the column name of the underlying table in the website changes, does this method still work? 
# --- # # # # # # + id="ctr9TYQqjuCw" # Use requests and BeautifulSoup (BS) to scrape website data active_stocks_url = "https://finance.yahoo.com/most-active" r = requests.get(active_stocks_url) data = r.text soup = BeautifulSoup(data) # Define Python lists for every column codes=[] names=[] prices=[] changes=[] percent_changes=[] total_volumes=[] market_caps=[] price_earning_ratios=[] # + id="KyaSOPqboO3T" """ Using a for-loop, find all the <tr> tags in "stockTable". Every <tr> tag represents a row of stock data (saved as listing). We need to find all the <td> tags in the "listing" and insert their text into the relevant Python lists. """ # TODO: Fill in the relevant HTML tag in the find_all "brackets" stockTable = soup.find('tbody') for listing in stockTable.find_all('tr'): code = listing.find('td', attrs={'aria-label':'Symbol'}) codes.append(code.text) name = listing.find('td', attrs={'aria-label':'Name'}) names.append(name.text) price = listing.find('td', attrs={'aria-label':'Price (Intraday)'}) prices.append(price.text) # TODO: Use the same method as above to extract the remaining columns change = listing.find('td', attrs={'aria-label':'Change'}) changes.append(change.text) percent_change = listing.find('td', attrs={'aria-label':'% Change'}) percent_changes.append(percent_change.text) total_volume = listing.find('td', attrs={'aria-label':'Volume'}) total_volumes.append(total_volume.text) market_cap = listing.find('td', attrs={'aria-label':'Market Cap'}) market_caps.append(market_cap.text) price_earning_ratio = listing.find('td', attrs={'aria-label':'PE Ratio (TTM)'}) price_earning_ratios.append(price_earning_ratio.text) # + colab={"base_uri": "https://localhost:8080/"} id="shCJ3NnQjrhy" outputId="883f2f8a-6c20-4475-88ff-f25a9fb65bb4" """ Use pandas to create a new data frame, aggregating all Python lists into a single table. You will need to know how to use Python dictionaries in this part. 
""" df = pd.DataFrame({ "Symbol": codes, "Name": names, "Price": prices, "Change": changes, "% Change": percent_changes, "Market Cap": market_caps, "Volume": total_volumes, "PE Ratio (TTM)": price_earning_ratios }) df # + [markdown] id="grRwS5kB7jKc" # # **Step 3: Naïve Scrapping Method (Scrapping Individual Rows)** # * Copy and paste the Yahoo Finance link for currencies。 # * Use Chrome Inspector to inspect the HTML elements。 # # **Discussions** # 1. What is the difference of this method in terms of execution efficiency when compared to the previous method? # 2. If the row header, does this method still works? # 3. When should we use whole page scraping, when should we use individual row scraping? # --- # + id="GV6yyQhbtfn1" colab={"base_uri": "https://localhost:8080/"} outputId="9ae5077f-064a-4ea0-a242-16be858c7d28" currencies_url = "https://finance.yahoo.com/currencies" r = requests.get(currencies_url) data = r.text soup = BeautifulSoup(data) codes=[] names=[] last_prices=[] changes=[] percent_changes=[] # Find the starting and ending data-reactid,and the difference between each column start, end, jump = 40, 404, 14 for i in range(start, end, jump): listing = soup.find('tr', attrs={'data-reactid':i}) print(listing) code = listing.find('td', attrs={'data-reactid':i+1}) codes.append(code.text) name = listing.find('td', attrs={'data-reactid':i+3}) names.append(name.text) last_price = listing.find('td', attrs={'data-reactid':i+4}) last_prices.append(last_price.text) change = listing.find('td', attrs={'data-reactid':i+5}) changes.append(change.text) percent_change = listing.find('td', attrs={'data-reactid':i+7}) percent_changes.append(percent_change.text) pd.DataFrame({"Symbol": codes, "Name": names, "Last Price": last_prices, "Change": changes, "% Change": percent_changes}) # + [markdown] id="zzn5ZEhd_5fu" # # **Step 4: Header Scraping Method** # This method is an advanced scraping method. 
The code will automatically scrape the header so that we don't have to define the lists ourselves, making the code much simpler and cleaner. # # * Copy and paste the Yahoo Finance link for active stocks # * Scrape the headers and put them into a Python list # * Put the relevant data into a Python dictionary # --- # + id="C3HU2ghYvKBl" colab={"base_uri": "https://localhost:8080/"} outputId="9ce97e06-6d0e-4339-b11f-67efad74af6b" crypto_url = "https://finance.yahoo.com/cryptocurrencies" r = requests.get(crypto_url) data = r.text soup = BeautifulSoup(data) # Scrape all the headers raw_data = {} headers = [] for header_row in soup.find_all('thead'): for header in header_row.find_all('th'): raw_data[header.text] = [] headers.append(header.text) for rows in soup.find_all('tbody'): for row in rows.find_all('tr'): for idx, cell in enumerate(row.find_all('td')): # print(dir(cell)) raw_data[headers[idx]].append(cell.text) pd.DataFrame(raw_data) # + [markdown] id="8eWlS2Gm_6RI" # # **Step 5: Making a generic scraping function** # # We are going to turn the header method into a Python function. This function can also work for other types of financial products! # # * Define a good name for the function # * Define the input parameters and return value # # --- # # + id="x9JOzNQBvOAl" def scrape_table(url): soup = BeautifulSoup(requests.get(url).text) headers = [header.text for listing in soup.find_all('thead') for header in listing.find_all('th')] raw_data = {header:[] for header in headers} for rows in soup.find_all('tbody'): for row in rows.find_all('tr'): if len(row.find_all('td')) != len(headers): continue for idx, cell in enumerate(row.find_all('td')): raw_data[headers[idx]].append(cell.text) return pd.DataFrame(raw_data) # + [markdown] id="fI2oNQv2_7ya" # # **Concept Challenge: Scrape other products** # Try using the generic function to scrape other products in Yahoo Finance! 
# * Gainers # * Losers # * Top ETFs # --- # # + id="zYzpzgElvlOU" cryptocurrencies = scrape_table("https://finance.yahoo.com/cryptocurrencies") currencies = scrape_table("https://finance.yahoo.com/currencies") commondaties = scrape_table("https://finance.yahoo.com/commodities") activestocks = scrape_table("https://finance.yahoo.com/most-active") techstocks = scrape_table("https://finance.yahoo.com/industries/software_services") gainers = scrape_table("https://finance.yahoo.com/gainers") losers = scrape_table("https://finance.yahoo.com/losers") indices = scrape_table("https://finance.yahoo.com/world-indices") # + [markdown] id="OAI6Lhi0V7eA" # # **Step 6: Data Wrangling** # Datatype Conversion # # This part will make use of the stock data we have collected with our web scraper. However, the data collected are all stored as "strings". In other words, the data is treated as text even when the underlying value is a number. We need to convert them into the right formats for the chart plotting tools. # # Steps in data conversion: # # 1. Remove all the commas in the number data, and change columns that contain numbers to floating point. # 2. Change all columns that contain dates to datetime. # 3. Recover abbreviated numbers; for example, recover "1M" to 1000000. 
# + id="GUkFCDTAV_4b" from datetime import datetime def convert_column_to_float(df, columns): for column in columns: df[column] = pd.to_numeric(df[column].str.replace(',','').str.replace('%','')) return df def convert_column_to_datetime(df, columns): for column in columns: df[column] = pd.to_datetime(df[column]) return df def revert_scaled_number(number): mapping = {'M': 1000000, 'B': 1000000000, 'T': 1000000000000} scale = number[-1] if scale not in ['M','B','T']: return float(number.replace(',','')) return float(number[0:-1].replace(',','')) * mapping[scale] # + [markdown] id="KcADKLHZWBsC" # **Filtering dataframe** # # - We can scrape all the active stocks easily now # - Let's try to separate them into rising and losing stocks? # + id="Dgo5TIVkWC6y" # first scrape the active stocks table using the web scraper function activestocks = scrape_table("https://finance.yahoo.com/most-active") # change the data type of the dataframe columns activestocks = convert_column_to_float(activestocks, ['% Change']) # filter the dataframe by % Change (pos/neg) rising = activestocks[activestocks['% Change'] > 0] losing = activestocks[activestocks['% Change'] < 0] # + [markdown] id="CBuPJNqxWEiu" # **Sorting dataframe** # # - It's not quite clear which stock is the top gainer/loser # - We can sort the dataframe and see it clearly # + id="oAPNOMlaWGUh" rising = rising.sort_values(by=['% Change'], ascending=False) losing = losing.sort_values(by=['% Change'], ascending=True) # + [markdown] id="xoZQs0loWJPE" # Finally, if you prefer, you can add back the "+/-" sign and the percentage symbol and convert back the value to string # + id="POl_rOmRWKgn" rising['% Change']='+' + rising['% Change'].astype(str) + '%' losing['% Change']=losing['% Change'].astype(str) + '%' # + colab={"base_uri": "https://localhost:8080/"} id="C_A22ThUWLmr" outputId="39da0298-487e-4df1-9fb1-69657147c2a0" rising
1. Scraping financial data on Yahoo! Finance/1.2 Web_Scraping_with_Python_Completed_Project.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # from http://iamtrask.github.io/2015/07/12/basic-python-network/ import numpy as np import time # - # sigmoid function def nonlin(x,deriv=False): if(deriv==True): return x*(1-x) return 1/(1+np.exp(-x)) def sigmoid_interpolated(x, deriv=False): if(deriv==True): return x*(1-x) W0 = 0.5 W1 = 0.2159198 #015)) W3 = -0.0082176 #259)) W5 = 0.0001825 #597)) W7 = -0.0000018 #848)) W9 = 0.0000000 #072)) x2 = x * x x3 = x2 * x x5 = x2 * x3 x7 = x2 * x5 x9 = x2 * x7 return x9 * W9 + x7 * W7 + x5 *W5 + x3 * W3 + x * W1 + W0 # + # input dataset X = np.array([ [0,0,1], [0,1,1], [1,0,1], [1,1,1] ]) # output dataset y = np.array([[0,0,1,1]]).T # + # seed random numbers to make calculation # deterministic (just a good practice) np.random.seed(1) # initialize weights randomly with mean 0 syn0 = (2*np.random.random((3,1)) - 1) / 1000 print(syn0) # - start_time = time.time() for iter in range(100): # forward propagation l0 = X l1 = sigmoid_interpolated(np.dot(l0,syn0)) #np.maximum(l1, 0, l1) # how much did we miss? l1_error = y - l1 # multiply how much we missed by the # slope of the sigmoid at the values in l1 l1_delta = l1_error * sigmoid_interpolated(l1,True) # update weights syn0 += np.dot(l0.T,l1_delta) print("Elapsed Training Time:\n", time.time() - start_time) print ("Output After Training:") print (l1) print ("Weights After Training:") print (syn0) predictions = sigmoid_interpolated(np.dot(X,syn0)) print("Predictions:") print(predictions)
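The `sigmoid_interpolated` polynomial above replaces the true sigmoid with a degree-9 odd polynomial (useful when only additions and multiplications are available, as in secure-computation settings). Using the coefficients as given, we can check how closely it tracks `1/(1+exp(-x))`; the interval [-5, 5] is our assumption, since the notebook does not state the fitting range:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_interpolated(x):
    # Coefficients copied from the notebook above; W9 is effectively zero and omitted.
    W = {0: 0.5, 1: 0.2159198, 3: -0.0082176, 5: 0.0001825, 7: -0.0000018}
    return sum(w * x ** p for p, w in W.items())

# Worst-case absolute error on a 0.1-spaced grid over [-5, 5] (assumed range).
worst = max(abs(sigmoid(x / 10) - sigmoid_interpolated(x / 10))
            for x in range(-50, 51))
print(worst < 0.05)  # True
```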
notebooks/mpc-playground/basic-python-network.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Remember to load a conda environment with pyEMMA 2.5.7 # %matplotlib inline import os import numpy as np import pandas as pd from pyemma.coordinates.transform._tica_base import * from pyemma.coordinates.transform.nystroem_tica import * from pyemma.coordinates import tica import matplotlib.pyplot as plt import pickle from multiprocessing import Pool import itertools # <h1> Spectral oASIS </h1> # + #Loading features os.makedirs('./SpectralOasis/',exist_ok=True) featdir="./Featurization/" spectraldir="./SpectralOasis/" input_feature_data=[] for i in range(100): temp=np.load(featdir+("features/{}.npy").format(i)) input_feature_data.append(temp) # - #Preparing a list for the number of features tested and #Setting up parameters to run run_SpectraloASIS() in parallel lt=[0.2,0.4,0.6] dt=0.1 lts_in_steps=[ int(round(i/dt)) for i in lt] num_features = input_feature_data[0].shape[1] columns=[4,8,12,16,20,24,28] parameters=[(a,b) for a in columns for b in lts_in_steps] print("no. 
of features tested: ", columns) # + def run_SpectraloASIS(max_columns,lt,dt=dt,num_features=num_features, input_feature_data=input_feature_data, spectraldir="./SpectralOasis/"): """ Running Spectral oASIS Parameters ---------- max_columns : int The number of features to be selected lt : int The tICA lag time in steps input_feature_data: list containing ndarrays(dtype=int) or ndarray(n, dtype=int)) features to be selected num_features: int The number of features in the full set spectraldir: str, default="./SpectralOasis/" The directory to save output Returns ---------- (lt, max_columns, t.timescales): the lag time, the number of selected features, and the tICA timescales obtained with this number of features """ t = NystroemTICA(lt, max_columns, initial_columns=np.random.choice(num_features,1,replace=False), nsel=1) t.estimate(input_feature_data) # run Nystroem-accelerated (spectral oASIS) tICA and save the selected columns and timescales os.makedirs('{}{}'.format(spectraldir,int(lt)),exist_ok=True) np.savetxt("{}{}/feature_column{}_ticalag_{}.txt".format(spectraldir,int(lt),max_columns,int(lt)), t.column_indices, fmt='%d') np.savetxt("{}{}/timescales_column{}_ticalag_{}.txt".format(spectraldir,int(lt),max_columns,int(lt)), t.timescales) return lt,max_columns,t.timescales with Pool() as pool: results = pool.starmap(run_SpectraloASIS,parameters) df = pd.DataFrame(results) df.to_pickle("{}timescales.pickl".format(spectraldir)) # + columns_=[ i for i in columns] columns_.append(num_features) data=pd.read_pickle("{}timescales.pickl".format(spectraldir)) for n in lts_in_steps: t_timescales=data.loc[data[0] == n][2].values timescales=[] for i in range(0,len(columns_)-1): timescales.append(t_timescales[i][0]) TICA=tica(input_feature_data, lag=n) #Calculating tICA timescales for the full feature set timescales.append(TICA.timescales[0]) timescales=np.array(timescales)*dt #Plotting the tICA timescales against the number of features. We pick the feature set at which the tICA timescales have converged. 
f,ax=plt.subplots(figsize=(8,4)) ax.plot(columns_, timescales,"-o", color="b", lw=4) ax.plot([-1,num_features+1], [timescales[-1], timescales[-1]], color="k", lw=4, linestyle=":") ax.set_ylabel("Timescale (ps)", fontsize=16) ax.set_xlabel("No. of Features", fontsize=16) ax.set_xlim(-1,num_features+1) ax.set_title("Lagtime={0:.1f}ps".format(n*dt),fontsize=18) ax.tick_params(axis='both',labelsize=16) plt.tight_layout() plt.savefig(spectraldir+"Timescale_vs_FeatureNo_ticalag_{0:.1f}.png".format(n*dt)) # -
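The `parameters` list handed to `Pool.starmap` above is just the Cartesian product of column counts and lag times. A minimal sketch of the same construction (with the notebook's values restated) shows the nested list comprehension is equivalent to `itertools.product`:

```python
# Build the (max_columns, lag) parameter grid used by Pool.starmap above.
import itertools

columns = [4, 8, 12, 16, 20, 24, 28]
lt = [0.2, 0.4, 0.6]
dt = 0.1
lts_in_steps = [int(round(i / dt)) for i in lt]

parameters = list(itertools.product(columns, lts_in_steps))
print(len(parameters))  # 21 (7 column counts x 3 lag times)
print(parameters[0])    # (4, 2)
```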
notebook/SpectraloASIS-Parallel.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 # language: python # name: python36 # --- # Copyright (c) Microsoft Corporation. All rights reserved. # # Licensed under the MIT License. # # Data Exploration # # In this lab, we will explore and visualize our telemetry data. You will learn how to calculate metrics on top of your raw time series to gain deeper insights into your data. # # In this lab, you will: # - Get to know your dataset better by visualizing it # - Learn how to visualize time series data # - Become familiar with a set of standard metrics that can be defined on time series data # - Understand when to use which metric # ## Load and visualize/explore your data # + # # %matplotlib inline # let's set up your environment, and define some global variables import os from rstl import STL import pandas as pd import random import matplotlib.pyplot as plt from scipy.stats import norm import seaborn as sns import numpy as np # adjust this based on your screen's resolution fig_panel = (18, 16) wide_fig = (16, 4) dpi=80 # + # next, we load the telemetry data base_path = 'https://sethmottstore.blob.core.windows.net' data_subdir = 'predmaint' data_filename = 'telemetry.csv' data_path = os.path.join(base_path, data_subdir, data_filename) print("Reading data ... ", end="") df = pd.read_csv(data_path) print("Done.") print("Parsing datetime...", end="") df['datetime'] = pd.to_datetime(df['datetime'], format="%m/%d/%Y %I:%M:%S %p") print("Done.") df = df.rename(str, columns={'datetime': 'timestamp'}) # + # let's define some useful variables sensors = df.columns[2:].tolist() # a list containing the names of the sensors machines = df['machineID'].unique().tolist() # a list of our machine ids n_sensors = len(sensors) n_machines = len(machines) print("We have %d sensors: %s for each of %d machines." 
% (n_sensors, sensors, n_machines)) # + # let's pick a random machine random_machine = 67 df_s = df.loc[df['machineID'] == random_machine, :] # - # let's get some info about the time domain df_s['timestamp'].describe() # **Question**: At which frequency do we receive sensor data? # create a table of descriptive statistics for our data set df_s.describe() # Let's do some time series specific exploration of the data # + n_samples = 24*14 # we look at the first 14 days of sensor data plt.close() fig, ax = plt.subplots(2, 2, figsize=fig_panel, dpi=dpi) # create 2x2 panel of figures for s, sensor in enumerate(sensors): c = s%2 # column of figure panel r = int(s/2) # row of figure panel ax[r,c].plot(df_s['timestamp'][:n_samples], df_s[sensor][:n_samples]) ax[r,c].set_title(sensor) display() # - # Next, we create histogram plots to have an understanding of how these data are distributed. # + n_bins=200 plt.close() fig, ax = plt.subplots(2,2,figsize=fig_panel, dpi=dpi) for s, sensor in enumerate(sensors): c = s%2 r = int(s/2) sns.distplot(df_s[sensor], ax=ax[r,c]) display() # - # ## Useful metrics for time series data # # ### Bollinger Bands # # [Bollinger Bands](https://en.wikipedia.org/wiki/Bollinger_Bands) are a type of statistical chart characterizing the prices and volatility over time of a financial instrument or commodity, using a formulaic method propounded by <NAME> in the 1980s. Financial traders employ these charts as a methodical tool to inform trading decisions, control automated trading systems, or as a component of technical analysis. # # This can be done very quickly with pandas, because it has a built-in function `ewm` for convolving the data with a sliding window with exponential decay, which can be combined with standard statistical functions, such as `mean` or `std`. # # Of course, you can imagine that rolling means, standard deviations etc can be useful on their own, without using them for creating Bollinger Bands. 
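Before applying the band construction to the telemetry, it can be sketched on synthetic data; here a seeded random walk stands in for a sensor series, and the fraction of points inside the 2-sigma envelope is computed (note `ewm`'s first positional argument is the center-of-mass of the exponential decay):

```python
# Bollinger-style bands from an exponentially weighted mean/std on a toy series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
x = pd.Series(np.cumsum(rng.normal(size=200)))  # a random walk stand-in

window = 12
mid = x.ewm(window).mean()
band = 2 * x.ewm(window).std()
upper, lower = mid + band, mid - band

inside = ((x >= lower) & (x <= upper)).mean()  # fraction within the envelope
print(f"fraction inside the bands: {inside:.2f}")
```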
# + window_size = 12 # the size of the window over which to aggregate sample_size = 24 * 7 * 2 # let's only look at two weeks of data x = df_s['timestamp'] plt.close() fig, ax = plt.subplots(2, 2, figsize=fig_panel, dpi=dpi) for s, sensor in enumerate(sensors): c = s%2 r = int(s/2) rstd = df_s[sensor].ewm(window_size).std() rm = df_s[sensor].ewm(window_size).mean() ax[r,c].plot(x[window_size:sample_size], df_s[sensor][window_size:sample_size], color='blue', alpha=.2) ax[r,c].plot(x[window_size:sample_size], rm[window_size:sample_size] - 2 * rstd[window_size:sample_size], color='grey') ax[r,c].plot(x[window_size:sample_size], rm[window_size:sample_size] + 2 * rstd[window_size:sample_size], color='grey') ax[r,c].plot(x[window_size:sample_size], rm[window_size:sample_size], color='black') ax[r,c].set_title(sensor) display() # - # ### Lag features # # Lag features can be very useful in machine learning approaches dealing with time series. For example, if you want to train a model to predict whether a machine is going to fail the next day, you can just shift your logs of failures forward by a day, so that failures (i.e. target labels) are aligned with the feature data you will use for predicting failures. # # Luckily, pandas has a built-in `shift` function for doing this. 
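The label-shifting idea can be sketched on a toy frame before applying it to the telemetry: shifting the failure flag back one step aligns "will fail in the next period" with the current row's features:

```python
# Build a lag-style target with shift(): tomorrow's label on today's row.
import pandas as pd

events = pd.DataFrame({'reading': [10, 11, 12, 13],
                       'failed':  [0, 0, 0, 1]})
# shift(-1) pulls each row's *next* failure flag onto the current row
events['fails_next'] = events['failed'].shift(-1)
print(events['fails_next'].tolist())  # [0.0, 0.0, 1.0, nan]
```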
# + sample_size = 24 * 2 # let's only look at first two days x = df_s['timestamp'] plt.close() fig, ax = plt.subplots(2, 2, figsize=fig_panel, dpi=dpi) for s, sensor in enumerate(sensors): c = s%2 r = int(s/2) rstd = df_s[sensor].ewm(window_size).std() rm = df_s[sensor].ewm(window_size).mean() ax[r,c].plot(x[:sample_size], df_s[sensor][:sample_size], color='black', alpha=1, label='orig') ax[r,c].plot(x[:sample_size], df_s[sensor][:sample_size].shift(-1), color='blue', alpha=1, label='-1h') # shift by x hour ax[r,c].plot(x[:sample_size], df_s[sensor][:sample_size].shift(-2), color='blue', alpha=.5, label='-2h') # shift by x hour ax[r,c].plot(x[:sample_size], df_s[sensor][:sample_size].shift(-3), color='blue', alpha=.2, label='-3h') # shift by x hour ax[r,c].set_title(sensor) ax[r,c].legend() display() # - # ### Rolling entropy # # Depending on your use-case entropy can also be a useful metric, as it gives you an idea of how evenly your measures are distributed in a specific range. For more information, visit Wikipedia: # # https://en.wikipedia.org/wiki/Entropy_(information_theory) # + from scipy.stats import entropy sample_size = 24*7*4 # use the first x hours of data sensor = 'volt' sensor_data = df_s[sensor] rolling_entropy = sensor_data.rolling(12).apply(entropy) plt.close() fig, ax = plt.subplots(2,1, figsize=wide_fig) ax[0].plot(x[:sample_size], sensor_data[:sample_size]) ax[1].plot(x[:sample_size], rolling_entropy[:sample_size]) display() # - # ## Other useful metrics # # There are various other useful metrics for timeseries data. You may keep them in the back of your mind when you are dealing with another scenario. # # - Rolling median, min, max, mode etc. statistics # - Rolling majority, for categorical features # - Rolling text statistics for text features # - [Short-time fourier transform](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.stft.html) # ### Quiz # # The big question is when to use which metric for your use-case. 
# # Here are a couple of sample scenarios. Can you recommend which one of the above metrics to use in each case? # 1. You are developing a fitness application for mobile phones that have an [accelerometer](https://en.wikipedia.org/wiki/Accelerometer). You want to be able to measure how much time a user spends sitting, walking, and running over the course of a day. Which metric would you use to identify the different activities? # 2. You want to get rich on the stock market, but you hate volatility. Which metric would you use to measure volatility? # 3. You are in charge of a server farm. You are looking for a way to detect denial of service attacks on your servers. You don't want to constantly look at individual amounts of traffic at all of the servers at the same time. However, you know that all of the servers typically get a constant amount of traffic. Which metric could you use to determine that things have shifted, such as when some servers seem to be getting a lot more traffic than the other servers? # Copyright (c) Microsoft Corporation. All rights reserved. # # Licensed under the MIT License.
lab01.1_AD_Introduction/lab01.1b_AD_data_preparation_for_time_series.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python3 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Stochastic RSI (STOCH RSI) # https://www.tradingview.com/wiki/Stochastic_RSI_(STOCH_RSI)#CALCULATION # # https://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:stochrsi # + outputHidden=false inputHidden=false import numpy as np import pandas as pd import matplotlib.pyplot as plt import warnings warnings.filterwarnings("ignore") # fix_yahoo_finance is used to fetch data import fix_yahoo_finance as yf yf.pdr_override() # + outputHidden=false inputHidden=false # input symbol = 'AAPL' start = '2018-06-01' end = '2018-12-31' # Read data df = yf.download(symbol,start,end) # View Columns df.head() # + outputHidden=false inputHidden=false import talib as ta df['RSI'] = ta.RSI(df['Adj Close'], timeperiod=14) df.head(10) # + outputHidden=false inputHidden=false df = df.dropna() df.head() # + outputHidden=false inputHidden=false LL_RSI = df['RSI'].rolling(14).min() HH_RSI = df['RSI'].rolling(14).max() # + outputHidden=false inputHidden=false df['Stoch_RSI'] = (df['RSI'] - LL_RSI) / (HH_RSI - LL_RSI) df = df.dropna() df.head(10) # + outputHidden=false inputHidden=false fig = plt.figure(figsize=(14,10)) ax1 = plt.subplot(2, 1, 1) ax1.plot(df['Adj Close']) ax1.set_title('Stock '+ symbol +' Closing Price') ax1.set_ylabel('Price') ax2 = plt.subplot(2, 1, 2) ax2.plot(df['Stoch_RSI'], label='Stoch RSI') ax2.text(s='Overbought', x=df.RSI.index[30], y=0.8, fontsize=14) ax2.text(s='Oversold', x=df.RSI.index[30], y=0.2, fontsize=14) ax2.axhline(y=0.8, color='red') ax2.axhline(y=0.2, color='red') ax2.grid() ax2.set_ylabel('Volume') ax2.set_xlabel('Date') # - # ## Candlestick with Stoch RSI # + outputHidden=false inputHidden=false from matplotlib import dates as mdates import datetime as dt dfc = df.copy() 
dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close'] #dfc = dfc.dropna() dfc = dfc.reset_index() dfc['Date'] = mdates.date2num(dfc['Date'].astype(dt.date)) dfc.head() # + outputHidden=false inputHidden=false from mpl_finance import candlestick_ohlc fig = plt.figure(figsize=(14,10)) ax1 = plt.subplot(2, 1, 1) candlestick_ohlc(ax1,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0) ax1.xaxis_date() ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y')) ax1.grid(True, which='both') ax1.minorticks_on() ax1v = ax1.twinx() colors = dfc.VolumePositive.map({True: 'g', False: 'r'}) ax1v.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4) ax1v.axes.yaxis.set_ticklabels([]) ax1v.set_ylim(0, 3*df.Volume.max()) ax1.set_title('Stock '+ symbol +' Closing Price') ax1.set_ylabel('Price') ax2 = plt.subplot(2, 1, 2) ax2.plot(df['Stoch_RSI'], label='Stoch RSI') ax2.text(s='Overbought', x=df.RSI.index[30], y=0.8, fontsize=14) ax2.text(s='Oversold', x=df.RSI.index[30], y=0.2, fontsize=14) ax2.axhline(y=0.8, color='red') ax2.axhline(y=0.2, color='red') ax2.grid() ax2.set_ylabel('Volume') ax2.set_xlabel('Date') ax2.legend(loc='best') # + outputHidden=false inputHidden=false fig = plt.figure(figsize=(14,10)) ax1 = plt.subplot(2, 1, 1) candlestick_ohlc(ax1,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0) ax1.xaxis_date() ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y')) ax1.grid(True, which='both') ax1.minorticks_on() ax1v = ax1.twinx() colors = dfc.VolumePositive.map({True: 'g', False: 'r'}) ax1v.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4) ax1v.axes.yaxis.set_ticklabels([]) ax1v.set_ylim(0, 3*df.Volume.max()) ax1.set_title('Stock '+ symbol +' Closing Price') ax1.set_ylabel('Price') ax2 = plt.subplot(2, 1, 2) ax2.plot(df['Stoch_RSI'], label='Stoch RSI') ax2.text(s='Overbought', x=df.RSI.index[30], y=0.8, fontsize=14) ax2.text(s='Oversold', x=df.RSI.index[30], y=0.2, fontsize=14) ax2.fill_between(df.index, y1=0.2, y2=0.8, 
color='#adccff', alpha=0.3) ax2.axhline(y=0.8, color='red') ax2.axhline(y=0.2, color='red') ax2.grid(True, which='both') ax2.minorticks_on() ax2.set_ylabel('Stoch RSI') ax2.set_xlabel('Date') ax2.legend(loc='best')
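`talib` handles the RSI computation above, but it is not always installed. The same quantities can be sketched in plain pandas; note the simple rolling-mean RSI below differs slightly from talib's Wilder-smoothed RSI, so treat it as an approximation rather than a drop-in replacement:

```python
# Approximate RSI and Stoch RSI in plain pandas (rolling-mean smoothing).
import numpy as np
import pandas as pd

def rsi(close, period=14):
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(period).mean()
    loss = (-delta.clip(upper=0)).rolling(period).mean()
    return 100 - 100 / (1 + gain / loss)

def stoch_rsi(close, period=14):
    r = rsi(close, period)
    lo, hi = r.rolling(period).min(), r.rolling(period).max()
    return (r - lo) / (hi - lo)  # position of RSI within its recent range

# synthetic price series: upward drift plus an oscillation
close = pd.Series(np.linspace(100, 120, 60) + np.sin(np.arange(60)))
s = stoch_rsi(close).dropna()
print(s.between(0, 1).all())  # True: Stoch RSI is bounded in [0, 1]
```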
Python_Stock/Technical_Indicators/Stochastic_RSI.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Filter maximisation # # In this notebook we will develop a set of functions that allow you to see what images maximise filters inside the network. Much of the notebook here is based on the [How convolutional networks see the world](https://blog.keras.io/how-convolutional-neural-networks-see-the-world.html) blog post, but we have adapted the code to work with tensorflow, rather than straight keras. # # You should be able to take this set of functions and reuse them for your own problems. import tensorflow as tf from tqdm import notebook import numpy as np from ipywidgets import widgets import matplotlib.pyplot as plt from tensorflow import keras from tensorflow.keras.applications import VGG16 from skimage.color import rgb2gray from skimage.transform import resize # ## Build a model that outputs intermediate filter values # # To do filter maximisation we need to have models that output the values of filters in the middle of the network as information flows through. To do this we # # * Get the names of the intermediate convolutional layers # * Build a new model where every convolutional layer is also an output layer def make_activation_model(model): """Make an 'activation model' for the pre-trained model Takes an existing, pre-trained model and finds all convolution layers. Then creates a new model where every convolutional layer is also an output layer.
Args: model: a pre-trained model with convolutional layers Returns: an activation model where all convolutional layers are also output layers """ layers = [layer for layer in model.layers if 'conv' in layer.name] layer_outputs = [layer.output for layer in layers] layer_names = [layer.name for layer in layers] activation_model = tf.keras.models.Model(inputs=model.input, outputs=layer_outputs) return activation_model # ## Find an image to maximise the filter activity # # * Take the gradient of the input image with respect to the filter's activity # * Update the input image to maximise this derivative def maximize_filter_activation(input_img, activation_model, layer_index, filter_index, n_iter=20, step=1): """Maximize the activation of a given filter Based on the example code for Keras by <NAME>: https://blog.keras.io/how-convolutional-neural-networks-see-the-world.html Args: input_img: input image to maximize the activation for. This is usually gaussian random noise. activation_model: an activation model to use to sample filters from layer_index: index of the layer containing the filter. filter_index: index of the filter to maximize. n_iter: number of gradient-ascent iterations. step: gradient-ascent step size. Returns: Maximized output for the given filter of shape (num filters, h, w, channels) loss for that filter """ input_img_data = tf.constant(input_img, dtype=tf.float32) for i in range(n_iter): with tf.GradientTape() as tape: tape.watch(input_img_data) loss = tf.math.reduce_mean(activation_model(input_img_data)[layer_index][:, :, :, filter_index]) # compute the gradient of the input picture wrt this loss grads = tape.gradient(loss, input_img_data) # normalization trick: we normalize the gradient grads /= (tf.math.sqrt(tf.math.reduce_mean(tf.math.square(grads))) + 1e-5) input_img_data += (grads * step) return input_img_data, loss # ## Random filter sampling # # Here we take the filters from the layer `layer_index` and take a random sample of the filters and maximise the input image with respect to them.
def random_filter_samples(input_img, activation_model, layer_index, n_samples=10, **kwargs): """Maximize the activation for a random sample of filters Args: input_img: input image to maximize the activation for. This is usually gaussian random noise. activation_model: an activation model to use to sample filters from layer_index: index of the layer to maximize filters for. Returns: list of maximized activations for a random sample of filters. """ n_filters = activation_model.output_shape[layer_index][-1] n_samples = n_filters if n_filters < n_samples else n_samples indices = np.random.choice(n_filters, n_samples, replace=False) outputs = [] for filter_index in indices: output = maximize_filter_activation(input_img, activation_model, layer_index, filter_index, **kwargs) outputs.append(output[0].numpy()) return outputs # ## Maximum filter sampling # # Here we take the filters from the layer `layer_index` and take the `n_samples` filters with the greatest activation values. def max_filter_samples(input_img, activation_model, layer_index, n_samples=5, limit=32, **kwargs): """Get the filters that respond the most to input signals. We get this by selecting the filters with the highest loss (i.e. the strongest activation) as returned by maximize_filter_activation Args: input_img: input image to maximize the activation for. This is usually gaussian random noise. activation_model: an activation model to use to sample filters from layer_index: index of the layer to maximize filters for. n_samples: the top number of filters to return Returns: list of maximized activations for the top filters.
""" n_filters = activation_model.output_shape[layer_index][-1] indices = range(n_filters) if len(indices) > limit: indices = range(limit) outputs = [] for filter_index in indices: output = maximize_filter_activation(input_img, activation_model, layer_index, filter_index, **kwargs) outputs.append([output[0].numpy(), output[1].numpy()]) outputs = sorted(outputs, key=lambda x: x[1], reverse=True) outputs = [o[0] for o in outputs] n_samples = n_filters if n_filters < n_samples else n_samples return outputs[:n_samples] # ## The master function that controls the others # # This is the function that you call at the top level. You need to tell it the model to feed, the input image, which layers of the network you want to look at, whether to sample randomly or maximum activation filters and how many filters to look at in each layer. def sample_model_filters(model, input_img, layers=[], mode='random', n_samples=5, **kwargs): """Sample filters from a model This will randomly sample filters from the model and maximise the activation of that filter using the given input image/ Args: model: input model to sample filters from input_img: the input image to use to maximise filter activation. Must match the input shape expected by the model. layers: the index of the layers to sample Returns: list of samples of maximized input for filters form each layer of the model """ activation_model = make_activation_model(model) model_layer_outputs = [ ] if layers == []: layers = range(len(activation_model.output_shape)) for layer_index in notebook.tqdm(layers): if mode == 'random': output = random_filter_samples(input_img, activation_model, layer_index, n_samples, **kwargs) elif mode == 'max': output = max_filter_samples(input_img, activation_model, layer_index, n_samples, **kwargs) model_layer_outputs.append(output) return model_layer_outputs # ## Set up the network and the image # # * Set up a random image as input, you can choose the dimensions. 
# * Load a pre-trained model from keras - we will use `VGG16` with the `imagenet` weights img_width = 64 img_height = 64 model = VGG16(include_top=False, weights='imagenet') # we start from a gray image with some noise input_img = np.random.random((1, img_width, img_height, 3)) * 20 + 128. # ## Run the maximisation procedure # # In the first instance let's run the first, fifth and eighth layers and sample 3 filters from each. We can see how the filters develop through the network. We will run the image maximisation for 40 iterations with a step size of 5. model_layer_outputs = sample_model_filters(model, input_img, layers=[0, 4, 7], mode='random', n_samples=3, n_iter=40, step=5) # ## Visualise the results # # Using `ipywidgets` we will look at the outputs of the maximisation. @widgets.interact(layer_outputs=widgets.fixed(model_layer_outputs), index=widgets.Select(options=range(len(model_layer_outputs)))) def plot_filters(layer_outputs, index): outputs = layer_outputs[index] n_filters = len(outputs) fig, axes = plt.subplots(n_filters, 1, figsize=(20, 18)) fig.subplots_adjust(hspace=0.1, wspace=0.1) for ax, l in zip(axes.flatten(), outputs): img = rgb2gray(l) ax.imshow(np.squeeze(img), cmap='Blues', interpolation='gaussian') ax.axis('off') # ## Exercises # # * Play around with looking at different layers. # * Try feeding an actual image rather than a random array; what are the results? def preprocess_image(image_path, size=0.3): # Util function to open, resize and format pictures # into appropriate arrays.
img = keras.preprocessing.image.load_img(image_path) img = keras.preprocessing.image.img_to_array(img) img = resize(img, (int(img.shape[0]*size), int(img.shape[1]*size))) img = np.expand_dims(img, axis=0) return img base_image_path = tf.keras.utils.get_file("sky.jpg", "https://i.imgur.com/aGBdQyK.jpg") img = preprocess_image(base_image_path) model_layer_outputs = sample_model_filters(model, img, layers=[7], mode='random', n_samples=1, n_iter=20, step=5) imgg = rgb2gray((model_layer_outputs[0][0])) fig, ax = plt.subplots(1, 2, figsize=(20, 20)) ax[0].imshow(np.squeeze(rgb2gray(img)), cmap='Blues') ax[1].imshow(np.squeeze(imgg), cmap='Blues') for i in range(2): ax[i].set_xticks([]) ax[i].set_yticks([])
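Stripped of TensorFlow, the update rule inside `maximize_filter_activation` is plain gradient ascent with an RMS-normalized gradient. A toy sketch of the same idea on a one-dimensional objective f(x) = -(x - 3)^2, whose maximum sits at x = 3 (a decaying step is added here so the iterate settles; the notebook itself uses a constant step):

```python
# Gradient ascent with the same RMS-normalization trick, on a toy objective.
import numpy as np

x = np.array([0.0])
for i in range(60):
    g = -2.0 * (x - 3.0)                          # gradient of -(x - 3)^2
    g = g / (np.sqrt(np.mean(g ** 2)) + 1e-5)     # the normalization trick
    x = x + g / (i + 1.0)                         # decaying step size
print(x[0])  # settles near 3.0, the maximizer
```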
course_2.0/09_debugging_exploring_extra.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} id="trying-volume" outputId="b858f397-a99c-4e8e-9cba-fd74bd93af8e" pip install requests # + id="stupid-italy" outputId="eaca66ef-ebfb-4975-c012-0815f71b5b2f" pip install bs4 # + id="unauthorized-tutorial" import requests from bs4 import BeautifulSoup # + colab={"base_uri": "https://localhost:8080/"} id="analyzed-yugoslavia" outputId="b9dd3a7e-252b-4e7f-8176-7b0f2190a382" laptop_name=[] laptop_price=[] page_num=input("Enter number of pages:") for i in range(1,int(page_num)+1): url="https://itti.com.np/laptops-by-brands/asus-laptop-nepal?p="+str(i) req=requests.get(url) content=BeautifulSoup(req.content,'html.parser') name=content.find_all('h2',{"class":"product name product-item-name product-name"}) price=content.find_all('span',{"class":"price"}) print("Laptops in page" + str(i)) print(len(name)) print(len(price)) for i in name: laptop_name.append(i.text) for i in price: laptop_price.append(i.text) # + colab={"base_uri": "https://localhost:8080/"} id="atlantic-petite" outputId="702c8c9e-6eca-4795-cd09-e5f640d2ea10" for i in laptop_name: print(i) # + colab={"base_uri": "https://localhost:8080/"} id="demonstrated-unknown" outputId="01568e8f-2fba-422b-e783-cd900a79782b" for i in laptop_price: print(i) # + colab={"base_uri": "https://localhost:8080/"} id="celtic-generation" outputId="def9585a-e44b-476a-8ec3-06eddf7af8f9" for i in range(len(laptop_name)): print(laptop_name[i] + '\t ' + laptop_price[i])
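The scraped prices are raw strings; their exact format (currency prefix, thousands separators) is an assumption here, but a small helper can normalize such strings to integers for sorting or analysis:

```python
# Normalize price strings like "Rs. 150,000" to plain integers.
# The input formats shown are assumed examples, not verified site output.
import re

def parse_price(text):
    digits = re.sub(r'[^\d]', '', text)  # keep only the digit characters
    return int(digits) if digits else None

print(parse_price('Rs. 150,000'))  # 150000
print(parse_price('NPR 89,999'))   # 89999
```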
data_mining/lpps/scripts/itti/scrapes_itti_asus.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="64aq4ZKbZPdW" colab_type="text" # # Tutorial: Ground State of 1D Transverse-field Ferromagnetic Ising Model (TFIM) with RNN wavefunctions # # Code by **<NAME>** and **<NAME>**. # # # + [markdown] id="SM5KKhReRnHb" colab_type="text" # **This notebook is intended to help the reader get familiar with Exact Diagonalization (ED) and Positive Recurrent Neural Network (pRNN) wavefunctions. Here, we just explore small system sizes for pedagogical purposes and to keep the running time very short.** # # # Check that you are specifying the right "path" below to the Google Colab notebook on Drive to make sure everything below works properly # # Also make sure you use a GPU by going to "Runtime/Change Runtime type" in Google Colaboratory to get a speedup # + id="g_lu1pgdX0vr" colab_type="code" outputId="cf7cfa3f-efbe-4e25-94f8-8fb2390f9b5b" executionInfo={"status": "ok", "timestamp": 1581260390811, "user_tz": 300, "elapsed": 1522, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-NN_4cqAjH6U/AAAAAAAAAAI/AAAAAAAAA5s/M6vFCZPEjP0/s64/photo.jpg", "userId": "11447742370504927382"}} colab={"base_uri": "https://localhost:8080/", "height": 36} from google.colab import drive drive.mount('/content/gdrive/') import sys path = 'gdrive/My Drive/RNNWavefunctions/RNNWavefunctions-master/1DTFIM' sys.path.append(path) # + id="AR5TEw4sZrkm" colab_type="code" colab={} import numpy as np import matplotlib.pyplot as plt from matplotlib import rcParams rcParams['axes.labelsize'] = 20 rcParams['font.serif'] = ['Computer Modern'] rcParams['font.size'] = 10 rcParams['legend.fontsize'] = 20 rcParams['xtick.labelsize'] = 20 rcParams['ytick.labelsize'] = 20 # + [markdown] id="j4tkqj6qipmp" colab_type="text" # ## **Calculating the ground state energy 
of 1DTFIM using Exact Diagonalization (ED)** # + [markdown] id="KzO3B4RWaXxA" colab_type="text" # Here, we attempt to calculate the ground state and the ground state energy of the 1D Transverse field Ising Model with Open Boundary Conditions using Exact diagonalization. # # The Hamiltonian is given as follows: # # $$\hat{H}_{\text{TFIM}} = - \sum_{\langle i,j \rangle} \hat{\sigma}^{z}_i \hat{\sigma}^{z}_j - B_x \sum_{i} \hat{\sigma}^{x}_i $$ # # where ${\bf \sigma}_i$ are Pauli matrices. Here, # $\langle i,j \rangle$ denote # nearest neighbor pairs. # + id="ssa4VPWUitU-" colab_type="code" colab={} def IsingMatrixElements(Jz,Bx,sigmap): """ computes the matrix elements of the open Ising Hamiltonian for a given state sigmap ----------------------------------------------------------------------------------- Parameters: Jz: np.ndarray of shape (N) and dtype=float: Ising couplings sigmap: np.ndarray of dtype=int and shape (N) spin-state, integer encoded (using 0 for down spin and 1 for up spin) A sample of spins can be fed here. Bx: np.ndarray of shape (N) and dtype=float: transverse magnetic field ----------------------------------------------------------------------------------- Returns: 2-tuple of type (np.ndarray,np.ndarray) sigmas: np.ndarray of dtype=int and shape (?,N) the states for which there exist non-zero matrix elements for given sigmap matrixelements: np.ndarray of dtype=float and shape (?)
the non-zero matrix elements """ #the diagonal part is simply the sum of all Sz-Sz interactions diag=0 sigmas=[] matrix_elements=[] N = Jz.shape[0] for site in range(N-1): if sigmap[site]==sigmap[site+1]: #if the two neighbouring spins are the same (We use open Boundary Conditions) diag-=Jz[site] #add a negative energy contribution (We use ferromagnetic couplings) else: diag+=Jz[site] matrix_elements.append(diag) sigmas.append(sigmap) #off-diagonal part (For the transverse Ising Model) for site in range(N): if Bx[site] != 0: sig = np.copy(sigmap) sig[site]=np.abs(1-sig[site]) matrix_elements.append(-Bx[site]) sigmas.append(sig) return np.array(sigmas),np.array(matrix_elements) def ED_1DTFIM(N=10, h = 1): """ Returns a tuple (eta,U) eta = a list of energy eigenvalues. U = a list of energy eigenvectors """ Jz=+np.ones(N) Bx=+h*np.ones(N) basis = [] #Generate a z-basis for i in range(2**N): basis_temp = np.zeros((N)) a = np.array([int(d) for d in bin(i)[2:]]) l = len(a) basis_temp[N-l:] = a basis.append(basis_temp) basis = np.array(basis) H=np.zeros((basis.shape[0],basis.shape[0])) #prepare the hamiltonian for n in range(basis.shape[0]): sigmas,elements=IsingMatrixElements(Jz,Bx,basis[n]) for m in range(sigmas.shape[0]): for b in range(basis.shape[0]): if np.all(basis[b,:]==sigmas[m,:]): H[n,b]=elements[m] break eta,U=np.linalg.eigh(H) #diagonalize return eta,U # + [markdown] id="S7ZohOddPLWS" colab_type="text" # It may take some time to do exact diagonalization. You can try up to $N=12$ spins; otherwise you have to wait for a very long time. 
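As a sanity check of the routine above, the smallest nontrivial case can be diagonalized directly: for N = 2 spins with J = Bx = 1, the TFIM ground-state energy can be worked out by hand and equals -sqrt(5):

```python
# Verify the N=2, h=1 TFIM ground-state energy via explicit Pauli matrices.
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=float)
sx = np.array([[0, 1], [1, 0]], dtype=float)
I2 = np.eye(2)

# H = -sz (x) sz - sx (x) I - I (x) sx, i.e. the open-boundary TFIM for N=2
H = -np.kron(sz, sz) - np.kron(sx, I2) - np.kron(I2, sx)
e0 = np.linalg.eigvalsh(H).min()
print(np.round(e0, 6))  # -2.236068, i.e. -sqrt(5)
```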
# + id="FrXZBzMcjGc6" colab_type="code" outputId="8c90315c-0462-4438-d5c7-bd10a6fe817c" executionInfo={"status": "ok", "timestamp": 1581260415432, "user_tz": 300, "elapsed": 26093, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-NN_4cqAjH6U/AAAAAAAAAAI/AAAAAAAAA5s/M6vFCZPEjP0/s64/photo.jpg", "userId": "11447742370504927382"}} colab={"base_uri": "https://localhost:8080/", "height": 55} eta, U = ED_1DTFIM(N=10, h = 1) print('The ground state energy is:') print(min(eta)) E_exact = min(eta) # + [markdown] id="3xmDq6XVlrVh" colab_type="text" # ## **Representing the ground state** # + [markdown] id="ZZf0mWkBa72x" colab_type="text" # Sometimes it is useful to represent the ground state in a plot as shown below, to know some information about the properties of the ground state, such as symmetries, sign of the amplitudes,... # + id="eoZ1mC7llQtS" colab_type="code" outputId="cc3c8052-a72a-4f57-c780-68c865c2ec02" executionInfo={"status": "ok", "timestamp": 1581260415438, "user_tz": 300, "elapsed": 26081, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-NN_4cqAjH6U/AAAAAAAAAAI/AAAAAAAAA5s/M6vFCZPEjP0/s64/photo.jpg", "userId": "11447742370504927382"}} colab={"base_uri": "https://localhost:8080/", "height": 411} fig, ax = plt.subplots(figsize=(10,6)) ground_state = -U[:,np.nonzero(eta==np.min(eta))[0][0]] plt.plot(ground_state,label="Ground state") ax.set_xlabel(r'Basis index', fontsize = 25) ax.set_ylabel('Amplitude', fontsize = 25) plt.show() # + [markdown] id="7wj7-yrgbh3s" colab_type="text" # We notice here that the amplitudes of the ground state in the z-basis do not change sign, hence we can use a positive recurrent neural network wavefunction (pRNN wavefunction). 
# + [markdown] id="p__YUOmUiuES" colab_type="text" # ## **Calculating the ground state energy using an RNN wavefunction** # + [markdown] id="saskRC4Ob1rI" colab_type="text" # Now that we have obtained the ground state energy from exact diagonalization, we are going to use this value as a reference to assess the quality of the variational energy calculated by the pRNN wavefunction # + id="t65l5kn5XjNa" colab_type="code" outputId="062bf103-10e9-45a7-ffbb-7d32c9bcee26" executionInfo={"status": "ok", "timestamp": 1581260763138, "user_tz": 300, "elapsed": 15478, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-NN_4cqAjH6U/AAAAAAAAAAI/AAAAAAAAA5s/M6vFCZPEjP0/s64/photo.jpg", "userId": "11447742370504927382"}} colab={"base_uri": "https://localhost:8080/", "height": 1000} from TrainingRNN_1DTFIM import run_1DTFIM #numsteps = number of training iterations #systemsize = number of physical spins #Bx = transverse magnetic field #numsamples = number of samples used for training numsamples = 200 #num_units = number of memory units of the hidden state of the RNN #num_layers = number of vertically stacked RNN cells #This function trains a pRNN wavefunction for 1DTFIM with the corresponding hyperparams RNNEnergy, varRNNEnergy = run_1DTFIM(numsteps = 1000, systemsize = 10, Bx = +1, num_units = 10, num_layers = 1, numsamples = numsamples, learningrate = 5e-3, seed = 111) #RNNEnergy is a numpy array of the variational energy of the pRNN wavefunction #varRNNEnergy is a numpy array of the variance of the variational energy of the pRNN wavefunction # + [markdown] id="peYQlfmihIW-" colab_type="text" # ## **Comparison of RNN results with ED** # + [markdown] id="hXamMx9PclH1" colab_type="text" # Now that we have the variational energies at each training step, we can compare the RNN results with exact diagonalization # + id="1mbmXNmgdIQP" colab_type="code" outputId="f3eb211f-dd70-4c6f-9249-3d71a36718b1" executionInfo={"status": "ok", "timestamp": 1581260770335, "user_tz":
300, "elapsed": 877, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-NN_4cqAjH6U/AAAAAAAAAAI/AAAAAAAAA5s/M6vFCZPEjP0/s64/photo.jpg", "userId": "11447742370504927382"}} colab={"base_uri": "https://localhost:8080/", "height": 55} #Computing the ground state energy by taking average over the last 100 iterations print("Ground state energy = ", np.mean(RNNEnergy[-100:]), "+-", np.sqrt(np.max(varRNNEnergy[-100:])/(numsamples*100))) #We use np.max(varRNNEnergy[-100:]) to estimate an upper bound on the error print("Exact ground state energy = ", E_exact) # + [markdown] id="2Z_9__WGeOHU" colab_type="text" # Amazing! We can also plot the variational energy and the energy variance where we observe convergence during the last iterations. # + id="S6U81Z0rhHaH" colab_type="code" outputId="b3d58e0e-26f2-4568-e65c-78b5bf39a9bf" executionInfo={"status": "ok", "timestamp": 1581260778334, "user_tz": 300, "elapsed": 1300, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-NN_4cqAjH6U/AAAAAAAAAAI/AAAAAAAAA5s/M6vFCZPEjP0/s64/photo.jpg", "userId": "11447742370504927382"}} colab={"base_uri": "https://localhost:8080/", "height": 411} fig, ax = plt.subplots(figsize=(10,6)) ax.plot(np.arange(1, len(RNNEnergy)+1), RNNEnergy, "b-", label="Variational energy of the RNN") ax.plot(np.arange(1, len(RNNEnergy)+1), [E_exact]*len(RNNEnergy), "k--", label="Exact energy") ax.set_xlabel(r'Training step', fontsize = 25) ax.set_ylabel('Variational energy', fontsize = 25) plt.legend() plt.show() # + [markdown] colab_type="text" id="48mlVcvhkXAA" # ### **Energy variance** # + id="mdFR_2yHj05j" colab_type="code" outputId="2a55d20b-bbc1-4bb0-bbde-ae652bbbeca7" executionInfo={"status": "ok", "timestamp": 1581260786035, "user_tz": 300, "elapsed": 1248, "user": {"displayName": "<NAME>", "photoUrl": "https://lh4.googleusercontent.com/-NN_4cqAjH6U/AAAAAAAAAAI/AAAAAAAAA5s/M6vFCZPEjP0/s64/photo.jpg", "userId": "11447742370504927382"}} 
colab={"base_uri": "https://localhost:8080/", "height": 411} fig, ax = plt.subplots(figsize=(10,6)) ax.semilogy(np.arange(1, len(RNNEnergy)+1), varRNNEnergy, "b-", label="Energy variance of the RNN") ax.set_xlabel(r'Training step', fontsize = 25) ax.set_ylabel('Energy variance', fontsize = 25) plt.legend() plt.show() # + [markdown] id="cvav7t7Efi7O" colab_type="text" # ## **Explorations** # + [markdown] id="rlcpx-jqfniT" colab_type="text" # - If you want to explore large system sizes with the pRNN wavefunction, here are some ground state energies of 1DTFIM at the critical point (Bx = 1) given by DMRG, which can be considered exact: # # # > N=20 : -25.1077971081 # # > N=30 : -37.8380982304 # # > N=40 : -50.5694337844 # # > N=50 : -63.3011891370 # # > N=60 : -76.0331561023 # # > N=70 : -88.7652446334 # # > N=80 : -101.4974094169 # # > N=90 : -114.2296251736 # # > N=100 : -126.9618766964 # # > N=1000 : -1272.8762945220 # # - You can also play with the hyperparameters (memory units, number of layers, number of samples, learning rate) to obtain better accuracies. # #
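The DMRG reference energies quoted above can themselves be sanity-checked: the exact ground state energy per site of the critical TFIM in the thermodynamic limit is $-4/\pi \approx -1.27324$, and the open-boundary finite-size values should approach it with a correction that shrinks roughly like $1/N$. A quick sketch:

```python
import numpy as np

# A few of the DMRG ground state energies quoted above (N -> energy)
dmrg = {20: -25.1077971081, 100: -126.9618766964, 1000: -1272.8762945220}

# Exact thermodynamic-limit energy per site of the critical TFIM
e_inf = -4.0 / np.pi

# Deviation of the per-site energy from the N -> infinity value;
# it should shrink monotonically (roughly like 1/N) as N grows
deviations = {n: e / n - e_inf for n, e in dmrg.items()}
```

Running this shows the deviation dropping by about a factor of 5 for each factor-of-5 increase in `N`, consistent with a $1/N$ boundary correction.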
Tutorials/1DTFIM/Tutorial_1DTFIM.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import lightgbm as lgb import pandas as pd import catboost as cb import xgboost as xgb import catboost.datasets # Load data # ==== epsilon_test = catboost.datasets.epsilon()[1] X_test = epsilon_test.drop(0, axis=1).values xgb_test = xgb.DMatrix(X_test) catboost_pool = cb.Pool(X_test) epsilon_model = cb.CatBoost() epsilon_model.load_model('epsilon8k_64.bin') print(epsilon_model.tree_count_) # 32 thread apply # ==== bst = xgb.Booster({'nthread': 32}) # set 32 thread in openmp threadpool bst.load_model('XGBoost.model') # load data lgb_model = lgb.Booster(model_file='LightGBM_model.txt') # %timeit -r 5 bst.predict(xgb_test, ntree_limit=8000) # %timeit -r 5 lgb_model.predict(X_test, num_iteration=8000) # %timeit -r 5 epsilon_model.predict(catboost_pool, thread_count=32, ntree_end=8000) # 1 thread apply # ==== bst = xgb.Booster({'nthread': 1}) # set 1 thread in openmp threadpool bst.load_model('XGBoost.model') # load data lgb_model = lgb.Booster(model_file='LightGBM_model.txt') # %time _ = bst.predict(xgb_test, ntree_limit=8000) # %time _ = lgb_model.predict(X_test, num_iteration=8000) # %time _ = epsilon_model.predict(catboost_pool, thread_count=1, ntree_end=8000)
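The `%timeit` and `%time` magics above only work inside IPython; when running this benchmark as a plain script, a small `perf_counter` helper (illustrative, not part of the original benchmark) gives comparable best-of-N timings:

```python
import time

def bench(fn, repeats=5):
    """Return the best wall-clock time in seconds over several runs of a
    zero-argument callable, mimicking `%timeit -r <repeats>` coarsely."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

# usage with the model objects loaded above, e.g.:
# print(bench(lambda: bst.predict(xgb_test, ntree_limit=8000)))
```

Best-of-N is preferred over the mean here because prediction timings are skewed by caching and scheduler noise, and the minimum is the least contaminated measurement.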
catboost/benchmarks/model_evaluation_speed/model_evaluation_benchmark.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np from scipy.cluster.vq import kmeans2 from skimage import color # %matplotlib inline # + # Orientation preference map: for now, use k-means on Blasdel image # rgb_img = mpimg.imread('v1-topology-blasdel-figure6.png') # rgb_img = mpimg.imread('orientation-preference-rubin-figure6.png') rgb_img = mpimg.imread('orientation-obermayer-fig1.png') plt.figure() plt.imshow(rgb_img) plt.title('Original topographic image') if rgb_img.shape[2] > 3: print "Throwing away the alpha channel..." rgb_img = rgb_img[:,:,0:-1] lab_img = color.rgb2lab(rgb_img) # convert to L*a*b* colourspace ab = lab_img[:,:,1:] n_rows = np.shape(ab)[0] n_cols = np.shape(ab)[1] ab = np.reshape(ab, (n_rows*n_cols, 2)) n_colours = 30 centroids, labels = kmeans2(ab, n_colours) labels = np.reshape(labels, (n_rows, n_cols)) rgb_labels = np.tile(labels[:,:,None], [1,1,3]) OP_range = np.linspace(0, 180, n_colours, endpoint=False) full_OP_map = np.copy(labels) for i in range(n_colours): seg_img = np.copy(rgb_img) seg_img[rgb_labels != i] = 0 # assign an orientation preference (degrees) based on segmentation full_OP_map[full_OP_map == i] = OP_range[i] # Show the individual segmented images: # plt.figure() # plt.imshow(seg_img) N_pairs = 75 # no. of E/I pairs to a side of a grid field_size = 16. 
# size of field to a side (degrees) dx = field_size / N_pairs xy_range = np.linspace(0, field_size, N_pairs, False) # xy_range = np.linspace(-field_size/2, field_size/2, N_pairs) xv, yv = np.meshgrid(xy_range, xy_range) # x and y grid values (degrees) # sample the OP map uniformly min_dim = np.min(np.shape(full_OP_map)) # Sampling the map evenly - results in poor continuity - use o # o_samples = np.round(np.linspace(0, min_dim-1, N_pairs)) # xo, yo = np.meshgrid(o_samples, o_samples) # xo = xo.astype(int) # yo = yo.astype(int) OP_map = full_OP_map[-N_pairs:, -N_pairs:] # OP_map = OP_map.astype(float) plt.figure() plt.imshow(OP_map) plt.colorbar() # + # Ocular dominance map: from Obermayer and Blasdel, 1993 # which contains images of ocular dominance and orientation preference from the same region def rgb2gray(rgb): return np.dot(rgb[...,:3], [0.299, 0.587, 0.114]) OD_raw = mpimg.imread('ocular-dom-obermayer-fig1.png') print OD_raw.shape OD_gray = rgb2gray(OD_raw) plt.figure() plt.imshow(OD_gray, cmap = plt.get_cmap('gray')) plt.colorbar() OD_norm = (OD_gray - np.min(OD_gray) ) / np.max(OD_gray - np.min(OD_gray)) plt.figure() plt.imshow(OD_norm, cmap = plt.get_cmap('gray')) plt.colorbar() OP_map = OD_norm[-N_pairs-1:-1, -N_pairs-1:-1] plt.figure() plt.imshow(OP_map, cmap='gray') plt.colorbar() print OP_map.shape # -
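The min-max normalisation applied inline to the ocular dominance image above can be factored into a small reusable helper (a sketch; the function name is illustrative and a constant image is handled explicitly to avoid division by zero):

```python
import numpy as np

def minmax_normalize(img):
    """Rescale an array linearly so its values span [0, 1]."""
    shifted = img - np.min(img)
    peak = np.max(shifted)
    if peak == 0:  # constant image: avoid dividing by zero
        return np.zeros_like(shifted, dtype=float)
    return shifted / peak
```

This reproduces `(OD_gray - np.min(OD_gray)) / np.max(OD_gray - np.min(OD_gray))` from the cell above while making the intent explicit.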
mechanistic/orientation_map_kmeans.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os import tqdm import pandas as pd import numpy as np from maupassant.settings import DATASET_PATH # - # sentiment_path = os.path.join(DATASET_PATH, "sentiment.csv") sentiment_path = "clean_sentiment.csv" sentiment = pd.read_csv(sentiment_path) sentiment.head() # + cols_binary = ['negative', 'positive', 'neutral'] cols_multi = ["insult", "negative", "neutral", "obscene", "positive", "toxic"] cols_single = ['negative', 'positive', 'neutral'] binary, multi, single = [], [], [] pbar = tqdm.tqdm(total=len(sentiment)) for idx, row in sentiment.iterrows(): p_binary = row[cols_binary] == 1 p_multi = row[cols_multi] == 1 p_single = row[cols_single] == 1 try: val = np.asarray(cols_binary)[p_binary.values][0] if val == "neutral": val = "positive" binary.append( val ) except: binary.append( "negative" ) multi.append( np.asarray(cols_multi)[p_multi.values].tolist() ) try: single.append( np.asarray(cols_single)[p_single.values][0] ) except: single.append( "negative" ) pbar.update(1) pbar.close() # - sentiment['binary'] = binary sentiment['multi'] = multi sentiment['single'] = single sentiment sentiment.to_csv('sentiment.csv', index=False) from sklearn.model_selection import train_test_split train, test = train_test_split(sentiment, test_size=0.2, random_state=42) train.shape test.shape test, val = train_test_split(test, test_size=0.5, random_state=42) test.shape val.shape train.to_csv('sentiment_train.csv', index=False) test.to_csv('sentiment_test.csv', index=False) val.to_csv('sentiment_val.csv', index=False)
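The `iterrows` loop above does row-by-row Python-level work; the binary label, for example, can be computed vectorised. Below is a sketch reproducing the same first-match-wins logic (column names assumed to match the `cols_binary` list above):

```python
import pandas as pd

def binary_labels(df, cols=("negative", "positive", "neutral")):
    """Vectorised equivalent of the binary-label loop above: the first
    column among `cols` equal to 1 wins, 'neutral' is folded into
    'positive', and rows with no 1 default to 'negative'."""
    hits = df[list(cols)].eq(1)
    # idxmax picks the first True column; rows with no hit fall back
    first = hits.idxmax(axis=1).where(hits.any(axis=1), "negative")
    return first.replace("neutral", "positive")
```

On a frame of this size (tens of thousands of rows) the vectorised version avoids the per-row overhead that makes the `tqdm` loop slow.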
notebooks/CreateDataset.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Import Libraries from image_preprocessing import datagens, prepareImages, prepareLabels from train_valid_split import train_valid_split, train_valid_dict_generator from keras.preprocessing import image from keras.applications.imagenet_utils import preprocess_input from sklearn.preprocessing import LabelEncoder from keras.utils.np_utils import to_categorical # + import tensorflow as tf from PIL import Image from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow import keras from tensorflow.keras import layers # - import pandas as pd import numpy as np import cv2 import model4 from model4 import model_deep # + from tensorflow.keras.optimizers import Adam from keras.callbacks import ReduceLROnPlateau # - # ## Load Images & Labels # load training data with image names and Ids train_labels = pd.read_csv("df_train.csv") train, valid = train_valid_split(train_labels) train_df, valid_df = train_valid_dict_generator(train,valid,train_labels) train_df_final = pd.DataFrame(train_df.items(), columns=['Image', 'Id']) valid_df_final = pd.DataFrame(valid_df.items(), columns=['Image', 'Id']) all_train_classes = list(train_df_final['Id'].unique()) all_valid_classes = list(valid_df_final['Id'].unique()) # keep only the common classes within both datasets common_classes = [] for i in all_valid_classes: if i in all_train_classes: common_classes.append(i) train_df_final = train_df_final[train_df_final['Id'].isin(common_classes)] # prepare the training images by applying preprocessing and normalization X_train = prepareImages(train_df_final,train_df_final.shape[0], "...\\Documents\\whale_identification\\whale_identification\\data\\train_rev1\\") X_train.shape # prepare classes by one hot encoding whale categories y_train = 
prepareLabels(train_df_final,len(train_df_final['Id'].unique())) y_train.shape # - We have more classes represented in validation set than in the training set, because of the way we split the data originally. # - For consistency and for the purposes of this notebook, let's keep just the common classes for now. len(all_train_classes) len(common_classes) valid_df_final = valid_df_final[valid_df_final['Id'].isin(common_classes)] # prepare validation features by applying preprocessing and normalization X_valid = prepareImages(valid_df_final,valid_df_final.shape[0], "...\\Documents\\whale_identification\\whale_identification\\data\\train\\") X_valid.shape # prepare validation labels by one hot encoding the whale categories y_valid = prepareLabels(valid_df_final,len(valid_df_final['Id'].unique())) # ## Train a CNN model = model_deep() model.compile(optimizer='Adam', loss='categorical_crossentropy',metrics =['accuracy']) # apply data augmentations train_datagen, valid_datagen = datagens() epochs = 20 batch_size = 1000 validation_set = valid_datagen.flow(X_valid, y_valid, batch_size = batch_size) history = model.fit_generator(train_datagen.flow(X_train, y_train, batch_size=batch_size), epochs= epochs, verbose = 2, steps_per_epoch = X_train.shape[0] // batch_size, validation_data = validation_set, validation_steps = X_valid.shape[0] // batch_size ) model.save('my_model') model.save_weights('my_weights.h5') import matplotlib.pyplot as plt # plot the accuracy curve plt.plot(history.history['val_accuracy'], color='g', label="Validation Accuracy") plt.plot(history.history['accuracy'], color='c', label="Train Accuracy") plt.title("Validation Accuracy") plt.xlabel("Number of Epochs") plt.ylabel("Accuracy") plt.legend() plt.show()
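The common-class filtering done earlier with a Python loop can be expressed as a set intersection. A minimal sketch (the helper name is illustrative; `label_col` defaults to the `Id` column used above):

```python
import pandas as pd

def keep_common_classes(train_df, valid_df, label_col="Id"):
    """Restrict both splits to labels present in each split (set
    intersection), mirroring the loop-based filtering used above."""
    common = set(train_df[label_col]) & set(valid_df[label_col])
    return (train_df[train_df[label_col].isin(common)],
            valid_df[valid_df[label_col].isin(common)])
```

This keeps the training and validation label spaces identical, which the one-hot encodings produced by `prepareLabels` implicitly require.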
model-1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from IPython.display import Image # # Smart Reply: Automated Response Suggestion for Email # # **My Notes:** # # * Most of the text in this notebook is taken directly from the [paper](https://www.kdd.org/kdd2016/papers/files/Paper_1069.pdf) with the exception of this section. # * Overview of the Google Smart Reply architecture, the challenges faced and the innovations used to overcome these challenges. # * One key innovation is the development of a response set using semi-supervised learning. This vastly improves the quality and utility of the suggested responses. # + Step 1: Canonicalize email responses. Each sentence is parsed using a dependency parser and its syntactic structure is used to generate a canonicalized representation. # + Responses are limited to those with 10 or fewer tokens # + Step 2: Semi-supervised learning with scalable graph algorithms is used to construct semantic clusters. Specifically, the EXPANDER graph algorithm. # + Step 3: 100 random samples are drawn from each of the learned clusters and curated by humans to validate their accuracy. # * The triggering model improves scalability by removing the 90% of emails where a response suggestion would not be useful. # # # ## Abstract # # This paper gives an overview of the Smart Reply system developed by Google, which allows users to respond to an email with a single tap. The system presents the user with three options to choose from and is responsible for 10% of all email replies on mobile. The system is designed to process hundreds of millions of emails a day. # # ## System Architecture # # Below is an image from the paper showing the system architecture. # # **Overview:** # # * Based on a sequence-to-sequence learning architecture which uses LSTMs to predict sequences of text.
# * Consistent with the approach of the [Neural Conversational Model](https://arxiv.org/pdf/1506.05869.pdf) # * The model's input sequence is an incoming message and the output is a distribution over the space of possible replies. # # **Main Challenges:** # # * **Response Quality**: How to ensure that the individual response options are always high quality in language and content # * **Utility**: How to select multiple options to show a user to maximize the likelihood that one is chosen # * **Scalability**: How to efficiently process millions of messages per day while remaining within the latency requirements of an email delivery system # * **Privacy**: How to develop the system without ever inspecting the data except for aggregate statistics. # # **Smart Reply Components:** # # * **Response Selection:** At the core of the system an LSTM neural network processes an incoming message and uses it to predict the most likely responses. Scalability is improved by only finding the approximate best responses. # * **Response Set Generation:** To deliver high response quality, responses are only selected from a response space which is generated offline using a semi-supervised graph learning approach. # * **Diversity:** After finding a set of most likely responses from the LSTM, choose a small set to show to the users that maximizes the utility of one being chosen. # * **Triggering Model:** A feed-forward neural network decides whether or not to suggest responses. This further improves utility by not showing suggestions when they are unlikely to be used. This is broken out into a separate architecture for scalability, allowing them to use a computationally cheaper architecture than that used for the scoring model. # # ![Smart Reply Architecture](./img/google_smart_reply.png) # # ## Selecting Responses # # The fundamental task of Smart Reply is to find the most likely response given an original message.
In other words, given original message $\textbf{o}$ and the set of all possible responses $R$, we would like to find: # # $$ # \textbf{r}^* = \underset{\textbf{r} \in R}{\operatorname{argmax}}\ P(\textbf{r}|\textbf{o}) # $$ # # To find this response, they construct a model that can score responses and then find the highest scoring response. # # ### LSTM Model # # Scoring one sequence of tokens **r**, conditional on another sequence of tokens **o**, is a natural fit for sequence-to-sequence learning. The model itself is an LSTM; the input is the tokens of the original message $\{o_1, ..., o_n\}$, and the output is the conditional probability distribution of the sequence of response tokens given the input: # # $$ # P(r_1, ..., r_m | o_1, ..., o_n) # $$ # # This distribution is factorized as # # $$ # P(r_1, ..., r_m | o_1, ..., o_n) = \prod_{i=1}^{m} P(r_i|o_1, ..., o_n, r_1, ..., r_{i-1}) # $$ # # An EOS token is included with the original input message so that the LSTM's hidden state encodes a vector representation of the whole message. Given the hidden state, a softmax output is computed and interpreted as $P(r_1|o_1, ..., o_n)$, the probability distribution of the first response token. # # #### Training # # The training objective is to maximize the log probability of observed responses, given their respective originals: # # $$ # \sum_{(\textbf{o}, \textbf{r})} \log P(r_1, ..., r_m | o_1, ..., o_n) # $$ # # Optimized with Adagrad over 10 epochs. Beyond the standard LSTM formulation, the addition of a [recurrent projection layer](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43905.pdf) significantly improved both the quality and the convergence time of the proposed model. # # Gradient clipping (with a value of 1) was also essential in ensuring stable results. # # #### Inference # # At inference time, the original message is fed in and the output of the softmaxes is used to get a probability distribution at each time step.
These probability distributions can be used in different ways: # # 1. To draw a random sample from the response distribution $P(r_1, ..., r_m | o_1, ..., o_n)$. This can be done by sampling one token at each timestep and feeding it back into the model. # 2. To approximate the most likely response given the original message. This can be done greedily by taking the most likely token at each time step and feeding it back in. A less greedy strategy is to use a beam search, i.e., take the top _b_ tokens and feed them in, then retain the _b_ best response prefixes and repeat. # 3. To determine the likelihood of a specific response candidate. This can be done by feeding in each token of the candidate and using the softmax output to get the likelihood of the next candidate token. # # ## Challenges # # ### Response Quality # # Suggested responses need to be high quality in style, tone, diction, and content. Since the model is trained on a corpus of real messages, we need to account for the possibility that the most probable response is not necessarily a high quality response. Even a response that occurs frequently in our corpus may not be appropriate to surface back to the users. For example, it could contain poor grammar, spelling, or mechanics. While restricting the vocabulary might address simple cases such as profanity and spelling errors, it would not be sufficient to capture the wide variability with which politically incorrect statements can be made. Instead, a semi-supervised learning approach is used to construct a target response space R comprising only high quality responses. Then the model is used to select the best response in R, rather than the best response from any sequence of words. # # ### Utility # # To improve specificity of responses, we apply some light normalization that penalizes responses which are applicable to a broad range of incoming messages.
Utility is further improved by first passing incoming messages through a triggering model to determine whether Smart Reply suggestions should be shown at all. # # ### Scalability # # The model cannot introduce latency into the process of email delivery, so scalability is critical. Exhaustively scoring every response candidate $r \in R$ would require $O(|R|l)$ LSTM steps where $l$ is the length of the longest response. R is also expected to grow over time and become very large given the tremendous diversity with which people communicate. In a uniform sample of 10 million short responses (10 tokens or fewer), more than 40% occur only once. Therefore, rather than perform an exhaustive scoring of every candidate $r \in R$, we would like to efficiently search for the best responses such that complexity is not a function of $|R|$. # # First, the elements of R are organized into a trie; then a left-to-right beam search is conducted that only retains hypotheses which appear in the trie. This search process has complexity $O(bl)$ for beam size $b$ and maximum response length $l$. Both $b$ and $l$ are typically in the range of 10-30, so this method dramatically reduces the time to find the top responses and is a critical element of making this system deployable. # # ## Response Set Generation # # Two of the core challenges we face when building the end-to-end automated response system are response quality and utility. Response quality comes from suggesting high quality responses that deliver a positive user experience. Utility comes from ensuring that we don't suggest multiple responses that capture the same intent (for example, minor lexical variations such as "Yes, I'll be there." and "I will be there."). # # We first need to define a target response space that comprises high quality messages which can be surfaced as suggestions. The goal here is to generate a structured response set that effectively captures the various intents conveyed by people in natural language conversations.
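The trie-restricted beam search described in the Scalability section above can be sketched in a few lines. This is a toy illustration: `score_next` stands in for the LSTM's per-token log-probabilities and is an assumed interface, not the paper's code.

```python
import math

def build_trie(responses):
    """Trie over whitespace-tokenised responses; '$' marks end-of-response."""
    root = {}
    for resp in responses:
        node = root
        for tok in resp.split():
            node = node.setdefault(tok, {})
        node["$"] = {}
    return root

def trie_beam_search(root, score_next, beam=3, max_len=10):
    """Left-to-right beam search keeping only prefixes present in the trie.
    score_next(prefix, token) -> log-probability of the next token."""
    hyps = [((), 0.0, root)]            # (tokens, log-prob, trie node)
    complete = []
    for _ in range(max_len):
        cands = []
        for toks, lp, node in hyps:
            for tok, child in node.items():
                if tok == "$":          # a full response in the target set
                    complete.append((toks, lp))
                else:
                    cands.append((toks + (tok,), lp + score_next(toks, tok), child))
        hyps = sorted(cands, key=lambda c: -c[1])[:beam]
        if not hyps:
            break
    return sorted(complete, key=lambda c: -c[1])

# toy usage: with uniform token scores the search enumerates valid responses
responses = ["sounds good", "sounds great", "see you then"]
found = trie_beam_search(build_trie(responses), lambda prefix, tok: math.log(0.5))
```

Because hypotheses that leave the trie are never created, the work per step is bounded by the beam size, matching the $O(bl)$ complexity claimed above rather than growing with $|R|$.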
The target response space should capture variability in both language and intents. The result is used in two ways downstream: # # 1. Define a response space for scoring and selecting suggestions using the LSTM model previously described. # 2. Promote diversity among chosen suggestions # # The response set is constructed using only the most frequent anonymized sentences aggregated from the preprocessed data. This process yields a few million unique sentences. # # ### Canonicalizing email responses # # The first step is to automatically generate a set of canonical response messages that capture the variability in language. Each sentence is parsed using a dependency parser and its syntactic structure is used to generate a canonicalized representation. Words or phrases that are modifiers or unattached to head words are ignored. # # ### Semantic intent clustering # # In the next step, responses are partitioned into semantic clusters where a cluster represents a meaningful response intent. All messages within a cluster share the same semantic meaning but may appear very different. This step helps to automatically digest all the information present in frequent responses into a coherent set of semantic clusters. If we were to build a semantic intent prediction model for this purpose, we would need access to a large corpus of sentences annotated with their corresponding semantic intents. This is neither practical nor feasible, so instead this task is modeled as a semi-supervised machine learning problem, using scalable graph algorithms to automatically learn this information from data and a few human-provided examples. # # ### Graph Construction # # We start with a few manually defined clusters sampled from the top frequent messages (e.g., thanks, i love you, sounds good). A small number of example responses are added as seeds for each cluster. # # A base graph is then constructed with frequent response messages as nodes ($V_R$).
For each response message, we further extract a set of lexical features (ngrams and skip-grams of length up to 3) and add these as feature nodes ($V_F$) to the same graph. Edges are created between a pair of nodes ($u,v$) where $u \in V_R$ and $v \in V_F$ if $v$ belongs to the feature set for response $u$. We follow the same process and create nodes for the manually labelled examples $V_L$. Incoming messages could also be treated as responses to another email depending on the context. Inter-message relations as shown in the above example can be modeled within the same framework by adding extra edges between the corresponding message nodes in the graph. # # ### Semi-Supervised Learning # # The constructed graph captures relationships between similar canonicalized responses via the feature nodes. Next, we learn a semantic labeling for all the response nodes by propagating semantic intent information from the manually labeled examples through the graph. We treat this as a semi-supervised learning problem and use the distributed [EXPANDER](http://proceedings.mlr.press/v51/ravi16.pdf) framework for optimization. The learning framework is scalable and naturally suited for semi-supervised graph propagation tasks such as the semantic clustering problem described here. The following objective function is minimized for the response nodes in the graph: # # $$ # s_i||\hat{C_i} - C_i||^2 + \mu_{pp}||\hat{C_i} - U||^2 + \mu_{np}\left(\sum_{j \in N_F(i)}w_{ij}||\hat{C_i} - \hat{C_j}||^2 + \sum_{k \in N_R(i)} w_{ik}||\hat{C_i} - \hat{C_k}||^2 \right) # $$ # # Where, # # * $s_i$ is an indicator function equal to 1 if the node $i$ is a seed and 0 otherwise # * $\hat{C_i}$ is the learned semantic cluster distribution for response node $i$.
# * $C_i$ is the true label distribution (i.e., for the manually provided examples) # * $N_F(i)$ and $N_R(i)$ represent the feature and message neighborhood of the node $i$ # * $\mu_{np}$ is the predefined penalty for neighboring nodes with divergent label distributions # * $\hat{C_j}$ is the learned label distribution for feature neighbor $j$ # * $w_{ij}$ is the weight of feature $j$ in response $i$ # * $\mu_{pp}$ is the penalty for a label distribution deviating from the prior, a uniform distribution U # # The objective function for the feature nodes is similar, except that there is no first term, as there are no seed labels for feature nodes: # # $$ # \mu_{np}\sum_{i \in N_F(j)}w_{ij}||\hat{C_j} - \hat{C_i} ||^2 + \mu_{pp}||\hat{C_j} - U||^2 # $$ # # The objective function is jointly optimized for all nodes in the graph. # # The output from EXPANDER is a learned distribution of semantic labels for every node in the graph. We assign the top scoring output label as the semantic intent for the node; labels with low scores are filtered out. # # ![Expander Algorithm](./img/google_expander.png) # # To discover new clusters which are not covered by the labeled examples, we run the semi-supervised learning algorithm in repeated phases: # # 1. Run the label propagation algorithm for 5 iterations. Then fix the cluster assignment and randomly sample 100 new responses from the remaining unlabeled nodes in the graph. # + The sampled nodes are treated as potential new clusters and labeled with their canonicalized representation. # 2. Rerun label propagation with the new labeled set of clusters and repeat this procedure until convergence (i.e., until no new clusters are discovered and members of a cluster do not change between iterations). # 3.
The iterative propagation method allows us both to expand cluster membership and to discover new clusters, where each cluster has an interpretable semantic meaning. # # ### Cluster Validation # # Finally, we extract the top k members for each semantic cluster, sorted by their label scores. The set of (response, cluster label) pairs is then validated by human raters. The raters are provided with a response $R_i$, a corresponding cluster label $C$ (e.g., thanks), as well as a few example responses belonging to the cluster (e.g., "Thanks!", "Thank you"), and asked whether $R_i$ belongs to $C$. # # The result is an automatically generated and validated set of high-quality response messages labeled with semantic intent. This is subsequently used by the response scoring model to search for approximate best responses to an incoming email, and further to enforce diversity among the top responses shown. # # ## Suggestion Diversity # # As discussed in Section 3, the LSTM first processes an incoming message and then selects the approximate best responses from the target response set created using the method described in Section 4. Recall that we follow this with some light normalization to penalize responses that may be too general to be valuable to the user. The effect of this normalization can be seen by comparing columns 1 and 2 of Table 2. For example, the very generic "Yes!" falls out of the top ten responses. # # If we simply select the top N responses, there is a high probability that all of these responses are very similar in meaning. The job of the diversity component is to select a more varied set of suggestions using two strategies: omitting redundant responses and enforcing negative or positive responses. # # ### Omitting Redundant Responses # # This strategy assumes that the user should never see two responses of the same intent. Intents are defined by the clusters generated by the Expander algorithm.
The actual diversity strategy is simple: the top responses are iterated over in order of decreasing score. Each response is added to the list of suggestions unless its intent is already covered by a response on the suggestion list. # # ### Enforcing Negatives and Positives # # We have observed that LSTMs have a strong tendency towards producing positive responses, whereas negative responses such as I can't make it or I don't think so typically receive lower scores. There is often utility in including a negative option in the list of suggestions. # # If the top two responses (after omitting redundant responses) contain at least one positive response and none of the top 3 responses are negative, the third response is replaced by a negative one. # # In order to find the negative response, a second LSTM pass is performed. In this second pass, the search is restricted to only the negative responses in the target set. This is necessary since the top responses produced in the first pass may not contain any negatives. In situations where the responses are all negative, an analogous strategy is employed for enforcing at least one positive response. # # ## Triggering # # The triggering module is the entry point of the smart reply system. It is responsible for filtering out messages that are bad candidates for suggesting responses. This includes emails for which short replies are not appropriate, as well as emails for which no reply is necessary at all. # # The module is applied to every incoming email just after the preprocessing step. If the decision is negative, execution is finished and no suggestions are shown. Smart Reply currently produces responses for around 11% of all emails, so this system vastly reduces the number of useless suggestions seen by users. # # The main part of the triggering component is a feedforward neural network which produces a probability score for every incoming message. If the score is above some threshold, we trigger and run the LSTM scoring.
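An illustrative sketch of the two diversity strategies (all names are hypothetical; in the real system the negative candidate comes from a second, restricted LSTM pass rather than from the same scored list, and the positive/negative trigger condition is more specific than this simplification):

```python
def diversify(scored, intent, is_negative, k=3):
    """scored: (response, score) pairs sorted by decreasing score."""
    suggestions, seen = [], set()
    for resp, _ in scored:                     # omit redundant intents
        if intent[resp] not in seen:
            suggestions.append(resp)
            seen.add(intent[resp])
        if len(suggestions) == k:
            break
    # if none of the suggestions is negative, swap in the best negative
    if suggestions and not any(is_negative(r) for r in suggestions):
        negatives = [r for r, _ in scored
                     if is_negative(r) and r not in suggestions]
        if negatives:
            suggestions[-1] = negatives[0]
    return suggestions

scored = [("Sounds good!", 0.90), ("Sounds great!", 0.85),
          ("Yes, works for me.", 0.80), ("OK!", 0.70),
          ("Sorry, I can't make it.", 0.30)]
intent = {"Sounds good!": "agree", "Sounds great!": "agree",
          "Yes, works for me.": "confirm", "OK!": "ok",
          "Sorry, I can't make it.": "decline"}
out = diversify(scored, intent, is_negative=lambda r: intent[r] == "decline")
# out == ["Sounds good!", "Yes, works for me.", "Sorry, I can't make it."]
```

Note how the redundant "Sounds great!" is dropped in favor of a different intent, and the low-scoring negative replaces the last slot.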
# # ### Data and Features # # In order to label our training corpus of emails, we use as positive examples those emails that have been responded to. More precisely, out of the data set described in Section 7.1, we create a training set that consists of pairs ($\textbf{o}, y$) where $\textbf{o}$ is an incoming message and $y \in \{true, false\}$ is a boolean label, which is true if the message had a response and false otherwise. For the positive class, we consider only messages that were replied to from a mobile device, while for the negative class we use a subset of all messages. We downsample the negative class to balance the training set. Our goal is to model $P(y = true|\textbf{o})$, the probability that message $\textbf{o}$ will have a response on mobile. # # After preprocessing, we extract content features (e.g., unigrams, bigrams) from the message body, subject and headers. We also use various social signals, like whether the sender is in the recipient's address book, whether the sender is in the recipient's social network, and whether the recipient responded in the past to this sender. # # ### Network Architecture and Training # # We use a feedforward multilayer perceptron with an embedding layer (for a vocabulary of roughly one million words) and three fully connected hidden layers. We use feature hashing to bucket rare words that are not present in the vocabulary. The embeddings are separate for each sparse feature type (e.g., unigram, bigram) and, within one feature type, we aggregate embeddings by summing them up. Then, all sparse feature embeddings are concatenated with each other and with the vector of dense features. # # We use the ReLU activation function for non-linearity between the layers. A dropout layer is applied after each hidden layer. The model is trained using AdaGrad with the logistic loss cost function.
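A minimal numpy sketch of the input pipeline described above — per-feature-type embedding tables aggregated by summation, rare words bucketed by feature hashing, then concatenated with dense social signals. All sizes, tokens, and signal names here are illustrative toys, not the production vocabulary or dimensions:

```python
import zlib
import numpy as np

VOCAB, HASH_BUCKETS, DIM = 50, 8, 4                 # toy sizes
vocab = {w: i for i, w in enumerate(["meeting", "tomorrow", "lunch"])}
rng = np.random.default_rng(0)
emb_unigram = rng.normal(size=(VOCAB + HASH_BUCKETS, DIM))
emb_bigram = rng.normal(size=(VOCAB + HASH_BUCKETS, DIM))

def lookup(feature, table):
    # feature hashing: out-of-vocabulary features share a few extra buckets
    idx = vocab.get(feature, VOCAB + zlib.crc32(feature.encode()) % HASH_BUCKETS)
    return table[idx]

def featurize(tokens, dense_signals):
    # separate embedding table per sparse feature type, summed within a type
    uni = sum(lookup(t, emb_unigram) for t in tokens)
    bi = sum(lookup(a + "_" + b, emb_bigram) for a, b in zip(tokens, tokens[1:]))
    # all sparse embeddings concatenated with the dense (social) features
    return np.concatenate([uni, bi, dense_signals])

# dense signals, e.g. sender-in-address-book, replied-before, same-network
x = featurize(["meeting", "tomorrow", "ok?"], np.array([1.0, 0.0, 1.0]))
```

The resulting vector `x` would feed the three ReLU hidden layers (with dropout) whose sigmoid output approximates $P(y = true|\textbf{o})$.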
papers/Google_SmartReply.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os from utils import load_data_to_mem, augmentation, show_images # + main_dir = "." data_dir = os.path.join(main_dir, "data") train_dir = os.path.join(data_dir, "train") classes = ['I', 'II', 'III', 'IV', 'V', 'VI', 'VII', 'VIII'] # - X, y = load_data_to_mem(train_dir, classes, img_height=64, img_width=64) print("Before augmentation len(X) =", len(X), ", len(y) =", len(y)) X, y = augmentation(X, y, n_transform=5) print("After augmentation len(X) =", len(X), ", len(y) =", len(y)) # + idx = [200, 349, 555, 650, 800, 929] n = 5 for i in range(5): for j in range(6): idx.append(idx[n - 5] + 940) n += 1 imgs_to_show = [X[i] for i in idx] labels_to_show = [y[i] for i in idx] show_images(imgs_to_show, labels_to_show) # -
Overview_augmented_images.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/farhanfuadabir/SHL2020/blob/master/SHL_split_and_batch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="2Ad_Fv9HrO85" outputId="e6948287-9d54-4fea-9174-1d1aeb7fbbea" colab={"base_uri": "https://localhost:8080/", "height": 122} from google.colab import drive drive.mount('/content/drive') # + id="nhjWE6Spq09X" def createBatch(data, batch_size = 1000, random_state=None): """ Randomly selects `batch_size` entries of each label from the dataset Parameters: data : The dataset to create small batch from. This must be a DataFrame. batch_size : Batch size of each label random_state : Seed for the random number generator (if int), or numpy RandomState object. 
Returns: Series or DataFrame """ y = data.label num_label = y.nunique() newDataBatch = pd.DataFrame(columns=data.columns) for i in range(1, num_label + 1): data_i = data.loc[y == i, :].sample(n=batch_size, random_state=random_state) newDataBatch = pd.concat([newDataBatch, data_i], axis=0) return newDataBatch def random_split_half(data, random_state=123): import pandas as pd y = data.label num_label = y.nunique() newDataBatch1 = pd.DataFrame(columns=data.columns) for i in range(1, num_label + 1): data_i = data.loc[y == i, :].sample(frac=0.5, random_state=random_state) newDataBatch1 = pd.concat([newDataBatch1, data_i], axis=0) newDataBatch2 = pd.concat([data, newDataBatch1], axis=0) newDataBatch2 = newDataBatch2.drop_duplicates(keep=False) return newDataBatch1, newDataBatch2 def process_train_validation(trainSet, valSet=None, trainBatch=1000, splitValSet=False, random_state=1234, removeConstantColumn=True, scaleFeatures=True, noTestLabel=False): import pandas as pd from sklearn.preprocessing import MinMaxScaler print('Given Train Set Shape: ', trainSet.shape) print('Given Validation Set Shape: ', valSet.shape) if trainBatch != None: print('\nCreating train batch...', end=' ') trainSet = createBatch(trainSet, batch_size=trainBatch, random_state=random_state) print('Done | Shape: ', trainSet.shape) if splitValSet == True: print('Merging train batch with half validation set...', end=' ') valTrain, valSet = random_split_half(valSet, random_state=random_state) trainSet = pd.concat([trainSet, valTrain], axis = 0) print('Done | Shape: ', trainSet.shape) X_train = trainSet.drop('label', axis=1) y_train = trainSet.label if noTestLabel == False: X_val = valSet.drop('label', axis=1) y_val = valSet.label else: X_val = valSet if removeConstantColumn == True: X_temp = X_train X_train = X_train.loc[:, (X_temp != X_temp.iloc[0]).any()] X_val = X_val.loc[:, (X_temp != X_temp.iloc[0]).any()] if scaleFeatures == True: scaler = MinMaxScaler(feature_range=(-1,1)) X_train = 
scaler.fit_transform(X_train) X_val = scaler.transform(X_val) if noTestLabel == False: print('\nX_train Shape: ', X_train.shape, ' | y_train Shape: ', y_train.shape) print('X_val Shape: ', X_val.shape, ' | y_val Shape: ', y_val.shape) print('\nInstances of each Label of the Train Set: ') print(y_train.value_counts()) print('\nInstances of each Label of the Validation Set: ') print(y_val.value_counts()) return X_train, y_train, X_val, y_val else: print('\nX_train Shape: ', X_train.shape, ' | y_train Shape: ', y_train.shape) print('X_val Shape: ', X_val.shape) print('\nInstances of each Label of the Train Set: ') print(y_train.value_counts()) return X_train, y_train, X_val # + id="USfCBEJarigi" outputId="5fa29be1-0e07-4217-bed2-a2bd9008a5f1" colab={"base_uri": "https://localhost:8080/", "height": 425} import pandas as pd import numpy as np from joblib import load, dump from keras.models import Sequential from keras.layers import Dense from keras.utils import to_categorical path = '/content/drive/My Drive/SHL Features Pickle/' train1Prefix = 'validation_2019' train2Prefix = 'validation_2020' train3Prefix = 'test_2019' testPrefix = 'test_2020' valPrefix = 'train_2020' positions = ['_hand', '_bag', '_torso', '_hips'] position_val = ['_hand', '_hips'] data_train1 = pd.DataFrame() for pos in positions: #Unpickle Train1 Set print('Unpickling from: ' + train1Prefix + pos + '_DATA.pickle ...',end=' ') temp = pd.read_pickle(path + train1Prefix + pos + '_DATA.pickle') print('Done | Shape: ', temp.shape) data_train1 = data_train1.append(temp,ignore_index=True) print(train1Prefix, ' shape: ', data_train1.shape) data_train2 = pd.DataFrame() for pos in positions: #Unpickle Train2 Set print('Unpickling from: ' + train2Prefix + pos + '_DATA.pickle ...',end=' ') temp = pd.read_pickle(path + train2Prefix + pos + '_DATA.pickle') print('Done | Shape: ', temp.shape) data_train2 = data_train2.append(temp,ignore_index=True) print(train2Prefix, ' shape: ', data_train2.shape) #Unpickle
Train3 Set print('Unpickling from: ' + train3Prefix + '_hand' + '_DATA.pickle ...',end=' ') data_train3 = pd.read_pickle(path + train3Prefix + '_hand' + '_DATA.pickle') print('Done | Shape: ', data_train3.shape) print(train3Prefix, ' shape: ', data_train3.shape) data_train = pd.concat([data_train1, data_train2, data_train3], axis=0) print('\n\ndata_train shape: ', data_train.shape, end='\n\n') data_val = pd.DataFrame() for pos in position_val: #Unpickle Validation Set print('Unpickling from: ' + valPrefix + pos + '_DATA.pickle ...',end=' ') temp = pd.read_pickle(path + valPrefix + pos + '_DATA.pickle') print('Done | Shape: ', temp.shape) data_val = data_val.append(temp,ignore_index=True) print(valPrefix, ' shape: ', data_val.shape) #Unpickle Test Set print('Unpickling from: ' + testPrefix + '_hand' + '_DATA.pickle ...',end=' ') data_test = pd.read_pickle(path + testPrefix + '_hand' + '_DATA.pickle') print('Done | Shape: ', data_test.shape) # Check for nan if data_train.isna().any().any() == True: print('\nnan Detected in Train set') # Drop nan rows print('Dropping nan rows...',end=' ') data_train.dropna(inplace=True) print('Done | Shape: ', data_train.shape) if data_val.isna().any().any() == True: print('\nnan Detected in Validation set') # Drop nan rows print('Dropping nan rows...',end=' ') data_val.dropna(inplace=True) print('Done | Shape: ', data_val.shape) if data_test.isna().any().any() == True: print('\nnan Detected in Test set') # Drop nan rows print('Dropping nan rows...',end=' ') data_test.dropna(inplace=True) print('Done | Shape: ', data_test.shape) # + id="JX8qKc-iroYR" outputId="ba6378f4-9c59-4288-a35d-63a5e581bf03" colab={"base_uri": "https://localhost:8080/", "height": 476} X_train, y_train, X_test, y_test = process_train_validation(data_train2, data_val, trainBatch=None, splitValSet=False, random_state=1234) # + id="T4PxS8X2r0Ks" outputId="bc7665a5-5ac0-4b6d-9a22-08ad272cae20" colab={"base_uri": "https://localhost:8080/", "height": 697} import time 
from sklearn.model_selection import train_test_split, cross_val_score from sklearn.metrics import classification_report, confusion_matrix, accuracy_score from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier(n_estimators=300,verbose=True,n_jobs=-1) clf.fit(X_train, y_train) y_pred = clf.predict(X_test) # Evaluate Algorithm print("\n\nConfusion Matrix: \n\n",confusion_matrix(y_test,y_pred)) print("\n\nReport: \n\n",classification_report(y_test,y_pred)) print("Accuracy: ",accuracy_score(y_test,y_pred)) # + id="enJ2hkMxtIw7" outputId="63d62f4f-901f-4ade-ed43-21d033c3622e" colab={"base_uri": "https://localhost:8080/", "height": 425} def print_unique_count(X): unique, counts = np.unique(X, return_counts=True) print(np.asarray((unique, counts)).astype(int).T) print('\n\ny_pred value_counts: \n') print_unique_count(y_pred) print('\n\ny_test value_counts: \n') print_unique_count(y_test) # + id="xmq5TvjWx9mY" outputId="42bcb308-520f-4ce0-a7fc-b501972917e3" colab={"base_uri": "https://localhost:8080/", "height": 561} _, _, X_test_final = process_train_validation(data_train, data_test, trainBatch=None, splitValSet=False, noTestLabel=True, random_state=1234) y_pred_final = clf.predict(X_test_final) print('\n\ny_pred value_counts: \n') print_unique_count(y_pred_final)
SHL_split_and_batch.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Objective # # Predict the sales price for each house. # For each Id in the test set, you must predict the value of the SalePrice variable. # # Competition Link: https://www.kaggle.com/c/home-data-for-ml-course/overview/description import pandas as pd from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_absolute_error from sklearn.model_selection import train_test_split home_data = pd.read_csv('data/train.csv') # + # Create Target and Features # Create target object and call it y y = home_data.SalePrice # Create X # features = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd'] features = ['MSSubClass', 'LotFrontage', 'LotArea', 'OverallQual', 'OverallCond', 'YearBuilt', 'YearRemodAdd', 'MasVnrArea', 'BsmtFinSF1', 'BsmtUnfSF', 'TotalBsmtSF', '1stFlrSF', '2ndFlrSF', 'LowQualFinSF', 'GrLivArea', 'FullBath', 'HalfBath', 'BedroomAbvGr', 'KitchenAbvGr', 'TotRmsAbvGrd', 'Fireplaces', 'GarageCars', 'WoodDeckSF', 'OpenPorchSF', 'EnclosedPorch', 'ScreenPorch', 'PoolArea'] # Replace NaN with zero home_data[features] = home_data[features].fillna(int(0)) X = home_data[features] # - # Split into validation and training data train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1) # Model: Linear Regression linear_model = LinearRegression() linear_model.fit(train_X, train_y) # Make validation predictions and calculate mean absolute error val_predictions = linear_model.predict(val_X) val_mae = mean_absolute_error(val_predictions, val_y) print("Validation MAE: {:,.0f}".format(val_mae)) # + # Save model - Linear Regression import joblib # sklearn.externals.joblib was removed in newer scikit-learn # Save model joblib.dump(linear_model, "model/linear_model.pkl") # Load saved model
linear_model_load = joblib.load("model/linear_model.pkl") # + # In previous code cell linear_model_on_full_data = LinearRegression() linear_model_on_full_data.fit(X, y) # Then in last code cell test_data_path = 'data/test.csv' test_data = pd.read_csv(test_data_path) # Replace NaN with zero test_data[features] = test_data[features].fillna(int(0)) # create test_X which comes from test_data but includes only the columns you used for prediction test_X = test_data[features] test_preds = linear_model_on_full_data.predict(test_X) output = pd.DataFrame({'Id': test_data.Id, 'SalePrice': test_preds}) output.to_csv('fourth_submission.csv', index=False) output # -
Housing-Prices/fourth_submission-Machine-Learning-Competition-Practice-Iowa-Home-Prices.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Chapter 4. Linear Models # + import pandas as pd import seaborn as sns import torch import pyro import pyro.distributions as dist import pyro.ops.stats as stats from rethinking import MAP, coef, extract_samples, link, precis, sim, vcov # - # ### Code 4.1 pos = torch.empty(1000, 16).uniform_(-1, 1).sum(1) # ### Code 4.2 (1 + torch.empty(12).uniform_(0, 0.1)).prod() # ### Code 4.3 growth = (1 + torch.empty(10000, 12).uniform_(0, 0.1)).prod(1) sns.distplot(growth, hist=False) ax = sns.lineplot(growth, dist.Normal(growth.mean(), growth.std()).log_prob(growth).exp()) ax.lines[1].set_linestyle("--") # ### Code 4.4 big = (1 + torch.empty(10000, 12).uniform_(0, 0.5)).prod(1) small = (1 + torch.empty(10000, 12).uniform_(0, 0.01)).prod(1) # ### Code 4.5 log_big = (1 + torch.empty(10000, 12).uniform_(0, 0.5)).prod(1).log() # ### Code 4.6 w, n = 6., 9 p_grid = torch.linspace(start=0, end=1, steps=1000) posterior = (dist.Binomial(n, p_grid).log_prob(torch.tensor(w)).exp() * dist.Uniform(0, 1).log_prob(p_grid).exp()) posterior = posterior / posterior.sum() # ### Code 4.7 howell1 = pd.read_csv("../data/Howell1.csv", sep=";") d = howell1 # ### Code 4.8 d.info() d.head() # ### Code 4.9 d["height"].head() # ### Code 4.10 d2 = d[d["age"] >= 18] d2_height = torch.tensor(d2["height"], dtype=torch.float) # ### Code 4.11 x = torch.linspace(100, 250, 101) sns.lineplot(x, dist.Normal(178, 20).log_prob(x).exp()); # ### Code 4.12 x = torch.linspace(-10, 60, 101) sns.lineplot(x, dist.Uniform(0, 50, validate_args=False).log_prob(x).exp()); # ### Code 4.13 sample_mu = torch.empty(int(1e4)).normal_(178, 20) sample_sigma = torch.empty(int(1e4)).uniform_(0, 50) prior_h = dist.Normal(sample_mu, sample_sigma).sample() sns.distplot(prior_h); # ### Code 4.14 mu_list = 
torch.linspace(start=140, end=160, steps=200) sigma_list = torch.linspace(start=4, end=9, steps=200) post = {"mu": mu_list.expand(200, 200).reshape(-1), "sigma": sigma_list.expand(200, 200).t().reshape(-1)} post_LL = dist.Normal(post["mu"], post["sigma"]).log_prob(d2_height.unsqueeze(1)).sum(0) post_prod = (post_LL + dist.Normal(178, 20).log_prob(post["mu"]) + dist.Uniform(0, 50).log_prob(post["sigma"])) post_prob = (post_prod - max(post_prod)).exp() # ### Code 4.15 _, ax = sns.mpl.pyplot.subplots() ax.contour(post["mu"].reshape(200, 200), post["sigma"].reshape(200, 200), post_prob.reshape(200, 200)); # ### Code 4.16 _, ax = sns.mpl.pyplot.subplots() ax.imshow(post_prob.reshape(200, 200), origin="lower", extent=(140, 160, 4, 9), aspect="auto") ax.grid(False) # ### Code 4.17 sample_rows = torch.multinomial(input=post_prob, num_samples=int(1e4), replacement=True) sample_mu = post["mu"][sample_rows] sample_sigma = post["sigma"][sample_rows] # ### Code 4.18 ax = sns.scatterplot(sample_mu, sample_sigma, s=64, alpha=0.1, edgecolor="none") ax.set(xlabel="sample.mu", ylabel="sample.sigma"); # ### Code 4.19 sns.distplot(sample_mu) sns.mpl.pyplot.show() sns.distplot(sample_sigma); # ### Code 4.20 print(stats.hpdi(sample_mu, 0.89)) print(stats.hpdi(sample_sigma, 0.89)) # ### Code 4.21 d3 = stats.resample(d2_height, num_samples=20) # ### Code 4.22 mu_list = torch.linspace(start=150, end=170, steps=200) sigma_list = torch.linspace(start=4, end=20, steps=200) post2 = {"mu": mu_list.expand(200, 200).reshape(-1), "sigma": sigma_list.expand(200, 200).t().reshape(-1)} post2_LL = dist.Normal(post2["mu"], post2["sigma"]).log_prob(d3.unsqueeze(1)).sum(0) post2_prod = (post2_LL + dist.Normal(178, 20).log_prob(post2["mu"]) + dist.Uniform(0, 50).log_prob(post2["sigma"])) post2_prob = (post2_prod - max(post2_prod)).exp() sample2_rows = torch.multinomial(input=post2_prob, num_samples=int(1e4), replacement=True) sample2_mu = post2["mu"][sample2_rows] sample2_sigma = 
post2["sigma"][sample2_rows] ax = sns.scatterplot(sample2_mu, sample2_sigma, s=80, alpha=0.1, edgecolor="none") ax.set(xlabel="mu", ylabel="sigma"); # ### Code 4.23 sns.distplot(sample2_sigma, hist=False) ax = sns.lineplot(sample2_sigma, dist.Normal(sample2_sigma.mean(), sample2_sigma.std()) .log_prob(sample2_sigma).exp()) ax.lines[1].set_linestyle("--") # ### Code 4.24 howell1 = pd.read_csv("../data/Howell1.csv", sep=";") d = howell1 d2 = d[d["age"] >= 18] # ### Code 4.25 def flist(height): mu = pyro.sample("mu", dist.Normal(178, 20)) sigma = pyro.sample("sigma", dist.Uniform(0, 50)) with pyro.plate("plate"): pyro.sample("height", dist.Normal(mu, sigma), obs=height) # ### Code 4.26 d2_height = torch.tensor(d2["height"], dtype=torch.float) m4_1 = MAP(flist).run(d2_height) # ### Code 4.27 precis(m4_1) # ### Code 4.28 start = {"mu": d2_height.mean(), "sigma": d2_height.std()} # ### Code 4.29 # + def model(height): mu = pyro.sample("mu", dist.Normal(178, 0.1)) sigma = pyro.sample("sigma", dist.Uniform(0, 50)) with pyro.plate("plate"): pyro.sample("height", dist.Normal(mu, sigma), obs=height) m4_2 = MAP(model).run(d2_height) precis(m4_2) # - # ### Code 4.30 vcov(m4_1) # ### Code 4.31 print(vcov(m4_1).diag()) cov = vcov(m4_1) print(cov / cov.diag().ger(cov.diag()).sqrt()) # ### Code 4.32 post = extract_samples(m4_1) {latent: post[latent][:5] for latent in post} # ### Code 4.33 precis(post) # ### Code 4.34 post = dist.MultivariateNormal(torch.stack(list(coef(m4_1).values())), vcov(m4_1)).sample(torch.Size([int(1e4)])) # ### Code 4.35 # + def model(height): mu = pyro.sample("mu", dist.Normal(178, 20)) log_sigma = pyro.sample("log_sigma", dist.Normal(2, 10)) with pyro.plate("plate"): pyro.sample("height", dist.Normal(mu, log_sigma.exp()), obs=height) m4_1_logsigma = MAP(model).run(d2_height) # - # ### Code 4.36 post = extract_samples(m4_1_logsigma) sigma = post["log_sigma"].exp() # ### Code 4.37 sns.scatterplot("weight", "height", data=d2); # ### Code 4.38 # + # load data 
again, since it's a long way back howell1 = pd.read_csv("../data/Howell1.csv", sep=";") d = howell1 d2 = d[d["age"] >= 18] # fit model def model(weight, height): a = pyro.sample("a", dist.Normal(178, 100)) b = pyro.sample("b", dist.Normal(0, 10)) mu = a + b * weight sigma = pyro.sample("sigma", dist.Uniform(0, 50)) with pyro.plate("plate"): pyro.sample("height", dist.Normal(mu, sigma), obs=height) d2_weight = torch.tensor(d2["weight"], dtype=torch.float) d2_height = torch.tensor(d2["height"], dtype=torch.float) m4_3 = MAP(model).run(d2_weight, d2_height) # - # ### Code 4.39 # + def model(weight, height): a = pyro.sample("a", dist.Normal(178, 100)) b = pyro.sample("b", dist.Normal(0, 10)) sigma = pyro.sample("sigma", dist.Uniform(0, 50)) with pyro.plate("plate"): pyro.sample("height", dist.Normal(a + b * weight, sigma), obs=height) m4_3 = MAP(model).run(d2_weight, d2_height) # - # ### Code 4.40 precis(m4_3) # ### Code 4.41 precis(m4_3, corr=True) # ### Code 4.42 d2_weight_c = d2_weight - d2_weight.mean() # ### Code 4.43 m4_4 = MAP(model).run(d2_weight_c, d2_height) # ### Code 4.44 precis(m4_4, corr=True) # ### Code 4.45 sns.scatterplot("weight", "height", data=d2) x = torch.linspace(30, 65, 101) sns.lineplot(x, (coef(m4_3)["a"] + coef(m4_3)["b"] * x), color="k"); # ### Code 4.46 post = extract_samples(m4_3) # ### Code 4.47 {latent: post[latent][:5].detach() for latent in post} # ### Code 4.48 N = 10 dN = {"weight": d2_weight[:N], "height": d2_height[:N]} mN = MAP(model).run(**dN) # ### Code 4.49 # + # extract 20 samples from the posterior idx = mN._categorical.sample(torch.Size([20])) post = {latent: samples[idx] for latent, samples in extract_samples(mN).items()} # display raw data and sample size ax = sns.scatterplot("weight", "height", data=dN) ax.set(xlabel="weight", ylabel="height", title="N = {}".format(N)) # plot the lines, with transparency x = torch.linspace(30, 65, 101) for i in range(20): sns.lineplot(x, post["a"][i] + post["b"][i] * x, color="k", 
alpha=0.3) # - # ### Code 4.50 post = extract_samples(m4_3) mu_at_50 = post["a"] + post["b"] * 50 # ### Code 4.51 ax = sns.distplot(mu_at_50) ax.set(xlabel="mu|weight=50", ylabel="Density"); # ### Code 4.52 stats.hpdi(mu_at_50, prob=0.89) # ### Code 4.53 mu = link(m4_3) mu.shape, mu[:5, 0] # ### Code 4.54 # + # define sequence of weights to compute predictions for # these values will be on the horizontal axis weight_seq = torch.arange(start=25., end=71, step=1) # use link to compute mu # for each sample from posterior # and for each weight in weight_seq mu = link(m4_3, data={"weight": weight_seq}) mu.shape, mu[:5, 0] # - # ### Code 4.55 # + # use visible=False to hide raw data sns.scatterplot("weight", "height", data=d2, visible=False) # loop over samples and plot each mu value for i in range(100): sns.scatterplot(weight_seq, mu[i], color="royalblue", alpha=0.1) # - # ### Code 4.56 # summarize the distribution of mu mu_mean = mu.mean(0) mu_HPDI = stats.hpdi(mu, prob=0.89, dim=0) # ### Code 4.57 # + # plot raw data # fading out points to make line and interval more visible sns.scatterplot("weight", "height", data=d2, alpha=0.5) # plot the MAP line, aka the mean mu for each weight ax = sns.lineplot(weight_seq, mu_mean, color="k") # plot a shaded region for 89% HPDI ax.fill_between(weight_seq, mu_HPDI[0], mu_HPDI[1], color="k", alpha=0.2); # - # ### Code 4.58 post = extract_samples(m4_3) mu_link = lambda weight: post["a"].unsqueeze(1) + post["b"].unsqueeze(1) * weight weight_seq = torch.arange(start=25., end=71, step=1) mu = mu_link(weight_seq) mu_mean = mu.mean(0) mu_HPDI = stats.hpdi(mu, prob=0.89, dim=0) # ### Code 4.59 sim_height = sim(m4_3, data={"weight": weight_seq}) sim_height.shape, sim_height[:5, 0] # ### Code 4.60 height_PI = stats.pi(sim_height, prob=0.89, dim=0) # ### Code 4.61 # + # plot raw data sns.scatterplot("weight", "height", data=d2, alpha=0.5) # draw MAP line ax = sns.lineplot(weight_seq, mu_mean, color="k") # draw HPDI region for line 
ax.fill_between(weight_seq, mu_HPDI[0], mu_HPDI[1], color="k", alpha=0.15) # draw PI region for simulated heights ax.fill_between(weight_seq, height_PI[0], height_PI[1], color="k", alpha=0.15); # - # ### Code 4.62 sim_height = sim(m4_3, data={"weight": weight_seq}, n=int(1e4)) height_PI = stats.pi(sim_height, prob=0.89, dim=0) # ### Code 4.63 # + def sim_fn(weight): mean = post["a"].unsqueeze(1) + post["b"].unsqueeze(1) * weight sd = post["sigma"].unsqueeze(1) return dist.Normal(loc=mean, scale=sd).sample() post = extract_samples(m4_3) weight_seq = torch.arange(start=25., end=71, step=1) sim_height = sim_fn(weight_seq) height_PI = stats.pi(sim_height, prob=0.89, dim=0) # - # ### Code 4.64 howell1 = pd.read_csv("../data/Howell1.csv", sep=";") d = howell1 d.info() d.head() # ### Code 4.65 weight = torch.tensor(d["weight"], dtype=torch.float) weight_s = (weight - weight.mean()) / weight.std() # ### Code 4.66 # + def model(weight, weight2, height): a = pyro.sample("a", dist.Normal(178, 100)) b1 = pyro.sample("b1", dist.Normal(0, 10)) b2 = pyro.sample("b2", dist.Normal(0, 10)) mu = a + b1 * weight + b2 * weight2 sigma = pyro.sample("sigma", dist.Uniform(0, 50)) with pyro.plate("plate"): pyro.sample("height", dist.Normal(mu, sigma), obs=height) weight_s2 = weight_s ** 2 height = torch.tensor(d["height"], dtype=torch.float) m4_5 = MAP(model).run(weight_s, weight_s2, height) # - # ### Code 4.67 precis(m4_5) # ### Code 4.68 weight_seq = torch.linspace(start=-2.2, end=2, steps=30) pred_data = {"weight": weight_seq, "weight2": weight_seq ** 2} mu = link(m4_5, data=pred_data) mu_mean = mu.mean(0) mu_PI = stats.pi(mu, prob=0.89, dim=0) sim_height = sim(m4_5, data=pred_data) height_PI = stats.pi(sim_height, prob=0.89, dim=0) # ### Code 4.69 ax = sns.scatterplot(weight_s, height, alpha=0.5) ax.set(xlabel="weight.s", ylabel="height") sns.lineplot(weight_seq, mu_mean, color="k") ax.fill_between(weight_seq, mu_PI[0], mu_PI[1], color="k", alpha=0.2) ax.fill_between(weight_seq, 
height_PI[0], height_PI[1], color="k", alpha=0.2); # ### Code 4.70 # + def model(weight, weight2, weight3, height): a = pyro.sample("a", dist.Normal(178, 100)) b1 = pyro.sample("b1", dist.Normal(0, 10)) b2 = pyro.sample("b2", dist.Normal(0, 10)) b3 = pyro.sample("b3", dist.Normal(0, 10)) mu = a + b1 * weight + b2 * weight2 + b3 * weight3 sigma = pyro.sample("sigma", dist.Uniform(0, 50)) with pyro.plate("plate"): pyro.sample("height", dist.Normal(mu, sigma), obs=height) weight_s3 = weight_s ** 3 m4_6 = MAP(model).run(weight_s, weight_s2, weight_s3, height) # - # ### Code 4.71 fig, ax = sns.mpl.pyplot.subplots() sns.scatterplot(weight_s, height, alpha=0.5) ax.set(xlabel="weight", ylabel="height", xticks=[]); # ### Code 4.72 at = torch.tensor([-2, -1, 0, 1, 2]) labels = at * weight.std() + weight.mean() ax.set_xticks(at) ax.set_xticklabels([round(label.item(), 1) for label in labels]) fig # ### Code 4.73 sns.scatterplot("weight", "height", data=howell1, alpha=0.4);
notebooks/04_linear_models.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Practice the Following<br>실습 예 # # # ### Opening a `bash` window on Linux GUI # # # 1. Start a Linux machine<br>리눅스 시작 # 1. Log in using your id<br>필요시 id로 log in # 1. Press the <kbd>Ctrl</kbd>+<kbd>Alt</kbd>+<kbd>t</kbd> key to open a `bash` terminal<br><kbd>Ctrl</kbd>+<kbd>Alt</kbd>+<kbd>t</kbd> 키를 눌러 `bash` 창을 엶 # # # ### Opening a `git-bash` window on Windows # # # 1. Open a window of the folder of interest<br>작업 대상 폴더 창을 엶 # 1. Open the right mouse click menu on a space instead of selecting a file<br>파일을 선택하지 말고 여백 위에 오른쪽 마우스 메뉴를 엶 # 1. Select the **Git Bash Here** command<br>**Git Bash Here** 명령 선택 # # # ### Opening a `bash` pane in a `repl` # # # 1. Click on the edit pane<br>편집 칸 선택 # 1. Press <kbd>F1</kbd><br><kbd>F1</kbd> 키를 누름 # 1. Select `Open Shell`<br>`Open Shell` 선택 # # # ### After opening a `bash` window<br>`bash` 창을 연 후 # # # 1. Enter `pwd` to check the current working directory<br>`pwd`로 현재 작업 폴더 확인 # # # + # To test pwd command, select this cell and press Shift + Enter # pwd 명령을 시험하려면 이 셀을 선택하고 Shift + Enter 키 입력 # !pwd # - # 2. Try `ls` to check the content of the current folder<br>`ls`로 현재 폴더 내용 확인 # # # + # !ls # - # 3. `mkdir temp` would create a folder whose name is `temp`<br>`mkdir temp`로 하위 폴더 생성 (이름은 `temp`) # 1. To move into the new folder, enter `cd temp`<br>`cd temp`로 하위 폴더 `temp`로 이동 # 1. Now let's try `which python` to see the location of the `python` command<br>`which python`으로 `python` 명령의 전체 경로 확인 # # # + # !which python # - # ## Shell scripts<br>셸 스크립트 # # # * A text file containing shell commands is called a shell script, which we can run on a shell such as `bash`.<br> # 셸 명령을 모은 파일을 셸 스크립트라고 부르고 실행시켜볼 수 있다.
# # # ### `vi` editor # # # * About `vi` editor commands, see below<br>`vi` 편집기 명령에 대해서는 아래 참조 # * `cd <to an appropriate folder>`<br>`cd <적당한 폴더>` # * `vi name.sh` # * <kbd>i</kbd> # * Enter the following code [[ref](https://www.macs.hw.ac.uk/~hwloidl/Courses/LinuxIntro/x984.html)]<br>아래 코드를 입력 [[ref](https://www.macs.hw.ac.uk/~hwloidl/Courses/LinuxIntro/x984.html)] # # # ``` bash # # #!/bin/bash # # # example of using arguments to a script # # 스크립트에 명령행 매개변수를 사용하는 예 # # echo "My first name is $1" # # echo "My surname is $2" # # echo "Total number of arguments is $#" # # ``` # # # * <kbd>Esc</kbd><br>`:wq`<kbd>Enter</kbd> # # # * The cell below can write the file instead of an editor.<br>아래 cell은 편집기 대신 파일을 작성할 수 있음. # # # + # %%writefile name.sh # #!/bin/bash # example of using arguments to a script # 스크립트에 명령행 매개변수를 사용하는 예 # echo "My first name is $1" # echo "My surname is $2" # echo "Total number of arguments is $#" # - # * `source name.sh firstname surname`<br>Or / 또는<br>`. name.sh firstname surname` # # # + # source name.sh abc xyz # The following code would give the same result as the command above in a bash shell # 다음 코드의 결과는 bash 셸 안에서 위 명령을 실행한 경우와 같음 # Import a module to expand features of python # python 기능 확장을 위해 모듈을 불러들임 import subprocess import os # see if the file exists # 파일이 있는지 확인 assert os.path.exists('name.sh') # run the command # 명령 실행 print(subprocess.check_output(['sh', 'name.sh', 'abc', 'xyz'], encoding='utf-8')) # - # ## Permission<br>권한 # # # * Unix or Linux files have **permission** attributes determining who can do what with them.<br>유닉스 또는 리눅스 파일에는 **권한**에 따라 누가 어떤 조치를 취할 수 있는지 결정됨. # # # * We can check each file's permission status using `ls -l`.<br>각 파일의 권한 상태는 `ls -l`로 확인할 수 있음 # # # + # !ls -l # - # * For now, let's take a look at the following example output from `ls -l`.<br>이하는 아래의 `ls -l` 출력 예를 참고.
# # # ``` # -rw-r--r-- 1 author beachgoer 10815 Oct 31 06:17 00.ipynb # -rw-r--r-- 1 author beachgoer 7137 Sep 30 06:27 01.ipynb # -rw-r--r-- 1 author beachgoer 15170 Sep 30 06:27 02.ipynb # -rw-r--r-- 1 author beachgoer 1511 Oct 9 16:10 LICENSE # drwxr-xr-x 3 author beachgoer 4096 Oct 29 10:09 tests # # ``` # # # * Here, `author` and `beachgoer` are the owner and the owner's group.<br>여기서 `author` 와 `beachgoer`는 소유자와 소유자의 소속 그룹임. # # # * Let's look at the left 10 characters.<br>왼쪽 첫 10글자를 살펴보기 바람. # # # * The first character is `d` if it is a directory; `-` means it is a file.<br>첫 글자가 `d` 이면 폴더이고 `-`이면 파일임. # # # * The following three characters are the file owner's permissions.<br>그 다음 세 자는 파일 소유자의 권한임. # * `r` for read, `w` for write, and `x` for execution.<br>`r`은 읽기, `w`는 쓰기, `x`는 실행을 뜻함. # * If these characters are `rw-`, the owner can read or modify but cannot execute the file.<br>`rw-` 라면, 소유자는 읽고 고칠 수 있으나 실행할 수는 없음. # # # * The next three characters are permissions for the members of the file owner's group.<br>그 다음 세 자는 소유자의 그룹의 일원의 권한임. # * `r`, `w`, and `x` mean the same as before.<br>`r`, `w`, `x`의 의미는 위와 같음. # * If these characters are `r--`, the group members can read but cannot modify or execute the file.<br>`r--`라면 읽기만 가능함. # # # * The last three characters are the permissions for other users.<br>다음 세 자는 다른 사용자들의 권한임. # # # * For directories, the permissions determine what users can do with the files in the folder.<br>폴더의 권한은, 어떤 사용자가 폴더 안 파일에 무엇을 할 수 있는가에 관한 것임. # # # ### How to change permissions<br>권한을 변경하는 방법 # # # * The following exercises may not run as expected on Windows.<br> # 다음 예제는 Windows 에서는 다소 예상과 다르게 작동할 수 있음. # # # * We can use `chmod` to change permissions.<br>`chmod`로 권한을 변경할 수 있음. # # # * The following command would allow the owner to execute the `name.sh` file.<br>아래 명령으로 소유자가 `name.sh` 파일을 실행시킬 수 있게 됨.
# # ```sh # chmod u+x name.sh # ``` # # # + # !ls -l name.sh # + # !chmod u+x name.sh # + # !ls -l name.sh # - # * The following command would allow the group members to modify the `name.sh` file.<br>아래 명령으로 그룹 멤버가 파일을 수정할 수 있게 됨. # # # ```sh # chmod g+w name.sh # ``` # # # + # !chmod g+w name.sh # + # !ls -l name.sh # - # * The following command would make others unable to read the `name.sh` file.<br>아래 명령으로 다른 사용자들은 파일을 읽을 수 없게 됨. # # # ```sh # chmod o-r name.sh # ``` # # # + # !chmod o-r name.sh # + # !ls -l name.sh # - # ### Clean up<br>정리 # # # + # Remove practice files # 실습 파일 제거 os.remove('name.sh') # see if the file does not exist # 파일이 없는지 확인 assert not os.path.exists('name.sh') # - # ### Final Bell # # # + import os os.system('printf \a'); # -
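The 10-character permission string discussed above can also be decoded programmatically; a sketch using Python's standard-library `stat` module on a throwaway file (the file and mode are illustrative, not part of the notebook):

```python
import os
import stat
import tempfile

# Create a throwaway file and give it owner-execute, like `chmod u+x`
path = tempfile.NamedTemporaryFile(delete=False).name
os.chmod(path, 0o744)  # rwxr--r--

mode = os.stat(path).st_mode
perm = stat.filemode(mode)  # the same string `ls -l` shows

assert perm == '-rwxr--r--'
assert perm[0] == '-'        # regular file, not a directory
assert perm[1:4] == 'rwx'    # owner may read, write, execute
assert perm[4:7] == 'r--'    # group may only read
assert perm[7:10] == 'r--'   # others may only read

os.remove(path)
```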
01.bash-practice.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline # + import numpy as np import matplotlib.pyplot as pl # Test data n = 50 Xtest = np.linspace(-5, 5, n).reshape(-1,1) # Define the kernel function def kernel(a, b, param): sqdist = np.sum(a**2,1).reshape(-1,1) + np.sum(b**2,1) - 2*np.dot(a, b.T) return np.exp(-.5 * (1/param) * sqdist) param = 0.1 K_ss = kernel(Xtest, Xtest, param) np.random.seed(42) # Get cholesky decomposition (square root) of the # covariance matrix L = np.linalg.cholesky(K_ss)# + 1e-15*np.eye(n)) print(L) # Sample 3 sets of standard normals for our test points, # multiply them by the square root of the covariance matrix normal_distributed_samples=np.random.normal(size=(n,3)) f_prior = np.dot(L, normal_distributed_samples) # Now let's plot the 3 sampled functions. pl.plot(Xtest, f_prior) pl.axis([-5, 5, -3, 3]) pl.title('Three samples from the GP prior') pl.show() np.random.seed(42) for x in range(3): print(normal_distributed_samples[0][x]) np.random.seed(42) pl.plot(Xtest, np.random.normal(size=(n,3))) pl.axis([-5, 5, -3, 3]) pl.show() # + # Noiseless training data Xtrain = np.array([0, 1]).reshape(2,1) ytrain = np.array([2, 3]).reshape(2,1) # Apply the kernel function to our training points K = kernel(Xtrain, Xtrain, param) L = np.linalg.cholesky(K) #+ 0.00005*np.eye(len(Xtrain))) # Compute the mean at our test points. K_s = kernel(Xtrain, Xtest, param) Lk = np.linalg.solve(L, K_s) mu = np.dot(Lk.T, np.linalg.solve(L, ytrain)).reshape((n,)) # Compute the standard deviation so we can plot it s2 = np.diag(K_ss) - np.sum(Lk**2, axis=0) stdv = np.sqrt(s2) # Draw samples from the posterior at our test points. 
L = np.linalg.cholesky(K_ss - np.dot(Lk.T, Lk)) f_post = mu.reshape(-1,1) + np.dot(L, np.random.normal(size=(n,3))) pl.plot(Xtrain, ytrain, 'bs', ms=8) pl.plot(Xtest, f_post) pl.gca().fill_between(Xtest.flat, mu-2*stdv, mu+2*stdv, color="#dddddd") pl.plot(Xtest, mu, 'r--', lw=2) pl.axis([-5, 5, -3, 3]) pl.title('Three samples from the GP posterior') pl.show() # -
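A few properties the squared-exponential kernel above should satisfy can be checked directly; a small numpy sketch (same `kernel` definition as the notebook, toy inputs). The jitter term mirrors the commented-out `1e-15*np.eye(n)` in the notebook, which exists for exactly this numerical reason:

```python
import numpy as np

def kernel(a, b, param):
    # Squared-exponential (RBF) kernel, as in the notebook
    sqdist = np.sum(a**2, 1).reshape(-1, 1) + np.sum(b**2, 1) - 2 * np.dot(a, b.T)
    return np.exp(-.5 * (1 / param) * sqdist)

X = np.linspace(-5, 5, 20).reshape(-1, 1)
K = kernel(X, X, 0.1)

# k(x, x) = 1: the diagonal is all ones
assert np.allclose(np.diag(K), 1.0)

# The kernel matrix is symmetric
assert np.allclose(K, K.T)

# With a small jitter the matrix is positive definite, so Cholesky succeeds
jittered = K + 1e-10 * np.eye(20)
L = np.linalg.cholesky(jittered)
assert np.allclose(L @ L.T, jittered)
```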
notebooks/active_learning/Understanding-GP.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:odc] # language: python # name: conda-env-odc-py # --- # ## KMeans Clustering - CB4_64_16D_STK_1 # # This document presents an example of KMeans clustering in the CBERS4 collection V1 (CB4_64_16D_STK_1) of the BDC. # # > This simple example aims to present how to cluster data from the BDC stored inside the ODC. To know all the possible products, use [BDC-STAC](http://brazildatacube.dpi.inpe.br/stac/). import datacube import numpy as np import matplotlib.pyplot as plt dc = datacube.Datacube(app='datacube') PRODUCT_NAME = "CB4_64_16D_STK_1" # **Load CB4_64_16D_STK_v1 product** # Initially, an entire scene will be loaded for a specific range of dates cb4_64_16d_ftile = dc.load(PRODUCT_NAME, measurements = ['red', 'green', 'blue', 'nir'], time = ("2019-12-19", "2019-12-31"), resolution = (64, -64), limit = 1) cb4_64_16d_ftile # The example will use only a portion of the data that was loaded. If necessary, in your analysis you can use the whole scene that was loaded. cb4_64_16d_stile = cb4_64_16d_ftile.isel(x = slice(0, 1500), y = slice(0, 1500)) cb4_64_16d_stile # Viewing the selected region from utils.data_cube_utilities.dc_rgb import rgb rgb(cb4_64_16d_stile, figsize = (12, 12), x_coord = 'x', y_coord = 'y') # ## Clustering with KMeans # # In this section, the clustering using KMeans is performed from sklearn.cluster import KMeans from utils.data_cube_utilities.dc_clustering import clustering_pre_processing # Below is the definition of the bands and the preparation of the data for clustering bands = ['red', 'green', 'nir'] # + cb4_64_16d_stilec = cb4_64_16d_stile.copy() cb4_64_16d_stilec_rgb = cb4_64_16d_stilec[bands] cb4_64_16d_stilec_rgb = cb4_64_16d_stilec_rgb.sel(time = '2019-12-25') # - # Clustering!
features = clustering_pre_processing(cb4_64_16d_stilec_rgb, bands) kmodel = KMeans(3).fit(features) # Setting the output to display # + shape = cb4_64_16d_stilec_rgb[bands[0]].values.shape classification = np.full(shape, -1) classification = kmodel.labels_ # - # Viewing the result res = classification.reshape((1500, 1500)) plt.figure(figsize = (10, 10)) plt.imshow(res)
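The reshape step above (flat cluster labels back into a 2-D image) can be illustrated without the datacube or sklearn; a self-contained numpy sketch with a tiny hand-rolled k-means (Lloyd's algorithm) on synthetic two-band pixels — the band values and image size are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 10 x 10 "image" with two bands: left half dark, right half bright
h = w = 10
img = np.zeros((h, w, 2))
img[:, :w // 2] = 0.1
img[:, w // 2:] = 0.9
img += rng.normal(scale=0.02, size=img.shape)

# Flatten pixels into a (n_pixels, n_bands) feature matrix, analogous to
# what clustering_pre_processing does for the selected bands
features = img.reshape(-1, 2)

# Minimal Lloyd's k-means, k = 2, initialized on two known-distinct pixels
centroids = features[[0, h * w - 1]].copy()
for _ in range(10):
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centroids = np.array([features[labels == k].mean(axis=0) for k in range(2)])

# Reshape the flat labels back to the image grid for display, as in the notebook
res = labels.reshape(h, w)

# Each image half lands entirely in one cluster
assert (res[:, :w // 2] == res[0, 0]).all()
assert (res[:, w // 2:] == res[0, -1]).all()
```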
examples/clustering/KMeans_CB4_64_16D_STK_1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Special String Again # # <br> # # ![image](https://user-images.githubusercontent.com/50367487/82546450-0f744f80-9b93-11ea-9148-6be626c346fa.png) # + # #!/bin/python3 import math import os import random import re import sys # Complete the substrCount function below. def substrCount(n, s): res = n spcCnt = 0 currCnt = 0 prevCnt = 0 prevPrevCnt = 0 for i in range(1, n): prev = s[i - 1] curr = s[i] if prev == curr: currCnt += 1 res += currCnt if spcCnt > 0: spcCnt -= 1 res += 1 else: currCnt = 0 if i > 1 and s[i - 2] == curr: spcCnt = prevPrevCnt res += 1 else: spcCnt = 0 if i > 1: prevPrevCnt = prevCnt prevCnt = currCnt return res if __name__ == '__main__': fptr = open(os.environ['OUTPUT_PATH'], 'w') n = int(input()) s = input() result = substrCount(n, s) fptr.write(str(result) + '\n') fptr.close()
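The counter juggling in `substrCount` above is easy to get wrong; an equivalent and arguably clearer formulation first run-length-encodes the string, counts the all-same-character substrings per run, and then handles the single-different-middle-character case. This is a rewrite sketch for comparison, not the notebook's submitted solution:

```python
from itertools import groupby

def substr_count_rle(s):
    # Run-length encode: "aaabaa" -> [('a', 3), ('b', 1), ('a', 2)]
    runs = [(ch, len(list(grp))) for ch, grp in groupby(s)]

    # Every substring inside a run of length L is special: L*(L+1)/2 of them
    total = sum(length * (length + 1) // 2 for _, length in runs)

    # A length-1 run flanked by two runs of the same character contributes
    # min(left, right) substrings of the form x...x y x...x
    for i in range(1, len(runs) - 1):
        if runs[i][1] == 1 and runs[i - 1][0] == runs[i + 1][0]:
            total += min(runs[i - 1][1], runs[i + 1][1])
    return total

# Sample cases from the problem statement
assert substr_count_rle("asasd") == 7
assert substr_count_rle("abcbaba") == 10
assert substr_count_rle("aaaa") == 10
assert substr_count_rle("mnonopoo") == 12
```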
Interview Preparation Kit/5. String Manipulation/Special String Again.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 18659, "status": "ok", "timestamp": 1610382574744, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="e4abR9zSaWNk" outputId="0c32d57f-a411-4648-8144-7db8ae862454" # Mount Google Drive from google.colab import drive # import drive from google colab ROOT = "/content/drive" # default location for the drive print(ROOT) # print content of ROOT (Optional) drive.mount(ROOT) # we mount the google drive at /content/drive # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 1573, "status": "ok", "timestamp": 1610382829592, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="rYQ5KUgrSClH" outputId="a4f91034-cf58-4cbc-bd3b-9a68557f9591" # %cd "/content/drive/My Drive/Projects/quantum_image_classifier/PennyLane/Data Reuploading Classifier" # + executionInfo={"elapsed": 5512, "status": "ok", "timestamp": 1610357980284, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="gk5AKGKcYGOo" # !pip install pennylane from IPython.display import clear_output clear_output() # + id="GigSJusGbx1b" import os def restart_runtime(): os.kill(os.getpid(), 9) restart_runtime() # + executionInfo={"elapsed": 855, "status": "ok", "timestamp": 1610357982999, "user": {"displayName": "<NAME>", "photoUrl": 
"https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="HoLmJLkIX810" # # %matplotlib inline import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable import numpy as np # + [markdown] id="vZFNOwFXoY8N" # # Loading Raw Data # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 3065, "status": "ok", "timestamp": 1610357986720, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="IvdFsGCVof9g" outputId="57b0c866-93c0-45e7-a833-90119b85cf6c" import tensorflow as tf (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data() x_train = x_train[:, 0:27, 0:27] x_test = x_test[:, 0:27, 0:27] # + executionInfo={"elapsed": 1254, "status": "ok", "timestamp": 1610357989363, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="c6zvGFvIoxAN" x_train_flatten = x_train.reshape(x_train.shape[0], x_train.shape[1]*x_train.shape[2])/255.0 x_test_flatten = x_test.reshape(x_test.shape[0], x_test.shape[1]*x_test.shape[2])/255.0 # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 996, "status": "ok", "timestamp": 1610357989364, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="Rmj1dzaso00h" outputId="e6c46dd3-4962-4412-8c06-7ccc3a10679a" print(x_train_flatten.shape, y_train.shape) print(x_test_flatten.shape, y_test.shape) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 932, "status": "ok", "timestamp": 1610357989714, "user": {"displayName": "<NAME>", "photoUrl": 
"https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="d10VoIC6o5_I" outputId="0732b267-d0a4-47cd-c1b4-28acd63e5bf8" x_train_0 = x_train_flatten[y_train == 0] x_train_1 = x_train_flatten[y_train == 1] x_train_2 = x_train_flatten[y_train == 2] x_train_3 = x_train_flatten[y_train == 3] x_train_4 = x_train_flatten[y_train == 4] x_train_5 = x_train_flatten[y_train == 5] x_train_6 = x_train_flatten[y_train == 6] x_train_7 = x_train_flatten[y_train == 7] x_train_8 = x_train_flatten[y_train == 8] x_train_9 = x_train_flatten[y_train == 9] x_train_list = [x_train_0, x_train_1, x_train_2, x_train_3, x_train_4, x_train_5, x_train_6, x_train_7, x_train_8, x_train_9] print(x_train_0.shape) print(x_train_1.shape) print(x_train_2.shape) print(x_train_3.shape) print(x_train_4.shape) print(x_train_5.shape) print(x_train_6.shape) print(x_train_7.shape) print(x_train_8.shape) print(x_train_9.shape) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 932, "status": "ok", "timestamp": 1610357990859, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="snFw4LqepFOl" outputId="6fa81d41-9174-4053-b2db-c4933edfcab8" x_test_0 = x_test_flatten[y_test == 0] x_test_1 = x_test_flatten[y_test == 1] x_test_2 = x_test_flatten[y_test == 2] x_test_3 = x_test_flatten[y_test == 3] x_test_4 = x_test_flatten[y_test == 4] x_test_5 = x_test_flatten[y_test == 5] x_test_6 = x_test_flatten[y_test == 6] x_test_7 = x_test_flatten[y_test == 7] x_test_8 = x_test_flatten[y_test == 8] x_test_9 = x_test_flatten[y_test == 9] x_test_list = [x_test_0, x_test_1, x_test_2, x_test_3, x_test_4, x_test_5, x_test_6, x_test_7, x_test_8, x_test_9] print(x_test_0.shape) print(x_test_1.shape) print(x_test_2.shape) print(x_test_3.shape) print(x_test_4.shape) print(x_test_5.shape) 
print(x_test_6.shape) print(x_test_7.shape) print(x_test_8.shape) print(x_test_9.shape) # + [markdown] id="SAxUS6Lhp95g" # # Selecting the dataset # # Output: X_train, Y_train, X_test, Y_test # + sample = 207 class_selected = 0 X_train = x_train_list[class_selected][:sample, :] Y_train = np.zeros((sample,)) for i in range(10): if i != class_selected: X_train = np.concatenate((X_train, x_train_list[i][:int(sample/9), :]), axis=0) Y_train = np.concatenate((Y_train, np.zeros((int(sample/9))) + 1), axis=0) X_test = x_test_list[class_selected][:sample, :] Y_test = np.zeros((sample,)) for i in range(10): if i != class_selected: X_test = np.concatenate((X_test, x_test_list[i][:int(sample/9), :]), axis=0) Y_test = np.concatenate((Y_test, np.zeros((int(sample/9))) + 1), axis=0) Y_train = to_categorical(Y_train) Y_test = to_categorical(Y_test) print(X_train.shape, Y_train.shape) print(X_test.shape, Y_test.shape) # + [markdown] id="LrebzTO1z-or" # # Dataset Preprocessing # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 788, "status": "ok", "timestamp": 1610358000137, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="tpIJBmVAz-os" outputId="066f3c86-cdcf-48f7-cfe7-b4e203f6a139" X_train = X_train.reshape(X_train.shape[0], 27, 27) X_test = X_test.reshape(X_test.shape[0], 27, 27) X_train.shape, X_test.shape # + [markdown] id="ockEle2Ez-os" # # Quantum # + executionInfo={"elapsed": 4679, "status": "ok", "timestamp": 1610358007092, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="jP9aoKRGz-os" import pennylane as qml from pennylane import numpy as np from pennylane.optimize import AdamOptimizer, GradientDescentOptimizer qml.enable_tape() from tensorflow.keras.utils import 
to_categorical # Set a random seed np.random.seed(2020) # + executionInfo={"elapsed": 826, "status": "ok", "timestamp": 1610358054620, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="BFo9kVhAz-ot" # Define output labels as quantum state vectors def density_matrix(state): """Calculates the density matrix representation of a state. Args: state (array[complex]): array representing a quantum state vector Returns: dm: (array[complex]): array representing the density matrix """ return state * np.conj(state).T label_0 = [[1], [0]] label_1 = [[0], [1]] state_labels = [label_0, label_1] # + executionInfo={"elapsed": 975, "status": "ok", "timestamp": 1610358057231, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="fYmu1Jchz-ot" n_qubits = 2 # number of class dev_fc = qml.device("default.qubit", wires=n_qubits) @qml.qnode(dev_fc) def q_fc(params, inputs): """A variational quantum circuit representing the DRC. 
Args: params (array[float]): array of parameters inputs = [x, y] x (array[float]): 1-d input vector y (array[float]): single output state density matrix Returns: float: fidelity between output state and input """ # layer iteration for l in range(len(params[0])): # qubit iteration for q in range(n_qubits): # gate iteration for g in range(int(len(inputs)/3)): qml.Rot(*(params[0][l][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params[1][l][3*g:3*(g+1)]), wires=q) return [qml.expval(qml.Hermitian(density_matrix(state_labels[i]), wires=[i])) for i in range(n_qubits)] # + executionInfo={"elapsed": 859, "status": "ok", "timestamp": 1610358059994, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="UNYRb5MCz-ot" dev_conv = qml.device("default.qubit", wires=3) @qml.qnode(dev_conv) def q_conv(conv_params, inputs): """A variational quantum circuit representing the Universal classifier + Conv. 
Args: params (array[float]): array of parameters x (array[float]): 2-d input vector y (array[float]): single output state density matrix Returns: float: fidelity between output state and input """ # layer iteration for l in range(len(conv_params[0])): # qubit iteration for q in range(3): qml.Rot(*(conv_params[0][l][3*q:3*(q+1)] * inputs[q, 0:3] + conv_params[1][l][3*q:3*(q+1)]), wires=q) return [qml.expval(qml.PauliZ(j)) for j in range(3)] # + executionInfo={"elapsed": 811, "status": "ok", "timestamp": 1610358062091, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="zzVunJV1z-ou" from keras import backend as K # Addition Custom Layer def add_matrix(x): return K.sum(x, axis=1, keepdims=True) addition_layer = tf.keras.layers.Lambda(add_matrix, output_shape=(1,)) # Alpha Custom Layer class class_weights(tf.keras.layers.Layer): def __init__(self): super(class_weights, self).__init__() w_init = tf.random_normal_initializer() self.w = tf.Variable( initial_value=w_init(shape=(1, 2), dtype="float32"), trainable=True, ) def call(self, inputs): return (inputs * self.w) # + executionInfo={"elapsed": 2804, "status": "ok", "timestamp": 1610358067284, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="6IozHMJhz-ou" # Input image, size = 27 x 27 X = tf.keras.Input(shape=(27,27), name='Input_Layer') # Specs for Conv c_filter = 3 c_strides = 2 # First Quantum Conv Layer, trainable params = 18, output size = 13 x 13 num_conv_layer_1 = 1 q_conv_layer_1 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_1, 9)}, output_dim=(3), name='Quantum_Conv_Layer_1') size_1 = int(1+(X.shape[1]-c_filter)/c_strides) q_conv_layer_1_list = [] # height iteration for i in range(size_1): # width iteration for j in 
range(size_1): temp = q_conv_layer_1(X[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1]) temp = addition_layer(temp) q_conv_layer_1_list += [temp] concat_layer_1 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_1_list) reshape_layer_1 = tf.keras.layers.Reshape((size_1, size_1))(concat_layer_1) # Second Quantum Conv Layer, trainable params = 18, output size = 6 x 6 num_conv_layer_2 = 1 q_conv_layer_2 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_2, 9)}, output_dim=(3), name='Quantum_Conv_Layer_2') size_2 = int(1+(reshape_layer_1.shape[1]-c_filter)/c_strides) q_conv_layer_2_list = [] # height iteration for i in range(size_2): # width iteration for j in range(size_2): temp = q_conv_layer_2(reshape_layer_1[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1]) temp = addition_layer(temp) q_conv_layer_2_list += [temp] concat_layer_2 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_2_list) reshape_layer_2 = tf.keras.layers.Reshape((size_2, size_2, 1))(concat_layer_2) # Max Pooling Layer, output size = 9 max_pool_layer = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=None, name='Max_Pool_Layer')(reshape_layer_2) reshape_layer_3 = tf.keras.layers.Reshape((9,))(max_pool_layer) # Quantum FC Layer, trainable params = 18+2, output size = 2 num_fc_layer = 1 q_fc_layer = qml.qnn.KerasLayer(q_fc, {"params": (2, num_fc_layer, 9)}, output_dim=2, name='Quantum_FC_Layer')(reshape_layer_3) # Alpha Layer, trainable params = 2 class_weights_layer = class_weights()(q_fc_layer) model = tf.keras.Model(inputs=X, outputs=class_weights_layer) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 39184, "status": "ok", "timestamp": 1610358109156, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="-kazExRcz-ov" outputId="11a34545-b4dd-425d-d246-8d0d260c9da8" model(X_train[0:32, :, :]) # + executionInfo={"elapsed": 1168, "status": "ok", 
"timestamp": 1610358110335, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="wZ2e6t93z-ow" opt = tf.keras.optimizers.Adam(learning_rate=0.1) model.compile(opt, loss="mse", metrics=["accuracy"]) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 23954180, "status": "ok", "timestamp": 1610382086131, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="GKW8hv-9z-ow" outputId="a004b2f4-c3e3-4aa1-cef5-0d28d21a6d57" H = model.fit(X_train, Y_train, epochs=20, batch_size=32, validation_data=(X_test, Y_test), verbose=1, initial_epoch=0) # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 890, "status": "ok", "timestamp": 1610382856489, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="tgd4aZ2x0zS8" outputId="5ef96e02-a18b-4713-e5c7-5d84745b691e" model.get_weights() # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 943, "status": "ok", "timestamp": 1610382863115, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggpw7xw-lyk6u6l92QjpI7MlI7qjJuuciCpwrUd=s64", "userId": "03770692095188133952"}, "user_tz": -420} id="RLAlWbsJz-ow" outputId="522adec4-41f7-447d-abfc-82c84c7c2041" # serialize model to JSON ''' model_json = model.to_json() with open("./model_quantum-conv_quantum-fc_binary.json", "w") as json_file: json_file.write(model_json) ''' # serialize weights to HDF5 model.save_weights("./model_quantum-conv_quantum-fc_binary_2.h5") print("Saved model to disk") # + id="AhdMvIOez-ow" outputId="c2ec4c03-c4c5-4f09-f6da-a9c5e8b0a8ab" 
q_conv_layer_1.get_weights() # + id="uTtYUCWIz-ox" outputId="4e583e12-d75d-481b-a43c-e2ece0810ae5" q_conv_layer_2.get_weights() # + id="ekUErXldz-ox" model_best_weights = model.get_weights() # + id="lA_hW9Xbz-ox" outputId="1acd0c77-738f-4fa7-d145-8fb768b8947f" model_best_weights # + id="4YkJkYUgz-oy" predict_test = model.predict(X_test) # + id="dym66Lh0z-oy" ''' from keras.models import model_from_json # load json and create model json_file = open('model.json', 'r') loaded_model_json = json_file.read() json_file.close() loaded_model = model_from_json(loaded_model_json) # load weights into new model loaded_model.load_weights("model.h5") print("Loaded model from disk") '''
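The readout in `q_fc` above measures each qubit against the label states |0⟩ and |1⟩ via `qml.Hermitian` projectors built from `density_matrix`; the linear-algebra core of that can be checked without PennyLane. A numpy sketch of the `density_matrix` helper and the fidelity ⟨ψ|ρ|ψ⟩ (the state `psi` is an arbitrary made-up example):

```python
import numpy as np

def density_matrix(state):
    # |psi><psi| for a column state vector, as defined in the notebook
    # (for (2,1) arrays, * with broadcasting and @ give the same outer product)
    state = np.asarray(state, dtype=complex)
    return state @ np.conj(state).T

label_0 = np.array([[1], [0]])
label_1 = np.array([[0], [1]])

# Density matrices of the two label states are rank-1 projectors
rho_0 = density_matrix(label_0)
assert np.allclose(rho_0, [[1, 0], [0, 0]])
assert np.allclose(rho_0 @ rho_0, rho_0)        # projector: rho^2 = rho
assert np.isclose(np.trace(rho_0).real, 1.0)    # unit trace

# Fidelity of a state |psi> with a label state: <psi| rho |psi>
psi = np.array([[np.cos(0.3)], [np.sin(0.3)]])  # arbitrary real qubit state
fid_0 = float(np.real(np.conj(psi).T @ rho_0 @ psi))
fid_1 = float(np.real(np.conj(psi).T @ density_matrix(label_1) @ psi))

# The two fidelities act as class scores and sum to 1 for a pure qubit state
assert np.isclose(fid_0, np.cos(0.3) ** 2)
assert np.isclose(fid_0 + fid_1, 1.0)
```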
PennyLane/Data Reuploading Classifier/Q Conv + DRC Keras MNIST (best)-Copy1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Sentiment Analysis Project with No RNNs import os import tensorflow as tf from tensorflow import keras import numpy as np # + # a def read_imdb(dataset_type, review_type): dataset_type, review_type = dataset_type.lower(), review_type.lower() if dataset_type != "train" and dataset_type != "test": raise ValueError("Choose from train or test.") if review_type != "pos" and review_type != "neg": raise ValueError("Choose from pos or neg.") for roots, dirs, files in os.walk("./datasets/imdb/aclImdb/" + dataset_type + "/" + review_type): files = files return files train_pos = read_imdb("train", "pos") train_neg = read_imdb("train", "neg") test_valid_pos = read_imdb("test", "pos") test_valid_neg = read_imdb("test", "neg") # - # b test_pos = test_valid_pos[:5000] test_neg = test_valid_neg[:5000] valid_pos = test_valid_pos[5000:] valid_neg = test_valid_neg[5000:] # + # c # TextLineDataset for if you want to pretend that it can't be stored in memory def write_dataset(filepaths, dataset_type, review_type): reviews, labels = [], [] if review_type.lower() == "pos": label = 1 elif review_type.lower() == "neg": label = 0 else: raise ValueError("Choose from pos or neg.") for filepath in filepaths: with open("./datasets/imdb/aclImdb/" + dataset_type + "/" + review_type + "/" + filepath, "rb") as review_file: reviews.append(review_file.read()) labels.append(label) dataset = (reviews, labels) return tf.data.Dataset.from_tensor_slices(dataset) def concatenate_datasets(pos_ds, neg_ds): return tf.data.Dataset.concatenate(pos_ds, neg_ds) # + # d # this function I had to copy, I don't know regex # but it is fully debunked and explained to myself in another notebook def preprocess(X_batch, n_words=50, char_limit=300): shape = tf.shape(X_batch) * tf.constant([1, 0]) + 
tf.constant([0, n_words]) Z = tf.strings.substr(X_batch, 0, char_limit) Z = tf.strings.lower(Z) Z = tf.strings.regex_replace(Z, b"<br\\s*/?>", b" ") Z = tf.strings.regex_replace(Z, b"[^a-z]", b" ") Z = tf.strings.split(Z) return Z.to_tensor(shape=shape, default_value=b"<pad>") X_small_example = tf.constant(["It's a great, great movie! I loved it.", "It was terrible, run away!!!"]) print(preprocess(X_small_example)) # + from collections import Counter class TextVectorization(keras.layers.Layer): # functions can be included in this class or be outside of the class def __init__(self, n_oov_buckets=100, max_vocab=1000, dtype=tf.string, **kwargs): super().__init__(dtype=dtype, **kwargs) # why do I need the tf.string dtype? self.n_oov_buckets = n_oov_buckets self.max_vocab = max_vocab def get_vocabulary(self, X_batch, max_vocab=1000): tally = Counter() preprocessed_batches = preprocess(X_batch).numpy() for review in preprocessed_batches: for letter in review: if letter != b"<pad>": tally[letter] += 1 return [b"<pad>"] + list(np.array(tally.most_common(max_vocab))[:, 0]) # the adapt() is just a self-named variable, but conventionally used in experiment.preprocessing def adapt(self, data_sample): # think of data_sample as a batch self.vocab = self.get_vocabulary(data_sample, self.max_vocab) indices = tf.range(len(self.vocab), dtype=tf.int64) # words = tf.constant(self.vocab) # why do I need this? 
# it's optional, but KeyValueTensorInitializer takes in tensors, and # words is a constant tensor in nature # everything is automatically converted to a @tf.function # table_initializer(vocabulary, indices) table_init = tf.lookup.KeyValueTensorInitializer(self.vocab, indices) # table(table_init, n_oov_buckets) self.vocab_table = tf.lookup.StaticVocabularyTable(table_init, self.n_oov_buckets) # adapt() has no return def call(self, inputs): # this layer is a preprocessing layer that can be placed only at the beginning # preprocessing is done here or before the model receives the data, your choice preprocessed_inputs = preprocess(inputs) # lookup return self.vocab_table.lookup(preprocessed_inputs) # + when you don't specify dtype=dtype in init # TypeError: Input 'input' of 'Substr' Op has type float32 that does not match expected type of string. train_set = concatenate_datasets(write_dataset(train_pos, "train", "pos"), write_dataset(train_neg, "train", "neg")) test_set = concatenate_datasets(write_dataset(test_pos, "test", "pos"), write_dataset(test_neg, "test", "neg")) valid_set = concatenate_datasets(write_dataset(valid_pos, "test", "pos"), write_dataset(valid_neg, "test", "neg")) # + batch_size=32 train_set = train_set.shuffle(25000).batch(batch_size).prefetch(1) valid_set = valid_set.batch(batch_size).prefetch(1) test_set = test_set.batch(batch_size).prefetch(1) # + Text_Vectorizer = TextVectorization(input_shape=[]) # why do I need an input_shape? sample_review_batches = train_set.map(lambda review, label: review) sample_reviews = np.concatenate(list(sample_review_batches.as_numpy_iterator()), axis=0) Text_Vectorizer.adapt(sample_reviews) # + when you don't specify input_shape=[] # # ValueError: rt_input.shape and shape=[?,?] 
are incompatible: rt_input.rank = 3 but shape.rank = 2 for '{{node sequential/text_vectorization/RaggedToTensor/RaggedTensorToTensor}} = RaggedTensorToTensor[T=DT_STRING, Tindex=DT_INT64, Tshape=DT_INT32, num_row_partition_tensors=2, row_partition_types=["ROW_SPLITS", "VALUE_ROWIDS"]](sequential/text_vectorization/add, sequential/text_vectorization/StringSplit/StringSplit/StringSplit/StringSplitV2:1, sequential/text_vectorization/RaggedToTensor/default_value, sequential/text_vectorization/StringSplit/RaggedFromTensor/RaggedFromUniformRowLength/RowPartitionFromUniformRowLength/mul, sequential/text_vectorization/StringSplit/StringSplit/StringSplit/strided_slice)' with input shapes: [2], [?], [], [?], [?]. # # + class BagOfWords(keras.layers.Layer): def __init__(self, n_tokens, dtype=tf.int32, **kwargs): super().__init__(**kwargs) # why the tf.int32? # specify self.n_tokens = n_tokens def call(self, inputs): OH_output = tf.one_hot(inputs, depth=self.n_tokens) return tf.math.reduce_sum(OH_output, axis=1)[:, 1:] OH_columns = 1 + Text_Vectorizer.max_vocab + Text_Vectorizer.n_oov_buckets bag_of_words = BagOfWords(OH_columns) # + def build_model_fn(n_layers=1, n_neurons=100, act_fn="relu"): model = keras.models.Sequential([ Text_Vectorizer, bag_of_words, ]) for _ in range(n_layers): model.add(keras.layers.Dense(n_neurons, activation=act_fn)) model.add(keras.layers.Dense(1, activation="sigmoid")) model.compile(loss="binary_crossentropy", optimizer="nadam", metrics=["accuracy"]) # binary crossentropy because it's a binary classification task return model model = build_model_fn() keras.utils.plot_model(model) # - keras.backend.clear_session() tf.random.set_seed(42) np.random.seed(42) for item in train_set: print(item) break model.fit(train_set, epochs=10) # + # e def compute_mean_embedding(inputs): not_pad = tf.math.count_nonzero(inputs, axis=-1) print(not_pad) n_words = tf.math.count_nonzero(not_pad, axis=-1, keepdims=True) print(n_words) sqrt_n_words = 
tf.math.sqrt(tf.cast(n_words, tf.float32)) print(sqrt_n_words) return tf.reduce_mean(inputs, axis=1) * sqrt_n_words def build_model_fn_with_embedding(n_layers=1, n_neurons=100, act_fn="relu", out_dim=20): model = keras.models.Sequential([ Text_Vectorizer, ]) model.add(keras.layers.Embedding(input_dim=OH_columns, output_dim=out_dim, mask_zero=True)) # mask_zero is supposed to convert the 0 values in the/<pad> tokens to zero vectors # it doesn't work? model.add(keras.layers.Lambda(compute_mean_embedding)) for _ in range(n_layers): model.add(keras.layers.Dense(n_neurons, activation=act_fn)) model.add(keras.layers.Dense(1, activation="sigmoid")) model.compile(loss="binary_crossentropy", optimizer="nadam", metrics=["accuracy"]) return model # + mask_zero # # Boolean, whether or not the input value 0 is a special "padding" value that should be masked out. This is useful when using recurrent layers which may take variable length input. If this is True, then all subsequent layers in the model need to support masking or an exception will be raised. If mask_zero is set to True, as a consequence, index 0 cannot be used in the vocabulary (input_dim should equal size of vocabulary + 1). # + mask_zero with a subsequent Lambda layer # + https://stackoverflow.com/questions/47485216/how-does-mask-zero-in-keras-embedding-layer-work # + https://stackoverflow.com/questions/49961683/how-to-use-the-result-of-embedding-with-mask-zero-true-in-keras#:~:text=If%20you%20set%20mask_zero=True%20in%20your%20embeddings,%20then,to%20understand%20the%20%22masked%22%20information%20will%20use%20them. # + apparently, mask_zero doesn't actually give zero vectors, but they are *like* zero vectors as they are not considered in future layers that support mask_zero # + # f model.compile(loss="binary_crossentropy", optimizer="nadam", metrics=["accuracy"]) model.fit(train_set, epochs=5, validation_data=valid_set) # -
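The rescaling in `compute_mean_embedding` (mean of the word embeddings times the square root of the word count) can be checked outside Keras. Below is a minimal NumPy sketch of that computation, assuming pad token id 0 and that pad vectors are zeroed out, which is what `mask_zero=True` is meant to achieve; the function name and shapes are illustrative, not part of the Keras API.

```python
import numpy as np

def mean_embedding(emb, token_ids):
    # emb: (batch, seq_len, dim) embeddings; token_ids: (batch, seq_len), 0 = <pad>
    mask = (token_ids != 0).astype(emb.dtype)      # 1 for real words, 0 for pads
    emb = emb * mask[..., None]                    # zero out the pad vectors
    n_words = mask.sum(axis=1, keepdims=True)      # real words per example
    # mean over real words, rescaled by sqrt(n_words): equals sum / sqrt(n_words)
    return emb.sum(axis=1) / np.sqrt(np.maximum(n_words, 1.0))

ids = np.array([[3, 7, 0, 0], [5, 0, 0, 0]])       # 2 words, then 1 word
emb = np.ones((2, 4, 2))                           # all-ones embeddings for easy arithmetic
print(mean_embedding(emb, ids))                    # rows ≈ [1.4142, 1.4142] and [1.0, 1.0]
```

With all-ones embeddings the first row sums to 2 per dimension over its 2 real words, giving 2/√2 = √2, which is why the √n rescaling keeps longer sentences from shrinking toward zero.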
src/Sentiment Analysis Project With No RNNs.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Ayushi12345678912/letspgrade/blob/master/assignment_no_2_day_6.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# + id="fGZu-0aR_Een" colab_type="code" colab={}
import math


class cone():
    def __init__(self, radius, height):  # was _init_: Python only calls the double-underscore spelling
        self.radius = radius
        self.height = height

    def volume(self):
        v = math.pi * pow(self.radius, 2) * (self.height / 3)
        return "volume: %.2f" % v

    def area(self):
        # slant height is sqrt(r^2 + h^2); total surface area is pi*r*(r + slant)
        a = math.pi * self.radius * (self.radius + math.sqrt(pow(self.radius, 2) + pow(self.height, 2)))  # was self.radious
        return "surface Area: %.2f" % a


vol = cone(5, 2)
vol.volume()
vol.area()
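The cone formulas can be sanity-checked numerically. A standalone sketch follows (note the constructor must be spelled `__init__` with double underscores for Python to call it; the methods here return numbers rather than formatted strings, a small design change for testability):

```python
import math

class Cone:
    """Standalone version of the cone class for checking the formulas numerically."""

    def __init__(self, radius, height):
        self.radius = radius
        self.height = height

    def volume(self):
        # V = pi * r^2 * h / 3
        return math.pi * self.radius ** 2 * self.height / 3

    def surface_area(self):
        # slant height l = sqrt(r^2 + h^2); total surface area A = pi * r * (r + l)
        slant = math.sqrt(self.radius ** 2 + self.height ** 2)
        return math.pi * self.radius * (self.radius + slant)

c = Cone(5, 2)
print("volume: %.2f" % c.volume())              # volume: 52.36
print("surface Area: %.2f" % c.surface_area())  # surface Area: 163.13
```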
assignment_no_2_day_6.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <a id='home'></a> # # Investigation of Twitter archive of `WeRateDogs` # # ## Table of Contents # <ol> # <li><a href="#Introduction">Introduction</a></li> # <li><a href="#question">Questions imposed</a></li> # <li><a href="#wrangling">Data Wrangling</a></li> # <li><a href="#eda">Exploratory Data Analysis</a></li> # <li><a href="#conclusions">End</a></li> # </ol> # <a id='Introduction'></a> # ## Introduction # # <a href="#home">Home</a> # ### About the Dataset # # The dataset that is being wrangled (and analyzed and visualized) is the tweet archive of Twitter user @dog_rates, also known as WeRateDogs. WeRateDogs is a Twitter account that rates people's dogs with a humorous comment about the dog. These ratings almost always have a denominator of 10. The numerators, though? Almost always greater than 10. 11/10, 12/10, 13/10, etc. Why? Because "they're good dogs Brent." WeRateDogs has over 4 million followers and has received international media coverage. # # This archive contains basic tweet data (tweet ID, timestamp, text, etc.) for all 5000+ of their tweets as they stood on August 1, 2017. More on this soon. 
# # ### Inspiration # Is it possible to find the best rated dogs based on the tweets # <a id='question'></a> # ### Questions - # - Ratings of dogs based on type # - Best dog according to rating # - Tweets based on hour of the day # - Tweets based on the days of the week # - Tweets based on month # - Most important factor which leads to better rating # # # <a href="#home">Home</a> # <a id='wrangling'></a> # ## Data Wrangling # # # # # <a href="#home">Home</a> # ### Gathering Data # + # Import all necessary libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sb # %matplotlib inline # - # Load data twitter_df = pd.read_csv('data/twitter-archive-enhanced-2.csv') image_df = pd.read_csv('data/image-predictions.tsv', sep='\t') tweet_df = pd.read_json('data/tweet-json.txt', lines=True) # #### Making a copy of original data twitter_df_original = twitter_df.copy() image_df_original = image_df.copy() tweet_df_original = tweet_df.copy() # ### Accessing Data # ### Visual Assessment twitter_df.head() image_df.head() tweet_df.head() # ### Programmatic Assessment twitter_df.info() twitter_df.shape twitter_df.columns twitter_df.describe() # ### Data Quality Issues # # 1. Many null values # # - `in_reply_to_status_id` # # - `in_reply_to_user_id` # # - `retweeted_status_id` # # - `retweeted_status_user_id` # # # 2. Incorrect data types # # - `tweet_id` # # - `in_reply_to_status_id` # # - `in_reply_to_user_id` # # - `retweeted_status_id` # # - `retweeted_status_user_id` # # # 3. `rating_denominator` has minimum value as 0 which is not possible for denominators # # # 4. `datetime` format for # # - `timestamp` # # - `retweeted_status_timestamp` # # # 5. Retweets need to be removed to avoid duplication in our analysis. This may be done by removing rows that have non-empty `retweeted_status_id`, `retweeted_status_user_id`, `and retweeted_status_timestamp` # # # 6. Add `rating` column as the ratio of numerator and denominator # # # 7. 
Reorder the columns into similar ones close to each other after adding or removing some extra columns # # # 8. Some numerators are wrongly entered. They are different as in the comments # # # ### Data Tidiness Issues # - `category` column can be created to store the type of dog instead of the last 4 columns named as `doggo`, `floofer`, `pupper`, `puppo` # # # - Information about one type of observational unit (tweets) is spread across three different dataframes. Therefore, these three dataframes should be merged as they are part of the same observational unit. # # # ### Data Cleaning # #### Define # - Retweets need to be removed to avoid duplication in our analysis. This may be done by removing rows that have non-empty `retweeted_status_id`, `retweeted_status_user_id`, `and retweeted_status_timestamp` # # # #### Code # + # twitter_df = twitter_df[(twitter_df['retweeted_status_timestamp'].isna()) | # (twitter_df['retweeted_status_id'].isna()) | # (twitter_df['retweeted_status_user_id'].isna())] twitter_df1 = twitter_df[(twitter_df['retweeted_status_timestamp'].isna() == False) | (twitter_df['retweeted_status_id'].isna() == False) | (twitter_df['retweeted_status_user_id'].isna() == False)] twitter_df1.reset_index(inplace=True, drop=True) twitter_df.reset_index(inplace=True, drop=True) # - # #### Test twitter_df.head() twitter_df1.head() # #### Define # - Incorrect data types # # - `tweet_id` # # - `in_reply_to_status_id` # # - `in_reply_to_user_id` # # - `retweeted_status_id` # # - `retweeted_status_user_id` # # #### Code # + # Modify Data types twitter_df['tweet_id'] = twitter_df['tweet_id'].astype('str') twitter_df['in_reply_to_status_id'] = twitter_df['in_reply_to_status_id'].astype('str') twitter_df['in_reply_to_user_id'] = twitter_df['in_reply_to_user_id'].astype('str') twitter_df['retweeted_status_id'] = twitter_df['retweeted_status_id'].astype('str') twitter_df['retweeted_status_user_id'] = twitter_df['retweeted_status_user_id'].astype('str') # - # #### Test 
twitter_df.info() twitter_df1.info() # #### Define # - `datetime` format for # # - `timestamp` # # - `retweeted_status_timestamp` # twitter_df['timestamp'].head() twitter_df[twitter_df['retweeted_status_timestamp'].isna() == False]['retweeted_status_timestamp'].head() # #### Code # + # Format Date twitter_df['timestamp'] = pd.to_datetime(twitter_df['timestamp'], format="%Y-%m-%d %H:%M:%S +0000") twitter_df['retweeted_status_timestamp'] = pd.to_datetime(twitter_df['retweeted_status_timestamp'], format="%Y-%m-%d %H:%M:%S +0000") # - # #### Test twitter_df[twitter_df['retweeted_status_timestamp'].isna() == False][['timestamp', 'retweeted_status_timestamp']].head() # #### Define # + twitter_df.shape[0] - sum(twitter_df['retweeted_status_timestamp'].isna()) print('There are only %d retweet timestamps.' %(twitter_df.shape[0] - sum(twitter_df['retweeted_status_timestamp'].isna()))) # - # #### Define # `rating_denominator` has minimum value as 0 which is not possible for denominators # # #### Code # + # Drop record with 0 denominator null_index = twitter_df[twitter_df['rating_denominator'] == 0].index twitter_df = twitter_df.drop(null_index) # - # #### Test twitter_df[twitter_df['rating_denominator'] == 0] # #### Define # Some numerators are wrongly entered. 
# They are different from the values quoted in the tweet text
#
# #### Code

# +
# Extract numerator and denominator from the tweets
twitter_df['numerator'] = twitter_df.text.str.extract(r'((?:\d+\.)?\d+)\/(\d+)', expand=True)[0]
twitter_df['denominator'] = twitter_df.text.str.extract(r'((?:\d+\.)?\d+)\/(\d+)', expand=True)[1]

twitter_df['numerator'] = twitter_df['numerator'].astype('float')
twitter_df['denominator'] = twitter_df['denominator'].astype('float')
# -

# #### Check if they are the same or not

twitter_df[twitter_df['rating_numerator'] != twitter_df['numerator']][['rating_numerator', 'numerator', 'rating_denominator', 'denominator']]

twitter_df[twitter_df['rating_denominator'] != twitter_df['denominator']][['rating_numerator', 'numerator', 'rating_denominator', 'denominator']]

# #### We got 6 records having different numerators
# Let's clean them

twitter_df = twitter_df[twitter_df['rating_numerator'] == twitter_df['numerator']]

# #### Test

twitter_df[twitter_df['rating_numerator'] != twitter_df['numerator']][['rating_numerator', 'numerator', 'rating_denominator', 'denominator']]

# #### Define
# Add `rating` column as the ratio of numerator and denominator
#
# #### Code

# +
# Add 'rating' column
twitter_df['rating'] = twitter_df['numerator'] / twitter_df['denominator']
# -

# #### Test

twitter_df.head()

# #### Define
# `category` column can be created to store the type of dog instead of the last 4 columns named as `doggo`, `floofer`, `pupper`, `puppo`
#
# #### Code

# +
# Add category column
def label_category(row):
    if row['doggo'] == 'doggo':
        return 'doggo'
    if row['floofer'] == 'floofer':
        return 'floofer'
    if row['pupper'] == 'pupper':
        return 'pupper'
    if row['puppo'] == 'puppo':
        return 'puppo'
    return 'normal'
# -

twitter_df['category'] = twitter_df.apply(label_category, axis=1)

# #### Test

twitter_df.apply(label_category, axis=1).value_counts()

twitter_df[['name', 'category']].head(10)

# #### Check for Duplicated values

# +
# No of duplicated values per column
for col in twitter_df.columns:
    print(col, sum(twitter_df[col].duplicated()))
# -

# #### Check all unique values

# +
# No of unique values per column
cols = ['doggo', 'floofer', 'pupper', 'puppo', 'rating', 'category']
for col in cols:
    print(col, len(twitter_df[col].unique()))
    print(twitter_df[col].unique(), '\n')
# -

twitter_df.describe()

# #### Add Month, Day and Hour for Tweet time and retweet time

# +
# Adding Month, Day and Hour of tweets and retweets
twitter_df['Month'] = twitter_df['timestamp'].dt.month_name()
twitter_df['Day'] = twitter_df['timestamp'].dt.day_name()
twitter_df['Hour'] = twitter_df['timestamp'].dt.hour

twitter_df['re-Month'] = twitter_df['retweeted_status_timestamp'].dt.month_name()
twitter_df['re-Day'] = twitter_df['retweeted_status_timestamp'].dt.day_name()
twitter_df['re-Hour'] = twitter_df['retweeted_status_timestamp'].dt.hour

# +
# Adding the time difference (in days) between tweets and retweets
# temp_twitter_df holds only the rows with a valid 'retweeted_status_timestamp'
temp_twitter_df = twitter_df[twitter_df['re-Hour'].isna() == False]
temp_twitter_df['retweetTime'] = (temp_twitter_df['timestamp']
                                  - temp_twitter_df['retweeted_status_timestamp']).dt.days
twitter_df['retweetTime'] = temp_twitter_df['retweetTime']
# -

twitter_df.shape

twitter_df.head()

twitter_df.info()

twitter_df.columns

# <a id='eda'></a>
# ## Exploratory Data Analysis
#
# <a href="#home">Home</a>

# ### Research Question 1 (Tweet patterns and ratings by category)
# ### Basic plots based on the available columns base_color = sb.color_palette()[0] twitter_df['in_reply_to_status_id'].value_counts()[1:11] twitter_df['in_reply_to_user_id'].value_counts()[1:11] sb.countplot(data = twitter_df, x='category', color = base_color) plt.ylabel('No of dogs') plt.xlabel('Category of dogs') plt.title('Nof of dogs in each category'); # #### Conclusions # The above graph shows that most of the dogs belong to normal category. Floofer has the least no of dogs sb.barplot(data=twitter_df, x='category', y='rating', color=base_color); plt.title('Average Rating of dogs based on their category'); plt.xlabel('Category of dogs') plt.ylabel('Average Rating'); # #### Conclusions # The above graph shows that normal category dogs have highest average rating and highest deviation x_marker = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December'] sb.countplot(data=twitter_df, x='Month', order=x_marker, color=base_color); plt.title('No of tweets based on month') plt.xlabel('Months') plt.xticks(rotation=30) plt.ylabel('Total no of tweets'); # #### Conclusions # The above graph shows that most of the tweets are done in the November and December x_marker = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December'] y_marker = [0, 5, 10, 15, 20] sb.countplot(data=twitter_df, x='re-Month', order=x_marker, color=base_color); plt.title('No of retweets based on month') plt.xlabel('Months') plt.xticks(rotation=30) plt.yticks(y_marker) plt.ylabel('Total no of retweets'); # #### Conclusions # The above graph shows that most of the retweets are done in the beginning or the end of the year x_marker = ['Monday', 'Tuesday', 'Wednesday', 'Thursday','Friday', 'Saturday'] sb.countplot(data=twitter_df, x='Day', order=x_marker, color=base_color); plt.title('No of tweets based on day of the week') plt.xlabel('Day of Week') plt.xticks(rotation=15) 
plt.ylabel('Total no of tweets'); # #### Conclusions # The above graph shows that most of the tweets are done on Monday and gradually decreasing by the end of the week x_marker = ['Monday', 'Tuesday', 'Wednesday', 'Thursday','Friday', 'Saturday'] sb.countplot(data=twitter_df, x='Day', hue='doggo', order=x_marker); plt.title('No of tweets based on day of the week') plt.legend(['Not Doggo','Doggo'], title='Dog Type'); plt.xlabel('Day of Week') plt.ylabel('Total no of tweets'); # #### Conclusions # The above graph shows that most of the tweets are done on Monday and gradually decreasing by the end of the week. # The most number of tweets for the doggo are done on Tuesday and the least on Thursday x_marker = ['Monday', 'Tuesday', 'Wednesday', 'Thursday','Friday', 'Saturday'] sb.countplot(data=twitter_df, x='Day', hue='floofer', order=x_marker); plt.title('No of tweets based on day of the week') plt.legend(['Not Floofer','Floofer'], title='Dog Type'); plt.xlabel('Day of Week') plt.ylabel('Total no of tweets'); # #### Conclusions # The above graph shows that most of the tweets are done on Monday and gradually decreasing by the end of the week # The total number of tweets done for floofer are negligible as they are having very less population x_marker = ['Monday', 'Tuesday', 'Wednesday', 'Thursday','Friday', 'Saturday'] sb.countplot(data=twitter_df, x='Day', hue='pupper', order=x_marker); plt.title('No of tweets based on day of the week') plt.legend(['Not Pupper','Pupper'], title='Dog Type'); plt.xlabel('Day of Week') plt.ylabel('Total no of tweets'); # #### Conclusions # The above graph shows that most of the tweets are done on Monday and gradually decreasing by the end of the week. 
# We can see that the number of pupper dogs is the largest compared to the other categories

x_marker = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday']
sb.countplot(data=twitter_df, x='Day', hue='puppo', order=x_marker);
plt.title('No of tweets based on day of the week')
plt.legend(['Not Puppo', 'Puppo'], title='Dog Type');
plt.xlabel('Day of Week')
plt.ylabel('Total no of tweets');

# #### Conclusions
# The above graph shows that most of the tweets are done on Monday, gradually decreasing by the end of the week.
# This category is also very small in number, though not as small as the floofer category.

x_marker = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday']
sb.barplot(data=twitter_df, x='Day', y='rating', order=x_marker, color=base_color);
plt.title('Average Rating of dogs based on day of the week');
plt.xlabel('Day of Week')
plt.ylabel('Average Rating');

# #### Conclusions
# The above graph shows that the average rating is highest on Monday and Saturday. Monday also has the highest deviation in the ratings.

twitter_df.groupby('Hour').count()

sb.countplot(data=twitter_df, x='Hour', color=base_color);
plt.xlabel('Hour of the day')
plt.ylabel('Total no of tweets')
plt.title('No of tweets based on hour of the Day');

# #### Conclusions
# The above graph shows that most of the tweets are done early in the morning or in the evening. The time between 5 AM and 2 PM has the least number of tweets

# +
temp_twitter_df = twitter_df[twitter_df['re-Hour'].isna() == False]
temp_twitter_df = temp_twitter_df.astype({'re-Hour': int})
sb.countplot(data=temp_twitter_df, x='re-Hour', color=base_color);
plt.xlabel('Hour of the day')
plt.ylabel('Total no of retweets')
plt.title('Retweets based on hour of the Day');
# -

# #### Conclusions
# The above graph shows that most of the retweets are done in the first hour of the day.
# It's a bimodal distribution with two peaks, the first at 0 and the second at the $16^{th}$ hour

bin_size = np.arange(temp_twitter_df['retweetTime'].min(), temp_twitter_df['retweetTime'].max() + 10, 10)
plt.hist(temp_twitter_df['retweetTime'], bins=bin_size);
plt.title('Retweets based on time difference')
plt.xlabel('Time difference in days')
plt.ylabel('Retweet counts');

# #### Conclusions
# The above graph shows that most of the retweets are done on the same day, which is shown by the first peak at 0, the highest of all. There are almost no retweets after one year of the actual tweet

sb.countplot(data=twitter_df, x='doggo');

sb.countplot(data=twitter_df, x='floofer');

sb.countplot(data=twitter_df, x='pupper');

sb.countplot(data=twitter_df, x='puppo');

sb.heatmap(twitter_df.corr(), annot=True, fmt='.2f', cmap='vlag_r', center=0);

# #### Conclusions
# The above graph shows that Rating is highly correlated with the numerator of the data. So the numerator is sufficient for predicting the popularity of the dogs

name_counts = twitter_df['name'].value_counts()
name_order = name_counts.index
sb.countplot(data=twitter_df, y='name', order=name_order[2:12], color=base_color)
plt.xlabel('No of Dogs')
plt.ylabel('Names')
plt.title('Top 10 most common names for dogs');

# #### Conclusions
# The above graph shows that the most common name used for dogs is Charlie, followed by <NAME> and others

# ### Research Question 2 (Best rated dog)

# #### Ratings given

bin_size = np.arange(twitter_df['rating'].min(), twitter_df['rating'].max() + 2, 2)
plt.hist(twitter_df['rating'], bins=bin_size);
plt.xlabel('Rating')
plt.ylabel('No of dogs')
plt.title('No of dogs based on ratings');

# #### Conclusions
# The above graph shows that almost all of the dogs have a rating near 0. But a few values are near 175.
# ### Taking a closer look at the point near 0 bin_size = np.arange(twitter_df['rating'].min(), twitter_df['rating'].max()+.025, .025) plt.hist(twitter_df['rating'], bins=bin_size); plt.xlim(0,2); plt.xlabel('Rating') plt.ylabel('No of dogs') plt.title('No of dogs based on ratings'); # #### Conclusions # The above graph shows that most of the dogs are rated between 1.00 and 1.25 bin_size = np.arange(twitter_df['rating'].min(), twitter_df['rating'].max()+.1, .1) plt.hist(twitter_df['rating'], bins=bin_size); plt.xlim(2,10); plt.ylim(0,5) plt.xlabel('Rating') plt.ylabel('No of dogs') plt.title('No of dogs based on ratings'); # #### Conclusions # The above graph shows that there are only 4 dogs having rating more than 2 and less than 10 twitter_df[(twitter_df['rating']>1.4) & (twitter_df['name'] != 'None')][['name', 'rating']].sort_values(by=['rating'], ascending=False) temp_df = twitter_df[(twitter_df['rating']>1.3) & (twitter_df['name'] != 'None')][['name', 'rating']].sort_values(by=['rating'], ascending=False)[:10] sb.barplot(data = temp_df, x = 'rating', y = 'name', color=base_color) plt.xlabel('Rating') plt.ylabel('Names') plt.title('Top 10 most rated dogs'); temp_df['name'].tolist() # #### Conclusions # The above table shows the highest rated dogs # <a id='conclusions'></a> # ## End # # # <a href="#home">Home</a> twitter_df.to_csv('data/twitter_df_clean.csv')
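The cleaning steps above extract ratings with the regex `((?:\d+\.)?\d+)\/(\d+)`. A quick standalone check of what that pattern captures, using made-up tweet-like strings (not rows from the archive):

```python
import re

# same pattern as in the numerator/denominator extraction step
pattern = re.compile(r'((?:\d+\.)?\d+)/(\d+)')

samples = [
    "This is Bella. She's 13.5/10 would pet",  # decimal numerator
    "Good dog. 12/10",                         # plain integer rating
    "No rating in this tweet",                 # no match at all
]
results = [m.groups() if (m := pattern.search(t)) else None for t in samples]
print(results)  # [('13.5', '10'), ('12', '10'), None]
```

The optional `(?:\d+\.)?` group is what lets decimal numerators like 13.5 survive, which is exactly the case where the archive's original `rating_numerator` column goes wrong.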
Project 4 - Wrangle and Analyze Data/wrangle_act.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Unsupervised Anomaly Detection # # We continue with anomaly detection by focusing now on unsupervised anomaly detection. Recall, you can think of an anomaly as some sort of highly unusual event occuring in your data that you wish to find (e.g. an attack on your network, a defective device, credit card fraud). More usefully, a good definition of an anomaly is the following: An anomaly is a data sample that deviates significantly from other data samples, so much so to suggest that it was generated by a different mechanism. In probability language, you can think of an anomaly as something that comes from a different distribution than the "real" data. # # Unsupervised anomaly detection deals with the case where we don't know ahead of time which points in our dataset are anomalies. The goal is to find structure in the data and to use that structure to find points that are "out of place". What does it mean for a point to be out of place? Usually this means the point is unusually "far away" from most of the data in some sense. In statistical lingo this is often called [outlier detection](http://scikit-learn.org/stable/modules/outlier_detection.html). # # Note, however, that not all anomalies are outliers! One example of a non-outlier anomaly is the "faulty sensor" anomaly. These often result in having lots of points in your dataset taking the same or similar values. If that value happens to be inside your "good data" values you'll probably never catch it with these techniques. Another example of a non-outlier anomaly is the "cluster" anomaly. These result in having sets of points forming one or more clusters, which may be hard to pick out from the "good" data. An example of this could be a sudden burst in activity over a network. 
Unsupervised techniques will often not work very well for these types of anomalies, as they're largely based on finding outliers. # # As always, we begin by loading the packages we'll use and defining a few useful functions for getting the data and plotting stuff. We'll primarily be sticking with sklearn here for our detection techniques. import numpy as np import matplotlib.pyplot as plt import pandas as pd from sklearn.svm import OneClassSVM from sklearn.covariance import EllipticEnvelope from sklearn.ensemble import IsolationForest from sklearn.neighbors import LocalOutlierFactor from sklearn.metrics import f1_score, confusion_matrix, precision_score, recall_score from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split from imblearn.over_sampling import SMOTE np.random.seed(4) # Now we load in the data. The non-anomalous points were generated from a 2D Gaussian distribution, while the anomalous points were generated from a complicated mixture distribution. Since I generated the anomalies myself, we can keep track of the "labels" in `y` just for plotting and evaluation purposes. The models we'll use below will only use the feature data `X` though since we're doing unsupervised learning. Note that in real life you usually won't know which points in your dataset are anomalies. You're almost always "flying blind" when doing unsupervised anomaly detection. # # In this case, about 1% of the data are anomalous, which comes out to about 22 in 1485 points. A plot of the data is shown below as well, with the anomalies we're trying to "discover" highlighted in red. You can see we have a mix of different types of anomalies to detect. Some are random outliers, some are clusters, and some are "faulty sensor" types centered at the origin. 
# + def get_pos_samples(n, lim=3): # helper function for get_data # generates anomalies from a mixture distribution # consists of 1 Uniform(-lim,lim) + 3 Normal(mean,cov) with random means and covs x = np.zeros((n,2)) idx = np.arange(n) mean = np.random.uniform(-lim,lim,size=2) cov = .05*np.random.rand()*np.eye(2) for i in idx[:n//2]: x[i] = np.random.multivariate_normal(mean,cov,size=1) mean = np.random.uniform(-lim,lim,size=2) cov = .01*np.random.rand()*np.eye(2) for i in idx[n//2:]: x[i] = np.random.multivariate_normal(mean,cov,size=1) u = np.zeros((5,2)) for i in range(5): u[i] = np.random.uniform(-lim+1,lim+1,size=2) z = np.zeros((2,2)) return np.vstack([x, u, z]) def get_data(size=1000, ratio=0.01): num_neg_samples = round(size*(1-ratio)) num_pos_samples = round(size*ratio) X_neg = np.random.multivariate_normal(np.array([0,0]),.5*np.eye(2),size=num_neg_samples) y_neg = np.zeros(len(X_neg)) X_pos = get_pos_samples(num_pos_samples) y_pos = np.ones(len(X_pos)) X = np.vstack([X_neg,X_pos]) y = np.concatenate([y_neg,y_pos]) idx = np.random.permutation(len(y)) X = X[idx] y = y[idx] return X, y X, y = get_data(1500,0.01) print('Number of non-anomalies:',len(y[y==0])) print('Number of anomalies:',len(y[y==1])) print('Percent anomalies in dataset:',round(len(y[y==1])/(len(y[y==0])+len(y[y==1]))*100,2),'%') # - def plot_data(X, y, yhat=None): lim = np.max(np.abs(X)) f, ax = plt.subplots(figsize=(8, 8)) ax.scatter(X[y==0][:,0],X[y==0][:,1],marker='.',c='blue',s=10,alpha=0.5,label='normal data') ax.scatter(X[y==1][:,0],X[y==1][:,1],marker='.',c='red',s=50,label='anomalies') if yhat is not None: ax.scatter(X[yhat==1][:,0],X[yhat==1][:,1],marker='.',c='green',s=50,label='predicted anomalies') ax.set(aspect="equal", xlabel="$X_1$", ylabel="$X_2$") ax.legend(loc='upper right') plt.xlim(-lim-.5,lim+.5) plt.ylim(-lim-.5,lim+.5) plt.show() plot_data(X,y) # The first thing you'd likely try when finding outlier-type anomalies is just looking for points that are "far away" from 
most of your data. In probability language, you're looking for "low probability events". If we assume the good data is Gaussian distributed (which in our case it is, by sheer luck), you can try fitting a Gaussian to the data and classifying points "too far away" (i.e. points that occur in the tails of the distribution) as anomalies. Note this technique will not work well if your "normal" data is multimodal (i.e. has multiple clusters)! # # In sklearn this is done with the `EllipticEnvelope` class. One caveat to use this technique is we have to tell the API how many points about we think are outliers. Assuming we know roughly what the fraction of outliers is, in our case about 1.5%, we can specify that (0.015), otherwise one has to guess. Generally the number should be small, no more than a few percent. # # From the confusion matrix, we can see that the model correctly identified 10 of our 22 anomalies. Is this good? # + def get_scores(y, yhat): print('precision: ', round(precision_score(y,yhat),4)) print('recall: ', round(recall_score(y,yhat),4)) print('f1: ', round(f1_score(y,yhat),4)) print('number of anomalies found:',(yhat[y==1] == y[y==1]).sum(),'out of',len(y[y==1])) print('confusion matrix:\n', confusion_matrix(y,yhat)) def adjust_labels(yhat): yhat[yhat==1] = 0 yhat[yhat==-1] = 1 return yhat # + model = EllipticEnvelope(contamination=0.015) model.fit(X) yhat = model.predict(X) yhat = adjust_labels(yhat) get_scores(y,yhat) # - plot_data(X, y, yhat) # A popular non-parametric technique that doesn't require you to assume anything about your data is isolation forests. These are basically just a small variation on random forests. Instead of predicting a label, it uses the average tree depth to predict whether a point is an anomaly. # # In sklearn this is done using the `IsolationForest` class. Just like with `EllipticEnvelope`, we have to specify the approximate ratio of outliers in the dataset using the `contamination` parameter. 
We again set it to 1.5% since that's roughly how many we have. It looks like Isolation Forests is performing worse than Elliptic Envelopes in this case, only finding 8 of 22 anomalies, at least with minimal tuning to the forest. # + model = IsolationForest(contamination=0.015) model.fit(X) yhat = model.predict(X) yhat = adjust_labels(yhat) get_scores(y,yhat) # - plot_data(X, y, yhat) # Another classic anomaly detection technique is the [one-class](https://en.wikipedia.org/wiki/One-class_classification) support vector machine (SVM). [SVMs](https://en.wikipedia.org/wiki/Support_vector_machine) are fairly old classification models that tend to perform somewhere in between logistic regression and random forests / neural nets. They were very popular like 15 years ago but have since fallen largely out of favor. At any rate, one-class SVMs can perform fairly well for anomaly detection if tuned properly. # # In sklearn these are done using the `OneClassSVM`. The `nu` parameter functions something like contamination above. You can see we're doing a bit better here, identifying 12 of 22 anomalies. Note how many false positives we're picking up though (140). This may or may not concern you depending on your situation. # + model = OneClassSVM(nu=0.1) model.fit(X) yhat = model.predict(X) yhat = adjust_labels(yhat) get_scores(y,yhat) # - plot_data(X, y, yhat) # The last unsupervised technique we mention here is local outlier factor (LOF). LOF tries to detect outliers by density using a variation on KNN. This can allow LOF to work in situations where you're not looking for "outliers" per se, but for regions of abnormal density (think cluster type anomalies). # # In sklearn this is done with the `LocalOutlierFactor` class. We must specify a contamination rate for these. It also may help to adjust the number of `n_neighbors` to catch various densities. You can see that in this case we only catch 10 of 22 anomalies. 
# + model = LocalOutlierFactor(n_neighbors=80, contamination=0.015, n_jobs=-1) yhat = model.fit_predict(X) yhat = adjust_labels(yhat) get_scores(y,yhat) # - plot_data(X, y, yhat) # I want to close by making a point. Suppose we actually did know what the labels were for some of our data. How much better could we do just by knowing that information? Equivalently, how much better could we do if we were able to use supervised learning instead? We quickly answer this question below. # # We assume now, as last week, that we have a labeled dataset, with "1"s being anomalies and "0"s being non-anomalies (sorry this differs from the -1/+1 convention the outlier detectors above use, one of those weird sklearn things). We first split the dataset up into a training and test set, and then apply SMOTE to the training set. We then fit a random forest classifier to the data, evaluate the model, and plot the results. Notice we found all 22 anomalies, including all 6 of the anomalies that were held out for the test set. # # Moral: You can't beat having labels. You will almost always do better using supervised learning techniques if you can find a way to get some labeled data to train with. # + X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.3) X_train_bal,y_train_bal = SMOTE().fit_resample(X_train,y_train) # fit_resample is the current imblearn API (older versions called it fit_sample) model = RandomForestClassifier(min_samples_leaf=5, n_jobs=-1) model.fit(X_train_bal,y_train_bal) yhat = model.predict(X) get_scores(y,yhat) # - plot_data(X, y, yhat)
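To make the Gaussian idea behind `EllipticEnvelope` concrete, here is a minimal from-scratch sketch on synthetic data (not the notebook's dataset; the function name `mahalanobis_anomalies` is ours, not sklearn's): fit a mean and covariance, score each point by its squared Mahalanobis distance from that fit, and flag the top `contamination` fraction as anomalies.

```python
import numpy as np

def mahalanobis_anomalies(X, contamination=0.015):
    """Flag the `contamination` fraction of points farthest, in Mahalanobis
    distance, from a Gaussian fit to the data (the idea behind EllipticEnvelope)."""
    mu = X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    # squared Mahalanobis distance of every point from the fitted Gaussian
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
    threshold = np.quantile(d2, 1.0 - contamination)
    return (d2 > threshold).astype(int)  # 1 = anomaly, 0 = normal

# Synthetic data: a Gaussian cloud plus 15 far-away outliers.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(985, 2)),
               rng.uniform(6.0, 8.0, size=(15, 2))])
yhat = mahalanobis_anomalies(X, contamination=0.015)
print(yhat.sum(), "anomalies flagged")  # -> 15 anomalies flagged
```

This is essentially what `EllipticEnvelope` does, except sklearn fits a robust covariance estimate (Minimum Covariance Determinant) so that the outliers themselves don't distort the fit.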
notebooks/unsupervised_anomalies.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "skip"} # # To execute before running the slides # + slideshow={"slide_type": "skip"} import unittest def apply_jupyter_patch(): """Monkey patch unittest to be able to run it in the notebook""" def jupyter_unittest_main(**kwargs): if "argv" not in kwargs: kwargs["argv"] = ['ignored'] kwargs["exit"] = False jupyter_unittest_main._original(**kwargs) if unittest.main.__module__ != "unittest.main": # Restore the previous state, in case unittest was already patched unittest.main = unittest.main._original # Apply the patch jupyter_unittest_main._original = unittest.main unittest.main = jupyter_unittest_main apply_jupyter_patch() def polynom(a, b, c): """The function that will be tested.""" delta = (b**2.0) - 4.0 * a * c solutions = [] if delta > 0: solutions.append((-b + (delta**0.5)) / (2.0 * a)) solutions.append((-b - (delta**0.5)) / (2.0 * a)) elif delta == 0: solutions.append(-b / (2.0 * a)) return solutions try: from PyQt5 import Qt qapp = Qt.QApplication.instance() if qapp is None: qapp = Qt.QApplication([]) class PolynomSolver(Qt.QMainWindow): def __init__(self, parent=None): super(PolynomSolver, self).__init__(parent=parent) self.initGui() def initGui(self): self.setWindowTitle("Polynomial Solver") self._inputLine = Qt.QLineEdit(self) self._processButton = Qt.QPushButton(self) self._processButton.setText(u"Solve ax² + bx + c = 0") self._processButton.clicked.connect(self.processing) self._resultWidget = Qt.QLabel(self) widget = Qt.QWidget() layout = Qt.QFormLayout(widget) layout.addRow("Coefs a b c:", self._inputLine) layout.addRow("Solutions:", self._resultWidget) layout.addRow(self._processButton) self.setCentralWidget(widget) def getCoefs(self): text = self._inputLine.text() data = [float(i) for i in
text.split()] a, b, c = data return a, b, c def processing(self): try: a, b, c = self.getCoefs() except Exception as e: Qt.QMessageBox.critical(self, "Error while reading polynomial coefficients", str(e)) return try: result = polynom(a, b, c) except Exception as e: Qt.QMessageBox.critical(self, "Error while computing the polynomial solutions", str(e)) return if len(result) == 0: text = "No solution" else: text = ["%0.3f" % x for x in result] text = " ".join(text) self._resultWidget.setText(text) except ImportError as e: print(str(e)) # + [markdown] slideshow={"slide_type": "slide"} # Testing # ======= # # - Introduction # - Python `unittest` module # - Estimate tests' quality # - Continuous integration # + [markdown] slideshow={"slide_type": "slide"} # # What is it? # # - Part of software quality # - A task consisting of checking that the **program** is working as expected # - Manually written **tests** which can be automatically executed # # <img src="img/test.svg" style="height:50%;margin-left:auto;margin-right:auto;padding:2em;"> # + [markdown] slideshow={"slide_type": "notes"} # # Presenter Notes # # - A test injects input into the program, and checks its output # - It answers whether the code is valid or not (for a specific use case) # + [markdown] slideshow={"slide_type": "slide"} # # Different methodologies # # - Test-driven development: Always and before anything else # # <img src="img/ttd-workflow.svg" style="height:50%;margin-left:auto;margin-right:auto;padding:2em;"> # # - <NAME> (2014). [Test-Driven Development with Python. O'Reilly](https://www.oreilly.com/library/view/test-driven-development-with/9781449365141/) # + [markdown] slideshow={"slide_type": "slide"} # # Why testing?
# # | Benefits | Disadvantages | # |-----------------------------------------|---------------------------------------------| # | Find problems early | Extra work (to write and execute) | # | Globally reduce the cost | Maintain test environments | # | Validate the code against specifications | Does not mean it's bug-free | # | Make changes to the code safer | More difficult to change the code behaviour | # | Improve the software design | &nbsp; | # | It's part of documentation and examples | &nbsp; | # # + [markdown] slideshow={"slide_type": "notes"} # # Presenter Notes # # - 30 percent of the time of a project # - Cost reduction: If you find a problem late (at deployment for example) the cost can be very high # - Automated tests (in CI) reduce the cost of execution, and help code review # - Having the structure set up for testing encourages writing tests # # + [markdown] slideshow={"slide_type": "slide"} # # What kinds of tests? # # - <span style="color:#ee5aa0">**Unit tests**</span>: Test independent pieces of code # - <span style="color:#19bdcd">**Integration tests**</span>: Test components together # - <span style="color:#1aac5b">**System tests**</span>: Test a completely integrated application # - <span style="color:#b8b800">**Acceptance tests**</span>: Test the application with the customer # # <img src="img/test-kind.svg" style="height:50%;margin-left:auto;margin-right:auto;padding:0em;"> # + [markdown] slideshow={"slide_type": "notes"} # # Presenter Notes # # The test pyramid is a concept developed by <NAME>, described in his book "Succeeding with Agile" # # - Unit tests (dev point of view, fast, low cost) # - Integration tests # - System tests # - Acceptance tests (customer point of view, but slow, expensive, and can't be automated) # # - Cost: unit << integration (not always true) << system # - Fast to execute: unit >> integration >> system # # + [markdown] slideshow={"slide_type": "slide"} # # Where to put the tests?
# # Separate tests from the source code: # # - Run the tests from the command line. # - Distribute tests and code separately. # - [...](https://docs.python.org/3/library/unittest.html#organizing-test-code) # # Folder structure: # # - In a separate `test/` folder. # - In `test` sub-packages in each Python package/sub-package, # so that tests remain close to the source code. # Tests are installed with the package and can be run from the installation. # - A `test_*.py` for each module and script (and more if needed). # - Consider separating tests that are long to run from the others. # # + [markdown] slideshow={"slide_type": "slide"} # # Where to put the tests? # # - `project` # - `setup.py` # - `run_tests.py` # - `package/` # - `__init__.py` # - `module1.py` # - `test/` # - `__init__.py` # - `test_module1.py` # - `subpackage/` # - `__init__.py` # - `module1.py` # - `module2.py` # - `test/` # - `__init__.py` # - `test_module1.py` # - `test_module2.py` # + [markdown] slideshow={"slide_type": "slide"} # # `unittest` Python module # # [unittest](https://docs.python.org/3/library/unittest.html) is the default Python module for testing. # # It provides features to: # # - Write tests # - Discover tests # - Run those tests # # Other frameworks exist: # # - [pytest](http://pytest.org/) # + [markdown] slideshow={"slide_type": "slide"} # # Write and run tests # # The class `unittest.TestCase` is the base class for writing tests for # Python code. # # The function `unittest.main()` provides a command line interface to # discover and run the tests. # + import unittest class TestMyTestCase(unittest.TestCase): def test_my_test(self): # Code to test a = round(3.1415) # Expected result b = 3 self.assertEqual(a, b, msg="") if __name__ == "__main__": unittest.main() # + [markdown] slideshow={"slide_type": "slide"} # # Assertion functions # # - Argument(s) to compare/evaluate. # - An additional error message.
# # - `assertEqual(a, b)` checks that `a == b` # - `assertNotEqual(a, b)` checks that `a != b` # - `assertTrue(x)` checks that `bool(x) is True` # - `assertFalse(x)` checks that `bool(x) is False` # - `assertIs(a, b)` checks that `a is b` # - `assertIsNone(x)` checks that `x is None` # - `assertIn(a, b)` checks that `a in b` # - `assertIsInstance(a, b)` checks that `isinstance(a, b)` # # There's more, see the [unittest TestCase documentation](https://docs.python.org/3/library/unittest.html#unittest.TestCase) # or the [Numpy testing documentation](http://docs.scipy.org/doc/numpy/reference/routines.testing.html). # + [markdown] slideshow={"slide_type": "slide"} # # Example # # Test the `polynom` function provided in the `pypolynom` sample project. # # It solves the equation $ax^2 + bx + c = 0$. # + import unittest class TestPolynom(unittest.TestCase): def test_0_roots(self): result = polynom(2, 0, 1) self.assertEqual(len(result), 0) def test_1_root(self): result = polynom(2, 0, 0) self.assertEqual(len(result), 1) self.assertEqual(result, [0]) def test_2_roots(self): result = polynom(4, 0, -4) self.assertEqual(len(result), 2) self.assertEqual(set(result), set([-1, 1])) if __name__ == "__main__": unittest.main(defaultTest="TestPolynom") # unittest.main(verbosity=2, defaultTest="TestPolynom") # + [markdown] slideshow={"slide_type": "slide"} # # Run from command line arguments # - # Auto-discover tests under the current path # + active="" # $ python3 -m unittest # - # Running a specific `TestCase`: # + active="" # $ python3 -m unittest myproject.test.TestMyTrueRound # # $ python3 test_builtin_round.py TestMyTrueRound # - # Running a specific test method: # + active="" # $ python3 -m unittest myproject.test.TestMyTrueRound.test_positive # # $ python3 test_builtin_round.py TestMyTrueRound.test_positive # + [markdown] slideshow={"slide_type": "slide"} # # Fixture # # Tests might need to share some common initialisation/finalisation (e.g., create a temporary directory).
# # This can be implemented in the ``setUp`` and ``tearDown`` methods of ``TestCase``. # Those methods are called before and after each test. # + class TestCaseWithFixture(unittest.TestCase): def setUp(self): self.file = open("img/test-pyramid.svg", "rb") print("open file") def tearDown(self): self.file.close() print("close file") def test_1(self): foo = self.file.read() # do some test on foo print("test 1") def test_2(self): foo = self.file.read() # do some test on foo print("test 2") if __name__ == "__main__": unittest.main(defaultTest='TestCaseWithFixture') # + [markdown] slideshow={"slide_type": "slide"} # # Testing exceptions # + class TestPolynom(unittest.TestCase): def test_argument_error(self): try: polynom(0, 0, 0) self.fail() except ZeroDivisionError: self.assertTrue(True) def test_argument_error__better_way(self): with self.assertRaises(ZeroDivisionError): result = polynom(0, 0, 0) if __name__ == "__main__": unittest.main(defaultTest='TestPolynom') # - # `TestCase.assertRaisesRegex` (formerly `assertRaisesRegexp`) also checks the message of the exception. # + [markdown] slideshow={"slide_type": "slide"} # # Parametric tests # # Running the same test with multiple values # # Problems: # # - The first failure stops the test, so the remaining test values are not processed. # - There is no information on the value for which the test has failed.
# + class TestPolynom(unittest.TestCase): TESTCASES = { (2, 0, 1): [], (2, 0, 0): [0], (4, 0, -4): [1, -1] } def test_all(self): for arguments, expected in self.TESTCASES.items(): self.assertEqual(polynom(*arguments), expected) def test_all__better_way(self): for arguments, expected in self.TESTCASES.items(): with self.subTest(arguments=arguments, expected=expected): self.assertEqual(polynom(*arguments), expected) if __name__ == "__main__": unittest.main(defaultTest='TestPolynom') # + [markdown] slideshow={"slide_type": "slide"} # # Class fixture # - class TestSample(unittest.TestCase): @classmethod def setUpClass(cls): # Called before all the tests of this class pass @classmethod def tearDownClass(cls): # Called after all the tests of this class pass # + [markdown] slideshow={"slide_type": "slide"} # # Module fixture # # + def setUpModule(): # Called before all the tests of this module pass def tearDownModule(): # Called after all the tests of this module pass # + [markdown] slideshow={"slide_type": "slide"} # # Skipping tests # # If a test requires a specific OS, device, library... # + import unittest, os, sys def is_gui_available(): # Is there a display if sys.platform.startswith('linux'): if os.environ.get('DISPLAY', '') == '': return False # Is the optional library there try: import PyQt5 except ImportError: return False return True @unittest.skipUnless(is_gui_available(), 'GUI not available') class TestPolynomGui(unittest.TestCase): def setUp(self): if not is_gui_available(): self.skipTest('GUI not available') def test_1(self): if not is_gui_available(): self.skipTest('GUI not available') @unittest.skipUnless(is_gui_available(), 'GUI not available') def test_2(self): pass if __name__ == "__main__": unittest.main(defaultTest='TestPolynomGui') # + [markdown] slideshow={"slide_type": "slide"} # # Test numpy # # Numpy provides modules for unit tests. See the [Numpy testing documentation](http://docs.scipy.org/doc/numpy/reference/routines.testing.html).
# + import numpy class TestNumpyArray(unittest.TestCase): def setUp(self): self.data1 = numpy.array([1, 2, 3, 4, 5, 6, 7]) self.data2 = numpy.array([1, 2, 3, 4, 5, 6, 7.00001]) # def test_equal__cant_work(self): # self.assertEqual(self.data1, self.data2) # self.assertTrue((self.data1 == self.data2).all()) def test_equal(self): self.assertTrue(numpy.allclose(self.data1, self.data2, atol=0.0001)) def test_equal__even_better(self): numpy.testing.assert_allclose(self.data1, self.data2, atol=0.0001) if __name__ == "__main__": unittest.main(defaultTest='TestNumpyArray') # + [markdown] slideshow={"slide_type": "slide"} # # Test resources # # How to handle test data? # # Separate (possibly huge) test data from the Python package. # # Download test data and store it in a temporary directory during the tests if it is not already available. # # Example: [silx.utils.ExternalResources](https://github.com/silx-kit/silx/blob/master/silx/utils/utilstest.py) # + [markdown] slideshow={"slide_type": "slide"} # # QTest # # For GUIs based on `PyQt` or `PySide` it is possible to use Qt's [QTest](http://doc.qt.io/qt-5/qtest.html). # # It provides the basic functionalities for GUI testing. # It allows sending keyboard and mouse events to widgets. # + from PyQt5.QtTest import QTest class TestPolynomGui(unittest.TestCase): def test_type_and_process(self): widget = PolynomSolver() QTest.qWaitForWindowExposed(widget) QTest.keyClicks(widget._inputLine, '2.000 0 -1', delay=100) # Wait 100ms QTest.mouseClick(widget._processButton, Qt.Qt.LeftButton, pos=Qt.QPoint(1, 1)) self.assertEqual(widget._resultWidget.text(), "0.707 -0.707") if __name__ == "__main__": unittest.main(defaultTest='TestPolynomGui') # - # Tightly coupled with the code it tests. # It needs to know the widget's instance and hard-coded positions for mouse events. # + [markdown] slideshow={"slide_type": "slide"} # # Chaining tests # # How to run tests from many ``TestCase`` classes and many files at once: # # - Explicit: # Full control, boilerplate code.
# # - Automatic: # No control # # - Mixed approach # # # The [TestSuite](https://docs.python.org/3/library/unittest.html#unittest.TestSuite) class aggregates test cases and test suites: # # - It allows testing specific use cases # - It gives full control of the test sequence # - But it requires some boilerplate code # + [markdown] slideshow={"slide_type": "slide"} # # Chaining tests example # + def suite_without_gui(): loadTests = unittest.defaultTestLoader.loadTestsFromTestCase suite = unittest.TestSuite() suite.addTest(loadTests(TestPolynom)) return suite def suite_with_gui(): loadTests = unittest.defaultTestLoader.loadTestsFromTestCase suite = unittest.TestSuite() suite.addTest(suite_without_gui()) suite.addTest(loadTests(TestPolynomGui)) return suite if __name__ == "__main__": # unittest.main(defaultTest='suite_without_gui') unittest.main(defaultTest='suite_with_gui') # + [markdown] slideshow={"slide_type": "slide"} # # Estimate tests' quality # # Using [`coverage`](https://coverage.readthedocs.org) to gather coverage statistics while running the tests (`pip install coverage`). # + active="" # $ python -m coverage run -m unittest # $ python -m coverage report # + active="" # Name Stmts Miss Cover # ---------------------------------------------------- # pypolynom\__init__.py 1 0 100% # pypolynom\polynom.py 19 2 89% # pypolynom\test\__init__.py 0 0 100% # pypolynom\test\test_polynom.py 29 0 100% # ---------------------------------------------------- # TOTAL 49 2 96% # + [markdown] slideshow={"slide_type": "slide"} # # Estimate tests' quality # # Execute the tests and generate an output file per module with per-line annotations. # + active="" # $ python -m coverage annotate # # $ ls pypolynom # 30/03/2019 19:15 1,196 polynom.py # 30/03/2019 19:17 1,294 polynom.py,cover # + active="" # > def polynom(a, b, c): # > delta = pow2(b) - 4.0 * a * c # > solutions = [] # > if delta > 0: # ! solutions.append((-b + sqrt(delta)) / (2.0 * a)) # !
solutions.append((-b - sqrt(delta)) / (2.0 * a)) # > elif delta == 0: # > solutions.append(-b/(2.0*a)) # > return solutions # + [markdown] slideshow={"slide_type": "slide"} # # Continuous integration # # Automatically testing software for each change applied to the source code. # # Benefits: # # - Be aware of problems early # - Before merging a change into the code # - On third-party library updates (sometimes before the release) # - Reduce the cost in case of problems # - Improve contributions and team work # # Costs: # # - Set-up and maintenance # - Tests need to be automated # # + [markdown] slideshow={"slide_type": "slide"} # # Continuous integration # # - [Travis-CI](https://travis-ci.org/) (Linux and MacOS), [AppVeyor](http://www.appveyor.com/) (Windows), gitlab-CI (https://gitlab.esrf.fr)... # - A `.yml` file describing the environment, build, installation, and test process # # <img src="img/ci-workflow.svg" style="width:40%;margin-left:auto;margin-right:auto;padding:0em;"> # + [markdown] slideshow={"slide_type": "slide"} # # Continuous integration: Configuration # # Example of configuration with Travis # + active="" # language: python # # matrix: # include: # - python: 3.6 # - python: 3.7 # # before_install: # Upgrade distribution modules # - python -m pip install --upgrade pip # - pip install --upgrade setuptools wheel # # install: # Generate source archive and wheel # - python setup.py bdist_wheel # # before_script: # Install wheel package # - pip install --pre dist/pypolynom*.whl # # script: # Run the tests from the installed module # - mkdir tmp ; cd tmp # - python -m unittest pypolynom.test.suite_without_gui # + [markdown] slideshow={"slide_type": "slide"} # # Sum-up # # - A coverage of most cases is a good start (80/20 rule). # - Tests should be done early, to identify design problems and to improve team work. # - The amount of work must not be underestimated: designing good tests takes about 1/3 of a project's resources.
# - Aiming at exhaustive tests is not needed and utopian. # - To be testable, an application has to be architected accordingly. # - Continuous integration is particularly useful to prevent regressions, and helps contributions. # - Next step: Continuous deployment.
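As a complement to the command-line invocations above, discovery and running can also be driven programmatically, which is handy for a `run_tests.py` entry point like the one in the project layout shown earlier. A minimal sketch (the helper name `run_all_tests` is ours, not part of unittest):

```python
import os
import tempfile
import unittest

def run_all_tests(start_dir):
    """Discover every test_*.py below start_dir and run it: the programmatic
    equivalent of `python3 -m unittest discover`."""
    suite = unittest.defaultTestLoader.discover(start_dir=start_dir, pattern="test_*.py")
    runner = unittest.TextTestRunner(verbosity=2)
    return runner.run(suite).wasSuccessful()

# Demonstrate on a throwaway directory containing one passing test module.
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "test_demo.py"), "w") as f:
    f.write("import unittest\n"
            "class TestDemo(unittest.TestCase):\n"
            "    def test_ok(self):\n"
            "        self.assertEqual(round(3.1415), 3)\n")
all_passed = run_all_tests(demo_dir)
print("all tests passed:", all_passed)  # -> all tests passed: True
```

Returning a boolean makes it easy to turn the result into a process exit code in a real `run_tests.py`.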
software_engineering/5_Test/notebook.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline import matplotlib.pyplot as plt import numpy as np # # Case Study - Text classification for SMS spam detection # We first load the text data from the `dataset` directory that should be located in your notebooks directory, which we created by running the `fetch_data.py` script from the top level of the GitHub repository. # # Furthermore, we perform some simple preprocessing and split the data array into two parts: # # 1. `text`: A list of strings, where each entry contains the text of one SMS message # 2. `y`: our SPAM vs HAM labels stored in binary; a 1 represents a spam message, and a 0 represents a ham (non-spam) message. # + import os with open(os.path.join("datasets", "smsspam", "SMSSpamCollection")) as f: lines = [line.strip().split("\t") for line in f.readlines()] text = [x[1] for x in lines] y = [int(x[0] == "spam") for x in lines] # - text[:10] y[:10] print('Number of ham and spam messages:', np.bincount(y)) type(text) type(y) # Next, we split our dataset into two parts, a training and a test set: # + from sklearn.model_selection import train_test_split text_train, text_test, y_train, y_test = train_test_split(text, y, random_state=42, test_size=0.25, stratify=y) # - # Now, we use the CountVectorizer to parse the text data into a bag-of-words model.
# + from sklearn.feature_extraction.text import CountVectorizer print('CountVectorizer defaults') CountVectorizer() # + vectorizer = CountVectorizer() vectorizer.fit(text_train) X_train = vectorizer.transform(text_train) X_test = vectorizer.transform(text_test) # - print(len(vectorizer.vocabulary_)) X_train.shape print(vectorizer.get_feature_names()[:20]) print(vectorizer.get_feature_names()[2000:2020]) print(X_train.shape) print(X_test.shape) # ### Training a Classifier on Text Features # We can now train a classifier, for instance a logistic regression classifier, which is a fast baseline for text classification tasks: # + from sklearn.linear_model import LogisticRegression clf = LogisticRegression() clf # - clf.fit(X_train, y_train) # We can now evaluate the classifier on the testing set. Let's first use the built-in score function, which is the rate of correct classification in the test set: clf.score(X_test, y_test) # We can also compute the score on the training set to see how well we do there: clf.score(X_train, y_train) # # Visualizing important features def visualize_coefficients(classifier, feature_names, n_top_features=25): # get coefficients with large absolute values coef = classifier.coef_.ravel() positive_coefficients = np.argsort(coef)[-n_top_features:] negative_coefficients = np.argsort(coef)[:n_top_features] interesting_coefficients = np.hstack([negative_coefficients, positive_coefficients]) # plot them plt.figure(figsize=(15, 5)) colors = ["red" if c < 0 else "blue" for c in coef[interesting_coefficients]] plt.bar(np.arange(2 * n_top_features), coef[interesting_coefficients], color=colors) feature_names = np.array(feature_names) plt.xticks(np.arange(1, 2 * n_top_features + 1), feature_names[interesting_coefficients], rotation=60, ha="right"); visualize_coefficients(clf, vectorizer.get_feature_names()) # + vectorizer = CountVectorizer(min_df=2) vectorizer.fit(text_train) X_train = vectorizer.transform(text_train) X_test = 
vectorizer.transform(text_test) clf = LogisticRegression() clf.fit(X_train, y_train) print(clf.score(X_train, y_train)) print(clf.score(X_test, y_test)) # - len(vectorizer.get_feature_names()) print(vectorizer.get_feature_names()[:20]) visualize_coefficients(clf, vectorizer.get_feature_names()) # <img src="figures/supervised_scikit_learn.png" width="100%"> # <div class="alert alert-success"> # <b>EXERCISE</b>: # <ul> # <li> # Use TfidfVectorizer instead of CountVectorizer. Are the results better? How are the coefficients different? # </li> # <li> # Change the parameters min_df and ngram_range of the TfidfVectorizer and CountVectorizer. How does that change the important features? # </li> # </ul> # </div> # + from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.model_selection import train_test_split text_train, text_test, y_train, y_test = train_test_split(text, y, random_state=42, test_size=0.25, stratify=y) tf_vec = TfidfVectorizer() tf_vec.fit(text_train) X_train = tf_vec.transform(text_train) X_test = tf_vec.transform(text_test) clf = LogisticRegression() clf.fit(X_train, y_train) print(clf.score(X_train, y_train), clf.score(X_test, y_test)) # - visualize_coefficients(clf, tf_vec.get_feature_names()) # + # # %load solutions/12A_tfidf.py from sklearn.feature_extraction.text import TfidfVectorizer vectorizer = TfidfVectorizer() vectorizer.fit(text_train) X_train = vectorizer.transform(text_train) X_test = vectorizer.transform(text_test) clf = LogisticRegression() clf.fit(X_train, y_train) print(clf.score(X_train, y_train)) print(clf.score(X_test, y_test)) visualize_coefficients(clf, vectorizer.get_feature_names()) # + # # %load solutions/12B_vectorizer_params.py # CountVectorizer vectorizer = CountVectorizer(min_df=10, ngram_range=(1, 3)) vectorizer.fit(text_train) X_train = vectorizer.transform(text_train) X_test = vectorizer.transform(text_test) clf = LogisticRegression() clf.fit(X_train, y_train) visualize_coefficients(clf, 
vectorizer.get_feature_names()) # TfidfVectorizer vectorizer = TfidfVectorizer(min_df=10, ngram_range=(1, 3)) vectorizer.fit(text_train) X_train = vectorizer.transform(text_train) X_test = vectorizer.transform(text_test) clf = LogisticRegression() clf.fit(X_train, y_train) visualize_coefficients(clf, vectorizer.get_feature_names()) # - (clf.coef_.ravel().shape)
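To tie the pieces together, the vectorizer and the classifier can be chained into a single sklearn pipeline, so raw strings go in and predictions come out. A small sketch on a toy corpus (the real notebook would use `text_train` and `y_train` from above; the toy messages and the `spam_clf` name here are made up for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in corpus; the real notebook uses text_train / y_train instead.
train_texts = ["win a free prize now",
               "call now to claim your free cash",
               "are we still meeting for lunch",
               "see you at home tonight"]
train_labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

# Chaining the vectorizer and classifier lets us feed in raw strings directly.
spam_clf = make_pipeline(CountVectorizer(), LogisticRegression())
spam_clf.fit(train_texts, train_labels)

print(spam_clf.predict(["free prize, call now"]))   # -> [1]
print(spam_clf.predict(["lunch at home tonight"]))  # -> [0]
```

Bundling the vectorizer into the pipeline also prevents a subtle leak: the vocabulary is rebuilt from the training fold only whenever the pipeline is refit, e.g. inside cross-validation.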
notebooks/12.Case_Study-SMS_Spam_Detection.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="Ih7gcr9o37cL" colab_type="text" # # Variables # + id="4HLjoEfF3uLf" colab_type="code" age = 27 name = 'meraz' print(age) print(name) # + id="SWnApLe04hKV" colab_type="code" weight, name = 50, 'Rahul' print(weight) print(name) # + id="WjvDboVZ47m3" colab_type="code" a = 5 b = 7 print(a, b) a, b = b, a # Now swap the variables print(a, b) # + id="Tgvfbos25IEP" colab_type="code" c = 2 d = 3 print(c, d) # + id="1LLgb81-6m4o" colab_type="code" del c # + id="9FwMO-Ge6o3g" colab_type="code" print(c) # raises NameError: c was deleted above # + [markdown] id="LrwJ3hgTEhnn" colab_type="text" # # Data Types # + [markdown] id="3UlRpcBoyrub" colab_type="text" # ## Int / Float / Complex # + id="OD7VRtYP6rxH" colab_type="code" a, b, c = 0, 2, -5 print(a, b, c) print(type(a), type(b), type(c)) e, f, g = 0.0, 2.05, -13.0 print(e, f, g) print(type(e), type(f), type(g)) # + id="BvHwppcNFpGw" colab_type="code" x = 2 y = 3 z = complex(x, y) print(z) print(type(z)) # + [markdown] id="JbS8uAsk3mm1" colab_type="text" # Conversion between int and float # + id="f-ycRMVg32a3" colab_type="code" a = 3 b = 2.5 print(a, b) print(type(a), type(b)) a = float(a) print(a, b) print(type(a), type(b)) # + [markdown] id="0n3cUeG9y83B" colab_type="text" # --- # ## String / List / Tuple # # + id="PIc9RXEOIWX3" colab_type="code" name1 = 'meraz' name2 = "rahul" name3 = "sujit" print(type(name1), type(name2), type(name3)) # + id="mzjEZwht0OLY" colab_type="code" my_list = ['laptop', 'mobile', 3.147, 10] print(my_list) print(type(my_list)) # + id="xJ_lt8Nx5s3m" colab_type="code" my_tuple = ('laptop', 'mobile', 3.147, 10) print(my_tuple) print(type(my_tuple)) # + [markdown] id="0FuRKlN77CM-" colab_type="text" # ## Set / Dictionary # + id="evvOTlRl6c5-" colab_type="code" my_dict = {"a": 1, "e": 5, "i": 9} print(my_dict) print(type(my_dict)) # + id="VxpqbtL39FZR" colab_type="code" my_set = {1, 2, 3, 4, 5} print(my_set) print(type(my_set)) # + id="1GT0-O3b9XUe" colab_type="code" ramu = {} # note: empty braces create a dict, not a set print(type(ramu)) # + [markdown] id="RbgchDaw_kAR" colab_type="text" # ## Boolean # + id="sCfVN1Na9vnN" colab_type="code" a = 5 b = 4 c = a > b d = b > a print(c, d) print(type(c), type(d)) # + [markdown] id="12xIEJPt_jOu" colab_type="text" # # + [markdown] id="Q4Y09Cwd-fJ3" colab_type="text" # --- # # Python Basic Operations # # + id="jiWwmmMLErRm" colab_type="code" a = 5 b = 2 print("Addition", a+b) print("Subtraction", a-b) print("Multiplication", a*b) print("Division", a/b) print("Modulus", a%b) print("Exponent", a**b) print("FloorDivision", a//b) # + [markdown] id="HwScC7-Gf3Ah" colab_type="text" # ## Python comparison operators # + id="RT4HhI7ClsDX" colab_type="code" a = 5 b = 4 c = 5.0 d = "india" print(a == b) print(a == c) print(a == d) # + [markdown] id="g2ulWCQ5lgcQ" colab_type="text" # ## Python logical operators # + id="8xEhnRxOgRS1" colab_type="code" a = 5 b = 3 c = 3 d = True e = False print(a and b) print(b and a) print(b and c) print(a and d) print(a and e) print(d and e) print(e and d) # + id="GgZIv9VKm-W0" colab_type="code"
executionInfo={"status": "ok", "timestamp": 1584468903997, "user_tz": 420, "elapsed": 1303, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiCDqeiThy6Uz7npH4ERYGnqSgnpBR6l78yLv1B=s64", "userId": "16501587712343191977"}} a = 5 b = 3 c = 3 d = True e = False print (a or b) print (b or a) print(b or c) print(a or d) print(a or e) print(d and e) print(e and d) # + [markdown] id="sW9v_tQbvnjG" colab_type="text" # ##Membership Operator # + id="0wGv5W5Asbpu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="716f7510-450c-4c8b-ecd6-50e594fbf07a" executionInfo={"status": "ok", "timestamp": 1584469722235, "user_tz": 420, "elapsed": 1316, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiCDqeiThy6Uz7npH4ERYGnqSgnpBR6l78yLv1B=s64", "userId": "16501587712343191977"}} x = "meraz" print( 'a' in x) print( 'b' in x) print( '2' in x) # + id="ePEv3zPewCKt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 90} outputId="29c403ef-371f-48c3-8f94-39822459ae38" executionInfo={"status": "ok", "timestamp": 1584469901511, "user_tz": 420, "elapsed": 1727, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiCDqeiThy6Uz7npH4ERYGnqSgnpBR6l78yLv1B=s64", "userId": "16501587712343191977"}} y = [1, 2, 3, 4] print(1 in y) print(3.0 in y) print( 5 in y) print( 6 not in y) # + id="LBtAKG_1wgMN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 90} outputId="6e2083d0-d818-4fbc-b1f8-2c449c1cea19" executionInfo={"status": "ok", "timestamp": 1584469966596, "user_tz": 420, "elapsed": 1312, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiCDqeiThy6Uz7npH4ERYGnqSgnpBR6l78yLv1B=s64", "userId": "16501587712343191977"}} y = (1, 2, 3, 4) print(1 in y) print(3.0 in y) print( 5 in y) print( 6 not in y) # + id="TdCqpAHUxPma" colab_type="code" colab={"base_uri": 
"https://localhost:8080/", "height": 90} outputId="eaaea103-6889-46e3-a0ed-9bd289185001" executionInfo={"status": "ok", "timestamp": 1584469999540, "user_tz": 420, "elapsed": 1325, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiCDqeiThy6Uz7npH4ERYGnqSgnpBR6l78yLv1B=s64", "userId": "16501587712343191977"}} y = {1, 2, 3, 4} print(1 in y) print(3.0 in y) print( 5 in y) print( 6 not in y) # + [markdown] id="tLlD0o9dztE8" colab_type="text" # ##Identity Operator # + id="j7I_S4oBxXpL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="48ad5745-f462-4dcc-ffa4-a550d86dfb52" executionInfo={"status": "ok", "timestamp": 1584471002087, "user_tz": 420, "elapsed": 1381, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiCDqeiThy6Uz7npH4ERYGnqSgnpBR6l78yLv1B=s64", "userId": "16501587712343191977"}} x = 5 y = x print( x is y) print(id(x)) print(id(y)) # + id="Mx30WgLt0v-T" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="37fb2692-8093-41d6-83d0-42886ae26ee0" executionInfo={"status": "ok", "timestamp": 1584471079430, "user_tz": 420, "elapsed": 1370, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiCDqeiThy6Uz7npH4ERYGnqSgnpBR6l78yLv1B=s64", "userId": "16501587712343191977"}} z = 5 print(x is z) print(id(z)) # + id="XD-mNICc1Xlk" colab_type="code" colab={} def fun(): return [1, 2, 3] # + id="3v0ZSqBE6lnq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="7a688ec1-addb-4a76-a4a4-d8726c88a6a7" executionInfo={"status": "ok", "timestamp": 1584473929192, "user_tz": 420, "elapsed": 2620, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiCDqeiThy6Uz7npH4ERYGnqSgnpBR6l78yLv1B=s64", "userId": "16501587712343191977"}} p = [1, 2, 3] t = fun() print( p is t) print(id(p)) print(id(t)) # + id="mVDM26rg653i" 
colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="11cc0a41-f22b-4f96-b078-d925d95de6cf" executionInfo={"status": "ok", "timestamp": 1584474018544, "user_tz": 420, "elapsed": 2539, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiCDqeiThy6Uz7npH4ERYGnqSgnpBR6l78yLv1B=s64", "userId": "16501587712343191977"}} def fun(): return (1, 2, 3) p = (1, 2, 3) t = fun() print( p is t) print(id(p)) print(id(t)) # + id="AfpHkmf0Asio" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="e05ff38b-cda0-4ce7-ff89-9ca2bba3472a" executionInfo={"status": "ok", "timestamp": 1584474064943, "user_tz": 420, "elapsed": 1559, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiCDqeiThy6Uz7npH4ERYGnqSgnpBR6l78yLv1B=s64", "userId": "16501587712343191977"}} def fun(): return 7 p = 7 t = fun() print( p is t) print(id(p)) print(id(t)) # + id="DjeB8hfzA4HZ" colab_type="code" colab={}
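The `is` results above differ between `7` and `[1, 2, 3]` for a reason worth spelling out; a short sketch (this relies on a CPython implementation detail, not a language guarantee):

```python
# CPython caches small integers (-5 through 256), so `is` can be True for
# equal small ints, while every list literal builds a brand-new object.
# Values here are illustrative; use == for values, `is` for identity.
a = 7
b = 7
print(a is b)          # True on CPython: both names point at the cached int 7

p = [1, 2, 3]
t = [1, 2, 3]
print(p == t)          # True: equal values
print(p is t)          # False: two distinct list objects
print(id(p) == id(t))  # False for the same reason
```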
mod_02_python_intro/mod_02_02.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: 'Python 3.7.5 (''.venv'': venv)'
#     language: python
#     name: python3
# ---

# # Boston Housing Classification SVM Evaluation

import sys
sys.path.append("..")

from pyspark.ml.classification import LinearSVC
from pyspark.ml.evaluation import BinaryClassificationEvaluator, MulticlassClassificationEvaluator
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml.pipeline import Pipeline
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.mllib.evaluation import MulticlassMetrics
from pyspark.sql.functions import expr
from pyspark.sql.session import SparkSession
from pyspark.sql.types import BooleanType

from helpers.path_translation import translate_to_file_string
from helpers.data_prep_and_print import add_weight_col, print_confusion_matrix

inputFile = translate_to_file_string("../data/Boston_Housing_Data.csv")

# Spark session creation
spark = (SparkSession
         .builder
         .appName("BostonHousingSVNEval")
         .getOrCreate())

# DataFrame creation using an inferred schema
df = spark.read.option("header", "true") \
    .option("inferSchema", "true") \
    .option("delimiter", ";") \
    .csv(inputFile) \
    .withColumn("CATBOOL", expr("CAT").cast(BooleanType()))
df.printSchema()

# Create the weight column to handle the biased distribution of labels
df_with_weight = add_weight_col(df, "CAT", "classWeightCol")

# Prepare training and test data. 
splits = df_with_weight.randomSplit([0.9, 0.1 ], 12345) training = splits[0] test = splits[1] # Data preprocessing # + featureCols = df_with_weight.columns.copy() featureCols.remove("MEDV") featureCols.remove("CAT") featureCols.remove("CATBOOL") featureCols.remove("classWeightCol") print(featureCols) assembler = VectorAssembler(outputCol="features", inputCols=featureCols) # - # Build the evaluator # + #TODO Add weight column evaluator = BinaryClassificationEvaluator(labelCol="CAT",rawPredictionCol="rawPrediction", metricName="areaUnderROC") #evaluator = MulticlassClassificationEvaluator(labelCol="CAT", predictionCol="prediction", metricName='weightedPrecision') # - # Support Vector Machine Classifier lsvc = LinearSVC(labelCol="CAT",aggregationDepth=2, featuresCol="features" ) # Build the pipeline pipeline = Pipeline(stages= [assembler, lsvc] ) # Build the paramGrid # TODO Add your settings there paramGrid = ParamGridBuilder().addGrid(lsvc.maxIter, [100])\ .addGrid(lsvc.regParam, [0.1]) \ .build() # Build the CrossValidator cvSVM = CrossValidator(estimator=pipeline, evaluator=evaluator, \ estimatorParamMaps=paramGrid, numFolds=5, parallelism=2) # Train the model cvSVMModel = cvSVM.fit(training) # Test the model predictions = cvSVMModel.transform(test) predictions.show() # # Evaluate the Model # ## Area under ROC accuracy = evaluator.evaluate(predictions) print("Test Error",(1.0 - accuracy)) # ## Confusion Matrix predictionAndLabels = predictions.select("prediction", "CAT").rdd.map(lambda p: [p[0], float(p[1])]) # Map to RDD prediction|label labels = predictionAndLabels.map(lambda x: x[1]).distinct().collect() # List of all labels metrics = MulticlassMetrics(predictionAndLabels) print_confusion_matrix(spark, metrics.confusionMatrix()) # + # TODO print and evaluate on MulticlassMetrics metrics # Confusion Matrix # - # ## Statistics per label for label in labels: print("Class %f precision = %f\n" % (label , metrics.precision(label))) # TODO add additional statistics 
for the label (recall, ...) # ## Weighted stats # + #TODO print weighted Stats # + ## Summary stats # - print(f"Recall = {metrics.recall(1.0)}") print(f"Precision = {metrics.precision(1.0)}") print(f"Accuracy = {metrics.accuracy}") print(f"F1 = {metrics.fMeasure(1.0)}") spark.stop()
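The TODO cells above ask for additional per-label and weighted statistics. As a plain-Python reference for what `MulticlassMetrics` reports (the prediction/label pairs below are made up, and no Spark session is needed):

```python
from collections import Counter

# Made-up (prediction, label) pairs standing in for the predictionAndLabels RDD;
# this only illustrates the definitions behind the MulticlassMetrics numbers.
pairs = [(1.0, 1.0), (1.0, 0.0), (0.0, 0.0), (0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

classes = sorted({lab for _, lab in pairs})
support = Counter(lab for _, lab in pairs)
n = len(pairs)

def precision(c):
    # of everything predicted as class c, how much really is class c
    predicted = [lab for p, lab in pairs if p == c]
    return sum(1 for lab in predicted if lab == c) / len(predicted) if predicted else 0.0

def recall(c):
    # of everything that really is class c, how much was predicted as c
    actual = [p for p, lab in pairs if lab == c]
    return sum(1 for p in actual if p == c) / len(actual) if actual else 0.0

# weighted stats average the per-class values, weighted by class frequency
weighted_precision = sum(precision(c) * support[c] / n for c in classes)
weighted_recall = sum(recall(c) * support[c] / n for c in classes)
print(weighted_precision, weighted_recall)
```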
exercises/boston_housing_classification_svm_evaluation.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# cd C:\Users\mmatousek\GIT\recsys

# %load_ext autoreload
# %autoreload 2

# # Original

from spotlight.cross_validation import random_train_test_split
from spotlight.datasets.movielens import get_movielens_dataset
from spotlight.evaluation import rmse_score
from spotlight.factorization.explicit import ExplicitFactorizationModel

dataset = get_movielens_dataset(variant='100K')
train, test = random_train_test_split(dataset)

model = ExplicitFactorizationModel(n_iter=13)
model.fit(train, verbose=True)

rmse = rmse_score(model, test)
rmse

# # Explore

import numpy as np

def train_test_split(interactions, random_state=None):
    # fall back to a fresh RandomState when no seed source is supplied
    if random_state is None:
        random_state = np.random.RandomState()
    shuffle_indices = np.arange(len(interactions.user_ids))
    random_state.shuffle(shuffle_indices)
    return shuffle_indices
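The `# Explore` fragment above is reimplementing spotlight's shuffle-based split. A self-contained sketch of the idea (`shuffle_split` is a hypothetical helper name, not spotlight's API, and operates on a plain item count rather than an `Interactions` object):

```python
import numpy as np

# Hypothetical helper: shuffle all indices once, then cut the shuffled order
# into a train part and a test part.
def shuffle_split(n_items, test_fraction=0.2, random_state=None):
    if random_state is None:
        random_state = np.random.RandomState()
    indices = np.arange(n_items)
    random_state.shuffle(indices)                    # in-place shuffle
    cutoff = int((1.0 - test_fraction) * n_items)    # size of the train part
    return indices[:cutoff], indices[cutoff:]

train_idx, test_idx = shuffle_split(10, random_state=np.random.RandomState(42))
print(len(train_idx), len(test_idx))  # 8 2
```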
notebooks/exploration/0.1_spotlight2pytorch.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from tensorflow.keras.preprocessing.sequence import pad_sequences import data_source.preproc as pp import h5py import numpy as np import unicodedata import string # + class DataGenerator(): def __init__(self, source, batchsize, maxTextLenght, predict = False): """DataGenerator class, functions: next_train_batch next_valid_batch next_test_batch """ self.charset = string.printable[:95] #All possible chars that the model will predict self.maxTextLenght = maxTextLenght self.tokenizer = Tokenizer(self.charset, self.maxTextLenght) self.batchsize = batchsize self.partitions = ['test'] if predict else ['train', 'valid', 'test'] self.size = dict() self.steps = dict() self.index = dict() self.dataset = dict() with h5py.File(source, "r") as f: for pt in self.partitions: self.dataset[pt] = dict() self.dataset[pt]['dt'] = f[pt]['dt'][:] self.dataset[pt]['gt'] = f[pt]['gt'][:] for pt in self.partitions: # decode sentences from byte self.dataset[pt]['gt'] = [x.decode() for x in self.dataset[pt]['gt']] # set size and setps self.size[pt] = len(self.dataset[pt]['gt']) self.steps[pt] = int(np.ceil(self.size[pt] / self.batchsize)) self.index[pt] = 0 def next_train_batch(self): "get the next batch, function yields batch" while(True): if self.index['train'] >= self.size["train"]: #reset index if all trainings example have been taken self.index['train'] = 0 #index -> index + batchsize and index -> batchsize index = self.index['train'] until = index + self.batchsize self.index['train'] = until x_train = self.dataset['train']['dt'][index:until] y_train = self.dataset['train']['gt'][index:until] #Augment trainings data: x_train = pp.augmentation(x_train, rotation_range=1.5, scale_range=0.05, height_shift_range=0.025, width_shift_range=0.05, erode_range=5, dilate_range=3) 
x_train = pp.normalization(x_train) y_train = [self.tokenizer.encode(i) for i in y_train] y_train = pad_sequences(y_train, maxlen=self.tokenizer.maxlen, padding="post") yield(x_train, y_train, []) def next_valid_batch(self): "get the next validation batch, function yields the batch" while(True): if self.index['valid'] >= self.size['valid']: self.index['valid'] = 0 index = self.index['valid'] until = index + self.batchsize self.index['valid'] = until x_valid = self.dataset['valid']['dt'][index:until] y_valid = self.dataset['valid']['gt'][index:until] x_valid = pp.normalization(x_valid) y_valid = [self.tokenizer.encode(i) for i in y_valid] y_valid = pad_sequences(y_valid, maxlen=self.tokenizer.maxlen, padding="post") yield (x_valid, y_valid, []) def next_test_batch(self): while(True): if self.index['test'] >= self.size['test']: self.index['test'] = 0 index = self.index['test'] until = index + self.batchsize self.index['test'] = until x_test = self.dataset['test']['dt'][index:until] x_test = pp.normalization(x_test) yield x_test # + class Tokenizer(): def __init__(self, chars, max_TextLenght): """Tokenizerclass Functions: encode() char -> numpy vector decode() numpy vector -> chars remove_tokens() removes PAD token from text """ self.PAD_TK, self.UNK_TK = "¶", "¤" self.chars = (self.PAD_TK + self.UNK_TK + chars) self.PAD = self.chars.find(self.PAD_TK) self.UNK = self.chars.find(self.UNK_TK) self.vocab_size = len(self.chars) self.maxlen = max_TextLenght def encode(self, text): "encode data into Vector char -> index" text = unicodedata.normalize("NFKD", text).encode("ASCII", "ignore").decode("ASCII") text = " ".join(text.split()) #self.test = 0 encoded = [] for item in text: #get a Vector with a number from 0 to len(chars), each letter gets a number index = self.chars.find(item) index = self.UNK if index == -1 else index encoded.append(index) #if self.test == 10: #print(encoded) #self.test = self.test + 1 return np.asarray(encoded) def decode(self, text): """Decode 
vector to text""" decoded = "".join([self.chars[int(x)] for x in text if x > -1]) decoded = self.remove_tokens(decoded) decoded = pp.text_standardize(decoded) return decoded def remove_tokens(self, text): """Remove tokens (PAD) from text""" return text.replace(self.PAD_TK, "")
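The `Tokenizer` above maps each character to its index in the charset via `chars.find`. A standalone sketch of that scheme, stripped of the `preproc`-dependent standardization step (function names here are illustrative, not the class API):

```python
import string

# Same charset layout as the Tokenizer: PAD at index 0, UNK at index 1,
# then the 95 printable ASCII characters.
PAD_TK, UNK_TK = "¶", "¤"
chars = PAD_TK + UNK_TK + string.printable[:95]

def encode(text):
    # unknown characters map to the UNK token at index 1
    return [chars.find(c) if c in chars else 1 for c in text]

def decode(indices):
    # drop PAD tokens on the way back out
    return "".join(chars[i] for i in indices if chars[i] != PAD_TK)

seq = encode("Hi!")
print(seq)
print(decode(seq))  # Hi!
```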
src/DataGenerator.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + [markdown] slideshow={"slide_type": "slide"} # Setting Up a Twitter Bot in 5 Easy Steps # ==== # # 1. Get our project directory organized # 2. Make a Twitter app # 3. Make a Twitter account # 4. OAuth secret handshake # 5. Store our secrets somewhere safe # + [markdown] slideshow={"slide_type": "slide"} # ### 1. First things first, let's get our project directory organized # # if you're running this locally check out [README.md](https://github.com/nmacri/twitter-bots-smw-2016/blob/master/README.md) otherwise skip to the next section... # + slideshow={"slide_type": "subslide"} # # cd into your project directory in my case this is, but yours may be different # %cd ~/twitter-bots-smw-2016/ # + slideshow={"slide_type": "slide"} # install dependencies if you haven't already :) # !pip install -r requirements.txt # + slideshow={"slide_type": "subslide"} # import your libraries import twitter import json import webbrowser from rauth import OAuth1Service # + [markdown] slideshow={"slide_type": "slide"} # ### 2. Good, that went smoothly, now let's go deal with twitter # # + [markdown] slideshow={"slide_type": "notes"} # To run our bot we'll need to use a protocol called [OAuth](https://www.wikiwand.com/en/OAuth) which sounds a little bit daunting, but really it's just a kind of secret handshake that we agree on with twitter so they know that we're cool. # + [markdown] slideshow={"slide_type": "subslide"} # ![](https://upload.wikimedia.org/wikipedia/commons/thumb/d/d2/Oauth_logo.svg/239px-Oauth_logo.svg.png) ![](https://49.media.tumblr.com/tumblr_mbbl0uMR8t1ri61zco1_500.gif) # + [markdown] slideshow={"slide_type": "slide"} # First thing you'll need to do is make an "app". 
It's pretty straightforward process that you can go through here https://apps.twitter.com/. # # This is what my settings looked like: # # ![](http://cl.ly/2o3J0R103s2N/Image%202016-02-20%20at%208.35.10%20PM.png) # + [markdown] slideshow={"slide_type": "notes"} # In the end you'll get two tokens (`YOUR_APP_KEY` and `YOUR_APP_SECRET`) that you should store somewhere safe. I'm storing mine in a file called `secrets.json` there is an example (`secrets_example.json`) in the project root directory that you can use as a template. It looks like this: # + slideshow={"slide_type": "skip"} f = open('secrets_example.json','rb') print "".join(f.readlines()) f.close() # + [markdown] slideshow={"slide_type": "slide"} # ### 3. Make your Bot's Account! # # [Twitter's onboarding process](https://twitter.com/signup) isn't really optimized for the bot use-case, but once you get to the welcome screen you'll be logged in and ready for the next step (iow, you can keep the "all the stuff you love" to yourself). # # <br> # <br> # <div class="container" style="width: 80%;"> # <div class="theme-table-image col-sm-5"> # <img src="http://cl.ly/2l2t380q393G/Image%202016-02-20%20at%209.06.21%20PM.png"> # </div> # <div class="col-sm-2"> # </div> # <div class="theme-table-image col-sm-5"> # <img src="http://cl.ly/050O2M362Q1B/Image%202016-02-20%20at%209.07.35%20PM.png"> # </div> # </div> # # # # - # ### 4. Final OAuth step: Secret handshake! # Load in your fresh new file of secrets (`secrets.json`) # + slideshow={"slide_type": "skip"} f = open('secrets.json','rb') secrets = json.load(f) f.close() # - # Use a library that knows how to implement OAuth1 (trust me, it's not fun to figure out by scratch). I'm using [rauth](https://rauth.readthedocs.org/en/latest/) but there are [tons more](https://dev.twitter.com/oauth/overview/single-user) out there. 
tw_oauth_service = OAuth1Service( consumer_key=secrets['twitter']['app']['consumer_key'], consumer_secret=secrets['twitter']['app']['consumer_secret'], name='twitter', access_token_url='https://api.twitter.com/oauth/access_token', authorize_url='https://api.twitter.com/oauth/authorize', request_token_url='https://api.twitter.com/oauth/request_token', base_url='https://api.twitter.com/1.1/') request_token, request_token_secret = tw_oauth_service.get_request_token() url = tw_oauth_service.get_authorize_url(request_token=request_token) webbrowser.open_new(url) # The cells above will open a permissions dialog for you in a new tab: # # ![](http://cl.ly/2I2L2m1L2e1K/Image%202016-02-20%20at%209.29.46%20PM.png) # # **If you're cool w/ it, authorize your app** against your bot user you will then be redirected to the callback url you specified when you set up your app. I get redirected to something that looks like this # # `http://127.0.0.1:9999/?oauth_token=JvutuAAAAAAAkfBmbVABUwFD6pI&oauth_verifier=<KEY>` # # **It will like an error, but it's not!**, all you need to do is parse out two parameters from the url they bounce you back to: the `oauth_token` and the `oauth_verifier`. # # Only one more step to go. You are so brave! # + # Once you go through the flow and land on an error page http://127.0.0.1:9999 something # enter your token and verifier below like so. The # The example below (which won't work until you update the parameters) is from the following url: # http://127.0.0.1:9999/?oauth_token=JvutuAAAAAAAkfBmbVABUwFD6pI&oauth_verifier=<KEY> oauth_token='<KEY>' oauth_verifier='<KEY>' session = tw_oauth_service.get_auth_session(request_token, request_token_secret, method='POST', data={'oauth_verifier': oauth_verifier}) # - # ### 5. 
Store your secrets somewhere safe

# +
# Copy this guy into your secrets file
# {
#              "user_id": "701177805317472256",
#              "screen_name": "SmwKanye",
# HERE ---->   "token_key": "YOUR_TOKEN_KEY",
#              "token_secret": "YOUR_TOKEN_SECRET"
# },

session.access_token

# +
# Copy this guy into your secrets file
# {
#              "user_id": "701177805317472256",
#              "screen_name": "SmwKanye",
#              "token_key": "YOUR_TOKEN_KEY",
# HERE ---->   "token_secret": "YOUR_TOKEN_SECRET"
# },

session.access_token_secret
# -

# Awesome, now we have our user access token and secret. Store them in `secrets.json` and test below to see if they work. You don't really need 3 test accounts, so if you don't want to repeat the process just keep "production".
#
# Finally, test to see that your secrets are good...

# +
f = open('secrets.json', 'rb')
secrets = json.load(f)
f.close()

tw_api_client = twitter.Api(consumer_key=secrets['twitter']['app']['consumer_key'],
                            consumer_secret=secrets['twitter']['app']['consumer_secret'],
                            access_token_key=secrets['twitter']['accounts']['production']['token_key'],
                            access_token_secret=secrets['twitter']['accounts']['production']['token_secret'])
# -

tw_api_client.GetUser(screen_name='SmwKanye').AsDict()

# Sanity-check that the stored credentials actually authenticate
tw_api_client.VerifyCredentials().AsDict()

# ![High Five](http://media3.giphy.com/media/IxJMT1ugyBMdy/giphy.gif)
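Once the client authenticates, the bot can tweet with a call like `tw_api_client.PostUpdate(status)`. One practical wrinkle is Twitter's status-length limit; a hypothetical helper (not part of this notebook) to clamp a status before posting:

```python
# Hypothetical helper: trim an over-long status to Twitter's limit,
# replacing the overflow with an ellipsis, before PostUpdate(status).
def clamp_status(text, limit=280):
    if len(text) <= limit:
        return text
    return text[:limit - 1] + "…"

print(len(clamp_status("x" * 500)))  # 280
```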
notebooks/1 - Setting Up a Twitter Bot in 5 Easy Steps.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] colab_type="text" id="0-53Xmr3wEu2"
# Letter Changes
#
# Have the function LetterChanges(str) take the str parameter being passed and modify it using the following algorithm. Replace every letter in the string with the letter following it in the alphabet (ie. c becomes d, z becomes a). Then capitalize every vowel in this new string (a, e, i, o, u) and finally return this modified string.
#
# Examples
#
# Input: "hello*3"
#
# Output: Ifmmp*3
#
# Input: "fun times!"
#
# Output: gvO Ujnft!

# + colab={} colab_type="code" id="17OwoatcwDfI"
def LetterChanges(s):
    result = []
    for ch in s:
        if 'a' <= ch <= 'z' or 'A' <= ch <= 'Z':
            # shift to the next letter, wrapping z -> a and Z -> A
            if ch in 'zZ':
                new_ch = chr(ord(ch) - 25)
            else:
                new_ch = chr(ord(ch) + 1)
            # capitalize the vowels in the shifted string
            if new_ch in 'aeiou':
                new_ch = new_ch.upper()
            result.append(new_ch)
        else:
            result.append(ch)
    return ''.join(result)

# keep this function call here
print(LetterChanges(input()))
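An alternative sketch of the same transformation using `str.translate`, which builds the shift-and-capitalize mapping once and then applies it in a single pass (`letter_changes` is an illustrative name, not part of the challenge template):

```python
import string

# Build one translation table: every letter maps to the next letter in the
# alphabet (wrapping around), with the resulting vowels capitalized.
def _shifted(alphabet, vowels):
    rotated = alphabet[1:] + alphabet[0]  # b..z then a (wraps z -> a)
    return "".join(c.upper() if c in vowels else c for c in rotated)

TABLE = str.maketrans(
    string.ascii_lowercase + string.ascii_uppercase,
    _shifted(string.ascii_lowercase, "aeiou") + _shifted(string.ascii_uppercase, "AEIOU"),
)

def letter_changes(s):
    return s.translate(TABLE)

print(letter_changes("hello*3"))     # Ifmmp*3
print(letter_changes("fun times!"))  # gvO Ujnft!
```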
coderbyte/20191215_2/Letter Changes.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/greatyashtiwari/Tweet-Emotion-Recognition/blob/main/Tweet_Emotion_Recognition_Learner.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="sp7D0ktn5eiG" # ## Tweet Emotion Recognition: Natural Language Processing with TensorFlow # # --- # # Dataset: [Tweet Emotion Dataset](https://github.com/dair-ai/emotion_dataset) # # This is a starter notebook for the guided project [Tweet Emotion Recognition with TensorFlow](https://www.coursera.org/projects/tweet-emotion-tensorflow) # # A complete version of this notebook is available in the course resources # # --- # # ## Task 1: Introduction # + [markdown] id="cprXxkrMxIgT" # ## Task 2: Setup and Imports # # 1. Installing Hugging Face's nlp package # 2. 
Importing libraries # + id="5agZRy-45i0g" colab={"base_uri": "https://localhost:8080/"} outputId="515d54f4-e941-4bf4-87fb-267069dee6f0" # !pip install nlp # + id="yKFjWz6e5eiH" colab={"base_uri": "https://localhost:8080/"} outputId="de53d2ea-4f31-4adf-ce9d-3c9a30362341" # %matplotlib inline import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import nlp import random def show_history(h): epochs_trained = len(h.history['loss']) plt.figure(figsize=(16, 6)) plt.subplot(1, 2, 1) plt.plot(range(0, epochs_trained), h.history.get('accuracy'), label='Training') plt.plot(range(0, epochs_trained), h.history.get('val_accuracy'), label='Validation') plt.ylim([0., 1.]) plt.xlabel('Epochs') plt.ylabel('Accuracy') plt.legend() plt.subplot(1, 2, 2) plt.plot(range(0, epochs_trained), h.history.get('loss'), label='Training') plt.plot(range(0, epochs_trained), h.history.get('val_loss'), label='Validation') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() def show_confusion_matrix(y_true, y_pred, classes): from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_true, y_pred, normalize='true') plt.figure(figsize=(8, 8)) sp = plt.subplot(1, 1, 1) ctx = sp.matshow(cm) plt.xticks(list(range(0, 6)), labels=classes) plt.yticks(list(range(0, 6)), labels=classes) plt.colorbar(ctx) plt.show() print('Using TensorFlow version', tf.__version__) # + [markdown] id="7JsBpezExIga" # ## Task 3: Importing Data # # 1. Importing the Tweet Emotion dataset # 2. Creating train, validation and test sets # 3. 
Extracting tweets and labels from the examples

# + id="0YHOvjAu5eiL" outputId="c33d020a-247a-4087-fb5c-6b6f88389867"
dataset = nlp.load_dataset('emotion')

# + id="2s0h541FxIgc" outputId="e0e99f3c-3dc7-4861-d474-153dfa2d8169"
dataset

# + id="z7eCnxU25eiN"
train = dataset['train']
val = dataset['validation']
test = dataset['test']

# + id="oDYXMfZy5eiP"
def get_tweet(data):
    tweets = [x['text'] for x in data]
    labels = [x['label'] for x in data]
    return tweets, labels

# + id="jeq3-vSB5eiR"
tweets, labels = get_tweet(train)

# + id="bHD3Tk0J5eiU" outputId="c90f7f4e-e815-4d5a-a6d6-fd58185e133d"
tweets[0], labels[0]

# + [markdown] id="gcAflLv6xIgp"
# ## Task 4: Tokenizer
#
# 1. Tokenizing the tweets

# + id="qfX5-ResxIgq"
from tensorflow.keras.preprocessing.text import Tokenizer

# + id="cckUvwBo5eif"
tokenizer = Tokenizer(num_words=10000, oov_token='<UNK>')
tokenizer.fit_on_texts(tweets)

# + id="nslQpPXjhuCZ" outputId="5e7331f7-c4ac-48b4-c066-81e640a58025"
tweets[0]

# + id="hPsdMmN5h1JH" outputId="6f0e910a-f0cb-4a8e-cb8b-6ad733f752a5"
tokenizer.texts_to_sequences([tweets[0]])

# + [markdown] id="i3Bqm7b2xIgu"
# ## Task 5: Padding and Truncating Sequences
#
# 1. Checking length of the tweets
# 2. 
Creating padded sequences # + id="mLvf_WFZxIgu" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="adefe7c0-ba70-4005-dcc3-83b372933e1b" lengths = [len(t.split(' ')) for t in tweets] plt.hist(lengths, bins = len(set(lengths))) plt.show() # + id="EOi5lIE3xIgx" maxlen = 50 from tensorflow.keras.preprocessing.sequence import pad_sequences # + id="Q9J_Iemf5eiq" def get_sequences(tokenizer, tweets): sequences = tokenizer.texts_to_sequences(tweets) padded = pad_sequences(sequences, truncating='post', padding='post', maxlen=maxlen) return padded # + id="eglH77ky5ei0" padded_train_seq = get_sequences(tokenizer, tweets) # + id="iGR473HA5ei7" colab={"base_uri": "https://localhost:8080/"} outputId="73633cb6-6374-4f05-9064-59b617b232b8" padded_train_seq[0] # + [markdown] id="BURhOX_KxIg8" # ## Task 6: Preparing the Labels # # 1. Creating classes to index and index to classes dictionaries # 2. Converting text labels to numeric labels # + id="SufT2bpD5ejE" colab={"base_uri": "https://localhost:8080/"} outputId="30d7212c-5504-433f-a773-a1404d496875" classes = set(labels) print(classes) # + id="rpwzL88I7YSm" colab={"base_uri": "https://localhost:8080/", "height": 265} outputId="030f14c0-e4b6-49d1-a29f-9458567f70a5" plt.hist(labels, bins=11) plt.show() # + id="dNLF6rXL5ejN" class_to_index = dict((c,i) for i, c in enumerate(classes)) index_to_class = dict((v,k) for k, v in class_to_index.items()) # + id="_08InVyM5ejc" colab={"base_uri": "https://localhost:8080/"} outputId="16f96044-1cce-47ba-f11a-5d3d5bd06313" class_to_index # + id="gpeDoA6gxIhE" colab={"base_uri": "https://localhost:8080/"} outputId="ef0b4cbb-53e1-4825-c019-4aa73e280863" index_to_class # + [markdown] id="j-1dZJ9snAr5" # # + id="Jq0WJYsP5ejR" names_to_ids = lambda labels: np.array([class_to_index.get(x) for x in labels]) # + id="v15KnrNC5ejW" colab={"base_uri": "https://localhost:8080/"} outputId="a5187d07-b591-48dd-f4f4-b3706731f75a" train_labels = names_to_ids(labels) print(train_labels[0]) # +
[markdown] id="c-v0Mnh8xIhP" # ## Task 7: Creating the Model # # 1. Creating the model # 2. Compiling the model # + id="OpewXxPQ5eji" model = tf.keras.models.Sequential([ tf.keras.layers.Embedding(10000, 16, input_length=maxlen), tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(20, return_sequences=True)), tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(20)), tf.keras.layers.Dense(6, activation='softmax') ]) model.compile( loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'] ) # + colab={"base_uri": "https://localhost:8080/"} id="bSCNiYBIsGZR" outputId="e6647d41-a4c2-4174-d3f4-2b9c6685a60c" model.summary() # + [markdown] id="1HST_CHjxIhR" # ## Task 8: Training the Model # # 1. Preparing a validation set # 2. Training the model # + id="Ff7F3hCK5ejm" val_tweets, val_labels = get_tweet(val) val_seq = get_sequences(tokenizer, val_tweets) val_labels = names_to_ids(val_labels) # + id="hlMKaZ3H5ejr" colab={"base_uri": "https://localhost:8080/"} outputId="506ee7dc-de1a-47cc-bf3e-36b1edb16abb" val_tweets[0], val_labels[0] # + id="bzBqnWQ-5ejw" colab={"base_uri": "https://localhost:8080/"} outputId="39e5219c-989f-4df0-c534-b384aebf1a32" h = model.fit( padded_train_seq, train_labels, validation_data=(val_seq,val_labels), epochs = 20, callbacks=[ tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=2) ] ) # + [markdown] id="EdsJyMTLxIhX" # ## Task 9: Evaluating the Model # # 1. Visualizing training history # 2. Preparing a test set # 3. A look at individual predictions on the test set # 4.
A look at all predictions on the test set # + id="ENCfvXeLxIhX" colab={"base_uri": "https://localhost:8080/", "height": 392} outputId="39cd02b1-f62d-4d9a-87e9-def3d37f56cf" show_history(h) # + id="kWuzoz8uxIha" test_tweets, test_labels = get_tweet(test) test_seq = get_sequences(tokenizer, test_tweets) test_labels = names_to_ids(test_labels) # + id="7vRVJ_2SxIhc" colab={"base_uri": "https://localhost:8080/"} outputId="a18fd800-35aa-40b6-bba8-4fba2fdd1f73" _ = model.evaluate(test_seq, test_labels) # + id="rh638vHG5ej6" colab={"base_uri": "https://localhost:8080/"} outputId="a4c5056a-044e-422c-dafe-a5988a03cd06" i = random.randint(0, len(test_labels) - 1) print('Sentence:', test_tweets[i]) print('Emotion:', index_to_class[test_labels[i]]) p = model.predict(np.expand_dims(test_seq[i], axis=0))[0] pred_class = index_to_class[np.argmax(p)] print('Predicted Emotion:', pred_class) # + id="hHl5SVCFxIhh" # class predictions for a multi-class softmax come from argmax, not a 0.5 threshold preds = np.argmax(model.predict(test_seq), axis=-1) # + id="NC8YQ0OexIhj" # + id="Up_yrVlIzuDv"
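To make the prediction step concrete: for a multi-class softmax model, class labels come from `argmax` over the class axis. A minimal numpy sketch, where the `probs` array is a made-up stand-in for `model.predict` output:

```python
import numpy as np

# Toy stand-in for model.predict output: one softmax row per example (made-up numbers)
probs = np.array([
    [0.10, 0.70, 0.05, 0.05, 0.05, 0.05],
    [0.02, 0.08, 0.80, 0.04, 0.03, 0.03],
])

# Class predictions come from argmax over the class axis
preds = np.argmax(probs, axis=-1)
print(preds)  # -> [1 2]

# Accuracy against made-up ground-truth labels
true = np.array([1, 3])
accuracy = float((preds == true).mean())
print(accuracy)  # -> 0.5
```

Thresholding probabilities at 0.5 instead would leave some rows with no predicted class at all, since a 6-way softmax row can easily have no entry above 0.5.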
Tweet_Emotion_Recognition_Learner.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="v4OdNCeP18-F" # # Imports # + id="Eo-Pfm2BApZU" colab={"base_uri": "https://localhost:8080/"} outputId="d3cbd763-9f83-4f2d-8eec-c0493bfdbd6c" from google.colab import drive drive.mount('/content/drive') # + id="or1bXxRcBqn4" # !cp '/content/drive/My Drive/GIZ Zindi/Train.csv' . # !cp '/content/drive/My Drive/GIZ Zindi/SampleSubmission.csv' . # + id="LZlxM2g-1dzv" # !cp '/content/drive/My Drive/GIZ Zindi/AdditionalUtterances.zip' AdditionalUtterances.zip # + id="uAWDjYdh1m0m" # !unzip -q AdditionalUtterances.zip # + id="QgLBGRGz1yq2" # Copy the files in and unzip # !cp '/content/drive/My Drive/GIZ Zindi/audio_files.zip' audio_files.zip # !unzip -q audio_files.zip # + id="H7GH-9qUm3_k" # !cp "/content/drive/My Drive/GIZ Zindi/nlp_keywords_29Oct2020.zip" nlp_keywords_29Oct2020.zip # !unzip -q nlp_keywords_29Oct2020.zip # + id="sBv1Gkw2Rje3" colab={"base_uri": "https://localhost:8080/"} outputId="c89d8fa8-977f-4d4f-9e8d-935afd726653" # !pip -q install efficientnet_pytorch # + id="t-5agYag6nPg" colab={"base_uri": "https://localhost:8080/"} outputId="a1f20bae-c0a1-426a-8542-286ceadd76e9" # !pip install -q python_speech_features # + id="i0epTZBG7Zr_" colab={"base_uri": "https://localhost:8080/"} outputId="fd22e424-1cf6-49be-80fd-aa47069f0681" # !pip -q install albumentations --upgrade # + id="w24RQCaX0Zyi" import os from PIL import Image from sklearn.model_selection import train_test_split from torchvision import datasets, models from torch.utils.data import DataLoader, Dataset import torch.nn as nn import torch import torchvision.models as models from efficientnet_pytorch import EfficientNet from torch.optim.lr_scheduler import MultiStepLR from torch.optim.lr_scheduler import OneCycleLR import pandas as pd import numpy as np import sklearn from 
sklearn.model_selection import StratifiedKFold from sklearn.metrics import accuracy_score, roc_auc_score from tqdm.notebook import tqdm as tqdm from sklearn.model_selection import train_test_split import librosa import librosa.display as display import python_speech_features as psf from matplotlib import pyplot as plt import numpy as np import albumentations from torch.nn import Module,Sequential import gc import cv2 import multiprocessing as mp from multiprocessing import Pool from albumentations.augmentations.transforms import Lambda import IPython.display as ipd # + id="h5X002A-P4-i" N_WORKERS = mp.cpu_count() LOAD_TRAIN_DATA = None LOAD_TEST_DATA = None # + id="Ba854myQBcfU" import random import numpy as np SEED_VAL = 1000 # Set the seed value all over the place to make this reproducible. def seed_all(SEED = SEED_VAL): random.seed(SEED) np.random.seed(SEED) torch.manual_seed(SEED) torch.cuda.manual_seed_all(SEED) os.environ['PYTHONHASHSEED'] = str(SEED) torch.backends.cudnn.deterministic = True # + [markdown] id="tZniD6ThCw6a" # # DataLoader # + id="mwQd_y6hQvIU" class conf: sampling_rate = 44100 duration = 3 # sec hop_length = 200*duration # to make time steps 128 fmin = 20 fmax = sampling_rate // 2 n_mels = 128 n_fft = n_mels * 20 padmode = 'constant' samples = sampling_rate * duration def get_default_conf(): return conf conf = get_default_conf() # + id="LyGR5S46S5S0" def melspectogram_dB(file_path, cst=3, top_db=80.): row_sound, sr = librosa.load(file_path,sr=conf.sampling_rate) sound = np.zeros((cst*sr,)) if row_sound.shape[0] < cst*sr: sound[:row_sound.shape[0]] = row_sound[:] else: sound[:] = row_sound[:cst*sr] spec = librosa.feature.melspectrogram(sound, sr=conf.sampling_rate, n_mels=conf.n_mels, hop_length=conf.hop_length, n_fft=conf.n_fft, fmin=conf.fmin, fmax=conf.fmax) spec_db = librosa.power_to_db(spec) spec_db = spec_db.astype(np.float32) return spec_db def spec_to_image(spec, eps=1e-6): mean = spec.mean() std = spec.std() spec_norm = (spec - mean) 
/ (std + eps) spec_min, spec_max = spec_norm.min(), spec_norm.max() spec_img = 255 * (spec_norm - spec_min) / (spec_max - spec_min) return spec_img.astype(np.uint8) def preprocess_audio(audio_path): spec = melspectogram_dB(audio_path) spec = spec_to_image(spec) return spec # + id="fFdXzGpuFeQI" def get_data(df,mode='train'): """ :param: df: dataframe of train or test :return: images_list: spec images of all the data :return: label_list : label list of all the data """ audio_paths = df.fn.values images_list = [] with mp.Pool(N_WORKERS) as pool: images_list = pool.map(preprocess_audio,tqdm(audio_paths)) if mode == 'train': label_list = df.label.values return images_list,label_list else: return images_list # + id="PV6u_nW3pc31" class ImageDataset(Dataset): def __init__(self, images_list,labels_list=None,transform=None): self.images_list = images_list self.transform = transform self.labels_list = labels_list def __getitem__(self, index): spec = self.images_list[index] if self.transform is not None: spec = self.transform(image=spec) spec = spec['image'] if self.labels_list is not None: label = self.labels_list[index] return {'image' : torch.tensor(spec,dtype=torch.float), 'label' : torch.tensor(label,dtype = torch.long) } return {'image' : torch.tensor(spec,dtype=torch.float), } def __len__(self): return len(self.images_list) # + [markdown] id="vOQv1YlR3jJu" # # Models and train functions # + id="njGRGejm2i6D" class Net(nn.Module): def __init__(self,name): super(Net, self).__init__() self.name = name #self.convert_3_channels = nn.Conv2d(1,3,2,padding=1) if name == 'b0': self.arch = EfficientNet.from_pretrained('efficientnet-b0') self.arch._fc = nn.Linear(in_features=1280, out_features=193, bias=True) elif name == 'b1': self.arch = EfficientNet.from_pretrained('efficientnet-b1') self.arch._fc = nn.Linear(in_features=1280, out_features=193, bias=True) elif name == 'b2': self.arch = EfficientNet.from_pretrained('efficientnet-b2') self.arch._fc = nn.Linear(in_features=1408, 
out_features=193, bias=True) elif name =='b3': self.arch = EfficientNet.from_pretrained('efficientnet-b3') self.arch._fc = nn.Linear(in_features=1536, out_features=193, bias=True) elif name =='b4': self.arch = EfficientNet.from_pretrained('efficientnet-b4') self.arch._fc = nn.Linear(in_features=1792, out_features=193, bias=True,) elif name =='b5': self.arch = EfficientNet.from_pretrained('efficientnet-b5') self.arch._fc = nn.Linear(in_features=2048, out_features=193, bias=True) elif name =='b6': self.arch = EfficientNet.from_pretrained('efficientnet-b6') self.arch._fc = nn.Linear(in_features=2304, out_features=193, bias=True) elif name =='b7': self.arch = EfficientNet.from_pretrained('efficientnet-b7') self.arch._fc = nn.Linear(in_features=2560, out_features=193, bias=True) elif name == 'densenet121': self.arch = models.densenet121(pretrained=True) num_ftrs = self.arch.classifier.in_features self.arch.classifier = nn.Linear(num_ftrs,193,bias=True) elif name == 'densenet169': self.arch = models.densenet169(pretrained=True) num_ftrs = self.arch.classifier.in_features self.arch.classifier = nn.Linear(num_ftrs,193,bias=True) elif name == 'densenet201': self.arch = models.densenet201(pretrained=True) num_ftrs = self.arch.classifier.in_features self.arch.classifier = nn.Linear(num_ftrs,193,bias=True) elif name == 'resnet50': self.arch = models.resnet50(pretrained=True) num_ftrs = self.arch.fc.in_features self.arch.fc = nn.Linear(num_ftrs,193,bias=True) elif name == 'resnet101': self.arch = models.resnet101(pretrained=True) num_ftrs = self.arch.fc.in_features self.arch.fc = nn.Linear(num_ftrs,193,bias=True) elif name == 'resnet152': self.arch = models.resnet152(pretrained=True) num_ftrs = self.arch.fc.in_features self.arch.fc = nn.Linear(num_ftrs,193,bias=True) elif name == 'resnet18': self.arch = models.resnet18(pretrained=True) my_weight = self.arch.conv1.weight.mean(dim=1, keepdim=True) self.arch.conv1 = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 
3), bias=False) self.arch.conv1.weight = torch.nn.Parameter(my_weight) num_ftrs = self.arch.fc.in_features self.arch.fc = nn.Linear(num_ftrs,193,bias=True) elif name == 'resnet34': self.arch = models.resnet34(pretrained=True) num_ftrs = self.arch.fc.in_features self.arch.fc = nn.Linear(num_ftrs,193,bias=True) elif name == 'resnext101': self.arch = models.resnext101_32x8d(pretrained=True) my_weight = self.arch.conv1.weight.mean(dim=1, keepdim=True) self.arch.conv1 = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) self.arch.conv1.weight = torch.nn.Parameter(my_weight) num_ftrs = self.arch.fc.in_features self.arch.fc = nn.Linear(num_ftrs,193,bias=True) elif name == 'resnext50': self.arch = models.resnext50_32x4d(pretrained=True) my_weight = self.arch.conv1.weight.mean(dim=1, keepdim=True) self.arch.conv1 = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) self.arch.conv1.weight = torch.nn.Parameter(my_weight) num_ftrs = self.arch.fc.in_features self.arch.fc = nn.Linear(num_ftrs,193,bias=True) elif name =='rexnetv1': model = rexnetv1.ReXNetV1(width_mult=1.0) model.output.conv2D = nn.Conv2d(1280, 1, kernel_size=(1, 1), stride=(1, 1)) def forward(self, x): """ """ #x = self.convert_3_channels(x) x = self.arch(x) return x # + [markdown] id="WQCeJOLcuxz9" # # Predicting # + id="UXtOOmjhRMij" HEIGHT = 128 WIDTH = 600 def get_transforms(): train_transform = albumentations.Compose([ #albumentations.PadIfNeeded(HEIGHT,WIDTH,border_mode = cv2.BORDER_CONSTANT,value=0), albumentations.Resize(HEIGHT,WIDTH), #albumentations.Lambda(NM(),always_apply=True) #Lambda(image=SpecAugment(num_mask=2,freq_masking=0.1,time_masking=0.1),mask=None,p=0.2), #Lambda(image=GaussNoise(2),mask=None,p=0.2), #albumentations.Lambda(image=CONVERTRGB(),always_apply=True), #albumentations.CenterCrop(100,140,p=1) #albumentations.RandomCrop(120,120) #albumentations.VerticalFlip(p=0.2), #albumentations.HorizontalFlip(p=0.2), 
#albumentations.RandomContrast(p=0.2), #AT.ToTensor() ]) val_transform = albumentations.Compose([ #albumentations.PadIfNeeded(HEIGHT,WIDTH,border_mode = cv2.BORDER_CONSTANT,value=0), albumentations.Resize(HEIGHT,WIDTH), #albumentations.Lambda(NM(),always_apply=True) #albumentations.Lambda(image=CONVERTRGB(),always_apply=True), #AT.ToTensor() ]) return train_transform,val_transform # + id="KHgeHsYT8-Gy" colab={"base_uri": "https://localhost:8080/", "height": 101} outputId="7169939a-6fbe-4d3b-a023-256045fc3107" # %%time if LOAD_TEST_DATA is None: gc.collect() test = pd.read_csv('SampleSubmission.csv') #takes 5 minutes test_images = get_data(test,mode='test') LOAD_TEST_DATA = True else: print('Data Already Loaded') # + id="juIpiQpwGXIZ" _,test_transform = get_transforms() # + id="Rsfg5DGaDaqh" test_dataset = ImageDataset(test_images,labels_list=None,transform=test_transform) test_data_loader = DataLoader(dataset=test_dataset,shuffle=False,batch_size=32) # + [markdown] id="JnmtLm29u88H" # ## KFOLDS # + id="GMyfrd6YvC_m" colab={"base_uri": "https://localhost:8080/", "height": 548} outputId="2ff7f1b1-35ec-4b28-9ecd-6c9041976715" NFOLDS = 10 NAME = 'resnext101' all_outputs = [] device = torch.device("cuda") for i in range(NFOLDS): best_model = Net(NAME) #best_model.load_state_dict(torch.load(f'/content/drive/MyDrive/Resnext101GIZ/best_model_{i}')) best_model.load_state_dict(torch.load(f'best_model_{i}')) best_model = best_model.to(device) best_model.eval() fold_outputs = [] with torch.no_grad(): tk0 = tqdm(test_data_loader, total=len(test_data_loader)) for bi,d in
enumerate(tk0): images = d['image'] #send them to device images = images.to(device,dtype=torch.float) outputs = best_model(images.unsqueeze(dim=1)) outputs = torch.nn.functional.softmax(outputs, dim=1) fold_outputs.extend(outputs.cpu().detach().numpy()) all_outputs.append(fold_outputs) # + id="76KkLK2S1ljr" import scipy from scipy.stats.mstats import gmean # + id="dX4GjX_ez-_k" colab={"base_uri": "https://localhost:8080/", "height": 325} outputId="df52c256-d7fd-4a08-f0d6-d8719cf30bcd" ss = pd.read_csv('/content/SampleSubmission.csv') ss.iloc[:,1:] = gmean(all_outputs,axis=0) ss.head() # + colab={"base_uri": "https://localhost:8080/", "height": 325} id="b-3fjFGUZD5q" outputId="83554787-0bb4-4ad7-c5ca-6ce2aadd2454" ss1 = pd.read_csv('/content/SampleSubmission.csv') ss1.iloc[:,1:] = np.mean(all_outputs,axis=0) ss1.head() # + colab={"base_uri": "https://localhost:8080/", "height": 290} id="hzDrQ7dpZchj" outputId="f16b3952-23ea-4a26-8cb1-a272d2a577fc" ss.iloc[:,1:] = (ss1.iloc[:,1:] + ss.iloc[:,1:])/2 ss.head() # + id="JasAClUA0Mz0" ss.to_csv('resnext101_mean_gmean.csv',index=False)
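To make the blending step concrete, here is a small numpy sketch of arithmetic versus geometric mean over fold probabilities. The numbers are made up, and the geometric mean is computed directly with logs rather than through `scipy.stats.mstats.gmean`:

```python
import numpy as np

# Fold predictions for 2 test rows x 2 classes over 3 folds (made-up numbers)
fold_probs = np.array([
    [[0.90, 0.10], [0.20, 0.80]],   # fold 1
    [[0.80, 0.20], [0.30, 0.70]],   # fold 2
    [[0.10, 0.90], [0.25, 0.75]],   # fold 3
])

# Arithmetic mean across folds
mean_blend = fold_probs.mean(axis=0)

# Geometric mean across folds: one low-probability fold drags the product down,
# so it penalizes classes the folds disagree on
gmean_blend = np.exp(np.log(fold_probs).mean(axis=0))

print(mean_blend[0])   # first row, arithmetic mean
print(gmean_blend[0])  # first row, geometric mean is lower where folds disagree
```

The geometric mean pulls a class down whenever even one fold assigns it a low probability, which is why averaging the two blends, as the notebook does, hedges between the two behaviors.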
Competition-Solutions/Audio/GIZ NLP Agricultural Keyword Spotter/Solution 2/resnext101/inference.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="HO67bSDTFCrU" # # Mask R-CNN training code for tvai2021 # # [2021tvaihackathon homepage](https://tvaihackathon.com/) # # **Mask R-CNN training code for the 2021 lane-violation video data AI hackathon** # # The dataset was prepared from the images and annotations provided by the organizers, building on Matterport's open-source Mask R-CNN implementation # # Training ran on a Google Colaboratory GPU runtime, with Google Drive mounted for the dataset and libraries # + id="NXuTyyo-4-P0" colab={"base_uri": "https://localhost:8080/"} outputId="79468634-fef6-4dc7-e39b-3047c9c3a7db" # %cd /content/drive/MyDrive/mask-rcnn-matterport # + [markdown] id="EB0Xk1ZLE9Pn" # Downgrade package versions to match the Matterport code # + id="njk2Qlz094Am" colab={"base_uri": "https://localhost:8080/"} outputId="4178d193-5000-4ac8-8513-140e8425ba6e" # !pip install -r requirements.txt # !python setup.py install # !pip install tensorboard==1.15.0 tensorflow==1.15.0 tensorflow-estimator==1.15.1 tensorflow-gpu==1.15.2 tensorflow-gpu-estimator==2.1.0 Keras==2.2.5 Keras-Applications==1.0.8 Keras-Preprocessing==1.1.0 # !pip install h5py==2.10.0 # !pip install -U scikit-image==0.16.2 # + [markdown] id="DCsh1teCFBLr" # Check the versions # + colab={"base_uri": "https://localhost:8080/"} id="paMdvwSY9-1v" outputId="3ae5f271-ad59-40c1-fe33-6d55bed51a76" import keras import tensorflow print(keras.__version__) print(tensorflow.__version__) # + [markdown] id="Ib-2G4NvFhfZ" # After loading the dataset, start training from the COCO pre-trained weights # + colab={"base_uri": "https://localhost:8080/"} id="1NyXkTC7J5Fa" outputId="1715aa94-5a17-470c-d580-5b8432ce6699" dataset = '/content/drive/MyDrive/mask-rcnn-matterport/samples/custom/dataset' # !python3 '/content/drive/MyDrive/mask-rcnn-matterport/samples/custom/custom.py' train --dataset=dataset/ --weights=coco
samples/custom/tvai-mask-rcnn-training.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np df = pd.read_csv('../data/interim/employements_mult_new.csv', sep=';') df.head(2) # + df = df[~df['id'].isnull()].reset_index(drop=True) df['id'] = df['id'].astype(int) df['responsibilities'] = df['responsibilities'].fillna('') df['achievements'] = df['achievements'].fillna('') # - df['start_date'] = pd.to_datetime(df['start_date'], errors='coerce') df['finish_date'] = pd.to_datetime(df['finish_date'], errors='coerce') df['work_duration'] = (df['finish_date'] - df['start_date']).dt.days df['position_clean'] = df['position_clean'].fillna('') df['employer_clean'] = df['employer_clean'].fillna('') # + # These statistics can be used to fill the missing values mean_work_duration = df['work_duration'].mean() median_work_duration = df['work_duration'].median() mean_work_duration, median_work_duration # + # df.head() # + agg_df = df\ .groupby('id')\ .agg( all_work_duration=('work_duration', lambda x: list(x)), all_responsibilities=('responsibilities', lambda x: ' '.join(x)), all_positions=('position_clean', lambda x: ' '.join(x)), all_employers=('employer_clean', lambda x: ' '.join(x)), all_achievements=('achievements', lambda x: ' '.join(x)), ) agg_df['mean_work_duration'] = agg_df['all_work_duration'].apply(np.nanmean) agg_df['max_work_duration'] = agg_df['all_work_duration'].apply(np.nanmax) agg_df['min_work_duration'] = agg_df['all_work_duration'].apply(np.nanmin) agg_df['median_work_duration'] = agg_df['all_work_duration'].apply(np.nanmedian) agg_df = agg_df.drop(columns='all_work_duration') # - agg_df.head() agg_df.info() agg_df = agg_df.fillna(0) agg_df = agg_df.reset_index() agg_df.to_pickle('../data/interim/employements_aggregated.pkl')
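The named-aggregation pattern used above can be sketched on a toy frame; the records below are made up for illustration:

```python
import pandas as pd

# Toy employment records (made-up): two people, several jobs each
toy = pd.DataFrame({
    'id': [1, 1, 2],
    'work_duration': [100.0, 300.0, 50.0],
    'position_clean': ['engineer', 'manager', 'analyst'],
})

# Named aggregation: one output column per (input column, function) pair
toy_agg = toy.groupby('id').agg(
    all_positions=('position_clean', ' '.join),
    mean_work_duration=('work_duration', 'mean'),
    max_work_duration=('work_duration', 'max'),
).reset_index()

print(toy_agg)
```

Each keyword argument to `agg` names an output column and pairs an input column with an aggregation function, which is what lets the notebook collapse several jobs per `id` into one feature row.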
notebooks/use_employements_features.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Optimizer tweaks # + # %load_ext autoreload # %autoreload 2 # %matplotlib inline # - #export from exp.nb_08 import * # ## Imagenette data # We grab the data from the previous notebook. path = datasets.untar_data(datasets.URLs.IMAGENETTE_160) tfms = [make_rgb, ResizeFixed(128), to_byte_tensor, to_float_tensor] il = ImageItemList.from_files(path, tfms=tfms) sd = SplitData.split_by_func(il, partial(grandparent_splitter, valid_name='val')) ll = label_by_func(sd, parent_labeler) bs=64 train_dl,valid_dl = get_dls(ll.train,ll.valid,bs, num_workers=8) x,y = next(iter(valid_dl)) show_image(x[0]) ll.train.y.processor.vocab[y[0]] nfs = [32,64,128,256,512] sched = combine_scheds([0.25, 0.75], [sched_cos(0.4/25, 0.4), sched_cos(0.4, 0.)]) cbfs = [partial(AvgStatsCallback,accuracy), CudaCallback, partial(BatchTransformXCallback, norm_imagenette), partial(ParamScheduler, 'lr', sched)] data = DataBunch(train_dl, valid_dl, 3, 10) # This is the baseline of training with vanilla SGD. learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs) run.fit(3, learn) # ## Weight decay # Weight decay comes from the idea of L2 regularization, which consists in adding to your loss function the sum of all the weights squared. Why do that? Because when we compute the gradients, it will add a contribution to them that will encourage the weights to be as small as possible. # # Why would it prevent overfitting? The idea is that the larger the coefficients are, the sharper the canyons we will have in the loss function. If we take the basic example of a parabola, `y = a * (x**2)`, the larger `a` is, the *narrower* the parabola is.
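To make the "narrowness" concrete: the slope of `y = a * (x**2)` is `2 * a * x`, so at the same point a larger `a` gives a proportionally steeper loss surface. A quick plain-Python check:

```python
# dy/dx of y = a * x**2 is 2 * a * x: the slope at a fixed x grows linearly with a
def slope(a, x):
    return 2 * a * x

for a in [1, 2, 5, 10, 50]:
    print(f'a={a:2d}  slope at x=1: {slope(a, 1.0)}')
```

Steeper slopes everywhere mean sharper canyons around each minimum, which is the picture behind penalizing large weights.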
x = torch.linspace(-2,2,100) a_s = [1,2,5,10,50] ys = [a * x**2 for a in a_s] _,ax = plt.subplots() for a,y in zip(a_s,ys): ax.plot(x,y, label=f'a={a}') ax.set_ylim([0,5]) ax.legend(); # So by letting our model learn high parameters, it might fit all the data points in the training set with an over-complex function that has very sharp changes, which will lead to overfitting. # # <img src="images/overfit.png" alt="Fitting vs over-fitting" width="600"> # Limiting our weights from growing too much is going to hinder the training of the model, but it will yield a state where it generalizes better. Going back to the theory a little bit, weight decay (or just `wd`) is a parameter that controls that sum of squares we add to our loss: # ``` python # loss_with_wd = loss + (wd / 2) * sum([(p ** 2).sum() for p in model_parameters]) # ``` # # In practice though, it would be very inefficient (and maybe numerically unstable) to compute that big sum and add it to the loss. If you remember a little bit of high school math, you should know that the derivative of `p ** 2` with respect to `p` is simply `2 * p`, so adding that big sum to our loss is exactly the same as doing # ``` python # weight.grad += wd * weight # ``` # # for every weight in our model, which is equivalent to (in the case of vanilla SGD) updating the parameters # with # ``` python # new_weight = weight - lr * weight.grad - lr * wd * weight # ``` # # This last formula explains why the name of this technique is weight decay, as each weight is decayed by a factor `lr * wd`. # # This only works for standard SGD, as we have seen that with momentum, RMSProp or Adam, the update has some additional formulas around the gradient.
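The equivalence claimed for vanilla SGD can be checked with plain numbers — folding `wd * weight` into the gradient before the step lands on exactly the same weight as decaying first and then stepping on the raw gradient:

```python
lr, wd = 0.1, 0.01
weight, grad = 2.0, 0.5

# L2 regularization: add wd * weight to the gradient, then take a plain SGD step
l2_update = weight - lr * (grad + wd * weight)

# Weight decay: shrink the weight by lr * wd, then step on the raw gradient
wd_update = weight - lr * wd * weight - lr * grad

# For vanilla SGD the two updates coincide
print(l2_update, wd_update)
assert abs(l2_update - wd_update) < 1e-12
```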
In those cases, the formula that comes from L2 regularization: # ``` python # weight.grad += wd * weight # ``` # is different from weight decay # ``` python # new_weight = weight - lr * weight.grad - lr * wd * weight # ``` # # Most libraries use the first one, but as it was pointed out in [Decoupled Weight Decay Regularization](https://arxiv.org/pdf/1711.05101.pdf) by Ilya Loshchilov and Frank Hutter, it is better to use the second one with the Adam optimizer, which is why fastai made it its default. class Optimizer(optim.Optimizer): def __init__(self, params, steppers, **defaults): super().__init__(params, defaults) self.steppers = listify(steppers) def step(self): for pg in self.param_groups: for p in pg['params']: if p.grad is not None: compose(p, self.steppers, pg=pg) # Weight decay is subtracting `lr * wd * weight` from the weights class WeightDecay(): _defaults = dict(wd=0.) def __call__(self,p,pg): p.data.mul_(1 - pg['lr'] * pg['wd']) return p # L2 regularization is adding `wd * weight` to the gradients. class L2_Reg(): _defaults = dict(wd=0.) def __call__(self,p,pg): p.grad.data.add_(pg['wd'], p.data) return p # And this is the classic SGD step. def sgd_step(p, pg): p.data.add_(-pg['lr'], p.grad.data) return p # A stepper may introduce new hyperparameters so we associate a `_defaults` variable to it to make sure it's present in the param groups.
class Optimizer(optim.Optimizer): def __init__(self, params, steppers, **defaults): self.steppers = listify(steppers) stepper_defaults = {} for stepper in self.steppers: stepper_defaults.update(getattr(stepper,'_defaults',{})) super().__init__(params, {**stepper_defaults, **defaults}) def step(self): for pg in self.param_groups: for p in pg['params']: if p.grad is not None: compose(p, self.steppers, pg=pg) opt_func = partial(Optimizer, steppers=[WeightDecay(), sgd_step]) model = learn.model opt = opt_func(model.parameters(), lr=0.1) opt.param_groups[0]['wd'],opt.param_groups[0]['lr'] opt = opt_func(model.parameters(), lr=0.1, wd=1e-4) opt.param_groups[0]['wd'] learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs, opt_func=partial(opt_func, wd=0.01)) run.fit(3, learn) # ### With momentum # Momentum requires adding some state: we need to save the moving average of the gradients to be able to do the step, and store it inside the optimizer state if we want it saved by PyTorch (when doing checkpointing).
class StatefulOptimizer(optim.Optimizer): def __init__(self, params, steppers, stats=None, **defaults): self.steppers,self.stats = listify(steppers),listify(stats) base_defaults = {} for stepper in self.steppers: base_defaults.update(getattr(stepper,'_defaults',{})) for stat in self.stats: base_defaults.update(getattr(stat,'_defaults',{})) super().__init__(params, {**base_defaults, **defaults}) def step(self): for pg in self.param_groups: for p in pg['params']: if p.grad is not None: if p not in self.state: init_state = {} for stat in self.stats: init_state.update(stat.init_state(p)) self.state[p] = init_state state = self.state[p] for stat in self.stats: state = stat.update(p, pg, state) compose(p, self.steppers, pg=pg, state=state) self.state[p] = state class Stat(): _defaults = {} def init_state(self, p): raise NotImplementedError def update(self, p, pg, state): raise NotImplementedError class AverageGrad(Stat): _defaults = dict(mom=0.9) def init_state(self, p): return {'grad_avg': torch.zeros_like(p.grad.data)} def update(self, p, pg, state): state['grad_avg'].mul_(pg['mom']).add_(p.grad.data) return state # We update the previous classes/functions to take the new `state` argument. class WeightDecay(): _defaults = dict(wd=0.) def __call__(self,p,pg,state): p.data.mul_(1 - pg['lr'] * pg['wd']) return p class L2_Reg(): _defaults = dict(wd=0.) def __call__(self,p,pg,state): p.grad.data.add_(pg['wd'], p.data) return p def sgd_step(p, pg,state): p.data.add_(-pg['lr'], p.grad.data) return p # Then we add the momentum step: def momentum_step(p, pg, state): p.data.add_(-pg['lr'], state['grad_avg']) return p sgd_mom = partial(StatefulOptimizer, steppers=momentum_step, stats=AverageGrad()) learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs, opt_func=sgd_mom) run.fit(3, learn) # ### Momentum experiments # What does momentum do to the gradients exactly? Let's do some plots to find out! 
# + hide_input=false x = torch.linspace(-4, 4, 200) y = torch.randn(200) + 0.3 betas = [0.5,0.7,0.9,0.99] # + hide_input=false def plot_mom(f): _,axs = plt.subplots(2,2, figsize=(12,8)) for beta,ax in zip(betas, axs.flatten()): ax.plot(y, linestyle='None', marker='.') avg,res = None,[] for i,yi in enumerate(y): avg,p = f(avg, beta, yi, i) res.append(p) ax.plot(res, color='red') ax.set_title(f'beta={beta}') # - # This is the regular momentum. # + hide_input=false def mom1(avg, beta, yi, i): if avg is None: avg=yi res = beta * avg + yi return res,res plot_mom(mom1) # - # As we can see, with a too high value, it may go way too high with no way to change its course. # # Another way to smooth noisy data is to do an exponentially moving average. # + hide_input=false def mom2(avg, beta, yi, i): if avg is None: avg=yi avg = beta * avg + (1-beta) * yi return avg, avg plot_mom(mom2) # - # We can see it gets to a zero-constant when the data is purely random. If the data has a certain shape, it will get that shape (with some delay for high beta). # + hide_input=false y = 1 - (x/3) ** 2 + torch.randn(200) * 0.1 # - y[0]=0.5 # + hide_input=false plot_mom(mom2) # - # Debiasing is here to correct the wrong information we may have in the very first batch. # + hide_input=false def mom3(avg, beta, yi, i): if avg is None: avg=0 avg = beta * avg + (1-beta) * yi return avg, avg/(1-beta**(i+1)) plot_mom(mom3) # - # ### Adam and friends # In Adam, we use the gradient averages but with dampening, so let's add this to the `AverageGrad` class. class AverageGrad(Stat): _defaults = dict(mom=0.9) def __init__(self, dampening:bool=False): self.dampening=dampening def init_state(self, p): return {'grad_avg': torch.zeros_like(p.grad.data)} def update(self, p, pg, state): pg['mom_damp'] = 1 - pg['mom'] if self.dampening else 1. state['grad_avg'].mul_(pg['mom']).add_(pg['mom_damp'], p.grad.data) return state # We also need to track the moving average of the gradients squared. 
class AverageSqrGrad(Stat): _defaults = dict(sqr_mom=0.99) def __init__(self, dampening:bool=True): self.dampening=dampening def init_state(self, p): return {'sqr_avg': torch.zeros_like(p.grad.data)} def update(self, p, pg, state): pg['sqr_damp'] = 1 - pg['sqr_mom'] if self.dampening else 1. state['sqr_avg'].mul_(pg['sqr_mom']).addcmul_(pg['sqr_damp'],p.grad.data,p.grad.data) return state # We also need the number of steps done during training for the debiasing. class StepCount(Stat): def init_state(self, p): return {'step': 0} def update(self, p, pg, state): state['step'] += 1 return state def debias(mom, damp, step): return damp * (1 - mom**step) / (1-mom) # Then the Adam step is just the following: class AdamStep(): _defaults = dict(eps=1e-5) def __call__(self, p, pg, state): debias1 = debias(pg['mom'], pg['mom_damp'], state['step']) debias2 = debias(pg['sqr_mom'], pg['sqr_damp'], state['step']) p.data.addcdiv_(-pg['lr'] / debias1, state['grad_avg'], (state['sqr_avg']/debias2 + pg['eps']).sqrt()) return p adam = partial(StatefulOptimizer, steppers=AdamStep(), stats=[AverageGrad(dampening=True), AverageSqrGrad(), StepCount()]) learn,run = get_learn_run(nfs, data, 0.1, conv_layer, cbs=cbfs, opt_func=adam) run.fit(3, learn) # It's then super easy to implement a new optimizer. This is LAMB from a [very recent paper](https://arxiv.org/pdf/1904.00962.pdf): # # $\begin{align} # g_{t}^{l} &= \nabla L(w_{t-1}^{l}, x_{t}) \\ # m_{t}^{l} &= \beta_{1} m_{t-1}^{l} + (1-\beta_{1}) g_{t}^{l} \\ # v_{t}^{l} &= \beta_{2} v_{t-1}^{l} + (1-\beta_{2}) g_{t}^{l} \odot g_{t}^{l} \\ # m_{t}^{l} &= m_{t}^{l} / (1 - \beta_{1}^{t}) \\ # v_{t}^{l} &= v_{t}^{l} / (1 - \beta_{2}^{t}) \\ # r_{1} &= \|w_{t-1}^{l}\|_{2} \\ # s_{t}^{l} &= \frac{m_{t}^{l}}{\sqrt{v_{t}^{l} + \epsilon}} + \lambda w_{t-1}^{l} \\ # r_{2} &= \| s_{t}^{l} \|_{2} \\ # \eta^{l} &= \eta * r_{1}/r_{2} \\ # w_{t}^{l} &= w_{t}^{l-1} - \eta_{l} * s_{t}^{l} \\ # \end{align}$ class LambStep(): _defaults = dict(eps=1e-6, wd=0.) 
def __call__(self, p, pg, state): debias1 = debias(pg['mom'], pg['mom_damp'], state['step']) debias2 = debias(pg['sqr_mom'], pg['sqr_damp'], state['step']) r1 = p.data.pow(2).mean().sqrt() step = (state['grad_avg']/ debias1) / (state['sqr_avg']/debias2 + pg['eps']).sqrt() + pg['wd'] * p.data r2 = step.pow(2).mean().sqrt() p.data.add_(-pg['lr'] * min(r1/r2,10), step) return p lamb = partial(StatefulOptimizer, steppers=LambStep(), stats=[AverageGrad(dampening=True), AverageSqrGrad(), StepCount()]) learn,run = get_learn_run(nfs, data, 0.4, conv_layer, cbs=cbfs, opt_func=lamb) run.fit(3, learn) # Other recent variants of optimizers: # - [Large Batch Training of Convolutional Networks](https://arxiv.org/abs/1708.03888) (LARS also uses weight statistics, not just gradient statistics. Can you add that to this class?) # - [Adafactor: Adaptive Learning Rates with Sublinear Memory Cost](https://arxiv.org/abs/1804.04235) (Adafactor combines stats over multiple sets of axes) # - [Adaptive Gradient Methods with Dynamic Bound of Learning Rate](https://arxiv.org/abs/1902.09843) # ## Export # !python notebook2script.py 09_optimizers.ipynb
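As a closing sanity check, the LAMB layer-wise trust ratio $r_1/r_2$ can be verified numerically. The weight and step values below are made up purely for illustration:

```python
import math

w    = [3.0, 4.0]   # layer weights: r1 = rms(w)
step = [0.1, 0.1]   # Adam-style step s_t, debiasing already applied
r1 = math.sqrt(sum(x * x for x in w) / len(w))
r2 = math.sqrt(sum(x * x for x in step) / len(step))
lr = 0.1
scale = min(r1 / r2, 10)             # clipped at 10, as in LambStep
update = [-lr * scale * s for s in step]
# here r1/r2 is about 35, so the clip at 10 is active
```

Large weights relative to the step size inflate the effective learning rate for that layer, which is why LAMB stays stable at very large batch sizes.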
dev_course/dl2/09_optimizers.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np waitdata_raw = pd.read_csv('2009_2020-annual-surgical_wait_times.csv') waitdata_raw.head(100) waitdata_raw.head(20) waitdata_raw.shape waitdata_raw_shape = waitdata_raw.shape waitdata_raw_shape wdr = waitdata_raw wdr.loc[10:20, "WAITING":"COMPLETED"] wdr.loc[1000, "WAITING"] wdr.loc[20000] wdr.loc[10000:10020, ["WAITING"]] wdr.iloc[-9:] wdr_sort = wdr.sort_values(by = "FISCAL_YEAR", ascending = False) wdr_sort.head() wdr.describe(include = 'all') waiting = wdr[['WAITING']] waiting.mean() wdr.mean() waitingSum = wdr[['WAITING']].sum() waitingSum waiting_column = wdr['WAITING'] waiting_column waiting_freq = waiting_column.value_counts() waiting_freq wl = waitdata_raw.rename(columns={'FISCAL_YEAR':'year', 'HEALTH_AUTHORITY': 'HA', 'HOSPITAL_NAME':'hosp', 'PROCEDURE_GROUP': 'prcd', 'WAITING':'waiting', 'COMPLETED': 'comp', 'COMPLETED_50TH_PERCENTILE': 'median_wait_w', 'COMPLETED_90TH_PERCENTILE':'wait_90_w'}) wl.head() comp_day = wl.assign(median_wait_d = wl['median_wait_w']*7) comp_day.head() comp_day = comp_day.assign(wait_90th_d = wl['wait_90_w']*7) comp_day.head() wl_19_20 = wl[wl['year'] == '2019/20'] wl_19_20 wl_kgh = wl[wl['hosp'] == 'Kelowna General Hospital'] wl_kgh wl_kgh_19_20 = wl[(wl['hosp'] == 'Kelowna General Hospital') & (wl['year'] == '2019/20')] wl_kgh_19_20 wl_all_knee= wl[(wl['HA'] == 'All Health Authorities') & (wl['prcd'] == 'Knee Replacement')] wl_all_knee wl_all_ortho= wl[(wl['HA'] == 'All Health Authorities') & (wl['prcd'] ==
'Other Orthopaedic Surgery')] wl_all_ortho wl_all_prostate= wl[(wl['HA'] == 'All Health Authorities') & (wl['prcd'] == 'Prostate Surgery')] wl_all_prostate wl_all_uterine= wl[(wl['HA'] == 'All Health Authorities') & (wl['prcd'] == 'Uterine Surgery')] wl_all_uterine wl_all_eye= wl[(wl['HA'] == 'All Health Authorities') & (wl['prcd'] == 'Other Eye Surgery')] wl_all_eye wl_all_cataract= wl[(wl['HA'] == 'All Health Authorities') & (wl['prcd'] == 'Cataract Surgery')] wl_all_cataract wl_all= wl[(wl['HA'] == 'All Health Authorities')] wl_all wl_all_kgh = wl[(wl['prcd'] == 'All Procedures') & ((wl['hosp'] == 'Kelowna General Hospital')|(wl['hosp'] == 'All Facilities'))] wl_all_kgh wl_kgh_all = wl[(wl['prcd'] == 'All Procedures') & (wl['hosp'] == 'Kelowna General Hospital')] wl_kgh_all wl_kgh_all.describe() HA_stats = wl.groupby(by='HA').agg(['min','max']) HA_stats wl_lt5_wait = wl[wl['waiting'] == '<5'] wl_lt5_wait wl_0_wait = wl[wl['waiting'] == '0'] wl_0_wait wl_lt5_comp = wl[wl['comp'] == '<5'] wl_lt5_comp wl_0_comp = wl[wl['comp'] == '0'] wl_0_comp wl_25_comp = wl[wl['comp'] == '25'] wl_25_comp
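The `<5` entries filtered for above mean `waiting` and `comp` are object (string) columns, so numeric aggregations like `wdr.mean()` silently skip them. One common workaround, sketched on a toy frame (the imputed value 2 is an assumption for illustration, not something the dataset specifies):

```python
import pandas as pd

df = pd.DataFrame({'waiting': ['12', '<5', '0', '27']})
# Coerce after replacing the privacy-suppressed '<5' with a chosen value.
df['waiting_n'] = pd.to_numeric(df['waiting'].replace('<5', '2'))
total = int(df['waiting_n'].sum())  # 12 + 2 + 0 + 27 = 41
```

Whatever imputation you choose, doing it explicitly keeps the suppressed counts out of the raw column, so the original `<5` markers remain inspectable.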
analysis/Monica/milestone1.ipynb/analysis.monica.milestone1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="VEYe67K6E6j0" # ## **Grasp Candidate Sampling** # # + [markdown] id="VsbCH_XUJDCN" # ## Notebook Setup # The following cell will install Drake, checkout the manipulation repository, and set up the path (only if necessary). # - On Google's Colaboratory, this **will take approximately two minutes** on the first time it runs (to provision the machine), but should only need to reinstall once every 12 hours. # # More details are available [here](http://manipulation.mit.edu/drake.html). # + id="v5OrhpSmxkGH" import importlib import os, sys from urllib.request import urlretrieve if 'google.colab' in sys.modules and importlib.util.find_spec('manipulation') is None: urlretrieve(f"http://manipulation.csail.mit.edu/setup/setup_manipulation_colab.py", "setup_manipulation_colab.py") from setup_manipulation_colab import setup_manipulation setup_manipulation(manipulation_sha='c1bdae733682f8a390f848bc6cb0dbbf9ea98602', drake_version='0.25.0', drake_build='releases') from manipulation import running_as_notebook # Setup rendering (with xvfb), if necessary: import os if 'google.colab' in sys.modules and os.getenv("DISPLAY") is None: from pyvirtualdisplay import Display display = Display(visible=0, size=(1400, 900)) display.start() # setup ngrok server server_args = [] if 'google.colab' in sys.modules: server_args = ['--ngrok_http_tunnel'] from meshcat.servers.zmqserver import start_zmq_server_as_subprocess proc, zmq_url, web_url = start_zmq_server_as_subprocess(server_args=server_args) import numpy as np from pydrake.all import ( AddMultibodyPlantSceneGraph, ConnectMeshcatVisualizer, DiagramBuilder, RigidTransform, RotationMatrix, Parser, Simulator, FindResourceOrThrow ) import open3d as o3d import meshcat import meshcat.geometry as g import 
meshcat.transformations as tf from manipulation.meshcat_utils import draw_open3d_point_cloud from manipulation.utils import FindResource # Load mustard bottle pointcloud from online. urlretrieve(f"http://hjrobotics.net/wp-content/uploads/2020/10/mustard_bottle.pcd", "mustard_bottle.pcd") pcd = o3d.io.read_point_cloud("mustard_bottle.pcd") vis = meshcat.Visualizer(zmq_url=zmq_url) vis["/Background"].set_property("visible", False) draw_open3d_point_cloud(vis, pcd) def setup_grasp_diagram(draw_frames=True): builder = DiagramBuilder() plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=0.001) parser = Parser(plant) parser.package_map().Add("wsg_50_description", os.path.dirname(FindResourceOrThrow("drake/manipulation/models/wsg_50_description/package.xml"))) gripper = parser.AddModelFromFile(FindResource( "models/schunk_wsg_50_welded_fingers.sdf"), "gripper") plant.Finalize() frames_to_draw = {"gripper": {"body"}} if draw_frames else {} meshcat_vis = ConnectMeshcatVisualizer(builder, scene_graph, zmq_url=zmq_url, delete_prefix_on_load=False, frames_to_draw=frames_to_draw) diagram = builder.Build() context = diagram.CreateDefaultContext() return plant, scene_graph, diagram, context, meshcat_vis # Now we'll use this as a global variable. 
drake_params = setup_grasp_diagram() def draw_grasp_candidate(X_G, prefix='gripper', refresh=False): plant, scene_graph, diagram, context, meshcat_vis = drake_params if (refresh): meshcat_vis.vis.delete() plant_context = plant.GetMyContextFromRoot(context) plant.SetFreeBodyPose(plant_context, plant.GetBodyByName("body"), X_G) meshcat_vis.load() diagram.Publish(context) X_G = plant.GetFreeBodyPose(plant_context, plant.GetBodyByName("body")) def draw_grasp_candidates(X_G, prefix='gripper', draw_frames=True): builder = DiagramBuilder() plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=0.001) parser = Parser(plant) parser.package_map().Add("wsg_50_description", os.path.dirname(FindResourceOrThrow("drake/manipulation/models/wsg_50_description/package.xml"))) gripper = parser.AddModelFromFile(FindResource( "models/schunk_wsg_50_welded_fingers.sdf"), "gripper") plant.WeldFrames(plant.world_frame(), plant.GetFrameByName("body"), X_G) plant.Finalize() frames_to_draw = {"gripper": {"body"}} if draw_frames else {} meshcat = ConnectMeshcatVisualizer(builder, scene_graph, zmq_url=zmq_url, prefix=prefix, delete_prefix_on_load=False, frames_to_draw=frames_to_draw) diagram = builder.Build() context = diagram.CreateDefaultContext() meshcat.load() diagram.Publish(context) def draw_frame_meshcat(vis, frame_name, X_AB, scale): vis[frame_name].set_object(meshcat.geometry.triad(scale=scale)) vis[frame_name].set_transform(X_AB.GetAsMatrix4()) def compute_sdf(pcd, X_G, visualize=False): plant, scene_graph, diagram, context, meshcat_vis = drake_params plant_context = plant.GetMyContextFromRoot(context) scene_graph_context = scene_graph.GetMyContextFromRoot(context) plant.SetFreeBodyPose(plant_context, plant.GetBodyByName("body"), X_G) if (visualize): meshcat_vis.load() diagram.Publish(context) query_object = scene_graph.get_query_output_port().Eval(scene_graph_context) pcd_sdf = np.inf for pt in pcd.points: distances = query_object.ComputeSignedDistanceToPoint(pt) for 
body_index in range(len(distances)): distance = distances[body_index].distance if distance < pcd_sdf: pcd_sdf = distance return pcd_sdf def check_collision(pcd, X_G, visualize=False): sdf = compute_sdf(pcd, X_G, visualize) return (sdf > 0) # + [markdown] id="ZUg7IbDmIyeo" # ## Grasp Candidate based on Local Curvature # # This is an implementation-heavy assignment, where we will implement a variation of the grasp candidate sampling algorithm from [this paper](https://arxiv.org/pdf/1706.09911.pdf). It is from 2017, so we are really doing some cutting-edge techniques! Parts of the [library](https://github.com/atenpas/gpg) based on the paper, which the authors have named "Grasp Pose Generator" (GPG), are used in real grasp selection systems including the one being run at Toyota Research Institute. # # As opposed to sampling candidate grasp poses using the "antipodal heuristic", this sampling algorithm uses a heuristic based on the local curvature. This heuristic can work quite well, especially for smoother / symmetrical objects, which have relatively consistent curvature characteristics. # # + [markdown] id="1PAeSQRiHRNi" # ## Computing the Darboux Frame # # First, let's work on formalizing our notion of a "local curvature" by bringing up the [**Darboux Frame**](https://en.wikipedia.org/wiki/Darboux_frame) from differential geometry. It has a fancy French name (after its creator), but the concept is quite simple. # # Given a point $p\in\mathbb{R}^3$ on a differentiable surface $\mathcal{S}\subset\mathbb{R}^3$, we've seen that we can compute the normal vector at point $p$. Let's denote this vector as $n(p)$. # # The Darboux frame first aligns the $y$-axis with the inward normal vector, and aligns the $x$ and $z$ axes with the principal axes of the tangent surface given the curvature. We will define the axes as # - x-axis: aligned with the major axis of curvature at point $p$. # - y-axis: aligned with the inward normal vector at point $p$.
# - z-axis: aligned with the minor axis of curvature at point $p$. # # where the major axis of curvature has a smaller radius of curvature than the minor axis. The figure below might clear things up. # # <img src="https://raw.githubusercontent.com/RussTedrake/manipulation/master/figures/exercises/darboux_frame.png" width="400"> # + [markdown] id="J5WdmM8hQkQ7" # Below, your job is to compute the RigidTransform from the world to the Darboux frame of a specific point on the pointcloud. # # Here is a simple outline of the algorithm that we've seen in class: # 1. Compute the set of points $\mathcal{S}$ around the given point using [`kdtree.search_hybrid_vector_3d`](http://www.open3d.org/docs/release/python_api/open3d.geometry.KDTreeFlann.html?highlight=search_hybrid_vector#open3d.geometry.KDTreeFlann.search_hybrid_vector_3d), with `ball_radius` as the distance parameter. # 2. Compute the $3\times3$ matrix with the sum of outer-products of the normal vectors. # $$\mathbf{N}=\sum_{p\in\mathcal{S}} n(p)n^T(p)$$ # 3. Run an eigen decomposition and get the eigenvectors using `np.linalg.eig`. Denote the eigenvectors as $[v_1, v_2, v_3]$, in order of decreasing corresponding eigenvalues. Convince yourself that: # - $v_1$ is the normal vector, # - $v_2$ is the major tangent vector, # - $v_3$ is the minor tangent vector. # 4. If $v_1$ is heading outwards (same direction as $n(p)$), negate $v_1$. # 5. Using $v_1,v_2,v_3$, construct the rotation matrix by horizontally stacking the vertical vectors: $\mathbf{R} = [v_2 | v_1 | v_3]$. # 6. If the rotation is improper, negate $v_2$. # 7. Return a `RigidTransform` that has the rotation set as defined in the figure above, and translation defined at the desired point. # # NOTE: Convince yourself of the following: if you knew the orthonormal basis vectors of a frame ${}^W[i,j,k]$, then the rotation matrix of that frame with respect to the world ${}^W\mathbf{R}^F$ can be computed by horizontally stacking the vertical vectors ($[i|j|k]$).
Why would this be? (This doesn't necessarily mean the eigenvector matrix is always a rotation matrix due to improper rotations) # + id="WRuwFwcuTQtw" def compute_darboux_frame(index, pcd, kdtree, ball_radius=0.002, max_nn=50): """ Given a index of the pointcloud, return a RigidTransform from world to the Darboux frame at that point. Args: - index (int): index of the pointcloud. - pcd (o3d.pointcloud object): open3d pointcloud of the object. - kdtree (o3d.geometry.KDTreeFlann object): kd tree to use for nn search. - ball_radius (float): ball_radius used for nearest-neighbors search - max_nn (int): maximum number of points considered in nearest-neighbors search. """ points = np.asarray(pcd.points) # Nx3 np array of points normals = np.asarray(pcd.normals) # Nx3 np array of normals # Fill in your code here. X_WF = RigidTransform() # modify here. return X_WF # + [markdown] id="O3Nmr31JUZPB" # You can check your work by running the cell below and looking at the frame visualization in Meshcat. # + id="TcNpDwGiZw1n" # 151, 11121 are pretty good verifiers of the implementation. index = 151 vis = meshcat.Visualizer(zmq_url=zmq_url) vis.delete() # Build KD tree. kdtree = o3d.geometry.KDTreeFlann(pcd) X_WP = compute_darboux_frame(index, pcd, kdtree) draw_open3d_point_cloud(vis, pcd) draw_frame_meshcat(vis, "frame", X_WP, 0.1) # + [markdown] id="HrdDJyHzU4W_" # ## Collision Line Search # # Now we wish to align our gripper frame with the Darboux frame that we found, but naively doing it will result in collision / being too far from the object. # # An important heuristic that is used in the GPG work is that grasps are more stable when contact area is maximized. For that, we would need the gripper to be as inwards as possible towards the object but avoid collisions. # # To implement this, we will use a line search along a grid along the y-axis, and find the **maximum** value of $y$ (remember that our $y$ is towards the inwards normal) that results in no-collision. 
# # We've given you the grid you should search over, and the function `distance=compute_sdf(pcd, X_WG)` that will return the signed distance function between the set of pointclouds, and the gripper, given the transform `X_WG`. You are required to use this to detect the presence of collisions. # # Finally, if there is no value of $y$ that results in no collisions, you should return `np.nan` for the signed distance, and `None` for the rigid transform. # + id="tUgWtIoDW-x2" # Compute static rotation between the frame and the gripper. def find_minimum_distance(pcd, X_WG): """ By doing line search, compute the maximum allowable distance along the y axis before penetration. Return the maximum distance, as well as the new transform. Args: - pcd (open3d.geometry.Pointcloud object): pointcloud to search over. - X_WG (Drake RigidTransform object): RigidTransform. You can expect this to be the return from compute_darboux_frame. Return: - Tuple (signed_distance, X_WGnew) where - signed_distance (float): signed distance between gripper and object pointcloud at X_WGnew. - X_WGnew: New rigid transform that moves X_WG along the y axis while maximizing the y-translation subject to no collision. If there is no value of y that results in no collisions, return (np.nan, None). """ y_grid = np.linspace(-0.05, 0.05, 10) # do not modify # modify here. signed_distance = 0.0 # modify here X_WGnew = RigidTransform() # modify here return signed_distance, X_WGnew # + [markdown] id="s3E771RFXO7N" # You can check your work below by running the cell below. If the visualization results in a collision, or the gripper is excessively far from the object, your implementation is probably wrong. 
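Stripped of the Drake types, the search inside `find_minimum_distance` is just a scan over the grid that keeps the largest collision-free `y`. Here is a sketch with a made-up signed-distance function standing in for `compute_sdf`:

```python
import numpy as np

def max_free_offset(sdf_at, y_grid):
    # Keep the largest y whose signed distance stays positive (no collision).
    best = float('nan')
    for y in y_grid:                 # y_grid is ascending
        if sdf_at(y) > 0:
            best = y
    return best

y_grid = np.linspace(-0.05, 0.05, 10)   # the grid used above
# Toy SDF: the gripper starts penetrating once it moves past y = 0.02.
offset = max_free_offset(lambda y: 0.02 - y, y_grid)
```

If no grid point is collision-free the function returns `NaN`, mirroring the `(np.nan, None)` contract in the docstring. Your real implementation additionally has to rebuild the shifted `RigidTransform` for each candidate `y`.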
# + id="NvPNcEzpWqzt" vis = meshcat.Visualizer(zmq_url=zmq_url) vis.delete() draw_open3d_point_cloud(vis, pcd) draw_frame_meshcat(vis, "frame", X_WP, 0.1) shortest_distance, X_WGnew = find_minimum_distance(pcd, X_WP) draw_grasp_candidate(X_WGnew, refresh=False) # + [markdown] id="ivgvyXpVXiXn" # ## Nonempty Grasp # # Let's add one more heuristic: when we close the gripper, we don't want what is in between the two fingers to be an empty region. That would make our robot look not very smart! # # There is a simple way to check this: let's define a volumetric region swept by the gripper's closing trajectory, and call it $\mathcal{B}(^{W}X^{G})$. We will also call the gripper body (when fully open) as the set $\mathcal{C}(^{W}X^G)$. If there are no object pointclouds within the set $\mathcal{B}(^{W}X^{G})$, we can simply discard it. # # <img src="https://raw.githubusercontent.com/RussTedrake/manipulation/master/figures/exercises/closing_plane.png" width="800"> # # You're probably thinking - how do I do a rigid transform on a set? Generally it's doable if the transform is affine, the set is polytopic, etc., but there is an easier trick - we just transform all the pointclouds to the gripper frame $G$! # # Your algorithm below will follow these step: # 1. Transform the pointcloud points `pcd` from world frame to gripper frame. # 2. For each point, check if it is within the bounding box we have provided. # 3. If there is a point inside the set, return True. If not, return false. # # HINT: Our implementation uses no for-loops for this one. You can look into `np.all` for an extra boost of speed. # + id="EuDXAeQZbhgR" def check_nonempty(pcd, X_WG, visualize=False): """ Check if the "closing region" of the gripper is nonempty by transforming the pointclouds to gripper coordinates. Args: - pcd (open3d.geometry.Pointcloud): open3d pointcloud class - X_WG (Drake RigidTransform): transform of the gripper. 
Return: - is_nonempty (boolean): boolean set to True if there is a point within the cropped region. """ pcd_W_np = np.array(pcd.points) # Bounding box of the closing region written in the coordinate frame of the gripper body. # Do not modify crop_min = [-0.054, 0.036, -0.01] crop_max = [0.054, 0.117, 0.01] ############### modify from here. pcd_G_np = pcd_W_np # modify to get pointcloud seen from Gripper frame. Don't change the variable name. is_nonempty = False ############### Do not modify beyond. if (visualize): vis.delete() pcd_G = o3d.geometry.PointCloud() pcd_G.points = o3d.utility.Vector3dVector(pcd_G_np) pcd_G.colors = pcd.colors draw_grasp_candidate(RigidTransform()) draw_open3d_point_cloud(vis, pcd_G) box_length = np.array(crop_max) - np.array(crop_min) box_center = (np.array(crop_max) + np.array(crop_min)) / 2. box = g.Box(box_length) vis["closing_region"].set_object(box, g.MeshLambertMaterial(color=0xff0000, opacity=0.3)) vis["closing_region"].set_transform(tf.translation_matrix(box_center)) return is_nonempty # + [markdown] id="_SIw8uGhb2P-" # You can check the correctness of your implementation by running the below cell, where we have visualized the pointclouds and $\mathcal{B}({}^W X^G)$ from the gripper frame. # + id="BBg0NCWd2qI8" # Lower and upper bounds of the closing region in gripper coordinates. Do not modify. vis = meshcat.Visualizer(zmq_url=zmq_url) check_nonempty(pcd, X_WGnew, visualize=True) # + [markdown] id="dwGE2JDnXmY5" # ## Grasp Sampling Algorithm # # That was a lot of subcomponents, but we're finally onto the grand assembly. You will now generate `candidate_num` candidate grasps using everything we have written so far. The sampling algorithm goes as follows: # # 1. Select a random point $p$ from the pointcloud (use `np.random.randint()`) # 2. Compute the Darboux frame ${}^WX^F(p)$ of the point $p$ using `compute_darboux_frame`. # 3. 
Randomly sample an $x$ direction translation $x\in[x_{min},x_{max}]$, and a $z$ direction rotation $\phi\in[\phi_{min},\phi_{max}]$. Compute a grasp frame $T$ that has the relative transformation `X_FT=(RotateZ(phi),TranslateX(x))`. Convince yourself this makes the point $p$ stay in the "closing plane" (drawn in red) defined in the figure above. (NOTE: For ease of grading, make sure you compute the $x$ direction first with `np.random.rand()`, then compute the $\phi$ direction with another call to `np.random.rand()`, not the other way around.) # 4. From the grasp frame $T$, translate along the $y$ axis such that the gripper is closest to the object without collision. Use `find_minimum_distance`, and call this frame $G$. Remember that `find_minimum_distance` can return `np.nan`. Skip the loop if this happens. # 5. If $G$ results in no collisions (see `check_collision`) and results in non-empty grasp (use `check_nonempty`), append it to the candidate list. If not, continue the loop until we have desired number of candidates. # # + id="LvKVHqv8fnq1" def compute_candidate_grasps(pcd, candidate_num = 10, random_seed=5): """ Compute candidate grasps. Args: - pcd (open3d.geometry.Pointcloud): pointcloud of the object - candidate_num (int) : number of desired candidates. - random_seed (int) : seed for rng, used for grading. Return: - candidate_lst (list of drake RigidTransforms) : candidate list of grasps. """ # Do not modify. x_min = -0.03 x_max = 0.03 phi_min = -np.pi/3 phi_max = np.pi/3 np.random.seed(random_seed) # Build KD tree for the pointcloud. kdtree = o3d.geometry.KDTreeFlann(pcd) ball_radius = 0.002 candidate_count = 0 candidate_lst = [] # list of candidates, given by RigidTransforms. # Modify from here. return candidate_lst # + [markdown] id="dypaCKcOf9cn" # You can check your implementation by running the cell below. Note that although we've only sampled 20 candidates, a lot of them look promising. # + id="ItS9GtKaZ39w" # Takes approximately 40 seconds. 
if (running_as_notebook): grasp_candidates = compute_candidate_grasps(pcd, candidate_num=3, random_seed=5) vis.delete() draw_open3d_point_cloud(vis, pcd) for i in range(len(grasp_candidates)): draw_grasp_candidates(grasp_candidates[i], prefix="gripper" + str(i), draw_frames=False) else: grasp_candidates = compute_candidate_grasps(pcd, candidate_num=1, random_seed=1) # + [markdown] id="7jcCyk-q2U3L" # ## Note on Running Time # # You might be disappointed in how slowly this runs, but the same algorithm written in C++ with optimized libraries can run much faster. (I would expect around a 20 times speedup.) # # But more fundamentally, it's important to note how trivially parallelizable the candidate sampling process is. With a parallelized and optimized implementation, hundreds of candidates can be sampled in real time. # + [markdown] id="MwE8yNg58VQN" # ## How will this notebook be Graded?## # # If you are enrolled in the class, this notebook will be graded using [Gradescope](www.gradescope.com). You should have gotten the enrollment code from our announcement in Piazza. # # For submission of this assignment, you must do two things. # - Download and submit the notebook `grasp_candidate.ipynb` to Gradescope's notebook submission section, along with your notebook for the other problems. # # We will evaluate the local functions in the notebook to see if the function behaves as we have expected. For this exercise, the rubric is as follows: # - [4 pts] `compute_darboux_frame` must be implemented correctly. # - [4 pts] `find_minimum_distance` must be implemented correctly. # - [2 pts] `check_nonempty` must be implemented correctly. # - [4 pts] `compute_candidate_grasps` must be implemented correctly. # + id="xj5nAh4g8VQO" from manipulation.exercises.clutter.test_grasp_candidate import TestGraspCandidate from manipulation.exercises.grader import Grader Grader.grade_output([TestGraspCandidate], [locals()], 'results.json') Grader.print_test_results('results.json') # -
exercises/clutter/grasp_candidate.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # UT2000 Beacon Summary # For the MADS framework paper, we need to present the data. This notebook looks at the data to make sure we have reasonable values at least in terms of the beacon. import warnings warnings.filterwarnings('ignore') # + import os import os.path from os import path from datetime import datetime, timedelta import pytz import matplotlib.pyplot as plt import seaborn as sns import matplotlib.dates as mdates import pandas as pd import numpy as np # - # # ID Crossover # We need the IDs from the multiple modalities to cross-reference the participants. ids_1000 = pd.read_csv('../data/raw/ut1000/admin/id_crossover.csv') # limiting so we don't have repeats with ut2000 ids_1000 = ids_1000[ids_1000['record'] < 2000] ids_1000.head() ids_2000 = pd.read_csv('../data/raw/ut2000/admin/id_crossover.csv') ids_2000.head() # combining ut1000 and 2000 records ids = ids_1000.append(ids_2000) # # Beacon Data # The beacon data can be read in based on the beiwe IDs of the participants. # ## PM Data # + pm_df = pd.DataFrame() measurements = [ 'pm1.0', 'pm2.5', 'pm10', 'std1.0', 'std2.5', 'std10', 'pc0.3', 'pc0.5', 'pc1.0', 'pc2.5', 'pc5.0', 'pc10.0' ] for folder in os.listdir('../data/raw/ut2000/beacon/'): beacon_no = folder[-2:] if beacon_no in ['01','02','03','05','06','07','08','09','10','11','12','20']: beacon_df = pd.DataFrame() for file in os.listdir(f'../data/raw/ut2000/beacon/{folder}/bevo/pms5003/'): if file[-1] == 'v': temp = pd.read_csv(f'../data/raw/ut2000/beacon/{folder}/bevo/pms5003/{file}',names=measurements, parse_dates=True,infer_datetime_format=True) if len(temp) > 1: beacon_df = pd.concat([beacon_df,temp]) if len(beacon_df) > 0: beacon_df['number'] = beacon_no pm_df = pd.concat([pm_df,beacon_df]) dt = [] for i in range(len(pm_df)):
if isinstance(pm_df.index[i], str): try: ts = int(pm_df.index[i]) except ValueError: ts = int(pm_df.index[i][:-2]) dt.append(datetime.utcfromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S')) else: dt.append(datetime.now()) pm_df['datetime'] = dt pm_df['datetime'] = pd.to_datetime(pm_df['datetime']) pm_df.set_index('datetime',inplace=True) # - # There seems to be issues with the data - the PM10 values should be the greatest while the PM1 values should be the smallest - they appear reversed. Also, no need to include the STD values since they really make no sense. pm_df = pm_df[['pm1.0','pm2.5','pm10','number']] pm_df.columns = ['pm10','pm2.5','pm1','number'] pm_df np.nanmean(pm_df['pm10']) # ### Visualizing to Get a Sense of the Concentrations # Beacons of concern: # - 1: Data seems unusable for the duration of the study # - 2: Data seems unusable for the duration of the study # - 3: Isolated incidents that can be removed # - 5: Isolated incidents that can be removed # - 6: Isolated incidents that can be removed for beacon in ['03','05','06']:#pm_df['number'].unique(): #beacon = '01' pm_pt = pm_df[pm_df['number'] == beacon] fig, ax = plt.subplots(figsize=(12,6)) ax.scatter(pm_pt.index,pm_pt['pm1'],color='firebrick',s=5) ax.scatter(pm_pt.index,pm_pt['pm2.5'],color='black',s=5) ax.scatter(pm_pt.index,pm_pt['pm10'],color='seagreen',s=5) ax.set_title(f'Beacon: {beacon}') ax.set_xlim([datetime(2019,3,20),datetime(2019,4,20)]) ax.set_ylim([0,500]) plt.show() plt.close()
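The index-parsing loop above handles a mix of clean unix-second strings and strings carrying two junk trailing characters. That logic can be isolated into a small helper with the same try/except fallback (the sample timestamp below is a made-up value, not from the beacon data):

```python
from datetime import datetime, timezone

def parse_ts(s):
    # int() on the clean form; on failure, strip the last two characters
    # (the junk suffix seen in some raw index strings) and retry.
    try:
        ts = int(s)
    except ValueError:
        ts = int(s[:-2])
    return datetime.fromtimestamp(ts, tz=timezone.utc)

stamp = parse_ts('1553126400')        # 2019-03-21 00:00:00 UTC
```

Unlike the deprecated-leaning `datetime.utcfromtimestamp` used in the loop, this returns a timezone-aware value, which avoids ambiguity if the frame is later localized.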
notebooks/archive/2.0.1-hef-ut2000-beacon-summary.ipynb
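The timestamp-handling loop in the beacon notebook above is easier to reason about as a small helper. Below is a hedged sketch: `parse_epoch` is an assumed name, and the fractional-seconds fallback uses `int(float(...))` rather than the notebook's `[:-2]` slice, which assumes the trailing characters are always a `.0` suffix.

```python
from datetime import datetime, timezone

def parse_epoch(value):
    """Convert a raw beacon index entry (epoch seconds as a string,
    possibly with a trailing '.0') to a UTC timestamp string."""
    try:
        ts = int(value)
    except ValueError:
        # tolerate fractional-second strings such as '1553040000.0'
        ts = int(float(value))
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime('%Y-%m-%d %H:%M:%S')
```

Using a timezone-aware `fromtimestamp` avoids the deprecated `utcfromtimestamp` while producing the same string.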
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Feature Importance # !pip install lightgbm BASE_DIR = '/mnt/ceph/storage/data-in-progress/data-research/web-search/TREC-21/lightgbm/all-50-features/' # TRAIN_APPROACHES = !ls $BASE_DIR |grep 'train-with-' # + def load_model(training_approach): from lightgbm import Booster print('Load model: ' + training_approach) return Booster(model_file=BASE_DIR + training_approach + '/LightGBM_model.txt') def plot_importance(model): from lightgbm import plot_importance return plot_importance(model, height=1,dpi=1024, figsize=(8,16)) #plot_importance(load_model(TRAIN_APPROACHES[1])) # - def importance(model_name): import json import pandas as pd model = load_model(model_name) feature_name = json.load(open('/mnt/ceph/storage/data-in-progress/data-research/web-search/TREC-21/lightgbm/all-features.jsonl')) feature_importance = model.feature_importance(importance_type='split') split_importance = {} for i in range(len(feature_importance)): split_importance[str(i)] = feature_importance[i] feature_importance = model.feature_importance(importance_type='gain') gain_importance = {} for i in range(len(feature_importance)): gain_importance[str(i)] = feature_importance[i] ret = [] for k in feature_name: ret += [{ 'model_name': model_name.replace('train-with-', ''), 'feature_name': feature_name[k], 'gain_importance': gain_importance[k], 'split_importance': split_importance[k], }] return pd.DataFrame(ret) import pandas as pd df_importance = pd.concat([importance(TRAIN_APPROACHES[0]), importance(TRAIN_APPROACHES[1]), importance(TRAIN_APPROACHES[2])]) df_importance = df_importance.sort_values('gain_importance', ascending=False) df_importance # + import seaborn as sb import matplotlib.pyplot as plt most_important_features = [i for i in df_importance[df_importance['model_name'] 
== '5000-trees'].feature_name[:10]] ax = sb.catplot( #data=df_importance, data=df_importance[df_importance['feature_name'].isin(most_important_features)], x='feature_name', y='gain_importance', hue='model_name', kind='bar', aspect=2, #aspect=10 ) plt.xticks(rotation=45) ax # - [i for i in df_importance[df_importance['model_name'] == '5000-trees'].feature_name]
src/trec-dl-21-ltr/src/main/ipynb/feature-importance.ipynb
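The `importance()` helper in the notebook above mixes model loading with table assembly. The assembly half can be isolated and tested without LightGBM; this is a sketch under the assumption that the gain/split importances arrive as index-aligned lists (`importance_table` is a hypothetical name, not from the notebook):

```python
import pandas as pd

def importance_table(feature_names, gain, split, model_name):
    """Assemble index-aligned gain/split importances into a tidy frame,
    sorted by gain importance, descending."""
    rows = [{'model_name': model_name,
             'feature_name': name,
             'gain_importance': g,
             'split_importance': s}
            for name, g, s in zip(feature_names, gain, split)]
    return pd.DataFrame(rows).sort_values('gain_importance', ascending=False)
```

In the notebook, the two lists would come from `model.feature_importance(importance_type='gain')` and `...='split'`, with names taken from the `all-features.jsonl` mapping.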
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import cv2 as cv import matplotlib.pyplot as plt from skimage.metrics import structural_similarity as ssim import urllib req = urllib.request.urlopen('https://docs.opencv.org/3.4/water_coins.jpg') arr = np.asarray(bytearray(req.read()), dtype=np.uint8) img = cv.imdecode(arr, cv.IMREAD_COLOR ) # 'Load it as grayscale' gray = cv.cvtColor(img,cv.COLOR_BGR2GRAY) ret, thresh = cv.threshold(gray,0,255,cv.THRESH_BINARY_INV+cv.THRESH_OTSU) plt.imshow(thresh, cmap="gray") plt.show() # + # noise removal kernel = np.ones((3,3),np.uint8) opening = cv.morphologyEx(thresh,cv.MORPH_OPEN,kernel, iterations = 2) # sure background area sure_bg = cv.dilate(opening,kernel,iterations=3) # Finding sure foreground area dist_transform = cv.distanceTransform(opening,cv.DIST_L2,5) ret, sure_fg = cv.threshold(dist_transform,0.7*dist_transform.max(),255,0) # Finding unknown region sure_fg = np.uint8(sure_fg) unknown = cv.subtract(sure_bg,sure_fg) fig, (ax1, ax2) = plt.subplots(1, 2,figsize=(15,15)) ax1.imshow(dist_transform, cmap="gray") ax1.set_title("dist_transform") ax2.imshow(sure_fg,cmap="gray") ax2.set_title("sure_fg") # + # Marker labelling ret, markers = cv.connectedComponents(sure_fg) # Add one to all labels so that sure background is not 0, but 1 markers = markers+1 # Now, mark the region of unknown with zero markers[unknown==255] = 0 markers = cv.watershed(img,markers) img[markers == -1] = [255,0,0] fig, (ax1, ax2) = plt.subplots(1, 2,figsize=(15,15)) ax1.imshow(markers) ax1.set_title("Marker") ax2.imshow(img) ax2.set_title("Result") plt.show() # -
opencv/Image Processing in OpenCV/Image Segmentation with Watershed Algorithm.ipynb
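The marker bookkeeping before `cv.watershed` in the notebook above (shift labels so sure background is 1, zero out the unknown band) is the step most often gotten wrong. A minimal NumPy-only sketch of just that step (`prepare_markers` is an assumed helper name):

```python
import numpy as np

def prepare_markers(labels, unknown_mask):
    """Shift connected-component labels by 1 so the sure background is 1
    (cv.watershed treats 0 as 'unknown'), then zero out the unknown band."""
    markers = labels.astype(np.int32) + 1
    markers[unknown_mask] = 0
    return markers
```

The cast to `int32` matches what `cv.watershed` expects for its marker image.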
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Getting started with the Data API # ### **Let's search & download some imagery of farmland near Stockton, CA. Here are the steps we'll follow:** # # 1. Define an Area of Interest (AOI) # 2. Save our AOI's coordinates to GeoJSON format # 3. Create a few search filters # 4. Search for imagery using those filters # 5. Activate an image for downloading # 6. Download an image # ### Requirements # - Python 2.7 or 3+ # - requests # - A [Planet API Key](https://www.planet.com/account/#/) # ## Define an Area of Interest # An **Area of Interest** (or *AOI*) is how we define the geographic "window" out of which we want to get data. # # For the Data API, this could be a simple bounding box with four corners, or a more complex shape, as long as the definition is in [GeoJSON](http://geojson.org/) format. # # For this example, let's just use a simple box. 
To make it easy, I'll use [geojson.io](http://geojson.io/) to quickly draw a shape & generate GeoJSON output for our box: # ![geojsonio.png](images/geojsonio.png) # We only need the "geometry" object for our Data API request: # Stockton, CA bounding box (created via geojson.io) geojson_geometry = { "type": "Polygon", "coordinates": [ [ [-121.59290313720705, 37.93444993515032], [-121.27017974853516, 37.93444993515032], [-121.27017974853516, 38.065932950547484], [-121.59290313720705, 38.065932950547484], [-121.59290313720705, 37.93444993515032] ] ] } # ## Create Filters # Now let's set up some **filters** to further constrain our Data API search: # + # get images that overlap with our AOI geometry_filter = { "type": "GeometryFilter", "field_name": "geometry", "config": geojson_geometry } # get images acquired within a date range date_range_filter = { "type": "DateRangeFilter", "field_name": "acquired", "config": { "gte": "2016-08-31T00:00:00.000Z", "lte": "2016-09-01T00:00:00.000Z" } } # only get images which have <50% cloud coverage cloud_cover_filter = { "type": "RangeFilter", "field_name": "cloud_cover", "config": { "lte": 0.5 } } # combine our geo, date, cloud filters combined_filter = { "type": "AndFilter", "config": [geometry_filter, date_range_filter, cloud_cover_filter] } # - # ## Searching: Items and Assets # Planet's products are categorized as **items** and **assets**: an item is a single picture taken by a satellite at a certain time. Items have multiple asset types including the image in different formats, along with supporting metadata files. # # For this demonstration, let's get a satellite image that is best suited for analytic applications; i.e., a 4-band image with spectral data for Red, Green, Blue and Near-infrared values. To get the image we want, we will specify an item type of `PSScene4Band`, and asset type `analytic`. 
# # You can learn more about item & asset types in Planet's Data API [here](https://planet.com/docs/reference/data-api/items-assets/). # # Now let's search for all the items that match our filters: # + import os import json import requests from requests.auth import HTTPBasicAuth # API Key stored as an env variable PLANET_API_KEY = os.getenv('PL_API_KEY') item_type = "PSScene4Band" # API request object search_request = { "interval": "day", "item_types": [item_type], "filter": combined_filter } # fire off the POST request search_result = \ requests.post( 'https://api.planet.com/data/v1/quick-search', auth=HTTPBasicAuth(PLANET_API_KEY, ''), json=search_request) print(json.dumps(search_result.json(), indent=1)) # - # Our search returns metadata for all of the images within our AOI that match our date range and cloud coverage filters. It looks like there are multiple images here; let's extract a list of just those image IDs: # extract image IDs only image_ids = [feature['id'] for feature in search_result.json()['features']] print(image_ids) # Since we just want a single image, and this is only a demonstration, for our purposes here we can arbitrarily select the first image in that list. Let's do that, and get the `asset` list available for that image: # + # For demo purposes, just grab the first image ID id0 = image_ids[0] id0_url = 'https://api.planet.com/data/v1/item-types/{}/items/{}/assets'.format(item_type, id0) # Returns JSON metadata for assets in this ID. Learn more: planet.com/docs/reference/data-api/items-assets/#asset result = \ requests.get( id0_url, auth=HTTPBasicAuth(PLANET_API_KEY, '') ) # List of asset types available for this particular satellite image print(result.json().keys()) # - # ## Activation and Downloading # # The Data API does not pre-generate assets, so they are not always immediately available to download. In order to download an asset, we first have to **activate** it. 
# # Remember, earlier we decided we wanted a color-corrected image best suited for *analytic* applications. We can check the status of the analytic asset we want to download like so: # # This is "inactive" if the "analytic" asset has not yet been activated; otherwise 'active' print(result.json()['analytic']['status']) # Let's now go ahead and **activate** that asset for download: # + # Parse out useful links links = result.json()[u"analytic"]["_links"] self_link = links["_self"] activation_link = links["activate"] # Request activation of the 'analytic' asset: activate_result = \ requests.get( activation_link, auth=HTTPBasicAuth(PLANET_API_KEY, '') ) # - # At this point, we wait for the activation status for the asset we are requesting to change from `inactive` to `active`. We can monitor this by polling the "status" of the asset: # + activation_status_result = \ requests.get( self_link, auth=HTTPBasicAuth(PLANET_API_KEY, '') ) print(activation_status_result.json()["status"]) # - # Once the asset has finished activating (status is "active"), we can download it. # # *Note: the download link on an active asset is temporary* # Image can be downloaded by making a GET with your Planet API key, from here: download_link = activation_status_result.json()["location"] print(download_link) # ![stockton_thumb.png](images/stockton_thumb.png) #
jupyter-notebooks/data-api-tutorials/search_and_download_quickstart.ipynb
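The tutorial above leaves the "wait for `inactive` to become `active`" step manual. A dependency-free sketch of that polling loop follows; `wait_until_active` and the injectable `fetch_status`/`sleep` parameters are assumptions made so the loop can be tested without hitting the API — in practice `fetch_status` would be a closure over the `GET` on the asset's `_self` link.

```python
import time

def wait_until_active(fetch_status, interval=5.0, timeout=300.0, sleep=time.sleep):
    """Poll fetch_status() until it reports 'active' or the timeout elapses.
    Returns True on activation, False on timeout."""
    waited = 0.0
    while waited < timeout:
        if fetch_status() == 'active':
            return True
        sleep(interval)
        waited += interval
    return False
```

With the notebook's variables, `fetch_status` could be `lambda: requests.get(self_link, auth=HTTPBasicAuth(PLANET_API_KEY, '')).json()['status']`.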
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # # Processing data example using **Rasterio**, **geopandas** and **earthpy** libraries # # * This notebook was developed by the Earth Lab of Colorado University for their Earth Analytics Python course. We have reproduced most of it here, with small modifications for the purpose of analysing new data. # * This notebook opens the data downloaded using the **Data Downloader notebook** # * If you are just opening the notebook, please make sure your bucket is mounted and the imagery can be accessed. # * Remember to change the quantity of bands and combinations if you are working with imagery other than Sentinel 2. # + import os import earthpy as et import earthpy.plot as ep import earthpy.spatial as es import geopandas as gpd import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np import rasterio as rio from IPython.display import clear_output clear_output(wait=True) # - ## Here we get the data from mount, which hosts the bucket where we transferred our data with rio.open("mount/Caracas_All_Bands.tif") as src: naip_csf = src.read() naip_csf_meta = src.meta naip_csf_meta naip_csf.shape fig, ax = plt.subplots() ax.imshow(naip_csf[0], cmap="Greys") ax.set( title="NAIP RGB Imagery - Band 1-Red\nCold Springs Fire Scar", xticks=[], yticks=[] ) plt.show() ep.plot_bands(naip_csf[1], figsize=(24, 12)) # + titles = [ "Band 0 – Coastal aerosol", "Band 1 – Blue", "Band 2 – Green", "Band 3 – Red", "Band 4 – Vegetation red edge – 704.1nm", "Band 5 – Vegetation red edge – 740.5nm", "Band 6 – Vegetation red edge – 782.8nm", "Band 7 – NIR – 832.8nm", "Band 8 – Narrow NIR – 864.7nm", "Band 9 – Water vapour", "Band 10 – SWIR – Cirrus", "Band 11 – SWIR – 1613.7nm", "Band 12 – SWIR – 2202.4nm", ] # plot all bands using the earthpy function 
ep.plot_bands(naip_csf, title=titles, figsize=(12, 10), cols=3) # - ep.plot_rgb(naip_csf, title="RGB Image", rgb=[3, 2, 1], figsize=(24, 12)) ep.plot_rgb(naip_csf, title="False Color Infrared", rgb=[6, 3, 2], figsize=(24, 12)) ep.plot_rgb(naip_csf, title="False Color Urban", rgb=[12, 11, 4], figsize=(24, 12)) ep.plot_rgb(naip_csf, title="Atmospheric Penetration", rgb=[7, 6, 5], figsize=(24, 12)) ep.plot_rgb(naip_csf, title="Healthy Vegetation", rgb=[5, 6, 2], figsize=(24, 12)) ep.plot_rgb(naip_csf, title="Land Water", rgb=[5, 6, 4], figsize=(24, 12)) ep.plot_rgb(naip_csf, title="Short Wave Infrared", rgb=[7, 5, 4], figsize=(24, 12)) ep.plot_rgb(naip_csf, title="Vegetation Analysis", rgb=[6, 5, 4], figsize=(24, 12)) ep.plot_rgb( naip_csf, title="Natural with Atmospheric Removal", rgb=[7, 5, 3], figsize=(24, 12) ) sentinel2_ndvi = (naip_csf[7] - naip_csf[3]) / (naip_csf[7] + naip_csf[3]) # Plot NDVI data fig, ax = plt.subplots(figsize=(24, 12)) ndvi = ax.imshow(sentinel2_ndvi, cmap="jet", vmin=0, vmax=1) fig.colorbar(ndvi, fraction=0.05) ax.set(title="Sentinel 2 Derived NDVI\n - Caracas") ax.set_axis_off() plt.show()
Sentinel2_processing-2.ipynb
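The NDVI expression in the notebook above divides by `NIR + Red`, which is zero over nodata pixels and produces `inf`/`nan` warnings. A guarded NumPy sketch is below; the choice to map zero-denominator pixels to 0 is an assumption, not something the notebook specifies.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), with zero-sum pixels set to 0."""
    nir = nir.astype('float64')
    red = red.astype('float64')
    denom = nir + red
    # np.divide with where= skips the zero-denominator pixels entirely
    return np.divide(nir - red, denom, out=np.zeros_like(denom), where=denom != 0)
```

In the notebook this would be `ndvi(naip_csf[7], naip_csf[3])`, bands 7 (NIR) and 3 (Red) for Sentinel-2.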
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # *A lot of the materials in today's workshop (including text, code, and figures) were adapted from the "SciPy 2017 Scikit-learn Tutorial" by <NAME> and <NAME>. The contents of their tutorial are licensed under Creative Commons CC0 1.0 Universal License as work dedicated to the public domain, and can be found at https://github.com/amueller/scipy-2017-sklearn.* # # # Unsupervised Learning # Definition from Wikipedia: Unsupervised learning is a type of machine learning that looks for previously undetected patterns in a data set with no pre-existing labels and with a minimum of human supervision. # %matplotlib inline import matplotlib.pyplot as plt import numpy as np # ## Clustering # Clustering is the task of gathering samples into groups of similar samples according to some predefined similarity or distance (dissimilarity) # measure, such as the Euclidean distance. # # <img width="60%" src='figures/clustering.png'/> # In this section we will explore a basic clustering task on some synthetic and real-world datasets. # # Here are some common applications of clustering algorithms: # - grouping related web news (e.g. Google News) and web search results # - grouping related stock quotes for investment portfolio management # - building customer profiles for market analysis # # Let's start by creating a simple, 2-dimensional, synthetic dataset: # + from sklearn.datasets import make_blobs X, y = make_blobs(random_state=42) X.shape # - plt.scatter(X[:, 0], X[:, 1]); # In the scatter plot above, we can see three separate groups of data points and we would like to recover them using clustering -- think of "discovering" the class labels that we already take for granted in a classification task. 
# # Even if the groups are obvious in the data, it is hard to find them when the data lives in a high-dimensional space, which we can't visualize in a single histogram or scatterplot. # Now we will use one of the simplest clustering algorithms, K-means. # This is an iterative algorithm which searches for three cluster # centers such that the distance from each point to its cluster is # minimized. The standard implementation of K-means uses the Euclidean distance, which is why we want to make sure that all our variables are measured on the same scale if we are working with real-world datasets. In the previous notebook, we talked about one technique to achieve this, namely, standardization. # # <img width="60%" src='figures/kmean_iteration.gif'/> # # <br/> # <div class="alert alert-success"> # <b>Question</b>: # <ul> # <li> # What would you expect the output to look like? # </li> # </ul> # </div> # + from sklearn.cluster import KMeans kmeans = KMeans(n_clusters=3, random_state=42) # - # We can get the cluster labels either by calling fit and then accessing the # ``labels_`` attribute of the K means estimator, or by calling ``fit_predict``. # Either way, the result contains the ID of the cluster that each point is assigned to. labels = kmeans.fit_predict(X) labels np.all(y == labels) # Let's visualize the assignments that have been found plt.scatter(X[:, 0], X[:, 1], c=labels); # Compared to the true labels: # Here, we are probably satisfied with the clustering results. But in general we might want to have a more quantitative evaluation. How about comparing our cluster labels with the ground truth we got when generating the blobs? # + from sklearn.metrics import confusion_matrix, accuracy_score print('Accuracy score:', accuracy_score(y, labels)) print(confusion_matrix(y, labels)) # - np.mean(y == labels) # Even though we recovered the partitioning of the data into clusters perfectly, the cluster IDs we assigned were arbitrary, # and we can not hope to recover them. 
Therefore, we must use a different scoring metric, such as ``adjusted_rand_score``, which is invariant to permutations of the labels: # + from sklearn.metrics import adjusted_rand_score adjusted_rand_score(y, labels) # - # One of the "short-comings" of K-means is that we have to specify the number of clusters, which we often don't know *a priori*. For example, let's have a look at what happens if we set the number of clusters to 2 in our synthetic 3-blob dataset: kmeans = KMeans(n_clusters=2, random_state=42) labels = kmeans.fit_predict(X) plt.scatter(X[:, 0], X[:, 1], c=labels); kmeans.cluster_centers_ # #### The Elbow Method # # The Elbow method is a "rule-of-thumb" approach to finding the optimal number of clusters. Here, we look at the cluster dispersion for different values of k: # + distortions = [] for i in range(1, 11): km = KMeans(n_clusters=i, random_state=0) km.fit(X) distortions.append(km.inertia_) plt.plot(range(1, 11), distortions, marker='o') plt.xlabel('Number of clusters') plt.ylabel('Distortion') plt.show() # - # Then, we pick the value that resembles the "pit of an elbow." As we can see, this would be k=3 in this case, which makes sense given our visual inspection of the dataset previously. # **Clustering comes with assumptions**: A clustering algorithm finds clusters by making assumptions about which samples should be grouped together. Each algorithm makes different assumptions and the quality and interpretability of your results will depend on whether the assumptions are satisfied for your goal. For K-means clustering, the model is that all clusters have equal, spherical variance. # # **In general, there is no guarantee that structure found by a clustering algorithm has anything to do with what you were interested in**. 
# # We can easily create a dataset that has non-isotropic clusters, on which kmeans will fail: # + plt.figure(figsize=(12, 12)) n_samples = 1500 random_state = 170 X, y = make_blobs(n_samples=n_samples, random_state=random_state) # Incorrect number of clusters y_pred = KMeans(n_clusters=2, random_state=random_state).fit_predict(X) plt.subplot(221) plt.scatter(X[:, 0], X[:, 1], c=y_pred) plt.title("Incorrect Number of Blobs") # Anisotropically distributed data transformation = [[0.60834549, -0.63667341], [-0.40887718, 0.85253229]] X_aniso = np.dot(X, transformation) y_pred = KMeans(n_clusters=3, random_state=random_state).fit_predict(X_aniso) plt.subplot(222) plt.scatter(X_aniso[:, 0], X_aniso[:, 1], c=y_pred) plt.title("Anisotropically Distributed Blobs") # - # ## Transformation and Dimensionality Reduction # Many instances of unsupervised learning, such as dimensionality reduction and manifold learning, find a new representation of the input data without any additional input. (In contrast to supervised learning, unsupervised algorithms don't require or consider target variables like in the previous classification and regression examples). # # <img src="figures/unsupervised_workflow.svg" width="100%"> # #### Why Dimensionality Reduction? # # There are a number of reasons! Here are a few: # # 1. Reducing the number of dimensions is a way we can eliminate noise variables from our data allowing for better models; we'll see an example of this with the Wisconsin Cancer data. # # 2. Your data is too big. This is common with image classification problems like the MNIST data. # # 3. Humans can only see in 2 or 3 dimensions, and interesting data sets often have many more dimensions than that. # # # I've hopefully convinced you of the importance of dimensionality reduction. # ### Principal Component Analysis # ============================ # An unsupervised transformation that is somewhat more interesting is Principal Component Analysis (PCA). 
# It is a technique to reduce the dimensionality of the data, by creating a linear projection. # That is, we find new features to represent the data that are a linear combination of the old data (i.e. we rotate it). Thus, we can think of PCA as a projection of our data onto a *new* feature space. # # The way PCA finds these new directions is by looking for the directions of maximum variance. # Usually only a few components that explain most of the variance in the data are kept. Here, the premise is to reduce the size (dimensionality) of a dataset while capturing most of its information. There are many reasons why dimensionality reduction can be useful: It can reduce the computational cost when running learning algorithms, decrease the storage space, and may help with the so-called "curse of dimensionality," which we will discuss in greater detail later. # # To illustrate what a rotation might look like, we first show it on two-dimensional data and keep both principal components. Here is an illustration: from figures import plot_pca plot_pca.plot_pca_illustration() # Now let's go through all the steps in more detail: # We create a Gaussian blob that is rotated: rnd = np.random.RandomState(5) X_ = rnd.normal(size=(300, 2)) X_blob = np.dot(X_, rnd.normal(size=(2, 2))) + rnd.normal(size=2) y = X_[:, 0] > 0 plt.scatter(X_blob[:, 0], X_blob[:, 1], c=y, linewidths=0, s=30) plt.xlabel("feature 1") plt.ylabel("feature 2"); # As always, we instantiate our PCA model. By default all directions are kept. from sklearn.decomposition import PCA pca = PCA() # Then we fit the PCA model with our data. As PCA is an unsupervised algorithm, there is no output ``y``. 
pca.fit(X_blob) # Then we can transform the data, projected on the principal components: # + X_pca = pca.transform(X_blob) plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, linewidths=0, s=30) plt.xlabel("first principal component") plt.ylabel("second principal component"); # - pca = PCA(n_components=1).fit(X_blob) X_blob.shape X_pca = pca.transform(X_blob) X_pca.shape plt.scatter(X_pca[:, 0],np.zeros(X_pca.shape[0]), c=y, linewidths=0, s=30) plt.xlabel("first principal component") from sklearn.datasets import load_digits digits = load_digits() digits.data.shape digits.keys() # data consists of 8×8 pixel images, meaning that they are 64-dimensional. # + # set up the figure fig = plt.figure(figsize=(6, 6)) # figure size in inches fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05) # plot the digits: each image is 8x8 pixels for i in range(64): ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[]) ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest') # label the image with the target value ax.text(0, 7, str(digits.target[i])) # - # To gain some intuition into the relationships between these points, we can use PCA to project them to a more manageable number of dimensions, say two: pca = PCA(2) # project from 64 to 2 dimensions projected = pca.fit_transform(digits.data) print(digits.data.shape) print(projected.shape) # We can now plot the first two principal components of each point to learn about the data: plt.scatter(projected[:, 0], projected[:, 1], c=digits.target, edgecolor='none', alpha=0.5, cmap=plt.cm.get_cmap("tab10",10)) plt.xlabel('component 1') plt.ylabel('component 2') plt.colorbar(); # + colors = ["#476A2A","#7851B8",'#BD3430','#4A2D4E','#875525', '#A83683','#4E655E','#853541','#3A3120','#535D8E', 'black'] km = KMeans(n_clusters=10, random_state=0) km = km.fit(projected) clusters_km = km.predict(projected) plt.figure(figsize=(10, 10)) plt.scatter(projected[:,0], projected[:,1], s=50, color="w") for i in 
range(len(digits.target)): plt.text(projected[i,0], projected[i,1], str(digits.target[i]), color = colors[clusters_km[i]], fontdict={'weight':'bold', 'size':9}) plt.xlabel("first PC") plt.ylabel("second PC") plt.title("PCA KMeans clustering MNIST") # - # ## Density-Based Spatial Clustering of Applications with Noise (DBSCAN) # # This technique focuses on the density of the points in the feature space. # # The main concept behind DBSCAN is that dense regions of data are the result of clusters, and that sparse regions of data are the result of noise. # # The DBSCAN algorithm goes through and labels each point as a core point, border point, or noise. Noise is thrown out and, in `sklearn`, labeled as $-1$. It then joins core points that are within each other's neighborhoods; these collections of core points are the foundations of our clusters. Finally, border points are assigned to the cluster to which they are closest. from sklearn.cluster import DBSCAN from sklearn.preprocessing import StandardScaler # + centers = [[1, 1], [-1, -1], [1, -1]] X, labels_true = make_blobs(n_samples=750, centers=centers, cluster_std=0.4, random_state=0) X = StandardScaler().fit_transform(X) # - plt.scatter(X[:, 0], X[:, 1]); db = DBSCAN(eps=1, min_samples=10).fit(X) labels = db.labels_ plt.figure(figsize=(6, 6)) plt.scatter(X[:, 0], X[:, 1], c=labels) plt.title("Unevenly Sized Blobs")
workshop9/workshop9_unsupervised_learning.ipynb
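The workshop above notes that K-means cluster IDs are arbitrary permutations, which is why plain accuracy against `y` fails. Besides `adjusted_rand_score`, one common fix is to majority-map each cluster ID onto the true labels before comparing. A sketch of that idea (`align_cluster_labels` is an assumed helper, not part of the workshop):

```python
import numpy as np
from collections import Counter

def align_cluster_labels(true_labels, cluster_labels):
    """Map each arbitrary cluster ID to the majority true label among its
    members, so accuracy-style comparisons survive permuted IDs."""
    mapping = {}
    for c in np.unique(cluster_labels):
        members = true_labels[cluster_labels == c]
        mapping[c] = Counter(members).most_common(1)[0][0]
    return np.array([mapping[c] for c in cluster_labels])
```

Note this mapping is only a diagnostic convenience; `adjusted_rand_score` remains the principled permutation-invariant metric.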
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import requests from bs4 import BeautifulSoup import pandas as pd def append_df_to_excel(filename, df, sheet_name='Sheet1', startrow=None, truncate_sheet=False, **to_excel_kwargs): from openpyxl import load_workbook # ignore [engine] parameter if it was passed if 'engine' in to_excel_kwargs: to_excel_kwargs.pop('engine') writer = pd.ExcelWriter(filename, engine='openpyxl') try: # try to open an existing workbook writer.book = load_workbook(filename) # get the last row in the existing Excel sheet # if it was not specified explicitly if startrow is None and sheet_name in writer.book.sheetnames: startrow = writer.book[sheet_name].max_row # truncate sheet if truncate_sheet and sheet_name in writer.book.sheetnames: # index of [sheet_name] sheet idx = writer.book.sheetnames.index(sheet_name) # remove [sheet_name] writer.book.remove(writer.book.worksheets[idx]) # create an empty sheet [sheet_name] using old index writer.book.create_sheet(sheet_name, idx) # copy existing sheets writer.sheets = {ws.title:ws for ws in writer.book.worksheets} except FileNotFoundError: # file does not exist yet, we will create it pass if startrow is None: startrow = 0 # write out the new sheet df.to_excel(writer, sheet_name, startrow=startrow, **to_excel_kwargs) # save the workbook writer.save() reviewlist = [] def get_soup(url): r = requests.get(url) soup = BeautifulSoup(r.text, 'html.parser') return soup def get_reviews(soup): reviews = soup.find_all('div', {'data-hook': 'review'}) try: for item in reviews: review = { 'Product': soup.title.text.replace('Amazon.in:Customer reviews:', '').strip(), 'Customer Name': item.find('span',class_='a-profile-name').text.strip(), 'Review Title': item.find('a', {'data-hook': 'review-title'}).text.strip(), 'Rating': float(item.find('i', 
{'data-hook': 'review-star-rating'}).text.replace('out of 5 stars', '').strip()), 'Reviews': item.find('span', {'data-hook': 'review-body'}).text.strip(), } reviewlist.append(review) except: pass for x in range(1,30): pg=str(x) url_main="https://www.amazon.in/Redmi-Prime-Storage-Display-Camera/product-reviews/B086984LJ4" url_split1="/ref=cm_cr_arp_d_paging_btm_next_" url_split2="?ie=UTF8&reviewerType=all_reviews&pageNumber=" url_final = url_main+url_split1+pg+url_split2+pg print(url_final) soup = get_soup(url_final) print(f'Getting page: {x}') get_reviews(soup) print(len(reviewlist)) if not soup.find('li', {'class': 'a-disabled a-last'}): pass else: break df = pd.DataFrame(reviewlist) append_df_to_excel("final.xlsx", df, header=True, index=False) print('End.') # - # # Analysing import pandas as pd import numpy as np df=pd.read_excel(r'final.xlsx') df.head() x=[] x=df.Reviews x # + import re REPLACE_NO_SPACE = re.compile("[.;:!\'?,\"()\[\]]") REPLACE_WITH_SPACE = re.compile("(<br\s*/><br\s*/>)|(\-)|(\/)") def preprocess_reviews(reviews): reviews = [REPLACE_NO_SPACE.sub("", line.lower()) for line in reviews] reviews = [REPLACE_WITH_SPACE.sub(" ", line) for line in reviews] return reviews reviews_train_clean = preprocess_reviews(x) # - df['Cleaned Reviews']=reviews_train_clean df['Cleaned Reviews'] import string import nltk from nltk.corpus import stopwords from nltk import PorterStemmer STOPWORDS=stopwords.words("english") def deEmojify(inputString): return inputString.encode('ascii', 'ignore').decode('ascii') df.sample(5) import matplotlib.pyplot as plt import seaborn as sns from wordcloud import WordCloud wordcloud = WordCloud(height=1000, width=1000) wordcloud = wordcloud.generate(' '.join(df['Cleaned Reviews'].tolist())) plt.imshow(wordcloud) plt.title("Most common words in the reviews") plt.axis('off') plt.show() wordcloud = wordcloud.to_file('static/wordcloud.png') from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer analyser = 
SentimentIntensityAnalyzer() def sentiment_analyzer_scores(sentence): score = analyser.polarity_scores(sentence) return score def compound_score(text): comp=sentiment_analyzer_scores(text) return comp['compound'] df['sentiment_score']=df['Cleaned Reviews'].apply(lambda x:compound_score(x)) df.sample(4) def sentiment_category(score): if score >= 0.05: return "positive" elif score <= -0.05: return "negative" else: return "neutral" df['review_category']=df['sentiment_score'].apply(lambda x:sentiment_category(x)) df.sample(10) sns.countplot(df['review_category']).set_title("Distribution of Reviews Category") plt.savefig('static/count_plot.png') positive_reviews=df.loc[df['review_category']=='positive','Cleaned Reviews'].tolist() # extracting all positive reviews and converting to a list positive_reviews[0:4] negative_reviews=df.loc[df['review_category']=='negative','Cleaned Reviews'].tolist() # extracting all negative reviews and converting to a list negative_reviews[0:5] # # Positive Words in Positive Reviews from wordcloud import WordCloud wordcloud = WordCloud(height=2000, width=2000, background_color='black') wordcloud = wordcloud.generate(' '.join(df.loc[df['review_category']=='positive','Cleaned Reviews'].tolist())) plt.imshow(wordcloud) plt.title("Most common words in positive customer comments") plt.axis('off') plt.show() # # Negative Words in Negative Reviews from wordcloud import WordCloud wordcloud = WordCloud(height=2000, width=2000, background_color='black') wordcloud = wordcloud.generate(' '.join(df.loc[df['review_category']=='negative','Cleaned Reviews'].tolist())) plt.imshow(wordcloud) plt.title("Most common words in negative customer comments") plt.axis('off') plt.show() from collections import Counter def getMostCommon(reviews_list,topn=20): reviews=" ".join(reviews_list) tokenised_reviews=reviews.split(" ") freq_counter=Counter(tokenised_reviews) return freq_counter.most_common(topn) def plotMostCommonWords(reviews_list,topn=20,title="Common Review 
Words",color="blue",axis=None): #default number of words is given as 20 top_words=getMostCommon(reviews_list,topn=topn) data=pd.DataFrame() data['words']=[val[0] for val in top_words] data['freq']=[val[1] for val in top_words] if axis is not None: sns.barplot(y='words',x='freq',data=data,color=color,ax=axis).set_title(title+" top "+str(topn)) else: sns.barplot(y='words',x='freq',data=data,color=color).set_title(title+" top "+str(topn)) def generateNGram(text,n): tokens=text.split(" ") ngrams = zip(*[tokens[i:] for i in range(n)]) return ["_".join(ngram) for ngram in ngrams] # + from matplotlib import rcParams rcParams['figure.figsize'] = 14,10 ## Sets the height and width of the image fig,ax=plt.subplots(1,2) fig.subplots_adjust(wspace=1.0) #Adjusts the space between the two plots plotMostCommonWords(positive_reviews,60,"Positive Review Unigrams",axis=ax[0]) plotMostCommonWords(negative_reviews,60,"Negative Review Unigrams",color="red",axis=ax[1]) plt.savefig('static/unigram.png') # -
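The `generateNGram` helper above is defined but never exercised in this excerpt. A minimal sketch of how it could feed a bigram frequency count is below; the helper is restated so the snippet is self-contained, and the two toy reviews are made-up stand-ins for the real `positive_reviews` list:

```python
from collections import Counter

def generateNGram(text, n):
    # restatement of the notebook's helper: join each run of n tokens with "_"
    tokens = text.split(" ")
    ngrams = zip(*[tokens[i:] for i in range(n)])
    return ["_".join(ngram) for ngram in ngrams]

# toy stand-in for positive_reviews
reviews = ["battery life is great", "great phone great price"]

# count bigrams across all reviews
bigram_counts = Counter(bg for r in reviews for bg in generateNGram(r, 2))
print(bigram_counts.most_common(5))
```

The resulting `Counter` could then be fed into the same `sns.barplot` pattern used in `plotMostCommonWords` to draw a bigram chart.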
scrap_processing.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # <img style="float: center;" src="images/CI_horizontal.png" width="600"> # <center> # <span style="font-size: 1.5em;"> # <a href='https://www.coleridgeinitiative.org'>Website</a> # </span> # </center> # # Ghani, Rayid, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. "ADA-KCMO-2018." Coleridge Initiative GitHub Repositories. 2018. https://github.com/Coleridge-Initiative/ada-kcmo-2018. [![DOI](https://zenodo.org/badge/119078858.svg)](https://zenodo.org/badge/latestdoi/119078858) # # Databases # --- # ## Table of Contents # # - [Introduction](#Introduction) # - [Learning objectives](#Learning-objectives) # - [Methods](#Methods) # # - [Connection information](#Connection-information) # - [GUI clients](#GUI-clients) # # - [GUI - pgAdmin](#GUI---pgAdmin) # # - [Python database clients](#Python-database-clients) # # - [Python - `psycopg2`](#Python---psycopg2) # - [Python - `SQLAlchemy`](#Python---SQLAlchemy) # - [Python - `pandas`](#Python---pandas) # ## Introduction # # - Back to [Table of Contents](#Table-of-Contents) # # Regardless of how you connect, most interactions with relational database management systems (RDBMS) are carried out via Structured Query Language (SQL). Many programming languages are more similar than different. # SQL is genuinely different conceptually and syntactically. # # To make learning SQL easier, in this notebook we list a number of database clients you can use to connect to a PostgreSQL database and run SQL queries, so you can try them out and find one you prefer to use (we recommend pgAdmin if you are new to databases). # # We will follow the following sequence: # 1. Connection Information: We'll outline the information needed to connect to our class database server. # 2. 
Then, we'll briefly look at how to use a number of different SQL clients, and the pros and cons of each. # 3. Finally, we'll each pick one to connect and test before we move on to focusing on SQL. # ### Learning objectives # # - Back to [Table of Contents](#Table-of-Contents) # # This notebook documents different database clients you can use to run SQL queries against the PostgreSQL database used for this class. PostgreSQL is an open source relational database management system (RDBMS) developed by a worldwide team of volunteers. # # **Learning objectives:** # # - Understand options for connecting to a PostgreSQL database and running SQL, including pros and cons of each. # - Pick an SQL interface to use while learning SQL. # ### Methods # # - Back to [Table of Contents](#Table-of-Contents) # # We cover the following database clients in this notebook: # # 1. Graphical User Interface (GUI) application 'pgAdmin' # 2. Using SQL in Python with: # # - Direct database connection - `psycopg2` # - `SQLAlchemy` # - `pandas` # # You can use any of these clients to run SQL in the database. Some are easier to use or better suited to certain situations than others. Each client's section covers its pros and cons. # # If you are here to learn SQL, once you've looked over your options, pick one and proceed to the notebook "Intro to SQL" to learn more about the SQL language. # ## Connection information # # - Back to [Table of Contents](#Table-of-Contents) # # All of the programs listed below that connect to and query a database need to be told how to connect to the database one wants to query. There are a set of common connection properties used to specify how to connect to a broad range of database servers: # # - **_host name_**: the network name of the database server one is connecting to, if the database is not on your local computer.
# - **_host port_**: the network port on which the database server is listening, if the database is not on your local computer. Most database server types have a default port that is assumed if you don't specify a port (5432 for PostgreSQL, for example, or 3306 for MySQL). # - **_username_**: for databases that authenticate a connection based on user credentials, the username you want to use to connect. # - **_password_**: for databases that authenticate a connection based on user credentials, the password you want to use to authenticate your username. # - **_database name_**: The name of the database to which you want to connect. # # Not all databases will need all of these parameters to be specified to successfully connect. For our class database, for example, we only need to specify: # # - **_host name_**: 10.10.2.10 # - **_database name_**: appliedda # # The class database server listens on the default PostgreSQL port (5432), so no port is needed, and it authenticates the user based on whether that user has a Linux user on the database server itself, rather than requiring a username and password (though access to schemas and tables inside is controlled by a more stringent set of per-user access privileges stored within the database). # ## GUI clients # # - Back to [Table of Contents](#Table-of-Contents) # # The first database clients we will cover are Graphical User Interface (GUI) clients. These clients are designed to be used with mouse and keyboard, and to simplify submitting queries to a database and interacting with the results. # # We will briefly cover connecting to a database and running a query in the GUI database client **_pgAdmin_**, a PostgreSQL-specific database client. # ### GUI - pgAdmin # # - Back to [Table of Contents](#Table-of-Contents) # # pgAdmin is a PostgreSQL client written and supported by the PostgreSQL community. It isn't the most beautiful program, but it is full-featured and available on many platforms.
It doesn't let you connect to any databases other than PostgreSQL. # # **1. Running pgAdmin** Double-click the "`pgAdmin III`" icon on the Desktop in the ADRF workspace. # # <img src="images/pgAdmin-open.png" /> # # **2. Creating a connection to the class database** In pgAdmin: # # - Go to the File menu, then click on the "Add Connection to Server" option on the top-left. # - In the "New Server Registration" window that opens, set: # # - the "Name" to "ADRF-appliedda" # - the "Host" to "10.10.2.10" # - the "Username" field to your username (it won't let you leave it empty) # - and uncheck the "Store password" checkbox # # <img src="images/pgAdmin-new_connection.png" /> # # **3. Connecting to the class database 'appliedda'** # # - Double-click on the "ADRF-appliedda" link in the pane on the left, under "Server Groups" --> "Servers (1)". # - If prompted for a password, just click "OK". You do not have to type any password. # - On successful connection, you should see items under "ADRF-appliedda", including "Databases". Click on the "+" sign to the left of "Databases". # - Double-click on "appliedda" (it will probably have a red X on its icon, denoting that it is not currently connected). # # <img src="images/pgAdmin-connected.png" /> # # # **4. Running a Query** # Once you are connected to the "appliedda" database, you can start running queries using this GUI. Click on the button that looks like a magnifying glass with "SQL" inside it, at the top center of the window. Enter your SQL query in the "SQL Editor" in the top left. # # Let us count the number of rows in the dataset kcmo_lehd.mo_qcew_employers: # # SELECT COUNT(*) # FROM kcmo_lehd.mo_qcew_employers; # # Now, press the green triangle "play" button to run the query. In the data output tab (bottom left) you will see the results of this query.
# # <img src="images/pgAdmin-run_query.png" /> # # Other queries you can run: # # - Counting the number of unique employers in the data: # # SELECT COUNT(distinct equi_ein) # FROM kcmo_lehd.mo_qcew_employers; # # - Counting the number of records for each NAICS industry code: # # SELECT equi_naics, COUNT(*) AS cnt # FROM kcmo_lehd.mo_qcew_employers # GROUP BY equi_naics; # ## Python database clients # # - Back to [Table of Contents](#Table-of-Contents) # # Apart from client GUIs, we can also access PostgreSQL using programming languages like Python. We do this using libraries of code that extend core Python, named 'packages'. # # The commands work similarly: you can execute almost any SQL in a programming language that you can in a manual client, and the results are returned in a format that lets you interact with them after the SQL statements finish. # # _(Python lets you interact with databases using SQL just like you would in any SQL GUI or terminal. Python code can do SELECTs, CREATEs, INSERTs, UPDATEs, DELETEs, and any other SQL)_ # # Below are three ways one can interact with PostgreSQL using Python: # # 1. **_`psycopg2`_** - The Python `psycopg2` package implements Python's DBAPI, a mostly-standardized API for database interaction, to allow for querying PostgreSQL. It is the closest you can get in Python to a direct database connection. # 2. **_`SQLAlchemy`_** - `SQLAlchemy` can be used to map Python objects to database tables, but it also contains a wrapper around DBAPI that allows query code to be more consistently reused across databases. # 3. **_`pandas`_** - `pandas` is an analysis package that uses a `SQLAlchemy` engine to read the results of SQL queries directly into `pandas` DataFrames, allowing you to analyze the resulting data. # ### Python - `psycopg2` # # # - Back to [Table of Contents](#Table-of-Contents) # # The `psycopg2` package is the most popular PostgreSQL adapter for the Python programming language.
This Python package implements the standard DBAPI Python interface for interacting with a relational database. This is the closest you can get to connecting directly to the database in Python - there aren't any objects creating in-memory tables or layers of abstraction between you and the data. Your Python sends SQL directly to the database and then deals row-by-row with the results. # # __Pros:__ # - This is often the best way to use Python to manage a database (ALTER, CREATE, INSERT, UPDATE, etc.). Fancier packages sometimes don't deal well with more complicated management SQL statements. # - It also is often what you have to resort to for genuinely big data, since the different ways you can fetch rows from the results of a query give you fine-grained control over exactly how much data is in memory at a given time. # - If you have a particularly vexing problem with a more feature-rich package, this is going to be your bare-bones troubleshooting sanity check to see if the problem is with that package rather than your SQL or your database. # # __Cons:__ # - All this control and bare-bones simplicity means that some things that are pretty easy in pandas can take a lot more code, time, and learning at this lower level. Pandas manages a lot of the details of connecting to and interacting with a database for you. # # __Mixed:__ # - In theory, when you write DBAPI-compliant code, that code can be used to interact with any database that has a DBAPI-compliant driver package. In practice, DBAPI drivers are about 95% compatible between databases, and SQL for some tasks can differ from database to database, so you end up with code that can be ported between databases with a few tweaks and modifications, and then needing to test it all to make sure your SQL works. # + # importing datetime and psycopg2 package import datetime import psycopg2 import psycopg2.extras print( "psycopg2 imports completed at " + str( datetime.datetime.now() ) ) # + # Connect... 
pgsql_connection = None # set up connection properties db_host = "10.10.2.10" db_database = "appliedda" # and connect. pgsql_connection = psycopg2.connect( host = db_host, database = db_database ) print( "psycopg2 connection to host: " + db_host + ", database: " + db_database + " completed at " + str( datetime.datetime.now() ) ) # + # ...and create cursor. pgsql_cursor = None # results come back as a list of columns: pgsql_cursor = pgsql_connection.cursor() # results come back as a dictionary where values are mapped to column names (preferred) pgsql_cursor = pgsql_connection.cursor( cursor_factory = psycopg2.extras.DictCursor ) print( "psycopg2 cursor created at " + str( datetime.datetime.now() ) ) # + # Single row query sql_string = "" result_row = None # SQL sql_string = "SELECT COUNT( * ) AS row_count FROM kcmo_lehd.mo_qcew_employers;" # execute it. pgsql_cursor.execute( sql_string ) # fetch first (and only) row, then output the count first_row = pgsql_cursor.fetchone() print( "row_count = " + str( first_row[ "row_count" ] ) ) # + # Multiple row query sql_string = "" result_list = None result_row = None row_counter = -1 # SQL sql_string = "SELECT * FROM kcmo_lehd.mo_qcew_employers LIMIT 1000;" # execute it. pgsql_cursor.execute( sql_string ) # ==> fetch rows to loop over: # all rows. #result_list = pgsql_cursor.fetchall() # first 10 rows. result_list = pgsql_cursor.fetchmany( size = 10 ) # loop result_counter = 0 for result_row in result_list: result_counter += 1 print( "- row " + str( result_counter ) + ": " + str( result_row ) ) #-- END loop over 10 rows --# # ==> loop over the rest one at a time. result_counter = 0 result_row = pgsql_cursor.fetchone() while result_row is not None: # increment counter result_counter += 1 # get next row result_row = pgsql_cursor.fetchone() #-- END loop over rows, one at a time. 
--# print( "fetchone() row_count = " + str( result_counter ) ) # + # Close Connection and cursor pgsql_cursor.close() pgsql_connection.close() print( "psycopg2 cursor and connection closed at " + str( datetime.datetime.now() ) ) # - # ### Python - `SQLAlchemy` # # # - Back to [Table of Contents](#Table-of-Contents) # # `SQLAlchemy` is a higher-level Python database library that, among many other things, contains a wrapper around DBAPI that makes a subset of the DBAPI API work the same for any database `SQLAlchemy` supports (though it doesn't work exactly like DBAPI... nothing's perfect). You can use this wrapper to write Python code that can be re-used with different databases (though you'll have to make sure the SQL also is portable). `SQLAlchemy` also includes advanced features like connection pooling in its implementation of DBAPI that help to make it perform better than a direct database connection. # # Just be aware that the farther you move from a direct connection, the more potential there is for things to go wrong. Under the hood, `SQLAlchemy` is using `psycopg2` for its PostgreSQL database access, so now you have two relatively complex packages working in tandem. If you get a particularly vexing bug running SQL with `SQLAlchemy`, in particular complex SQL or statements that update or alter the database, make sure to try that SQL with a pure DBAPI client or in the command line client to see if it is a problem with `SQLAlchemy`, not with your SQL or database. # # `SQLAlchemy`'s database connection is called an engine. To connect a `SQLAlchemy` engine to a database, you will: # # - create a `SQLAlchemy` connection string for your database. # - use that string to initialize an engine and connect it to your database. # # A full connection URL for `SQLAlchemy` looks like this: # # dialect+driver://username:password@host:port/database # # If you recall back to our connection properties, we only need to specify host name and database. 
In `SQLAlchemy`, any elements of the URL that are not needed can be omitted. So for our database, the connection URL is: # # postgresql://10.10.2.10/appliedda # imports import sqlalchemy import datetime # + # Connect connection_string = 'postgresql://10.10.2.10/appliedda' pgsql_engine = sqlalchemy.create_engine( connection_string ) print( "SQLAlchemy engine connected to " + connection_string + " at " + str( datetime.datetime.now() ) ) # + # Single row query - with the streaming option so it does not return results until we "fetch" them: sql_string = "SELECT COUNT( * ) AS row_count FROM kcmo_lehd.mo_qcew_employers;" query_result = pgsql_engine.execution_options( stream_results = True ).execute( sql_string ) # output results - you can also check what columns "query_result" has by accessing # its "keys", like so: print( query_result.keys() ) # print an empty string to separate out our two more useful print statements print('') # fetch first (and only) row, then output the count first_row = query_result.fetchone() print("row_count = " + str( first_row[ "row_count" ] ) ) # + # Multiple row query sql_string = "" query_result = None result_list = None result_row = None row_counter = -1 # run query with the streaming option so it does not return results until we "fetch" them: # SQL sql_string = "SELECT * FROM kcmo_lehd.mo_qcew_employers LIMIT 1000;" # execute it. query_result = pgsql_engine.execution_options( stream_results = True ).execute( sql_string ) # ==> fetch rows to loop over: # all rows. #result_list = query_result.fetchall() # first 10 rows. result_list = query_result.fetchmany( size = 10 ) # loop result_counter = 0 for result_row in result_list: result_counter += 1 print( "- row " + str( result_counter ) + ": " + str( result_row ) ) #-- END loop over 10 rows --# # ==> loop over the rest one at a time. 
result_counter = 0 result_row = query_result.fetchone() while result_row is not None: # increment counter result_counter += 1 # get next row result_row = query_result.fetchone() #-- END loop over rows, one at a time. --# print( "fetchone() row_count = " + str( result_counter ) ) # + # Clean up: pgsql_engine.dispose() print( "SQLAlchemy engine dispose() called at " + str( datetime.datetime.now() ) ) # - # ### Python - `pandas` # # - Back to [Table of Contents](#Table-of-Contents) # # Next we'll use the [pandas package](http://pandas.pydata.org/) to populate `pandas` DataFrames from the results of SQL queries. `pandas` uses a `SQLAlchemy` database engine to connect to databases and run queries. It then reads data returned from a given SQL query and further processes it to store it in a tabular data format called a "DataFrame" (a term that will be familiar for those with R or STATA experience). # # DataFrames allow for easy statistical analysis, and can be directly used for machine learning. They also load your entire result set into memory by default, and so are not suitable for really large data sets. # # And, as discussed in the `SQLAlchemy` section, this is yet another layer added on top of other relatively complex database packages, such that you multiply the potential for a peculiarity in one to cause obscure, difficult-to-troubleshoot problems in one of the other layers. It won't occur frequently, but if you run into weird or inexplicable problems when turning SQL into DataFrames, try running the SQL using lower layers to isolate the problem. # # In the code cell below, we'll use `SQLAlchemy` to connect to the database, then we'll give this engine to pandas and let it retrieve and process data. 
# # _Note: in addition to processing SQL queries, `pandas` has a range of [Input/Output tools](http://pandas.pydata.org/pandas-docs/stable/io.html) that let it read from and write to a large variety of tabular data formats, including CSV and Excel files, databases via SQL, JSON files, and even SAS and Stata data files. In the example below, we'll use the `pandas.read_sql()` function to read the results of an SQL query into a data frame._ # imports import datetime import pandas import sqlalchemy # + # Connect - create SQLAlchemy engine for pandas to use. connection_string = 'postgresql://10.10.2.10/appliedda' pgsql_engine = sqlalchemy.create_engine( connection_string ) print( "SQLAlchemy engine connected to " + connection_string + " at " + str( datetime.datetime.now() ) ) # + # Single row query sql_string = "" df_ildoc_admit = "" first_row = None row_count = -1 # Single row query sql_string = "SELECT COUNT( * ) AS row_count FROM kcmo_lehd.mo_qcew_employers;" df_ildoc_admit = pandas.read_sql( sql_string, con = pgsql_engine ) # get row_count - first get first row first_row = df_ildoc_admit.iloc[ 0 ] # then grab value. row_count = first_row[ "row_count" ] print("row_count = " + str( row_count ) ) # and call head(). df_ildoc_admit.head() # + # Multiple row query sql_string = "" df_ildoc_admit = "" row_count = -1 result_row = None # SQL sql_string = "SELECT * FROM kcmo_lehd.mo_qcew_employers LIMIT 2000;" # execute it. df_ildoc_admit = pandas.read_sql( sql_string, con = pgsql_engine ) # unlike previous Python examples, rows are already fetched and in a dataframe: # you can loop over them... row_count = 0 for result_row in df_ildoc_admit.iterrows(): row_count += 1 #-- END loop over rows. --# print( "loop row_count = " + str( row_count ) ) # Print out the first X using head() output_count = 10 df_ildoc_admit.head( output_count ) # etc. # + # Close Connection - Except you don't have to because pandas does it for you!
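As a self-contained illustration of the same `pandas.read_sql()` pattern, the sketch below swaps in an in-memory SQLite database, since the class PostgreSQL server is only reachable inside the ADRF environment (`read_sql` also accepts a plain DBAPI connection for SQLite); the `employers` table and its values are invented for the example:

```python
import sqlite3
import pandas

# In-memory SQLite stands in for the class PostgreSQL server.
connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE employers ( ein TEXT, naics TEXT );")
connection.executemany(
    "INSERT INTO employers VALUES ( ?, ? );",
    [("1", "722511"), ("2", "722511"), ("3", "541511")]
)

# Same pattern as above: hand pandas a connection and a query, get back a DataFrame.
df = pandas.read_sql(
    "SELECT naics, COUNT(*) AS cnt FROM employers GROUP BY naics ORDER BY cnt DESC;",
    con=connection
)
print(df)
connection.close()
```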
class_notebooks/2_1_Databases.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Chapter 2 - Session 1 # # Matrix (Images) in Python # # ## Objectives # #### Learn how to use the Python library called NumPy, which is dedicated to matrix handling and related functions (all images are a form of matrix) # # ## Content # #### 1) Installation and Importing NumPy Library # #### 2) Creating Matrixes using NumPy # #### 3) Indexing Matrixes # #### 4) Math Operations with Matrixes # #### 5) Get Statistics of Matrixes # <hr> # # 1) Installation and Importing NumPy Library # # First we have to install NumPy into Python using the pip3 command as below # # pip3 install numpy # # If you use Anaconda, the NumPy library is already installed by default # # Then we can import and access various functions from the NumPy library in our Python workspace # + '''import numpy library to python workspace as np''' import numpy as np # - # # 2) Creating Matrixes using NumPy # # Actually, we can represent a vector or matrix as lists in Python, as below # # [1,2,3,4] - vector # [[1,2,3,4],[5,6,7,8]] - matrix # # But in order to do operations on matrixes, we have to use "for" loops, which is inefficient and hectic. 
To address this, NumPy provides an alternative way which is easy to manage and efficient # # + '''lists and numpy matrixes''' my_list = [[1,2,3,4],[5,6,7,8]] my_matrix = np.array([[1,2,3,4],[5,6,7,8]]) print(type(my_list)) print(type(my_matrix)) print('\n---------------------\n') '''getting shape of matrixes''' my_matrix = np.array([[1,2,3,4],[5,6,7,8]]) print(my_matrix.shape) print('\n---------------------\n') '''creating numpy arrays and matrixes''' row_vector = np.array([1,2,3,4]) col_vector = np.array([1,2,3,4]).reshape(-1,1) #column vector (note: .T has no effect on a 1-D array) my_matrix = np.array([[1,2,3,4],[5,6,7,8]]) print(row_vector) print(col_vector) print(my_matrix) print('\n---------------------\n') '''we can use built-in numpy functions to create special matrixes easily''' zeros_mat = np.zeros((4,4)) ones_mat = np.ones((4,4)) random_mat = np.random.rand(4,4) print(zeros_mat) print(ones_mat) print(random_mat) # - # ### Python Data Types # # The NumPy library has its own data types, which are broader than Python's default data types # # int8 - Byte (-128 to 127) # int16 - Integer (-32768 to 32767) # int32 - Integer (-2147483648 to 2147483647) # uint8 - Unsigned integer (0 to 255) # uint16 - Unsigned integer (0 to 65535) # uint32 - Unsigned integer (0 to 4294967295) # float16 - Half precision float: sign bit, 5 bits exponent, 10 bits mantissa # float32 - Single precision float: sign bit, 8 bits exponent, 23 bits mantissa # float64 - Double precision float: sign bit, 11 bits exponent, 52 bits mantissa # # So we can define our own data type for a matrix when we are creating it, or we can convert between data types # + '''get data type of matrix''' my_matrix = np.array([[1,2,3,4],[5,6,7,8]]) print(my_matrix.dtype) print('\n---------------------\n') '''convert data type from float to int''' my_matrix = np.array([1.5, 2.5, 3.5], dtype=np.float32) print(my_matrix.dtype) print(my_matrix) my_matrix_int = np.array(my_matrix, dtype=np.int32) print(my_matrix_int.dtype) print(my_matrix_int) # - # ### Exercise 1 # # Create a 
4x4 matrix with values 1,2,3,4 in the diagonal # ### Exercise 2 # # RGB images are 3-D matrixes. We will deal with 3-D matrixes in future sessions. So first, create a random 3x3 matrix, then create a 3-D 3x3x3 matrix with values from 1 to 27, and print the created matrixes # # # 3) Indexing Matrixes # # Once we have a matrix, we should be able to access a particular element or part of the matrix (similar to how we extracted the value at a particular index from a list) # # In the case of matrixes too, we use indexes to extract values from a matrix. It's similar to subsetting an image in a Remote Sensing context # + my_matrix = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]) print(my_matrix) print('\n---------------------\n') '''access only one value''' print(my_matrix[0,0]) print(my_matrix[3,2]) print('\n---------------------\n') '''access part of matrix''' print(my_matrix[0:3,0:3]) print(my_matrix[1,:]) print(my_matrix[1:3,:]) print(my_matrix[2:,:]) print(my_matrix[:2,:]) # - # ### Exercise 3 # # Create a 4x4 matrix with values 1,2,3,4 in the diagonal, remove the first and last columns and the first and last rows, and print the final 2x2 matrix # # 4) Math Operations with Matrixes # # Like numbers, we can perform mathematical operations with matrixes like addition, subtraction, multiplication, 
etc. # # Usually, operations on matrixes are performed in an element-wise manner # + x = np.array([[1, 2, 3], [4, 5, 6]]) y = np.array([[1, 1, 1], [2, 2, 2]]) print(x) print(y) print('\n---------------------\n') '''operations with scalars''' print(x * 0.1) print(x + 10) print(x / 10) print('\n---------------------\n') '''operations between matrixes''' print(x + y) print(x * y) print('\n---------------------\n') '''we can use numpy functions to do advanced element-wise operations on a matrix''' print(np.sqrt(x)) print(np.exp(x)) # - # ### Exercise 4 # # Create any 4x4 matrix with decimal values, then create and print a matrix with the rounded values of the original matrix # ### Exercise 5 # # Create any 4x4 matrix and calculate x^2+2x+1 for each element of the matrix # ### Exercise 6 # # Create any two 4x4 matrixes and calculate (matrix 2 - matrix 1) / (matrix 2 + matrix 1) for each element of the matrixes # # 5) Get Statistics of Matrixes # # Now we can create, index, and perform operations with matrixes. And there is another easy set of functions that allows us to calculate statistical parameters such as sum, mean, max, min, etc. of a matrix # + my_matrix = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]) print(my_matrix) print('\n---------------------\n') print(np.sum(my_matrix)) print(np.mean(my_matrix)) print(np.max(my_matrix)) print(np.min(my_matrix)) # - # In the above examples, we calculate statistics for the whole matrix. Furthermore, we can specify a particular direction along which to calculate statistics. # # In this example, we calculate statistics vertically and horizontally # + my_matrix = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]) print(my_matrix) print('\n---------------------\n') print(np.sum(my_matrix, axis=0)) print(np.sum(my_matrix, axis=1)) # - # Another important property of a matrix is the unique values in the matrix. # # Let's imagine someone has given us a classified satellite image, and we want to know what the class values are and how many of each there are. 
In this case, we first have to find the unique values in the given image (matrix) # my_matrix = np.array([3, 3, 3, 2, 2, 1, 1, 4, 4]) np.unique(my_matrix) # ### Exercise 7 # # Create a 10x10 array with random values and find the minimum and maximum values # ### Exercise 8 # # Create any 4x4 matrix containing duplicates, print all unique values of that matrix, and print the number of unique values of the matrix # ### Exercise 9 # # Create a matrix with 100 rows and 2 columns with random values, and calculate and print the mean and standard deviation of each column # ### Exercise 10 # # Create a 3-D 3x3x3 matrix with values from 1 to 27 and print it, then calculate the sum, mean, max, and min along all 3 dimensions
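For the classified-image question above (which class values appear, and how many pixels of each), `np.unique` can also return the per-class pixel counts when `return_counts=True`; the 3x3 matrix below is an invented toy image:

```python
import numpy as np

# toy "classified image": each pixel value is a class label
classified = np.array([[3, 3, 1],
                       [2, 2, 1],
                       [1, 4, 4]])

classes, counts = np.unique(classified, return_counts=True)
print(classes)  # [1 2 3 4]  - class values present
print(counts)   # [3 2 2 2]  - number of pixels in each class
```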
Chapter 2/C2S1 - Matrix (Images) in Python.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 1. Fill missing pieces # Fill `____` pieces below to have correct values for `lower_cased`, `stripped` and `stripped_lower_case` variables. original = ' Python strings are COOL! ' lower_cased = original.lower() stripped = original.strip() stripped_lower_cased = original.strip().lower() # Let's verify that the implementation is correct by running the cell below. `assert` will raise `AssertionError` if the statement is not true. # + editable=false assert lower_cased == ' python strings are cool! ' assert stripped == 'Python strings are COOL!' assert stripped_lower_cased == 'python strings are cool!' # - # # 2. Prettify ugly string # Use `str` methods to convert `ugly` to wanted `pretty`. # + editable=false ugly = ' tiTle of MY new Book\n\n' # - # Your implementation: pretty = ugly.strip().title() # Let's make sure that it does what we want. `assert` raises [`AssertionError`](https://docs.python.org/3/library/exceptions.html#AssertionError) if the statement is not `True`. # + editable=false print('pretty: {}'.format(pretty)) assert pretty == 'Title Of My New Book' # - # # 3. Format string based on existing variables # Create `sentence` by using `verb`, `language`, and `punctuation` and any other strings you may need. # + editable=false verb = 'is' language = 'Python' punctuation = '!' # - # Your implementation: verb1 = 'Learning' adjective = 'fun' sentence = verb1 + ' ' + language + ' ' + verb + ' ' + adjective + punctuation # + editable=false print('sentence: {}'.format(sentence)) assert sentence == 'Learning Python is fun!'
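Plain concatenation works, but the same `sentence` can also be built with an f-string (Python 3.6+), which keeps the spacing readable; `verb1` and `adjective` are the same helper variables introduced above:

```python
verb = 'is'
language = 'Python'
punctuation = '!'
verb1 = 'Learning'
adjective = 'fun'

# f-string interpolation: spaces live in the template, not in "+" chains
sentence = f'{verb1} {language} {verb} {adjective}{punctuation}'
print(sentence)  # Learning Python is fun!
```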
notebooks/beginner/exercises/strings_exercise.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 # language: python # name: python36 # --- # # Here is my oversimplified and rather naive understanding of the difference: # # As we know, CBOW is learning to predict the word by the context. Or maximize the probability of the target word by looking at the context. And this happens to be a problem for rare words. For example, given the context yesterday was a really [...] day CBOW model will tell you that most probably the word is beautiful or nice. Words like delightful will get much less attention of the model, because it is designed to predict the most probable word. This word will be smoothed over a lot of examples with more frequent words. # # On the other hand, the skip-gram model is designed to predict the context. Given the word delightful it must understand it and tell us that there is a huge probability that the context is yesterday was really [...] day, or some other relevant context. With skip-gram the word delightful will not try to compete with the word beautiful but instead, delightful+context pairs will be treated as new observations. # # UPDATE # Thanks to @0xF for sharing this article # # According to Mikolov # # Skip-gram: works well with small amount of the training data, represents well even rare words or phrases. # # CBOW: several times faster to train than the skip-gram, slightly better accuracy for the frequent words # # One more addition to the subject is found here: # # In the "skip-gram" mode alternative to "CBOW", rather than averaging the context words, each is used as a pairwise training example. 
That is, in place of one CBOW example such as [predict 'ate' from average('The', 'cat', 'the', 'mouse')], the network is presented with four skip-gram examples [predict 'ate' from 'The'], [predict 'ate' from 'cat'], [predict 'ate' from 'the'], [predict 'ate' from 'mouse']. (The same random window-reduction occurs, so half the time that would just be two examples, of the nearest words.) # # size # The size of the dense vector to represent each token or word (i.e. the context or neighboring words). If you have limited data, then size should be a much smaller value since you would only have so many unique neighbors for a given word. If you have lots of data, it’s good to experiment with various sizes. A value of 100–150 has worked well for me for similarity lookups. # # window # The maximum distance between the target word and its neighboring word. If your neighbor’s position is greater than the maximum window width to the left or the right, then some neighbors would not be considered as being related to the target word. In theory, a smaller window should give you terms that are more related. Again, if your data is not sparse, then the window size should not matter too much, as long as it’s not overly narrow or overly broad. If you are not too sure about this, just use the default value. # # min_count # Minimum frequency count of words. The model ignores words that do not satisfy the min_count. Extremely infrequent words are usually unimportant, so it's best to get rid of those. Unless your dataset is really tiny, this does not really affect the model in terms of your final results. This setting probably has more of an effect on memory usage and storage requirements of the model files. 
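The pairwise treatment described above can be made concrete in a few lines of Python — a minimal sketch that enumerates the (center, context) pairs a skip-gram model trains on, ignoring the random window reduction:

```python
def skipgram_pairs(tokens, window=2):
    """Enumerate the (center, context) training pairs skip-gram sees."""
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # a word is never its own context
                pairs.append((center, tokens[j]))
    return pairs

sent = ['The', 'cat', 'ate', 'the', 'mouse']
# the center word 'ate' pairs with each neighbor individually
print([p for p in skipgram_pairs(sent) if p[0] == 'ate'])
# → [('ate', 'The'), ('ate', 'cat'), ('ate', 'the'), ('ate', 'mouse')]
```

Each pair is a separate training example, which is why rare words like "delightful" get their own updates instead of being averaged away.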
# + #1. Skip-Gram sentences = [['I','love','nlp'], ['I','will','learn','nlp','in','2','months'], ['nlp','is','future'], ['nlp','saves','time','and','solves','lot','of','industry','problems'], ['nlp','uses','machine','learning']] # - # !pip install gensim # + import gensim from gensim.models import Word2Vec from sklearn.decomposition import PCA import matplotlib.pyplot as plt # training the model skipgram = Word2Vec(sentences, size=50, window=3, min_count=1, sg=1) # size=50 is the vector size generated and window is the window size print(skipgram) # - print(skipgram['nlp']) print(skipgram['deep']) # We get an error saying the word doesn't exist because the word was not in the input training data. This is the reason we need to train the algorithm on as much data as possible, so that we do not miss out on words. #save the model skipgram.save('skipgram.bin') #load the model skipgram = Word2Vec.load('skipgram.bin') # + #A 2-D PCA projection is one way to evaluate word embeddings. Let's generate it and see how it looks X = skipgram[skipgram.wv.vocab] pca = PCA(n_components=2) result = pca.fit_transform(X) #create a scatter plot of the projection plt.scatter(result[:,0],result[:,1]) words = list(skipgram.wv.vocab) for i,word in enumerate(words): print(i,word) plt.annotate(word,xy=(result[i,0],result[i,1])) plt.show() # + #Fast text import gensim from gensim.models import FastText from sklearn.decomposition import PCA import matplotlib.pyplot as plt #training the model fast = FastText(sentences,size=50,window=1,min_count=1,workers=5,min_n=1,max_n=2) #size=50 is the vector size generated and window is the window size print(fast) # - print(fast['nlp']) print(fast['deep']) fast.save('fast.bin') fast = FastText.load('fast.bin') # + #A 2-D PCA projection is one way to evaluate word embeddings. 
let's generate it and see how it looks X= fast[fast.wv.vocab] pca = PCA(n_components=2) result = pca.fit_transform(X) #create a scatter plot of the projection plt.scatter(result[:,0],result[:,1]) words = list(fast.wv.vocab) for i,word in enumerate(words): print(i,word) plt.annotate(word,xy=(result[i,0],result[i,1])) plt.show() # - model = Word2Vec.load_word2vec_format('C:')
Converting_text_to_features/Word_Embeddings.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] deletable=true editable=true # # Homework 3 - <NAME> # + [markdown] deletable=true editable=true # For this homework, you are a data scientist working for Pronto (before the end of their contract with the City of Seattle). Your job is to assist in determining how to do end-of-day adjustments in the number of bikes at stations so that all stations will have enough bikes for the next day of operation (as estimated by the weekday average for the station for the year). Your assistance will help in constructing a plan for each day of the week that specifies how many bikes should be moved from each station and how many bikes must be delivered to each station. # # Your assignment is to construct plots of the differences between 'from' and 'to' counts for each station by day of the week. Do this as a set of 7 subplots. You should use at least one function to construct your plots. 
# - import pandas as pd import matplotlib.pyplot as plt # The following ensures that the plots are in the notebook # %matplotlib inline # We'll also use capabilities in numpy import numpy as np import calendar df = pd.read_csv("2015_trip_data.csv") # + [markdown] deletable=true editable=true # ## Create a dataframe with station counts averages by day-of-week # - start_day = [pd.to_datetime(time).dayofweek for time in df.starttime] stop_day = [pd.to_datetime(time).dayofweek for time in df.stoptime] df['startday'] = start_day # Creates a new column named 'startday' df['stopday'] = stop_day groupby_day_from = df.groupby(['from_station_id','startday']).size() groupby_day_to = df.groupby(['to_station_id','stopday']).size() from_means = groupby_day_from.groupby(level=[0]).mean() # Computes the mean of counts by day to_means = groupby_day_to.groupby(level=[0]).mean() # Computes the mean of counts by day df_day_counts = pd.DataFrame({'From_mean': groupby_day_from, 'To_mean': groupby_day_to}).unstack() df_day_counts.head() # + [markdown] deletable=true editable=true # ## Structure the 7 day-of-week plots as subplots # - def week_plot(data, column, opts): n_groups = len(data.index) index = np.arange(n_groups) # The "raw" x-axis of the bar plot bar_width = 0.35 # Width of the bars opacity = 0.6 # How transparent the bars are #VVVV Changed to do two plots with error bars rects1 = plt.bar(index, data.From_mean[column], bar_width, alpha=opacity, color='b', label='From', ) rects2 = plt.bar(index + bar_width, data.To_mean[column], bar_width, alpha=opacity, color='r', label='To' ) if 'xlabel' in opts: plt.xlabel(opts['xlabel']) if 'ylabel' in opts: plt.ylabel(opts['ylabel']) if 'xticks' in opts and opts['xticks']: plt.xticks(index + bar_width / 2, data.index) _, labels = plt.xticks() # Get the new labels of the plot plt.setp(labels, rotation=90) # Rotate labels to make them readable else: labels = ['' for x in data.index] plt.xticks(index, labels) if 'ylim' in opts: 
plt.ylim(opts['ylim']) if 'title' in opts and opts['title'] == True: plt.title(calendar.day_name[column]) if 'legend' in opts and opts['legend'] == True: plt.legend() def plot_barN(df, columns, opts, fig): fig = plt.figure(figsize=fig) num_columns = len(columns) local_opts = dict(opts) # Make a deep copy of the object idx = 0 for column in columns: idx += 1 plt.subplot(4,2,idx) if 'title' in local_opts: plt.title(calendar.day_name[column]) local_opts['xticks'] = False local_opts['xlabel'] = '' local_opts['ylabel'] = '' if idx == num_columns or idx == num_columns - 1: if 'xticks' in opts and opts['xticks'] == True: local_opts['xticks'] = True if 'xlabel' in opts: local_opts['xlabel'] = opts['xlabel'] if 'ylabel' in opts and idx % 2 == 1: local_opts['ylabel'] = opts['ylabel'] week_plot(df_day_counts, column, local_opts) # Constant variables in_cols = [0,1,2,3,4,5,6] fig = (16,24) # + # Plot options opts = {} plot_barN(df_day_counts, in_cols, opts, fig) # + [markdown] deletable=true editable=true # ## Label the plots by day-of-week # + # Plot options #opts = {'title' : True, 'xlabel': 'Stations', 'ylabel': 'Counts', 'legend': True} opts = {'title': True} plot_barN(df_day_counts, in_cols, opts, fig) # + [markdown] deletable=true editable=true # ## Label the x-axis for plots in the last row and label the y-axis for plots in the left-most column # + deletable=true editable=true # Plot options opts = {'title' : True, 'xlabel': 'Stations', 'ylabel': 'Counts', 'legend': True, 'xticks': True} plot_barN(df_day_counts, in_cols, opts, fig)
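The rebalancing plan described in the introduction boils down to the per-station difference between the weekday 'from' and 'to' means. A toy sketch of that net-adjustment calculation — the station IDs and counts below are invented for illustration, not taken from 2015_trip_data.csv:

```python
import pandas as pd

# Hypothetical per-station weekday means; the column names mirror the
# From_mean / To_mean columns built in the notebook, but the data is made up.
counts = pd.DataFrame(
    {'From_mean': [12.0, 5.0, 8.0], 'To_mean': [9.0, 7.0, 8.0]},
    index=['BT-01', 'BT-03', 'CBD-13'])

# Positive net = more departures than arrivals, so bikes must be delivered
# at end of day; negative net = a surplus of bikes to move elsewhere.
counts['net'] = counts['From_mean'] - counts['To_mean']
print(counts['net'])
```

The same one-line subtraction applied per day-of-week column of `df_day_counts` would give the quantities each daily plan needs.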
Homework3.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Mask R-CNN for DeepScore # For our own dataset DeepScore # + import os import sys import itertools import math import logging import json import re import random from collections import OrderedDict import numpy as np import matplotlib import matplotlib.pyplot as plt import matplotlib.patches as patches import matplotlib.lines as lines from matplotlib.patches import Polygon # Root directory of the project ROOT_DIR = os.path.abspath("../") # Import Mask RCNN sys.path.append(ROOT_DIR) # To find local version of the library from mrcnn import utils from mrcnn import visualize from mrcnn.visualize import display_images import mrcnn.model as modellib from mrcnn.model import log from mrcnn.config import Config # %matplotlib inline # + import datetime import numpy as np # Import Mask RCNN from mrcnn.config import Config from mrcnn import model as modellib, utils # for mask import pathlib from skimage.io import imread, imsave, imshow import numpy as np from scipy import ndimage import matplotlib.pyplot as plt import matplotlib.patches as patchess import skimage # process xml file import xml.etree.ElementTree # Local path to trained weights file COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5") # Download COCO trained weights from Releases if needed if not os.path.exists(COCO_MODEL_PATH): utils.download_trained_weights(COCO_MODEL_PATH) # Directory to save logs and trained model MODEL_DIR = os.path.join(ROOT_DIR, "logs") # through the command line argument --logs DEFAULT_LOGS_DIR = os.path.join(ROOT_DIR, "logs") # - # ## Configurations # + class ScoreConfig(Config): """Configuration for training on the toy shapes dataset. Derives from the base Config class and overrides values specific to the toy shapes dataset. 
""" # Give the configuration a recognizable name NAME = "symbols" # Backbone network architecture # Supported values are: resnet50, resnet101 BACKBONE = "resnet50" # Input image resizing # Random crops of size 512x512 IMAGE_RESIZE_MODE = "crop" IMAGE_MIN_DIM = 256 IMAGE_MAX_DIM = 256 IMAGE_MIN_SCALE = 2.0 # Train on 1 GPU and 8 images per GPU. We can put multiple images on each # GPU because the images are small. Batch size is 8 (GPUs * images/GPU). GPU_COUNT = 1 IMAGES_PER_GPU = 1 # If enabled, resizes instance masks to a smaller size to reduce # memory load. Recommended when using high-resolution images. USE_MINI_MASK = True MINI_MASK_SHAPE = (28, 28) # (height, width) of the mini-mask # ROIs kept after non-maximum supression (training and inference) POST_NMS_ROIS_TRAINING = 1000 POST_NMS_ROIS_INFERENCE = 2000 # Number of training and validation steps per epoch STEPS_PER_EPOCH = 1000/IMAGES_PER_GPU VALIDATION_STEPS = 50/IMAGES_PER_GPU # Number of classes (including background) NUM_CLASSES = 1 + 114 # background + 114 symbols # Use smaller anchors because our image and objects are small RPN_ANCHOR_SCALES = (8, 16, 32, 64, 128) # anchor side in pixels # Number of ROIs per image to feed to classifier/mask heads # The Mask RCNN paper uses 512 but often the RPN doesn't generate # enough positive proposals to fill this and keep a positive:negative # ratio of 1:3. You can increase the number of proposals by adjusting # the RPN NMS threshold. TRAIN_ROIS_PER_IMAGE = 512 # Maximum number of ground truth instances to use in one image MAX_GT_INSTANCES = 512 # Max number of final detections per image DETECTION_MAX_INSTANCES = 512 config = ScoreConfig() config.display() # - # ## Notebook Preferences def get_ax(rows=1, cols=1, size=8): """Return a Matplotlib Axes array to be used in all visualizations in the notebook. Provide a central point to control graph sizes. 
Change the default size attribute to control the size of rendered images """ _, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows)) return ax # ## Dataset # # Load the DeepScore dataset # # Extend the Dataset class, add a method to load the score dataset, `load_score()`, and override the following methods: # # * load_image() # * load_mask() # * image_reference() class ScoreDataset(utils.Dataset): """Loads the DeepScore dataset of music-score images. Images, XML bounding-box annotations, and pixel masks are read from disk. """ def load_score(self, dataset_dir, subset, split): """Load a subset of the DeepScore dataset. dataset_dir: Root directory of the dataset. subset: Subset to load: train or val """ for key, value in class_dict.items(): self.add_class("symbol", value, key) # Train or validation dataset? assert subset in ["train", "val"] img_dir = pathlib.Path(dataset_dir).glob('*/images_png/*.png') img_sorted = sorted([x for x in img_dir]) xml_dir = pathlib.Path(dataset_dir).glob('*/xml_annotations/*.xml') xml_sorted = sorted([x for x in xml_dir]) mask_dir = pathlib.Path(dataset_dir).glob('*/pix_annotations_png/*.png') mask_sorted = sorted([x for x in mask_dir]) if subset == "train": img_sorted = img_sorted[:split] xml_sorted = xml_sorted[:split] mask_sorted = mask_sorted[:split] if subset == "val": img_sorted = img_sorted[split:] xml_sorted = xml_sorted[split:] mask_sorted = mask_sorted[split:] # add images for i, image_path in enumerate(img_sorted): # image = imread(str(image_path)) # height, width = image.shape[:2] image_name = os.path.basename(image_path) xml_path = xml_sorted[i] symbols, _, height, width = get_symbol_info(xml_path) mask_path = str(mask_sorted[i]) # only select scores with fewer than 500 symbols if len(symbols) < 500: self.add_image( "symbol", image_id=image_name, path=image_path, width=width, height=height, symbols=symbols, mask_path=mask_path) def 
image_reference(self, image_id): """Return the score data of the image.""" info = self.image_info[image_id] if info["source"] == "symbol": return info["path"] else: super(self.__class__).image_reference(self, image_id) def load_mask(self, image_id): """Generate instance masks for an image. Returns: masks: A bool array of shape [height, width, instance count] with one mask per instance. class_ids: a 1D array of class IDs of the instance masks. """ image_info = self.image_info[image_id] if image_info["source"] != "symbol": return super(self.__class__, self).load_mask(image_id) # image_id == xml_id symbols = image_info['symbols'] mask = imread(image_info['mask_path']) masks = np.zeros([image_info['height'], image_info['width'], len(symbols)], dtype=np.uint8) for i, symbol in enumerate(symbols): # coords are row, col, so we should put (y, x), instead of (x, y) xmin, xmax, ymin, ymax = symbol[1], symbol[2], symbol[3], symbol[4] masks[ymin:ymax+1, xmin:xmax+1, i] = mask[ymin:ymax+1, xmin:xmax+1] # Map class names to class IDs. class_ids = np.array([self.class_names.index(s[0]) for s in symbols]) return masks.astype(np.bool), class_ids.astype(np.int32) def train(model): """Train the model.""" # Training dataset. dataset_train = ScoreDataset() dataset_train.load_score(dataset_dir, "train", split) dataset_train.prepare() # Validation dataset dataset_val = ScoreDataset() dataset_val.load_score(dataset_dir, "val", split) dataset_val.prepare() # *** This training schedule is an example. Update to your needs *** # Since we're using a very small dataset, and starting from # COCO trained weights, we don't need to train too long. Also, # no need to train all layers, just the heads should do it. 
print("Training network heads") model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE, epochs=30, layers='heads') # ## Load Data # + # Glob the training data and load a single image path img_paths = pathlib.Path('../../').glob('*/images_png/*.png') img_sorted = sorted([x for x in img_paths]) # mask and xml files mask_paths = pathlib.Path('../../').glob('*/pix_annotations_png/*.png') mask_sorted = sorted([x for x in mask_paths]) xml_paths = pathlib.Path('../../').glob('*/xml_annotations/*.xml') xml_sorted = sorted([x for x in xml_paths]) # check the image, mask and xml path names are in the same order rand_img = 1000 im_path = img_sorted[rand_img] mask_path = mask_sorted[rand_img] xml_path = xml_sorted[rand_img] num_samples = len(img_sorted) print(im_path) print(len(img_sorted)) print(mask_path) print(xml_path) im = imread(str(im_path)) mask = imread(str(mask_path)) root = xml.etree.ElementTree.parse(str(xml_path)).getroot() size = root.findall('size') width = float(size[0][0].text) height = float(size[0][1].text) # + # get the information of all symbols in one image def get_symbol_info(xml_path): root = xml.etree.ElementTree.parse(str(xml_path)).getroot() size = root.findall('size') width = float(size[0][0].text) height = float(size[0][1].text) symbols = [] symbol_names = set() # use a set to store unique symbol names rectangles = [] # get the bounding box for each object, multiply with its width and height to get the real pixel coords for symbol in root.findall('object'): name = symbol.find('name').text xmin = round(float(symbol.find('bndbox')[0].text)*width) xmax = round(float(symbol.find('bndbox')[1].text)*width) ymin = round(float(symbol.find('bndbox')[2].text)*height) ymax = round(float(symbol.find('bndbox')[3].text)*height) # current_rectangle = name, (xmin, ymin), xmax - xmin, ymax - ymin current_symbol = name, xmin, xmax, ymin, ymax # rectangles.append(current_rectangle) symbols.append(current_symbol) symbol_names.add(name) return 
symbols, symbol_names, int(height), int(width) # + # uncomment the whole cell if you want to regenerate symbol set # class_dict = {} # symbol_type = set() # # form a universal symbol set fot the whole dataset, this can take 2 ~ 3 min # for x in xml_sorted: # _, symbol_names,_ = get_symbol_info(x) # symbol_type = symbol_type.union(symbol_names) # # save the symbol_type set for convenience# save t # np.save('symbol_type.npy', symbol_type) # # Load the dictionary # symbol_type = np.load('symbol_type.npy').item() # print('Total num of symbols in the dictionary: %d' % (len(symbol_type))) # i = 0 # for item in symbol_type: # class_dict[item] = i # i += 1 # print(class_dict['fClef']) # # save the class dictionary for futre use so that the integer class label does not change every time # np.save('class_dict.npy', class_dict) # - # uncomment this cell if you want to load previous symbol dict class_dict = np.load('class_dict.npy').item() print('Total number of symbols in the whole dataset:', len(class_dict)) print('The integer value for fClef is:', class_dict['fClef']) # # Create Dataset # load dataset # the directory where deepscore folder is in dataset_dir = '../../' # The former split number of data used as training data # The latter num_samples - split number of data used as validation data split = 8000 # + # the dataset is very large, can take 1~3 minutes # Training dataset dataset_train = ScoreDataset() dataset_train.load_score(dataset_dir, "train", split) dataset_train.prepare() # Validation dataset dataset_val = ScoreDataset() dataset_val.load_score(dataset_dir, "val", split) dataset_val.prepare() # - print("Image Count in training set: {}".format(len(dataset_train.image_ids))) print("Class Count: {}".format(dataset_train.num_classes)) # for i, info in enumerate(dataset_train.class_info): # print("{:3}. 
{:50}".format(i, info['name'])) print("Image Count in validation set: {}".format(len(dataset_val.image_ids))) print("Class Count: {}".format(dataset_val.num_classes)) # for i, info in enumerate(dataset_val.class_info): # print("{:3}. {:50}".format(i, info['name'])) # Load and display random samples image_ids = np.random.choice(dataset_train.image_ids, 1) for image_id in image_ids: image = dataset_train.load_image(image_id) mask, class_ids = dataset_train.load_mask(image_id) visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names) print('There are %d symbols in the score' %(mask.shape[2])) # ## Bounding Boxes # # Rather than using bounding box coordinates provided by the source datasets, we compute the bounding boxes from masks instead. This allows us to handle bounding boxes consistently regardless of the source dataset, and it also makes it easier to resize, rotate, or crop images because we simply generate the bounding boxes from the updated masks rather than computing a bounding box transformation for each type of image transformation. # + # Load random image and mask. image_id = random.choice(dataset_train.image_ids) image = dataset_train.load_image(image_id) mask, class_ids = dataset_train.load_mask(image_id) # Compute Bounding box bbox = utils.extract_bboxes(mask) # Display image and additional stats print("image_id ", image_id, dataset_train.image_reference(image_id)) log("image", image) log("mask", mask) log("class_ids", class_ids) log("bbox", bbox) # Display image and instances visualize.display_instances(image, bbox, mask, class_ids, dataset_train.class_names) # - # ## Create Model # Create model in training mode model = modellib.MaskRCNN(mode="training", config=config, model_dir=MODEL_DIR) # + # initialize weights from pretrained model instead of from scratch # Which weights to start with? 
init_with = "coco" # imagenet, coco, or last if init_with == "imagenet": model.load_weights(model.get_imagenet_weights(), by_name=True) elif init_with == "coco": # Load weights trained on MS COCO, but skip layers that # are different due to the different number of classes # See README for instructions to download the COCO weights model.load_weights(COCO_MODEL_PATH, by_name=True, exclude=["mrcnn_class_logits", "mrcnn_bbox_fc", "mrcnn_bbox", "mrcnn_mask"]) elif init_with == "last": # Load the last model you trained and continue training model.load_weights(model.find_last()[1], by_name=True) # - # ## Training # # Train in two stages: # 1. Only the heads. Here we're freezing all the backbone layers and training only the randomly initialized layers (i.e. the ones that we didn't use pre-trained weights from MS COCO). To train only the head layers, pass `layers='heads'` to the `train()` function. # # 2. Fine-tune all layers. For this simple example it's not necessary, but we're including it to show the process. Simply pass `layers="all` to train all layers. # Train the head branches # Passing layers="heads" freezes all layers except the head # layers. You can also pass a regular expression to select # which layers to train by name pattern. model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE, epochs=1, layers='heads') # Fine tune all layers # Passing layers="all" trains all layers. You can also # pass a regular expression to select which layers to # train by name pattern. 
model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE / 10, epochs=2, layers="all") # ## Detection # + class InferenceConfig(ScoreConfig): GPU_COUNT = 1 IMAGES_PER_GPU = 1 inference_config = InferenceConfig() # Recreate the model in inference mode model = modellib.MaskRCNN(mode="inference", config=inference_config, model_dir=MODEL_DIR) # Get path to saved weights # Either set a specific path or find last trained weights # model_path = os.path.join(ROOT_DIR, ".h5 file name here") model_path = model.find_last()[1] # Load trained weights (fill in path to trained weights here) assert model_path != "", "Provide path to trained weights" print("Loading weights from ", model_path) model.load_weights(model_path, by_name=True) # + # Test on a random image image_id = random.choice(dataset_val.image_ids) original_image, image_meta, gt_class_id, gt_bbox, gt_mask =\ modellib.load_image_gt(dataset_val, inference_config, image_id, use_mini_mask=False) log("original_image", original_image) log("image_meta", image_meta) log("gt_class_id", gt_class_id) log("gt_bbox", gt_bbox) log("gt_mask", gt_mask) visualize.display_instances(original_image, gt_bbox, gt_mask, gt_class_id, dataset_train.class_names, figsize=(8, 8)) # + results = model.detect([original_image], verbose=1) r = results[0] visualize.display_instances(original_image, r['rois'], r['masks'], r['class_ids'], dataset_val.class_names, r['scores'], ax=get_ax()) # - # ## Evaluation # + # Compute VOC-Style mAP @ IoU=0.5 # Running on 10 images. Increase for better accuracy. 
image_ids = np.random.choice(dataset_val.image_ids, 10) APs = [] for image_id in image_ids: # Load image and ground truth data image, image_meta, gt_class_id, gt_bbox, gt_mask =\ modellib.load_image_gt(dataset_val, inference_config, image_id, use_mini_mask=False) molded_images = np.expand_dims(modellib.mold_image(image, inference_config), 0) # Run object detection results = model.detect([image], verbose=0) r = results[0] # Compute AP AP, precisions, recalls, overlaps =\ utils.compute_ap(gt_bbox, gt_class_id, gt_mask, r["rois"], r["class_ids"], r["scores"], r['masks']) APs.append(AP) print("mAP: ", np.mean(APs))
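Two ideas this notebook leans on — extracting a bounding box from an instance mask (the job of `utils.extract_bboxes`) and the IoU overlap behind the mAP@0.5 metric — can be sketched standalone. These are illustrative reimplementations under my own conventions, not the `mrcnn` library code:

```python
import numpy as np

def bbox_from_mask(mask):
    """Smallest (y1, x1, y2, x2) box enclosing the True pixels of a 2-D mask."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    if not rows.any():
        return (0, 0, 0, 0)  # empty mask: no box
    y1, y2 = np.where(rows)[0][[0, -1]]
    x1, x2 = np.where(cols)[0][[0, -1]]
    return (int(y1), int(x1), int(y2) + 1, int(x2) + 1)  # y2/x2 exclusive

def box_iou(a, b):
    """Intersection-over-union of two (y1, x1, y2, x2) boxes."""
    y1, x1 = max(a[0], b[0]), max(a[1], b[1])
    y2, x2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, y2 - y1) * max(0, x2 - x1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 1:5] = True
print(bbox_from_mask(mask))  # → (2, 1, 4, 5)
print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))  # → 0.14285714285714285 (= 1/7)
```

A detection counts toward the mAP above only when its `box_iou` with a ground-truth box of the same class reaches 0.5.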
deepscore/inspect_deepscore_data.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Example Notebook # # This notebook demonstrates how to use the package API. # ## Hello World Example # # The following cell calls the ``hello_world`` function and prints the output. # + from pyraliddemo.example.hello import hello_world print(hello_world()) # - # ## Stefan-Boltzmann Example # # The following cell calls the ``StefBoltz`` class and prints the luminosity corresponding to the radius and effective temperature provided. # + from pyraliddemo.example.classes import StefBoltz r_sun = 7e8 # m t_sun = 5800 # K luminosity = StefBoltz(r_sun, t_sun).luminosity() print('The luminosity is {0:.2e}w.'.format(luminosity))
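For context, the luminosity printed above follows the Stefan–Boltzmann law, L = 4πR²σT⁴. The internals of `StefBoltz` aren't shown here, so it is an assumption that `luminosity()` computes exactly this, but a standalone sketch of the formula looks like:

```python
import math

SIGMA = 5.670374419e-8  # Stefan–Boltzmann constant, W m^-2 K^-4

def luminosity(radius_m, t_eff_k):
    """Blackbody luminosity L = 4*pi*R^2 * sigma * T^4, in watts."""
    return 4.0 * math.pi * radius_m**2 * SIGMA * t_eff_k**4

print('{0:.2e}'.format(luminosity(7e8, 5800)))  # → 3.95e+26
```

With the solar radius and effective temperature used in the notebook, this lands close to the accepted solar luminosity of roughly 3.8e26 W.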
notebooks/example_notebook.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="rzmBy3Bf19vv" # * Source : https://taeguu.tistory.com/27 # + id="EeNa2VfR2GAD" import gc import numpy as np import pandas as pd import matplotlib.pyplot as plt # 교차검증 lib from sklearn.model_selection import StratifiedKFold,train_test_split from tqdm import tqdm_notebook from sklearn.metrics import accuracy_score, roc_auc_score #모델 lib from keras.datasets import mnist from keras.utils.np_utils import to_categorical from keras.preprocessing.image import ImageDataGenerator, load_img from keras.models import Sequential, Model from keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau from keras.layers import Dense, Dropout, Flatten, Activation, Conv2D, AveragePooling2D,BatchNormalization, MaxPooling2D from keras import layers from keras.optimizers import Adam,RMSprop, SGD #모델 from keras.applications import VGG16, VGG19, resnet50 #경고메세지 무시 import warnings warnings.filterwarnings(action='ignore') import os import gc # + [markdown] id="U-39CF1C1Da7" # # Data # + id="42rbCpCs1Fr-" # https://www.kaggle.com/bulentsiyah/dogs-vs-cats-classification-vgg16-fine-tuning filenames = os.listdir("/content/dogs-vs-cats/train/train") # ! 
/content/ categories = [] for filename in filenames: category = filename.split('.')[0] if category == 'dog': categories.append(1) else: categories.append(0) train = pd.DataFrame({ 'filename': filenames, 'category': categories }) train.head() # + id="QGvrDl0f2Q_E" # !pwd # + [markdown] id="4LvcBu9T1Il8" # #Visualizing the data # # + id="0-FWSf2L1JFh" sample = filenames[2] image = load_img("../input/dogs-vs-cats/train/train/"+sample) plt.imshow(image) plt.show() train["category"] = train["category"].astype('str') its = np.arange(train.shape[0]) train_idx, test_idx = train_test_split(its, train_size = 0.8, random_state=42) df_train = train.iloc[train_idx, :] X_test = train.iloc[test_idx, :] its = np.arange(df_train.shape[0]) train_idx, val_idx = train_test_split(its, train_size = 0.8, random_state=42) X_train = df_train.iloc[train_idx, :] X_val = df_train.iloc[val_idx, :] print(X_train.shape) print(X_val.shape) print(X_test.shape) X_train['category'].value_counts() # + id="im9atuOj1KNo" # Parameter # + id="QVK6YAUs1Kva" image_size = 227 img_size = (image_size, image_size) nb_train_samples = len(X_train) nb_validation_samples = len(X_val) nb_test_samples = len(X_test) epochs = 20 #batch size 128 batch_size =128 # + id="dss2qkZy1Mhd" # Define Generator config # + id="4CqTti0-1NDl" train_datagen =ImageDataGenerator( rescale=1./255, rotation_range=10, shear_range=0.2, zoom_range=0.2, horizontal_flip=True ) val_datagen = ImageDataGenerator(rescale=1./255) test_datagen = ImageDataGenerator(rescale=1./255) # + id="Rq3vESDI1OJB" #generator # + id="vMiK6-qt1OhM" train_generator = train_datagen.flow_from_dataframe( dataframe=X_train, directory="../input/dogs-vs-cats/train/train", x_col = 'filename', y_col = 'category', target_size = img_size, color_mode='rgb', class_mode='binary', batch_size=batch_size, seed=42 ) validation_generator = val_datagen.flow_from_dataframe( dataframe=X_val, directory="../input/dogs-vs-cats/train/train", x_col = 'filename', y_col = 'category', 
target_size = img_size, color_mode='rgb', class_mode='binary', batch_size=batch_size, ) test_generator = test_datagen.flow_from_dataframe( dataframe=X_test, directory="../input/dogs-vs-cats/train/train", x_col = 'filename', y_col=None, target_size= img_size, color_mode='rgb', class_mode=None, batch_size=batch_size, shuffle=False ) # + id="QpBx_Ez01TWr" # Model - AlexNet # + id="d7CWmGZl1T2D" #INPUT input_shape = (227, 227, 3) model = Sequential() #CONV1 model.add(Conv2D(96, (11, 11), strides=4,padding='valid', input_shape=input_shape)) #MAX POOL1 model.add(MaxPooling2D(pool_size=(3, 3), strides=2)) #NORM1 The original AlexNet used local response normalization, which is no longer common; batch normalization is used here instead. model.add(BatchNormalization()) #CONV2 model.add(Conv2D(256, (3, 3), activation='relu', padding='same')) #MAX POOL2 model.add(MaxPooling2D(pool_size=(3, 3), strides=2)) #NORM2 model.add(BatchNormalization()) #CONV3 model.add(Conv2D(384, (3, 3),strides=1, activation='relu', padding='same')) #CONV4 model.add(Conv2D(384, (3, 3),strides=1, activation='relu', padding='same')) #CONV5 model.add(Conv2D(256, (3, 3),strides=1, activation='relu', padding='same')) #MAX POOL3 model.add(MaxPooling2D(pool_size=(3, 3), strides=2)) model.add(Flatten()) #FC6 The FC layers are slimmed down since there are only a few classes to predict. 
model.add(Dense(1024, activation='relu')) model.add(Dropout(0.5)) #FC7 model.add(Dense(512, activation='relu')) model.add(Dropout(0.5)) #FC8 sigmoid output because this is binary classification model.add(Dense(1, activation='sigmoid')) # SGD Momentum 0.9, L2 weight decay 5e-4 optimizer = SGD(lr=0.01, decay=5e-4, momentum=0.9) model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy']) model.summary() # + [markdown] id="rX3Rtu4Q1YVw" # # # Train # + id="GcCVEyc51ZCx" def get_steps(num_samples, batch_size): if (num_samples % batch_size) > 0 : return (num_samples // batch_size) + 1 else : return num_samples // batch_size # %%time from keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau # + id="Olc2EOMU1c1C" #model path MODEL_SAVE_FOLDER_PATH = './model/' if not os.path.exists(MODEL_SAVE_FOLDER_PATH): os.mkdir(MODEL_SAVE_FOLDER_PATH) model_path = MODEL_SAVE_FOLDER_PATH + 'AlexNet.hdf5' patient = 5 callbacks_list = [ # Learning rate 1e-2, reduced by 10 manually when val accuracy plateaus ReduceLROnPlateau( monitor = 'val_accuracy', # when triggered, divide the learning rate (lr) by 10 factor = 0.1, # adjust lr if val_accuracy has not improved for 5 epochs patience = patient, # minimum learning rate min_lr=0.00001, verbose=1, mode='max' ), ModelCheckpoint( filepath=model_path, monitor ='val_accuracy', # do not overwrite the model file unless the monitored metric improves save_best_only = True, verbose=1, mode='max') ] history = model.fit_generator( train_generator, steps_per_epoch = get_steps(nb_train_samples, batch_size), epochs=epochs, validation_data = validation_generator, validation_steps = get_steps(nb_validation_samples, batch_size), callbacks = callbacks_list ) gc.collect() # + [markdown] id="BahnHFoh1eVJ" # # Predict # + id="-ZlQhQP51nom" # %%time test_generator.reset() prediction = model.predict_generator( generator = test_generator, steps = get_steps(nb_test_samples, batch_size), verbose=1 ) print('Test ROC AUC : ', roc_auc_score(X_test['category'].astype('int'), prediction, average='macro')) # + [markdown] id="D8Slfyoy114K" # # acc / loss 
plot # + id="f0ZuvCGR1nrl" acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] epochs = range(len(acc)) plt.plot(epochs, acc, label='Training acc') plt.plot(epochs, val_acc, label='Validation acc') plt.title('Training and validation accuracy') plt.legend() plt.ylim(0.5,1) plt.show() # + id="JCsEHUc_1nt3" loss = history.history['loss'] val_loss = history.history['val_loss'] plt.plot(epochs, loss, label='Training loss') plt.plot(epochs, val_loss, label='Validation loss') plt.title('Training and validation loss') plt.legend() plt.ylim(0,0.5) plt.show() # + [markdown] id="lb07wxNm1nzE" # # Result # + id="-doOqE8I1n1y" from IPython.core.display import display, HTML display(HTML("<style>.container {width:90% !important;}</style>"))
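The `get_steps` helper defined in the Train section above is just ceiling division; a minimal standalone check of that equivalence (pure Python, no Keras needed):

```python
def get_steps(num_samples, batch_size):
    # Same logic as the notebook's helper, written as ceiling division:
    # a trailing partial batch still counts as one step.
    return -(-num_samples // batch_size)

print(get_steps(1000, 64))  # 15 full batches + 1 partial batch -> 16
print(get_steps(1024, 64))  # exactly 16 full batches -> 16
```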
05.CNN_With_Neural_Network_Architecture/AlexNet.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python2 # --- # # Machine Learning Engineer Nanodegree # ## Supervised Learning # ## Project: Building a Student Intervention System # Welcome to the second project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `'TODO'` statement. Please be sure to read the instructions carefully! # # In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. # # >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. # ### Question 1 - Classification vs. Regression # *Your goal for this project is to identify students who might need early intervention before they fail to graduate. Which type of supervised learning problem is this, classification or regression? 
Why?* # **Answer: ** This is a classification problem because we are only trying to classify whether a student needs intervention (1, or true) or not (0, or false). We need to predict a discrete value rather than a continuous value such as the house prices in the Boston housing project. Thus it is a classification problem. # ## Exploring the Data # Run the code cell below to load necessary Python libraries and load the student data. Note that the last column from this dataset, `'passed'`, will be our target label (whether the student graduated or didn't graduate). All other columns are features about each student. # + # Import libraries import numpy as np import pandas as pd from time import time from sklearn.metrics import f1_score # Read student data student_data = pd.read_csv("student-data.csv") print "Student data read successfully!" # - # ### Implementation: Data Exploration # Let's begin by investigating the dataset to determine how many students we have information on, and learn about the graduation rate among these students. In the code cell below, you will need to compute the following: # - The total number of students, `n_students`. # - The total number of features for each student, `n_features`. # - The number of those students who passed, `n_passed`. # - The number of those students who failed, `n_failed`. # - The graduation rate of the class, `grad_rate`, in percent (%).
# # + # TODO: Calculate number of students n_students = student_data.shape[0] # TODO: Calculate number of features n_features = student_data.shape[1]-1 # TODO: Calculate passing students n_passed = student_data[student_data["passed"]=="yes"].count()["school"] # TODO: Calculate failing students n_failed = student_data[student_data["passed"]=="no"].count()["school"] # TODO: Calculate graduation rate grad_rate = float(n_passed)*100/n_students # Print the results print "Total number of students: {}".format(n_students) print "Number of features: {}".format(n_features) print "Number of students who passed: {}".format(n_passed) print "Number of students who failed: {}".format(n_failed) print "Graduation rate of the class: {:.2f}%".format(grad_rate) # - # ## Preparing the Data # In this section, we will prepare the data for modeling, training and testing. # # ### Identify feature and target columns # It is often the case that the data you obtain contains non-numeric features. This can be a problem, as most machine learning algorithms expect numeric data to perform computations with. # # Run the code cell below to separate the student data into feature and target columns to see if any features are non-numeric. # + # Extract feature columns feature_cols = list(student_data.columns[:-1]) # Extract target column 'passed' target_col = student_data.columns[-1] # Show the list of columns print "Feature columns:\n{}".format(feature_cols) print "\nTarget column: {}".format(target_col) # Separate the data into feature data and target data (X_all and y_all, respectively) X_all = student_data[feature_cols] y_all = student_data[target_col] # Show the feature information by printing the first five rows print "\nFeature values:" print X_all.head() # - # ### Preprocess Feature Columns # # As you can see, there are several non-numeric columns that need to be converted! Many of them are simply `yes`/`no`, e.g. `internet`. These can be reasonably converted into `1`/`0` (binary) values. 
# # Other columns, like `Mjob` and `Fjob`, have more than two values, and are known as _categorical variables_. The recommended way to handle such a column is to create as many columns as possible values (e.g. `Fjob_teacher`, `Fjob_other`, `Fjob_services`, etc.), and assign a `1` to one of them and `0` to all others. # # These generated columns are sometimes called _dummy variables_, and we will use the [`pandas.get_dummies()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html?highlight=get_dummies#pandas.get_dummies) function to perform this transformation. Run the code cell below to perform the preprocessing routine discussed in this section. # + def preprocess_features(X): ''' Preprocesses the student data and converts non-numeric binary variables into binary (0/1) variables. Converts categorical variables into dummy variables. ''' # Initialize new output DataFrame output = pd.DataFrame(index = X.index) # Investigate each feature column for the data for col, col_data in X.iteritems(): # If data type is non-numeric, replace all yes/no values with 1/0 if col_data.dtype == object: col_data = col_data.replace(['yes', 'no'], [1, 0]) # If data type is categorical, convert to dummy variables if col_data.dtype == object: # Example: 'school' => 'school_GP' and 'school_MS' col_data = pd.get_dummies(col_data, prefix = col) # Collect the revised columns output = output.join(col_data) return output X_all = preprocess_features(X_all) print "Processed feature columns ({} total features):\n{}".format(len(X_all.columns), list(X_all.columns)) # - # ### Implementation: Training and Testing Data Split # So far, we have converted all _categorical_ features into numeric values. For the next step, we split the data (both features and corresponding labels) into training and test sets. In the following code cell below, you will need to implement the following: # - Randomly shuffle and split the data (`X_all`, `y_all`) into training and testing subsets. 
# - Use 300 training points (approximately 75%) and 95 testing points (approximately 25%). # - Set a `random_state` for the function(s) you use, if provided. # - Store the results in `X_train`, `X_test`, `y_train`, and `y_test`. # + # TODO: Import any additional functionality you may need here from sklearn.cross_validation import train_test_split # TODO: Set the number of training points num_train = 300 # Set the number of testing points num_test = X_all.shape[0] - num_train # TODO: Shuffle and split the dataset into the number of training and testing points above X_train, X_test, y_train, y_test = train_test_split(X_all, y_all, test_size=95, random_state=0) # Show the results of the split print "Training set has {} samples.".format(X_train.shape[0]) print "Testing set has {} samples.".format(X_test.shape[0]) # - # ## Training and Evaluating Models # In this section, you will choose 3 supervised learning models that are appropriate for this problem and available in `scikit-learn`. You will first discuss the reasoning behind choosing these three models by considering what you know about the data and each model's strengths and weaknesses. You will then fit the model to varying sizes of training data (100 data points, 200 data points, and 300 data points) and measure the F<sub>1</sub> score. You will need to produce three tables (one for each model) that shows the training set size, training time, prediction time, F<sub>1</sub> score on the training set, and F<sub>1</sub> score on the testing set. 
# # **The following supervised learning models are currently available in** [`scikit-learn`](http://scikit-learn.org/stable/supervised_learning.html) **that you may choose from:** # - Gaussian Naive Bayes (GaussianNB) # - Decision Trees # - Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting) # - K-Nearest Neighbors (KNeighbors) # - Stochastic Gradient Descent (SGDC) # - Support Vector Machines (SVM) # - Logistic Regression # ### Question 2 - Model Application # *List three supervised learning models that are appropriate for this problem. For each model chosen* # - Describe one real-world application in industry where the model can be applied. *(You may need to do a small bit of research for this — give references!)* # - What are the strengths of the model; when does it perform well? # - What are the weaknesses of the model; when does it perform poorly? # - What makes this model a good candidate for the problem, given what you know about the data? # **Answer: ** # 1. **Decision Trees ** # - It can be used to classify different land covers (trees, bare ground, water, etc.) from remote sensing data. Reference - http://www.sciencedirect.com/science/article/pii/S0034425797000497 # - The Strengths are # - It's easy to visualize, understand and interpret # - Automatically or implicitly performs feature selection # - Requires less data preparation by the user # - Non-linearity in the data does not affect the performance of the model # - The Weaknesses are # - Developing large decision trees with many branches is a complex and time-consuming affair # - A small change in the data may result in a completely different tree structure # - Given that the data is small and not very complex, a decision tree can be used as a quick, easy solution for this problem. # **** # 2. **Support Vector Machines ** # - It can be used to classify and validate different cancer tissues from microarray expression data.
Reference - http://bioinformatics.oxfordjournals.org/content/16/10/906.short # - The Strengths are # - Works well even when the data has some noise # - Predicts well even when a large number of features is used relative to the size of the data # - Kernels can be used to model complex data # - The Weaknesses are # - Choosing the correct kernel with good parameters for a given dataset is hard # - Time-consuming for large datasets # - Since the data is small, and given the flexibility of kernels, this problem can be modeled efficiently using an SVM. # **** # 3. **Gaussian Naive Bayes** # - It can be used in selecting features for text classification. Reference - http://www.sciencedirect.com/science/article/pii/S0957417408003564 # - The Strengths are # - It has a small memory and CPU footprint # - It's extremely fast and simple # - The Weaknesses are # - Its predictions are a little poorer compared to other methods # - Considering its small memory and CPU footprint, it meets the main requirements of the school very well. # ### Setup # Run the code cell below to initialize three helper functions which you can use for training and testing the three supervised learning models you've chosen above. The functions are as follows: # - `train_classifier` - takes as input a classifier and training data and fits the classifier to the data. # - `predict_labels` - takes as input a fit classifier, features, and a target labeling and makes predictions using the F<sub>1</sub> score. # - `train_predict` - takes as input a classifier, and the training and testing data, and performs `train_classifier` and `predict_labels`. # - This function will report the F<sub>1</sub> score for both the training and testing data separately. # + def train_classifier(clf, X_train, y_train): ''' Fits a classifier to the training data.
''' # Start the clock, train the classifier, then stop the clock start = time() clf.fit(X_train, y_train) end = time() # Print the results print "Trained model in {:.4f} seconds".format(end - start) def predict_labels(clf, features, target): ''' Makes predictions using a fit classifier based on F1 score. ''' # Start the clock, make predictions, then stop the clock start = time() y_pred = clf.predict(features) end = time() # Print and return results print "Made predictions in {:.4f} seconds.".format(end - start) return f1_score(target.values, y_pred, pos_label='yes') def train_predict(clf, X_train, y_train, X_test, y_test): ''' Train and predict using a classifer based on F1 score. ''' # Indicate the classifier and the training set size print "Training a {} using a training set size of {}. . .".format(clf.__class__.__name__, len(X_train)) # Train the classifier train_classifier(clf, X_train, y_train) # Print the results of prediction for both training and testing print "F1 score for training set: {:.4f}.".format(predict_labels(clf, X_train, y_train)) print "F1 score for test set: {:.4f}.".format(predict_labels(clf, X_test, y_test)) # - # ### Implementation: Model Performance Metrics # With the predefined functions above, you will now import the three supervised learning models of your choice and run the `train_predict` function for each one. Remember that you will need to train and predict on each classifier for three different training set sizes: 100, 200, and 300. Hence, you should expect to have 9 different outputs below — 3 for each model using the varying training set sizes. In the following code cell, you will need to implement the following: # - Import the three supervised learning models you've discussed in the previous section. # - Initialize the three models and store them in `clf_A`, `clf_B`, and `clf_C`. # - Use a `random_state` for each model you use, if provided. 
# - **Note:** Use the default settings for each model — you will tune one specific model in a later section. # - Create the different training set sizes to be used to train each model. # - *Do not reshuffle and resplit the data! The new training points should be drawn from `X_train` and `y_train`.* # - Fit each model with each training set size and make predictions on the test set (9 in total). # **Note:** Three tables are provided after the following code cell which can be used to store your results. # + # TODO: Import the three supervised learning models from sklearn from sklearn.naive_bayes import GaussianNB from sklearn.tree import DecisionTreeClassifier from sklearn.svm import SVC # TODO: Initialize the three models clf_A = GaussianNB() clf_B = DecisionTreeClassifier(random_state=1) clf_C = SVC(random_state=1) # TODO: Set up the training set sizes for clf in [clf_A,clf_B,clf_C]: # show the estimator type print "\n{}: \n".format(clf.__class__.__name__) # TODO: loop thru training sizes for n in [100,200,300]: # fit model using "n" training data points (i.e., 100, 200, or 300) train_predict(clf, X_train[:n], y_train[:n], X_test, y_test) print "\n" # - # ### Tabular Results # Edit the cell below to see how a table can be designed in [Markdown](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet#tables). You can record your results from above in the tables provided. 
# ** Classifier 1 - Gaussian Naive Bayes** # # | Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) | # | :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: | # | 100 | 0.0010 | 0.0000 | 0.8550 | 0.7481 | # | 200 | 0.0010 | 0.0000 | 0.8321 | 0.7132 | # | 300 | 0.0010 | 0.0000 | 0.8088 | 0.7500 | # # ** Classifier 2 - Decision Tree** # # | Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) | # | :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: | # | 100 | 0.0020 | 0.0000 | 1.0000 | 0.6496 | # | 200 | 0.0010 | 0.0000 | 1.0000 | 0.7231 | # | 300 | 0.0020 | 0.0000 | 1.0000 | 0.7107 | # # ** Classifier 3 - Support Vector Machine** # # | Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) | # | :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: | # | 100 | 0.0020 | 0.0010 | 0.8591 | 0.7838 | # | 200 | 0.0030 | 0.0010 | 0.8693 | 0.7755 | # | 300 | 0.0070 | 0.0020 | 0.8692 | 0.7586 | # ## Choosing the Best Model # In this final section, you will choose from the three supervised learning models the *best* model to use on the student data. You will then perform a grid search optimization for the model over the entire training set (`X_train` and `y_train`) by tuning at least one parameter to improve upon the untuned model's F<sub>1</sub> score. # ### Question 3 - Choosing the Best Model # *Based on the experiments you performed earlier, in one to two paragraphs, explain to the board of supervisors what single model you chose as the best model.
Which model is generally the most appropriate based on the available data, limited resources, cost, and performance?* # **Answer: ** # # Comparing the three models based on the experiments, we can see that the decision tree is inferior to the other two. Though its prediction time is nearly 0, even after being given the full data the model could not cross an F1 score of 0.72, while the other two models managed to cross 0.75. Naive Bayes appears to be a strong candidate, considering that it trains and predicts very fast even when the size of the data increases, and it provides a decent F1 score of 0.75. # # Support Vector Machines, on the other hand, provide the highest F1 score of 0.7586. However, the training time of an SVM tends to increase sharply with the size of the data. Even so, for the given dataset it takes only 0.0070 seconds to train the model and 0.0020 seconds to make a prediction, which is satisfyingly fast. Naive Bayes could be preferred if the dataset is expected to grow. # # Thus, considering the given data, limited resources, cost, and performance, and the fact that an SVM can be tuned well, I would choose the Support Vector Machine as the best model. # ### Question 4 - Model in Layman's Terms # *In one to two paragraphs, explain to the board of directors in layman's terms how the final model chosen is supposed to work. Be sure that you are describing the major qualities of the model, such as how the model is trained and how the model makes a prediction. Avoid using advanced mathematical or technical jargon, such as describing equations or discussing the algorithm implementation.* # **Answer: ** The final model chosen is the Support Vector Machine (SVM).
Imagine that the students who passed are given green caps and the students who failed are given red caps, and that all the students in our data are made to stand in a field at positions calculated as a function of what we know about them (study time, guardian, age, etc.), as shown in Fig 4.1. Now we draw two lines, called margins, on the field: one close to the students who passed and another close to the students who failed, with the biggest possible distance between them. We then draw a final line along the middle of the two margins. This ensures that the two groups (students who passed and students who failed) are separated with the maximum width between them, as shown in Fig 4.1. All the students then disperse, but the line remains on the field. Now if a new student arrives, we make him stand on the field at the position calculated as a function of what we know about him (study time, guardian, age, etc.), and we see which side of the line he stands on: the side of the students who passed, or the side of those who failed. In this way we can predict whether our student needs an intervention. This is how an SVM works: it is trained on the known student data to fit a line, or hyperplane, that best separates the students who passed from those who failed, and when a new student comes along it predicts whether the student needs intervention (i.e., may fail) by checking which side of that line or hyperplane he falls on. # # In a 2D plot (with two features) we draw a line (1D) to separate the students who passed from those who didn't, whereas in a 3D plot (with three features) we draw a plane or sheet (2D) to make the separation. In a real scenario like our student data there are 'n' (many) features, and a hyperplane ((n-1)-dimensional), the general term across all dimensions, is used to separate the students who passed from those who didn't.
# # Now, there are some cases, as shown in Fig 4.2, where the data cannot be separated by a line or, in general, a hyperplane. In such cases the method above may not work. But there is a nifty way to overcome this problem, called the kernel trick. Though the name looks complex, the technique behind it is simple: it just adds a new feature (say Z) which is a function (mostly a dot product) of all the known features of the student (study time, guardian, age, etc.). After adding the new feature (dimension), we may now be able to separate the data with a hyperplane. To understand this, look at Fig 4.3, where the X and Y data from Fig 4.2 are, say, multiplied (or combined by a dot product or any other function) to form a new third dimension Z and plotted together. Now we can see that a sheet, plane, or, in general, hyperplane can be placed to separate the students who passed from those who failed. This is known as the kernel trick. By using the different available kernels, and creating new custom kernels, we can ensure that our SVM can classify even complex data. # # - **Fig 4.1 Linearly Separable** # <img src="https://udacity-github-sync-content.s3.amazonaws.com/_imgs/372/1457891591/SVM_2.png"> # # - **Fig 4.2 Linearly Non-Separable Data** # <img src="http://www.eric-kim.net/eric-kim-net/posts/1/imgs/dataset_nonsep.png"> # # - **Fig 4.3 Linearly Separable Using a Kernel** # <img src="https://udacity-github-sync-content.s3.amazonaws.com/_imgs/372/1457891585/data_2d_to_3d_hyperplane.png"> # # ### Implementation: Model Tuning # Fine-tune the chosen model. Use grid search (`GridSearchCV`) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this.
In the code cell below, you will need to implement the following: # - Import [`sklearn.grid_search.GridSearchCV`](http://scikit-learn.org/0.17/modules/generated/sklearn.grid_search.GridSearchCV.html) and [`sklearn.metrics.make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html). # - Create a dictionary of parameters you wish to tune for the chosen model. # - Example: `parameters = {'parameter' : [list of values]}`. # - Initialize the classifier you've chosen and store it in `clf`. # - Create the F<sub>1</sub> scoring function using `make_scorer` and store it in `f1_scorer`. # - Set the `pos_label` parameter to the correct value! # - Perform grid search on the classifier `clf` using `f1_scorer` as the scoring method, and store it in `grid_obj`. # - Fit the grid search object to the training data (`X_train`, `y_train`), and store it in `grid_obj`. # + # TODO: Import 'GridSearchCV' and 'make_scorer' from sklearn.grid_search import GridSearchCV from sklearn.metrics import make_scorer # TODO: Create the parameters list you wish to tune parameters = [{ 'C': [0.01,0.1,0.2,0.3,0.4,0.5], 'kernel': ['linear', 'rbf','poly','sigmoid'] }] # TODO: Initialize the classifier clf = SVC() # TODO: Make an f1 scoring function using 'make_scorer' def f1_score_fn(y, y_pred): return f1_score(y, y_pred, pos_label='yes') f1_scorer = make_scorer(f1_score_fn) # TODO: Perform grid search on the classifier using the f1_scorer as the scoring method grid_obj = GridSearchCV(clf,parameters,scoring=f1_scorer) # TODO: Fit the grid search object to the training data and find the optimal parameters grid_obj = grid_obj.fit(X_train,y_train) # Get the estimator clf = grid_obj.best_estimator_ # Report the final F1 score for training and testing after parameter tuning print "Tuned model has a training F1 score of {:.4f}.".format(predict_labels(clf, X_train, y_train)) print "Tuned model has a testing F1 score of {:.4f}.".format(predict_labels(clf, X_test, y_test)) # - 
# ### Question 5 - Final F<sub>1</sub> Score # *What is the final model's F<sub>1</sub> score for training and testing? How does that score compare to the untuned model?* # **Answer: ** The final training F1 score of my tuned model is 0.8632, which is less than the untuned model's training F1 score of 0.8692, whereas the final testing F1 score of my tuned model is 0.7671, which is greater than the untuned model's testing F1 score of 0.7586. # > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to # **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
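The kernel trick described in Question 4 can be sketched with a tiny standard-library-only example (not part of the original submission; the threshold 2.5 and the lifting function Z = X^2 + Y^2 are illustrative choices, not the SVM's actual kernel): points that no straight line can separate in 2D become separable by a flat plane once the extra dimension is added.

```python
import random

random.seed(0)

# Toy data in the spirit of Fig 4.2: an inner cluster surrounded by an
# outer ring -- not separable by any straight line in the 2D plane.
inner = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50)]
inner = [(x, y) for x, y in inner if x * x + y * y < 1]
outer = []
while len(outer) < 50:
    x, y = random.uniform(-3, 3), random.uniform(-3, 3)
    if 4 < x * x + y * y < 9:
        outer.append((x, y))

# The lifting step from Fig 4.3: add a third dimension Z as a function
# of the known 2D features (here Z = X^2 + Y^2).
def lift(p):
    x, y = p
    return (x, y, x * x + y * y)

# In the lifted 3D space, the flat plane Z = 2.5 separates the groups.
separable = (all(lift(p)[2] < 2.5 for p in inner)
             and all(lift(p)[2] > 2.5 for p in outer))
print(separable)  # -> True
```

An SVM with a suitable kernel performs an equivalent lifting implicitly, without ever computing the extra coordinates explicitly.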
student_intervention.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from tensorflow import keras from tensorflow.keras import layers,models,utils # - (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() print('x_train shape : %s' % str(x_train.shape)) print('y_train shape : %s' % str(y_train.shape)) print('x_test shape : %s' % str(x_test.shape)) print('y_test shape : %s' % str(y_test.shape)) model = models.Sequential() model.add(layers.Dense(512, activation='relu', input_shape=(28 * 28,))) model.add(layers.Dense(10, activation='softmax')) model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) x_train = x_train.reshape((60000, 28 * 28)) x_train = x_train.astype('float32') / 255 x_test = x_test.reshape((10000, 28 * 28)) x_test = x_test.astype('float32') / 255 print('x_train shape : %s' % str(x_train.shape)) print('x_test shape : %s' % str(x_test.shape)) y_train = utils.to_categorical(y_train, 10) y_test = utils.to_categorical(y_test, 10) print('y_train shape : %s' % str(y_train.shape)) print('y_test shape : %s' % str(y_test.shape)) model.fit(x_train, y_train, epochs=5, batch_size=128) test_loss, test_acc = model.evaluate(x_test, y_test) print('test_acc:', test_acc)
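The `utils.to_categorical` calls above perform one-hot encoding of the digit labels; a dependency-free sketch of the same transformation (`to_one_hot` is a hypothetical helper, not the Keras implementation):

```python
def to_one_hot(labels, num_classes):
    # Each label k becomes a length-num_classes vector with a 1.0 at
    # index k and 0.0 everywhere else.
    return [[1.0 if i == k else 0.0 for i in range(num_classes)]
            for k in labels]

encoded = to_one_hot([5, 0, 4], 10)
print(encoded[0])  # the "5" row: 1.0 at index 5, zeros elsewhere
```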
notebooks/dl-chollet/scripts/Handwritten Digit Recognition Simple.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/IanReyes2/OOP-58003/blob/main/OOP%2058003%20LONG%20QUIZ1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="FWT-asqpf12t" # Bank Account # + id="DcURUzDTjJ_0" class Bank_Account: def __init__(self): self.balance=0 print("Welcome to the Imperial Bank of the Philippines") self.name = input("Enter your name: ") self.AccountNumber = input("Enter your account number: ") def deposit(self): amount=float(input("Enter amount to be deposited: ")) self.balance += amount print("Amount Deposited: ",amount) def display(self): print("Net Available Balance=",self.balance) s = Bank_Account() s.deposit() s.display()
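The class above reads everything through `input()`, which makes it awkward to exercise programmatically; a non-interactive sketch of the same balance logic (`BankAccount` here is a hypothetical variant, not part of the quiz) takes the amounts as arguments instead:

```python
class BankAccount(object):
    # Same balance logic as Bank_Account above, but the amount is passed
    # as an argument instead of being read from input(), so the class
    # can be exercised and tested directly.
    def __init__(self, name, account_number):
        self.balance = 0.0
        self.name = name
        self.account_number = account_number

    def deposit(self, amount):
        self.balance += amount
        return self.balance

s = BankAccount("Ian", "58003")
s.deposit(100.0)
s.deposit(50.5)
print(s.balance)  # -> 150.5
```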
OOP 58003 LONG QUIZ1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.0 64-bit (''py38'': conda)' # language: python # name: python3 # --- # # Loading a Dataset # + import torch from torch.utils.data import Dataset from torchvision import datasets from torchvision.transforms import ToTensor import matplotlib.pyplot as plt training_data = datasets.FashionMNIST( root="data", train=True, download=True, transform=ToTensor() ) test_data = datasets.FashionMNIST( root="data", train=False, download=True, transform=ToTensor() ) # - # # Iterating and Visualizing the Dataset labels_map = { 0: "T-Shirt", 1: "Trouser", 2: "Pullover", 3: "Dress", 4: "Coat", 5: "Sandal", 6: "Shirt", 7: "Sneaker", 8: "Bag", 9: "Ankle Boot", } figure = plt.figure(figsize=(8, 8)) cols, rows = 3, 3 for i in range(1, cols * rows + 1): sample_idx = torch.randint(len(training_data), size=(1,)).item() img, label = training_data[sample_idx] figure.add_subplot(rows, cols, i) plt.title(labels_map[label]) plt.axis("off") plt.imshow(img.squeeze(), cmap="gray") plt.show() for i in range(1, 10): print(i) # # Creating a Custom Dataset for your files # + import os import pandas as pd from torchvision.io import read_image class CustomImageDataset(Dataset): def __init__(self, annotations_file, img_dir, transform=None, target_transform=None): self.img_labels = pd.read_csv(annotations_file) self.img_dir = img_dir self.transform = transform self.target_transform = target_transform def __len__(self): return len(self.img_labels) def __getitem__(self, idx): img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0]) image = read_image(img_path) label = self.img_labels.iloc[idx, 1] if self.transform: image = self.transform(image) if self.target_transform: label = self.target_transform(label) return image, label # - # # Preparing your data for training with DataLoaders # + from torch.utils.data import
DataLoader train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True) test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True) # - # # Iterate through the DataLoader # Display image and label. train_features, train_labels = next(iter(train_dataloader)) print(f"Feature batch shape: {train_features.size()}") print(f"Labels batch shape: {train_labels.size()}") img = train_features[0].squeeze() label = train_labels[0] plt.imshow(img, cmap="gray") plt.show() print(f"Label: {label}")
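The `Dataset`/`DataLoader` machinery used above boils down to the map-style `__len__`/`__getitem__` protocol plus batched iteration; a torch-free sketch of that contract (illustrative only, not the actual torch implementation):

```python
class SquaresDataset(object):
    # Minimal map-style dataset: sized and indexable, the same contract
    # that CustomImageDataset implements above.
    def __init__(self, n):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, idx):
        return idx, idx * idx  # a (feature, label) pair

def batches(dataset, batch_size):
    # What a non-shuffling DataLoader does: walk the dataset in chunks,
    # with a smaller final batch when the sizes don't divide evenly.
    for start in range(0, len(dataset), batch_size):
        stop = min(start + batch_size, len(dataset))
        yield [dataset[i] for i in range(start, stop)]

ds = SquaresDataset(10)
all_batches = list(batches(ds, 4))
print([len(b) for b in all_batches])  # -> [4, 4, 2]
```

The real `DataLoader` adds shuffling, collation into tensors, and multi-process loading on top of this same iteration pattern.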
pythonExample/pytorchExample/IntroductionTopytorch/Datasets&DataLoaders.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import sys from scipy import integrate import matplotlib.pyplot as plt from math import log10, log2, pi # %matplotlib inline np.set_printoptions(16) print("Python float epsilon: {}".format(sys.float_info[8])) print("Numpy float64 epsilon: {}".format(np.finfo(np.float64).eps)) # + def f1(x): #todo division by zero return (np.cos(x)-1)/(np.sqrt(x)) def f2_func(theta_m): def f(theta): return np.sqrt(8/(1e-8+np.cos(theta)-np.cos(theta_m))) return f true_f1 = 0.26874616993238043 print("True value of f1 (including analytical part), error 1e-14, calculated using romberg: ", true_f1) # - def plot(I, N, title, true_value=None, log=True): if log: y = [log10(abs(i)) for i in I] else: y = [i for i in I] x = [log2(i/10) for i in N] plt.plot(x, y, label="Computed value") if true_value is not None: if log: y2 = [log10(abs(true_value))]*(len(I)) else: y2 = [true_value]*(len(N)) plt.plot(x, y2, label="True value") plt.plot if log: plt.ylabel("log10(|I|)") else: plt.ylabel("|I|") plt.xlabel("log2(N/10)") plt.title(title) plt.legend() plt.show() # ### Left rectangle rule def integrate_rect(f, lower, upper, N, verbose=False): """Integrates using left rectangle rule""" x_k, h = np.linspace(lower, upper, N, False, retstep=True, dtype=np.float64) assert h==abs(upper-lower)/N if verbose: print(h) f_k = f(x_k) return h*np.sum(f_k) I = [integrate_rect(f1, 1e-14, 1, 10**i)+ 1-np.cos(1) for i in range(1,8)] N = [10**i for i in range(1,8)] plot(I, N, "left rectangle", true_value=true_f1) print("best estimate: {}, error:{}, N: {}".format(I[-1], I[-1]-true_f1, N[-1])) # ### Trapezoidal rule def integrate_trap(f, lower, upper, N, oddterms=False, verbose=False): h = abs(upper-lower)/N if verbose: print('h = {}'.format(h)) if oddterms: x_k = np.arange(lower+h, 
upper-0.5*h, 2*h, dtype=np.float64) else: x_k = np.arange(lower, upper+0.5*h, h, dtype=np.float64) if verbose==2: print("points: ", x_k) A_k = f(x_k) if verbose: print("scipy integrate: ", integrate.trapz(A_k, dx=h)) if not oddterms: A_k[0] *= 0.5 A_k[-1] *= 0.5 result = h*A_k.sum() if verbose: print('my result: ', result) return result # + #analytical part a = 1-np.cos(1) I_list, N_list = [], [] N = 2 I = integrate_trap(f1, 1e-14, 1, N) for i in range(9): N *= 2 I_next = 0.5*I + integrate_trap(f1, 1e-14, 1, N, oddterms=True) error = (I_next - I)/3 I_best = I_next + error print("N: {}, I: {}, I_next: {}, error: {}, I_best: {}".format(N, I+a, I_next+a, error, I_best+a)) I = I_next I_list.append(I_best+a) N_list.append(N) plot(I_list, N_list, "Adaptive trapezoidal", true_f1) # - # ### Romberg integration # + #Analytical part a = 1-np.cos(1) num_iters = 7 N = 2 R = [] #Get the first order estimates I = integrate_trap(f1, 1e-14, 1, N) R.append(I) for i in range(0, num_iters-1): N *= 2 I_next = 0.5*I + integrate_trap(f1, 1e-14, 1, N, oddterms=True) I = I_next R.append(I) #Get the higher order estimates R_ = [value for value in R] #deep copy for j in range(1, num_iters): R_next = [R_[i]+ (R_[i] - R_[i-1])/(4**j-1) for i in range(1,len(R_))] latest_error = [(R_[i] - R_[i-1])/(4**i-1) for i in range(1,len(R_))][0] R_ = R_next print('best romberg estimate: {}, error: {}, depth: {}, N_max: {}'.format(R_[0]+a, latest_error, num_iters, N)) # - # ### Simpson's rule # Note: Simpson's rule requires an even number of equally spaced subintervals (an odd number of sample points) def integrate_simpson(f, lower, upper, N, verbose=False): h = abs(upper-lower)/N if verbose: print('h = {}'.format(h)) x = np.arange(lower, upper+0.5*h, h, dtype=np.float64) if verbose==2: print("points: ", x) A = f(x) if verbose: print("scipy integrate: ", integrate.simps(A, dx=h)) args = (f, lower, upper, N) T, S = simpsons_odd_terms(*args), simpsons_even_terms(*args) result = h*(S.sum() + T.sum()*2) if verbose: print('my result: ', result) 
return result # + def simpsons_odd_terms(f, lower, upper, N): h = abs(upper-lower)/N #Odd terms x = np.arange(lower+h, upper-0.5*h, 2*h, dtype=np.float64) A_odd = f(x)*2/3 return A_odd def simpsons_even_terms(f, lower, upper, N): h = abs(upper-lower)/N #Even terms x = np.arange(lower, upper+0.5*h, 2*h, dtype=np.float64) A_even = f(x)*2/3 A_even[0] *= 1/2 A_even[-1] *= 1/2 return A_even # + I_list = [] N_list = [] N = 4 l, u = 1e-14, 1 T, S = simpsons_odd_terms(f1, l, u, N).sum(), simpsons_even_terms(f1, l, u, N).sum() for i in range(1, 6): h = abs(l-u)/N I = h*(S + 2*T) N *= 2 h_next = abs(l-u)/N S_next = S + T T_next = simpsons_odd_terms(f1, l, u, N).sum() I_next = h_next*(S_next + 2*T_next) error = (I_next - I)/15 I_best = I_next + error print("N: {}, I: {}, I_next: {}, error: {}, I_best: {}".format(N, I+a, I_next+a, error, I_best+a)) S, T, I = S_next, T_next, I_next I_list.append(I_best+a) N_list.append(N) plot(I_list, N_list, "Adaptive simpsons's", true_f1) # - # ### Gaussian quadrature from gaussxw import gaussxwab as g def integrate_gauss(f, lower, upper, N): x, w = g(N, lower, upper) return np.sum(w*f(x)) I = [integrate_gauss(f1, 1e-14, 1, 2**i)+ 1-np.cos(1) for i in range(1,5)] N = [2**i for i in range(1,5)] plot(I, N, "gaussian quadrature", true_value=true_f1) print("best estimate: {}, error:{}, N: {}".format(I[-1], I[-1]-true_f1, N[-1])) # ## Non-linear oscillator # + N = 100 I_gauss, I_trap, I_simp = [], [], [] values = [0.1, 0.2, 0.5, 1, 2, 3] for theta_m in values: f = f2_func(theta_m) I_gauss.append(integrate_gauss(f, 0, theta_m, N)) I_trap.append(integrate_trap(f, 0, theta_m, N)) I_simp.append(integrate_simpson(f, 0, theta_m, N)) print("theta_m: {}, trapezoid: {}, simpsons's: {}, gauss: {}".format(theta_m, I_trap[-1], I_simp[-1], I_gauss[-1])) plt.plot(values, I_gauss, label='Gaussian quadrature') plt.plot(values, [6.28711, 6.29893, 6.38279, 6.69998, 8.34975, 16.155], label='true values (Wolfram alpha)') plt.xlabel('theta_m') plt.ylabel('computed 
integral') plt.legend() plt.show()
Assignment2.ipynb
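The adaptive trapezoidal loop in the notebook above halves the step size, estimates the leading-order error as (I_next - I)/3, and adds that estimate back as a Richardson correction. A stand-alone sketch of the same idea (for simplicity it re-evaluates all points instead of reusing the odd-terms trick):

```python
import numpy as np

def trap(f, a, b, n):
    """Plain trapezoidal rule with n subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def adaptive_trap(f, a, b, tol=1e-10, max_doublings=20):
    """Double n until the Richardson error estimate drops below tol."""
    n = 2
    I = trap(f, a, b, n)
    for _ in range(max_doublings):
        n *= 2
        I_next = trap(f, a, b, n)
        err = (I_next - I) / 3  # leading-order trapezoidal error estimate
        if abs(err) < tol:
            return I_next + err  # Richardson-extrapolated value
        I = I_next
    return I

# integral of x**2 on [0, 1] is exactly 1/3
print(adaptive_trap(lambda x: x**2, 0.0, 1.0))
```

Adding the error estimate back turns each trapezoidal pair into a Simpson-quality result, which is why the notebook's `I_best` column converges much faster than `I` itself.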
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import json from pprint import pprint import secrets as secrets_module from dydx3 import Client from dydx3 import private_key_to_public_key_pair_hex from dydx3.constants import NETWORK_ID_ROPSTEN, NETWORK_ID_MAINNET from dydx3.constants import API_HOST_ROPSTEN, API_HOST_MAINNET from web3 import Web3 def generate_private_key(): priv = secrets_module.token_hex(32) private_key = "0x" + priv return private_key def onboard_initial_account(eth_private_key, network_id, host): print(f"Eth private key: {eth_private_key}\n") dydx_client = Client( network_id=network_id, host=host, eth_private_key=eth_private_key, ) stark_private_key = dydx_client.onboarding.derive_stark_key() dydx_client.stark_private_key = stark_private_key print(f"STARK private key: {stark_private_key}\n") public_x, public_y = private_key_to_public_key_pair_hex(stark_private_key) print(f"STARK (public x, public y): ({public_x},{public_y})\n") onboarding_response = dydx_client.onboarding.create_user( stark_public_key=public_x, stark_public_key_y_coordinate=public_y, ).data print("Account onboarding response:") pprint(onboarding_response) return secrets = {} # ensure that the secrets.json file contains `infura_ropsten_endpoint` attribute with open("secrets.json") as secretfile: secrets = json.load(secretfile) # ropsten check (with randomly generated ethereum address) onboard_initial_account( generate_private_key(), NETWORK_ID_ROPSTEN, API_HOST_ROPSTEN ) # # mainnet check # onboard_initial_account( # secrets["eth_private_endpoint"], # must provide a used account's private key, else get `DydxApiError(status_code=400, response={'errors': [{'msg': 'User wallet has no transactions, Ethereum or USDC'}]})` # secrets["infura_mainnet_key"], # NETWORK_ID_MAINNET, # API_HOST_MAINNET # ) 
# - priv_key1 = generate_private_key() onboard_initial_account( priv_key1, NETWORK_ID_ROPSTEN, API_HOST_ROPSTEN ) # + client1 = Client( network_id=NETWORK_ID_ROPSTEN, host=API_HOST_ROPSTEN, default_ethereum_address="0x5763265928eA73F9f2090D7E39A17E34dA1D4BB5", api_key_credentials={ 'key': '<KEY>', 'passphrase': '<KEY>', 'secret': '<KEY>' }, ) # - token_faucet_response = client1.private.request_testnet_tokens(); token_faucet_response.data # + # token_faucet_response.data # + # client1.private.get_account().data # -
authenticating.ipynb
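The `generate_private_key` helper in the notebook above is just `secrets.token_hex` with an `0x` prefix. A self-contained, stdlib-only sketch of the same idea (32 random bytes, hex-encoded, in the format expected for an Ethereum private key):

```python
import secrets

def generate_private_key() -> str:
    """Return a random 32-byte key as a 0x-prefixed hex string."""
    # token_hex(32) draws 32 cryptographically strong random bytes
    # and renders them as 64 lowercase hex characters
    return "0x" + secrets.token_hex(32)

key = generate_private_key()
print(key, len(key))  # 2 chars for "0x" plus 64 hex digits = 66
```

`secrets` (rather than `random`) is the right module here because the key is security-sensitive; `random` is not suitable for cryptographic use.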
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/r5racker/012_RahilBhensdadia/blob/main/Lab_01_NLTK_Matplotlib.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + colab={"base_uri": "https://localhost:8080/"} id="1LgotGxulzkh" outputId="ec2fcec8-faf1-4142-9372-83deeffbca96" from google.colab import drive drive.mount("/content/drive") # + id="o0PpL5rNmRWm" # importing libraries import nltk import matplotlib.pyplot as plt import pandas as pd # + colab={"base_uri": "https://localhost:8080/"} id="1UJQcWdbuqBP" outputId="1bc798d4-8f59-4d1e-fd79-65cd3cfd0027" # Raw Text Analysis random_text = """Discussing climate, sustainability, and preserving the natural world with President @EmmanuelMacron today in Paris. #BezosEarthFund #ClimatePledge""" import re import string from nltk.corpus import stopwords from nltk.stem import PorterStemmer from nltk.tokenize import TweetTokenizer remove_link_text = re.sub(r'https?:\/\/.*[\r\n]*', '', random_text) remove_link_text = re.sub(r'#', '', remove_link_text) print(remove_link_text) # + colab={"base_uri": "https://localhost:8080/"} id="xyl_0s-03TFh" outputId="7c331ddf-1621-4dc9-d03a-c622e0ed5737" print('\033[92m' + random_text) print('\033[92m' + remove_link_text) # + colab={"base_uri": "https://localhost:8080/"} id="3hKhYMiXpGFr" outputId="4bbb57d6-5b14-4168-a280-18bc4fe20321" from nltk.tokenize import sent_tokenize text="""Hello Mr. steve, how you doing? whats up? The weather is great, and city is awesome. how you doing? The sky is pinkish-blue. 
You shouldn't eat cardboard, how you doing?""" # download punkt nltk.download("punkt") tokenized_text=sent_tokenize(text) print(tokenized_text) # + colab={"base_uri": "https://localhost:8080/"} id="KOGvOYTdpVd5" outputId="27af7a4b-ef35-424f-e0ee-73bada719a02" # breaks paragraph into words from nltk.tokenize import word_tokenize tokenized_word=word_tokenize(text) print(tokenized_word) # + colab={"base_uri": "https://localhost:8080/", "height": 330} id="Jb5ACXqwpotm" outputId="74e2ba8b-5482-4322-d340-72d056cf4211" # frequency distribution from nltk.probability import FreqDist fdist = FreqDist(tokenized_word) fdist.most_common(4) # Frequency Distribution Plot import matplotlib.pyplot as plt fdist.plot(30, cumulative = False, color = "green") plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="UlxorA4tqzhD" outputId="ee7c38cb-833a-482a-848e-105ac5966534" # stop words from nltk.corpus import stopwords # download stopwords nltk.download("stopwords") stop_words = set(stopwords.words("english")) print(stop_words) # + colab={"base_uri": "https://localhost:8080/"} id="giaafJN6rxlb" outputId="8fbadd75-f7e4-448f-d6da-00ed344ef8b4" filtered_sent=[] for w in tokenized_word: if w not in stop_words: filtered_sent.append(w) print("Tokenized Sentence:",tokenized_word) print("Filtered Sentence:",filtered_sent) # + colab={"base_uri": "https://localhost:8080/"} id="aMlWpuDzsBMh" outputId="8fa12f42-84c4-4c9f-b53f-53906dd50ad0" # stemming from nltk.stem import PorterStemmer from nltk.tokenize import sent_tokenize, word_tokenize ps = PorterStemmer() stemmed_words=[] for w in filtered_sent: stemmed_words.append(ps.stem(w)) print("Filtered Sentence:",filtered_sent) print("Stemmed Sentence:",stemmed_words) # + colab={"base_uri": "https://localhost:8080/"} id="dZSb7K_WsqFU" outputId="f834d56f-ec72-4bbe-a540-d6631f3220c3" #Lexicon Normalization #performing stemming and Lemmatization from nltk.stem.wordnet import WordNetLemmatizer nltk.download('wordnet') lem = WordNetLemmatizer() 
from nltk.stem.porter import PorterStemmer stem = PorterStemmer() word = "flying" print("Lemmatized Word:",lem.lemmatize(word,"v")) print("Stemmed Word:",stem.stem(word)) # + id="fix5Qlw8suEL"
Lab1/Lab1_NLTK.ipynb
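The notebook above runs a tokenize → stopword-filter → stem pipeline on top of NLTK's downloaded resources. The same shape can be sketched without any downloads; note the stopword list and suffix rules below are toy stand-ins for illustration, not NLTK's actual data or the Porter algorithm:

```python
import re

# Toy stand-in for NLTK's English stopword list
STOP_WORDS = {"the", "is", "and", "you", "how", "a", "an"}

def tokenize(text):
    """Lowercase and split on non-letter characters."""
    return re.findall(r"[A-Za-z']+", text.lower())

def naive_stem(word):
    """Crude suffix stripping; the real Porter stemmer is far more careful."""
    for suffix in ("ing", "ly", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def pipeline(text):
    return [naive_stem(w) for w in tokenize(text) if w not in STOP_WORDS]

print(pipeline("The sky is pinkish-blue and you keep flying"))
```

This makes the data flow of the NLTK cells explicit: each stage consumes the previous stage's token list, which is why the notebook keeps `tokenized_word`, `filtered_sent`, and `stemmed_words` as separate variables.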
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np def quadratic_formula(a,b,c): base = np.sqrt(b**2 - 4.0*a*c) return sorted((-b + np.array([base,-base]))/(2*a)) print(np.roots([1.0,7.0,12.0]), quadratic_formula(1.0,7.0,12.0)) print(np.roots([-3.0,5.0,2.0]), quadratic_formula(-3.0,5.0,2.0)) print(np.roots([4.0,5.0,4.0])) print(np.roots([1.5,7.2,0.0]), quadratic_formula(1.5,7.2,0.0)) print(np.isclose(np.roots([1e-8,10.0,1e-8]), quadratic_formula(1e-8,10.0,1e-8),atol=1e-10)) a,b = np.roots([-3.0,5.0,2.0]) print("%.16g"%b) # - sorted([-4,-3])
demos/other/Untitled.ipynb
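The last `print` in the notebook above probes a=c=1e-8, b=10, where the naive formula loses all precision in the root computed as -b + sqrt(b²-4ac) (catastrophic cancellation between two nearly equal numbers). A standard remedy, sketched here: compute the larger-magnitude root without cancellation, then recover the other from the product of roots c/a:

```python
import math

def stable_quadratic(a, b, c):
    """Real roots of a*x**2 + b*x + c, avoiding cancellation."""
    disc = b * b - 4.0 * a * c
    if disc < 0:
        raise ValueError("complex roots")
    # The sqrt term is added with the same sign as b, so no cancellation
    q = -0.5 * (b + math.copysign(math.sqrt(disc), b))
    # One root from q/a, the other from the product of roots: x1*x2 = c/a
    return sorted([q / a, c / q])

print(stable_quadratic(1e-8, 10.0, 1e-8))  # roots near -1e9 and -1e-9
```

This is essentially what `np.roots` achieves through its eigenvalue formulation, which is why the `np.isclose` check in the notebook flags the naive version.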
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + active="" # https://www.algoexpert.io/questions/Binary%20Search # + def binary_search(arr, target): return binary_search_helper(arr, target, 0, len(arr) -1) def binary_search_helper(arr, target, left, right): if left > right: return -1 middle = (left + right) // 2 # the index we compare to target if arr[middle] == target: return middle elif arr[middle] < target: return binary_search_helper(arr, target, middle + 1, right) elif arr[middle] > target: return binary_search_helper(arr, target, left, middle - 1) # - print(binary_search([1,2,3], 5))
notebooks/binary-search-array.ipynb
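The recursive helper in the notebook above can equally be written iteratively (this variant is not in the notebook); it keeps the same `left`/`right`/`middle` bookkeeping but avoids Python's recursion limit on very large arrays:

```python
def binary_search_iter(arr, target):
    """Return the index of target in sorted arr, or -1 if absent."""
    left, right = 0, len(arr) - 1
    while left <= right:
        middle = (left + right) // 2  # the index we compare to target
        if arr[middle] == target:
            return middle
        elif arr[middle] < target:
            left = middle + 1   # discard the left half
        else:
            right = middle - 1  # discard the right half
    return -1

print(binary_search_iter([1, 2, 3], 5))  # not found -> -1
print(binary_search_iter([1, 2, 3], 2))  # found at index 1
```

Both versions are O(log n) in time; the iterative one is O(1) in space where the recursive one uses O(log n) stack frames.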
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # NHISS Categorization Analysis of 60 Experimentally Tested Molecules for Indocyanine Nanoparticle Formation # # Number of High Intrinsic State Substructures (NHISS) is calculated as the total number of functional groups in a molecule with fluorine (-F) and double bonded oxygen (=O). # # NHISS = fluorine + carbonyl + sulfinyl + 2 * sulfonyl + nitroso + 2 * nitro import pandas as pd import numpy as np import os import re from __future__ import print_function, division import matplotlib.pyplot as plt # %matplotlib inline # ### 1. Calculate NHISS descriptor # #### Import names of experimentally tested drugs and their mol files. # + ### Creating dataframe for the list of molecules path = "./" filename="mol_file_list_N60.txt" file = open(os.path.join(path, filename), "r") filename_list = [] for line in file: filename_list.append(line.split('\n')[0]) print(len(filename_list)) print(filename_list[:5]) # + df_molecules = pd.DataFrame(filename_list) df_molecules.columns= ["File Name"] print(df_molecules.size) df_molecules["NAME"] = None df_molecules.head() # - # #### Create SMILES strings for all molecules # This section requires using OpenEye OEChem library, version 2.0.5. 
# + from openeye import oechem, oedepict df_molecules["smiles"] = None ifs = oechem.oemolistream() ofs = oechem.oemolostream() ifs.SetFormat(oechem.OEFormat_MOL2) ofs.SetFormat(oechem.OEFormat_SMI) for i, row in enumerate(df_molecules.iterrows()): df_molecules.ix[i,"NAME"] = re.split("[.]", df_molecules.ix[i,"File Name"])[0] file_name = df_molecules.ix[i,0] mol_file_path = "./mol_files_of_60_drugs/" this_path = os.path.join(mol_file_path, file_name) mol_file = ifs.open(os.path.join(this_path)) for mol in ifs.GetOEGraphMols(): #print ("Number of atoms:", mol.NumAtoms()) #print ("Canonical isomeric SMILES:", OEMolToSmiles(mol)) df_molecules.ix[i,"smiles"] = oechem.OEMolToSmiles(mol) df_molecules.head() # - # #### Counting Substructures for NHISS descriptor # This section requires using OpenEye OEChem library, version 2.0.5. df_molecules.loc[:,"F"] = None df_molecules.loc[:,"carbonyl"] = None df_molecules.loc[:,"sulfinyl"] = None df_molecules.loc[:,"sulfonyl"] = None df_molecules.loc[:,"nitroso"] = None df_molecules.loc[:,"nitro"] = None df_molecules.head() # + #write to csv df_molecules.to_csv("df_molecules.csv", encoding='utf-8') # Run the following to populate the dataframe from terminal (runs faster): import os # %run count_carbonyls.py # %run count_fluorines.py # %run count_sulfinyls.py # %run count_sulfonyls.py # %run count_nitroso.py # %run count_nitro.py # - # #### Import experimental data and merge df_exp_data = pd.read_csv("experimental_dataset_N60.csv") df_exp_data.head() # Merge DataFrames df_molecules= pd.merge(df_molecules, df_exp_data, on=["NAME"]) print(df_molecules.size) print(df_molecules.shape) df_molecules.head() # #### Calculating NHISS (Number of High Instrinsic State Substructures) # NHISS descriptor is the total number of fluorines and double bonded oxygens in the structure. 
# $ NHISS = fluorine + carbonyl + sulfinyl + 2*sulfonyl + nitroso + 2*nitro $ df_molecules.loc[:,"NHISS"] = None for i, row in enumerate(df_molecules.iterrows()): NHISS= df_molecules.loc[i,"F"] + df_molecules.loc[i,"carbonyl"]+ df_molecules.loc[i,"sulfinyl"] + 2*df_molecules.loc[i,"sulfonyl"] + df_molecules.loc[i,"nitroso"] + 2*df_molecules.loc[i,"nitro"] df_molecules.loc[i,"NHISS"]=NHISS df_molecules.to_csv("df_molecules.csv", encoding='utf-8') df_molecules.head() # ### 2. NHISS vs NHISS Rank Plot # + df_exp_sorted = df_molecules.sort_values(by="NHISS", ascending=1).reset_index(drop=True) df_exp_sorted["NHISS rank"]=df_exp_sorted.index df_exp_yes_sorted = df_exp_sorted.loc[df_exp_sorted["Experimental INP Formation"] == "Yes"].reset_index(drop=True) df_exp_no_sorted = df_exp_sorted.loc[df_exp_sorted["Experimental INP Formation"] == "No"].reset_index(drop=True) NHISS_array_yes_sorted = df_exp_yes_sorted.ix[:,"NHISS"] NHISS_rank_array_yes_sorted = df_exp_yes_sorted.ix[:,"NHISS rank"] NHISS_array_no_sorted = df_exp_no_sorted.ix[:,"NHISS"] NHISS_rank_array_no_sorted = df_exp_no_sorted.ix[:,"NHISS rank"] plt.rcParams.update({'font.size': 12}) fig = plt.figure(1, figsize=(6,4), dpi=200) plt.scatter(NHISS_rank_array_yes_sorted, NHISS_array_yes_sorted, alpha=0.7, c="b", s=40 ) plt.scatter(NHISS_rank_array_no_sorted, NHISS_array_no_sorted, alpha=0.7, c="w", s=40) plt.xlabel("NHISS rank") plt.ylabel("NHISS") plt.xlim(-1,61) plt.ylim(-0.2,8.2) plt.savefig("NHISS_rank.png", dpi=200) plt.savefig("NHISS_rank.svg") #plt.gcf().canvas.get_supported_filetypes() # - # ### 3. 
NHISS Box Plot # + NHISS_array_yes = df_exp_yes_sorted.ix[:,"NHISS"].astype(float) NHISS_array_no = df_exp_no_sorted.ix[:,"NHISS"].astype(float) data=[NHISS_array_yes, NHISS_array_no] fig=plt.figure(1, figsize=(4,4)) ax= fig.add_subplot(111) ax.boxplot(data) ax.set_xticklabels(["INP F","INP NF"]) ax.set_ylabel("NHISS") ax.set_ylim(-0.5, 8.5) fig.savefig("NHISS_boxplot.png", dpi=200) plt.savefig("NHISS_boxplot.svg") # - from scipy import stats print(stats.ttest_ind(NHISS_array_yes, NHISS_array_no, equal_var=False)) import numpy as np, statsmodels.stats.api as sms cm = sms.CompareMeans(sms.DescrStatsW(NHISS_array_yes), sms.DescrStatsW(NHISS_array_no)) print("95% CI: ", cm.tconfint_diff(usevar='unequal')) # ### 4. NHISS Logistic Regression # + from scipy import optimize def logistic(params,x): """ Logistic function Parameters ---------- params : list or numpy array the three parameters of the logistic function First parameter is set to 1 to make the function span 0 to 1. x : numpy array the explanatory variable Return ------ numpy array the output of the logistic function """ params[0]=1 return params[0]/(1+np.exp(-x*params[1] - params[2])) def residuals(params): predicted = logistic(params,x) return np.sum((y-predicted)**2) # + df_molecules["Experimental Category"]=None for i,row in enumerate(df_molecules.iterrows()): if df_molecules.ix[i,"Experimental INP Formation"] == "Yes" : df_molecules.ix[i, "Experimental Category"] = 1 else: df_molecules.ix[i, "Experimental Category"] = 0 df_molecules.head() # + fig = plt.figure(1, figsize=(4,4)) df_sorted = df_molecules.sort_values(by="NHISS", ascending=1).reset_index(drop=True) initial_guess = [1,1,1] x=df_sorted.ix[:, "NHISS"].astype(float) y=df_sorted.ix[:, "Experimental Category"] fit = optimize.minimize(residuals, initial_guess, method='Nelder-Mead') print("The predicted parameters are ", fit.x) # Inflection point is -x_0/b threshold =(-1)*fit.x[2]/fit.x[1] print("Threshold NHISS: ", threshold) plt.scatter(x,y) predicted 
= logistic(fit.x, x) plt.plot(x, predicted,color="red") plt.xlabel('NHISS') plt.ylabel('INP formation', size=10) plt.ylim(-0.1, 1.1) plt.savefig("NHISS_logistic_fit.png", dpi=200) plt.savefig("NHISS_logistic_fit.svg") # - # ### 5. NHISS ROC Curve # + from sklearn import metrics y_actual = df_sorted["Experimental Category"] # predicted score come from logistic regression y_predicted = predicted # ROC fpr, tpr, thresholds = metrics.roc_curve(y_actual, y_predicted) roc_auc = metrics.auc(fpr, tpr) # Plotting ROC curve fig = plt.figure(1, figsize=(4,4)) plt.title('Receiver Operating Characteristic') plt.plot(fpr, tpr, 'b', label='AUC = %0.2f'% roc_auc) plt.legend(loc='lower right') plt.plot([0,1],[0,1],'r--') plt.xlim([-0.1,1.1]) plt.ylim([-0.1,1.1]) plt.ylabel('True Positive Rate') plt.xlabel('False Positive Rate') plt.savefig("NHISS_ROC.svg") # - print("TPR:", tpr) print("FPR:", fpr) print("Treshold:", thresholds) # #### Bootstrap for ROC AUC confidence intervals # + y_actual = np.array(y_actual).astype(int) y_predicted = np.array(y_predicted) print("Original ROC area: {:0.3f}".format(metrics.roc_auc_score(y_actual, y_predicted))) n_bootstraps = 1000 rng_seed = 0 # control reproducibility bootstrapped_scores = [] rng = np.random.RandomState(rng_seed) for i in range(n_bootstraps): # bootstrap by sampling with replacement on the prediction indices indices = rng.random_integers(0, len(y_predicted) - 1, len(y_predicted)) if len(np.unique(y_actual[indices])) < 2: # We need at least one positive and one negative sample for ROC AUC # to be defined: reject the sample continue score = metrics.roc_auc_score(y_actual[indices], y_predicted[indices]) bootstrapped_scores.append(score) #print("Bootstrap #{} ROC area: {:0.3f}".format(i + 1, score)) fig = plt.figure(1, figsize=(9,4)) plt.subplot(1,2,1) plt.hist(bootstrapped_scores, bins=50) plt.title('Histogram of the bootstrapped ROC AUC scores') # plt.show() sorted_scores = np.array(bootstrapped_scores) sorted_scores.sort() # 
Computing the lower and upper bound of the 95% confidence interval # 95% CI percentiles to 0.025 and 0.975 confidence_lower = sorted_scores[int(0.025 * len(sorted_scores))] confidence_upper = sorted_scores[int(0.975 * len(sorted_scores))] print("95% Confidence interval for the score: [{:0.3f} - {:0.3}]".format( confidence_lower, confidence_upper)) # Plotting ROC curve #fig = plt.figure(1, figsize=(4,4)) plt.subplot(1,2,2) plt.title('Receiver Operating Characteristic') plt.plot(fpr, tpr, 'b', label='AUC={0:0.2f} 95%CI: [{1:0.2f},{2:0.2f}]'.format(roc_auc, confidence_lower, confidence_upper)) plt.legend(loc='lower right', prop={'size':10}) plt.plot([0,1],[0,1],'r--') plt.xlim([-0.1,1.1]) plt.ylim([-0.1,1.1]) plt.ylabel('True Positive Rate') plt.xlabel('False Positive Rate') plt.savefig("NHISS_ROC_with_histogram.png", dpi=200) plt.savefig("NHISS_ROC_with_histogram.svg") # - # Plotting ROC curve fig = plt.figure(1, figsize=(4,4)) plt.title('Receiver Operating Characteristic', size=16) plt.plot(fpr, tpr, 'b', label='AUC={0:0.2f} \n95% CI: [{1:0.2f},{2:0.2f}]'.format(roc_auc, confidence_lower, confidence_upper)) plt.legend(loc='lower right', prop={'size':13}) plt.plot([0,1],[0,1],'r--') plt.xlim([-0.1,1.1]) plt.ylim([-0.1,1.1]) plt.ylabel('True Positive Rate') plt.xlabel('False Positive Rate') plt.savefig("NHISS_ROC.png", dpi=200) plt.savefig("NHISS_ROC.svg") # ### 6. NHISS Confusion Matrix # Threshold for confusion matrix was determined by inflection point of logistic regression. 
# + df_molecules["Pred Category by NHISS"]= None for i, row in enumerate(df_molecules.iterrows()): nhiss_value = float(df_molecules.ix[i, "NHISS"]) if nhiss_value < threshold: df_molecules.ix[i, "Pred Category by NHISS"] = 0 else: df_molecules.ix[i, "Pred Category by NHISS"] = 1 df_molecules.head() # + exp_NP = df_molecules.ix[:,"Experimental Category"].astype(int) pred_NP = df_molecules.ix[:, "Pred Category by NHISS"].astype(int) actual = pd.Series(exp_NP, name= "Actual") predicted = pd.Series(pred_NP, name= "Predicted") df_confusion = pd.crosstab(actual, predicted) # Accuracy = (TP+TN)/(TP+TN+FP+FN) TP = df_confusion.ix[1,1] TN = df_confusion.ix[0,0] FP = df_confusion.ix[0,1] FN = df_confusion.ix[1,0] accuracy = (TP+TN)/(TP+TN+FP+FN) print("NHISS", "\nAccuracy= {:.2f}".format(accuracy)) print("NHISS threshold= {:.2f}\n".format(threshold)) print(df_confusion)
NHISS_discrimination_performance_analysis/NHISS_classification_performance.ipynb
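The NHISS definition at the top of the notebook above is a weighted count of substructures. Written as a plain function (the parameter names are illustrative, matching the notebook's substructure columns):

```python
def nhiss(fluorine, carbonyl, sulfinyl, sulfonyl, nitroso, nitro):
    """NHISS = F + carbonyl + sulfinyl + 2*sulfonyl + nitroso + 2*nitro.

    Sulfonyl and nitro groups count twice because each carries
    two double-bonded oxygens.
    """
    return fluorine + carbonyl + sulfinyl + 2 * sulfonyl + nitroso + 2 * nitro

# e.g. a molecule with one carbonyl and one nitro group
print(nhiss(0, 1, 0, 0, 0, 1))  # -> 3
```

This is the same arithmetic the notebook performs row by row over the `df_molecules` DataFrame before the rank plot and logistic fit.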
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Image-Basic-Recognition_static # The system must identify the answer sheets to be analyzed, regardless of their position # #### [1] Accordingly, the operations for scanning the answer sheet belong to the Scanner class object # ### Identifying answer-sheet responses # + ##################### SCANNING UTILS ##################### import cv2 import numpy as np import imutils from skimage.filters import threshold_local class Scanner: def order_points(self,pts): # initialize a list of coordinates that will be ordered # such that the first entry in the list is the top-left, # the second entry is the top-right, the third is the # bottom-right, and the fourth is the bottom-left rect = np.zeros((4, 2), dtype = "float32") # the top-left point will have the smallest sum, whereas # the bottom-right point will have the largest sum s = pts.sum(axis = 1) rect[0] = pts[np.argmin(s)] rect[2] = pts[np.argmax(s)] # now, compute the difference between the points, the # top-right point will have the smallest difference, # whereas the bottom-left will have the largest difference diff = np.diff(pts, axis = 1) rect[1] = pts[np.argmin(diff)] rect[3] = pts[np.argmax(diff)] # return the ordered coordinates return rect def four_point_transform(self,image, pts): # obtain a consistent order of the points and unpack them # individually rect = self.order_points(pts) (tl, tr, br, bl) = rect # compute the width of the new image, which will be the # maximum distance between bottom-right and bottom-left # x-coordinates or the top-right and top-left x-coordinates widthA = np.sqrt(((br[0] - bl[0]) ** 2) + ((br[1] - bl[1]) ** 2)) widthB = np.sqrt(((tr[0] - tl[0]) ** 2) + ((tr[1] - tl[1]) ** 2)) maxWidth = max(int(widthA), int(widthB)) # compute the height of the new 
image, which will be the # maximum distance between the top-right and bottom-right # y-coordinates or the top-left and bottom-left y-coordinates heightA = np.sqrt(((tr[0] - br[0]) ** 2) + ((tr[1] - br[1]) ** 2)) heightB = np.sqrt(((tl[0] - bl[0]) ** 2) + ((tl[1] - bl[1]) ** 2)) maxHeight = max(int(heightA), int(heightB)) # now that we have the dimensions of the new image, construct # the set of destination points to obtain a "birds eye view", # (i.e. top-down view) of the image, again specifying points # in the top-left, top-right, bottom-right, and bottom-left # order dst = np.array([ [0, 0], [maxWidth - 1, 0], [maxWidth - 1, maxHeight - 1], [0, maxHeight - 1]], dtype = "float32") # compute the perspective transform matrix and then apply it M = cv2.getPerspectiveTransform(rect, dst) warped = cv2.warpPerspective(image, M, (maxWidth, maxHeight)) # return the warped image return warped def scanning(self,image): ratio = image.shape[0] / 500.0 orig = image.copy() image = imutils.resize(image, height = 500) # convert the image to grayscale, blur it, and find edges # in the image gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) edged = cv2.Canny(image, 100, 200) # show the original image and the edge detected image print("STEP 1: Edge Detection") cnts = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if imutils.is_cv2() else cnts[1] cnts = sorted(cnts, key = cv2.contourArea, reverse = True)[:5] screenCnt = None # loop over the contours for c in cnts: # approximate the contour peri = cv2.arcLength(c, True) approx = cv2.approxPolyDP(c, 0.02 * peri, True) # if our approximated contour has four points, then we # can assume that we have found our screen if len(approx) == 4: screenCnt = approx break # show the contour (outline) of the piece of paper print("STEP 2: Find contours of paper") if screenCnt is not None: # apply the four point transform to obtain a top-down # view of the original image warped = self.four_point_transform(orig, 
screenCnt.reshape(4, 2) * ratio) # convert the warped image to grayscale, then threshold it # to give it that 'black and white' paper effect # warped = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY) # T = threshold_local(warped, 11, offset = 10, method = "gaussian") # warped = (warped > T).astype("uint8") * 255 # show the original and scanned images print("STEP 3: Apply perspective transform") return imutils.resize(warped, height = 650) def image_processing(self): image = cv2.imread('dataset/imgs/gabarito_template_geral.png') cv2.imshow("Scanned",self.scanning(image)) cv2.waitKey(0) cv2.destroyAllWindows() def camera_processing(self): cap = cv2.VideoCapture(0) while(True): ret, frame = cap.read() frame = cv2.flip(frame,1) cv2.imshow("frame",frame) frame = cv2.flip(frame,1) scanning = self.scanning(frame) if scanning is not None: cv2.imshow("Scanned",scanning) if (cv2.waitKey(1) & 0xFF == ord('q')): break cap.release() cv2.destroyAllWindows() if __name__ == "__main__": scanner = Scanner() scanner.image_processing()
lessons/7.4.2 - Camera-Basic-Recognition_scanner_Object.ipynb
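The corner-ordering trick in the notebook's `order_points` (smallest x+y sum = top-left, largest = bottom-right; smallest y−x difference = top-right, largest = bottom-left) can be checked in isolation on a shuffled rectangle, without OpenCV:

```python
import numpy as np

def order_points(pts):
    """Order 4 corners as top-left, top-right, bottom-right, bottom-left."""
    rect = np.zeros((4, 2), dtype="float32")
    s = pts.sum(axis=1)             # x + y per point
    rect[0] = pts[np.argmin(s)]     # top-left: smallest sum
    rect[2] = pts[np.argmax(s)]     # bottom-right: largest sum
    diff = np.diff(pts, axis=1)     # y - x per point
    rect[1] = pts[np.argmin(diff)]  # top-right: smallest difference
    rect[3] = pts[np.argmax(diff)]  # bottom-left: largest difference
    return rect

# corners of a 4x3 rectangle, deliberately out of order
shuffled = np.array([[4, 3], [0, 0], [0, 3], [4, 0]], dtype="float32")
print(order_points(shuffled))
```

Getting this ordering right is what lets `four_point_transform` map the detected contour onto the destination rectangle `[0,0], [w-1,0], [w-1,h-1], [0,h-1]` without twisting the image.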
# -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # name: ir # --- # + [markdown] colab_type="text" id="view-in-github" # <a href="https://colab.research.google.com/github/ikwak2/testrepository/blob/main/R4ds_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="gstrndhz6psu" # # R for Data Science Lab 2 - Data wrangling and programming # # ## Data Wrangling (tidyr) # + colab={"base_uri": "https://localhost:8080/"} id="76FV1Zt07B11" outputId="17d5a785-5661-422b-8ced-ee7cac0aa5bf" library(tidyverse) # + [markdown] id="RNJOq6-gLQik" # ## Example datasets # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="NVHja2rs6-sx" outputId="62e189b2-81cb-4ae4-de20-c246f0e621d7" table1 # + colab={"base_uri": "https://localhost:8080/", "height": 466} id="Yc0K2hUT62I0" outputId="25268c1a-6e99-495b-935f-593c87698732" table2 # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="ruUuOM3Z6uHz" outputId="3a70ff73-543b-4003-fb68-22c0be1b1959" table3 # + colab={"base_uri": "https://localhost:8080/", "height": 189} id="g7pkx0_iKcLw" outputId="196f195b-292d-41e0-bb55-b822777dd255" table4a # N. cases # + colab={"base_uri": "https://localhost:8080/", "height": 189} id="49S9TIboKeCV" outputId="66558e4c-b2c7-43f9-c51a-fef7a543c27d" table4b # N. 
population # + [markdown] id="Ja7B9v_QLXDw" # ## pivot_longer() # + [markdown] id="_HaJOk7fMnEM" # One variable might be spread across multiple columns # + [markdown] id="CM7i75ShLcRx" # Apply pivot_longer() to table4a # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="BefFvDSOKo6I" outputId="960f86d4-e836-4a77-9da6-3d9c3f685978" table4a %>% pivot_longer(c('1999', '2000'), names_to = "year", values_to = "cases") # + [markdown] id="536HG8rsLgSu" # apply it to table4b # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="NcFsYuOTKf3b" outputId="c77f7ba8-64f9-467e-fa45-ab65b91975bd" table4b %>% pivot_longer(c('1999', '2000'), names_to = "year", values_to = "population") # + [markdown] id="Y8lndFAeLk6S" # Join two tables # + colab={"base_uri": "https://localhost:8080/", "height": 317} id="y3-IqtQCKfsR" outputId="c6657604-38ff-410d-895a-792eac9cb177" tidy4a <- table4a %>% pivot_longer(c('1999', '2000'), names_to = "year", values_to = "cases") tidy4b <- table4b %>% pivot_longer(c('1999', '2000'), names_to = "year", values_to = "population") left_join(tidy4a, tidy4b) # + [markdown] id="pFJXg7AAMIQR" # ### Q: What is right_join? 
try ?right_join and study join methods # + [markdown] id="6tJiqPzUMZXe" # ## Pivot_wider() # + [markdown] id="agZwBpgJMe3T" # One observation might be scattered across multiple rows # + colab={"base_uri": "https://localhost:8080/", "height": 466} id="ef7xVxDvKfU6" outputId="509210ad-4c1a-4408-89ad-bfa1f0a9ee01" table2 # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="dxI9LeITM3j3" outputId="e3ab5eef-cda6-483c-c12c-7622d4aa6b95" table2 %>% pivot_wider(names_from = type, values_from = count) # + [markdown] id="gyxo4AYZNEAs" # ## Separating and Uniting # + [markdown] id="ry9qAKWNNznX" # ## separate() # + [markdown] id="I783ke7ANPUh" # One column contains two variables # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="v6XtM3tBM3RB" outputId="3b6cddc8-f755-4cce-9f6d-464f1bcd639b" table3 # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="l14bCe7tNOXC" outputId="e307e696-9aa9-4db3-d652-050c942263d7" table3 %>% separate(rate, into = c("cases", "population")) # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="G0XHFNVkNrug" outputId="0f39a177-f0e7-4f0f-b8a6-8da389cf229c" table3 %>% separate(rate, into = c("cases", "population")) %>% mutate(cases = as.numeric(cases), population = as.numeric(population)) # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="WgIYyPrp1hET" outputId="10634517-70b2-4acd-f82d-fe5dd399e9b5" table3 %>% separate(rate, into = c("cases", "population"), sep="[^[:alnum:]]+") # + [markdown] id="rcwEYGt7NwG_" # ## unite() # # Single variable is spread across multiple columns # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="YUwleQUMN8sm" outputId="4db5d1cf-e334-43be-81cd-b43ae8e2182b" table5 # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="wn5qgG30N8gs" outputId="638c68e0-ad06-4baf-cb5c-c44623196752" table5 %>% unite(new, century, year, sep = "") # + [markdown] id="-nwDA6IOOHp9" # # Functional Programming # + id="Iav7G1uXODrT" df 
<- tibble( a = rnorm(10), b = rnorm(10), c = rnorm(10), d = rnorm(10) ) # + colab={"base_uri": "https://localhost:8080/", "height": 405} id="_s-sppIH5_yd" outputId="d2bf2f0d-e56f-486b-bd0d-c4f2e7325cd4" df # + colab={"base_uri": "https://localhost:8080/", "height": 84} id="7VSOl_mgODcd" outputId="88324188-806c-4ec4-defb-2e8b239415da" median(df$a) #> [1] -0.2457625 median(df$b) #> [1] -0.2873072 median(df$c) #> [1] -0.05669771 median(df$d) #> [1] 0.1442633 # + [markdown] id="h6sbfW2BOYtb" # Don't copy and paste multiple times # You can iterate using for loops # # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="8GlTKYa0ODPD" outputId="47f0c7c1-0cc7-4e37-e9ee-cf7d80a69ea2" output <- vector("double", ncol(df)) # 1. output for (i in seq_along(df)) { # 2. sequence output[[i]] <- median(df[[i]]) # 3. body } output # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="BETCL9Cn6PXw" outputId="36d9141e-62cc-4334-ad19-97b54cd85603" output <- c() output # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="AGLJM4dTOnxI" outputId="34055583-1219-4e0c-da3b-529adff704c5" output <- c() for (i in seq_along(df)) { # 2. sequence output <- c( output, median(df[[i]]) ) # 3. 
body } output # + colab={"base_uri": "https://localhost:8080/"} id="Pg0qclI4Onkm" outputId="0b07d74b-9fbc-41c0-c5a0-2990bc5f10af" install.packages("rbenchmark") library(rbenchmark) # + id="V7yYUKwmOnYF" simple1 <- function(n) { output <- vector("double", n) for(i in 1:n) output[i] = i return(output) } simple2 <- function(n) { output <- c() for(i in 1:n) output = c(output, i) return(output) } # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="XlX9cKoiOnLT" outputId="d7cd1f43-f9ec-4f7d-a99e-e91729b52415" simple1(10) # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="zqTIyo1ZPuMH" outputId="32d05424-05d4-4552-a55d-439cf4cb5aae" simple2(10) # + colab={"base_uri": "https://localhost:8080/", "height": 158} id="MVBXWP7vPuHh" outputId="a02eb10f-e9ad-49e9-950f-733521ebabbc" benchmark("fc1"=simple1(1000), "fc2"=simple2(1000), replications=100, columns=c('test', 'elapsed', 'replications')) # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="qtnRjjEl632U" outputId="da02256f-447b-4116-b765-d9f3e704d8f6" 226/6 # + [markdown] id="WrlRNlaqRY9_" # ## for_loops vs functional # # Possible to wrap up for loops in a function # # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="X5Tgr73_ROJ9" outputId="a69f222f-93cb-4ce4-8c78-c624337a43b4" col_mean <- function(df) { output <- vector("double", ncol(df)) for (i in seq_along(df)) { output[[i]] <- mean(df[[i]]) } output } col_median <- function(df) { output <- vector("double", ncol(df)) for (i in seq_along(df)) { output[[i]] <- median(df[[i]]) } output } col_sd <- function(df) { output <- vector("double", ncol(df)) for (i in seq_along(df)) { output[[i]] <- sd(df[[i]]) } output } col_mean(df) # + [markdown] id="l74rc-m5Rx1G" # You can make function as a variable # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="2Q7gR6NZRN8g" outputId="c32af3d1-5a69-4080-fe5f-8868fa5541cb" col_summary <- function(df, fun) { out <- vector("double", length(df)) for (i in 
seq_along(df)) { out[i] <- fun(df[[i]]) } out } col_summary(df, median) # + [markdown] id="LrExb-kvR4LJ" # ## The map function (purrr) # # the purrr package provides a family of functions for looping patterns over a vector # # remind apply() # + colab={"base_uri": "https://localhost:8080/"} id="Xn_9vUMJSRLU" outputId="6c0f8e05-aa0f-4c60-bef2-5811091cf7a4" str(df) # + colab={"base_uri": "https://localhost:8080/", "height": 405} id="MK_EC5iFAjBy" outputId="2d0c6978-e130-454c-c111-ac722f49312b" df # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="fH1NXXsZPuD8" outputId="92c6ecbd-7819-4514-acce-4b8bd5a9c26c" df %>% map_dbl(mean) # + colab={"base_uri": "https://localhost:8080/", "height": 179} id="i3Bg0fjZArec" outputId="fbed8587-688d-4a1d-8c0e-86d19dfda0bd" df %>% map(mean) # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="elT_8EcUSi3R" outputId="dae53d96-02c5-404c-875d-2b133c6c3dee" df %>% map_dbl(median) # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="sYkN8Kq5A9zi" outputId="e7cb3a71-ff14-4410-b953-18141bd784bc" mtcars # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="cFaUZ4KpUi-v" outputId="f8749981-9012-498f-cd2b-822a3d15c40a" mtcars %>% split(.$cyl) # + [markdown] id="n6Srx5BFUraC" # You can define a function in map # + colab={"base_uri": "https://localhost:8080/", "height": 527} id="JSni07SbBH4R" outputId="80cd149e-90f6-4661-c8dd-3624766e5855" f1 <- function(df) { lm(mpg ~ wt, data = df) } mtcars %>% split(.$cyl) %>% map(f1) # + colab={"base_uri": "https://localhost:8080/", "height": 527} id="7VjH2IZTUcbG" outputId="bdbc661a-3828-427f-c2d7-9fb20dadf1c2" mtcars %>% split(.$cyl) %>% map(function(df) lm(mpg ~ wt, data = df)) # + colab={"base_uri": "https://localhost:8080/", "height": 527} id="mVtUbq1_UcPv" outputId="c7fd33eb-9db6-4875-c00f-cb02c31a4124" f1 <- function(df) { lm(mpg ~ wt, data = df) } mtcars %>% split(.$cyl) %>% map(f1) # + colab={"base_uri": "https://localhost:8080/", 
"height": 527} id="i7gSsRAbUcEf" outputId="ee14e805-e9da-4fd3-daf7-f1c84bc80735" mtcars %>% split(.$cyl) %>% map(~lm(mpg ~ wt, data = .)) # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="5f8uZitZBn7m" outputId="12e73eb6-e02d-472a-8f8d-893e2c6ff0e1" df %>% map_dbl(mean) # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="eOn2mGY2BjHh" outputId="05bcf538-e622-4845-f7b7-b82406a48aef" df %>% map_dbl(~mean(.)) # + [markdown] id="6QObZEBrVPyB" # ## Extract Component # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="ckMPCo4CByzS" outputId="b9ce19f5-6c4f-4895-d3a8-f08f83e4ae9c" mtcars %>% split(.$cyl) %>% map(~lm(mpg ~ wt, data = .)) %>% map(summary) # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="RjhdIZugUb4h" outputId="8141d4e6-5743-4699-89f2-00d2391d1bfe" mtcars %>% split(.$cyl) %>% map(~lm(mpg ~ wt, data = .)) %>% map(summary) %>% map_dbl(~.$r.squared) # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="2o5geQ5RVUVt" outputId="8fa153cb-c525-4a65-b48a-1bfe4b2d171d" mtcars %>% split(.$cyl) %>% map(~lm(mpg ~ wt, data = .)) %>% map(summary) %>% map_dbl('r.squared') # + [markdown] id="FAOULT0-Vy_k" # You can also use an integer to select elements by position # + id="wL4SxQoxCGkT" # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="JDOtIgsyVjju" outputId="1ccc9b47-1c6f-4b2e-f249-61b65415ee07" x <- list(list(1, 2, 3), list(4, 5, 6), list(7, 8, 9)) x %>% map_dbl(2) #> [1] 2 5 8 # + colab={"base_uri": "https://localhost:8080/", "height": 196} id="HX2eRbamVjZI" outputId="51c4569c-97bf-4aec-d5d8-30ec9b1777d3" x # + id="1IdJCeWrVjQp" # + [markdown] id="2Lk6MKBCV2ru" # Functional Programming is memory efficient by not saving internal calculations. Easier to understand if you get used to it. # # Is it fater as well? 
# + id="WrOi6XEJTl3G" n_len = 10 # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="45LMBjgdChBA" outputId="9726a367-b166-4531-ce96-1eb62993da6e" rep(10,n_len) # + colab={"base_uri": "https://localhost:8080/", "height": 246} id="g-KTb1qwSzvP" outputId="93197878-adda-437f-cf93-dad038a2a973" rep(10,n_len) %>% map(rnorm) # + colab={"base_uri": "https://localhost:8080/", "height": 280} id="RcPq9l-jTfUA" outputId="369731bf-d18f-4f06-f196-83bb8cbcea24" df2 <- vector(mode = "list", length = n_len) for (i in 1:n_len) df2[[i]] <- rnorm(10) df2 # + colab={"base_uri": "https://localhost:8080/", "height": 246} id="N-UuCL4sACUc" outputId="4e2fc24c-67c0-47aa-86b8-ce488611be95" lapply(rep(10,n_len), rnorm ) # + id="y-y-UswHT3HE" gen1 <- function(n_len) { rep(10,n_len) %>% map(rnorm) } # + id="UWQUYdJBT28N" gen2 <- function(n_len) { df2 <- vector(mode = "list", length = n_len) for (i in 1:n_len) df2[[i]] <- rnorm(10) df2 } # + id="8rTxkSrwWzIt" gen3 <- function(n_len) { lapply(rep(10,n_len), rnorm ) } # + colab={"base_uri": "https://localhost:8080/", "height": 246} id="yg90aScAUEAV" outputId="e3f89274-c364-4e18-cb98-8dec88779216" gen2(10) # + colab={"base_uri": "https://localhost:8080/", "height": 189} id="iVM7W9qZUD2i" outputId="1669756f-0948-4ff9-ec34-743688543273" benchmark("fc1"=gen1(1000), "fc2"=gen2(1000), "fc3"=gen3(1000), replications=100, columns=c('test', 'elapsed', 'replications')) # + id="eLXStW40Sp_r" # + [markdown] id="4c-cFefpX1mz" # # Something useful # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="CjmJpt_QSpyd" outputId="8990498a-c096-40e4-b34e-2b06fe677396" diamonds # + colab={"base_uri": "https://localhost:8080/", "height": 437} id="XmxSvXSzPsTr" outputId="2f804ad9-de4c-4690-bc9e-5104a835b898" diamonds %>% ggplot(aes(cut, price)) + geom_boxplot() # + colab={"base_uri": "https://localhost:8080/", "height": 437} id="_TsUN1MrYS3b" outputId="cd42fec7-a84e-4bde-b3e7-dce6178c2e90" diamonds %>% ggplot(aes(cut, price)) + 
geom_violin() # + colab={"base_uri": "https://localhost:8080/", "height": 437} id="QEbFu0PeaEKx" outputId="17c10d91-c7b0-40b2-a096-8d7af3e93902" diamonds %>% ggplot(aes(cut, price)) + geom_violin() + geom_boxplot(width=0.1, color="grey", alpha=0.2) # + id="Aovvb2AeG5Ew"
R4ds_2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Chapter 2 - Phase Plane Analysis # # Phase plane analysis is a graphical method that is used with first- and second-order systems (or systems that can be approximated as such). By graphically solving the differential equations, we obtain a family of trajectories that allow us to see the motion of the system. # # ## Advantages # # - Visualization of the system # - see what happens with various initial conditions without solving the differential equations # - applies to everything from weak, smooth nonlinearities to strong or "hard" nonlinearities # - control systems can be approximated as second-order systems # # ## Disadvantages # - restricted to first- or second-order systems # ## Concepts of Phase Plane Analysis # # ### Phase Portraits # # We shall concern our study with second-order autonomous systems given by # # \begin{align*} # \dot{x}_1 &= f_1 ( x_1, x_2 ) \\ # \dot{x}_2 &= f_2 ( x_1, x_2 ) # \end{align*} # # where $\mathbf{x} (t)$ is a solution to this system with initial conditions $\mathbf{x} (0) = \mathbf{x}_0$ and is represented as a curve on the phase plane as $t$ varies over $[0, \infty)$; such a curve is called a phase plane trajectory. A family of these curves (solutions with varying initial values) is a phase portrait. # #### Example: Phase portrait of an undamped pendulum [1] # # An undamped pendulum is shown in Figure 1(a) and described by # # \begin{equation*} # \ddot{y} + \sin (y) = 0 # \end{equation*} # # With $y_1 = y$ and $y_2 = \dot{y}$, the above equation can be reduced to a system of two first-order equations # # \begin{align*} # \dot{y}_1 & = y_2 \\ # \dot{y}_2 & = -\sin (y_1) # \end{align*} # # We can use Python to solve this system numerically and plot the phase portrait. The following code will plot a vector field for the system.
# + import numpy as np import matplotlib.pyplot as plt # Define a function that describes the system dynamics def undamped_pendulum(Y, t): """This function gives the dynamics for an undamped pendulum""" y1, y2 = Y return [y2, -np.sin(y1)] y1 = np.linspace(-2.0, 8.0, 20) y2 = np.linspace(-2.0, 2.0, 20) Y1, Y2 = np.meshgrid(y1, y2) t = 0 u, v = np.zeros(Y1.shape), np.zeros(Y2.shape) NI, NJ = Y1.shape for i in range(NI): for j in range(NJ): x = Y1[i, j] y = Y2[i, j] y_prime = undamped_pendulum([x, y], t) u[i,j] = y_prime[0] v[i,j] = y_prime[1] plt.figure(figsize=(18,6), dpi=180) Q = plt.quiver(Y1, Y2, u, v, color='r') plt.xlabel('$y_1$') plt.ylabel('$y_2$') plt.xlim([-2, 8]) plt.ylim([-4, 4]) # - # Now, let's plot a few different trajectories for different initial conditions. # + from scipy.integrate import odeint for y20 in [0, 0.5, 1, 1.5, 2, 2.5]: t_span = np.linspace(0, 50, 200) y0 = [0.0, y20] ys = odeint(undamped_pendulum, y0, t_span) plt.plot(ys[:,0], ys[:,1], 'b-') # path plt.plot([ys[0,0]], [ys[0,1]], 'o') # start plt.plot([ys[-1,0]], [ys[-1,1]], 's') # end plt.xlim([-2, 8]) plt.show() # - # Let's put it all together. To make this last script self-contained we'll basically just copy-and-paste the previous two scripts and combine. 
# + import numpy as np import matplotlib.pyplot as plt from scipy.integrate import odeint # Define a function that describes the system dynamics def undamped_pendulum(Y, t): """This function gives the dynamics for an undamped pendulum""" y1, y2 = Y return [y2, -np.sin(y1)] y1 = np.linspace(-2.0, 8.0, 20) y2 = np.linspace(-2.0, 2.0, 20) Y1, Y2 = np.meshgrid(y1, y2) t = 0 u, v = np.zeros(Y1.shape), np.zeros(Y2.shape) NI, NJ = Y1.shape for i in range(NI): for j in range(NJ): x = Y1[i, j] y = Y2[i, j] y_prime = undamped_pendulum([x, y], t) u[i,j] = y_prime[0] v[i,j] = y_prime[1] plt.figure(figsize=(18,6), dpi=180) Q = plt.quiver(Y1, Y2, u, v, color='r') plt.xlabel('$y_1$') plt.ylabel('$y_2$') plt.xlim([-2, 8]) plt.ylim([-4, 4]) for y20 in [0, 0.5, 1, 1.5, 2, 2.5]: t_span = np.linspace(0, 50, 200) y0 = [0.0, y20] ys = odeint(undamped_pendulum, y0, t_span) plt.plot(ys[:,0], ys[:,1], 'b-') # path plt.plot([ys[0,0]], [ys[0,1]], 'o') # start plt.plot([ys[-1,0]], [ys[-1,1]], 's') # end plt.xlim([-2, 8]) plt.show() # - # What this shows is that for the undamped pendulum there is a singular (equilibrium) point at the coordinate $(0, 0)$, and the closed curves around it are periodic orbits: with no damping, the pendulum oscillates forever. Note that these closed orbits are not limit cycles, since they are not isolated (more on singular points, limit cycles, and stability in later notebooks). # # [1] This example is from [CMU's Kitchin Research Group](http://kitchingroup.cheme.cmu.edu/blog/2013/02/21/Phase-portraits-of-a-system-of-ODEs/).
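One more observation (an addition to the original example): the singular points can be located directly, since they are exactly the states where both derivatives vanish. For this system that means $y_2 = 0$ and $\sin(y_1) = 0$, i.e. the points $(k\pi, 0)$. A stdlib-only sketch checking this:

```python
import math

def pendulum_rhs(Y, t):
    """Same dynamics as undamped_pendulum above, using only the stdlib."""
    y1, y2 = Y
    return [y2, -math.sin(y1)]

# Singular points satisfy y2 = 0 and sin(y1) = 0, i.e. (k*pi, 0) for integer k
equilibria = [(k * math.pi, 0.0) for k in range(-1, 3)]

for eq in equilibria:
    dy1, dy2 = pendulum_rhs(eq, 0)
    assert abs(dy1) < 1e-12 and abs(dy2) < 1e-12  # the vector field vanishes here
```

In the portrait above, $(0, 0)$ and $(2\pi, 0)$ are centers, while the in-between point $(\pi, 0)$ (the inverted pendulum) is a saddle, which is why high-energy trajectories pass over it instead of circling.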
Chapter 2 - Phase Plane Analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Analysis of Fitting import tellurium as te from SBstoat.modelFitter import ModelFitter import matplotlib matplotlib.use('TkAgg') # %matplotlib inline import SBstoat SBstoat.__version__ # + model = te.loada(''' function HillTime(V, K, n, t) ((V * n * (K^n) * (t^(n-1))) / (((K^n) + (t^n))^2)) end model modular_EGFR_current_128() // Reactions SproutyFunc: -> Spry2; HillTime(V_0, K_0, n_0, t) // Species IVs Spry2 = 0; // Parameter values V_0 = 19.9059673; K_0 = 10153.3568; n_0 = 2.52290790; t := time end ''') # sim = model.simulate(0, 7200, 7201) # model.plot() # quit() fitter = ModelFitter(model, "spry2_2a.txt", ["V_0", "K_0", "n_0"], fitterMethods='differential_evolution', parameterDct={ "V_0": (10, 20, 40), "K_0": (1800, 6000, 20000), "n_0": (1, 2, 12)}) fitter.fitModel() print(fitter.reportFit()) # - fitter.plotFitAll()
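The `HillTime` rate defined in the Antimony model is the time derivative of a Hill saturation curve, $\frac{d}{dt}\left[\frac{V t^n}{K^n + t^n}\right] = \frac{V n K^n t^{n-1}}{(K^n + t^n)^2}$, so integrating the production reaction from $t = 0$ drives `Spry2` along that sigmoidal curve toward the plateau $V$. A quick numerical check of this identity in plain Python (a sketch independent of tellurium/SBstoat; the constants are the model's initial parameter values, rounded):

```python
def hill(V, K, n, t):
    """Hill saturation curve: rises sigmoidally and approaches V as t grows."""
    return V * t**n / (K**n + t**n)

def hill_time(V, K, n, t):
    """Rate form used in the model: the time derivative of hill(V, K, n, t)."""
    return (V * n * K**n * t**(n - 1)) / ((K**n + t**n) ** 2)

V, K, n = 19.9, 10153.0, 2.52   # rounded values from the model above
t, h = 5000.0, 1e-3

# Central finite difference of the Hill curve matches the closed-form rate
numeric = (hill(V, K, n, t + h) - hill(V, K, n, t - h)) / (2 * h)
assert abs(numeric - hill_time(V, K, n, t)) < 1e-8
```

This is also why fitting only `V_0`, `K_0`, and `n_0` is enough: they control the plateau, the half-rise time, and the steepness of the `Spry2` trajectory, respectively.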
bugs/mike_02192021/Analysis of Fitting.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Starting from scratch on Covid-19 data analysis import geopandas as gpd import numpy as np import pandas as pd import sys sys.path import seaborn as sns import folium import branca.colormap as cm import matplotlib.pyplot as plt from pathlib import Path projdir = Path.cwd().parent if str(projdir) not in sys.path: sys.path.append(str(projdir)) # from src.common import loadenv as const projdir fips_fn = projdir / 'data' / 'raw' / 'fips' / 'all-geocodes-v2018.xlsx' fips_fn path_str = fips_fn.as_posix() path_str # ### Read an Excel file into a pandas DataFrame. dff = pd.read_excel(path_str, skiprows=4) dff.shape dff dff.info() # ### Import county population from census bureau data cp_fn = projdir / 'data' / 'raw' / 'censusBurPop' / 'co-est2019-alldata.csv' cp_fn path_str = cp_fn.as_posix() path_str cp_df = pd.read_csv(path_str, dtype={'STATE': float}, encoding='ISO-8859-1') cp_df.dtypes cp_df cp_df.shape for col in cp_df.columns: print(col) # ### Import covid-19 stats cvd_fn = projdir / 'data' / 'raw' / 'covid-19-data' / 'nyt-covid-19-data-master-us-counties.csv' cvd_fn path_str = cvd_fn.as_posix() path_str cvd_df = pd.read_csv(path_str) cvd_df.shape cvd_df dff.dtypes cvd_df.dtypes cvd_df = pd.read_csv(path_str, parse_dates=['date'], dtype={'fips':str}) cvd_df cvd_df.shape cvd_df.dtypes # ### Import county shape files ctyshp_fp = projdir / 'data' / 'raw' / 'tl_2019_us_county' / 'tl_2019_us_county.shp' ctyshp_str = ctyshp_fp.as_posix() ctyshp_str cty_dropcols = [ 'STATEFP', # 'COUNTYFP', 'COUNTYNS', # 'GEOID', # 'NAME', 'NAMELSAD', 'LSAD', 'CLASSFP', 'MTFCC', 'CSAFP', 'CBSAFP', # 'METDIVFP', 'FUNCSTAT', 'ALAND', 'AWATER', 'INTPTLAT', 'INTPTLON' # 'geometry' ] cty_dropcols cty_gdf = gpd.read_file(ctyshp_fp) cty_gdf.shape
cty_gdf.drop(columns=cty_dropcols, inplace=True) cty_gdf.shape cty_gdf.dtypes cty_gdf cty_gdf = cty_gdf.rename(columns={'GEOID': 'fips'}) unknowns_df = cvd_df.loc[cvd_df['county'].str.contains('Unknown')].copy() cvd_df.drop(unknowns_df.index, inplace=True) cvd_df cvd_nofips_df = cvd_df.loc[cvd_df['fips'].isnull()].copy() cvd_nofips_df.shape cvd_nofips_df # ### set fips for NYC to '1', Kansas City to '2' since they are NAN in original data set cvd_df.loc[cvd_df['county'] == 'New York City', 'fips'] = '1' cvd_df.loc[(cvd_df['county'] == 'Kansas City') & (cvd_df['state'] == 'Missouri'), 'fips'] = '2' cvd_df.loc[cvd_df['county'] == 'New York City'] cvd_df.loc[(cvd_df['county'] == 'Kansas City') & (cvd_df['state'] == 'Missouri')] # ### merge corona virus county data set with county shape files comb_df = cty_gdf.merge(cvd_df, on='fips') comb_df.shape comb_df.dtypes comb_df
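Reading `fips` with `dtype={'fips':str}` matters for the merge above because county FIPS codes are fixed-width strings with leading zeros; letting pandas infer a numeric dtype silently drops those zeros and breaks the join against the shapefile's `GEOID`. A small stdlib-only illustration of the hazard (the FIPS code below is real, but the snippet itself is an added aside, not notebook code):

```python
# Autauga County, AL has FIPS '01001'. Parsed as a number it becomes 1001,
# which no longer matches the 5-character GEOID key in the county shapefile.
geoid = "01001"           # as it appears in TIGER/Line county shapefiles
as_number = int(geoid)    # 1001 -- the leading zero is lost

assert str(as_number) != geoid
# Restoring the fixed width with zfill recovers a joinable key:
assert str(as_number).zfill(5) == geoid
```

If a numeric `fips` column has already slipped into a DataFrame, something like `df['fips'].astype(int).astype(str).str.zfill(5)` is the usual repair before merging.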
notebooks/covidz-01.2020-04-21.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Developing an AI application # # Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smart phone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications. # # In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice you'd train this classifier, then export it for use in your application. We'll be using [this dataset](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html) of 102 flower categories, you can see a few examples below. # # <img src='assets/Flowers.png' width=500px> # # The project is broken down into multiple steps: # # * Load and preprocess the image dataset # * Train the image classifier on your dataset # * Use the trained classifier to predict image content # # We'll lead you through each part which you'll implement in Python. # # When you've completed this project, you'll have an application that can be trained on any set of labeled images. Here your network will be learning about flowers and end up as a command line application. But, what you do with your new skills depends on your imagination and effort in building a dataset. For example, imagine an app where you take a picture of a car, it tells you what the make and model is, then looks up information about it. Go build your own dataset and make something new. 
# # First up is importing the packages you'll need. It's good practice to keep all the imports at the beginning of your code. As you work through this notebook and find you need to import a package, make sure to add the import up here. # Imports here from collections import OrderedDict import pprint import pandas as pd import numpy as np import torch from torch import nn, optim from torchvision import datasets, models, transforms from workspace_utils import keep_awake from PIL import Image import matplotlib.pyplot as plt # %matplotlib inline # ## Load the data # # Here you'll use `torchvision` to load the data ([documentation](http://pytorch.org/docs/0.3.0/torchvision/index.html)). The data should be included alongside this notebook, otherwise you can [download it here](https://s3.amazonaws.com/content.udacity-data.com/nd089/flower_data.tar.gz). The dataset is split into three parts: training, validation, and testing. For the training set, you'll want to apply transformations such as random scaling, cropping, and flipping. This will help the network generalize, leading to better performance. You'll also need to make sure the input data is resized to 224x224 pixels as required by the pre-trained networks. # # The validation and testing sets are used to measure the model's performance on data it hasn't seen yet. For these you don't want any scaling or rotation transformations, but you'll need to resize then crop the images to the appropriate size. # # The pre-trained networks you'll use were trained on the ImageNet dataset, where each color channel was normalized separately. For all three sets you'll need to normalize the images with the means and standard deviations the network expects. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`, calculated from the ImageNet images. These values will shift each color channel to be centered at 0, with normalized values ranging from roughly -2.1 to 2.6.
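Concretely, `transforms.Normalize` applies `(value - mean) / std` per channel to pixel values that `ToTensor` has already scaled into [0, 1]. A minimal sketch of that arithmetic in plain Python (the pixel values are made up for illustration):

```python
means = [0.485, 0.456, 0.406]   # ImageNet per-channel means
stds = [0.229, 0.224, 0.225]    # ImageNet per-channel standard deviations

def normalize_pixel(rgb):
    """Apply per-channel (value - mean) / std, as transforms.Normalize does."""
    return [(v - m) / s for v, m, s in zip(rgb, means, stds)]

# A pixel sitting exactly at the channel means is mapped to zero on every channel
assert normalize_pixel([0.485, 0.456, 0.406]) == [0.0, 0.0, 0.0]
```

Channels brighter than the ImageNet average become positive and darker ones negative, which is exactly the input distribution the pre-trained convolutional layers were fit to.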
# data_dir = 'flowers' train_dir = data_dir + '/train' valid_dir = data_dir + '/valid' test_dir = data_dir + '/test' # + # DataLoader batch size batch_size = 32 # Transforms for training, validation, and testing sets train_transforms = transforms.Compose([transforms.RandomRotation(120), transforms.RandomResizedCrop(224), transforms.RandomVerticalFlip(p=0.5), transforms.ToTensor(), transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))]) # Definition of training, validation and testing datasets using ImageFolder train_datasets = datasets.ImageFolder(train_dir, transform=train_transforms) valid_datasets = datasets.ImageFolder(valid_dir, transform=test_transforms) test_datasets = datasets.ImageFolder(test_dir, transform=test_transforms) # Loading the training, validation and testing datasets as generator using DataLoader train_loader = torch.utils.data.DataLoader(train_datasets, batch_size=batch_size, shuffle=True) valid_loader = torch.utils.data.DataLoader(valid_datasets, batch_size=batch_size) test_loader = torch.utils.data.DataLoader(test_datasets, batch_size=batch_size) # - # ### Label mapping # # You'll also need to load in a mapping from category label to category name. You can find this in the file `cat_to_name.json`. It's a JSON object which you can read in with the [`json` module](https://docs.python.org/2/library/json.html). This will give you a dictionary mapping the integer encoded categories to the actual names of the flowers. # + import json with open('cat_to_name.json', 'r') as f: cat_to_name = json.load(f) print("Number of categories to be classified: {}\n".format(len(cat_to_name))) pprint.pprint(cat_to_name) # - # # Building and training the classifier # # Now that the data is ready, it's time to build and train the classifier. 
As usual, you should use one of the pretrained models from `torchvision.models` to get the image features. Build and train a new feed-forward classifier using those features. # # We're going to leave this part up to you. Refer to [the rubric](https://review.udacity.com/#!/rubrics/1663/view) for guidance on successfully completing this section. Things you'll need to do: # # * Load a [pre-trained network](http://pytorch.org/docs/master/torchvision/models.html) (If you need a starting point, the VGG networks work great and are straightforward to use) # * Define a new, untrained feed-forward network as a classifier, using ReLU activations and dropout # * Train the classifier layers using backpropagation using the pre-trained network to get the features # * Track the loss and accuracy on the validation set to determine the best hyperparameters # # We've left a cell open for you below, but use as many as you need. Our advice is to break the problem up into smaller parts you can run separately. Check that each part is doing what you expect, then move on to the next. You'll likely find that as you work through each part, you'll need to go back and modify your previous code. This is totally normal! # # When training make sure you're updating only the weights of the feed-forward network. You should be able to get the validation accuracy above 70% if you build everything right. Make sure to try different hyperparameters (learning rate, units in the classifier, epochs, etc) to find the best model. Save those hyperparameters to use as default values in the next part of the project. # # One last important tip if you're using the workspace to run your code: To avoid having your workspace disconnect during the long-running tasks in this notebook, please read in the earlier page in this lesson called Intro to # GPU Workspaces about Keeping Your Session Active. You'll want to include code from the workspace_utils.py module. 
# # **Note for Workspace users:** If your network is over 1 GB when saved as a checkpoint, there might be issues with saving backups in your workspace. Typically this happens with wide dense layers after the convolutional layers. If your saved checkpoint is larger than 1 GB (you can open a terminal and check with `ls -lh`), you should reduce the size of your hidden layers and train again. def create_model(parameters): """ Create a neural network based on vgg16 architecture with a custom classifier. The classifier will have 5 linear layers (4 hidden layers). Arguments parameters: dictionary, network parameters """ # Get pre-trained vgg16 network model = models.vgg16(pretrained=True) # Ensure that gradients of vgg16 pre-trained model are not considered in back propagation for param in model.parameters(): param.requires_grad = False # Classifier neural network layer sizes input_size = parameters.get('class_input_size', 0) hidden_sizes = parameters.get('class_hidden_sizes', 0) output_size = parameters.get('class_output_size', 0) if input_size == 0: print("ERROR: model not created. 'class_input_size' key not defined. Return None") return None if hidden_sizes == 0: print("ERROR: model not created. 'hidden_sizes' key not defined. Return None") return None if output_size == 0: print("ERROR: model not created. 'output_size' key not defined. Return None") return None # Dropout probability dropout_p = parameters.get('dropout_p', 0) if dropout_p == 0: print("WARN: Set dropout probability to default: 0.5.
'dropout_p' key not defined") dropout_p = 0.5 # Define new classifier for flower dataset # Per default gradients of new layer are enabled (requires_grad=True) flower_class = nn.Sequential(OrderedDict([ ('fc1', nn.Linear(input_size, hidden_sizes[0])), ('relu1', nn.ReLU()), ('dropout1', nn.Dropout(p=dropout_p)), ('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])), ('relu2', nn.ReLU()), ('dropout2', nn.Dropout(p=dropout_p)), ('fc3', nn.Linear(hidden_sizes[1], hidden_sizes[2])), ('relu3', nn.ReLU()), ('dropout3', nn.Dropout(p=dropout_p)), ('fc4', nn.Linear(hidden_sizes[2], hidden_sizes[3])), ('relu4', nn.ReLU()), ('dropout4', nn.Dropout(p=dropout_p)), ('fc5', nn.Linear(hidden_sizes[3], output_size)), ('out_log_softmax', nn.LogSoftmax(dim=1)) ])) model.classifier = flower_class print("Model succesfully created") return model # Dictionaries that stores the different training and validations excersire parameters parameters = {} param_version = 0 # + # Define 'cuda' as device if GPU is available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # Hyper parameters # Classifier neural network layer sizes input_size = 25088 hidden_sizes = (4096, 2048, 1024, 512) # Length of dict equals the number of flower categories output_size = len(cat_to_name) # Dropout probability dropout_p = 0.3 # Train intervals epochs = 20 # Number of train and test data records: # $ /home/workspace/ImageClassifier/flowers# find train/ -mindepth 1 -type f | wc -l # 6552 # $ find valid/ -mindepth 1 -type f | wc -l # 818 # $ find test/ -mindepth 1 -type f | wc -l # 819 loss_accuracy_batch_interval = int(6552 / batch_size / 4) learning_rate = 0.0001 # Store parameters in parameters dict param_version += 1 parameters[param_version] = {} parameters[param_version]['model'] = models.vgg16(pretrained=True) parameters[param_version]['class_input_size'] = input_size parameters[param_version]['class_hidden_sizes'] = hidden_sizes parameters[param_version]['class_output_size'] = output_size 
parameters[param_version]['batch_size_data_loader'] = batch_size parameters[param_version]['loss_function'] = 'NLLLoss' parameters[param_version]['optimizer'] = 'Adam' parameters[param_version]['learning_rate'] = learning_rate parameters[param_version]['dropout_p'] = dropout_p parameters[param_version]['epochs'] = epochs parameters[param_version]['loss_accuracy_batch_interval'] = loss_accuracy_batch_interval pprint.pprint(parameters[param_version]) # + # Create the model latest_parameters = len(parameters) model = create_model(parameters[latest_parameters]) # Usage of negative log likelihood loss (because of log softmax) criterion = nn.NLLLoss() # Use Adam optimizer for parameter optimization of classifier parameters after backpropagation optimizer = optim.Adam(model.classifier.parameters(), lr=learning_rate) # Set device for model (Use of GPU or CPU) model.to(device) print("Device: ", device) print("Model:" , model) # - def train_image_classifier(device, model, criterion, optimizer, epochs, train_loader, valid_loader, train_losses, valid_losses, valid_accuracies, loss_accuracy_batch_interval): """ Train and validate flower image classification model. 
""" # Training loop # keep_awake is provided by Udacity to ensure that anything that happens inside this loop will keep the workspace active for e in keep_awake(range(epochs)): # Enable training mode, which uses dropouts model.train() print("***************************") print("Epoch {}/{}".format(e+1, epochs)) running_training_loss = 0 batch_count = 0 for train_images, train_labels in train_loader: # Count batches used for training batch_count += 1 # Move input and label tensors to GPU (if available) or CPU train_images, train_labels = train_images.to(device), train_labels.to(device) # Ensure that the gradient is not accumulated with each iteration optimizer.zero_grad() # Forward feeding the model train_output = model(train_images) # Calculate the loss (error function) train_loss = criterion(train_output, train_labels) # Backpropagation train_loss.backward() # Update weights optimizer.step() # Accumulate loss for each trained image running_training_loss += train_loss.item() # Validate the model after each loss_accuracy_batch_interval if batch_count % loss_accuracy_batch_interval == 0: # Validate the model test_image_classifier(device, model, criterion, valid_loader, valid_losses, valid_accuracies) # Calculate loss for batch used for last training train_losses.append(running_training_loss / batch_count) print("----") print("Results after training batch: {}".format(batch_count)) print("Training loss: {}".format(train_losses[-1])) print("Validation loss: {}".format(valid_losses[-1])) print("Accuracy: {}".format(valid_accuracies[-1])) def test_image_classifier(device, model, criterion, data_loader, losses, accuracies): """ Validate/test flower image classification model based on the data loader provided. 
        Calculated losses and accuracies are appended to the respective list arguments.
    """
    # Enable validation / testing mode, which disables dropouts
    model.eval()

    # Initialize variables for calculation per batch
    running_loss = 0
    running_accuracies = 0

    # For validation / testing no new gradient calculation is required, backpropagation will not be used
    with torch.no_grad():
        # Iterate through test data per batch
        for images, labels in data_loader:
            # Move input and label tensors to GPU (if available) or CPU
            images, labels = images.to(device), labels.to(device)

            # Forward feed the model to validate the input
            output = model(images)

            # Calculate the loss (error function)
            loss = criterion(output, labels)

            # Accumulate loss for each validated batch
            running_loss += loss.item()

            # Calculate output probabilities, because log softmax is used
            ps = torch.exp(output)

            # Get the best probability value and its index
            top_p, top_k = ps.topk(1, dim=1)

            # Create true / false tensor with matches
            # (labels is viewed as a 2D tensor matching top_k's shape)
            equals = top_k == labels.view(*top_k.shape)

            # Alternative: running_accuracies += len(equals[equals == True]) / len(equals)
            running_accuracies += torch.mean(equals.type(torch.FloatTensor)).item()
        else:
            losses.append(running_loss / len(data_loader))
            accuracies.append(running_accuracies / len(data_loader))

# +
# Lists for storing the loss and accuracy results
train_losses, valid_losses, valid_accuracies = [], [], []

# Train and validate image classifier
train_image_classifier(device, model, criterion, optimizer, epochs, train_loader, valid_loader, train_losses, valid_losses, valid_accuracies, loss_accuracy_batch_interval)

# Finally, store results of this attempt in dictionary
parameters[param_version]['train_losses'] = np.array(train_losses)
parameters[param_version]['valid_losses'] = np.array(valid_losses)
parameters[param_version]['valid_accuracies'] = np.array(valid_accuracies)
# -

def plot_parameters(parameters):
    version = len(parameters)

    fig, axs = plt.subplots(2)
loss_accuracy_batch_interval = parameters[version]['loss_accuracy_batch_interval'] batch_count = loss_accuracy_batch_interval * len(parameters[version].get('train_losses', [0])) x_train_valid = np.arange(0, batch_count, loss_accuracy_batch_interval) # Losses axs[0].plot(x_train_valid, parameters[version].get('train_losses', np.zeros_like(x_train_valid)), label='Training loss') axs[0].plot(x_train_valid, parameters[version].get('valid_losses', np.zeros_like(x_train_valid)), label='Validation loss') # Accuracies axs[1].plot(x_train_valid, parameters[version].get('valid_accuracies', np.zeros_like(x_train_valid))*100, label='Validation Accuracy', c='tab:green') # Subplot title per top row axs[0].set_title('Version #{}'.format(version)) # Labels axs[0].set_ylabel('Loss') axs[1].set_ylabel('%') axs[1].set_xlabel('Batch') # Legend axs[0].legend() axs[1].legend() def print_parameters(parameters): for version in range(len(parameters)): print("***********************************************") print("Model parameters of version #{}".format(version+1)) print("***********************************************\n") state_dict_exists = False for k, v in parameters[version+1].items(): if k in ['train_losses', 'valid_losses', 'test_losses', 'valid_accuracies', 'test_accuracies']: print("{}: {} (first 5 values)".format(k, v[:5])) elif k == 'state_dict': print("'state_dict' key exists, but won't be printed") state_dict_exists = True else: print("{}: {}".format(k, v)) if not state_dict_exists: print("'state_dict' key does not exist") print("\n") print_parameters(parameters) plot_parameters(parameters) # ## Testing your network # # It's good practice to test your trained network on test data, images the network has never seen either in training or validation. This will give you a good estimate for the model's performance on completely new images. Run the test images through the network and measure the accuracy, the same way you did validation. 
You should be able to reach around 70% accuracy on the test set if the model has been trained well. # + # Initialize loss variables test_losses, test_accuracies = [], [] # Get latest parameter version latest_parameters = len(parameters) # Do validation on the test set test_image_classifier(device, model, criterion, test_loader, test_losses, test_accuracies) print("Test loss: {}".format(test_losses[-1])) print("Accuracy: {}".format(test_accuracies[-1])) # Finally, store results of this attempt in dictionary parameters[latest_parameters]['test_losses'] = np.array(test_losses) parameters[latest_parameters]['test_accuracies'] = np.array(test_accuracies) # - print_parameters(parameters) plot_parameters(parameters) # ## Save the checkpoint # # Now that your network is trained, save the model so you can load it later for making predictions. You probably want to save other things such as the mapping of classes to indices which you get from one of the image datasets: `image_datasets['train'].class_to_idx`. You can attach this to the model as an attribute which makes inference easier later on. # # ```model.class_to_idx = image_datasets['train'].class_to_idx``` # # Remember that you'll want to completely rebuild the model later so you can use it for inference. Make sure to include any information you need in the checkpoint. If you want to load the model and keep training, you'll want to save the number of epochs as well as the optimizer state, `optimizer.state_dict`. You'll likely want to use this trained model in the next part of the project, so best to save it now. def save_checkpoint(model, parameters): """ Save parameters dictionary containing model parameters and model to disk. 
""" latest_parameters = len(parameters) # Store mapping between classes and indices from dataset in model model.class_to_idx = train_datasets.class_to_idx # Store in parameters the state dict (weights and biases) and the classes/indices mapping parameters[latest_parameters]['class_to_idx'] = model.class_to_idx parameters[latest_parameters]['state_dict'] = model.state_dict() # Save model to checkpoint file torch.save(parameters, 'checkpoint_{}.pth'.format(latest_parameters)) save_checkpoint(model, parameters) # ## Loading the checkpoint # # At this point it's good to write a function that can load a checkpoint and rebuild the model. That way you can come back to this project and keep working on it without having to retrain the network. # Load the model from checkpoint file def load_model_from_checkpoint(file, device): if device == 'cuda': # Load all tensors onto GPU map_location=lambda storage, loc: storage.cuda() else: # Load all tensors onto CPU map_location=lambda storage, loc: storage parameters = torch.load(file, map_location=map_location) latest_parameters = len(parameters) model = create_model(parameters[latest_parameters]) model.class_to_idx = parameters[latest_parameters]['class_to_idx'] model.load_state_dict(parameters[latest_parameters]['state_dict'], strict=False) return model, parameters # + # Define 'cuda' as device if GPU is available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # Load model model, parameters = load_model_from_checkpoint('checkpoint_1.pth', device) # Usage of negative log likelihood loss (because of log softmax) criterion = nn.NLLLoss() # Use Adam optimizer for parameter optimization of classifier parameters after backpropagation optimizer = optim.Adam(model.classifier.parameters(), lr=parameters[len(parameters)].get('learning_rate', 0.0001)) # Set device for model (Use of GPU or CPU) model.to(device) print("Device: ", device) print("Model:" , model) # - print_parameters(parameters) plot_parameters(parameters) 
# # Inference for classification # # Now you'll write a function to use a trained network for inference. That is, you'll pass an image into the network and predict the class of the flower in the image. Write a function called `predict` that takes an image and a model, then returns the top $K$ most likely classes along with the probabilities. It should look like # # ```python # probs, classes = predict(image_path, model) # print(probs) # print(classes) # > [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339] # > ['70', '3', '45', '62', '55'] # ``` # # First you'll need to handle processing the input image such that it can be used in your network. # # ## Image Preprocessing # # You'll want to use `PIL` to load the image ([documentation](https://pillow.readthedocs.io/en/latest/reference/Image.html)). It's best to write a function that preprocesses the image so it can be used as input for the model. This function should process the images in the same manner used for training. # # First, resize the images where the shortest side is 256 pixels, keeping the aspect ratio. This can be done with the [`thumbnail`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.thumbnail) or [`resize`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.thumbnail) methods. Then you'll need to crop out the center 224x224 portion of the image. # # Color channels of images are typically encoded as integers 0-255, but the model expected floats 0-1. You'll need to convert the values. It's easiest with a Numpy array, which you can get from a PIL image like so `np_image = np.array(pil_image)`. # # As before, the network expects the images to be normalized in a specific way. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`. You'll want to subtract the means from each color channel, then divide by the standard deviation. 
#
# And finally, PyTorch expects the color channel to be the first dimension, but it's the third dimension in the PIL image and Numpy array. You can reorder dimensions using [`ndarray.transpose`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ndarray.transpose.html). The color channel needs to be first, and the order of the other two dimensions must be retained.

# Process a PIL image for use in a PyTorch model
def process_image(pil_im):
    ''' Scales, crops, and normalizes a PIL image for a PyTorch model,
        returns a Numpy array
    '''
    crop_len = 224

    # Resize so the shortest side is 256 pixels, keeping the aspect ratio
    ratio = 256 / min(pil_im.width, pil_im.height)
    pil_im = pil_im.resize((round(pil_im.width * ratio), round(pil_im.height * ratio)))

    # Crop the center 224x224 portion of the image
    left = (pil_im.width - crop_len) / 2
    top = (pil_im.height - crop_len) / 2
    right = left + crop_len
    bottom = top + crop_len
    pil_im = pil_im.crop((left, top, right, bottom))

    # Convert from PIL image to numpy array and change channel encoding from 0-255 integers to 0-1 floats
    np_im = np.array(pil_im) / 255

    # Color channel normalization
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    np_im = (np_im - mean) / std

    # The color channel is the third dimension in the PIL image and Numpy array.
    # It needs to be the first dimension; the order of the other two dimensions is retained.
    return np_im.transpose((2, 0, 1))

# To check your work, the function below converts a PyTorch tensor and displays it in the notebook. If your `process_image` function works, running the output through this function should return the original image (except for the cropped out portions).
def imshow(image, ax=None, title=None):
    """Imshow for Tensor."""
    if ax is None:
        fig, ax = plt.subplots()

    # PyTorch tensors assume the color channel is the first dimension,
    # but matplotlib assumes it is the third dimension
    image = np.array(image).transpose((1, 2, 0))

    # Undo preprocessing
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    image = std * image + mean

    # Image needs to be clipped between 0 and 1 or it looks like noise when displayed
    image = np.clip(image, 0, 1)

    # Set title
    if title:
        ax.set_title(title)

    ax.imshow(image)

    return ax

# ## Class Prediction
#
# Once you can get images in the correct format, it's time to write a function for making predictions with your model. A common practice is to predict the top 5 or so (usually called top-$K$) most probable classes. You'll want to calculate the class probabilities then find the $K$ largest values.
#
# To get the top $K$ largest values in a tensor use [`x.topk(k)`](http://pytorch.org/docs/master/torch.html#torch.topk). This method returns both the highest `k` probabilities and the indices of those probabilities corresponding to the classes. You need to convert from these indices to the actual class labels using `class_to_idx`, which hopefully you added to the model, or from an `ImageFolder` you used to load the data ([see here](#Save-the-checkpoint)). Make sure to invert the dictionary so you get a mapping from index to class as well.
#
# Again, this method should take a path to an image and a model checkpoint, then return the probabilities and classes.
#
# ```python
# probs, classes = predict(image_path, model)
# print(probs)
# print(classes)
# > [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]
# > ['70', '3', '45', '62', '55']
# ```

def predict(image_path, model, topk=5):
    ''' Predict the class (or classes) of an image using a trained deep learning model.

        Arguments:
        image_path str - Provide the path to the image file.
        torch.Tensor - Image already transformed (resize, crop, color channel normalization) provided as torch.Tensor
    '''
    # Image path is a path to an image file
    if type(image_path) == str:
        # Open image as PIL image
        with Image.open(image_path) as im:
            # Preprocess the image the same way as the training images
            im = process_image(im)

            # Convert the ndarray to a tensor
            im_t = torch.tensor(im)

            # Ensure that the image tensor is converted to FloatTensor type
            im_t = im_t.type(torch.FloatTensor)

            # Move the tensor to GPU (if available) or CPU
            im_t = im_t.to(device)
    # image_path already is a tensor
    elif torch.is_tensor(image_path):
        im_t = image_path

    # Add the batch dimension to the tensor
    im_t = torch.unsqueeze(im_t, 0)

    # Enable test mode, which disables dropouts
    model.eval()

    with torch.no_grad():
        # Forward feed the model with the image
        output = model(im_t)

    # Calculate probabilities (because the model returns log softmax)
    ps = torch.exp(output)

    # Get the best probability values and their indices
    top_p, top_k = ps.topk(topk, dim=1)

    return top_p, top_k

# ## Sanity Checking
#
# Now that you can use a trained model for predictions, check to make sure it makes sense. Even if the testing accuracy is high, it's always good to check that there aren't obvious bugs. Use `matplotlib` to plot the probabilities for the top 5 classes as a bar graph, along with the input image. It should look like this:
#
# <img src='assets/inference_example.png' width=300px>
#
# You can convert from the class integer encoding to actual flower names with the `cat_to_name.json` file (should have been loaded earlier in the notebook). To show a PyTorch tensor as an image, use the `imshow` function defined above.

def get_class_labels_top_k(top_k, class_to_idx, class_to_label):
    """ Return the text labels of the top k classifications as a list.
        If an index does not exist, None is returned as the label.
        Arguments:
        top_k, torch.Tensor, top classification index values predicted by the model
        class_to_idx, Dictionary, mapping between model classification index and numbered classification labels determined by data loader ImageFolder (see directory structure)
        class_to_label, Dictionary, mapping between numbered classification labels and text labels

        return:
        class_labels_top_k, list, text labels of the top k classifications as list
    """
    # Flatten to 1D tensor and convert to ndarray
    top_k = np.array(np.squeeze(top_k))

    # Each value in top_k is a class index provided by model prediction.
    # Get the label of the class that matches each top k index
    class_labels_top_k = [class_to_label.get(cls, None) for k in top_k for cls, idx in class_to_idx.items() if k == idx]

    # Return list
    return class_labels_top_k

# +
# Display an image along with the top 5 classes

# Image file paths and correct classifications
images = ["flowers/test/1/image_06743.jpg", "flowers/test/59/image_05052.jpg"]
labels = [cat_to_name.get("1", None), cat_to_name.get("59", None)]
img_range = range(len(images))

# Alternative: Get images from DataLoader
#images, labels_idx = next(iter(test_loader))
#img_range = range(parameters[len(parameters)].get('batch_size_data_loader', 1))
#labels = get_class_labels_top_k(labels_idx, model.class_to_idx, cat_to_name)

# Loop over all images
for i in img_range:
    image_path = images[i]
    title = labels[i]

    # Get probability and class
    top_p, top_k = predict(image_path, model, topk=5)

    # Get text labels of the top k classifications as list
    y = get_class_labels_top_k(top_k, model.class_to_idx, cat_to_name)

    # Flatten to 1D tensor and calculate percentage
    x = np.squeeze(top_p) * 100

    # Image path is a path to an image file
    if type(image_path) == str:
        # Open image as PIL image
        with Image.open(image_path) as im:
            # Preprocess the image the same way as the training images
            im = process_image(im)
    # image_path already is a tensor
    elif torch.is_tensor(image_path):
        im = image_path

    # Create a subplot with 2 columns
    fig, ax = plt.subplots(figsize=(6,3), ncols=2)

    # Display the image in the first column
    imshow(im, ax=ax[0], title=title)

    # Display the top_k bar plot in the second column
    y_pos = np.arange(len(y))
    ax[1].barh(y_pos, x)
    ax[1].set_yticks(y_pos)
    ax[1].set_yticklabels(y)
    ax[1].set_xlabel("Probability in %")

    plt.tight_layout()
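As a side note, the index-to-label lookup in `get_class_labels_top_k` above can be done more directly by inverting `class_to_idx` once, as the earlier markdown suggests. A minimal sketch follows; the mapping values are illustrative, not taken from the dataset.

```python
# Hypothetical class_to_idx mapping, shaped like the one produced by torchvision's ImageFolder
class_to_idx = {"1": 0, "10": 1, "100": 2, "101": 3}

# Invert once: model output index -> class folder name
idx_to_class = {idx: cls for cls, idx in class_to_idx.items()}

# Predicted top-k indices (e.g. flattened from ps.topk) then map straight to labels
top_k_indices = [2, 0, 3]
labels = [idx_to_class[i] for i in top_k_indices]
print(labels)  # → ['100', '1', '101']
```

This avoids scanning the whole dictionary once per predicted index.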
Image Classifier Project.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .jl
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Julia 1.6.2
#     language: julia
#     name: julia-1.6
# ---

# # SIF TO Julia
#
# **Goal:** Obtain the CUTEst.jl test problems in native Julia so that the Hessian can be computed via an AD computation of the gradient of some objective function.
# - The CUTE test set: https://www.cuter.rl.ac.uk/Problems/mastsif.shtml
#
# The aforementioned link downloads a tarball containing the SIF-encoded test problems: https://bitbucket.org/optrove/sif/get/99c5b38e7d03.tar.gz
#
# During the CUTEst.jl build (i.e. in the Pkg.add("CUTEst") step), the Bitbucket link gets untarred as a .julia artifact.
# The entire installation utilizes the work of https://github.com/optimizers/homebrew-cutest for setting up the needed environment. **Thus, CUTEst.jl will only work on *nix machines.**
#
# ### Setup to have CUTEst.jl interface with CUTEst C:
#
# The MASTSIF environment variable must be exported in my _./zshrc_:
# export MASTSIF="/Users/daniel/.julia/artifacts/a7ea0d0aaf29a39ca0fe75588fc077cdd5b5ed54/optrove-sif-99c5b38e7d03"
#
# Similarly, the environment variable pointing to your architecture is appended to my _./zshrc_:
# export MYARCH="/Users/daniel/.julia/artifacts/a5c5506e4bfa601362b9aeb09bb775994e3e65c4/libexec/CUTEst-2.0.3/versions"
#
# There are three other environment variables that must be exported, namely SIFDECODE, CUTEST, and ARCHDEFS. The straightforward procedure is highlighted here: https://github.com/ralna/CUTEst/blob/master/doc/README#L62.
# Furthermore, you can set up environment variables to locate the manpages, but this isn't mandatory and they can be viewed in the ralna/CUTEst repository.
#
# ***NOTE*** The long shasum directories in my export statements correspond to my installation of CUTEst.jl; the easiest way to determine yours is a command-line tool such as locate or find.
# # Other Optimization Collections with Native Julia Code
#
# It appears the man behind the JuliaSmoothOptimizers organization (who is a regular presenter at JuliaCon and whose work is well accepted) also recognizes the downside of CUTEst.jl. His attempt to overcome the black-box nature of CUTEst.jl is unfinished and part of the https://github.com/JuliaSmoothOptimizers/OptimizationProblems.jl package. The majority of the problems are small-dimensional and constrained; however, there are some useful ones in there. That said, there have been issues reported with errors in the translation. It appears a lot of students did this for an assignment.
#
# Another repository worth mentioning, which contains a lot of the CUTE family problems, is found here: https://github.com/mpf/Optimization-Test-Problems.
# This is in the AMPL framework, but these can be decoded to native Julia in a much more economical fashion.
# - TODO: see if there exist tools for the conversion, specifically, something better than this: https://github.com/jump-dev/AmplNLWriter.jl/issues

# # NLPModel to ADNLPModel
#
# You can define an ADNLPModel, an NLPModel that extends ForwardDiff.jl. To declare an ADNLPModel, you must specify an objective function and an initial iterate. I attempt to specify the objective as a black-box CUTEst.jl instance... this surely doesn't work.

# +
using CUTEst, NLPModels
using ForwardDiff, ADNLPModels

nlp = CUTEstModel("ROSENBR")

f = (z) -> obj(nlp, z)
nlpAD = ADNLPModel(f, [-1.2; 1.0]) # things seem okay

# fx = obj(nlpAD, nlpAD.meta.x0) # ... and this breaks it.

finalize(nlp); # always finalize before moving to another CUTEst problem
# -

# # Direct Homebrew Installation
#
# When you decode an SIF file, there are $3$ (sometimes 4) Fortran files that are created and one "*.d" file. This likely will not yield an easy conversion to Julia, but it is worth a try. My attempts to set up CUTEst.jl to interface with Julia did not work when I tried to decode the SIF files.
Maybe the Homebrew CUTEst installation will allow decoding the SIF files and an attempt at conversion.
Unconstrained-NonLinear-TestSuite-Notes.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Rec systems # language: python # name: mark # --- # + colab={} colab_type="code" id="-rZvPT6cX1eE" import pickle from collections import defaultdict import pandas as pd import numpy as np from surprise import accuracy from surprise.model_selection import train_test_split from surprise import KNNBasic from surprise.model_selection import KFold from surprise import Dataset from surprise import Reader # + reader = Reader(sep=',', rating_scale=(1,10)) data = Dataset.load_from_file('../Data/ratings_compressed.csv', reader=reader) # + colab={} colab_type="code" id="BSCC7J3eYDGg" train, test = train_test_split(data, test_size=0.3, random_state=42) options = {'name':'cosine'} knnModel = KNNBasic(sim_options=options) # + colab={"base_uri": "https://localhost:8080/", "height": 50} colab_type="code" id="18ByWvlpYERf" outputId="0e766a38-8c0d-46ec-c617-441d262e654e" knnModel.fit(train) predictions = knnModel.test(test) # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="ZYfK4B9kkNOZ" outputId="7fb98f93-1f57-46bc-e1eb-066d2cdcfe83" n = 10 top_n = defaultdict(list) for uid, iid, true_r, est, _ in predictions: top_n[uid].append((iid, est)) for uid, user_ratings in top_n.items(): user_ratings.sort(key=lambda x: x[1], reverse=True) top_n[uid] = user_ratings[:n] top_n # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="mcJPFAhZYFoR" outputId="289f9ffe-e085-4962-f170-92d518d941c4" user_id = 1 item_id = 1 prediction = knnModel.predict(user_id, item_id) # + test2 = pd.read_csv('../Data/test.csv', names=['user_id', 'profile_id', 'rating']) test2['user_id'] = test2['user_id'].astype(str) test2['profile_id'] = test2['profile_id'].astype(str) tuples = [tuple(x) for x in test2.to_numpy()] # - predictions = knnModel.test(tuples) # Then compute RMSE 
accuracy.rmse(predictions) f = open('model/KNN.pickle', 'wb') pickle.dump(knnModel, f) f.close() f = open('model/KNN.pickle','rb') loaded_model = pickle.load(f) f.close() predictions = loaded_model.test(tuples) # Then compute RMSE accuracy.rmse(predictions)
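RMSE alone says little about the quality of the top-N lists built above. As a hedged sketch (not part of the original notebook), precision@k and recall@k can be computed directly from surprise-style prediction tuples `(uid, iid, true_r, est, details)`; the rating threshold of 7.0 and `k` are illustrative choices for this 1-10 scale.

```python
from collections import defaultdict

def precision_recall_at_k(predictions, k=10, threshold=7.0):
    """Mean-per-user precision@k and recall@k over surprise-style predictions."""
    user_est_true = defaultdict(list)
    for uid, _, true_r, est, _ in predictions:
        user_est_true[uid].append((est, true_r))

    precisions, recalls = {}, {}
    for uid, ratings in user_est_true.items():
        # Rank this user's items by estimated rating, best first
        ratings.sort(key=lambda x: x[0], reverse=True)
        n_rel = sum(true_r >= threshold for _, true_r in ratings)
        n_rec_k = sum(est >= threshold for est, _ in ratings[:k])
        n_rel_and_rec_k = sum((true_r >= threshold and est >= threshold)
                              for est, true_r in ratings[:k])
        precisions[uid] = n_rel_and_rec_k / n_rec_k if n_rec_k else 0
        recalls[uid] = n_rel_and_rec_k / n_rel if n_rel else 0
    return precisions, recalls

# Toy predictions for one user: one true positive, one false positive, one miss
toy = [("u1", "i1", 9.0, 8.5, {}),
       ("u1", "i2", 3.0, 8.0, {}),
       ("u1", "i3", 8.0, 4.0, {})]
prec, rec = precision_recall_at_k(toy, k=2, threshold=7.0)
print(prec["u1"], rec["u1"])  # → 0.5 0.5
```

In the notebook above this could be applied to the `predictions` list returned by `knnModel.test(...)`.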
kNN/kNN.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-f6a612f9e4e3134f", "locked": true, "schema_version": 3, "solution": false, "task": false}
# # SLU00 - Jupyter Notebook: Exercise notebook
#
# In this notebook you'll practice the concepts learned in the Learning notebook. Good luck!

# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-a9b1f2d28b3f41f4", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ## Shortcuts
# Run the cell below using two different shortcuts: first, use the shortcut that just runs the cell. Next, use the shortcut that runs the cell and inserts a cell below.
# -

print("I'm a Jupyter Notebook master now")

# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-0d60f830c3fb93ab", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Use a shortcut to merge the 3 cells below this one:
# -

print("Please reunite me with the cell below")

print("I belong to everyone")

print("Please reunite me with the cell above")

# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-91dcdbb6f909dd8c", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ## Notebook Kernel and memory
# The following cells are meant to illustrate the problem of Jupyter keeping information in memory even when we delete the cells that were originally run with that information.
# Run the 4 cells below. Delete the cell with the statement `b = 0.7` by using command mode and a shortcut. Run the 3 remaining cells again. Create a cell under the cell with the statement `print(a, b, c)` using command mode and a shortcut. Print just the variable b.
# Now, run all the cells again, but restart the notebook first. Check that you now get an error when you try to print the variable b. After that, delete the print cell and go on to the next exercise: Markdown cells.
# -

a = 7

b = 0.7

c = 0.07

print(a, b, c)

# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-d93866f07727826d", "locked": true, "schema_version": 3, "solution": false, "task": false}
# ## Markdown cells
# Create a markdown list in the markdown cell below, with 3 items: mathematics, physics and chemistry.
# -

#

# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-8c5ff77d1fce74a8", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Create, in the markdown cell below, a quote using markdown notation (tip: Google is your best friend)
# -

#

# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-b68abc102337c17e", "locked": true, "schema_version": 3, "solution": false, "task": false}
# Change the cell below to markdown and create a header that says: "I've finished my first exercise notebook!"
# -

#

# + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-d16c0232d767a484", "locked": true, "schema_version": 3, "solution": false, "task": false}
# # Submit your work!
#
# To submit your work, [get your slack id](https://moshfeu.medium.com/how-to-find-my-member-id-in-slack-workspace-d4bba942e38c) and assign your slack id to a `slack_id` variable in the cell below.
# Example:
#
# ```python
# slack_id = "UTS63FC02"
# ```

# + deletable=false nbgrader={"grade": false, "grade_id": "cell-db86bff08f67584f", "locked": false, "schema_version": 3, "solution": true, "task": false}
# YOUR CODE HERE
raise NotImplementedError()
# slack_id =

# + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-8ea0aa7dc9e60a19", "locked": true, "points": 0, "schema_version": 3, "solution": false, "task": false}
from submit import grade_submit

assert slack_id is not None
#grade_submit(notebook_name='Exercise notebook.ipynb', learning_unit=0, exercise_notebook=1, slackid='migueldias')
# -
Week 00/SLU00 - Jupyter Notebook/Exercise notebook.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
# %matplotlib inline
import sys
sys.path.append("../")

import numpy as np
import cv2
import matplotlib.pyplot as plt
import glob

# +
class_mapper = {
    'Normal': 'Normal',
    'Benign': 'Normal',
    'ASCUS': 'Low-Risk',
    'LSIL': 'High-Risk',
    'HSIL': 'High-Risk',
    'Carcinoma': 'High-Risk',
}

labels_info_path = "../data/labels_info.npy"
partition_path = "../data/partition.npy"

labels_info = np.load(labels_info_path, allow_pickle=True, encoding='latin1').item()
partition = np.load(partition_path, allow_pickle=True, encoding='latin1').item()

# +
# Check whether any single image contains multiple classes.
one_class_IDs = []
multi_class_IDs = []
for k, v in labels_info.items():
    cnames = [class_mapper.get(label[0]) for label in v]
    uniq_cnames = np.unique(cnames)
    if len(uniq_cnames) > 1:
        multi_class_IDs.append(k)
    else:
        one_class_IDs.append(k)

print("Num of Single Class Image: {}".format(len(one_class_IDs)))
print("Num of Multiple Class Image: {}".format(len(multi_class_IDs)))
# -

plt.figure(figsize=(10, 20))
plt.pie(
    x=[len(one_class_IDs), len(multi_class_IDs)],
    labels=['Single', "Multiple"],
    autopct='%1.1f%%',
    shadow=True
);

# +
# Visualize the class distribution.
normal_IDs = [] low_risk_IDs = [] high_risk_IDs = [] for ID in one_class_IDs: v = labels_info.get(ID) cnames = [class_mapper.get(label[0]) for label in v] if "Normal" in cnames: normal_IDs.append(ID) elif "Low-Risk" in cnames: low_risk_IDs.append(ID) else: high_risk_IDs.append(ID) print("Num of [Normal] Image: {}".format(len(normal_IDs))) print("Num of [Low-Risk] Image: {}".format(len(low_risk_IDs))) print("Num of [High-Risk] Image: {}".format(len(high_risk_IDs))) # - plt.figure(figsize=(10, 20)) plt.pie( x=[len(normal_IDs), len(low_risk_IDs), len(high_risk_IDs)], labels=['Normal', "Low-Risk", "High-Risk"], autopct='%1.1f%%', shadow=True );
examples/0. Dataset Label Review.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 演習5 - VQE(変分量子固有値ソルバー) # *** # # ## 歴史的背景 # # # 過去10年間で、量子コンピューターは急速に成熟し、量子的な手法で自然の法則をシミュレートできるコンピューティングシステムというファインマンの夢を実現し始めました。2014年の論文において、最初に、アルベルト・ペルッゾが **変分量子固有値ソルバー(VQE)** を発表しました。分子の基底状態エネルギー(最小エネルギー)をこれまでの手法より短い回路で見つけるアルゴリズムです。[1] そして、2017年に、IBMの量子チームがVQEアルゴリズムを使って水素化リチウム分子の基底状態をシミュレートしました。[2] # # VQEのマジックは、問題の計算ワークロードのうちの一部を古典コンピューターにアウトソースすることです。アルゴリズムは、まず試行状態(ansatz、ベストな推測)と呼ばれるパラメーター化された量子回路から始め、古典オプティマイザーを使ってこの回路の最適なパラメーターを探します。VQEが古典アルゴリズムより優っている点は、量子回路が問題の厳密な波動関数を表現し保存できるということです。これは古典コンピューターでは指数関数的に難しい問題です。 # # この演習5では、分子の基底状態と基底エネルギーを決定するために、変分量子固有値ソルバーを設定することで、ファインマンの夢をみなさんに実現してもらいます。この問題は、基底状態は、様々な分子の特性を計算するために使われるので興味深いことです。例えば、原子核における厳密な力は分子動力学シミュレーションで化学システムにおいて何が起こっているのか時間変化を伴って探究することができます。[3] # # # ### 参考文献 # # 1. <NAME>, et al. "A variational eigenvalue solver on a photonic quantum processor." Nature communications 5.1 (2014): 1-7. # 2. <NAME>, et al. "Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets." Nature 549.7671 (2017): 242-246. # 3. Sokolov, <NAME>., et al. "Microcanonical and finite-temperature ab initio molecular dynamics simulations on quantum computers." Physical Review Research 3.1 (2021): 013125. # # # ## はじめに # # VQEの実装において、特に試行状態(ansatz)の量子回路に注目して、どのようにシミュレーションを実装するか、その方法を選びます。 # ノイズのある量子コンピューターにおいてVQEを実行する際に重要な課題の一つは、忠実度(fidelity)をロスしないように、最もコンパクトな量子回路を基底状態として選ぶことです。この問題は、この課題に基づいて作られています。 # この課題は、実際には、精度を損わずに、2量子ビットゲート(例:CNOT)の数と変分パラメーターの数を削減することを意味します。 # # <div class="alert alert-block alert-success"> # # <b>目標</b> # # 与えられた問題における基底状態を正確に表現する最も短い試行状態(ansatz)を見つけてください。創造的に! # # # <b>計画</b> # # はじめに、小さな分子を使ってVQEシミュレーションの構築の仕方を学びます。その後、より大きな分子の場合について学んだことを適用します。 # # **1. 
チュートリアル - H$_2$に対するVQE:** VQEに慣れるために、statevectorシミュレーターで実行して、試行状態(ansatz)と古典オプティマイザーのベストな組み合わせを選んでください。 # # # **2. チャレンジ - LiHに対するVQE:** 1のチュートリアルと同じような検討をしますが、statevectorシミュレーターのみに制限します。Qiskitに用意されている量子ビット数を削減するスキームを使って、このより大きな系に対して最適な回路を探してください。回路を最適化し、想像力を使って、パラメーター化された回路のベストなビルティングブロックを選ぶ方法を探してください。そして、Qiskitにすでにある基底状態のための試行回路よりもコンパクトで、最もコンパクトな試行状態回路を構築してください。 # # # </div> # # # <div class="alert alert-block alert-danger"> # # 以下はVQEシミュレーションの理論の紹介です。VQEの実行の前に全てを理解する必要はありません。怖がらないで! # # </div> # # # # *** # # ## 理論 # # # 下図に量子コンピューター上でVQEを使って分子シミュレーションを行う一般的なワークフローを示します。 # # <img src="resources/workflow.png" width=800 height= 1400/> # # 量子-古典のハイブリッド手法のアイディアのコアは、 **CPU(古典プロセッシング・ユニット)** と **QPU(量子プロセッシング・ユニット)** にそれぞれベストな計算ができる部分をアウトソースすることです。CPUは、エネルギー計算のために測定する必要のある項目をリストすること、また回路のパラメーターを最適化することを担当します。QPUは、システムの量子状態を表現する量子回路を実装し、エネルギーを測定します。より詳細には以下のようになります: # # **CPU** は、電子のホッピングと相互作用(ハートリー・フォック計算による1体/ 2体積分)に関連したエネルギーを効率的に計算することができます。このエネルギーは、全エネルギーを表す演算子であるハミルトニアンとして表されます。[ハートリー・フォック (HF) 法](https://en.wikipedia.org/wiki/Hartree%E2%80%93Fock_method#:~:text=In%20computational%20physics%20and%20chemistry,system%20in%20a%20stationary%20state.) は、波動関数を1つのスレーター行列式によって表すことができると仮定することで、効率的に基底状態の波動関数の近似計算を行います。(例:4スピン軌道と4量子ビットのSTO-3G基底におけるH$_2$分子において、最もエネルギーの低いスピン軌道を電子が占有している場合、$|\Psi_{HF} \rangle = |0101 \rangle$ です。) # QPUが後でVQEで行うことは、欠落している電子相関に関連する他の状態を表すこともできる(回路とそのパラメーターに関連した)量子状態を見つけることです。(例:$|\Psi \rangle$における$\sum_i c_i |i\rangle$は、$c_{HF}|\Psi_{HF} \rangle + \sum_i c_i |i\rangle $に等しい。ここで$i$はビット列です。) # # HF計算の後、ハミルトニアンにおける演算子は、フェルミオン-量子ビット変換を使ってQPUにおける測定量にマップされます。(後述のハミルトニアンの章を参照してください。) # 量子ビット数の削減や試行回路(ansatz)を短くするためにシステムの特性を更に分析することができます: # # - Z2対称性と2量子ビット削減のためには、こちらをご覧ください:[Bravyi *et al*, 2017](https://arxiv.org/abs/1701.08213v1) # - エンタングルメントを作るためには、こちらをご覧ください: [Eddins *et al.*, 2021](https://arxiv.org/abs/2104.10220v1) # - 試行回路(ansatz)の適応のためには、こちらをご覧ください: [Tang *et al.*,2019](https://arxiv.org/abs/1911.10205). 
You may find ways to shorten your quantum circuit by exploiting the ideas in these papers.
#
# The **QPU** implements a quantum circuit parameterized by angles $\vec\theta$ (see the ansatz section below), which represents the ground-state wavefunction using various single-qubit rotations and entanglers (e.g. two-qubit gates). The quantum advantage rests on the QPU's ability to efficiently represent and store the exact wavefunction; on classical computers, the wavefunction becomes intractable for systems of more than a few atoms. Finally, the QPU measures the selected operators (e.g. those representing the Hamiltonian).
#
# Below we look in a bit more mathematical detail at each component of the VQE algorithm. The [VQE episode video](https://www.youtube.com/watch?v=Z-A6G0WVI9w) may also be helpful.
#
# ### Hamiltonian
#
# Here we explain how to obtain the operators that need to be measured for a given system.
# They are contained in the molecular Hamiltonian, defined as:
#
# $$
# \begin{aligned}
# \hat{H} &=\sum_{r s} h_{r s} \hat{a}_{r}^{\dagger} \hat{a}_{s} \\
# &+\frac{1}{2} \sum_{p q r s} g_{p q r s} \hat{a}_{p}^{\dagger} \hat{a}_{q}^{\dagger} \hat{a}_{r} \hat{a}_{s}+E_{N N}
# \end{aligned}
# $$
# with
# $$
# h_{p q}=\int \phi_{p}^{*}(r)\left(-\frac{1}{2} \nabla^{2}-\sum_{I} \frac{Z_{I}}{R_{I}-r}\right) \phi_{q}(r)
# $$
# $$
# g_{p q r s}=\int \frac{\phi_{p}^{*}\left(r_{1}\right) \phi_{q}^{*}\left(r_{2}\right) \phi_{r}\left(r_{2}\right) \phi_{s}\left(r_{1}\right)}{\left|r_{1}-r_{2}\right|}
# $$
#
# where $h_{r s}$ and $g_{p q r s}$ are the one- and two-body electron integrals (obtained with the Hartree-Fock method) and $E_{N N}$ is the nuclear repulsion energy.
# The one-body integrals represent the kinetic energy of the electrons and their interaction with the nuclei.
# The two-body integrals represent the electron-electron interaction.
# The operators $\hat{a}_{r}^{\dagger}, \hat{a}_{r}$ represent the creation and annihilation of an electron in spin-orbital $r$, and they need to be mapped to operators that can be measured on a quantum computer. Note that since VQE minimizes the electronic energy, you need to retrieve the nuclear repulsion energy $E_{NN}$ and add it back to obtain the total energy.
#
# So, for every non-zero element of the tensors $h_{r s}$ and $g_{p q r s}$, we can construct a Pauli string (a tensor product of Pauli operators) according to a fermion-to-qubit transformation. For example, with the Jordan-Wigner transformation for orbital $r = 3$, we obtain the Pauli string:
#
# $$
# \hat a_{3}^{\dagger}= \hat \sigma_z \otimes \hat \sigma_z \otimes\left(\frac{ \hat \sigma_x-i \hat \sigma_y}{2}\right) \otimes 1 \otimes \cdots \otimes 1
# $$
#
# where $\hat \sigma_x, \hat \sigma_y, \hat \sigma_z$ are the well-known Pauli operators. The tensor products of $\hat \sigma_z$ operators are placed to enforce the fermionic anti-commutation relations.
# The Jordan-Wigner transformation between the 14 spin-orbitals of the water molecule and 14 qubits is given below:
#
# <img src="resources/mapping.png" width=600 height= 1200/>
#
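# The structure of a Jordan-Wigner-mapped creation operator can be written down directly as Pauli-string labels. The sketch below is a plain-Python illustration (the helper name is ours, not a Qiskit API): it returns the two terms of $\hat a_{r}^{\dagger} = \hat \sigma_z^{\otimes r} \otimes (\hat \sigma_x - i \hat \sigma_y)/2 \otimes 1^{\otimes (n-r-1)}$ as (coefficient, label) pairs.

```python
# Jordan-Wigner image of a creation operator a_r^dagger on n qubits,
# written as two weighted Pauli-string labels:
#   Z^(⊗r) ⊗ (X - iY)/2 ⊗ I^(⊗(n-r-1)).
# The r leading Z factors enforce the fermionic anti-commutation relations.
def jw_creation_terms(r, n):
    prefix = "Z" * r            # parity (Jordan-Wigner) string
    suffix = "I" * (n - r - 1)  # identities on the remaining qubits
    return [(0.5, prefix + "X" + suffix),
            (-0.5j, prefix + "Y" + suffix)]

# Orbital r = 3 on 6 qubits, as in the example above: ZZZ (X - iY)/2 II
print(jw_creation_terms(3, 6))
```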
We therefore replace the one- and two-body excitations in the Hamiltonian (e.g. $\hat{a}_{r}^{\dagger} \hat{a}_{s}$, $\hat{a}_{p}^{\dagger} \hat{a}_{q}^{\dagger} \hat{a}_{r} \hat{a}_{s}$) with the corresponding Pauli strings (i.e. the $\hat{P}_i$ in the figure above).
# The result is a set of operators measurable by the QPU.
# For more information, see [Seeley *et al.*, 2012](https://arxiv.org/abs/1208.5986v1).
#
# ### Ansatzes
#
# There are mainly two kinds of ansatzes you can use for chemistry problems.
#
# - The **quantum unitary coupled cluster (q-UCC) ansatzes** are physically inspired and loosely map electronic excitations onto the quantum circuit. The quantum unitary coupled cluster singles and doubles (q-UCCSD) ansatz (`UCCSD` in Qiskit) includes all possible single and double electronic excitations. The pair-double q-pUCCD (`PUCCD`) and singlet q-UCCD0 (`SUCCD`) ansatzes include only part of these excitations (and therefore give much shorter circuits) and have been shown to provide good results for dissociation profiles. For instance, as shown in the figure below, q-pUCCD has no single excitations and its double excitations are paired.
#
# - The **heuristic ansatz (TwoLocal)** was invented to shorten the circuit depth while still being able to represent the ground state.
# As shown in the figure below, the R gates are parameterized single-qubit rotations and $U_{CNOT}$ is an entangler built from two-qubit gates. The idea is to reach the ground state after repeating this same block (with independent parameters) a certain number of times $D$.
#
# For more detailed explanations, see [Sokolov *et al.* (q-UCC ansatzes)](https://arxiv.org/abs/1911.10864v2) and [Barkoutsos *et al.*](https://arxiv.org/pdf/1805.04340.pdf).
#
# <img src="resources/ansatz.png" width=700 height= 1200/>
#
# ### VQE
#
# Given a Hamiltonian operator $\hat H$ with an unknown minimum eigenvalue $E_{min}$ associated with the eigenvector $|\psi_{min}\rangle$, VQE provides an estimate $E_{\theta}$ bounded from below by $E_{min}$:
#
# \begin{align*}
# E_{min} \le E_{\theta} \equiv \langle \psi(\theta) |\hat H|\psi(\theta) \rangle
# \end{align*}
#
# where $|\psi(\theta)\rangle$ is the trial state associated with $E_{\theta}$.
# By applying a parameterized circuit, represented by $U(\theta)$, to an arbitrary initial state $|\psi\rangle$, the algorithm obtains an estimate $U(\theta)|\psi\rangle \equiv |\psi(\theta)\rangle$ of $|\psi_{min}\rangle$.
# The estimate is iteratively optimized by a classical optimizer, which varies the parameters $\theta$ so as to minimize the expectation value $\langle \psi(\theta) |\hat H|\psi(\theta) \rangle$.
#
# For applications of VQE, see [Sokolov *et al.*, 2021](https://arxiv.org/abs/2008.08144v1) on its potential for molecular dynamics simulations, and [Ollitrault *et al.*, 2019](https://arxiv.org/abs/1910.12890) for examples of excited-state calculations.
#
# <div class="alert alert-block alert-warning">
#
# <b>More detail</b>
#
The Qiskit Nature tutorial implementing this algorithm is available [here](https://qiskit.org/documentation/nature/tutorials/01_electronic_structure.html), but it is not exhaustive, so we also recommend looking at the [front page of the GitHub repository](https://github.com/Qiskit/qiskit-nature) and the [test folder](https://github.com/Qiskit/qiskit-nature/tree/main/test), which contains tests for each component and the code underlying each feature.
#
# </div>
# ***
#
# # Part 1: Tutorial - VQE for the H$_2$ molecule
#
# In this part, you will simulate the H$_2$ molecule using the PySCF driver and the Jordan-Wigner transformation in the STO-3G basis.
# We show you how below, which should prepare you to tackle harder problems.
#
# #### 1. Driver
#
# The interfaces to the classical chemistry codes available in Qiskit are called drivers. For example, `PSI4Driver`, `PyQuanteDriver`, and `PySCFDriver` are available.
#
# By running the driver in the cell below (a Hartree-Fock calculation for the given basis set and molecular geometry), we obtain all the information about the molecule that our quantum algorithm needs.

# +
from qiskit_nature.drivers import PySCFDriver

molecule = "H .0 .0 .0; H .0 .0 0.739"
driver = PySCFDriver(atom=molecule)
qmolecule = driver.run()
# -

# <div class="alert alert-block alert-danger">
#
# <b>Exercise 1</b>
#
# Inspect the attributes of `qmolecule` and answer the questions below.
#
# 1. We need to know some basic characteristics of our molecule. What is the total number of electrons in this system?
# 2. What is the number of molecular orbitals?
# 3. What is the number of spin-orbitals?
# 4. How many qubits do you need to simulate this molecule with the Jordan-Wigner transformation?
# 5. What is the value of the nuclear repulsion energy?
#
# The solutions are at the end of this notebook.
# </div>

# ### Solution
# We can simply look up the documentation of `qmolecule`.
#
# However, since these attributes are not described in the [API Reference](https://qiskit.org/documentation/nature/stubs/qiskit_nature.drivers.QMolecule.html), we refer to the [source code](https://qiskit.org/documentation/nature/_modules/qiskit_nature/drivers/qmolecule.html#QMolecule) instead.

# +
# Write your code below this line

n_el = qmolecule.num_alpha + qmolecule.num_beta
n_mo = qmolecule.num_molecular_orbitals
n_so = 2 * qmolecule.num_molecular_orbitals
n_q = 2 * qmolecule.num_molecular_orbitals
e_nn = qmolecule.nuclear_repulsion_energy

# Write your code above this line

n_el, n_mo, n_so, n_q, e_nn
# -

# #### 2. 
The electronic structure problem
#
# Next, we create an `ElectronicStructureProblem`, which generates the list of fermionic operators before they are mapped to qubits (Pauli strings).

# +
from qiskit_nature.problems.second_quantization.electronic import ElectronicStructureProblem

problem = ElectronicStructureProblem(driver)

# Generate the second-quantized operators
second_q_ops = problem.second_q_ops()

# Hamiltonian
main_op = second_q_ops[0]
# -

# #### 3. Mapping to qubits
# Now we can define the mapping used in the simulation.
# Other transformations can be used as well, but here we use the `JordanWignerMapper`, which has the simplest correspondence: each qubit represents one spin-orbital of the molecule.

# +
from qiskit_nature.mappers.second_quantization import ParityMapper, BravyiKitaevMapper, JordanWignerMapper
from qiskit_nature.converters.second_quantization.qubit_converter import QubitConverter

# Set up the mapper and qubit converter
mapper_type = 'JordanWignerMapper'

if mapper_type == 'ParityMapper':
    mapper = ParityMapper()
elif mapper_type == 'JordanWignerMapper':
    mapper = JordanWignerMapper()
elif mapper_type == 'BravyiKitaevMapper':
    mapper = BravyiKitaevMapper()

converter = QubitConverter(mapper=mapper, two_qubit_reduction=False)

# The fermionic operators are mapped to qubit operators
num_particles = (problem.molecule_data_transformed.num_alpha,
                 problem.molecule_data_transformed.num_beta)
qubit_op = converter.convert(main_op, num_particles=num_particles)
# -

# #### 4. Initial state
#
# As explained in the theory section, a good initial state for chemistry problems is the HF state (i.e. $|\Psi_{HF} \rangle = |0101 \rangle$). It can be initialized as follows:

# +
from qiskit_nature.circuit.library import HartreeFock

num_particles = (problem.molecule_data_transformed.num_alpha,
                 problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
init_state = HartreeFock(num_spin_orbitals, num_particles, converter)
#print(init_state)
init_state.draw('mpl')
# -

# #### 5. 
Ansatz
#
# One of the most important choices is the quantum circuit you choose to approximate the ground state.
# Here are some examples of quantum circuits you can pick from the circuit library.

# +
from qiskit.circuit.library import TwoLocal
from qiskit_nature.circuit.library import UCCSD, PUCCD, SUCCD

# Choose the ansatz
ansatz_type = "TwoLocal"

# Parameters for the q-UCC ansatzes
num_particles = (problem.molecule_data_transformed.num_alpha,
                 problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals

# Set the arguments for TwoLocal
if ansatz_type == "TwoLocal":
    # Single-qubit rotations placed on all qubits with independent parameters
    rotation_blocks = ['ry', 'rz']
    # Entangling gates
    entanglement_blocks = 'cx'
    # How the qubits are entangled
    entanglement = 'full'
    # Repetitions of the rotation and entanglement blocks with independent parameters
    repetitions = 3
    # Skip the final rotation layer
    skip_final_rotation_layer = True
    ansatz = TwoLocal(qubit_op.num_qubits, rotation_blocks, entanglement_blocks,
                      reps=repetitions, entanglement=entanglement,
                      skip_final_rotation_layer=skip_final_rotation_layer)
    # Add the initial state
    ansatz.compose(init_state, front=True, inplace=True)
elif ansatz_type == "UCCSD":
    ansatz = UCCSD(converter, num_particles, num_spin_orbitals, initial_state=init_state)
elif ansatz_type == "PUCCD":
    ansatz = PUCCD(converter, num_particles, num_spin_orbitals, initial_state=init_state)
elif ansatz_type == "SUCCD":
    ansatz = SUCCD(converter, num_particles, num_spin_orbitals, initial_state=init_state)
elif ansatz_type == "Custom":
    # Example of how to write your own circuit
    from qiskit.circuit import Parameter, QuantumCircuit, QuantumRegister
    # Define the variational parameter
    theta = Parameter('a')
    n = qubit_op.num_qubits
    # Make an empty quantum circuit
    qc = QuantumCircuit(qubit_op.num_qubits)
    qubit_label = 0
    # Place a Hadamard gate
    qc.h(qubit_label)
    # Place CNOTs
    for i in range(n-1):
        qc.cx(i, i+1)
    # Visual separator
    qc.barrier()
    # rz rotations on all qubits
    qc.rz(theta, range(n))
    ansatz = qc
    ansatz.compose(init_state, front=True, inplace=True)

#print(ansatz)
ansatz.draw('mpl')
# -

# #### 6. 
Backend
#
# Here we specify the simulator or device on which the algorithm is run.
# For this challenge we use the `statevector_simulator`.

from qiskit import Aer
backend = Aer.get_backend('statevector_simulator')

# #### 7. Optimizer
#
# The optimizer guides the evolution of the ansatz parameters, so it is very important for the convergence of the energy evaluation, which determines the number of measurements executed on the QPU.
# A clever choice can dramatically reduce the number of energy evaluations needed.

# +
from qiskit.algorithms.optimizers import COBYLA, L_BFGS_B, SPSA, SLSQP

optimizer_type = 'COBYLA'

# You may want to tune the parameters of each optimizer,
# but here we use the default values
if optimizer_type == 'COBYLA':
    optimizer = COBYLA(maxiter=500)
elif optimizer_type == 'L_BFGS_B':
    optimizer = L_BFGS_B(maxfun=500)
elif optimizer_type == 'SPSA':
    optimizer = SPSA(maxiter=500)
elif optimizer_type == 'SLSQP':
    optimizer = SLSQP(maxiter=500)
# -

# #### 8. Exact eigensolver
#
# For learning purposes, we can solve the problem exactly by diagonalizing the Hamiltonian matrix, so that we know where VQE is aiming. Of course, the dimension of this matrix scales exponentially with the number of molecular orbitals; if you try this for a larger molecule you will see how slow it becomes, and for very large systems you would run out of memory just to store the wavefunction.

# +
from qiskit_nature.algorithms.ground_state_solvers.minimum_eigensolver_factories import NumPyMinimumEigensolverFactory
from qiskit_nature.algorithms.ground_state_solvers import GroundStateEigensolver
import numpy as np

def exact_diagonalizer(problem, converter):
    solver = NumPyMinimumEigensolverFactory()
    calc = GroundStateEigensolver(converter, solver)
    result = calc.solve(problem)
    return result

result_exact = exact_diagonalizer(problem, converter)
exact_energy = np.real(result_exact.eigenenergies[0])
print("Exact electronic energy", exact_energy)
print(result_exact)

# The target electronic energy is therefore -1.85336 Ha.
# Check your VQE result against it.
# -

# #### 9. 
VQE and initial parameters for the ansatz
# Now we can import the VQE class and run the algorithm.

# +
from qiskit.algorithms import VQE
from IPython.display import display, clear_output

# Print and save the data in lists
def callback(eval_count, parameters, mean, std):
    # Overwrite the same line when printing
    display("Evaluation: {}, Energy: {}, Std: {}".format(eval_count, mean, std))
    clear_output(wait=True)
    counts.append(eval_count)
    values.append(mean)
    params.append(parameters)
    deviation.append(std)

counts = []
values = []
params = []
deviation = []

# Set the initial parameters of the ansatz
# We choose a fixed small displacement so that all participants
# start from a similar initial point
try:
    initial_point = [0.01] * len(ansatz.ordered_parameters)
except:
    initial_point = [0.01] * ansatz.num_parameters

algorithm = VQE(ansatz,
                optimizer=optimizer,
                quantum_instance=backend,
                callback=callback,
                initial_point=initial_point)

result = algorithm.compute_minimum_eigenvalue(qubit_op)
print(result)
# -

# #### 10. Scoring function
#
# We need to judge how well your VQE simulation performed with your choice of ansatz and optimizer.
# For that we introduce the following scoring function:
#
# $$ score = N_{CNOT}$$
#
# where $N_{CNOT}$ is the number of CNOTs.
# However, you have to reach chemical accuracy, $\delta E_{chem} = 0.004$ Ha $= 4$ mHa, which can be hard to reach depending on the problem.
# Whoever reaches this accuracy with the fewest CNOTs wins the challenge: the lower the score, the better!
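# The scoring rule above can be summarized in a couple of lines. The sketch below (the helper names are ours, not part of the challenge grader) returns the score together with a pass/fail flag for the chemical-accuracy criterion $|E_{exact} - E_{VQE}| \leq 4$ mHa:

```python
CHEMICAL_ACCURACY_MHA = 4.0

def score_and_pass(n_cnots, vqe_energy_ha, exact_energy_ha):
    # The score is simply the CNOT count; lower is better.
    # The run "passes" only if the error is within chemical accuracy.
    error_mha = abs(vqe_energy_ha - exact_energy_ha) * 1000  # Ha -> mHa
    return n_cnots, error_mha <= CHEMICAL_ACCURACY_MHA

# e.g. 18 CNOTs with a 1.2 mHa error: score 18, passing
print(score_and_pass(18, -1.08751, -1.08871))  # (18, True)
```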
# +
# Store the results in a dictionary
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import Unroller

# The Unroller transpiles the circuit into CNOT and U gates
pass_ = Unroller(['u', 'cx'])
pm = PassManager(pass_)
ansatz_tp = pm.run(ansatz)
cnots = ansatz_tp.count_ops()['cx']
score = cnots

accuract_thres = 4.0  # in mHa
energy = result.optimal_value

if ansatz_type == "TwoLocal":
    result_dict = {
        'optimizer': optimizer.__class__.__name__,
        'mapping': converter.mapper.__class__.__name__,
        'ansatz': ansatz.__class__.__name__,
        'rotation blocks': rotation_blocks,
        'entanglement_blocks': entanglement_blocks,
        'entanglement': entanglement,
        'repetitions': repetitions,
        'skip_final_rotation_layer': skip_final_rotation_layer,
        'energy (Ha)': energy,
        'error (mHa)': (energy-exact_energy)*1000,
        'pass': (energy-exact_energy)*1000 <= accuract_thres,
        '# of parameters': len(result.optimal_point),
        'final parameters': result.optimal_point,
        '# of evaluations': result.optimizer_evals,
        'optimizer time': result.optimizer_time,
        '# of qubits': int(qubit_op.num_qubits),
        '# of CNOTs': cnots,
        'score': score}
else:
    result_dict = {
        'optimizer': optimizer.__class__.__name__,
        'mapping': converter.mapper.__class__.__name__,
        'ansatz': ansatz.__class__.__name__,
        'rotation blocks': None,
        'entanglement_blocks': None,
        'entanglement': None,
        'repetitions': None,
        'skip_final_rotation_layer': None,
        'energy (Ha)': energy,
        'error (mHa)': (energy-exact_energy)*1000,
        'pass': (energy-exact_energy)*1000 <= accuract_thres,
        '# of parameters': len(result.optimal_point),
        'final parameters': result.optimal_point,
        '# of evaluations': result.optimizer_evals,
        'optimizer time': result.optimizer_time,
        '# of qubits': int(qubit_op.num_qubits),
        '# of CNOTs': cnots,
        'score': score}

# Plot the results
import matplotlib.pyplot as plt

fig, ax = plt.subplots(1, 1)
ax.set_xlabel('Iterations')
ax.set_ylabel('Energy')
ax.grid()
fig.text(0.7, 0.75, f'Energy: {result.optimal_value:.3f}\nScore: {score:.0f}')
plt.title(f"{result_dict['optimizer']}-{result_dict['mapping']}\n{result_dict['ansatz']}")
ax.plot(counts, values)
ax.axhline(exact_energy, linestyle='--')

fig_title = f"\
{result_dict['optimizer']}-\
{result_dict['mapping']}-\
{result_dict['ansatz']}-\
Energy({result_dict['energy (Ha)']:.3f})-\
Score({result_dict['score']:.0f})\
.png"
fig.savefig(fig_title, dpi=300)

# Display and save the data
import pandas as pd
import os.path

filename = 'results_h2.csv'
if os.path.isfile(filename):
    result_df = pd.read_csv(filename)
    result_df = result_df.append([result_dict])
else:
    result_df = pd.DataFrame.from_dict([result_dict])
result_df.to_csv(filename)
result_df[['optimizer','ansatz', '# of qubits', '# of parameters','rotation blocks',
           'entanglement_blocks', 'entanglement', 'repetitions',
           'error (mHa)', 'pass', 'score']]
# -

# <div class="alert alert-block alert-danger">
#
# <b>Exercise 2</b>
#
# Experiment with all the parameters, then:
#
# 1. Can you find the best heuristic ansatz and optimizer, i.e. the pair with the best score (by modifying the parameters of the `TwoLocal` ansatz)?
# 2. Can you find the best q-UCC ansatz and optimizer (by choosing among the `UCCSD, PUCCD, SUCCD` ansatzes)?
# 3. In the cell defining the ansatz, can you modify the `Custom` ansatz by placing gates yourself and build a circuit better than `TwoLocal`?
#
# For each question, give the `ansatz` object.
# Remember that you have to reach chemical accuracy, $|E_{exact} - E_{VQE}| \leq 0.004 $ Ha $= 4$ mHa.
#
# </div>

# +
# Write your code below this line

def op(optimizer_type='COBYLA', maxiter=5000):
    if optimizer_type == 'COBYLA':
        optimizer = COBYLA(maxiter=maxiter)
    elif optimizer_type == 'L_BFGS_B':
        optimizer = L_BFGS_B(maxfun=maxiter)
    elif optimizer_type == 'SPSA':
        optimizer = SPSA(maxiter=maxiter)
    elif optimizer_type == 'SLSQP':
        optimizer = SLSQP(maxiter=maxiter)
    return optimizer

def choose_ansatz(ansatz_type="TwoLocal", num_particles=num_particles, num_spin_orbitals=num_spin_orbitals):
    if ansatz_type == "TwoLocal":
        # Single-qubit rotations placed on all qubits with independent parameters
        rotation_blocks = ['ry', 'rz']
        # Entangling gates
        entanglement_blocks = 'cx'
        # How the qubits are entangled
        entanglement = 'full'
        # Repetitions of the rotation and entanglement blocks with independent parameters
        repetitions = 3
        # Skip the final rotation layer
        skip_final_rotation_layer = True
        ansatz = TwoLocal(qubit_op.num_qubits, rotation_blocks, entanglement_blocks,
                          reps=repetitions, entanglement=entanglement,
                          skip_final_rotation_layer=skip_final_rotation_layer)
        # Add the initial state
        ansatz.compose(init_state, front=True, inplace=True)
    elif ansatz_type == "UCCSD":
        ansatz = UCCSD(converter, num_particles, num_spin_orbitals, initial_state=init_state)
    elif ansatz_type == "PUCCD":
        ansatz = PUCCD(converter, num_particles, num_spin_orbitals, initial_state=init_state)
    elif ansatz_type == "SUCCD":
        ansatz = SUCCD(converter, num_particles, num_spin_orbitals, initial_state=init_state)
    elif ansatz_type == "Custom":
        # Example of how to write your own circuit
        from qiskit.circuit import Parameter, QuantumCircuit, QuantumRegister
        # Define the variational parameter
        theta = Parameter('a')
        n = qubit_op.num_qubits
        # Make an empty quantum circuit
        qc = QuantumCircuit(qubit_op.num_qubits)
        qubit_label = 0
        # Place a Hadamard gate
        qc.h(qubit_label)
        # Place CNOTs
        for i in range(n-1):
            qc.cx(i, i+1)
        # Visual separator
        qc.barrier()
        # rz rotations on all qubits
        qc.rz(theta, range(n))
        ansatz = qc
        ansatz.compose(init_state, front=True, inplace=True)
    return ansatz

num_particles = \
    (problem.molecule_data_transformed.num_alpha,
     problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals

for optimizer_type in ['COBYLA', 'L_BFGS_B', 'SPSA', 'SLSQP']:
    for ansatz_type in ["TwoLocal", "UCCSD", "PUCCD", "SUCCD", "Custom"]:
        print("======================================================")
        print("optimizer:{}, ansatz:{}".format(optimizer_type, ansatz_type))
        print("")
        optimizer = op(optimizer_type, maxiter=2000)
        ansatz = choose_ansatz(ansatz_type, num_particles, num_spin_orbitals)
        #ansatz.draw("mpl")
        counts = []
        values = []
        params = []
        deviation = []
        try:
            initial_point = [0.01] * len(ansatz.ordered_parameters)
        except:
            initial_point = [0.01] * ansatz.num_parameters
        algorithm = VQE(ansatz,
                        optimizer=optimizer,
                        quantum_instance=backend,
                        #callback=callback,
                        initial_point=initial_point)
        result = algorithm.compute_minimum_eigenvalue(qubit_op)
        pass_ = Unroller(['u', 'cx'])
        pm = PassManager(pass_)
        ansatz_tp = pm.run(ansatz)
        cnots = ansatz_tp.count_ops()['cx']
        score = cnots
        accuract_thres = 4.0  # in mHa
        energy = result.optimal_value
        print("error:{:.2e}mHa".format((energy-exact_energy)*1000))
        if ansatz_type == "TwoLocal":
            result_dict = {
                'optimizer': optimizer.__class__.__name__,
                'mapping': converter.mapper.__class__.__name__,
                'ansatz': ansatz.__class__.__name__,
                'rotation blocks': rotation_blocks,
                'entanglement_blocks': entanglement_blocks,
                'entanglement': entanglement,
                'repetitions': repetitions,
                'skip_final_rotation_layer': skip_final_rotation_layer,
                'energy (Ha)': energy,
                'error (mHa)': (energy-exact_energy)*1000,
                'pass': (energy-exact_energy)*1000 <= accuract_thres,
                '# of parameters': len(result.optimal_point),
                'final parameters': result.optimal_point,
                '# of evaluations': result.optimizer_evals,
                'optimizer time': result.optimizer_time,
                '# of qubits': int(qubit_op.num_qubits),
                '# of CNOTs': cnots,
                'score': score}
        else:
            result_dict = {
                'optimizer': optimizer.__class__.__name__,
                'mapping':
                    converter.mapper.__class__.__name__,
                'ansatz': ansatz.__class__.__name__,
                'rotation blocks': None,
                'entanglement_blocks': None,
                'entanglement': None,
                'repetitions': None,
                'skip_final_rotation_layer': None,
                'energy (Ha)': energy,
                'error (mHa)': (energy-exact_energy)*1000,
                'pass': (energy-exact_energy)*1000 <= accuract_thres,
                '# of parameters': len(result.optimal_point),
                'final parameters': result.optimal_point,
                '# of evaluations': result.optimizer_evals,
                'optimizer time': result.optimizer_time,
                '# of qubits': int(qubit_op.num_qubits),
                '# of CNOTs': cnots,
                'score': score}
        filename = 'results_h2_q2.csv'
        if os.path.isfile(filename):
            result_df = pd.read_csv(filename)
            result_df = result_df.append([result_dict])
        else:
            result_df = pd.DataFrame.from_dict([result_dict])
        result_df.to_csv(filename)

result_df[['optimizer','ansatz', '# of qubits', '# of parameters','rotation blocks',
           'entanglement_blocks', 'entanglement', 'repetitions',
           'error (mHa)', 'pass', 'score']]

# Write your code above this line
# -

# Everything except `Custom` seems to compute well. We expect `Custom` can also give good results once its circuit is revised; we work on that circuit in a later question.

# ***
#
# # Part 2: Challenge - VQE for the LiH molecule
#
# In this part, you will simulate the LiH molecule using the PySCF driver in the STO-3G basis.
#
# </div>
#
# <div class="alert alert-block alert-success">
#
# <b>Goal</b>
#
# Experiment with all the parameters and find the best ansatz. Be as creative as you like!
#
# For all questions, give the `ansatz` object as in Part 1. Your final score is judged only on the results of Part 2.
#
# </div>
#
# Note that the system is larger this time. Compute how many qubits you would need for this system by obtaining the number of spin-orbitals.
#
# ### Reducing the problem size
#
# You may want to reduce the number of qubits for your simulation:
# - You can freeze the core electrons, which do not contribute significantly to the chemistry, and consider only the valence electrons. Qiskit already has this functionality implemented, so inspect the different transformers in `qiskit_nature.transformers` and find the one that realizes the frozen-core approximation.
# - You can use the `ParityMapper` with `two_qubit_reduction=True` to eliminate two qubits.
# - You may be able to reduce the number of qubits by inspecting the symmetries of your Hamiltonian. Find out how to use `Z2Symmetries` in Qiskit.
#
# ### Custom ansatzes
#
# You may want to explore the ideas proposed in [Grimsley *et al.*, 2018](https://arxiv.org/abs/1812.11173v2), [<NAME> *et al.*,2019](https://arxiv.org/abs/1911.10205), [Rattew *et al.*, 2019](https://arxiv.org/abs/1910.09694), and [Tang *et al.*, 2019](https://arxiv.org/abs/1911.10205). Machine-learning algorithms that generate good ansatz circuits may also be worth a look.
#
# ### Setting up the simulation
#
# Next, we run the Hartree-Fock calculation -- and the rest is up to you!
#
# <div class="alert alert-block alert-danger">
#
# <b>Attention</b>
#
# Use the `driver`, `initial_point`, and `initial_state` as given.
# Apart from that, you are free to use everything available in Qiskit.
# That is, start from the following initial point (all parameters set to 0.01):
#
# `initial_point = [0.01] * len(ansatz.ordered_parameters)`
# or
# `initial_point = [0.01] * ansatz.num_parameters`
#
# Also, the initial state must be the HF state:
#
# `init_state = HartreeFock(num_spin_orbitals, num_particles, converter)`
#
# For each question, give the `ansatz` object.
# Remember that you have to reach chemical accuracy, $|E_{exact} - E_{VQE}| \leq 0.004 $ Ha$ = 4$ mHa.
# </div>

# ### First, compute the same way as for the H2 molecule

# +
from qiskit_nature.drivers import PySCFDriver

molecule = 'Li 0.0 0.0 0.0; H 0.0 0.0 1.5474'
driver = PySCFDriver(atom=molecule)
qmolecule = driver.run()

# +
# The electronic structure problem
problem = ElectronicStructureProblem(driver)

# Generate the second-quantized operators
second_q_ops = problem.second_q_ops()

# Hamiltonian
main_op = second_q_ops[0]

# +
# Qubit mapping
# Set up the mapper and qubit converter
mapper_type = 'JordanWignerMapper'

if mapper_type == 'ParityMapper':
    mapper = ParityMapper()
elif mapper_type == 'JordanWignerMapper':
    mapper = \
        JordanWignerMapper()
elif mapper_type == 'BravyiKitaevMapper':
    mapper = BravyiKitaevMapper()

converter = QubitConverter(mapper=mapper, two_qubit_reduction=False)

# The fermionic operators are mapped to qubit operators
num_particles = (problem.molecule_data_transformed.num_alpha,
                 problem.molecule_data_transformed.num_beta)
qubit_op = converter.convert(main_op, num_particles=num_particles)
# -

# Initial state
num_particles = (problem.molecule_data_transformed.num_alpha,
                 problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
init_state = HartreeFock(num_spin_orbitals, num_particles, converter)
#print(init_state)
init_state.draw('mpl')

# +
# Exact solution
result_exact = exact_diagonalizer(problem, converter)
exact_energy = np.real(result_exact.eigenenergies[0])
print("Exact electronic energy", exact_energy)
print(result_exact)

# The exact solution is -8.90869711642429 Ha

# +
# Ansatz
# This time we use "SUCCD"
ansatz_type = "SUCCD"

# Parameters for the q-UCC ansatzes
num_particles = (problem.molecule_data_transformed.num_alpha,
                 problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals

# Set the arguments for TwoLocal
if ansatz_type == "TwoLocal":
    # Single-qubit rotations placed on all qubits with independent parameters
    rotation_blocks = ['ry', 'rz']
    # Entangling gates
    entanglement_blocks = 'cx'
    # How the qubits are entangled
    entanglement = 'full'
    # Repetitions of the rotation and entanglement blocks with independent parameters
    repetitions = 3
    # Skip the final rotation layer
    skip_final_rotation_layer = True
    ansatz = TwoLocal(qubit_op.num_qubits, rotation_blocks, entanglement_blocks,
                      reps=repetitions, entanglement=entanglement,
                      skip_final_rotation_layer=skip_final_rotation_layer)
    # Add the initial state
    ansatz.compose(init_state, front=True, inplace=True)
elif ansatz_type == "UCCSD":
    ansatz = UCCSD(converter, num_particles, num_spin_orbitals, initial_state=init_state)
elif ansatz_type == "PUCCD":
    ansatz = PUCCD(converter, num_particles, num_spin_orbitals, initial_state=init_state)
elif ansatz_type == "SUCCD":
    ansatz = \
        SUCCD(converter, num_particles, num_spin_orbitals, initial_state=init_state)
elif ansatz_type == "Custom":
    # Example of how to write your own circuit
    from qiskit.circuit import Parameter, QuantumCircuit, QuantumRegister
    # Define the variational parameter
    theta = Parameter('a')
    n = qubit_op.num_qubits
    # Make an empty quantum circuit
    qc = QuantumCircuit(qubit_op.num_qubits)
    qubit_label = 0
    # Place a Hadamard gate
    qc.h(qubit_label)
    # Place CNOTs
    for i in range(n-1):
        qc.cx(i, i+1)
    # Visual separator
    qc.barrier()
    # rz rotations on all qubits
    qc.rz(theta, range(n))
    ansatz = qc
    ansatz.compose(init_state, front=True, inplace=True)

#print(ansatz)
#ansatz.draw('mpl')  # Drawing heavy circuits is slow, so use print instead

# +
# Optimizer
# Use 'L_BFGS_B'
optimizer_type = 'L_BFGS_B'

# You may want to tune the parameters of each optimizer,
# but here we use the default values
if optimizer_type == 'COBYLA':
    optimizer = COBYLA(maxiter=500)
elif optimizer_type == 'L_BFGS_B':
    optimizer = L_BFGS_B(maxfun=500)
elif optimizer_type == 'SPSA':
    optimizer = SPSA(maxiter=500)
elif optimizer_type == 'SLSQP':
    optimizer = SLSQP(maxiter=500)

# +
# VQE
backend = Aer.get_backend('statevector_simulator')

counts = []
values = []
params = []
deviation = []

# Set the initial parameters of the ansatz
# We choose a fixed small displacement so that all participants
# start from a similar initial point
try:
    initial_point = [0.01] * len(ansatz.ordered_parameters)
except:
    initial_point = [0.01] * ansatz.num_parameters

algorithm = VQE(ansatz,
                optimizer=optimizer,
                quantum_instance=backend,
                callback=callback,
                initial_point=initial_point)

result = algorithm.compute_minimum_eigenvalue(qubit_op)
print(result)

# +
# Scoring
# The Unroller transpiles the circuit into CNOT and U gates
pass_ = Unroller(['u', 'cx'])
pm = PassManager(pass_)
ansatz_tp = pm.run(ansatz)
cnots = ansatz_tp.count_ops()['cx']
score = cnots

accuract_thres = 4.0  # in mHa
energy = result.optimal_value

if ansatz_type == "TwoLocal":
    result_dict = {
        'optimizer': optimizer.__class__.__name__,
        'mapping': converter.mapper.__class__.__name__,
        'ansatz': ansatz.__class__.__name__,
        'rotation blocks': rotation_blocks,
        'entanglement_blocks': entanglement_blocks,
        'entanglement': entanglement,
        'repetitions': repetitions,
        'skip_final_rotation_layer': skip_final_rotation_layer,
        'energy (Ha)': energy,
        'error (mHa)': (energy-exact_energy)*1000,
        'pass': (energy-exact_energy)*1000 <= accuract_thres,
        '# of parameters': len(result.optimal_point),
        'final parameters': result.optimal_point,
        '# of evaluations': result.optimizer_evals,
        'optimizer time': result.optimizer_time,
        '# of qubits': int(qubit_op.num_qubits),
        '# of CNOTs': cnots,
        'score': score}
else:
    result_dict = {
        'optimizer': optimizer.__class__.__name__,
        'mapping': converter.mapper.__class__.__name__,
        'ansatz': ansatz.__class__.__name__,
        'rotation blocks': None,
        'entanglement_blocks': None,
        'entanglement': None,
        'repetitions': None,
        'skip_final_rotation_layer': None,
        'energy (Ha)': energy,
        'error (mHa)': (energy-exact_energy)*1000,
        'pass': (energy-exact_energy)*1000 <= accuract_thres,
        '# of parameters': len(result.optimal_point),
        'final parameters': result.optimal_point,
        '# of evaluations': result.optimizer_evals,
        'optimizer time': result.optimizer_time,
        '# of qubits': int(qubit_op.num_qubits),
        '# of CNOTs': cnots,
        'score': score}

# Plot the results
import matplotlib.pyplot as plt

fig, ax = plt.subplots(1, 1)
ax.set_xlabel('Iterations')
ax.set_ylabel('Energy')
ax.grid()
fig.text(0.7, 0.75, f'Energy: {result.optimal_value:.3f}\nScore: {score:.0f}')
plt.title(f"{result_dict['optimizer']}-{result_dict['mapping']}\n{result_dict['ansatz']}")
ax.plot(counts, values)
ax.axhline(exact_energy, linestyle='--')

fig_title = f"\
{result_dict['optimizer']}-\
{result_dict['mapping']}-\
{result_dict['ansatz']}-\
Energy({result_dict['energy (Ha)']:.3f})-\
Score({result_dict['score']:.0f})\
.png"
fig.savefig(fig_title, dpi=300)

# Display and save the data
import pandas as pd
import os.path

filename = 'results_LiH.csv'
if os.path.isfile(filename):
    result_df = pd.read_csv(filename)
    result_df = result_df.append([result_dict])
else:
    result_df = pd.DataFrame.from_dict([result_dict])
result_df.to_csv(filename)
result_df[['optimizer','ansatz', '# of qubits',
           '# of parameters','rotation blocks',
           'entanglement_blocks', 'entanglement', 'repetitions',
           'error (mHa)', 'pass', 'score']]
# -

# Check your answer with the code below.
from qc_grader import grade_ex5
freeze_core = False  # Change to True if you froze the core electrons.
grade_ex5(ansatz, qubit_op, result, freeze_core)

# The error is below 4 mHa, but the circuit is far too large. Next we consider how to shrink it.

# ### How to shrink the circuit
#
# Useful references: [Simulating molecules using VQE](https://qiskit.org/textbook/ja/ch-applications/vqe-molecules.html), [Lab 8](https://www.youtube.com/watch?v=DMosNL68b6Q) of the [Qiskit Global Summer School 2020](https://qiskit.org/learn/intro-qc-qh/), and the [Qiskit documentation](https://qiskit.org/documentation/nature/apidocs/qiskit_nature.html).
#
# Specifically, the following two techniques:
# * Use `FreezeCoreTransformer` to freeze the core, and additionally remove orbitals with `remove_orbitals`.
# * `z2symmetry_reduction`
#
# I determined the various parameters by trial and error.

molecule = 'Li 0.0 0.0 0.0; H 0.0 0.0 1.5474'
driver = PySCFDriver(atom=molecule)
qmolecule = driver.run()

# +
# FreezeCoreTransformer
from qiskit_nature.transformers import FreezeCoreTransformer

# It is worth experimenting with which orbitals to drop in remove_orbitals.
problem = ElectronicStructureProblem(driver,
    q_molecule_transformers=[FreezeCoreTransformer(freeze_core=True, remove_orbitals=[4, 3])])

# Generate the second-quantized operators
second_q_ops = problem.second_q_ops()

# Hamiltonian
main_op = second_q_ops[0]

# +
# Set up the mapper and qubit converter
mapper_type = 'ParityMapper'  # Use 'ParityMapper' when using z2symmetry_reduction

if mapper_type == 'ParityMapper':
    mapper = ParityMapper()
elif mapper_type == 'JordanWignerMapper':
    mapper = JordanWignerMapper()
elif mapper_type == 'BravyiKitaevMapper':
    mapper = BravyiKitaevMapper()

converter = QubitConverter(mapper=mapper, two_qubit_reduction=True, z2symmetry_reduction=[1])

# The fermionic operators are mapped to qubit operators
num_particles = (problem.molecule_data_transformed.num_alpha,
                 problem.molecule_data_transformed.num_beta)
qubit_op = converter.convert(main_op, num_particles=num_particles)
print(qubit_op.z2_symmetries)
# -

num_particles = (problem.molecule_data_transformed.num_alpha,
                 problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
init_state = HartreeFock(num_spin_orbitals, num_particles, converter)
#print(init_state)
init_state.draw('mpl')

# We reduced the qubit count from 12 down to 4, which lets us build a much smaller quantum circuit.

# +
# Exact solution
result_exact = exact_diagonalizer(problem, converter)
exact_energy = np.real(result_exact.eigenenergies[0])
print("Exact electronic energy", exact_energy)
print(result_exact)

# The exact solution is -1.088706015734737 Ha

# +
# Ansatz
# This time we use "TwoLocal"
ansatz_type = "TwoLocal"

# Parameters for the q-UCC ansatzes
num_particles = (problem.molecule_data_transformed.num_alpha,
                 problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals

# Set the arguments for TwoLocal
if ansatz_type == "TwoLocal":
    # Single-qubit rotations placed on all qubits with independent parameters
    rotation_blocks = ['ry', 'rz']
    # Entangling gates
    entanglement_blocks = 'cx'
    # How the qubits are entangled
    entanglement = 'full'
    # Repetitions of the rotation and entanglement blocks with independent parameters
    repetitions = 3
    # Skip the final rotation layer
    skip_final_rotation_layer = True
    ansatz = TwoLocal(qubit_op.num_qubits, rotation_blocks, entanglement_blocks,
                      reps=repetitions, entanglement=entanglement,
                      skip_final_rotation_layer=skip_final_rotation_layer)
    # Add the initial state
    ansatz.compose(init_state, front=True, inplace=True)
elif ansatz_type == "UCCSD":
    ansatz = UCCSD(converter, num_particles, num_spin_orbitals, initial_state=init_state)
elif ansatz_type == "PUCCD":
    ansatz = PUCCD(converter, num_particles, num_spin_orbitals, initial_state=init_state)
elif ansatz_type == "SUCCD":
    ansatz = SUCCD(converter, num_particles, num_spin_orbitals, initial_state=init_state)
elif ansatz_type == "Custom":
    # Example of how to write your own circuit
    from qiskit.circuit import Parameter, QuantumCircuit, QuantumRegister
    # Define the variational parameter
    theta = Parameter('a')
    n = qubit_op.num_qubits
    # Make an empty quantum circuit
    qc = QuantumCircuit(qubit_op.num_qubits)
    qubit_label = 0
    # Place a Hadamard gate
    qc.h(qubit_label)
    # Place CNOTs
    for i in range(n-1):
        qc.cx(i, i+1)
見た目のためのセパレーター qc.barrier() # 全量子ビットへrz回転 qc.rz(theta, range(n)) ansatz = qc ansatz.compose(init_state, front=True, inplace=True) #print(ansatz) ansatz.draw('mpl') #重い回路の場合は表示に時間がかかるので、printを使う # + # オプティマイザー # 'L_BFGS_B'を使う optimizer_type = 'L_BFGS_B' # 各オプティマイザーのパラメーターを調整したいかもしれませんが # ここではデフォルト値を使います if optimizer_type == 'COBYLA': optimizer = COBYLA(maxiter=500) elif optimizer_type == 'L_BFGS_B': optimizer = L_BFGS_B(maxfun=5000) elif optimizer_type == 'SPSA': optimizer = SPSA(maxiter=500) elif optimizer_type == 'SLSQP': optimizer = SLSQP(maxiter=500) # + # VQE backend = Aer.get_backend('statevector_simulator') counts = [] values = [] params = [] deviation = [] # ansatzの初期パラメーターを設定します # すべての参加者が似たような初期ポイントから始められるように # 固定の小さい変位を選びます try: initial_point = [0.01] * len(ansatz.ordered_parameters) except: initial_point = [0.01] * ansatz.num_parameters algorithm = VQE(ansatz, optimizer=optimizer, quantum_instance=backend, callback=callback, initial_point=initial_point) result = algorithm.compute_minimum_eigenvalue(qubit_op) print(result) # + # スコアリング # 作った回路をCNOTとUゲートにトランスパイルするUnroller pass_ = Unroller(['u', 'cx']) pm = PassManager(pass_) ansatz_tp = pm.run(ansatz) cnots = ansatz_tp.count_ops()['cx'] score = cnots accuract_thres = 4.0 # mHaの単位 energy = result.optimal_value if ansatz_type is "TwoLocal": result_dict = { 'optimizer': optimizer.__class__.__name__, 'mapping': converter.mapper.__class__.__name__, 'ansatz': ansatz.__class__.__name__, 'rotation blocks': rotation_blocks, 'entanglement_blocks': entanglement_blocks, 'entanglement': entanglement, 'repetitions': repetitions, 'skip_final_rotation_layer': skip_final_rotation_layer, 'energy (Ha)': energy, 'error (mHa)': (energy-exact_energy)*1000, 'pass': (energy-exact_energy)*1000 <= accuract_thres, '# of parameters': len(result.optimal_point), 'final parameters': result.optimal_point, '# of evaluations': result.optimizer_evals, 'optimizer time': result.optimizer_time, '# of qubits': int(qubit_op.num_qubits), '# of 
CNOTs': cnots, 'score': score} else: result_dict = { 'optimizer': optimizer.__class__.__name__, 'mapping': converter.mapper.__class__.__name__, 'ansatz': ansatz.__class__.__name__, 'rotation blocks': None, 'entanglement_blocks': None, 'entanglement': None, 'repetitions': None, 'skip_final_rotation_layer': None, 'energy (Ha)': energy, 'error (mHa)': (energy-exact_energy)*1000, 'pass': (energy-exact_energy)*1000 <= accuract_thres, '# of parameters': len(result.optimal_point), 'final parameters': result.optimal_point, '# of evaluations': result.optimizer_evals, 'optimizer time': result.optimizer_time, '# of qubits': int(qubit_op.num_qubits), '# of CNOTs': cnots, 'score': score} # 結果をプロットします import matplotlib.pyplot as plt fig, ax = plt.subplots(1, 1) ax.set_xlabel('Iterations') ax.set_ylabel('Energy') ax.grid() fig.text(0.7, 0.75, f'Energy: {result.optimal_value:.3f}\nScore: {score:.0f}') plt.title(f"{result_dict['optimizer']}-{result_dict['mapping']}\n{result_dict['ansatz']}") ax.plot(counts, values) ax.axhline(exact_energy, linestyle='--') fig_title = f"\ {result_dict['optimizer']}-\ {result_dict['mapping']}-\ {result_dict['ansatz']}-\ Energy({result_dict['energy (Ha)']:.3f})-\ Score({result_dict['score']:.0f})\ .png" fig.savefig(fig_title, dpi=300) # データを表示して保存します import pandas as pd import os.path filename = 'results_LiH.csv' if os.path.isfile(filename): result_df = pd.read_csv(filename) result_df = result_df.append([result_dict]) else: result_df = pd.DataFrame.from_dict([result_dict]) result_df.to_csv(filename) result_df[['optimizer','ansatz', '# of qubits', '# of parameters','rotation blocks', 'entanglement_blocks', 'entanglement', 'repetitions', 'error (mHa)', 'pass', 'score']] # - from qc_grader import grade_ex5 freeze_core = True # 核電子を凍結した場合はTrueにします。 grade_ex5(ansatz,qubit_op,result,freeze_core) # エラー4 mHa以下でScore 18となり、解答できました🎉 # # Score 3を目指して # # チャレンジ開催期間中はトップスコアがオープンになっていて、Score 3がトップスコアとわかっていました。 # # ここからは、自作の`Custom`回路を作ってScore 3を目指します。 # # 
回路作成の方針としては、3つCNOTゲートをその前後で回転ゲート(u2)で挟んだ回路にして見ました。 # + # ansatz # 今回は"TwoLocal"を使う ansatz_type = "Custom" # q-UCC ansatzのためのパラメーター num_particles = (problem.molecule_data_transformed.num_alpha, problem.molecule_data_transformed.num_beta) num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals # twolocalのための引数を設定 if ansatz_type == "TwoLocal": # 単一量子ビット回転は、すべての量子ビットに独立なパラメーターとして置かれます rotation_blocks = ['ry', 'rz'] # エンタングルさせるゲート entanglement_blocks = 'cx' # 量子ビットをどのくらいエンタングルさせるか entanglement = 'full' # 独立なパラメーターを持った回転ブロックとエンタングルメントのブロックの繰り返し回数 repetitions = 3 # 最後の回転ブロック層をスキップする skip_final_rotation_layer = True ansatz = TwoLocal(qubit_op.num_qubits, rotation_blocks, entanglement_blocks, reps=repetitions, entanglement=entanglement, skip_final_rotation_layer=skip_final_rotation_layer) # 初期状態を加える ansatz.compose(init_state, front=True, inplace=True) elif ansatz_type == "UCCSD": ansatz = UCCSD(converter,num_particles,num_spin_orbitals,initial_state = init_state) elif ansatz_type == "PUCCD": ansatz = PUCCD(converter,num_particles,num_spin_orbitals,initial_state = init_state) elif ansatz_type == "SUCCD": ansatz = SUCCD(converter,num_particles,num_spin_orbitals,initial_state = init_state) elif ansatz_type == "Custom": # 回路の作り方の例 from qiskit.circuit import ParameterVector, QuantumCircuit, QuantumRegister # 変分パラメーターの定義 a = ParameterVector('a', length=39) n = qubit_op.num_qubits # 空の量子回路の作成 qc = QuantumCircuit(qubit_op.num_qubits) idx = 0 # 回転ゲートを置く for i in range(n): qc.u(a[idx], a[idx+1], 0, i) idx += 2 # アダマールゲートを置く qc.h(range(n)) # CNOTを置く qc.cx(1, 0) qc.cx(2, 1) qc.cx(3, 2) # 回転ゲートを置く for i in range(n): qc.u(a[idx], a[idx+1], 0, i) idx += 2 ansatz = qc ansatz.compose(init_state, front=True, inplace=True) #print(ansatz) ansatz.draw('mpl') #重い回路の場合は表示に時間がかかるので、printを使う # + # VQE backend = Aer.get_backend('statevector_simulator') counts = [] values = [] params = [] deviation = [] # ansatzの初期パラメーターを設定します # すべての参加者が似たような初期ポイントから始められるように # 固定の小さい変位を選びます 
try:
    initial_point = [0.01] * len(ansatz.ordered_parameters)
except:
    initial_point = [0.01] * ansatz.num_parameters

algorithm = VQE(ansatz, optimizer=optimizer, quantum_instance=backend,
                callback=callback, initial_point=initial_point)

result = algorithm.compute_minimum_eigenvalue(qubit_op)
print(result)

# +
# Scoring
# Unroller transpiles the circuit into CNOT and U gates
pass_ = Unroller(['u', 'cx'])
pm = PassManager(pass_)
ansatz_tp = pm.run(ansatz)
cnots = ansatz_tp.count_ops()['cx']
score = cnots

accuract_thres = 4.0 # in mHa
energy = result.optimal_value

if ansatz_type == "TwoLocal":
    result_dict = {
        'optimizer': optimizer.__class__.__name__,
        'mapping': converter.mapper.__class__.__name__,
        'ansatz': ansatz.__class__.__name__,
        'rotation blocks': rotation_blocks,
        'entanglement_blocks': entanglement_blocks,
        'entanglement': entanglement,
        'repetitions': repetitions,
        'skip_final_rotation_layer': skip_final_rotation_layer,
        'energy (Ha)': energy,
        'error (mHa)': (energy-exact_energy)*1000,
        'pass': (energy-exact_energy)*1000 <= accuract_thres,
        '# of parameters': len(result.optimal_point),
        'final parameters': result.optimal_point,
        '# of evaluations': result.optimizer_evals,
        'optimizer time': result.optimizer_time,
        '# of qubits': int(qubit_op.num_qubits),
        '# of CNOTs': cnots,
        'score': score}
else:
    result_dict = {
        'optimizer': optimizer.__class__.__name__,
        'mapping': converter.mapper.__class__.__name__,
        'ansatz': ansatz.__class__.__name__,
        'rotation blocks': None,
        'entanglement_blocks': None,
        'entanglement': None,
        'repetitions': None,
        'skip_final_rotation_layer': None,
        'energy (Ha)': energy,
        'error (mHa)': (energy-exact_energy)*1000,
        'pass': (energy-exact_energy)*1000 <= accuract_thres,
        '# of parameters': len(result.optimal_point),
        'final parameters': result.optimal_point,
        '# of evaluations': result.optimizer_evals,
        'optimizer time': result.optimizer_time,
        '# of qubits': int(qubit_op.num_qubits),
        '# of CNOTs': cnots,
        'score': score}

# Plot the results
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
ax.set_xlabel('Iterations')
ax.set_ylabel('Energy')
ax.grid()
fig.text(0.7, 0.75, f'Energy: {result.optimal_value:.3f}\nScore: {score:.0f}')
plt.title(f"{result_dict['optimizer']}-{result_dict['mapping']}\n{result_dict['ansatz']}")
ax.plot(counts, values)
ax.axhline(exact_energy, linestyle='--')
fig_title = f"\
{result_dict['optimizer']}-\
{result_dict['mapping']}-\
{result_dict['ansatz']}-\
Energy({result_dict['energy (Ha)']:.3f})-\
Score({result_dict['score']:.0f})\
.png"
fig.savefig(fig_title, dpi=300)

# Display and save the data
import pandas as pd
import os.path
filename = 'results_LiH.csv'
if os.path.isfile(filename):
    result_df = pd.read_csv(filename)
    result_df = result_df.append([result_dict])
else:
    result_df = pd.DataFrame.from_dict([result_dict])
result_df.to_csv(filename)
result_df[['optimizer','ansatz', '# of qubits', '# of parameters','rotation blocks', 'entanglement_blocks', 'entanglement', 'repetitions', 'error (mHa)', 'pass', 'score']]
# -

from qc_grader import grade_ex5
freeze_core = True # Set to True if you froze the core electrons.
grade_ex5(ansatz,qubit_op,result,freeze_core)

# With a hand-made quantum circuit we reached Score 3 🎉

# Submit the answer. You can resubmit as many times as you like.
from qc_grader import submit_ex5
freeze_core = True # Set to True if you froze the core electrons.
submit_ex5(ansatz,qubit_op,result,freeze_core)

# # Answers to the Part 1 questions

# <div class="alert alert-block alert-danger">
#
# <b> Exercise 1</b>
#
# Inspect the properties of `qmolecule` and answer the following questions.
#
# 1. We need to know the basic properties of the molecule. How many electrons in total does the given system have?
# 2. How many molecular orbitals are there?
# 3. How many spin orbitals are there?
# 4. How many qubits are needed to simulate this molecule with the Jordan-Wigner transformation?
# 5. What is the value of the nuclear repulsion energy?
#
# </div>
#
# <div class="alert alert-block alert-success">
#
# <b>Solution</b>
#
# 1. `n_el = qmolecule.num_alpha + qmolecule.num_beta`
#
# 2. `n_mo = qmolecule.num_molecular_orbitals`
#
# 3. `n_so = 2 * qmolecule.num_molecular_orbitals`
#
# 4. `n_q = 2* qmolecule.num_molecular_orbitals`
#
# 5. `e_nn = qmolecule.nuclear_repulsion_energy`
#
# </div>

# ## Additional information
#
# **Created by:** <NAME>, <NAME>, <NAME>
#
# **Version:** 1.0.0
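# The scoring logic above reduces to two numbers: the error in mHa, `(E_VQE - E_exact) * 1000`, which must stay at or below 4 mHa, and the score, which is simply the CNOT count of the unrolled circuit. A small stdlib-only sketch of that bookkeeping (`grade` is a hypothetical helper, and the VQE energy below is constructed from the 2.339013 mHa error reported in this solution's filename, not recomputed):

```python
def grade(vqe_energy: float, exact_energy: float, cnot_count: int,
          threshold_mha: float = 4.0) -> dict:
    """Mimic the notebook's pass/score bookkeeping (illustrative only)."""
    error_mha = (vqe_energy - exact_energy) * 1000  # Ha -> mHa
    return {
        'error (mHa)': error_mha,
        'pass': error_mha <= threshold_mha,   # chemical-accuracy check
        'score': cnot_count,                  # lower is better
    }

# Exact LiH electronic energy from the notebook, plus a 2.339013 mHa offset
exact = -1.088706015734737
report = grade(exact + 0.002339013, exact, 3)
print(report)
```

# This makes the trade-off explicit: any circuit change that removes CNOTs lowers the score, as long as the energy error stays under the 4 mHa threshold.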
solutions by participants/ex5/ex5-ja-SoyaSaijo-3cnot-2.339013mHa-16params.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: uncertify-env # language: python # name: uncertify-env # --- # + # %load_ext autoreload # %autoreload 2 from context import uncertify # + import logging from uncertify.log import setup_logging setup_logging() LOG = logging.getLogger(__name__) # Matplotlib DEBUG logging spits out a whole bunch of crap mpl_logger = logging.getLogger('matplotlib') mpl_logger.setLevel(logging.WARNING) # + pycharm={"name": "#%%\n"} LOG.info(f'Your code goes here... "{uncertify.__package__}" loaded successfully from context.py') # + import torch import random import matplotlib.pyplot as plt import numpy as np from uncertify.data.utils import gauss_2d_tensor_image from uncertify.evaluation.statistics import get_entropy, rec_error_entropy_batch_stat from uncertify.data.dataloaders import dataloader_factory, DatasetType from uncertify.visualization.plotting import setup_plt_figure from uncertify.visualization import entropy_experiments import seaborn as sns from uncertify.common import DATA_DIR_PATH, HD_DATA_PATH # + example_factory = entropy_experiments.ExampleFactory(shape=(200, 200)) flat_image = example_factory.create_sample('flat') entropy_experiments.plot_image_and_entropy(flat_image) gauss_noise_image = example_factory.create_sample('gauss_noise') entropy_experiments.plot_image_and_entropy(gauss_noise_image) checkerboard_image = example_factory.create_sample('checkerboard') entropy_experiments.plot_image_and_entropy(checkerboard_image) gauss_images = example_factory.create_sample('centered_gauss_blobs') entropy_experiments.plot_images_and_entropy(gauss_images) circle_images = example_factory.create_sample('centered_circles') entropy_experiments.plot_images_and_entropy(gauss_images) # + BATCH_SIZE = 155 SHUFFLE_VAL = True brats_t2_path = HD_DATA_PATH / 'processed/brats17_t2_bc_std_bv3.5.hdf5' _, 
brats_val_t2_dataloader = dataloader_factory(DatasetType.BRATS17, batch_size=BATCH_SIZE, val_set_path=brats_t2_path, shuffle_val=SHUFFLE_VAL) # - for background in [0.0]: torch.manual_seed(0) for input_batch in brats_val_t2_dataloader: entropy_experiments.plot_entropy_segmentations(input_batch, add_steady_background=background, add_gauss_blobs=True, add_circles=False, zero_out_seg=False, normalize=True) break
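# The comparisons above (flat vs. noise vs. checkerboard images) rest on the Shannon entropy of an image's pixel-value histogram: a constant image has zero entropy, while uniform noise approaches the maximum of `log2(bins)` bits. A stdlib-only sketch of that quantity (`histogram_entropy` is a simplified stand-in; `get_entropy` in `uncertify` may be implemented differently):

```python
import math
import random
from collections import Counter

def histogram_entropy(pixels, bins=16):
    """Shannon entropy (in bits) of a histogram over pixel values in [0, 1]."""
    counts = Counter(min(int(p * bins), bins - 1) for p in pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(0)
flat = [0.5] * 10_000                             # constant image: one histogram bin
noise = [random.random() for _ in range(10_000)]  # uniform noise: all bins populated

print(histogram_entropy(flat))   # zero entropy for a constant image
print(histogram_entropy(noise))  # close to log2(16) = 4 bits
```

# The same intuition carries over to the reconstruction-error maps: structured residuals concentrate probability mass in few bins and score low, while diffuse noise-like residuals score high.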
notebooks/entropy.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python
#     language: python
#     name: conda-env-python-py
# ---

# <a href="https://cognitiveclass.ai/">
#     <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/CCLog.png" width="200" align="center">
# </a>

# <h1>Python - Writing Your First Python Code!</h1>

# <p><strong>Welcome!</strong> This notebook will teach you the basics of the Python programming language. Although the information presented here is quite basic, it is an important foundation that will help you read and write Python code. By the end of this notebook, you'll know the basics of Python, including how to write basic commands, understand some basic types, and how to perform simple operations on them.</p>

# <div class="alert alert-block alert-info" style="margin-top: 20px">
#     <a href="https://cocl.us/topNotebooksPython101Coursera">
#         <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/TopAd.png" width="750" align="center">
#     </a>
# </div>

# <h2>Table of Contents</h2>
# <div class="alert alert-block alert-info" style="margin-top: 20px">
#     <ul>
#         <li>
#             <a href="#hello">Say "Hello" to the world in Python</a>
#             <ul>
#                 <li><a href="#version">What version of Python are we using?</a></li>
#                 <li><a href="#comments">Writing comments in Python</a></li>
#                 <li><a href="#errors">Errors in Python</a></li>
#                 <li><a href="#python_error">Does Python know about your error before it runs your code?</a></li>
#                 <li><a href="#exercise">Exercise: Your First Program</a></li>
#             </ul>
#         </li>
#         <li>
#             <a href="#types_objects">Types of objects in Python</a>
#             <ul>
#                 <li><a href="#int">Integers</a></li>
#                 <li><a href="#float">Floats</a></li>
#                 <li><a href="#convert">Converting from one object type to a different object type</a></li>
#                 <li><a href="#bool">Boolean data type</a></li>
#                 <li><a href="#exer_type">Exercise: Types</a></li>
#             </ul>
#         </li>
#         <li>
#             <a href="#expressions">Expressions and Variables</a>
#             <ul>
#                 <li><a href="#exp">Expressions</a></li>
#                 <li><a href="#exer_exp">Exercise: Expressions</a></li>
#                 <li><a href="#var">Variables</a></li>
#                 <li><a href="#exer_exp_var">Exercise: Expression and Variables in Python</a></li>
#             </ul>
#         </li>
#     </ul>
#     <p>
#         Estimated time needed: <strong>25 min</strong>
#     </p>
# </div>
#
# <hr>

# <h2 id="hello">Say "Hello" to the world in Python</h2>

# When learning a new programming language, it is customary to start with a "hello world" example. As simple as it is, this one line of code will ensure that we know how to print a string as output and how to execute code within cells in a notebook.

# <hr/>
# <div class="alert alert-success alertsuccess" style="margin-top: 20px">
#     [Tip]: To execute the Python code in the code cell below, click on the cell to select it and press <kbd>Shift</kbd> + <kbd>Enter</kbd>.
# </div>
# <hr/>

# +
# Try your first Python output

print('Hello, Python!')
# -

# After executing the cell above, you should see that Python prints <code>Hello, Python!</code>. Congratulations on running your first Python code!

# <hr/>
# <div class="alert alert-success alertsuccess" style="margin-top: 20px">
#     [Tip:] <code>print()</code> is a function. You passed the string <code>'Hello, Python!'</code> as an argument to instruct Python on what to print.
# </div>
# <hr/>

# <h3 id="version">What version of Python are we using?</h3>

# <p>
#     There are two popular versions of the Python programming language in use today: Python 2 and Python 3. The Python community has decided to move on from Python 2 to Python 3, and many popular libraries have announced that they will no longer support Python 2.
# </p>
# <p>
#     Since Python 3 is the future, in this course we will be using it exclusively. How do we know that our notebook is executed by a Python 3 runtime?
We can look in the top-right hand corner of this notebook and see "Python 3".
# </p>
# <p>
#     We can also ask Python directly and obtain a detailed answer. Try executing the following code:
# </p>

# +
# Check the Python Version

import sys
print(sys.version)
# -

# <hr/>
# <div class="alert alert-success alertsuccess" style="margin-top: 20px">
#     [Tip:] <code>sys</code> is a built-in module that contains many system-specific parameters and functions, including the Python version in use. Before using it, we must explicitly <code>import</code> it.
# </div>
# <hr/>

# <h3 id="comments">Writing comments in Python</h3>

# <p>
#     In addition to writing code, note that it's always a good idea to add comments to your code. It will help others understand what you were trying to accomplish (the reason why you wrote a given snippet of code). Not only does this help <strong>other people</strong> understand your code, it can also serve as a reminder <strong>to you</strong> when you come back to it weeks or months later.</p>
#
# <p>
#     To write comments in Python, use the number symbol <code>#</code> before writing your comment. When you run your code, Python will ignore everything past the <code>#</code> on a given line.
# </p>

# +
# Practice on writing comments

print('Hello, Python!') # This line prints a string
# print('Hi')
# -

# <p>
#     After executing the cell above, you should notice that <code>This line prints a string</code> did not appear in the output, because it was a comment (and thus ignored by Python).
# </p>
# <p>
#     The second line was also not executed because <code>print('Hi')</code> was preceded by the number sign (<code>#</code>) as well! Since this isn't an explanatory comment from the programmer, but an actual line of code, we might say that the programmer <em>commented out</em> that second line of code.
# </p>

# <h3 id="errors">Errors in Python</h3>

# <p>Everyone makes mistakes.
For many types of mistakes, Python will tell you that you have made a mistake by giving you an error message. It is important to read error messages carefully to really understand where you made a mistake and how you may go about correcting it.</p> # <p>For example, if you spell <code>print</code> as <code>frint</code>, Python will display an error message. Give it a try:</p> # + # Print string as error message frint("Hello, Python!") # - # <p>The error message tells you: # <ol> # <li>where the error occurred (more useful in large notebook cells or scripts), and</li> # <li>what kind of error it was (NameError)</li> # </ol> # <p>Here, Python attempted to run the function <code>frint</code>, but could not determine what <code>frint</code> is since it's not a built-in function and it has not been previously defined by us either.</p> # <p> # You'll notice that if we make a different type of mistake, by forgetting to close the string, we'll obtain a different error (i.e., a <code>SyntaxError</code>). Try it below: # </p> # + # Try to see build in error message print("Hello, Python!) # - # <h3 id="python_error">Does Python know about your error before it runs your code?</h3> # Python is what is called an <em>interpreted language</em>. Compiled languages examine your entire program at compile time, and are able to warn you about a whole class of errors prior to execution. In contrast, Python interprets your script line by line as it executes it. Python will stop executing the entire program when it encounters an error (unless the error is expected and handled by the programmer, a more advanced subject that we'll cover later on in this course). 
# Try to run the code in the cell below and see what happens: # + # Print string and error to see the running order print("This will be printed") frint("This will cause an error") print("This will NOT be printed") # - # <h3 id="exercise">Exercise: Your First Program</h3> # <p>Generations of programmers have started their coding careers by simply printing "Hello, world!". You will be following in their footsteps.</p> # <p>In the code cell below, use the <code>print()</code> function to print out the phrase: <code>Hello, world!</code></p> # Write your code below and press Shift+Enter to execute print("helloworld") # Double-click __here__ for the solution. # # <!-- Your answer is below: # # print("Hello, world!") # # --> # <p>Now, let's enhance your code with a comment. In the code cell below, print out the phrase: <code>Hello, world!</code> and comment it with the phrase <code>Print the traditional hello world</code> all in one line of code.</p> # Write your code below and press Shift+Enter to execute print("helloworld") #Print the traditional hello world # Double-click __here__ for the solution. # # <!-- Your answer is below: # # print("Hello, world!") # Print the traditional hello world # # --> # # <hr> # <h2 id="types_objects" align="center">Types of objects in Python</h2> # <p>Python is an object-oriented language. There are many different types of objects in Python. Let's start with the most common object types: <i>strings</i>, <i>integers</i> and <i>floats</i>. Anytime you write words (text) in Python, you're using <i>character strings</i> (strings for short). The most common numbers, on the other hand, are <i>integers</i> (e.g. -1, 0, 100) and <i>floats</i>, which represent real numbers (e.g. 
3.14, -42.0).</p> # <a align="center"> # <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%201/Images/TypesObjects.png" width="600"> # </a> # <p>The following code cells contain some examples.</p> # + # Integer 11 # + # Float 2.14 # + # String "Hello, Python 101!" # - # <p>You can get Python to tell you the type of an expression by using the built-in <code>type()</code> function. You'll notice that Python refers to integers as <code>int</code>, floats as <code>float</code>, and character strings as <code>str</code>.</p> # + # Type of 12 type(12) # + # Type of 2.14 type(2.14) # + # Type of "Hello, Python 101!" type("Hello, Python 101!") # - # <p>In the code cell below, use the <code>type()</code> function to check the object type of <code>12.0</code>. # Write your code below. Don't forget to press Shift+Enter to execute the cell type(12.0) # Double-click __here__ for the solution. # # <!-- Your answer is below: # # type(12.0) # # --> # <h3 id="int">Integers</h3> # <p>Here are some examples of integers. Integers can be negative or positive numbers:</p> # <a align="center"> # <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%201/Images/TypesInt.png" width="600"> # </a> # <p>We can verify this is the case by using, you guessed it, the <code>type()</code> function: # + # Print the type of -1 type(-1) # + # Print the type of 4 type(4) # + # Print the type of 0 type(0) # - # <h3 id="float">Floats</h3> # <p>Floats represent real numbers; they are a superset of integer numbers but also include "numbers with decimals". There are some limitations when it comes to machines representing real numbers, but floating point numbers are a good representation in most cases. You can learn more about the specifics of floats for your runtime environment, by checking the value of <code>sys.float_info</code>. 
This will also tell you the largest and smallest numbers that can be represented with them.</p>
#
# <p>Once again, we can test some examples with the <code>type()</code> function:

# +
# Print the type of 1.0

type(1.0) # Notice that 1 is an int, and 1.0 is a float

# +
# Print the type of 0.5

type(0.5)

# +
# Print the type of 0.56

type(0.56)

# +
# System settings about float type

sys.float_info
# -

# <h3 id="convert">Converting from one object type to a different object type</h3>

# <p>You can change the type of the object in Python; this is called typecasting. For example, you can convert an <i>integer</i> into a <i>float</i> (e.g. 2 to 2.0).</p>
# <p>Let's try it:</p>

# +
# Verify that this is an integer

type(2)
# -

# <h4>Converting integers to floats</h4>
# <p>Let's cast integer 2 to float:</p>

# +
# Convert 2 to a float

float(2)

# +
# Convert integer 2 to a float and check its type

type(float(2))
# -

# <p>When we convert an integer into a float, we don't really change the value (i.e., the significand) of the number. However, if we cast a float into an integer, we could potentially lose some information. For example, if we cast the float 1.1 to integer we will get 1 and lose the decimal information (i.e., 0.1):</p>

# +
# Casting 1.1 to integer will result in loss of information

int(1.1)
# -

# <h4>Converting from strings to integers or floats</h4>

# <p>Sometimes, we can have a string that contains a number within it. If this is the case, we can cast that string that represents a number into an integer using <code>int()</code>:</p>

# +
# Convert a string into an integer

int('1')
# -

# <p>But if you try to do so with a string that is not a perfect match for a number, you'll get an error.
Try the following:</p> # + # Convert a string into an integer with error int('1 or 2 people') # - # <p>You can also convert strings containing floating point numbers into <i>float</i> objects:</p> # + # Convert the string "1.2" into a float float('1.2') # - # <hr/> # <div class="alert alert-success alertsuccess" style="margin-top: 20px"> # [Tip:] Note that strings can be represented with single quotes (<code>'1.2'</code>) or double quotes (<code>"1.2"</code>), but you can't mix both (e.g., <code>"1.2'</code>). # </div> # <hr/> # <h4>Converting numbers to strings</h4> # <p>If we can convert strings to numbers, it is only natural to assume that we can convert numbers to strings, right?</p> # + # Convert an integer to a string str(1) # - # <p>And there is no reason why we shouldn't be able to make floats into strings as well:</p> # + # Convert a float to a string str(1.2) # - # <h3 id="bool">Boolean data type</h3> # <p><i>Boolean</i> is another important type in Python. An object of type <i>Boolean</i> can take on one of two values: <code>True</code> or <code>False</code>:</p> # + # Value true True # - # <p>Notice that the value <code>True</code> has an uppercase "T". The same is true for <code>False</code> (i.e. you must use the uppercase "F").</p> # + # Value false False # - # <p>When you ask Python to display the type of a boolean object it will show <code>bool</code> which stands for <i>boolean</i>:</p> # + # Type of True type(True) # + # Type of False type(False) # - # <p>We can cast boolean objects to other data types. If we cast a boolean with a value of <code>True</code> to an integer or float we will get a one. If we cast a boolean with a value of <code>False</code> to an integer or float we will get a zero. Similarly, if we cast a 1 to a Boolean, you get a <code>True</code>. And if we cast a 0 to a Boolean we will get a <code>False</code>. 
Let's give it a try:</p>

# +
# Convert True to int

int(True)

# +
# Convert 1 to boolean

bool(1)

# +
# Convert 0 to boolean

bool(0)

# +
# Convert True to float

float(True)
# -

# <h3 id="exer_type">Exercise: Types</h3>

# <p>What is the data type of the result of: <code>6 / 2</code>?</p>

# +
# Write your code below. Don't forget to press Shift+Enter to execute the cell
# -

# Double-click __here__ for the solution.
#
# <!-- Your answer is below:
# type(6/2) # float
# -->

# <p>What is the type of the result of: <code>6 // 2</code>? (Note the double slash <code>//</code>.)</p>

# +
# Write your code below. Don't forget to press Shift+Enter to execute the cell
# -

# Double-click __here__ for the solution.
#
# <!-- Your answer is below:
# type(6//2) # int, as the double slashes stand for integer division
# -->

# <hr>

# <h2 id="expressions">Expressions and Variables</h2>

# <h3 id="exp">Expressions</h3>

# <p>Expressions in Python can include operations among compatible types (e.g., integers and floats). For example, basic arithmetic operations like adding multiple numbers:</p>

# +
# Addition operation expression

43 + 60 + 16 + 41
# -

# <p>We can perform subtraction operations using the minus operator. In this case the result is a negative number:</p>

# +
# Subtraction operation expression

50 - 60
# -

# <p>We can do multiplication using an asterisk:</p>

# +
# Multiplication operation expression

5 * 5
# -

# <p>We can also perform division with the forward slash:

# +
# Division operation expression

25 / 5

# +
# Division operation expression

25 / 6
# -

# <p>As seen in the quiz above, we can use the double slash for integer division, where the result is rounded down to the nearest integer (floor division):

# +
# Integer division operation expression

25 // 5

# +
# Integer division operation expression

25 // 6
# -

# <h3 id="exer_exp">Exercise: Expressions</h3>

# <p>Let's write an expression that calculates how many hours there are in 160 minutes:

# +
# Write your code below.
Don't forget to press Shift+Enter to execute the cell # - # Double-click __here__ for the solution. # # <!-- Your answer is below: # 160/60 # # Or # 160//60 # --> # <p>Python follows well accepted mathematical conventions when evaluating mathematical expressions. In the following example, Python adds 30 to the result of the multiplication (i.e., 120). # + # Mathematical expression 30 + 2 * 60 # - # <p>And just like mathematics, expressions enclosed in parentheses have priority. So the following multiplies 32 by 60. # + # Mathematical expression (30 + 2) * 60 # - # <h3 id="var">Variables</h3> # <p>Just like with most programming languages, we can store values in <i>variables</i>, so we can use them later on. For example:</p> # + # Store value into variable x = 43 + 60 + 16 + 41 # - # <p>To see the value of <code>x</code> in a Notebook, we can simply place it on the last line of a cell:</p> # + # Print out the value in variable x # - # <p>We can also perform operations on <code>x</code> and save the result to a new variable:</p> # + # Use another variable to store the result of the operation between variable and value y = x / 60 y # - # <p>If we save a value to an existing variable, the new value will overwrite the previous value:</p> # + # Overwrite variable with new value x = x / 60 x # - # <p>It's a good practice to use meaningful variable names, so you and others can read the code and understand it more easily:</p> # + # Name the variables meaningfully total_min = 43 + 42 + 57 # Total length of albums in minutes total_min # + # Name the variables meaningfully total_hours = total_min / 60 # Total length of albums in hours total_hours # - # <p>In the cells above we added the length of three albums in minutes and stored it in <code>total_min</code>. We then divided it by 60 to calculate total length <code>total_hours</code> in hours. 
You can also do it all at once in a single expression, as long as you use parentheses to add the album lengths before you divide, as shown below.</p>

# +
# Complicated expression

total_hours = (43 + 42 + 57) / 60  # Total hours in a single expression
total_hours
# -

# <p>If you'd rather have total hours as an integer, you can of course replace the floating point division with integer division (i.e., <code>//</code>).</p>

# <h3 id="exer_exp_var">Exercise: Expression and Variables in Python</h3>

# <p>What is the value of <code>x</code> where <code>x = 3 + 2 * 2</code>?</p>

# +
# Write your code below. Don't forget to press Shift+Enter to execute the cell
# -

# Double-click __here__ for the solution.
#
# <!-- Your answer is below:
# 7
# -->
#

# <p>What is the value of <code>y</code> where <code>y = (3 + 2) * 2</code>?</p>

# +
# Write your code below. Don't forget to press Shift+Enter to execute the cell
# -

# Double-click __here__ for the solution.
#
# <!-- Your answer is below:
# 10
# -->

# <p>What is the value of <code>z</code> where <code>z = x + y</code>?</p>

# +
# Write your code below. Don't forget to press Shift+Enter to execute the cell
# -

# Double-click __here__ for the solution.
#
# <!-- Your answer is below:
# 17
# -->

# <hr>

# <h2>The last exercise!</h2>
# <p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href="https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/" target="_blank">this article</a> to learn how to share your work.
# <hr> # <div class="alert alert-block alert-info" style="margin-top: 20px"> # <h2>Get IBM Watson Studio free of charge!</h2> # <p><a href="https://cocl.us/bottemNotebooksPython101Coursera"><img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/BottomAd.png" width="750" align="center"></a></p> # </div> # <h3>About the Authors:</h3> # <p><a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank"><NAME></a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p> # Other contributors: <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a"><NAME></a> # <hr> # <p>Copyright &copy; 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href="https://cognitiveclass.ai/mit-license/">MIT License</a>.</p>
Python (Expressions, Types, Variables ).ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Using sklearn's linear regression, you are asked to: # # Draw a line showing the relationship between the height and the age of the students in the example class: # # - Is this a good technique for this kind of problem? Why? # - What error is being made? Compute the errors your model makes one by one (you have to compute them yourself with Python). Additionally, use the MSE and the RMSE. Is either of them better suited to this problem? # - Plot the correlation matrix. Are the data correlated? lista_alumnos = [["Pedro", 47, 1.80], ["Tomás", 31, 1.80], ["Ana", 39, 1.65], ["Natalio", 29, 1.73], ["Monica", 47, 1.73], ["Jose", 24, 1.75], ["Carolina", 34, 1.64], ["Alberto", 36, 1.60], ["Cristina", 46, 1.70], ["Alba", 29, 1.68], ["Laura", 40, 1.60], ["Luis", 47, 1.69], ["Jaime", 38, 1.60], ["Fernando", 51, 1.75]] # + import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split from sklearn.metrics import mean_squared_error, mean_absolute_error # - df = pd.DataFrame(lista_alumnos, columns = ["Nombre", "Edad", "Altura"]) df.head() # + # plt.scatter(edad, altura); # Define altura and edad before using this, if you want to try it straight away # - X = df[['Edad']] y = df['Altura'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42);
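Before reaching for sklearn, the quantities this exercise asks for can be computed by hand: for a single feature, `LinearRegression` fits ordinary least squares, whose slope and intercept have a closed form. A plain-Python sketch using the ages and heights from `lista_alumnos` (fitting on all 14 students rather than the train/test split, purely for illustration):

```python
import math

edades = [47, 31, 39, 29, 47, 24, 34, 36, 46, 29, 40, 47, 38, 51]
alturas = [1.80, 1.80, 1.65, 1.73, 1.73, 1.75, 1.64, 1.60,
           1.70, 1.68, 1.60, 1.69, 1.60, 1.75]

n = len(edades)
mean_x = sum(edades) / n
mean_y = sum(alturas) / n

# Ordinary least squares for one feature: slope = cov(x, y) / var(x)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(edades, alturas))
         / sum((x - mean_x) ** 2 for x in edades))
intercept = mean_y - slope * mean_x

# Per-student errors (residuals), then MSE and RMSE
errors = [y - (intercept + slope * x) for x, y in zip(edades, alturas)]
mse = sum(e ** 2 for e in errors) / n
rmse = math.sqrt(mse)
print(f"slope={slope:.5f} intercept={intercept:.3f} MSE={mse:.5f} RMSE={rmse:.4f}")
```

RMSE is in the same units as the target (metres), which usually makes it the easier of the two numbers to interpret for this problem.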
Bloque 3 - Machine Learning/01_Supervisado/1-Linear Regression/ejercicios/01_Con_altura.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="5sQVmss-VqC_" colab_type="text" # #Caltech Birds Classification # This project is done using Google Colaboratory. First you have to mount Google Drive in Colab and place the dataset, in zipped format, on Google Drive to access it. This dataset contains bird images from 200 categories. There are 5994 training images and 5794 test images. I have achieved 77.32% top-1 accuracy and 94.30% top-5 accuracy on the test set. This project uses a ResNet18 (pretrained on ImageNet). # + [markdown] id="PjyWdaRlWIY-" colab_type="text" # The next cell installs PyTorch on Google Colab # + id="KSgSPtVRlSwc" colab_type="code" colab={} # !pip3 install http://download.pytorch.org/whl/cu80/torch-0.3.0.post4-cp36-cp36m-linux_x86_64.whl # !pip3 install torchvision # + id="gktAVH9AmwO4" colab_type="code" colab={} # !unzip drive/Bird/Birds.zip # this line unzips the birds dataset from Google Drive into the Colab filesystem for further processing. # + id="bTutjsYIpA3A" colab_type="code" colab={} # here are the necessary imports from __future__ import print_function, division import torch import torch.nn as nn import torch.optim as optim from torch.autograd import Variable import numpy as np import torchvision from torchvision import datasets, models, transforms import matplotlib.pyplot as plt import time import os import seaborn as sns plt.ion() # + [markdown] id="UbrmvLcwWLr2" colab_type="text" # #Load Data # We will use the torchvision and torch.utils.data packages for loading the data. For training, I have applied transformations such as random scaling, cropping, and flipping. This helps the network generalize, leading to better performance. I also made sure that the input data is resized to 224x224 pixels, as required by the pre-trained networks.
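The deterministic test-time pipeline described here (resize so the short side is 256 pixels, then take a centered 224x224 crop) is simple arithmetic, and the same geometry the notebook's later `process_image` function re-implements with PIL. A library-free sketch of that geometry:

```python
def resize_short_side(width, height, target=256):
    # Scale so the shorter side becomes `target`, preserving aspect ratio
    if width < height:
        scale = target / width
    else:
        scale = target / height
    return round(width * scale), round(height * scale)

def center_crop_box(width, height, crop=224):
    # (left, upper, right, lower) box, in the form PIL's Image.crop expects
    left = (width - crop) // 2
    upper = (height - crop) // 2
    return left, upper, left + crop, upper + crop

w, h = resize_short_side(500, 375)  # a landscape photo: short side becomes 256
box = center_crop_box(w, h)         # centered 224x224 box inside the resized image
print((w, h), box)
```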
# # The testing set is used to measure the model's performance on data it hasn't seen yet. For this I did not perform any scaling or rotation transformations, but I resized and then cropped the images to the appropriate size. # # The pre-trained network I have used was trained on the ImageNet dataset, where each color channel was normalized separately. For both sets I have normalized the means and standard deviations of the images to what the network expects. For the means, it's [0.485, 0.456, 0.406] and for the standard deviations [0.229, 0.224, 0.225], calculated from the ImageNet images. These values shift each color channel to be roughly centered at 0 with unit standard deviation. # + id="vMGMhoC-pK8n" colab_type="code" colab={} # Data augmentation and normalization for training # Just normalization for validation data_transforms = { 'train': transforms.Compose([ transforms.Resize(256), transforms.RandomRotation(45), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'test': transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } data_dir = 'Birds' # loading datasets with PyTorch ImageFolder image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'test']} # defining data loaders to load data using image_datasets and transforms, here we also specify batch size for the mini batch dataloders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=8, shuffle=True, num_workers=4) for x in ['train', 'test']} dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'test']} class_names = image_datasets['train'].classes use_gpu = torch.cuda.is_available() # + [markdown] id="SG1uzQ1iXAQz" colab_type="text" # Sizes of training and test datasets # + id="XjD7ZMJ_qpYp" colab_type="code"
colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="41f6fd39-8e2a-4eda-a05f-9e16f0e311a5" dataset_sizes # + [markdown] id="oPrYtAjQXJZb" colab_type="text" # #Visualize a few images # Let's visualize a few training images so as to understand the data augmentations. # + id="D-iZY_vhpxAf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="4005e01c-aaa8-4379-df98-023cf622570c" def imshow(inp, title=None): """Imshow for Tensor.""" inp = inp.numpy().transpose((1, 2, 0)) mean = np.array([0.485, 0.456, 0.406]) std = np.array([0.229, 0.224, 0.225]) inp = std * inp + mean inp = np.clip(inp, 0, 1) plt.imshow(inp) if title is not None: plt.title(title) plt.pause(0.001) # pause a bit so that plots are updated # Get a batch of training data inputs, classes = next(iter(dataloders['train'])) # Make a grid from batch out = torchvision.utils.make_grid(inputs) imshow(out, title=[class_names[x] for x in classes]) # + [markdown] id="76uC1hXfXZrk" colab_type="text" # #Training the model # Now, let's write a general function to train a model. I have also written code to save the best checkpoint to Google Drive for use next time. # + id="afcvSjddsuv_" colab_type="code" colab={} def train_model(model, criterion, optimizer, num_epochs=10): since = time.time() best_model_wts = model.state_dict() best_acc = 0.0 for epoch in range(num_epochs): print('Epoch {}/{}'.format(epoch, num_epochs - 1)) print('-' * 10) # Each epoch has a training and validation phase for phase in ['train', 'test']: if phase == 'train': #scheduler.step() model.train(True) # Set model to training mode else: model.train(False) # Set model to evaluate mode running_loss = 0.0 running_corrects = 0 # Iterate over data.
for data in dataloders[phase]: # get the inputs inputs, labels = data # wrap them in Variable if use_gpu: inputs = Variable(inputs.cuda()) labels = Variable(labels.cuda()) else: inputs, labels = Variable(inputs), Variable(labels) # zero the parameter gradients optimizer.zero_grad() # forward outputs = model(inputs) _, preds = torch.max(outputs.data, 1) loss = criterion(outputs, labels) # backward + optimize only if in training phase if phase == 'train': loss.backward() optimizer.step() # statistics running_loss += loss.data[0] running_corrects += torch.sum(preds == labels.data) epoch_loss = running_loss / dataset_sizes[phase] epoch_acc = running_corrects.float() / dataset_sizes[phase] print('{} Loss: {:.4f} Acc: {:.4f}'.format( phase, epoch_loss, epoch_acc)) # deep copy the model if phase == 'test' and epoch_acc > best_acc: best_acc = epoch_acc best_model_wts = model.state_dict() state = {'model':model_ft.state_dict(),'optim':optimizer_ft.state_dict()} torch.save(state,'drive/Bird/point_resnet_best.pth') print() time_elapsed = time.time() - since print('Training complete in {:.0f}m {:.0f}s'.format( time_elapsed // 60, time_elapsed % 60)) print('Best test Acc: {:4f}'.format(best_acc)) # load best model weights model.load_state_dict(best_model_wts) return model # + [markdown] id="PZkh0x9FXb_v" colab_type="text" # #Visualizing the model predictions # Generic function to display predictions for a few images # + id="6yUjTuX1szCP" colab_type="code" colab={} def visualize_model(model, num_images=8): images_so_far = 0 fig = plt.figure() for i, data in enumerate(dataloders['test']): inputs, labels = data #print(labels) if use_gpu: inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda()) else: inputs, labels = Variable(inputs), Variable(labels) #print(labels) #_, lab = torch.max(labels.data, 1) outputs = model(inputs) #print(outputs) _, preds = torch.max(outputs.data, 1) #print(preds) for j in range(inputs.size()[0]): images_so_far += 1 ax = 
plt.subplot(num_images//2, 2, images_so_far) ax.axis('off') ax.set_title('class: {} predicted: {}'.format(class_names[labels.data[j]], class_names[preds[j]])) imshow(inputs.cpu().data[j]) if images_so_far == num_images: return # + [markdown] id="ryJ5vtkVXhRo" colab_type="text" # #Finetuning the convnet # Load a pretrained ResNet-18 model and reset the final fully connected layer. # + id="tjOZbefNs2W-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="cf9c63c5-52ae-4eee-e5a5-d8e6fb8203c9" model_ft = models.resnet18(pretrained=True) # loading a resnet18 model pre-trained on ImageNet from torchvision models num_ftrs = model_ft.fc.in_features model_ft.fc = nn.Linear(num_ftrs, 200) # changing the last layer for this dataset by setting last layer neurons to 200 as this dataset has 200 categories if use_gpu: # if gpu is available then use it model_ft = model_ft.cuda() #model_ft = model_ft.float() criterion = nn.CrossEntropyLoss() # defining loss function # Observe that all parameters are being optimized optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.0001, momentum=0.9) # + [markdown] id="WIjCTS2lX4Rm" colab_type="text" # You can load a checkpoint from your Google Drive or any other place if you have saved it.
You have to load the weights of both the model and the optimizer. # + id="gLweeTCOs88w" colab_type="code" colab={} checkpoint = torch.load('path to model') #checkpoint = torch.load('drive//Bird/point_resnet_best.pth') model_ft.load_state_dict(checkpoint['model']) optimizer_ft.load_state_dict(checkpoint['optim']) # + [markdown] id="YkNXw1v-YxST" colab_type="text" # #Train and evaluate # + id="XpfSb7TytDbO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 4301} outputId="57b94f56-2737-4c3e-d458-bdb22055a02c" model_ft = train_model(model_ft, criterion, optimizer_ft,num_epochs=50) # + [markdown] id="wSA7mHOAY2lm" colab_type="text" # #Checking Model's Predictions # + id="LoSmnTp416QY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 905} outputId="cf7fe259-e4f6-4727-a4db-fde78e6592de" visualize_model(model_ft) # + [markdown] id="Dgw13mODZIbu" colab_type="text" # #Finding top-1 & top-5 accuracy # Here I have defined a useful class and a function to find the top-1 and top-5 accuracies # + id="0gbYM9SG19U9" colab_type="code" colab={} class AverageMeter(object): """Computes and stores the average and current value""" def __init__(self): self.reset() def reset(self): self.val = 0 self.avg = 0 self.sum = 0 self.count = 0 def update(self, val, n=1): self.val = val self.sum += val * n self.count += n self.avg = self.sum / self.count # + id="ztOLl0h32E8g" colab_type="code" colab={} def accuracy(output, target, topk=(1,)): """Computes the precision@k for the specified values of k""" #with torch.no_grad(): maxk = max(topk) batch_size = target.size(0) _, pred = output.topk(maxk, 1, True, True) pred = pred.t() correct = pred.eq(target.view(1, -1).expand_as(pred)) res = [] for k in topk: correct_k = correct[:k].view(-1).float().sum(0, keepdim=True) res.append(correct_k.mul_(100.0 / batch_size)) return res # + id="Fzf62TrS2H8f" colab_type="code" colab={} def calc_accuracy(model, data): model.eval() if use_gpu: model.cuda() top1 = AverageMeter() top5 =
AverageMeter() for idx, (inputs, labels) in enumerate(dataloders[data]): if use_gpu: inputs, labels = inputs.cuda(), labels.cuda() # obtain the outputs from the model outputs = model.forward(Variable(inputs)) prec1, prec5 = accuracy(outputs, Variable(labels), topk=(1, 5)) top1.update(prec1[0], inputs.size(0)) top5.update(prec5[0], inputs.size(0)) return top1, top5 top1, top5 = calc_accuracy(model_ft, 'test') # + [markdown] id="yYQT7RdjZTGG" colab_type="text" # #Top-1 Accuracy # + id="63TZRiRA2THk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="f1b35cd3-9dc6-4a32-9878-c2ded0381187" top1.avg # + [markdown] id="K6B0VvPGZW0D" colab_type="text" # #Top-5 Accuracy # + id="9pcFjl112VC-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="9a7d68f5-5fb3-490f-d868-a436c59c5f33" top5.avg # + [markdown] id="9pAVjoT5cbno" colab_type="text" # #Inference for classification # Now I have written a function to use a trained network for inference. That is, I'll pass an image into the network and predict the class of the bird in the image. I have written a function called predict that takes an image and a model, then returns the top K most likely classes along with the probabilities. It should look like # # probs, classes = predict(image_path, model) print(probs) print(classes) # # [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339] ['70', '3', '45', '62', '55'] First I have to handle processing the input image such that it can be used in my network. # # #Image Preprocessing # I want to use PIL to load the image. It's best to write a function that preprocesses the image so it can be used as input for the model. This function should process the images in the same manner used for training. # # First, I resized the images where the shortest side is 256 pixels, keeping the aspect ratio. This was done with the thumbnail or resize methods. Then I cropped out the center 224x224 portion of the image.
# # Color channels of images are typically encoded as integers 0-255, but the model expects floats 0-1, so I converted the values. It's easiest with a Numpy array, which I got from a PIL image like so: np_image = np.array(pil_image). # # As before, the network expects the images to be normalized in a specific way. For the means, it's [0.485, 0.456, 0.406] and for the standard deviations [0.229, 0.224, 0.225]. I subtracted the means from each color channel, then divided by the standard deviation. # # And finally, PyTorch expects the color channel to be the first dimension, but it's the third dimension in the PIL image and Numpy array. I reordered the dimensions using ndarray.transpose. The color channel needs to be first, retaining the order of the other two dimensions. # + id="CIFr77GhccIg" colab_type="code" colab={} def process_image(image_path): ''' Scales, crops, and normalizes a PIL image for a PyTorch model, returns a Numpy array ''' # Open the image from PIL import Image img = Image.open(image_path) # Resize if img.size[0] > img.size[1]: img.thumbnail((10000, 256)) else: img.thumbnail((256, 10000)) # Crop left_margin = (img.width-224)/2 bottom_margin = (img.height-224)/2 right_margin = left_margin + 224 top_margin = bottom_margin + 224 img = img.crop((left_margin, bottom_margin, right_margin, top_margin)) # Normalize img = np.array(img)/255 mean = np.array([0.485, 0.456, 0.406]) #provided mean std = np.array([0.229, 0.224, 0.225]) #provided std img = (img - mean)/std # Move color channels to first dimension as expected by PyTorch img = img.transpose((2, 0, 1)) return img # + id="tSsEYuJ5czZA" colab_type="code" colab={} def imshow(image, ax=None, title=None): if ax is None: fig, ax = plt.subplots() if title: plt.title(title) # PyTorch tensors assume the color channel is first # but matplotlib assumes it is the third dimension image = image.transpose((1, 2, 0)) # Undo preprocessing mean = np.array([0.485, 0.456, 0.406]) std = np.array([0.229, 0.224,
0.225]) image = std * image + mean # Image needs to be clipped between 0 and 1 image = np.clip(image, 0, 1) ax.imshow(image) return ax # + id="opnb8wLnc766" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 272} outputId="ceb46e91-5942-4237-ef32-8b9d559bec02" import os os.listdir('Birds/test/018.Spotted_Catbird') # os.listdir expects a directory, not a file path # + id="ysKFbiGgc4_q" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 367} outputId="54f726ec-0d63-4a5e-fed1-72eb5ff885c0" image_path = 'Birds/test/018.Spotted_Catbird/Spotted_Catbird_0001_796797.jpg' img = process_image(image_path) imshow(img) # + [markdown] id="g3Rzv7fTczKO" colab_type="text" # #Class Prediction # Once I have got images in the correct format, I have written a function for making predictions with my model. A common practice is to predict the top 5 or so (usually called top-K) most probable classes. I calculate the class probabilities, then find the K largest values. # # To get the top K largest values in a tensor I have used x.topk(k). This method returns both the highest k probabilities and the indices of those probabilities corresponding to the classes. # + id="2d4QfOW9dX2a" colab_type="code" colab={} def predict(image_path, model, top_num=5): # Process image img = process_image(image_path) # Numpy -> Tensor image_tensor = torch.from_numpy(img).type(torch.FloatTensor) # Add batch of size 1 to image model_input = image_tensor.unsqueeze(0) # Probs: the final layer outputs raw logits, so apply softmax to get probabilities probs = nn.functional.softmax(model.forward(Variable(model_input.cuda())), dim=1) # Top probs top_probs, top_labs = probs.topk(top_num) top_probs, top_labs = top_probs.data, top_labs.data top_probs = top_probs.cpu().numpy().tolist()[0] top_labs = top_labs.cpu().numpy().tolist()[0] top_birds = [class_names[lab] for lab in top_labs] return top_probs, top_birds # + [markdown] id="50c4BVByeGrJ" colab_type="text" # #Sanity Checking # Now I have used a trained model for predictions.
Even if the testing accuracy is high, it's always good to check that there aren't obvious bugs. I have used matplotlib to plot the probabilities for the top 5 classes as a bar graph, along with the input image. # + id="oKY8aitPd3X-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 640} outputId="ed4b0778-5cbf-483c-8479-99104330c3b8" def plot_solution(image_path, model): # Set up plot plt.figure(figsize = (6,10)) ax = plt.subplot(2,1,1) # Set up title title_ = image_path.split('/')[2] # Plot bird img = process_image(image_path) imshow(img, ax, title = title_); # Make prediction probs, birds = predict(image_path, model) # Plot bar chart plt.subplot(2,1,2) sns.barplot(x=probs, y=birds, color=sns.color_palette()[0]); plt.show() image_path = 'Birds/test/018.Spotted_Catbird/Spotted_Catbird_0026_796818.jpg' plot_solution(image_path, model_ft)
Birds-Task-PyTorch/Cub_Birds_200_2011_Classification.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python (condadev) # language: python # name: condadev # --- # # VT Graphs in Jupyter Notebook # # In this notebook we will explore how to obtain attributes and relationships for different entities using VirusTotal API v3. Finally, we can render all the relationships we have obtained using VTGraph. # ## Import libraries # + from msticpy.sectools.vtlookupv3 import VTLookupV3, VTEntityType import networkx as nx import matplotlib.pyplot as plt import os import pandas as pd pd.set_option('max_colwidth', 200) try: import nest_asyncio except ImportError as err: print("nest_asyncio is required for running VTLookup3 in notebooks.") resp = input("Install now? (y/n)") if resp.strip().lower().startswith("y"): # %pip install nest_asyncio import nest_asyncio else: raise err nest_asyncio.apply() # - # ## Create Lookup instance from msticpy.common.provider_settings import get_provider_settings # Try to obtain key from env variable vt_key = os.environ.get("VT_API_KEY") if not vt_key: # if not, try provider settings to get it from msticpyconfig.yaml vt_key = get_provider_settings("TIProviders")["VirusTotal"].args["AuthKey"] # Instantiate vt_lookup object vt_lookup = VTLookupV3(vt_key) # The ID (SHA256 hash) of the file to lookup FILE = 'ed01ebfbc9eb5bbea545af4d01bf5f1071661840480439c6e5babe8e080e41aa' example_attribute_df = vt_lookup.lookup_ioc(observable=FILE, vt_type='file') example_attribute_df # ### Example showing all details for this ID # We can use get_object to retrieve all details # or just look it up directly at https://www.virustotal.com/gui/home/search vt_lookup.get_object(FILE, "file") example_relationship_df = vt_lookup.lookup_ioc_relationships( observable=FILE, vt_type='file', relationship='execution_parents') example_relationship_df # ### Obtaining results for multiple entities # # The function
`lookup_iocs` is able to obtain attributes for all the rows in a DataFrame. If no `observable_column` and `observable_type` parameters are specified, the function will obtain the attributes of all the entities that are in the column `target`, and will obtain their types from the `target_type` column. # # This function is especially useful when a user has obtained a set of relationships, and would like to obtain their attributes. # # > **Note:** it can take some time to fetch results, depending on the number of nodes and relationships. example_multiple_attribute_df = vt_lookup.lookup_iocs(example_relationship_df) example_multiple_attribute_df # Also, if we would like to obtain the relationships for a set of entities, we have the function `lookup_iocs_relationships`. Here also, if no `observable_column` and `observable_type` parameters are specified, the function will obtain the relationships of all the entities that are in the column `target`, and will obtain their types from the `target_type` column. # # > **Note:** it can take some time to fetch results example_multiple_relationship_df = vt_lookup.lookup_iocs_relationships(example_relationship_df, 'contacted_domains') example_multiple_relationship_df # ## Simple plot of the relationships # We can display a simple plot of the relationships locally, but it doesn't tell us much about what # the nodes are and the types of relationships between them.
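As noted, a bare edge list drops the node types. A lightweight remedy is to carry a node-type map alongside the edges, so any plotting layer can look types up for tooltips. A minimal, library-free sketch of the bookkeeping (the rows below are made-up placeholders in the shape of relationship results, not real VirusTotal data):

```python
# Hypothetical relationship rows: (source, source_type, target, target_type, relationship_type)
rows = [
    ("abc123filehash", "file", "evil.example.com", "domain", "contacted_domains"),
    ("abc123filehash", "file", "203.0.113.7", "ip_address", "contacted_ips"),
]

# Edge list for the graph, plus a per-node type map for tooltips
edges = [(src, tgt, rel) for src, _, tgt, _, rel in rows]
node_types = {}
for src, src_type, tgt, tgt_type, _ in rows:
    node_types[src] = src_type
    node_types[tgt] = tgt_type

print(edges)
print(node_types)
```

With networkx, such a map could then be attached via `nx.set_node_attributes(graph, node_types, "entity_type")` so a Bokeh hover tool has something meaningful to display.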
# + from bokeh.io import output_notebook, show from bokeh.plotting import figure, from_networkx from bokeh.models import HoverTool graph = nx.from_pandas_edgelist( example_multiple_relationship_df.reset_index(), source="source", target="target", edge_attr="relationship_type", ) plot = figure( title="Simple graph plot", x_range=(-1.1, 1.1), y_range=(-1.1, 1.1), tools="hover" ) g_plot = from_networkx(graph, nx.spring_layout, scale=2, center=(0, 0)) plot.renderers.append(g_plot) output_notebook() show(plot) # - # ## Integration with VTGraph # # Once we have some DataFrames with the relationships, we are able to generate and visualize a VT Graph in our notebook. The function `create_vt_graph` accepts as input a **list of Relationship DataFrames**. # # > **Note:** it can take some time to generate the graph, depending on the number of nodes and relationships. # # Unlike our local graph, this displays rich information about the nodes and relationship and allows us to expand our investigation with further searches or ad hoc nodes. # # > **Note:** - the inline graph displays node attributes but doesn't allow you edit or to add to the graph with further searches.<br> # > Click on the link in the frame to go to the VirusTotal site to view. graph_id = vt_lookup.create_vt_graph( relationship_dfs=[example_relationship_df, example_multiple_relationship_df], name="My first Jupyter Notebook Graph", private=False, ) graph_id vt_lookup.render_vt_graph( graph_id = graph_id, width = 900, height = 600 )
docs/notebooks/VTLookupV3.ipynb
# -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.5.0-rc2 # language: julia # name: julia-1.5 # --- using LCIO using StatsPlots using LinearAlgebra: norm using Corpuscles gr() theme(:gruvbox_dark) # We only want files that match a certain pattern. We read the whole directory (that could also contain other files), and then filter the ones that we want. fileList = filter(s->occursin(r"E250_SetA.Pmumuh2ss.Gwhizard-2_84.eL0.8\.pR0.3\..*.slcio", s), readdir("/nfs/dust/ilc/user/jstrube/StrangeHiggs/data/GeneratorLevel", join=true)) # Let's take a look at the collections that we have in the events. FNAME = fileList[1] FNAME = "/nfs/dust/ilc/user/jstrube/StrangeHiggs/data/E250-TDR_ws.Pe2e2h.Gwhizard-1_95.eL.pR.I106479.001.slcio" FNAME = "/nfs/dust/ilc/user/jstrube/StrangeHiggs/data/E250-TDR_ws.Pe2e2h.Gwhizard-2_82.eR.pL.slcio" LCIO.open(FNAME) do reader for (iEvent, event) in enumerate(reader) for (idx, mcp) in enumerate(getCollection(event, "MCParticle")) pdg = getPDG(mcp) println(idx, "\t", pdg, "\t") end break end end ZMassList = Float64[] for FILENAME in fileList LCIO.open(FILENAME) do reader for (iEvent, event) in enumerate(reader) mcpList = getCollection(event, "MCParticle") mu1 = mcpList[10] mu2 = mcpList[11] p = getMomentum(mu1) .+ getMomentum(mu2) E = getEnergy(mu1) + getEnergy(mu2) push!(ZMassList, sqrt(E^2 - sum(p.^2))) end end end ZMassList_Old_eLpR = Float64[] LCIO.open("/nfs/dust/ilc/user/jstrube/StrangeHiggs/data/E250-TDR_ws.Pe2e2h.Gwhizard-1_95.eL.pR.I106479.001.slcio") do reader for (iEvent, event) in enumerate(reader) mcpList = getCollection(event, "MCParticle") mu1 = mcpList[3] mu2 = mcpList[4] p = getMomentum(mu1) .+ getMomentum(mu2) E = getEnergy(mu1) + getEnergy(mu2) push!(ZMassList_Old_eLpR, sqrt(E^2 - sum(p.^2))) end end length(ZMassList_Old_eLpR) ZMassList_Old_eRpL = Float64[] 
LCIO.open("/nfs/dust/ilc/user/jstrube/StrangeHiggs/data/E250-TDR_ws.Pe2e2h.Gwhizard-1_95.eR.pL.I106480.001.slcio") do reader for (iEvent, event) in enumerate(reader) mcpList = getCollection(event, "MCParticle") mu1 = mcpList[3] mu2 = mcpList[4] p = getMomentum(mu1) .+ getMomentum(mu2) E = getEnergy(mu1) + getEnergy(mu2) push!(ZMassList_Old_eRpL, sqrt(E^2 - sum(p.^2))) end end length(ZMassList_Old_eRpL) ZMassList_New_unpolarized = Float64[] LCIO.open("/nfs/dust/ilc/user/jstrube/StrangeHiggs/data/E250-TDR_ws.Pe2e2h.Gwhizard-2_82.eR.pL.slcio") do reader for (iEvent, event) in enumerate(reader) mcpList = getCollection(event, "MCParticle") mu1 = mcpList[7] mu2 = mcpList[8] p = getMomentum(mu1) .+ getMomentum(mu2) E = getEnergy(mu1) + getEnergy(mu2) push!(ZMassList_New_unpolarized, sqrt(E^2 - sum(p.^2))) end end length(ZMassList_New_unpolarized) plot(ZMassList, seriestype=:stephist, label="new sim, e0.8L p0.3R", bins=0:0.1:110, normalized=true) plot!(ZMassList_Old_eLpR, seriestype=:stephist, label="old sim, eL pR", bins=0:0.1:110, normalized=true) plot!(ZMassList_Old_eRpL, seriestype=:stephist, label="old sim, eR pL", bins=0:0.1:110, normalized=true) plot!(ZMassList_New_unpolarized, seriestype=:stephist, label="new sim, unpolarized", bins=0:0.1:110, normalized=true, legend=:topleft) sqrtsList = Float64[] sqrtsList_ep = Float64[] sqrtsList_noPhotons = Float64[] for FILENAME in fileList LCIO.open(FILENAME) do reader for (iEvent, event) in enumerate(reader) mcpList = getCollection(event, "MCParticle") e1 = mcpList[5] e2 = mcpList[6] H = mcpList[7] γ1 = mcpList[8] γ2 = mcpList[9] μ1 = mcpList[10] μ2 = mcpList[11] p = getMomentum(μ1) .+ getMomentum(μ2) .+ getMomentum(H) .+ getMomentum(γ1) .+ getMomentum(γ2) E = getEnergy(μ1) + getEnergy(μ2) + getEnergy(H) + getEnergy(γ1) + getEnergy(γ2) push!(sqrtsList, sqrt(E^2 - sum(p.^2))) p = getMomentum(e1) .+ getMomentum(e2) E = getEnergy(e1) + getEnergy(e2) push!(sqrtsList_ep, sqrt(E^2 - sum(p.^2))) p = getMomentum(μ1) .+ 
getMomentum(μ2) .+ getMomentum(e1) .+ getMomentum(e2) .+ getMomentum(H) E = getEnergy(μ1) + getEnergy(μ2) + getEnergy(e1) + getEnergy(e2) + getEnergy(H) push!(sqrtsList_noPhotons, sqrt(E^2 - sum(p.^2))) end end end println(length(sqrtsList), "\t", length(sqrtsList_ep), "\t", length(sqrtsList_noPhotons)) sqrtsList_Old_eLpR = Float64[] sqrtsList_noPhotons_Old_eLpR = Float64[] LCIO.open("/nfs/dust/ilc/user/jstrube/StrangeHiggs/data/E250-TDR_ws.Pe2e2h.Gwhizard-1_95.eL.pR.I106479.001.slcio") do reader for (iEvent, event) in enumerate(reader) mcpList = getCollection(event, "MCParticle") γ1 = mcpList[6] γ2 = mcpList[7] μ1 = mcpList[8] μ2 = mcpList[9] H = mcpList[10] p = getMomentum(μ1) .+ getMomentum(μ2) .+ getMomentum(H) .+ getMomentum(γ1) .+ getMomentum(γ2) E = getEnergy(μ1) + getEnergy(μ2) + getEnergy(H) + getEnergy(γ1) + getEnergy(γ2) push!(sqrtsList_Old_eLpR, sqrt(E^2 - sum(p.^2))) p = getMomentum(μ1) .+ getMomentum(μ2) .+ getMomentum(H) E = getEnergy(μ1) + getEnergy(μ2) + getEnergy(H) push!(sqrtsList_noPhotons_Old_eLpR, sqrt(E^2 - sum(p.^2))) end end println(length(sqrtsList_Old_eLpR), "\t", length(sqrtsList_noPhotons_Old_eLpR)) sqrtsList_Old_eRpL = Float64[] sqrtsList_noPhotons_Old_eRpL = Float64[] LCIO.open("/nfs/dust/ilc/user/jstrube/StrangeHiggs/data/E250-TDR_ws.Pe2e2h.Gwhizard-1_95.eR.pL.I106480.001.slcio") do reader for (iEvent, event) in enumerate(reader) mcpList = getCollection(event, "MCParticle") γ1 = mcpList[6] γ2 = mcpList[7] μ1 = mcpList[8] μ2 = mcpList[9] H = mcpList[10] p = getMomentum(μ1) .+ getMomentum(μ2) .+ getMomentum(H) .+ getMomentum(γ1) .+ getMomentum(γ2) E = getEnergy(μ1) + getEnergy(μ2) + getEnergy(H) + getEnergy(γ1) + getEnergy(γ2) push!(sqrtsList_Old_eRpL, sqrt(E^2 - sum(p.^2))) p = getMomentum(μ1) .+ getMomentum(μ2) .+ getMomentum(H) E = getEnergy(μ1) + getEnergy(μ2) + getEnergy(H) push!(sqrtsList_noPhotons_Old_eRpL, sqrt(E^2 - sum(p.^2))) end end println(length(sqrtsList_Old_eRpL), "\t", length(sqrtsList_noPhotons_Old_eRpL)) # + 
sqrtsList_New_unpolarized = Float64[] sqrtsList_ep_New_unpolarized = Float64[] sqrtsList_noPhotons_New_unpolarized = Float64[] LCIO.open("/nfs/dust/ilc/user/jstrube/StrangeHiggs/data/E250-TDR_ws.Pe2e2h.Gwhizard-2_82.eR.pL.slcio") do reader for (iEvent, event) in enumerate(reader) mcpList = getCollection(event, "MCParticle") e1 = mcpList[3] e2 = mcpList[4] γ1 = mcpList[5] γ2 = mcpList[6] μ1 = mcpList[7] μ2 = mcpList[8] H = mcpList[9] p = getMomentum(μ1) .+ getMomentum(μ2) .+ getMomentum(H) .+ getMomentum(γ1) .+ getMomentum(γ2) E = getEnergy(μ1) + getEnergy(μ2) + getEnergy(H) + getEnergy(γ1) + getEnergy(γ2) push!(sqrtsList_New_unpolarized, sqrt(E^2 - sum(p.^2))) p = getMomentum(μ1) .+ getMomentum(μ2) .+ getMomentum(H) E = getEnergy(μ1) + getEnergy(μ2) + getEnergy(H) push!(sqrtsList_noPhotons_New_unpolarized, sqrt(E^2 - sum(p.^2))) p = getMomentum(e1) .+ getMomentum(e2) E = getEnergy(e1) + getEnergy(e2) push!(sqrtsList_ep_New_unpolarized, sqrt(E^2 - sum(p.^2))) end end println(length(sqrtsList_New_unpolarized), "\t", length(sqrtsList_ep_New_unpolarized), "\t", length(sqrtsList_noPhotons_New_unpolarized)) # - plot(sqrtsList, seriestype=:stephist, label="new sim, e0.8L p0.3R", bins=230:0.1:255, normalized=true) plot!(sqrtsList_Old_eLpR, seriestype=:stephist, label="old sim, eL pR", bins=230:0.1:255, normalized=true) plot!(sqrtsList_Old_eRpL, seriestype=:stephist, label="old sim, eR pL", bins=230:0.1:255, normalized=true) plot!(sqrtsList_New_unpolarized, seriestype=:stephist, label="new sim, unpolarized", bins=230:0.1:255, normalized=true, legend=:topleft) plot(sqrtsList_noPhotons, seriestype=:stephist, label="new sim, e0.8L p0.3R", bins=230:0.1:255, normalized=true) plot!(sqrtsList_noPhotons_Old_eLpR, seriestype=:stephist, label="old sim, eL pR", bins=230:0.1:255, normalized=true) plot!(sqrtsList_noPhotons_Old_eRpL, seriestype=:stephist, label="old sim, eR pL", bins=230:0.1:255, normalized=true) plot!(sqrtsList_noPhotons_New_unpolarized, seriestype=:stephist, label="new 
sim, unpolarized", bins=230:0.1:255, normalized=true, legend=:topleft)
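# The loop above sums the four-momenta of the final-state particles and takes
# sqrt(E^2 - |p|^2) per event. The notebook itself is Julia; the same quantity can be
# sketched in plain Python/NumPy (the particle values below are made up for
# illustration, not taken from the SLCIO file):

```python
import numpy as np

def invariant_mass(energies, momenta):
    # Sum the energies and 3-momenta of all particles in the event,
    # then form sqrt(E_total^2 - |p_total|^2).
    E = float(np.sum(energies))
    p = np.sum(np.asarray(momenta, dtype=float), axis=0)
    return float(np.sqrt(E**2 - np.dot(p, p)))

# Two back-to-back massless particles of 125 GeV each reconstruct
# to an invariant mass of 250 GeV.
m = invariant_mass([125.0, 125.0], [[0.0, 0.0, 125.0], [0.0, 0.0, -125.0]])
```

# The `sqrtsList_ep` variant above applies the same formula to the incoming
# electron/positron pair instead of the mu-mu-H final state.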
ZPeak.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# ## Classes in Python

# +
# class Pemain:
#     pass

# obj = Pemain()
# obj.nama = 'Neymar'
# obj.nomor_punggung = '10'

# print(f'Nama: {obj.nama}, Nomor Punggung: {obj.nomor_punggung}')

# +
# class Pemain:
#     def __init__(self):
#         self.nama = 'Neymar'
#         self.nomor_punggung = '10'

# obj = Pemain()
# print(f'Nama: {obj.nama}, Nomor Punggung: {obj.nomor_punggung}')

# +
class Pemain:
    def __init__(self, nama, umur, power, nomor_punggung):
        self.nama = nama
        self.umur = umur
        self.power = power
        self.nomor_punggung = nomor_punggung

    def get_nama(self):
        return f'Namanya adalah {self.nama}'

    def get_power(self):
        return f'Powernya adalah {self.power}'

    def get_umur(self):
        return f'Umurnya adalah {self.umur}'

    def get_nomor_punggung(self):
        return f'Nomor Punggungnya adalah {self.nomor_punggung}'

    def get_citizen(self, citizen):
        return f'Kewarganegaraannya adalah {citizen}'

    def __repr__(self):
        return f'Nama: {self.nama}, Umur: {self.umur}, Power: {self.power}, Nomor Punggung: {self.nomor_punggung}'

    # def __str__(self):
    #     return f'Nama: {self.nama}\nUmur: {self.umur}\nPower: {self.power}\nNomor Punggung: {self.nomor_punggung}'

p1 = Pemain('Neymar', '40', '90', '10')
print(p1)
print(p1.get_nama())
print(p1.get_umur())
print(p1.get_power())
print(p1.get_nomor_punggung())
print(p1.get_citizen('ga tau'))

p2 = Pemain('Ronaldo', '33', '99', '7')
print(p2)
# -

# ## Inheritance

# +
class IndoPlayer(Pemain):
    def __init__(self, nama, umur, power, nomor_punggung, citizen, province, club):
        super().__init__(nama, umur, power, nomor_punggung)
        self.citizen = citizen
        self.province = province
        self.club = club

    def get_citizen(self):
        return f'Citizennya adalah {self.citizen}'

    def get_province(self):
        return f'Provincenya adalah {self.province}'

    def get_club(self):
        return f'Clubnya adalah {self.club}'

    def __str__(self):
        return f'Nama: {self.nama}, Umur: {self.umur}, Power: {self.power}, Nomor Punggung: {self.nomor_punggung}, Citizen: {self.citizen}, Province: {self.province}, Club: {self.club}'

ip1 = IndoPlayer('Bambang', '45', '85', '10', 'Indonesia', 'Medan', 'PSMS')
print(ip1)
print(ip1.get_nama())
print(ip1.get_umur())
print(ip1.get_power())
print(ip1.get_nomor_punggung())
print(ip1.get_citizen())
print(ip1.get_province())
print(ip1.get_club())

# +
class OperasiBilangan:
    def __init__(self, pertama, kedua):
        self.pertama = pertama
        self.kedua = kedua

    def get_tambah(self):
        return self.pertama + self.kedua

    def get_kali(self):
        return self.pertama * self.kedua

    def get_bagi(self):
        return self.pertama / self.kedua

    def get_kurang(self):
        return self.pertama - self.kedua

# test
op = OperasiBilangan(10, 2)
print(op.get_tambah())
print(op.get_kali())
print(op.get_bagi())
print(op.get_kurang())
print(f'bilangan {op.pertama} dan {op.kedua}, jika ditambahkan: {op.get_tambah()}, jika dikalikan: {op.get_kali()}, jika dibagikan: {op.get_bagi()}, jika dikurangkan: {op.get_kurang()}')
# -
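# The tutorial above defines `__repr__` and leaves a `__str__` commented out.
# The difference between the two can be shown with a small, self-contained sketch
# (`Player` is an illustrative class, not part of the tutorial):

```python
class Player:
    def __init__(self, name, number):
        self.name = name
        self.number = number

    def __repr__(self):
        # Unambiguous, developer-facing form; also used by print()
        # when no __str__ is defined.
        return f'Player(name={self.name!r}, number={self.number!r})'

    def __str__(self):
        # Readable, user-facing form; preferred by print() and str().
        return f'{self.name} (#{self.number})'

p = Player('Neymar', 10)
```

# Here `str(p)` gives `Neymar (#10)` while `repr(p)` gives the constructor-like
# form; removing `__str__` makes both calls fall back to `__repr__`.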
pertemuan_06.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # !pip install pretrainedmodels # + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" import os import cv2 import numpy as np import pandas as pd from tqdm.auto import tqdm tqdm.pandas() import seaborn as sns import shutil from sklearn.model_selection import train_test_split from PIL import Image import random # + catalog_eng= pd.read_csv("/kaggle/input/textphase1/data/catalog_english_taxonomy.tsv",sep="\t") X_train= pd.read_csv("/kaggle/input/textphase1/data/X_train.tsv",sep="\t") Y_train= pd.read_csv("/kaggle/input/textphase1/data/Y_train.tsv",sep="\t") X_test=pd.read_csv("/kaggle/input/textphase1/data/x_test_task1_phase1.tsv",sep="\t") dict_code_to_id = {} dict_id_to_code={} list_tags = list(Y_train['Prdtypecode'].unique()) for i,tag in enumerate(list_tags): dict_code_to_id[tag] = i dict_id_to_code[i]=tag Y_train['labels']=Y_train['Prdtypecode'].map(dict_code_to_id) train=pd.merge(left=X_train,right=Y_train, how='left',left_on=['Integer_id','Image_id','Product_id'], right_on=['Integer_id','Image_id','Product_id']) prod_map=pd.Series(catalog_eng['Top level category'].values,index=catalog_eng['Prdtypecode']).to_dict() train['product']=train['Prdtypecode'].map(prod_map) def get_img_path(img_id,prd_id,path): pattern = 'image'+'_'+str(img_id)+'_'+'product'+'_'+str(prd_id)+'.jpg' return path + pattern train_img = train[['Image_id','Product_id','labels','product']] train_img['image_path']=train_img.progress_apply(lambda x: get_img_path(x['Image_id'],x['Product_id'], path = '/kaggle/input/imagetrain/image_training/'),axis=1) X_test['image_path']=X_test.progress_apply(lambda x: get_img_path(x['Image_id'],x['Product_id'], path='/kaggle/input/imagetest/image_test/image_test_task1_phase1/'),axis=1) train_df, 
val_df, _, _ = train_test_split(train_img, train_img['labels'],random_state=2020, test_size = 0.1, stratify=train_img['labels']) # - list_labs = list(train_img['labels'].unique()) train_img.isna().sum() # ## Transfer Learning with PyTorch # !pip install torchsummary import torch import torch.nn as nn import torch.optim as optim import numpy as np import torchvision from torchvision import datasets, models, transforms import matplotlib.pyplot as plt import time from torch.utils.data import DataLoader import os import copy from torchsummary import summary print("PyTorch Version: ",torch.__version__) print("Torchvision Version: ",torchvision.__version__) # + # Top level data directory. Here we assume the format of the directory conforms # to the ImageFolder structure # Models to choose from [resnet, alexnet, vgg, squeezenet, densenet, inception] model_name = "resnet" # Number of classes in the dataset num_classes = len(dict_code_to_id) # Batch size for training (change depending on how much memory you have) batch_size = 64 # Number of epochs to train for epochs = 10 # Flag for feature extracting. When False, we finetune the whole model, # when True we only update the reshaped layer params # feature_extract = True # - # #### Data Augmentation # # # The transform RandomResizedCrop crops the input image by a random size(within a scale range of 0.8 to 1.0 of the original size and a random aspect ratio in the default range of 0.75 to 1.33 ). The crop is then resized to 256×256. # # RandomRotation rotates the image by an angle randomly chosen between -15 to 15 degrees. # # RandomHorizontalFlip randomly flips the image horizontally with a default probability of 50%. # # CenterCrop crops an 224×224 image from the center. # # ToTensor converts the PIL Image which has values in the range of 0-255 to a floating point Tensor and normalizes them to a range of 0-1, by dividing it by 255. 
# # Normalize takes in a 3 channel Tensor and normalizes each channel by the input mean and standard deviation for the channel. Mean and standard deviation vectors are input as 3 element vectors. Each channel in the tensor is normalized as T = (T – mean)/(standard deviation) # + input_size = 224 # for Resnt # Applying Transforms to the Data image_transforms = { 'train': transforms.Compose([ transforms.RandomResizedCrop(size=256, scale=(0.8, 1.0)), transforms.RandomRotation(degrees=15), transforms.RandomHorizontalFlip(), transforms.Resize(size=256), transforms.CenterCrop(size=input_size), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'valid': transforms.Compose([ transforms.Resize(size=256), transforms.CenterCrop(size=input_size), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'test': transforms.Compose([ transforms.Resize(size=256), transforms.CenterCrop(size=input_size), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) } # + # Load the Data from torch.utils.data import Dataset, DataLoader, Subset class ImageDataset(Dataset): def __init__(self,df,transform=None,mode='train'): self.df = df self.transform=transform self.mode=mode def __len__(self): return len(self.df) def __getitem__(self,idx): im_path = self.df.iloc[idx]['image_path'] img = cv2.imread(im_path) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) img=Image.fromarray(img) if self.transform is not None: img = self.transform(img) img=img.cuda() if self.mode=='test': return img else: labels = torch.tensor(self.df.iloc[idx]['labels']).cuda() return img, labels # - train_dataset=ImageDataset(df=train_df,transform=image_transforms['train']) val_dataset=ImageDataset(df=val_df,transform=image_transforms['valid']) test_dataset=ImageDataset(df=X_test,transform=image_transforms['test'],mode='test') train_data=DataLoader(train_dataset,batch_size=batch_size,shuffle=True) 
valid_data = DataLoader(val_dataset, batch_size=batch_size, shuffle=False)
test_data = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)

# ### Load the pre-trained model

from torch.nn import functional as F
import torch.nn as nn
import pretrainedmodels

class SEResnext50_32x4d(nn.Module):
    def __init__(self, pretrained='imagenet'):
        super(SEResnext50_32x4d, self).__init__()
        self.base_model = pretrainedmodels.__dict__["se_resnext50_32x4d"](pretrained=None)
        if pretrained is not None:
            self.base_model.load_state_dict(
                torch.load("../input/pretrained-model-weights-pytorch/se_resnext50_32x4d-a260b3a4.pth")
            )
        self.l0 = nn.Linear(2048, num_classes)

    def forward(self, image):
        batch_size, _, _, _ = image.shape
        x = self.base_model.features(image)
        x = F.adaptive_avg_pool2d(x, 1).reshape(batch_size, -1)
        out = self.l0(x)
        return out

# When a model is loaded in PyTorch, all its parameters have their `requires_grad` field set to True by default. That means every change to the parameter values is tracked for the backpropagation graph used in training, which increases memory requirements. Since most of the parameters in our pre-trained model are already trained for us, one would typically reset `requires_grad` to False for the backbone; note that the code below does not do this, and instead fine-tunes the whole model.

model = SEResnext50_32x4d(pretrained="imagenet")
model.cuda()

# +
# summary(model,(3,224,224))
# -

# Next, we define the loss function and the optimizer to be used for training. PyTorch provides many kinds of loss functions. We use cross-entropy loss (log-softmax combined with negative log-likelihood), which is suited to multi-class classification. PyTorch also supports multiple optimizers; we use Adam, one of the most popular optimizers because it adapts the learning rate for each parameter individually.
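# The remark above about resetting `requires_grad` can be made concrete. This is only a
# sketch on a toy module — the notebook itself fine-tunes all layers — and `TinyNet` /
# `freeze_base` are illustrative names, not part of the notebook:

```python
import torch.nn as nn

class TinyNet(nn.Module):
    # Stand-in for a pre-trained backbone plus a classification head `l0`.
    def __init__(self):
        super().__init__()
        self.base = nn.Linear(8, 8)
        self.l0 = nn.Linear(8, 3)

def freeze_base(model, head_prefix='l0'):
    # Disable gradients everywhere except the head, so the optimizer
    # only updates the reshaped final layer.
    for name, p in model.named_parameters():
        p.requires_grad = name.startswith(head_prefix)
    return model

net = freeze_base(TinyNet())
trainable = [n for n, p in net.named_parameters() if p.requires_grad]
```

# When freezing this way, pass only the trainable parameters to the optimizer, e.g.
# `optim.Adam(p for p in model.parameters() if p.requires_grad)`.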
# Define Optimizer and Loss Function loss_criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters()) from sklearn.metrics import f1_score def flat_accuracy(preds, labels): pred_flat = np.argmax(preds, axis=1).flatten() labels_flat = labels.flatten() return np.sum(pred_flat == labels_flat) / len(labels_flat) device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") print(device) # + ''' Loop to train and validate Parameters :param model: Model to train and validate :param loss_criterion: Loss Criterion to minimize :param optimizer: Optimizer for computing gradients :param epochs: Number of epochs (default=25) Returns model: Trained Model with best validation accuracy history: (dict object): Having training loss, accuracy and validation loss, accuracy ''' seed_val = 42 random.seed(seed_val) np.random.seed(seed_val) torch.manual_seed(seed_val) torch.cuda.manual_seed_all(seed_val) start = time.time() history = [] best_f1 = 0 for epoch in range(epochs): epoch_start = time.time() print("Epoch: {}/{}".format(epoch+1, epochs)) print('Training') # Set to training mode model.train() # Loss and Accuracy within the epoch train_loss = 0.0 train_acc = 0.0 valid_loss = 0.0 valid_acc = 0.0 for i, (inputs, labels) in tqdm(enumerate(train_data)): inputs = inputs.to(device) labels = labels.to(device) # Clean existing gradients optimizer.zero_grad() # Forward pass - compute outputs on input data using the model outputs = model(inputs) # Compute loss loss = loss_criterion(outputs, labels) # Backpropagate the gradients loss.backward() # Update the parameters optimizer.step() # Compute the total loss for the batch and add it to train_loss train_loss += loss.item() # Validation - No gradient tracking needed true_labels=[] predictions=[] with torch.no_grad(): # Set to evaluation mode model.eval() # Validation loop print('Validation') for j, (inputs, labels) in tqdm(enumerate(valid_data)): inputs = inputs.to(device) labels = labels.to(device) # Forward pass - 
compute outputs on input data using the model outputs = model(inputs) # Compute loss loss = loss_criterion(outputs, labels) # Compute the total loss for the batch and add it to valid_loss valid_loss += loss.item() # Move logits and labels to CPU ------------------------ Our addition --------------------------- logits = outputs.detach().cpu().numpy() predicted_labels = np.argmax(logits,axis=-1) predictions.extend(predicted_labels) labels = labels.to('cpu').numpy() true_labels.extend(labels) # ----------------------------------------------------------------------------------------------- # Compute total accuracy in the whole batch and add to valid_acc valid_acc += flat_accuracy(logits, labels) curr_f1=f1_score(true_labels,predictions,average='macro') if curr_f1 > best_f1: best_f1=curr_f1 torch.save(model.state_dict(), 'best_model.pt') # Find average training loss and training accuracy avg_train_loss = train_loss / len(train_data) # Find average validation loss and validation accuracy avg_valid_loss = valid_loss/len(valid_data) avg_valid_acc = valid_acc/len(valid_data) # Report the final accuracy for this validation run. 
print(" Average training loss: {0:.2f}".format(avg_train_loss)) print(" Validation Loss: {0:.2f}".format(avg_valid_loss)) print("Validation F1-Score: {}".format(f1_score(true_labels,predictions,average='macro'))) history.append([avg_train_loss, avg_valid_loss, avg_valid_acc]) epoch_end = time.time() # print("Epoch : {:03d}, Training: Loss: {:.4f}, Accuracy: {:.4f}%, \n\t\tValidation : Loss : {:.4f}, Accuracy: {:.4f}%, Time: {:.4f}s".format(epoch, avg_train_loss, avg_train_acc*100, avg_valid_loss, avg_valid_acc*100, epoch_end-epoch_start)) # Save if the model has best accuracy till now # torch.save(model, dataset+'_model_'+str(epoch)+'.pt') # - # ## Prediction for validation data # + # Put model in evaluation mode model = SEResnext50_32x4d(pretrained=None) model.load_state_dict(torch.load('/kaggle/working/best_model.pt')) model.cuda() model.eval() # Tracking variables predictions = [] softmax_logits=[] true_labels=[] # Predict # Telling the model not to compute or store gradients, saving memory and # speeding up prediction with torch.no_grad(): for j, (inputs, labels) in tqdm(enumerate(valid_data)): inputs = inputs.to(device) # Forward pass, calculate logit predictions logits = model(inputs) #----- Add softmax--- m = torch.nn.Softmax(dim=1) output = m(logits) #-------#------ output = output.detach().cpu().numpy() # Move logits and labels to CPU labels = labels.to('cpu').numpy() logits = logits.detach().cpu().numpy() predictions.extend(np.argmax(logits,axis=-1)) softmax_logits.extend(output) true_labels.extend(labels) print(f1_score(predictions,true_labels,average='macro')) print('Prediction on validation DONE') softmax_logits=np.array(softmax_logits) print(softmax_logits.shape) np.save('Valid_resnext50_32x4d_phase1_softmax_logits.npy',softmax_logits) # - # ### Prediction for test data # + # Tracking variables predictions = [] softmax_logits=[] # Predict # Telling the model not to compute or store gradients, saving memory and # speeding up prediction with 
torch.no_grad(): for i,inputs in tqdm(enumerate(test_data)): inputs = inputs.to(device) # Forward pass, calculate logit predictions logits = model(inputs) #----- Add softmax--- m = torch.nn.Softmax(dim=1) output = m(logits) #-------#------ output = output.detach().cpu().numpy() # Move logits and labels to CPU logits = logits.detach().cpu().numpy() predictions.extend(np.argmax(logits,axis=-1)) softmax_logits.extend(output) print('Inference DONE') # + softmax_logits=np.array(softmax_logits) print(softmax_logits.shape) np.save('Test_resnext50_32x4d_phase1_softmax_logits.npy',softmax_logits) # - len(predictions) X_test['prediction_model']= predictions X_test['Prdtypecode']=X_test['prediction_model'].map(dict_id_to_code) X_test['Prdtypecode'].value_counts() X_test=X_test.drop(['prediction_model','Title','Description'],axis=1) X_test.to_csv('y_test_task1_phase1_pred.tsv',sep='\t',index=False)
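# The notebook maps each `Prdtypecode` to a contiguous id before training and maps the
# predicted ids back afterwards (`dict_code_to_id` / `dict_id_to_code`). The round trip,
# sketched with hypothetical code values:

```python
# Hypothetical Prdtypecode values; the real ones come from Y_train.
codes = [2583, 1280, 2705]
dict_code_to_id = {code: i for i, code in enumerate(codes)}
dict_id_to_code = {i: code for code, i in dict_code_to_id.items()}

# Model outputs are argmax ids; map them back to catalog codes,
# as done for X_test['Prdtypecode'] above.
pred_ids = [2, 0, 1, 0]
pred_codes = [dict_id_to_code[i] for i in pred_ids]
# pred_codes == [2705, 2583, 1280, 2583]
```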
SEResnext50_train_predict.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + # %load_ext autoreload # %autoreload 2 import cPickle as pickle import os; import sys; sys.path.append('..'); sys.path.append('../gp/') import gp import gp.nets as nets from nolearn.lasagne.visualize import plot_loss from nolearn.lasagne.visualize import plot_conv_weights from nolearn.lasagne.visualize import plot_conv_activity from nolearn.lasagne.visualize import plot_occlusion from sklearn.metrics import classification_report, accuracy_score, roc_curve, auc, precision_recall_fscore_support, f1_score, precision_recall_curve, average_precision_score, zero_one_loss from matplotlib.pyplot import imshow import matplotlib.pyplot as plt # %matplotlib inline # + NETS = [] NETS.append('../nets/IPMLB_FULL.p') # image + prob + binary + large border network_path = NETS[-1] with open(network_path, 'rb') as f: net = pickle.load(f) # - p = net.get_all_params() X_test, y_test = gp.Patch.load_rgba_test_only('ipmlb') net.layers_ X_test[0][0].shape import theano import theano.tensor as T from lasagne.layers import get_output # + x = X_test[100].reshape(1,4,75,75) layer = net.layers_['hidden5'] xs = T.tensor4('xs').astype(theano.config.floatX) get_activity = theano.function([xs], get_output(layer, xs)) activity = get_activity(x) # - activity activity plot_conv_activity(net.layers_['hidden5'], X_test[0].reshape(1,4,75,75)) # load cremi A data import h5py import mahotas as mh import numpy as np import tifffile as tif # + input_image = np.zeros((125,1250,1250)) input_rhoana = np.zeros((125,1250,1250), dtype=np.uint64) input_gold = np.zeros((125,1250,1250), dtype=np.uint64) input_prob = np.zeros((125,1250,1250)) for z in range(125): image, prob, gold, rhoana = gp.Cremi.read_section('/home/d/data/CREMI/C/', z) input_image[z] = image input_prob[z] = prob 
    input_gold[z] = gold
    input_rhoana[z] = rhoana
# -

gp.Util.view(gold)

gp.Util.view(rhoana)

gp.Util.view(prob, color=False)

gp.Util.view(image, color=False)

gp.Util.vi(input_rhoana, input_gold)

gp.metrics.adapted_rand(input_rhoana, input_gold)

net.uuid = 'IPMLB'

bigM_cremiB = gp.Legacy.create_bigM_without_mask(net, input_image, input_prob, input_rhoana, verbose=True)

bigM_cA_after_95, out_cA_volume_after_auto_95, cA_auto_fixes_95, cA_auto_vi_s_95 = gp.Legacy.splits_global_from_M_automatic(net, bigM_cremiB, input_image, input_prob, input_rhoana, input_gold, sureness_threshold=.95)

gp.Util.vi(out_cA_volume_after_auto_95, input_gold)

gp.metrics.adapted_rand(out_cA_volume_after_auto_95, input_gold)

cA_auto_fixes_95

ar = []
for z in range(input_rhoana.shape[0]):
    ar.append(gp.metrics.adapted_rand(input_rhoana[z], input_gold[z]))

import collections

# +
data = collections.OrderedDict()
data['Initial\nSegmentation'] = ar
data['GP*\n (sim.)'] = []  # cylinder_sim_user_vi_s[-1]
# data['GP*\n (sim.)'] = []  # [v - 0.1 for v in dojo_vi_95[2]]
data['FP\n (sim.)'] = []  # dojo_vi_95[2]

gp.Legacy.plot_arand(data, '/tmp/cremi.pdf')  # , output_folder+'/dojo_vi.pdf')
# -
ipy_test/IPMLB_cremiC.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] colab_type="text" id="EG-gmN9XfduE"
# # **Introduction to the Dash Library**
#
# ***

# + [markdown] colab_type="text" id="ICG7wheNjLR4"
# ## **Introduction**
# ***
# The goal of this material is to introduce the Dash library for Python and to serve as a quick-start guide to it. It was built on the documentation available online.
#
# Visit the [Dash](https://plotly.com/dash/) site.
#
# Read the full documentation [here](https://dash.plotly.com/introduction).
#

# + [markdown] colab_type="text" id="sZQx32xvxR8N"
# ### **Some advantages of the Dash library**
# Dash is an indispensable tool for visualizing data by building a web application on top of the plotly.py library.
# * Written on top of plotly.js, Flask, and React.js (developed by Plotly)
# * Simple enough to build applications quickly
# * Lets you manipulate data and present it at the same time
# * Rendered in your web browser (whichever browser you use)
# * Open-source library ([see the GitHub repository](https://github.com/plotly/dash))
# * With no HTML or JavaScript required, Dash lets you create interfaces in Python with countless interactive components

# + [markdown] colab_type="text" id="erSSYzsWitW-"
# ## **Installation**
#
# Watch the video on YouTube [here](https://youtu.be/CjhlN4UZc3I).
#
# * With your terminal/command prompt open, run the following to install:
# > `pip install dash==2.0.0` - version current as of 03 Sep 2021
#
# * Dash automatically installs the following packages:
#   * `dash-renderer`
#     * version 1.11.0
#   * `dash-core-components`
#     * version 2.0.0
#   * `dash-html-components`
#     * version 2.0.0
#   * `dash-table`
#     * version 5.0.0
#   * `plotly`
#     * version 5.0.0
# * To check the version of a package:
# > `import [package name]`
# >
# > `print([package name].__version__)`

# + [markdown] colab_type="text" id="9Ank4fFD5m3f"
# ### **Installing the Dash package**
# + colab={} colab_type="code" id="SFzJudO_5OCw"
# !pip install dash==2.0.0
# -

# **Note:** to check whether the other packages were installed automatically, run
# `pip list` or `pip freeze` in your terminal and look for their names.

# ### **Checking the version**

import dash
print(dash.__version__)

# **I advise you to check the versions of the installed packages so that you are always working with the most up-to-date version of the Dash library and of the other packages installed along with it.**

# + [markdown] colab_type="text" id="F_Dw5wq9x_7z"
# ## **Layout**
# ***
# The layout is a fundamental part of applications built with the Dash library in Python. It describes how the application will look, covering everything from the visual styling of the web application to the arrangement of its components.
#
# To build the visual part of the application, a set of components is available through the `dash_core_components` and `dash_html_components` libraries.
#
# Find more examples under [Layout](https://dash.plotly.com/layout).
#
# Watch the video on YouTube [here](https://youtu.be/S3xXAKBicPE).
# + [markdown] colab_type="text" id="Y03_2vlOuSay"
# ***
# ### **First Example - `Exemplo_1.py`**
# The full, commented code is in the [codigos_videos](https://github.com/Miguel-mmf/Biblioteca_Dash_em_Python/tree/main/codigos_videos) folder of this repository.
# ***
# #### **Importing the required modules**

# + colab={} colab_type="code" id="BU_-pPOD3m7z"
import dash
import dash_core_components as dcc
import dash_html_components as html

# + [markdown] colab_type="text" id="-TcHCV9RWjlH"
# #### **Setting the font type with CSS to change the default style of the elements, and creating the application with the `Dash()` function of the Dash library.**

# + colab={} colab_type="code" id="5bLf0YAd52Oe"
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']

app = dash.Dash(__name__, external_stylesheets=external_stylesheets)

# + [markdown] colab_type="text" id="hlnVBCv-X4vi"
# #### **The layout consists of 4 elements: elements of type `html.Div`, `html.H1`, and `dcc.Graph`.**

# + colab={} colab_type="code" id="f_R4hWsE588H"
app.layout = html.Div(
    children=[
        html.H1(children='Apresentação da biblioteca Dash!'),
        html.Div(children='''
            Dash: Aplicação na internet para Python.
        '''),
        dcc.Graph(
            id='example-graph',
            figure={
                'data': [
                    {'x': [0, 1, 2, 3, 4, 5, 6], 'y': [0, 1, 2, 3, 4, 5, 6], 'type': 'line+markers', 'name': 'Reta'},
                    {'x': [0, 1, 2, 3, 4, 5, 6], 'y': [0, 1, 4, 9, 16, 25, 36], 'type': 'line+markers', 'name': 'Parábola'}
                ],
                'layout': {
                    'title': 'Gráfico Exemplo'
                }
            }
        )
    ]
)

# + [markdown] colab_type="text" id="DDv3JSh3YCvp"
# Dash ships with `hot-reloading`: thanks to
# > `app.run_server(debug=True)`
#
# the application is refreshed automatically as soon as a change is made.
# If you do not want the browser page to refresh whenever the source code changes, use
# > `app.run_server(dev_tools_hot_reload = False)`
#
# Read more under [Dash Dev Tools](https://dash.plotly.com/devtools).
# -

# **Serving the dash application as a test version**

# + colab={} colab_type="code" id="WeQGMQt96C-n"
if __name__ == '__main__':
    app.run_server(debug=True, port=1, use_reloader=False)  # port=1 serves the application on port 1; the default is port 8050

# + [markdown] colab_type="text" id="5R0DnnCkegkf"
# The example above should be saved as a Python file (".py" extension) and run with
#
# > `python [file name].py`.
#
# To use the examples in this material from a Jupyter Notebook, see [Introducing JupyterDash](https://medium.com/plotly/introducing-jupyterdash-811f1f57c02e), or you can disable the following parameter:
#
# >`app.run_server(debug = True, use_reloader = False)`.
# -

# ***
# ### Watch the video **Exemplo de Página com a Biblioteca Dash** on YouTube [here](https://youtu.be/Xff2GEpcawQ).
# ***

# + [markdown] colab_type="text" id="oS4H02gX1eSY"
# ## **CallBack**
# ***
# A callback is a call function that is part of Dash applications. It is responsible for making the application interactive, automatically re-running functions as soon as their input properties change.
#
# Any attribute, whether created with `dash_core_components` or `dash_html_components`, takes parameters such as `style`, `className`, `id`, and so on. These parameters, especially `id`, support the callback by identifying a specific component (two components cannot share the same `id`).
#
# Find more callback examples under [Basic CallBacks](https://dash.plotly.com/basic-callbacks).
#
# Watch the video on YouTube [here](https://youtu.be/Xff2GEpcawQ).

# + [markdown] colab_type="text" id="TEKWbQX3upi9"
# ***
# ### **Second Example - `Exemplo_2.py`**
# The full, commented code is in the [codigos_videos](https://github.com/Miguel-mmf/Biblioteca_Dash_em_Python/tree/main/codigos_videos) folder of this repository.
# ***
#
# For this example, the following components were used:
# * `dcc.Tabs`
# * `dcc.Tab`
# * `go.Scatter` and `go.Bar` from the `plotly.graph_objects` library, functions widely used for plotting data.
#

# + [markdown] colab_type="text" id="eheY5sSintIR"
# #### **Importing the required modules**

# + colab={} colab_type="code" id="mDLwmLrHigfn"
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
import plotly.graph_objects as go

# + [markdown] colab_type="text" id="dfHe3HCTpYSQ"
# #### **Setting the font type with CSS to change the default style of the elements, and creating the application with the `Dash()` function of the Dash library.**

# + colab={} colab_type="code" id="FGT0g4D_pbW3"
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']

app2 = dash.Dash(__name__, external_stylesheets=external_stylesheets)
# -

# With more complex Dash apps that modify the layout dynamically (such as [Multi-Page Apps](https://dash.plotly.com/urls)), not every component that appears in your callbacks will be included in the initial layout. You can remove this restriction by disabling callback validation:
# > `app.config.suppress_callback_exceptions = True`
#
# In this example all components appear in the same callback and we do not modify the layout dynamically, so there is no need to suppress the exceptions.

# #### **Building the layout of app2**

# + colab={} colab_type="code" id="8Hl-1nAVn1xg"
app2.layout = html.Div(
    [
        html.H2(
            ['Painel de visualização de gráficos'],
            style={
                'textAlign': 'center',
                'font-weight': 'bold'
            }
        ),
        html.Hr(),
        dcc.Tabs(
            id='tabs',
            children=[
                dcc.Tab(label='Gráfico de linha', value='tab-1'),
                dcc.Tab(label='Gráfico de Barra', value='tab-2'),
                dcc.Tab(label='Gráfico de Linha e Pontos', value='tab-3')
            ]
        ),
        html.Div(id='tabs-content'),
        html.Hr(),
    ]
)
# -

# #### **Helper function that returns the graph rendered in each tab**

def gera_grafico(tipo):
    fig = go.Figure()
    fig.add_trace(
        go.Scatter(
            x=[0, 1, 2, 3, 4, 5, 6],
            y=[0, 1, 2, 3, 4, 5, 6],
            mode=tipo,
            name='Reta',
        )
    )
    fig.add_trace(
        go.Scatter(
            x=[0, 1, 2, 3, 4, 5, 6],
            y=[0, 1, 4, 9, 16, 25, 36],
            mode=tipo,
            name='Parábola',
        )
    )
    fig.update_layout(title='Gráfico Exemplo')
    return fig

# + [markdown] colab_type="text" id="2cgZJwwgr3df"
# #### **Building the `@app2.callback()`**

# + colab={} colab_type="code" id="dutsjNa1sNdA"
@app2.callback(
    Output('tabs-content', 'children'),
    [
        Input('tabs', 'value')
    ]
)
def update_tab(tab):
    if tab == 'tab-1':
        return html.Div([dcc.Graph(figure=gera_grafico('lines'))])
    elif tab == 'tab-2':
        fig_bar = go.Figure()
        fig_bar.add_trace(
            go.Bar(
                x=[0, 1, 2, 3, 4, 5, 6],
                y=[0, 1, 2, 3, 4, 5, 6],
            )
        )
        fig_bar.add_trace(
            go.Bar(
                x=[0, 1, 2, 3, 4, 5, 6],
                y=[0, 1, 4, 9, 16, 25, 36],
            )
        )
        fig_bar.update_layout(title='Gráfico em Barras Exemplo')
        return html.Div([dcc.Graph(figure=fig_bar)])
    elif tab == 'tab-3':
        return html.Div([dcc.Graph(figure=gera_grafico('lines+markers'))])
    else:
        return html.Div(['Erro 404'])

# + [markdown] colab_type="text" id="pqR24Rszs5JG"
# #### **`Input` and `Output` are responsible for the automatic updates to the web application's interface.**
# -

# **Serving the dash application as a test version**

# + colab={} colab_type="code" id="BLKyu68Us3bO" tags=[]
if __name__ == "__main__":
    app2.run_server(debug=True, port=2, use_reloader=False)

# + [markdown] colab_type="text" id="4KpcByC3tpTO"
# The example above should be saved as a Python file (".py" extension) and run with
#
# > `python [file name].py`.
#
# To use the examples in this material from a Jupyter Notebook, see [Introducing JupyterDash](https://medium.com/plotly/introducing-jupyterdash-811f1f57c02e), or you can disable the following parameter:
#
# >`app.run_server(debug = True, use_reloader = False)`.

# + [markdown] colab_type="text" id="aG-PBYpdu8M6"
# ## **Dash Core Components**
# ***
# This package provides the core set of React components for Dash.
#
# Watch the video on YouTube [here](https://youtu.be/q_83OeJNv2k).
#
# ### **Component previews**
# * [Overview Dash Core Components](https://dash.plotly.com/dash-core-components)

# + [markdown] colab_type="text" id="6SJk_DPG1gB9"
# ### **Dropdown**
#
# * [Dropdown Examples and Reference](https://dash.plotly.com/dash-core-components/dropdown)

# +
# simplified structure
dcc.Dropdown(
    options=[
        {
            'label': f'Opção {value}',
            'value': f'{value}'
        } for value in range(0, 10)
    ],
    value='Opção 1'
),

# standard structure
dcc.Dropdown(
    options=[
        {'label': 'Opção 1', 'value': '1'},
        {'label': 'Opção 2', 'value': '2'},
        {'label': 'Opção 3', 'value': '3'},
    ],
    value='Opção 1'
),

# + [markdown] colab_type="text" id="TjwE_9rNzukB"
# ### **Slider**
#
# This component works as a slider control, widely used with graphs to increase or decrease the range of values on a given axis.
# * [Slider Examples and Reference](https://dash.plotly.com/dash-core-components/slider)

# + colab={} colab_type="code" id="dcqvqL-pzzrI"
dcc.Slider(
    min=-5,
    max=10,
    step=0.5,
    value=-3
)

# + [markdown] colab_type="text" id="gO0sce73z8OA"
# ### **RangeSlider**
# This works much like the component above, but with the RangeSlider the range can be controlled from both ends of the component.
# * [RangeSlider Examples and Reference](https://dash.plotly.com/dash-core-components/rangeslider).

# + colab={} colab_type="code" id="GT1wfkGf0A_P"
dcc.RangeSlider(
    count=1,
    min=-5,
    max=10,
    step=0.5,
    value=[-3, 7]
)

# + [markdown] colab_type="text" id="pYaIzyY20E68"
# ### **Input**
# * [Input Examples and Reference](https://dash.plotly.com/dash-core-components/input)

# + colab={} colab_type="code" id="TCpTghC_0KAw"
dcc.Input(
    placeholder='Enter a value...',
    type='text',
    value=''
)

# + [markdown] colab_type="text" id="mlVHlqSM0Nl4"
# ### **Checkboxes**
#
# These are checkboxes through which user input can be collected.
# * [dcc.Checklist](https://dash.plotly.com/dash-core-components/checklist)

# + colab={} colab_type="code" id="MP0j4Kgv0S_H"
# simplified structure
dcc.Checklist(
    options=[
        {
            'label': f'Opção {value}',
            'value': f'{value}'
        } for value in range(0, 10)
    ],
    value=['1']  # Checklist takes a list of selected option values
)

# standard structure
dcc.Checklist(
    options=[
        {'label': 'New York City', 'value': 'NYC'},
        {'label': 'Montréal', 'value': 'MTL'},
        {'label': 'San Francisco', 'value': 'SF'}
    ],
    value=['MTL', 'SF']
)

# + [markdown] colab_type="text" id="DGkkUeHx0VyS"
# ### **Radio Items**
# * [dcc.RadioItems Examples & Documentation](https://dash.plotly.com/dash-core-components/radioitems)

# + colab={} colab_type="code" id="UT2GLOUm0ckc"
# simplified structure
dcc.RadioItems(
    options=[
        {
            'label': f'Opção {value}',
            'value': f'{value}'
        } for value in range(0, 10)
    ],
    value='1'
)

# standard structure
dcc.RadioItems(
    options=[
        {'label': 'New York City', 'value': 'NYC'},
        {'label': 'Montréal', 'value': 'MTL'},
        {'label': 'San Francisco', 'value': 'SF'}
    ],
    value='MTL'
)
# -

# ### **Graph**
#
# The `dcc.Graph` component shares the same syntax as the _plotly.py_ library, i.e. what the _figure_ parameter receives is built the same way (data, layout, ...).
# #
# You can pass the chart information as a dictionary, as shown below, or assign a figure already created with the plotly.py library to the _figure_ parameter.
#
# * [dcc.Graph](https://dash.plotly.com/dash-core-components/graph)

dcc.Graph(
    id='example-graph',
    figure={
        'data': [
            # 'line+markers' is not a valid trace type; use a scatter trace
            # with mode 'lines+markers'
            {'x': [0, 1, 2, 3, 4, 5, 6], 'y': [0, 1, 2, 3, 4, 5, 6], 'type': 'scatter', 'mode': 'lines+markers', 'name': 'Reta'},
            {'x': [0, 1, 2, 3, 4, 5, 6], 'y': [0, 1, 4, 9, 16, 25, 36], 'type': 'scatter', 'mode': 'lines+markers', 'name': 'Parábola'}
        ],
        'layout': {
            'title': 'Gráfico Exemplo'
        }
    }
)

# + [markdown] colab_type="text" id="L_wQ0b-l0grI"
# ### **Link**
#
# * [dcc.Link](https://dash.plotly.com/dash-core-components/link)

# + colab={} colab_type="code" id="h9Soe_CM0fx5"
dcc.Link()
# -

# ***
# ### **Component showcase - `dash_core.py`**
#
# The full commented code is in the [codigos_videos](https://github.com/Miguel-mmf/Biblioteca_Dash_em_Python/tree/main/codigos_videos) folder of this repository.
# ***

import dash
import dash_core_components as dcc
import dash_html_components as html

# +
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']

app_dash_core_components = dash.Dash(__name__, external_stylesheets=external_stylesheets)

# + colab={} colab_type="code" id="XuktMJLvzrHI"
app_dash_core_components.layout = html.Div(
    [
        html.H1(
            'Dash Core Components',
            style={
                'textAlign': 'center',
                'font-weight': 'bold'
            }
        ),
        html.Div(
            [
                html.Hr(),
                html.H2(
                    ['Componente Dropdown']
                ),
                dcc.Dropdown(
                    options=[
                        {
                            'label': f'Opção {value}',
                            'value': f'{value}'
                        } for value in range(0, 10)
                    ],
                    value='1'
                ),
                html.Hr(),
                html.H2(
                    ['Componente Slider']
                ),
                dcc.Slider(
                    min=-5,
                    max=10,
                    step=0.5,
                    value=-3
                ),
                html.Hr(),
                html.H2(
                    ['Componente RangeSlider']
                ),
                dcc.RangeSlider(
                    count=1,
                    min=-5,
                    max=10,
                    step=0.5,
                    value=[-3, 7]
                ),
                html.Hr(),
                html.H2(
                    ['Componente Input']
                ),
                dcc.Input(
                    placeholder='Insira uma mensagem...',
                    type='text',
                    value=''
                ),
                html.Hr(),
                html.H2(
                    ['Componente Link']
                ),
                dcc.Link(),
            ],
            style={
                'margin-left': '10px',
                'margin-right': '10px',
'width': '48%',
                'display': 'inline-block',
                'float': 'left',
                'border': '2px solid lightblue'
            }
        ),
        html.Div(
            [
                html.Hr(),
                html.H2(
                    ['Componente Checklist']
                ),
                dcc.Checklist(
                    options=[
                        {
                            'label': f'Opção {value}',
                            'value': f'{value}'
                        } for value in range(1, 6)
                    ],
                    value=['1']  # the selected values must exist among the options
                ),
                html.Hr(),
                html.H2(
                    ['Componente RadioItems']
                ),
                dcc.RadioItems(
                    options=[
                        {
                            'label': f'Opção {value}',
                            'value': f'{value}'
                        } for value in range(1, 6)
                    ],
                    value='1'
                ),
                html.Hr(),
                html.H2(
                    ['Componente Graph']
                ),
                dcc.Graph(
                    id='example-graph',
                    figure={
                        'data': [
                            {'x': [0, 1, 2, 3, 4, 5, 6], 'y': [0, 1, 2, 3, 4, 5, 6], 'type': 'scatter', 'mode': 'lines+markers', 'name': 'Reta'},
                            {'x': [0, 1, 2, 3, 4, 5, 6], 'y': [0, 1, 4, 9, 16, 25, 36], 'type': 'scatter', 'mode': 'lines+markers', 'name': 'Parábola'}
                        ],
                        'layout': {
                            'title': 'Gráfico Exemplo'
                        }
                    }
                ),
            ],
            style={
                'margin-left': '10px',
                'margin-right': '10px',
                'width': '48%',
                'display': 'inline-block',
                'float': 'right',
                'border': '2px solid lightblue'
            }
        )
    ],
    style={
        'margin-left': '10px',
        'margin-right': '10px',
        'border': '2px solid lightblue'
    }
)

# + tags=[]
if __name__ == "__main__":
    # use an unprivileged port (port 3 would require elevated permissions)
    app_dash_core_components.run_server(debug=True, port=8053, use_reloader=False)

# + [markdown] colab_type="text" id="ERN_YijlvEmV"
# ## **Dash HTML Components**
# ***
# Instead of writing HTML or using HTML templates, you can build your layout in pure Python, covering the same functionality, with the `dash_html_components` library.
#
# Watch the video on YouTube by clicking [here](https://youtu.be/N49IHkvV9qU).
#
# ### **Component preview**
# * [Overview Dash HTML Components](https://dash.plotly.com/dash-html-components)

# + [markdown] colab_type="text" id="3z8uDeNC4MQv"
# ### **html.Div**
#
# * [html.Div](https://dash.plotly.com/dash-html-components/div).
#
# It is widely used to hold other components. Think of html.Div as a box: it is there to store something, i.e. a division dedicated to some content.
# -

html.Div(
    children=['Parâmetro principal'],
    id='',
    className='',
    style={},
    n_clicks=0,  # default
)

# + [markdown] colab_type="text" id="D0xg5PdY3Vob"
# ### **html.H1 - html.H2 - html.H3 - html.H4 - html.H5 - html.H6**
#
# To work with the *style* and *className* properties of the html and dcc components, this [site](https://www.w3schools.com/css/default.asp) offers tips on CSS, the stylesheet language used to style HTML documents.

# + colab={} colab_type="code" id="RrUxkpM133KG"
html.H1(
    children=[],
    id='',
    className='',
    style={},
    n_clicks=0,  # default
)
# -

# #### **html.P**
#
# * [html.P]().
#
# A paragraph element / indented block of text.

html.P(
    children=[],
    id='',
    className='',
    style={}
)

# ### **html.Button**
#
# * [Button Examples and Reference](https://dash.plotly.com/dash-html-components/button).

# + colab={} colab_type="code" id="U_ZCxcqc4L5V"
html.Button(
    children=[],
    id='',
    className='',
    n_clicks=0,  # default
    n_clicks_timestamp=-1,  # default
)

# + [markdown] colab_type="text" id="UQpPvllf4p8l"
# ### **html.Hr**
#
# * [html.Hr](https://dash.plotly.com/dash-html-components/hr).
#
# Renders a horizontal line.
#
# When `dcc.Markdown` is used, this line can be inserted with "***" or "___" without the need for an `html.Hr()` component.

# + colab={} colab_type="code" id="8WrEYi7B4zat"
html.Hr(
    children=[],
    id='',
    className='',
)
# -

# #### **html.Table - html.Th - html.Td**
#
# Much like what the Dash Table library provides, these components used together let you build grids to present data in table form.
# #
# _I recommend studying the _dash_table_ library, since it makes it easy to present data in a more interactive and dynamic way._
#
# * [html.Table](https://dash.plotly.com/dash-html-components/table)
# * [html.Th](https://dash.plotly.com/dash-html-components/th)
# * [html.Td](https://dash.plotly.com/dash-html-components/td)

html.Table(
    [
        html.Tr(
            [
                html.Th('1'),
                html.Th('2'),
                html.Th('3'),
                html.Th('4')
            ]
        ),
        html.Tr(
            [
                html.Td('a11'),
                html.Td('a12'),
                html.Td('a13'),
                html.Td('a14')
            ]
        ),
        html.Tr(
            [
                html.Td('a21'),
                html.Td('a22'),
                html.Td('a23'),
                html.Td('a24')
            ]
        ),
    ]
)

# #### Example function that presents a _dataframe_'s data as a grid built from `dash_html_components` components

def gera_planilha(dataframe):
    return html.Table(
        [
            # columns
            html.Thead(
                html.Tr(
                    [
                        html.Th(col) for col in dataframe.columns
                    ]
                )
            ),
            # data
            html.Tbody(
                [
                    html.Tr(
                        [
                            html.Td(
                                dataframe.iloc[i][col]
                            ) for col in dataframe.columns
                        ]
                    ) for i in range(len(dataframe))
                ]
            )
        ]
    )

# + [markdown] colab_type="text" id="hbrX2ksjvXK_"
# ## **Dash DataTable**
# ***
#
# Dash DataTable is an interactive table component for viewing and editing data with Python.
#
# * [Dash DataTable](https://dash.plotly.com/datatable)
#
# Watch the video on YouTube by clicking [here](https://youtu.be/-6HRKsD36qQ).
#
# To demonstrate this library's `dash_table.DataTable()` function, I used the `sousa_geral_anual.csv` file inside the codigos_videos folder.
# -

# ***
# ### Dash DataTable - `Exemplo_3.py`
# The full commented code is in the [codigos_videos](https://github.com/Miguel-mmf/Biblioteca_Dash_em_Python/tree/main/codigos_videos) folder of this repository.
# ***
#
# For this example, no `gera_tabela(df)` function was created to return the table for my web application. However, I left a function below that receives a _dataframe_ and returns the table with the data.
# #
# Supporting links:
# * [Dash DataTable - Styling](https://dash.plotly.com/datatable/style).
# * [Dash DataTable - Interactivity](https://dash.plotly.com/datatable/interactivity).
# * [Editable DataTable](https://dash.plotly.com/datatable/editable).
#
# #### **Importing `dash_table`**

# + colab={} colab_type="code" id="vzEw3IL1ttFg"
import dash
import pandas as pd
import dash_table

# + tags=[]
df = pd.read_csv('sousa_geral_anual.csv')

app3 = dash.Dash(__name__)

# the layout must be assigned to app3, the app created above
app3.layout = html.Div(
    [
        html.H1('Tabela de dados da cidade de Sousa - PB'),
        dash_table.DataTable(
            # data
            data=df.to_dict('records'),
            # identification of the sheet's columns
            columns=[
                {
                    'name': col,
                    'id': col
                } for col in df.columns
            ],
        )
    ]
)

# + tags=[]
if __name__ == '__main__':
    app3.run_server(debug=True, use_reloader=False)
# -

# It is good practice to write a function that returns the data grid when the application is rendered in the browser.

def gera_tabela(df):
    return dash_table.DataTable(
        data=df.to_dict('records'),
        columns=[
            {
                'name': col,
                'id': col,
            } for col in df.columns
        ],
        style_cell={
            'textAlign': 'center',
            'border': '1px solid grey',
            'minWidth': '90px',
            'width': '125px',
            'maxWidth': '160px',
            'fontSize': '14',
            'font-family': 'sans-serif'
        },
        style_header={
            'backgroundColor': '#ADD8E6',
            'fontWeight': 'bold'
        },
        page_size=12,
        style_table={
            'height': 'auto',
            'minWidth': '100%',
            'overflowX': 'auto',
            'border': '2px solid lightgreen',
        }
    )

# ## Building a Mini Dashboard with Dash in Python - `aula8_cidade_sousa.py`
# ***
# To build this Dash application, I used the `sousa_geral_anual.csv` file inside the codigos_videos folder.
#
# Watch the video on YouTube by clicking [here](https://youtu.be/5mJvsZa6h5s).
#
# The full commented code is in the [codigos_videos](https://github.com/Miguel-mmf/Biblioteca_Dash_em_Python/tree/main/codigos_videos) folder of this repository.
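# `dash_table.DataTable` receives its rows as a list of dictionaries — exactly what pandas' `to_dict('records')` produces — and its columns as a list of `{'name', 'id'}` dictionaries. A minimal sketch of those two shapes with made-up rows (no Dash or pandas required), just to show what the component is handed:

```python
# Hypothetical rows, mimicking the output of df.to_dict('records')
rows = [
    {'ano': 1983, 'precipitacao': 42.1},
    {'ano': 1984, 'precipitacao': 38.7},
]

# Column spec in the shape DataTable expects: one {'name', 'id'} dict per column
columns = [{'name': col, 'id': col} for col in rows[0]]
```

# Building these structures by hand makes it easy to unit-test layout helpers like `gera_tabela` without starting a server.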
# +
import dash
import dash_table
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
import pandas as pd
# -

# Helper functions for building the layout and for handling the data in the `sousa_geral_anual.csv` file

# +
def gera_tabela(df):
    return dash_table.DataTable(
        # data
        data=df.to_dict('records'),
        # columns
        columns=[
            {
                'name': col,
                'id': col,
            } for col in df.columns
        ],
        style_cell={
            'textAlign': 'center',
            'border': '1px solid grey',
            'minWidth': '90px',
            'width': '125px',
            'maxWidth': '160px',
            'fontSize': '14',
            'font-family': 'sans-serif'
        },
        style_header={
            'backgroundColor': '#8FBC8F',
            'fontWeight': 'bold'
        },
        page_size=16,
        style_table={
            'height': 'auto',
            'minWidth': '100%',
            'overflowX': 'auto',
            'border': '2px solid lightgreen',
        }
    )

# this function renames the dataframe's columns with the names in the list
def gera_dados_selec(df):
    list_colunas = [
        'Tempo', 'Precipitação Média', 'Umidade a 2m', 'Umidade Relativa a 2m (%)', 'Pressão na Superfície (kPa)',
        'Média das Temp. a 2m (°C)', 'Média das Temp. Mínimas a 2m (°C)', 'Média das Temp. Máximas a 2m (°C)',
        'Média das Vel. Mínimas do Vento a 50m (m/s)', 'Média das Vel. Mínimas do Vento a 10m (m/s)',
        'Média das Vel. Máximas do Vento a 50m (m/s)', 'Média das Vel. Máximas do Vento a 10m (m/s)',
        'Média das Vel. do Vento a 50m (m/s)', 'Média das Vel. do Vento a 10m (m/s)', 'Temp. Máxima a 2m (°C)',
        'Média das Vel. do Vento a 50m (m/s)', 'Vel. Máxima do Vento a 10m (m/s)', 'Temp. Mínima a 2m (°C)',
        'Vel. Mínima do Vento a 50m (m/s)', 'Vel. Mínima do Vento a 10m (m/s)', 'Precipitação Acumulada'
    ]
    df.columns = list(map(lambda x: x.title(), list_colunas))
    return df

# building the header to simplify the layout
def div_topo():
    return html.Div(
        children=[
            html.H2(
                'Informações da Cidade de Sousa-PB',
                style={'font-weight': 'bold'}
            ),
            html.H4('Dados de 1983 até 2018')
        ],
        style={
            'textAlign': 'center',
            'font-weight': 'bold',
            'border': '2px solid lightgreen',
            'box-shadow': '0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19)',
            'background-color': '#8FBC8F',
        }
    )

# building the footer to simplify the layout
def div_base():
    return html.Div(
        children=[
            dcc.Markdown('''
### **Sousa, Paraíba - 2020**'''),
            dcc.Markdown('''#### Site: [Cidade Sousa](https://www.sousa.pb.gov.br/)''')
        ],
        style={
            'textAlign': 'center',
            'font-weight': 'bold',
            'border': '2px solid lightgreen',
            'box-shadow': '0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19)',
            'background-color': '#8FBC8F',
        }
    )

# +
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']

app = dash.Dash(
    __name__,
    external_stylesheets=external_stylesheets
)

df = gera_dados_selec(pd.read_csv('sousa_geral_anual.csv'))
colunas = list(df.columns)
# -

app.layout = html.Div(
    [
        div_topo(),
        html.H4(
            ['Selecione os Dados:'],
            style={
                'textAlign': 'justify',
                'text-indent': '50px',
                'line-height': '3'
            }
        ),
        # central part of the layout
        html.Div(
            [
                # left block of the central part of the layout
                html.Div(
                    [
                        dcc.Dropdown(
                            # component name
                            id='columns',
                            # list of options
                            options=[
                                {
                                    'label': i,
                                    'value': i
                                } for i in colunas[1:]
                            ],
                            value='Precipitação Média'
                        ),
                        html.Hr(),
                        dcc.Graph(
                            id='indicator-graphic',
                            style={'border': '2px solid lightgreen', 'background-color': '#ADD8E6'}
                        ),
                    ],
                    style={'margin-left': '15px', 'width': '48%', 'display': 'inline-block'}
                ),
                # right block of the central part of the layout
                html.Div(
                    [
                        gera_tabela(df),
                    ],
                    style={'margin-right': '15px', 'width': '48%', 'float': 'right', 'display': 'inline-block'}
                )
            ]
        ),
        html.Hr(),
        div_base()
    ],
    style={
        'border': '2px solid lightgreen',
        'box-shadow': '0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19)',
        'background-color': '#90EE90'
    }
)

@app.callback(
    Output('indicator-graphic', 'figure'),
    [
        Input('columns', 'value'),
    ]
)
def update_graph(coluna):
    return {
        # 'data' is a key holding a list of trace dictionaries
        'data': [
            dict(
                x=df['Tempo'],
                y=df[str(coluna)],
                mode='lines+markers',
                text=str(coluna),
                opacity=0.8
            )
        ],
        'layout': dict(
            xaxis={
                'title': 'Anual',
                'type': 'date',
                # 'rangeslider' must be a dict, not a string
                'rangeslider': dict(visible=True)
            },
            yaxis={
                'title': coluna,
                'type': 'linear'
            },
            margin={'l': 100, 'b': 40, 't': 40, 'r': 100},
            legend={'x': 0, 'y': 1},
            hovermode='closest'
        )
    }

if __name__ == '__main__':
    app.run_server(debug=True)

# ![teste](images/rodape_no_github.jpg)
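# The `update_graph` callback above returns a plain dictionary in plotly's figure format, so the same structure can be built and inspected without starting the server. A minimal sketch with hypothetical x/y lists standing in for the dataframe columns:

```python
def build_figure(x, y, titulo):
    # Same 'data'/'layout' dictionary shape as the callback's return value
    return {
        'data': [
            dict(x=x, y=y, mode='lines+markers', text=titulo, opacity=0.8)
        ],
        'layout': dict(
            xaxis={'title': 'Anual', 'type': 'date'},
            yaxis={'title': titulo, 'type': 'linear'},
            hovermode='closest',
        ),
    }

# Hypothetical sample values in place of df['Tempo'] and a data column
fig = build_figure([2016, 2017, 2018], [31.2, 28.4, 35.0], 'Precipitação Média')
```

# Because the figure is just nested dicts and lists, assertions on it can serve as fast unit tests for callback logic.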
introducao_a_dash.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# %load_ext cython

import numpy as np
import pandas as pd
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel
from sklearn.gaussian_process.kernels import RBF
import statsmodels.api as sm
from sklearn.metrics import mean_squared_error

# ### PLS1 original

def PLS1(X, Y):
    r = np.linalg.matrix_rank(X)
    E_h = X
    # keep Y two-dimensional so every product below stays 2-d;
    # this also lets callers pass a 1-d target
    Y = Y.reshape(-1, 1)
    F_h = Y
    W = {}
    B = {}
    P = {}
    for i in range(r):
        # step 1
        u = Y
        # step 2
        w = np.dot(X.T, u) / np.dot(u.T, u)
        # step 3
        w = w / np.linalg.norm(w)
        # step 4
        t = np.dot(X, w) / np.dot(w.T, w)
        # steps 5-8 omitted
        # step 9
        p = np.dot(X.T, t) / np.dot(t.T, t)
        # capture the norm before normalizing, so steps 11-12 rescale by the
        # pre-normalization value (the Cython version below already does this)
        p_norm = np.linalg.norm(p)
        # step 10
        p = p / p_norm
        # step 11
        t = t * p_norm
        # step 12
        w = w * p_norm
        # step 13
        b = np.dot(u.T, t) / np.dot(t.T, t)
        # calculation of the residuals
        t = t.reshape((-1, 1))
        p = p.reshape((-1, 1))
        E_h = E_h - np.dot(t, p.T)
        F_h = F_h - b * t
        # replace X and Y
        X = E_h
        Y = F_h
        # update W, B and P
        W[i] = w
        B[i] = b
        P[i] = p
    return W, B, P

def predict_original(X, Y, X_test):
    W, B, P = PLS1(X, Y)
    r = np.linalg.matrix_rank(X)
    E_h = X_test
    y_pred = np.zeros((X_test.shape[0], 1))
    for i in range(r):
        t_hat = E_h @ W[i]
        E_h = E_h - t_hat @ P[i].T
        y_pred = y_pred + B[i] * t_hat
    return y_pred

# ### optimized PLS1 using Cython

# + magic_args="-a" language="cython"
#
# import numpy as np
# cimport numpy as np
# from libc.stdio cimport printf
# from libc.math cimport sqrt
# from cython.parallel import prange, parallel
# import cython
# cimport cython
#
# ctypedef np.double_t DTYPE_t
# ctypedef np.int64_t
TTYPE_t # # @cython.boundscheck(False) # @cython.wraparound(False) # @cython.cdivision(True) # # # # cpdef DTYPE_t dot_1d(np.ndarray[DTYPE_t,ndim = 2] v1, np.ndarray[DTYPE_t,ndim = 2] v2): # cdef DTYPE_t result = 0 # cdef int i = 0 # cdef int length = v1.shape[0] # cdef double el1 = 0 # cdef double el2 = 0 # for i in range(length): # el1 = v1[i,0] # el2 = v2[0,i] # result += el1*el2 # return result # # @cython.boundscheck(False) # @cython.wraparound(False) # @cython.cdivision(True) # cpdef double norm_1d(double[:,:] v1): # cdef double result = 0 # cdef int i = 0 # cdef int length = v1.shape[0] # for i in range(length): # result += v1[i,0]*v1[i,0] # result = sqrt(result) # return result # # @cython.boundscheck(False) # @cython.wraparound(False) # @cython.cdivision(True) # cpdef np.ndarray[DTYPE_t, ndim=2] scalar_multiply(DTYPE_t a, np.ndarray[DTYPE_t, ndim=2] b): # cdef np.ndarray[DTYPE_t, ndim=2] mat = b.copy() # cdef TTYPE_t blen = b.shape[0] # cdef TTYPE_t bwid = b.shape[1] # for i in range(blen): # for j in range(bwid): # mat[i, j] = a*b[i, j] # return mat # # @cython.boundscheck(False) # @cython.wraparound(False) # @cython.cdivision(True) # # cpdef np.ndarray[DTYPE_t, ndim=1] scalar_division(np.ndarray[DTYPE_t, ndim=1] vec, DTYPE_t sca): # cdef np.ndarray[DTYPE_t, ndim=1] mat = vec.copy() # cdef TTYPE_t blen = vec.shape[0] # cdef int i # with cython.nogil, parallel(): # for i in prange(blen): # mat[i] = vec[i]/sca # return mat # # # @cython.boundscheck(False) # @cython.wraparound(False) # @cython.cdivision(True) # # cpdef np.ndarray[DTYPE_t, ndim=2] scalar_division_1d2d(np.ndarray[DTYPE_t, ndim=2] vec, DTYPE_t sca): # cdef np.ndarray[DTYPE_t, ndim=2] mat = vec.copy() # cdef TTYPE_t blen = vec.shape[0] # cdef TTYPE_t i # with cython.nogil, parallel(): # for i in prange(blen): # mat[i,0] = vec[i, 0]/sca # return mat # # @cython.boundscheck(False) # @cython.wraparound(False) # @cython.cdivision(True) # # cpdef np.ndarray[DTYPE_t, ndim=2] 
minus_2d(np.ndarray[DTYPE_t, ndim=2] A, np.ndarray[DTYPE_t, ndim=2] B): # cdef np.ndarray[DTYPE_t, ndim=2] mat = A.copy() # cdef int i, j # with cython.nogil, parallel(): # for i in prange(A.shape[0]): # for j in prange(A.shape[1]): # mat[i,j] = A[i,j] - B[i,j] # return mat # # @cython.boundscheck(False) # @cython.wraparound(False) # @cython.cdivision(True) # # cpdef PLS_cython(np.ndarray[DTYPE_t, ndim=2] X, np.ndarray[DTYPE_t, ndim=2] Y, int r): # cdef np.ndarray[DTYPE_t, ndim=2] E_h = X.copy() # cdef np.ndarray[DTYPE_t, ndim=2] F_h = Y.copy() # W = {} # B = {} # P = {} # cdef np.ndarray[DTYPE_t, ndim=2] u = Y.copy() # cdef np.ndarray[DTYPE_t, ndim=2] w = np.zeros([X.shape[1],1]) # cdef np.ndarray[DTYPE_t, ndim=2] t = np.zeros([X.shape[0], 1]) # cdef np.ndarray[DTYPE_t, ndim=2] p = np.zeros([X.shape[1], 1]) # cdef DTYPE_t b = 0.0 # cdef int i # # # for i in range(r): # u = Y # w = scalar_division_1d2d(np.dot(X.T, u), dot_1d(u,u.T)) # # #step 3 # w = w/norm_1d(w) # # #step 4 # # t = scalar_division_1d2d(np.dot(X, w),dot_1d(w, w.T)) # #step5-8 omitted # #step 9 # p = scalar_division_1d2d(np.dot(X.T, t),dot_1d(t,t.T)) # p_norm = norm_1d(p) # #step 10 # p = p/p_norm # #step 11 # t = t* p_norm # #step 12 # w = w * p_norm # #step 13 # b = np.dot(u.T, t)/dot_1d(t,t.T) # # Calculation of the residuals # # E_h = minus_2d(E_h,np.dot(t,p.T)) # F_h = minus_2d(F_h,scalar_multiply(b,t)) # #Replace X and Y # X = E_h # Y = F_h # #update W and B # W[i] = w # B[i] = b # P[i] = p # return W,B,P # # cpdef predict_cython(np.ndarray[DTYPE_t, ndim=2] X, np.ndarray[DTYPE_t, ndim=2] Y,np.ndarray[DTYPE_t, ndim=2] X_test,int r): # W,B,P = PLS_cython(X,Y,r) # cdef np.ndarray[DTYPE_t, ndim=2] E_h = X_test.copy() # cdef np.ndarray[DTYPE_t, ndim=2] y_pred = np.zeros((X_test.shape[0],1)) # cdef np.ndarray[DTYPE_t, ndim=2] t_hat = np.zeros((X_test.shape[0],1)) # cdef int i,j # for i in range(r): # t_hat = np.dot(E_h, W[i]) # E_h = E_h - np.dot(t_hat, P[i].T) # for j in range(y_pred.shape[0]): # 
y_pred[j,0] = y_pred[j,0] + B[i] * t_hat[j,0] # return y_pred[:,0] # # # - # ### GPR def predict_GPR(X_train,y_train,X_test,y_test): kernel = DotProduct() + WhiteKernel() # kernel = RBF() gpr = GaussianProcessRegressor(kernel=kernel).fit(X_train, y_train) y_pred = gpr.predict(X_test) return y_pred # ### OLS def predict_OLS(X_train,y_train,X_test,y_test): model = sm.OLS(y_train,sm.add_constant(X_train)) results = model.fit() y_pred = results.predict(sm.add_constant(X_test)) return y_pred # ### Simulated Data # + import pandas as pd ''' np.random.seed(9856) x1 = np.random.normal(1, .2, 100) x2 = np.random.normal(5, .4, 100) x3 = np.random.normal(12, .8, 100) ''' x1 = np.linspace(0, 10,100) x2 = np.linspace(-5, 15,100) x3 = np.linspace(-20, -15,100) def generate_sim(x1, x2, x3): sim_data = {'x1': x1, 'x2': x2, 'x3': x3, 'x4': 5 * x1, 'x5': 2 * x2, 'x6': 4 * x3, 'x7': 6 * x1, 'x8': 5 * x2, 'x9': 4 * x3, 'x10': np.random.rand() * x1, 'x11': np.random.rand() * x2, 'x12': np.random.rand() * x3, 'x13': np.random.rand() * x1, 'x14': np.random.rand() * x2, 'x15': np.random.rand() * x3, 'x16': np.random.rand() * x1, 'x17': np.random.rand() * x1, 'x18': np.random.rand() * x2, 'x19': np.random.rand() * x3, 'y0': 3 * x2 + 3 * x3, 'y1': 6 * x1 + 3 * x3, 'y2': 7 * x2 + 2 * x1} # convert data to csv file data = pd.DataFrame(sim_data) sim_predictors = data.drop(['y0', 'y1', 'y2'], axis=1).columns.tolist() sim_values = ['y0'] pred = data[sim_predictors].values val = data[sim_values].values return pred, val X_test, y_test = generate_sim(x1, x2, x3) # pred = pred.astype(np.float32) # val = val.astype(np.float32) # val_opt = val.reshape(-1,1) # test_x1 = np.random.normal(1, .2, 3000).astype(np.float32) # test_x2 = np.random.normal(5, .4, 3000).astype(np.float32) # test_x3 = np.random.normal(12, .8, 3000).astype(np.float32) test_x1 = np.random.normal(1, .2, 5000) test_x2 = np.random.normal(5, .4, 5000) test_x3 = np.random.normal(12, .8, 5000) X_train, y_train = generate_sim(test_x1, 
test_x2, test_x3) # pred_test = pred_test.astype(np.float32) # pred_val = pred_val.astype(np.float32) # - # ### Test Results # #### Original PLS1 # %timeit y_pred_ori = predict_original(X_train, y_train,X_test) y_pred_ori = predict_original(X_train, y_train,X_test) y_pred_ori = y_pred_ori.reshape((y_pred_ori.shape[0])) mean_squared_error(y_test, y_pred_ori) # #### Optimized PLS1 using Cython r = np.linalg.matrix_rank(X_train) # %timeit y_pred_cython = predict_cython(X_train,y_train,X_test,r) y_pred_cython = predict_cython(X_train,y_train,X_test,r) y_pred_cython = y_pred_cython.reshape((y_pred_cython.shape[0])) mean_squared_error(y_test, y_pred_cython) # #### GPR y_pred_GPR = predict_GPR(X_train,y_train,X_test,y_test) mean_squared_error(y_test, y_pred_GPR) # %timeit predict_GPR(X_train,y_train,X_test,y_test) # #### OLS y_pred_OLS= predict_OLS(X_train,y_train,X_test,y_test) model = sm.OLS(y_train,sm.add_constant(X_train)) results = model.fit() results.summary() mean_squared_error(y_test, y_pred_OLS) # %timeit predict_OLS(X_train,y_train,X_test,y_test) # ### Real Data wine_data = pd.read_csv("winequality-red.csv") wine_data.head() wine_d = wine_data.drop(["quality"], axis = 1) wine_l = wine_data["quality"] from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(wine_d, wine_l, test_size=0.1, random_state=42) X_train = X_train.values X_test = X_test.values y_train = y_train.values y_test = y_test.values # #### Original PLS1 p_ori = predict_original(X_train, y_train,X_test) mean_squared_error(y_test, p_ori) # %timeit predict_original(X_train, y_train,X_test) # #### Cythonized PLS1 r = np.linalg.matrix_rank(X_train) y_train = y_train.reshape([-1,1]).astype(np.double) y_pred = predict_cython(X_train,y_train,X_test,r) mean_squared_error(y_test, y_pred) # %timeit predict_cython(X_train,y_train,X_test,r) # #### OLS ols_p = predict_OLS(X_train,y_train,X_test,y_test) mean_squared_error(y_test, ols_p) # %timeit 
predict_OLS(X_train,y_train,X_test,y_test) # #### GPR y_pred_GPR = predict_GPR(X_train,y_train,X_test,y_test) mean_squared_error(y_test, y_pred_GPR) # %timeit predict_GPR(X_train,y_train,X_test,y_test)
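# The NIPALS-style loop above is easy to sanity-check on synthetic data: even a single component reduces the residual whenever the weight direction X^T y carries signal. A minimal one-component sketch (not the full deflation loop) on made-up data:

```python
import numpy as np

# One PLS1 component, mirroring the weight/score/loading steps of the loop above
def pls1_one_component(X, y):
    w = X.T @ y / (y @ y)         # step 2: weight vector
    w = w / np.linalg.norm(w)     # step 3: normalize
    t = X @ w                     # step 4: scores (w is unit-norm)
    p = X.T @ t / (t @ t)         # step 9: loadings
    b = (y @ t) / (t @ t)         # step 13: regression coefficient
    return w, p, b

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5])  # hypothetical linear target

w, p, b = pls1_one_component(X, y)
residual = np.linalg.norm(y - b * (X @ w))
```

# Since b is the least-squares coefficient of y on the scores t, the residual norm is strictly below the norm of y whenever t is correlated with y.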
PLS_1d_test.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # This notebook gives a fairly complicated example of building a Sankey diagram from the sample "fruit" database. Other examples (TODO) break this process down into simpler stages. from floweaver import * from floweaver.jupyter import show_sankey, show_view_graph from IPython.display import SVG # Load the dataset: dataset = Dataset.from_csv('fruit_flows.csv', 'fruit_processes.csv') # This made-up dataset describes flows from farms to consumers: dataset._flows.head() # Additional information is available in the process dimension table: dataset._processes.head() # We'll also define some partitions that will be useful: # + farm_ids = ['farm{}'.format(i) for i in range(1, 16)] farm_partition_5 = Partition.Simple('process', [('Other farms', farm_ids[5:])] + farm_ids[:5]) partition_fruit = Partition.Simple('material', ['bananas', 'apples', 'oranges']) partition_sector = Partition.Simple('process.sector', ['government', 'industry', 'domestic']) # - # Now define the Sankey diagram definition. # # - Process groups represent sets of processes in the underlying database. The underlying processes can be specified as a list of ids (e.g. `['inputs']`) or as a Pandas query expression (e.g. `'function == "landfill"'`). # - Waypoints allow extra control over the partitioning and placement of flows. 
nodes = { 'inputs': ProcessGroup(['inputs'], title='Other inputs'), 'compost': ProcessGroup('function == "composting stock"', title='Compost'), 'farms': ProcessGroup('function in ["allotment", "large farm", "small farm"]', farm_partition_5), 'eat': ProcessGroup('function == "consumers" and location != "London"', partition_sector, title='consumers by sector'), 'landfill': ProcessGroup('function == "landfill" and location != "London"', title='Landfill'), 'composting': ProcessGroup('function == "composting process" and location != "London"', title='Composting'), 'fruit': Waypoint(partition_fruit, title='fruit type'), 'w1': Waypoint(direction='L', title=''), 'w2': Waypoint(direction='L', title=''), 'export fruit': Waypoint(Partition.Simple('material', ['apples', 'bananas', 'oranges'])), 'exports': Waypoint(title='Exports'), } # The ordering defines how the process groups and waypoints are arranged in the final diagram. It is structured as a list of vertical *layers* (from left to right), each containing a list of horizontal *bands* (from top to bottom), each containing a list of process group and waypoint ids (from top to bottom). ordering = [ [[], ['inputs', 'compost'], []], [[], ['farms'], ['w2']], [['exports'], ['fruit'], []], [[], ['eat'], []], [['export fruit'], ['landfill', 'composting'], ['w1']], ] # Bundles represent flows in the underlying database: bundles = [ Bundle('inputs', 'farms'), Bundle('compost', 'farms'), Bundle('farms', 'eat', waypoints=['fruit']), Bundle('farms', 'compost', waypoints=['w2']), Bundle('eat', 'landfill'), Bundle('eat', 'composting'), Bundle('composting', 'compost', waypoints=['w1', 'w2']), Bundle('farms', Elsewhere, waypoints=['exports', 'export fruit', ]), ] # Finally, the process groups, waypoints, bundles and ordering are combined into a Sankey diagram definition (SDD). When applied to the dataset, the result is a Sankey diagram! 
sdd = SankeyDefinition(nodes, bundles, ordering, flow_partition=dataset.partition('material')) sankey = show_sankey(sdd, dataset, width=800, height=500) sankey # For viewing on nbviewer, save a static version of the diagram SVG(sankey.svg) # To help get a better understanding of what's going on, it may be helpful to look at the intermediate "view graph": # # > This depends on graphviz being available show_view_graph(sdd) # Waypoints are shown with dashed borders. The black dots are "dummy nodes", added so that each link in the Sankey diagram has to pass only between adjacent layers.
examples/Fruit - complete example.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd data = pd.read_csv('employee.csv') data data.tail() print(data.columns) print(data.shape) data.info() data.describe() data.filter(['name', 'department']) data['name'] data[['name', 'department']] data.filter([0,1,2], axis=0) data[2:5] data[data['department']=='Sales'] data.department data['department'] data[data.department.isin(['Sales', 'Finance'])] data[data['department'].isin(['Sales','Finance'])] data[(data.performance_score >= 700)] data[(data.performance_score >= 500) & (data.performance_score < 700)] data.query('performance_score>=500 & performance_score <700') data data2 = pd.read_csv('employee.csv') data2 = data2.dropna() data2 data data.describe() data['age']=data.age.fillna(data.age.mean()) data data['income'] = data.income.fillna(data.income.median()) data data['gender']=data.gender.fillna(data.gender.mode()[0]) data data = pd.read_csv('employee.csv') data upper_limit = data.performance_score.mean() + 3*data.performance_score.std() lower_limit = data.performance_score.mean() - 3*data.performance_score.std() upper_limit, lower_limit data data = data[(data.performance_score < upper_limit) & (data.performance_score > lower_limit)] data upper_limit = data.performance_score.quantile(.99) lower_limit = data.performance_score.quantile(.01) upper_limit, lower_limit data = data[(data.performance_score < upper_limit) & (data.performance_score > lower_limit)] data data = pd.read_csv('employee.csv') encoded_data = pd.get_dummies(data['gender']) encoded_data data = data.join(encoded_data) data from sklearn.preprocessing import OneHotEncoder onehotencoder = OneHotEncoder() data['gender']=data.gender.fillna(data.gender.mode()[0]) onehotencoder.fit_transform(data[['gender']]).toarray() encoded_data = 
pd.get_dummies(data['department'])

data = pd.read_csv('employee.csv')

from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
encoded_data = label_encoder.fit_transform(data['department'])
print(encoded_data)

inverse_encode = label_encoder.inverse_transform([0, 0, 1, 2])
print(inverse_encode)

import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
data = pd.read_csv('employee.csv')
# OrdinalEncoder expects one list of categories per feature (a list of lists),
# with each category listed once, in the desired order
order_encoder = OrdinalEncoder(categories=[['G1', 'G2', 'G3', 'G4']])

data

data['grade_encoded'] = label_encoder.fit_transform(data['grade'])
data

label_encoder.inverse_transform(data['grade_encoded'])

data

order_encoder

from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(data['performance_score'].values.reshape(-1, 1))
data['performance_std_scaler'] = scaler.transform(data['performance_score'].values.reshape(-1, 1))
data

from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaler.fit(data['performance_score'].values.reshape(-1, 1))
data['performance_minmax_scaler'] = scaler.transform(data['performance_score'].values.reshape(-1, 1))
data

from sklearn.preprocessing import RobustScaler
scaler = RobustScaler()
scaler.fit(data['performance_score'].values.reshape(-1, 1))
data['performance_robust_scaler'] = scaler.transform(data['performance_score'].values.reshape(-1, 1))
data

data = pd.read_csv('employee.csv')

def performance_grade(score):
    if score >= 700:
        return 'A'
    elif score < 700 and score >= 500:
        return 'B'
    else:
        return 'C'

data['performance_grade'] = data.performance_score.apply(performance_grade)
data

data['first_name'] = data.name.str.split(" ").map(lambda var: var[0])
data['last_name'] = data.name.str.split(" ").map(lambda var: var[1])
data

data.name.str.split(" ")
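# The 3-sigma outlier rule used above needs no pandas at all; a stdlib sketch with made-up scores (the 5000 value plays the obvious outlier):

```python
from statistics import mean, stdev

def three_sigma_bounds(values):
    # Bounds at mean ± 3 standard deviations, as used for
    # performance_score in the cells above
    m, s = mean(values), stdev(values)
    return m - 3 * s, m + 3 * s

# 30 regular scores plus one extreme value
scores = [550 + 3 * i for i in range(30)] + [5000]
lo, hi = three_sigma_bounds(scores)
kept = [v for v in scores if lo < v < hi]
```

# Note that on very small samples a single extreme value can inflate the standard deviation enough to hide itself, which is why quantile-based bounds (as in the 1%/99% cells above) are often preferred.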
Chapter07 Cleaning Messy Data/Untitled1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import re from pandas import DataFrame d1 = pd.read_csv('tidydata/song_data_yewon_ver02.csv') d1.info() hangs = d1.creator hangs[0] m = re.search('^(.+) 작사(.+) 작곡(.+) 편곡', str(hangs[0])) str(m.group(1)) # here! # + lyricist = [] composer = [] arranger = [] for i in range(0, len(hangs)): m = re.match('^(.+) 작사(.+) 작곡(.+) 편곡', str(hangs[i])) try: l = str(m.group(1)) lyricist.append(l) c = str(m.group(2)) composer.append(c) a = str(m.group(3)) arranger.append(a) except: lyricist.append('nan') composer.append('nan') arranger.append('nan') # - d2 = DataFrame({"lyricist": lyricist, "composer":composer, "arranger":arranger}) d2.info() d2.tail(100) d2['lyricist'] = d2['lyricist'].str.replace(r'작사',',') d2['composer'] = d2['composer'].str.replace(r'작곡',',') d2['arranger'] = d2['arranger'].str.replace(r'편곡',',') d2.tail(100) # merge df1 = pd.merge(d1, d2, how='outer', left_index=True, right_index=True) df1.tail(200) df2 = df1.drop(df1.columns[7], axis=1) df2 df2.to_csv('tidydata/song_data_yewon_ver03.csv', index=False)
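A vectorized alternative to the manual loop is `Series.str.extract` with named groups. This sketch uses a toy stand-in for the `creator` column, since the real CSV is not assumed available here:

```python
import pandas as pd

# Toy stand-in for the creator column (the real file is
# tidydata/song_data_yewon_ver02.csv, not assumed available here)
d_toy = pd.DataFrame({'creator': ['Kim 작사 Lee 작곡 Park 편곡', 'no match here']})

# One vectorized call replaces the manual loop; rows that do not match
# the pattern come back as NaN instead of the 'nan' strings used above
parts = d_toy['creator'].str.extract(
    r'^(?P<lyricist>.+) 작사(?P<composer>.+) 작곡(?P<arranger>.+) 편곡')
print(parts)
```

The named groups become column names directly, so the separate `DataFrame` construction and merge steps collapse into one expression.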
SongTidy/creator_tidy_yewon_ver01.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Regularized Regression # + import numpy as np import matplotlib.pyplot as plt import seaborn as sns from sklearn import datasets boston = datasets.load_boston() X = boston['data'] y = boston['target'] # - # Before building the `RegularizedRegression` class, let's define a few helper functions. The first function standardizes the data by removing the mean and dividing by the standard deviation. This is the equivalent of the `StandardScaler` from `scikit-learn`. # # The `sign` function simply returns the sign of each element in an array. This is useful for calculating the gradient in Lasso regression. The `first_element_zero` option makes the function return a 0 (rather than a -1 or 1) for the first element. As discussed in the {doc}`concept section </content/c2/s1/regularized>`, this prevents Lasso regression from penalizing the magnitude of the intercept. # # + def standard_scaler(X): means = X.mean(0) stds = X.std(0) return (X - means)/stds def sign(x, first_element_zero = False): signs = (-1)**(x < 0) if first_element_zero: signs[0] = 0 return signs # - # The `RegularizedRegression` class below contains methods for fitting Ridge and Lasso regression. The first method, `record_info`, handles standardization, adds an intercept to the predictors, and records the necessary values. The second, `fit_ridge`, fits Ridge regression using # # $$ # \hat{\bbeta} = \left( \bX^\top \bX + \lambda I'\right) ^{-1} \bX^\top \by. # $$ # # The third method, `fit_lasso`, estimates the regression parameters using gradient descent. The gradient is the derivative of the Lasso loss function: # # $$ # \dadb{L(\bbetahat)}{\bbetahat} = - \bX^\top\left( \by - \bX \bbetahat \right) + \lambda I'\text{ sign}(\bbetahat). 
# $$ # # The gradient descent used here simply adjusts the parameters a fixed number of times (determined by `n_iters`). There are many more efficient ways to implement gradient descent, though we use a simple implementation here to keep the focus on Lasso regression. class RegularizedRegression: def _record_info(self, X, y, lam, intercept, standardize): # standardize if standardize == True: X = standard_scaler(X) # add intercept if intercept == False: ones = np.ones(len(X)).reshape(len(X), 1) # column of ones X = np.concatenate((ones, X), axis = 1) # concatenate # record values self.X = np.array(X) self.y = np.array(y) self.N, self.D = self.X.shape self.lam = lam def fit_ridge(self, X, y, lam = 0, intercept = False, standardize = True): # record data and dimensions self._record_info(X, y, lam, intercept, standardize) # estimate parameters XtX = np.dot(self.X.T, self.X) I_prime = np.eye(self.D) I_prime[0,0] = 0 XtX_plus_lam_inverse = np.linalg.inv(XtX + self.lam*I_prime) Xty = np.dot(self.X.T, self.y) self.beta_hats = np.dot(XtX_plus_lam_inverse, Xty) # get fitted values self.y_hat = np.dot(self.X, self.beta_hats) def fit_lasso(self, X, y, lam = 0, n_iters = 2000, lr = 0.0001, intercept = False, standardize = True): # record data and dimensions self._record_info(X, y, lam, intercept, standardize) # estimate parameters beta_hats = np.random.randn(self.D) for i in range(n_iters): dL_dbeta = -self.X.T @ (self.y - (self.X @ beta_hats)) + self.lam*sign(beta_hats, True) beta_hats -= lr*dL_dbeta self.beta_hats = beta_hats # get fitted values self.y_hat = np.dot(self.X, self.beta_hats) # The following cell runs Ridge and Lasso regression for the {doc}`Boston housing</content/appendix/data>` dataset. For simplicity, we somewhat arbitrarily choose $\lambda = 10$; in practice, this value should be chosen through cross validation.
# + # set lambda lam = 10 # fit ridge ridge_model = RegularizedRegression() ridge_model.fit_ridge(X, y, lam) # fit lasso lasso_model = RegularizedRegression() lasso_model.fit_lasso(X, y, lam) # - # The graphic below shows the coefficient estimates using Ridge and Lasso regression with a changing value of $\lambda$. Note that $\lambda = 0$ is identical to ordinary linear regression. As expected, the magnitude of the coefficient estimates decreases as $\lambda$ increases. # + tags=["hide-input"] Xs = ['X'+str(i + 1) for i in range(X.shape[1])] lams = [10**4, 10**2, 0] fig, ax = plt.subplots(nrows = 2, ncols = len(lams), figsize = (6*len(lams), 10), sharey = True) for i, lam in enumerate(lams): ridge_model = RegularizedRegression() ridge_model.fit_ridge(X, y, lam) ridge_betas = ridge_model.beta_hats[1:] sns.barplot(Xs, ridge_betas, ax = ax[0, i], palette = 'PuBu') ax[0, i].set(xlabel = 'Regressor', title = fr'Ridge Coefficients with $\lambda = $ {lam}') ax[0, i].set(xticks = np.arange(0, len(Xs), 2), xticklabels = Xs[::2]) lasso_model = RegularizedRegression() lasso_model.fit_lasso(X, y, lam) lasso_betas = lasso_model.beta_hats[1:] sns.barplot(Xs, lasso_betas, ax = ax[1, i], palette = 'PuBu') ax[1, i].set(xlabel = 'Regressor', title = fr'Lasso Coefficients with $\lambda = $ {lam}') ax[1, i].set(xticks = np.arange(0, len(Xs), 2), xticklabels = Xs[::2]) ax[0,0].set(ylabel = 'Coefficient') ax[1,0].set(ylabel = 'Coefficient') plt.subplots_adjust(wspace = 0.2, hspace = 0.4) sns.despine() sns.set_context('talk');
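As a sanity check on the closed form, here is a minimal sketch on synthetic data (hypothetical; the Boston variables above are not reused). It verifies that the ridge solution matches ordinary least squares at $\lambda = 0$ and shrinks as $\lambda$ grows:

```python
import numpy as np

# Synthetic regression problem with a known coefficient vector
rng = np.random.default_rng(0)
X_syn = rng.normal(size=(100, 3))
y_syn = X_syn @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)

def ridge_beta(X, y, lam):
    # (X^T X + lam I)^{-1} X^T y; every coefficient is penalized here,
    # since this sketch has no intercept column
    D = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(D), X.T @ y)

# A heavier penalty should shrink the coefficient vector
norms = [np.linalg.norm(ridge_beta(X_syn, y_syn, lam)) for lam in (0, 10, 1000)]
print(norms)  # strictly decreasing
```

Using `np.linalg.solve` rather than forming the explicit inverse is the usual numerically safer choice, though the class above uses `np.linalg.inv` for clarity.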
content/c2/s2/regularized.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Ex04 - Frequency-domain filtering # In this exercise we will test filtering in the frequency domain using the Discrete Fourier Transform (DFT). # ### Part 1 - DFT and inverse DFT # # First, let's practice computing and visualizing the DFT and the inverse DFT. We will also verify that the inverse DFT of the DFT of an image is the image itself, as the properties of the Fourier Transform guarantee. import numpy as np import sys,os ea979path = os.path.abspath('../../') if ea979path not in sys.path: sys.path.append(ea979path) import ea979.src as ia # %matplotlib inline import matplotlib.image as mpimg import matplotlib.pyplot as plt import skimage.filters as skf # To check this, we will use the *barcode* image. First, we compute the DFT of the image using Numpy's fft2 function ([see the documentation](https://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fft2.html)). To visualize the spectrum, we use the dftview function from the ea979 library, which takes the log of the magnitude and then translates the transform to center it ([ea979/dftview](../src/dftview.ipynb)). # + f = mpimg.imread('../data/barcode.tif') F = np.fft.fft2(f) plt.figure(1, figsize=(14,8)) plt.subplot(1,2,1) plt.imshow(f, cmap='gray') plt.subplot(1,2,2) plt.imshow(ia.dftview(F), cmap='gray') # - # #### 1.1 Visualizing the DFT in several ways # Let's try to visualize the DFT without the ready-made dftview function. First, visualize the Fourier Transform without centering it and without the log. Then visualize it centered, still without the log. Finally, visualize it with dftview. Plot the 3 images side by side so they can be compared. # #### 1.2 Computing the inverse DFT # Compute the inverse DFT using the ifft2 function from the same library.
Compare the result of the inverse DFT with the original image, not only by viewing the images side by side but also numerically. # ### Part 2 - Filtering images in the frequency domain # To filter in the frequency domain, we use the Convolution Theorem, which guarantees that convolution in the spatial domain is equivalent to a product in the frequency domain. # # That is, instead of applying a filter in the spatial domain by convolving the image $f(x,y)$ with a mask $h(x,y)$, we can apply a filter in the frequency domain as the product of the Fourier Transform of the image, $F(u,v)$, with the Fourier Transform of the mask (filter), $H(u,v)$. # # So the first step is to create a filter $H(u,v)$ in the frequency domain. Remember that: # # - it is easier to create the filter centered and then translate it to the corners # - the filter must be conjugate symmetric # - to learn more about building low-pass and high-pass filters, see the tutorial [Filtros em frequência](12_Filtros_em_frequencia.ipynb) # #### 2.1 Designing a filter in the frequency domain # # Use the function below to create 2 ideal filters: a low-pass filter (LPF) and a high-pass filter (HPF). The filters should be built to filter the *barcode* image shown in Part 1 of this notebook. Visualize the filters before using them. # + # Build the ideal filter (a circle) in the frequency domain def cria_filtro_ideal(f, r1, r2): x,y = f.shape c1=ia.circle(f.shape, r1, np.divide(f.shape, 2)) c2=ia.circle(f.shape, r2, np.divide(f.shape, 2)) H = np.logical_xor(c1,c2) return ia.ptrans(H,(x//2,y//2)) # - # #### 2.2 Filtering images in the frequency domain # # Use the LPF and HPF designed above to filter the *barcode* image. The filtering must be done in the frequency domain (using the Convolution Theorem) and, after computing the inverse Fourier Transform, the resulting filtered image must be viewed in the spatial domain.
Change the cutoff frequency of each filter and justify your choice. Explain the effect of each filter applied. # ### Part 3 - Recovering a *halftone* image # # The following image was produced with a technique known as halftone. This technique simulates the illusion of continuous tone by reproducing many dots at a size not easily perceived by the viewer. This optical illusion matters because it compensates for the inability of prints and inks to create tonal scales ranging from solid (usually black) to the tone of unprinted paper (usually white). # + f2 = mpimg.imread('../data/halftone.png') F2 = np.fft.fft2(f2) plt.figure(1, figsize=(14,8)) plt.subplot(1,2,1) plt.imshow(f2, cmap='gray') plt.subplot(1,2,2) plt.imshow(ia.dftview(F2), cmap='gray') # - # As we can see in the figure on the right, the Fourier spectrum of a *halftone* image has an interesting property: it shows copies of the original spectrum scattered across the high frequencies. # # Design and apply a filter in the frequency domain to improve the quality of the *halftone* image. Explain your solution.
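For reference, here is a minimal numpy-only sketch of ideal low-pass filtering, with no dependence on the course helpers (`ia.circle`, `ia.ptrans`); the toy image and cutoff radius are arbitrary choices for illustration:

```python
import numpy as np

# Toy image: a bright square on a dark background
f_toy = np.zeros((64, 64))
f_toy[28:36, 28:36] = 1.0

F_toy = np.fft.fft2(f_toy)

# Build a centered circular mask, then shift its center to the origin
rows, cols = f_toy.shape
u = np.arange(rows) - rows // 2
v = np.arange(cols) - cols // 2
U, V = np.meshgrid(u, v, indexing='ij')
H = (U**2 + V**2 <= 10**2).astype(float)   # ideal LPF, cutoff radius 10
H = np.fft.ifftshift(H)                    # same role as ia.ptrans above

# Product in frequency = convolution in space (Convolution Theorem)
g = np.real(np.fft.ifft2(F_toy * H))
print(g.shape)
```

Because the mask keeps the zero-frequency (DC) term, the filtered image preserves the total intensity of the original; the high-pass version is simply `1 - H`.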
1S2020/EA979A_Ex04_FiltragemFrequencia.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Testing # + [markdown] tags=[] # Think Bayes, Second Edition # # Copyright 2020 <NAME> # # License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) # + tags=[] # If we're running on Colab, install empiricaldist # https://pypi.org/project/empiricaldist/ import sys IN_COLAB = 'google.colab' in sys.modules if IN_COLAB: # !pip install empiricaldist # + tags=[] # Get utils.py import os if not os.path.exists('utils.py'): # !wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py # + tags=[] from utils import set_pyplot_params set_pyplot_params() # - # In Chapter xxx I presented a problem from David MacKay's book, [*Information Theory, Inference, and Learning Algorithms*](http://www.inference.org.uk/mackay/itila/p0.html): # # "A statistical statement appeared in *The Guardian* on Friday January 4, 2002: # # > When spun on edge 250 times, a Belgian one-euro coin came up heads 140 times and tails 110. \`It looks very suspicious to me,' said <NAME>, a statistics lecturer at the London School of Economics. \`If the coin were unbiased, the chance of getting a result as extreme as that would be less than 7%.' # # "But [MacKay asks] do these data give evidence that the coin is biased rather than fair?" # # We started to answer this question in Chapter xxx and came back to it in Chapter xxx. To review, our answer was based on these modeling decisions: # # * If you spin a coin on edge, there is some probability, $x$, that it will land heads up. # # * The value of $x$ varies from one coin to the next, depending on how the coin is balanced and possibly other factors. 
# # Starting with a uniform prior distribution for $x$, we updated it with the given data, 140 heads and 110 tails. Then we used the posterior distribution to compute the most likely value of $x$, the posterior mean, and a credible interval. # # But we never really answered MacKay's question: "Do these data give evidence that the coin is biased rather than fair?" # # In this chapter, finally, we will. # ## Estimation # # Let's review the solution to the Euro problem from Chapter xxx. We started with a uniform prior. # + import numpy as np from empiricaldist import Pmf xs = np.linspace(0, 1, 101) uniform = Pmf(1, xs) # - # And we used the binomial distribution to compute the probability of the data for each possible value of $x$. # + from scipy.stats import binom k, n = 140, 250 likelihood = binom.pmf(k, n, xs) # - # We computed the posterior distribution in the usual way. # + tags=[] posterior = uniform * likelihood posterior.normalize() # - # And here's what it looks like. # + tags=[] from utils import decorate posterior.plot(label='140 heads out of 250') decorate(xlabel='Proportion of heads (x)', ylabel='Probability', title='Posterior distribution of x') # - # Again, the posterior mean is about 0.56, with a 90% credible interval from 0.51 to 0.61. print(posterior.mean(), posterior.credible_interval(0.9)) # The prior mean was 0.5, and the posterior mean is 0.56, so it seems like the data is evidence that the coin is biased. # # But, it turns out not to be that simple. 
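The same update can be sketched with plain numpy arrays, for readers without `empiricaldist` installed (the binomial coefficient is dropped because it cancels in the normalization):

```python
import numpy as np

# Grid approximation of the update above: uniform prior over x,
# binomial likelihood (up to a constant) for 140 heads in 250 spins
xs = np.linspace(0, 1, 101)
prior = np.ones_like(xs)

k, n = 140, 250
likelihood = xs**k * (1 - xs)**(n - k)   # constant factor cancels below

posterior = prior * likelihood
posterior /= posterior.sum()

mean = np.sum(xs * posterior)
print(round(mean, 2))  # 0.56, matching the posterior mean in the text
```

This is exactly what `Pmf` does under the hood: elementwise multiply, then normalize.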
# ## Evidence # # In Chapter xxx, we said that data are considered evidence in favor of a hypothesis, $A$, if the data are more likely under $A$ than under the alternative, $B$; that is if # # $$P(D|A) > P(D|B)$$ # # Furthermore, we can quantify the strength of the evidence by computing the ratio of these likelihoods, which is known as the [Bayes factor](https://en.wikipedia.org/wiki/Bayes_factor) and often denoted $K$: # # $$K = \frac{P(D|A)}{P(D|B)}$$ # # So, for the Euro problem, let's consider two hypotheses, `fair` and `biased`, and compute the likelihood of the data under each hypothesis. # # If the coin is fair, the probability of heads is 50%, and we can compute the probability of the data (140 heads out of 250 spins) using the binomial distribution: # + k = 140 n = 250 like_fair = binom.pmf(k, n, p=0.5) like_fair # - # That's the probability of the data, given that the coin is fair. # # But if the coin is biased, what's the probability of the data? That depends on what "biased" means. # If we know ahead of time that "biased" means the probability of heads is 56%, we can use the binomial distribution again: like_biased = binom.pmf(k, n, p=0.56) like_biased # Now we can compute the likelihood ratio: K = like_biased / like_fair K # The data are about 6 times more likely if the coin is biased, by this definition, than if it is fair. # # But we used the data to define the hypothesis, which seems like cheating. To be fair, we should define "biased" before we see the data. # ## Uniformly Distributed Bias # # Suppose "biased" means that the probability of heads is anything except 50%, and all other values are equally likely. # # We can represent that definition by making a uniform distribution and removing 50%. # + tags=[] biased_uniform = uniform.copy() biased_uniform[0.5] = 0 biased_uniform.normalize() # - # To compute the probability of the data under this hypothesis, we compute the probability of the data for each value of $x$. 
xs = biased_uniform.qs likelihood = binom.pmf(k, n, xs) # Then multiply by the prior probabilities and add up the products: like_uniform = np.sum(biased_uniform * likelihood) like_uniform # So that's the probability of the data under the "biased uniform" hypothesis. # # Now we can compute the likelihood ratio of the data under the `fair` and `biased uniform` hypotheses: K = like_fair / like_uniform K # The data are about two times more likely if the coin is fair than if it is biased, by this definition of "biased". # # To get a sense of how strong that evidence is, we can apply Bayes's rule. # For example, if the prior probability is 50% that the coin is biased, the prior odds are 1, so the posterior odds are about 2.1 to 1 and the posterior probability is about 68%. prior_odds = 1 posterior_odds = prior_odds * K posterior_odds def prob(o): return o / (o+1) posterior_probability = prob(posterior_odds) posterior_probability # Evidence that "moves the needle" from 50% to 68% is not very strong. # Now suppose "biased" doesn't mean every value of $x$ is equally likely. Maybe values near 50% are more likely and values near the extremes are less likely. # # We could use a triangle-shaped distribution to represent this alternative definition of "biased": # + tags=[] ramp_up = np.arange(50) ramp_down = np.arange(50, -1, -1) a = np.append(ramp_up, ramp_down) triangle = Pmf(a, xs, name='triangle') triangle.normalize() # - # As we did with the uniform distribution, we can remove 50% as a possible value of $x$ (but it doesn't make much difference if we skip this detail). # + tags=[] biased_triangle = triangle.copy() biased_triangle[0.5] = 0 biased_triangle.normalize() # - # **Exercise:** Now compute the total probability of the data under this definition of "biased" and compute the Bayes factor, compared with the fair hypothesis. # # Is the data evidence that the coin is biased? 
# + # Solution goes here # + # Solution goes here # + # Solution goes here # - # ## Bayesian hypothesis testing # # What we've done so far in this chapter is sometimes called "Bayesian hypothesis testing" in contrast with [statistical hypothesis testing](https://en.wikipedia.org/wiki/Statistical_hypothesis_testing). # # In statistical hypothesis testing, we compute a p-value, which is hard to define concisely, and use it to determine whether the results are "statistically significant", which is also hard to define concisely. # # The Bayesian alternative is to report the Bayes factor, $K$, which summarizes the strength of the evidence in favor of one hypothesis or the other. # # Some people think it is better to report $K$ than a posterior probability because $K$ does not depend on a prior probability. # But as we saw in this example, $K$ often depends on a precise definition of the hypotheses, which can be just as controversial as a prior probability. # # In my opinion, Bayesian hypothesis testing is better because it measures the strength of the evidence on a continuum, rather than trying to make a binary determination. # But it doesn't solve what I think is the fundamental problem, which is that hypothesis testing is not asking the question we really care about. # # To see why, suppose you test the coin and decide that it is biased after all. What can you do with this answer? In my opinion, not much. # In contrast, there are two kinds of questions I think are more useful (and therefore more meaningful): # # * Prediction: Based on what we know about the coin, what should we expect to happen in the future? # # * Decision-making: Can we use those predictions to make better decisions? # # At this point, we've seen a few examples of prediction. For example, in Chapter xxx we used the posterior distribution of goal-scoring rates to predict the outcome of soccer games.
# # And we've seen one previous example of decision analysis: In Chapter xxx we used the distribution of prices to choose an optimal bid on *The Price is Right*. # # So let's finish this chapter with another example of Bayesian decision analysis, the Bayesian Bandit strategy. # ## Bayesian Bandits # # If you have ever been to a casino, you have probably seen a slot machine, which is sometimes called a "one-armed bandit" because it has a handle like an arm and the ability to take money like a bandit. # # The Bayesian Bandit strategy is named after one-armed bandits because it solves a problem based on a simplified version of a slot machine. # # Suppose that each time you play a slot machine, there is a fixed probability that you win. And suppose that different machines give you different probabilities of winning, but you don't know what the probabilities are. # # Initially, you have the same prior belief about each of the machines, so you have no reason to prefer one over the others. But if you play each machine a few times, you can use the results to estimate the probabilities. And you can use the estimated probabilities to decide which machine to play next. # # At a high level, that's the Bayesian bandit strategy. Now let's see the details. # ## Prior beliefs # # If we know nothing about the probability of winning, we can start with a uniform prior. # + tags=[] xs = np.linspace(0, 1, 101) prior = Pmf(1, xs) prior.normalize() # - # Now I'll make four copies of the prior to represent our beliefs about the four machines. beliefs = [prior.copy() for i in range(4)] # + [markdown] tags=[] # This function displays four distributions in a grid.
# + tags=[] import matplotlib.pyplot as plt options = dict(xticklabels='invisible', yticklabels='invisible') def plot(beliefs, **options): for i, pmf in enumerate(beliefs): plt.subplot(2, 2, i+1) pmf.plot(label='Machine %s' % i) decorate(yticklabels=[]) if i in [0, 2]: decorate(ylabel='PDF') if i in [2, 3]: decorate(xlabel='Probability of winning') plt.tight_layout() # - # Here's what the prior distributions look like for the four machines. plot(beliefs) # ## The update # # Each time we play a machine, we can use the outcome to update our beliefs. The following function does the update. likelihood = { 'W': xs, 'L': 1 - xs } def update(pmf, data): """Update the probability of winning.""" pmf *= likelihood[data] pmf.normalize() # This function updates the prior distribution in place. # `pmf` is a `Pmf` that represents the prior distribution of `x`, which is the probability of winning. # # `data` is a string, either `W` if the outcome is a win or `L` if the outcome is a loss. # # The likelihood of the data is either `xs` or `1-xs`, depending on the outcome. # # Suppose we choose a machine, play 10 times, and win once. We can compute the posterior distribution of `x`, based on this outcome, like this: # + tags=[] np.random.seed(17) # + bandit = prior.copy() for outcome in 'WLLLLLLLLL': update(bandit, outcome) # - # Here's what the posterior looks like. # + tags=[] bandit.plot() decorate(xlabel='Probability of winning', ylabel='PDF', title='Posterior distribution, 9 losses, one win') # - # ## Multiple bandits # Now suppose we have four machines with these probabilities: actual_probs = [0.10, 0.20, 0.30, 0.40] # Remember that as a player, we don't know these probabilities. # # The following function takes the index of a machine, simulates playing the machine once, and returns the outcome, `W` or `L`. # + from collections import Counter # count how many times we've played each machine counter = Counter() def play(i): """Play machine i.
i: index of the machine to play returns: string 'W' or 'L' """ counter[i] += 1 p = actual_probs[i] if np.random.random() < p: return 'W' else: return 'L' # - # `counter` is a `Counter`, which is a kind of dictionary we'll use to keep track of how many times each machine is played. # # Here's a test that plays each machine 10 times. for i in range(4): for _ in range(10): outcome = play(i) update(beliefs[i], outcome) # Each time through the inner loop, we play one machine and update our beliefs. # # Here's what our posterior beliefs look like. plot(beliefs) # Here are the actual probabilities, posterior means, and 90% credible intervals. # + tags=[] import pandas as pd def summarize_beliefs(beliefs): """Compute means and credible intervals. beliefs: sequence of Pmf returns: DataFrame """ columns = ['Actual P(win)', 'Posterior mean', 'Credible interval'] df = pd.DataFrame(columns=columns) for i, b in enumerate(beliefs): mean = np.round(b.mean(), 3) ci = b.credible_interval(0.9) ci = np.round(ci, 3) df.loc[i] = actual_probs[i], mean, ci return df # + tags=[] summarize_beliefs(beliefs) # - # We expect the credible intervals to contain the actual probabilities most of the time. # ## Explore and Exploit # # Based on these posterior distributions, which machine do you think we should play next? One option would be to choose the machine with the highest posterior mean. # # That would not be a bad idea, but it has a drawback: since we have only played each machine a few times, the posterior distributions are wide and overlapping, which means we are not sure which machine is the best; if we focus on one machine too soon, we might choose the wrong machine and play it more than we should. # # To avoid that problem, we could go to the other extreme and play all machines equally until we are confident we have identified the best machine, and then play it exclusively.
# # That's not a bad idea either, but it has a drawback: while we are gathering data, we are not making good use of it; until we're sure which machine is the best, we are playing the others more than we should. # # The Bayesian Bandits strategy avoids both drawbacks by gathering and using data at the same time. In other words, it balances exploration and exploitation. # # The kernel of the idea is called Thompson sampling: when we choose a machine, we choose at random so that the probability of choosing each machine is proportional to the probability that it is the best. # # Given the posterior distributions, we can compute the "probability of superiority" for each machine. # # Here's one way to do it. We can draw a sample of 1000 values from each posterior distribution, like this: samples = np.array([b.choice(1000) for b in beliefs]) samples.shape # The result has 4 rows and 1000 columns. We can use `argmax` to find the index of the largest value in each column: indices = np.argmax(samples, axis=0) indices.shape # The `Pmf` of these indices is the fraction of times each machine yielded the highest values. pmf = Pmf.from_seq(indices) pmf # These fractions approximate the probability of superiority for each machine. So we could choose the next machine by choosing a value from this `Pmf`. pmf.choice() # But that's a lot of work to choose a single value, and it's not really necessary, because there's a shortcut. # # If we draw a single random value from each posterior distribution and select the machine that yields the highest value, it turns out that we'll select each machine in proportion to its probability of superiority. # # That's what the following function does. def choose(beliefs): """Use Thompson sampling to choose a machine. Draws a single sample from each distribution.
returns: index of the machine that yielded the highest value """ ps = [b.choice() for b in beliefs] return np.argmax(ps) # This function chooses one value from the posterior distribution of each machine and then uses `argmax` to find the index of the machine that yielded the highest value. # # Here's an example. choose(beliefs) # ## The Strategy # # Putting it all together, the following function chooses a machine, plays once, and updates `beliefs`: def choose_play_update(beliefs, verbose=False): """Choose a machine, play it, and update beliefs.""" # choose a machine machine = choose(beliefs) # play it outcome = play(machine) # update beliefs update(beliefs[machine], outcome) if verbose: print(i, outcome, beliefs[machine].mean()) # To test it out, let's start again with a fresh set of beliefs and an empty `Counter`. beliefs = [prior.copy() for i in range(4)] counter = Counter() # If we run the bandit algorithm 100 times, we can see how `beliefs` gets updated: # + num_plays = 100 for i in range(num_plays): choose_play_update(beliefs) plot(beliefs) # - # The following table summarizes the results. # + tags=[] summarize_beliefs(beliefs) # - # The credible intervals usually contain the actual probabilities of winning. # # The estimates are still rough, especially for the lower-probability machines. But that's a feature, not a bug: the goal is to play the high-probability machines most often. Making the estimates more precise is a means to that end, but not an end in itself. # # More importantly, let's see how many times each machine got played. # + tags=[] def summarize_counter(counter): """Report the number of times each machine was played.
counter: collections.Counter returns: DataFrame """ index = range(4) columns = ['Actual P(win)', 'Times played'] df = pd.DataFrame(index=index, columns=columns) for i, count in counter.items(): df.loc[i] = actual_probs[i], count return df # + tags=[] summarize_counter(counter) # - # If things go according to plan, the machines with higher probabilities should get played more often. # ## Summary # # In this chapter we finally solved the Euro problem, determining whether the data support the hypothesis that the coin is fair or biased. We found that the answer depends on how we define "biased". And we summarized the results using a Bayes factor, which quantifies the strength of the evidence. # # But the answer wasn't satisfying because, in my opinion, the question wasn't interesting. Knowing whether the coin is biased is not useful unless it helps us make better predictions and better decisions. # # As an example of a more interesting question, we looked at the "one-armed bandit" problem and a strategy for solving it, the Bayesian bandit algorithm, which tries to balance exploration and exploitation, that is, gathering more information and making the best use of the information we have. # # As a second example, we considered standardized tests and how they measure the ability of test-takers. We saw that a simple test, where all questions have the same difficulty, is most precise for test-takers with average ability and less precise for test-takers with the lowest and highest ability. # As an exercise, you'll have a chance to see whether adaptive testing can do better. # # Bayesian bandits and adaptive testing are examples of [Bayesian decision theory](https://wiki.lesswrong.com/wiki/Bayesian_decision_theory), which is the idea of using a posterior distribution as part of a decision-making process, often by choosing an action that minimizes the costs we expect on average (or maximizes a benefit).
# # The strategy we used in Chapter xxx to bid on *The Price is Right* is another example. # # These strategies demonstrate what I think is the biggest advantage of Bayesian methods over classical statistics. When we represent knowledge in the form of probability distributions, Bayes's theorem tells us how to change our beliefs as we get more data, and Bayesian decision theory tells us how to make that knowledge actionable. # ## Exercises # # **Exercise:** Standardized tests like the [SAT](https://en.wikipedia.org/wiki/SAT) are often used as part of the admission process at colleges and universities. # The goal of the SAT is to measure the academic preparation of the test-takers; if it is accurate, their scores should reflect their actual ability in the domain of the test. # # Until recently, tests like the SAT were taken with paper and pencil, but now students have the option of taking the test online. In the online format, it is possible for the test to be "adaptive", which means that it can [choose each question based on responses to previous questions](https://www.nytimes.com/2018/04/05/education/learning/tests-act-sat.html). # # If a student gets the first few questions right, the test can challenge them with harder questions. If they are struggling, it can give them easier questions. # Adaptive testing has the potential to be more "efficient", meaning that with the same number of questions an adaptive test could measure the ability of a tester more precisely. # # To see whether this is true, we will develop a model of an adaptive test and quantify the precision of its measurements. # # Details of this exercise are in the notebook.
# + [markdown] tags=[] # ## The Model # # The model we'll use is based on [item response theory](https://en.wikipedia.org/wiki/Item_response_theory), which assumes that we can quantify the difficulty of each question and the ability of each test-taker, and that the probability of a correct response is a function of difficulty and ability. # # Specifically, a common assumption is that this function is a three-parameter logistic function: # # $$\mathrm{p} = c + \frac{1-c}{1 + e^{-(\theta-b)/a}}$$ # # where $\theta$ is the ability of the test-taker and $b$ is the difficulty of the question. # # $c$ is the lowest probability of getting a question right, supposing the test-taker with the lowest ability tries to answer the hardest question. On a multiple-choice test with four responses, $c$ might be 0.25, which is the probability of getting the right answer by guessing at random. # # $a$ controls the shape of the curve. # # The following function computes the probability of a correct answer, given `ability` and `difficulty`: # + tags=[] def prob_correct(ability, difficulty): """Probability of a correct response.""" a = 100 c = 0.25 x = (ability - difficulty) / a p = c + (1-c) / (1 + np.exp(-x)) return p # + [markdown] tags=[] # I chose `a` to make the range of scores comparable to the SAT, which reports scores from 200 to 800. # # Here's what the logistic curve looks like for a question with difficulty 500 and a range of abilities. # + tags=[] abilities = np.linspace(100, 900) diff = 500 ps = prob_correct(abilities, diff) # + tags=[] plt.plot(abilities, ps) decorate(xlabel='ability', ylabel='Probability correct', title='Probability of correct answer, difficulty=500', ylim=[0, 1.05]) # + [markdown] tags=[] # Someone with `ability=900` is nearly certain to get the right answer. # Someone with `ability=100` has about a 25% chance of getting the right answer by guessing.
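To confirm the behavior at the midpoint and the extremes, here is a small self-contained check (it restates `prob_correct` so the cell runs on its own): when ability equals difficulty, the logistic term is 1/2, so p = c + (1-c)/2 = 0.625; far below the difficulty, p falls toward the guessing floor c; far above, it approaches 1.

```python
import numpy as np

def prob_correct(ability, difficulty):
    """Probability of a correct response (logistic with a=100, c=0.25)."""
    a = 100
    c = 0.25
    x = (ability - difficulty) / a
    return c + (1 - c) / (1 + np.exp(-x))

# At the midpoint, ability == difficulty, so p = 0.25 + 0.75/2 = 0.625.
print(prob_correct(500, 500))  # 0.625

# Near the extremes of the ability range plotted above.
print(prob_correct(100, 500))  # close to the guessing floor, about 0.26
print(prob_correct(900, 500))  # nearly certain, about 0.99
```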
# + [markdown] tags=[] # ## Simulating the test # # To simulate the test, we'll use the same structure we used for the bandit strategy: # # * A function called `play` that simulates a test-taker answering one question. # # * A function called `choose` that chooses the next question to pose. # # * A function called `update` that uses the outcome (a correct response or not) to update the estimate of the test-taker's ability. # # Here's `play`, which takes `ability` and `difficulty` as parameters. # + tags=[] def play(ability, difficulty): """Simulate a test-taker answering a question.""" p = prob_correct(ability, difficulty) return np.random.random() < p # + [markdown] tags=[] # `play` uses `prob_correct` to compute the probability of a correct answer and `np.random.random` to generate a random value between 0 and 1. The return value is `True` for a correct response and `False` otherwise. # # As a test, let's simulate a test-taker with `ability=600` answering a question with `difficulty=500`. The probability of a correct response is about 80%. # + tags=[] prob_correct(600, 500) # + [markdown] tags=[] # Suppose this person takes a test with 51 questions, all with the same difficulty, `500`. # We expect them to get about 80% of the questions correct. # # Here's the result of one simulation. # + tags=[] np.random.seed(18) # + tags=[] num_questions = 51 outcomes = [play(600, 500) for _ in range(num_questions)] np.mean(outcomes) # + [markdown] tags=[] # They got the right answer about 80% of the time, as expected. # # Now let's suppose we don't know the test-taker's ability. We can use the data we just generated to estimate it. # And that's what we'll do next. # + [markdown] tags=[] # ## The Prior # # The SAT is designed so the distribution of scores is roughly normal, with mean 500 and standard deviation 100. # So the lowest score, 200, is three standard deviations below the mean, and the highest score, 800, is three standard deviations above.
# # We could use that distribution as a prior, but it would tend to cut off the low and high ends of the distribution. # Instead, I'll inflate the standard deviation to 300, to leave open the possibility that `ability` can be less than 200 or more than 800. # # Here's a `Pmf` that represents the prior distribution. # + tags=[] from scipy.stats import norm mean = 500 std = 300 qs = np.linspace(0, 1000) ps = norm(mean, std).pdf(qs) prior = Pmf(ps, qs) prior.normalize() # + [markdown] tags=[] # And here's what it looks like. # + tags=[] prior.plot(label='std=300', color='C5') decorate(xlabel='Ability', ylabel='PDF', title='Prior distribution of ability', ylim=[0, 0.032]) # + [markdown] tags=[] # ## The Update # # The following function takes a prior `Pmf` and the outcome of a single question, and updates the `Pmf` in place. # + tags=[] def update_ability(pmf, data): """Update the distribution of ability.""" difficulty, outcome = data abilities = pmf.qs ps = prob_correct(abilities, difficulty) if outcome: pmf *= ps else: pmf *= 1 - ps pmf.normalize() # + [markdown] tags=[] # `data` is a tuple that contains the difficulty of a question and the outcome: `True` if the response was correct and `False` otherwise. # # As a test, let's do an update based on the outcomes we simulated previously, based on a person with `ability=600` answering 51 questions with `difficulty=500`. # + tags=[] actual_600 = prior.copy() for outcome in outcomes: data = (500, outcome) update_ability(actual_600, data) # + [markdown] tags=[] # Here's what the posterior distribution looks like. # + tags=[] actual_600.plot(color='C4') decorate(xlabel='Ability', ylabel='PDF', title='Posterior distribution of ability') # + [markdown] tags=[] # The posterior mean is pretty close to the test-taker's actual ability, which is 600. # + tags=[] actual_600.mean() # + [markdown] tags=[] # If we run this simulation again, we'll get different results. 
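The grid update in `update_ability` can also be sketched with plain NumPy arrays; this self-contained cell is a stand-in for the `Pmf` version (the helper name `update` is just for this example), and it shows the posterior tightening around the true ability after 51 questions:

```python
import numpy as np
from scipy.stats import norm

def prob_correct(ability, difficulty):
    a, c = 100, 0.25
    return c + (1 - c) / (1 + np.exp(-(ability - difficulty) / a))

# Grid prior: Normal(500, 300) evaluated at 101 ability values.
qs = np.linspace(0, 1000, 101)
ps = norm(500, 300).pdf(qs)
ps /= ps.sum()

def update(ps, qs, difficulty, outcome):
    """Multiply by the likelihood of the outcome, then renormalize."""
    like = prob_correct(qs, difficulty)
    ps = ps * (like if outcome else 1 - like)
    return ps / ps.sum()

# Simulate a test-taker with ability 600 answering 51 questions
# of difficulty 500, updating the grid after each response.
rng = np.random.default_rng(18)
for _ in range(51):
    outcome = rng.random() < prob_correct(600, 500)
    ps = update(ps, qs, 500, outcome)

mean = np.sum(ps * qs)
std = np.sqrt(np.sum(ps * (qs - mean) ** 2))
print(mean, std)  # posterior mean lands near 600; std is far below the prior's 300
```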
# + [markdown] tags=[] # ## Adaptation # # Now let's simulate an adaptive test. # I'll use the following function to choose questions, starting with the simplest strategy: all questions have the same difficulty. # + tags=[] def choose(i, belief): """Choose the difficulty of the next question.""" return 500 # + [markdown] tags=[] # As parameters, `choose` takes `i`, which is the index of the question, and `belief`, which is a `Pmf` representing the posterior distribution of `ability`, based on responses to previous questions. # # This version of `choose` doesn't use these parameters; they are there so we can test other strategies (see the exercises at the end of the chapter). # # The following function simulates a person taking a test, given that we know their actual ability. # + tags=[] def simulate_test(actual_ability): """Simulate a person taking a test.""" belief = prior.copy() trace = pd.DataFrame(columns=['difficulty', 'outcome']) for i in range(num_questions): difficulty = choose(i, belief) outcome = play(actual_ability, difficulty) data = (difficulty, outcome) update_ability(belief, data) trace.loc[i] = difficulty, outcome return belief, trace # + [markdown] tags=[] # The return values are a `Pmf` representing the posterior distribution of ability and a `DataFrame` containing the difficulty of the questions and the outcomes. # # Here's an example, again for a test-taker with `ability=600`. # + tags=[] belief, trace = simulate_test(600) # + [markdown] tags=[] # We can use the trace to see how many responses were correct. # + tags=[] trace['outcome'].sum() # + [markdown] tags=[] # And here's what the posterior looks like. # + tags=[] belief.plot(color='C4', label='ability=600') decorate(xlabel='Ability', ylabel='PDF', title='Posterior distribution of ability') # + [markdown] tags=[] # Again, the posterior distribution represents a pretty good estimate of the test-taker's actual ability. 
# + [markdown] tags=[] # ## Quantifying precision # # To quantify the precision of the estimates, I'll use the standard deviation of the posterior distribution. The standard deviation measures the spread of the distribution, so a higher value indicates more uncertainty about the ability of the test-taker. # # In the previous example, the standard deviation of the posterior distribution is about 40. # + tags=[] belief.mean(), belief.std() # + [markdown] tags=[] # For an exam where all questions have the same difficulty, the precision of the estimate depends strongly on the ability of the test-taker. To show that, I'll loop through a range of abilities and simulate a test using the version of `choose` that always returns `difficulty=500`. # + tags=[] actual_abilities = np.linspace(200, 800) series = pd.Series(index=actual_abilities, dtype=float, name='std') for actual_ability in actual_abilities: belief, trace = simulate_test(actual_ability) series[actual_ability] = belief.std() # + [markdown] tags=[] # The following plot shows the standard deviation of the posterior distribution for one simulation at each level of ability. # # The results are noisy, so I also plot a curve fitted to the data by [local regression](https://en.wikipedia.org/wiki/Local_regression). # + tags=[] from utils import plot_series_lowess plot_series_lowess(series, 'C1') decorate(xlabel='Actual ability', ylabel='Standard deviation of posterior') # + [markdown] tags=[] # The test is most precise for people with ability between `500` and `600`, less precise for people at the high end of the range, and even worse for people at the low end. # # When all the questions have difficulty `500`, a person with `ability=800` has a high probability of getting them right. So when they do, we don't learn very much about them.
# # If the test includes questions with a range of difficulty, it provides more information about people at the high and low ends of the range. # # As an exercise at the end of the chapter, you'll have a chance to try out other strategies, including adaptive strategies that choose each question based on previous outcomes. # + [markdown] tags=[] # ## Discriminatory power # # In the previous section we used the standard deviation of the posterior distribution to quantify the precision of the estimates. Another way to describe the performance of the test (as opposed to the performance of the test-takers) is to measure "discriminatory power", which is the ability of the test to distinguish correctly between test-takers with different ability. # # To measure discriminatory power, I'll simulate a person taking the test 100 times; after each simulation, I'll use the mean of the posterior distribution as their "score". # + tags=[] def sample_posterior(actual_ability, iters): """Simulate multiple tests and compute posterior means. actual_ability: number iters: number of simulated tests returns: array of scores """ scores = [] for i in range(iters): belief, trace = simulate_test(actual_ability) score = belief.mean() scores.append(score) return np.array(scores) # + [markdown] tags=[] # Here are samples of scores for people with several levels of ability. # + tags=[] sample_500 = sample_posterior(500, iters=100) # + tags=[] sample_600 = sample_posterior(600, iters=100) # + tags=[] sample_700 = sample_posterior(700, iters=100) # + tags=[] sample_800 = sample_posterior(800, iters=100) # + [markdown] tags=[] # Here's what the distributions of scores look like. 
# + tags=[] from empiricaldist import Cdf cdf_500 = Cdf.from_seq(sample_500) cdf_600 = Cdf.from_seq(sample_600) cdf_700 = Cdf.from_seq(sample_700) cdf_800 = Cdf.from_seq(sample_800) # + tags=[] cdf_500.plot(label='ability=500', color='C1', linestyle='dashed') cdf_600.plot(label='ability=600', color='C3') cdf_700.plot(label='ability=700', color='C2', linestyle='dashed') cdf_800.plot(label='ability=800', color='C0') decorate(xlabel='Test score', ylabel='CDF', title='Sampling distribution of test scores') # + [markdown] tags=[] # On average, people with higher ability get higher scores, but anyone can have a bad day, or a good day, so there is some overlap between the distributions. # # For people with ability between `500` and `600`, where the precision of the test is highest, the discriminatory power of the test is also high. # # If people with abilities `500` and `600` take the test, it is almost certain that the person with higher ability will get a higher score. # + tags=[] np.mean(sample_600 > sample_500) # + [markdown] tags=[] # Between people with abilities `600` and `700`, it is less certain. # + tags=[] np.mean(sample_700 > sample_600) # + [markdown] tags=[] # And between people with abilities `700` and `800`, it is not certain at all. # + tags=[] np.mean(sample_800 > sample_700) # + [markdown] tags=[] # But remember that these results are based on a test where all questions are equally difficult. # If you do the exercises at the end of the chapter, you'll see that the performance of the test is better if it includes questions with a range of difficulties, and even better if the test is adaptive. # + [markdown] tags=[] # Go back and modify `choose`, which is the function that chooses the difficulty of the next question. # # 1. Write a version of `choose` that uses `i` as an index into a sequence of difficulties, so the questions cover a range of difficulties. # # 2.
Write a version of `choose` that is adaptive, so it chooses the difficulty of the next question based on `belief`, which is the posterior distribution of the test-taker's ability given the outcomes of previous responses. # # For both new versions, run the simulations again to quantify the precision of the test and its discriminatory power. # # For the first version of `choose`, what is the ideal distribution of difficulties? # # For the second version, what is the adaptive strategy that maximizes the precision of the test over the range of abilities? # + # Solution goes here # + # Solution goes here # -
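As a starting point for the adaptive version (a sketch of one possible strategy, not necessarily the optimal one), `choose` can pose each question at the current posterior mean, where the outcome is most informative about ability. The `Belief` class below is a minimal stand-in for a `Pmf`, just so the cell runs on its own:

```python
import numpy as np

class Belief:
    """Minimal stand-in for a Pmf: quantities qs with probabilities ps."""
    def __init__(self, qs, ps):
        self.qs = np.asarray(qs, dtype=float)
        self.ps = np.asarray(ps, dtype=float) / np.sum(ps)

    def mean(self):
        return np.sum(self.ps * self.qs)

def choose(i, belief):
    """Adaptive sketch: match the question's difficulty to the
    current estimate of the test-taker's ability."""
    return belief.mean()

# With a uniform prior over abilities 0..1000, the first question
# is posed at the prior mean, in the middle of the range.
belief = Belief(np.linspace(0, 1000, 101), np.ones(101))
print(choose(0, belief))  # about 500
```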