# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Class 9: Functions # # ## A painful analogy # # What do you do when you wake up in the morning? # # I don't know about you, but I **get ready.** # # "Obviously," you say, a little too snidely for my liking. You're particular, very detail-oriented, and need more information out of me. # # Fine, then. Since you're going to be nitpicky, I might be able to break it down a little bit more for you... # # 1. I get out of bed # 2. I take a shower # 3. I get dressed # 4. **I eat breakfast** # # Unfortunately that's not good enough for you. "But how do you eat breakfast?" Well, maybe I... # # 1. Get a bowl out of a cabinet # 2. Get some cereal out of the pantry # 3. Get some milk out of the fridge # 4. Pour some cereal into a bowl # 5. Pour some milk into the bowl # 6. Sit down at the table and start eating # # "Are you eating with a spoon?" you interrupt. "When did you get the spoon out? Was that after the milk, or before the bowl?" # # It's annoying people like this that make us have **functions.** # # > **FUN FACT:** The joke's on you, because **I don't even actually eat cereal.** Maybe I don't even get ready in the morning, either. # # ## What is a function? # # Functions are chunks of code that do something. They're different than the code we've written so far because **they have names**. # # # Instead of detailing each and every step involved in eating breakfast, I just use "I eat breakfast" as a shorthand for many, many detailed steps. Functions are the same - they allow us to take complicated parts of code, give it a name, and type **`just_eat_breakfast()`** every morning instead of twenty-five lines of code. # # ## What are some examples of functions? # # We've used a lot of functions in our time with Python. You remember our good buddy `len`? 
# It's a **function** that gives back the length of whatever you send its way, e.g. `len("ghost")` is `5` and `len("cartography")` is `11`.

len

# **Almost everything useful is a function.** Python has [a ton of other built-in functions](https://docs.python.org/3/library/functions.html)!
#
# Along with `len`, a couple you might have seen are:
#
# * `abs(...)` takes a number and returns the absolute value of the number
# * `int(...)` takes a string or float and returns it as an integer
# * `round(...)` takes a float and returns a rounded version of it
# * `sum(...)` takes a list and returns the sum of all of its elements
# * `max(...)` takes a list and returns the largest of all of its elements
# * `print(...)` takes whatever you want to give it and displays it on the screen
#
# Functions can also come from packages and libraries. The `.get` part of `requests.get` is a function, too! (A function that is attached to an object like `requests` is called a **method**.)
#
# And here, to prove it to you?

max

print

import requests

requests.get

# +
# And if we just wanted to use them, for some reason
n = -34
print(n, "in absolute value is", abs(n))

print("We can add after casting to int:", 55 + int("55"))

n = 4.4847
print(n, "can be rounded to", round(n))
print(n, "can also be rounded to 2 decimal points", round(n, 2))

numbers = [4, 22, 40, 54]
print("The total of the list is", sum(numbers))
# -

# **See? Functions make the world run.**
#
# One useful role they play: **functions hide code that you wouldn't want to type a thousand times.** For example, you might have used `urlretrieve` from `urllib` to download files from around the internet. If you *didn't* use `urlretrieve`, you'd have to type all of this:

def urlretrieve(url, filename=None, reporthook=None, data=None):
    url_type, path = splittype(url)

    with contextlib.closing(urlopen(url, data)) as fp:
        headers = fp.info()

        # Just return the local path and the "headers" for file://
        # URLs. No sense in performing a copy unless requested.
        if url_type == "file" and not filename:
            return os.path.normpath(path), headers

        # Handle temporary file setup.
        if filename:
            tfp = open(filename, 'wb')
        else:
            tfp = tempfile.NamedTemporaryFile(delete=False)
            filename = tfp.name
            _url_tempfiles.append(filename)

        with tfp:
            result = filename, headers
            bs = 1024*8
            size = -1
            read = 0
            blocknum = 0
            if "content-length" in headers:
                size = int(headers["Content-Length"])

            if reporthook:
                reporthook(blocknum, bs, size)

            while True:
                block = fp.read(bs)
                if not block:
                    break
                read += len(block)
                tfp.write(block)
                blocknum += 1
                if reporthook:
                    reporthook(blocknum, bs, size)

    if size >= 0 and read < size:
        raise ContentTooShortError(
            "retrieval incomplete: got only %i out of %i bytes"
            % (read, size), result)

    return result

# Horrifying, right? Thank goodness for functions.
#
# ## Writing your own functions
#
# I've always been kind of jealous of `len(...)` and its crowd. It seemed unfair that Python made a list of cool, important functions, and neither you nor I had any say in the matter. What if I want a function that turns all of the periods in a sentence into exclamation points, or prints out a word a hundred million times?
#
# Well, it turns out **that isn't a problem**. We can do that. Easily! *And we will*. If you can type `def` and use a colon, you can write a function.
#
# A function that you write yourself looks like this:

# A function to multiply a number by two
def double(number):
    bigger = number * 2
    return bigger

double(4)
# (variables created inside a function only exist inside the function)

# It has a handful of parts:
#
# 1. **`def`** - tells Python "hey buddy, we're about to define a function! Get ready." And Python appropriately prepares itself.
# 2. **`double`** - is the **name** of the function, and it's how you'll refer to the function later on. For example, `len`'s function name is (obviously) `len`.
# 3. **`(number)`** - defines the **parameters** that the function "takes."
# You can see that this function is called `double`, and you send it one parameter that will be called `number`.
# 4. **`return bigger`** - is called the **return statement**. If the function is a factory, this is the shipping department - **return** tells you what to send back to the main program.
#
# You'll see it doesn't *do* anything, though. That's because we haven't **called** the function, which is a programmer's way of saying **use** the function. Let's use it!

print("2 times two is", double(2))
print("10 times two is", double(10))
print("56 times two is", double(56))

age = 76
print("Double your age is", double(age))

# ## Function Naming
#
# Your function name has to be **unique**, otherwise Python will get confused. No other functions or variables can share its name!
#
# For example, if you call it `len` it'll forget about the built-in `len` function, and if you give one of your variables the name `print`, suddenly Python won't understand how `print(...)` works anymore.
#
# If you end up doing this, you'll get errors like the one below.

# +
# Be careful that every name you give to your function is original
def greet(name):
    return "Hello " + name

# This one works
print(greet("Soma"))

# Overwrite the function greet with a string
greet = "blah"

# Trying the function again breaks
print(greet("Soma"))
# -

# ## Parameters
#
# In our function `double`, we have a parameter called `number`.
#
# ````py
# def double(number):
#     bigger = number * 2
#     return bigger
# ````
#
# Notice in the last example up above, though, we called `double(age)`. Those don't match!!!
#
# The thing is, **your function doesn't care what the variable you send it is called**. Whatever you send it, it will rename. It's like if someone adopted my cat *Smushface*, they might think calling her *Petunia* would be a little bit nicer (it wouldn't be, but I wouldn't do anything about it).
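# A quick aside on the flip side of this renaming business (this sketch is my own illustration, not part of the class examples): names created *inside* a function live in their own little world, and the only thing that escapes is whatever you `return`.

```python
def double(number):
    bigger = number * 2  # `bigger` only exists while the function runs
    return bigger

result = double(21)
print(result)  # 42 -- the *returned value* made it out

try:
    print(bigger)  # `bigger` was never defined out here
except NameError as error:
    print("No such name outside the function:", error)
```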
# Here's an example with my favorite variable name `potato_soup`

# +
def exclaim(potato_soup):
    return potato_soup + "!!!!!!!!!!"

invitation = "I hope you can come to my wedding"
print(exclaim(invitation))

line = "I am sorry to hear you have the flu"
print(exclaim(line))
# -

# `invitation` and `line` both get renamed to `potato_soup` inside of the function, so you can reuse the function with **any** variable of **any** name.
#
# Let's say I have a function that does some **intense calculations**:
#
# ````py
# def sum_times_two(a, b):
#     added = a + b
#     return added * 2
# ````
#
# To reiterate: **`a` and `b` have nothing to do with the values outside of the function**. You don't have to make variables called `a` and `b` and then send them to the function; the function takes care of that by itself. For example, the examples below are perfectly fine.
#
# ````py
# sum_times_two(2, 3)
#
# r = 4
# y = 7
# sum_times_two(r, y)
# ````
#
# When you're outside of the function, you almost **never have to think about what's inside the function.** You don't care about what variables are called or *anything*. It's a magic box. Think about how you don't know what `len` looks like inside, or `print`, but you use them all of the time!

# ## Why functions?
#
# Two reasons to use functions, since maybe you'll ask:
#
# **Don't Repeat Yourself** - If you find yourself writing the same code again and again, it's a good time to put that code into a function. `len(...)` is a function because Python people decided that you shouldn't have to write length-calculating code every time you wanted to see how many characters were in a string.
#
# **Code Modularity** - sometimes it's just nice to *organize* your code. All of your parts that deal with counting dog names can go over here, and all of the stuff that has to do with boroughs goes over there. In the end it can make for more readable and maintainable code.
# (Maintainable code = code you can edit in the future without thinking real hard.)
#
# Those reasons probably don't mean much to you right now, and I sure don't blame you. Abstract programming concepts are just dumb abstract things until you actually start using them.
#
# Let's say I wanted to greet someone and then tell them how long their name is, because I'm pedantic.

# +
name = "Nancy"
name_length = len(name)
print("Hello", name, "your name is", name_length, "letters long")

name = "Brick"
name_length = len(name)
print("Hello", name, "your name is", name_length, "letters long")

name = "<NAME>"
name_length = len(name)
print("Hello", name, "your name is", name_length, "letters long")
# -

# **Do you know how exhausted I got typing all of that out?** And how it makes no sense at all? Luckily, functions save us: all of our code goes into one place so we don't have to repeat ourselves, *and* we can give it a descriptive name.

# +
def weird_greeting(name):
    name_length = len(name)
    print("Hello", name, "your name is", name_length, "letters long")

names = ['Nancy', 'Brick', 'Napoleon']
for name in names:
    weird_greeting(name)

# weird_greeting("Nancy")
# weird_greeting("Brick")
# weird_greeting("<NAME>")
# -

# # `return`
#
# The role of a function is generally **to do something and then send the result back to us**. `len` sends us back the length of the string, `requests.get` sends us back the web page we requested.
#
# ````py
# def double(a):
#     return a * 2
# ````
#
# **This is called the `return` statement.** You don't *have* to send something back (`print` doesn't), but you usually want to.

# # Writing a custom function
#
# Let's say we have some code that compares the number of boats you have to the number of cars you have.
#
# ````python
# if boat_count > car_count:
#     print("Larger")
# else:
#     print("Smaller")
# ````
#
# Simple, right?
# But unfortunately we're at a rich people convention where they're always comparing the number of boats to the number of cars to the number of planes, etc., etc. If we have to check *again and again and again and again* for all of those people and always print *Larger* or *Smaller*, I'm sure we'd get bored of typing all that. So let's convert it to a function!
#
# Let's give our function a **name** of `size_comparison`. Remember: we can name our functions whatever we want, *as long as it's unique*.
#
# Our function will take **two parameters**. They're `boat_count` and `car_count` above, but we want generic, re-usable names, so maybe like, uh, `a` and `b`?
#
# For our function's **return value**, let's have it send back `"Larger"` or `"Smaller"`.

# Our cool function
def size_comparison(a, b):
    if a > b:
        return "Larger"
    else:
        return "Smaller"

print(size_comparison(4, 5.5))
print(size_comparison(65, 2))
print(size_comparison(34.2, 33))

# # Your Turn
#
# This is a do-now even though it's not the beginning of class!

# ### 1a. Driving Speed
#
# The code below tells you how fast you're driving. I figure that a lot of people are more familiar with kilometers an hour, though, so let's write a function that does the conversion. I wrote a skeleton; now you can fill in the conversion.
#
# Make it display a whole number.

# +
def to_kmh(speed):
    kmh = speed * 1.609344
    return round(kmh)

mph = 40
print("You are driving", mph, "in mph")
print("You are driving", to_kmh(mph), "in kmh")
# -

# ### 1b. Driving Speed Part II
#
# Now write a function called `to_mpm` that, when given miles per hour, computes the meters per minute.

# +
# 26.8 is a "magic number": miles per hour times 26.8 is (roughly) meters per minute
def to_mpm(speed):
    mpm = speed * 26.8
    return round(mpm)

mph = 40
print("You are driving", mph, "in mph")
print("You are driving", to_mpm(mph), "meters per minute")
# -

# ### 1c. Driving Speed Part III
#
# Rewrite `to_mpm` to use the `to_kmh` function. **D.R.Y.**!
# +
def to_mpm(speed):
    return to_kmh(speed) * 1000 / 60

mph = 40
print("You are driving", mph, "in mph")
print("You are driving", to_mpm(mph), "meters per minute")
# -

# ### 2. Broken Function
#
# The code below won't work. Why not?

# +
# You have to wash ten cars on every street, along with the cars in your driveway.
# With the following list of streets, how many cars do we have?

def total(n):
    return n * 10

# Here are the streets
streets = ['10th Ave', '11th Street', '45th Ave']

# Let's count them up
total = len(streets)

# And add one
count = total + 1

# And see how many we have
print(total(count))
# -

# ### 3. Data converter
#
# We have a bunch of data in different formats, and we need to normalize it! The data looks like this:
#
# ````python
# first = { 'measurement': 3.4, 'scale': 'kilometer' }
# second = { 'measurement': 9.1, 'scale': 'mile' }
# third = { 'measurement': 2.0, 'scale': 'meter' }
# fourth = { 'measurement': 9.0, 'scale': 'inches' }
# ````
#
# Write a function called `to_meters(...)`. When you send it a dictionary, have it examine the `measurement` and `scale` and return the adjusted value. For the values above, 3.4 kilometers should be 3400.0 meters, 9.1 miles should be around 14600, and 9 inches should be approximately 0.23.

# +
first = { 'measurement': 3.4, 'scale': 'kilometer' }
second = { 'measurement': 9.1, 'scale': 'mile' }
third = { 'measurement': 2.0, 'scale': 'meter' }
fourth = { 'measurement': 9.0, 'scale': 'inches' }

def to_meters(number):
    if number['scale'] == 'kilometer':
        return number['measurement'] * 1000
    if number['scale'] == 'mile':
        return number['measurement'] * 1609.344
    if number['scale'] == 'inches':
        return number['measurement'] * 0.0254
    # meters pass through unchanged
    return number['measurement']

print(to_meters(first))
print(to_meters(second))
# -
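# For what it's worth, the "Broken Function" exercise above trips on name shadowing: `total = len(streets)` overwrites the *function* `total` with an integer, so `total(count)` then tries to call a number. A hedged sketch of one possible fix (renaming the variable is my own choice, not from the notes):

```python
def total(n):
    return n * 10

streets = ['10th Ave', '11th Street', '45th Ave']

# Use a different name so the function `total` survives
street_count = len(streets)

# And add one for the driveway
count = street_count + 1

print(total(count))  # 40
```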
classes and excersises/class9/09 - Functions.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # # <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> CS109A Introduction to Data Science # # ## Homework 4: Logistic Regression # # **Harvard University**<br/> # **Fall 2019**<br/> # **Instructors**: <NAME>, <NAME>, and <NAME> # # <hr style="height:2pt"> # # #RUN THIS CELL import requests from IPython.core.display import HTML styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text HTML(styles) # ### INSTRUCTIONS # # - **This is an individual homework. No group collaboration.** # - To submit your assignment, follow the instructions given in Canvas. # - Restart the kernel and run the whole notebook again before you submit. # - As much as possible, try and stick to the hints and functions we import at the top of the homework, as those are the ideas and tools the class supports and are aiming to teach. And if a problem specifies a particular library, you're required to use that library, and possibly others from the import list. # - Please use .head() when viewing data. Do not submit a notebook that is excessively long because output was not suppressed or otherwise limited. 
# + import numpy as np import pandas as pd from sklearn.linear_model import LinearRegression from sklearn.linear_model import LogisticRegression from sklearn.linear_model import LogisticRegressionCV from sklearn.linear_model import LassoCV from sklearn.neighbors import KNeighborsClassifier from sklearn.model_selection import cross_val_score from sklearn.metrics import accuracy_score from sklearn.model_selection import KFold from sklearn.preprocessing import MinMaxScaler from sklearn.model_selection import train_test_split import matplotlib import matplotlib.pyplot as plt # %matplotlib inline import zipfile import seaborn as sns sns.set() from scipy.stats import ttest_ind # - # <div class='theme'> Cancer Classification from Gene Expressions </div> # # In this problem, we will build a classification model to distinguish between two related classes of cancer, acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML), using gene expression measurements. The dataset is provided in the file `data/dataset_hw4.csv`. Each row in this file corresponds to a tumor tissue sample from a patient with one of the two forms of Leukemia. The first column contains the cancer type, with **0 indicating the ALL** class and **1 indicating the AML** class. Columns 2-7130 contain expression levels of 7129 genes recorded from each tissue sample. # # In the following questions, we will use linear and logistic regression to build classification models for this data set. # # <div class='exercise'><b> Question 1 [20 pts]: Data Exploration </b></div> # # The first step is to split the observations into an approximate 80-20 train-test split. Below is some code to do this for you (we want to make sure everyone has the same splits). Print dataset shape before splitting and after splitting. `Cancer_type` is our target column. 
# # # **1.1** Take a peek at your training set: you should notice the severe differences in the measurements from one gene to the next (some are negative, some hover around zero, and some are well into the thousands). To account for these differences in scale and variability, normalize each predictor to vary between 0 and 1. **NOTE: for the entirety of this homework assignment, you will use these normalized values, not the original, raw values**. # # # **1.2** The training set contains more predictors than observations. What problem(s) can this lead to in fitting a classification model to such a dataset? Explain in 3 or fewer sentences. # # # **1.3** Determine which 10 genes individually discriminate between the two cancer classes the best (consider every gene in the dataset). # # Plot two histograms of best predictor -- one using the training set and another using the testing set. Each histogram should clearly distinguish two different `Cancer_type` classes. # # **Hint:** You may use t-testing to make this determination: #https://en.wikipedia.org/wiki/Welch%27s_t-test . # # # **1.4** Using your most useful gene from the previous part, create a classification model by simply eye-balling a value for this gene that would discriminate the two classes the best (do not use an algorithm to determine for you the optimal coefficient or threshold; we are asking you to provide a rough estimate / model by manual inspection). Justify your choice in 1-2 sentences. Report the accuracy of your hand-chosen model on the test set (write code to implement and evaluate your hand-created model). # # <hr> <hr> # <hr> # # ### Solutions # **The first step is to split the observations into an approximate 80-20 train-test split. Below is some code to do this for you (we want to make sure everyone has the same splits). Print dataset shape before splitting and after splitting. 
# `Cancer_type` is our target column.**

np.random.seed(10)
df = pd.read_csv('data/hw4_enhance.csv', index_col=0)
X_train, X_test, y_train, y_test = train_test_split(df.loc[:, df.columns != 'Cancer_type'],
                                                    df.Cancer_type, test_size=0.2,
                                                    random_state=109,
                                                    stratify=df.Cancer_type)

print(df.shape)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
print(df.Cancer_type.value_counts(normalize=True))

# **1.1 Take a peek at your training set: you should notice the severe differences in the measurements from one gene to the next (some are negative, some hover around zero, and some are well into the thousands). To account for these differences in scale and variability, normalize each predictor to vary between 0 and 1. NOTE: for the entirety of this homework assignment, you will use these normalized values, not the original, raw values.**

#your code here
X_train.describe()

# +
#your code here
min_vals = X_train.min()
max_vals = X_train.max()
X_train = (X_train - min_vals)/(max_vals - min_vals)
X_test = (X_test - min_vals)/(max_vals - min_vals)
# -

# **1.2 The training set contains more predictors than observations. What problem(s) can this lead to in fitting a classification model to such a dataset? Explain in 3 or fewer sentences.**

# *your answer here*
#
# With p >> n, plain linear and logistic regression are ill-posed: many coefficient vectors fit the training data perfectly, so the model is unstable and badly overfitted (the curse of dimensionality). Many genes are also correlated with one another, so multicollinearity makes individual coefficients uninterpretable. We need to regularize or reduce dimensions.

# **1.3 Determine which 10 genes individually discriminate between the two cancer classes the best (consider every gene in the dataset).**
#
# **Plot two histograms of best predictor -- one using the training set and another using the testing set.
Each histogram should clearly distinguish two different `Cancer_type` classes.** # # **Hint:** You may use t-testing to make this determination: #https://en.wikipedia.org/wiki/Welch%27s_t-test. # + #your code here predictors = df.columns predictors = predictors.drop('Cancer_type'); print(predictors.shape) means_0 = X_train[y_train==0][predictors].mean() means_1 = X_train[y_train==1][predictors].mean() stds_0 = X_train[y_train==0][predictors].std() stds_1 = X_train[y_train==1][predictors].std() n1 = X_train[y_train==0].shape[0] n2 = X_train[y_train==1].shape[0] t_tests = np.abs(means_0-means_1)/np.sqrt( stds_0**2/n1 + stds_1**2/n2) #your code here best_preds_idx = np.argsort(-t_tests.values) best_preds = t_tests.index[best_preds_idx] print(t_tests[best_preds_idx[0:10]]) print(t_tests.index[best_preds_idx[0:10]]) best_pred = t_tests.index[best_preds_idx[0]] print(best_pred) # - #your code here plt.figure(figsize=(12,8)) plt.subplot(211) plt.hist( X_train[y_train==0][best_pred], bins=10, label='Class 0') plt.hist( X_train[y_train==1][best_pred],bins=30, label='Class 1') plt.title(best_pred + " train") plt.legend() plt.subplot(212) plt.hist( X_test[y_test==0][best_pred], bins=30,label='Class 0') plt.hist( X_test[y_test==1][best_pred], bins=30, label='Class 1') plt.title(best_pred + " test") plt.legend(); # + # #your code here # from scipy.stats import ttest_ind # predictors = df.columns # predictors = predictors.drop('Cancer_type'); # print(predictors.shape) # t_tests = ttest_ind(X_train[y_train==0],X_train[y_train==1],equal_var=False) # best_preds_idx_t_tests = np.argsort(t_tests.pvalue) # predictors[best_preds_idx_t_tests][0:15] # # (7129,) # # Index(['M31523_at', 'X95735_at', 'M84526_at', 'X61587_at', 'U50136_rna1_at', # # 'X17042_at', 'U29175_at', 'Y08612_at', 'Z11793_at', 'J04615_at', # # 'X76648_at', 'U72936_s_at', 'M80254_at', 'M29551_at', 'X62320_at'], # # dtype='object') # - # **1.4 Using your most useful gene from the previous part, create a classification 
model by simply eye-balling a value for this gene that would discriminate the two classes the best (do not use an algorithm to determine for you the optimal coefficient or threshold; we are asking you to provide a rough estimate / model by manual inspection). Justify your choice in 1-2 sentences. Report the accuracy of your hand-chosen model on the test set (write code to implement and evaluate your hand-created model)** # # + #your code here threshold = 0.45 train_score = accuracy_score(y_train.values, X_train[best_pred]<=threshold) #Check this! test_score = accuracy_score(y_test.values, X_test[best_pred]<=threshold) results = [['naive train', train_score], ['naive test', test_score]] df_res = pd.DataFrame.from_dict(results) df_res # - # By observing the distribution of 'M31523_at' in the training histogram above, we roughly estimate that 0.45 distinguishes the two classes, so we use the threshold of 0.45. # <div class='exercise'><b> Question 2 [25 pts]: Linear and Logistic Regression </b></div> # # In class, we discussed how to use both linear regression and logistic regression for classification. For this question, you will explore these two models by working with the single gene that you identified above as being the best predictor. # # **2.1** Fit a simple linear regression model to the training set using the single gene predictor "best_predictor" to predict cancer type (use the normalized values of the gene). We could interpret the scores predicted by the regression model for a patient as being an estimate of the probability that the patient has Cancer_type=1 (AML). Is this a reasonable interpretation? If not, what is the problem with such? # # Create a figure with the following items displayed on the same plot (Use training data): # - the model's predicted value (the quantitative response from your linear regression model as a function of the normalized value of the best gene predictor) # - the true binary response. 
# # **2.2** Use your estimated linear regression model to classify observations into 0 and 1 using the standard Bayes classifier. Evaluate the classification accuracy of this classification model on both the training and testing sets. # # **2.3** Next, fit a simple logistic regression model to the training set. How do the training and test classification accuracies of this model compare with the linear regression model? # # Remember, you need to set the regularization parameter for sklearn's logistic regression function to be a very large value in order to **not** regularize (use 'C=100000'). # # # **2.4** # Print and interpret Logistic regression coefficient and intercept. # # # Create 2 plots (with training and testing data) with 4 items displayed on each plot. # - the quantitative response from the linear regression model as a function of the best gene predictor. # - the predicted probabilities of the logistic regression model as a function of the best gene predictor. # - the true binary response. # - a horizontal line at $y=0.5$. # # Based on these plots, does one of the models appear better suited for binary classification than the other? Explain in 3 sentences or fewer. # # # <hr> # # ### Solutions # **2.1 Fit a simple linear regression model to the training set using the single gene predictor "best_predictor" to predict cancer type (use the normalized values of the gene). We could interpret the scores predicted by the regression model for a patient as being an estimate of the probability that the patient has Cancer_type=1 (AML). Is this a reasonable interpretation? If not, what is the problem with such?** # # **Create a figure with the following items displayed on the same plot (Use training data):** # - the model's predicted value (the quantitative response from your linear regression model as a function of the normalized value of the best gene predictor) # - the true binary response. 
# +
# your code here
print(best_pred)
linreg = LinearRegression()
linreg.fit(X_train[best_pred].values.reshape(-1,1), y_train)
y_train_pred = linreg.predict(X_train[best_pred].values.reshape(-1,1))
y_test_pred = linreg.predict(X_test[best_pred].values.reshape(-1,1))
# +
# your code here
fig = plt.figure()
host = fig.add_subplot(111)
par1 = host.twinx()

host.set_ylabel("Probability")
par1.set_ylabel("Class")

host.plot(X_train[best_pred], y_train_pred, '-')
host.plot(X_train[best_pred], y_train, 's')
host.set_xlabel('Normalized best_pred')
host.set_ylabel('Probability of being AML')

labels = ['ALL', 'AML']
# You can specify a rotation for the tick labels in degrees or with keywords.
par1.set_yticks([0.082, 0.81])
par1.set_yticklabels(labels)
# -

# *your answer here*
#
# Interpreting the linear regression output as a probability is not reasonable: the predictions are unbounded, and here some fall below 0 and above 1.

# **2.2 Use your estimated linear regression model to classify observations into 0 and 1 using the standard Bayes classifier. Evaluate the classification accuracy of this classification model on both the training and testing sets.**

# +
# your code here
train_score = accuracy_score(y_train, y_train_pred > 0.5)
test_score = accuracy_score(y_test, y_test_pred > 0.5)
print("train score:", train_score, "test score:", test_score)

df_res = df_res.append([['Linear Regression train', train_score],
                        ['Linear Regression test', test_score]])
df_res
# -

# **2.3 Next, fit a simple logistic regression model to the training set. How do the training and test classification accuracies of this model compare with the linear regression model? Are the classifications substantially different? Explain why this is the case.**
#
# **Remember, you need to set the regularization parameter for sklearn's logistic regression function to be a very large value in order to *not* regularize (use 'C=100000').**
# +
# your code here
logreg = LogisticRegression(C=100000, solver='lbfgs')
logreg.fit(X_train[[best_pred]], y_train)

y_train_pred_logreg = logreg.predict(X_train[[best_pred]])
y_test_pred_logreg = logreg.predict(X_test[[best_pred]])

y_train_pred_logreg_prob = logreg.predict_proba(X_train[[best_pred]])[:,1]
y_test_pred_logreg_prob = logreg.predict_proba(X_test[[best_pred]])[:,1]

train_score_logreg = accuracy_score(y_train, y_train_pred_logreg)
test_score_logreg = accuracy_score(y_test, y_test_pred_logreg)
print("train score:", train_score_logreg, "test score:", test_score_logreg)

df_res = df_res.append([['Logistic Regression train', train_score_logreg],
                        ['Logistic Regression test', test_score_logreg]])
df_res
# -

# *your answer here*
#
# The results are not significantly different.

# **2.4 Print and interpret Logistic regression coefficient and intercept.**
#
# **Create 2 plots (with training and testing data) with 4 items displayed on each plot.**
# - the quantitative response from the linear regression model as a function of the best gene predictor.
# - the predicted probabilities of the logistic regression model as a function of the best gene predictor.
# - the true binary response.
# - a horizontal line at $y=0.5$.
#
# **Based on these plots, does one of the models appear better suited for binary classification than the other? Explain in 3 sentences or fewer.**

# $ \hat{p}(X) = \frac{e^{\hat{\beta_0}+\hat{\beta_1}X_1}}{1 + e^{\hat{\beta_0}+\hat{\beta_1}X_1}} $

# your code here
logreg.intercept_, logreg.coef_, -logreg.intercept_/logreg.coef_

# The slope controls how steep the sigmoid function is. A negative slope indicates that the probability of predicting y = 1 decreases as X gets larger. The intercept indicates how far left or right the curve's inflection point is shifted, via -intercept/slope: here the curve is shifted approximately 0.4656 to the right.
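# To make the intercept/coefficient interpretation concrete, here is a self-contained sketch on toy one-dimensional data (my own illustration, not the gene dataset): sklearn's predicted probability is exactly the sigmoid of the fitted linear score, and the 50% crossover sits at x = -intercept/slope.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy 1-D data: the class flips from 0 to 1 around x = 0.5
x = np.linspace(0, 1, 40).reshape(-1, 1)
y = (x.ravel() > 0.5).astype(int)

model = LogisticRegression(C=100000, solver='lbfgs').fit(x, y)
b0 = model.intercept_[0]
b1 = model.coef_[0, 0]

# Manually apply p(x) = 1 / (1 + exp(-(b0 + b1 * x)))
manual = 1 / (1 + np.exp(-(b0 + b1 * x.ravel())))
sklearn_prob = model.predict_proba(x)[:, 1]
print(np.allclose(manual, sklearn_prob))  # the two agree

# The inflection point (p = 0.5) sits at x = -b0 / b1
print(-b0 / b1)  # close to 0.5 for this toy data
```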
print("Intercept:", logreg.intercept_)
prob = logreg.predict_proba(np.array([0]).reshape(-1,1))[0,1]  # prediction when best_pred = 0
print("When %s is 0, log odds are %.5f " % (best_pred, logreg.intercept_))
print("In other words, we predict `cancer_type` with %.5f probability " % (prob))
# np.exp(4.07730445)/(1+np.exp(4.07730445)) = 0.98333

# +
print("Coefficient: ", logreg.coef_)
print("A one-unit increase in coefficient (%s) is associated with an increase in the odds of `cancer_type` by %.5f" % (best_pred, np.exp(logreg.coef_)))
# print("A one-unit increase in coefficient (%s) is associated with an increase in the log odds of `cancer_type` by %.5f" % (best_pred, logreg.coef_))

# Explanation
#
# # Assume best_pred = 0.48
# prob = logreg.predict_proba(np.array([0.48]).reshape(-1,1))[0,1]
# print("Prob. when best_pred is 0.48 = ", prob)
# print("Log odds when best_pred is 0.48 = ", np.log(prob/(1-prob)))
# # Increase best_pred by 1, best_pred = 1.48
# prob1 = logreg.predict_proba(np.array([1.48]).reshape(-1,1))[0,1]
# print("Prob. when best_pred is 1.48 = ", prob1)
# print("Log odds when best_pred is 1.48 = ", np.log(prob1/(1-prob1)))
# np.log(prob1/(1-prob1)) - (np.log(prob/(1-prob)))  # equals the coefficient

# +
# your code here
fig, ax = plt.subplots(1, 2, figsize=(16,5))

sort_index = np.argsort(X_train[best_pred].values)

# plotting true binary response
ax[0].scatter(X_train[best_pred].iloc[sort_index].values, y_train.iloc[sort_index].values,
              color='red', label='Train True Response')
# plotting ols output
ax[0].plot(X_train[best_pred].iloc[sort_index].values, y_train_pred[sort_index],
           color='red', alpha=0.3, label='Linear Regression Predictions')
# plotting logreg prob output
ax[0].plot(X_train[best_pred].iloc[sort_index].values, y_train_pred_logreg_prob[sort_index],
           alpha=0.3, color='green', label='Logistic Regression Predictions Prob')
ax[0].axhline(0.5, c='c')
ax[0].legend()
ax[0].set_title('Train - True response vs. obtained responses')
ax[0].set_xlabel('Gene predictor value')
ax[0].set_ylabel('Cancer type response');

# Test
sort_index = np.argsort(X_test[best_pred].values)

# plotting true binary response
ax[1].scatter(X_test[best_pred].iloc[sort_index].values, y_test.iloc[sort_index].values,
              color='black', label='Test True Response')
# plotting ols output
ax[1].plot(X_test[best_pred].iloc[sort_index].values, y_test_pred[sort_index],
           color='red', alpha=0.3, label='Linear Regression Predictions')
# plotting logreg prob output
ax[1].plot(X_test[best_pred].iloc[sort_index].values, y_test_pred_logreg_prob[sort_index],
           alpha=0.3, color='green', label='Logistic Regression Predictions Prob')
ax[1].axhline(0.5, c='c')
ax[1].legend()
ax[1].set_title('Test - True response vs. obtained responses')
ax[1].set_xlabel('Gene predictor value')
ax[1].set_ylabel('Cancer type response');
# -

# Logistic regression is better suited for this problem: its predicted probabilities stay within the expected [0, 1] range, whereas the linear regression output does not.
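# As a quick sanity check on the coefficient interpretation above, the log-odds/probability conversion can be sketched on its own (a standalone illustration, independent of the fitted `logreg` object):

```python
import numpy as np

def prob_from_log_odds(log_odds):
    # Logistic (sigmoid) transform: maps log-odds to a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-log_odds))

def log_odds_from_prob(p):
    # Inverse (logit) transform: maps a probability back to log-odds.
    return np.log(p / (1.0 - p))

# A log-odds of 0 corresponds to a 50/50 prediction.
print(prob_from_log_odds(0.0))  # 0.5
# An intercept of about 4.077 corresponds to a probability of about 0.983,
# matching the commented-out computation in the cell above.
print(prob_from_log_odds(4.07730445))
```

# This is why a one-unit increase in a predictor multiplies the *odds* by `exp(coef)`: the coefficient acts additively on the log-odds scale.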
# <div class='exercise'> <b> Question 3 [20pts]: Multiple Logistic Regression </b> </div>
#
# **3.1** Next, fit a multiple logistic regression model with **all** the gene predictors from the data set (reminder: for this assignment, we are always using the normalized values). How does the classification accuracy of this model compare with the models fitted in question 2 with a single gene (on both the training and test sets)?
#
# **3.2** How many of the coefficients estimated by this multiple logistic regression in the previous part (P3.1) are significantly different from zero at a *significance level of 5%*? Use the same value of C=100000 as before.
#
# **Hint:** To answer this question, use *bootstrapping* with 100 bootstrap samples/iterations.
#
# **3.3** Comment on the classification accuracy of both the training and testing set. Given the results above, how would you assess the generalization capacity of your trained model? What other tests would you suggest to better guard against possibly having a false sense of the overall efficacy/accuracy of the model as a whole?
#
# **3.4** Now let's use regularization to improve the predictions from the multiple logistic regression model. Specifically, use LASSO-like regularization and cross-validation to train the model on the training set. Report the classification accuracy on both the training and testing set.
#
# **3.5** Do the 10 best predictors from Q1 hold up as important features in this regularized model? If not, explain why this is the case (feel free to use the data to support your explanation).

# <hr>

# ### Solutions

# **3.1 Next, fit a multiple logistic regression model with all the gene predictors from the data set (reminder: for this assignment, we are always using the normalized values). How does the classification accuracy of this model compare with the models fitted in question 2 with a single gene (on both the training and test sets)?**

# +
# your code here
# fitting multi regression model
multi_regr = LogisticRegression(C=100000, solver="lbfgs", max_iter=10000, random_state=109)
multi_regr.fit(X_train, y_train)

# predictions
y_train_pred_multi = multi_regr.predict(X_train)
y_test_pred_multi = multi_regr.predict(X_test)

# accuracy
train_score_multi = accuracy_score(y_train, y_train_pred_multi)
test_score_multi = accuracy_score(y_test, y_test_pred_multi)

print('Training set accuracy for multiple logistic regression = ', train_score_multi)
print('Test set accuracy for multiple logistic regression = ', test_score_multi)

df_res = df_res.append([['Multiple Logistic Regression train', train_score_multi],
                        ['Multiple Logistic Regression test', test_score_multi]])
df_res
# -

# *your answer here*
#
# Accuracy improves on both sets relative to the single-gene models; however, the gap between training and test accuracy indicates the model is overfitted.

# **3.2** **How many of the coefficients estimated by this multiple logistic regression in the previous part (P3.1) are significantly different from zero at a *significance level of 5%*? Use the same value of C=100000 as before.**
#
# **Hint:** To answer this question, use *bootstrapping* with 100 bootstrap samples/iterations.
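# The significance test used in the solution below — a coefficient is significant at the 5% level when its bootstrap 95% confidence interval excludes zero — can be illustrated on a toy array first (illustrative data, not from the homework set):

```python
import numpy as np

# Toy bootstrap coefficient draws: rows = predictors, columns = bootstrap iterations.
boot = np.array([
    [0.8, 1.1, 0.9, 1.0, 1.2],    # CI lies entirely above zero -> significant
    [-0.3, 0.2, -0.1, 0.1, 0.0],  # CI straddles zero           -> not significant
])

# 95% percentile interval per predictor.
upper = np.percentile(boot, 97.5, axis=1)
lower = np.percentile(boot, 2.5, axis=1)

# Significant when the whole interval sits on one side of zero.
significant = (upper < 0) | (lower > 0)
print(significant)  # [ True False]
```

# The real solution applies exactly this rule, only with 100 resampled model fits supplying the columns of the coefficient array.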
# +
# your code here
# bootstrapping code
n = 100  # Number of iterations
boot_coefs = np.zeros((X_train.shape[1], n))  # Create empty storage array for later use

# iteration for each sample
for i in range(n):
    # Sampling WITH replacement the indices of a resampled dataset
    sample_index = np.random.choice(range(y_train.shape[0]), size=y_train.shape[0], replace=True)

    # finding subset
    x_train_samples = X_train.values[sample_index]
    y_train_samples = y_train.values[sample_index]

    # finding logreg coefficients
    logistic_mod_boot = LogisticRegression(C=100000, fit_intercept=True, solver="lbfgs", max_iter=10000)
    logistic_mod_boot.fit(x_train_samples, y_train_samples)
    boot_coefs[:, i] = logistic_mod_boot.coef_

# +
# your code here
ci_upper = np.percentile(boot_coefs, 97.5, axis=1)
ci_lower = np.percentile(boot_coefs, 2.5, axis=1)

# count significant predictors
sig_b_ct = 0
sig_preds = []
cols = list(X_train.columns)

# if the CI contains 0, then the coefficient is insignificant
for i in range(len(ci_upper)):
    if ci_upper[i] < 0 or ci_lower[i] > 0:
        sig_b_ct += 1
        sig_preds.append(cols[i])

print("Significant coefficients at 5pct level = %i / %i" % (sig_b_ct, len(ci_upper)))
# print('Number of significant columns: ', len(sig_preds))
# -

# **3.3 Comment on the classification accuracy of both the training and testing set. Given the results above, how would you assess the generalization capacity of your trained model? What other tests would you suggest to better guard against possibly having a false sense of the overall efficacy/accuracy of the model as a whole?**

# *your answer here*
#
# The gap between training and test accuracy suggests limited generalization capacity. Proper cross-validation and/or regularization would guard against a false sense of the model's overall accuracy.

# **3.4 Now let's use regularization to improve the predictions from the multiple logistic regression model. Specifically, use LASSO-like regularization and cross-validation to train the model on the training set. Report the classification accuracy on both the training and testing set.**

# +
# your code here
# fitting regularized multi regression model - L1 penalty
# liblinear supports the L1 penalty - use 5-fold CV
multi_regr = LogisticRegressionCV(solver='liblinear', penalty='l1', cv=5)
multi_regr.fit(X_train, y_train)

# predictions
y_train_pred_multi = multi_regr.predict(X_train)
y_test_pred_multi = multi_regr.predict(X_test)

# accuracy
train_score_multi = accuracy_score(y_train, y_train_pred_multi)
test_score_multi = accuracy_score(y_test, y_test_pred_multi)

print('Training set accuracy for multiple logistic regression = ', train_score_multi)
print('Test set accuracy for multiple logistic regression = ', test_score_multi)

df_res = df_res.append([['Reg-loR train', train_score_multi],
                        ['Reg-loR val', test_score_multi]])
df_res
# -

# **3.5 Do the 10 best predictors from Q1 hold up as important features in this regularized model? If not, explain why this is the case (feel free to use the data to support your explanation).**

# your code here
best_pred_1_3 = set(t_tests.index[best_preds_idx[0:10]])
print(best_pred_1_3)

# your code here
multi_regr_coefs = multi_regr.coef_ != 0
# Following is the list of Lasso coefficients and the number of Log Reg L1 coefficients
predictors[multi_regr_coefs[0]], np.sum(multi_regr_coefs[0])

# your code here
best_pred_1_3.difference(predictors[multi_regr_coefs[0]])
# The following predictors were important according to the t-test, but not for Log Reg - L1.

# your code here
# checking correlation between the above list and the best predictor
df[['X17042_at', 'X76648_at', 'Y08612_at', 'M31523_at']].corr().style.background_gradient(cmap='Blues')

# *your answer here*
#
# The idea here is that the predictors that did not make it past regularization are the ones strongly correlated with the best predictor. Notice the high (absolute) correlation values in the last row / last column.
#
# <div class='exercise'> <b> Question 4 [25pts]: Multiclass Logistic Regression </b> </div>

# **4.1** Load the data `hw4_mc_enhance.csv.zip` and examine its structure. How many instances of each class are there in our dataset?
#
# **4.2** Split the dataset into train and test, 80-20 split, random_state = 8.
#
# We are going to use two particular features/predictors -- 'M31523_at', 'X95735_at'. Create a scatter plot of these two features using the training set. We should be able to discern from the plot which sample belongs to which `cancer_type`.
#
# **4.3** Fit the following two models using cross-validation:
# - Logistic Regression Multiclass model with linear features.
# - Logistic Regression Multiclass model with Polynomial features, degree = 2.
#
# **4.4** Plot the decision boundary and interpret results. **Hint:** You may utilize the function `overlay_decision_boundary`
#
# **4.5** Report and plot the CV scores for the two models and interpret the results.
#
# <hr>

# ### Solutions

# **4.1 Load the data `hw4_mc_enhance.csv.zip` and examine its structure. How many instances of each class are there in our dataset?**

# your code here
zf = zipfile.ZipFile('data/hw4_mc_enhance.csv.zip')
df = pd.read_csv(zf.open('hw4_mc_enhance.csv'))
display(df.describe())
display(df.head())

# your code here
print(df.columns)
# How many instances of each class are there in our dataset?
print(df.cancer_type.value_counts())

# **4.2 Split the dataset into train and test, 80-20 split, random_state = 8.**
#
# **We are going to utilize these two features - 'M31523_at', 'X95735_at'. Create a scatter plot of these two features using the training dataset. We should be able to discern from the plot which sample belongs to which `cancer_type`.**

# +
# your code here
# Split data
from sklearn.model_selection import train_test_split
random_state = 8

data_train, data_test = train_test_split(df, test_size=.2, random_state=random_state)

data_train_X = data_train[best_preds[0:2]]
data_train_Y = data_train['cancer_type']

# your code here
print(best_preds[0:2])

# +
# your code here
X = data_train_X.values
y = data_train_Y.values

pal = sns.utils.get_color_cycle()
class_colors = {0: pal[0], 1: pal[1], 2: pal[2]}
class_markers = {0: 'o', 1: '^', 2: 'v'}
class_names = {"ClassA": 0, "ClassB": 1, "ClassC": 2}

def plot_cancer_data(ax, X, y):
    for class_name, response in class_names.items():
        subset = X[y == response]
        ax.scatter(
            subset[:, 0], subset[:, 1],
            label=class_name, alpha=.9,
            color=class_colors[response], lw=.5, edgecolor='k',
            marker=class_markers[response])
    ax.set(xlabel='Biomarker 1', ylabel='Biomarker 2')
    ax.legend(loc="lower right")

fig, ax = plt.subplots(figsize=(10,6))
ax.set_title('M31523_at vs. X95735_at')
plot_cancer_data(ax, X, y)
# -

# **4.3 Fit the following two models using cross-validation:**
#
# **Logistic Regression Multiclass model with linear features.**
#
# **Logistic Regression Multiclass model with Polynomial features, degree = 2.**

# +
# your code here
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import StandardScaler

polynomial_logreg_estimator = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False),
    LogisticRegressionCV(multi_class="ovr"))

# Since this is a Pipeline, you can call `.fit` and `.predict` just as if it were any other estimator.
#
# Note that you can access the logistic regression classifier itself by
# polynomial_logreg_estimator.named_steps['logisticregressioncv']

# +
# your code here
standardize_before_logreg = True

if not standardize_before_logreg:
    # without standardizing...
    logreg_ovr = LogisticRegressionCV(multi_class="ovr", cv=5, max_iter=300).fit(X, y)
    polynomial_logreg_estimator = make_pipeline(
        PolynomialFeatures(degree=2, include_bias=False),
        LogisticRegressionCV(multi_class="ovr", cv=5, max_iter=300)).fit(X, y)
else:
    # with standardizing... since we want to standardize all features, it's really this easy:
    logreg_ovr = make_pipeline(
        StandardScaler(),
        LogisticRegressionCV(multi_class="ovr", cv=5, max_iter=300)).fit(X, y)
    polynomial_logreg_estimator = make_pipeline(
        PolynomialFeatures(degree=2, include_bias=False),
        StandardScaler(),
        LogisticRegressionCV(multi_class="ovr", cv=5)).fit(X, y)
# -

# **4.4 Plot the decision boundary and interpret results. Hint: You may utilize the function `overlay_decision_boundary`**

# +
def overlay_decision_boundary(ax, model, colors=None, nx=200, ny=200, desaturate=.5, xlim=None, ylim=None):
    """
    A function that visualizes the decision boundaries of a classifier.

    ax: Matplotlib Axes to plot on
    model: Classifier to use.
     - if `model` has a `.predict` method, like an sklearn classifier, we call `model.predict(X)`
     - otherwise, we simply call `model(X)`
    colors: list or dict of colors to use. Use color `colors[i]` for class i.
     - If colors is not provided, uses the current color cycle
    nx, ny: number of mesh points to evaluate the classifier on
    desaturate: how much to desaturate each of the colors (for better contrast with the sample points)
    xlim, ylim: range to plot on. (If the default, None, is passed, the limits will be taken from `ax`.)
    """
    # Create mesh.
    xmin, xmax = ax.get_xlim() if xlim is None else xlim
    ymin, ymax = ax.get_ylim() if ylim is None else ylim
    xx, yy = np.meshgrid(
        np.linspace(xmin, xmax, nx),
        np.linspace(ymin, ymax, ny))
    X = np.c_[xx.flatten(), yy.flatten()]

    # Predict on mesh of points.
    model = getattr(model, 'predict', model)
    y = model(X)
    # print("Do I predict", y)
    # y[np.where(y=='aml')] = 3
    # y[np.where(y=='allT')] = 2
    # y[np.where(y=='allB')] = 1
    y = y.astype(int)  # This may be necessary for 32-bit Python.
    y = y.reshape((nx, ny))

    # Generate colormap.
    if colors is None:
        # If colors not provided, use the current color cycle.
        # Shift the indices so that the lowest class actually predicted gets the first color.
        # ^ This is a bit magic, consider removing for next year.
        colors = (['white'] * np.min(y)) + sns.utils.get_color_cycle()

    if isinstance(colors, dict):
        missing_colors = [idx for idx in np.unique(y) if idx not in colors]
        assert len(missing_colors) == 0, f"Color not specified for predictions {missing_colors}."

        # Make a list of colors, filling in items from the dict.
        color_list = ['white'] * (np.max(y) + 1)
        for idx, val in colors.items():
            color_list[idx] = val
    else:
        assert len(colors) >= np.max(y) + 1, "Insufficient colors passed for all predictions."
        color_list = colors
    color_list = [sns.utils.desaturate(color, desaturate) for color in color_list]
    cmap = matplotlib.colors.ListedColormap(color_list)

    # Plot decision surface
    ax.pcolormesh(xx, yy, y, zorder=-2, cmap=cmap, norm=matplotlib.colors.NoNorm(), vmin=0, vmax=y.max() + 1)
    xx = xx.reshape(nx, ny)
    yy = yy.reshape(nx, ny)
    if len(np.unique(y)) > 1:
        ax.contour(xx, yy, y, colors="black", linewidths=1, zorder=-1)
    else:
        print("Warning: only one class predicted, so not plotting contour lines.")
# -

# Your code here
def plot_decision_boundary(x, y, model, title, ax):
    plot_cancer_data(ax, x, y)
    overlay_decision_boundary(ax, model, colors=class_colors)
    ax.set_title(title)

# your code here
fig, axs = plt.subplots(1, 2, figsize=(12, 5))
named_classifiers = [
    ("Linear", logreg_ovr),
    ("Polynomial", polynomial_logreg_estimator)
]
for ax, (name, clf) in zip(axs, named_classifiers):
    plot_decision_boundary(X, y, clf, name, ax)

# **4.5 Report and plot the CV scores for the two models and interpret the results.**

# +
# your code here
cv_scores = [
    cross_val_score(model, X, y, cv=3)
    for name, model in named_classifiers]

plt.boxplot(cv_scores);
# one tick per model (two models here)
plt.xticks(np.arange(1, len(named_classifiers) + 1), [name for name, model in named_classifiers])
plt.xlabel("Logistic Regression variant")
plt.ylabel("Validation-Set Accuracy");
# -

# your code here
print("Cross-validation accuracy:")
pd.DataFrame(cv_scores, index=[name for name, model in named_classifiers]).T.aggregate(['mean', 'std']).T

# We are looking for low standard deviations in cross-validation scores. If the standard deviation is low (like in this case), we expect accuracy on an unseen test dataset to be roughly equal to the mean cross-validation accuracy.

# <div class='exercise'><b> Question 5: [10 pts] Including an 'abstain' option </b></div>
#
# One of the reasons a hospital might be hesitant to use your cancer classification model is that a misdiagnosis by the model on a patient can sometimes prove to be very costly (e.g., missing a diagnosis or wrongly diagnosing a condition, and subsequently, one may file a lawsuit seeking compensation for damages). One way to mitigate this concern is to allow the model to 'abstain' from making a prediction whenever it is uncertain about the diagnosis for a patient. However, when the model abstains from making a prediction, the hospital will have to forward the patient to a specialist, which would incur additional cost. How could one design a cancer classification model with an abstain option, such that the cost to the hospital is minimized?
#
# **Hint:** Think of ways to build on top of the logistic regression model and have it abstain on patients who are difficult to classify.

# **5.1** More specifically, suppose the cost incurred by a hospital when a model mis-predicts on a patient is $\$5000$, and the cost incurred when the model abstains from making a prediction is \$1000. What is the average cost per patient for the OvR logistic regression model (without quadratic or interaction terms) from **Question 4**? Note that this needs to be evaluated on the patients in the testing set.

# **5.2** Design a classification strategy (into the 3 groups plus the *abstain* group) that has as low a cost as possible per patient (certainly a lower cost per patient than the logistic regression model). Give a justification for your approach.

# <hr>

# ### Solutions

# **5.1 More specifically, suppose the cost incurred by a hospital when a model mis-predicts on a patient is $\$5000$, and the cost incurred when the model abstains from making a prediction is \$1000. What is the average cost per patient for the OvR logistic regression model (without quadratic or interaction terms) from Question 4?** <br><br> Note that this needs to be evaluated on the patients in the testing set.

# *your answer here*
#
# **Philosophy:** Assuming the OvR logistic regression model, we estimate $p_j$ for $j\in \{1,2,3\}$, the marginal probability of being in each class. `sklearn` handles the normalization for us, although the normalization step is not necessary for the multinomial model since the softmax function is already constrained to sum to 1.
#
# Following the hint, we will proceed by using the trained OvR logistic regression model to estimate $\hat{p}_j$ and then use the misclassifications to estimate their cost.

data_test.head()

# +
# predict using only the two best predictors
dec = logreg_ovr.predict(data_test.loc[:, best_preds[0:2]].values)
dec = pd.Series(dec).astype('category').cat.codes

# true values in test, our y_test
vl = np.array(data_test.cancer_type.astype('category').cat.codes)
# -

# your code here
def cost(predictions, truth):
    '''
    Counts the cost when we have misclassifications in the predictions vs. the truth set.
    Option = -1 is the abstain option and is only relevant when the values include the
    abstain option; otherwise the initial cost defaults to 0 (for question 5.1).

    Arguments: prediction values and true values
    Returns: the numerical cost
    '''
    cost = 1000 * len(predictions[predictions == -1])  # defaults to 0 for 5.1
    true_vals = truth[predictions != -1]               # defaults to truth for 5.1
    predicted_vals = predictions[predictions != -1]    # defaults to predictions for 5.1
    cost += 5000 * np.sum(true_vals != predicted_vals)
    return cost

print("Cost incurred for OvR Logistic Regression Model without abstain: $", cost(dec, vl)/len(vl))

# **5.2 Design a classification strategy (into the 3 groups plus the *abstain* group) that has as low a cost as possible per patient (certainly a lower cost per patient than the logistic regression model). Give a justification for your approach.**

# Following 5.1, we make the decision to abstain or not based on minimizing the expected cost.
# <br><br>
# The expected cost for abstaining is $\$1000$. The expected cost for predicting is $\$5000 * P(\text{misdiagnosis}) = 5000 * (1 - \hat{p}_k)$ where $k$ is the label of the predicted class.
#
# So our decision rule is: if the expected cost of a misdiagnosis is less than the cost of abstaining (expressed by the formula $5000 * (1 - \hat{p}_k) < 1000$), then attempt a prediction. Otherwise, abstain.

# +
# your code here
def decision_rule(lrm_mod, input_data):
    probs = lrm_mod.predict_proba(input_data)
    predicted_class = np.argmax(probs, axis=1)
    conf = 1.0 - np.max(probs, axis=1)
    predicted_class[5000*conf > 1000.0] = -1  # Abstain
    return predicted_class

inp = data_test.loc[:, best_preds[0:2]].values
dec2 = decision_rule(logreg_ovr, inp)
print("Cost incurred for new model: $", cost(dec2, vl)/len(vl))
# -
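# The decision rule above implies a single confidence threshold on the top-class probability, which can be computed directly from the two cost figures (a minimal sketch of the arithmetic, nothing model-specific):

```python
# Cost parameters from the problem statement.
cost_misdiagnosis = 5000
cost_abstain = 1000

# Predict only when 5000 * (1 - p_hat) < 1000, i.e. when the top-class
# probability p_hat exceeds 1 - 1000/5000 = 0.8; otherwise abstain.
threshold = 1 - cost_abstain / cost_misdiagnosis
print(threshold)  # 0.8
```

# So `decision_rule` is equivalent to abstaining whenever the classifier's highest predicted probability falls below 0.8.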
# Source notebook: content/Homework/HW4/cs109a_HW4_solutions.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
from sklearn.preprocessing import MinMaxScaler
import seaborn as sns
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint
from keras.layers import Dropout
from keras.layers import LSTM

dataset = pd.read_csv("../data/gesture7.txt")
dataset.head()

dataset = dataset.dropna()
dataset_train = np.array(dataset)
dataset_train = dataset_train[np.random.permutation(len(dataset_train))]

X_train = dataset_train[:,:-1]
Y_train = dataset_train[:,-1:]
Y_train = [int(i) for i in Y_train]
Y_train = np.eye(np.max(Y_train) + 1)[Y_train]  # one-hot encode the integer labels
Y_train.shape

X_train = np.asarray(X_train)
Y_train = np.asarray(Y_train)

# +
model = Sequential()
model.add(Dense(16, input_shape=(16,)))
model.add(Dropout(0.2))
model.add(Dense(32))
model.add(Dropout(0.2))
model.add(Dense(32))
model.add(Dropout(0.2))
model.add(Dense(7, activation="softmax"))
# -

# categorical_crossentropy (not binary_crossentropy) is the appropriate loss for a
# one-hot multi-class target with a softmax output; binary_crossentropy here would
# report misleading accuracy values.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

history = model.fit(X_train, Y_train, epochs=100, batch_size=32, verbose=2, validation_split=0.2)

fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(9,9))
plt.subplot(2,1,1)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.legend(["Train","Validation"])
plt.title("Loss")
plt.subplot(2,1,2)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.legend(["Train","Validation"])
plt.title("Accuracy")

history.params
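# The `np.eye(np.max(Y_train) + 1)[Y_train]` idiom used above one-hot encodes integer labels by indexing rows of an identity matrix. A standalone illustration on toy labels:

```python
import numpy as np

# Integer class labels for four samples, three classes (0, 1, 2).
labels = [0, 2, 1, 2]

# Indexing an identity matrix by the label vector yields one row per sample,
# with a single 1 in the column of that sample's class.
one_hot = np.eye(np.max(labels) + 1)[labels]
print(one_hot.shape)  # (4, 3)
print(one_hot)
```

# This is equivalent to `keras.utils.to_categorical(labels)` for non-negative integer labels.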
# Source notebook: Python/Fitting/LSTM.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Shallow Neural Network in Keras

# Build a shallow neural network to classify MNIST digits

# #### Set seed for reproducibility

import numpy as np
np.random.seed(42)

# #### Load dependencies

import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

# #### Load data

(X_train, y_train), (X_test, y_test) = mnist.load_data()

X_train.shape
y_train.shape
y_train[0:99]
X_test.shape
X_test[0]
y_test.shape
y_test[0]

# #### Preprocess data

X_train = X_train.reshape(60000, 784).astype('float32')
X_test = X_test.reshape(10000, 784).astype('float32')

X_train /= 255
X_test /= 255

X_test[0]

n_classes = 10
y_train = keras.utils.to_categorical(y_train, n_classes)
y_test = keras.utils.to_categorical(y_test, n_classes)

y_test[0]

# #### Design neural network architecture

# +
# add architecture here -- one plausible completion, matching the parameter
# counts computed below (a single 64-unit hidden layer):
model = Sequential()
model.add(Dense(64, activation='sigmoid', input_shape=(784,)))
model.add(Dense(10, activation='softmax'))
# -

model.summary()

(64*784)
(64*784)+64
(10*64)+10

# #### Configure model

# +
# compile model here -- plain SGD with cross-entropy loss is a reasonable choice:
model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.1), metrics=['accuracy'])
# -

# #### Train!

# +
# train model here -- these hyperparameters are illustrative, not prescribed:
model.fit(X_train, y_train, batch_size=128, epochs=200, verbose=1,
          validation_data=(X_test, y_test))
# -

model.evaluate(X_test, y_test)
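# The arithmetic cells above ((64*784)+64 and (10*64)+10) verify the layer sizes reported by `model.summary()`. The general rule for a fully connected layer can be written as a tiny helper (illustrative, not part of the notebook's required code):

```python
def dense_params(n_in, n_out):
    # A fully connected layer has one weight per (input, output) pair
    # plus one bias per output unit.
    return n_in * n_out + n_out

print(dense_params(784, 64))  # 50240 -- hidden layer, matches (64*784)+64
print(dense_params(64, 10))   # 650   -- output layer, matches (10*64)+10
```

# Summing the two gives the network's total trainable parameter count.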
# Source notebook: notebooks/live_training/shallow_net_in_keras_LT.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
# enable matplotlib widgets
# %matplotlib widget

# import the model class from the semi_auto_fit module
from optichem.semi_auto_fit import model
s = model()

# set the refractive index of the ATR crystal (2.4 for diamond, 4 for Ge, 3.4 for Si, or 2.4 for ZnSe)
s.set_crystal_index(2.4 + 1j*1e-5)
# set the angle of incidence (found in the ATR manual)
s.set_angle_of_incidence(45)
# set the refractive index in the visible spectrum (usually found on the material's SDS)
s.set_n(1.367)

# set the x-axis input units of the uploaded spectrum and the desired output units;
# options include: '1/m', 'Hz', 'rad/s', 'nm', 'um', 'm', 'eV'
s.set_input_units('1/cm')
s.set_output_units('um')

# upload ATR data file
s.upload('ATR_measurements/IPA_ATR_data.txt')
# set wavelength range manually
s.set_range(6.75, 12.5)
# -

# initialize solver and plot results
s.start_solver()
s.plot_fit()

# +
# save optical property data
s.save_optical_prop('optichem_results/IPA_optical_properties.txt')
# save vibrational modes for stitching
s.save_modes('optichem_results/IPA_wL_range_2')
# Source notebook: tests/IPA_test_fit.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ### Importing libraries

import pandas as pd

# ### Reading JSON files

json = open('datasets/aluguel.json')
print(json.read())

df_json = pd.read_json('datasets/aluguel.json')
df_json

# ### Reading TXT files

txt = open('datasets/aluguel.txt')
print(txt.read())

df_txt = pd.read_table('datasets/aluguel.txt')
df_txt

# ### Reading Excel files

xlsx = pd.read_excel('datasets/aluguel.xlsx')
xlsx

# ### Creating a DataFrame

exemplo_dados = [['Sidney','Desenvolvedor Python','Sidia'],
                 ['Tux','Engenheiro Linux','Alura'],
                 ['Neto','Engenheiro Machine Learning','Udacity']]

exemplo_dados = pd.DataFrame(data=exemplo_dados, columns=['Nome','Cargo','Empresa'])
exemplo_dados
# Source notebook: extras/Tipos_de_Fonte_de_Dados_em_Pandas.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3.9.6 64-bit (windows store)
#     name: python3
# ---

import numpy as np

x = np.random.randint(1, 21, 10)
print(x)
x1 = np.arange(2, 12)
print(x1)
print(x + x1)
# Create an array of 10 random integers in the range 1 to 20, then create an array
# of 10 elements starting from 2. Print the values of both arrays, as well as
# their sum, using the print() function.

x = np.random.randint(0, 10, 7)
print(x)
print(x * 3)
# Create an array of 7 random integers in the range 0 to 9, then create a new array
# consisting of the 7 elements of the first array multiplied by 3. Print the values
# of both arrays using the print() function.

np.sqrt(np.arange(7, 7+20))
# Source notebook: Data Science/Numpy/arrayOperations.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] id="3bJWutAIcnoc"
# ### Pre-requisite installations

# + colab={"base_uri": "https://localhost:8080/"} id="RnMWN_FG8QFH" outputId="3fbe640a-b709-4896-9c6e-b88f03958ff1"
import nltk
nltk.download('punkt')

# + [markdown] id="wyYqZbyJRw0f"
# ### Models Training

# + id="kFWgMuXxT725"
import os, re
import json
import random
import numpy as np
import pandas as pd
from string import punctuation
from nltk import word_tokenize
from itertools import groupby
from statistics import median
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

# + id="Wy-c29HEOlNI"
# For randomization and reproducibility of results
random.seed(123)
np.random.seed(123)

# + id="A0wvecCQOlKS"
run_results = pd.DataFrame(columns=['Classifier', 'Mean Fit Time(s)', 'Mean Test Time(s)',
                                    'Mean Train Score', 'Mean CV Score', 'Best Train Score',
                                    'Test Score', 'F1 Score'])

# + id="rOXW0O08hLHG"
# Using Glove embeddings
embeddings_size = 100
glove_path = '/content/drive/MyDrive/Colab Notebooks/models/glove.6B.%dd.txt' % embeddings_size

# + id="AgS2b3x2hRga"
embeddings_index = dict()
with open(glove_path) as gfile:
    for line in gfile:
        values = line.split()
        word, vectors = values[0], np.asarray(values[1:], dtype='float32')
        embeddings_index[word] = vectors

# + id="ZTKaFOk3DUoN"
file_path = '/content/drive/MyDrive/Colab Notebooks/VICCI/data/generated_train_data.json'
training_data = None
with open(file_path, 'r') as file:
    training_data = json.load(file)

# + id="DjgJ0aQ5Oqkv"
queries, intents = [], []
for train_set in training_data:
    for query in train_set['query']:
        queries.append(query)
        intents.append(train_set['intent'])

# + colab={"base_uri": "https://localhost:8080/"} id="TdMQimxocSa0" outputId="24106efd-cc26-45f6-c3f7-7a14b646d208"
# Training data shape
len(queries), len(intents)

# + id="lXnXcdeVseIa"
queries_train, queries_test, intents_train, intents_test = train_test_split(
    queries, intents, train_size=0.7, random_state=123, stratify=intents)

# + colab={"base_uri": "https://localhost:8080/"} id="pYeEcPWGh64p" outputId="b0b9b91a-00af-4a9e-a214-10185747a17c"
# Train and test set shape
len(queries_train), len(queries_test), len(intents_train), len(intents_test)

# + id="r_C4RCpCOqhs"
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.feature_extraction.text import TfidfVectorizer

# + id="1S8tG8KZPf45"
# We don't want to exclude stopwords, as questions in chat are short and crisp and
# words like "what" and "not" carry a lot of weight; however, word_tokenize treats
# the sentence-ending punctuation marks as separate tokens, which have to be removed
tfidf = TfidfVectorizer(max_features=600, encoding='latin-1', sublinear_tf=True,
                        lowercase=True, tokenizer=word_tokenize, ngram_range=(1,2),
                        stop_words=list(punctuation), token_pattern=None)

# + colab={"base_uri": "https://localhost:8080/"} id="1BD68egzQ4vI" outputId="a9e304cd-f503-4a70-ed23-b0bbaecc56f7"
tfidf.fit(queries_train)

# + id="93yXgmn6xH9x"
tfidf_dict = dict(zip(tfidf.get_feature_names(), list(tfidf.idf_)))
tfidf_feat = tfidf.get_feature_names()

# + id="yqFqb8_2hdly"
# We have to calculate the tf-idf weighted average of the glove embeddings
tfidf_weighted_glove_train = []
for query in queries_train:
    tokens = [tokn.lower() for tokn in word_tokenize(query) if tokn not in list(punctuation)]
    query_vec = np.zeros(embeddings_size)
    weight_sum = 0
    for tokn in tokens:
        if tokn in embeddings_index and tokn in tfidf_dict:
            vec = embeddings_index[tokn]
            # the tf-idf score of a word in a query is pumped up based on the ratio
            # of its count in the query to the total query length
            score = tfidf_dict[tokn] * ((tokens.count(tokn)/len(tokens)) + 1)
            query_vec += (vec * score)
            weight_sum += score
        else:
            # print(tokn)
            pass
    if weight_sum != 0:
        query_vec /= weight_sum
    tfidf_weighted_glove_train.append(query_vec)
tfidf_weighted_glove_train = np.array(tfidf_weighted_glove_train)

# + id="dFG0XrIW1RiF"
# Similar vectorization for the test data
tfidf_weighted_glove_test = []
for query in queries_test:
    tokens = [tokn.lower() for tokn in word_tokenize(query) if tokn not in list(punctuation)]
    query_vec = np.zeros(embeddings_size)
    weight_sum = 0
    for tokn in tokens:
        if tokn in embeddings_index and tokn in tfidf_dict:
            vec = embeddings_index[tokn]
            score = tfidf_dict[tokn] * ((tokens.count(tokn)/len(tokens)) + 1)
            query_vec += (vec * score)
            weight_sum += score
        else:
            # print(tokn)
            pass
    if weight_sum != 0:
        query_vec /= weight_sum
    tfidf_weighted_glove_test.append(query_vec)
tfidf_weighted_glove_test = np.array(tfidf_weighted_glove_test)

# + colab={"base_uri": "https://localhost:8080/"} id="JjsrVP5RinjK" outputId="29c16d4d-a346-4bd9-fe78-35935218c25c"
# Total feature length after concatenating both Tf-Idf and Weighted Glove
len(tfidf_feat) + tfidf_weighted_glove_train.shape[1]

# + id="tTL9mhDZQ4sj"
X_train = np.hstack((tfidf.transform(queries_train).todense(), tfidf_weighted_glove_train))
X_test = np.hstack((tfidf.transform(queries_test).todense(), tfidf_weighted_glove_test))

# + colab={"base_uri": "https://localhost:8080/"} id="REGnkxdrkYkr" outputId="1a22b9d6-f148-412f-a467-d39d80ca03ec"
X_train.shape, X_test.shape

# + colab={"base_uri": "https://localhost:8080/"} id="Dx8RMNBEQZxk" outputId="cb9e71cb-e274-47eb-c4fb-3a6a653aa8ba"
lbencoder = LabelEncoder()
lbencoder.fit(intents_train)

# + id="YYrnlSc-8wcE"
Y_train = lbencoder.transform(intents_train)
Y_test = lbencoder.transform(intents_test)

# + id="W5e_aWemQ4pu"
from sklearn.svm import SVC
from xgboost import XGBClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV, StratifiedShuffleSplit
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

# + id="7UpcaH2cV70N"
def classifier_analyzer(classifier, params):
    ss = StratifiedShuffleSplit(n_splits=5, test_size=0.25, random_state=123)
    # we explicitly pass StratifiedShuffleSplit because we want the CV data to be
    # shuffled in each split, which is not the default behaviour of GridSearchCV
    gsCV = GridSearchCV(classifier, params, scoring='accuracy', n_jobs=-1, refit=True,
                        cv=ss, return_train_score=True)
    gscv_result = gsCV.fit(X_train, Y_train).cv_results_

    print("Mean fit time : %.3fs" % gscv_result['mean_fit_time'].mean())
    print("Mean test time : %.3fs" % gscv_result['mean_score_time'].mean())
    print("Mean train score : %.3f" % gscv_result['mean_train_score'].mean())
    print("Mean CV score : %.3f" % gscv_result['mean_test_score'].mean())

    # Get the train score on the best estimator
    print("Best Train Score : %.3f" % accuracy_score(Y_train, gsCV.predict(X_train)))
    # Get the test score on the best estimator
    Y_pred = gsCV.predict(X_test)
    print("Best Test Score : %.3f" % accuracy_score(Y_test, Y_pred))
    print("Best params : ", gsCV.best_params_)
    return Y_pred

# + colab={"base_uri": "https://localhost:8080/"} id="ZYttIUQmV7u1" outputId="08406072-3c8a-49bc-d80c-1db25c1c4cbd"
# Logistic Regression
lr_clf = LogisticRegression(random_state=123, n_jobs=-1)
# not all combinations of penalty and solver are compatible, so we define a list of
# params dicts. First we fix the solver param, then go on to fix C
lr_params = [{'penalty': ['l2'], 'solver': ['newton-cg', 'sag', 'lbfgs']},
             {'penalty': ['elasticnet'], 'solver': ['saga'], 'l1_ratio': [0, 0.25, 0.5, 0.75, 1]}]
Y_pred = classifier_analyzer(lr_clf, lr_params)

# + colab={"base_uri": "https://localhost:8080/"} id="QWmVWhX6aZ-M" outputId="9a8038c4-d798-467b-be73-2f3845fbcc8b"
lr_clf = LogisticRegression(random_state=123, n_jobs=-1)
lr_params = [{'penalty': ['l2'], 'solver': ['newton-cg'], 'C': [0.01, 0.1, 1, 10, 100, 500]}]
Y_pred = classifier_analyzer(lr_clf, lr_params)

# + colab={"base_uri": "https://localhost:8080/"} id="1mPRbY-Prr9W" outputId="bc211f68-c279-4d3b-e4f1-5b8d8567c568"
print("Classification Report for the best params : ")
print(classification_report(Y_test, Y_pred, target_names=lbencoder.classes_))

# + id="9kDRD1pNpMUu"
run_results.loc[run_results.shape[0]] = ['Logistic Reg', 0.927, 0.001, 0.958, 0.910, 0.990, 0.993, 0.99]

# + colab={"base_uri": "https://localhost:8080/"} id="1VjUmEY1auW7" outputId="f436c371-a224-486e-d31e-a15ce91aa679"
# KNN
knn_clf = KNeighborsClassifier(n_jobs=-1)
knn_params = {'n_neighbors': [3, 5, 7, 10, 15], 'weights': ['uniform', 'distance'],
              'metric': ['cosine', 'minkowski', 'euclidean']}
Y_pred = classifier_analyzer(knn_clf, knn_params)

# + colab={"base_uri": "https://localhost:8080/"} id="WRZBQWrDx8Lp" outputId="1cfb1484-63b1-4624-c6ec-14408410b4f1"
print("Classification Report for the best params : ")
print(classification_report(Y_test, Y_pred, target_names=lbencoder.classes_))

# + id="wiJcw79cp76Z"
run_results.loc[run_results.shape[0]] = ['kNN', 0.023, 0.148, 0.946, 0.864, 0.992, 0.981, 0.98]

# + colab={"base_uri": "https://localhost:8080/"} id="xaHaGNOYbUfh" outputId="36872050-f546-4747-ebfb-e28ed3be8f98"
# SVM
svm_clf = SVC(probability=True, random_state=123)
svm_params = {'C': [0.001, 0.01, 0.1, 1, 10], 'kernel': ['rbf', 'poly', 'sigmoid']}
Y_pred = classifier_analyzer(svm_clf, svm_params)

# + colab={"base_uri":
"https://localhost:8080/"} id="rJgL6QGyyi1_" outputId="0eac0528-dfc7-41f6-b564-835cc1cab5fa" print("Classification Report for the best params : ") print(classification_report(Y_test, Y_pred, target_names=lbencoder.classes_)) # + id="tLhyU6bxqGob" run_results.loc[run_results.shape[0]]=['SVM', 1.989, 0.080, 0.514, 0.469, 0.990, 0.996, 1.0] # + colab={"base_uri": "https://localhost:8080/"} id="8IAALT9Mbd1z" outputId="7644713e-08a3-49ac-84bd-4261807eb84a" # SGD Classifier sgd_clf = SGDClassifier(early_stopping=False, n_jobs=-1, random_state=123) sgd_params = {'loss': ['hinge', 'modified_huber'], 'penalty': ['l2', 'elasticnet'], 'max_iter': [100, 300, 500, 700], 'alpha': [0.00001, 0.0001, 0.001, 0.01, 0.1], 'epsilon': [0.01, 0.05, 0.1]} Y_pred = classifier_analyzer(sgd_clf, sgd_params) # + colab={"base_uri": "https://localhost:8080/"} id="wWrUCdqwzfbw" outputId="c1c0bd66-b1e3-4058-ec13-a2701901ff79" print("Classification Report for the best params : ") print(classification_report(Y_test, Y_pred, target_names=lbencoder.classes_)) # + id="vKSFZFmwqPdY" run_results.loc[run_results.shape[0]]=['SGD Classifier', 0.376, 0.001, 0.935, 0.883, 0.989, 0.981, 0.98] # + colab={"base_uri": "https://localhost:8080/"} id="jPvV2f5mcq9R" outputId="af6634b8-ad28-4da6-e589-f02105fcc27f" # XGBoost xgb_clf = XGBClassifier(random_state=123, n_jobs=-1) # First we fix the objective param then, others xgb_params = [{'objective': ['binary:logistic', 'binary:hinge', 'multi:softprob','multi:softmax'] },{ 'objective' : ['multi:softmax'], 'num_class' : [len(set(intents))] }] Y_pred = classifier_analyzer(xgb_clf, xgb_params) # + colab={"base_uri": "https://localhost:8080/"} id="cSs6DcJykCqh" outputId="5561acb9-f8e9-4612-c95a-46910ffc9e63" xgb_clf = XGBClassifier(objective='binary:logistic', random_state=123, n_jobs=-1) # First we fix the objective param then, others xgb_params = { 'max_depth' : [3, 5, 7], 'n_estimators':[5,10,20,35,60], 'learning_rate' : [0.1, 0.2, 0.3, 0.5, 0.7] } Y_pred = 
classifier_analyzer(xgb_clf, xgb_params) # + colab={"base_uri": "https://localhost:8080/"} id="fwrNb2FK138P" outputId="e81e1d4f-5e9d-4d93-d873-6c0dd980b14d" print("Classification Report for the best params : ") print(classification_report(Y_test, Y_pred, target_names=lbencoder.classes_)) # + id="gPi_U0rLrESn" run_results.loc[run_results.shape[0]]=['XGBoost', 5.227, 0.016, 0.994, 0.917, 0.992, 0.974, 0.97] # + id="9H-CmNlCBH1K" from sklearn.preprocessing import MinMaxScaler # + colab={"base_uri": "https://localhost:8080/"} id="RyCr7Ow2BMR3" outputId="8198a4e9-e344-4347-f0e8-20d57fb7187d" # MultinomialNB cant take negative values scaler = MinMaxScaler() scaler.fit(X_train) # + id="bK9IST-SBTKR" X_train = scaler.transform(X_train) X_test = scaler.transform(X_test) # + colab={"base_uri": "https://localhost:8080/"} id="CYVazNrLbz5N" outputId="3a345957-d857-4bd6-ad7c-927e71364dc4" # MultiNomial naive bayes mnb_clf = MultinomialNB() mnb_params = {'alpha': [0.1, 0.3, 0.5, 0.7, 0.9, 1.0]} Y_pred = classifier_analyzer(mnb_clf, mnb_params) # + colab={"base_uri": "https://localhost:8080/"} id="nI8FvJwiB7qe" outputId="36ca2b45-52b6-4573-f5b4-715ba69ba28d" print("Classification Report for the best params : ") print(classification_report(Y_test, Y_pred, target_names=lbencoder.classes_)) # + id="h30-hQVhqf5W" run_results.loc[run_results.shape[0]]=['MultiNomial NB', 0.005, 0.001, 0.963, 0.872, 0.979, 0.933, 0.93] # + colab={"base_uri": "https://localhost:8080/", "height": 235} id="_ClPmK9RIW48" outputId="763bfb55-c31e-4dd7-8fa4-ed6ee402df60" run_results.sort_values(by=[ 'F1 Score', 'Test Score'], ascending=False) # + [markdown] id="QVc-4eedJb1b" # ### Test on User Inputs # + id="UIUsMxxJOpQt" inputs = ["what are the tests available for covid?", "bye", "after how much time do I see the symptoms?", "That's great.", "how do i protect myself?", "what is covid-19?", "ok. what are the vaccines available?", "i am looking for vaccination. 
i need help", "how many people have suffered?"] # + colab={"base_uri": "https://localhost:8080/"} id="EYJB5ZZ3J81I" outputId="994c571c-11d5-4e16-fb8a-6e5a757094c2" lr_clf = LogisticRegression(C=1, penalty='l2', solver='newton-cg', random_state=123, n_jobs=-1) lr_clf.fit(X_train, Y_train) # + colab={"base_uri": "https://localhost:8080/"} id="TZB5cbN0KLN1" outputId="3a4bdab6-e613-4c45-aecd-2e93d9603486" for inp in inputs: tokens = [tokn.lower() for tokn in word_tokenize(inp) if tokn not in list(punctuation)] query_vec = np.zeros(embeddings_size) weight_sum = 0 for tokn in tokens: if tokn in embeddings_index and tokn in tfidf_dict: vec = embeddings_index[tokn] score = tfidf_dict[tokn]*((tokens.count(tokn)/len(tokens))+1) query_vec += (vec * score) weight_sum += score else: # print(tokn) pass if weight_sum != 0: query_vec /= weight_sum pred = lr_clf.predict_proba(np.hstack((tfidf.transform([inp]).todense(), query_vec.reshape(1,-1)))) tag = lbencoder.inverse_transform([pred.argmax()])[0] print(inp," - ",tag," - ",pred[0][pred.argmax()]) # + colab={"base_uri": "https://localhost:8080/"} id="hrhaJGRWKv7P" outputId="929d7566-c594-4aaf-bcd9-7388b0f1f06f" svm_clf = SVC(C=10, kernel='rbf', probability=True, random_state=123) svm_clf.fit(X_train, Y_train) # + colab={"base_uri": "https://localhost:8080/"} id="z_aB5ZjjLDHX" outputId="a78e968c-0dc2-4602-db86-fc4a21a44f24" for inp in inputs: tokens = [tokn.lower() for tokn in word_tokenize(inp) if tokn not in list(punctuation)] query_vec = np.zeros(embeddings_size) weight_sum = 0 for tokn in tokens: if tokn in embeddings_index and tokn in tfidf_dict: vec = embeddings_index[tokn] score = tfidf_dict[tokn]*((tokens.count(tokn)/len(tokens))+1) query_vec += (vec * score) weight_sum += score else: # print(tokn) pass if weight_sum != 0: query_vec /= weight_sum pred = svm_clf.predict_proba(np.hstack((tfidf.transform([inp]).todense(), query_vec.reshape(1,-1)))) tag = lbencoder.inverse_transform([pred.argmax()])[0] print(inp," - ",tag," - 
",pred[0][pred.argmax()])
experiments/model_training.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: SageMath 9.1 # language: sagemath # metadata: # cocalc: # description: Open-source mathematical software system # priority: 1 # url: https://www.sagemath.org/ # name: sage-9.1 # resource_dir: /ext/jupyter/kernels/sage-9.1 # --- # # List practice # # 1. Try to produce the following lists: # - [ ] first ten positive even numbers; # - [ ] first hundred cubic numbers; # - [ ] first ten primes. # - [ ] $\pi, \pi^2, \pi^3, \dots, \pi^{10}$ # # # Series # # What is the result of the following: # - $\frac{3}{10}+\frac{3}{100}+\frac{3}{1000}+\dots$ # # - $\frac{1}{1^2}+\frac{2}{2^2}+\frac{3}{3^2}+\frac{4}{4^2}+\dots+$ # # Discrete logarithm # # Find the smallest integer $n$ such that # # $$ 10^n \equiv 1 \pmod{7} $$ # # Geometric sequence # # What is the result of # # - $1+2+4+8+16+\dots+2^{15}$ # # # Remainder # # What is the last two digits of # # - $1+2+4+8+16+\dots+2^{15}$ # # Fibonacci sequence # # Can we simplify the below sum # # - $F_1 + F_4 + F_7 + F_{10} + \dots + F_{100} $
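Possible solutions for several of the items above, sketched in plain Python (the notebook runs on a SageMath kernel, where symbolic `pi` and helpers such as `primes_first_n` are also available; this sketch uses only the standard library and floating-point `math.pi`):

```python
import math

# First ten positive even numbers
evens = [2 * k for k in range(1, 11)]

# First hundred cubic numbers
cubes = [k**3 for k in range(1, 101)]

# First ten primes, by trial division against the primes found so far
def first_primes(count):
    primes, cand = [], 2
    while len(primes) < count:
        if all(cand % p for p in primes):
            primes.append(cand)
        cand += 1
    return primes

# pi, pi^2, ..., pi^10 (numeric here; symbolic in Sage)
pi_powers = [math.pi**k for k in range(1, 11)]

# Geometric sum 1 + 2 + 4 + ... + 2^15, and its last two digits
geo = sum(2**k for k in range(16))
last_two = geo % 100

# Smallest n with 10^n ≡ 1 (mod 7)
n = 1
while pow(10, n, 7) != 1:
    n += 1
```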
20210407/materials/e1-submission.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:py38] * # language: python # name: conda-env-py38-py # --- # # Deck 110 ID Experiment # # (combine with result of 195) from erddapy import ERDDAP import pandas as pd import numpy as np import datetime import matplotlib.pyplot as plt from requests.exceptions import HTTPError server_url = 'http://akutan.pmel.noaa.gov:8080/erddap' e = ERDDAP(server=server_url) # searchterm mooring... subset this later df = pd.read_csv(e.get_search_url(response='csv', search_for='ICOADS')) print(df['Dataset ID'].values) # + kw = { } constraints = { 'DCK=~': "110", } # - temp_data = ['datasets_IMMA_ICOADS_ncei_1940s'] for dataset_id in temp_data: #read and import dataset print(dataset_id) try: d = ERDDAP(server=server_url, protocol='tabledap', response='csv', ) d.constraints=constraints d.dataset_id=dataset_id print(d.get_download_url()) except HTTPError: print('Failed to generate url {}'.format(dataset_id)) try: df_m = d.to_pandas( #index_col='time (UTC)', parse_dates=True, skiprows=(1,) # units information can be dropped. ) df_m.sort_index(inplace=True) df_m.columns = [x[1].split()[0] for x in enumerate(df_m.columns)] df_m['ID'] = df_m['ID'].astype(str) except: print(f"something failed in data download {dataset_id}") pass # + # load clean deck195 dck_195 = pd.read_csv('Deck195_CleanID.csv') # - # ## plot of daily observations (or weekly, monthly, etc) to look for the following artifact # # For operational security the Navy issued new rules for log-keeping in October 1942. From this date to ~June 1944 the Navigation data was not included in the deck log. Rather, it was in the header of a separate 'War Diary' -- basically the same as the old 'Remarks' page, but handled differently. # # This lasted until about June 1944, when the old ways returned. 
# # The questions: is there a corresponding hole in Deck 195 (Nov 1942-June 1944)? If not are the positions only given in whole degrees? Or does it look normal? # # I think we could plot the total number of obs reported per day to get a first look? # # df_m = df_m.set_index(pd.DatetimeIndex(df_m['time'])) dfg = df_m.groupby(df_m.index.year) # + jupyter={"outputs_hidden": true} for years in dfg.groups.keys(): dfy = dfg.get_group(years) for months in (dfy.groupby(dfy.index.dayofyear)).groups.keys(): print(f'{years}, {months}, {(dfy.groupby(dfy.index.dayofyear)).get_group(months).time.count()}') # - obs_195 = pd.read_csv('195doyobs.csv',header=None) obs_110 = pd.read_csv('110doyobs.csv',header=None) obs_110 obs_110['datetime'] = np.nan for value, row in obs_110.iterrows(): obs_110['datetime'][value] = pd.to_datetime(str(int(row[0]))+' '+str(int(row[1])), format='%Y %j') obs_195['datetime'] = np.nan for value, row in obs_195.iterrows(): obs_195['datetime'][value] = pd.to_datetime(str(int(row[0]))+' '+str(int(row[1])), format='%Y %j') # + fig, ax = plt.subplots(1,1, figsize=(10, 5), facecolor='w', edgecolor='k') plt.plot(obs_195.datetime,obs_195[2],label='Daily Obs Deck 195') plt.plot(obs_110.datetime,obs_110[2],label='Daily Obs Deck 110') plt.legend() # -
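The nested year / day-of-year `groupby` loop above can be collapsed into a single `resample` call once the frame is datetime-indexed. A sketch on synthetic data (the real ERDDAP download is not assumed available here):

```python
import pandas as pd

# Synthetic stand-in for df_m: one row per observation, datetime-indexed
times = pd.to_datetime(["1942-10-01", "1942-10-01", "1942-10-02", "1944-06-30"])
obs = pd.DataFrame({"time": times}).set_index(pd.DatetimeIndex(times))

# Daily observation counts in one call; days with no obs appear as 0,
# which makes a 'hole' like the Nov 1942 - June 1944 gap easy to spot
daily = obs.resample("D")["time"].count()
```

Plotting `daily` directly would replace the `195doyobs.csv` / `110doyobs.csv` round-trip for a first look at the gap.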
2020/KWood/ICOADS/Deck1110p195 ICOAADS Summary.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # 1A.algo - Dynamic programming and shortest paths
#
# Dynamic programming is a computational technique that comes up in many algorithms. It applies as soon as those algorithms can be written recursively.

from jyquickhelper import add_notebook_menu
add_notebook_menu()

# [Dynamic programming](http://fr.wikipedia.org/wiki/Programmation\_dynamique) is a way of solving, in a uniform manner, a class of optimization problems that satisfy the same property. We assume the problem $P$ can be split into several parts $P_1$, $P_2$, ... If $S$ is the optimal solution of problem $P$, then each part $S_1$, $S_2$, ... of that solution, applied to the corresponding sub-problem, is also optimal.
#
# For example, suppose we look for the shortest path $c(A,B)$ between cities $A$ and $B$. If it goes through city $M$, then the paths in $c(A,M)+c(M,B) = c(A,B)$ are also the shortest paths between the cities $A,M$ and $M,B$. The proof is a simple argument by contradiction: if the distance $c(A,M)$ were not optimal, it would be possible to build a shorter path between cities $A$ and $B$, which contradicts the initial assumption.
#
# These problems usually have a simple recursive formulation: if we know how to solve the problem for a sample of size $n$ and call that solution $S(n)$, then we can easily derive the solution $S(n+1)$ from $S(n)$. Sometimes the recurrence reaches further back: $S(n+1) = f(S(n), S(n-1), ..., S(0))$.
# ## The data
#
# We download the file ``matrix_distance_7398.txt`` from [matrix_distance_7398.zip](http://www.xavierdupre.fr/enseignement/complements/matrix_distance_7398.zip), which contains distances between various cities (not all of them).

import pyensae
pyensae.download_data("matrix_distance_7398.zip", website="xd")

# This file can be read with the [pandas](http://pandas.pydata.org/) module introduced in session 10 [TD 10 : DataFrame et Matrice](http://www.xavierdupre.fr/app/ensae_teaching_cs/helpsphinx/notebooks/td1a_cenonce_session_10.html#io):

import pandas
df = pandas.read_csv("matrix_distance_7398.txt", sep="\t", header=None, names=["v1", "v2", "distance"])
df.head()

# The ``values`` attribute behaves like a matrix, a list of lists:

matrice = df.values
matrice[:5]

# We can also use the small example presented in session 4 on files [TD 4 : Modules, fichiers, expressions régulières](http://www.xavierdupre.fr/app/ensae_teaching_cs/helpsphinx/notebooks/td1a_cenonce_session4.html#file). The data comes as a matrix. The first two columns are strings; the last one is a numeric value that must be converted.

with open("matrix_distance_7398.txt", "r") as f:
    matrice = [row.strip(' \n').split('\t') for row in f.readlines()]
for row in matrice:
    row[2] = float(row[2])
print(matrice[:5])

# Each row defines a trip between two cities made in one stretch, without any stopover. Accents were removed from the file.

# ## Exercise 1
#
# Build the list of cities without duplicates.

# ## Exercise 2
#
# Build a dictionary ``{ (a,b) : d, (b,a) : d }`` where ``a,b`` are cities and ``d`` the distance between them.
#
# We want to compute the distance between the cities ``Charleville-Mezieres`` and ``Bordeaux``. Does this distance appear in the list of distances we have?
# ## Shortest-path algorithm
#
# We create an array ``d[v]`` that contains, or will contain, the optimal distance between city ``v`` and ``Charleville-Mezieres``. The value we are looking for is ``d['Bordeaux']``. The array is initialized as follows:
#
# - ``d['Charleville-Mezieres'] = 0``
# - ``d[v] = infinity`` for every $v \neq$ ``'Charleville-Mezieres'``.

# ## Exercise 3
#
# Which cells can we fill in easily first?

# ## Exercise 4
#
# Given a city $v$ and another one $w$, we notice that $d[w] > d[v] + dist[w,v]$. What do you suggest doing? Derive from this an algorithm that determines the shortest distance between Charleville-Mezieres and Bordeaux.
#
# If the solution still eludes you, you can take inspiration from [Dijkstra's algorithm](http://fr.wikipedia.org/wiki/Algorithme_de_Dijkstra).

# ## Distributing ski pairs
#
# This problem is an example where we first have to prove that the solution satisfies a certain property before a dynamic programming solution can be applied to it.
#
# $N=10$ skiers enter a shop to rent 10 pairs of skis (out of $M>N$). We want to give each of them a pair that fits (we assume the size of a ski pair should be as close as possible to the skier's height). We therefore want to minimize:
#
# $\arg \min_\sigma \sum_{i=1}^{N} \left| t_i - s_{\sigma(i)} \right|$
#
# where $\sigma$ is a set of $N$ ski pairs among $M$ (an [arrangement](http://fr.wikipedia.org/wiki/Arrangement), to be more precise).
#
# At first sight, the solution has to be searched for among all arrangements of $N$ pairs out of $M$. But if we order the pairs and the skiers by increasing size, $t_1 \leqslant t_2 \leqslant ... \leqslant t_N$ (skier heights) and $s_1 \leqslant s_2 \leqslant ... \leqslant s_M$ (ski sizes), then solving the problem amounts to taking the skiers in increasing order and matching each one with a pair in the order the pairs come. It is as if we inserted gaps into the sequence of skiers without changing their order:
#
# $\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline t_1 & & t_2 & t_3 & & & t_4 & ... & t_{N-1} & & t_{N} & \\ \hline s_1 & s_2 & s_3 & s_4 & s_5 & s_6 & s_7 & ... & s_{M-3} & s_{M-2} & s_{M-1} & s_M \\ \hline \end{array}$

# ## Optional exercise
#
# First prove that the algorithm suggested above does yield the optimal solution.

# ## Exercise 5
#
# After sorting the skiers and the pairs by increasing size, define:
#
# $p(n,m) = \sum_{i=1}^{n} \left| t_i - s_{\sigma_m^*(i)} \right|$
#
# where $\sigma_m^*$ is the best possible choice of $n$ ski pairs among the first $m$. Express $p(n,m)$ as a recurrence (in terms of $p(n,m-1)$ and $p(n-1,m-1)$). We assume a skier without a ski pair corresponds to the case where the pair has size zero.

# ## Exercise 6
#
# Write a function that computes the error of the optimal distribution. Skiers and pairs of random sizes can be used, for instance.

import random
skieurs = [random.gauss(1.75, 0.1) for i in range(0, 10)]
paires = [random.gauss(1.75, 0.1) for i in range(0, 15)]
skieurs.sort()
paires.sort()
print(skieurs)
print(paires)

# ## Exercise 7
#
# What is the best distribution of skis to skiers?

# ## Exercise 8
#
# What are the costs of the two algorithms (shortest path and skis)?

# ## Going further: degrees of separation on Facebook
#
# The shortest path in a graph is one of the best-known algorithms in programming. It finds the solution at a **polynomial** cost - each iteration is in $O(n^2)$. Dynamic programming characterizes the move from a combinatorial view to a recursive understanding of the same problem. In the shortest-path case, the combinatorial approach consists of enumerating every path in the graph. The dynamic approach consists of showing that this first combinatorial approach leads to a very redundant computation. Let $e(v,w)$ denote the matrix of road lengths, with $e(v,w) = \infty$ if there is no road between cities $v$ and $w$. We assume $e(v,w)=e(w,v)$. The array ``d`` is built iteratively and recursively as follows:
#
# **Step 0**
#
# $d(v) = \infty, \, \forall v \in V$
#
# **Step $n$**
#
# $d(v) = \left \{ \begin{array}{ll} 0 & \text{if} \; v = v_0 \\ \min \{ d(w) + e(v,w) \, | \, w \in V \} & \text{otherwise} \end{array} \right.$ where $v_0 =$ ``'Charleville-Mezieres'``
#
# As long as step $n$ keeps making updates ($\sum_v d(v)$ decreases), step $n$ is repeated. The same algorithm can be applied to determine the [degree of separation](http://www.atlantico.fr/decryptage/theorie-six-degres-separation-relations-entre-individus-facebook-nombre-amis-229803.html) in a social network. It applies almost as-is, provided we define what a city and a distance between cities mean in this new graph. You can test your ideas on this example graph: [Social circles: Facebook](http://snap.stanford.edu/data/egonets-Facebook.html). [Dijkstra's algorithm](http://fr.wikipedia.org/wiki/Algorithme_de_Dijkstra) computes the shortest path between two nodes of a graph; the [Bellman-Ford algorithm](http://fr.wikipedia.org/wiki/Algorithme_de_Bellman-Ford) is a variant that computes all shortest-path distances between two nodes of a graph.

import pyensae
files = pyensae.download_data("facebook.tar.gz", website="http://snap.stanford.edu/data/")
fe = [f for f in files if "edge" in f]
fe

# This file must be decompressed with [7zip](http://www.7-zip.org/) if you use ``pysense < 0.8``. On Linux (and Mac), you will need a command described here: [tar](http://doc.ubuntu-fr.org/tar).

import pandas
df = pandas.read_csv("facebook/1912.edges", sep=" ", names=["v1", "v2"])
print(df.shape)
df.head()
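A possible solution sketch for exercises 3 and 4, using the relaxation step described in the last section on a small hand-made distance dictionary (the real ``matrix_distance_7398.txt`` data is not assumed here):

```python
# Toy distance dict in the format of exercise 2: {(a, b): d, (b, a): d}
dist = {("A", "M"): 2.0, ("M", "A"): 2.0,
        ("M", "B"): 3.0, ("B", "M"): 3.0,
        ("A", "B"): 9.0, ("B", "A"): 9.0}

def shortest(dist, start, end):
    """Bellman-Ford-style relaxation: repeat step n until nothing improves."""
    villes = {v for pair in dist for v in pair}
    d = {v: float("inf") for v in villes}
    d[start] = 0.0
    updated = True
    while updated:
        updated = False
        for (v, w), e in dist.items():
            if d[v] + e < d[w]:        # the inequality from exercise 4
                d[w] = d[v] + e
                updated = True
    return d[end]

# shortest(dist, "A", "B") finds A -> M -> B (5.0), beating the direct 9.0 road
```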
_doc/notebooks/td1a_algo/td1a_cenonce_session7.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Drawing a grouped boxplot is as simple as setting y as the numerical variable, x as the group column, and hue as the subgroup column. # + # libraries & dataset import seaborn as sns import matplotlib.pyplot as plt sns.set(style="darkgrid") df = sns.load_dataset('tips') sns.boxplot(x="day", y="total_bill", hue="smoker", data=df, palette="Set1", width=0.5) plt.show()
src/notebooks/34-grouped-boxplot.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # > This is one of the 100 recipes of the [IPython Cookbook](http://ipython-books.github.io/), the definitive guide to high-performance scientific computing and data science in Python. # # # 15.2. Solving equations and inequalities from sympy import * init_printing() var('x y z a') # Use the function solve to resolve equations (the right hand side is always 0). solve(x**2 - a, x) # You can also solve inequations. You may need to specify the domain of your variables. Here, we tell SymPy that x is a real variable. x = Symbol('x') solve_univariate_inequality(x**2 > 4, x) # ## Systems of equations # This function also accepts systems of equations (here a linear system). solve([x + 2*y + 1, x - 3*y - 2], x, y) # Non-linear systems are also supported. solve([x**2 + y**2 - 1, x**2 - y**2 - S(1)/2], x, y) # Singular linear systems can also be solved (here, there are infinitely many equations because the two equations are colinear). solve([x + 2*y + 1, -x - 2*y - 1], x, y) # Now, let's solve a linear system using matrices with symbolic variables. var('a b c d u v') # We create the augmented matrix, which is the horizontal concatenation of the system's matrix with the linear coefficients, and the right-hand side vector. M = Matrix([[a, b, u], [c, d, v]]); M solve_linear_system(M, x, y) # This system needs to be non-singular to have a unique solution, which is equivalent to say that the determinant of the system's matrix needs to be non-zero (otherwise the denominators in the fractions above are equal to zero). det(M[:2,:2]) # > You'll find all the explanations, figures, references, and much more in the book (to be released later this summer). 
# # > [IPython Cookbook](http://ipython-books.github.io/), by [<NAME>](http://cyrille.rossant.net), Packt Publishing, 2014 (500 pages).
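One way to sanity-check `solve`'s output is to substitute each root back into the expression and confirm the residual is zero; a small sketch (not from the book):

```python
from sympy import symbols, solve

x = symbols("x")
roots = solve(x**2 - 4, x)   # the right-hand side is implicitly 0
# Substituting each candidate back should leave a zero residual
residuals = [(x**2 - 4).subs(x, r) for r in roots]
```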
notebooks/chapter15_symbolic/02_solvers.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/LauraOwensKY/MachineLearning-WWCodeDataScience/blob/master/LBO_3_Classification_Trees.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="uODjEkLUOgad" colab_type="text" # In this notebook we will build a classification model using DecisionTrees and Random forest classifier from python's scikit learn library # + [markdown] id="UHjqNTv5OgbA" colab_type="text" # ## Table of contents # 1. Data Loading # 2. Data Exploration # 3. Visualization # 4. Preprocessing # 5. Decision Trees and hyperparameter analysis # 5. Random Forest # 6. Model comparision using ROC curve # + [markdown] id="LryMIXQ1OgbN" colab_type="text" # ## Loading Data # + [markdown] id="RbAYZlLbOgbb" colab_type="text" # In this section we will import all the necessary packages and load the datasets we plan to work on. 
We will use the # <a href='https://www.kaggle.com/jessemostipak/hotel-booking-demand'> Hotel booking data </a> and build a model to determine which customers will cancel their hotel booking # + id="DSv0H0C8Ogbo" colab_type="code" colab={} import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from pylab import * from matplotlib.legend_handler import HandlerLine2D from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.naive_bayes import GaussianNB from sklearn.model_selection import train_test_split from sklearn import metrics from sklearn.preprocessing import LabelEncoder from sklearn.metrics import recall_score, precision_score, accuracy_score, f1_score from sklearn.metrics import confusion_matrix,auc,roc_auc_score,roc_curve import warnings warnings.filterwarnings('ignore') # + id="8qxT3DwyOgc2" colab_type="code" colab={} # Load the data # file_path = 'C:\Users\Tejal\Documents\Tejal\WWC-siliconvalley\hotel_bookings.csv' file_path='https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2020/2020-02-11/hotels.csv' df = pd.read_csv(file_path) # + [markdown] id="dzNv-lKnOgdy" colab_type="text" # ## Explore the dataset # + [markdown] id="MMaiA3WbOgd6" colab_type="text" # Understanding the data, its features, and distribution is a major part of building ML models.
# + id="OnH9DHKNOgeA" colab_type="code" outputId="90f4f302-e103-4482-ea38-3eb4ecd95a6f" colab={} df.head() # + id="t1ziMoBQOge8" colab_type="code" outputId="8de45ec2-4a1d-4ee9-b93e-04fec1272f06" colab={} # Data has 119390 rows (data points) and 32 columns (features) df.shape # + id="fFHcBx5TOgfm" colab_type="code" outputId="f0cdec1f-216d-4187-a72c-1c609f7ccc7c" colab={} # Check the datatype of features df.dtypes # + id="xeNXOifbOggR" colab_type="code" outputId="96450333-66ec-4330-c642-2fcc43465bc0" colab={} # Feature list df.columns # + id="PagJhhffOgg2" colab_type="code" outputId="18814066-165c-478c-b5e7-cbd6d107e60c" colab={} # Check for null values percent_missing = df.isnull().sum() * 100 / len(df) missing_value_df = pd.DataFrame({'column_name': df.columns, 'percent_missing': percent_missing}) missing_value_df.sort_values('percent_missing', ascending=False,inplace=True) missing_value_df # + [markdown] id="9wIlxUliOghh" colab_type="text" # Company, agent, country and children have null values. There are multiple techniques for imputing null value but for simplicity we impute them with 0. As company has a very high null value percentage we will drop the column # + id="0hhSOdZhOghq" colab_type="code" colab={} # Let us create a copy of dataframe for backup and impute null with 0 backup_df=df.copy df = df.drop('company',axis=1) df=df.fillna(0) # + id="iwI3jtt0OgiQ" colab_type="code" outputId="6b46ea7d-cc4e-4397-bf44-378c18eb5a1e" colab={} # The df has no Null values (df['agent'].isnull().sum()/len(df)) * 100 # + [markdown] id="mFn2DYAoOgi3" colab_type="text" # ## Data Visualization # + [markdown] id="TIjq-4-oOgi-" colab_type="text" # In this task, our target variable is is_cancelled which indicates if the booking was cancelled. 
1 --> canceled, 0 --> Not canceled # + id="RLijg9eqOgjG" colab_type="code" outputId="8b0a9d56-2434-4c13-928e-2d39fc257d07" colab={} df['is_canceled'].value_counts().plot(kind='pie',autopct='%1.1f%%') # + [markdown] id="19WOIcqmOgju" colab_type="text" # 37% customers have cancelled their bookings. we see that our data in imbalanced # + id="yX9WO4NnOgj3" colab_type="code" outputId="1cdba227-f7b7-4a2c-f7e0-4eb012329f54" colab={} df.columns # + id="_ykeSyiJOgkl" colab_type="code" outputId="56289a1b-d3d8-4f08-afe4-22f105472254" colab={} # Hotel feature count and distribution across 0 and 1 class df['hotel'].value_counts().plot(kind='pie',autopct='%1.1f%%') # + id="-lhvIyYNOglC" colab_type="code" outputId="2fbcf2ef-7c17-4f6d-9b77-1316b0dfc6fc" colab={} sns.countplot(x='is_canceled',hue='hotel',data=df) # + [markdown] id="mXTHBZ9lOglu" colab_type="text" # As data has higher city hotel reservation data points compared to resort, above observation is on par with same trend # + id="oi8SO_iQOgly" colab_type="code" outputId="7958c389-8c67-444c-fea0-e0ef1df8960f" colab={} #market segments df.groupby(['market_segment'])['is_canceled'].count().plot(kind='bar') # + [markdown] id="xd54RuNcOgmN" colab_type="text" # ## Feature Engineering # + [markdown] id="6UVLzBwZOgmS" colab_type="text" # 1. Derive new features using existing features # 2. Remove irrelevant features # 3. Transform existing features # 4. 
Encoding categorical variables # + id="yEJWEh0fOgmW" colab_type="code" colab={} # Split data into train validation & test set in train:val:test=60:20:20 size # We are splitting the data into 3 chunks as we will be tuning many hyperparameters in this notebook train, val_test = train_test_split(df, test_size=0.4, random_state = 42) val, test = train_test_split(val_test, test_size=0.5, random_state = 42) # + id="uX6vkhXfOgm1" colab_type="code" outputId="3eab87dd-0755-4440-dc17-b0afeb61d7da" colab={} train.shape # + id="GJYpc1VBOgnl" colab_type="code" outputId="128447a1-26f4-4e83-e4a6-33461d30f5a6" colab={} test.shape # + id="-iF26WMZOgn9" colab_type="code" outputId="7f0dee09-6321-47c8-9dd0-35d2d90eebe5" colab={} val.shape # + id="vZlxKXQoOgom" colab_type="code" colab={} #Let us add weekend stay and weekday stay days to get total days of stay train['total_days'] = train['stays_in_week_nights'] + train['stays_in_weekend_nights'] test['total_days'] = test['stays_in_week_nights'] + test['stays_in_weekend_nights'] val['total_days'] = val['stays_in_week_nights'] + val['stays_in_weekend_nights'] # drop the weekend stay and weekday stay days features train = train.drop('stays_in_week_nights',axis=1).drop('stays_in_weekend_nights',axis=1) test = test.drop('stays_in_week_nights',axis=1).drop('stays_in_weekend_nights',axis=1) val = val.drop('stays_in_week_nights',axis=1).drop('stays_in_weekend_nights',axis=1) # + id="E3JsdPE2OgpB" colab_type="code" outputId="3ab38a23-47e1-443b-938f-706bd92482aa" colab={} train_0=train[(train['is_canceled']==0)] train_1=train[train['is_canceled']==1] sns.set(rc={"figure.figsize": (20, 20)}) plt.subplot(2,2,1) ax = sns.distplot(train_0['total_days'], bins=100, color='r') plt.subplot(2,2,2) ax=sns.distplot(train_1['total_days'], bins=100, color='g') # + id="16HyGP7ZOgp-" colab_type="code" colab={} #Total customers train['total_customers'] = train['adults'] + train['children']+train['babies'] test['total_customers'] = test['adults'] +
test['children']+test['babies'] val['total_customers'] = val['adults'] + val['children']+val['babies'] train = train.drop('adults',axis=1).drop('children',axis=1).drop('babies',axis=1) test = test.drop('adults',axis=1).drop('children',axis=1).drop('babies',axis=1) val = val.drop('adults',axis=1).drop('children',axis=1).drop('babies',axis=1) # + id="cTCtTjLTOgqg" colab_type="code" outputId="02e9a007-843e-4033-dd3a-30e62772d422" colab={} train['total_customers'].value_counts().plot(kind='bar',figsize=(5,5)) # + id="PJa28ERlOgrB" colab_type="code" colab={} train = train.drop(['reservation_status_date'],axis=1) test = test.drop(['reservation_status_date'],axis=1) val = val.drop(['reservation_status_date'],axis=1) # + id="YxcMVzScOgrV" colab_type="code" outputId="1a09691f-74c0-49c9-8b9a-093754d5e11b" colab={} print (len(train['agent'].unique())) # 309 unique values - Large number of unique agents and it is categorical, difficult to encode train = train.drop('agent',axis=1) test = test.drop('agent',axis=1) val = val.drop('agent',axis=1) # + id="tG-qqn6fOgr0" colab_type="code" outputId="36eb3caf-e437-41ba-cab0-b9b91f461588" colab={} print(len(train['country'].unique())) # 160 countries train = train.drop('country',axis=1) test = test.drop('country',axis=1) val = val.drop('country',axis=1) # + id="escoCJhkOgsH" colab_type="code" outputId="1c2745e9-f4e8-4cef-86fd-7856a894afb7" colab={} train.hist(column='previous_bookings_not_canceled',bins=20,figsize=(10,5)) # + id="dROOF3aUOgsf" colab_type="code" colab={} #train['previous_bookings_not_canceled'].value_counts() # We observe that most data has value = 0; hence we drop the feature #train.groupby(['is_canceled'])['previous_bookings_not_canceled'].value_counts() # We observe that the data distribution across both classes remains the same train = train.drop('previous_bookings_not_canceled',axis=1) test = test.drop('previous_bookings_not_canceled',axis=1) val = val.drop('previous_bookings_not_canceled',axis=1) # + id="rXAAKOhJOgs7"
colab_type="code" outputId="79d8c8ef-83ec-41d9-c0c3-bd79679ea23c" colab={} train.groupby(['is_canceled'])['previous_cancellations'].value_counts().plot(kind='bar',figsize=(5,5)) # We observe that most data has value = 0; and the trend remains the same across the 2 classes train = train.drop('previous_cancellations',axis=1) test = test.drop('previous_cancellations',axis=1) val = val.drop('previous_cancellations',axis=1) # + id="FFr16GFuOgtS" colab_type="code" outputId="a2def9ea-64e4-4daa-baf4-b3961da3365e" colab={} len(train.columns) # + [markdown] id="Smy8YmfYOgtm" colab_type="text" # ## Feature Correlation # + id="IN7615hEOgtp" colab_type="code" colab={} backup_train = train.copy() backup_test = test.copy() backup_val = val.copy() # + id="enYVC79pOgt7" colab_type="code" colab={} #Custom encoding train['arrival_date_month'] = train['arrival_date_month'].map({'January':1, 'February': 2, 'March':3, \ 'April':4, 'May':5, 'June':6, 'July':7,\ 'August':8, 'September':9, 'October':10, \ 'November':11, 'December':12}) # + id="sUMd35yDOguL" colab_type="code" colab={} encode = LabelEncoder() # + id="wfFTfIYGOguk" colab_type="code" outputId="e0db9bff-b5bc-4d0c-81bb-f60cb1b91b83" colab={} train.columns # + id="dtth6g1YOgu1" colab_type="code" outputId="368515ea-5671-4fd5-a271-a5c1cb916225" colab={} train['market_segment'].unique() # + id="q1y4dkBfOgvI" colab_type="code" colab={} cat_col=['hotel','arrival_date_year','meal','market_segment','distribution_channel','reserved_room_type', 'assigned_room_type',\ 'deposit_type','customer_type','reservation_status'] for i in cat_col: train[i] = encode.fit_transform(train[i]) # + id="3zmFWqy1Ogva" colab_type="code" outputId="526f3e63-b5fa-4aaf-d446-de18d6c5faad" colab={} train['market_segment'].unique() # + id="XRlAH88cOgvr" colab_type="code" outputId="a6437890-496e-4d06-ea55-4aef5b7f16bd" colab={} train.head() # + [markdown] id="w-TcrKJpOgv6" colab_type="text" # ### Feature correlation # <b>Spearman</b> and <b>Pearson</b> are the 2 statistical
methods to compute the correlation between features. # - Pearson is the suggested method for features with continuous values and a linear relationship # - Spearman is the suggested method when features have ordinal categorical data or a non-linear relationship # <br>Pandas' correlation method uses the Pearson method by default, but we can also change it to Spearman </br> # + id="5N6rvz6ZOgv8" colab_type="code" outputId="025c2d34-da61-4f24-bb5c-b67937e30051" colab={} train.corr() # + id="SDghCT4uOgwK" colab_type="code" outputId="4379eada-d23d-41ee-c5a6-87d8c9d47398" colab={} feat_corr = train.corr() feat_corr['is_repeated_guest'].sort_values() # + id="uXmBohfgOgwY" colab_type="code" outputId="996b5d5a-e086-46dc-b87f-01a92f39ee6d" colab={} plt.figure(figsize=(8,6)) sns.heatmap(feat_corr) # + [markdown] id="gEGZZcnDOgxB" colab_type="text" # The diagonal shows the correlation of each feature with itself, hence indicates the highest correlation. # Using the table and plot we observe that a few features have very high correlation # Ex:- # 1. Arrival_date_week_number and arrival_date_month = 0.99 # 2. reserved_room_type vs assigned_room_type = 0.81 # + id="8NSml9bROgxF" colab_type="code" outputId="d0b1038c-e239-4215-9a8a-711b18d22f91" colab={} feat_corr['is_canceled'].sort_values() # + [markdown] id="uOqwUHUOOgxS" colab_type="text" # The reservation_status feature has high correlation with is_canceled. In the Naive Bayes session, we saw that removing the reservation_status feature caused the model performance to drop considerably. Let's see how it affects trees # + [markdown] id="PcEhsNUgOgxW" colab_type="text" # ## Implementing Decision Tree # + [markdown] id="iw1IAFbMOgxa" colab_type="text" # There are various decision tree algorithms like ID3, C4.5, C5.0 and CART. Scikit-learn implements an optimized version of the CART algorithm.
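# As a quick illustration of the Pearson/Spearman distinction described above (a standalone sketch on toy data, not the hotel dataframe): a monotonic but non-linear relationship gives a perfect Spearman correlation while the Pearson correlation stays below 1. `method='spearman'` is the pandas switch mentioned above.

```python
import pandas as pd

# Toy data: y = x**3 is monotonic in x but not linear
toy = pd.DataFrame({'x': [1, 2, 3, 4, 5]})
toy['y'] = toy['x'] ** 3

pearson = toy.corr()                     # default method='pearson'
spearman = toy.corr(method='spearman')   # rank-based correlation

print(pearson.loc['x', 'y'])   # below 1.0: the relationship is not linear
print(spearman.loc['x', 'y'])  # exactly 1.0: the ranks agree perfectly
```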
We have multiple hyperparameters in a decision tree, see the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html">sklearn documentation</a> # <br> <br> # We will try to see the effect of the following hyperparameters on modelling - # 1. criterion - {gini and entropy} # 2. max_depth # 3. class_weight # + [markdown] id="I66vuCfXOgxc" colab_type="text" # ### Model 1 - # Default hyperparameters -- Gini criterion, no class weight and no pruning # + id="dR67Lvd4Ogxf" colab_type="code" colab={} y_train = backup_train["is_canceled"] X_train = backup_train.drop(["is_canceled"], axis=1) y_val = backup_val["is_canceled"] X_val = backup_val.drop(["is_canceled"], axis=1) # + [markdown] id="md-QQZ8uOgxw" colab_type="text" # <b>Encoding categorical features</b> # + [markdown] id="GoZYvHqNOgxy" colab_type="text" # Scikit-learn's decision tree and random forest implementations cannot handle string values, so we need to encode the categorical values to convert them to numeric values. However, a few other tools like R, Spark and Weka have decision trees that can handle string feature values <br><br> # We use one-hot encoding instead of a label encoder to avoid the illusion of continuous values for categorical features # + id="TVp3Vr5qOgx1" colab_type="code" colab={} cat_cols=['hotel','arrival_date_month','arrival_date_year','meal','market_segment','distribution_channel','reserved_room_type', 'assigned_room_type',\ 'deposit_type','customer_type','reservation_status'] X_train_enc = pd.get_dummies(data=X_train,columns=cat_cols) X_val_enc = pd.get_dummies(data=X_val,columns=cat_cols) X_train_enc,X_val_enc =X_train_enc.align(X_val_enc, join='left', axis=1) X_val_enc=X_val_enc.fillna(0) # + id="4kUpkSVdOgyB" colab_type="code" outputId="9ba84059-4388-4d7b-8b13-f7bbfbf7e63c" colab={} X_train_enc.head() # + id="VFspdfQDOgyR" colab_type="code" outputId="7c6e5084-7a65-40c2-e681-473bbacc9b68" colab={} clf = DecisionTreeClassifier(random_state = 0) clf.fit(X_train_enc,
y_train) # + id="N0a2OGCzOgyk" colab_type="code" colab={} y_pred = clf.predict(X_val_enc) y_prob = clf.predict_proba(X_val_enc) # + id="rPXwgJ5SOgy2" colab_type="code" outputId="60787cc6-04ec-4da8-97ad-39496461be1e" colab={} y_pred[:10] # + [markdown] id="iRGk2HcUOgzE" colab_type="text" # ## Evaluation metric # + [markdown] id="zO3z-l4iOgzG" colab_type="text" # <b> Precision and Recall </b> # + [markdown] id="gqMwoeMWOgzK" colab_type="text" # <img src="https://github.com/WomenWhoCode/WWCodeDataScience/blob/master/Intro_to_MachineLearning/img/PR%20diagram1.PNG?raw=1" width="200"/> # + [markdown] id="2qXpDu20OgzN" colab_type="text" # <img src="https://github.com/WomenWhoCode/WWCodeDataScience/blob/master/Intro_to_MachineLearning/img/PR%20diagram%202.PNG?raw=1" width="400"/> # + [markdown] id="EwbkSgtyOgzR" colab_type="text" # <br><b> Confusion Matrix </b> # + [markdown] id="RoyXqYUxOgzU" colab_type="text" # <img src="https://github.com/WomenWhoCode/WWCodeDataScience/blob/master/Intro_to_MachineLearning/img/Confusion%20matrix.PNG?raw=1" width="200"/> # + id="pvTWfOdqOgzW" colab_type="code" outputId="3021bb2d-6b2e-47bd-e93d-d8b16940f497" colab={} print('validation-set confusion matrix:\n', confusion_matrix(y_val,y_pred)) print("recall score: ", recall_score(y_val,y_pred)) print("precision score: ", precision_score(y_val,y_pred)) print("f1 score: ", f1_score(y_val,y_pred)) print("accuracy score: ", accuracy_score(y_val,y_pred)) # + [markdown] id="cIOTx078Ogzl" colab_type="text" # Feature Importance # + id="MHVI3KQfOgzn" colab_type="code" outputId="837a59fc-e7a7-4930-bbec-96693f783578" colab={} d = pd.DataFrame( {'Features': list(X_train_enc.columns), 'Importance': clf.feature_importances_ }) d.sort_values(by=['Importance'],ascending=False)[:5] # + [markdown] id="PkTkDyIkOgz7" colab_type="text" # The model performance is perfect, but only 1 feature has been used in the model, hence we should remove that feature to avoid data leakage # + [markdown] id="GwCuN3jFOgz-"
colab_type="text" # ### Model 2 - # Remove the feature that is highly correlated with the target feature # <br> # <b>Reservation_status</b> has high correlation with is_canceled. Looking at the values in the column reveals that canceled is a reservation status. This might be causing data leakage. Hence we will delete this feature and train with default hyperparameters # + id="3h8vnF-xOg0D" colab_type="code" outputId="91097e4e-dcc6-4ae3-9328-67a5a77c4706" colab={} backup_train['reservation_status'].unique() # + id="AmD01Eo0Og0Y" colab_type="code" colab={} X_train = X_train.drop('reservation_status',axis=1) X_val = X_val.drop('reservation_status',axis=1) # + id="rDNgjvNJOg0k" colab_type="code" colab={} cat_cols=['hotel','arrival_date_month','arrival_date_year','meal','market_segment','distribution_channel','reserved_room_type', 'assigned_room_type',\ 'deposit_type','customer_type'] X_train_enc = pd.get_dummies(data=X_train,columns=cat_cols,drop_first=True) X_val_enc = pd.get_dummies(data=X_val,columns=cat_cols,drop_first=True) X_train_enc,X_val_enc =X_train_enc.align(X_val_enc, join='left', axis=1) X_val_enc=X_val_enc.fillna(0) # + id="yeuDe9jpOg0u" colab_type="code" outputId="6a50992d-e199-439c-dddd-f834cf1efa09" colab={} clf2 = DecisionTreeClassifier(random_state = 0) clf2.fit(X_train_enc, y_train) # + id="-t6DpvW-Og07" colab_type="code" colab={} y_pred2 = clf2.predict(X_val_enc) y_prob2 = clf2.predict_proba(X_val_enc) # + id="7ZnSsCN6Og1H" colab_type="code" outputId="786bd4c0-89b2-4b0a-f778-a2ea4447617c" colab={} y_prob2[:10] # + id="N4aGi64GOg1U" colab_type="code" outputId="a2029f2e-e464-48cf-abe6-3cb82a70fcc1" colab={} print('validation-set confusion matrix:\n', confusion_matrix(y_val,y_pred2)) print("recall score: ", recall_score(y_val,y_pred2)) print("precision score: ", precision_score(y_val,y_pred2)) print("f1 score: ", f1_score(y_val,y_pred2)) print("accuracy score: ", accuracy_score(y_val,y_pred2)) # + id="AozTYQBxOg1g" colab_type="code"
outputId="c5844597-51d2-4473-d5a1-336283ebfb2d" colab={} d = pd.DataFrame( {'Features': list(X_train_enc.columns), 'Importance': clf2.feature_importances_ }) d.sort_values(by=['Importance'],ascending=False)[:15] # + [markdown] id="K-KoaloqOg1s" colab_type="text" # ### Model 3 # Let us remove 1 feature from each correlated feature pair; we will remove the feature with lesser importance # 1. Arrival_date_week_number and arrival_date_month = 0.99 # 2. reserved_room_type vs assigned_room_type = 0.81 # 3. market_segment vs distribution_channel = 0.76 # + id="nfSOGjFtOg1v" colab_type="code" colab={} X_train = X_train.drop('arrival_date_month',axis=1) X_val = X_val.drop('arrival_date_month',axis=1) X_train = X_train.drop('market_segment',axis=1) X_val = X_val.drop('market_segment',axis=1) X_train = X_train.drop('reserved_room_type',axis=1) X_val = X_val.drop('reserved_room_type',axis=1) # + id="fWpBlgRFOg17" colab_type="code" outputId="d1492195-e05e-4098-b37c-8114c935f173" colab={} X_train.dtypes # + id="pLzeqCpcOg2I" colab_type="code" colab={} cat_cols=['hotel','arrival_date_year','meal','distribution_channel','assigned_room_type',\ 'deposit_type','customer_type'] X_train_enc = pd.get_dummies(data=X_train,columns=cat_cols,drop_first=True) X_val_enc = pd.get_dummies(data=X_val,columns=cat_cols,drop_first=True) X_train_enc,X_val_enc =X_train_enc.align(X_val_enc, join='left', axis=1) X_val_enc=X_val_enc.fillna(0) # + id="JPYym6pcOg2X" colab_type="code" outputId="1f4db486-d8fa-4fb5-ef3b-2e06e245a2d3" colab={} clf3 = DecisionTreeClassifier(random_state = 0) clf3.fit(X_train_enc, y_train) y_pred3=clf3.predict(X_val_enc) print("f1 score: ", f1_score(y_val,y_pred3)) # + [markdown] id="NAZsiHgOOg2s" colab_type="text" # #### Training metric # + id="xr5Yq-PbOg2u" colab_type="code" outputId="faf9ef28-9492-45c1-b04d-d565bbce7db8" colab={} y_pred3_train = clf3.predict(X_train_enc) print('train-set confusion matrix:\n', confusion_matrix(y_train,y_pred3_train)) print("f1 score: ",
f1_score(y_train,y_pred3_train)) print("accuracy score: ", accuracy_score(y_train,y_pred3_train)) # + [markdown] id="SomDvldEOg28" colab_type="text" # We see that the model has very low training error but considerably higher validation error. This indicates that the model is overfitted. # + id="ZyeMETvZOg3A" colab_type="code" outputId="2b6a7f32-e057-4189-cda9-65bceaf2fb02" colab={} d = pd.DataFrame( {'Features': list(X_train_enc.columns), 'Importance': clf3.feature_importances_ }) d.sort_values(by=['Importance'],ascending=False)[:15] # + [markdown] id="eQo-Wb1AOg3P" colab_type="text" # ### Model 4 - # $\text{Gini Impurity} = \sum_{k=1}^{c} P_k (1 - P_k)$ <br><br> # $\text{Entropy} = \sum_{k=1}^{c} -P_k \log_2(P_k)$ # + [markdown] id="K_7fkB6oOg3g" colab_type="text" # <img src="https://github.com/WomenWhoCode/WWCodeDataScience/blob/master/Intro_to_MachineLearning/img/Impurity%20criterion.PNG?raw=1" width="400"/> # We see that both criteria follow the same curve, indicating that there is no significant difference between the two.
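# As a quick numeric check of the two impurity formulas above (a standalone sketch, independent of the notebook's dataframes), both measures peak at a 50/50 split, vanish for a pure node, and rank splits the same way:

```python
import numpy as np

def gini(p):
    """Gini impurity for class probabilities p: sum of p_k * (1 - p_k)."""
    p = np.asarray(p)
    return float(np.sum(p * (1 - p)))

def entropy(p):
    """Entropy for class probabilities p: sum of -p_k * log2(p_k)."""
    p = np.asarray(p)
    p = p[p > 0]  # skip zero probabilities to avoid log2(0)
    return float(np.sum(-p * np.log2(p)))

print(gini([0.5, 0.5]), entropy([0.5, 0.5]))  # 0.5 1.0 -- maximum impurity
print(gini([1.0, 0.0]), entropy([1.0, 0.0]))  # 0.0 0.0 -- pure node
print(gini([0.9, 0.1]), entropy([0.9, 0.1]))  # in between for both measures
```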
# + id="_gslKyWvOg3p" colab_type="code" colab={} y_train = backup_train["is_canceled"] X_train = backup_train.drop(["is_canceled"], axis=1).drop(["reservation_status"],axis=1) y_val = backup_val["is_canceled"] X_val = backup_val.drop(["is_canceled"], axis=1).drop(["reservation_status"],axis=1) # + id="7r5CalyfOg31" colab_type="code" colab={} cat_cols=['hotel','arrival_date_month','arrival_date_year','meal','market_segment','distribution_channel','reserved_room_type', 'assigned_room_type',\ 'deposit_type','customer_type'] X_train_enc = pd.get_dummies(data=X_train,columns=cat_cols,drop_first=True) X_val_enc = pd.get_dummies(data=X_val,columns=cat_cols,drop_first=True) X_train_enc,X_val_enc =X_train_enc.align(X_val_enc, join='left', axis=1) X_val_enc=X_val_enc.fillna(0) # + id="crXtAHuMOg4D" colab_type="code" outputId="3aab24aa-7150-40c1-d112-ccf360f857af" colab={} clf4 = DecisionTreeClassifier(criterion="entropy",random_state = 0) clf4.fit(X_train_enc, y_train) # + id="uub3jPMuOg4S" colab_type="code" outputId="aeb9efd1-3eb4-40e1-e4c5-b2f2b4498e67" colab={} y_pred4 = clf4.predict(X_val_enc) y_prob4 = clf4.predict_proba(X_val_enc) print("f1 score: ", f1_score(y_val,y_pred4)) # + [markdown] id="qRXHrSBQOg4e" colab_type="text" # The impurity criterion did not affect our model performance. The feature importances of the 2 models also look similar # + id="95CT2H9IOg4g" colab_type="code" outputId="a4fb0a5e-e27d-443c-be7e-47316cfd4a8d" colab={} d = pd.DataFrame( {'Features': list(X_train_enc.columns), 'Clf2_Importance': clf2.feature_importances_, 'Clf4_Importance': clf4.feature_importances_ }) d.sort_values(by=['Clf4_Importance'],ascending=False)[:15] # + [markdown] id="9YS1e7onOg4t" colab_type="text" # ### Model 5 - # Pruning the model to avoid overfitting <br> # We have multiple hyperparameters such as max_depth, min_samples_split etc. which can be tuned to prune the model.
# + id="QILNZx8bOg4v" colab_type="code" outputId="1d26e46c-7af4-4b8a-ea68-ecb802660d65" colab={} max_depths = np.arange(1, 33) # integer depths 1..32 (max_depth must be an int) train_results = [] test_results = [] for max_depth in max_depths: clf5 = DecisionTreeClassifier(max_depth=max_depth) clf5.fit(X_train_enc, y_train) train_pred = clf5.predict(X_train_enc) f1_score1 = f1_score(y_train,train_pred) train_results.append(f1_score1) y_pred = clf5.predict(X_val_enc) f1_score1 = f1_score(y_val,y_pred) test_results.append(f1_score1) plt.figure(figsize=(10,10)) line1, = plt.plot(max_depths, train_results, 'b', label="Train F1-score") line2, = plt.plot(max_depths, test_results, 'r', label="Val F1-score") plt.legend(handler_map={line1: HandlerLine2D(numpoints=2)}) plt.ylabel('F1-score') plt.xlabel('Tree depth') plt.show() # + [markdown] id="amkQL7FXOg46" colab_type="text" # As the tree depth increases, our training f1-score improves and we eventually get f1-score=1; however, the widening gap between the validation and training curves indicates that the model is unable to generalize well on unseen data, i.e. the model has overfitted # + id="VbUwdeWbOg49" colab_type="code" outputId="1e1de410-ad1b-4dfa-df03-95b793d180d6" colab={} clf5 = DecisionTreeClassifier(criterion="gini",random_state = 0,max_depth=12) clf5.fit(X_train_enc, y_train) y_pred5 = clf5.predict(X_val_enc) y_pred5_train = clf5.predict(X_train_enc) print("Train data f1 score: ", f1_score(y_train,y_pred5_train)) print("Val data f1 score: ", f1_score(y_val,y_pred5)) # + [markdown] id="nHoXlzjwOg5I" colab_type="text" # ### Model 6 # <b> Weighted Decision tree or Cost-sensitive tree </b> <br> # Experiment with class weight. Our data is slightly imbalanced, so try to assign a higher weight to positive samples versus negative samples using the <b>class_weight</b> hyperparameter.
We can assign {class:weight} or “balanced”<br> # # The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)) # </br> # + id="shhJjNlTOg5K" colab_type="code" outputId="c733a470-850d-4a11-d31e-4835240792d9" colab={} clf6 = DecisionTreeClassifier(criterion="gini",random_state = 0,max_depth=12,class_weight={0:1,1:2}) clf6.fit(X_train_enc, y_train) y_pred6 = clf6.predict(X_val_enc) y_pred6_train = clf6.predict(X_train_enc) print("Train data f1 score: ", f1_score(y_train,y_pred6_train)) print("Val data f1 score: ", f1_score(y_val,y_pred6)) # + [markdown] id="HpSGmzA1Og5W" colab_type="text" # By adjusting the class_weight, our validation f1 score has improved by 1.3% # + [markdown] id="MFglM7xiOg5Y" colab_type="text" # Decision trees have many hyperparameters and we can use sklearn's <b>GridSearchCV</b> or <b>RandomizedSearchCV</b> to find the best hyperparameters. <a href="https://scikit-learn.org/stable/modules/grid_search.html">Sklearn documentation</a> # + [markdown] id="GuzdKQGbOg5a" colab_type="text" # ## Ensemble model - Random forest # Random forest is an ensemble of decision trees which aims to improve prediction accuracy while avoiding overfitting. <br> # + [markdown] id="apGP4-rhOg5c" colab_type="text" # In sklearn's random forest implementation, the subsample used to fit each tree is the same size as the actual data but sampled with replacement if the <b>bootstrap</b> hyperparameter is set to True. It has many hyperparameters similar to decision trees.
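# The "balanced" weighting formula quoted earlier, n_samples / (n_classes * np.bincount(y)), can be checked directly against sklearn's own helper. This is a standalone sketch with made-up labels, not the hotel data:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Imbalanced toy labels: 6 negatives, 2 positives
y = np.array([0, 0, 0, 0, 0, 0, 1, 1])

# Manual computation of the "balanced" weights from the formula
manual = len(y) / (len(np.unique(y)) * np.bincount(y))
print(manual)  # the minority class (1) gets the larger weight

# sklearn's helper applies the same formula
auto = compute_class_weight(class_weight='balanced', classes=np.unique(y), y=y)
print(auto)
```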
<a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html">sklearn documentation</a> # + id="BKlnQr20Og5f" colab_type="code" outputId="03e80123-b479-4fe4-9276-2a05a55be000" colab={} X_train.dtypes # + id="Le9Hod9jOg5s" colab_type="code" colab={} y_train = backup_train["is_canceled"] X_train = backup_train.drop(["is_canceled"], axis=1).drop(["reservation_status"],axis=1) y_val = backup_val["is_canceled"] X_val = backup_val.drop(["is_canceled"], axis=1).drop(["reservation_status"],axis=1) # + id="T-JZs7pqOg6E" colab_type="code" colab={} cat_cols=['hotel','arrival_date_month','arrival_date_year','meal','market_segment','distribution_channel','reserved_room_type', 'assigned_room_type',\ 'deposit_type','customer_type'] X_train_enc = pd.get_dummies(data=X_train,columns=cat_cols,drop_first=True) X_val_enc = pd.get_dummies(data=X_val,columns=cat_cols,drop_first=True) X_train_enc,X_val_enc =X_train_enc.align(X_val_enc, join='left', axis=1) X_val_enc=X_val_enc.fillna(0) # + id="HUIREXG1Og6W" colab_type="code" outputId="205f41cb-cc35-44f8-f65a-e125bb44012f" colab={} rf1=RandomForestClassifier() rf1.fit(X_train_enc,y_train) y_pred_rf1=rf1.predict(X_val_enc) y_pred_rf1_train=rf1.predict(X_train_enc) print("Train data f1 score: ", f1_score(y_train,y_pred_rf1_train)) print("Val data f1 score: ", f1_score(y_val,y_pred_rf1)) # + id="piBZg_3nOg6i" colab_type="code" outputId="37dd296a-6014-4ae1-939a-a2695ac80ea1" colab={} rf1.get_params() # + [markdown] id="R8EVN62mOg6w" colab_type="text" # With default settings, we see that the model has better performance on the validation data, but the training data f1 score is very high, signifying overfitting. # + [markdown] id="S72GdsQ8Og6z" colab_type="text" # ### Model 2 # 1. Using optimal max_depth # 2.
Class_weight = "balanced_subsample" - Same as balanced but the ratio is computed for each tree based on the bootstrapped data considered # + id="STJfbxblOg61" colab_type="code" outputId="a8268121-0f36-4e2b-bd40-abe3aeaa8c05" colab={} max_depths = np.arange(1, 33) # integer depths 1..32 (max_depth must be an int) train_results = [] test_results = [] for max_depth in max_depths: rf2=RandomForestClassifier(max_depth=max_depth,n_estimators=30) rf2.fit(X_train_enc, y_train) train_pred = rf2.predict(X_train_enc) f1_score1 = f1_score(y_train,train_pred) train_results.append(f1_score1) y_pred = rf2.predict(X_val_enc) f1_score1 = f1_score(y_val,y_pred) test_results.append(f1_score1) plt.figure(figsize=(10,10)) line1, = plt.plot(max_depths, train_results, 'b', label="Train F1-score") line2, = plt.plot(max_depths, test_results, 'r', label="Val F1-score") plt.legend(handler_map={line1: HandlerLine2D(numpoints=2)}) plt.ylabel('F1-score') plt.xlabel('Tree depth') plt.show() # + id="c2xG8j-ZOg7C" colab_type="code" outputId="fa6b5876-0f7a-428a-fb2b-08d9aaff0d41" colab={} rf2=RandomForestClassifier(max_depth=15,class_weight="balanced_subsample") rf2.fit(X_train_enc,y_train) y_pred_rf2=rf2.predict(X_val_enc) y_pred_rf2_train=rf2.predict(X_train_enc) print("Train data f1 score: ", f1_score(y_train,y_pred_rf2_train)) print("Val data f1 score: ", f1_score(y_val,y_pred_rf2)) # + [markdown] id="HL2hRNgQOg7N" colab_type="text" # ### Model 3 # 1.
Increase number of estimators to 80; It increases the training time # + id="1p-q93HAOg7P" colab_type="code" outputId="96c13f8e-4dc3-4adf-d7c0-414c0350a44a" colab={} rf3=RandomForestClassifier(max_depth=17,class_weight="balanced_subsample",n_estimators=80) rf3.fit(X_train_enc,y_train) y_pred_rf3=rf3.predict(X_val_enc) y_pred_rf3_train=rf3.predict(X_train_enc) print("Train data f1 score: ", f1_score(y_train,y_pred_rf3_train)) print("Val data f1 score: ", f1_score(y_val,y_pred_rf3)) # + [markdown] id="UkhN8GUgOg7b" colab_type="text" # ## ROC curve # We will compare the best versions of the 2 classifiers we have trained so far in classification session - # 1. Decision tree model # 2. Random forest # + [markdown] id="S-NxTJPsOg7d" colab_type="text" # Decision Tree # + id="UPw0ytrFOg7f" colab_type="code" colab={} y_test = backup_test["is_canceled"] X_test = backup_test.drop(["is_canceled"], axis=1).drop('reservation_status',axis=1) cat_cols=['hotel','arrival_date_month','arrival_date_year','meal','market_segment','distribution_channel','reserved_room_type', 'assigned_room_type',\ 'deposit_type','customer_type'] X_test_enc = pd.get_dummies(data=X_test,columns=cat_cols,drop_first=True) X_train_enc,X_test_enc =X_train_enc.align(X_test_enc, join='left', axis=1) X_test_enc=X_test_enc.fillna(0) # + id="8YePy6mVOg7q" colab_type="code" colab={} y_prob6 = clf6.predict_proba(X_test_enc) false_positive_rateDT_6, true_positive_rateDT_6, thresholdDT_6 = roc_curve(y_test, y_prob6[:,1]) roc_aucDT_6 = auc(false_positive_rateDT_6, true_positive_rateDT_6) # + [markdown] id="hKKWTtYnOg72" colab_type="text" # Random Forest # + id="9l5o35LLOg75" colab_type="code" colab={} y_test = backup_test["is_canceled"] X_test = backup_test.drop(["is_canceled"], axis=1).drop('reservation_status',axis=1) cat_cols=['hotel','arrival_date_month','arrival_date_year','meal','market_segment','distribution_channel','reserved_room_type', 'assigned_room_type',\ 'deposit_type','customer_type'] X_test_enc = 
pd.get_dummies(data=X_test,columns=cat_cols,drop_first=True) X_train_enc,X_test_enc =X_train_enc.align(X_test_enc, join='left', axis=1) X_test_enc=X_test_enc.fillna(0) # + id="ijCLbCf9Og8Z" colab_type="code" colab={} y_prob_rf = rf3.predict_proba(X_test_enc) false_positive_rateRF, true_positive_rateRF, thresholdRF = roc_curve(y_test, y_prob_rf[:,1]) roc_aucRF = auc(false_positive_rateRF, true_positive_rateRF) # + id="v_90dTgpOg8j" colab_type="code" outputId="291e1259-6077-4e14-fe5e-375e60032c94" colab={} plt.figure(figsize = (10,10)) plt.title('Receiver Operating Characteristic') plt.plot(false_positive_rateDT_6, true_positive_rateDT_6, color = 'red', label = 'DT AUC = %0.2f' % roc_aucDT_6) plt.plot(false_positive_rateRF, true_positive_rateRF, color = 'green', label = 'RF AUC = %0.2f' % roc_aucRF) plt.legend(loc = 'lower right') plt.plot([0, 1], [0, 1], linestyle = '--') plt.axis('tight') plt.ylabel('True Positive Rate') plt.xlabel('False Positive Rate') # + [markdown] id="zZ5F1roYOg8u" colab_type="text" # We see that Random forest has the better performance among the 2 models # + id="i1Z9fyxlOg8x" colab_type="code" colab={}
LBO_3_Classification_Trees.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import psycopg2 from getpass import getpass # defaults: host='localhost', port='5432' pg_conn=psycopg2.connect(host='localhost', port='5432', dbname='rpg_data', user='postgres', password=getpass()) # - pg_cur = pg_conn.cursor() pg_cur.execute("CREATE TABLE test (id serial PRIMARY KEY, num integer, data varchar);") pg_cur.execute("INSERT INTO test (num, data) VALUES (%s, %s)", (100, "abc'def")) pg_cur.execute("SELECT * FROM test;") pg_cur.fetchall() # + import pandas as pd pd.read_sql("SELECT * FROM test;", con=pg_conn)
10-SQL-and-Databases/Connect to Local PostgreSQL Server.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.10.4 64-bit # language: python # name: python3 # --- # # The `Environment` Class # The `Environment` class forms the basis of the simulation setup in `particula`; it is in `particula/environment.py`. In summary, we use it to *describe* the environment in which particles and condensing vapors exist and interact. For now, we define an environment by properties (via attributes) such as temperature and pressure with derived properties (via methods) such as the dynamic viscosity and mean free path of the medium gas. # # The `Environment` class can be imported from `particula.environment`, from particula.environment import Environment # and then it can be initiated with (some of) the following attributes. # # ## `Environment` attributes # # | attribute | unit | default value # | --------- | ---- | ------------- # | `temperature` | K | `298.15` # | `pressure` | Pa | `101325` # | `dynamic_viscosity` | Pa s | `util.dynamic_viscosity` # | `molecular_weight` | kg / mol | `constants.MOLECULAR_WEIGHT_AIR` # | `reference_viscosity` | Pa s | `constants.REF_VISCOSITY_AIR_STP` # | `reference_temperature`| K | `constants.REF_TEMPERATURE_STP` # | `sutherland_constant` | K | `constants.SUTHERLAND_CONSTANT` # | `gas_constant` | J / mol / K | `constants.GAS_CONSTANT` # # For example, `Environment(temperature=300)` initiates the class with a temperature of 300 K and the above defaults. Note that, `particula` will assign K if the input is scalar like `temperature=300`. However, it will raise an error if the input has the *wrong* units. All attributes can be accessed by `.<attr>` after `Environment` like below, where `<attr>` is one of the above. 
# + from particula import u from particula.environment import Environment EnvOne = Environment(temperature=300) print("temperature is ", EnvOne.temperature) # will print 300 K print("pressure is ", EnvOne.pressure) # will print 101325 Pa (kg/m/s^2) # + from particula import u from particula.environment import Environment EnvTwo = Environment(temperature=300*u.K) print("temperature is ", EnvTwo.temperature) # will print 300 K as well # + from particula import u from particula.environment import Environment EnvThree = Environment() print("temperature is ", EnvThree.temperature) # will print 298.15 K (the default value) # + from particula import u from particula.environment import Environment # EnvWrong = Environment(temperature=300*u.m) # will raise an error # - # In the table above, a variety of attributes have default values from `particula.constants`, like the gas constant and Sutherland constant. This is to allow the user flexibility, but, if not provided, they all fall back on default values from the literature as discussed below. # Finally, one attribute has a default value from `particula.util`. As explained below, we calculate the dynamic viscosity via the Sutherland formula using the temperature (as well as the reference values of temperature, viscosity, and Sutherland constant). The user can override this calculation by providing a different dynamic viscosity with the attribute `dynamic_viscosity`. # ## `Environment` methods # # As discussed above, the `Environment` class enables us to calculate the dynamic viscosity as well as the mean free path.
# ### `Environment.dynamic_viscosity()`
#
# - [`particula.util.dynamic_viscosity`](./utilities/dynamic_viscosity.ipynb)

from particula.environment import Environment
Environment().dynamic_viscosity()  # will produce approx 1.84e-5 kg/m/s

# ### `Environment.mean_free_path()`
#
# - [`particula.util.mean_free_path`](./utilities/mean_free_path.ipynb)

from particula.environment import Environment
print("mean free path is ", Environment().mean_free_path())  # will produce approx 66.5 nm

# ## Notes
#
# - While technically allowed, we discourage users from passing vectorized temperature and pressure quantities for now. `particula` is designed so that each point in time (each calculation) has a unique set of conditions.
# - Attributes are accessed without parentheses, e.g. `Environment().temperature`.
# - Methods are called with parentheses, e.g. `Environment().mean_free_path()`.
#
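The Sutherland formula behind `dynamic_viscosity()` can be sketched directly. Below is an illustrative standalone version, not the `particula.util` implementation; the reference values (1.716e-5 Pa s at 273.15 K, Sutherland constant 110.4 K for air) are assumed literature values for this sketch:

```python
def dynamic_viscosity(temperature,
                      reference_viscosity=1.716e-5,   # Pa s (assumed STP air value)
                      reference_temperature=273.15,   # K (assumed STP reference)
                      sutherland_constant=110.4):     # K (assumed value for air)
    """Sutherland formula: mu = mu_ref * (T/T_ref)**1.5 * (T_ref + S) / (T + S)."""
    return (reference_viscosity
            * (temperature / reference_temperature) ** 1.5
            * (reference_temperature + sutherland_constant)
            / (temperature + sutherland_constant))

print(dynamic_viscosity(298.15))  # roughly 1.84e-5 Pa s, matching the value above
```

At the reference temperature the formula reduces to the reference viscosity, which is a quick sanity check on any implementation.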
docs/documentation/environment.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# Template for test

from pred import Predictor
from pred import sequence_vector
from pred import chemical_vector

# Controlling for Random Negative vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation.
#
# Included is N Phosphorylation; however, no benchmarks are available yet.
#
# Training data is from phospho.elm and benchmarks are from dbptm.

# S Phosphorylation

par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
benchmarks = ["Data/Benchmarks/phos_CDK1.csv", "Data/Benchmarks/phos_CK2.csv",
              "Data/Benchmarks/phos_MAPK1.csv", "Data/Benchmarks/phos_PKA.csv",
              "Data/Benchmarks/phos_PKC.csv"]
for j in benchmarks:
    for i in par:
        print("y", i, " ", j)
        y = Predictor()
        y.load_data(file="Data/Training/clean_s_filtered.csv")
        y.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=0)
        y.supervised_training("mlp_adam")
        y.benchmark(j, "S")
        del y
        print("x", i, " ", j)
        x = Predictor()
        x.load_data(file="Data/Training/clean_s_filtered.csv")
        x.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=1)
        x.supervised_training("mlp_adam")
        x.benchmark(j, "S")
        del x

# Y Phosphorylation

par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
benchmarks = ["Data/Benchmarks/phos_CDK1.csv", "Data/Benchmarks/phos_CK2.csv",
              "Data/Benchmarks/phos_MAPK1.csv", "Data/Benchmarks/phos_PKA.csv",
              "Data/Benchmarks/phos_PKC.csv"]
for j in benchmarks:
    for i in par:
        try:
            print("y", i, " ", j)
            y = Predictor()
            y.load_data(file="Data/Training/clean_Y_filtered.csv")
            y.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=0)
            y.supervised_training("bagging")
            y.benchmark(j, "Y")
            del y
            print("x", i, " ", j)
            x = Predictor()
            x.load_data(file="Data/Training/clean_Y_filtered.csv")
            x.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=1)
            x.supervised_training("bagging")
            x.benchmark(j, "Y")
            del x
        except Exception:  # skip benchmarks without relevant sites (avoid a bare except)
            print("Benchmark not relevant")

# T Phosphorylation

par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
benchmarks = ["Data/Benchmarks/phos_CDK1.csv", "Data/Benchmarks/phos_CK2.csv",
              "Data/Benchmarks/phos_MAPK1.csv", "Data/Benchmarks/phos_PKA.csv",
              "Data/Benchmarks/phos_PKC.csv"]
for j in benchmarks:
    for i in par:
        print("y", i, " ", j)
        y = Predictor()
        y.load_data(file="Data/Training/clean_t_filtered.csv")
        y.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=0)
        y.supervised_training("mlp_adam")
        y.benchmark(j, "T")
        del y
        print("x", i, " ", j)
        x = Predictor()
        x.load_data(file="Data/Training/clean_t_filtered.csv")
        x.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=1)
        x.supervised_training("mlp_adam")
        x.benchmark(j, "T")
        del x
old/Phosphorylation Sequence Tests -MLP -dbptm+ELM -EnzymeBenchmarks-VectorAvr..ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Map CircaDB and treatment data to hetionet ID

import numpy as np
import pandas as pd
import src.mapping_function as mf

# ### Circadian treatment data

# Map treatments to DrugBank IDs, and therapeutic areas to Disease Ontology IDs.

# read in circadian treatment data
treatment = pd.read_excel('downloads/HumCircMed2018v2.xlsx', sheet_name = 0)
treatment.head(2)

# read in Drugbank ID map
drugbank = pd.read_csv('https://github.com/dhimmel/drugbank/raw/6b9ae386d6ba4a0eca2d66d4b0337a6e90fe81f4/data/drugbank.tsv', sep = '\t', header = 0)
drugbank.head(2)

# read in Disease ontology ID map
disease_ontology = pd.read_csv('data/disease_doid.tsv', sep = '\t')
disease_ontology.head(2)

# map treatments to drugbank ID
drug_trtmnt_id = mf.map_drugbank_id(list(treatment.loc[:,'drug.trtmnt']),
                                    list(drugbank.loc[:,'drugbank_id']),
                                    list(drugbank.loc[:,'name']))

# map therapeutic areas to DO ID
disease_id = mf.map_disease_id(list(treatment.loc[:,'therapeutic.area']),
                               list(disease_ontology.loc[:,'DOID']),
                               list(disease_ontology.loc[:,'therapeutic.area']))

# insert mapped IDs as new columns
treatment.insert(2, 'drug.trtmnt_drugbank_id', drug_trtmnt_id)
treatment.insert(9, 'therapeutic.area_doid', disease_id)
treatment.head(2)

# output new dataframe
treatment.to_csv('data/HumCircMed2018v2_mapped.tsv', sep = '\t', header = True, index = False)

# ### CircaDB data

# Map tissues to Uberon IDs, and combine tissue-specific circadian scores with GTEx expression.

# read in CircaDB data
circa_db = pd.read_excel('downloads/aat8806_Data_file_S1.xlsx', sheet_name = 0)
circa_db.head(2)

# read in tissue uberon map
tissue_uberon = pd.read_csv('data/tissue_uberon.tsv', sep = '\t')
tissue_uberon.head(2)

# read in GTEx data
gtex_exp = pd.read_csv('downloads/GTEx_Analysis_2016-01-15_v7_RNASeQCv1.1.8_gene_median_tpm.gct.gz',
                       sep = '\t', skiprows = 2, compression = 'gzip')
gtex_exp.head(2)

# +
# extract tissue-specific circadian scores by gene
all_genes = circa_db['Entrez.ID'].unique().tolist()
all_tissues = circa_db['tissue'].unique().tolist()
tissue_len = len(all_tissues)

gene_fdr_list = []
gene_amp_list = []
gene_list = []
all_genes_ensg = []
for gene in all_genes:
    gene_id = circa_db.index[circa_db['Entrez.ID'] == gene]
    if len(gene_id) == tissue_len:
        all_genes_ensg.append(circa_db.iloc[gene_id[0], 1])
        gene_list.append(gene)
        gene_fdr = np.array(circa_db.fdr[gene_id])
        gene_amp = np.array(circa_db.rAmp[gene_id])
        gene_fdr_list.append(gene_fdr)
        gene_amp_list.append(gene_amp)

gene_list = np.array(gene_list)
gene_fdr_list = np.array(gene_fdr_list)
gene_amp_list = np.array(gene_amp_list)

# extract tissue-specific expression (median of all samples) by gene
all_gene_exp = mf.map_gtex_expression(all_genes_ensg, all_tissues,
                                      list(tissue_uberon.loc[:,'gtex_name']),
                                      list(tissue_uberon.loc[:,'tissue']),
                                      gtex_exp)
all_gene_exp = np.array(all_gene_exp)
all_gene_exp = np.transpose(all_gene_exp)
# -

# combine FDR, amplitude, expression into one ndarray
combine_array = np.concatenate((gene_fdr_list, gene_amp_list, all_gene_exp), axis=1)
combine_data_df = pd.DataFrame(combine_array)

# +
# specify each column name of the ndarray
fdr_names = []
for i in range(0, len(all_tissues)):
    fdr_names.append(all_tissues[i] + '_fdr')

amp_names = []
for i in range(0, len(all_tissues)):
    amp_names.append(all_tissues[i] + '_amp')

exp_names = []
for i in range(0, len(all_tissues)):
    exp_names.append(all_tissues[i] + '_exp')

combine_names = np.concatenate((fdr_names, amp_names, exp_names))
combine_data_df.columns = combine_names
combine_data_df.insert(0, 'gene_id', gene_list)
combine_data_df.head(2)
# -

# output dataframe that contains FDR, amplitude, expression of all genes measured
combine_data_df.to_csv('data/circa_db_mapped.tsv', sep = '\t', header = True, index = False, float_format = '%.4f')
explore/circadian-efficacy/data_id_mapping.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Q# # language: qsharp # name: iqsharp # --- # # Single-Qubit Gates Tutorial Workbook # # **What is this workbook?** # A workbook is a collection of problems, accompanied by solutions to them. # The explanations focus on the logical steps required to solve a problem; they illustrate the concepts that need to be applied to come up with a solution to the problem, explaining the mathematical steps required. # # Note that a workbook should not be the primary source of knowledge on the subject matter; it assumes that you've already read a tutorial or a textbook and that you are now seeking to improve your problem-solving skills. You should attempt solving the tasks of the respective kata first, and turn to the workbook only if stuck. While a textbook emphasizes knowledge acquisition, a workbook emphasizes skill acquisition. # # This workbook describes the solutions to the problems offered in the [Single-Qubit Gates tutorial](./SingleQubitGates.ipynb). # Since the tasks are offered as programming problems, the explanations also cover some elements of Q# that might be non-obvious for a first-time user. # # **What you should know for this workbook** # # You should be familiar with the following concepts before tackling the Single-Qubit Gates tutorial (and this workbook): # 1. Basic linear algebra # 2. The concept of qubit # ### <span style="color:blue">Exercise 1</span>: The $Y$ gate # # **Input:** A qubit in an arbitrary state $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$. # # **Goal:** Apply the $Y$ gate to the qubit, i.e., transform the given state into $i\alpha|1\rangle - i\beta|0\rangle$. # # ### Solution # # We have to do exactly what the task asks us to do: apply the Pauli gate $Y=\begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}$. 
# # This has the effect of turning $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ into $Y|\psi\rangle = i\alpha|1\rangle - i\beta|0\rangle$, which in matrix form looks as follows: # # $$ \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} -i\beta \\ i\alpha \end{bmatrix}$$ # + %kata T1_ApplyY operation ApplyY (q : Qubit) : Unit is Adj+Ctl { Y(q); // As simple as that } # - # [Return to exercise 1 of the Single-Qubit Gates tutorial.](./SingleQubitGates.ipynb#Exercise-1:-The-$Y$-gate) # ### <span style="color:blue">Exercise 2</span>: Applying a global phase $i$ # # **Input:** A qubit in an arbitrary state $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$. # # **Goal:** Use several Pauli gates to change the qubit state to $i|\psi\rangle = i\alpha|0\rangle + i\beta|1\rangle$. # ### Solution # # We need to apply a gate which applies a global phase of $i$, i.e. $|\psi\rangle \rightarrow i|\psi\rangle$. # The matrix representation of such a gate is $\begin{bmatrix} i & 0 \\ 0 & i \end{bmatrix} = i\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = iI$. # Since we are restricted to the Pauli gates, we use the property that a product of any two distinct Pauli gates equals the third gate with a $+i$ or a $-i$ global phase: $-iXYZ=I$. This can be restated as $XYZ = iI$. # $$\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} = \begin{bmatrix} i & 0 \\ 0 & i \end{bmatrix}$$ # # > Remember the rightmost gates in mathematical notation are applied first in Q# code. Hence we first apply the $Z$ gate, followed by the $Y$ gate and finally the $X$ gate. 
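The identity $XYZ = iI$ used in this solution is easy to check numerically. The snippet below is an illustrative NumPy verification, outside the Q# kata flow:

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# X @ Y @ Z corresponds to applying Z first, then Y, then X (rightmost acts first)
product = X @ Y @ Z
assert np.allclose(product, 1j * np.eye(2))  # XYZ equals i * I
```

Note the operator order: the rightmost matrix acts first, which is why the Q# code applies `Z`, then `Y`, then `X`.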
# +
%kata T2_GlobalPhaseI

operation GlobalPhaseI (q : Qubit) : Unit is Adj+Ctl {
    Z(q);
    Y(q);
    X(q);
}
# -

# [Return to exercise 2 of the Single-Qubit Gates tutorial.](./SingleQubitGates.ipynb#Exercise-2:-Applying-a-global-phase-$i$)

# ### <a name="exercise-3"></a> <span style="color:blue">Exercise 3</span>*: Applying a $-1$ phase to $|0\rangle$ state
#
# **Input:** A qubit in an arbitrary state $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$.
#
# **Goal:** Use several Pauli gates to change the qubit state to $- \alpha|0\rangle + \beta|1\rangle$, i.e., apply the transformation represented by the following matrix:
#
# $$\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$$
#
# ### Solution
#
# The first thing to notice is that the gate $\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$ is quite similar to the Pauli $Z$ gate $\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$; the only difference is that the negative phase is applied to the $|0\rangle$ state instead of $|1\rangle$. Hence we can simulate this gate by switching the $|0\rangle$ and $|1\rangle$ states, applying the Pauli $Z$ gate, and switching them back. The Pauli $X$ gate (also called the $NOT$ gate or the bit flip gate) is the perfect gate to flip the state of the qubit and to undo the action afterwards.
# # Hence we can express the $Z_0 = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$ matrix as # # $$ Z_0 = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = XZX$$ # + %kata T3_SignFlipOnZero operation SignFlipOnZero (q : Qubit) : Unit is Adj+Ctl { X(q); // Flip the qubit Z(q); // Apply negative phase on the |1> state X(q); // Flip the qubit back } # - # [Return to exercise 3 of the Single-Qubit Gates tutorial.](./SingleQubitGates.ipynb#Exercise-3*:-Applying-a-$-1$-phase-to-$|0\rangle$-state) # ### <span style="color:blue">Exercise 4</span>: Preparing a $|-\rangle$ state # # **Input:** A qubit in state $|0\rangle$. # # **Goal:** Transform the qubit into state $|-\rangle$. # ### Solution # # We know that applying the Hadamard gate $H$ on the computational basis states $|0\rangle$ and $|1\rangle$ results in Hadamard basis states $|+\rangle$ and $|-\rangle$, respectively. # We are given a qubit in the state $|0\rangle$. We first apply the Pauli $X$ gate to turn it into $X|0\rangle=|1\rangle$, and then apply the $H$ gate, turning the qubit into the required $H|1\rangle=|-\rangle$ state. # + %kata T4_PrepareMinus operation PrepareMinus (q : Qubit) : Unit is Adj+Ctl { X(q); // Turn |0> into |1> H(q); // Turn |1> into |-> } # - # [Return to exercise 4 of the Single-Qubit Gates tutorial.](./SingleQubitGates.ipynb#Exercise-4:-Preparing-a-$|-\rangle$-state) # ### <span style="color:blue">Exercise 5</span>: Three-fourths phase # # **Input:** A qubit in an arbitrary state $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$. 
#
# **Goal:** Use several phase shift gates to apply the transformation represented by the following matrix to the given qubit:
#
# $$\begin{bmatrix} 1 & 0 \\ 0 & e^{3i\pi/4} \end{bmatrix}$$
#
# ### Solution
#
# The three-fourths phase gate above can be expressed as a product of two canonical gates: the $T$ gate $\begin{bmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{bmatrix}$ and the $S$ gate $\begin{bmatrix} 1 & 0 \\ 0 & e^{i\pi/2} \end{bmatrix}$.
#
# $$\begin{bmatrix} 1 & 0 \\ 0 & e^{i3\pi/4} \end{bmatrix}
# = \begin{bmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & e^{i\pi/2} \end{bmatrix}
# = \begin{bmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & i \end{bmatrix} = TS$$
#
# Note that $TS = ST$, so it doesn't matter in which order we apply these gates.

# +
%kata T5_ThreeQuatersPiPhase

operation ThreeQuatersPiPhase (q : Qubit) : Unit is Adj+Ctl {
    S(q);
    T(q);
}
# -

# [Return to exercise 5 of the Single-Qubit Gates tutorial.](./SingleQubitGates.ipynb#Exercise-5:-Three-fourths-phase)

# ### <span style="color:blue">Exercise 6</span>: Preparing a rotated state
#
# **Inputs:**
#
# 1. Real numbers $\alpha$ and $\beta$ such that $\alpha^2 + \beta^2 = 1$.
# 2. A qubit in state $|0\rangle$.
#
# **Goal:** Use a rotation gate to transform the qubit into state $\alpha|0\rangle -i\beta|1\rangle$.

# ### Solution
#
# We use the rotation gate $R_x(\theta)$. This gate turns the state $|0\rangle$ into $R_x(\theta)|0\rangle = \cos\frac{\theta}{2}|0\rangle - i\sin\frac{\theta}{2}|1\rangle$.
# This is similar to the state we need. We just need to find an angle $\theta$ such that $\cos\frac{\theta}{2}=\alpha$ and $\sin\frac{\theta}{2}=\beta$. We can use these two equations to solve for $\theta$: $\theta = 2\arctan\frac{\beta}{\alpha}$. (*Note: It is given that $\alpha^2 + \beta^2=1$*).
# Hence the required gate is $R_x(2\arctan\frac{\beta}{\alpha})$, which in matrix form is # $\begin{bmatrix} \alpha & -i\beta \\ -i\beta & \alpha \end{bmatrix}$. # This gate turns $|0\rangle = \begin{bmatrix} 1 \\ 0\end{bmatrix}$ into $\begin{bmatrix} \alpha & -i\beta \\ -i\beta & \alpha \end{bmatrix} \begin{bmatrix} 1 \\ 0\end{bmatrix} = \begin{bmatrix} \alpha \\ -i\beta \end{bmatrix} = \alpha|0\rangle -i\beta|1\rangle$. # # > Trigonometric functions are available in Q# via [Math](https://docs.microsoft.com/en-us/qsharp/api/qsharp/microsoft.quantum.math) namespace. In this case we will need [ArcTan2](https://docs.microsoft.com/en-us/qsharp/api/qsharp/microsoft.quantum.math.arctan2). # + %kata T6_PrepareRotatedState open Microsoft.Quantum.Math; operation PrepareRotatedState (alpha : Double, beta : Double, q : Qubit) : Unit is Adj+Ctl { Rx(2.0 * ArcTan2(beta, alpha), q); } # - # [Return to exercise 6 of the Single-Qubit Gates tutorial.](./SingleQubitGates.ipynb#Exercise-6:-Preparing-a-rotated-state) # ### <span style="color:blue">Exercise 7</span>**: Preparing an arbitrary state # # **Inputs:** # # 1. A non-negative real number $\alpha$. # 2. A non-negative real number $\beta = \sqrt{1 - \alpha^2}$. # 3. A real number $\theta$. # 4. A qubit in state $|0\rangle$. # # **Goal:** Transform the qubit into state $\alpha|0\rangle + e^{i\theta}\beta|1\rangle$. # # > Since only the relative amplitudes and relative phase have any physical meaning, this allows us to prepare any single-qubit quantum state we want to. # ### Solution # # This exercise can be done in two steps. # # 1. Convert the state from $|0\rangle$ to $\alpha|0\rangle + \beta|1\rangle$. # This can be done similar to the Exercise 6, by first preparing an $\alpha|0\rangle -i\beta|1\rangle$ state using $R_x$ gate, and then removing the relative phase of $-i$ by applying the $S$ gate, which would turn $\alpha|0\rangle -i\beta|1\rangle$ to $\alpha|0\rangle + \beta|1\rangle$. 
# An alternative, simpler approach is to use the $R_y$ gate, which allows us to get the necessary state right away without introducing a relative phase: # $$R_y(2\arctan\frac{\beta}{\alpha}) = \begin{bmatrix} \alpha & -\beta \\ \beta & \alpha \end{bmatrix}$$ # 2. Add a phase of $e^{i\theta}$ to the $|1\rangle$ basis state using the $R_1(\theta)$ gate. This would turn $\alpha|0\rangle +\beta|1\rangle$ to $\alpha|0\rangle + e^{i\theta}\beta|1\rangle$. # # The solution can be represented as $R_1(\theta)R_y(2\arctan\frac{\beta}{\alpha})$ or in matrix form as # $$\begin{bmatrix} 1 & 0 \\ 0 & e^{i\theta} \end{bmatrix} # \begin{bmatrix} \alpha & -\beta \\ \beta & \alpha \end{bmatrix} = # \begin{bmatrix} \alpha & -\beta \\ e^{i\theta}\beta & e^{i\theta}\alpha \end{bmatrix}$$ # # This turns $|0\rangle = \begin{bmatrix} 1 \\ 0\end{bmatrix}$ into # $\begin{bmatrix} \alpha & -\beta \\ e^{i\theta}\beta & e^{i\theta}\alpha \end{bmatrix} \begin{bmatrix} 1 \\ 0\end{bmatrix} = # \begin{bmatrix} \alpha \\ e^{i\theta}\beta \end{bmatrix} = \alpha|0\rangle +e^{i\theta}\beta|1\rangle$. # + %kata T7_PrepareArbitraryState open Microsoft.Quantum.Math; operation PrepareArbitraryState (alpha : Double, beta : Double, theta : Double, q : Qubit) : Unit is Adj+Ctl { Ry(2.0 * ArcTan2(beta, alpha), q); // Step 1 R1(theta, q); // Step 2 } # - # [Return to exercise 7 of the Single-Qubit Gates tutorial.](./SingleQubitGates.ipynb#Exercise-7**:-Preparing-an-arbitrary-state) # ## Conclusion # # Congratulations! You have learned enough to try solving the first part of the [Basic Gates kata](../../BasicGates/BasicGates.ipynb). # When you are done with that, you can continue to the next tutorials in the series to learn about the [multi-qubit systems](../MultiQubitSystems/MultiQubitSystems.ipynb) and the [multi-qubit gates](../MultiQubitGates/MultiQubitGates.ipynb).
tutorials/SingleQubitGates/Workbook_SingleQubitGates.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # A Spam Classifier
# > This project builds a spam classifier using Apache SpamAssassin's public datasets.
#
# - toc:true
# - branch: master
# - badges: true
# - comments: true
# - author: <NAME>
# - categories: [project, machine learning, classification]
# - image: images/roc.png

# # Introduction

# In this project, I built a spam classifier by training machine learning models on the datasets from the [Apache SpamAssassin website](https://spamassassin.apache.org/old/publiccorpus/).

# # Get the data

# ## Download emails and load them into my program

# +
#collapse-hide
import os
import tarfile
import urllib.request

download_root = "https://spamassassin.apache.org/old/publiccorpus/"
file_names = ["20030228_easy_ham.tar.bz2",
              "20030228_easy_ham_2.tar.bz2",
              "20030228_hard_ham.tar.bz2",
              "20030228_spam.tar.bz2",
              "20030228_spam_2.tar.bz2"]
store_path = os.path.join("data")

def fetch_data(root_url=download_root, file_names=file_names, store_path=store_path):
    # make directory storing emails
    os.makedirs(store_path, exist_ok=True)
    # download files
    for file in file_names:
        file_url = os.path.join(root_url, file)
        path = os.path.join(store_path, file)
        urllib.request.urlretrieve(file_url, path)
    # extract emails
    for file in file_names:
        path = os.path.join(store_path, file)
        with tarfile.open(path, 'r') as f:
            f.extractall(path=store_path)

#fetch_data()

# get file names of emails
email_folders = ["hard_ham", "easy_ham", "easy_ham_2", "spam", "spam_2"]

ham_names = {}
for ham in email_folders[:3]:
    ham_path = os.path.join(store_path, ham)
    names = [name for name in sorted(os.listdir(ham_path)) if len(name) > 20]
    ham_names[ham] = names

spam_names = {}
for spam in email_folders[3:]:
    spam_path = os.path.join(store_path, spam)
    names = [name for name in sorted(os.listdir(spam_path)) if len(name) > 20]
    spam_names[spam] = names

# parse emails
import email
import email.policy

def load_email(directory, filename, spam_path=store_path):
    path = os.path.join(spam_path, directory)
    with open(os.path.join(path, filename), "rb") as f:
        return email.parser.BytesParser(policy=email.policy.default).parse(f)

hams = []
for ham in email_folders[:3]:
    emails = [load_email(ham, filename=name) for name in ham_names[ham]]
    hams.extend(emails)

spams = []
for spam in email_folders[3:]:
    emails = [load_email(spam, filename=name) for name in spam_names[spam]]
    spams.extend(emails)
# -

len(hams), len(spams), len(spams) / (len(hams) + len(spams))

# + [markdown]
# A classifier that always guesses "ham" would already be about 70% accurate, so we must do better than that.
# + [markdown] colab_type="text" id="nBOi4vYKChOY" # ## Take a look at the emails # + [markdown] colab_type="text" id="feDh6cSJChOY" # **headers** # + colab={"base_uri": "https://localhost:8080/", "height": 360} colab_type="code" executionInfo={"elapsed": 31287, "status": "ok", "timestamp": 1597025625886, "user": {"displayName": "\u6d2a\u57f9\u7fca", "photoUrl": "", "userId": "11336533706330979787"}, "user_tz": -480} id="kN_1KCFuChOY" outputId="baa0d8a5-e980-4e3d-8ebd-690698848696" hams[1].items() # + colab={"base_uri": "https://localhost:8080/", "height": 37} colab_type="code" executionInfo={"elapsed": 31284, "status": "ok", "timestamp": 1597025625886, "user": {"displayName": "\u6d2a\u57f9\u7fca", "photoUrl": "", "userId": "11336533706330979787"}, "user_tz": -480} id="1zvKwDL_ChOc" outputId="011ec35a-fd27-44d0-89fb-57dc70a1e6f8" hams[1]["Subject"] # + [markdown] colab_type="text" id="3UL27ZtnChOe" # **Contents** # + colab={"base_uri": "https://localhost:8080/", "height": 340} colab_type="code" executionInfo={"elapsed": 31283, "status": "ok", "timestamp": 1597025625887, "user": {"displayName": "\u6d2a\u57f9\u7fca", "photoUrl": "", "userId": "11336533706330979787"}, "user_tz": -480} id="V2GUOr67ChOe" outputId="2029d602-8b72-4a08-9b55-198be04aa324" print(hams[1].get_content()[:600]) # + [markdown] colab_type="text" id="EWbtApIwChOg" # ## Get email structure # + [markdown] colab_type="text" id="pt9VKPLyChOg" # There are some emails that have multiple parts. 
# + colab={} colab_type="code" executionInfo={"elapsed": 31279, "status": "ok", "timestamp": 1597025625887, "user": {"displayName": "\u6d2a\u57f9\u7fca", "photoUrl": "", "userId": "11336533706330979787"}, "user_tz": -480} id="chGNMzwfChOj" from collections import Counter def get_email_structure(email): if isinstance(email, str): return email payload = email.get_payload() if isinstance(payload, list): return "multipart({})".format(", ".join([ get_email_structure(sub_email) for sub_email in payload ])) else: return email.get_content_type() def structure_counter(emails): structures = [get_email_structure(email) for email in emails] return Counter(structures) # + colab={"base_uri": "https://localhost:8080/", "height": 581} colab_type="code" executionInfo={"elapsed": 31869, "status": "ok", "timestamp": 1597025626480, "user": {"displayName": "\u6d2a\u57f9\u7fca", "photoUrl": "", "userId": "11336533706330979787"}, "user_tz": -480} id="5vZJx0-bChOl" outputId="82de1007-9d00-419a-ed6b-721f135b33ef" structure_counter(hams).most_common() # + colab={"base_uri": "https://localhost:8080/", "height": 476} colab_type="code" executionInfo={"elapsed": 34753, "status": "ok", "timestamp": 1597025629367, "user": {"displayName": "\u6d2a\u57f9\u7fca", "photoUrl": "", "userId": "11336533706330979787"}, "user_tz": -480} id="YRFx4qS2ChOm" outputId="43c19c9c-05a0-4c22-c3cb-19b6d6bd180b" structure_counter(spams).most_common() # + [markdown] colab_type="text" id="ANV6BRLFChOo" # It seems that most hams are plain text, while spams are more often html. What we need to do next? 
# -

# # Preprocessing emails

# Write helper functions and build the preprocessing pipeline.

# + [markdown]
# ## Split emails into train and test set
# -

# +
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

X = np.array(hams + spams)
y = np.array([0] * len(hams) + [1] * len(spams))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=44, stratify=y)
X_train.shape, X_test.shape
# -

# ## Email to text

# **Parse HTML**

# +
from bs4 import BeautifulSoup

def html_to_plain_text(html):
    soup = BeautifulSoup(html, "lxml")
    strings = ""
    for i in soup.find_all():
        if i.string:
            strings += i.string + "\n"
    return strings
# -

# **Turn email to plain text**

# +
def email_to_text(email):
    html = None
    for part in email.walk():
        ctype = part.get_content_type()
        if not ctype in ("text/plain", "text/html"):
            continue
        try:
            content = part.get_content()
        except Exception:  # in case of encoding issues
            content = str(part.get_payload())
        if ctype == "text/plain":
            return content
        else:
            html = content
    if html:
        return html_to_plain_text(html)
# -

example_spam = email_to_text(spams[10])
print(example_spam)

# ## Replace URLs with "URL"

# +
import re

url_pattern = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
example_spam = re.sub(url_pattern, "URL", example_spam)
example_spam
# -

# ## Tokenize

# +
import nltk
from nltk.tokenize import word_tokenize

nltk.download('punkt')
example_spam_tokenized = word_tokenize(example_spam)
example_spam_tokenized[:10]
# -

# ## Stemming

# +
def stemming_email(tokenized_email):
    stemmer = nltk.PorterStemmer()
    stemmed_words = [stemmer.stem(word) for word in tokenized_email]
    return " ".join(stemmed_words)

stemmed_email = stemming_email(example_spam_tokenized)
stemmed_email
# -

# ## Write a sklearn estimator to transform our emails

from sklearn.base import BaseEstimator, TransformerMixin

class EmailToTokenizedStemmed(BaseEstimator, TransformerMixin):
    def __init__(self, strip_headers=True, lower_case=True, remove_punctuation=True,
                 replace_urls=True, replace_numbers=True, stemming=True):
        self.strip_headers = strip_headers
        self.lower_case = lower_case
        self.remove_punctuation = remove_punctuation
        self.replace_urls = replace_urls
        self.replace_numbers = replace_numbers
        self.stemming = stemming

    def fit(self, X, y=None):
        return self

    def transform(self, X, y=None):
        X_transformed = []
        for email in X:
            text = email_to_text(email) or ""
            if self.lower_case:
                text = text.lower()
            if self.replace_urls:
                url_pattern = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
                text = re.sub(url_pattern, "URL", text)
            if self.replace_numbers:
                text = re.sub(r'\d+(?:\.\d*(?:[eE]\d+))?', 'NUMBER', text)
            if self.remove_punctuation:
                text = re.sub(r'[^a-zA-Z0-9]+', ' ', text, flags=re.M)
            text = word_tokenize(text)
            text = stemming_email(text)
            X_transformed.append(text)
        return np.array(X_transformed)

# ## Vectorizing

from sklearn.feature_extraction.text import TfidfVectorizer

# ## Make Pipeline

# +
from sklearn.pipeline import Pipeline

email_pipeline = Pipeline([
    ("Tokenizing and Stemming", EmailToTokenizedStemmed()),
    ("tf-idf Vectorizing", TfidfVectorizer()),
    ("passthrough", None)
])
# -

# ## The processed datasets

# +
X_train_processed = email_pipeline.fit_transform(X_train)
X_test_processed = email_pipeline.transform(X_test)
# -

# ___

# # Modeling

# +
# machine learning
from sklearn.model_selection import StratifiedKFold, RandomizedSearchCV
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegressionCV
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# plotting
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline

# others
from scipy.stats import uniform, randint, loguniform
import joblib  # storing models
# -

# **Functions for evaluating and comparing models**

# +
models = {}  # storing trained models
models_names = []  # storing models names

# add a model and its name to the registries above
def add_model(name, model, models_list=models, name_list=models_names):
    name_list.append(name)
    models_list[name] = model
# -

def get_classification_report(model, X_test=X_test_processed, y_test=y_test):
    y_pred = model.predict(X_test)
    print(classification_report(y_test, y_pred,
                                target_names=["not spam", "spam"], digits=4))

# ## Building models and tuning them
# How were the models trained and tuned? Each model below is fit on the processed training set, tuned with cross-validation where applicable, and registered with `add_model`. # + [markdown] colab_type="text" id="20VwDQN7ChPN" # ### Naive Bayes (baseline model) # + colab={} colab_type="code" executionInfo={"elapsed": 725, "status": "ok", "timestamp": 1597024275917, "user": {"displayName": "\u6d2a\u57f9\u7fca", "photoUrl": "", "userId": "11336533706330979787"}, "user_tz": -480} id="Nnb5zlV_ChPN" outputId="b1f133ef-c529-4a4a-c3ce-fa8d8431beac" nb = MultinomialNB().fit(X_train_processed, y_train) # - add_model("Naive Bayes", nb) # + [markdown] colab_type="text" id="vA2XCXD4ChPT" # ### Logistic regression # - logitCV = LogisticRegressionCV(max_iter=1000, Cs=20, cv=10, scoring="accuracy") logitCV.fit(X_train_processed, y_train) add_model("Logistic regression", logitCV) # ### SVM # + svc = SVC() svc_params = {'C': loguniform(1e0, 1e3), 'gamma': loguniform(1e-4, 1e-3), 'kernel': ['rbf'], 'class_weight':['balanced', None]} svc_grid = RandomizedSearchCV(svc, svc_params, n_jobs=-1, cv=10, n_iter=15, scoring="accuracy") svc_grid.fit(X_train_processed, y_train) svc_best = svc_grid.best_estimator_ #svc = joblib.load("tmp/svc.pkl") # - svc_best.get_params() add_model("SVM", svc_best) # + [markdown] colab_type="text" id="2yUIvQEQChPe" # ### Random Forest # + colab={} colab_type="code" id="m9sQgkMAChPe" max_depths = [10, 50, 100, 150] for depth in max_depths: rf = RandomForestClassifier(n_jobs=-1, oob_score=True, n_estimators=1500, random_state=44, max_depth=depth) rf.fit(X_train_processed, y_train) print(f"Max Depth: {depth:3}, oob accuracy: {rf.oob_score_:.4f}") # - max_depths = [90, 100, 110, 120, 130] for depth in max_depths: rf = RandomForestClassifier(n_jobs=-1, oob_score=True, n_estimators=1000, random_state=44, max_depth=depth) rf.fit(X_train_processed, y_train) print(f"Max Depth: {depth:3}, oob accuracy: {rf.oob_score_:.4f}") rf = RandomForestClassifier(n_jobs=-1, oob_score=True, n_estimators=1000, random_state=44, max_depth=100) rf.fit(X_train_processed, y_train) add_model("Random forest", rf) # 
## Evaluate on test set for name in models_names: print(name) get_classification_report(models[name]) print("-----------------------------------------") print() # ## Comparing performance of models using ROC curve and AUC # + from sklearn.metrics import roc_curve, roc_auc_score def plot_roc_curve(models_names=models_names, models=models): plt.figure(dpi=120) for name in models_names: if name == "SVM": y_score = models[name].decision_function(X_test_processed) fpr, tpr, thresholds = roc_curve(y_test, y_score) auc = roc_auc_score(y_test, y_score) label = name + f"({auc:.4f})" plt.plot(fpr, tpr, label=label) else: y_score = models[name].predict_proba(X_test_processed)[:,1] fpr, tpr, thresholds = roc_curve(y_test, y_score) auc = roc_auc_score(y_test, y_score) label = name + f"({auc:.4f})" plt.plot(fpr, tpr, label=label) plt.plot([0, 1], [0,1], "b--") plt.xlim(-0.01, 1.02) plt.ylim(-0.01, 1.02) plt.legend(title="Model (AUC score)",loc=(1.01, 0.4)) # - plot_roc_curve() # # Conclusion
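# A side note on the ROC comparison above: an AUC score can be computed directly from scores as the probability that a randomly chosen positive example outranks a randomly chosen negative one (ties counting half). Below is a minimal pure-Python sketch of that definition; `auc_score` is an illustrative helper written for this note, not a function used by the models above.

```python
def auc_score(y_true, y_score):
    """AUC as P(random positive scores higher than random negative); ties count half."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    # Compare every positive/negative pair of scores.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

# This pairwise definition is equivalent to the area under the ROC curve, which is what `roc_auc_score` computes far more efficiently.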
_notebooks/2020-09-12-Apache Spam Classifier.ipynb
# # 📃 Solution for Exercise M5.02 # # The aim of this exercise is to find out whether a decision tree # model is able to extrapolate. # # By extrapolation, we refer to values predicted by a model outside of the # range of feature values seen during the training. # # We will first load the regression data. # + import pandas as pd penguins = pd.read_csv("../datasets/penguins_regression.csv") feature_name = "Flipper Length (mm)" target_name = "Body Mass (g)" data_train, target_train = penguins[[feature_name]], penguins[target_name] # - # <div class="admonition note alert alert-info"> # <p class="first admonition-title" style="font-weight: bold;">Note</p> # <p class="last">If you want a deeper overview regarding this dataset, you can refer to the # Appendix - Datasets description section at the end of this MOOC.</p> # </div> # First, create two models, a linear regression model and a decision tree # regression model, and fit them on the training data. Limit the depth at # 3 levels for the decision tree. # + # solution from sklearn.linear_model import LinearRegression from sklearn.tree import DecisionTreeRegressor linear_regression = LinearRegression() tree = DecisionTreeRegressor(max_depth=3) linear_regression.fit(data_train, target_train) tree.fit(data_train, target_train) # - # Create a synthetic dataset containing all possible flipper length from # the minimum to the maximum of the training dataset. Get the predictions of # each model using this dataset. # + # solution import numpy as np data_test = pd.DataFrame(np.arange(data_train[feature_name].min(), data_train[feature_name].max()), columns=[feature_name]) # + tags=["solution"] target_predicted_linear_regression = linear_regression.predict(data_test) target_predicted_tree = tree.predict(data_test) # - # Create a scatter plot containing the training samples and superimpose the # predictions of both models on the top. 
# + # solution import matplotlib.pyplot as plt import seaborn as sns sns.scatterplot(data=penguins, x=feature_name, y=target_name, color="black", alpha=0.5) plt.plot(data_test[feature_name], target_predicted_linear_regression, label="Linear regression") plt.plot(data_test[feature_name], target_predicted_tree, label="Decision tree") plt.legend() _ = plt.title("Prediction of linear model and a decision tree") # + [markdown] tags=["solution"] # The predictions that we got were within the range of feature values seen # during training. In some sense, we observe the capabilities of our model to # interpolate. # - # Now, we will check the extrapolation capabilities of each model. Create a # dataset containing a broader range of values than your previous dataset, # in other words, add values below and above the minimum and the maximum of # the flipper length seen during training. # solution offset = 30 data_test = pd.DataFrame(np.arange(data_train[feature_name].min() - offset, data_train[feature_name].max() + offset), columns=[feature_name]) # Finally, make predictions with both models on this new interval of data. # Repeat the plotting of the previous exercise. # solution target_predicted_linear_regression = linear_regression.predict(data_test) target_predicted_tree = tree.predict(data_test) # + tags=["solution"] sns.scatterplot(data=penguins, x=feature_name, y=target_name, color="black", alpha=0.5) plt.plot(data_test[feature_name], target_predicted_linear_regression, label="Linear regression") plt.plot(data_test[feature_name], target_predicted_tree, label="Decision tree") plt.legend() _ = plt.title("Prediction of linear model and a decision tree") # + [markdown] tags=["solution"] # The linear model will extrapolate using the fitted model for flipper lengths # < 175 mm and > 235 mm. In fact, we are using the model parametrization to # make these predictions. # # As mentioned, decision trees are non-parametric models and we observe that # they cannot extrapolate. 
For flipper lengths below the minimum, the mass of # the penguin in the training data with the shortest flipper length will always # be predicted. Similarly, for flipper lengths above the maximum, the mass of # the penguin in the training data with the longest flipper will always be # predicted.
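# The flat-line behavior outside the training range can be reproduced with a toy depth-1 regression tree written from scratch, so it runs without scikit-learn. This is a minimal sketch only: `Stump` and the sample numbers below are made up for illustration and are not the penguin data or the model fitted above.

```python
class Stump:
    """A depth-1 regression tree: one split threshold, two leaf means."""

    def fit(self, x, y):
        best = None
        for t in sorted(set(x))[1:]:  # candidate thresholds between samples
            left = [yi for xi, yi in zip(x, y) if xi < t]
            right = [yi for xi, yi in zip(x, y) if xi >= t]
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((yi - lm) ** 2 for yi in left)
                   + sum((yi - rm) ** 2 for yi in right))
            if best is None or sse < best[0]:
                best = (sse, t, lm, rm)
        _, self.threshold, self.left_mean, self.right_mean = best
        return self

    def predict(self, xs):
        # Every input falls into one of the two leaves, no matter how extreme.
        return [self.left_mean if xi < self.threshold else self.right_mean
                for xi in xs]

x = [180, 190, 200, 210, 220]       # made-up flipper lengths (mm)
y = [3500, 3800, 4100, 4400, 4700]  # made-up body masses (g)
stump = Stump().fit(x, y)
# Far below or above the training range, predictions flatten to the leaf means:
print(stump.predict([100, 300]))
```

# Whatever the depth, a tree prediction is always some leaf's training mean, which is exactly why the decision tree above plateaus outside the observed flipper lengths.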
notebooks/trees_sol_02.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import tensorflow as tf from tensorflow import keras EPOCHS = 200 BATCH_SIZE = 128 VERBOSE = 1 NB_CLASSES = 10 N_HIDDEN = 128 VALIDATION_SPLIT = 0.2 DROPOUT = 0.3 mnist = keras.datasets.mnist (X_train, Y_train), (X_test, Y_test) = mnist.load_data() RESHAPED = 784 X_train = X_train.reshape(60000, RESHAPED) X_test = X_test.reshape(10000, RESHAPED) X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train /= 255.0 X_test /= 255.0 print(X_train.shape[0], 'train samples') print(X_test.shape[0], 'test samples') Y_train = tf.keras.utils.to_categorical(Y_train, NB_CLASSES) Y_test = tf.keras.utils.to_categorical(Y_test, NB_CLASSES) model = tf.keras.models.Sequential() model.add(keras.layers.Dense(N_HIDDEN, input_shape=(RESHAPED,), name='input_dense_layer',activation='relu')) model.add(keras.layers.Dropout(DROPOUT)) model.add(keras.layers.Dense(N_HIDDEN, name='hidden_dense_layer',activation='relu')) model.add(keras.layers.Dropout(DROPOUT)) model.add(keras.layers.Dense(NB_CLASSES, name='output_dense_layer',activation='softmax')) model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy']) model.summary() model.fit(X_train, Y_train, batch_size=BATCH_SIZE, epochs=EPOCHS, verbose=VERBOSE, validation_split=VALIDATION_SPLIT) test_loss, test_acc = model.evaluate(X_test, Y_test) print() print('Test accuracy:', test_acc) # -
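# A quick aside on the `to_categorical` call above: it is plain one-hot encoding, which can be sketched in a few lines of NumPy. This is an illustrative equivalent, not Keras's actual implementation.

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """Row i gets a 1.0 in column labels[i], zeros elsewhere."""
    out = np.zeros((len(labels), num_classes), dtype="float32")
    out[np.arange(len(labels)), labels] = 1.0  # fancy indexing sets one cell per row
    return out

print(to_one_hot([3, 0, 1], 4))
```

# One-hot targets are what `categorical_crossentropy` expects; with raw integer labels you would use `sparse_categorical_crossentropy` instead.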
Notebooks/Wk02-Demo04-Model01-Experiment06.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <div align="right" style="text-align: right"><i><NAME><br>2012, 2020</i></div> # # # The Unfinished Game ... of Risk # # ![](components_.jpg) # # [<NAME>](https://web.stanford.edu/~kdevlin/)'s [book](https://www.amazon.com/Unfinished-Game-Pascal-Fermat-Seventeenth-Century/dp/0465018963) [*The Unfinished Game*](https://wordplay.blogs.nytimes.com/2015/12/14/devlin-unfinished-game/) describes how Fermat and Pascal discovered the rules of probability that guide gambling in games. The question they confront is: what if a gambling game is interrupted, but one player is in the lead by a certain score. How much of the pot should the leader get? # # My friends and I faced a similar question when a game of [Risk](https://www.ultraboardgames.com/risk/game-rules.php) ran on too long (as they often do) and we were unable to finish. Player **A** had just cashed in cards and added troops to a single large force in Brazil that was poised to make a sweeping attack on player **B**, whose territories were situated in Africa and Asia in such a way that **A** could attack North Africa first and then go from one territory to the next without ever having to branch off. We wrote down the number of **A**'s armies in Brazil, **72**, and the number of armies in **B**'s successive territories: **22, 8, 2, 2, 2, 7, 1, 1, 3, 1, 2, 3, 5, 1.** What is the probability that **A** can capture all these territories? # # ______ # # # Monte Carlo Simulation # # We can answer the question with a [**Monte Carlo simulation**](https://en.wikipedia.org/wiki/Monte_Carlo_method) that follows the rules of Risk and uses random numbers to roll the dice. 
First some preliminaries: # + from typing import List, Tuple import matplotlib.pyplot as plt import numpy as np import random attackers = 72 territories = (22, 8, 2, 2, 2, 7, 1, 1, 3, 1, 2, 3, 5, 1) die = (1, 2, 3, 4, 5, 6) Die = int # a type alias # - # How many territories and how many total defenders? len(territories), sum(territories) # It looks fairly even: the attackers have 12 more armies, 72 to 60, but they will have to leave an army in each of the 14 territories along the way. # # In Risk a **battle** consists of a one-time roll of some dice and the resulting loss of armies. The attacker will roll three dice if possible (but can roll a maximum of the number of armies in the attacking territory minus one) and the defender will roll two dice if possible (or only one if they have only one army remaining). We compare the highest die on each side, with the defender losing an army if the attacker's die is higher, and the attacker losing an army if tied or lower. Then if both sides rolled at least two dice, we do the same comparison with the second highest die on each side. The function `deaths` returns a tuple of (number of attacking armies that die, number of defending armies that die). 
def deaths(attack_dice: List[Die], defend_dice: List[Die]) -> Tuple[int, int]: """The number of (attackers, defenders) who die due to this roll.""" dead = [0, 0] for a, d in zip(sorted(attack_dice, reverse=True), sorted(defend_dice, reverse=True)): dead[a > d] += 1 return tuple(dead) # + def test1() -> bool: # Four examples from www.ultraboardgames.com/risk/game-rules.php assert deaths([6, 1, 1], [3]) == (0, 1) # 6 beats 3, so defender loses 1 assert deaths([6, 2, 1], [3, 2]) == (1, 1) # 6 beats 3, but 2 ties 2, so 1 loss each assert deaths([3, 3], [4, 3]) == (2, 0) # 3 loses to 4, and attacker 3 loses to 3 assert deaths([6], [5, 4]) == (0, 1) # 6 beats 5, so defender loses 1 return True test1() # - # An **invasion** consists of a series of battles until either the defenders are all defeated (in which case the attacker can move armies into the captured territory, but must leave one army behind), or the attackers have fewer than two armies left, and can no longer attack. # # A **campaign** consists of a sequence of invasions, designed to capture all the defender's territories. # In the function `campaign`, the variable `attackers` tracks the number of attackers invading the current territory. (The total number of armies for the attacking side will in general be more than `attackers`, because there will be one army left behind in each captured territory.) If `verbose` is true, we'll `say` what is happening along the way. In the end, return the number of `attackers` in the final territory minus the total number of defenders still alive. This will be positive if all the defenders have been defeated, and nonpositive if not. 
# + def campaign(attackers: int, territories: List[int], verbose=True) -> int: """Given a number of armies for attacker and a list of territory army sizes for defender, randomly play to the end, and return remaining attackers minus remaining defenders.""" for t, defenders in enumerate(territories, 1): if verbose: say(attackers, 'attack', defenders, t) while attackers > 1 and defenders > 0: A, D = deaths(roll(min(3, attackers - 1)), roll(min(2, defenders))) attackers -= A defenders -= D if attackers == 1: if verbose: say(attackers, "can't beat", defenders, t) break if verbose: say(attackers, 'defeat', defenders, t) attackers -= 1 # Capture a new territory, leave one behind return attackers - (defenders + sum(territories[t:])) def say(attackers, action, defenders, t): """Say what is happening.""" print(f'{attackers:3} armies {action} {defenders:2} defenders in territory {t}') def roll(n) -> List[Die]: """A random roll of `n` dice, sorted largest first.""" return sorted((random.choice(die) for _ in range(n)), reverse=True) # - # # The Answer # # Let's answer the question of who wins the unfinished game: campaign(attackers, territories) # The attackers won a resounding victory, capturing all the territories and moving 22 remaining armies into the final territory. # # But that was just one simulation; other simulations could have different results. Let's summarize, say, 100,000 simulations: # %time scores = [campaign(attackers, territories, False) for _ in range(100000)] # + def summary(scores): """Summarize the scores from simulations.""" P = np.average([s > 0 for s in scores]) N, avg = len(scores), np.average(scores) title = f'Attacker wins {P:.0%}. 
Average armies left: {avg:.1f}' plt.ylabel(f'Frequency (out of {N:,d} runs)'); plt.xlabel('Armies left (positive for attacker)') plt.hist(scores, bins=max(scores) - min(scores) + 1) plt.plot([0, 0], [0, N/25], 'r:'); plt.title(title) summary(scores) # - # We see that the attackers win 81% of the time, and the scores look roughly like a bell-shaped curve, but with a non-normal pattern on the left side. What's causing the non-normal pattern? Note that the number of defenders in the final four territories are `(2, 3, 5, 1)`, and the width of the four spikes on the left are `(1, 2, 4, 1)`, one less than the number of defenders (except for the last spike). I think there are spikes rather than a smooth curve because it is doubly-difficult to capture a territory: you need to have two armies left, not just one, so that you can leave one behind. # # Exact Probabilities # # By repeatedly running a simulation, we can get approximate probabilities. But what if we wanted **exact probabilities** (at least exact under the assumption that the dice are exactly fair and that floating point roundoff is not an issue)? I'll start by defining the function `outcomes`, so that, for example, `outcomes(3, 2)` returns a Counter of all possible outcomes of rolling 3 attacker dice versus 2 defender dice; each outcome is a pair of `(attacker_deaths, defender_deaths)` and has a count of how often it occurred. 
# + from functools import lru_cache from collections import Counter import itertools @lru_cache() def outcomes(num_A: int, num_D: int) -> List[Tuple[int, int]]: """All (equiprobable) outcomes of (attacker_deaths, defender_deaths) from rolling num_A dice versus num_D dice.""" return Counter(deaths(dice[:num_A], dice[num_A:]) for dice in itertools.product(die, repeat=num_A + num_D)) # - outcomes(3, 2) # The result for `outcomes(3, 2)` says that the attackers lose two armies in 2,275 out of the $6^5 = 7,776$ different outcomes; the defenders lose two in 2,890 outcomes; and they each lose one in the remaining 2,611 outcomes. I found a [web page](http://datagenetics.com/blog/november22011/) that lists the results for all six possible battles (attackers roll 1, 2, or 3 dice; defenders roll 1 or 2); let's verify that our `outcomes` agree with theirs: # + def test2() -> bool: # See http://datagenetics.com/blog/november22011/ assert outcomes(1, 1) == {(1, 0): 21, (0, 1): 15} assert outcomes(2, 1) == {(1, 0): 91, (0, 1): 125} assert outcomes(3, 1) == {(1, 0): 441, (0, 1): 855} assert outcomes(1, 2) == {(1, 0): 161, (0, 1): 55} assert outcomes(2, 2) == {(2, 0): 581, (1, 1): 420, (0, 2): 295} assert outcomes(3, 2) == {(2, 0): 2275, (1, 1): 2611, (0, 2): 2890} return True test2() # - # Now we know exact outcomes for a single roll of the dice; what about an invasion where $A$ attackers keep attacking until they defeat $D$ defenders (or all but one attacker dies trying)? The function `winP(A, D)` gives the probability of the attackers winning. It works recursively. The two base cases are that the probability is zero if the attackers don't have at least two armies and the probability is one if the defenders have no armies. 
# # In the recursive case we observe the possible outcomes for the first battle, and then compute the win probability for the invasion as the average, over every possible outcome of the dice rolls, of the win probability for the number of remaining attackers and defenders after the battle, weighted by the number of times that the outcome occurs. (We use `np.average` because it accepts an optional `weights` argument.) @lru_cache(None) def winP(attackers: int, defenders: int) -> float: """The probability that `attackers` can invade and defeat all the `defenders`.""" if attackers <= 1: return 0 elif defenders == 0: return 1 else: battle = outcomes(min(3, attackers - 1), min(2, defenders)) return np.average([winP(attackers - A, defenders - D) for (A, D) in battle], weights=list(battle.values())) # Let's try a simple example: `winP(2, 1)` should be `15/36`, the same as the probability of the attacker prevailing in `outcomes(2, 1)`: winP(2, 1) == 15/36 # What's the probability that 12 attackers successfully invade 10 defenders? winP(12, 10) # Let's make a chart, with the number of defenders varying from 1 to 60, and the number of attackers separated into eight cases (depicted as eight lines), where in each case there are Δ more attackers than defenders, for Δ = -5, -2, 0, 1, 2, 5, and 10: def chart(Ds=range(1, 61), deltas=(-5, -2, -1, 0, 1, 2, 5, 10)): plt.figure(figsize=(9, 6)); plt.grid() plt.title('Each line: attackers with Δ more armies than defenders') plt.xlabel('Number of Defenders'); plt.ylabel('Win Probability for Attackers') for delta in reversed(deltas): Ps = [winP(max(0, D + delta), D) for D in Ds] plt.plot(Ds, Ps, '.-', label=f'Δ={delta}') plt.legend() chart() # Note that the purple line (fourth from bottom), where the number of attackers exactly equals the number of defenders, gives a low win probability for a small attacking force, but reaches 50% for 12-on-12, and 73% for 60-on-60. 
The red line, where the attackers have one more army than the defenders, dips from one to two defenders but is over 50% for a 6-on-5 attack. Similarly, the green line, where the attackers have a surplus of two armies, dips sharply from 75% to 66% as the number of defenders goes from 1 to 2, dips slightly more for 3 and 4 defenders, and then starts to rise. So overall, an attacker does not need a big advantage in armies as long as there are many armies on both sides. Even when the attacker is at a disadvantage in numbers (as in the bottom grey line where the attacker has five fewer armies), the attacker can still have an advantage in win probability; `winP(55, 60)` is about 57%. # # Simulation versus Exact Computation # # Let's see how the exact computation compares with the simulation: # + A, D = 32, 30 exact = winP(A, D) simul = np.average([campaign(A, [D], False) > 0 for _ in range(10000)]) exact, simul # - # We see that they give similar results, differing by one or two parts per thousand. So when would you prefer a simulation, and when an exact computation? # # **Advantages of a simulation:** # - Usually simpler to code; can use an `if` statement to distinguish two branches; don't have to follow every branch. # - Can be more efficient to compute; don't have to spend a lot of computation time on extremely rare events. # - Can handle event sequences of potentially unbounded length. # # **Advantages of an exact calculation:** # - Don't need to rerun multiple times; don't need statistical inference to analyze the range of outcomes. # - If a very rare event can be extremely bad (or good), it is important to know exactly how likely the rare event is. # # I chose to use a **simulation** for `campaign` because: # - I thought it would be easier to code. # - I would be happy with only four bits of accuracy: enough to determine the win percentage within 6%. (I ended up running enough simulations to get within 1%.) 
# - I didn't care about the 1-in-a-billion chance of, say, the attackers losing 17 battles in a row; I just want to know the overall odds of the attackers losing the campaign. (Note that if I was doing a simulation of a nuclear reactor, I would certainly be very concerned with a 1-in-a-billion chance of a meltdown, and would need to code ways of exploring that possibility more carefully.) # - A simulation is more flexible. Consider a small change to the rules of Risk: if the dice show five 1s in a battle, both sides add one army. We can easily implement that in a simulation in two or three lines of code, and the effect on run time will be negligible, because it only happens one in 7,776 times. But in an exact calculation this new rule would change everything: it would become an infinite game, and we would have to make a wholesale rearrangement of the code to deal with that. # # I chose to use **exact calculation** for `winP` because: # - I knew the computational demands would be very small either way. # - I saw there was something going on with odd/even number of armies, and I wanted to distinguish slight variations that are real from slight variations that are random, so getting exact numbers was useful. # # # (*Note:* I call it an "exact" calculation, but it is actually limited to the precision of 64-bit floating point numbers. If you truly need an exact answer for a discrete combinatorics problem, do division with the `fraction.Fraction` class, not with `float`. And of course it is an exact computation in the model; if the model does not reflect the world, the results will be wrong.)
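# Following up on that last note, the base case `winP(2, 1) == 15/36` can be verified with exact rational arithmetic using `fractions.Fraction`. This is a standalone sketch that re-derives the battle outcomes; `win_p_exact` is an illustrative reimplementation for this note, not the notebook's `winP`.

```python
from collections import Counter
from fractions import Fraction
from functools import lru_cache
import itertools

DIE = (1, 2, 3, 4, 5, 6)

def deaths(attack_dice, defend_dice):
    """(attacker deaths, defender deaths) for one battle; defender wins ties."""
    dead = [0, 0]
    for a, d in zip(sorted(attack_dice, reverse=True),
                    sorted(defend_dice, reverse=True)):
        dead[a > d] += 1
    return tuple(dead)

@lru_cache(maxsize=None)
def outcomes(num_A, num_D):
    """Counter of equiprobable battle outcomes for num_A vs num_D dice."""
    return Counter(deaths(dice[:num_A], dice[num_A:])
                   for dice in itertools.product(DIE, repeat=num_A + num_D))

@lru_cache(maxsize=None)
def win_p_exact(attackers, defenders):
    """Exact probability that the attackers defeat all the defenders."""
    if attackers <= 1:
        return Fraction(0)
    if defenders == 0:
        return Fraction(1)
    battle = outcomes(min(3, attackers - 1), min(2, defenders))
    total = sum(battle.values())
    # Weighted average of the win probabilities after each possible battle.
    return sum(n * win_p_exact(attackers - A, defenders - D)
               for (A, D), n in battle.items()) / total

print(win_p_exact(2, 1))  # 5/12, i.e. exactly 15/36
```

# Because every intermediate value is a `Fraction`, the result is an exact rational number rather than a float, at the cost of slower arithmetic for large army counts.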
ipynb/risk.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### To run this code, please install PyIOmica, a Python open multi-omics analysis platform. # To install the current release from PyPI (Python Package Index) use pip: # # pip install pyiomica # # The GitHub repository of PyIOmica: https://github.com/gmiaslab/pyiomica # # For documentation of PyIOmica, see: <NAME>, <NAME>, <NAME>, PyIOmica: longitudinal omics analysis and trend identification, Bioinformatics, 2019, 1–2, doi: https://doi.org/10.1093/bioinformatics/btz896 # # You also need to install the Louvain community detection package: # # pip install python-louvain # # The GitHub repository of python-louvain: https://github.com/taynaud/python-louvain # + import numpy as np import networkx as nx import community as cy from networkx.algorithms import community import itertools from copy import deepcopy from pyiomica import visualizationFunctions from pyiomica import visibilityGraphCommunityDetection from scipy import signal import matplotlib.pyplot as plt ### no warnings import warnings warnings.filterwarnings('ignore') # %matplotlib inline import random # - # function to plot the community structure as a color bar def __plotCommunityAsHeatmap(data, times, fileName,noRemovedData=None, title='', figsize=(8,4), cmap='jet', graph_type='natural',weight=None, withsign=False, direction=None, cutoff=None): '''Plot a time series and its community structure as a heatmap; nodes in the same community share a color Args: data: Numpy 2-D array of floats times: Numpy 1-D array of floats fileName: name of the figure file to save title: the figure title, default is empty noRemovedData: for the unevenly sampled case, the original data without any time points removed, default is None figsize: tuple of int, Default (8,4) Figure size in inches cmap: the color map to plot heatmap graph_type: string, 
default: 'natural' "horizontal", Horizontal Visibility Graph "natural", natural Visibility Graph "dual_horizontal", dual perspective horizontal visibility graph "dual_natural", dual perspective natural visibility graph weight: str, default: None None: not weighted 'time': weight = abs(times[i] - times[j]) 'tan': weight = abs((data[i] - data[j])/(times[i] - times[j])) + 10**(-8) 'distance': weight = A[i, j] = A[j, i] = ((data[i] - data[j])**2 + (times[i] - times[j])**2)**0.5 withsign: boolean, Default False Whether to return the sign of adjacency matrix, If True, the link from Natural perspective VG is positive, the link from reflected perspective VG is negative Else, they are all positive direction: str, default is None, the direction that nodes aggregate to communities None: no specific direction, e.g. both sides left: nodes can only aggregate to the left side hubs, e.g. early hubs right: nodes can only aggregate to the right side hubs, e.g. later hubs cutoff: will be used to combine initial communities, e.g. whenever the shortest path length of two adjacent hub nodes is smaller than cutoff, the communities with the two hub nodes will be combined. 
the cutoff can be int,float or string int or float: the percentile of all shortest path length distribution, between 0 ~ 100 'auto': use optimized cutoff None: no cutoff the default is None Returns: None Usage: __plotCommunityAsHeatmap(data, times, 'Test.png', 'Test Data') ''' methods = ['GN', 'LN','PL'] G_nx, A = visibilityGraphCommunityDetection.createVisibilityGraph(data, times, graph_type=graph_type, weight=weight, withsign=withsign) community_pl = visibilityGraphCommunityDetection.communityDetectByPathLength(G_nx, direction=direction, cutoff = cutoff) heatMapData = [] if noRemovedData != None: lh = len(noRemovedData[0]) else: lh = len(times) temp1 = np.zeros(lh) temp2 = np.zeros(lh) temp3 = np.zeros(lh) comp = community.girvan_newman(G_nx) k = len(community_pl) limited = itertools.takewhile(lambda c: len(c) <= k, comp) for communities in limited: community_gn = (list(sorted(c) for c in communities)) for i, row in enumerate(community_gn): for j in row: temp3[int(G_nx.nodes[j]['timepoint'])] = i+1 res3 = [element for element in temp3 if element != 0] if temp3[0] == 0: temp3[0] = res3[0] for i,e in enumerate(temp3): if e == 0: temp3[i] = temp3[i-1] heatMapData.append(temp3) community_ln = cy.best_partition(G_nx) for key,value in community_ln.items(): temp2[int(G_nx.nodes[key]['timepoint'])] = value + 1 res2 = [element for element in temp2 if element != 0] if temp2[0] == 0: temp2[0] = res2[0] for i,e in enumerate(temp2): if e == 0: temp2[i] = temp2[i-1] heatMapData.append(temp2) for i, row in enumerate(community_pl): for j in row: temp1[int(G_nx.nodes[j]['timepoint'])] = i + 1 res1 = [element for element in temp1 if element != 0] if temp1[0] == 0: temp1[0] = res1[0] for i,e in enumerate(temp1): if e == 0: temp1[i] = temp1[i-1] heatMapData.append(temp1) fig = plt.figure(figsize=figsize) ax1 = plt.subplot(211) ax1.bar(times, data, width=0.3,color='b') if noRemovedData != None: origT = noRemovedData[0] origD = noRemovedData[1] removT = [x for x in origT if x not in 
times] removD = origD[removT] ax1.bar(removT, removD, width=0.3,color='grey') if noRemovedData != None: length = len(noRemovedData[0]) Ttimes = np.array(noRemovedData[0]) else: length = len(data) Ttimes = np.array(times) ax1.axhline(y=0, color='k') ax1.set_ylabel('Signal Intensity', fontsize=16) ax1.set_xticks(np.arange(0,length,10)) ax1.set_xticklabels(Ttimes[np.arange(0,length,10)],fontsize=16) ax1.set_yticks([]) ax1.set_xlim(left=0,right=length) ax1.set_title(title,fontsize=20) ax2 = plt.subplot(212, sharex=ax1) #im2 = ax2.imshow(heatMapData) im2 = ax2.pcolor(heatMapData, cmap=cmap) ax2.set_xticks(np.arange(0,length,10)) ax2.set_xticklabels(Ttimes[np.arange(0,length,10)],fontsize=16) ax2.set_yticks(np.arange(0,len(heatMapData),1), minor=True) ax2.set_yticks(np.arange(0.5,len(heatMapData),1)) ax2.set_yticklabels(methods,fontsize=16) ax2.grid(which="minor", color="w", axis='y', linestyle='-', linewidth=3) ax2.tick_params(which="minor", top=False, bottom=False, left=False,right=False) for edge, spine in ax2.spines.items(): spine.set_visible(False) fig.tight_layout() fig.savefig(fileName, dpi=600) plt.close(fig) return None # #### Illustration of create weighted perspective visibility graph and community structure based on shortest path length community detection algorithm # + ### create time series np.random.seed(11) random.seed(11) times = np.arange( 0, 2*np.pi, 0.35) tp = list(range(len(times))) data = 5*np.cos(times) + 2*np.random.random(len(times)) ### plot time series fig, ax = plt.subplots(figsize=(8,3)) ax.plot(tp,data) ax.set_title('Time Series', fontdict={'color': 'k'},fontsize=20) ax.set_xlabel('Times', fontsize=20) ax.set_ylabel('Signal intensity', fontsize=20) ax.set_xticks(tp) ax.set_xticklabels([str(item) for item in np.round(tp,2)],fontsize=20, rotation=0) ax.set_yticks([]) fig.tight_layout() fig.savefig('./draft_fig/fig1/A.eps', dpi=600) plt.close(fig) ### plot weighted Natural visibility graph, weight is Euclidean distance g_nx_NVG, A_NVG = 
visibilityGraphCommunityDetection.createVisibilityGraph(data, tp, "natural", weight='distance')
visualizationFunctions.PlotNVGBarGraph_Dual(A_NVG, data, tp, fileName='./draft_fig/fig1/B.eps',
                                            title='Natural Visibility Graph', fontsize=20, figsize=(8,3))

### plot reflected perspective weighted natural visibility graph, weight is Euclidean distance
g_nx_revNVG, A_revNVG = visibilityGraphCommunityDetection.createVisibilityGraph(-data, tp, "natural", weight='distance')
visualizationFunctions.PlotNVGBarGraph_Dual(A_revNVG, -data, tp, fileName='./draft_fig/fig1/C.eps',
                                            title='Reflected Perspective Natural Visibility Graph', fontsize=20, figsize=(8,3))

### plot dual perspective natural visibility graph, weight is Euclidean distance
g_nx_dualNVG, A_dualNVG = visibilityGraphCommunityDetection.createVisibilityGraph(data, tp, "dual_natural", weight='distance', withsign=True)
visualizationFunctions.PlotNVGBarGraph_Dual(A_dualNVG, data, tp, fileName='./draft_fig/fig1/D.eps',
                                            title='Dual Perspective Natural Visibility Graph', fontsize=20, figsize=(10,4))

### plot line-layout dual perspective natural visibility graph with community structure, weight is Euclidean distance
communities = visibilityGraphCommunityDetection.communityDetectByPathLength(g_nx_dualNVG, direction=None, cutoff='auto')
com = (communities, g_nx_dualNVG)
visualizationFunctions.makeVisibilityGraph(data, tp, 'draft_fig/fig1', 'E', layout='line', communities=com, level=0.8, figsize=(10,6), extension='.eps')
# -

# #### Plot community structure as a heatmap; nodes in the same community share a color, comparing our algorithm with other traditional methods

# +
t = np.linspace(0, 1, 150, endpoint=False)
tp = np.arange(len(t))

# Cosine signals
# 20 percent noise
data20a = np.cos(6*np.pi*t) + 0.2*(-1+2*np.random.random(len(t)))
__plotCommunityAsHeatmap(data20a, tp, './draft_fig/fig3/A.eps', title='20 percent noise', cmap='jet',
                         graph_type='dual_natural', weight='distance', direction=None, cutoff='auto')

# 80 percent noise
data
= np.cos(6*np.pi*t) + 0.8*(-1+2*np.random.random(len(t))) __plotCommunityAsHeatmap(data, tp, './draft_fig/fig3/B.eps', title = '80 percent noise' ,cmap='jet', graph_type='dual_natural', weight='distance', direction=None, cutoff='auto') # ### ramdonly remove 10 percent time points, 10 percent uneven samples # np.random.shuffle(tp) # tp_uneven = tp[:round(0.9*len(t))] # tp_uneven = sorted(tp_uneven) # data_uneven = data20a[tp_uneven] # __plotCommunityAsHeatmap(data_uneven, tp_uneven, './draft_fig/fig3/C.eps',noRemovedData=(sorted(tp),data20a), title = '10 percent uneven samples' ,cmap='jet', # graph_type='dual_natural', weight='distance', direction=None, cutoff='auto') #20% np.random.shuffle(tp) tp_uneven = tp[:round(0.8*len(t))] tp_uneven = sorted(tp_uneven) data_uneven = data20a[tp_uneven] __plotCommunityAsHeatmap(data_uneven, tp_uneven, './draft_fig/fig3/C.eps',noRemovedData=(sorted(tp),data20a), title = '20 percent uneven samples' ,cmap='jet', graph_type='dual_natural', weight='distance', direction=None, cutoff='auto') # ### randomly remove 40 percent time points, 40 percent uneven samples # np.random.shuffle(tp) # tp_uneven = tp[:round(0.6*len(t))] # tp_uneven = sorted(tp_uneven) # data_uneven = data20a[tp_uneven] # __plotCommunityAsHeatmap(data_uneven, tp_uneven, './draft_fig/fig3/D.eps',noRemovedData=(sorted(tp),data20a), title = '40 percent uneven samples' ,cmap='jet', # graph_type='dual_natural', weight='distance', direction=None, cutoff='auto') #80% np.random.shuffle(tp) tp_uneven = tp[:round(0.2*len(t))] tp_uneven = sorted(tp_uneven) data_uneven = data20a[tp_uneven] __plotCommunityAsHeatmap(data_uneven, tp_uneven, './draft_fig/fig3/D.eps',noRemovedData=(sorted(tp),data20a), title = '80 percent uneven samples' ,cmap='jet', graph_type='dual_natural', weight='distance', direction=None, cutoff='auto') # + ###square wave signal t = np.linspace( 0, 1, 150, endpoint=False) tp = np.arange(len(t)) #20 percent noise data20b = signal.square(6*np.pi*t) + 0.2 * 
(-1+2*np.random.random(len(t))) __plotCommunityAsHeatmap(data20b, tp, './draft_fig/fig3/E.eps', title = '20 percent noise' ,cmap='jet', graph_type='dual_natural', weight='distance', direction=None, cutoff='auto') #80 percent noise data = signal.square(6*np.pi*t) + 0.8 * (-1+2*np.random.random(len(t))) __plotCommunityAsHeatmap(data, tp, './draft_fig/fig3/F.eps', title = '80 percent noise' ,cmap='jet', graph_type='dual_natural', weight='distance', direction=None, cutoff='auto') # #10 percent uneven samples # np.random.shuffle(tp) # tp_uneven = tp[:round(0.9*len(t))] # tp_uneven = sorted(tp_uneven) # data_uneven = data20b[tp_uneven] # __plotCommunityAsHeatmap(data_uneven, tp_uneven, './draft_fig/fig3/G.eps',noRemovedData=(sorted(tp),data20b), title = '10 percent uneven samples' ,cmap='jet', # graph_type='dual_natural', weight='distance', direction=None, cutoff='auto') #20 percent uneven samples np.random.shuffle(tp) tp_uneven = tp[:round(0.8*len(t))] tp_uneven = sorted(tp_uneven) data_uneven = data20b[tp_uneven] __plotCommunityAsHeatmap(data_uneven, tp_uneven, './draft_fig/fig3/G.eps',noRemovedData=(sorted(tp),data20b), title = '20 percent uneven samples' ,cmap='jet', graph_type='dual_natural', weight='distance', direction=None, cutoff='auto') # #40 percent uneven samples # np.random.shuffle(tp) # tp_uneven = tp[:round(0.6*len(t))] # tp_uneven = sorted(tp_uneven) # data_uneven = data20b[tp_uneven] # __plotCommunityAsHeatmap(data_uneven, tp_uneven, './draft_fig/fig3/H.eps',noRemovedData=(sorted(tp),data20b), title = '40 percent uneven samples' ,cmap='jet', # graph_type='dual_natural', weight='distance', direction=None, cutoff='auto') #80 percent uneven samples np.random.shuffle(tp) tp_uneven = tp[:round(0.2*len(t))] tp_uneven = sorted(tp_uneven) data_uneven = data20b[tp_uneven] __plotCommunityAsHeatmap(data_uneven, tp_uneven, './draft_fig/fig3/H.eps',noRemovedData=(sorted(tp),data20b), title = '80 percent uneven samples' ,cmap='jet', graph_type='dual_natural', 
weight='distance', direction=None, cutoff='auto') # + ###sawtooth wave signal t = np.linspace( 0, 1, 150, endpoint=False) tp = np.arange(len(t)) #20 percent noise data20c = signal.sawtooth(6*np.pi*t, 0)+ 0.2 * (-1+2*np.random.random(len(t))) #20 percent noise __plotCommunityAsHeatmap(data20c, tp, './draft_fig/fig3/I.eps', title = '20 percent noise' ,cmap='jet', graph_type='dual_natural', weight='distance', direction=None, cutoff='auto') #80 percent noise data = signal.sawtooth(6*np.pi*t,0)+ 0.8 * (-1+2*np.random.random(len(t))) __plotCommunityAsHeatmap(data, tp, './draft_fig/fig3/J.eps', title = '80 percent noise' ,cmap='jet', graph_type='dual_natural', weight='distance', direction=None, cutoff='auto') # #10 percent uneven samples # np.random.shuffle(tp) # tp_uneven = tp[:round(0.9*len(t))] #10 percent uneven samples # tp_uneven = sorted(tp_uneven) # data_uneven = data20c[tp_uneven] # __plotCommunityAsHeatmap(data_uneven, tp_uneven, './draft_fig/fig3/K.eps',noRemovedData=(sorted(tp),data20c),title = '10 percent uneven samples' ,cmap='jet', # graph_type='dual_natural', weight='distance', direction=None, cutoff='auto') #20 percent uneven samples np.random.shuffle(tp) tp_uneven = tp[:round(0.8*len(t))] #20 percent uneven samples tp_uneven = sorted(tp_uneven) data_uneven = data20c[tp_uneven] __plotCommunityAsHeatmap(data_uneven, tp_uneven, './draft_fig/fig3/K.eps',noRemovedData=(sorted(tp),data20c),title = '20 percent uneven samples' ,cmap='jet', graph_type='dual_natural', weight='distance', direction=None, cutoff='auto') # #40 percent uneven samples # np.random.shuffle(tp) # tp_uneven = tp[:round(0.6*len(t))] # tp_uneven = sorted(tp_uneven) # data_uneven = data20c[tp_uneven] # __plotCommunityAsHeatmap(data_uneven, tp_uneven, './draft_fig/fig3/L.eps',noRemovedData=(sorted(tp),data20c),title = '40 percent uneven samples' ,cmap='jet', # graph_type='dual_natural', weight='distance', direction=None, cutoff='auto') #80 percent uneven samples np.random.shuffle(tp) tp_uneven 
= tp[:round(0.2*len(t))] #80 percent uneven samples tp_uneven = sorted(tp_uneven) data_uneven = data20c[tp_uneven] __plotCommunityAsHeatmap(data_uneven, tp_uneven, './draft_fig/fig3/L.eps', noRemovedData=(sorted(tp),data20c), title = '80 percent uneven samples' ,cmap='jet', graph_type='dual_natural', weight='distance', direction=None, cutoff='auto') # + #no noise t = np.linspace( 0, 1, 150, endpoint=False) tp = np.arange(len(t)) data0a = np.cos(6*np.pi*t) __plotCommunityAsHeatmap(data0a, tp, './draft_fig/fig2/A.eps',title = 'Cosine Signal' ,cmap='jet', graph_type='dual_natural', weight='distance', direction='left', cutoff='auto') data0b = signal.square(6*np.pi*t) __plotCommunityAsHeatmap(data0b, tp, './draft_fig/fig2/B.eps', title = 'Square Wave Signal' ,cmap='jet', graph_type='dual_natural', weight='distance', direction='left', cutoff='auto') # data0a = np.sin(6*np.pi*t) # __plotCommunityAsHeatmap(data0a, tp, './draft_fig/fig2/Aex.eps',title = 'Sin Signal' ,cmap='jet', # graph_type='dual_natural', weight='distance', direction='left', cutoff='auto') # - def __compareCommunityOfDiffWeights(data, times, fileName,noRemovedData=None, title='', figsize=(8,4), cmap='jet',graph_type='natural', direction=None, cutoff=None, ySignalTick=False, xtickInterval = 20, f=1): '''plot time series and community structure as heatmap, nodes in same community with same color compare different weights Args: data: Numpy 2-D array of floats times: Numpy 1-D array of floats fileName: name of the figure file to save title: the figure title,default is empty noRemovedData: for uneven case, this data is the original data without remove any time points default is none figsize: tuple of int, Default (8,4) Figure size in inches cmap: the color map to plot heatmap graph_type: string, default: 'natural' "horizontal", Horizontal Visibility Graph "natural",natural Visibility Graph "dual_horizontal", dual perspective horizontal visibility graph "dual_natural", dual perspective natural visibility graph 
        direction: str, default is None, the direction in which nodes aggregate to communities
            None: no specific direction, e.g. both sides
            left: nodes can only aggregate to the left side hubs, e.g. early hubs
            right: nodes can only aggregate to the right side hubs, e.g. later hubs
        cutoff: used to combine initial communities, e.g. whenever the shortest path length
            between two adjacent hub nodes is smaller than the cutoff, the communities
            containing the two hub nodes will be combined. The cutoff can be int, float or string:
            int or float: a percentile of the distribution of all shortest path lengths, between 0 and 100
            'auto': use an optimized cutoff
            None: no cutoff
            the default is None

    Returns:
        None

    Usage:
        __compareCommunityOfDiffWeights(data, times, 'Test.png', 'Test Data')
    '''
    methods = ['None', u'Δ Time', 'Tangent', 'Euclidean Distance']

    G_n, A = visibilityGraphCommunityDetection.createVisibilityGraph(data, times, graph_type=graph_type, weight=None)
    c_n = visibilityGraphCommunityDetection.communityDetectByPathLength(G_n, direction=direction, cutoff=cutoff)
    G_tm, A = visibilityGraphCommunityDetection.createVisibilityGraph(data, times, graph_type=graph_type, weight='time')
    c_tm = visibilityGraphCommunityDetection.communityDetectByPathLength(G_tm, direction=direction, cutoff=cutoff)
    G_tn, A = visibilityGraphCommunityDetection.createVisibilityGraph(data, times, graph_type=graph_type, weight='tan')
    c_tn = visibilityGraphCommunityDetection.communityDetectByPathLength(G_tn, direction=direction, cutoff=cutoff)
    G_dis, A = visibilityGraphCommunityDetection.createVisibilityGraph(data, times, graph_type=graph_type, weight='distance')
    c_dis = visibilityGraphCommunityDetection.communityDetectByPathLength(G_dis, direction=direction, cutoff=cutoff)

    heatMapData = []
    if noRemovedData != None:
        lh = len(noRemovedData[0])
    else:
        lh = len(times)

    def __get_heatmap_array(G, community, lh):
        temp = np.zeros(lh)
        for i, row in enumerate(community):
            for j in row:
                temp[int(f*float(G.nodes[j]['timepoint']))] = i + 1
        res =
[element for element in temp if element != 0] if temp[0] == 0: temp[0] = res[0] for i,e in enumerate(temp): if e == 0: temp[i] = temp[i-1] return temp heatMapData.append(__get_heatmap_array(G_n,c_n,lh)) heatMapData.append(__get_heatmap_array(G_tm,c_tm,lh)) heatMapData.append(__get_heatmap_array(G_tn,c_tn,lh)) heatMapData.append(__get_heatmap_array(G_dis,c_dis,lh)) fig = plt.figure(figsize=figsize) ax1 = plt.subplot(211) T = [int(round(x*f)) for x in times ] ax1.bar(T, data, width=0.3,color='b') if noRemovedData != None: origT = noRemovedData[0] origD = noRemovedData[1] removT = [x for x in origT if x not in times] removD = origD[removT] rT = [int(round(x*f)) for x in removT ] ax1.bar(rT, removD, width=0.3,color='grey') if noRemovedData != None: length = len(noRemovedData[0]) Ttimes = np.arange(length) else: length = len(data) Ttimes = np.arange(len(times)) ax1.axhline(y=0, color='k') ax1.set_ylabel('Signal Intensity', fontsize=16) ax1.set_xticks(np.arange(0,length,xtickInterval)) ax1.set_xticklabels(Ttimes[np.arange(0,length,xtickInterval)],fontsize=16) if ySignalTick == False: ax1.set_yticks([]) ax1.set_xlim(left=0,right=length) ax1.set_title(title,fontsize=20) ax2 = plt.subplot(212, sharex=ax1) #im2 = ax2.imshow(heatMapData) im2 = ax2.pcolor(heatMapData, cmap=cmap) ax2.set_xticks(np.arange(0,length,xtickInterval)) ax2.set_xticklabels(Ttimes[np.arange(0,length,xtickInterval)],fontsize=16) ax2.set_yticks(np.arange(0,len(heatMapData),1), minor=True) ax2.set_yticks(np.arange(0.5,len(heatMapData),1)) ax2.set_yticklabels(methods,fontsize=16) ax2.grid(which="minor", color="w", axis='y', linestyle='-', linewidth=3) ax2.tick_params(which="minor", top=False, bottom=False, left=False,right=False) for edge, spine in ax2.spines.items(): spine.set_visible(False) fig.tight_layout() fig.savefig(fileName, dpi=600) plt.close(fig) return None # + # compare different weights # NB: remove seed to run new simulations np.random.seed(20) random.seed(20) t = np.linspace( 0, 1, 
150,endpoint=False) tp = np.arange(len(t)) cf = 'auto' graphtype = 'dual_natural' #Cosine signals data00 = np.cos(6*np.pi*t) __compareCommunityOfDiffWeights(data00, tp, './draft_fig/fig4/A1.eps',title = '0 percent noise' ,cmap='jet', graph_type=graphtype,direction=None, cutoff=cf) # 20 percent noise data20a = np.cos(6*np.pi*t) + 0.2*(-1+2*np.random.random(len(t))) __compareCommunityOfDiffWeights(data20a, tp, './draft_fig/fig4/A2.eps',title = '20 percent noise' ,cmap='jet', graph_type=graphtype,direction=None, cutoff=cf) #80 percent noise data = np.cos(6*np.pi*t) + 0.8*(-1+2*np.random.random(len(t))) __compareCommunityOfDiffWeights(data, tp, './draft_fig/fig4/A3.eps', title = '80 percent noise' ,cmap='jet', graph_type=graphtype, direction=None, cutoff=cf) #20 missing% np.random.shuffle(tp) tp_uneven = tp[:round(0.8*len(t))] tp_uneven = sorted(tp_uneven) data_uneven = data20a[tp_uneven] __compareCommunityOfDiffWeights(data_uneven, tp_uneven, './draft_fig/fig4/A4.eps',noRemovedData=(sorted(tp),data20a), title = '20 percent missing data' ,cmap='jet', graph_type=graphtype, direction=None, cutoff=cf) #80 missing% np.random.shuffle(tp) tp_uneven = tp[:round(0.2*len(t))] tp_uneven = sorted(tp_uneven) data_uneven = data20a[tp_uneven] __compareCommunityOfDiffWeights(data_uneven, tp_uneven, './draft_fig/fig4/A5.eps',noRemovedData=(sorted(tp),data20a), title = '80 percent missing data' ,cmap='jet', graph_type=graphtype, direction=None, cutoff=cf) # + # different intensity and frequency # NB: remove seed to run new simulations np.random.seed(20) random.seed(20) t = np.linspace(0, 1, 150,endpoint=False) tp = np.arange(len(t)) cf = 'auto' graphtype = 'dual_natural' factor = 10 yTick = True xinter = 20 data00 = np.cos(6*np.pi*t) __compareCommunityOfDiffWeights(data00, tp, './draft_fig/fig4/B1.eps',title = 'amplitude=1,frequency=1 ' ,cmap='jet', graph_type=graphtype,direction=None, cutoff=cf,ySignalTick=yTick) data_a50 = factor*np.cos(6*np.pi*t) 
__compareCommunityOfDiffWeights(data_a50, tp, './draft_fig/fig4/B2.eps',title ='amplitude='+str(factor)+',frequency=1' ,cmap='jet', graph_type=graphtype,direction=None, cutoff=cf,ySignalTick=yTick) factor = 100 t = np.linspace(0, 1, 150,endpoint=False) data_a100 = factor*np.cos(6*np.pi*t) __compareCommunityOfDiffWeights(data_a100, tp, './draft_fig/fig4/B3.eps',title ='amplitude='+str(factor)+',frequency=1' ,cmap='jet', graph_type=graphtype,direction=None, cutoff=cf,ySignalTick=yTick) f = 5 data_f5 = np.cos(f*6*np.pi*t) __compareCommunityOfDiffWeights(data_f5, tp, './draft_fig/fig4/B4.eps',title = 'amplitude=1,frequency='+str(f) ,cmap='jet', graph_type=graphtype,direction=None, cutoff=cf,ySignalTick=yTick,xtickInterval=xinter) f = 10 data_f10= np.cos(f*6*np.pi*t) __compareCommunityOfDiffWeights(data_f10, tp, './draft_fig/fig4/B5.eps',title = 'amplitude=1,frequency='+str(f) ,cmap='jet', graph_type=graphtype,direction=None, cutoff=cf,ySignalTick=yTick,xtickInterval=xinter) # -
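# The heatmaps above use `community.girvan_newman` from networkx as one of the baseline methods. As a self-contained illustration of that baseline (a toy graph only — none of the visibility-graph code is involved):

```python
import networkx as nx
from networkx.algorithms import community

# Two 5-node cliques joined by a single bridge edge: an obvious
# two-community structure
G = nx.barbell_graph(5, 0)

# girvan_newman yields successively finer partitions; the first split
# removes the bridge edge and recovers the two cliques
first_partition = next(community.girvan_newman(G))
print(sorted(sorted(c) for c in first_partition))  # [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
```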
Simulation.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Functions

# ### Calling a function
#
# ```
# <<function name>>(<<arguments>>)
# ```
#
# Some familiar functions:

len("hello")

# ### Defining a function
#
# ```
# def <<function name>>(<<parameters>>):
#     <<function body>>
# ```
#
# `return` returns a result.

def plus_one(number):
    return number + 1

plus_one(10)
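# Putting calling and defining together — a small example (the function name `shout` is just for illustration):

```python
# Define a function that transforms its argument and returns the result
def shout(text):
    return text.upper() + "!"

shout("hello")  # 'HELLO!'
```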
archive/2016/week3/Functions.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: 'Python 3.8.10 64-bit (''api_book'': venv)'
#     name: python3
# ---

# # Basic ML serving

# In the previous chapter we created the following important objects and stored them in files:
#
# **ml-model-xgb.pkl** - the xgboost machine learning model, fitted on the data and ready to be used.
#
# **ml-model-lr.pkl** - the logistic regression machine learning model, fitted on the data and ready to be used.
#
# **ml-features.json** - a dictionary containing the features that the model was trained with. **NOTE**: it is very important to preserve the exact **key** sequence in all future use of the model.
#
# The file structure is as follows:
#
# ```
# ├── ml_models
# │   ├── ml-features.json
# │   ├── ml-model-lr.pkl
# │   └── ml-model-xgb.pkl
# ```
#
# No matter what type of serving - simple or complex - we are doing, a fitted model together with its feature list is the minimum requirement if we want anyone to use our ML solution.
#
# A basic chart for ML model serving is the following:
#
# ![ml-serving-basic](media/basic-ml-serving.png)
#
# The steps are the following:
#
# * Prepare the raw data for the model
# * Use the model with the prepared data
# * Store/use the predictions

# # Importing Python packages

# +
# Json reading
import json

# Pickle reading
import pickle

# Operating system functionality
import os

# Input simulation
from ipywidgets import interactive, widgets, interact
from IPython.display import display

# Data wrangling
import pandas as pd
# -

# # Reading the model objects

# Before any serving can be done, the necessary objects need to be loaded into the host computer's memory. This simple fact holds even for the most complex real-time ML model serving systems: somewhere, between all the clouds and servers, the objects are loaded into computer memory and used at runtime.
# +
# Saving the path to the ML folder
_ml_folder = os.path.join("..", 'ml_models')
_ml_model_path = os.path.join(_ml_folder, "ml-model-lr.pkl")
_ml_features_path = os.path.join(_ml_folder, "ml-features.json")

# Reading the model object
model = pickle.load(open(_ml_model_path, 'rb'))
features = json.load(open(_ml_features_path, 'rb'))

# Printing out the features
print(features)
# -

# # Model serving

# ## Simulating input

# To test out our ML model, we will simulate an input for it.

# +
# Has the bomb been planted?
bomb_planted = True

# Boolean for the presence of the defuse kit
ct_defuse_kit_present = False

# CT health share of total; the range is (0, 1.0)
ct_health_share = 0.75

# CT and T alive players; the value set is {1, 2, 3, 4, 5}
ct_players_alive = 4
t_players_alive = 3

# CT and T helmets; the value set is {1, 2, 3, 4, 5}
ct_helmets = 4
t_helmets = 3

# +
# Creating a dictionary which will be used as raw input
raw_input = {
    "bomb_planted": bomb_planted,
    "ct_defuse_kit_present": ct_defuse_kit_present,
    "ct_health_share": ct_health_share,
    "ct_players_alive": ct_players_alive,
    "t_players_alive": t_players_alive,
    "ct_helmets": ct_helmets,
    "t_helmets": t_helmets
}

# Displaying the raw input
print(raw_input)
# -

# ### Input preprocessing function

# Let's define a function that prepares the input for the model given the raw input dictionary and the saved features JSON object.

def prepare_input(raw_input_dict: dict, features: dict) -> pd.DataFrame:
    """
    Function that accepts the raw input dictionary and the features dictionary
    and returns a pandas dataframe with the input prepared for the model.
""" # Extracting the key names feature_names = list(raw_input_dict.keys()) original_feature_names = list(features.keys()) # Ensuring that all the keys present in **features** are in **raw_input_dict** missing_features = set(original_feature_names) - set(feature_names) if len(missing_features): return print(f"Missing features in input: {missing_features}") # Iterating and preprocesing prepared_features = {} for feature in feature_names: # Extracting the type of the feature feature_type = features.get(feature) # Converting to that type feature_value = raw_input_dict.get(feature) if feature_type == "float64": feature_value = float(feature_value) if feature_type == "int64": feature_value = int(feature_value) # Saving to the prepared features dictionary prepared_features[feature] = feature_value # Creating a dataframe from the prepared features df = pd.DataFrame(prepared_features, index=[0]) # Ensuring that the names are in the exact order df = df[original_feature_names] # Returning the dataframe return df # Testing out the function input_df = prepare_input(raw_input, features) print(f"Preprocesed input for ML model:\n{input_df}") # ## Using the model # Now that we have the input with the exact same structure with which the model was built, we can use that input for extracting predictions. # + # Getting the probabilities p = model.predict_proba(input_df)[0] # Initial results print(p) # - # The extracted probabilities is vector of two and defines the following probabilities: # # $$ \mathbb{P(Y = 0|X)} $$ # # And # # $$ \mathbb{P(Y = 1|X)}$$ # # Note that: # # $$ \mathbb{P(Y = 1|X)} = 1 - \mathbb{P(Y = 0|X)} $$ # # In other words, the first coordinate is the probability that CT team will lose with the given inputs and the second coordinate is the probability that CT will win with the given inputs. # # These two probabilities is the final output of the machine learning model. How we use these results is a matter of our fantasy. 
# # Limitations of this approach

# This notebook is perfect for playing around with the model and seeing how certain feature values influence its predictions. This interactive notebook could pass as a sufficient serving API for a data scientist, for debugging or for presenting results.
#
# But we cannot call this type of serving **production** grade serving, for several reasons:
#
# * The end user would have to download the notebook, the model and the feature list every time there is an update.
# * Different machines may produce different results from the notebook (or produce an error).
# * It is hard to integrate a Jupyter notebook into any working systems that the end user may have.
# * The code in the notebook would not deal well with a batch of inputs (>1 row in the dataset).
#
# In the next chapters of this book we will explain all the steps and technologies needed to create a production-ready ML serving system.
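# As a sketch of the batch limitation above: the single-row `prepare_input` idea can be generalized to a list of raw input dictionaries. The feature dictionary here is a simplified stand-in, not the real ml-features.json:

```python
import pandas as pd

# Simplified stand-ins: a dtype dictionary and two raw inputs
features = {"ct_health_share": "float64", "ct_players_alive": "int64"}
raw_rows = [
    {"ct_health_share": 0.75, "ct_players_alive": 4},
    {"ct_health_share": 0.40, "ct_players_alive": 2},
]

def prepare_batch(rows, features):
    # Coerce each row's values to the stored dtype, then stack into one frame
    prepared = [
        {k: float(row[k]) if t == "float64" else int(row[k])
         for k, t in features.items()}
        for row in rows
    ]
    # Keep the exact column order the model was trained with
    return pd.DataFrame(prepared)[list(features.keys())]

batch_df = prepare_batch(raw_rows, features)
print(batch_df.shape)  # (2, 2)
```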
api-book/_build/jupyter_execute/chapter-4-ML/machine-learning-model-serving.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import requests
from pprint import pprint

# #import jasu
# #my_food_name = '吉野屋白菜キムチ'
# my_food_name = str(input())
# general_food_list = generalize_food(my_food_name)
# ids = [get_genre_id(general_food)
#        for general_food in general_food_list]

url = 'https://app.rakuten.co.jp/services/api/Recipe/CategoryRanking/20170426?'
payload = {
    'applicationId': [1098435242683481258],
    'categoryId': '12-450',
}
r = requests.get(url, params=payload)
resp = r.json()
print('-'*40)
recipe_keys = [1, 2, 3, 4]
recipe_values = [{"recipe": [{"URL": [recipe["recipeUrl"]]},
                             {"title": [recipe["recipeTitle"]]},
                             {"image": recipe["foodImageUrl"]},
                             {"indication": recipe["recipeIndication"]},
                             {"cost": recipe["recipeCost"]}]}
                 for recipe in resp['result']]
dic = dict(zip(recipe_keys, recipe_values))
pprint(dic)
print('-'*40)

# +
# One ingredient name from a receipt -> set of generalized ingredient names
def generalize_food(my_food_name):
    with open("genre_name_list.txt") as general_food_list:
        # set comprehension
        result = {general_food_name.strip() for general_food_name in general_food_list
                  if general_food_name.strip() in my_food_name}
    return result

# One generalized ingredient -> its genre id
def get_genre_id(target_food):
    with open("genre_id.txt") as id_food_text:
        for id_food in id_food_text:
            id, food = id_food.split()
            if food == target_food:
                return id
# -

print(generalize_food("ごまさば"))
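# The same substring-matching idea as `generalize_food`, but with the genre list passed in memory instead of read from `genre_name_list.txt` — a sketch that makes the matching testable without the data files:

```python
def generalize_food_from_list(my_food_name, genre_names):
    # Keep every known genre name that occurs inside the receipt item name
    return {name for name in genre_names if name in my_food_name}

genres = ["白菜", "キムチ", "さば"]
print(generalize_food_from_list("吉野屋白菜キムチ", genres))
```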
jasu_recipe/Untitled.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.5 64-bit (''base'': conda)' # name: python385jvsc74a57bd04aed8422883dcce4c415b635faad05cbba72d9d3ae0f391a4fe3f5e91f7e6027 # --- # + import entrezpy.esearch.esearcher esearch = entrezpy.esearch.esearcher.Esearcher( '<NAME>', '<EMAIL>' ) a = esearch.inquire({ 'db' : 'pubmed','term':'kruse eiken vestergaard', 'retmax': 50, 'rettype': 'uilist' }) uids = list(set(a.get_result().uids)) print( uids ) # + tags=[] import entrezpy.esummary.esummarizer esummary = entrezpy.esummary.esummarizer.Esummarizer( '<NAME>', '<EMAIL>' ) a = esummary.inquire({ 'db' : 'pubmed', 'id' : uids }) summaries = a.get_result().summaries print( summaries ) # + tags=[] import entrezpy.efetch.efetcher e = entrezpy.efetch.efetcher.Efetcher( '<NAME>', '<EMAIL>') analyzer = e.inquire({'db' : 'pubmed', 'id' : uids, 'retmode' : 'text', 'rettype' : 'abstract'}) print(analyzer.count, analyzer.retmax, analyzer.retstart, analyzer.uids) # + import entrezpy.elink.elinker import entrezpy.elink.elink_analyzer e = entrezpy.elink.elinker.Elinker('<NAME>', '<EMAIL>') a = entrezpy.elink.elink_analyzer.ElinkAnalyzer() query = {'dbfrom' : 'pubmed', 'id' : uids , 'cmd' : 'llinks'} analyzer = e.inquire(query, analyzer=a) analyzer.get_result().dump() # - # # MeSH mesh_term = '("COVID-19"[Mesh]) AND "Machine Learning"[Mesh]' # ## Count # + import entrezpy.esearch.esearcher esearch = entrezpy.esearch.esearcher.Esearcher( '<NAME>', '<EMAIL>' ) a = esearch.inquire({ 'db' : 'pubmed','term': mesh_term, 'rettype': 'uilist', 'sort': 'most+recent'}) uids = list(set(a.get_result().uids)) uids = uids[0:1] len( uids ) # - # ## Text # + import entrezpy.esearch.esearcher esearch = entrezpy.esearch.esearcher.Esearcher( '<NAME>', '<EMAIL>' ) a = esearch.inquire({ 'db' : 'pubmed','term': mesh_term, 'rettype': 'uilist', 'sort': 'most+recent'}) uids = 
list(set(a.get_result().uids)) uids = uids[0:1] print( uids ) # - # ## Grab JSON # + tags=["outputPrepend"] import entrezpy.esummary.esummarizer import json esummary = entrezpy.esummary.esummarizer.Esummarizer( '<NAME>', '<EMAIL>' ) a = esummary.inquire({ 'db' : 'pubmed', 'id' : uids }) summaries = a.get_result().summaries print(json.dumps(summaries, indent=4, sort_keys=True)) # + tags=[] import entrezpy.efetch.efetcher e = entrezpy.efetch.efetcher.Efetcher( '<NAME>', '<EMAIL>') analyzer = e.inquire({'db' : 'pubmed', 'id' : uids, #'retmode' : 'xml', 'rettype' : 'abstract'}) # - analyzer.analyze_result() entrezpy.base.analyzer.EutilsAnalzyer.analyze_result() from bs4 import BeautifulSoup y=BeautifulSoup(analyzer) analyzer.PubmedArticle # # Semantic API Citation Count # #!pip install semanticscholar import semanticscholar as sch import semanticscholar as sch paper = sch.paper('10.1007/s00198-016-3828-8', timeout=2) #paper.keys() #paper['citations'] #paper['influentialCitationCount'] paper['numCitedBy'] #for author in paper['authors']: # print(author['name']) # print(author['authorId']) # # Create a parser # + import entrezpy.esummary.esummarizer import entrezpy.efetch.efetcher import entrezpy.elink.elinker import entrezpy.elink.elink_analyzer def parse_uids(uids): esummary = entrezpy.esummary.esummarizer.Esummarizer( '<NAME>', '<EMAIL>' ) efetch = entrezpy.efetch.efetcher.Efetcher( '<NAME>', '<EMAIL>') elinkanalysis = entrezpy.elink.elink_analyzer.ElinkAnalyzer() elink = entrezpy.elink.elinker.Elinker('<NAME>', '<EMAIL>') result_summary = esummary.inquire({ 'db' : 'pubmed', 'id' : uids, 'retmax': 10 }).get_result().summaries result_fetch = efetch.inquire({'db' : 'pubmed', 'id' : uids, 'retmode' : 'xml', 'rettype' : 'abstract'}).get_result() result_elinks = elink.inquire({'dbfrom' : 'pubmed', 'id' : uids , 'cmd' : 'llinks'}, analyzer=a).get_result().dump() parse_uids(uids = [25466529]) # - # # Gitlab example # + sys.path.insert(1, os.path.join(sys.path[0], '../src')) 
import entrezpy.efetch.efetcher # Python argument parser (see [2] for more details) demo_src = 'https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.EFetch' ap = argparse.ArgumentParser(description="Entrezpy Efetcher() examples ({})".format(demo_src)) ap.add_argument('--email', type=str, required=True, help='email required by NCBI') ap.add_argument('--apikey', type=str, default=None, help='NCBI apikey (optional)') args = ap.parse_args() # Prepare list of examples. Each example is a parameter dictionary as expected # by Efetcher. examples = [ {'db' : 'pubmed','id' : [17284678,9997], 'retmode':'text', 'rettype': 'abstract'} ] # Loop over examples start = time.time() for i in range(len(examples)): # Loop over retmodes for j in ['xml', 'text']: qrystart = time.time() # Init Efetcher ef = entrezpy.efetch.efetcher.Efetcher('efetcher', args.email, args.apikey) # Set retmode examples[i].update({'retmode':j}) # Fetch example and return default efetch analyzer a = ef.inquire(examples[i]) print("+Query {}\n+++\tParameters: {}\n+++\tStatus:".format(i, examples[i]), end='') # Test is query has been successful, e.g. no connection or NCBI errors. # In such a case, None would have been returned if not a: print("\tFailed: Response errors") return 0 print("\tResponse OK") print("+++\tQuery time: {} sec".format(time.time()-qrystart)) print("+Total time: {} sec".format(time.time()-start)) return 0 # -
notebooks/entrezpy/search_covid.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: 'Python 3.8.5 64-bit (''spotify'': conda)'
#     metadata:
#       interpreter:
#         hash: a6734130388b60d2adfe9a41ba3ffb192d5b173694da81b45e75d546d4eae05c
#     name: python3
# ---

# ## Import libraries

# Import the libraries used by the code below

import matplotlib.pyplot as plt
from kneed import KneeLocator
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler
import pandas as pd
import joblib

# ## Read file

# Read the file in and set the index to id

filePath = "data/data.csv"
spotify_df = pd.read_csv(filePath)
spotify_df = spotify_df.set_index("id")
spotify_df.head(5)

# ## Feature selection

# Select features for training the clustering model

spotify_df_features = spotify_df[["acousticness", "danceability", "duration_ms", "energy",
                                  "instrumentalness", "key", "liveness", "loudness",
                                  "speechiness", "tempo", "valence"]]

# ## Standardise the features

# Use StandardScaler to standardise the features

scaler = StandardScaler()
spotify_scaler = scaler.fit(spotify_df_features)  # saved for later use
spotify_df_scaled = spotify_scaler.transform(spotify_df_features)

## a boolean flag to run the analysis or not; running the analysis can take 1+ hours
run_analysis = False # ## Run the kmeans algorithm # Run the Kmeans algorithm multiple times and record run metrics that shall be used later to evaluate what hyperparameters is best # + kmeans_kwargs = { "init": "k-means++", "n_init": 10, "max_iter": 300, "random_state": 42 } if run_analysis: n_cluster_start = 2 n_cluster_end = 25 sse = [] silhouette_coefficients = [] for i in range(n_cluster_start, n_cluster_end): kmeans = KMeans(n_clusters=i, **kmeans_kwargs) kmeans.fit(spotify_df_scaled) sse.append(kmeans.inertia_) score = silhouette_score(spotify_df_scaled, kmeans.labels_) silhouette_coefficients.append(score) # - # ## Plot analysis if run_analysis: plt.plot(range(n_cluster_start, n_cluster_end), sse) plt.xticks(range(n_cluster_start, n_cluster_end)) plt.xlabel("Number of Clusters") plt.ylabel("SSE") plt.show() if run_analysis: kl = KneeLocator(range(n_cluster_start,n_cluster_end), sse, curve="convex", direction="decreasing") kl.elbow if run_analysis: plt.plot(range(n_cluster_start, n_cluster_end), silhouette_coefficients) plt.xticks(range(n_cluster_start, n_cluster_end)) plt.xlabel("Number of Clusters") plt.ylabel("Silhouette Coefficient") plt.show() # ## Train the final kmeans model # After viewing the analysis, train the final model that shall be used by the app # create model from elbow kmeans = KMeans(n_clusters=25, **kmeans_kwargs) kmeans.fit(spotify_df_scaled) # ## Label the input dataset # Add cluster labels to the dataset labels = kmeans.predict(spotify_df_scaled) spotify_df["label"] = labels spotify_df.head(5) # ## Save labelled data to CSV # save labeled df to csv spotify_df.to_csv("data/spotify_data_labeled.csv") # ## Save the trained model and scalar object # We need to save the trained model and scalar object so that the web app can use it later # export scaler joblib.dump(spotify_scaler, "model/scaler.joblib") # export model joblib.dump(kmeans, "model/kmeans.joblib")
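As a reminder of what `KMeans.fit` is doing under the hood, here is a minimal pure-Python sketch of Lloyd's algorithm on 1-D toy data. This is illustrative only — scikit-learn's implementation adds k-means++ seeding, multiple restarts via `n_init`, and tolerance-based convergence checks:

```python
def kmeans_1d(points, centers, iters=10):
    # Lloyd's algorithm on 1-D data: assign each point to its nearest
    # center, then move each center to the mean of its assigned points
    for _ in range(iters):
        clusters = {i: [] for i in range(len(centers))}
        for p in points:
            nearest = min(range(len(centers)), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        centers = [sum(members) / len(members) if members else centers[i]
                   for i, members in sorted(clusters.items())]
    return centers

print(kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 10.0], [0.0, 5.0]))  # [1.0, 9.5]
```

The SSE ("inertia") tracked in the elbow analysis above is simply the sum of squared distances from each point to its final cluster center.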
ml/kmeans.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # __Keras__: High-level API that allows defining different neural network architectures and training them using various of gradient-based optimizers. In the backend, Keras uses a low-level computational framework that is implemented in C, C++, and FORTRAN. Several such low-level frameworks are available open source. Keras supports the following three: # * TensorFlow, which was developed by Google and is the default backend of Keras # * CNTK, an opensource framework from Microsoft # * Theano, which was originally developed at University of Montreal, Canada # # Examples in this book use TensorFlow as the backend. # from __future__ import print_function import os import sys import pandas as pd import numpy as np # %matplotlib inline from matplotlib import pyplot as plt import seaborn as sb import datetime df = pd.read_csv('https://github.com/sri-spirited/Practical-Time-Series-Analysis-Python/blob/master/Data%20Files/PRSA_data_2010.1.1-2014.12.31.csv?raw=true') print('Shape of the dataframe:', df.shape) df.head() df.dropna(subset=['pm2.5'], axis=0, inplace=True) df.reset_index(drop=True, inplace=True) # When drop=True, a series is returned df['datetime'] = df[['year', 'month', 'day', 'hour']].apply(lambda row: datetime.datetime(year=row['year'], month=row['month'], day=row['day'], hour=row['hour']), axis=1) df.sort_values('datetime', ascending=True, inplace=True) plt.figure(figsize=(5.5, 5.5)) g = sb.boxplot(df['pm2.5']) g.set_title('Box plot of pm2.5'); plt.figure(figsize=(5.5, 5.5)) g = sb.tsplot(df['pm2.5']) g.set_title('Time series of pm2.5') g.set_xlabel('Index') g.set_ylabel('pm2.5 readings'); # + plt.figure(figsize=(5.5, 5.5)) g = sb.tsplot(df['pm2.5'].loc[df['datetime']<=datetime.datetime(year=2010,month=6,day=30)], color='g') 
g.set_title('pm2.5 during 2010') g.set_xlabel('Index') g.set_ylabel('pm2.5 readings') plt.figure(figsize=(5.5, 5.5)) g = sb.tsplot(df['pm2.5'].loc[df['datetime']<=datetime.datetime(year=2010,month=1,day=31)], color='g') g.set_title('pm2.5 during Jan 2010') g.set_xlabel('Index') g.set_ylabel('pm2.5 readings'); # - from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler(feature_range=(0, 1)) df['scaled_pm2.5'] = scaler.fit_transform(np.array(df['pm2.5']).reshape(-1, 1)) split_date = datetime.datetime(year=2014, month=1, day=1, hour=0) df_train = df.loc[df['datetime']<split_date] df_val = df.loc[df['datetime']>=split_date] print('Shape of train:', df_train.shape) print('Shape of validation:', df_val.shape) df_train.head() df_val.head() df_val.reset_index(drop=True, inplace=True) # + plt.figure(figsize=(5.5, 5.5)) g = sb.tsplot(df_train['scaled_pm2.5'], color='b') g.set_title('Time series of scaled pm2.5 in train set') g.set_xlabel('Index') g.set_ylabel('Scaled pm2.5 readings') plt.figure(figsize=(5.5, 5.5)) g = sb.tsplot(df_val['scaled_pm2.5'], color='r') g.set_title('Time series of scaled pm2.5 in validation set') g.set_xlabel('Index') g.set_ylabel('Scaled pm2.5 readings') # - def makeXy(ts, nb_timesteps): X = [] y = [] for i in range(nb_timesteps, ts.shape[0]):#From 7 to len(array) X.append(list(ts.loc[i-nb_timesteps:i-1])) y.append(ts.loc[i]) X, y = np.array(X), np.array(y) return X, y X_train, y_train = makeXy(df_train['scaled_pm2.5'], 7) print('Shape of train arrays:', X_train.shape, y_train.shape) X_val, y_val = makeXy(df_val['scaled_pm2.5'], 7) print('Shape of validation arrays:', X_val.shape, y_val.shape) from keras.layers import Dense, Input, Dropout from keras.optimizers import SGD from keras.models import Model from keras.models import load_model from keras.callbacks import ModelCheckpoint input_layer = Input(shape=(7,), dtype='float32') dense1 = Dense(32, activation='tanh')(input_layer) dense2 = Dense(16, activation='tanh')(dense1) dense3 =
Dense(16, activation='tanh')(dense2) dropout_layer = Dropout(0.2)(dense3) output_layer = Dense(1, activation='linear')(dropout_layer) ts_model = Model(inputs=input_layer, outputs=output_layer) ts_model.compile(loss='mean_absolute_error', optimizer='adam') ts_model.summary() # + # save_weights_at = os.path.join('datasets', 'PRSA_data_2010.1.1-2014.12.31.{epoch:02d}-{val_loss:.4f}.hdf5') # save_best = ModelCheckpoint(save_weights_at, monitor='val_loss', verbose=0, # save_best_only=True, save_weights_only=False, mode='min', # period=1) ts_model.fit(x=X_train, y=y_train, batch_size=16, epochs=20, verbose=1, #callbacks=[save_best], validation_data=(X_val, y_val), shuffle=True) # - # best_model = load_model(os.path.join('datasets', 'PRSA_data_2010.1.1-2014.12.31.03-0.0119.hdf5')) preds = ts_model.predict(X_val) # preds = best_model.predict(X_val) pred_pm25 = scaler.inverse_transform(preds) pred_pm25 = np.squeeze(pred_pm25) from sklearn.metrics import mean_absolute_error mae = mean_absolute_error(df_val['pm2.5'].loc[7:], pred_pm25) print('MAE for the validation set:', round(mae, 4)) plt.figure(figsize=(5.5, 5.5)) plt.plot(range(50), df_val['pm2.5'].loc[7:56], linestyle='-', marker='*', color='r') plt.plot(range(50), pred_pm25[:50], linestyle='-', marker='.', color='b') plt.legend(['Actual','Predicted'], loc=2) plt.title('Actual vs Predicted pm2.5') plt.ylabel('pm2.5') plt.xlabel('Index') # plt.savefig('plots/Section 5/_05_10.png', format='png', dpi=300) # # Time series forecasting by LSTM from keras.layers.recurrent import LSTM X_train, X_val = X_train.reshape((X_train.shape[0], X_train.shape[1], 1)), X_val.reshape((X_val.shape[0], X_val.shape[1], 1)) print('Shape of arrays after reshaping:', X_train.shape, X_val.shape) input_layer = Input(shape=(7,1), dtype='float32') lstm_layer1 = LSTM(64, input_shape=(7,1), return_sequences=True)(input_layer) lstm_layer2 = LSTM(32, input_shape=(7,64), return_sequences=False)(lstm_layer1) dropout_layer = Dropout(0.2)(lstm_layer2) 
output_layer = Dense(1, activation='linear')(dropout_layer) ts_model = Model(inputs=input_layer, outputs=output_layer) ts_model.compile(loss='mean_absolute_error', optimizer='adam') ts_model.summary() save_weights_at = os.path.join('datasets', 'PRSA_data_2010.1.1-2014.12.31.{epoch:02d}-{val_loss:.4f}.hdf5') save_best = ModelCheckpoint(save_weights_at, monitor='val_loss', verbose=0, save_best_only=True, save_weights_only=False, mode='min', period=1) ts_model.fit(x=X_train, y=y_train, batch_size=16, epochs=20, verbose=1, callbacks=[save_best], validation_data=(X_val, y_val), shuffle=True) best_model = load_model(os.path.join('datasets', 'PRSA_data_2010.1.1-2014.12.31.01-0.0117.hdf5')) preds = best_model.predict(X_val) pred_pm25 = scaler.inverse_transform(preds) pred_pm25 = np.squeeze(pred_pm25) from sklearn.metrics import mean_absolute_error mae = mean_absolute_error(df_val['pm2.5'].loc[7:], pred_pm25) print('MAE for the validation set:', round(mae, 4)) plt.figure(figsize=(5.5, 5.5)) plt.plot(range(50), df_val['pm2.5'].loc[7:56], linestyle='-', marker='*', color='r') plt.plot(range(50), pred_pm25[:50], linestyle='-', marker='.', color='b') plt.legend(['Actual','Predicted'], loc=2) plt.title('Actual vs Predicted pm2.5') plt.ylabel('pm2.5') plt.xlabel('Index') # # Time series forecasting by GRU from keras.layers.recurrent import GRU X_train, X_val = X_train.reshape((X_train.shape[0], X_train.shape[1], 1)), X_val.reshape((X_val.shape[0], X_val.shape[1], 1)) print('Shape of arrays after reshaping:', X_train.shape, X_val.shape) input_layer = Input(shape=(7,1), dtype='float32') gru_layer1 = GRU(64, input_shape=(7,1), return_sequences=True)(input_layer) gru_layer2 = GRU(32, input_shape=(7,64), return_sequences=False)(gru_layer1) dropout_layer = Dropout(0.2)(gru_layer2) output_layer = Dense(1, activation='linear')(dropout_layer) ts_model = Model(inputs=input_layer, outputs=output_layer) ts_model.compile(loss='mean_absolute_error', optimizer='adam')
ts_model.summary() save_weights_at = os.path.join('datasets', 'PRSA_data_2010.1.1-2014.12.31.{epoch:02d}-{val_loss:.4f}.hdf5') save_best = ModelCheckpoint(save_weights_at, monitor='val_loss', verbose=0, save_best_only=True, save_weights_only=False, mode='min', period=1) ts_model.fit(x=X_train, y=y_train, batch_size=16, epochs=20, verbose=1, callbacks=[save_best], validation_data=(X_val, y_val), shuffle=True) best_model = load_model(os.path.join('datasets', 'PRSA_data_2010.1.1-2014.12.31.05-0.0120.hdf5')) preds = best_model.predict(X_val) pred_pm25 = scaler.inverse_transform(preds) pred_pm25 = np.squeeze(pred_pm25) from sklearn.metrics import mean_absolute_error mae = mean_absolute_error(df_val['pm2.5'].loc[7:], pred_pm25) print('MAE for the validation set:', round(mae, 4)) plt.figure(figsize=(5.5, 5.5)) plt.plot(range(50), df_val['pm2.5'].loc[7:56], linestyle='-', marker='*', color='r') plt.plot(range(50), pred_pm25[:50], linestyle='-', marker='.', color='b') plt.legend(['Actual','Predicted'], loc=2) plt.title('Actual vs Predicted pm2.5') plt.ylabel('pm2.5') plt.xlabel('Index') # # Time series forecasting by 1D Convolution from keras.layers import Flatten from keras.layers.convolutional import ZeroPadding1D from keras.layers.convolutional import Conv1D from keras.layers.pooling import AveragePooling1D X_train, X_val = X_train.reshape((X_train.shape[0], X_train.shape[1], 1)), X_val.reshape((X_val.shape[0], X_val.shape[1], 1)) print('Shape of arrays after reshaping:', X_train.shape, X_val.shape) input_layer = Input(shape=(7,1), dtype='float32') zeropadding_layer = ZeroPadding1D(padding=1)(input_layer) conv1D_layer1 = Conv1D(64, 3, strides=1, use_bias=True)(zeropadding_layer) conv1D_layer2 = Conv1D(32, 3, strides=1, use_bias=True)(conv1D_layer1) avgpooling_layer = AveragePooling1D(pool_size=3, strides=1)(conv1D_layer2) flatten_layer = Flatten()(avgpooling_layer) dense_layer1 = Dense(32)(avgpooling_layer) dense_layer2 = Dense(16)(dense_layer1) dropout_layer = 
Dropout(0.2)(flatten_layer) output_layer = Dense(1, activation='linear')(dropout_layer) ts_model = Model(inputs=input_layer, outputs=output_layer) ts_model.compile(loss='mean_absolute_error', optimizer='adam') ts_model.summary() save_weights_at = os.path.join('datasets', 'PRSA_data_2010.1.1-2014.12.31.{epoch:02d}-{val_loss:.4f}.hdf5') save_best = ModelCheckpoint(save_weights_at, monitor='val_loss', verbose=0, save_best_only=True, save_weights_only=False, mode='min', period=1) ts_model.fit(x=X_train, y=y_train, batch_size=16, epochs=20, verbose=1, callbacks=[save_best], validation_data=(X_val, y_val), shuffle=True) best_model = load_model(os.path.join('datasets', 'PRSA_data_2010.1.1-2014.12.31.02-0.0133.hdf5')) preds = best_model.predict(X_val) pred_pm25 = scaler.inverse_transform(preds) pred_pm25 = np.squeeze(pred_pm25) from sklearn.metrics import mean_absolute_error mae = mean_absolute_error(df_val['pm2.5'].loc[7:], pred_pm25) print('MAE for the validation set:', round(mae, 4)) plt.figure(figsize=(5.5, 5.5)) plt.plot(range(50), df_val['pm2.5'].loc[7:56], linestyle='-', marker='*', color='r') plt.plot(range(50), pred_pm25[:50], linestyle='-', marker='.', color='b') plt.legend(['Actual','Predicted'], loc=2) plt.title('Actual vs Predicted pm2.5') plt.ylabel('pm2.5') plt.xlabel('Index')
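The windowing logic in `makeXy` above is easy to check on a toy sequence. This pure-Python equivalent (plain lists instead of a pandas Series and NumPy arrays) shows how each target is paired with the `nb_timesteps` values that precede it:

```python
def make_xy(series, nb_timesteps):
    # X[i] holds the nb_timesteps values preceding position i;
    # y[i] holds the value that immediately follows that window
    X, y = [], []
    for i in range(nb_timesteps, len(series)):
        X.append(series[i - nb_timesteps:i])
        y.append(series[i])
    return X, y

X, y = make_xy([10, 20, 30, 40, 50], 2)
print(X)  # [[10, 20], [20, 30], [30, 40]]
print(y)  # [30, 40, 50]
```

This also makes it clear why the validation targets start at index `nb_timesteps` — the first `nb_timesteps` observations are consumed as inputs only, which is why the notebook compares `pred_pm25` against `df_val['pm2.5'].loc[7:]`.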
Section 5/5.PM2.5_MLPs.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Dictionaries # # Now it's time to practice with dictionaries! # # Let's start by making a dictionary of relatives in your family. Make a dictionary where the keys are a relation (ex. mom, dad, sister, brother, uncle, aunt, cousin, etc.), and the values are the people's name(s) who are related to you in that way: # + # make dictionary of relations family = {'mom':'carmen', 'dad':'dan', 'sister':'maya', 'brother':'sam', 'uncle':['russell','david'], 'aunt':['helene','sharry','diana','debbie'], 'cousin':['zane','nico','hannah','alyssa','zach','kamilya','ismail','saidi']} # print out your dictionary print(family) # - # Now access all your aunts: # access all aunts in dictionary family['aunt'] # What if you don't remember all of the keys in your dictionary? How can you print them all out? # print all keys in dictionary family.keys() # Now you've decided that you want to add some of your closest friends to the list too. Add a key-value pair to your dictionary with some of your closest friends: # + # add friends to dictionary family['friends'] = ['brooke','marlena','kelly'] # print dictionary to see if it worked! family # - # See how `friends` shows up at the end of the dictionary? Since Python 3.7, dictionaries remember *insertion order*; in older versions of Python they were unordered, and the keys could print in any order. # # Now let's loop through our dictionary and print out the name of the key and the length of the values (how many of that relation you have — though note that for values that are single strings, like `'carmen'`, `len` counts characters rather than people): for k,v in family.items(): print(k) print(len(v)) # **Challenge:** Let's add in a friend to our dictionary. How can we do that? (*Note:* We didn't learn this exact thing. Feel free to use Google if you need to!)
# + # add friend to friends key-value pair family['friends'].append('stephanie') # print dictionary to see if it worked print(family) # - # Say we changed our mind and now we actually want our friends to be in a separate list. How could we do that using one line of Python code? # + # get friends out of dictionary friends = family.pop('friends') # print friends print(friends) # - # Nice job! You're becoming an expert at accessing and manipulating keys and values in dictionaries.
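One more trick worth knowing about `pop`: it takes an optional second argument that is returned when the key is missing, which avoids a `KeyError`:

```python
contacts = {'mom': 'carmen', 'friends': ['brooke', 'marlena']}

# pop with a default: no KeyError, even when the key doesn't exist
friends = contacts.pop('friends', [])
pets = contacts.pop('pets', [])

print(friends)   # ['brooke', 'marlena']
print(pets)      # []
print(contacts)  # {'mom': 'carmen'}
```

This is handy whenever you aren't sure a key is present, such as removing an optional entry from a dictionary of settings.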
Practices/_Keys/KEY_Practice18_Dictionaries.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Tuber_Culosis_Data_Visualisation # + ## Importing important libraries import pandas as pd import numpy as np import matplotlib import matplotlib.pyplot as plt matplotlib.rc('xtick', labelsize=12) matplotlib.rc('ytick', labelsize=30) matplotlib.rcParams.update({'font.size': 28}) import math import datetime as dt import os import sys # - # ## Utility Functions # + ## Visulalization function def Visualize(dataset,List_of_count_to_print,title1,ylab,vx=50,vy=30,w=.80): df = dataset n = 0 for i in List_of_count_to_print: filter1 = df['Country'] == i df = df[filter1] labels = df['Date'] conf = df['Confirmed'] Recov = df['Recovered'] Death = df['Deaths'] #high = max(conf) #low = min(conf) x = np.arange(len(labels)) # the x label locations width = w # the width of the bars fig, ax = plt.subplots(figsize=(vx,vy)) rects1 = ax.bar(x - width, conf, width, label='confirmed') rects2 = ax.bar(x , Recov, width, label='Recovered') rects3 = ax.bar(x + width , Death, width, label='Death') # Add some text for labels, title and custom x-axis tick labels, etc. 
ax.set_ylabel(ylab) ax.set_title(title1) ax.set_xticks(x) plt.xticks(rotation=90) #plt.ylim([math.ceil(low-0.5*(high-low)), math.ceil(high+0.5*(high-low))]) ax.set_xticklabels(labels) ax.legend() n = n + 1 plt.show() ## function to Check the List of Countries avaialable def count_avalaible(dataframe,country_coul_rep = 'Country'): x = 0 for i in set(dataframe.loc[:,country_coul_rep]): print(i,end=' | ') x = x + 1 if(x > 6): x = 0 print() print("\n\n##Total No of Countries = " + str(len(set(dataframe.loc[:,country_coul_rep])))) # - # ## Loading Tuber_Cluosis Data Tuber_Culosis_Countires_Wise = pd.read_csv('../../TB Data/TB_notifications_2020-08-17.csv') Tuber_Culosis_Countires_Wise # + ## Check the List of Countries avaialable ## Columns renaming for Uniformity Tuber_Culosis_Countires_Wise = Tuber_Culosis_Countires_Wise.rename(columns={'country': 'Country'}) count_avalaible(Tuber_Culosis_Countires_Wise,'Country') # + ## Analysing the data Structure Country_to_look_for = 'India' ylab = "Affected_Population" xlab = "Year" filter1 = Tuber_Culosis_Countires_Wise['Country'] == Country_to_look_for Tuber_Culosis_Countires_Wise_country_specific = Tuber_Culosis_Countires_Wise[filter1] Tuber_Culosis_Countires_Wise_country_specific #Tuber_Culosis_Countires_Wise ## Uncomment this to view for all countires at once # + ## Visualisation df = Tuber_Culosis_Countires_Wise_country_specific labels = df['year'] prev_c_newinc = df['c_newinc'] prev_hiv_tbscr = df['hiv_tbscr'] prev_hiv_reg = df['hiv_reg'] title1 = 'TB prevelance in ' + str(Country_to_look_for) #high = int(max(prev_2018)) #low = 0 x = np.arange(len(labels)) # the x label locations width = .50 # the width of the bars fig, ax = plt.subplots(figsize=(40,30)) rects1 = ax.bar(x-width/2, prev_c_newinc, width, label='prev_c_newinc') rects2 = ax.bar(x, prev_hiv_tbscr, width, label='prev_hiv_tbscr') rects3 = ax.bar(x+width/2, prev_hiv_reg, width, label='prev_hiv_reg') # Add some text for labels, title and custom x-axis tick labels, 
etc. ax.set_ylabel(ylab) ax.set_xlabel(xlab) ax.set_title(title1) ax.set_xticks(x) #ax.set_yticks(y) plt.xticks(rotation=90) #plt.ylim([math.ceil(low-0.5*(high-low)), math.ceil(high+0.5*(high-low))]) ax.set_xticklabels(labels) ax.legend() plt.show() # - # ## Cleaning Tuber_Culosis DATA(Preprocessing) # + Tuber_Culosis_Countires_Wise = Tuber_Culosis_Countires_Wise.fillna(0) Tuber_Culosis_Countires_Wise_imp_features_only = Tuber_Culosis_Countires_Wise.drop(Tuber_Culosis_Countires_Wise.columns.difference(['Country','c_newinc','new_labconf']), 1) Tuber_Culosis_Countires_Wise_imp_features_only = Tuber_Culosis_Countires_Wise_imp_features_only.replace('United Kingdom of Great Britain and Northern Ireland', 'United Kingdom') Tuber_Culosis_Countires_Wise_imp_features_only = Tuber_Culosis_Countires_Wise_imp_features_only.replace('United States of America', 'US') # - pd.set_option('display.max_columns', None) Tuber_Culosis_Countires_Wise Tuber_Culosis_Countires_Wise_imp_features_only # + ## Column match print('-----------------------------------------------------------------') countries = ['Afghanistan','Italy' , 'Kuwait', 'India', 'South Africa' ,'US', 'United Kingdom','Sri Lanka', 'Chile' , 'Norway', 'New Zealand' ,'Switzerland', 'Australia', 'Canada', 'China','Slovenia','North Macedonia'] k = 0 match = [] for i in set(Tuber_Culosis_Countires_Wise_imp_features_only.loc[:,'Country']): if(i in countries): k +=1 match.append(i) print(i) print(k) print("-------Not Matching --------------------") for i in countries: if(i not in match ): print(i) # - # ## Writing the cleaned data in Cleaned Folder Tuber_Culosis_Countires_Wise_imp_features_only.to_csv('../Pre_Processed_Data/Tuber_Culosis_Countires_Wise_Processed.csv') # ## Visualisation After Cleaning # + ## Visualisation df = Tuber_Culosis_Countires_Wise_country_specific labels = df['year'] prev_c_newinc = df['c_newinc'] prev_hiv_tbscr = df['hiv_tbscr'] prev_hiv_reg = df['hiv_reg'] title1 = 'TB prevelance in ' + 
str(Country_to_look_for) #high = int(max(prev_2018)) #low = 0 x = np.arange(len(labels)) # the x label locations width = .50 # the width of the bars fig, ax = plt.subplots(figsize=(40,30)) rects1 = ax.bar(x-width/2, prev_c_newinc, width, label='prev_c_newinc') rects2 = ax.bar(x, prev_hiv_tbscr, width, label='prev_hiv_tbscr') rects3 = ax.bar(x+width/2, prev_hiv_reg, width, label='prev_hiv_reg') # Add some text for labels, title and custom x-axis tick labels, etc. ax.set_ylabel(ylab) ax.set_xlabel(xlab) ax.set_title(title1) ax.set_xticks(x) #ax.set_yticks(y) plt.xticks(rotation=90) #plt.ylim([math.ceil(low-0.5*(high-low)), math.ceil(high+0.5*(high-low))]) ax.set_xticklabels(labels) ax.legend() plt.show() # -
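One thing to double-check in grouped bar charts like the ones above: with three bars per tick at offsets `x - width/2`, `x`, and `x + width/2`, adjacent bars overlap whenever `width` exceeds half the tick spacing (here `width = .50`, so they do). A small helper — hypothetical, not part of matplotlib — computes offsets that centre `n` non-overlapping bars around each tick:

```python
def group_offsets(n_series, width):
    # Offsets that centre n bars of the given width on each x tick;
    # bars touch but never overlap as long as n_series * width <= 1
    return [(i - (n_series - 1) / 2) * width for i in range(n_series)]

print(group_offsets(3, 0.25))  # [-0.25, 0.0, 0.25]
```

With `offsets = group_offsets(3, 0.25)` the three `ax.bar` calls would use `x + offsets[0]`, `x + offsets[1]`, and `x + offsets[2]` as their positions.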
Experiment Scripts/Data_Visualisation_Code/.ipynb_checkpoints/Tuber_Culosis_Data_Visualisation-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.0.1 # language: julia # name: julia-1.0 # --- using Pkg pkg"activate ." # Run this cell in order to download all the package dependencies with the exact versions used in the book # This is necessary if (some of) the packages have been updated and have introduced breaking changes pkg"instantiate" using CSV, DataFrames df = CSV.read("Map_of_Registered_Business_Locations.csv") describe(df) size(df, 1) df[df[Symbol("Parking Tax")] .== true, :][1:10, [Symbol("DBA Name"), Symbol("Parking Tax")]] using Pkg pkg"add Query" using Query @from i in df begin @where i[Symbol("Parking Tax")] == true @select i @collect DataFrame end rename!(df, [n => replace(string(n), " "=>"_") |> Symbol for n in names(df)]) @from i in df begin @where i.Parking_Tax == true @select i @collect DataFrame end assign = :(x = 2) eval(assign) x fieldnames(typeof(assign)) dump(assign) assign.args[2] = 3 eval(assign) x assign4 = Expr(:(=), :x, 4) eval(assign4) x quote y = 42 x = 10 end eval(ans) y x name = "Dan" greet = :("Hello " * $name) eval(greet) macro greet(name) :("Hello " * $name) end @greet("Adrian") @greet "Julia" macro twostep(arg) println("I execute at parse time. The argument is: ", arg) return :(println("I execute at runtime. 
The argument is: ", $arg)) end ex = macroexpand(@__MODULE__, :(@twostep :(1, 2, 3))); eval(ex) shopping_list = DataFrame(produce=["Apples", "Milk", "Bread"], qty=[5, 2, 1]) @from p in shopping_list begin @select p end @from p in shopping_list begin @select p.produce end @from p in shopping_list begin @select p.produce end @from p in shopping_list begin @select uppercase(p.produce), 2p.qty end @from p in shopping_list begin @select { produce = uppercase(p.produce), qty = 2p.qty } end @from p in shopping_list begin @select { PRODUCE = uppercase(p.produce), double_qty = 2p.qty } @collect end @from p in shopping_list begin @select {PRODUCE = uppercase(p.produce), double_qty = 2p.qty} @collect DataFrame end @from p in shopping_list begin @where p.qty < 2 @select p @collect DataFrame end @from p in shopping_list begin @let weekly_qty = 7p.qty @where weekly_qty > 10 @select { p.produce, week_qty=weekly_qty } @collect DataFrame end products_info = DataFrame(produce = ["Apples", "Milk", "Bread"], price = [2.20, 0.45, 0.79], allergenic = [false, true, true]) shopping_info = @from p in shopping_list begin @join pinfo in products_info on p.produce equals pinfo.produce @select { p.produce, p.qty, pinfo.price, pinfo.allergenic } @collect DataFrame end @from p in shopping_info begin @group p.produce by p.allergenic @collect end @from p in shopping_info begin @group p by p.allergenic into q @select { allergenic = key(q), count = length(q.allergenic), produce = join(q.produce, ", ") } @collect DataFrame end @from p in products_info begin @orderby descending(p.price), p.produce @select p @collect DataFrame end pkg"add DataValues" using DataValues clean_df = @from b in df begin @where lowercase(b.City) == "san francisco" && b.State == "CA" && ! isna(b.Street_Address) && ! isna(b.Source_Zipcode) && ! isna(b.NAICS_Code) && ! isna(b.NAICS_Code_Description) && ! 
isna(b.Business_Location) && occursin(r"\((.*), (.*)\)", get(b.Business_Location)) && isna(b.Business_End_Date) && isna(b.Location_End_Date) @select { b.DBA_Name, b.Source_Zipcode, b.NAICS_Code, b.NAICS_Code_Description, b.Business_Location } @collect DataFrame end clean_df_geo = @from b in clean_df begin @let geo = split(match(r"(\-?\d+(\.\d+)?),\s*(\-?\d+(\.\d+)?)", get(b.Business_Location)).match, ", ") @select {b.DBA_Name, b.Source_Zipcode, b.NAICS_Code, b.NAICS_Code_Description, lat = parse(Float64, geo[1]), long = parse(Float64, geo[2])} @collect DataFrame end describe(clean_df_geo) unique(clean_df_geo[:, :Source_Zipcode]) |> length using Pkg pkg"add Clustering" using Clustering using CSV, DataFrames, Query clean_df_geo = CSV.read("clean_df_geo.tsv", delim = '\t', nullable = false) model_data = @from b in clean_df_geo begin @group b by b.Source_Zipcode into g @let bcount = Float64(length(g)) @orderby descending(bcount) @select { zipcode = Float64(get(key(g))), businesses_count = bcount } @collect DataFrame end tail(model_data) using Gadfly plot(model_data, x=:businesses_count, Geom.histogram) model_data = @from b in clean_df_geo begin @group b by b.Source_Zipcode into g @let bcount = Float64(length(g)) @where bcount > 10 @orderby descending(bcount) @select { zipcode = Float64(get(key(g))), businesses_count = bcount } @collect DataFrame end training_data = permutedims(convert(Array, model_data), [2, 1]) result = kmeans(training_data, 3, init=:kmpp, display=:iter) result.assignments model_data[:cluster_id] = result.assignments model_data plot(model_data, y = :zipcode, x = :businesses_count, color = result.assignments, Geom.point, Scale.x_continuous(minvalue=0, maxvalue=5000), Scale.y_continuous(minvalue=94050, maxvalue=94200), Scale.x_continuous(format=:plain)) companies_in_top_areas = @from c in clean_df_geo begin @where in(c.Source_Zipcode, [94110, 94103, 94109]) @select c @collect DataFrame end plot(companies_in_top_areas, y = :long, x = :lat, Geom.point, 
Scale.x_continuous(minvalue=36, maxvalue=40), Scale.y_continuous(minvalue=-125, maxvalue=-120), color=:Source_Zipcode) companies_in_top_areas = @from c in companies_in_top_areas begin @where c.lat != minimum(companies_in_top_areas[:lat]) @select c @collect DataFrame end plot(companies_in_top_areas, y = :long, x = :lat, Geom.point, Scale.x_continuous(minvalue=36, maxvalue=40), Scale.y_continuous(minvalue=-125, maxvalue=-120), color=:Source_Zipcode) activities = @from c in companies_in_top_areas begin @group c by c.NAICS_Code_Description into g @orderby descending(length(g)) @select { activity = key(g), number_of_companies = length(g) } @collect DataFrame end plot(activities, y=:number_of_companies, Geom.bar, color=:activity, Scale.y_continuous(format=:plain), Guide.XLabel("Activities"), Guide.YLabel("Number of companies")) model_data = @from c in companies_in_top_areas begin @select { latitude = c.lat, longitude = c.long } @collect DataFrame end training_data = permutedims(convert(Array{Float64}, model_data), [2, 1]) result = kmeans(training_data, 12, init=:kmpp, display=:iter) result.counts plot(result.counts, Geom.bar, y=result.counts, Guide.YLabel("Number of businesses"), Guide.XLabel("Cluster ID"), color=result.counts) companies_in_top_areas[:cluster_id] = result.assignments plot(companies_in_top_areas, color=:cluster_id, x=:lat, y=:long) export_data = @from c in companies_in_top_areas begin @select { Name = c.DBA_Name, Zip = c.Source_Zipcode, Group = string("Cluster $(c.cluster_id)"), Latitude = c.lat, Longitude = c.long, City = "San Francisco", State = "CA" } @collect DataFrame end CSV.write("businesses.csv", head(export_data, 250))
JuliaProgrammingProjects/author_code/Chapter08/Chapter 8.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Geo (Python 3.8) # language: python # name: geo # --- # # Multi-Attribute Similarity Search for Interactive Data Exploration with the SimSearch REST API # ## Connection to an instance of the SimSearch service # + import requests import json import pandas as pd import numpy as np import math import scipy.stats from matplotlib import pyplot as plt from IPython.core.display import display, HTML # Various custom helper functions from functions import results_pairwise, flatten, changeDataType, map_points, weighted_heatmap, filterNaN, filter_dict_median, frequency, barchart, plot_wordcloud, generate_color # Use 5 decimal digits for floating numerical values pd.options.display.float_format = '{:,.5f}'.format # - # URL of the web service # E.g., assuming that the SimSearch service has been deployed locally at port 8090: serviceURL = 'http://localhost:8090/simsearch/api/' # ### __*Mount request*__: Define data sources available for similarity search # ##### __IMPORTANT!__ This step needs to be performed __once__ for each data source. # ##### Once data is successfully ingested, it can be queried as long as the SimSearch service is up-and-running. 
# + # Specify a mount request to the SimSearch API that will index the data sources specified in the parameters mount = serviceURL + 'index' # JSON specification for the data sources and the similarity operations supported for their attributes # In this example, note that the CSV dataset is available at a REMOTE HTTP server; however, data may be also available at the server's file system # The spatial operation makes use of two attributes (longitude, latitude) available in the original dataset, but it is mentioned with an alias ('position'): params = {'sources':[{'name':'remotePath1','type':'csv','url':'http://download.smartdatalake.eu/datasets/gdelt/'}], 'search':[{'operation':'spatial_knn','source':'remotePath1','dataset':'sample.csv','header':'true','separator':',','key_column':'article_id','search_column':['longitude','latitude'],'alias_column':'position'}, {'operation':'categorical_topk','source':'remotePath1','dataset':'sample.csv','separator':',','token_delimiter':';','header':'true','key_column':'article_id','search_column':'persons'}, {'operation':'temporal_topk','source':'remotePath1','dataset':'sample.csv','separator':',','header':'true','key_column':'article_id','search_column':'timestamp'}]} # IMPORTANT! No API key is required for such requests # A new API key will be generated once this request completes successfully headers = {'Content-Type' : 'application/json'} # Post this request with these parameters resMount = requests.post(mount, json=params, headers=headers) # Provide the resulting message (with the API key to be used in subsequent requests) print(resMount.json()) # - # #### __IMPORTANT!__ Remember your API key for subsequent requests to this SimSearch instance # ##### Create a dictionary from the response ... dictMount = json.loads(resMount.text) # ##### ... 
and extract the API key necessary for connecting to the SimSearch instance: # Keep your API key obtained from the mount request for further use with any other requests against this instance API_KEY = dictMount.get('apiKey') API_KEY # ### __*Append request*__: Include extra attributes to this SimSearch instance # #### Specify the dataset, the attributes and the respective similarity operations: # + # Specify an append request to the SimSearch API that will also index the data sources specified in the parameters mount = serviceURL + 'append' # JSON specification for the data source(s) and the similarity operations supported for their attributes # In this example, note that the CSV dataset must be available at the local file system (in the server) params = {'sources':[{'name':'localPath1','type':'csv','directory':'/data/gdelt/'}], 'search':[{'operation':'numerical_topk','source':'localPath1','dataset':'sample.csv','separator':',','header':'true','key_column':'article_id','search_column':'positive_sentiment'}, {'operation':'numerical_topk','source':'localPath1','dataset':'sample.csv','separator':',','header':'true','key_column':'article_id','search_column':'negative_sentiment'}]} # IMPORTANT! API key is required for such requests headers = { 'api_key' : API_KEY, 'Content-Type' : 'application/json'} # Post this request with these parameters resAppend = requests.post(mount, json=params, headers=headers) # Report the resulting message print(resAppend.json()) # - # ### __*Catalog request*__: List all queryable attributes # + # Specify a catalog request to the SimSearch API catalog = serviceURL + 'catalog' # JSON specification may be empty in order to list all available attributes ... params = {} # ...
or specify a particular type of similarity operation #params= {'operation': 'numerical_topk'} # API key is required for such requests headers = { 'api_key' : API_KEY, 'Content-Type' : 'application/json'} # Post this request with these parameters to the SimSearch service; response is given in a JSON array response = requests.post(catalog, json=params, headers=headers) #print(response.json()) # - # Report the queryable attributes, their data types, and their supported similarity operations: attrs = pd.DataFrame(response.json()) attrs # ### __*Search request*__: submit a top-*k* similarity search query # #### Top-k value # Desired number of top-k results to return k = 30 # #### Query values per attribute involved in this search request: # Each query value should conform with the data type of the respective attribute valKeywords = ['donald trump', 'joe biden', 'vladimir putin'] valTimestamp = '2019-07-14 12:45:00' valPosSentiment = '2.5' valPosition = 'POINT(2.35 48.85)' # #### Weight specifications # Note that multiple combinations of weights are specified per attribute; in this example, two lists of top-k results will be returned weightKeywords = ['1.0','0.8'] weightTimestamp = ['1.0','0.9'] weightPosSentiment = ['1.0','0.3'] weightPosition = ['1.0','0.6'] # #### Rank method to apply for similarity search # Possible values for the ranking algorithm: 'threshold' (default); 'no_random_access'; 'partial_random_access'; 'pivot_based'.
rankMethod = 'threshold' # #### Compose & submit this search request # + # Specify a search request to the SimSearch API search = serviceURL + 'search' # Specify all query parameters # Can also specify extra attributes (not involved in similarity conditions) to be included in the list of query results params = {'algorithm':rankMethod, 'output': {'extra_columns':['negative_sentiment','name']}, 'k':k, 'queries':[{'column':'persons','value':valKeywords ,'weights':weightKeywords}, {'column':'positive_sentiment','value':valPosSentiment ,'weights':weightPosSentiment}, {'column':'timestamp','value':valTimestamp,'weights':weightTimestamp}, {'column':'position','value':valPosition,'weights':weightPosition}]} # Valid API key is required for such requests headers = { 'api_key' : API_KEY, 'Content-Type' : 'application/json'} # Post this request with these parameters to the SimSearch service; response is given in a JSON array resSearch = requests.post(search, json=params, headers=headers) #print(resSearch.json()) # - # Report final ranked results: An array of top-k results is returned for each specified combination of weights. # For each combination, a similarity matrix is also returned, measuring the similarity between all pairs of the top-k results. 
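Before inspecting the response, it may help to see what a "combination of weights" means for the ranking. Below is a minimal, purely illustrative sketch of weighted score aggregation: the per-attribute similarity scores are made up, and the actual ranking is performed server-side by the selected algorithm (e.g. 'threshold'), not by this formula.

```python
# Illustrative only: how one combination of weights turns per-attribute
# similarity scores into a single aggregate score for a candidate result.
# The SimSearch server performs the real ranking with the chosen algorithm.

def aggregate_score(scores, weights):
    """Normalized weighted sum of per-attribute similarity scores."""
    total_weight = sum(weights.values())
    return sum(weights[attr] * scores.get(attr, 0.0) for attr in weights) / total_weight

# Hypothetical per-attribute similarities for one candidate result
scores = {'persons': 0.9, 'timestamp': 0.5, 'positive_sentiment': 0.7, 'position': 0.4}

# Two weight combinations, mirroring the two-entry weight lists above
w1 = {'persons': 1.0, 'timestamp': 1.0, 'positive_sentiment': 1.0, 'position': 1.0}
w2 = {'persons': 0.8, 'timestamp': 0.9, 'positive_sentiment': 0.3, 'position': 0.6}

print(round(aggregate_score(scores, w1), 3))  # equal weights
print(round(aggregate_score(scores, w2), 3))  # second combination
```

Each weight combination thus induces its own ranking of the same candidates, which is why the service returns one top-*k* list per combination.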
df = pd.DataFrame(resSearch.json()) df # Print a given combination of weights: weights = df['weights'] # E.g., the ***2nd*** combination of weights for the attributes print(weights.iloc[1]) # ### Top-*k* results for each combination of weights # Also flatten attribute values and scores contained in the nested JSON array returned as response: # + results = [None] * len(weights) # Results for each combination of weights # This flattening returns geodataframes, i.e., one column holds geometries (point locations) for index, item in enumerate(weights): results[index] = flatten(df['rankedResults'].iloc[index]) # - # #### Listing of results for a given batch # + # Display the table as HTML with clickable URLs display(HTML(results[1].to_html(render_links=True,escape=False))) # Results for the 2nd combination of weights #results[1] # - # ### Intra-Correlation: Similarity of the results for a given combination of weights # + # Create as many plots as the weight combinations fig, ax = plt.subplots(1,len(weights),figsize=(10,10)) simMatrix = [None] * len(weights) # Create a pivot table for the similarity matrix of each weight combination and plot it for index, item in enumerate(weights): plt.subplot(1, len(weights), index+1) sim = pd.DataFrame(df['similarityMatrix'].iloc[index]) simMatrix[index] = sim.pivot(index='left', columns='right', values='score') plt.imshow(simMatrix[index], interpolation='none') plt.title('W' + str(index+1)) # - # ### Inter-Correlation: Statistics regarding pairwise correlation of results # ##### First, create lists of rankings for two batches of results (i.e., from two combinations of weights) # E.g., A is the first and B is the second batch of SimSearch results A, B = results_pairwise(results[0], results[1]) # ##### Pearson's: scipy.stats.pearsonr(A.values[0], B.values[0]) # ##### Spearman's rho: scipy.stats.spearmanr(A.values[0], B.values[0]) # ##### Kendall's tau: scipy.stats.kendalltau(A.values[0], B.values[0]) # ## Map visualizations # #### Map plots from each
batch based on the spatial locations # + listMapPoints = [] # clustered points with a BBOX showing the overall spatial extent listHeatmaps = [] # heatmaps illustrating the spatial density # Create all map plots from each batch of results (weight combinations) for index, item in enumerate(weights): listMapPoints.append(map_points(results[index], show_bbox=True)) listHeatmaps.append(weighted_heatmap(results[index], radius=20)) # - # ### Plot clustered points for each batch of results # + contents = '' numPlots = sum(m is not None for m in listMapPoints) percent = (100.0/numPlots) - 0.5 # Construct HTML for displaying maps side-by-side for m in listMapPoints: if m is not None: contents += '<iframe srcdoc="{}" style="float:left; width: {}px; height: {}px; display:inline-block; width: {}%; margin: 0 auto; border: 2px solid black"></iframe>'.format(m.get_root().render().replace('"', '&quot;'),400,400,percent) display(HTML(contents)) # - # ### Plot heatmaps for each batch of results # + contents = '' numPlots = sum(m is not None for m in listHeatmaps) percent = (100.0/numPlots) - 0.5 # Construct HTML for displaying heatmaps side-by-side for m in listHeatmaps: if m is not None: contents += '<iframe srcdoc="{}" style="float:left; width: {}px; height: {}px; display:inline-block; width: {}%; margin: 0 auto; border: 2px solid black"></iframe>'.format(m.get_root().render().replace('"', '&quot;'),400,400,percent) display(HTML(contents)) # - # ## Keyword visualizations # ##### **IMPORTANT!** First, specify the attribute that contains _keywords_, required in creating wordclouds: attrKeywords = 'persons_value' # #### Top-10 keywords per batch of results for index, item in enumerate(weights): # Use only those keywords above the median frequency for each batch kf = filter_dict_median(frequency(results[index],attrKeywords)) # Create barchart barchart(kf, plot_width=4, plot_height=3, orientation='Horizontal',
plot_title='keywords for W'+str(index+1), x_axis_label='Frequency', top_k=10) # ### A word cloud per batch of results # + plot_wordcloud(results[0], attrKeywords) plot_wordcloud(results[1], attrKeywords) # - # ## Visualizations for numerical attributes # ### Histograms to display distribution of values for numerical attributes # ##### **IMPORTANT!** First, specify the attribute that contains _numerical_ values # Specify the attribute containing the numerical values of interest in the response attrNumeric = 'positive_sentiment_value' # #### Frequency histograms # + dfNumerical = [None] * len(weights) dfBins = [None] * len(weights) numBins = 20 # fixed number of bins # Create as many plots as the weight combinations fig, ax = plt.subplots(1,len(weights)) # Figure size per histogram fig.set_figheight(3) # optional setting the height of the image fig.set_figwidth(16) # optional setting the width of the image # Create histogram from numerical data values for each combination of weights for index, item in enumerate(weights): dfNumerical[index] = results[index][attrNumeric] #pd.to_numeric(results[index][attrNumeric], errors='coerce') dfBins[index] = np.linspace(math.ceil(min(dfNumerical[index])), math.floor(max(dfNumerical[index])), numBins) ax[index].hist(dfNumerical[index], bins=dfBins[index], alpha = 0.8) #, color = generate_color(weights[index])) ax[index].set(title='W'+str(index+1), ylabel='Frequency', xlabel=attrNumeric) plt.show() # - # #### Boxplots to show the mean value and the distribution of values per batch # + fig, ax = plt.subplots() box_plot_data=[filterNaN(results[0][attrNumeric]),filterNaN(results[1][attrNumeric])] ax.boxplot(box_plot_data) # Custom ticks plt.xticks([1, 2], ['W1', 'W2']) plt.gca().set(title='Distribution per Weight combination', ylabel=attrNumeric) ax.set_yscale('log') plt.show() # - # ### Plot distribution on date/time attribute # ##### **IMPORTANT!** First, specify the date/time attribute of interest:
attrTemporal = 'timestamp_value' # #### Frequency histograms on MONTH (extracted from timestamp values) # + dfTemporal = [None] * len(weights) # Create as many plots as the weight combinations fig, ax = plt.subplots(1,len(weights)) # Figure size per histogram fig.set_figheight(3) # optional setting the height of the image fig.set_figwidth(16) # optional setting the width of the image # Plot aggregate values per MONTH for each combination of weights for index, item in enumerate(weights): dfTemporal[index] = results[index][attrTemporal] dfTemporal[index].groupby(dfTemporal[index].dt.month).count().plot.bar(ax=ax[index]) ax[index].set(title='W'+str(index+1), ylabel='Frequency', xlabel='Month') # -
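The per-month grouping above relies on pandas' `.dt.month` accessor on a datetime Series. A minimal self-contained example with toy timestamps (not the GDELT data):

```python
import pandas as pd

# Toy timestamps standing in for the 'timestamp_value' column
ts = pd.Series(pd.to_datetime([
    '2019-07-14 12:45:00',
    '2019-07-20 08:00:00',
    '2019-08-01 09:30:00',
]))

# Count how many values fall in each calendar month (index = month number)
per_month = ts.groupby(ts.dt.month).count()
print(per_month)  # month 7 -> 2, month 8 -> 1
```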
data/notebooks/SimSearch_API_demo.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: transformers # language: python # name: transformers # --- # + # load generated questions import numpy as np from collections import Counter, defaultdict topics = defaultdict(list) qas = defaultdict(list) non_unique_questions = defaultdict(list) with open('../results/cast19_test/t5_cast19_test_paragraphs_17.tsv', 'r') as fin: for line in fin: cells = line.rstrip('\n').split('\t') q_id = cells[0] p_id = cells[1] topic = q_id.split('_')[0] # skip duplicate questions qs = set(cells[2:]) for q in qs: # store all questions per topic if q not in topics[topic]: topics[topic].append(q) # store all passages per question if p_id not in qas[q]: qas[q].append(p_id) # store all topics per question if topic not in non_unique_questions[q]: non_unique_questions[q].append(topic) # + # same as in TREC min_passages_per_question = 3 doc_id = 0 documents = [] for topic, qs in topics.items(): # print(topic) for q in qs: # index questions with sufficient support (non-unique within topic) if len(qas[q]) < min_passages_per_question: continue # index questions unique across topic if len(non_unique_questions[q]) > 1: continue # print(q) # print(qas[q]) documents.append({'id': doc_id, 'contents': q}) doc_id += 1 # break print(len(documents), 'questions') # + # Create json document files. 
import os import json anserini_folder = json_files_path = "../results/questions_index/collection" # NOTE: anserini_folder should point to the local Anserini installation (with a trailing slash), not to the collection folder path_index = "../results/questions_index/index" if not os.path.isdir(json_files_path): os.makedirs(json_files_path) for i, doc in enumerate(documents): with open(json_files_path+'/docs{:02d}.json'.format(i), 'w', encoding='utf-8', ) as f: f.write(json.dumps(doc) + '\n') # Run the Anserini indexing command os.system("sh {}target/appassembler/bin/IndexCollection -collection JsonCollection" \ " -generator DefaultLuceneDocumentGenerator -threads 9 -input {}" \ " -index {}anserini_index -storePositions -storeDocvectors -storeRaw". \ format(anserini_folder, json_files_path, path_index)) # - # + import random # same as in TREC min_passages_per_question = 3 for topic, qs in topics.items(): print(topic) # sample the first question at random while True: q1 = random.sample(qs, 1)[0] if len(qas[q1]) >= min_passages_per_question: print(q1) break # sample next questions by overlap break # - # + # build fingerprints from nltk.tokenize import RegexpTokenizer from nltk.corpus import stopwords from nltk.stem import WordNetLemmatizer lemmatizer = WordNetLemmatizer() tokenizer = RegexpTokenizer(r'\w+') stopwords = stopwords.words('english') # same as in TREC min_passages_per_question = 3 t_qs = defaultdict(list) t_as = defaultdict(list) topic_words = {} for topic, qs in topics.items(): print(topic) words = Counter() for q in qs: if len(qas[q]) < min_passages_per_question: continue question_words = [lemmatizer.lemmatize(w) for w in tokenizer.tokenize(q.lower()) if w not in stopwords] words.update(question_words) print(question_words) t_qs[topic].append(q) t_as[topic].extend(qas[q]) frequent_words = [w for w in words if words[w] > 1] print(frequent_words) topic_words[topic] = frequent_words break # + # encode questions into fingerprints import numpy as np for topic, frequent_words in topic_words.items(): question_sequence = [] qs = t_qs[topic] print('dialogue vocabulary', len(frequent_words)) print(frequent_words) fp =
np.zeros((len(qs), len(frequent_words))) for i, q in enumerate(qs): for w in [lemmatizer.lemmatize(w) for w in tokenizer.tokenize(q.lower())]: if w in frequent_words: fp[i][frequent_words.index(w)] = 1 print(fp) print('\n') # show word roles # for j, w in enumerate(frequent_words): # print(w, fp[:, j]) # 0 sample the first question at random q1 = random.sample(range(len(qs)), 1)[0] print(qs[q1]) print(fp[q1]) # 1+ select related questions by overlap break # -
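The fingerprint built above is just a binary word-presence matrix over the topic's frequent words. A dependency-free sketch of the same idea (the notebook additionally lemmatizes tokens with NLTK and drops English stopwords; the vocabulary and questions below are made up, not from the CAsT data):

```python
# Minimal sketch of the question "fingerprint": one row per question,
# one column per frequent word, 1 if the question contains that word.

def tokenize(text):
    # crude tokenizer: lowercase and split on non-alphanumeric characters
    return set(''.join(c if c.isalnum() else ' ' for c in text.lower()).split())

def fingerprint(questions, vocabulary):
    return [[1 if w in tokenize(q) else 0 for w in vocabulary] for q in questions]

# Hypothetical topic vocabulary and questions
vocab = ['throat', 'cancer', 'treatment']
qs = ['What causes throat cancer?', 'Is there a treatment for throat cancer?']
fp = fingerprint(qs, vocab)
print(fp)  # [[1, 1, 0], [1, 1, 1]]
```

Rows with overlapping 1-columns then indicate questions that share vocabulary, which is what the overlap-based question selection exploits.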
src/index_generated_questions_t5_cast19.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.4.2 # language: julia # name: julia-1.4 # --- # This example demonstrates how to use the `transformer` algorithm to calculate the boost factor for a 20-disk configuration. # ### Julia Init # Parallelization using Distributed # If you installed BoostFractor @everywhere begin using BoostFractor end # If you just downloaded the GitHub repository and do not want to install the package @everywhere begin push!(LOAD_PATH, "../src"); using BoostFractor; end # Plotting using PyPlot # ### Initialize Transformer @everywhere begin # Coordinate System dx = 0.02 coords = SeedCoordinateSystem(X = -0.5:dx:0.5, Y = -0.5:dx:0.5) diskR = 0.15 # SetupBoundaries (note that this expects the mirror to be defined explicitly as a region) epsilon = 24 eps = Array{Complex{Float64}}([NaN, 1,epsilon,1,epsilon,1,epsilon,1,epsilon,1,epsilon,1,epsilon,1,epsilon,1,epsilon,1,epsilon,1,epsilon,1,epsilon,1,epsilon,1,epsilon,1,epsilon,1,epsilon,1,epsilon,1,epsilon,1,epsilon,1,epsilon,1,epsilon,1]) distance = [0.0, 1.00334, 1.0, 6.94754, 1.0, 7.1766, 1.0, 7.22788, 1.0, 7.19717, 1.0, 7.23776, 1.0, 7.07746, 1.0, 7.57173, 1.0, 7.08019, 1.0, 7.24657, 1.0, 7.21708, 1.0, 7.18317, 1.0, 7.13025, 1.0, 7.2198, 1.0, 7.45585, 1.0, 7.39873, 1.0, 7.15403, 1.0, 7.14252, 1.0, 6.83105, 1.0, 7.42282, 1.0, 0.0]*1e-3 sbdry = SeedSetupBoundaries(coords, diskno=20, distance=distance, epsilon=eps) # Initialize modes Mmax = 4 Lmax = 3 modes = SeedModes(coords, ThreeDim=true, Mmax=Mmax, Lmax=Lmax, diskR=diskR) # Mode-Vector defining beam shape to be reflected on the system m_reflect = zeros(Mmax*(2*Lmax+1)) m_reflect[Lmax+1] = 1.0 end # ### Run Transformer # + df = 0.01*1e9 frequencies = 21.98e9:df:22.26e9 # We will build a 3-dim array [reflection / boost factor, mode-vector, frequency ] # The following function appends to the last dimension @everywhere
zcat(args...) = cat(dims = 3, args...) # Sweep over frequency @time EoutModes0 = @sync @distributed (zcat) for f in frequencies println("Frequency: $f") boost, refl = transformer(sbdry,coords,modes; reflect=m_reflect, prop=propagator,diskR=0.15,f=f) transpose([boost refl]) end; # - # ### Plot the result # + # Plot the power contained in each mode in a "stacked areas" plot # Total power of modes iterated over / plotted so far tot = zeros(length(EoutModes0[1,1,:])) # Iterate over modes for i in 1:(modes.M*(modes.L*2+1)) # get (m,l) indices m = Int( floor((i-1)/(modes.L*2+1))+1 ) l = (i-1)%(modes.L*2+1)-modes.L # create label labeling = (l > 0 ? "m=$m, \$\\ell=\\pm\$$l" : l == 0 ? "m=$m, \$\\ell=0\$" : nothing) # do the plotting fill_between(frequencies/1e9,tot, tot.+abs2.(EoutModes0[1,i,:]), alpha=0.7/(1+abs(l)), label=labeling, color="C$(m-1)") # update total power tot .+= abs2.(EoutModes0[1,i,:]) end # Coupling to antenna beam coupled_power = abs2.(sum(conj.(EoutModes0[1,:,:]).*m_reflect, dims=1)[1,:]) plot(frequencies/1e9, coupled_power, c="darkblue", linewidth=2, label="Modes coupled to antenna") # Legend, Labels, etc. legend(loc="upper left", bbox_to_anchor=(1, 1.03)) xlabel("Frequency [GHz]") ylabel("Power Boost Factor \$\\beta^2\$") grid(alpha=0.5) textstring="20 disks, \$\\o = 30\\,{\\rm cm}, \\epsilon = 24\$" text(0.95, 0.9, textstring, ha="right", fontsize=12, transform=gca().transAxes, bbox=Dict("facecolor" => "white", "alpha" => 0.8, "pad" => 5, "linewidth" => 0)); # -
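For reference, the flat-index-to-(m, l) bookkeeping used in the plotting loop above can be checked in isolation. A short sketch (written in Python for brevity, mirroring the 1-based arithmetic of the Julia loop; `L` stands for `Lmax`):

```python
# Map a flat 1-based mode index i onto mode numbers (m, l), as in the
# Julia plotting loop:  m = floor((i-1)/(2L+1)) + 1,  l = (i-1) % (2L+1) - L
def mode_indices(i, L):
    m = (i - 1) // (2 * L + 1) + 1
    l = (i - 1) % (2 * L + 1) - L
    return m, l

L = 3  # Lmax used above
print(mode_indices(1, L))   # first flat index: m=1, l=-L
print(mode_indices(4, L))   # m=1, l=0
print(mode_indices(8, L))   # start of the m=2 block: l=-L
```

Each block of 2L+1 consecutive flat indices therefore covers one radial mode number m with azimuthal numbers l running from -L to +L.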
examples/Transformer, 20 Disks.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + code_folding=[] # Load Image # Editor appearance set up & Load plot & Calculate DGCI # Extend width of Jupyter Notebook Cell to the size of browser from IPython.display import display, HTML display(HTML("<style>.container { width:100% !important; }</style>")) # Import packages needed import gc import pickle import platform from tkinter import Tk from tkinter.filedialog import askopenfilename, asksaveasfilename from tkinter.messagebox import showerror import matplotlib import matplotlib.pyplot as plt import matplotlib.patches as patches from matplotlib.widgets import PolygonSelector import numpy as np import pandas as pd from skimage import io, draw from ipywidgets import widgets from osgeo import gdal import general_funcs # OS related settings if platform.system() == 'Windows': print('Windows') # # %matplotlib nbagg # Sometimes tk/qt will not let cells rerun after an ERROR occurs # %matplotlib tk # %matplotlib qt elif platform.system() == 'Darwin': print('macOS') Tk().withdraw() # %matplotlib osx elif platform.system() == 'Linux': print('Linux') # This line of "print" must exist right after %matplotlib command, # otherwise JN will hang on the first import statement after this.
print('Interactive plot activated') # Load image and print size & pre-process # Use skimage to load multi-layer tiff file image_file = askopenfilename(title='Load image file', initialdir='./data/field_image') plot_loc_file = askopenfilename(title='Load plot location file', initialdir='./data/plot_location') img = io.imread(image_file) print("Original Image Shape: ", img.shape) # Load GPS coordinate from file & Calculate pixel location try: with open(plot_loc_file, 'rb') as f: interested_area = pickle.load(f) plot_vertices_gps = pickle.load(f) plot_notes = pickle.load(f) except Exception as e: showerror(type(e).__name__, str(e)) # Trim to area of interest ul = np.min(interested_area, 1) br = np.max(interested_area, 1) img = img[ul[1]:br[1], ul[0]:br[0], :] print("Trimmed Image Shape: ", img.shape) # Calculating pixel location from GPS coordinate ds = gdal.Open(image_file) gt = ds.GetGeoTransform() plot_vertices = general_funcs.plotVGPS2plotV(plot_vertices_gps, gt) all_vertices = np.concatenate(list(plot_vertices.values()), axis=0) # Extract layers from the multilayer tiff file and do some adjustments scale_factor = 3 h, w, d = img.shape layer_RGB, layer_IR, layer_mask = general_funcs.extract_layers(img) if d != 2: layer_RGB_low_res = general_funcs.low_res(layer_RGB, scale_factor) layer_temp = np.zeros(layer_IR.shape) if layer_IR.max() > int('0b' + '1100000000000000', 2): # 16 bit to 14 bit + nonlinear temp calculation layer_temp[np.where(layer_mask!=0)] = general_funcs.flir_non_linear_thermal_to_temp(layer_IR[np.where(layer_mask!=0)] - int('0b' + '1100000000000000', 2)) else: layer_temp[np.where(layer_mask!=0)] = general_funcs.flir_linear_high_res_thermal_to_temp(layer_IR[np.where(layer_mask!=0)]) # RGB to HSV (H 0-360, S 0-1, V 0-255) if d == 5 or d == 4: layer_HSV = general_funcs.RGB2HSV(layer_RGB) # Calculate Vegetation Index layer_DGCI = general_funcs.DGCI(layer_HSV) print('DGCI calculated') # Remove original img file to save space in memory del(img) # + 
code_folding=[] # Show Image def update_transparency(slider_value): global transparency transparency = slider_value def update_hue_mask(button): button.description = 'Update Hue Mask (Updating)' global hue_range_updated if hue_range_updated: hue_mask[np.where((layer_hue>=hue_range[0]) * (layer_hue<=hue_range[1]))] = 1 hue_mask[np.where(np.invert((layer_hue>=hue_range[0]) * (layer_hue<=hue_range[1])))] = 0 hue_range_updated = False red_masked_RGB = red_mask * hue_mask * transparency + layer_RGB * (1 - hue_mask * transparency) red_masked_RGB = red_masked_RGB.astype(np.uint8) ax.imshow(red_masked_RGB) a, b, c = hue_mask.shape canopy = hue_mask[:, :, 0].sum() canopy_closure = canopy/(a*b) canopy_closure = "{:.2%}".format(canopy_closure) canopy_closure_text.value = canopy_closure button.description = 'Update Hue Mask (Done)' def update_hue_range(slider_value): global hue_range, hue_range_updated hue_range = np.array(slider_value) hue_range_updated = True def save_hue_range(button): lon_meter_per_pix, lat_meter_per_pix = general_funcs.meter_per_pix(gt) size_per_pix = lon_meter_per_pix * lat_meter_per_pix for_flag = 0 for plot_name in plot_vertices.keys(): for_flag += 1 if for_flag % 10 == 1: print('Calculating plot No.', for_flag, plot_name) one_plot_vertices = plot_vertices[plot_name] one_plot_vertices_transformed = one_plot_vertices - ul rr, cc = draw.polygon(one_plot_vertices_transformed[:, 1], one_plot_vertices_transformed[:, 0], layer_mask.shape) rr, cc = rr.astype(np.uint16), cc.astype(np.uint16) plot_mask = np.zeros(layer_mask.shape).astype(np.int8) plot_mask[rr, cc] = 1 inds = np.where(plot_mask != 0) inds = (inds[0].astype(np.uint16), inds[1].astype(np.uint16)) plot_areas.append(size_per_pix * rr.size) hue_ranges.append(hue_range) # Calculate indices avg_temp = np.round(np.average(layer_temp[inds]), 2) avg_temps.append(avg_temp) max_temp = np.round(np.max(layer_temp[inds]), 2) max_temps.append(max_temp) min_temp = np.round(np.min(layer_temp[inds]), 2) 
min_temps.append(min_temp) if d == 5: avg_DGCI = np.average(layer_DGCI[inds]) avg_DGCI = np.round(avg_DGCI, 2) avg_DGCIs.append(avg_DGCI) max_DGCI = np.max(layer_DGCI[inds]) max_DGCIs.append(max_DGCI) min_DGCI = np.min(layer_DGCI[inds]) min_DGCIs.append(min_DGCI) avg_RGB = np.mean(layer_RGB[inds], axis=0) avg_RGB = np.round(avg_RGB, 2) avg_RGBs.append(avg_RGB) avg_HSV = np.mean(layer_HSV[inds], axis=0) avg_HSV = np.round(avg_HSV, 2) avg_HSVs.append(avg_HSV) # Apply hue mask mask_hue_restrict = plot_mask * hue_mask[:, :, 0] inds = np.where(mask_hue_restrict == 1) cnp_cls = mask_hue_restrict.sum()/plot_mask.sum() cnp_cls = "{:.2%}".format(cnp_cls) canopy_closures.append(cnp_cls) if inds[0].size != 0: avg_DGCI = np.average(layer_DGCI[inds]) avg_DGCI = np.round(avg_DGCI, 2) avg_DGCIs_hue_restrict.append(avg_DGCI) max_DGCI = np.max(layer_DGCI[inds]) max_DGCIs_hue_restrict.append(max_DGCI) min_DGCI = np.min(layer_DGCI[inds]) min_DGCIs_hue_restrict.append(min_DGCI) avg_RGB = np.mean(layer_RGB[inds], axis=0) avg_RGB = np.round(avg_RGB, 2) avg_RGBs_hue_restrict.append(avg_RGB) avg_HSV = np.mean(layer_HSV[inds], axis=0) avg_HSV = np.round(avg_HSV, 2) avg_HSVs_hue_restrict.append(avg_HSV) # avg_IR = np.average(layer_IR[inds]) avg_temp = np.round(np.average(layer_temp[inds]), 2) avg_temps_hue_restrict.append(avg_temp) # max_IR = np.max(layer_IR[inds]) max_temp = np.round(np.max(layer_temp[inds]), 2) max_temps_hue_restrict.append(max_temp) # min_IR = np.min(layer_IR[inds]) min_temp = np.round(np.min(layer_temp[inds]), 2) min_temps_hue_restrict.append(min_temp) else: avg_DGCIs_hue_restrict.append(0) max_DGCIs_hue_restrict.append(0) min_DGCIs_hue_restrict.append(0) avg_RGBs_hue_restrict.append(np.asarray([0, 0, 0])) avg_HSVs_hue_restrict.append(np.asarray([0, 0, 0])) avg_temps_hue_restrict.append(0) max_temps_hue_restrict.append(0) min_temps_hue_restrict.append(0) if d != 4: vmin, vmax = general_funcs.cal_vmin_vmax(layer_IR, layer_mask) if d == 5 or d == 4: # ax.imshow(layer_RGB) 
pass elif d == 2: # myax = ax.imshow(layer_IR, cmap='gist_gray', vmin=vmin, vmax=vmax) pass all_widgets.children = [show_RGB_button, show_IR_button, show_DGCIImage_button, show_temp_button, show_DGCI_button, show_canopy_closure_button, save_button] # Show indices of plots def show_RGB(button): ax.imshow(layer_RGB) def show_IR(button): axim = ax.imshow(layer_IR, cmap=plt.get_cmap(color_map)) def show_DGCI_image(button): axim = ax.imshow(layer_DGCI, cmap=plt.get_cmap(color_map)) def show_temp(button): if button.description == 'Show Average Temperature': for plot_name in plot_vertices.keys(): one_plot_vertices = plot_vertices[plot_name] one_plot_vertices_transformed = one_plot_vertices - ul polygon = patches.Polygon(one_plot_vertices_transformed, True, facecolor = matplotlib.colors.to_rgba('red', 0.05), edgecolor=matplotlib.colors.to_rgba('orange', 0.5)) ax.add_patch(polygon) text_loc = np.mean(one_plot_vertices_transformed, 0) axtx = ax.text(text_loc[0], text_loc[1], plot_name + '\n' + str(avg_temps[int(list(plot_vertices.keys()).index(plot_name))]) + '℃', ha='center', va='center') button.description = 'Hide Average Temperature' elif button.description == 'Hide Average Temperature': ax.patches.clear() ax.texts.clear() button.description = 'Show Average Temperature' plt.show() def show_DGCI(button): if button.description == 'Show Average DGCI': for plot_name in plot_vertices.keys(): one_plot_vertices = plot_vertices[plot_name] one_plot_vertices_transformed = one_plot_vertices - ul polygon = patches.Polygon(one_plot_vertices_transformed, True, facecolor = matplotlib.colors.to_rgba('red', 0.05), edgecolor=matplotlib.colors.to_rgba('orange', 0.5)) ax.add_patch(polygon) text_loc = np.mean(one_plot_vertices_transformed, 0) axtx = ax.text(text_loc[0], text_loc[1], plot_name + '\n' + str(avg_DGCIs[int(list(plot_vertices.keys()).index(plot_name))]), ha='center', va='center') button.description = 'Hide Average DGCI' elif button.description == 'Hide Average DGCI': ax.patches.clear()
ax.texts.clear() button.description = 'Show Average DGCI' # plt.show() def show_canopy_closure(button): if button.description == 'Show Canopy Closure': for plot_name in plot_vertices.keys(): one_plot_vertices = plot_vertices[plot_name] one_plot_vertices_transformed = one_plot_vertices - ul polygon = patches.Polygon(one_plot_vertices_transformed, True, facecolor = matplotlib.colors.to_rgba('red', 0.05), edgecolor=matplotlib.colors.to_rgba('orange', 0.5)) ax.add_patch(polygon) text_loc = np.mean(one_plot_vertices_transformed, 0) axtx = ax.text(text_loc[0], text_loc[1], plot_name + '\n' + str(canopy_closures[int(list(plot_vertices.keys()).index(plot_name))]), ha='center', va='center') button.description = 'Hide Canopy Closure' elif button.description == 'Hide Canopy Closure': ax.patches.clear() ax.texts.clear() button.description = 'Show Canopy Closure' def save_result(button): global avg_temps, max_temps, min_temps, avg_DGCIs, max_DGCIs, min_DGCIs, avg_RGBs, avg_HSVs, canopy_closures, plot_areas, hue_ranges global avg_DGCIs_hue_restrict, max_DGCIs_hue_restrict, min_DGCIs_hue_restrict, avg_RGBs_hue_restrict, avg_HSVs_hue_restrict, avg_temps_hue_restrict, max_temps_hue_restrict, min_temps_hue_restrict plot_areas = np.array(plot_areas) hue_ranges = np.array(hue_ranges) if d != 4: avg_temps = np.array(avg_temps) max_temps = np.array(max_temps) min_temps = np.array(min_temps) if d != 2: avg_DGCIs = np.array(avg_DGCIs) max_DGCIs = np.array(max_DGCIs) min_DGCIs = np.array(min_DGCIs) avg_RGBs = np.array(avg_RGBs) avg_HSVs = np.array(avg_HSVs) avg_DGCIs_hue_restrict = np.array(avg_DGCIs_hue_restrict) max_DGCIs_hue_restrict = np.array(max_DGCIs_hue_restrict) min_DGCIs_hue_restrict = np.array(min_DGCIs_hue_restrict) avg_RGBs_hue_restrict = np.array(avg_RGBs_hue_restrict) avg_HSVs_hue_restrict = np.array(avg_HSVs_hue_restrict) canopy_closures = np.array(canopy_closures) if d == 5: avg_temps_hue_restrict = np.array(avg_temps_hue_restrict) max_temps_hue_restrict = 
np.array(max_temps_hue_restrict) min_temps_hue_restrict = np.array(min_temps_hue_restrict) if d == 2: df = pd.DataFrame(data=np.column_stack((list(plot_vertices.keys()), avg_temps, max_temps, min_temps, plot_areas, list(plot_notes.values()))), columns=['Plot Name', 'Avg Temp', 'Max Temp', 'Min Temp', 'Plot Area', 'Plot Notes']) # elif d == 4: # df = pd.DataFrame(data=np.column_stack((list(plot_vertices.keys()), avg_DGCIs, avg_DGCIs_hue_restrict, max_DGCIs, max_DGCIs_hue_restrict, min_DGCIs, min_DGCIs_hue_restrict, canopy_closures, avg_RGBs, avg_RGBs_hue_restrict, avg_HSVs, avg_HSVs_hue_restrict, plot_areas, hue_ranges)), # columns=['Plot Name', 'Avg DGCI', 'Avg DGCI (hue restrict)', 'Max DGCI', 'Max DGCI (hue restrict)', 'Min DGCI', 'Min DGCI (hue restrict)', 'Canopy Closure', 'Avg R', 'Avg G', 'Avg B', 'Avg R (hue restrict)', 'Avg G (hue restrict)', 'Avg B (hue restrict)', 'Avg H', 'Avg S', 'Avg V', 'Avg H (hue restrict)', 'Avg S (hue restrict)', 'Avg V (hue restrict)', 'Plot Area', 'Hue Range Start', 'Hue Range End']) elif d == 5: df = pd.DataFrame(data=np.column_stack((list(plot_vertices.keys()), avg_temps, avg_temps_hue_restrict, max_temps, max_temps_hue_restrict, min_temps, min_temps_hue_restrict, avg_DGCIs, avg_DGCIs_hue_restrict, max_DGCIs, max_DGCIs_hue_restrict, min_DGCIs, min_DGCIs_hue_restrict, canopy_closures, avg_RGBs, avg_RGBs_hue_restrict, avg_HSVs, avg_HSVs_hue_restrict, plot_areas, hue_ranges, list(plot_notes.values()))), columns=['Plot Name', 'Avg Temp', 'Avg Temp (hue restrict)', 'Max Temp', 'Max Temp (hue restrict)', 'Min Temp', 'Min Temp (hue restrict)', 'Avg DGCI', 'Avg DGCI (hue restrict)', 'Max DGCI', 'Max DGCI (hue restrict)', 'Min DGCI', 'Min DGCI (hue restrict)', 'Canopy Closure', 'Avg R', 'Avg G', 'Avg B', 'Avg R (hue restrict)', 'Avg G (hue restrict)', 'Avg B (hue restrict)', 'Avg H', 'Avg S', 'Avg V', 'Avg H (hue restrict)', 'Avg S (hue restrict)', 'Avg V (hue restrict)', 'Plot Area', 'Hue Range Start', 'Hue Range End', 'Plot Notes']) 
fn = image_file.split('/')[-2] + '_' + image_file.split('/')[-1].split('.')[0] file_name = asksaveasfilename(filetypes=[('csv', '*.csv')], title='Save Indices', initialfile=fn+'_indices', initialdir='./data/final_result/') if not file_name: return if not file_name.endswith('.csv'): file_name += '.csv' try: df.to_csv(file_name) print('Indices saved to', file_name) except Exception as e: showerror(type(e).__name__, str(e)) hue_range = [60, 180] hue_range_updated = True transparency = 0.5 if d != 2: layer_hue = layer_HSV[:, :, 0] hue_mask = np.zeros(layer_RGB.shape).astype(np.uint8) red_mask = (np.ones(hue_mask.shape) * (255, 0, 0)).astype(np.uint8) hue_mask_low_res = np.zeros(layer_RGB_low_res.shape).astype(np.uint8) red_mask_low_res = (np.ones(hue_mask_low_res.shape) * (255, 0, 0)).astype(np.uint8) avg_temps = [] max_temps = [] min_temps = [] avg_DGCIs = [] max_DGCIs = [] min_DGCIs = [] avg_RGBs = [] avg_HSVs = [] pixel_num = [] avg_temps_hue_restrict = [] max_temps_hue_restrict = [] min_temps_hue_restrict = [] avg_DGCIs_hue_restrict = [] max_DGCIs_hue_restrict = [] min_DGCIs_hue_restrict = [] avg_RGBs_hue_restrict = [] avg_HSVs_hue_restrict = [] pixel_num_hue_restrict = [] canopy_closures = [] plot_areas = [] hue_ranges = [] # Widgets style = {'description_width': 'initial'} slider_layout = widgets.Layout(width='99%') hue_slider = widgets.FloatRangeSlider(value=hue_range, min=0, max=360, step=0.01, continuous_update=False, description='Hue Range', layout=slider_layout, style=style) hue_interactive = widgets.interactive(update_hue_range, slider_value=hue_slider) transparency_slider = widgets.FloatSlider(value=transparency, min=0, max=1, continuous_update=False, description='Mask Transparency', readout_format='.1f', layout=slider_layout, style=style) transparency_interactive = widgets.interactive(update_transparency, slider_value=transparency_slider) # Button widgets button_layout = widgets.Layout(align='center', width='100%') canopy_closure_text = 
widgets.Text(value='0', description='Canopy Closure', layout=button_layout, disabled=True, style=style) update_hue_mask_button = widgets.Button(description='Apply Hue Mask', layout=button_layout) save_hue_range_button = widgets.Button(description='Next', layout=button_layout, button_style='success') update_hue_mask_button.on_click(update_hue_mask) save_hue_range_button.on_click(save_hue_range) show_RGB_button = widgets.Button(description='Show RGB Image', layout=button_layout) show_IR_button = widgets.Button(description='Show IR Image', layout=button_layout) show_DGCIImage_button = widgets.Button(description='Show DGCI Image', layout=button_layout) show_temp_button = widgets.Button(description='Show Average Temperature', layout=button_layout) show_DGCI_button = widgets.Button(description='Show Average DGCI', layout=button_layout) show_canopy_closure_button = widgets.Button(description='Show Canopy Closure', layout=button_layout) save_button = widgets.Button(description='Save Result', layout=button_layout, button_style='success') show_RGB_button.on_click(show_RGB) show_IR_button.on_click(show_IR) show_DGCIImage_button.on_click(show_DGCI_image) show_temp_button.on_click(show_temp) show_DGCI_button.on_click(show_DGCI) show_canopy_closure_button.on_click(show_canopy_closure) save_button.on_click(save_result) # Box widgets box_layout = widgets.Layout(width='100%') button_set1 = widgets.HBox(children=[update_hue_mask_button, save_hue_range_button], layout=box_layout) all_widgets = widgets.VBox(children=[transparency_slider, hue_slider, button_set1, canopy_closure_text], layout=box_layout) display(all_widgets) out = widgets.Output() display(out) with out: plt.close('all') # Histogram of hue value fig, ax = plt.subplots(figsize=(7, 7)) fig.subplots_adjust(left=0.1, bottom=0.1, right=0.9, top=0.9) if d != 2: fig_hist, ax_hist = plt.subplots(figsize=(5, 5)) flattened_layer_hue = layer_hue.flatten() flattened_layer_mask = layer_mask.flatten() flattened_layer_hue = 
flattened_layer_hue[flattened_layer_mask != 0] ax_hist.hist(flattened_layer_hue, bins=100) mean = np.mean(flattened_layer_hue) var = np.var(flattened_layer_hue) std = np.std(flattened_layer_hue) print('Mean:', mean) print('Variance:', var) print('Standard Deviation:', std) # Plot image with hue mask ax.imshow(layer_RGB_low_res) plt.show() else: # mask_not_0_inds = np.where(layer_mask > 0) vmin, vmax = general_funcs.cal_vmin_vmax(layer_IR, layer_mask) myax = ax.imshow(layer_IR, cmap='gist_gray', vmin=vmin, vmax=vmax)
Image_Processing_2-Calculate_Indices.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # List Comprehensions # # Complete the following set of exercises to solidify your knowledge of list comprehensions. import os; # #### 1. Use a list comprehension to create and print a list of consecutive integers starting with 1 and ending with 50. consecutive_integers = [x for x in range(1, 51)] print(consecutive_integers) # #### 2. Use a list comprehension to create and print a list of even numbers starting with 2 and ending with 200. even_numbers = [e for e in range(2, 201) if e % 2 == 0] print(even_numbers) # #### 3. Use a list comprehension to create and print a list containing all elements of the 10 x 4 array below. # + a = [[0.84062117, 0.48006452, 0.7876326 , 0.77109654], [0.44409793, 0.09014516, 0.81835917, 0.87645456], [0.7066597 , 0.09610873, 0.41247947, 0.57433389], [0.29960807, 0.42315023, 0.34452557, 0.4751035 ], [0.17003563, 0.46843998, 0.92796258, 0.69814654], [0.41290051, 0.19561071, 0.16284783, 0.97016248], [0.71725408, 0.87702738, 0.31244595, 0.76615487], [0.20754036, 0.57871812, 0.07214068, 0.40356048], [0.12149553, 0.53222417, 0.9976855 , 0.12536346], [0.80930099, 0.50962849, 0.94555126, 0.33364763]]; b = [v for u in a for v in u] print(b) # - # #### 4. Add a condition to the list comprehension above so that only values greater than or equal to 0.5 are printed. b = [v for u in a for v in u if v >= 0.5] print(b) # #### 5. Use a list comprehension to create and print a list containing all elements of the 5 x 2 x 3 array below. 
# + b = [[[0.55867166, 0.06210792, 0.08147297], [0.82579068, 0.91512478, 0.06833034]], [[0.05440634, 0.65857693, 0.30296619], [0.06769833, 0.96031863, 0.51293743]], [[0.09143215, 0.71893382, 0.45850679], [0.58256464, 0.59005654, 0.56266457]], [[0.71600294, 0.87392666, 0.11434044], [0.8694668 , 0.65669313, 0.10708681]], [[0.07529684, 0.46470767, 0.47984544], [0.65368638, 0.14901286, 0.23760688]]]; list_comp = [z for first in b for second in first for z in second] print(list_comp) # - # #### 6. Add a condition to the list comprehension above so that the last value in each subarray is printed, but only if it is less than or equal to 0.5. subarrays_list = [y[2] for x in b for y in x if y[2] <= 0.5] print(subarrays_list) os.getcwd() # #### 7. Use a list comprehension to select and print the names of all CSV files in the */data* directory. import os csv_files = [file for file in os.listdir('C:\\Users\\aleja\\daft-miami-1019-labs\\module-1\\List-Comprehension\\data') if file.endswith(".csv")] print(csv_files) # ### Bonus # Try to solve these katas using list comprehensions. 
# **Easy** # - [Invert values](https://www.codewars.com/kata/invert-values) # - [Square(n) Sum](https://www.codewars.com/kata/square-n-sum) # - [Digitize](https://www.codewars.com/kata/digitize) # - [List filtering](https://www.codewars.com/kata/list-filtering) # - [Arithmetic list](https://www.codewars.com/kata/541da001259d9ca85d000688) # # **Medium** # - [Multiples of 3 or 5](https://www.codewars.com/kata/514b92a657cdc65150000006) # - [Count of positives / sum of negatives](https://www.codewars.com/kata/count-of-positives-slash-sum-of-negatives) # - [Categorize new member](https://www.codewars.com/kata/5502c9e7b3216ec63c0001aa) # # **Advanced** # - [Queue time counter](https://www.codewars.com/kata/queue-time-counter) # + #INVERT VALUES numbers = [1, 2, 3, 4, 5] inverted = [(-1)*n for n in numbers] print(inverted) # + #SQUARE(n) SUM nums = [1, 2, 2] sum_of_squares = [s**2 for s in nums] print(sum(sum_of_squares)) # + #DIGITIZE num = "3625463295493" list_of_digits = [int(d) for d in num] print(list_of_digits) # + #LIST FILTERING strs_and_ints_list = [1, 'wrgwr', 5, 6, 'qwfregtrnyr', 3, 8, 'h', 9, '1'] filter_list = [l for l in strs_and_ints_list if isinstance(l, int)] print(filter_list) # - #ARITHMETIC LIST first_term = int(input("Enter the first term of the sequence\n")) c = int(input("Enter the constant that you are going to add to each term\n")) n = int(input("Enter the number of terms of the sequence wanted\n")) arithmetic_sequence = [x for x in range(first_term, first_term + n*c, c)] print(arithmetic_sequence) # + #MULTIPLES OF 3 OR 5 delimiter_number = int(input("Enter a delimiter number\n")) multiples_of_3_or_5 = [m for m in range(delimiter_number) if m % 3 == 0 or m % 5 == 0] print(multiples_of_3_or_5) # + #COUNT OF POSITIVES/SUM OF NEGATIVES input_string = input("Enter a list") input_list = input_string.split(", ") int_input_list = [] for x in input_list: y = int(x) int_input_list.append(y) if input_list is not None: positives_list = [p for p
in int_input_list if p >= 0] negatives_list = [n for n in int_input_list if n < 0] output_list = [len(positives_list), sum(negatives_list)] print(output_list) else: empty_list = [] print(empty_list) # + #CATEGORIZE NEW MEMBER data_input = input("Enter the age and handicap of the member as lists (in brackets with an intermediate space)\n") print(data_input.split()) data_updated = [] for item in data_input: it = item.split(',') data_updated.append(it) print(data_updated) ''' memberships = [] for lst in data_input: if int(lst[0]) >= 55 and int(lst[2]) > 7: member = "senior" memberships.append(member) else: member = 'open' memberships.append(member) print(memberships)''' # + number_of_members = int(input("How many members to analyze?\n")) members_data = [] for times in range(number_of_members): age_handicap = input("Enter the age and the handicap separated by a comma and space\n") lst = age_handicap.split(", ") members_data.append(lst) memberships = [] for item in members_data: age = int(item[0]) handicap = int(item[1]) if age >= 55 and handicap > 7: membership = "senior" memberships.append(membership) else: membership = "open" memberships.append(membership) print(memberships) # -
module-1/List-Comprehension/your-code/lc.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.7.1 64-bit (''test'': conda)' # name: python3 # --- # # Graph postprocessing with cause2e # This notebook shows examples of how ```cause2e``` can be used for postprocessing causal graphs. Postprocessing can be performed by the ```discovery.StructureLearner``` after learning the causal graph. The graph class is mostly based on the ```networkx``` package, which unfortunately does not support mixed graphs (directed and undirected edges at the same time), so we added some functionality. # ### Imports import os from cause2e import path_mgr, discovery, knowledge # ## Set up paths to data and output directories # This step is conveniently handled by the ```PathManager``` class, which avoids having to wrestle with paths throughout the multistep causal analysis. If we want to perform the analysis in a directory ```'dirname'``` that contains ```'dirname/data'``` and ```'dirname/output'``` as subdirectories, we can also use ```PathManagerQuick``` for an even easier setup. The experiment name is used for generating output files with meaningful names, in case we want to study multiple scenarios (e.g. with varying model parameters). For this analysis, we use the sprinkler dataset. cwd = os.getcwd() wd = os.path.dirname(cwd) paths = path_mgr.PathManagerQuick(experiment_name='sprinkler', data_name='sprinkler.csv', directory=wd ) # ## Initialize the StructureLearner # As in the other notebooks, we set up a ```StructureLearner``` and read our data. learner = discovery.StructureLearner(paths) learner.read_csv(index_col=0) # The first step in the analysis should be an assessment of which variables we are dealing with. 
In the sprinkler dataset, each sample tells us # - the current season # - whether it is raining # - whether our lawn sprinkler is activated # - whether our lawn is slippery # - whether our lawn is wet. print(learner.variables) # It is necessary to communicate to the ```StructureLearner``` whether the variables are discrete, continuous, or both. We check how many unique values each variable takes on in our sample and deduce that all variables are discrete. print(learner.data.nunique()) # This information is passed to the ```StructureLearner``` by indicating the exact sets of discrete and continuous variables. learner.discrete = set(learner.variables) learner.continuous = set() # ### Provide domain knowledge # Humans can often infer parts of the causal graph from domain knowledge. The nodes are always just the variables in the data, so the problem of finding the right graph comes down to selecting the right edges between them. # # As a reminder: The correct causal graph has an edge from variable A to variable B if and only if variable A directly influences variable B (changing the value of variable A changes the value of variable B if we keep all other variables fixed). # # There are three ways of passing domain knowledge: # - Indicate which edges must be present in the causal graph. # - Indicate which edges must not be present in the causal graph. # - Indicate a temporal order in which the variables have been created. This is then used to generate forbidden edges, since the future can never influence the past. # # In this example, we only assume that the current season is directly influencing the weather and the probability that the sprinkler is on. This makes sense: During the summer, it is less likely to rain and sprinklers are more likely to be activated.
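To make the notion of a direct influence concrete, here is a toy simulation of sprinkler-like data. This is our own illustration, not part of ```cause2e```, and all probabilities are made up: each variable is generated from its direct causes only, which is exactly what the edges of the causal graph encode.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Season has no parents; 0=winter, 1=spring, 2=summer, 3=autumn.
season = rng.integers(0, 4, size=n)
summer = season == 2

# Season -> Rain and Season -> Sprinkler (made-up probabilities):
# rain is rarer in summer, sprinkler use is more common.
rain = rng.random(n) < np.where(summer, 0.1, 0.4)
sprinkler = rng.random(n) < np.where(summer, 0.6, 0.1)

# Rain -> Wet and Sprinkler -> Wet, then Wet -> Slippery.
wet = rain | sprinkler
slippery = wet

# Changing Season changes the distribution of Rain (a directed path exists),
# whereas changing Slippery would not affect Rain at all.
print(rain[summer].mean(), rain[~summer].mean())
```

Data generated this way has exactly the dependence structure that the search procedure below tries to recover from samples alone.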
required = {('Season', 'Rain'), ('Season', 'Sprinkler')} edge_creator = knowledge.EdgeCreator() edge_creator.require_edges(required) # We pass the knowledge to the ```StructureLearner``` and check if it has been correctly received. learner.set_knowledge(edge_creator) learner.show_knowledge() # ### Apply a structure learning algorithm # Now that the ```StructureLearner``` has received the data and the domain knowledge, we can try to recover the original graph using causal discovery methods provided by the internally called ```py-causal``` package. There are many parameters that can be tuned (choice of algorithm, search score, independence test, hyperparameters, ...) and we can get an overview by calling some informative methods of the learner. Reasonable default arguments are provided (FGES with CG-BIC score for possibly mixed datatypes and respecting domain knowledge), so we use these for our minimal example. learner.run_quick_search() # The output of the search is a proposed causal graph. We can ignore the warning about stopping the Java Virtual Machine (needed by ```py-causal``` which is a wrapper around the ```TETRAD``` software that is written in Java) if we do not run into any problems. If the algorithm cannot orient all edges, we need to do this manually. Therefore, the output includes a list of all undirected edges, so we do not miss them in complicated graphs with many variables and edges. In our case, all the edges are already oriented. # # The result seems reasonable: # - The weather depends on the season. # - The sprinkler use also depends on the season. # - The lawn will be wet if it rains or if the sprinkler is activated. # - The lawn will be slippery if it is wet. # # We can also see that the result is automatically saved to different file formats and that our graph respects the previously indicated domain knowledge. This is the ideal case where no postprocessing is necessary. 
But what happens if we are not happy with the results of the search procedure? In order to create such a situation, we enlarge the prescribed domain knowledge by an edge from ```'Wet'``` to ```'Rain'```, which is clearly the wrong causal direction. required = {('Season', 'Rain'), ('Season', 'Sprinkler'), ('Wet', 'Rain')} edge_creator = knowledge.EdgeCreator() edge_creator.require_edges(required) learner.set_knowledge(edge_creator) learner.show_knowledge() # We repeat the search with the updated domain knowledge. learner.run_quick_search() # We notice several differences from the last run: # - The result of the search looks completely different. # - There are undirected edges. A list of them is printed. # - The graph cannot be saved. # # Let us have a look at what has happened: # - The different search result is a necessary consequence of the enlarged domain knowledge: Since the true graph has an edge from ```Rain``` to ```Wet```, but we prescribed the edge with the opposite orientation, it is not possible to recover the true graph. # - Undirected edges show that the ```StructureLearner``` cannot infer the direction of an edge from the data and domain knowledge. Given that we do not allow the true graph to be found, difficulties in creating a sensible graph with orientations backed by the data are to be expected. # - The graph is only saved automatically after the search if it is a directed acyclic graph that respects the domain knowledge. In our case, the graph is not even fully directed and we get an error message. This check ensures that we do not save faulty graphs without noticing problems that will later on corrupt our causal estimates. # # In real applications with many variables, it is not surprising if the result of the search is not completely satisfactory. Therefore, we show how to postprocess the graph until we arrive at the correct result. The possible postprocessing operations are adding, removing and reversing edges.
# The edge between ```'Season'``` and ```'Wet'``` should not be present at all (the true graph shows only an indirect causal effect), so we remove it. learner.remove_edge('Season', 'Wet', directed=False) # The edge between ```'Sprinkler'``` and ```'Wet'``` should be present, but it lacks the correct orientation. We can fix this by a call to ```add_edge```. Note that we do not need to remove the undirected edge, it is simply overwritten by the new directed edge. This property can be verified by checking the updated list of undirected edges. learner.add_edge('Sprinkler', 'Wet') # The same holds true for the edge between ```'Slippery'``` and ```'Wet'```. learner.add_edge('Wet', 'Slippery') # Now we only need to fix the error that has caused all the confusion for the search procedure, namely the edge from ```'Wet'``` to ```'Rain'```. learner.reverse_edge('Rain', 'Wet') # After we have fixed all the bad edges, we can save the graph to different file formats. learner.save_graphs() # Oh no! There is still an error message that the edge from ```'Wet'``` to ```'Rain'``` is missing. ```Cause2e``` is strict about respecting domain knowledge because this is how we overcome spurious correlations. In our case, we know that the domain knowledge was wrong, so we can force the learner to save the graph anyway by passing ```strict=False```. The key lesson here is to be diligent when specifying domain knowledge, as is illustrated by all the complications that one misspecified orientation has caused. learner.save_graphs(strict=False) # ## Some more postprocessing options # The most important thing in dealing with graphs is having a visualization. By default, the graph is shown whenever we make any changes to it, but we might nonetheless want to call ```display_graph``` to have a look at it without modifying anything. learner.display_graph() # In scenarios with many variables, it can be hard to find nodes or edges in the graph. 
For this purpose, the ```StructureLearner``` is equipped with the ```has_node``` and ```has_edge``` methods that allow us to query the graph programmatically instead of visually. print(learner.has_node('Country')) print(learner.has_node('Sprinkler')) # When inspecting edges, not only the source and destination nodes matter, but also whether the edge is directed. print(learner.has_edge('Season', 'Rain')) print(learner.has_edge('Season', 'Rain', directed=False)) # In case we see that a variable is missing in the graph, we can implicitly add it by creating an edge to another variable. This can happen if a variable is unmeasured (not contained in the data set that was used for learning the graph) or if we have simply forgotten about it and want to include it in postprocessing, e.g. for subsequent discussions that rely on the causal graph as a means of communicating our assumptions. learner.add_edge('Country', 'Rain') # As we saw in our first attempt at saving the graph, ```cause2e``` can check whether a causal graph conforms to the previously communicated domain knowledge. Outside of a saving procedure, this functionality can be accessed explicitly via the ```respects_knowledge``` method. required = {('Season', 'Rain'), ('Season', 'Sprinkler')} edge_creator = knowledge.EdgeCreator() edge_creator.require_edges(required) learner.set_knowledge(edge_creator) print(learner.respects_knowledge()) # In the same way, we can directly check the graph for acyclicity by calling the ```is_acyclic``` method, and for undirected edges by calling the ```has_undirected_edges``` method.
examples/graph_postprocessing.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: wri_env # language: python # name: wri_env # --- import ee # + ee.Authenticate() # - ee.Initialize() import datetime ee_date = ee.Date('2020-01-01') py_date = datetime.datetime.utcfromtimestamp(ee_date.getInfo()['value']/1000.0) py_date = datetime.datetime.utcnow() ee_date = ee.Date(py_date) # + # Load a Landsat image. img = ee.Image('LANDSAT/LT05/C01/T1_SR/LT05_034033_20000913') # - # Use the image footprint as the export region so the example is self-contained. my_geometry = img.geometry() task = ee.batch.Export.image.toDrive(image=img, # an ee.Image object. region=my_geometry, # an ee.Geometry object. description='mock_export', folder='gdrive_folder', fileNamePrefix='mock_export', scale=1000, crs='EPSG:4326') # + import folium def add_ee_layer(self, ee_image_object, vis_params, name): map_id_dict = ee.Image(ee_image_object).getMapId(vis_params) folium.raster_layers.TileLayer( tiles=map_id_dict['tile_fetcher'].url_format, attr='Map Data &copy; <a href="https://earthengine.google.com/">Google Earth Engine</a>', name=name, overlay=True, control=True ).add_to(self) # - # Attach the helper as a method and call it on an actual map instance. folium.Map.add_ee_layer = add_ee_layer m = folium.Map(location=[39.5, -105.0], zoom_start=8) # approximate scene center m.add_ee_layer(img, {'bands': ['B3', 'B2', 'B1'], 'min': 0, 'max': 3000}, 'Landsat 5 SR') m
dssg/data-exploration/carlos/NTL/NTL.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python2 # --- # # Hola Recurrent Neural Nets! # # # <center> The sun rises in the ____. </center> # # # If we were asked to fill in the blank in the above sentence, we would probably predict # 'east'. How did we know that 'east' would be the right word? Because we read the whole # sentence, understood the context and predicted that 'east' would be an appropriate word here. # # If we use a feedforward neural network to predict the blank, it would not predict the right # word. This is because in a feedforward network each input is independent of the others: the # network makes predictions based only on the current input and does not remember previous inputs. # # Thus, the input to the network would be just the word before the blank, which is 'the'. With # this word alone as input, our network cannot predict the correct word because it doesn't know # the context of the sentence - that is, it doesn't know the previous set of words needed to # understand the context and predict an appropriate next word. # # This is where Recurrent Neural Networks come in. An RNN predicts the output based not only on # the current input but also on the previous hidden state. But why does it have to use the # current input and the previous hidden state - why can't it just use the current input and the # previous input? # # Because the previous input stores information only about the previous word, while the previous # hidden state captures contextual information about all the words in the # sentence that the network has seen so far. Basically, the previous hidden state acts like a # memory and captures the context of the sentence.
With this context and the current input, # we can predict the relevant word. # # For instance, let us take the same sentence, The sun rises in the ____. As shown in the # following figure, we first pass the word 'the' as an input and then pass the next word 'sun' # as input, but along with it we also pass the previous hidden state $h_0$. So every time we # pass an input word, we also pass the previous hidden state. # # In the final step, we pass the word 'the' along with the previous hidden state $h_3$, which # captures the contextual information about the sequence of words that the network has seen # so far. Thus, $h_3$ acts as memory and stores information about all the previous words that # the network has seen. With $h_3$ and the current input word 'the', we can now predict the # relevant next word. # # ![image](images/1.png) # # _In a nutshell, an RNN uses the previous hidden state as memory, which captures and stores the # information (inputs) that the network has seen so far._ # # RNN is widely applied to use cases that involve sequential data such as time series, text, # audio, speech, video, weather and many more. It has been used heavily in various Natural # Language Processing (NLP) tasks such as language translation, sentiment analysis, text # generation and so on. # # In the next section, we will learn about the difference between feedforward networks and RNN.
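The recurrence described above can be sketched in a few lines of NumPy. This is our own minimal illustration, not code from the book: the weight names, sizes and the random "word vectors" are all made up, but the structure shows how the hidden state accumulates context from every word seen so far.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # One recurrence: h_t = tanh(x_t W_xh + h_{t-1} W_hh + b),
    # i.e. the new hidden state depends on the current input AND the old state.
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 3
W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

# 'The sun rises in the' as five (random) word vectors, fed in order.
sentence = rng.normal(size=(5, input_dim))
h = np.zeros(hidden_dim)          # h_0: empty memory
for x_t in sentence:
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)

# h now summarizes everything the network has seen, like h_3 in the figure.
print(h.shape)
```

A feedforward network would compute its output from the last `x_t` alone; here the loop carries `h` forward, which is precisely the "memory" discussed in the text.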
Chapter04/4.01 Hola Recurrent Neural Networks.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Intro # # Notes from the book _Deep Learning for Coders with fastai and PyTorch_ # <NAME> & <NAME>, First Edition, July 2020 #hide # !pip install -Uqq fastbook import fastbook fastbook.setup_book() #hide from fastbook import * # # Image classification # # Using ResNet34 and fastai to build a dogs/cats classifier # + # CLICK ME from fastai.vision.all import * path = untar_data(URLs.PETS)/'images' def is_cat(x): return x[0].isupper() dls = ImageDataLoaders.from_name_func( path, get_image_files(path), valid_pct=0.2, seed=42, label_func=is_cat, item_tfms=Resize(224)) learn = cnn_learner(dls, resnet34, metrics=error_rate) learn.fine_tune(1) # - # ### Sample an image img = PILImage.create(image_cat()) img.to_thumb(192) # ### Upload your own image uploader = widgets.FileUpload() uploader img = PILImage.create(uploader.data[0]) is_cat,_,probs = learn.predict(img) print(f"Is this a cat?: {is_cat}.") print(f"Probability it's a cat: {probs[1].item():.6f}") # # General recommendations # # * When working on a deep learning problem, always start with a pretrained model. This is called _transfer learning_. This will significantly accelerate your project. For example, for computer vision, pre-trained networks already have accurate ways to detect colors, shapes, gradients, angles and so on. Your more specialized computer vision problem will be able to benefit from these low-level abstractions by using a pre-trained model. # # * Resize images to 224px. # # * Set seed to 42. # # * When working with a pretrained model, use the method `fine_tune()` instead of `fit()`. Using the `fit()` method will retrain the whole network, and you don't want that to happen when you are starting from a pretrained model.
# # * To get help with fastai methods, use the `doc()` method: doc(ImageDataLoaders.from_name_func) # Click on the _Show in docs_ link to open the docs website for detailed content # # Models for regression # # In this case, we will use a movie dataset to predict movies people might like based on their watching habits. The model will try to predict a value between 0.5 and 5.0. This is not a classification problem, but rather a regression one (predicting a value on a continuous scale). # + from fastai.collab import * path = untar_data(URLs.ML_SAMPLE) dls = CollabDataLoaders.from_csv(path/'ratings.csv') learn = collab_learner(dls, y_range=(0.5,5.5)) learn.fine_tune(10) # - learn.show_results() # Because the target is continuous, we use the param `y_range` to define the range of values for the prediction.
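The transfer-learning advice above ("start from a pretrained model, don't retrain everything") can be illustrated with a framework-free sketch. This is our own toy example, not fastai code, and all weights and data are synthetic: a "pretrained" feature extractor stays frozen while only a small new head is trained on the downstream task.

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend W_frozen was learned during pretraining; we never update it.
W_frozen = rng.normal(size=(8, 4))

def features(x):
    return np.maximum(x @ W_frozen, 0.0)  # frozen ReLU feature extractor

def bce(p, y):
    eps = 1e-9  # numerical guard for log(0)
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# New downstream task: binary labels from synthetic inputs.
X = rng.normal(size=(64, 8))
y = (X[:, 0] > 0).astype(float)

# Only the new head (w, b) is trained, by plain gradient descent.
w, b = np.zeros(4), 0.0
F = features(X)                  # frozen features: computed once, never changed
losses = []
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid head
    losses.append(bce(p, y))
    grad = (p - y) / len(X)                  # gradient of mean BCE wrt logits
    w -= 0.1 * F.T @ grad
    b -= 0.1 * grad.sum()

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In a real framework you would freeze the body's parameters and attach a new head; fastai's `fine_tune()` first trains the new head with the body frozen before unfreezing, which is why it is preferred over `fit()` here.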
pytorch/01_intro.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="bj9NCyO3svNM" colab_type="code" colab={} """ We use following lines because we are running on Google Colab If you are running notebook on a local computer, you don't need these """ from google.colab import drive drive.mount('/content/gdrive') import os os.chdir('/content/gdrive/My Drive/finch/tensorflow1/recommender/movielens/main') # + id="IIhxIOpidAB7" colab_type="code" outputId="b6c225ea-4b4d-4907-8ed1-a06ec58a8a91" executionInfo={"status": "ok", "timestamp": 1563411386072, "user_tz": -480, "elapsed": 35240, "user": {"displayName": "\u5982\u5b50", "photoUrl": "https://lh3.googleusercontent.com/-cJ4VJthuDc0/AAAAAAAAAAI/AAAAAAAABAw/iwZyEawePbs/s64/photo.jpg", "userId": "01997730851420384589"}} colab={"base_uri": "https://localhost:8080/", "height": 52} import tensorflow as tf print("TensorFlow Version", tf.__version__) print('GPU Enabled:', tf.test.is_gpu_available()) import pprint import logging from pathlib import Path # + id="JdPMZXf5tBg1" colab_type="code" colab={} # stream data from text files def gen_fn(f_path): movietype2idx = {} with open('../vocab/movie_types.txt') as f: for i, line in enumerate(f): line = line.rstrip() movietype2idx[line] = i with open(f_path) as f: print('Reading', f_path) for line in f: line = line.rstrip() (user_id, user_gender, user_age, user_job, movie_id, movie_types, movie_title, score) = line.split('\t') movie_types_ = [0] * len(movietype2idx) for movie_type in movie_types.split(): movie_types_[movietype2idx[movie_type]] = 1 movie_title = movie_title.split() yield (user_id, user_age, user_job, user_gender, movie_id, movie_types_, movie_title), score def input_fn(mode, params): _shapes = (([], [], [], [], [], [18], [None]), []) _types = ((tf.string, tf.string, tf.string, tf.string, tf.string, tf.int32, tf.string), 
tf.float32) _pads = (('-1', '-1', '-1', '-1', '-1', -1, '<pad>'), 0.) if mode == tf.estimator.ModeKeys.TRAIN: ds = tf.data.Dataset.from_generator( lambda: gen_fn('../data/train.txt'), output_shapes = _shapes, output_types = _types,) ds = ds.shuffle(params['buffer_size']) ds = ds.repeat() ds = ds.padded_batch(params['batch_size'], _shapes, _pads) if mode == tf.estimator.ModeKeys.EVAL: ds = tf.data.Dataset.from_generator( lambda: gen_fn('../data/test.txt'), output_shapes = _shapes, output_types = _types,) ds = ds.padded_batch(params['batch_size'], _shapes, _pads) return ds # + id="2BxViuvz-nhu" colab_type="code" colab={} def model_fn(features, labels, mode, params): # Receive inputs user_id, user_age, user_job, user_gender, movie_id, movie_types, movie_title = features # Flag for Dropout / Batch Norm is_training = (mode == tf.estimator.ModeKeys.TRAIN) # Word Indexing lookup_user_id = tf.contrib.lookup.index_table_from_file( '../vocab/user_id.txt', num_oov_buckets=1) lookup_user_age = tf.contrib.lookup.index_table_from_file( '../vocab/user_age.txt', num_oov_buckets=1) lookup_user_job = tf.contrib.lookup.index_table_from_file( '../vocab/user_job.txt', num_oov_buckets=1) lookup_user_gender = tf.contrib.lookup.index_table_from_file( '../vocab/user_gender.txt', num_oov_buckets=1) lookup_movie_id = tf.contrib.lookup.index_table_from_file( '../vocab/movie_id.txt', num_oov_buckets=1) lookup_movie_title = tf.contrib.lookup.index_table_from_file( '../vocab/movie_title.txt', num_oov_buckets=1) user_id = lookup_user_id.lookup(user_id) user_age = lookup_user_age.lookup(user_age) user_job = lookup_user_job.lookup(user_job) user_gender = lookup_user_gender.lookup(user_gender) movie_id = lookup_movie_id.lookup(movie_id) movie_title = lookup_movie_title.lookup(movie_title) # Embedding user_id = tf.contrib.layers.embed_sequence( ids = user_id, vocab_size = params['user_id_size'] + 1, embed_dim = params['large_embed_dim'], scope='user_id') user_age = tf.contrib.layers.embed_sequence( 
ids = user_age, vocab_size = params['user_age_size'] + 1, embed_dim = params['small_embed_dim'], scope='user_age') user_job = tf.contrib.layers.embed_sequence( ids = user_job, vocab_size = params['user_job_size'] + 1, embed_dim = params['small_embed_dim'], scope='user_job') user_gender = tf.contrib.layers.embed_sequence( ids = user_gender, vocab_size = params['user_gender_size'], embed_dim = params['small_embed_dim'], scope='user_gender') movie_id = tf.contrib.layers.embed_sequence( ids = movie_id, vocab_size = params['movie_id_size'] + 1, embed_dim = params['large_embed_dim'], scope='movie_id') movie_types = tf.to_float(movie_types) movie_title = tf.contrib.layers.embed_sequence( ids = movie_title, vocab_size = params['movie_title_size'] + 1, embed_dim = params['large_embed_dim'], scope='movie_title') # User Network user_feature = tf.concat((user_id, user_age, user_job, user_gender), -1) user_feature = tf.layers.dropout(user_feature, params['dropout_rate'], training=is_training) user_feature = tf.layers.dense(user_feature, params['hidden_dim'], params['activation'], name='user_feature/fc') # Movie Network movie_title = tf.layers.dropout(movie_title, params['dropout_rate'], training=is_training) movie_title = tf.reduce_max(tf.layers.conv1d(movie_title, filters=params['large_embed_dim'], kernel_size=params['kernel_size'], activation=params['activation'], name='movie_feature/conv1d'), axis=1) movie_feature = tf.concat((movie_id, movie_types, movie_title), -1) movie_feature = tf.layers.dropout(movie_feature, params['dropout_rate'], training=is_training) movie_feature = tf.layers.dense(movie_feature, params['hidden_dim'], params['activation'], name='movie_feature/fc') # Aggregation scores = tf.concat([tf.abs(user_feature - movie_feature), user_feature * movie_feature, user_feature, movie_feature], -1) scores = tf.layers.dropout(scores, params['dropout_rate'], training=is_training) scores = tf.layers.dense(scores, params['hidden_dim'], params['activation']) scores = 
tf.layers.dropout(scores, params['dropout_rate'], training=is_training) scores = tf.layers.dense(scores, params['hidden_dim'], params['activation']) logits = tf.layers.dense(scores, 5) predictions = tf.to_float(tf.argmax(logits, -1) + 1) if labels is not None: labels = (labels + 5.) / 2. loss_op = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits( labels=tf.to_int32(labels-1), logits=logits)) if mode == tf.estimator.ModeKeys.TRAIN: tf.logging.info('\n'+pprint.pformat(tf.trainable_variables())) global_step=tf.train.get_or_create_global_step() decay_lr = tf.train.exponential_decay( params['lr'], global_step, 1000, .96) optim = tf.train.AdamOptimizer(decay_lr) train_op = optim.minimize( loss_op, global_step=tf.train.get_or_create_global_step()) hook = tf.train.LoggingTensorHook({'lr': decay_lr}, every_n_iter=100) return tf.estimator.EstimatorSpec(mode=mode, loss=loss_op, train_op=train_op, training_hooks=[hook],) if mode == tf.estimator.ModeKeys.EVAL: mae_op = tf.metrics.mean_absolute_error(labels=labels, predictions=predictions) return tf.estimator.EstimatorSpec(mode=mode, loss=loss_op, eval_metric_ops={'mae': mae_op}) # + id="0ASyyUEGL4ct" colab_type="code" colab={} params = { 'log_path': '../log/dnn.txt', 'model_dir': '../model/dnn', 'user_id_size': 6040, 'user_age_size': 7, 'user_job_size': 21, 'user_gender_size': 2, 'movie_id_size': 3691, 'movie_title_size': 3761, 'small_embed_dim': 30, 'large_embed_dim': 200, 'hidden_dim': 200, 'activation': tf.nn.elu, 'kernel_size': 3, 'dropout_rate': 0.2, 'lr': 3e-4, 'num_patience': 7, 'buffer_size': int(6E5), 'batch_size': 256, } # + id="1GHxpMrcvtAM" colab_type="code" outputId="169a842f-6a98-469f-f8fd-8267961e25cf" executionInfo={"status": "ok", "timestamp": 1563415673752, "user_tz": -480, "elapsed": 667995, "user": {"displayName": "\u5982\u5b50", "photoUrl": "https://lh3.googleusercontent.com/-cJ4VJthuDc0/AAAAAAAAAAI/AAAAAAAABAw/iwZyEawePbs/s64/photo.jpg", "userId": "01997730851420384589"}} colab={"base_uri": 
"https://localhost:8080/", "height": 1000} # Create directory if not exist Path(os.path.dirname(params['log_path'])).mkdir(exist_ok=True) Path(params['model_dir']).mkdir(exist_ok=True, parents=True) # Logging logger = logging.getLogger('tensorflow') logger.setLevel(logging.INFO) fh = logging.FileHandler(params['log_path']) logger.addHandler(fh) # Create an estimator config = tf.estimator.RunConfig( save_checkpoints_steps=params['buffer_size']//params['batch_size'], keep_checkpoint_max=params['num_patience']+2,) estimator = tf.estimator.Estimator( model_fn=model_fn, model_dir=params['model_dir'], config=config, params=params) # This hook early-stops model if testing accuracy is not improved hook = tf.estimator.experimental.stop_if_no_decrease_hook( estimator=estimator, metric_name='mae', max_steps_without_decrease=params['num_patience']*params['buffer_size']//params['batch_size'], run_every_secs=None, run_every_steps=params['buffer_size']//params['batch_size']) # Train on training data and Evaluate on testing data train_spec = tf.estimator.TrainSpec( input_fn=lambda: input_fn(mode=tf.estimator.ModeKeys.TRAIN, params=params), hooks=[hook]) eval_spec = tf.estimator.EvalSpec( input_fn=lambda: input_fn(mode=tf.estimator.ModeKeys.EVAL, params=params), steps=None, throttle_secs=10,) tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
finch/tensorflow1/recommender/movielens/main/dnn_softmax.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Multiple Regression Analysis # ## Motivation for Multiple Regression # ### The Model with Two Independent Variables # # We skip definitions and approaches similar to those in simple regression$\DeclareMathOperator*{\argmin}{argmin} # \newcommand{\using}[1]{\stackrel{\mathrm{#1}}{=}} # \newcommand{\ffrac}{\displaystyle \frac} # \newcommand{\space}{\text{ }} # \newcommand{\bspace}{\;\;\;\;} # \newcommand{\QQQ}{\boxed{?\:}} # \newcommand{\CB}[1]{\left\{ #1 \right\}} # \newcommand{\SB}[1]{\left[ #1 \right]} # \newcommand{\P}[1]{\left( #1 \right)} # \newcommand{\dd}{\mathrm{d}} # \newcommand{\Tran}[1]{{#1}^{\mathrm{T}}} # \newcommand{\d}[1]{\displaystyle{#1}} # \newcommand{\EE}[2][\,\!]{\mathbb{E}_{#1}\left[#2\right]} # \newcommand{\Var}[2][\,\!]{\mathrm{Var}_{#1}\left[#2\right]} # \newcommand{\Cov}[2][\,\!]{\mathrm{Cov}_{#1}\left(#2\right)} # \newcommand{\Corr}[2][\,\!]{\mathrm{Corr}_{#1}\left(#2\right)} # \newcommand{\I}[1]{\mathrm{I}\left( #1 \right)} # \newcommand{\N}[1]{\mathrm{N} \left( #1 \right)} # \newcommand{\ow}{\text{otherwise}}$. One thing to mention is that the key assumption about how $u$ is related to $x_1$ and $x_2$ is $\EE{u \mid x_1 , x_2} = 0$. # # ### The Model with $k$ Independent Variables # # Not much differs from before, except one thing: $\EE{u \mid x_1 , x_2, \dots, x_k}= 0$.
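# The model with $k$ independent variables can be fitted by OLS in one least-squares call. A minimal numerical sketch on simulated data (the data-generating process and coefficient values below are invented for illustration, not taken from the text); the zero-conditional-mean assumption holds here by construction, since $u$ is drawn independently of the regressors:

```python
import numpy as np

# Simulated data: y = 1 + 2*x1 - 0.5*x2 + u, with u independent of (x1, x2)
rng = np.random.default_rng(0)
n = 1000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
u = rng.normal(size=n)                     # E[u | x1, x2] = 0 by construction
y = 1.0 + 2.0 * x1 - 0.5 * x2 + u          # true betas: (1, 2, -0.5)

# Design matrix with an intercept column; lstsq solves the OLS problem
X = np.column_stack([np.ones(n), x1, x2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)                            # should be close to (1, 2, -0.5)
```

# `np.linalg.lstsq` minimizes $\sum_i (y_i - \hat\beta_0 - \hat\beta_1 x_{i1} - \hat\beta_2 x_{i2})^2$, which is exactly the OLS problem stated above.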
# # # ## Mechanics and Interpretation of Ordinary Least Squares # ### Obtaining the OLS estimates # # OLS ***first order conditions***: # # $$\left\{\begin{align} # \sum_{i=1}^{n} \P{y_i - \hat\beta_0 - \hat\beta_1x_{i1} - \cdots - \hat\beta_k x_{ik}} &= 0\\ # \sum_{i=1}^{n} x_{i1}\P{y_i - \hat\beta_0 - \hat\beta_1x_{i1} - \cdots - \hat\beta_k x_{ik}} &= 0\\ # \sum_{i=1}^{n} x_{i2}\P{y_i - \hat\beta_0 - \hat\beta_1x_{i1} - \cdots - \hat\beta_k x_{ik}} &= 0\\ # &\vdots \\ # \sum_{i=1}^{n} x_{ik}\P{y_i - \hat\beta_0 - \hat\beta_1x_{i1} - \cdots - \hat\beta_k x_{ik}} &= 0\\ # \end{align}\right.$$ # # Also, these can be obtained by **moment methods**, because they are equivalent to $\EE{u} = 0$ and $\EE{x_j u} = 0$ where $j = 1, 2, \dots, k$. # # $Remark$ # # A further assumption guaranteeing a unique solution to this system of equations will be given later # ### Interpreting the OLS Regression Equation # # OLS ***regression line***: $\hat y = \hat \beta_0 + \hat\beta_1 x_1 + \hat\beta_2 x_2 + \cdots + \hat\beta_k x_k$ # # ### On the Meaning of "Holding Other Factors Fixed" in Multiple Regression # # The power of multiple regression analysis is that it allows us to do in nonexperimental environments what natural scientists are able to do in a controlled laboratory setting: keep other factors fixed. # # ### Changing More than One Independent Variable Simultaneously # ### OLS Fitted Values and Residuals # # ***Fitted value***: $\hat y_i = \hat \beta_0 + \hat\beta_1 x_{i1} + \cdots + \hat\beta_k x_{ik}$ # # ***Residual***: $\hat u_i = y_i - \hat y_i$ # # Here are some properties inherited from the **SLR** method.
# # - The sample average of the residuals is zero: $\sum\limits_{i=1}^{n} \hat u_i = 0$ ($\EE{\hat u} = 0$) and so $\bar y = \bar{\hat y}$ # - The sample covariance between each independent variable and the OLS residuals is zero: $\sum\limits_{i=1}^{n} x_{ij} \hat u_i = 0$ and so: the sample covariance between the OLS fitted values and the OLS residuals is zero: $\sum\limits_{i=1}^{n}\hat y_i \hat u_i = 0$ # - $\bar y = \hat \beta_0 + \hat\beta_1 \bar x_1 + \cdots + \hat \beta_k \bar x_k$ # ### A "Partialling Out" Interpretation of Multiple Regression # # We now focus on the "partialling out" interpretation. # # Consider the case with $k=2$ independent variables, and the regression result is $ # \hat y = \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 x_2$. Here's another expression for $\hat\beta_1$. # # $$\hat\beta_1 = \ffrac{\sum\limits_{i=1}^{n} \hat r_{i1}y_i} {\sum\limits_{i=1}^{n} \hat r_{i1}^2}$$ # # Here $\hat r_{i1}$ is the residual from a simple regression of $x_1$ on $x_2$ (which has NOTHING to do with $y$). After that we run another regression, of $y$ on $\hat r_{i1}$, so as to obtain $\hat\beta_1$. # # This can be interpreted as follows: we first partial out the part of $x_1$ that is correlated with $x_2$, so that only $\hat r_{i1}$ is left. Then $\hat \beta_1$ measures the sample relationship between $y$ and $x_1$ after $x_2$ has been partialled out. # # $$\hat\beta_1 = \ffrac{\sum \P{\hat r_{i1} - \bar{\hat r}_{i1}}\P{y_i - \bar y}} {\sum \P{\hat r_{i1} - \bar{\hat r}_{i1}}^2 }$$ # # Then by the fact that $\sum \hat r_{i1} = 0$ (it is a residual, after all), we can see its final form. Also, in the general model with $k$ explanatory variables, $\hat\beta_1$ can be written in the same way. # # $Proof$ # # >To derive the equation, we first follow the strategy and write $x_{i1} = \hat x_{i1} + \hat r_{i1}$ from the regression of $x_1$ on $x_2, x_3, \dots,x_k$ for all $i = 1,2,\dots,n$.
Then we plug it back into the **OLS first order conditions** and obtain: # > # >$$\sum\nolimits_{i=1}^{n} \P{\hat x_{i1} + \hat r_{i1}} \P{y_i - \hat\beta_0 - \hat\beta_1 x_{i1} - \cdots - \hat\beta_k x_{ik}} = 0$$ # > # >At this point, $\hat x_{i1}$ is a linear combination of the other explanatory variables $x_{i2}, x_{i3}, \dots, x_{ik}$ and thus $\sum \hat x_{i1}\hat u_i = 0$. Therefore, # # >$$\sum_{i=1}^{n} \hat r_{i1} \P{y_i - \hat\beta_0 - \hat\beta_1 x_{i1} - \cdots - \hat\beta_k x_{ik}} = 0$$ # > # >Then, on account of the fact that $\hat r_{i1}$ are the residuals from regressing $x_1$ on $x_2, x_3, \dots, x_k$, we have $\sum_{i=1}^{n} x_{ij} \hat r_{i1} = 0$ for all $j = 2,3,\dots,k$. Therefore, the preceding equation is simplified to: # > # >$$\sum_{i=1}^{n} \hat r_{i1} \P{y_i - \hat\beta_1 x_{i1}} = 0 \\ # \Longrightarrow # \hat\beta_1 = \ffrac{\d{\sum_{i=1}^{n} \hat r_{i1}y_i}} {\d{\sum_{i=1}^{n} \hat r_{i1} x_{i1}}}$$ # > # >Finally, using the fact that $\sum_{i=1}^{n} \hat x_{i1} \hat r_{i1} = 0$, we have # > # >$$\hat\beta_1 = \ffrac{\d{\sum_{i=1}^{n} \hat r_{i1}y_i}} {\d{\sum_{i=1}^{n} \hat r_{i1} \P{x_{i1} - \hat x_{i1} }}} = \ffrac{\d{\sum_{i=1}^{n} \hat r_{i1}y_i}} {\d{\sum_{i=1}^{n} \hat r_{i1}^2}}$$ # ### Comparison of Simple and Multiple Regression Estimates # # If the model has two variables but we intentionally omit one, say $x_2$, then denote the simple regression result as $\tilde y = \tilde \beta_0 + \tilde \beta_1 x_1$ while the one with the full set of variables has the form $\hat y = \hat \beta_0 + \hat\beta_1 x_1 + \hat\beta_2 x_2$. The relation between $\tilde \beta_1$ and $\hat \beta_1$ can be expressed as: $\boxed{\tilde\beta_1 = \hat\beta_1 + \hat\beta_2 \cdot \tilde\delta_1}$, where $\tilde\delta_1$ is the slope coefficient from the simple regression of $x_{i2}$ on $x_{i1}$, $i = 1,2,\dots, n$.
# # Interpretation: $\tilde\beta_1$ is somewhat the sum of the partial effects of $x_1$ on $\hat y$ and the partial effects of $x_2$ on $\hat y$ times the slope in the sample regression of $x_2$ on $x_1$. ***3A.4*** # # And only in two cases will they equal: # # 1. $\hat\beta_2 = 0$, that the partial effect of $x_2$ on $\hat y$ is $0$ in the sample$\\[0.5em]$ # 2. $\tilde\delta_1 = 0$, that $x_1$ and $x_2$ are uncorrelated in the sample # # And the generalized one: # # 1. the OLS coefficients on $x_2$ through $x_k$ are all $0$ # 2. $x_1$ is uncorrelated with *each* of $x_2, x_3,\dots,x_k$ # ### Goodness-of-Fit # # Then we define the $\text{SST} = \sum \P{y_i - \bar y}^2$, $\text{SSE} = \sum \P{\hat y_i - \bar y}^2$, and $\text{SSR} = \sum \hat u_i^2$ and $R^2 = \ffrac{\text{SSE}} {\text{SST}}$. And using the same argument in the SLR, we have $\text{SSR} = \text{SST} - \text{SSE}$. # # And there's also an alternative expression of $R^2$, seemingly using an asymptotic way: # # $$\begin{align} # R^2 &= \rho^2\P{y_i, \hat y_i} \\ # &= \ffrac{\Cov{y_i, \hat y_i}^2} {\Var{y_i}\Var{\hat y_i}} \\ # &= \ffrac{\EE{\P{y_i - \EE{y_i}}\P{\hat y_i - \EE{\hat y_i}}}^2} {\EE{y_i - \EE{y_i}}^2\EE{\hat y_i - \EE{\hat y_i}}^2} \\ # &\approx \ffrac{\P{\sum\limits_{i=1}^{n} \P{y_i - \bar y}\P{\hat y - \bar{\hat y}}}^2} {\P{\sum\limits_{i=1}^{n} \P{y_i - \bar{y}}^2}\P{\sum\limits_{i=1}^{n} \P{\hat y - \bar{\hat y}}^2}} # \end{align}$$ # # ### Regression through the Origin # ## The Expected Value of the OLS Estimators # # $Assumption$ $\text{MLR}.1$ to $\text{MLR}.4$ # # - Linear in parameters: In the population, the relationship between $y$ and the explanatory variables is linear: $y = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k + u$. 
This model is called the ***population model*** or ***true model*** # - Random Sampling: The data are a random sample drawn from the population: $\CB{\P{x_{i1},x_{i2},\dots,x_{ik}, y_i}:i=1,\dots,n}$ # - and we write: $y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_kx_{ik} + u_i\\[0.5em]$ # - No perfect collinearity: In the sample (and therefore in the population), none of the independent variables is constant and there are no *exact linear* relationships among the independent variables # - Later we will see that the variance of an estimator soars when the independent variables are nearly linearly related # - If an exact linear relationship does exist, we say ***perfect collinearity*** occurs, and the model cannot be estimated using OLS. # - Zero conditional mean: The value of the explanatory variables must contain no information about the mean of the unobserved factors: $\EE{u \mid x_{1} , x_{2}, \dots, x_{k}} = 0$ # # $Theorem.1$ # # By assumptions $\text{MLR}.1$ to $\text{MLR}.4$, we claim that $\EE{\hat\beta_j} = \beta_j$ # # $Proof$ # # > Using matrices would be a better way, but here we just focus on one slope parameter. # > # >First, under $\text{MLR}.3$ we have $\hat\beta_1 = \ffrac{\sum\limits_{i=1}^{n} \hat r_{i1} y_i} {\sum\limits_{i = 1}^{n} \hat{r}^2_{i1}}$. # > # >Then under $\text{MLR}.1$, we have $y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_k x_{ik} + u_i$; we can substitute this $y_i$ back and obtain $\hat\beta_1 = \ffrac{\sum\limits_{i=1}^{n} \hat r_{i1} \P{\beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_k x_{ik} + u_i}} {\sum\limits_{i = 1}^{n} \hat{r}^2_{i1}}$. # > # >We now deal with the terms separately. # > # >$\sum \hat r_{i1} = 0$, since it is a residual; $\sum x_{ij}\hat r_{i1} = 0$, since that is the sample covariance of the residual and an explanatory variable; these hold true for all $j = 2,3,\dots,k$. And $\sum x_{i1}\hat r_{i1} = \sum \hat r_{i1}^2$ since $x_{i1} = \text{linear function}\P{x_{i2}, x_{i3}, \dots, x_{ik}} + \hat r_{i1}$.
# # >Finally, $\hat \beta_1 = \beta_1 + \ffrac{\sum\limits_{i=1}^{n} \hat r_{i1} u_i} {\sum\limits_{i = 1}^{n} \hat{r}^2_{i1}}$ # > # >Next, under assumptions $\text{MLR}.2$ and $\text{MLR}.4$ we consider the expectation of $\hat\beta_1$ conditioned on $\mathbf{X} = \P{X_1, X_2, \dots, X_k}$: # > # >$$\begin{align} # \EE{\hat\beta_1 \mid \mathbf{X}} &= \beta_1 + \ffrac{\sum\limits_{i=1}^{n} \hat r_{i1} \EE{u_i\mid \mathbf{X}}} {\sum\limits_{i = 1}^{n} \hat{r}^2_{i1}} \\ # &= \beta_1 + \ffrac{\sum\limits_{i=1}^{n} \hat r_{i1} \cdot 0} {\sum\limits_{i = 1}^{n} \hat{r}^2_{i1}} \\ # &= \beta_1 = \EE{\hat\beta_1} # \end{align}$$ # # ### Including Irrelevant Variables in a Regression Model # # Suppose one or more variables are included in the model even though they have no partial effect on $y$ in the population. As a simple example, say the model is $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + u$ and $x_3$ is useless here. Then in terms of conditional expectations, $\EE{y \mid x_1,x_2,x_3} = \EE{y \mid x_1,x_2} = \beta_0 + \beta_1 x_1 + \beta_2 x_2$ # # Then, $\EE{\hat\beta_3} = 0$. Though $\hat\beta_3$ might not be exactly $0$, its expectation is. And our conclusion is that including one or more *irrelevant variables* in a multiple regression model, or overspecifying the model, does not affect the unbiasedness of the OLS estimators. However, the variances will suffer. # # ### Omitted Variable Bias: The Simple Case # # We see this from a simple case. Suppose the true model is $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + u$ while we assume it to be in another form $y = \alpha_0 + \alpha_1 x_1 + w$. What's the bias from this? # # First we assume that $x_2 = \delta_0 + \delta_1 x_1 + v$, thus the model changes to $y = \P{\beta_0 + \beta_2 \delta_0} + \P{\beta_1 + \beta_2 \delta_1}x_1 + \P{\beta_2 v + u}$.
Here the estimated intercept is $\P{\beta_0 + \beta_2 \delta_0}$, altered from $\beta_0$; the estimated slope on $x_1$ will be $\P{\beta_1 + \beta_2 \delta_1}$, also altered, from $\beta_1$; and the error term changes to $\P{\beta_2 v + u}$ from a simple $u$. **All estimated coefficients will be biased now**. # # If we do the sample regression of $y$ only on $x_1$, we will have $\tilde y = \tilde\beta_0 + \tilde\beta_1 x_1$. An interesting algebraic relationship is $\tilde\beta_1 = \hat\beta_1 + \hat\beta_2 \tilde\delta_1$. Thus, $\EE{\tilde\beta_1} = \beta_1 + \beta_2 \tilde\delta_1$ and $\text{Bias}\P{\tilde\beta_1} = \EE{\tilde\beta_1} - \beta_1 = \beta_2 \tilde\delta_1$, called the ***omitted variable bias***. The bias vanishes in two cases: # # 1. $\beta_2 = 0$: when $x_2$ really is not a variable in the **true model**. # 2. $\tilde\delta_1 = 0$: since $\tilde\delta_1$ is the sample covariance between $x_1$ and $x_2$ over the sample variance of $x_1$, it is $0$ $iff$ $x_1$ and $x_2$ are *uncorrelated* in the sample. # ### Omitted Variable Bias: More General Cases # # Suppose the population model $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + u$ satisfies Assumptions $\text{MLR}.1$ to $\text{MLR}.4$. However, if we omit the variable $x_3$, the estimated model is $\tilde y = \tilde\beta_0 + \tilde\beta_1 x_1 + \tilde\beta_2 x_2$. # # To see how $\EE{\tilde\beta_1}$ and $\EE{\tilde\beta_2}$ are biased, we write them out.
To obtain a value for this, we first need to assume that $x_1$ and $x_2$ are uncorrelated, then # # $$\begin{align} # \EE{\tilde\beta_1} &= \EE{\hat\beta_1 + \hat\beta_3 \tilde\delta_1} & \EE{\tilde\beta_2} &= \EE{\hat\beta_2 + \hat\beta_3 \tilde\delta_2}\\ # &= \beta_1 + \beta_3 \cdot \ffrac{\d{\sum_{i=1}^{n} \P{x_{i1} - \bar{x}_1}x_{i3}}} {\d{\sum_{i=1}^{n} \P{x_{i1} - \bar{x}_1}^2}} &&= \beta_2 + \beta_3 \cdot \ffrac{\d{\sum_{i=1}^{n} \P{x_{i2} - \bar{x}_2}x_{i3}}} {\d{\sum_{i=1}^{n} \P{x_{i2} - \bar{x}_2}^2}} # \end{align}$$ # ## The Variance of the OLS Estimators # # $Assumption$ $\text{MLR}.5$ # - Homoscedasticity: The value of the explanatory variables must contain no information about the variance of the unobserved factors: $\Var{u \mid x_{1},x_{2},\dots, x_{k}} = \sigma^2$ # # $Remark$ # # $\text{MLR}.1$ to $\text{MLR}.5$ are collectively known as the ***Gauss-Markov assumptions*** (for cross-sectional regression). # # $Theorem.2$ # # By assumptions $\text{MLR}.1$ to $\text{MLR}.5$, we claim that $\Var{\hat\beta_j} = \ffrac{\sigma^2} {\text{SST}_j \P{1-R_j^2}}$ for $j = 1,2,\dots,k$. Here $\text{SST}_j$ is the **Total sample variation** in explanatory variable $x_j$: $\sum_{i=1}^{n}\P{x_{ij} - \bar x_j}^2$ and $R_j^2 = \rho^2\P{x_j,\hat x_j} = \ffrac{\P{\sum\limits_{i=1}^{n} \P{x_{ij} - \bar x_j}\P{\hat x_{ij} - \bar{\hat x_j}}}^2} {\P{\sum\limits_{i=1}^{n} \P{x_{ij} - \bar{x_j}}^2}\P{\sum\limits_{i=1}^{n} \P{\hat x_{ij} - \bar{\hat x_j}}^2}}$. This $R_j^2$ is from regressing $x_j$ on all other independent variables (and including an intercept). # # ### The Components of The OLS Variances: Multicollinearity # # - The Error Variance: $\sigma^2$. 
# - A bigger error variance means a bigger sampling variance and a less precise estimation # - The total Sample Variation in $x_j$: $\text{SST}_j$ # - A larger sample gives a higher $\text{SST}_j$ and a more accurate estimate # - Zero sample variance in $x_j$ is rare, and it is ruled out by $\text{MLR}.3$ # - ***micronumerosity***: a small sample size (hence a small $\text{SST}_j$) can lead to large sampling variances # - The Linear Relationships among the Independent Variables: $R_j^2$ # - If two variables are highly correlated, then $R_j^2 \to 1$, which greatly magnifies the variance # - ***multicollinearity***: high (but not perfect) correlation between two or more independent variables # # Here we call $1/\P{1-R_j^2}$ the ***Variance Inflation Factor***. And the conclusion is: dropping some variables will reduce the multicollinearity while possibly leading to omitted variable bias. # ### Variances in Misspecified Models # # - True Model: $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + u$ # - Estimated Model: $\hat y = \hat \beta_0 + \hat \beta_1 x_1 + \hat \beta_2 x_2$ # - Estimated Model with $\beta_2$ omitted: $\tilde y = \tilde \beta_0 + \tilde \beta_1 x_1$ # # Their variances are: $\Var{\hat \beta_1} = \ffrac{\sigma^2} {\text{SST}_1 \P{1-R_1^2}}$ and $\Var{\tilde \beta_1} = \ffrac{\sigma^2} {\text{SST}_1}$. And we can divide this into two cases: # # 1. for $\beta_2 = 0$, $\EE{\hat \beta_1} = \beta_1$ and $\EE{\tilde\beta_1} = \beta_1$; besides, $\Var{\tilde\beta_1} < \Var{\hat\beta_1}$ # 2. for $\beta_2 \neq 0$, $\EE{\hat \beta_1} = \beta_1$ and $\EE{\tilde\beta_1} \neq \beta_1$, but still $\Var{\tilde\beta_1} < \Var{\hat\beta_1}$ # ### Estimating $\sigma^2$: Standard Errors of the OLS Estimators # # In analogy to simple regression, $\hat\sigma^2 = \ffrac{1} {n-k-1} \sum \hat u_i^2 = \ffrac{\text{SSR}} {n-k-1}$, unbiased.
Here is the theorem. # # $Remark$ # # $n-k-1$ is the ***degrees of freedom***: $\text{number of observations} - \text{number of estimated parameters}$ # # $Theorem.3$ # # Under $\text{MLR}.1$ through $\text{MLR}.5$, $\EE{\hat\sigma^2} = \sigma^2$ # # Here, $\hat\sigma = \sqrt{\hat\sigma^2}$ is called the ***standard error of the regression (SER)***. # # Then we can use this to estimate the sampling variation. The ***standard deviation*** of $\hat\beta_j$: # # $$\text{sd}\P{\hat\beta_j} = \sqrt{\Var{\hat\beta_j}} = \sqrt{\ffrac{1} {\text{SST}_j \P{1-R_j^2}}}\sigma$$ # # and then the estimated one, the ***standard error*** of $\hat\beta_j$, obtained by replacing the $\sigma$ in the last expression with $\hat\sigma$: # # $$\text{se}\P{\hat\beta_j} = \sqrt{\widehat{\Var{\hat\beta_j}}} = \sqrt{\ffrac{1} {\text{SST}_j \P{1-R_j^2}}}\hat\sigma$$ # ## Efficiency of OLS: The Gauss-Markov Theorem # # 1. Under Assumptions $\text{MLR}.1$ to $\text{MLR}.4$, OLS is unbiased # 2. Adding $\text{MLR}.5$, it becomes the estimator with the smallest variance # # Thus we call these estimators the ***best linear unbiased estimators (BLUE)*** of the regression coefficients. # # $Remark$ # # Linear here means that the estimator can be expressed as a weighted sum of the observations on the dependent variable: # # $$\tilde\beta_j = \sum_{i=1}^{n} w_{ij}y_i$$ # # And best means the smallest variance among all such estimators. # # $Theorem.4$ GAUSS-MARKOV THEOREM # # Under Assumptions $\text{MLR}.1$ through $\text{MLR}.5$, $\hat\beta_1, \hat\beta_2,\dots,\hat\beta_k$, the OLS estimators, are the ***best linear unbiased estimators (BLUEs)*** of $\beta_1, \beta_2,\dots,\beta_k$, respectively. # ***
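# The "partialling out" result from earlier in this chapter can be checked numerically: the slope of $y$ on the residuals from regressing $x_1$ on $x_2$ reproduces the multiple-regression coefficient on $x_1$. A minimal sketch on simulated data (the data-generating process below is invented for illustration, not taken from the text):

```python
import numpy as np

# Simulated data with correlated regressors
rng = np.random.default_rng(1)
n = 500
x2 = rng.normal(size=n)
x1 = 0.6 * x2 + rng.normal(size=n)         # x1 is correlated with x2
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(size=n)

# Full multiple regression: y on (1, x1, x2)
X = np.column_stack([np.ones(n), x1, x2])
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]

# Partialling out: residuals r1 from regressing x1 on (1, x2)
X2 = np.column_stack([np.ones(n), x2])
g = np.linalg.lstsq(X2, x1, rcond=None)[0]
r1 = x1 - X2 @ g                           # sum(r1) is ~0 by construction

# Slope of y on r1 (no intercept needed since r1 sums to zero)
beta1_fwl = (r1 @ y) / (r1 @ r1)
print(beta_full[1], beta1_fwl)             # the two coincide up to rounding
```

# This is exactly the formula $\hat\beta_1 = \sum_i \hat r_{i1} y_i / \sum_i \hat r_{i1}^2$ derived in the proof above, verified on data rather than algebraically.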
FinMath/Econometrics/Chap_03.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + ########Reading data from Mongo####### from pyspark.sql import SparkSession import re from pyspark.sql.functions import * from pyspark.sql.types import * from pyspark.sql import functions as F # spark mongo connector spark = SparkSession.builder.master("local[*]").config("spark.mongodb.input.uri","mongodb://localhost:27017/twitter_db.Dummy").config("spark.mongodb.output.uri","mongodb://localhost:27017/twitter_db.Dummy").config("spark.jars.packages","org.mongodb.spark:mongo-spark-connector_2.12:3.0.0").getOrCreate() df = spark.read.format("mongo").option("uri","mongodb://localhost:27017/twitter_db.Dummy").load() df.registerTempTable("Twitter") #df.count() # - """ # Cleaning Data sf_1=df.select("text") words = sf_1.select(explode(split(df.text, "t_end")).alias("word")) word=df.select(F.regexp_replace('text', r'http\S+', '').alias("text")) word=word.select(F.regexp_replace('text', '@\w+', '').alias("text")) word=word.select(F.regexp_replace('text', '#', '').alias("text")) word=word.select(F.regexp_replace('text', 'RT', '').alias("text")) word=word.select(F.regexp_replace('text', ':', '').alias("text")) word.show(truncate=False) """ # + ##########Feature Selection############ # Selecting required columns # From the JSON file we take the required fields using Spark SQL sf=df.select(explode(split(df.text, "t_end")).alias("text"),"created_at",col("user.location").alias("Location"),"retweet_count",col("user.followers_count").alias("User_followers"),col("user.favourites_count").alias("favourites_count"),col("user.verified").alias("Verified User"),"lang") sf=sf.select(F.regexp_replace('text', r'http\S+', '').alias("Text"),"created_at","Location","retweet_count","favourites_count","Verified User","User_followers","lang")
sf=sf.select(F.regexp_replace('Text', '@\w+', '').alias("text"),"created_at","Location","retweet_count","favourites_count","Verified User","User_followers","lang") sf=sf.select(F.regexp_replace('text', '#', '').alias("text"),"created_at","Location","retweet_count","favourites_count","Verified User","User_followers","lang") sf=sf.select(F.regexp_replace('text', 'RT', '').alias("text"),"created_at","Location","retweet_count","favourites_count","Verified User","User_followers","lang") sf=sf.select(F.regexp_replace('text', ':', '').alias("Text"),from_unixtime(unix_timestamp('created_at', 'EEE MMM d HH:mm:ss z yyyy'),format="yyyy-MM-dd").alias('date'),"Location","User_followers","favourites_count","retweet_count","Verified User") sf=sf.fillna({"Location": "unknown"}) sf=sf.fillna({"retweet_count": 0}) sf=sf.filter((col("lang") == 'en')) sf.show() #sf.printSchema() #sf.show() #sf.withColumn('newDate',f.date_sub("created_at",10)).show() #df.show(2) # - #Counting total tweets #sf.where(col("retweet_count").isin({"0"})) # + ############ Writing structured data into MongoDB ########### sf.write.format("mongo").option("uri","mongodb://localhost:27017/Sample2.Data").mode("append").save() # + import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import pymongo #import pandas as pd from pymongo import MongoClient # Input data files are available in the "../input/" directory. # For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory from subprocess import check_output #print(check_output(["ls", "../input"]).decode("utf8")) # Any results you write to the current directory are saved as output. 
#tweets=pd.read_csv("final.csv",encoding = "ISO-8859-1") #tweets.head() # - connection = pymongo.MongoClient("localhost",27017) db = connection["twtt"] collection = db["tweets"] print("connect to mongoDB") tweets = pd.DataFrame(collection.find()) #df.to_csv('file1.csv') # + #tweets=pd.read_csv("file1.csv",encoding = "ISO-8859-1") tweets.head() # + from nltk.sentiment.vader import SentimentIntensityAnalyzer from nltk.sentiment.util import * from nltk import tokenize sid = SentimentIntensityAnalyzer() tweets['sentiment_compound_polarity']=tweets.Text.apply(lambda x:sid.polarity_scores(x)['compound']) tweets['sentiment_neutral']=tweets.Text.apply(lambda x:sid.polarity_scores(x)['neu']) tweets['sentiment_negative']=tweets.Text.apply(lambda x:sid.polarity_scores(x)['neg']) tweets['sentiment_pos']=tweets.Text.apply(lambda x:sid.polarity_scores(x)['pos']) tweets['sentiment_type']='' tweets.loc[tweets.sentiment_compound_polarity>0,'sentiment_type']='POSITIVE' tweets.loc[tweets.sentiment_compound_polarity==0,'sentiment_type']='NEUTRAL' tweets.loc[tweets.sentiment_compound_polarity<0,'sentiment_type']='NEGATIVE' tweets.head() # + #tweets.save # - tweets.sentiment_type.value_counts().plot(kind='bar',title="sentiment analysis") # + import re, nltk from nltk.stem import WordNetLemmatizer from nltk.corpus import stopwords stop_words = set(stopwords.words('english')) wordnet_lemmatizer = WordNetLemmatizer() import nltk #nltk.download('stopwords') def normalizer(tweet): only_letters = re.sub("[^a-zA-Z]", " ",tweet) tokens = nltk.word_tokenize(only_letters)[2:] lower_case = [l.lower() for l in tokens] filtered_result = list(filter(lambda l: l not in stop_words, lower_case)) lemmas = [wordnet_lemmatizer.lemmatize(t) for t in filtered_result] return lemmas # - normalizer("Here is text about an airline I like.") pd.set_option('display.max_colwidth', -1) # Setting this so we can see the full content of cells tweets['normalized_tweet'] = tweets.Text.apply(normalizer) 
tweets[['Text','normalized_tweet']].head() tweets from nltk import ngrams def ngrams(input_list): #onegrams = input_list bigrams = [' '.join(t) for t in list(zip(input_list, input_list[1:]))] trigrams = [' '.join(t) for t in list(zip(input_list, input_list[1:], input_list[2:]))] return bigrams+trigrams tweets['grams'] = tweets.normalized_tweet.apply(ngrams) tweets[['grams']].head() import collections def count_words(input): cnt = collections.Counter() for row in input: for word in row: cnt[word] += 1 return cnt import numpy as np tweets[(tweets.sentiment_type == 'NEGATIVE')][['grams']].apply(count_words)['grams'].most_common(20) tweets[(tweets.sentiment_type == 'POSITIVE')][['grams']].apply(count_words)['grams'].most_common(20) import numpy as np from scipy.sparse import hstack from sklearn.feature_extraction.text import CountVectorizer count_vectorizer = CountVectorizer(ngram_range=(1,2)) vectorized_data = count_vectorizer.fit_transform(tweets.Text) indexed_data = hstack((np.array(range(0,vectorized_data.shape[0]))[:,None], vectorized_data)) def sentiment2target(sentiment): return { 'NEGATIVE': 0, 'NEUTRAL': 1, 'POSITIVE' : 2 }[sentiment] targets = tweets.sentiment_type.apply(sentiment2target) from sklearn.model_selection import train_test_split data_train, data_test, targets_train, targets_test = train_test_split(indexed_data, targets, test_size=0.3, random_state=0) data_train_index = data_train[:,0] data_train = data_train[:,1:] data_test_index = data_test[:,0] data_test = data_test[:,1:] from sklearn.naive_bayes import MultinomialNB clf = MultinomialNB() clf_output=clf.fit(data_train,targets_train) clf.score(data_test, targets_test) sentences = count_vectorizer.transform([ "What a great airline, the trip was a pleasure!", "My issue was quickly resolved after calling customer support. Thanks!", "What the hell! My flight was cancelled again. This sucks!", "Service was awful. I'll never fly with you again.", "You fuckers lost my luggage. 
Never again!", "I have mixed feelings about airlines. I don't know what I think.", "" ]) clf.predict_proba(sentences) predictions_on_test_data = clf.predict_proba(data_test) index = np.transpose(np.array([range(0,len(predictions_on_test_data))])) indexed_predictions = np.concatenate((predictions_on_test_data, index), axis=1).tolist() def marginal(p): top2 = p.argsort()[::-1] return abs(p[top2[0]]-p[top2[1]]) margin = sorted(list(map(lambda p : [marginal(np.array(p[0:3])),p[3]], indexed_predictions)), key=lambda p : p[0]) list(map(lambda p : tweets.iloc[data_test_index[int(p[1])].toarray()[0][0]].Text, margin[0:10])) list(map(lambda p : predictions_on_test_data[int(p[1])], margin[0:10])) list(map(lambda p : tweets.iloc[data_test_index[int(p[1])].toarray()[0][0]].Text, margin[-10:])) list(map(lambda p : predictions_on_test_data[int(p[1])], margin[-10:])) import matplotlib.pyplot as plt marginal_probs = list(map(lambda p : p[0], margin)) n, bins, patches = plt.hist(marginal_probs, 25, facecolor='blue', alpha=0.75) plt.title('Marginal confidence histogram - All data') plt.ylabel('Count') plt.xlabel('Marginal confidence') plt.show() positive_test_data = list(filter(lambda row : row[0]==2, hstack((targets_test[:,None], data_test)).toarray())) positive_probs = clf.predict_proba(list(map(lambda r : r[1:], positive_test_data))) marginal_positive_probs = list(map(lambda p : marginal(p), positive_probs)) n, bins, patches = plt.hist(marginal_positive_probs, 25, facecolor='green', alpha=0.75) plt.title('Marginal confidence histogram - Positive data') plt.ylabel('Count') plt.xlabel('Marginal confidence') plt.show() positive_test_data = list(filter(lambda row : row[0]==1, hstack((targets_test[:,None], data_test)).toarray())) positive_probs = clf.predict_proba(list(map(lambda r : r[1:], positive_test_data))) marginal_positive_probs = list(map(lambda p : marginal(p), positive_probs)) n, bins, patches = plt.hist(marginal_positive_probs, 25, facecolor='blue', alpha=0.75) 
plt.title('Marginal confidence histogram - Neutral data') plt.ylabel('Count') plt.xlabel('Marginal confidence') plt.show() negative_test_data = list(filter(lambda row : row[0]==0, hstack((targets_test[:,None], data_test)).toarray())) negative_probs = clf.predict_proba(list(map(lambda r : r[1:], negative_test_data))) marginal_negative_probs = list(map(lambda p : marginal(p), negative_probs)) n, bins, patches = plt.hist(marginal_negative_probs, 25, facecolor='red', alpha=0.75) plt.title('Marginal confidence histogram - Negative data') plt.ylabel('Count') plt.xlabel('Marginal confidence') plt.show() data_train.shape data_test_index.shape data_test.shape data_train_index.shape
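The `marginal` function defined above drives all of the confidence histograms: it scores a prediction by the gap between its two largest class probabilities, so values near zero flag tweets the classifier is torn about. A minimal standalone sketch of the same idea (NumPy only; the name `marginal_confidence` is illustrative, not from the notebook):

```python
import numpy as np

def marginal_confidence(probs):
    """Gap between the two largest class probabilities.

    Near 0: the classifier is torn between two classes.
    Near 1: the classifier is confident in its top class.
    """
    top2 = np.sort(probs)[::-1][:2]  # two largest probabilities
    return float(top2[0] - top2[1])

confident = np.array([0.05, 0.05, 0.90])
ambiguous = np.array([0.45, 0.44, 0.11])
print(marginal_confidence(confident))
print(marginal_confidence(ambiguous))
```

Sorting and histogramming these gaps, as done above per sentiment class, is a quick way to surface the examples worth a manual look.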
naive_bayes_algo.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import pandas as pd import numpy as np import json from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier from sklearn.neighbors import KNeighborsRegressor from sklearn.pipeline import Pipeline from sklearn.metrics import r2_score import descriptors.preprocessing as pp import descriptors.dft_featurisation as dft_ft import descriptors.rdkit_featurisation as rdkit_ft from analysis import analysis_train_set_size, random_split, stratified_split # - estimators = [('predictor', RandomForestRegressor())] pipe = Pipeline(estimators) metric = r2_score # # Getting all featurization df_dataset = pd.read_csv('data/rxnfp_featurization/rxn_dataset_2.csv') X_rxnfp = np.array([json.loads(x) for x in df_dataset.rxnfp]) substrate_rxnfp = np.array(df_dataset.Substrate) DOI_rxnfp = np.array(df_dataset.DOI) mechanisms_rxnfp = np.array(df_dataset["A-X type"]) origins_rxnfp = np.array(df_dataset.Origin) y_rxnfp = np.array(df_dataset.Yields) df_dft = pd.read_csv("data/NiCOlit.csv", sep = ',') df_dft = pp.preprocess(df_dft) indexes_kept_dft = np.array(df_dft.index) X_dft, y_dft, DOI_dft, mechanisms_dft, origins_dft, sub_dft, lig_dft = dft_ft.process_dataframe_dft(df_dft, data_path="data/utils/", origin=False) df_fp = pd.read_csv('data/NiCOlit.csv') df_fp = pp.preprocess(df_fp) X_fp, y_fp, DOI_fp, mechanisms_fp, origins_fp = rdkit_ft.process_dataframe(df_fp) # # Random split values, baseline_values, model_values, stratification_values, additional_stratification_values = random_split(X_fp, y_fp, origins_fp, mechanisms_fp, n_iterations=2) display_df = pd.DataFrame(zip(values, baseline_values, model_values, stratification_values, additional_stratification_values), columns = ['Yields', 'Baseline', 'Predicted Yields', 'Origin', 'Coupling 
Partner']) display_df.to_csv("results/random_split_fp_descriptors_test_size_0.2") # + # Training set size influence metric_values, baseline_values, sizes = analysis_train_set_size(X_fp[indexes_kept_dft, :], y_fp[indexes_kept_dft], DOI_fp[indexes_kept_dft], metric=metric, predictor=pipe, n_iterations_external=1, n_iterations_internal=1) metric_mean = np.mean(metric_values, axis=1) metric_lower = np.percentile(metric_values, 5, axis=1) metric_upper = np.percentile(metric_values, 95, axis=1) baseline_mean = np.mean(baseline_values, axis=1) baseline_lower = np.percentile(baseline_values, 5, axis=1) baseline_upper = np.percentile(baseline_values, 95, axis=1) display_df = pd.DataFrame(zip(metric_mean, metric_lower, metric_upper, baseline_mean, baseline_lower, baseline_upper, sizes), columns = ['Metric mean', 'Metric lower','Metric upper','Baseline mean', 'Baseline lower','Baseline upper', 'Sizes']) display_df.to_csv("results/training_size_influence_fp_descriptors") # - values, baseline_values, model_values, stratification_values, additional_stratification_values = random_split(X_dft, y_dft, origins_dft, mechanisms_dft, n_iterations=10) display_df = pd.DataFrame(zip(values, baseline_values, model_values, stratification_values, additional_stratification_values), columns = ['Yields', 'Baseline', 'Predicted Yields', 'Origin', 'Coupling Partner']) display_df.to_csv("results/random_split_dft_descriptors_test_size_0.2") # + # Training set size influence metric_values, baseline_values, sizes = analysis_train_set_size(X_dft, y_dft, DOI_dft, metric=metric, predictor=pipe, n_iterations_external=1, n_iterations_internal=1) metric_mean = np.mean(metric_values, axis=1) metric_lower = np.percentile(metric_values, 5, axis=1) metric_upper = np.percentile(metric_values, 95, axis=1) baseline_mean = np.mean(baseline_values, axis=1) baseline_lower = np.percentile(baseline_values, 5, axis=1) baseline_upper = np.percentile(baseline_values, 95, axis=1) display_df = pd.DataFrame(zip(metric_mean, 
metric_lower, metric_upper, baseline_mean, baseline_lower, baseline_upper, sizes), columns = ['Metric mean', 'Metric lower','Metric upper','Baseline mean', 'Baseline lower','Baseline upper', 'Sizes']) display_df.to_csv("results/training_size_influence_dft_descriptors") # + indices = np.where(origins_dft == "Scope")[0] values, baseline_values, model_values, stratification_values, additional_stratification_values = random_split(X_dft[indices, :], y_dft[indices], origins_dft[indices], mechanisms_dft[indices], n_iterations=10) display_df = pd.DataFrame(zip(values, baseline_values, model_values, stratification_values, additional_stratification_values), columns = ['Yields', 'Baseline', 'Predicted Yields', 'Origin', 'Coupling Partner']) display_df.to_csv("results/random_split_dft_descriptors_scope_test_size_0.2") indices = np.where(origins_dft == "Optimisation")[0] values, baseline_values, model_values, stratification_values, additional_stratification_values = random_split(X_dft[indices, :], y_dft[indices], origins_dft[indices], mechanisms_dft[indices], n_iterations=10) display_df = pd.DataFrame(zip(values, baseline_values, model_values, stratification_values, additional_stratification_values), columns = ['Yields', 'Baseline', 'Predicted Yields', 'Origin', 'Coupling Partner']) display_df.to_csv("results/random_split_dft_descriptors_optimisation_test_size_0.2") # - values, baseline_values, model_values, stratification_values, additional_stratification_values = random_split(X_rxnfp, y_rxnfp, origins_rxnfp, mechanisms_rxnfp, n_iterations=1) display_df = pd.DataFrame(zip(values, baseline_values, model_values, stratification_values, additional_stratification_values), columns = ['Yields', 'Baseline', 'Predicted Yields', 'Origin', 'Coupling Partner']) display_df.to_csv("results/random_split_rxnfp_descriptors_test_size_0.2") # + # Training set size influence metric_values, baseline_values, sizes = analysis_train_set_size(X_rxnfp[indexes_kept_dft, :], y_rxnfp[indexes_kept_dft], 
DOI_rxnfp[indexes_kept_dft], metric=metric, predictor=pipe, n_iterations_external=1, n_iterations_internal=1) metric_mean = np.mean(metric_values, axis=1) metric_lower = np.percentile(metric_values, 5, axis=1) metric_upper = np.percentile(metric_values, 95, axis=1) baseline_mean = np.mean(baseline_values, axis=1) baseline_lower = np.percentile(baseline_values, 5, axis=1) baseline_upper = np.percentile(baseline_values, 95, axis=1) display_df = pd.DataFrame(zip(metric_mean, metric_lower, metric_upper, baseline_mean, baseline_lower, baseline_upper, sizes), columns = ['Metric mean', 'Metric lower','Metric upper','Baseline mean', 'Baseline lower','Baseline upper', 'Sizes']) display_df.to_csv("results/training_size_influence_rxnfp_descriptors") # - # # Substrate split values, global_baseline_results, global_results, stratification_results, additional_stratification_results = stratified_split(X_fp, y_fp, list(df_fp["substrate"]), origins_fp , metric=metric, predictor=RandomForestRegressor(), test_size=0.2, n_iterations=1) display_df = pd.DataFrame(zip(stratification_results, additional_stratification_results, global_results, global_baseline_results, values), columns =['Substrate', 'Origin', 'Predicted Yields', 'Global baseline', 'Yields']) display_df.to_csv("results/substrate_split_fp_descriptors") values, global_baseline_results, global_results, stratification_results, additional_stratification_results = stratified_split(X_dft, 1 * y_dft>50, list(df_dft["substrate"]), origins_dft , metric=metric, predictor=RandomForestClassifier(), test_size=0.2, n_iterations=10) display_df = pd.DataFrame(zip(stratification_results, additional_stratification_results, global_results, global_baseline_results, values), columns =['Substrate', 'Origin', 'Predicted Yields', 'Global baseline', 'Yields']) display_df.to_csv("results/substrate_split_dft_descriptors_classification") values, global_baseline_results, global_results, stratification_results, additional_stratification_results = 
stratified_split(X_fp, y_fp, list(df_fp["substrate"]), origins_fp, metric=metric, predictor=KNeighborsRegressor(n_neighbors=1), test_size=0.2, n_iterations=2) display_df = pd.DataFrame(zip(stratification_results, additional_stratification_results, global_results, global_baseline_results, values), columns =['Substrate', 'Origin', 'Predicted Yields', 'Global baseline', 'Yields']) display_df.to_csv("results/substrate_split_fp_descriptors_KNN") values, global_baseline_results, global_results, stratification_results, additional_stratification_results = stratified_split(X_rxnfp, y_rxnfp, substrate_rxnfp, origins_rxnfp , metric=metric, predictor=RandomForestRegressor(), test_size=0.2, n_iterations=2) display_df = pd.DataFrame(zip(stratification_results, additional_stratification_results, global_results, global_baseline_results, values), columns =['Substrate', 'Origin', 'Predicted Yields', 'Global baseline', 'Yields']) display_df.to_csv("results/substrate_split_rxnfp_descriptors") # # DOI split values, global_baseline_results, global_results, stratification_results, additional_stratification_results = stratified_split(X_fp, y_fp, DOI_fp, origins_fp , metric=metric, predictor=RandomForestRegressor(), test_size=0.2, n_iterations=1) display_df = pd.DataFrame(zip(stratification_results, additional_stratification_results, global_results, global_baseline_results, values), columns =['Substrate', 'Origin', 'Predicted Yields', 'Global baseline', 'Yields']) display_df.to_csv("results/doi_split_fp_descriptors") values, global_baseline_results, global_results, stratification_results, additional_stratification_results = stratified_split(X_dft, y_dft, DOI_dft, origins_dft , metric=metric, predictor=RandomForestRegressor(), test_size=0.2, n_iterations=10) display_df = pd.DataFrame(zip(stratification_results, additional_stratification_results, global_results, global_baseline_results, values), columns =['Substrate', 'Origin', 'Predicted Yields', 'Global baseline', 'Yields']) 
display_df.to_csv("results/doi_split_dft_descriptors") values, global_baseline_results, global_results, stratification_results, additional_stratification_results = stratified_split(X_rxnfp, y_rxnfp, DOI_rxnfp, origins_rxnfp , metric=metric, predictor=RandomForestRegressor(), test_size=0.2, n_iterations=1) display_df = pd.DataFrame(zip(stratification_results, additional_stratification_results, global_results, global_baseline_results, values), columns =['Substrate', 'Origin', 'Predicted Yields', 'Global baseline', 'Yields']) display_df.to_csv("results/doi_split_rxnfp_descriptors") # # Coupling partner split values, global_baseline_results, global_results, stratification_results, additional_stratification_results = stratified_split(X_fp, y_fp, mechanisms_fp, origins_fp , metric=metric, predictor=RandomForestRegressor(), test_size=0.2, n_iterations=1) display_df = pd.DataFrame(zip(stratification_results, additional_stratification_results, global_results, global_baseline_results, values), columns =['Substrate', 'Origin', 'Predicted Yields', 'Global baseline', 'Yields']) display_df.to_csv("results/mechanisms_split_fp_descriptors") values, global_baseline_results, global_results, stratification_results, additional_stratification_results = stratified_split(X_dft, y_dft, mechanisms_dft, origins_dft , metric=metric, predictor=RandomForestRegressor(), test_size=0.2, n_iterations=10) display_df = pd.DataFrame(zip(stratification_results, additional_stratification_results, global_results, global_baseline_results, values), columns =['Substrate', 'Origin', 'Predicted Yields', 'Global baseline', 'Yields']) display_df.to_csv("results/mechanisms_split_dft_descriptors") values, global_baseline_results, global_results, stratification_results, additional_stratification_results = stratified_split(X_rxnfp, y_rxnfp, mechanisms_rxnfp, origins_rxnfp , metric=metric, predictor=RandomForestRegressor(), test_size=0.2, n_iterations=1) display_df = pd.DataFrame(zip(stratification_results, 
additional_stratification_results, global_results, global_baseline_results, values), columns =['Substrate', 'Origin', 'Predicted Yields', 'Global baseline', 'Yields']) display_df.to_csv("results/mechanisms_split_rxnfp_descriptors") # # Restricted chemical space: Suzuki indexes = np.where(mechanisms_fp=='B')[0] values, baseline_values, model_values, stratification_values, additional_stratification_values = random_split(X_fp[indexes, :], y_fp[indexes], origins_fp[indexes], mechanisms_fp[indexes], n_iterations=5) display_df = pd.DataFrame(zip(values, baseline_values, model_values, stratification_values, additional_stratification_values), columns = ['Yields', 'Baseline', 'Predicted Yields', 'Origin', 'Coupling Partner']) display_df.to_csv("results/random_split_fp_descriptors_test_size_0.2_mechanism_suzuki") indexes = np.where(mechanisms_dft=='B')[0] values, baseline_values, model_values, stratification_values, additional_stratification_values = random_split(X_dft[indexes, :], y_dft[indexes], origins_dft[indexes], mechanisms_dft[indexes], n_iterations=1) display_df = pd.DataFrame(zip(values, baseline_values, model_values, stratification_values, additional_stratification_values), columns = ['Yields', 'Baseline', 'Predicted Yields', 'Origin', 'Coupling Partner']) display_df.to_csv("results/random_split_dft_descriptors_test_size_0.2_mechanism_suzuki") values, baseline_values, model_values, stratification_values, additional_stratification_values = random_split(X_rxnfp[indexes, :], y_rxnfp[indexes], origins_rxnfp[indexes], mechanisms_rxnfp[indexes], n_iterations=1) display_df = pd.DataFrame(zip(values, baseline_values, model_values, stratification_values, additional_stratification_values), columns = ['Yields', 'Baseline', 'Predicted Yields', 'Origin', 'Coupling Partner']) display_df.to_csv("results/random_split_rxnfp_descriptors_test_size_0.2_mechanism_suzuki") # TODO: clean r2 = [] length = [] for mecha in np.unique(mechanisms_dft): indexes = 
np.where(mechanisms_dft==mecha)[0] values, baseline_values, model_values, stratification_values, additional_stratification_values = random_split(X_dft[indexes, :], y_dft[indexes], origins_dft[indexes], mechanisms_dft[indexes], n_iterations=10) print(mecha) print(len(indexes)) print(round(r2_score(values, model_values), 3)) r2.append(round(r2_score(values, model_values), 3)) length.append(len(indexes)) values, baseline_values, model_values, stratification_values, additional_stratification_values = random_split(X_dft, y_dft, origins_dft, mechanisms_dft, n_iterations=50) for mecha in np.unique(mechanisms_dft): indexes = np.where(np.array(additional_stratification_values)==mecha)[0] print(mecha) print(round(r2_score(np.array(values)[indexes], np.array(model_values)[indexes]),3)) for ax_t in df_dft["A-X type"].unique(): print(ax_t) print(len(df_dft[df_dft["A-X type"]==ax_t]["DOI"].unique()))
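The per-mechanism loop above leans on scikit-learn's `r2_score`. Since R^2 is just 1 - SS_res / SS_tot, a hand-rolled version is a useful sanity check when slicing predictions by group (a sketch assuming the standard definition, not code from the notebook):

```python
import numpy as np

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

y_true = [10.0, 50.0, 90.0]
print(r2(y_true, y_true))      # perfect fit -> 1.0
print(r2(y_true, [50.0] * 3))  # predicting the mean -> 0.0
```

A negative value simply means the model does worse than always predicting the mean yield, which is worth watching for on the smaller mechanism subsets.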
generate_model_results.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import matplotlib.pyplot as plt import os import glob # + dname = r"C:\Users\Edwin\Desktop\101020\Edwin\Rescan\reformatted scans" # dir based on script abs path os.chdir(dname) data_paths = glob.glob('./*.xlsx') names = ['A1', 'A2', 'B1', 'B2', 'C1', 'C2', 'D1','D2','E1','F1'] # make sure next time the well unique ID is actually used, to avoid manually having to create this list # - wavelength_range = [400,900] fig, ax = plt.subplots() for i, path in enumerate(data_paths[6:]): final_path = os.path.join(dname, path) df = pd.read_excel(final_path, sheet_name='Sheet2') well = path[4:6] x = df['X4'] lower_index = np.where(x == wavelength_range[0])[0][0] # add logic if only one upper or lower limit provided auto finds missing limit upper_index = np.where(x == wavelength_range[1])[0][0] x = df['X4'][lower_index:upper_index] y = (pd.to_numeric(df['Y5'], errors='coerce').fillna(0))[lower_index:upper_index] ax.plot(x,y, label = well) ax.legend() plt.xlabel('Wavelength nm') plt.ylabel('Absorbance') if len(wavelength_range) == 2: lower_index = np.where(wavelengths == wavelength_range[0])[0][0] # add logic if only one upper or lower limit provided auto finds missing limit upper_index = np.where(wavelengths == wavelength_range[1])[0][0] dataframe = dataframe fig, ax = plt.subplots() for i, (key, row) in enumerate(dataframe.iterrows()): if key == 'Wavelength': x = row[lower_index:upper_index] else: y = row[lower_index:upper_index] ax.plot(x,y,label = labels[i]) # ax.annotate('hello', xy=(1.05, 0.85), xycoords='axes fraction') ax.legend() x = df['X4'] y = df['Y5'] plt.plot(x,y) series = (pd.to_numeric(series , errors='coerce').fillna(0))
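The wavelength-slicing cells above carry the comment "add logic if only one upper or lower limit provided auto finds missing limit". One way that could be sketched, assuming the wavelengths form a sorted 1-D numeric array (`wavelength_window` is a hypothetical helper, not part of the notebook):

```python
import numpy as np

def wavelength_window(wavelengths, lower=None, upper=None):
    """Boolean mask selecting wavelengths in [lower, upper].

    If either limit is omitted, it falls back to the data's own
    min/max, so a one-sided range still works.
    """
    wavelengths = np.asarray(wavelengths)
    lo = wavelengths.min() if lower is None else lower
    hi = wavelengths.max() if upper is None else upper
    return (wavelengths >= lo) & (wavelengths <= hi)

wl = np.arange(350, 901, 50)
mask = wavelength_window(wl, lower=400, upper=900)
print(wl[mask])                              # the 400-900 nm window
print(wl[wavelength_window(wl, upper=500)])  # one-sided: everything up to 500
```

A mask also sidesteps the `np.where(x == limit)[0][0]` lookup above, which raises an `IndexError` whenever the requested limit is not exactly present in the data.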
PlanPrepareProcess_OT2/Process/UVVis/UVVis Single Spec.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 2. Python Control Constructs # Evaluate the active cells in this notebook (use `shift+Enter` or the "Play" icon in the notebook menu bar), while reading, to see the outcome of the various code snippets. At the end of the Notebook there is a Task section, with an Absalon turn-in. # ## For loops # For loops have this syntax -- note the colon and the (automatic) indentation: n=10 for i in range(n): print(i,i**0.2) # range() is a built-in _generator_; a _thing_ that can generate a range of numbers (but that _isn't_ an actual range of numbers, just a generator for it). Hence you can ask for a billion numbers to be generated in a for loop, without actually using the memory that would be needed to hold a billion numbers. range(2,n) # By itself, `range()` is just an expression (a "generator"). But the generator can also produce an actual list, or an array: list(range(2,n)) from numpy import array array(range(3,n,2)) # The `arange()` generator spits out floating point numbers, for example (note that -- as with `range()` -- the 2nd range limit is not included; this is consistent with the [a,b) math notation, and also gives in this example (2.0 - 1.0) / 0.1 = 10 elements) from numpy import arange array(arange(1.0,2.0,0.1)) # In fact, any such "iterator" -- a generator, a list, a tuple, etc -- can stand after the __in__ in a __for__ loop. # ## While loops # A __while__ loop keeps running until the condition after the `while` is false: n=1 while n < 1100: print(n) n *= 2 # ## Breaking out of loops # Sometimes one wants to break out of a loop before it is finished, based on some test. This is done with a __break__ instruction. Here's an example, giving the first factorial (N!) 
larger than a hundred thousand a=1 a_max=10**5 for i in range(2,109): a=a*i if a>a_max: break print(i,a) # Surprisingly few loops, right? # ## Conditional constructs # The syntax of __if statements__ in Python is condition=[False,False,False,False,True] if condition[0]: print('code 0') elif condition[1]: print('code 1') elif condition[2]: print('code 2') else: print('code 3') # As always in Python, conditional blocks begin after a colon at the end of a line, and are set aside by indentation only -- no _begin/end_ pairs or curly brackets are used. _It is therefore extremely important to keep track of the indentation of your code, because results depend on it._ # A condition may also be used in direct assignments: a=12 if condition[1] else 11 a # ## Comprehension # A _comprehension_ is a compact way to make a new _list_, which is a function of or selection from an existing _list_ (or _iterator_). It looks like this: a = [1,3,4,8,5,3,2,4,6,3] b = [x**2 for x in a] print(b) # That example produces a one-to-one mapping of a list to a new one of the same length. A _selection_ is made by adding an `if` expression: c = [x for x in a if 2*(x//2)==x] print(c) # ### __Task:__ # The task is to make a for-loop construct that produces a list of all primes smaller than a thousand, using for example code like this: # ``` # list=[] # for i in range(2,1000): # ok=True # ... # if ok: # list.append(i) # i=i+1 # print(list) # ``` # Hints: # 1. A number is a prime if it is not divisible by any prime smaller than itself # 2. In Python 3, the expression `11/2` gives `5.5`, while `11//2` (integer arithmetic) gives `5` # ### __Absalon turn in:__ # 1. Upload the notebook # 2. Paste the printout of the list into the Absalon text field # ## Next # When you have reached this point you are ready to open the __1d Python Standard Libraries__ notebook, to learn about libraries. 
__NOTE__: Make sure to open the copy of this notebook in your own file space, not the original (read-only) copy in the course file space.
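As a footnote to the generator discussion earlier in this notebook: the claim that `range()` can cover a billion numbers without holding them in memory can be checked directly (exact byte counts are CPython implementation details and may vary):

```python
import sys

# A range over a billion numbers is a tiny object...
big_range = range(10**9)
print(sys.getsizeof(big_range))  # a few dozen bytes

# ...while a materialised list of just a million numbers is already large
big_list = list(range(10**6))
print(sys.getsizeof(big_list))   # several megabytes

# The generator still behaves like the full sequence:
print(big_range[123456789])      # indexing works without materialising
print(len(big_range))
```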
docs/overview/IAC/HandsOn/0-Python-Intro/2-Python-Control.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Using Leaflet's Smooth Factor Option # # [Polyline](http://leafletjs.com/reference.html#polyline) objects in Leaflet are smoothed by default. This removes points from the line, putting less load on the browser when drawing. The level of smoothing [can be specified](http://leafletjs.com/reference.html#polyline-smoothfactor) by passing `smoothFactor` as an option when creating any Polyline object. # # In folium, the level of smoothing can be determined by passing `smooth_factor` as an argument when initialising GeoJson, TopoJson and Choropleth objects. There are no upper or lower bounds to the smoothing level; Leaflet's default is 1. # + import json import folium import requests m = folium.Map(location=[-59.1759, -11.6016], tiles="cartodbpositron", zoom_start=2) url = ( "https://raw.githubusercontent.com/python-visualization/folium/master/examples/data" ) fname = f"{url}/antarctic_ice_shelf_topo.json" topo = json.loads(requests.get(fname).text) folium.TopoJson( data=topo, object_path="objects.antarctic_ice_shelf", name="default_smoothing", smooth_factor=1, style_function=lambda x: {"color": "#004c00", "opacity": "0.7"}, ).add_to(m) folium.TopoJson( data=topo, object_path="objects.antarctic_ice_shelf", name="heavier_smoothing", smooth_factor=10, style_function=lambda x: {"color": "#1d3060", "opacity": "0.7"}, ).add_to(m) folium.LayerControl().add_to(m) m
examples/SmoothFactor.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np raw_data = pd.read_csv('city.csv', sep=';', encoding='cp1251') raw_data.head() data = raw_data.iloc[:,3].values dic = {} for val in data: val = val.split('(')[0].strip() val = val[0].upper() + val[1:] if 64 < ord(val[0]) < 91 or 'о.' in val: continue dic[val[0]] = dic.get(val[0], []) dic[val[0]].append(val) # + from random import randint alfa = 'АБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЭЮЯ' def game(l=5): dic_st = {key:dic[key][:] for key in dic} ch = alfa[randint(0, len(alfa)-1)] words = [] for i in range(l): arr = dic_st[ch] inx = randint(0, len(arr)-1) word = arr[inx] words.append(word) del arr[inx] i = -1 while word[i].upper() not in alfa: i -= 1 ch = word[i].upper() return f'<s>Играв в слова: {", ".join(words)}</s>\n' # + with open('words_train.txt', 'bw') as f: for _ in range(10000): f.write(game(randint(10, 15)).encode('UTF-8')) with open('words_valid.txt', 'bw') as f: for _ in range(1000): f.write(game(randint(10, 15)).encode('UTF-8')) # -
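The inner `while` of `game` above walks backwards from the end of a word until it finds a letter that appears in `alfa` (soft and hard signs, among others, are excluded from the alphabet, since no city name starts with them). That step in isolation, with an illustrative helper name:

```python
ALFA = 'АБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЭЮЯ'

def next_letter(word, alphabet=ALFA):
    """Last letter of `word` that is playable, scanning from the end."""
    i = -1
    while word[i].upper() not in alphabet:
        i -= 1
    return word[i].upper()

print(next_letter('Пермь'))   # trailing soft sign is skipped
print(next_letter('Москва'))
```

Note the helper would loop past the start of a word made entirely of excluded letters; the notebook's data (capitalised city names) makes that case unreachable in practice.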
cities/prepare_words_data.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="K4_jgPwE1k_I" # # Abstract # + [markdown] id="4ieuNAAt1k_K" # In this notebook we will study the famed Iris data set. This data set has been studied for many years by people doing machine learning. In fact the dataset was first treated by <NAME>. He is a rather handsome fellow, and we should be particularly impressed with his goatee! # # <img src="https://upload.wikimedia.org/wikipedia/commons/e/e0/Hermann_Emil_Fischer_c1895.jpg # " style="width:300px"> # # We chose this dataset because it is well studied but also well known to be non-trivial to get good answers from. # + [markdown] id="zaWoMJcS1k_K" # It is so well studied that there are even YouTube videos about it. We can directly embed such a video in our notebook to liven up our presentation. # + colab={"base_uri": "https://localhost:8080/", "height": 322} id="2o-HYBH91k_L" outputId="81e64f15-0b42-407a-cf74-190540fab2fa" from IPython.lib.display import YouTubeVideo YouTubeVideo('hd1W4CyPX58') # + [markdown] id="0Opdinkl1k_M" # In fact all of the datasets that we have for this workshop come with their own descriptions. Note this data set has been studied since 1936 (at least), and there are still open questions. The data has many interesting features, such as 3 different classes, of varying difficulty and four-dimensional features. # + colab={"base_uri": "https://localhost:8080/"} id="Sy2XKsIR1xuC" outputId="d0fae882-2102-4e31-a579-14dad2e54842" # ! wget https://raw.githubusercontent.com/rcpaffenroth/DS595-Machine-Learning-for-Engineering-and-Science-Applications/main/examples/basic_python_demos/iris.txt # ! 
wget https://raw.githubusercontent.com/rcpaffenroth/DS595-Machine-Learning-for-Engineering-and-Science-Applications/main/examples/basic_python_demos/iris.csv # + colab={"base_uri": "https://localhost:8080/"} id="LKd2FUUv1k_M" outputId="c492dd8c-6e35-42d1-adf1-c6ade946b4d3" print(open('iris.txt', 'r').read()) # + [markdown] id="UlwduEW11k_N" # <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/4/41/Iris_versicolor_3.jpg/640px-Iris_versicolor_3.jpg" style="width:300px"> # + [markdown] id="gOStJ-f61k_N" # # Loading in the libraries. # + id="mKVk_8gV1k_N" import numpy as np import pandas as pa import sklearn import matplotlib.pylab as py from mpl_toolkits.mplot3d import Axes3D from matplotlib.colors import ListedColormap # + id="gKeiqsvF1k_O" # %matplotlib inline # + [markdown] id="Dck_ra6J1k_O" # # Introduction # + [markdown] id="pASSBLKC1k_O" # The first thing that Randy does with data is to take a look at it, so we thought that we would do the same thing. So, we begin our analysis by reading in the data. This was not as trivial as we had originally thought since we kept running into errors. The problem was that: # # <b> we needed to make sure the data was in the correct directory</b> # # The easiest thing to do is have the data in the same directory as the Jupyter notebook. # + colab={"base_uri": "https://localhost:8080/"} id="kSGn_qpt1k_O" outputId="3f011b48-0ab5-410c-c821-4e766ffec9f2" # ls # + id="e9owf9xQ1k_O" data = pa.read_csv('iris.csv') # + [markdown] id="B2RSY16i1k_P" # The size of our data was # + colab={"base_uri": "https://localhost:8080/"} id="HJiIOAoA1k_P" outputId="e979ef20-9f74-4725-9957-49923600b10b" data.shape # + [markdown] id="UX4-Zbct1k_P" # Which means we have 150 total measurements and each measurement has 5 values. 
For example, our first measurement is # + colab={"base_uri": "https://localhost:8080/"} id="ermjdkfK1k_P" outputId="ed61c4c6-9c5c-461c-dcf7-deeb05fe484c" data.iloc[0,:] # + [markdown] id="C1B_1OEA1k_P" # Observe that we want to predict the flower type from the predictors # # 1. sepal length # 2. sepal width # 3. petal length # 4. petal width # # <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/4/41/Iris_versicolor_3.jpg/640px-Iris_versicolor_3.jpg" style="width:300px"> # + [markdown] id="qeuYXaah1k_Q" # # Looking at the data # + id="UqvKCOiJ1k_Q" X = np.array(data.iloc[:, :-1]) y = np.array(data.iloc[:, -1]) # + colab={"base_uri": "https://localhost:8080/", "height": 286} id="qOw0iIkX1k_Q" outputId="170aa9ea-b4ca-4d51-d870-3c88c23c45ad" py.figure() py.plot(X[y==0,0],X[y==0,1],'r.') py.plot(X[y==1,0],X[y==1,1],'g.') py.plot(X[y==2,0],X[y==2,1],'b.') # + colab={"base_uri": "https://localhost:8080/", "height": 283} id="WePXgXfZ1k_Q" outputId="7d7cab37-b201-4688-9d37-4744d935311e" py.figure() py.plot(X[y==0,2],X[y==0,3],'r.') py.plot(X[y==1,2],X[y==1,3],'g.') py.plot(X[y==2,2],X[y==2,3],'b.') # + colab={"base_uri": "https://localhost:8080/", "height": 499} id="i2dleFGG1k_R" outputId="17f69b0c-d04f-4ae2-89f0-59a389cfc1d7" pa.plotting.scatter_matrix(data, alpha=0.2, figsize=(8,8)); # + colab={"base_uri": "https://localhost:8080/", "height": 481} id="HiA7dbcD1k_R" outputId="84038f91-4c54-4dd9-b883-c8d42a17f8a9" fig = py.figure(figsize=(8, 6)) ax = Axes3D(fig, elev=-150, azim=110) ax.scatter(X[y==0, 0], X[y==0, 1], X[y==0, 2], c='r') ax.scatter(X[y==1, 0], X[y==1, 1], X[y==1, 2], c='g') ax.scatter(X[y==2, 0], X[y==2, 1], X[y==2, 2], c='b') # + [markdown] id="h4MzCoAu1k_R" # # Data processing # + [markdown] id="xTAu5Ei41k_R" # ## Asking the right question # + [markdown] id="8jYqhmUi1k_S" # One of the most important aspects of data analysis is to ask the right question. 
So, we thought that we would preprocess our data to ask two different questions, one we think is easy, and one we think is hard. The easy question we propose is to # just distinguish class 0 from classes 1 and 2. Based upon our visualizations, we assume that this problem will be very easy to solve using the petal lengths and petal widths. # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="kJh_6Dtn1k_S" outputId="7524ff99-d8de-44f4-e17f-868da1f8338c" py.figure() py.plot(X[y==0,2],X[y==0,3],'r.') py.plot(X[y!=0,2],X[y!=0,3],'g.') # + [markdown] id="ebzCOYrL1k_S" # We also plan to test the sepal widths and the sepal lengths, even though that problem will likely be harder. # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="ZFGXb3GI1k_S" outputId="d93fc43b-821d-4fcc-b3b5-f2f5465967f0" py.figure() py.plot(X[y==0,0],X[y==0,1],'r.') py.plot(X[y!=0,0],X[y!=0,1],'g.') # + [markdown] id="OCr-IyiJ1k_S" # Finally we will test the full data set (all four columns) against all three classes. However, plotting this data is rather difficult, so we needed to try something more advanced. Randy suggested Principal Component Analysis, so we gave that a try. # + [markdown] id="fHg3C9Cv1k_S" # ## Principal Component Analysis (PCA) # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="TzDk3te71k_T" outputId="606487fa-ff18-4ac0-ab40-763613c0fb66" from sklearn.decomposition import PCA py.figure() XPCA = PCA(n_components=3).fit_transform(X) py.plot(XPCA[y==0,0],XPCA[y==0,1],'r.') py.plot(XPCA[y==1,0],XPCA[y==1,1],'g.') py.plot(XPCA[y==2,0],XPCA[y==2,1],'b.') # + [markdown] id="mXkXsQBX1k_T" # Just as one can project from a high dimensional space to a two-dimensional space, one can also do the same thing to project to a three-dimensional space. # + [markdown] id="AohYII0G1k_T" # # Our first classification tool, K-Nearest Neighbors. 
# + [markdown] id="eHnXP9KY1k_T" # http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html # # n_neighbors : int, optional (default = 5) # Number of neighbors to use by default for k_neighbors queries. # # weights : str or callable, optional (default = ‘uniform’) # weight function used in prediction. Possible values: # ‘uniform’ : uniform weights. All points in each neighborhood are weighted equally. # ‘distance’ : weight points by the inverse of their distance. in this case, closer neighbors of a query point will have a greater influence than neighbors which are further away. # [callable] : a user-defined function which accepts an array of distances, and returns an array of the same shape containing the weights. # # metric : string or DistanceMetric object (default = ‘minkowski’) # the distance metric to use for the tree. The default metric is minkowski, and with p=2 is equivalent to the standard Euclidean metric. See the documentation of the DistanceMetric class for a list of available metrics. # # p : integer, optional (default = 2) # Power parameter for the Minkowski metric. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used. # # metric_params : dict, optional (default = None) # Additional keyword arguments for the metric function. # + id="EuM-3w3M1k_U" # # Import the K-NN solver from sklearn import neighbors clf = neighbors.KNeighborsClassifier(n_neighbors=3) # + [markdown] id="kufgBKNY1k_U" # http://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html # # C : float, optional (default=1.0) # Penalty parameter C of the error term. # # loss : string, ‘hinge’ or ‘squared_hinge’ (default=’squared_hinge’) # Specifies the loss function. ‘hinge’ is the standard SVM loss (used e.g. by the SVC class) while ‘squared_hinge’ is the square of the hinge loss. 
# # penalty : string, ‘l1’ or ‘l2’ (default=’l2’) # Specifies the norm used in the penalization. The ‘l2’ penalty is the standard used in SVC. The ‘l1’ leads to coef_ vectors that are sparse. # # class_weight : {dict, ‘balanced’}, optional # Set the parameter C of class i to class_weight[i]*C for SVC. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)) # + id="pQ7BRkqK1k_V" # # Load in a classifier #from sklearn import svm #clf = svm.LinearSVC() # + [markdown] id="5Gwc28JO1k_V" # http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier # # criterion : string, optional (default=”gini”) # The function to measure the quality of a split. Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain. # # max_depth : int or None, optional (default=None) # The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples. # # min_samples_split : int, float, optional (default=2) # The minimum number of samples required to split an internal node: # If int, then consider min_samples_split as the minimum number. # If float, then min_samples_split is a percentage and ceil(min_samples_split * n_samples) are the minimum number of samples for each split. # # min_samples_leaf : int, float, optional (default=1) # The minimum number of samples required to be at a leaf node: # If int, then consider min_samples_leaf as the minimum number. # If float, then min_samples_leaf is a percentage and ceil(min_samples_leaf * n_samples) are the minimum number of samples for each node. # # max_leaf_nodes : int or None, optional (default=None) # Grow a tree with max_leaf_nodes in best-first fashion. 
Best nodes are defined as relative reduction in impurity. If None then unlimited number of leaf nodes. # # class_weight : dict, list of dicts, “balanced” or None, optional (default=None) # Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one. For multi-output problems, a list of dicts can be provided in the same order as the columns of y. # The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)) # For multi-output, the weights of each column of y will be multiplied. # Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified. # + id="wNoProHi1k_V" ## Decision tree classifier #from sklearn import tree #clf = tree.DecisionTreeClassifier() # + [markdown] id="X7rOrZ8I1k_V" # n_estimators : integer, optional (default=10) # The number of trees in the forest. # + id="iX3vTog_1k_V" # # Random forest classifier #from sklearn import ensemble #clf = ensemble.RandomForestClassifier() # + [markdown] id="skYwzWU-1k_V" # http://scikit-learn.org/stable/modules/generated/sklearn.discriminant_analysis.LinearDiscriminantAnalysis.html # # n_components : int, optional # Number of components (< n_classes - 1) for dimensionality reduction. # + id="axpCiK-41k_W" # # Linear Discriminant Analysis #from sklearn import discriminant_analysis #clf = discriminant_analysis.LinearDiscriminantAnalysis() # + id="KcJAfxCj1k_W" # This is the one complicated bit of code in the whole demo. There is no need to modify it, but please feel free if you want! def runTest(clf,X,y,trainingPercent=0.66,plotTest=True): # A little cheat to make the pictures consistent np.random.seed(1234) # If there is one thing that I want to harp on, it is the difference # between testing and training errors!
So, here we create a training # set on which we compute the parameters of our algorithm, and a # testing set for seeing how well we generalize (and work on real # world problems). perm = np.random.permutation(len(y)) n = X.shape[0] trainSize = int(trainingPercent*n) Xtrain = X[perm[:trainSize],0:2] Xtest = X[perm[trainSize:],0:2] yHat = y yHattrain = yHat[perm[:trainSize]] yHattest = yHat[perm[trainSize:]] # Run the calculation! clf.fit(Xtrain, yHattrain) # step size in the mesh for plotting the decision boundary. h = .02 # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, x_max]x[y_min, y_max]. x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) py.figure(figsize=(8, 6)) cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF']) cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF']) py.pcolormesh(xx, yy, Z, cmap=cmap_light, shading='auto') # Plot also the training points py.scatter(Xtrain[:, 0], Xtrain[:, 1], c=yHattrain, cmap=cmap_bold,marker='o') if plotTest: py.scatter(Xtest[:, 0], Xtest[:, 1], c=yHattest, cmap=cmap_bold,marker='+') py.xlim(xx.min(), xx.max()) py.ylim(yy.min(), yy.max()) # py.show() # Print out some metrics print('training score',clf.score(Xtrain,yHattrain)) print('testing score',clf.score(Xtest,yHattest)) # + [markdown] id="JSM3_ora1k_W" # The problem we thought would be easy was easy! # + colab={"base_uri": "https://localhost:8080/", "height": 411} id="ZKXolSgy1k_W" outputId="1bf6dd42-8810-464f-d17b-747f08403a21" yEasy = y.copy() yEasy[y==2] = 1 runTest(clf,X[:,[2,3]],yEasy) # + [markdown] id="_3K4xJSF1k_W" # The problem we thought would be a bit harder was still pretty easy.
# + colab={"base_uri": "https://localhost:8080/", "height": 411} id="DGjUbYLA1k_W" outputId="e39802a3-cd4f-42c8-fd97-e9af80196f32" yEasy = y.copy() yEasy[y==2] = 1 runTest(clf,X[:,[0,1]],yEasy) # + [markdown] id="qcVK2UbK1k_X" # ## Testing the hypothesis that the problem we thought was hard actually is hard # + [markdown] id="_iW-Ez8R1k_X" # The hard problem is hard, and we pretty clearly overfit! # + colab={"base_uri": "https://localhost:8080/", "height": 411} id="5TzH7U0h1k_X" outputId="b9e3df52-24f5-4dee-bffa-1cf47d0ae86d" runTest(clf,X[:,[0,1]],y,plotTest=False) # + [markdown] id="_OcgzRnq1k_X" # However, Randy won't be mad since we figured out we were overfitting! # + [markdown] id="2PiAAFkg1k_X" # ## A better way of analyzing the data # + [markdown] id="4NQR7pnL1k_X" # Thinking about what must have been going on, we decided to try a different projection of the data. # + colab={"base_uri": "https://localhost:8080/", "height": 411} id="xVM4CyTp1k_X" outputId="79f239d2-a75d-4ccb-8738-a0dc153019ca" runTest(clf,X[:,[2,3]],y) # + [markdown] id="srDI2YGm1k_Y" # This is much better! The training and testing errors are much closer. # + [markdown] id="fRC_8a6m1k_Y" # ## More advanced techniques and other explorations # + [markdown] id="ccbGB1Yx1k_Y" # Getting just a bit fancier, we actually do PCA!! # + colab={"base_uri": "https://localhost:8080/", "height": 411} id="ZPoYo8Eb1k_Y" outputId="7602ea5c-b38c-46fd-b63e-5f0d96cb406f" runTest(clf,XPCA[:,[0,1]],y) # + [markdown] id="rEOM6LRr1k_Y" # What happens if we reduce the training data by a lot? # + colab={"base_uri": "https://localhost:8080/", "height": 411} id="6Erld49o1k_Y" outputId="2c0c12a0-b77d-4ff1-905c-c8148239ec50" runTest(clf,XPCA[:,[0,1]],y, trainingPercent = 0.2) # + [markdown] id="fO-0uTgU1k_Y" # Of course, that makes the results much worse! # + id="I1bgYjkb1k_Y"
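As an aside, the manual `np.random.permutation` bookkeeping inside `runTest` could also be written with scikit-learn's `train_test_split`. A minimal sketch on toy data standing in for the iris features (this helper is not used in the demo above; the toy shapes and names are illustrative):

```python
import numpy as np
from sklearn import neighbors
from sklearn.model_selection import train_test_split

# Toy data standing in for the iris features and labels
rng = np.random.RandomState(1234)
X = rng.rand(150, 2)
y = rng.randint(0, 3, size=150)

# One call replaces the manual permutation/slicing bookkeeping in runTest
Xtrain, Xtest, ytrain, ytest = train_test_split(
    X, y, train_size=0.66, random_state=1234)

clf = neighbors.KNeighborsClassifier(n_neighbors=3)
clf.fit(Xtrain, ytrain)
print('training score', clf.score(Xtrain, ytrain))
print('testing score', clf.score(Xtest, ytest))
```

Either way, the key point is the same: fit on the training split, and judge generalization only on the held-out test split.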
examples/basic_python_demos/MachineLearning_different_methods.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Deep learning the collisional cross sections of the peptide universe from a million experimental values # <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME> # Pre-print: https://doi.org/10.1101/2020.05.19.102285 # Publication: pending # revised 09/2020 # + import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import matplotlib.colors from scipy import optimize from Bio.SeqUtils.ProtParam import ProteinAnalysis # - aminoacids = 'A R N D C Q E G H I L K M F P S T W Y V'.split() # + # amino acid bulkiness # <NAME>., <NAME>., <NAME>. Biol. 21:170-201(1968). aa_bulkiness = { "A": 11.500, "R": 14.280, "N": 12.820, "D": 11.680, "C": 13.460, "Q": 14.450, "E": 13.570, "G": 3.400, "H": 13.690, "I": 21.400, "L": 21.400, "K": 15.710, "M": 16.250, "F": 19.800, "P": 17.430, "S": 9.470, "T": 15.770, "W": 21.670, "Y": 18.030, "V": 21.570 } def bulkiness(sequence): total_bulk = sum(aa_bulkiness[aa] for aa in sequence) return total_bulk / len(sequence) # + cmap = plt.get_cmap("RdYlBu") colors = cmap(np.linspace(0, 1, num=20)) charge_col = {'2': colors[0], '3': colors[6], '4': colors[18]} cmap2 = plt.get_cmap("YlOrRd") cmap3 = plt.get_cmap("YlOrRd_r") # - evidences = pd.read_csv('output/evidence_aligned.csv') evidences.head() len(evidences) # + evidences['lastAA'] = evidences['Sequence'].str[-1:] ## calculate physicochemical properties evidences['gravy'] = [ProteinAnalysis(seq).gravy() for seq in evidences['Sequence']] evidences['bulkiness'] = [bulkiness(seq) for seq in evidences['Sequence']] # Amino acids favoring secondary structures (Levitt, M. 
Biochemistry 17, 4277–4285 (1978)) evidences['helix_fraction'] = [(seq.count('A') + seq.count('L') + seq.count('M') + seq.count('H') + seq.count('Q') + seq.count('E'))/len(seq) for seq in evidences['Sequence']] evidences['sheet_fraction'] = [(seq.count('V') + seq.count('I') + seq.count('F') + seq.count('T') + seq.count('Y'))/len(seq) for seq in evidences['Sequence']] evidences['turn_fraction'] = [(seq.count('G') + seq.count('S') + seq.count('D') + seq.count('N') + seq.count('P'))/len(seq) for seq in evidences['Sequence']] # - evidences_trp = evidences.loc[evidences['lastAA'].str.contains('K|R')] len(evidences_trp) # + evidences_trp_H = evidences_trp.loc[evidences_trp['Sequence'].str.count('H') > 0] positions = [] for sequence in evidences_trp_H['Sequence']: pos = np.array([pos for pos, char in enumerate(sequence) if char == 'H']) vector = pos - np.median(range(len(sequence))) relpos = sum(vector) / len(sequence) positions.append(relpos) evidences_trp_H['H_pos'] = positions len(evidences_trp_H) # + # Calculate trend line functions CCS_fit_charge2 = evidences[evidences['Charge'] == 2] CCS_fit_charge3 = evidences[evidences['Charge'] == 3] CCS_fit_charge4 = evidences[evidences['Charge'] == 4] def trendline_func(x, a, b): return a * np.power(x, b) params_charge2, params_covariance_charge2 = optimize.curve_fit( trendline_func, CCS_fit_charge2['m/z'], CCS_fit_charge2['CCS']) params_charge3, params_covariance_charge3 = optimize.curve_fit( trendline_func, CCS_fit_charge3['m/z'], CCS_fit_charge3['CCS']) params_charge4, params_covariance_charge4 = optimize.curve_fit( trendline_func, CCS_fit_charge4['m/z'], CCS_fit_charge4['CCS']) print('2+') print(params_charge2, params_covariance_charge2) print('---') print('3+') print(params_charge3, params_covariance_charge3) print('---') print('4+') print(params_charge4, params_covariance_charge4) # + fig, axs = plt.subplots(1,3, figsize=(12, 4)) # panel a im1 = axs[0].scatter(x = evidences['m/z'], y = evidences['CCS'], c = 
evidences['gravy'], alpha = 0.8, s = 0.8, linewidth=0, #vmin = -1, vmax = 1, cmap = cmap); axs[0].plot(np.arange(300,1800,1), trendline_func( np.arange(300,1800,1), params_charge2[0], params_charge2[1]), color = "black", ls = 'dashed', lw = .5) axs[0].plot(np.arange(300,1800,1), trendline_func( np.arange(300,1800,1), params_charge3[0], params_charge3[1]), color = "black", ls = 'dashed', lw = .5) axs[0].plot(np.arange(300,1800,1), trendline_func( np.arange(300,1800,1), params_charge4[0], params_charge4[1]), color = "black", ls = 'dashed', lw = .5) axs[0].set_ylabel('CCS ($\AA^2$)') axs[0].set_xlabel('$\it{m/z}$') axs[0].text(-0.2, 1.05, "a", transform=axs[0].transAxes, fontsize=16, fontweight='bold', va='top', ha='right') cb = fig.colorbar(im1, ax = axs[0]) cb.set_label('GRAVY score') # panel b im2 = axs[1].scatter(x = evidences_trp['m/z'], y = evidences_trp['CCS'], c = (evidences_trp['Sequence'].str.count('P') / evidences_trp['Sequence'].str.len() * 100), alpha = 0.5, s = 0.5, linewidth=0, vmin = 0, vmax = 15, cmap = cmap3) axs[1].plot(np.arange(300,1800,1), trendline_func( np.arange(300,1800,1), params_charge2[0], params_charge2[1]), color = "black", ls = 'dashed', lw = .5) axs[1].plot(np.arange(300,1800,1), trendline_func( np.arange(300,1800,1), params_charge3[0], params_charge3[1]), color = "black", ls = 'dashed', lw = .5) axs[1].plot(np.arange(300,1800,1), trendline_func( np.arange(300,1800,1), params_charge4[0], params_charge4[1]), color = "black", ls = 'dashed', lw = .5) axs[1].set_ylabel('CCS ($\AA^2$)') axs[1].set_xlabel('$\it{m/z}$') axs[1].text(-0.2, 1.05, "b", transform=axs[1].transAxes, fontsize=16, fontweight='bold', va='top', ha='right') cb = fig.colorbar(im2, ax = axs[1]) cb.set_ticks([0,5,10,15]) cb.set_ticklabels(['0', '5', '10', '$\geq$ 15']) cb.set_label('Rel. 
P count (%)', labelpad = -10) # panel c im3 = axs[2].scatter(x = evidences_trp_H['m/z'], y = evidences_trp_H['CCS'], c = evidences_trp_H['H_pos'], alpha = 0.5, s = 0.5, linewidth=0, vmin = -1, vmax = 1, cmap = cmap) axs[2].plot(np.arange(300,1800,1), trendline_func( np.arange(300,1800,1), params_charge2[0], params_charge2[1]), color = "black", ls = 'dashed', lw = .5) axs[2].plot(np.arange(300,1800,1), trendline_func( np.arange(300,1800,1), params_charge3[0], params_charge3[1]), color = "black", ls = 'dashed', lw = .5) axs[2].plot(np.arange(300,1800,1), trendline_func( np.arange(300,1800,1), params_charge4[0], params_charge4[1]), color = "black", ls = 'dashed', lw = .5) axs[2].set_ylabel('CCS ($\AA^2$)') axs[2].set_xlabel('$\it{m/z}$') axs[2].text(-0.2, 1.05, "c", transform=axs[2].transAxes, fontsize=16, fontweight='bold', va='top', ha='right') cb = fig.colorbar(im3, ax = axs[2]) cb.set_ticks([-1,1]) cb.set_ticklabels(['C-term', 'N-term']) cb.set_label('H position', labelpad = -20) plt.tight_layout() plt.savefig('figures/Figure3.jpg') plt.show(); # - # <b>Figure 3. A global view on peptide cross sections.</b> <b>a,</b> Mass-to-charge vs. collisional cross section distribution of all peptides in this study colored by the GRAVY hydrophobicity index (n = 559,979). <b>b,</b> Subset of peptides with C-terminal arginine or lysine colored by the fraction of prolines in the linear sequence (n = 452,592). <b>c,</b> Histidine-containing peptides of b colored by the relative position of histidine (n = 171,429). Trend lines (dashed) are fitted to the overall peptide distribution to visualize the correlation of ion mass and mobility in each charge state.
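The dashed trend lines come from fitting the power law CCS = a·(m/z)^b per charge state with `scipy.optimize.curve_fit`, as in the cell above. A self-contained sketch of the same fit on synthetic data (the "true" parameters a = 2.0, b = 0.6 are made up for illustration):

```python
import numpy as np
from scipy import optimize

def trendline_func(x, a, b):
    # Power-law model CCS = a * (m/z)**b, the same form as the dashed trend lines
    return a * np.power(x, b)

# Synthetic m/z values and noisy "CCS" values from known, made-up parameters
rng = np.random.RandomState(0)
mz = np.linspace(300, 1800, 200)
ccs = trendline_func(mz, 2.0, 0.6) + rng.normal(0, 0.5, size=mz.size)

# p0 gives the optimizer a reasonable starting point for the power law
params, cov = optimize.curve_fit(trendline_func, mz, ccs, p0=(1.0, 0.5))
print(params)  # approximately [2.0, 0.6]
```

With low noise relative to the signal, `curve_fit` recovers the generating parameters closely; the diagonal of `cov` gives their variances.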
# + fig, axs = plt.subplots(1,3, figsize=(12, 4)) # panel a im1 = axs[0].scatter(x = evidences_trp['m/z'], y = evidences_trp['CCS'], c = evidences_trp['helix_fraction'], alpha = 0.5, s = 0.5, linewidth=0, vmin = 0, vmax = 0.5, cmap = cmap3); axs[0].set_ylabel('CCS ($\AA^2$)') axs[0].set_xlabel('$\it{m/z}$') axs[0].text(-0.2, 1.05, "a", transform=axs[0].transAxes, fontsize=16, fontweight='bold', va='top', ha='right') cb = fig.colorbar(im1, ax = axs[0]) cb.set_ticks([0,0.1,0.2,0.3, 0.4, 0.5]) cb.set_ticklabels(['0.0', '0.1', '0.2', '0.3', '0.4', '$\geq$ 0.5']) cb.set_label('Helix fraction') # panel b im2 = axs[1].scatter(x = evidences_trp['m/z'], y = evidences_trp['CCS'], c = evidences_trp['turn_fraction'], alpha = 0.5, s = 0.5, linewidth=0, vmin = 0, vmax = 0.5, cmap = cmap3) axs[1].set_ylabel('CCS ($\AA^2$)') axs[1].set_xlabel('$\it{m/z}$') axs[1].text(-0.2, 1.05, "b", transform=axs[1].transAxes, fontsize=16, fontweight='bold', va='top', ha='right') cb = fig.colorbar(im2, ax = axs[1]) cb.set_ticks([0,0.1,0.2,0.3, 0.4, 0.5]) cb.set_ticklabels(['0.0', '0.1', '0.2', '0.3', '0.4', '$\geq$ 0.5']) cb.set_label('Turn fraction') # panel c im3 = axs[2].scatter(x = evidences_trp['m/z'], y = evidences_trp['CCS'], c = evidences_trp['sheet_fraction'], alpha = 0.5, s = 0.5, linewidth=0, vmin = 0, vmax = 0.5, cmap = cmap3) axs[2].set_ylabel('CCS ($\AA^2$)') axs[2].set_xlabel('$\it{m/z}$') axs[2].text(-0.2, 1.05, "c", transform=axs[2].transAxes, fontsize=16, fontweight='bold', va='top', ha='right') cb = fig.colorbar(im3, ax = axs[2]) cb.set_ticks([0,0.1,0.2,0.3, 0.4, 0.5]) cb.set_ticklabels(['0.0', '0.1', '0.2', '0.3', '0.4', '$\geq$ 0.5']) cb.set_label('Sheet fraction') plt.tight_layout() plt.savefig('figures/Figure_S3.png') plt.show(); # - # <b>Supplementary Figure 5.</b> Fraction of amino acids favoring <b>a,</b> helical (A, L, M, H, Q, E), <b>b,</b> turn (G, S, D, N, P) and <b>c,</b> sheet (V, I, F, T, Y) secondary structures according to Levitt 1978. # ### Comparison LysC vs.
LysN # + evidences['firstAA'] = evidences['Sequence'].str[:1] evidences['lastAA'] = evidences['Sequence'].str[-1:] evidence_subset_LysC = evidences[evidences['lastAA'].isin(['K'])] evidence_subset_LysN = evidences[evidences['firstAA'].isin(['K'])] # + mod_seq_lysC = [] mod_seq_lysN = [] seq_lysC = [] seq_lysN = [] internal_seq = [] CCS_lysC = [] CCS_lysN = [] deltas = [] Mass = [] mz = [] for index, row in evidence_subset_LysC.iterrows(): internal_sequence = row['Modified sequence'][1:-2] tmp = evidence_subset_LysN.loc[evidence_subset_LysN['Modified sequence'].str[2:-1] == internal_sequence] if(len(tmp) > 0): for i, sequence in enumerate(tmp['Sequence']): if ( (row['Charge'] == tmp.iloc[i]['Charge'])): mod_seq_lysC.append(row['Modified sequence']) mod_seq_lysN.append(tmp.iloc[i]['Modified sequence']) seq_lysC.append(row['Sequence']) seq_lysN.append(tmp.iloc[i]['Sequence']) internal_seq.append(internal_sequence) CCS_lysC.append(row['CCS']) CCS_lysN.append(tmp.iloc[i]['CCS']) Mass.append(row['Mass']) mz.append(row['m/z']) deltas.append(row['CCS'] - tmp.iloc[i]['CCS']) lysc_lysn = pd.DataFrame() lysc_lysn['mod_seq_lysC'] = mod_seq_lysC lysc_lysn['mod_seq_lysN'] = mod_seq_lysN lysc_lysn['seq_lysC'] = seq_lysC lysc_lysn['seq_lysN'] = seq_lysN lysc_lysn['internal_seq'] = internal_seq lysc_lysn['CCS_lysC'] = CCS_lysC lysc_lysn['CCS_lysN'] = CCS_lysN lysc_lysn['deltas'] = deltas lysc_lysn['Mass'] = Mass lysc_lysn['mz'] = mz lysc_lysn.to_csv('output/peptides_LysN_LysC.csv'); print(len(deltas)) # - lysc_lysn['charge'] = np.rint(lysc_lysn['Mass']/lysc_lysn['mz']) # Median relative shift ((lysc_lysn['CCS_lysC']-lysc_lysn['CCS_lysN'])/lysc_lysn['CCS_lysC']*100).median() # + lysc_lysn_charge2 = lysc_lysn[lysc_lysn['charge'] == 2] lysc_lysn_charge3 = lysc_lysn[lysc_lysn['charge'] == 3] lysc_lysn_charge4 = lysc_lysn[lysc_lysn['charge'] == 4] len(lysc_lysn_charge2), len(lysc_lysn_charge3), len(lysc_lysn_charge4) # + 
((lysc_lysn_charge2['CCS_lysC']-lysc_lysn_charge2['CCS_lysN'])/lysc_lysn_charge2['CCS_lysC']*100).hist(bins = 50) plt.xlabel('CCS (LysC-LysN)/LysC (%) ') plt.ylabel('Count'); plt.savefig("figures/Suppl_Fig_5c.jpg") # + ((lysc_lysn_charge3['CCS_lysC']-lysc_lysn_charge3['CCS_lysN'])/lysc_lysn_charge3['CCS_lysC']*100).hist(bins = 80) plt.xlabel('CCS (LysC-LysN)/LysC (%) ') plt.ylabel('Count'); plt.savefig("figures/Suppl_Fig_5d.png") # + sns.kdeplot(evidence_subset_LysC.loc[evidence_subset_LysC['Charge'] == 3]['m/z'], evidence_subset_LysC.loc[evidence_subset_LysC['Charge'] == 3]['CCS'], cmap="Blues", shade=True, shade_lowest=False) plt.xlabel('m/z') plt.ylabel('CCS ($\AA^2$)') plt.savefig("figures/Suppl_Fig_5a_charge3.png"); # + sns.kdeplot(evidence_subset_LysN.loc[evidence_subset_LysN['Charge'] == 3]['m/z'], evidence_subset_LysN.loc[evidence_subset_LysN['Charge'] == 3]['CCS'], cmap="Blues", shade=True, shade_lowest=False) plt.xlabel('m/z') plt.ylabel('CCS ($\AA^2$)') plt.savefig("figures/Suppl_Fig_5b_charge3.png"); # + sns.kdeplot(evidence_subset_LysC.loc[evidence_subset_LysC['Charge'] == 2]['m/z'], evidence_subset_LysC.loc[evidence_subset_LysC['Charge'] == 2]['CCS'], cmap="Blues", shade=True, shade_lowest=False) plt.xlabel('m/z') plt.ylabel('CCS ($\AA^2$)') plt.savefig("figures/Suppl_Fig_5a_charge2.png"); # + sns.kdeplot(evidence_subset_LysN.loc[evidence_subset_LysN['Charge'] == 2]['m/z'], evidence_subset_LysN.loc[evidence_subset_LysN['Charge'] == 2]['CCS'], cmap="Blues", shade=True, shade_lowest=False) plt.xlabel('m/z') plt.ylabel('CCS ($\AA^2$)') plt.savefig("figures/Suppl_Fig_5b_charge2.png"); # - # ### Comparison bulkiness vs. 
hydrophobicity # + fig, axs = plt.subplots(1,2, figsize=(12, 4)) # panel a im1 = axs[0].scatter(x = evidences_trp['m/z'], y = evidences_trp['CCS'], c = evidences_trp['bulkiness'], alpha = 1, s = 0.5, linewidth=0, vmin = 11, vmax = 19, cmap = cmap); axs[0].set_ylabel('CCS ($\AA^2$)') axs[0].set_xlabel('$\it{m/z}$') axs[0].text(-0.2, 1.05, "a", transform=axs[0].transAxes, fontsize=16, fontweight='bold', va='top', ha='right') cb = fig.colorbar(im1, ax = axs[0]) cb.set_label('Bulkiness') # panel b im2 = axs[1].scatter(x = evidences_trp['m/z'], y = evidences_trp['CCS'], c = evidences_trp['gravy'], alpha = 1, s = 0.5, linewidth=0, vmin = -3, vmax = 2, cmap = cmap); axs[1].set_ylabel('CCS ($\AA^2$)') axs[1].set_xlabel('$\it{m/z}$') axs[1].text(-0.2, 1.05, "b", transform=axs[1].transAxes, fontsize=16, fontweight='bold', va='top', ha='right') cb = fig.colorbar(im2, ax = axs[1]) cb.set_label('GRAVY score') plt.tight_layout() plt.savefig('figures/revision_bulk_hydrophob.png') plt.show(); # + # define quantiles for deviation from trend line CCS_fit_charge2['deltaFit'] = (CCS_fit_charge2['CCS'] - trendline_func( CCS_fit_charge2['m/z'], params_charge2[0], params_charge2[1])) / trendline_func( CCS_fit_charge2['m/z'], params_charge2[0], params_charge2[1]) q1 = CCS_fit_charge2['deltaFit'].quantile(0.20) q2 = CCS_fit_charge2['deltaFit'].quantile(0.40) q3 = CCS_fit_charge2['deltaFit'].quantile(0.60) q4 = CCS_fit_charge2['deltaFit'].quantile(0.80) CCS_fit_charge2.loc[CCS_fit_charge2['deltaFit'] < q1, 'quantile'] = 1 CCS_fit_charge2.loc[(CCS_fit_charge2['deltaFit'] >= q1) & (CCS_fit_charge2['deltaFit'] < q2), 'quantile'] = 2 CCS_fit_charge2.loc[(CCS_fit_charge2['deltaFit'] >= q2) & (CCS_fit_charge2['deltaFit'] < q3), 'quantile'] = 3 CCS_fit_charge2.loc[(CCS_fit_charge2['deltaFit'] >= q3) & (CCS_fit_charge2['deltaFit'] < q4), 'quantile'] = 4 CCS_fit_charge2.loc[(CCS_fit_charge2['deltaFit'] >= q4), 'quantile'] = 5 CCS_fit_charge3['deltaFit'] = (CCS_fit_charge3['CCS'] - trendline_func( 
CCS_fit_charge3['m/z'], params_charge3[0], params_charge3[1])) / trendline_func( CCS_fit_charge3['m/z'], params_charge3[0], params_charge3[1]) q1 = CCS_fit_charge3['deltaFit'].quantile(0.20) q2 = CCS_fit_charge3['deltaFit'].quantile(0.40) q3 = CCS_fit_charge3['deltaFit'].quantile(0.60) q4 = CCS_fit_charge3['deltaFit'].quantile(0.80) CCS_fit_charge3.loc[CCS_fit_charge3['deltaFit'] < q1, 'quantile'] = 1 CCS_fit_charge3.loc[(CCS_fit_charge3['deltaFit'] >= q1) & (CCS_fit_charge3['deltaFit'] < q2), 'quantile'] = 2 CCS_fit_charge3.loc[(CCS_fit_charge3['deltaFit'] >= q2) & (CCS_fit_charge3['deltaFit'] < q3), 'quantile'] = 3 CCS_fit_charge3.loc[(CCS_fit_charge3['deltaFit'] >= q3) & (CCS_fit_charge3['deltaFit'] < q4), 'quantile'] = 4 CCS_fit_charge3.loc[(CCS_fit_charge3['deltaFit'] >= q4), 'quantile'] = 5 # - from matplotlib.colors import ListedColormap cmap = ListedColormap(sns.color_palette('tab10', n_colors = 5)) # + plt.scatter(CCS_fit_charge2['m/z'], CCS_fit_charge2['CCS'], s = .1, alpha = .1, c=CCS_fit_charge2['quantile'], cmap = cmap) plt.ylabel('CCS ($\AA^2$)') plt.xlabel('$\it{m/z}$') plt.savefig('figures/Supplementary_Figure_quantiles_a.jpg'); # + sns.violinplot(y = 'gravy', x = 'quantile', data = CCS_fit_charge2) plt.ylabel('GRAVY score') plt.xlabel('Quantile') plt.savefig('figures/Supplementary_Figure_quantiles_b.jpg') # + sns.violinplot(y = 'bulkiness', x = 'quantile', data = CCS_fit_charge2) plt.ylabel('Bulkiness') plt.xlabel('Quantile') plt.savefig('figures/Supplementary_Figure_quantiles_c.jpg') # + plt.scatter(CCS_fit_charge3['m/z'], CCS_fit_charge3['CCS'], s = .1, alpha = .1, c=CCS_fit_charge3['quantile'], cmap = cmap) plt.ylabel('CCS ($\AA^2$)') plt.xlabel('$\it{m/z}$') plt.savefig('figures/Supplementary_Figure_quantiles_d.jpg') # + sns.violinplot(y = 'gravy', x = 'quantile', data = CCS_fit_charge3) plt.ylabel('GRAVY score') plt.xlabel('Quantile') plt.savefig('figures/Supplementary_Figure_quantiles_e.jpg') # + sns.violinplot(y = 'bulkiness', x = 
'quantile', data = CCS_fit_charge3) plt.ylabel('Bulkiness') plt.xlabel('Quantile') plt.savefig('figures/Supplementary_Figure_quantiles_f.jpg')
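The manual quantile labels built above with chained `.loc` comparisons against q1..q4 can be computed in one call with `pandas.qcut`. A sketch on synthetic `deltaFit` values (the normal distribution here is only a stand-in for the real residuals):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the deltaFit column computed above
rng = np.random.RandomState(0)
df = pd.DataFrame({'deltaFit': rng.normal(0, 0.05, size=1000)})

# qcut builds equal-sized quantile bins in one call; labels 1..5 match
# the manual q1..q4 threshold scheme used above
df['quantile'] = pd.qcut(df['deltaFit'], q=5, labels=[1, 2, 3, 4, 5])

print(df['quantile'].value_counts().sort_index())
```

The bin edges differ from the manual version only in how exact boundary values are assigned (`qcut` uses half-open intervals), which is negligible for continuous data.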
Figure3.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import requests import json import pandas as pd import numpy as np # # Landkreis (district) data LK_data = pd.read_csv('../../data/data_gatherer/LK_data_clean.csv') #source: https://de.wikipedia.org/wiki/Liste_der_Landkreise_in_Deutschland def get_city_opendata(city, country): tmp = 'https://public.opendatasoft.com/api/records/1.0/search/?dataset=worldcitiespop&q=%s&sort=population&facet=country&refine.country=%s' cmd = tmp % (city, country) res = requests.get(cmd) dct = json.loads(res.content) out = dct['records'][0]['fields'] return out kreisstaedte = np.array(LK_data['Kreissitz']) def get_kreisstadt_info(kreisstaedte): for idx,stadt in enumerate(kreisstaedte): if idx % 10 == 0: print(idx) try: dpt = get_city_opendata(stadt, 'de') data_lk.append(dpt) except: print('City', stadt, 'not in dataset') return data_lk data_lk = [] data_lk = get_kreisstadt_info(kreisstaedte) Einw_pro_LK = np.array(LK_data['Einw.']) BL_pro_LK = np.array(LK_data['BL']) #create dictionary kreisstadt_dict = { f"kreisstadt{index}": { "city": data_lk[index]['city'], "ident": index, "position": {"lat": data_lk[index]['latitude'], "lon": data_lk[index]['longitude']}, "population_LK": Einw_pro_LK[index], "BL": BL_pro_LK[index] } for index, elm in enumerate(data_lk) } #check population accounted for pop_cnt = 0 for i in range(290): pop_cnt += kreisstadt_dict[f"kreisstadt{i}"]['population_LK'] print(pop_cnt) # + # dump data import csv w = csv.writer(open("kreisstaedte_data_final.csv", "w")) for key, val in kreisstadt_dict.items(): w.writerow([key, val]) # - # # Städte (city) data staedte_data = pd.read_csv('../../data/data_gatherer/staedte_data.csv') #source: https://de.wikipedia.org/wiki/Liste_der_kreisfreien_St%C3%A4dte_in_Deutschland staedte = np.array(staedte_data['Stadt']) def get_stadt_info(staedte): for idx,stadt in enumerate(staedte): if idx % 10 == 0: print(idx) try: dpt = get_city_opendata(stadt, 'de') data_stadt.append(dpt) except: print('City', stadt, 'not in dataset') return data_stadt data_stadt = [] data_stadt = get_stadt_info(staedte) # + print(type(data_stadt)) for i, elm in enumerate(data_stadt): try: data_stadt[i]['population'] except: print(i, elm) print("Doesn't have population info") # - with open('stadt.txt') as f: lines = f.read().splitlines() # + # rest is manual work # -
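Because `get_city_opendata` indexes directly into the API response, a record with a missing field raises an exception that the surrounding `try`/`except` then swallows. A sketch of pulling the needed fields defensively with `dict.get` instead (the sample record below is shaped like one entry of `dct['records']`, but it is illustrative, not real API output):

```python
def extract_city_fields(record):
    # Pull only the fields the notebook needs, tolerating missing keys
    fields = record.get('fields', {})
    return {
        'city': fields.get('city'),
        'latitude': fields.get('latitude'),
        'longitude': fields.get('longitude'),
        'population': fields.get('population'),  # may be absent, as seen above
    }

# Illustrative record; real responses come from the opendatasoft API
sample = {'fields': {'city': 'Berlin', 'latitude': 52.52, 'longitude': 13.40}}
info = extract_city_fields(sample)
print(info['city'], info['population'])  # → Berlin None
```

This makes the "Doesn't have population info" check above unnecessary: missing values surface as `None` instead of exceptions.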
src/data_gatherer/generate-full-population-dataset.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import sys sys.path.append('../scripts/') from sarsa import * class NstepSarsaAgent(SarsaAgent): def __init__(self, time_interval, estimator, puddle_coef=100, alpha=0.5, widths=np.array([0.2, 0.2, math.pi/18]).T, \ lowerleft=np.array([-4, -4]).T, upperright=np.array([4, 4]).T, dev_borders=[0.1,0.2,0.4,0.8], nstep=10): super().__init__(time_interval, estimator, puddle_coef, alpha, widths, lowerleft, upperright, dev_borders) self.s_trace = [] self.a_trace = [] self.r_trace = [] self.nstep = nstep def set_action_value_function(self): ###nstep_sarsa2agent ss = {} for index in self.indexes: ss[index] = StateInfo(len(self.actions)) for i, a in enumerate(self.actions): ss[index].q[i] = -1000.0 return ss def decision(self, observation=None): ## termination handling ## if self.update_end: return 0.0, 0.0 if self.in_goal: self.update_end = True ## run the Kalman filter ## self.estimator.motion_update(self.prev_nu, self.prev_omega, self.time_interval) self.estimator.observation_update(observation) ## action selection and reward handling ## s_, a_ = self.policy(self.estimator.pose) r = self.time_interval*self.reward_per_sec() self.r_trace.append(r) # record r first to keep the trace indexes consistent self.total_reward += r ## record s', a', r and update the Q values ## self.q_update(s_, a_, self.nstep) self.s_trace.append(s_) self.a_trace.append(a_) ## output ## self.prev_nu, self.prev_omega = self.actions[a_] return self.actions[a_] def q_update(self, s_, a_, n): if n > len(self.s_trace) or n == 0: return s = self.s_trace[-n] a = self.a_trace[-n] q = self.ss[s].q[a] r = sum(self.r_trace[-n:]) q_ = self.final_value if self.in_goal else self.ss[s_].q[a_] self.ss[s].q[a] = (1.0 - self.alpha)*q + self.alpha*(r + q_) if self.in_goal: self.q_update(s_, a_, n-1) class WarpRobot2(WarpRobot): ###nstep_sarsa2warprobot2 def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def choose_pose(self): # randomly reset the initial pose within the 8x8[m] area return np.array([random.random()*8-4, random.random()*8-4, random.random()*2*math.pi]).T def one_step(self, time_interval): # cut the episode off after 300 steps (about 30 seconds, though this shifts if the parameters change) if self.agent.update_end: with open("log.txt", "a") as f: f.write("{}\n".format(self.agent.total_reward + self.agent.final_value)) self.reset() return elif len(self.poses) >= 300: with open("log.txt", "a") as f: f.write("DNF\n") self.reset() return super().one_step(time_interval) # + def trial(): time_interval = 0.1 world = PuddleWorld(400000, time_interval, debug=False) # allow plenty of animation time ## create the map and add the landmarks ## m = Map() for ln in [(-4,2), (2,-3), (4,4), (-4,-4)]: m.append_landmark(Landmark(*ln)) world.append(m) ## add the goal ## goal = Goal(-3,-3) world.append(goal) ## add the puddles ## world.append(Puddle((-2, 0), (0, 2), 0.1)) world.append(Puddle((-0.5, -2), (2.5, 1), 0.1)) ## add one robot ## init_pose = np.array([3, 3, 0]).T kf = KalmanFilter(m, init_pose) a = NstepSarsaAgent(time_interval, kf) r = WarpRobot2(init_pose, sensor=Camera(m, distance_bias_rate_stddev=0, direction_bias_stddev=0), agent=a, color="red", bias_rate_stds=(0,0)) world.append(r) world.draw() return a #a = trial() # -
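The `q_update` method above performs the n-step Sarsa backup Q(s,a) ← (1−α)·Q(s,a) + α·(Σr + Q(s',a')), where Σr sums the last n rewards. A minimal, self-contained tabular sketch of that same backup (the toy Q table, states, and action name `'f'` are made up for illustration):

```python
from collections import deque

def nstep_sarsa_update(Q, trace, s_next, a_next, alpha=0.5):
    # trace holds the last n (state, action, reward) triples;
    # back up the oldest pair toward the undiscounted n-step return
    s, a, _ = trace[0]
    ret = sum(r for _, _, r in trace)  # sum of the last n rewards
    Q[(s, a)] = (1.0 - alpha) * Q[(s, a)] + alpha * (ret + Q[(s_next, a_next)])

# Toy example: a 3-step trace on a tiny Q table
Q = {(0, 'f'): 0.0, (1, 'f'): 0.0, (2, 'f'): 0.0, (3, 'f'): 10.0}
trace = deque([(0, 'f', -1.0), (1, 'f', -1.0), (2, 'f', -1.0)], maxlen=3)
nstep_sarsa_update(Q, trace, 3, 'f')
print(Q[(0, 'f')])  # 0.5*0 + 0.5*(-3 + 10) = 3.5
```

As in the agent, only the state-action pair n steps back is updated each step; the recursive call on reaching the goal flushes the remaining shorter traces.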
section_reinforcement_learning/nstep_sarsa2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import math import numpy as np import matplotlib.pyplot as plt # # Introduction # In this exercise we use the inverse transform method to generate an exponential distribution with $\lambda=\frac{5}{h}$ over a maximum time of $T=3h$. We start from the probability density function $f(x)=A\cdot e^{-x\cdot A}$, whose integral is # # $${\displaystyle \int_{0}^{X} A\cdot e^{-x\cdot A}\,{\text{d}}x} =\left. -e^{-x\cdot A} \right|_0^X = - ( e^{ -X\cdot A}-e^{-0 \cdot A}) = 1 - e^{-X \cdot A}$$ # # # # $${\displaystyle F(x)=P(X\leq x)=\left\{{\begin{matrix}0&{\text{for }}x<0\\1-e^{-X \cdot A}&{\text{for }}x\geq 0\end{matrix}}\right.}$$ # # # We can then invert: # # $$z = 1 - e^{-X \cdot A}$$ # $$1-z= e^{-X \cdot A} $$ # # $$-\frac{\ln(1-z)}{A} = X $$ # # Since $z$ is a random variable with $z\in(0,1)$, we also have $(1-z)\in(0,1)$, so we can take a uniformly distributed random variable $U=1-z$, which leaves # # $$-\frac{\ln(U)}{A} = X $$ # # Afterwards, we sum the jumps until the total reaches 3 hours, giving a mean of $15=\frac{5}{h} \cdot 3 h$; repeating the draw many times yields the Poisson distribution.
# def procesopoiss(N,lambdaa,T): x=np.random.random(N) y=[] for j in range(N): y.append(-math.log(1-x[j])*(1/lambdaa)) suma=0 for i in range(N): if (suma>T): suma=suma-y[i-1] A=i-1 break else: suma=y[i]+suma return (suma,A) lista2=[] M=1000 l1=[] l2=[] for i in range (M): lista2,A=procesopoiss(1000,5,3) l1.append(A) l2.append(lista2) # + # theoretical comparison # plotting the Poisson pmf pois=[] A=3000 for i in range(40): x=A*((15**i)*math.exp(-15)/math.factorial(i)) pois.append(x) plt.plot(pois); plt.hist(l1,bins=8); plt.show() # - # # Conclusion # # The Poisson process simulated through the steps of an exponentially distributed variable shows that the mean count should sit at 15 (and indeed it does), although the value 15 is not obtained on every trial. The distribution follows the theoretical Poisson, as the comparison above shows.
ejercicio5.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + [markdown] deletable=true editable=true # <h1> Serving embeddings </h1> # # This notebook illustrates how to: # <ol> # <li> Create a custom embedding as part of a regression/classification model # <li> Representing categorical variables in different ways # <li> Math with feature columns # <li> Serve out the embedding, as well as the original model's predictions # </ol> # # + deletable=true editable=true # change these to try this notebook out BUCKET = 'cloud-training-demos-ml' PROJECT = 'cloud-training-demos' REGION = 'us-central1' # + deletable=true editable=true import os os.environ['BUCKET'] = BUCKET os.environ['PROJECT'] = PROJECT os.environ['REGION'] = REGION # + deletable=true editable=true # %bash gcloud config set project $PROJECT gcloud config set compute/region $REGION # + deletable=true editable=true language="bash" # if ! gsutil ls | grep -q gs://${BUCKET}/; then # gsutil mb -l ${REGION} gs://${BUCKET} # fi # + [markdown] deletable=true editable=true # ## Creating dataset # # The problem is to estimate demand for bicycles at different rental stations in New York City. 
The necessary data is in BigQuery: # + deletable=true editable=true query = """ #standardsql WITH bicycle_rentals AS ( SELECT COUNT(starttime) as num_trips, EXTRACT(DATE from starttime) as trip_date, MAX(EXTRACT(DAYOFWEEK from starttime)) as day_of_week, start_station_id FROM `bigquery-public-data.new_york.citibike_trips` GROUP BY trip_date, start_station_id ), rainy_days AS ( SELECT date, (MAX(prcp) > 5) AS rainy FROM ( SELECT wx.date AS date, IF (wx.element = 'PRCP', wx.value/10, NULL) AS prcp FROM `bigquery-public-data.ghcn_d.ghcnd_2016` AS wx WHERE wx.id = 'USW00094728' ) GROUP BY date ) SELECT num_trips, day_of_week, start_station_id, rainy FROM bicycle_rentals AS bk JOIN rainy_days AS wx ON wx.date = bk.trip_date """ import google.datalab.bigquery as bq df = bq.Query(query).execute().result().to_dataframe() # + deletable=true editable=true # shuffle the dataframe to make it easier to split into train/eval later df = df.sample(frac=1.0) df.head() # + [markdown] deletable=true editable=true # ## Feature engineering # Let's build a model to predict the number of trips that start at each station, given that we know the day of the week and whether it is a rainy day. # # Inputs to the model: # * day of week (integerized, since it is 1-7) # * station id (hash buckets, since we don't know full vocabulary. The dataset has about 650 unique values. we'll use a much larger hash bucket size, but then embed it into a lower dimension) # * rainy (true/false) # # Label: # * num_trips # # By embedding the station id into just 2 dimensions, we will also get to learn which stations are like each other, at least in the context of rainy-day-rentals. # + [markdown] deletable=true editable=true # ### Change data type # # Let's change the Pandas data types to more efficient (for TensorFlow) forms. 
# + deletable=true editable=true df.dtypes # + deletable=true editable=true import numpy as np df = df.astype({'num_trips': np.float32, 'day_of_week': np.int32, 'start_station_id': np.int32, 'rainy': str}) df.dtypes # + [markdown] deletable=true editable=true # ### Scale the label to make it easier to optimize. # + deletable=true editable=true df['num_trips'] = df['num_trips'] / 1000.0 # + deletable=true editable=true num_train = (int) (len(df) * 0.8) train_df = df.iloc[:num_train] eval_df = df.iloc[num_train:] print("Split into {} training examples and {} evaluation examples".format(len(train_df), len(eval_df))) # + deletable=true editable=true train_df.head() # + [markdown] deletable=true editable=true # <h2> Creating an Estimator model </h2> # # Pretty minimal, but it works! # + deletable=true editable=true import tensorflow as tf import pandas as pd def make_input_fn(indf, num_epochs): return tf.estimator.inputs.pandas_input_fn( indf, indf['num_trips'], num_epochs=num_epochs, shuffle=True) def serving_input_fn(): feature_placeholders = { 'day_of_week': tf.placeholder(tf.int32, [None]), 'start_station_id': tf.placeholder(tf.int32, [None]), 'rainy': tf.placeholder(tf.string, [None]) } features = { key: tf.expand_dims(tensor, -1) for key, tensor in feature_placeholders.items() } return tf.estimator.export.ServingInputReceiver(features, feature_placeholders) def train_and_evaluate(output_dir, nsteps): station_embed = tf.feature_column.embedding_column( tf.feature_column.categorical_column_with_hash_bucket('start_station_id', 5000, tf.int32), 2) feature_cols = [ tf.feature_column.categorical_column_with_identity('day_of_week', num_buckets = 8), station_embed, tf.feature_column.categorical_column_with_vocabulary_list('rainy', ['false', 'true']) ] estimator = tf.estimator.LinearRegressor( model_dir = output_dir, feature_columns = feature_cols) train_spec=tf.estimator.TrainSpec( input_fn = make_input_fn(train_df, None), max_steps = nsteps) exporter = 
tf.estimator.LatestExporter('exporter', serving_input_fn) eval_spec=tf.estimator.EvalSpec( input_fn = make_input_fn(eval_df, 1), steps = None, start_delay_secs = 1, # start evaluating after N seconds throttle_secs = 10, # evaluate every N seconds exporters = exporter) tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec) import shutil OUTDIR='./model_trained' shutil.rmtree(OUTDIR, ignore_errors=True) train_and_evaluate(OUTDIR, 10) # + [markdown] deletable=true editable=true # ## Predict using the exported model # + deletable=true editable=true # %writefile test.json {"day_of_week": 3, "start_station_id": 384, "rainy": "false"} {"day_of_week": 4, "start_station_id": 384, "rainy": "true"} # + deletable=true editable=true # %bash EXPORTDIR=./model_trained/export/exporter/ MODELDIR=$(ls $EXPORTDIR | tail -1) gcloud ml-engine local predict --model-dir=${EXPORTDIR}/${MODELDIR} --json-instances=./test.json # + [markdown] deletable=true editable=true # ## Serving out the embedding also # # To serve out the embedding, we need to use a model function (a custom estimator) so that we have access to output_alternates # + deletable=true editable=true import tensorflow as tf import pandas as pd def make_input_fn(indf, num_epochs): return tf.estimator.inputs.pandas_input_fn( indf, indf['num_trips'], num_epochs=num_epochs, shuffle=True) def serving_input_fn(): feature_placeholders = { 'day_of_week': tf.placeholder(tf.int32, [None]), 'start_station_id': tf.placeholder(tf.int32, [None]), 'rainy': tf.placeholder(tf.string, [None]) } features = { key: tf.expand_dims(tensor, -1) for key, tensor in feature_placeholders.items() } return tf.estimator.export.ServingInputReceiver(features, feature_placeholders) def model_fn(features, labels, mode): # linear model station_col = tf.feature_column.categorical_column_with_hash_bucket('start_station_id', 5000, tf.int32) station_embed = tf.feature_column.embedding_column(station_col, 2) # embed dimension embed_layer = 
tf.feature_column.input_layer(features, station_embed) cat_cols = [ tf.feature_column.categorical_column_with_identity('day_of_week', num_buckets = 8), tf.feature_column.categorical_column_with_vocabulary_list('rainy', ['false', 'true']) ] cat_cols = [tf.feature_column.indicator_column(col) for col in cat_cols] other_inputs = tf.feature_column.input_layer(features, cat_cols) all_inputs = tf.concat([embed_layer, other_inputs], axis=1) predictions = tf.layers.dense(all_inputs, 1) # linear model # 2. Use a regression head to use the standard loss, output, etc. my_head = tf.contrib.estimator.regression_head() spec = my_head.create_estimator_spec( features = features, mode = mode, labels = labels, logits = predictions, optimizer = tf.train.FtrlOptimizer(learning_rate = 0.1) ) # 3. Create predictions predictions_dict = { "predicted": predictions, "station_embed": embed_layer } # 4. Create export outputs export_outputs = { "predict_export_outputs": tf.estimator.export.PredictOutput(outputs = predictions_dict) } # 5. 
Return EstimatorSpec after modifying the predictions and export outputs return spec._replace(predictions = predictions_dict, export_outputs = export_outputs) def train_and_evaluate(output_dir, nsteps): estimator = tf.estimator.Estimator( model_fn = model_fn, model_dir = output_dir) train_spec=tf.estimator.TrainSpec( input_fn = make_input_fn(train_df, None), max_steps = nsteps) exporter = tf.estimator.LatestExporter('exporter', serving_input_fn) eval_spec=tf.estimator.EvalSpec( input_fn = make_input_fn(eval_df, 1), steps = None, start_delay_secs = 1, # start evaluating after N seconds throttle_secs = 10, # evaluate every N seconds exporters = exporter) tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec) import shutil OUTDIR='./model_trained' shutil.rmtree(OUTDIR, ignore_errors=True) train_and_evaluate(OUTDIR, 1000) # + deletable=true editable=true # %bash EXPORTDIR=./model_trained/export/exporter/ MODELDIR=$(ls $EXPORTDIR | tail -1) gcloud ml-engine local predict --model-dir=${EXPORTDIR}/${MODELDIR} --json-instances=./test.json # + [markdown] deletable=true editable=true # ## Explore embeddings # # Let's explore the embeddings for some stations. Let's look at stations with overall similar numbers of trips. Do they have similar embedding values? 
# + deletable=true editable=true stations=""" SELECT COUNT(starttime) as num_trips, MAX(start_station_name) AS station_name, start_station_id FROM `bigquery-public-data.new_york.citibike_trips` GROUP BY start_station_id ORDER BY num_trips desc """ stationsdf = bq.Query(stations).execute().result().to_dataframe() # + deletable=true editable=true stationsdf.head() # + deletable=true editable=true stationsdf[500:505] # + deletable=true editable=true # %writefile test.json {"day_of_week": 4, "start_station_id": 435, "rainy": "true"} {"day_of_week": 4, "start_station_id": 521, "rainy": "true"} {"day_of_week": 4, "start_station_id": 3221, "rainy": "true"} {"day_of_week": 4, "start_station_id": 3237, "rainy": "true"} # + [markdown] deletable=true editable=true # 435 and 521 are in the first list (of top rental locations) and in the Chelsea Market area. # 3221 and 3237 are in the second list (of rare rentals) and in Long Island. # Do the embeddings reflect this? # + deletable=true editable=true # %bash EXPORTDIR=./model_trained/export/exporter/ MODELDIR=$(ls $EXPORTDIR | tail -1) gcloud ml-engine local predict --model-dir=${EXPORTDIR}/${MODELDIR} --json-instances=./test.json # + [markdown] deletable=true editable=true # In this case, the first dimension of the embedding is almost zero in all cases. So, we only need a one dimensional embedding. And in that, it is quite clear that the Manhattan, frequent rental stations have positive values (0.0081, 0.0011) whereas the Long Island, rare rental stations have negative values (-0.0025, -0.0031). # + deletable=true editable=true # + [markdown] deletable=true editable=true # Copyright 2018 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License # + deletable=true editable=true
blogs/serving_embed/serving_embed.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # language: R # name: ir # --- # # Clustering # ## KMeans df <- read.csv('evasao.csv') head(df) str(df) summary(df) library(dplyr) df2 <- filter(df,df$abandonou == 1) %>% select('periodo','repetiu','desempenho') head(df2) #install.packages('scatterplot3d') library(scatterplot3d) scatterplot3d(df2$periodo,df2$repetiu,df2$desempenho) modelo <- kmeans(df2,4) modelo plot <- scatterplot3d(df2$periodo,df2$repetiu,df2$desempenho, color = modelo$cluster, pch = modelo$cluster) plot$points3d(modelo$centers, pch = 8, col = 2, cex = 9)
book-R/kmeans_evasao.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + pycharm={"is_executing": false, "name": "#%%\n"} import pandas as pd from sklearn.model_selection import train_test_split, GridSearchCV, StratifiedKFold from imblearn.combine import SMOTEENN, SMOTETomek import xgboost as xgb from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, plot_confusion_matrix import matplotlib.pyplot as plt df = pd.read_csv("dataset/preprocessed.csv") df.shape # + pycharm={"is_executing": false, "name": "#%%\n"} df = df.drop(df[df.target == -1].index) df.shape # + pycharm={"is_executing": false, "name": "#%%\n"} # Separate input features and target y = df.target # + pycharm={"is_executing": false, "name": "#%%\n"} X = df.drop('target', axis = 1) # + pycharm={"name": "#%%\n"} # setting up testing and training sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 27) y_test_3_class = ["Low" if x == 1 or x == 2 else "Medium" if x == 3 else 'High' for x in y_test] y_train_3_class = ["Low" if i == 1 or i == 2 else "Medium" if i == 3 else 'High' for i in y_train] # - # ### Train the model without re-sampling (5- classes) # + pycharm={"is_executing": false, "name": "#%%\n"} xgboost = xgb.XGBClassifier(learning_rate = 0.1, n_estimators = 100, max_depth = 20, nthread = -1, random_state = 1, verbosity = 0, gamma = 0.5) xgboost.fit(X_train, y_train) # + pycharm={"is_executing": false, "name": "#%%\n"} preds = xgboost.predict(X_test) # + pycharm={"is_executing": false, "name": "#%%\n"} print("Accuracy: \t", accuracy_score(y_test, preds)) print("F1 Score: \t", f1_score(y_test, preds, average = 'macro')) print("Precision:\t", precision_score(y_test, preds, average = 'macro')) print("Recall: \t", recall_score(y_test, preds, average = 'macro')) # Plot normalized 
confusion matrix classes = ["1", "2", "3", "4", "5"] title = "XGB Without Re-sample 5-class" disp = plot_confusion_matrix(xgboost, X_test, y_test, display_labels = classes, cmap = "copper", normalize = "true") disp.ax_.set_title(title) plt.show() # + [markdown] pycharm={"name": "#%% md\n"} # ### Train the model without re-sampling (3- classes) # + pycharm={"name": "#%%\n"} xgboost = xgb.XGBClassifier(learning_rate = 0.1, n_estimators = 100, max_depth = 20, nthread = -1, random_state = 1, verbosity = 0, gamma = 0.5) xgboost.fit(X_train, y_train_3_class) xgboost_predictions = xgboost.predict(X_test) # + pycharm={"name": "#%%\n"} # Performance results print("Accuracy: \t", accuracy_score(y_test_3_class, xgboost_predictions)) print("F1 Score: \t", f1_score(y_test_3_class, xgboost_predictions, average = 'macro')) print("Precision:\t", precision_score(y_test_3_class, xgboost_predictions, average = 'macro')) print("Recall: \t", recall_score(y_test_3_class, xgboost_predictions, average = 'macro')) # + pycharm={"name": "#%%\n"} # Plot normalized confusion matrix title = "XGB Without Re-sample 3-class" disp = plot_confusion_matrix(xgboost, X_test, y_test_3_class, cmap = "copper", normalize = "true") disp.ax_.set_title(title) plt.show() # - # ### Train the model with re-sampling(5- classes) # + pycharm={"is_executing": false, "name": "#%%\n"} sm = SMOTEENN(random_state = 27, n_jobs = -1) X_train_resample, y_train_resample = sm.fit_sample(X_train, y_train) # + pycharm={"is_executing": false, "name": "#%%\n"} xgb_sampled = xgb.XGBClassifier(learning_rate = 0.1, n_estimators = 100, max_depth = 20, nthread = -1, random_state = 1, verbosity = 0, gamma = 0.5) xgb_sampled.fit(X_train_resample, y_train_resample) # + pycharm={"is_executing": false, "name": "#%%\n"} xgb_sampled_pred = xgb_sampled.predict(X_test) # + pycharm={"is_executing": false, "name": "#%%\n"} print("Accuracy: \t", accuracy_score(y_test, xgb_sampled_pred)) print("F1 Score: \t", f1_score(y_test, xgb_sampled_pred, 
average = 'macro')) print("Precision:\t", precision_score(y_test, xgb_sampled_pred, average = 'macro')) print("Recall: \t", recall_score(y_test, xgb_sampled_pred, average = 'macro')) # Plot normalized confusion matrix classes = ["1", "2", "3", "4", "5"] title = "XGB With Re-sample 5-class" disp = plot_confusion_matrix(xgb_sampled, X_test, y_test, display_labels = classes, cmap = "copper", normalize = "true") disp.ax_.set_title(title) plt.show() # + [markdown] pycharm={"name": "#%% md\n"} # ### Train the model with re-sampling(3- classes) # + pycharm={"name": "#%%\n"} sm = SMOTEENN(random_state = 27, n_jobs = -1) X_train_resample_3_class, y_train_resample_3_class = sm.fit_sample(X_train, y_train_3_class) # + pycharm={"name": "#%%\n"} xgb_sampled = xgb.XGBClassifier(learning_rate = 0.1, n_estimators = 100, max_depth = 20, nthread = -1, random_state = 1, verbosity = 0, gamma = 0.5) xgb_sampled.fit(X_train_resample_3_class, y_train_resample_3_class) # + pycharm={"name": "#%%\n"} xgb_sampled_pred = xgb_sampled.predict(X_test) # + pycharm={"name": "#%%\n"} print("Accuracy: \t", accuracy_score(y_test_3_class, xgb_sampled_pred)) print("F1 Score: \t", f1_score(y_test_3_class, xgb_sampled_pred, average = 'macro')) print("Precision:\t", precision_score(y_test_3_class, xgb_sampled_pred, average = 'macro')) print("Recall: \t", recall_score(y_test_3_class, xgb_sampled_pred, average = 'macro')) # Plot normalized confusion matrix title = "XGB With Re-sample 3-class" disp = plot_confusion_matrix(xgb_sampled, X_test, y_test_3_class, cmap = "copper", normalize = "true") disp.ax_.set_title(title) plt.show()
xgb.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # cd .. # + from pyspark.sql import SparkSession, Row, functions as F, types as T import pandas as pd import numpy as np spark = SparkSession.builder.getOrCreate() # read the network for 2007Q1 snapshot = spark.read.csv( "data/processed/2007-1-user-network-v3.csv", sep='\t', schema="src INT, dst INT, weight INT" ) # read the role distribution information roleG = pd.read_csv("data/processed/roles/2007-1-nmf-G.csv") x = roleG.apply(lambda x: Row( user_id=int(x[0]), vec=x[1:].astype(float).tolist() ), axis=1).values rolx_df = spark.createDataFrame(list(x)) # + # generate features enwiki = spark.read.parquet("data/processed/enwiki-meta-compact") user_text = ( enwiki .where("year=2007 and quarter=1") .groupby("user_id") .agg( F.expr("count(distinct article_id) as n_articles"), F.expr("count(*) as n_edits"), F.expr("sum(log(textdata)) as edit_count") ) ) # + from sklearn.decomposition import NMF rolx_vec = pd.read_csv( "data/processed/roles/2007-1-v", header=None, sep=" " ) rolx_mapping = pd.read_csv( "data/processed/roles/2007-1-mappings", header=None, skiprows=1, sep=" " ) df = pd.concat([rolx_mapping, rolx_vec], axis=1).iloc[:, 1:] nmf = NMF(n_components=8, solver="mu", #beta_loss="kullback-leibler", tol=1e-6, max_iter=1000) X = df.values[:, 1:] W = nmf.fit_transform(X) roleG = pd.concat([df.iloc[:, 0], pd.DataFrame(W)], axis=1) # + def to_row(r): return Row(user_id=int(r[0]), vec=r[1:].astype(float).tolist()) rolx_df = spark.createDataFrame(list(map(to_row, roleG.values))) # + edgelist = ( snapshot .union(snapshot.selectExpr("dst as src", "src as dst", "weight")) .distinct() ) @F.udf(T.ArrayType(T.FloatType())) def average_vec(vecs): avg = np.array(vecs).sum(axis=0)/len(vecs) return avg.astype(float).tolist() neighbors = ( edgelist 
.join(rolx_df.selectExpr("user_id as dst", "vec"), on="dst") .groupby("src") .agg(F.collect_list("vec").alias("vec_list")) .withColumn("neighborhood_roles", average_vec("vec_list")) .selectExpr("src as user_id", "neighborhood_roles") ) # - df = ( user_text .join(rolx_df, on="user_id", how="inner") .join(neighbors, on="user_id", how="inner") .select( "user_id", F.array("n_articles", "n_edits", "edit_count").alias("features"), F.expr("vec as roles"), "neighborhood_roles" ) ).toPandas() # + @F.udf(T.IntegerType()) def label(vec): return vec.index(max(vec)) admins = spark.read.csv("data/processed/admin_mapping.csv", schema="user_id INT, username STRING") discrete_df = ( user_text .join(rolx_df, on="user_id", how="inner") .join(admins, on="user_id", how="left") .withColumn("is_admin", F.expr("username is not null").cast('int')) .withColumn("label", label("vec")) .groupBy("label").agg( F.count("user_id").alias("n_users"), F.sum("is_admin").alias("n_admins"), F.avg("n_articles"), F.stddev("n_articles"), F.avg("n_edits"), F.stddev("n_edits"), F.avg("edit_count"), F.stddev("edit_count"), ) .withColumn("pct_admin", F.expr("n_admins*1.0/n_users")) .orderBy("label") ) discrete_df.show(vertical=True) discrete = discrete_df.toPandas() # - import matplotlib.pyplot as plt plt.bar(discrete.label, discrete['n_users']) plt.title("Roles vs User Count") plt.bar(discrete.label, discrete['avg(n_edits)']) # + from sklearn.decomposition import non_negative_factorization print(df.shape) df.head() # - features = np.array(list(df.features)) features.shape roles = np.array(list(df.roles)) roles.shape neighborhood_roles = np.array(list(df.neighborhood_roles)) neighborhood_roles.shape # + # W (n x r) dot H (r x f) = M (n x f) # f x n = (f x r) dot (r x n) W, _, _ = non_negative_factorization( X = features.T, H = roles.T, n_components=roles.shape[1], update_H = False, solver='mu', #beta_loss="kullback-leibler", init='custom', tol=10e-6, max_iter=1000 ) W.shape # - W_def, _, _ = non_negative_factorization( X = features.T, H = np.ones((1,
roles.shape[0])), n_components=1, update_H = False, solver='mu', #beta_loss="kullback-leibler", init='custom', tol=10e-6, max_iter=1000 ) W_def.shape # + import matplotlib.pyplot as plt text = (W/W_def)[2] plt.stem(np.arange(W.shape[1]), text) plt.title("NodeSense: Roles vs Edit Contribution") plt.xlabel("Role") plt.ylabel("Effect of Role to Edit Contribution") plt.show() # - W, _, _ = non_negative_factorization( X = neighborhood_roles.T, H = roles.T, n_components=roles.shape[1], update_H = False, solver='mu', #beta_loss="kullback-leibler", init='custom', tol=10e-6, max_iter=1000 ) W_def, _, _ = non_negative_factorization( X = neighborhood_roles.T, H = np.ones((1, roles.shape[0])), n_components=1, update_H = False, solver='mu', #beta_loss="kullback-leibler", init='custom', tol=10e-6, max_iter=1000 ) plt.imshow((W/W_def).T, cmap="coolwarm") plt.title("NeighborhoodSense: Role Affinities") # ## community roles # + community_df = spark.read.csv("data/processed/user_community_assignments.csv", header=True) community_df.show(n=5) community_df.printSchema() community_roles = ( rolx_df.join( community_df .where("year='2007' and quarter='1'") .selectExpr("cast(user_id as int) as user_id", "cast(community_id as int) as community_id"), on="user_id", how="left" ) ) community_roles.selectExpr("count(distinct community_id)").show() community_df = ( community_roles .groupby("community_id") .agg( F.count("*").alias("size"), F.collect_list("vec").alias("vec_list") ) .orderBy(F.desc("size")) .select("community_id", "size", average_vec("vec_list").alias("community_roles")) ).toPandas() # - community_df[:20] community_roles = np.array(list(community_df["community_roles"])) community_roles.shape from sklearn.preprocessing import normalize normed = normalize(community_roles, axis=1, norm='l1') plt.imshow(normed, cmap='coolwarm', aspect='auto') plt.imshow(normed[:5], cmap='coolwarm', aspect='auto') plt.imshow(normed[:10], cmap='coolwarm', aspect='auto') plt.imshow(normed[-100:], 
cmap='coolwarm', aspect='auto') ind=np.argsort(normed[:,0]) b=normed[ind] plt.imshow(b[-100:], cmap='coolwarm', aspect='auto')
notebooks/3.7-acmiyaguchi-rolx-sensemaking.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from bandit_env import bandit_env import numpy as np import random from collections import Counter # + # np.random.seed(0) game = bandit_env([2.5, -3.5, 1.0, 5.0, -2.5], [0.33, 1.0, 0.66, 1.98, 1.65]) print(game.n) print(game.r_mean) print(game.r_stddev) # - steps = 1000 expec = [0 for i in range(game.n)] print(expec) # np.random.seed(0) counter = [0, 0, 0, 0, 0] reward = 0 epsilon = 1/16 for i in range(steps): decider = np.random.rand() if decider <= epsilon: # explore: pick a random arm index = random.randint(0, game.n - 1) prize = game.pull(index) else: # exploit: pick the greedy arm, breaking ties at random d = Counter(expec) occur = d[max(expec)] if occur == 1: index = np.argmax(expec) prize = game.pull(index) else: indexes = [k for k, l in enumerate(expec) if l == max(expec)] index = random.choice(indexes) prize = game.pull(int(index)) counter[index] += 1 expec[index] = expec[index] + (prize - expec[index])/(counter[index]) reward += prize reward/1000 expec counter
Assignment 1/.ipynb_checkpoints/Untitled-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Unicode # The best way to start understanding what they are is to cover one of the simplest character encodings, ASCII. # So what is a more formal definition of a character encoding? # At a very high level, it’s a way of translating characters (such as letters, punctuation, symbols, whitespace, and control characters) to integers and ultimately to bits. Each character can be encoded to a unique sequence of bits. The entire ASCII table contains 128 characters. # * 0 through 31 Control/non-printable characters # * 32 through 64 Punctuation, symbols, numbers, and space # * 65 through 90 Uppercase English alphabet letters # * 91 through 96 Additional graphemes, such as [ and \ # * 97 through 122 Lowercase English alphabet letters # * 123 through 126 Additional graphemes, such as { and | # * 127 Control/non-printable character (DEL) whitespace = ' \t\n\r\v\f' ascii_lowercase = 'abcdefghijklmnopqrstuvwxyz' ascii_uppercase = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' ascii_letters = ascii_lowercase + ascii_uppercase digits = '0123456789' hexdigits = digits + 'abcdef' + 'ABCDEF' octdigits = '01234567' punctuation = r"""!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~""" printable = digits + ascii_letters + punctuation + whitespace print(printable) import string s = "What's wrong with ASCII?!?!?" s.rstrip(string.punctuation) # Here’s a handy way to represent ASCII strings as sequences of bits in Python. 
Each character from the ASCII string gets pseudo-encoded into 8 bits, with spaces in between the 8-bit sequences that each represent a single character: def make_bitseq(s: str) -> str: if not s.isascii(): raise ValueError("ASCII only allowed") return " ".join(f"{ord(i):08b}" for i in s) make_bitseq("bits") # The f-string f"{ord(i):08b}" uses Python's [Format Specification Mini-Language](https://docs.python.org/3/library/string.html#formatspec) # Using the Python ord() function gives you the base-10 code point for a single str character. # The right hand side of the colon is the format specifier. 08 means width 8, 0 padded, and the b functions as a sign to output the resulting number in base 2 (binary). i = 'X' print(f"in Hex : {ord('X'):02x}") make_bitseq("$25.43") int('11', base=2) # Binary to int int('11', base=8) # Octal to int int('11', base=16) # Hex to int # Python accepts literal forms of each of the 3 alternative numbering systems above # 0b11 # Binary literal 0o11 # Octal literal 0x11 # Hex literal # ## Unicode # Unicode fundamentally serves the same purpose as ASCII, but it just encompasses a way, way, way bigger set of code points # Think of Unicode as a massive version of the ASCII table, one that has 1,114,112 possible code points (really 1,111,998 characters). That's 0 through 1,114,111, or 0 through 17 * (2^16) - 1, or 0x10ffff hexadecimal. In fact, ASCII is a perfect subset of Unicode. The first 128 characters in the Unicode table correspond precisely to the ASCII characters that you'd reasonably expect them to. # Unicode itself is not an encoding. Rather, Unicode is implemented by different character encodings. # There is one thing that Unicode doesn't tell you: it doesn't tell you how to get actual bits from text, just code points. It doesn't tell you enough about how to convert text to binary data and vice versa. # Unicode is an abstract encoding standard, not an encoding. That's where UTF-8 and other encoding schemes come into play.
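A quick way to see the split between code points and bytes (a small sketch of my own, not from the original notebook): the same one-character string keeps a fixed code point, yet maps to different byte sequences under different encoding schemes.

```python
s = "é"                      # a single code point, U+00E9
codepoint = ord(s)           # 0xE9 == 233, independent of any encoding

# The bytes produced depend entirely on the chosen encoding scheme.
# (The "-le" suffix pins little-endian byte order so no BOM is emitted.)
utf8_bytes = s.encode("utf-8")       # 2 bytes
utf16_bytes = s.encode("utf-16-le")  # 2 bytes, but different values
utf32_bytes = s.encode("utf-32-le")  # 4 bytes
```

One abstract code point, three concrete byte representations: that is the work the encoding scheme does on top of the Unicode standard.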
# # The result of str.encode() is a bytes object; the default encoding in str.encode() and bytes.decode() is UTF-8. # %timeit "😘".encode("utf-8") b'\xf0\x9f\x98\x98'.decode("utf-8") "é".encode("utf-8") # the sequence represents two bytes, 0xc3 and 0xa9, in hex # Anything from the Unicode character set is usable in identifiers. Python's re module defaults to the re.UNICODE flag rather than re.ASCII. This means, for instance, that r"\w" matches Unicode word characters, not just ASCII letters. é = 1 import locale locale.getpreferredencoding() # A crucial feature is that UTF-8 is a variable-length encoding. ibrow = "🤨" len(ibrow) ibrow.encode("utf-8") len(ibrow.encode("utf-8")) # Calling list() on a bytes object gives you the decimal value for each byte list(b'\xf0\x9f\xa4\xa8') # Wikipedia's [UTF-8 article](https://en.wikipedia.org/wiki/UTF-8) has some more technical detail, and there is always the official [Unicode Standard](http://www.unicode.org/versions/latest/)
# bytes(), str(), and int() are class constructors for their respective types: bytes, str, and int. They each offer ways of coercing the input into the desired type. For instance, as you saw earlier, while int(11.0) is probably more common, you might also see int('11', base=16).
#
# ord() and chr() are inverses of each other: the Python ord() function converts a str character to its base-10 code point, while chr() does the opposite.

# Example ascii():
ascii("abcdefg")
ascii("jalepeño")
ascii(0xc0ffee)  # Hex literal (int)

# Example bin():
bin(400)
bin(0xc0ffee)
[bin(i) for i in [1, 2, 4, 8, 16]]

# Example bytes(), representing raw binary data:
bytes((104, 101, 108, 108, 111, 32, 119, 111, 114, 108, 100))
bytes(range(97, 123))
bytes("real 🐍", "utf-8")

# chr() converts an integer code point to a single Unicode character:
chr(97)
chr(0b01100100)

# hex() gives the hexadecimal representation of an integer, with the prefix "0x":
[hex(i) for i in [1, 2, 4, 8, 16]]

# int() coerces the input to int, optionally interpreting the input in a given base:
int('11', base=2)
int.from_bytes(b"Python", "big")

# The Python ord() function converts a single Unicode character to its integer code point:
ord("a")
[ord(i) for i in "hello world"]  # [104, 101, 108, 108, 111, 32, 119, 111, 114, 108, 100]

# str() coerces the input to str, representing text:
str(b"\xc2\xbc cup of flour", "utf-8")

# ## Python String Literals

# There are up to six ways that Python will allow you to type the same Unicode character:

"a" == "\x61" == "\N{LATIN SMALL LETTER A}" == "\u0061" == "\U00000061"

# A short function to convert strings that look like "U+10346" into something Python can work with:
def make_uchr(code: str):
    return chr(int(code.lstrip("U+").zfill(8), 16))

make_uchr("U+10346")
make_uchr("U+0026")

alef_hamza = chr(1571)
alef_hamza
alef_hamza.encode("unicode-escape")

# ## Careful of wrong assumptions

# ### Other Encodings

# One example is Latin-1 (also called ISO-8859-1), which is technically the default for Hypertext Transfer Protocol (HTTP) data.

data = b"\xbc cup of flour"
data.decode("utf-8")  # oops! 😳 it was not UTF-8!
# %timeit data.decode("latin-1")  # 😀

# ## unicodedata

import unicodedata
unicodedata.name("€")

# Inspired by https://realpython.com/python-encodings-guide/

import struct
import codecs

asd = ['e2', '07']
text = ''.join(asd)
text
encoded = codecs.decode(text, 'hex')
struct.unpack("<H", encoded)
struct.unpack(">H", encoded)

text = '0001'
encoded = codecs.decode(text, 'hex')
encoded
struct.unpack(">H", encoded)  # big endian
struct.unpack("<H", encoded)  # little endian
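# When you cannot be sure of the source encoding, bytes.decode() also accepts an errors argument; errors="replace" substitutes U+FFFD (the replacement character) instead of raising. A sketch using the same Latin-1 bytes as above:

```python
data = b"\xbc cup of flour"  # Latin-1 bytes; 0xbc is not valid UTF-8
try:
    data.decode("utf-8")  # strict decoding raises on the bad byte
except UnicodeDecodeError as exc:
    print(exc.reason)
print(data.decode("utf-8", errors="replace"))  # '\ufffd cup of flour'
print(data.decode("latin-1"))                  # '¼ cup of flour'
```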
files/notebooks/unicode.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import logging
import importlib
importlib.reload(logging)  # see https://stackoverflow.com/a/21475297/1469195
log = logging.getLogger()
log.setLevel('INFO')
import sys
logging.basicConfig(format='%(asctime)s %(levelname)s : %(message)s',
                    level=logging.INFO, stream=sys.stdout)

# +
# %%capture
import os
import site
os.sys.path.insert(0, '/home/schirrmr/code/reversible/')
os.sys.path.insert(0, '/home/schirrmr/braindecode/code/braindecode/')
os.sys.path.insert(0, '/home/schirrmr/code/explaining/reversible//')
# %load_ext autoreload
# %autoreload 2
import numpy as np
import logging
log = logging.getLogger()
log.setLevel('INFO')
import sys
logging.basicConfig(format='%(asctime)s %(levelname)s : %(message)s',
                    level=logging.INFO, stream=sys.stdout)
import matplotlib
from matplotlib import pyplot as plt
from matplotlib import cm
# %matplotlib inline
# %config InlineBackend.figure_format = 'png'
matplotlib.rcParams['figure.figsize'] = (12.0, 1.0)
matplotlib.rcParams['font.size'] = 14
import seaborn
seaborn.set_style('darkgrid')
from reversible2.sliced import sliced_from_samples
from numpy.random import RandomState
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import copy
import math
import itertools
import torch as th
from braindecode.torch_ext.util import np_to_var, var_to_np
from reversible2.splitter import SubsampleSplitter
from reversible2.view_as import ViewAs
from reversible2.affine import AdditiveBlock
from reversible2.plot import display_text, display_close
from reversible2.high_gamma import load_file, create_inputs
from reversible2.high_gamma import load_train_test
th.backends.cudnn.benchmark = True
from reversible2.models import deep_invertible

# +
rng = RandomState(201904113)  # 2 quite good, 13 very good
X = rng.randn(5, 2) * np.array([1, 0])[None] + np.array([-1, 0])[None]
X = np.sort(X, axis=0)
X_test = rng.randn(500, 2) * np.array([1, 0])[None] + np.array([-1, 0])[None]
X_test = np.sort(X_test, axis=0)
train_inputs = np_to_var(X[::2], dtype=np.float32)
val_inputs = np_to_var(X[1::2], dtype=np.float32)
test_inputs = np_to_var(X_test, dtype=np.float32)

plt.figure(figsize=(5, 5))
plt.scatter(var_to_np(train_inputs)[:, 0], var_to_np(train_inputs)[:, 1])
plt.scatter(var_to_np(val_inputs)[:, 0], var_to_np(val_inputs)[:, 1])
plt.scatter([-1], [0], color='black')
plt.xlim(-2.5, 5.5)
plt.ylim(-4, 4)

# +
cuda = False
from reversible2.distribution import TwoClassIndependentDist
from reversible2.blocks import dense_add_block
from reversible2.rfft import RFFT, Interleave
from reversible2.util import set_random_seeds
from torch.nn import ConstantPad2d
import torch as th
from reversible2.splitter import SubsampleSplitter
from matplotlib.patches import Ellipse

set_random_seeds(2019011641, cuda)
model = nn.Sequential(
    dense_add_block(2, 200),
    dense_add_block(2, 200),
    dense_add_block(2, 200),
    dense_add_block(2, 200),
)
dist = TwoClassIndependentDist(2, truncate_to=None)
from reversible2.dist_model import ModelAndDist
model_and_dist = ModelAndDist(model, dist)

optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr': 1e-2},
                       {'params': list(model_and_dist.model.parameters()), 'lr': 1e-3}])

# +
n_epochs = 10001
for i_epoch in range(n_epochs):
    nll = -th.mean(model_and_dist.get_total_log_prob(
        0, train_inputs + (th.rand_like(train_inputs) - 0.5) * 1e-2))
    optim.zero_grad()
    nll.backward()
    optim.step()
    if i_epoch % (n_epochs // 20) == 0:
        tr_out = model_and_dist.model(train_inputs)
        va_out = model_and_dist.model(val_inputs)
        te_out = model_and_dist.model(test_inputs)
        fig = plt.figure(figsize=(5, 5))
        mean, std = model_and_dist.dist.get_mean_std(0)
        plt.plot(var_to_np(te_out)[:, 0], var_to_np(te_out)[:, 1], color=seaborn.color_palette()[1])
plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) plt.title("Train NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}".format( -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), )) display_close(fig) # + cuda = False from reversible2.distribution import TwoClassIndependentDist from reversible2.blocks import dense_add_block from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter from matplotlib.patches import Ellipse from reversible2.gaussian import get_gauss_samples set_random_seeds(2019011641, cuda) model = nn.Sequential( dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), ) dist = TwoClassIndependentDist(2, truncate_to=None) tr_log_stds = (th.zeros_like(train_inputs, requires_grad=True) - 2).detach().clone().requires_grad_(True) from reversible2.dist_model import ModelAndDist model_and_dist = ModelAndDist(model, dist) optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-2}, {'params': list(model_and_dist.model.parameters()), 'lr': 1e-3}]) optim_stds = th.optim.Adam([{'params': tr_log_stds, 'lr':1e-2},]) # + n_epochs = 10001 for i_epoch in range(n_epochs): outs = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) outs = outs + (th.randn_like(outs) * th.exp(tr_log_stds)) nll = -th.mean(model_and_dist.dist.get_total_log_prob(0, outs)) optim.zero_grad() nll.backward() optim.step() if i_epoch % (n_epochs // 20) 
== 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), )) plt.axis('equal') #before not done display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) # - # ## from start with mixture # + cuda = False from reversible2.distribution import TwoClassIndependentDist from reversible2.blocks import dense_add_block from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter from matplotlib.patches import Ellipse from reversible2.gaussian import get_gauss_samples set_random_seeds(2019011641, cuda) model = 
nn.Sequential( dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), ) dist = TwoClassIndependentDist(2, truncate_to=None) tr_log_stds = (th.zeros_like(train_inputs, requires_grad=True) - 2).detach().clone().requires_grad_(True) from reversible2.dist_model import ModelAndDist model_and_dist = ModelAndDist(model, dist) optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-3}, {'params': list(model_and_dist.model.parameters()), 'lr': 1e-4}]) optim_stds = th.optim.Adam([{'params': [tr_log_stds], 'lr':1e-3},]) # + n_epochs = 101 for i_epoch in range(n_epochs): outs = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) outs = outs + (th.randn_like(outs) * th.exp(tr_log_stds)) nll = -th.mean(model_and_dist.dist.get_total_log_prob(0, outs)) optim.zero_grad() nll.backward() optim.step() tr_out = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) va_out = model_and_dist.model(val_inputs + (th.rand_like(val_inputs) -0.5) * 1e-2).detach() demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) nll = -th.mean(th.log(probs + eps)) #optim.zero_grad() optim_stds.zero_grad() nll.backward() #optim.step() optim_stds.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = 
th.mean(probs, dim=1) log_probs = th.log(probs + eps) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) for lprob, out in zip(log_probs, va_out): plt.annotate("{:.1E}".format(-lprob.item()), var_to_np(out)) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) for out, lstd in zip(tr_out, tr_log_stds): ellipse = Ellipse(var_to_np(out), var_to_np(th.exp(lstd)[0]), var_to_np(th.exp(lstd)[1]), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n" "ValidMixNLL {:.1E}\n".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), -th.mean(log_probs).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) # - # # ### only stds # + n_epochs = 1001 for i_epoch in range(n_epochs): tr_out = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) va_out = model_and_dist.model(val_inputs + (th.rand_like(val_inputs) -0.5) * 
1e-2).detach() demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) nll = -th.mean(th.log(probs + eps)) #optim.zero_grad() optim_stds.zero_grad() nll.backward() #optim.step() optim_stds.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) log_probs = th.log(probs + eps) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) for lprob, out in zip(log_probs, va_out): plt.annotate("{:.1E}".format(-lprob.item()), var_to_np(out)) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) for out, lstd in zip(tr_out, tr_log_stds): ellipse = Ellipse(var_to_np(out), var_to_np(th.exp(lstd)[0]), var_to_np(th.exp(lstd)[1]), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n" "ValidMixNLL {:.1E}\n".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, 
train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), -th.mean(log_probs).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) # - tr_log_stds # + optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-3, 'betas': (0,0.99)}, {'params': list(model_and_dist.model.parameters()), 'lr': 1e-3, 'betas': (0,0.99)}]) optim_stds = th.optim.Adam([{'params': [tr_log_stds], 'lr':1e-2, 'betas': (0,0.99)},]) # + n_epochs = 201 for i_epoch in range(n_epochs): outs = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) outs = outs + (th.randn_like(outs) * th.exp(tr_log_stds)) nll = -th.mean(model_and_dist.dist.get_total_log_prob(0, outs)) optim.zero_grad() nll.backward() optim.step() tr_out = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) va_out = model_and_dist.model(val_inputs + (th.rand_like(val_inputs) -0.5) * 1e-2).detach() demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) nll = -th.mean(th.log(probs + eps)) optim.zero_grad() optim_stds.zero_grad() nll.backward() optim.step() optim_stds.step() if i_epoch % (n_epochs // 20) == 0: tr_out = 
model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) for out, lstd in zip(tr_out, tr_log_stds): ellipse = Ellipse(var_to_np(out), var_to_np(th.exp(lstd)[0]), var_to_np(th.exp(lstd)[1]), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Train", "Fake"),) display_close(fig) # - from reversible2.dist_model import ModelAndDist # + fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) 
plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Train", "Fake"),) display_close(fig) # - test_inputs.shape np.array([1,0])[None] + np.array([-1,0])[None] -th.mean(get_gaussian_log_probs(np_to_var([-1,], dtype=np.float32), np_to_var([0,], dtype=np.float32), test_inputs[:,0:1])) tr_log_stds from reversible2.gaussian import get_gaussian_log_probs # ## stop gradient from out for mixture # + cuda = False from reversible2.distribution import TwoClassIndependentDist from reversible2.blocks import dense_add_block from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter from matplotlib.patches import Ellipse from reversible2.gaussian import get_gauss_samples set_random_seeds(2019011641, cuda) model = nn.Sequential( dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), ) dist = TwoClassIndependentDist(2, truncate_to=None) tr_log_stds = (th.zeros_like(train_inputs, requires_grad=True) - 2).detach().clone().requires_grad_(True) from reversible2.dist_model import ModelAndDist model_and_dist = ModelAndDist(model, dist) optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-3}, {'params': list(model_and_dist.model.parameters()), 'lr': 1e-4}]) optim_stds = th.optim.Adam([{'params': [tr_log_stds], 'lr':1e-3},]) # + from reversible2.invert import invert n_epochs = 10001 for i_epoch in range(n_epochs): with th.no_grad(): outs = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) outs = outs + (th.randn_like(outs) * th.exp(tr_log_stds)) ins = invert(model_and_dist.model, outs).detach() outs = model_and_dist.model(ins) nll = -th.mean(model_and_dist.dist.get_total_log_prob(0, outs)) optim.zero_grad() nll.backward() optim.step() tr_out = 
model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) va_out = model_and_dist.model(val_inputs + (th.rand_like(val_inputs) -0.5) * 1e-2).detach() demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) nll = -th.mean(th.log(probs + eps)) #optim.zero_grad() optim_stds.zero_grad() nll.backward() #optim.step() optim_stds.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) log_probs = th.log(probs + eps) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) for lprob, out in zip(log_probs, va_out): plt.annotate("{:.1E}".format(-lprob.item()), var_to_np(out)) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) for out, lstd in zip(tr_out, tr_log_stds): ellipse = Ellipse(var_to_np(out), var_to_np(th.exp(lstd)[0]), var_to_np(th.exp(lstd)[1]), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL 
{:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n" "ValidMixNLL {:.1E}\n".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), -th.mean(log_probs).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) # + cuda = False from reversible2.distribution import TwoClassIndependentDist from reversible2.blocks import dense_add_block from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter from matplotlib.patches import Ellipse from reversible2.gaussian import get_gauss_samples set_random_seeds(2019011641, cuda) model = nn.Sequential( dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), ) dist = TwoClassIndependentDist(2, truncate_to=None) tr_log_stds = (th.zeros_like(train_inputs, requires_grad=True) - 2).detach().clone().requires_grad_(True) from reversible2.dist_model import ModelAndDist model_and_dist = ModelAndDist(model, dist) optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-3}, {'params': list(model_and_dist.model.parameters()), 'lr': 1e-4}]) optim_stds = th.optim.Adam([{'params': [tr_log_stds], 'lr':1e-3},]) # + from reversible2.invert import invert n_epochs = 10001 for i_epoch in range(n_epochs): with th.no_grad(): outs = 
model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) outs = outs + (th.randn_like(outs) * th.exp(tr_log_stds)) ins = invert(model_and_dist.model, outs).detach() outs = model_and_dist.model(ins) nll = -th.mean(model_and_dist.dist.get_total_log_prob(0, outs)) optim.zero_grad() nll.backward() optim.step() tr_out = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) va_out = model_and_dist.model(val_inputs + (th.rand_like(val_inputs) -0.5) * 1e-2).detach() this_log_stds = th.mean(tr_log_stds, dim=1).unsqueeze(-1) demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(this_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - this_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) nll = -th.mean(th.log(probs + eps)) #optim.zero_grad() optim_stds.zero_grad() nll.backward() #optim.step() optim_stds.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) log_probs = th.log(probs + eps) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) for lprob, out in zip(log_probs, va_out): plt.annotate("{:.1E}".format(-lprob.item()), 
var_to_np(out)) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) for out, lstd in zip(tr_out, tr_log_stds): ellipse = Ellipse(var_to_np(out), var_to_np(th.exp(lstd)[0]), var_to_np(th.exp(lstd)[1]), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n" "ValidMixNLL {:.1E}\n".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), -th.mean(log_probs).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) # + optim_dist = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-3},]) # + from reversible2.invert import invert n_epochs = 10001 for i_epoch in range(n_epochs): test_ins = test_inputs outs = model_and_dist.model(test_ins + (th.rand_like(test_ins) -0.5) * 1e-2) nll = -th.mean(model_and_dist.dist.get_total_log_prob(0, outs)) optim_dist.zero_grad() nll.backward() optim_dist.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * 
np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) log_probs = th.log(probs + eps) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) for lprob, out in zip(log_probs, va_out): plt.annotate("{:.1E}".format(-lprob.item()), var_to_np(out)) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) for out, lstd in zip(tr_out, tr_log_stds): ellipse = Ellipse(var_to_np(out), var_to_np(th.exp(lstd)[0]), var_to_np(th.exp(lstd)[1]), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n" "ValidMixNLL {:.1E}\n".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), -th.mean(log_probs).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) # - # ## Train on both train and valid # + cuda = False from reversible2.distribution import TwoClassIndependentDist 
from reversible2.blocks import dense_add_block from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter from matplotlib.patches import Ellipse from reversible2.gaussian import get_gauss_samples set_random_seeds(2019011641, cuda) model = nn.Sequential( dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), ) dist = TwoClassIndependentDist(2, truncate_to=None) tr_log_stds = (th.zeros_like(train_inputs, requires_grad=True) - 2).detach().clone().requires_grad_(True) from reversible2.dist_model import ModelAndDist model_and_dist = ModelAndDist(model, dist) optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-3}, {'params': list(model_and_dist.model.parameters()), 'lr': 1e-4}]) optim_stds = th.optim.Adam([{'params': [tr_log_stds], 'lr':1e-3},]) # + from reversible2.invert import invert n_epochs = 10001 for i_epoch in range(n_epochs): val_and_train_ins = th.cat((train_inputs, val_inputs)) outs = model_and_dist.model(val_and_train_ins + (th.rand_like(val_and_train_ins) -0.5) * 1e-2) nll = -th.mean(model_and_dist.dist.get_total_log_prob(0, outs)) optim.zero_grad() nll.backward() optim.step() optim_stds.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) log_probs = th.log(probs + eps) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], 
color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) for lprob, out in zip(log_probs, va_out): plt.annotate("{:.1E}".format(-lprob.item()), var_to_np(out)) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) for out, lstd in zip(tr_out, tr_log_stds): ellipse = Ellipse(var_to_np(out), var_to_np(th.exp(lstd)[0]), var_to_np(th.exp(lstd)[1]), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n" "ValidMixNLL {:.1E}\n".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), -th.mean(log_probs).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) # - # ### retrain only distribution on test # + optim_dist = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-3},]) # + from reversible2.invert import invert n_epochs = 10001 for i_epoch in range(n_epochs): test_ins = test_inputs outs = model_and_dist.model(test_ins + (th.rand_like(test_ins) -0.5) * 1e-2) nll = -th.mean(model_and_dist.dist.get_total_log_prob(0, outs)) optim_dist.zero_grad() nll.backward() 
optim_dist.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) log_probs = th.log(probs + eps) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) for lprob, out in zip(log_probs, va_out): plt.annotate("{:.1E}".format(-lprob.item()), var_to_np(out)) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) for out, lstd in zip(tr_out, tr_log_stds): ellipse = Ellipse(var_to_np(out), var_to_np(th.exp(lstd)[0]), var_to_np(th.exp(lstd)[1]), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n" "ValidMixNLL {:.1E}\n".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), -th.mean(log_probs).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], 
color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) # - # ## mixture in input space # + cuda = False from reversible2.distribution import TwoClassIndependentDist from reversible2.blocks import dense_add_block from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter from matplotlib.patches import Ellipse from reversible2.gaussian import get_gauss_samples set_random_seeds(2019011641, cuda) model = nn.Sequential( dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), ) dist = TwoClassIndependentDist(2, truncate_to=None) tr_log_stds = (th.zeros_like(train_inputs, requires_grad=True) - 2).detach().clone().requires_grad_(True) from reversible2.dist_model import ModelAndDist model_and_dist = ModelAndDist(model, dist) optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-3}, {'params': list(model_and_dist.model.parameters()), 'lr': 1e-4}]) optim_stds = th.optim.Adam([{'params': [tr_log_stds], 'lr':1e-3},]) # + from reversible2.invert import invert n_epochs = 10001 for i_epoch in range(n_epochs): ins = train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2 ins = ins + (th.randn_like(ins) * th.exp(tr_log_stds)) outs = model_and_dist.model(ins) nll = -th.mean(model_and_dist.dist.get_total_log_prob(0, outs)) optim.zero_grad() nll.backward() optim.step() demeaned = val_inputs.unsqueeze(1) - train_inputs.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) nll 
= -th.mean(th.log(probs + eps)) #optim.zero_grad() optim_stds.zero_grad() nll.backward() #optim.step() optim_stds.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) log_probs = th.log(probs + eps) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) for lprob, out in zip(log_probs, va_out): plt.annotate("{:.1E}".format(-lprob.item()), var_to_np(out)) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n" "ValidMixNLL {:.1E}\n".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), -th.mean(log_probs).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], 
var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) ax = plt.gca() for trin, lstd in zip(train_inputs, tr_log_stds): ellipse = Ellipse(var_to_np(trin), var_to_np(th.exp(lstd)[0] * 2), var_to_np(th.exp(lstd)[1] * 2), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) # + #a) model of likelihood for current distribution # - # ### full covariance matrix for mixture # + cuda = False from reversible2.distribution import TwoClassIndependentDist from reversible2.blocks import dense_add_block from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter from matplotlib.patches import Ellipse from reversible2.gaussian import get_gauss_samples set_random_seeds(2019011641, cuda) model = nn.Sequential( dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), ) dist = TwoClassIndependentDist(2, truncate_to=None) tr_log_stds = (th.zeros_like(train_inputs, requires_grad=True) - 2).detach().clone().requires_grad_(True) tr_logit_corrs = (th.zeros_like(train_inputs[:,0], requires_grad=True)).detach().clone().requires_grad_(True) from reversible2.dist_model import ModelAndDist model_and_dist = ModelAndDist(model, dist) optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-3}, {'params': list(model_and_dist.model.parameters()), 'lr': 1e-4}]) optim_stds = th.optim.Adam([{'params': [tr_log_stds, tr_logit_corrs], 'lr':1e-3},]) # + from reversible2.invert import invert n_epochs = 10001 for i_epoch in range(n_epochs): with th.no_grad(): outs = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) stds = th.exp(tr_log_stds) corrs = th.sigmoid(tr_logit_corrs) covs = th.prod(stds, dim=1) * corrs cov_matrices = th.stack([th.diag(s ** 2) + c.repeat(2,2) -
th.diag(c.repeat(2)) for s, c in zip(stds, covs)]) mix_dists = [th.distributions.MultivariateNormal(loc=m, covariance_matrix=c_mat) for m,c_mat in zip(outs, cov_matrices)] samples = th.cat([dist.sample((1,)) for dist in mix_dists]) ins = invert(model_and_dist.model, samples).detach() outs = model_and_dist.model(ins) nll = -th.mean(model_and_dist.dist.get_total_log_prob(0, outs)) optim.zero_grad() nll.backward() optim.step() tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs).detach() stds = th.exp(tr_log_stds) corrs = th.sigmoid(tr_logit_corrs) covs = th.prod(stds, dim=1) * corrs cov_matrices = th.stack([th.diag(s ** 2) + c.repeat(2,2) - th.diag(c.repeat(2)) for s, c in zip(stds, covs)]) mix_dists = [th.distributions.MultivariateNormal(loc=m, covariance_matrix=c_mat) for m,c_mat in zip(tr_out, cov_matrices)] probs = th.mean(th.stack([th.exp(dist.log_prob(va_out)) for dist in mix_dists]), dim=0) nll = -th.mean(th.log(probs + eps)) #optim.zero_grad() optim_stds.zero_grad() nll.backward() #optim.step() optim_stds.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) stds = th.exp(tr_log_stds) corrs = th.sigmoid(tr_logit_corrs) covs = th.prod(stds, dim=1) * corrs cov_matrices = th.stack([th.diag(s ** 2) + c.repeat(2,2) - th.diag(c.repeat(2)) for s, c in zip(stds, covs)]) mix_dists = [th.distributions.MultivariateNormal(loc=m, covariance_matrix=c_mat) for m,c_mat in zip(tr_out, cov_matrices)] probs = th.mean(th.stack([th.exp(dist.log_prob(va_out)) for dist in mix_dists]), dim=0) log_probs = th.log(probs + eps) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], 
var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) for lprob, out in zip(log_probs, va_out): plt.annotate("{:.1E}".format(-lprob.item()), var_to_np(out)) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) for out, lstd in zip(tr_out, tr_log_stds): ellipse = Ellipse(var_to_np(out), var_to_np(th.exp(lstd)[0]), var_to_np(th.exp(lstd)[1]), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n" "ValidMixNLL {:.1E}\n".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), -th.mean(log_probs).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) # - th.prod(th.sqrt(th.diag(cov_matrices[0]))) cov_matrices tr_log_stds # + from reversible2.invert import invert n_epochs = 10001 for i_epoch in range(n_epochs): ins = train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2 ins = ins + (th.randn_like(ins) * th.exp(tr_log_stds)) outs = model_and_dist.model(ins) nll = -th.mean(model_and_dist.dist.get_total_log_prob(0, outs)) optim.zero_grad() nll.backward() optim.step() tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) stds = th.exp(tr_log_stds) corrs = 
th.sigmoid(tr_logit_corrs) covs = th.prod(stds, dim=1) * corrs cov_matrices = th.stack([th.diag(s ** 2) + c.repeat(2,2) - th.diag(c.repeat(2)) for s, c in zip(stds, covs)]) mix_dists = [th.distributions.MultivariateNormal(loc=m, covariance_matrix=c_mat) for m,c_mat in zip(tr_out, cov_matrices)] probs = th.mean(th.stack([th.exp(dist.log_prob(va_out)) for dist in mix_dists]), dim=0) nll = -th.mean(th.log(probs + eps)) #optim.zero_grad() optim_stds.zero_grad() nll.backward() #optim.step() optim_stds.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) log_probs = th.log(probs + eps) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) for lprob, out in zip(log_probs, va_out): plt.annotate("{:.1E}".format(-lprob.item()), var_to_np(out)) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n" "ValidMixNLL {:.1E}\n".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), 
-th.mean(log_probs).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) ax = plt.gca() for trin, lstd in zip(train_inputs, tr_log_stds): ellipse = Ellipse(var_to_np(trin), var_to_np(th.exp(lstd)[0] * 2), var_to_np(th.exp(lstd)[1] * 2), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) # - # ## weight norm idea # + cuda = False from reversible2.distribution import TwoClassIndependentDist from reversible2.blocks import dense_add_block from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter from matplotlib.patches import Ellipse set_random_seeds(2019011641, cuda) model = nn.Sequential( dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), ) dist = TwoClassIndependentDist(2, truncate_to=None) from reversible2.dist_model import set_dist_to_empirical from reversible2.dist_model import ModelAndDist model_and_dist = ModelAndDist(model, dist) optim_model = th.optim.Adam([ {'params': list(model_and_dist.model.parameters()), 'lr': 1e-4}]) optim_dist = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-3}, ]) from reversible2.weight_norm import weight_norm all_w_norms = [] for m in model.modules(): if hasattr(m, 'weight'): w_norm = th.zeros(1, requires_grad=True) weight_norm(m, fixed_log_norm=w_norm) all_w_norms.append(w_norm) optim_wnorm = th.optim.Adam([{'params': all_w_norms, 'lr':1e-3},]) for w 
in all_w_norms: w.data[:] = -1 set_dist_to_empirical(model_and_dist.model,model_and_dist.dist, [train_inputs]) # - # + n_epochs = 10001 for i_epoch in range(n_epochs): nll = -th.mean(model_and_dist.get_total_log_prob(0, train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2)) optim_model.zero_grad() optim_dist.zero_grad() nll.backward() optim_model.step() optim_dist.step() nll = -th.mean(model_and_dist.get_total_log_prob(0, val_inputs + (th.rand_like(val_inputs) -0.5) * 1e-2)) optim_wnorm.zero_grad() optim_dist.zero_grad() nll.backward() optim_wnorm.step() optim_dist.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0] * 2), var_to_np(std[1] * 2), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) plt.axis('equal') plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), )) display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) 
plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) # + te_in = test_inputs.unsqueeze(1) noise = th.rand(len(te_in), 200, 2) - 0.5 te_in = te_in + noise * 0.1 te_in = te_in.view(-1, te_in.shape[-1]) plt.scatter(var_to_np(te_in)[:,0], var_to_np(te_in)[:,1]) te_out = model_and_dist.model(te_in) plt.figure() plt.scatter(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1]) #plt.axis('equal') # - te_in
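The validation mixture NLL above is computed several times as `th.log(th.mean(th.exp(log_probs), dim=1) + eps)`. Mathematically, `log(mean(exp(x))) == logsumexp(x) - log(n)`, and computing it that way avoids the underflow that the `eps` term papers over. A minimal sketch of the same per-point quantity, written in NumPy rather than torch so it stands alone:

```python
import numpy as np

def mixture_log_probs(va_out, tr_out, tr_log_stds):
    """Log-density of each row of va_out under a uniform mixture of
    axis-aligned Gaussians centred on the rows of tr_out."""
    demeaned = va_out[:, None, :] - tr_out[None, :, :]        # (n_val, n_train, d)
    rescaled = demeaned / np.exp(tr_log_stds)[None, :, :]
    log_p = -(rescaled ** 2) / 2 - 0.5 * np.log(2 * np.pi) - tr_log_stds[None, :, :]
    log_p = log_p.sum(axis=-1)                                # sum over dimensions
    # log(mean(exp(x))) == logsumexp(x) - log(n), computed stably
    # by subtracting the per-row maximum before exponentiating:
    m = log_p.max(axis=1, keepdims=True)
    return (m[:, 0] + np.log(np.exp(log_p - m).sum(axis=1))
            - np.log(tr_out.shape[0]))
```

In torch the same thing is `th.logsumexp(log_probs, dim=1) - np.log(len(tr_out))`, which would replace the `exp`/`mean`/`log` round trip in the cells above.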
notebooks/toy-1d-2d-examples/1d5SampleGauss.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Converting Exact GP Models to TorchScript
#
# In this notebook, we'll demonstrate converting an Exact GP model to TorchScript. In general, this is the same as for standard PyTorch models where we'll use `torch.jit.trace`, but there are three peculiarities to keep in mind for GPyTorch:
#
# 1. The first time you make predictions with a GPyTorch model (exact or approximate), we cache certain computations. These computations can't be traced, but the results of them can be. Therefore, we'll need to pass data through the untraced model once, and then trace the model.
# 1. For exact GPs, we can't trace models unless `gpytorch.settings.fast_pred_var` is used. This is a technical issue that may not be possible to overcome due to limitations on what can be traced in PyTorch; however, if you really need to trace a GP but can't use the above setting, open an issue so we have visibility on there being demand for this.
# 1. You can't trace models that return Distribution objects. Therefore, we'll write a simple wrapper that unpacks the MultivariateNormal that our GPs return into just a mean and variance tensor.

# ## Define and train an exact GP
#
# In the next cell, we define some data, define a GP model and train it. Nothing new here -- pretty much just move on to the next cell after this one.

# +
import math
import torch
import gpytorch
from matplotlib import pyplot as plt

# %matplotlib inline
# %load_ext autoreload
# %autoreload 2

# Training data is 100 points in [0,1] inclusive regularly spaced
train_x = torch.linspace(0, 1, 100)
# True function is sin(2*pi*x) with Gaussian noise
train_y = torch.sin(train_x * (2 * math.pi)) + torch.randn(train_x.size()) * 0.2

# We will use the simplest form of GP model, exact inference
class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)

# initialize likelihood and model
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)

# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
training_iter = 2 if smoke_test else 50

# Find optimal model hyperparameters
model.train()
likelihood.train()

# Use the adam optimizer
optimizer = torch.optim.Adam([
    {'params': model.parameters()},  # Includes GaussianLikelihood parameters
], lr=0.1)

# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

for i in range(training_iter):
    # Zero gradients from previous iteration
    optimizer.zero_grad()
    # Output from model
    output = model(train_x)
    # Calc loss and backprop gradients
    loss = -mll(output, train_y)
    loss.backward()
    optimizer.step()
# -

# ## Trace the Model
#
# In the next cell, we trace our GP model. To overcome the fact that we can't trace Modules that return Distributions, we write a wrapper Module that unpacks the GP output into a mean and variance.
#
# Additionally, we'll need to run with the `gpytorch.settings.trace_mode` setting enabled, because PyTorch can't trace custom autograd Functions. Note that this results in some inefficiencies.
#
# Then, before calling `torch.jit.trace` we first call the model on `test_x`. This step is **required**, as it does some precomputation using torch functionality that cannot be traced.

# +
class MeanVarModelWrapper(torch.nn.Module):
    def __init__(self, gp):
        super().__init__()
        self.gp = gp

    def forward(self, x):
        output_dist = self.gp(x)
        return output_dist.mean, output_dist.variance

with torch.no_grad(), gpytorch.settings.fast_pred_var(), gpytorch.settings.trace_mode():
    model.eval()
    test_x = torch.linspace(0, 1, 51)
    pred = model(test_x)  # Do precomputation
    traced_model = torch.jit.trace(MeanVarModelWrapper(model), test_x)
# -

# ## Compare Predictions from TorchScript model and Torch model

# +
with torch.no_grad():
    traced_mean, traced_var = traced_model(test_x)
    print(torch.norm(traced_mean - pred.mean))
    print(torch.norm(traced_var - pred.variance))
# -

traced_model.save('traced_exact_gp.pt')
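The saved TorchScript file can then be consumed by a script that needs only `torch`, not GPyTorch. A self-contained sketch of that save/load round trip — using a hypothetical stand-in module instead of the GP above, so this cell runs on its own:

```python
import torch

class MeanVarLike(torch.nn.Module):
    # Stand-in for MeanVarModelWrapper: any module returning a
    # (mean, variance) pair of tensors traces the same way.
    def forward(self, x):
        return x.mean(dim=-1), x.var(dim=-1)

x = torch.linspace(0, 1, 51).unsqueeze(0)
traced = torch.jit.trace(MeanVarLike(), x)
traced.save('traced_demo.pt')

# A deployment script can now load and call the model without gpytorch:
loaded = torch.jit.load('traced_demo.pt')
with torch.no_grad():
    mean, var = loaded(x)
```

The tuple return is preserved through tracing, so the loaded module unpacks into `mean, var` exactly like the wrapper in the notebook.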
examples/08_Advanced_Usage/TorchScript_Exact_Models.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] papermill={} tags=[] # # Pandas - Merge Dataframes # <a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Python%20Snippets/Pandas/Pandas_Merge_Dataframes.ipynb" target="_parent"><img src="https://img.shields.io/badge/-Open%20in%20Naas-success?labelColor=000000&logo=data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPHN2ZyB3aWR0aD0iMTAyNHB4IiBoZWlnaHQ9IjEwMjRweCIgdmlld0JveD0iMCAwIDEwMjQgMTAyNCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB4bWxuczp4bGluaz0iaHR0cDovL3d3dy53My5vcmcvMTk5OS94bGluayIgdmVyc2lvbj0iMS4xIj4KIDwhLS0gR2VuZXJhdGVkIGJ5IFBpeGVsbWF0b3IgUHJvIDIuMC41IC0tPgogPGRlZnM+CiAgPHRleHQgaWQ9InN0cmluZyIgdHJhbnNmb3JtPSJtYXRyaXgoMS4wIDAuMCAwLjAgMS4wIDIyOC4wIDU0LjUpIiBmb250LWZhbWlseT0iQ29tZm9ydGFhLVJlZ3VsYXIsIENvbWZvcnRhYSIgZm9udC1zaXplPSI4MDAiIHRleHQtZGVjb3JhdGlvbj0ibm9uZSIgZmlsbD0iI2ZmZmZmZiIgeD0iMS4xOTk5OTk5OTk5OTk5ODg2IiB5PSI3MDUuMCI+bjwvdGV4dD4KIDwvZGVmcz4KIDx1c2UgaWQ9Im4iIHhsaW5rOmhyZWY9IiNzdHJpbmciLz4KPC9zdmc+Cg=="/></a> # + [markdown] papermill={} tags=[] # This notebook will help you understand how to use the pandas merge function. It explains how to merge two datasets together and consolidate multiple dataset into one. 
# + [markdown] papermill={} tags=[] # **Author:** [<NAME>](https://www.linkedin.com/in/oludolapo-oketunji/) # + [markdown] papermill={} tags=[] # **Tags:** #pandas #python #merging #merge #dataframes #consolidate # + [markdown] papermill={} tags=[] # ## Input # + [markdown] papermill={} tags=[] # ### Import Library # + papermill={} tags=[] import pandas as pd import numpy as np # + [markdown] papermill={} tags=[] # ### Create dataframes to be merged # + [markdown] papermill={} tags=[] # #### Dataframe 1 # + papermill={} tags=[] # Creating values to be used as datasets dict1 = { "student_id": [1,2,3,4,5,6,7,8,9,10], "student_name": ["Peter","Dolly","Maggie","David","Isabelle","Harry","Akin","Abbey","Victoria","Sam"], "student_course": np.random.choice(["Biology","Physics","Chemistry"], size=10) } # + papermill={} tags=[] # Create dataframe df_1 = pd.DataFrame(dict1) df_1 # + [markdown] papermill={} tags=[] # #### Dataframe 2 # + papermill={} tags=[] # Creating values to be used as datasets dict2 = { "student_id": np.random.choice([1,2,3,4,5,6,7,8,9,10], size=100), "student_grade": np.random.choice(["A","B","C","D","E","F"], size=100), "professors": np.random.choice(["<NAME>","<NAME>","<NAME>","<NAME>"], size=100), } # + papermill={} tags=[] # Create dataframe df_2 = pd.DataFrame(dict2) # OR Data2=pd.read_csv(filepath) df_2 # + [markdown] papermill={} tags=[] # ## Model # pd.merge: acts like an SQL inner join and joins based on similar columns or index unless specified to join differently<br /> # + [markdown] papermill={} tags=[] # ### Merging dataframes with same values with same column names # Using pd.merge(left, right) acts like sql inner join and only joins on the common column they have.<br> # It tries finding everything from the right and append to left 'student_id' is common to both so it has been merged into one and included all the other df_2 columns to df_1 table.<br> # + papermill={} tags=[] df = pd.merge(df_1, df_2) # + [markdown] papermill={} tags=[] # 
## Output # + [markdown] papermill={} tags=[] # ### Display result # + papermill={} tags=[] df # + [markdown] papermill={} tags=[] # ## Other options # + [markdown] papermill={} tags=[] # ### Specifiying the comon column using parameters "on" # + papermill={} tags=[] df = pd.merge(df_1, df_2, on="student_id") df # + [markdown] papermill={} tags=[] # ### Specifying what kind of Joins you want since merging does inner joins by default # + [markdown] papermill={} tags=[] # - "inner" > Inner Join: INCLUDING ROWS OF FIRST AND SECOND ONLY IF THE VALUE IS THE SAME IN BOTH DATAFRAMES<br /> # - "outer" > Outer Join: IT JOINS ALL THE ROWS OF FIRST AND SECOND DATAFRAMES TOGETHER AND CREATE NaN VALUE IF A ROW DOESN'T HAVE A VALUE AFTER JOINING<br /> # - "left" > Left Join: INCLUDES ALL THE ROWS IN THE FIRST DATAFRAME AND ADDS THE COLUMNS OF SECOND DATAFRAME BUT IT WON'T INCLUDE THE ROWS OF THE SECOND DATAFRAME IF IT'S NOT THE SAME WITH THE FIRST<br /> # - "right" > Right Join: INCLUDES ALL THE ROWS OF SECOND DATAFRAME AND THE COLUMNS OF THE FIRST DATAFRAME BUT WON'T INCLUDE THE ROWS OF THE FIRST DATAFRAME IF IT'S NOT SIMILAR TO THE SECOND DATAFRAME # + papermill={} tags=[] df = pd.merge(df_1, df_2, on="student_id", how='left') df # + [markdown] papermill={} tags=[] # ### Merging dataframes with same values but different column names # We add two more parameters :<br> # - Left_on means merge using this column name<br> # - Right_on means merge using this column name<br> # i.e merge both id and student_id together<br> # since they don't have same name, they will create different columns on the new table # + papermill={} tags=[] df_1 = df_1.rename(columns={"student_id": "id"}) # Renamed student_id to id so as to give this example df_1 # + papermill={} tags=[] df = pd.merge(df_1, df_2, left_on="id", right_on="student_id") df # + [markdown] papermill={} tags=[] # ### Merging with the index of the first dataframe # + papermill={} tags=[] df_1.set_index("id") # this will make id the 
new index for df_1 (note: set_index returns a new DataFrame, so assign the result, df_1 = df_1.set_index("id"), for the change to persist) # + papermill={} tags=[] df = pd.merge(df_1, df_2, left_index=True, right_on="student_id") # the new index will come from the index of df_2 where the rows joined df # + [markdown] papermill={} tags=[] # ### Merging both tables on their indexes, i.e. two indexes # + papermill={} tags=[] df_2 = df_2.set_index("student_id") # make student_id the index of df_2, assigned back so it persists # + papermill={} tags=[] df = pd.merge(df_1, df_2, left_index=True, right_index=True) # the new index will come from the left index, unlike when joining on only one index df
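# The join behaviours described above are easy to verify on a pair of tiny throwaway frames. The sketch below uses made-up data, not this notebook's dataframes; `indicator=True` is an extra `pd.merge` option that labels where each merged row came from.

```python
import pandas as pd

# Toy frames for illustration: student 1 exists only on the left,
# student 4 only on the right, students 2 and 3 in both.
left = pd.DataFrame({"student_id": [1, 2, 3],
                     "course": ["Biology", "Physics", "Chemistry"]})
right = pd.DataFrame({"student_id": [2, 3, 4],
                      "grade": ["A", "B", "C"]})

# An outer join keeps every row from both sides; indicator=True adds a
# _merge column with "left_only", "right_only", or "both" per row.
outer = pd.merge(left, right, on="student_id", how="outer", indicator=True)
print(outer)
```

# Student 1 comes out with a NaN grade and `_merge == "left_only"`, while student 4 has a NaN course and `_merge == "right_only"`, which makes the difference between the four `how` options easy to see at a glance.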
Python Snippets/Pandas/Pandas_Merge_Dataframes.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: geocomp # language: python # name: geocomp # --- # + [markdown] colab_type="text" id="SotodcnM5XV7" # # Pandas for time series # # Pandas is very useful for handling time series. # # First we'll need some data. I started at [Diskos](https://portal.diskos.cgg.com/whereoil-data/). It's a bit confusing as there are a lot of places to get data, but I've heard of FactPages so let's start there. # + [markdown] colab_type="text" id="GuHK_ikz5XV-" # ## FactPages... Use pandas to read CSV directly # # Right-click and copy URL for CSV from this link: # # http://factpages.npd.no/factpages/Default.aspx?culture=nb-no&nav1=field&nav2=TableView|Production|Saleable|Monthly # # This file is saved in `../data/field_production_monthly.csv` as well, in case the link breaks. # + colab={} colab_type="code" id="E9jRc80g5XWB" csv = "https://factpages.npd.no/ReportServer_npdpublic?/FactPages/TableView/field_production_monthly&rs:Command=Render&rc:Toolbar=false&rc:Parameters=f&rs:Format=CSV&Top100=false&IpAddress=192.168.127.12&CultureCode=nb-no" # + colab={} colab_type="code" id="TSrVvUKa5XWK" import pandas as pd df = pd.read_csv(csv) # + colab={"base_uri": "https://localhost:8080/", "height": 222} colab_type="code" executionInfo={"elapsed": 504, "status": "ok", "timestamp": 1560165249751, "user": {"displayName": "<NAME>", "photoUrl": "https://lh5.googleusercontent.com/-wc2cQrA5f94/AAAAAAAAAAI/AAAAAAAAhtI/-p0Ej-pd3c8/s64/photo.jpg", "userId": "10999355236974656427"}, "user_tz": -120} id="YCiGwIiD5XWR" outputId="4549f0e7-b247-40e0-eafb-f17a785c85a6" df.head() # + [markdown] tags=["exercise"] # ### Exercise # # - How many rows are there in this dataframe? # - How many fields are represented? (Look at the column called `'prfInformationCarrier'`) # - How many years of data are there? # - What is the total production? 
# - # + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" executionInfo={"elapsed": 838, "status": "ok", "timestamp": 1560165251376, "user": {"displayName": "<NAME>", "photoUrl": "https://lh5.googleusercontent.com/-wc2cQrA5f94/AAAAAAAAAAI/AAAAAAAAhtI/-p0Ej-pd3c8/s64/photo.jpg", "userId": "10999355236974656427"}, "user_tz": -120} id="Cka6DXa95XWd" outputId="8533f839-3ce1-4657-c0b5-ca128c80e2d4" tags=["hide"] len(df) # + tags=["hide"] df.describe() # + tags=["hide"] df.prfInformationCarrier.unique().size # + tags=["hide"] df.prfPrdOeNetMillSm3.sum() # + [markdown] colab_type="text" id="35O6iE1B5XWu" # ## Rename some columns # + [markdown] colab={} colab_type="code" id="J5KVbZnn5XW6" tags=["exercise"] # ### Exercise # # Rename some of the columns of the dataframe as follows: # # 'prfYear': 'year' # 'prfMonth': 'month' # 'prfInformationCarrier': 'field' # 'prfPrdOilNetMillSm3': 'oil' # 'prfPrdOeNetMillSm3': 'OE' # 'prfPrdProducedWaterInFieldMillSm3': 'water' # - # + colab={} colab_type="code" id="J5KVbZnn5XW6" tags=["hide"] columns = {'prfYear': 'year', 'prfMonth': 'month', 'prfInformationCarrier': 'field', 'prfPrdOilNetMillSm3': 'oil', 'prfPrdOeNetMillSm3': 'OE', 'prfPrdProducedWaterInFieldMillSm3': 'water', } df = df.rename(columns=columns) # + [markdown] colab_type="text" id="35O6iE1B5XWu" # ## Add a datetime # + [markdown] colab_type="text" id="f4TYPRo25XWw" # We'd like to give this dataframe a **datetime** index with `pandas` datetimes. To do this easily, we need: # # - EITHER columns named like `'year'`, `'month'`, `'day'` # - OR a column with a datetime string like `2019-06-30`. # # In this dataframe, we have the former, so let's work with that. # + [markdown] tags=["exercise"] # ### Exercise # # - Make a column for the **day**, using a constant like 1. # - Make a datetime column called `'ds'` (for 'date stamp') using `pd.to_datetime()`, passing in a dataframe consisting of the three columns for year, month and the day you just made. 
# - Finally, to turn the new column into an index, give its name (`'ds'`) to `df.set_index()`. # - # + colab={} colab_type="code" id="upI71Sll5XXE" tags=["hide"] df['day'] = 1 # + tags=["hide"] df[['year', 'month', 'day']].head() # + colab={} colab_type="code" id="FvsUBkAH5XXM" tags=["hide"] df['ds'] = pd.to_datetime(df[['year', 'month', 'day']]) df = df.set_index('ds') # - # You should end up with a new dataframe with the `'ds'` column as an index. # + colab={"base_uri": "https://localhost:8080/", "height": 339} colab_type="code" executionInfo={"elapsed": 787, "status": "ok", "timestamp": 1560165253191, "user": {"displayName": "<NAME>", "photoUrl": "https://lh5.googleusercontent.com/-wc2cQrA5f94/AAAAAAAAAAI/AAAAAAAAhtI/-p0Ej-pd3c8/s64/photo.jpg", "userId": "10999355236974656427"}, "user_tz": -120} id="kO2npE3a5XXU" outputId="08178609-0c2f-4c86-eb6a-22299ea182f9" df.head() # - # ## Simplify the dataframe # # Before we carry on, let's simplify the dataframe a bit, reducing it to a few columns: **field**, **water**, **other**, and **oil** (the order is a slightly cheaty way to get the colours I want on the charts, without having to fiddle with them). # + colab={} colab_type="code" id="3ZFVUQiS5XXf" df['other'] = df.OE - df.oil df = df.drop('OE', axis=1) df = df[['field', 'water', 'other', 'oil']] # + colab={"base_uri": "https://localhost:8080/", "height": 233} colab_type="code" executionInfo={"elapsed": 432, "status": "ok", "timestamp": 1560165253838, "user": {"displayName": "<NAME>", "photoUrl": "https://lh5.googleusercontent.com/-wc2cQrA5f94/AAAAAAAAAAI/AAAAAAAAhtI/-p0Ej-pd3c8/s64/photo.jpg", "userId": "10999355236974656427"}, "user_tz": -120} id="I8M3nBuo5XXl" outputId="0e6147dd-99b4-40ba-fdcf-0b03ee330175" df.head() # + [markdown] colab_type="text" id="1pCS6NRw5XXu" # ## Time series with `pandas` # # `pandas` knows all about time series. 
So we can easily make a time series plot: # - df.oil[df.field=='TROLL'].plot() # We can easily stretch it out, or add other lines: # + colab={"base_uri": "https://localhost:8080/", "height": 246} colab_type="code" executionInfo={"elapsed": 1210, "status": "ok", "timestamp": 1560165256686, "user": {"displayName": "<NAME>", "photoUrl": "https://lh5.googleusercontent.com/-wc2cQrA5f94/AAAAAAAAAAI/AAAAAAAAhtI/-p0Ej-pd3c8/s64/photo.jpg", "userId": "10999355236974656427"}, "user_tz": -120} id="0mwROnOf5XXw" outputId="43430308-037f-4661-a9ba-e7d3b60d0c65" df[df.field=='TROLL'].plot(figsize=(15,3)) # - # Let's make a dataframe of only the TROLL field. troll = df[df.field=='TROLL'] # Now we can slice using natural dates: troll['2005':'2010'].plot() troll['Jun 2005':'Jun 2007'].plot() # Try to imagine doing that in Excel! # + [markdown] colab_type="text" id="3evyE8q45XYH" # Let's get the summed annual production for the Troll field: # + colab={"base_uri": "https://localhost:8080/", "height": 417} colab_type="code" executionInfo={"elapsed": 645, "status": "ok", "timestamp": 1560165257216, "user": {"displayName": "<NAME>", "photoUrl": "https://lh5.googleusercontent.com/-wc2cQrA5f94/AAAAAAAAAAI/AAAAAAAAhtI/-p0Ej-pd3c8/s64/photo.jpg", "userId": "10999355236974656427"}, "user_tz": -120} id="8PnzW_445XYJ" outputId="ac8db087-9fd9-4ff1-e34d-c00eb6a63783" troll.loc['2010':'2018'].resample('Y').sum() # + [markdown] colab_type="text" id="gHXO-g9M5XYY" # Throw `.plot()` on the end: # + colab={"base_uri": "https://localhost:8080/", "height": 301} colab_type="code" executionInfo={"elapsed": 992, "status": "ok", "timestamp": 1560165258099, "user": {"displayName": "<NAME>", "photoUrl": "https://lh5.googleusercontent.com/-wc2cQrA5f94/AAAAAAAAAAI/AAAAAAAAhtI/-p0Ej-pd3c8/s64/photo.jpg", "userId": "10999355236974656427"}, "user_tz": -120} id="msYhvyrj5XYZ" outputId="b5197ff1-8c5f-414f-9d64-6b861faba9d6" troll.loc['1995':'2018'].resample('Y').sum().plot() # + [markdown] colab_type="text" 
id="s1sFKATv5XYk" # Or we can get totals for *ALL* fields in the database: # + colab={"base_uri": "https://localhost:8080/", "height": 301} colab_type="code" executionInfo={"elapsed": 1393, "status": "ok", "timestamp": 1560165258927, "user": {"displayName": "<NAME>", "photoUrl": "https://lh5.googleusercontent.com/-wc2cQrA5f94/AAAAAAAAAAI/AAAAAAAAhtI/-p0Ej-pd3c8/s64/photo.jpg", "userId": "10999355236974656427"}, "user_tz": -120} id="hAaWcVpN5XYm" outputId="ee18d70e-6ad6-4768-b7bb-9e17d3e9809b" df.loc['2000':'2018'].resample('Y').sum().plot() # + [markdown] colab_type="text" id="iCIJPqC25XYu" # Let's look at the contribution TROLL made to NCS production since 1993: # + colab={"base_uri": "https://localhost:8080/", "height": 301} colab_type="code" executionInfo={"elapsed": 1391, "status": "ok", "timestamp": 1560165259281, "user": {"displayName": "<NAME>", "photoUrl": "https://lh5.googleusercontent.com/-wc2cQrA5f94/AAAAAAAAAAI/AAAAAAAAhtI/-p0Ej-pd3c8/s64/photo.jpg", "userId": "10999355236974656427"}, "user_tz": -120} id="X8mtzF1b5XYv" outputId="6da2cf27-d087-437c-a55a-712bddb011fe" import matplotlib.pyplot as plt fig, ax = plt.subplots() df.loc['1993':'2018', 'oil'].resample('Y').sum().plot(ax=ax) df.loc[df.field!='TROLL'].loc['1993':'2018', 'oil'].resample('Y').sum().plot(ax=ax) plt.show() # + [markdown] colab_type="text" id="sixxa8Dj5b5j" # Looking for forecasting? Head over to... 
# # ### [Time series forecasting](Time_series_forecasting.ipynb) # - # ## Add operators # # There's also a list of operators here >> https://factpages.npd.no/factpages/Default.aspx?culture=nb-no&nav1=field&nav2=TableView%7cProduction%7cSaleable%7cMonthly # # This file is also in `/data`, but we can read directly from the web with `pandas`, as before: url = "https://factpages.npd.no/ReportServer_npdpublic?/FactPages/TableView/field_operator_hst&rs:Command=Render&rc:Toolbar=false&rc:Parameters=f&rs:Format=CSV&Top100=false&IpAddress=192.168.127.12&CultureCode=nb-no" dg = pd.read_csv(url) dg.head() dg['from'] = pd.to_datetime(dg.fldOperatorFrom) dg['to'] = pd.to_datetime(dg.fldOperatorTo) dg['to'] = dg['to'].fillna(pd.to_datetime('today')) dg.head() # Let's get the operator of each field, for each month, and put it in our production dataframe. # # There is probably a more elegant way to do this with `join` or `merge` or something... but I can't figure it out. import numpy as np # needed for np.nan below def process_row(row): """ Process a row in df to get the operator at that time. Note that 'name' is a special attribute for the current index. """ this_df = dg.loc[dg.fldName==row.field, :] # use this_df's own columns in the mask so the boolean index aligns record = this_df.loc[(this_df['from'] < row.name) & (row.name <= this_df['to']), "cmpLongName"] if len(record): return record.values[0] else: return np.nan df['operator'] = df.apply(process_row, axis=1) plt.figure(figsize=(10,6)) for name, group in df[df.field=='TROLL'].groupby('operator'): plt.plot(group.oil, label=name) plt.legend() # ---- # # (c) Agile Scientific 2019, licensed CC-BY
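# The datetime-index tricks this notebook leans on, partial-string slicing and `.resample()`, can be reproduced on a tiny synthetic series without downloading the NPD data. A minimal sketch with made-up values:

```python
import numpy as np
import pandas as pd

# Three years of made-up monthly production: one unit per month.
idx = pd.date_range("2015-01-01", "2017-12-01", freq="MS")  # month starts
prod = pd.DataFrame({"oil": np.ones(len(idx))}, index=idx)

# Partial-string slicing works directly on the datetime index...
two_years = prod["2015":"2016"]          # 24 monthly rows

# ...and resampling to year-end gives annual totals, as with Troll above.
annual = two_years.resample("Y").sum()   # 12.0 per year
print(annual)
```

# The same pattern scales to the real production dataframe: once `'ds'` is the index, any date-string slice plus a resample rule gives you aggregated production with one line of code.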
master/Demo__Pandas_for_timeseries.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os from art import aggregators from art import scores from art import significance_tests import pandas as pd import ast # + beta_10_N_25 = pd.read_csv('all_scores_modified_greedy_approach_karpathy_test_metrics_gamma_20_beta_0.1.csv')['REFCLIP_SCORE'] beta_10_N_20 = pd.read_csv('all_scores_modified_greedy_approach_karpathy_test_metrics_gamma_25_beta_0.1.csv')['REFCLIP_SCORE'] beta_05_N_30 = pd.read_csv('all_scores_modified_greedy_approach_karpathy_test_metrics_gamma_30_beta_0.05.csv')['REFCLIP_SCORE'] beta_10_N_25 = ast.literal_eval(beta_10_N_25.values[0]) beta_10_N_20 = ast.literal_eval(beta_10_N_20.values[0]) beta_05_N_30 = ast.literal_eval(beta_05_N_30.values[0]) # convert to score beta_10_N_25 = [scores.Score([i]) for i in beta_10_N_25] beta_10_N_20 = [scores.Score([i]) for i in beta_10_N_20] beta_05_N_30 = [scores.Score([i]) for i in beta_05_N_30] # - # **0.05-30 vs 0.1-20** test = significance_tests.ApproximateRandomizationTest( scores.Scores(scores = beta_05_N_30), scores.Scores(scores = beta_10_N_20), aggregators.average, trials=10000) p_value_1 = test.run() p_value_1 # **0.05-30 vs 0.1-25** test = significance_tests.ApproximateRandomizationTest( scores.Scores(scores = beta_05_N_30), scores.Scores(scores = beta_10_N_25), aggregators.average, trials=10000) p_value_2 = test.run() p_value_2 # **0.1-20 vs 0.1-25** test = significance_tests.ApproximateRandomizationTest( scores.Scores(scores = beta_10_N_20), scores.Scores(scores = beta_10_N_25), aggregators.average, trials=10000) p_value_3 = test.run() p_value_3 # **Plots** # + from matplotlib import pyplot as plt from pylab import rcParams rcParams['figure.figsize'] = 10,10 import seaborn as sns import numpy as np from heatmap import heatmap sns.set(color_codes=True, font_scale=1.2) # 
%matplotlib inline # %config InlineBackend.figure_format = 'retina' # %load_ext autoreload # %autoreload 2 # - def plot_matrix(data, title): corr = pd.melt(data.reset_index(), id_vars='index') corr.columns = ['x', 'y', 'value'] heatmap( corr['x'], corr['y'], color=corr['value'], color_range=[-0.9, 1], palette=sns.diverging_palette(12, 120, n=256), size=corr['value'].abs(), size_range=[0.5,1], marker='o', x_order=data.columns, y_order=data.columns[::-1], size_scale=5000 ) plt.title(title, loc = 'right') plt.savefig(title, bbox_inches='tight', dpi = 150) # + array_pvalues = np.array([[0.25, p_value_1, p_value_2], [0.05, 0.25, p_value_3], [0.05, 0.05, 0.25]]) matrix_df = pd.DataFrame(array_pvalues, columns = [r'$\beta$ = 0.05, $N$ = 30', r'$\beta$ = 0.10, $N$ = 20', r'$\beta$ = 0.10, $N$ = 25'], index = [r'$\beta$ = 0.05, $N$ = 30', r'$\beta$ = 0.10, $N$ = 20', r'$\beta$ = 0.10, $N$ = 25']) # - matrix_df with plt.style.context('science'): sns.set_style('white', {"grid.color": "0", "grid.linestyle": ":", "grid.color": 'black', 'axes.facecolor': '#FFFFFF'}) fig = plt.figure(figsize=(7,7), facecolor=(1, 1, 1)) plot_matrix(matrix_df, 'p-values RefClipScore metric') plt.grid(False)
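# For readers without the `art` package, the test run above can be sketched from scratch: on each trial, each pair of scores is swapped between the two systems with probability 0.5, and the p-value is the fraction of trials whose mean difference is at least as extreme as the observed one. This is an illustrative re-implementation of the idea, not the library's code:

```python
import numpy as np

def approx_randomization_test(a, b, trials=10000, seed=0):
    """p-value for the difference in means of two paired score lists."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    observed = abs(a.mean() - b.mean())
    count = 0
    for _ in range(trials):
        swap = rng.random(len(a)) < 0.5    # swap each pair with prob 0.5
        a2 = np.where(swap, b, a)
        b2 = np.where(swap, a, b)
        count += abs(a2.mean() - b2.mean()) >= observed
    return (count + 1) / (trials + 1)      # add-one smoothed p-value

# Identical score lists give p = 1; clearly separated lists give a tiny p.
p_same = approx_randomization_test([0.5, 0.6, 0.7], [0.5, 0.6, 0.7])
p_diff = approx_randomization_test([1.0] * 20, [0.0] * 20)
```

# The `seed` and smoothing choice here are mine; the `art` package may differ in those details, but the interpretation of the resulting p-values is the same as in the cells above.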
fit_beta_gamma/Approximate Randomization Testing/tests.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # CancerNet # # [Kaggle Breast IDC dataset](https://www.kaggle.com/paultimothymooney/breast-histopathology-images) # # After working through lesson1, I decided to test the applicability of the model trained to recognize different dog species on breast cancer images. The dataset consists of 277524 images of size (50,50) with a roughly 71:29 negative:positive split. # # Each image in the dataset comes from 162 whole mount slide images scanned at 40x, and each image gives us the patient ID, x and y co-ordinates of the patch from the original image as well as the target class for that patch # # I'll first try out the model as is from lesson1 and then tune it for this dataset. The idea is to learn what works and what doesn't for medical imaging, with a lot of experimentation in between # ## Load up the required libraries and jupyter setup commands # %matplotlib inline # %reload_ext autoreload # %autoreload 2 from fastai.vision import * from fastai.metrics import * import glob from shutil import copyfile, copy, move # The first step is to download and de-compress the data. Make sure to set up the Kaggle API.
The instructions can be found [here](https://github.com/Kaggle/kaggle-api) # # *kaggle datasets download -d paultimothymooney/breast-histopathology-images* # # After running the above command in the terminal, the raw dataset should be downloaded in the filesystem # # *unzip -q breast-histopathology-images.zip* # # *unzip -q IDC_regular_ps50_idx5.zip -d histopath* # ## Set up the Pathlib variables for easier path manipulation # # [pathlib](https://docs.python.org/3/library/pathlib.html) is an excellent abstraction for representing file paths, allowing for cleaner and more readable code # # _Modify source directory as per your filesystem path_ path = Path('/home/jupyter/tutorials/histopath') train = path/'train' test = path/'test' valid = path/'valid' def get_split(files): """Returns the input as a train/valid/test split Input: files: List of filenames Output: Lists of train, valid and test filenames """ n_files = len(files) n_test = round(test_split * n_files) test = files[:n_test] temp = files[n_test:] n_valid = round(valid_split * n_files) random.shuffle(temp) valid = temp[:n_valid] train = temp[n_valid:] return train, valid, test def shuffle(input_list, flag): """Helper function specifically for train and valid folder shuffle Input: input_list: either one of train or valid split list flag: either one of train/valid path variable defined above Output: Files from input_list are moved into respective folders """ for i in range(len(input_list)): filename = input_list[i].stem + '.png' label = str(input_list[i]).split("_")[4].split(".")[0] if (label == "class0"): os.rename(input_list[i], flag/'0'/filename) else: os.rename(input_list[i], flag/'1'/filename) def shuffle_data(train_list, valid_list, test_list): """ Shuffle the data into train and test folders The data is moved/copied into the proper directory with the proper label on the basis of the filename Input: List of train, valid and test set filenames Output: Files from original dataset are moved into the appropriate train/test
folders """ # Train and Valid split shuffle(train_list, train) shuffle(valid_list, valid) #Test split for i in range(len(test_list)): file = str(test_list[i].stem) + '.png' os.rename(test_list[i], test/file) def get_data(filenames): train_set, valid_set, test_set = get_split(filenames) #Sum of train and test split should add up to the total number of images in our dataset print(len(train_set), len(test_set), len(valid_set)) # %time shuffle_data(train_set, valid_set, test_set) # ## Create list of filenames and shuffle the data into appropriate folders # # First, let's get a list of filenames that we'll then split into train, valid and test sets. # # The next step is to shuffle the data into the respective folders based on the split lists # # #### Organize data into train, valid and test folders # # This shuffling will be done on the split defined above # # **Ensure that this is run only once.** Alternatively, use _shutil.copy_ instead of _os.rename_ if you want to run this code cell more than once # # Also, make sure that the train, valid and test folders alongwith the respective label folders(for train and valid) have been created # + filenames = list(path.rglob('*.png')) #Alternatively #files = get_files(path, recurse=True) np.random.seed(42) np.random.shuffle(filenames) # - test_split = 0.2 valid_split = 0.3 get_data(filenames) # ## Default Training # # Let's look at a model using default fastai hyperparameters for transforms and learning rates and then we'll look at hyperparameter tuning to get a better model tfms = get_transforms() # ### Create a DataBunch # # Rather than using ImageDataBunch.from_folder, let's create out databunch using the DataBlock API which is more flexible. Underneath, the ImageDataBunch class calls the DataBlock API. 
For cases when the data is not structured as the factory methods expect it, DataBlock is the first thing you should try # # _data = ImageDataBunch.from_folder(path, train, valid, ds_tfms = tfms, bs=64, size=50).normalize()_ # # [DataBlock](https://docs.fast.ai/data_block.html) # # [ImageDataBunch.from_folder](https://docs.fast.ai/vision.data.html#ImageDataBunch.from_folder) # # [show_batch](https://docs.fast.ai/basic_data.html#DataBunch.show_batch) data = (ImageItemList.from_folder(path) .split_by_folder() .label_from_folder() .add_test_folder() .transform(tfms, size=50) .databunch(bs=64).normalize()) data data.c, data.classes data.show_batch(3, figsize=(9, 9)) learn = create_cnn(data, models.resnet34, metrics = accuracy) learn.model learn.fit_one_cycle(2) learn.save('stage-1', path) # ## Results # # Use ClassificationInterpretation to visualize the results interpret = ClassificationInterpretation.from_learner(learn) # I'm no expert when it comes to medical imaging or breast cancer for that matter. So I'm just going to stick to a confusion matrix for interpretation rather than plotting the top losses interpret.plot_confusion_matrix() # Let's continue training our model to make it better, iteratively # # Unfreezing the layers basically makes the entire network available for training rather than just the final layers that the fastai library adds upon loading the pre-trained model # # For more info on the 1-Cycle Policy, [this](https://sgugger.github.io/the-1cycle-policy.html) is an excellent resource # # *From here on out, it's all about experimentation*.
There's no defined way to do this; it just needs intuition and practice, which will get better over time learn.unfreeze() learn.fit_one_cycle(1) learn.load('stage-1') learn.lr_find() learn.recorder.plot() learn.unfreeze() learn.fit_one_cycle(2, 1e-6) learn.lr_find() learn.recorder.plot() learn.fit_one_cycle(1, max_lr=slice(1e-6, 1e-5)) learn.save('stage-2', path) learn.load('stage-1') learn.lr_find() learn.recorder.plot() learn.unfreeze() learn.fit_one_cycle(1) learn.fit_one_cycle(1) learn.save('stage-3', path) learn.unfreeze() learn.lr_find() learn.recorder.plot() learn.fit_one_cycle(1) learn.lr_find() learn.recorder.plot() learn.fit_one_cycle(1, 1e-6) learn.lr_find() learn.recorder.plot() learn.fit_one_cycle(1, 1e-5) learn.unfreeze() learn.lr_find() learn.recorder.plot() learn.fit_one_cycle(1) learn.lr_find() learn.recorder.plot() learn.load('stage-3') learn.unfreeze() learn.fit_one_cycle(2) learn.save('stage-4', path) learn.unfreeze() learn.lr_find() learn.recorder.plot() learn.fit_one_cycle(1) learn.load('stage-4') learn.unfreeze() learn.fit_one_cycle(1) # Let's consider this the baseline model that we'll compare our experiments on # # Possible experiments include: # 1. ResNet50 # 2. DenseNet101 # 3. Data Augmentation using get_transforms() # # ## Let's start with Data Augmentation tfms = get_transforms(flip_vert = True, max_rotate = 4., max_zoom = 1.2, max_warp = 0.)
data = (ImageItemList.from_folder(path) .split_by_folder() .label_from_folder() .add_test_folder() .transform(tfms, size=49) .databunch(bs=64).normalize()) learn = create_cnn(data, models.resnet34, metrics = accuracy) learn.fit_one_cycle(2) learn.save('aug-stage1', path) learn.unfreeze() learn.fit_one_cycle(1) learn.save('aug-stage2', path) # + learn.load('aug-stage1') learn.unfreeze() learn.fit_one_cycle(1) # - learn.save('aug-stage3', path) learn.fit_one_cycle(1) learn.fit_one_cycle(2) learn.fit_one_cycle(1) learn.save('aug-stage4', path) learn.fit_one_cycle(2) learn.save('final', path) learn.fit_one_cycle(2) learn.fit_one_cycle(1) # Our model is now at a decent accuracy. As you can see, there's potential to minimize the loss even further but for the purpose of this tutorial, we'll stop here # # Feel free to continue any more experiments and change the transforms, batch size and a number of other possible hyper parameters learn.load('final') learn.export('final.pkl') # The line above will export the minimum state of the model for inference and deployment
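# The folder shuffling earlier in this notebook is tied to the fastai directory layout, but the underlying three-way split is general. Here is a self-contained sketch of the same logic; the function name, fractions and seed are illustrative, not from the notebook:

```python
import random

def three_way_split(files, test_frac=0.2, valid_frac=0.3, seed=42):
    """Split a list of filenames into train/valid/test, mirroring
    get_split() above: test is carved off the front first, then the
    remainder is shuffled and split into valid and train."""
    files = list(files)
    n_test = round(test_frac * len(files))
    test = files[:n_test]
    rest = files[n_test:]
    random.Random(seed).shuffle(rest)      # seeded, so reproducible
    n_valid = round(valid_frac * len(rest))
    return rest[n_valid:], rest[:n_valid], test  # train, valid, test

train, valid, test = three_way_split([f"img_{i}.png" for i in range(100)])
```

# Using a seeded `random.Random` instance instead of the global generator keeps the split reproducible without disturbing any other randomness in the session.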
lesson1/notebooks/CancerNet.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + # Get credentials from IPython.utils import io with io.capture_output() as captured: # %run ../Introduction.ipynb from datetime import datetime from sentenai import Sentenai sentenai = Sentenai(host=host, port=port) # - # # Introduction to Stream Databases # # Stream databases are the building block of data sets in Sentenai. A Stream database can represent something as simple as a single data feed, or as complex as an entire production line on a factory floor. Data streams in a stream database are organized into a tree-like hierarchy of relationships. This hierarchy is purely organizational in nature, but splitting complex flat data sets into a hierarchical organization can have substantial performance benefits. # Here's an example of a flat dataset called "Activity" like you'd typically see in a relational database: # # | Timestamp | Location | Sensor | Temperature | Humidity | # | --- | --- | --- | --- | --- | # | 2020-01-01T00:00:00Z | Boston | 53 | 48 | .83 | # | 2020-01-01T00:00:00Z | Providence | 48 | 43 | .55 | # # # In Sentenai you might instead arrange your dataset: # ``` # - Activity # - Boston # - 53 # - Temperature # - Humidity # - Providence # - 48 # - Temperature # - Humidity # ``` # # This organization implies there are natural filters you'd want to apply: # # `Activity/Boston/53/Temperature` is equivalent to filtering a table on Location and Sensor id. # ## Managing data in a Stream Database # # Stream databases feature transactional data logging. Any streams within the stream graph can be updated in a single transaction, provided the update applies to the same time interval across all updated streams.
# # So if we initialize a new database `test-2`: db = sentenai.init('test-2') # We can update the entire database with each insertion, without requiring any up-front schema declaration or stream instantiation: db.log[datetime(2020, 1, 1): datetime(2020, 1, 1, 1): 'update-1'] = { 'Boston': { '53': { 'Temperature': 48.0, 'Humidity': 0.83, }, }, 'Providence': { '48': { 'Temperature': 43.0, 'Humidity': 0.55, } } } # Now that we've inserted our first update into the database's `log`, we can see for ourselves the structure using the `.graph` property of the database. db.graph.show() # Each node in this tree is a stream of either events or values. A stream of values is just a stream of events with a value attached, so in essence every node in the tree is the same. # ### Inserting bulk data # In addition to the basic pattern of insertion into the log with `[ start : (optional) end : (optional) id ]`, you can also stream large amounts of data over a single connection using Python's `with` keyword and a streaming context manager. To start up the streaming connection, type: # ``` # with db.log as log: # ``` # This creates a new streaming connection, called `log`, for us to use to insert data. This connection can actively manage asynchronous uploading and buffering of data, so you can feel free to iterate over very large files without worrying about them being loaded fully into memory.
Here's a basic example (note the inserts go through the streaming connection `log`, not `db.log`): import random with db.log as log: for i in range(50): log[datetime(2020, 1, 1, 2, i) : : ] = { 'Boston': {'53': { 'Humidity' : random.random(), 'Temperature': random.random() * 100 }}} # ### Retrieving an update by unique id # # Updates can be retrieved by id: # + tags=[] db.log['update-1'] # + [markdown] tags=[] # ### Deleting an update # # If an individual update has been given a unique id, it can be deleted from the log by its id: # - del db.log['update-1'] try: print(db.log['update-1']) except KeyError as k: print(k) # ### Retrieving updates by time # # You can retrieve a set of updates by time slice: `db.log[ start : end : limit ]`. All arguments are optional. To retrieve all updates you can do `db.log[:]`. To get the first 5 updates, do: for x in db.log[ : : 5]: print(x) # Negative limit values reverse the retrieval order. To get the last five updates, you can do: for x in db.log[ : : -5]: print(x) # + [markdown] tags=[] # ## Properties of a Stream Database # # A stream database has several properties that are useful when writing programs. # - # Name print(db.name) # Origin db.origin # number of updates len(db) # ## Accessing Streams in a Stream Database # # So far we've seen how to manage data in a stream database, but we haven't yet seen how to work with that data. # # Stream databases are dict-like objects, so we can access streams in the same way we would access a value in a dict: for key in db: print(db[key]) # The `.keys()` and `.items()` methods are also supported. # For paths that are multiple levels deep you can access them one of two ways: print(db['Boston']['53']) print(db['Boston', '53']) # These ways are entirely equivalent, but one may be preferable over the other depending on the situation. # ### [Next chapter: Streams](Streams.ipynb)
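# To tie this back to the flat "Activity" table at the top of the notebook: the reshaping from flat rows to the nested update dict is plain Python. A minimal sketch, where the record layout mirrors the example table and the helper name is my own:

```python
def nest(rows):
    """Group flat records into the Location -> Sensor -> readings tree
    used for the log updates above."""
    tree = {}
    for row in rows:
        sensors = tree.setdefault(row["Location"], {})
        sensors[str(row["Sensor"])] = {
            "Temperature": row["Temperature"],
            "Humidity": row["Humidity"],
        }
    return tree

flat = [
    {"Location": "Boston", "Sensor": 53, "Temperature": 48, "Humidity": 0.83},
    {"Location": "Providence", "Sensor": 48, "Temperature": 43, "Humidity": 0.55},
]
update = nest(flat)  # same shape as the db.log[...] payload above
```

# The resulting dict has the `Location -> Sensor -> value-streams` shape that `db.log` expects, so a flat CSV export can be replayed into a stream database one timestamped update at a time.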
tutorial/Databases.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="c8Cx-rUMVX25" # ##### Copyright 2019 The TensorFlow Authors. # + cellView="form" id="I9sUhVL_VZNO" #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # + [markdown] id="6Y8E0lw5eYWm" # # Post-training float16 quantization # + [markdown] id="CGuqeuPSVNo-" # <table class="tfo-notebook-buttons" align="left"> # <td> # <a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_float16_quant"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> # </td> # <td> # <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_float16_quant.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> # </td> # <td> # <a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_float16_quant.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> # </td> # <td> # <a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/lite/g3doc/performance/post_training_float16_quant.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> # </td> 
# </table> # + [markdown] id="BTC1rDAuei_1" # ## Overview # # [TensorFlow Lite](https://www.tensorflow.org/lite/) now supports # converting weights to 16-bit floating point values during model conversion from TensorFlow to TensorFlow Lite's flat buffer format. This results in a 2x reduction in model size. Some hardware, like GPUs, can compute natively in this reduced precision arithmetic, realizing a speedup over traditional floating point execution. The TensorFlow Lite GPU delegate can be configured to run in this way. However, a model converted to float16 weights can still run on the CPU without additional modification: the float16 weights are upsampled to float32 prior to the first inference. This permits a significant reduction in model size in exchange for a minimal impact on latency and accuracy. # # In this tutorial, you train an MNIST model from scratch, check its accuracy in TensorFlow, and then convert the model into a TensorFlow Lite flatbuffer # with float16 quantization. Finally, check the accuracy of the converted model and compare it to the original float32 model. # + [markdown] id="2XsEP17Zelz9" # ## Build an MNIST model # + [markdown] id="dDqqUIZjZjac" # ### Setup # + id="gyqAw1M9lyab" import logging logging.getLogger("tensorflow").setLevel(logging.DEBUG) import tensorflow as tf from tensorflow import keras import numpy as np import pathlib # + id="c6nb7OPlXs_3" tf.float16 # + [markdown] id="eQ6Q0qqKZogR" # ### Train and export the model # + id="hWSAjQWagIHl" # Load MNIST dataset mnist = keras.datasets.mnist (train_images, train_labels), (test_images, test_labels) = mnist.load_data() # Normalize the input image so that each pixel value is between 0 and 1.
train_images = train_images / 255.0 test_images = test_images / 255.0 # Define the model architecture model = keras.Sequential([ keras.layers.InputLayer(input_shape=(28, 28)), keras.layers.Reshape(target_shape=(28, 28, 1)), keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu), keras.layers.MaxPooling2D(pool_size=(2, 2)), keras.layers.Flatten(), keras.layers.Dense(10) ]) # Train the digit classification model model.compile(optimizer='adam', loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) model.fit( train_images, train_labels, epochs=1, validation_data=(test_images, test_labels) ) # + [markdown] id="5NMaNZQCkW9X" # For the example, you trained the model for just a single epoch, so it only trains to ~96% accuracy. # + [markdown] id="xl8_fzVAZwOh" # ### Convert to a TensorFlow Lite model # # Using the Python [TFLiteConverter](https://www.tensorflow.org/lite/convert/python_api), you can now convert the trained model into a TensorFlow Lite model. # # Now load the model using the `TFLiteConverter`: # + id="_i8B2nDZmAgQ" converter = tf.lite.TFLiteConverter.from_keras_model(model) tflite_model = converter.convert() # + [markdown] id="F2o2ZfF0aiCx" # Write it out to a `.tflite` file: # + id="vptWZq2xnclo" tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/") tflite_models_dir.mkdir(exist_ok=True, parents=True) # + id="Ie9pQaQrn5ue" tflite_model_file = tflite_models_dir/"mnist_model.tflite" tflite_model_file.write_bytes(tflite_model) # + [markdown] id="7BONhYtYocQY" # To instead quantize the model to float16 on export, first set the `optimizations` flag to use default optimizations. Then specify that float16 is the supported type on the target platform: # + id="HEZ6ET1AHAS3" converter.optimizations = [tf.lite.Optimize.DEFAULT] converter.target_spec.supported_types = [tf.float16] # + [markdown] id="xW84iMYjHd9t" # Finally, convert the model like usual. 
Note, by default the converted model will still use float input and outputs for invocation convenience. # + id="yuNfl3CoHNK3" tflite_fp16_model = converter.convert() tflite_model_fp16_file = tflite_models_dir/"mnist_model_quant_f16.tflite" tflite_model_fp16_file.write_bytes(tflite_fp16_model) # + [markdown] id="PhMmUTl4sbkz" # Note how the resulting file is approximately `1/2` the size. # + id="JExfcfLDscu4" # !ls -lh {tflite_models_dir} # + [markdown] id="L8lQHMp_asCq" # ## Run the TensorFlow Lite models # + [markdown] id="-5l6-ciItvX6" # Run the TensorFlow Lite model using the Python TensorFlow Lite Interpreter. # + [markdown] id="Ap_jE7QRvhPf" # ### Load the model into the interpreters # + id="Jn16Rc23zTss" interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file)) interpreter.allocate_tensors() # + id="J8Pztk1mvNVL" interpreter_fp16 = tf.lite.Interpreter(model_path=str(tflite_model_fp16_file)) interpreter_fp16.allocate_tensors() # + [markdown] id="2opUt_JTdyEu" # ### Test the models on one image # + id="AKslvo2kwWac" test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32) input_index = interpreter.get_input_details()[0]["index"] output_index = interpreter.get_output_details()[0]["index"] interpreter.set_tensor(input_index, test_image) interpreter.invoke() predictions = interpreter.get_tensor(output_index) # + id="XZClM2vo3_bm" import matplotlib.pylab as plt plt.imshow(test_images[0]) template = "True:{true}, predicted:{predict}" _ = plt.title(template.format(true= str(test_labels[0]), predict=str(np.argmax(predictions[0])))) plt.grid(False) # + id="3gwhv4lKbYZ4" test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32) input_index = interpreter_fp16.get_input_details()[0]["index"] output_index = interpreter_fp16.get_output_details()[0]["index"] interpreter_fp16.set_tensor(input_index, test_image) interpreter_fp16.invoke() predictions = interpreter_fp16.get_tensor(output_index) # + id="CIH7G_MwbY2x" 
plt.imshow(test_images[0]) template = "True:{true}, predicted:{predict}" _ = plt.title(template.format(true= str(test_labels[0]), predict=str(np.argmax(predictions[0])))) plt.grid(False) # + [markdown] id="LwN7uIdCd8Gw" # ### Evaluate the models # + id="05aeAuWjvjPx" # A helper function to evaluate the TF Lite model using "test" dataset. def evaluate_model(interpreter): input_index = interpreter.get_input_details()[0]["index"] output_index = interpreter.get_output_details()[0]["index"] # Run predictions on every image in the "test" dataset. prediction_digits = [] for test_image in test_images: # Pre-processing: add batch dimension and convert to float32 to match with # the model's input data format. test_image = np.expand_dims(test_image, axis=0).astype(np.float32) interpreter.set_tensor(input_index, test_image) # Run inference. interpreter.invoke() # Post-processing: remove batch dimension and find the digit with highest # probability. output = interpreter.tensor(output_index) digit = np.argmax(output()[0]) prediction_digits.append(digit) # Compare prediction results with ground truth labels to calculate accuracy. accurate_count = 0 for index in range(len(prediction_digits)): if prediction_digits[index] == test_labels[index]: accurate_count += 1 accuracy = accurate_count * 1.0 / len(prediction_digits) return accuracy # + id="T5mWkSbMcU5z" print(evaluate_model(interpreter)) # + [markdown] id="Km3cY9ry8ZlG" # Repeat the evaluation on the float16 quantized model to obtain: # + id="-9cnwiPp6EGm" # NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite # doesn't have super optimized server CPU kernels. For this reason this may be # slower than the above float interpreter. But for mobile CPUs, considerable # speedup can be observed. print(evaluate_model(interpreter_fp16)) # + [markdown] id="L7lfxkor8pgv" # In this example, you have quantized a model to float16 with no difference in the accuracy. 
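The storage scheme behind this result can be illustrated without TFLite at all. A minimal NumPy sketch (an illustration only, not the converter's actual code path) of storing weights at half precision and upsampling them back to float32, as the CPU inference path does:

```python
import numpy as np

# Illustration of the float16 weight storage described above (not the
# converter's actual code path): store weights at half precision, then
# upsample to float32 as the CPU inference path does.
rng = np.random.default_rng(0)
w32 = rng.standard_normal(1000).astype(np.float32)  # original float32 weights
w16 = w32.astype(np.float16)                        # stored form, half the bytes
w_restored = w16.astype(np.float32)                 # what the CPU path computes with

print(w16.nbytes / w32.nbytes)                # → 0.5 (the ~2x size reduction)
print(float(np.abs(w32 - w_restored).max()))  # small rounding error
```

The rounding error is on the order of float16's relative precision (~2^-11), which is why the evaluated accuracy above is essentially unchanged.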
# # It's also possible to evaluate the fp16 quantized model on the GPU. To perform all arithmetic with the reduced precision values, be sure to create the `TfLiteGpuDelegateOptions` struct in your app and set `precision_loss_allowed` to `1`, like this: # # ``` # // Prepare GPU delegate. # const TfLiteGpuDelegateOptions options = { # .metadata = NULL, # .compile_options = { # .precision_loss_allowed = 1, // FP16 # .preferred_gl_object_type = TFLITE_GL_OBJECT_TYPE_FASTEST, # .dynamic_batch_enabled = 0, // Not fully functional yet # }, # }; # ``` # # Detailed documentation on the TFLite GPU delegate and how to use it in your application can be found [here](https://www.tensorflow.org/lite/performance/gpu_advanced).
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="1009e6da" # # Feature inversion # # - description: "Feature visualization with PyTorch" # - toc: false # - branch: master # - badges: true # - comments: true # - categories: [deep-learning, PyTorch, computer-vision, from scratch] # - image: images/Covers/2021_07_16.png # - hide: false # - search_exclude: true # - metadata_key1: metadata_value1 # - metadata_key2: metadata_value2 # + [markdown] id="lUIVIgZogWyZ" # # Introduction # # Feature visualization refers to an ensemble of techniques employed to extract, visualize or understand the information (weights, biases, feature maps) inside a neural network. [Olah et al. (2017)](https://distill.pub/2017/feature-visualization/) provide a good introduction to the subject, and the [OpenAI microscope](https://microscope.openai.com/models) allows one to explore pretrained convolutional networks through feature visualization. # # Neural style transfer relies on the technique of **inversion** (see Mahendran and Vedaldi [2014](https://arxiv.org/pdf/1512.02017.pdf), [2016](https://arxiv.org/abs/1512.02017)). "*We do so by modelling a representation as a # function $\phi_0 = \phi(x_0)$ of the image $x_0$. Then, we attempt to recover the image from the information contained only in the code $\phi_0$*" (Mahendran and Vedaldi, 2016). Because some operations, such as ReLU or pooling (taking the max or mean of a subset of pixels), are destructive, the image $x_0$ is not uniquely recoverable. For the same reason, it is easier to invert images from lower layers of the network than from higher ones.
A couple of techniques, such as total variation regularization or jittering the input image, can help overcome these theoretical limitations (see [Mordvintsev et al., 2016](https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html) and previous references). # + [markdown] id="8d719c59-0231-4877-8807-e34c399455e3" # ## Algorithm outline # # In this blogpost, I implement feature inversion as described in [<NAME> (2016)](https://arxiv.org/abs/1512.02017). By doing so, I also lay the core foundation for the whole neural style transfer algorithm that I'll expand in the next part of this series. The algorithm is implemented in PyTorch. # # We have two images of interest: the *content image* (input) and the *generated image* (output). The algorithm and implementation go as follows: # 1. We import the necessary libraries. # 1. We create an `Image` class to store and transform images. # 1. We download a pretrained network and create a smaller neural network, `SmallNet`, that implements only the lower layers that we need. That network takes an image as input and outputs the feature maps of the requested layers. # 1. We define a loss function between the *content* and *generated* feature maps, as well as a series of regularizers to limit inversion artefacts. # 1. Finally, we instantiate all the classes and train the model. # # + [markdown] id="3abd82f1" # # 1. Setup # # We start by importing all the necessary libraries.
# + executionInfo={"elapsed": 4896, "status": "ok", "timestamp": 1625212141235, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "13412020910432312638"}, "user_tz": -540} id="c1df70ee" # Imports import torch import torchvision from torch import nn import skimage from skimage import transform from skimage import io # from im_func import show_image, timer import numpy as np import matplotlib.pyplot as plt from IPython.display import display, clear_output import torch.nn.functional as F import torchvision.transforms.functional as TF from time import time import contextlib @contextlib.contextmanager def timer(msg='timer'): tic = time() yield return print(f"{msg}: {time() - tic:.2f}") # + [markdown] id="742b1300-ebac-4769-bdf9-1f261685b30a" # # 2. `Image` class # # Next we define an `Image` class. This class stores the image. It also applies preprocessing when the class is instantiated. I also added a function to perform image jittering, either translating or rotating the image. This kind of transformation, when applied to the input (i.e. content) image, has been shown to improve feature inversion, especially when attempting to recover higher layers in the network. In effect, those transformations enforce correlation between neighbouring pixels and help recover the information that was lost during pooling operations ([Mahendran and Vedaldi, 2016](https://arxiv.org/abs/1512.02017)). However, making the image jump all over the place from one iteration of the solver to the other may render optimization difficult or unstable. Thus, I opted to implement jittering as a random walk of the translation distance and angle of rotation. During instantiation we will set the parameter `optimizable` to `False` for the *content image*, and `True` for the *generated image*. The image is cast into a `tensor` when `optimizable==False`, and into an `nn.Parameter` (with `requires_grad==True`) when `optimizable==True`.
Image values are clamped between 0 and 1 during the forward pass. # + executionInfo={"elapsed": 6, "status": "ok", "timestamp": 1625212141238, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "13412020910432312638"}, "user_tz": -540} id="93075000-0407-4a30-b3a1-1a30a41d18bb" rgb_mean = torch.tensor([0.485, 0.456, 0.406]) # Fixed values for PyTorch pretrained models rgb_std = torch.tensor([0.229, 0.224, 0.225]) class Image(nn.Module): def __init__(self, img=None, optimizable=True, img_shape=[64,64], jit_max=2, angle_max=2.0): super(Image,self).__init__() self.img_shape = img_shape if type(img)==type(None): self.img = torch.randn([1, 3] + self.img_shape) else: self.img = img self.img = self.preprocess() if optimizable == True: self.img = nn.Parameter(self.img) self.jit_i = 0 self.jit_j = 0 self.jit_max = jit_max self.angle = 0.0 self.angle_max = angle_max def preprocess(self): with torch.no_grad(): transforms = torchvision.transforms.Compose([ torchvision.transforms.ToPILImage(), torchvision.transforms.Resize(self.img_shape), torchvision.transforms.ToTensor(), ]) return transforms(self.img).unsqueeze(0) def postprocess(self): with torch.no_grad(): img = self.img.data[0].to(rgb_std.device).clone() img.clamp_(0, 1) return torchvision.transforms.ToPILImage()(img.permute(1, 2, 0).permute(2, 0, 1)) def jittered_image(self): with torch.no_grad(): jit_max = 2 temp = np.random.standard_normal(2)*2.0 self.jit_i += temp[0] self.jit_j += temp[1] self.angle += np.random.standard_normal(1)[0]*1.0 self.angle = np.clip(self.angle,-self.angle_max,self.angle_max) self.jit_i, self.jit_j = np.clip([self.jit_i, self.jit_j],-self.jit_max,self.jit_max)#.astype(int) # print(self.angle, self.jit_i, self.jit_j, temp) return torchvision.transforms.functional.affine(self.img.data, angle=self.angle, translate=(self.jit_i/self.img_shape[1], self.jit_j/self.img_shape[0]), scale=1., shear=[0.0,0.0])#,interpolation=torchvision.transforms.functional.InterpolationMode.BILINEAR) def 
forward(self, jitter=False): self.img.data.clamp_(0, 1) if jitter: return self.jittered_image() else: return self.img # + [markdown] id="4ffe564f" # # 3. `SmallNet` class # # Next, we import a pretrained model from the PyTorch zoo. Here, I use VGG16 ([Simonyan and Zisserman, 2014](https://arxiv.org/abs/1409.1556)). The VGG architecture (printed below) is composed of a series of blocks. The smallest block unit is composed of a convolutional layer + ReLU activation layer (**Conv2d+ReLU**). Larger blocks are composed of a series of **Conv2d+ReLU** followed by a maximum pooling (**MaxPool2d**) operation. The pooling operation divides the height and width of the feature map by two. The complete VGG16 architecture also contains a final few fully connected layers to perform the classification, but we don't need those here. For testing I'll extract features from layer 7, and then again from layer 29 (the last convolutional layer). # + colab={"base_uri": "https://localhost:8080/", "height": 578} executionInfo={"elapsed": 1822, "status": "ok", "timestamp": 1625212332971, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "13412020910432312638"}, "user_tz": -540} id="27a76b5e" outputId="015102ba-3ba1-4a20-ae02-1944dfe4f3da" pretrained_net = torchvision.models.vgg16(pretrained=True)#.features.to(device).eval() display(pretrained_net.features) content_layer = [29] # + [markdown] id="9ce160c4" # We are only interested in the features contained in the list of layers defined earlier in `content_layer`. Thus, we don't need to make a complete forward pass through the model. We only need to feed our image to the VGG16 network up to the last layer listed in `content_layer`. Therefore, we create a small network class, `SmallNet`, that is a smaller version of the VGG16 network. We also add an initial normalization operation before the VGG16 network. # # A forward pass through `SmallNet` outputs a list of feature maps for specified layers (here we would specify the `content_layer` list).
We will use `SmallNet` both on the *content* and *generated* image to obtain their respective feature maps. Here there is a subtlety: the *content* image is not optimizable (i.e. `requires_grad==False`), but the *generated* image is optimizable (i.e. `requires_grad==True`). Thus, we need to *detach* the content image's feature maps to avoid tracking their gradients during optimization. We also need to make copies of the feature maps to be used later during optimization. This part is essential but a bit tricky. When I implemented the algorithm, first, I didn't `clone()` the layers and backpropagation was crashing, but the error message is not very explicit. Then, I didn't detach the content feature maps. The backpropagation was not crashing and I was getting a reasonable output, but convergence was terrible and the results were quite underwhelming. It took me a while to figure out the problem. # + executionInfo={"elapsed": 8, "status": "ok", "timestamp": 1625212332972, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "13412020910432312638"}, "user_tz": -540} id="2e81bfff" class SmallNet(nn.Module): def __init__(self, pretrained_net, last_layer): super(SmallNet,self).__init__() self.net= nn.Sequential(*([torchvision.transforms.Normalize(mean=rgb_mean, std=rgb_std)] + [pretrained_net.features[i] for i in range(last_layer + 1)])).to(device).eval() def forward(self, X, extract_layers): # Passes the image X through the pretrained network and # returns a list containing the feature maps of layers specified in the list extract_layers detach = not(X.requires_grad) # We don't want to keep track of the gradients on the content image feature_maps = [] for il in range(len(self.net)): X = self.net[il](X) if (il-1 in extract_layers): # note: il-1 because I added a normalization layer before the pretrained net in self.net if detach: feature_maps.append(X.clone().detach()) else: feature_maps.append(X.clone()) return feature_maps # + [markdown] id="54399e56" # # 4.
`Losses` class # # The *content loss* is the mean squared error between the feature maps that correspond to the *content* and *generated* images. A number of artefacts commonly appear during image inversion. One of the main reasons for these artefacts is that some of the information from the original image is destroyed by pooling operations, and to a lesser extent by convolution and ReLU operations. To decrease the effect of these artefacts, we add a number of regularizers to the loss function. The *intensity regularizer* discourages large color intensities. The *total variation regularizer* inhibits small-wavelength variations (i.e. denoising). Each regularization term is associated with a weight. The losses and regularizers are described in detail in [Mahendran and Vedaldi (2016)](https://arxiv.org/abs/1512.02017). # + executionInfo={"elapsed": 457, "status": "ok", "timestamp": 1625212333422, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "13412020910432312638"}, "user_tz": -540} id="441900ad" class Losses(nn.Module): def __init__(self, img_ref, content_weight=1.0, tv_weight=0.0, int_weight=0.0, alpha=6, beta=1.5): super(Losses,self).__init__() # img_ref is used to compute a reference total variation and reference intensity # tv_weight: weight of the total variation regularizer # int_weight: weight of the intensity regularizer # alpha: exponent for the intensity regularizer # beta: exponent for the total variation regularizer self.content_weight = content_weight self.tv_weight = tv_weight self.int_weight = int_weight self.content_loss = 1e10 self.tv_loss = 1e10 self.int_loss = 1e10 self.total_loss = 1e10 self.alpha = alpha self.beta = beta self.B, self.V = self.get_regularizer_refs(img_ref) def get_content_loss(self, feature_map_gen, feature_map_content): # Mean squared error between generated and content feature maps loss = 0 for i in range(len(feature_map_content)): loss += F.mse_loss(feature_map_gen[i], feature_map_content[i].detach()) return loss/(i+1) def
get_regularizer_refs(self, img): eps = 1e-10 L2 = torch.sqrt(img[:,0,:,:]**2 + img[:,1,:,:]**2 + img[:,2,:,:]**2 + eps) B = L2.mean() d_dx = img[:,:,1:,:]-img[:,:,:-1,:] d_dy = img[:,:,:,1:]-img[:,:,:,:-1] L2 = torch.sqrt(d_dx[:,:,:,1:]**2 + d_dy[:,:,1:,:]**2 + eps) V = L2.mean() return B, V def get_int_loss(self, img): # Intensity loss H = img.shape[2] W = img.shape[3] C = img.shape[1] eps = 1e-10 L2 = torch.sqrt(img[:,0,:,:]**2 + img[:,1,:,:]**2 + img[:,2,:,:]**2 + eps) loss = 1./H/W/C/(self.B**self.alpha) * torch.sum(L2**self.alpha) return loss def get_TV_loss(self, img): # Total variation loss H = img.shape[2] W = img.shape[3] C = img.shape[1] eps = 1e-10 # avoids accidentally taking the sqrt of a negative number because of rounding errors # total variation d_dx = img[:,:,1:,:]-img[:,:,:-1,:] d_dy = img[:,:,:,1:]-img[:,:,:,:-1] # I ignore the first row or column of the image when computing the norm, in order to have vectors with matching sizes # Thus, d_dx and d_dy are not strictly colocated, but that should be a good enough approximation because neighbouring pixels are correlated L2 = torch.sqrt(d_dx[:,:,:,1:]**2 + d_dy[:,:,1:,:]**2 + eps) TV = torch.sum(L2**self.beta) # total variation regularizer loss = 1./H/W/C/(self.V**self.beta) * TV return loss def forward(self,img,feature_map, feature_map_target): self.content_loss = self.get_content_loss(feature_map, feature_map_target) self.int_loss = self.get_int_loss(img) self.tv_loss = self.get_TV_loss(img) self.total_loss = ( self.content_weight*self.content_loss + self.int_weight*self.int_loss + self.tv_weight*self.tv_loss ) return self.total_loss # + [markdown] id="a91ccdd7-0bc1-41d4-b34a-8f39c023432b" # # 5. Training # ## 5.1. Setup training # # We create two instances of the `Image` class, for the *content* and *generated* images, respectively. We instantiate `SmallNet` and pass it the `content_layer`. We instantiate `Losses` and we choose weights for the regularizers.
We create an optimizer and pass it the optimizable *generated* image. Here, we use L-BFGS as recommended by [Gatys et al. (2016)](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf). We also define absolute and relative loss limits to stop training, and an option to jitter or not the input image during training. Jittering is useful to better invert features but can cause training to not converge; therefore it is recommended to stop jittering once a reasonable result is reached, and then optimize a bit more without jittering. We implement this option by specifying a maximum number of steps for which jittering is active (`jitter_nsteps==0` deactivates jittering altogether). # + colab={"base_uri": "https://localhost:8080/", "height": 319} executionInfo={"elapsed": 2888, "status": "ok", "timestamp": 1625212369216, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "13412020910432312638"}, "user_tz": -540} id="68496dfc" outputId="6d15a61d-9d0e-48c5-f8ee-57aa2459c0be" device = 'cpu' # Images content_im = skimage.io.imread("https://github.com/scijs/baboon-image/blob/master/baboon.png?raw=true") fig, ax = plt.subplots(1,1,figsize=[5,5]) _ = plt.imshow(content_im); plt.title("content"); _ = plt.axis("off") img_shape = [256, int(256*content_im.shape[1]/content_im.shape[0])] img_content = Image(img=content_im, optimizable=False, img_shape=img_shape).to(device) img_gen = Image(None, optimizable=True, img_shape=img_shape).to(device) # SmallNet net = SmallNet(pretrained_net, content_layer[-1]) # Losses loss_fn = Losses(img_content(), tv_weight=0.2, int_weight=0.0) # Optimizer optimizer = torch.optim.LBFGS(img_gen.parameters(),lr=1.0) scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5) abs_loss_limit = 1e-3 rel_loss_limit = 1e-7 # Other options n_steps = 100 n_out = 1 # how often (in number of time steps) to plot the updated image during inversion jitter_nsteps =
50 # Jitter the input image for the first `jitter_nsteps` steps # + [markdown] id="25c1cc11" # # 5.2. Train # # Here, we train the model. At each iteration we compute the feature maps of the *generated* image and compare them to the feature maps of the *content image* to compute the loss. The feature maps of the *content image* are computed only once if `jitter==False`, or at each epoch otherwise. We visualize the updated *generated* image every few epochs, along with the current values of the loss and regularizers. # + colab={"base_uri": "https://localhost:8080/", "height": 999} executionInfo={"elapsed": 35547, "status": "error", "timestamp": 1625212406327, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "13412020910432312638"}, "user_tz": -540} id="4bd72045" outputId="d101a16d-ccbc-4db1-a87b-84e8c9d0b028" tags=[] #hide_output fig, ax = plt.subplots(1,1,figsize=[10,10]) # for sanity if jitter_nsteps<0: jitter_nsteps = 0 def closure(): optimizer.zero_grad() fm_gen = net(img_gen(), content_layer) loss = loss_fn(img_gen(), fm_gen, fm_content) loss.backward() return loss last_loss = 1e10 frame = 0 for i in range(n_steps): if i%n_out==0: with torch.no_grad(): plt.clf() plt.imshow(img_gen.postprocess()) if i>0: rel_loss = torch.abs(last_loss-loss_fn.total_loss) else: rel_loss = 1e10 plt.title(f"Epoch {i:02}, losses:\n" + f"content: {loss_fn.content_weight*loss_fn.content_loss:.2e}, " + f"total variation {loss_fn.tv_weight*loss_fn.tv_loss:.2e}, " + f"intensity {loss_fn.int_weight*loss_fn.int_loss:.2e}, \n" + f"total absolute:{loss_fn.total_loss:.2e}, relative: {rel_loss:.2e}") clear_output(wait = True) display(fig) plt.savefig(f"./Output/Frame_{i:05d}.jpg") if i>0: if loss_fn.total_loss<abs_loss_limit: clear_output(wait = True) print(f'success: absolute loss limit ({abs_loss_limit:.1e}) reached') break if torch.abs(last_loss-loss_fn.total_loss)<rel_loss_limit: clear_output(wait = True) print(f'stopped because relative loss limit ({rel_loss_limit:.1e}) was reached') break if
loss_fn.total_loss.isnan(): print(f'stopped because loss is NaN') break last_loss = loss_fn.total_loss if i<jitter_nsteps: fm_content = net(img_content(jitter=True), content_layer) elif i==jitter_nsteps: fm_content = net(img_content(jitter=False), content_layer) optimizer.step(closure) scheduler.step() # - # # Results # # The algorithm can reconstruct an image pretty close to the original from shallow layers, but the inversion becomes more abstract with deeper layers. The color reconstruction also becomes increasingly unfaithful the deeper we go. For example, here are illustrations of the optimization process on layer 7 (top) and layer 29, i.e. the last layer (bottom). Although both output images are jittered, the jittering is only clearly visible for layer 7. # ![](https://github.com/abauville/blog/raw/master/images/2021_07_16_feature_inversion/layer_7.gif) # ![](https://github.com/abauville/blog/raw/master/images/2021_07_16_feature_inversion/layer_29.gif) # <!-- <img src="https://github.com/abauville/blog/raw/master/images/2021_07_16_feature_inversion/layer_7.gif" alt="Optimization layer 7" width="360"/> # <img src="https://github.com/abauville/blog/raw/master/images/2021_07_16_feature_inversion/layer_29.gif" alt="Optimization layer 29" width="360"/> --> # + [markdown] id="9b4097eb-1fd8-4453-b040-5874852888d4" # # Conclusion # + [markdown] id="605424b0-0767-458f-9660-99fa7b726e74" # In this blogpost I implemented feature visualization and laid the foundation for my implementation of neural style transfer. My implementation of feature visualization follows the recommendations of [<NAME> Vedaldi (2016)](https://arxiv.org/abs/1512.02017). I implemented some of the techniques described by [Olah et al. (2017)](https://distill.pub/2017/feature-visualization/) to improve the inversion of deep layers, such as total variation and intensity regularization or jittering the output image.
I obtain a clear output for relatively shallow layers, but there is room for improvement regarding deeper layers. I found that the trickiest part of the implementation, and where I spent the most time debugging, was the feature map extraction. At that point it is important to clone and detach the feature maps appropriately. Failing to do so may break backpropagation or, worse, backpropagation can go on but with the wrong feature maps, which results in an underwhelming yet reasonable-looking output. In that case you might make the mistake of trying to tune the parameters rather than looking for a bug.
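The total variation regularizer used throughout this post can also be sanity-checked in isolation. Here is a minimal NumPy sketch of the finite-difference formula from `Losses.get_TV_loss` (illustration only: the real implementation works on torch tensors and normalizes by the reference value `V`, omitted here):

```python
import numpy as np

# Standalone NumPy sketch of the total variation term: finite differences,
# L2 norm of the gradient, raised to the power beta, summed over the image.
# One row/column is dropped so the two difference maps line up, as in the post.
def total_variation(img, beta=1.5, eps=1e-10):
    # img: array of shape (C, H, W) with values in [0, 1]
    d_dx = img[:, 1:, :] - img[:, :-1, :]
    d_dy = img[:, :, 1:] - img[:, :, :-1]
    l2 = np.sqrt(d_dx[:, :, 1:] ** 2 + d_dy[:, 1:, :] ** 2 + eps)
    return np.sum(l2 ** beta)

flat = np.ones((3, 8, 8))                            # constant image: near-zero TV
noisy = np.random.default_rng(0).random((3, 8, 8))   # noisy image: large TV
print(total_variation(flat) < total_variation(noisy))  # → True
```

A constant image has (near) zero total variation while pixel noise has a large one, which is exactly why this term acts as a denoiser on the generated image.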
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="fWjWlshjLsok" colab_type="code" colab={} import keras from keras.datasets import mnist from tensorflow.python.keras import Sequential from tensorflow.python.keras.layers import Dense, Dropout from tensorflow.compat.v1.keras.optimizers import RMSprop # + id="mUZCxpzbL1P1" colab_type="code" colab={} # Load the train and test sets # note the parentheses: the function returns tuples (x_treino, y_treino),(x_teste, y_teste) = mnist.load_data() # + id="-2EPDhqyNM18" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="bb64add2-cb9a-4a71-a10b-777711fee341" # how many images do I have for training? print("img for train",len(x_treino)) # how many images for testing? print("img for test", len(x_teste)) # what is the type of x_treino? print("Type of x_treino", type(x_treino)) # get the first image primeira_imagem = x_treino[0] print(primeira_imagem) # + id="RO-DKCK9OSIj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="d740eb7c-68d0-46eb-db9e-062122ef847c" import matplotlib.pyplot as plt indice = 15000 plt.imshow(x_treino[indice]) plt.show() # + id="v3VfulcHQPh3" colab_type="code" colab={} # flatten the pixel matrix of each image into a single vector quantidade_treino = len(x_treino) quantidade_teste = len(x_teste) resolucao_imagem = x_treino[0].shape resolucao_total = resolucao_imagem[0] * resolucao_imagem[1] x_treino = x_treino.reshape(quantidade_treino, resolucao_total) x_teste = x_teste.reshape(quantidade_teste, resolucao_total) # + id="oCyxlqK2S6Za" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="9b745769-8d60-4051-d8cd-86cc47042ad7" # data normalization # 255 becomes 1 # 0 becomes 0 # and so on x_treino = x_treino.astype('float32') x_teste = x_teste.astype('float32') x_treino /= 255
x_teste /= 255 print(type(x_treino[0][350])) print(x_treino[0][350]) # + id="Jg19KxBeUzJD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="86d61317-9c7f-4329-e685-c72ee6dc7073" # normalized data print("Normalized data", x_treino[0]) # + id="_iHKq-iVWLwY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 70} outputId="8b5d47a1-98f7-43c4-c2ae-5e801a836013" # output layer valores_unicos = set(y_treino) quantidade_valores_unicos = len(valores_unicos) print(valores_unicos) # turn the unique values into categorical variables # categorical (one-hot) representation of a number in a neural network # number 0 -> [1,0,0,0,0,0,0,0,0,0] # ... # number 9 -> [0,0,0,0,0,0,0,0,0,1] print("y_treino[0] before", y_treino[0] ) y_treino = keras.utils.to_categorical(y_treino, quantidade_valores_unicos) y_teste = keras.utils.to_categorical(y_teste, quantidade_valores_unicos) print("y_treino[0] after", y_treino[0]) # run this cell only once, or it can break the rest of the code # + id="zd0TMqnLWmoX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 336} outputId="89485325-6d1c-4f10-a4a5-cba92234a12e" # build the neural network model model = Sequential() # first hidden layer: 450 neurons with ReLU activation # in layer 1 we pass input_shape, which is (784,); it is a tuple, so the comma is essential model.add(Dense(450, activation='relu', input_shape=(resolucao_total, ))) # add dropout to avoid overfitting model.add(Dropout(0.2)) # second hidden layer: 300 neurons with ReLU activation model.add(Dense(300, activation='relu')) # one more regularizer after the second hidden layer model.add(Dropout(0.2)) # finish with the output layer, sized to the number of unique values model.add(Dense(quantidade_valores_unicos, activation='softmax')) # show the model summary model.summary() # + id="SCm8A_R0d-A5" colab_type="code" colab={} model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(), metrics=['accuracy']) # + id="xhEv3cUkfNfc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 727} outputId="8be60b21-b4ab-43ea-9bb5-95634fc9aaf7" # Train the model history = model.fit(x_treino, y_treino, batch_size=128, epochs=20, verbose=1, validation_data=(x_teste, y_teste)) # + id="FtCIpTgEgLA-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 371} outputId="2c919ac3-256e-455f-ff1f-a59413a5ab91" # Make our predictions indice = 9800 # What is the categorical value of y_teste[indice]? print("Categorical value at y_teste[indice]", y_teste[indice]) # looks like a 7 #print(x_teste[indice]) imagem = x_teste[indice].reshape((1, resolucao_total)) #print(imagem) # Run the prediction prediction = model.predict(imagem) print("Prediction:", prediction) # predict_classes was removed from recent Keras; take the argmax of predict instead #prediction_class = model.predict_classes(imagem) import numpy as np prediction_class = np.argmax(model.predict(imagem), axis=-1) print("Adjusted prediction", prediction_class) (x_treino_img, y_treino_img), (x_teste_img, y_teste_img) = mnist.load_data() plt.imshow(x_teste_img[indice], cmap=plt.cm.binary) # + id="LJvZf22eh12X" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="2c3153a0-8083-44db-8588-dfa34ef47751" # Make our predictions interactively while True: print("enter an index between 0 and 9999 (-1 to quit)") indice = int(input()) if indice == -1: break # What is the categorical value of y_teste[indice]? print("Categorical value at y_teste[indice]", y_teste[indice]) #print(x_teste[indice]) imagem = x_teste[indice].reshape((1, resolucao_total)) #print(imagem) # Run the prediction prediction = model.predict(imagem) print("Prediction:", prediction) #prediction_class = model.predict_classes(imagem) import numpy as np prediction_class = np.argmax(model.predict(imagem), axis=-1) print("Adjusted prediction", prediction_class) (x_treino_img, y_treino_img), (x_teste_img, y_teste_img) = mnist.load_data() plt.imshow(x_teste_img[indice], cmap=plt.cm.binary) plt.show() # +
id="L4UEqjzwkW0C" colab_type="code" colab={}
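The categorical encoding that `keras.utils.to_categorical` performs above (digit 0 becomes `[1,0,...,0]`, digit 9 becomes `[0,...,0,1]`) can be sketched in plain Python; this is only an illustration of the idea, not the Keras implementation:

```python
def to_one_hot(labels, num_classes):
    """Return one one-hot vector per label, mirroring to_categorical."""
    vectors = []
    for label in labels:
        vec = [0.0] * num_classes
        vec[label] = 1.0  # put a 1 at the position of the digit
        vectors.append(vec)
    return vectors

# digit 0 -> [1, 0, ...], digit 9 -> [..., 0, 1]
print(to_one_hot([0, 9], 10))
```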
mnist_Recognition_Digit.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: wsc_port # language: python # name: wsc_port # --- # + import os os.chdir('..') # %load_ext autoreload # %autoreload 2 # + import torch from src.main import setup_torch, get_corpus from src.utils import get_latest_model_file from src.winograd_schema_challenge import analyse_single_wsc, generate_full_sentences, find_missing_wsc_words_in_corpus_vocab, winograd_test from src.wsc_parser import generate_df_from_json # - df = generate_df_from_json() # + setup_torch() device = torch.device("cuda") corpus = get_corpus() ntokens = len(corpus.dictionary) # TODO remove these two lines # assert ntokens == 602755 # assert corpus.valid.size()[0] == 11606861 assert corpus.train.max() < ntokens assert corpus.valid.max() < ntokens assert corpus.test.max() < ntokens # - model_file_name = get_latest_model_file() find_missing_wsc_words_in_corpus_vocab(df, corpus, english=True) df.loc[58, 'correct_sentence'] = str(df.iloc[58].correct_sentence).replace('4:00', '4 @:@ 00').replace('4:30', '4 @:@ 30') df.loc[58, 'incorrect_sentence'] = str(df.iloc[58].incorrect_sentence).replace('4:00', '4 @:@ 00').replace('4:30', '4 @:@ 30') df.loc[59, 'correct_sentence'] = str(df.iloc[59].correct_sentence).replace('4:00', '4 @:@ 00').replace('4:30', '4 @:@ 30') df.loc[59, 'incorrect_sentence'] = str(df.iloc[59].incorrect_sentence).replace('4:00', '4 @:@ 00').replace('4:30', '4 @:@ 30') df.loc[174, 'correct_sentence'] = str(df.iloc[174].correct_sentence).replace('20,000', '20 @,@ 000') df.loc[174, 'incorrect_sentence'] = str(df.iloc[174].incorrect_sentence).replace('20,000', '20 @,@ 000') df.loc[175, 'correct_sentence'] = str(df.iloc[175].correct_sentence).replace('20,000', '20 @,@ 000') df.loc[175, 'incorrect_sentence'] = str(df.iloc[175].incorrect_sentence).replace('20,000', '20 @,@ 000') df, accuracy = winograd_test(df, corpus, 
model_file_name, ntokens, device, english=True) print('Accuracy: {} on a test run over {} examples'.format(accuracy, len(df))) len(df[~df.test_result]) len(df[df.test_result]) len(df) df, accuracy = winograd_test(df, corpus, model_file_name, ntokens, device, partial=True, english=True) print('Accuracy: {} on a test run over {} examples'.format(accuracy, len(df)))
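The eight repetitive `.replace` statements above all apply the same WikiText-style tokenization to numbers (`4:00` becomes `4 @:@ 00`, `20,000` becomes `20 @,@ 000`). The same normalization can be expressed once with a regex; `wikitext_tokenize_numbers` is a hypothetical helper name, not part of `src.wsc_parser`:

```python
import re

def wikitext_tokenize_numbers(text):
    """Insert WikiText-style ' @:@ ' and ' @,@ ' tokens inside numbers."""
    text = re.sub(r"(?<=\d):(?=\d)", " @:@ ", text)  # 4:00 -> 4 @:@ 00
    text = re.sub(r"(?<=\d),(?=\d)", " @,@ ", text)  # 20,000 -> 20 @,@ 000
    return text

print(wikitext_tokenize_numbers("at 4:00 or 4:30"))
print(wikitext_tokenize_numbers("about 20,000 fans"))
```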
notebooks/archive/12_wsc_english_not_discarding_words.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <h1>Mic Check</h1> # # <h3>This notebook uses the Microphone on the Raspberry Pi HAT to listen for keywords.</h3> # # <i> Click the play button next to the cell you want to run</i> # <h2>Setup</h2> from mic_helper import listen from led_helper import red, green, blue, yellow, purple, white from led_helper import color, clear, random, rotate, repeat OPTIONS = ['blueberry', 'bumblebee', 'grapefruit', 'grasshopper', 'terminator', 'porcupine'] # <h2>See how good the Raspberry Pi is at listening to you</h2> # # You can change the keyword `blueberry` to one of the options above. # # Or you can add multiple ones to listen to at the same time like this: `keywords=['blueberry', 'grapefruit']` # # What happens if you say the word fast, slow, in a funny accent? # # <i>Use the stop button at the top of the screen to stop listening.</i> listen(keywords=['blueberry']) # <h2>Execute your own action when a keyword is detected</h2> # # You can now make the Raspberry Pi do something when it hears the keyword. # # Change the value of `Hi!` below to make it print your own message. # + def action(result): print('Hi!') listen(keywords=['blueberry'], action=action) # - # <h2>Now put together lights and voice commands!</h2> # # When it hears the first keyword it will turn the lights to the first color. Same for the second keyword and color. # # Try changing the keywords or colors. # + keywords = ['blueberry', 'grasshopper'] colors = [blue, green] def action(result): print('I heard ' + keywords[result]) color(color=colors[result]) listen(keywords=keywords, action=action) clear() # - clear()
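The keyword-to-color dispatch used with `listen` above can be sketched without the Raspberry Pi hardware. This is a stand-in, assuming (as the notebook shows) that `listen` calls `action` with the index of the keyword it heard; the string colors replace the `led_helper` color objects:

```python
keywords = ['blueberry', 'grasshopper']
colors = ['blue', 'green']  # stand-ins for the led_helper color objects

def action(result):
    """Called with the index of the detected keyword."""
    return 'I heard {} -> light {}'.format(keywords[result], colors[result])

print(action(0))  # I heard blueberry -> light blue
print(action(1))  # I heard grasshopper -> light green
```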
02_ReSpeaker_Device/03_Mic_Check.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import os import numpy as np # ### Import data # set the path of the processed data processed_data_path = os.path.join(os.path.pardir, "data", "processed") train_file_path = os.path.join(processed_data_path, "train.csv") test_file_path = os.path.join(processed_data_path, "test.csv") train_df = pd.read_csv(train_file_path, index_col="PassengerId") test_df = pd.read_csv(test_file_path, index_col="PassengerId") train_df.info() test_df.info() # ### Data Preparation # feature matrix: every column from "Age" onwards (as_matrix was removed from pandas, use to_numpy) X = train_df.loc[:, "Age":].to_numpy().astype("float") y = train_df["Survived"].ravel() print(X.shape) print(y.shape) # + # train test split from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) print(X_train.shape) print(y_train.shape) print(X_test.shape) print(y_test.shape) # - # average survival in train and test print(np.mean(y_train)) print(np.mean(y_test)) # ### Check Scikit-learn Version import sklearn sklearn.__version__ # ### Baseline Model # import function from sklearn.dummy import DummyClassifier # create model model_dummy = DummyClassifier(strategy="most_frequent", random_state=0) # train model model_dummy.fit(X_train, y_train) # baseline model score print(model_dummy.score(X_test, y_test)) # performance metrics from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score # baseline accuracy print("accuracy for baseline model: {0:.2f}".format(accuracy_score(y_test, model_dummy.predict(X_test)))) # confusion matrix print("confusion matrix for baseline model is \n {0}".format(confusion_matrix(y_test, model_dummy.predict(X_test)))) def get_submission_file(model, filename): # convert the test features to a matrix test_X = test_df.to_numpy().astype("float") # make prediction predictions = model.predict(test_X) # submission dataframe df_submission = pd.DataFrame({"PassengerId": test_df.index, "Survived": predictions}) # submission file submission_data_path = os.path.join(os.path.pardir, "data", "external") submission_file_path = os.path.join(submission_data_path, filename) # write to the file df_submission.to_csv(submission_file_path, index=False) # get submission file get_submission_file(model_dummy, "01_dummy.csv") # ### Logistic Regression Model # import function from sklearn.linear_model import LogisticRegression # create model model_lr_1 = LogisticRegression(random_state=0) # train_model model_lr_1.fit(X_train, y_train) # evaluate model print("score for logistic regression model - version 1 : {0:.2f}".format(model_lr_1.score(X_test, y_test))) # performance metrics print("Accuracy: {0}".format(accuracy_score(y_test, model_lr_1.predict(X_test)))) print("Confusion matrix: \n{0}".format(confusion_matrix(y_test, model_lr_1.predict(X_test)))) model_lr_1.coef_ # ### Second Kaggle Submission get_submission_file(model_lr_1, "02_lr.csv")
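`DummyClassifier(strategy="most_frequent")` used above simply predicts the majority class seen during training; a minimal stdlib sketch of that idea (not the scikit-learn implementation):

```python
from collections import Counter

class MostFrequentClassifier:
    """Predict the most common training label for every input."""
    def fit(self, X, y):
        # remember the majority class from the training labels
        self.majority_ = Counter(y).most_common(1)[0][0]
        return self

    def predict(self, X):
        return [self.majority_] * len(X)

clf = MostFrequentClassifier().fit(None, [0, 0, 0, 1, 1])
print(clf.predict([None] * 3))  # [0, 0, 0]
```

Such a baseline gives the accuracy floor any real model has to beat; on imbalanced data it can look deceptively high.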
notebooks/3.0-sonhal-building-predictive-model.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 1 Data Analysis # ## 1.1 Air-Conditioner Market Share Analysis # ### 1.1.1 Abstract # - Crawl air-conditioner sales listings from JD.com, Suning and Taobao # - Present the data # # ### 1.1.2 Schedule # - Ideas written down: 20170916_1810 # # ## 1.2 House Price Prediction # ### 1.2.1 Abstract # - Digitizing policy factors: the housing and stock markets move in opposite directions, and the stock market is highly sensitive to policy and easy to quantify (e.g. via the Shanghai Composite Index). Use a stock-market index to quantify policy changes, and add a time-offset parameter because house prices react to policy with a lag. # # ### 1.2.2 Schedule # - Ideas written down: 20170916_2235 # # # 2 Machine Learning # ## 2.1 Appearance Comparison # ### 2.1.1 Abstract # - Build a neural network for approximate similarity comparison and estimate the probability of a blood relationship # # ### 2.1.2 Schedule
Ideas.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/juliasbardelatti/testeAutomatizadoJasmine/blob/main/GlobalSolution-week-2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="hV1gNgslWKr0" # ## **White group:** # - <NAME>: 89014 # - <NAME>: 87395 # - <NAME>: 88668 # - <NAME>: 87743 # # # + id="cetZGH_LHg4r" import numpy as np import matplotlib.pyplot as plt from skimage.io import imread import cv2 from google.colab.patches import cv2_imshow from pathfinding.core.grid import Grid from pathfinding.finder.a_star import AStarFinder from pathfinding.core.diagonal_movement import DiagonalMovement # + [markdown] id="xfWT8hhtAvd3" # # We clean up the binary image so the background is white # + colab={"base_uri": "https://localhost:8080/", "height": 802} id="eOxy_c1x8ubv" outputId="5496ca82-1570-4623-e6ba-9e03fe677c09" img = cv2.imread('/content/gs-1tiar.JPG') kernel = np.ones((3,3),np.uint8) # Apply morphologyEx and blur closing = cv2.morphologyEx(img,cv2.MORPH_CLOSE,kernel, iterations = 2) blur = cv2.blur(closing,(36,36)) gray = cv2.cvtColor(blur,cv2.COLOR_BGR2GRAY) _, mask = cv2.threshold(gray,103,255,cv2.THRESH_BINARY) cv2_imshow(mask) cv2.imwrite("detectarPedra.jpg", mask) # + [markdown] id="4vKJvUHyA68w" # # Image data # + [markdown] id="wD-FW8opHwGQ" # We had to change the image dimensions so the map could be displayed at the end :) # # Hence the use of *resize* # + id="yGE5cU6gA-oz" img = cv2.imread('/content/detectarPedra.jpg') img = cv2.resize(img, (40, 30)) # + id="Xx2XTlAvBOb8" img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) matrix = np.array(img) # + [markdown] id="fEJsyZ65JmGx" # We plot the image to capture the location of the points # + colab={"base_uri": "https://localhost:8080/", "height": 283} id="ls0MqqnYD9mk" outputId="b8a80c11-2b7c-4f4e-8e67-15d9c2d480c8" plt.imshow(matrix) # + id="xM_5ar2X63wC" #pip install pathfinding # + colab={"base_uri": "https://localhost:8080/"} id="_xZLPleT6Zco" outputId="41645bf1-a2b7-4e2b-f286-fa089794a5e5" grid = Grid(matrix = matrix) inicio = grid.node(13,26) objetivo = grid.node(35,3) planejador = AStarFinder(diagonal_movement = DiagonalMovement.always) caminho, _runs = planejador.find_path(inicio, objetivo, grid) print(grid.grid_str(path=caminho, start=inicio, end=objetivo))
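The `pathfinding` library above treats nonzero matrix cells as walkable and zero cells as obstacles. A minimal pure-Python sketch of the same grid-search idea, using breadth-first search instead of A* for brevity (BFS also finds a shortest path when every step costs the same):

```python
from collections import deque

def shortest_path(matrix, start, goal):
    """4-neighbour BFS; nonzero cells are walkable, like pathfinding's Grid."""
    rows, cols = len(matrix), len(matrix[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and matrix[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable

grid = [[1, 1, 0],
        [0, 1, 1],
        [0, 0, 1]]
print(shortest_path(grid, (0, 0), (2, 2)))
```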
GlobalSolution-week-2.ipynb
% -*- coding: utf-8 -*- % --- % jupyter: % jupytext: % text_representation: % extension: .m % format_name: light % format_version: '1.5' % jupytext_version: 1.14.4 % kernelspec: % display_name: Octave % language: octave % name: octave % --- % Pointwise image transformations % ============================== % % An introductory lab on image processing % % % ## Creating a LUT (monochrome image) % % % ### Turning an image into its negative with a LUT % % First, here is what the `circuit.tif` image that we are going to turn into a negative looks like: circuit = imread('circuit.tif'); imshow(circuit) % To build the negative: % + # Build the LUT r = 1:-1/255:0; v = 1:-1/255:0; b = 1:-1/255:0; lut = [r' v' b']; # Display the negative by applying the LUT to the image image(circuit) colormap(lut) colorbar % - % ### Building a non-linear LUT % % In this second example the goal is to build a LUT that behaves as follows: % * gray levels below 50 are displayed in red, % * gray levels above 150 are displayed in green, % * the others are left unchanged. % + # By default, an identity ramp so unlisted gray levels stay unchanged r = 0:1/255:1; v = 0:1/255:1; b = 0:1/255:1; # Gray levels 0 to 49: colour in red r(1:50) = 1; v(1:50) = 0; b(1:50) = 0; # Gray levels 150 to 254: colour in green r(151:255) = 0; v(151:255) = 1; b(151:255) = 0; # Concatenate the LUT lut = [r' v' b']; # Colour the circuit image colormap(lut); image(circuit); colorbar % - % Note that the gray levels below 50 highlight the circuit traces. % + pkg load image # Display the cameraman image and its histogram cameraman = imread('cameraman.tif'); imshow(cameraman) % - imhist(cameraman) # Compute and display the image with an equalized histogram cameraman_eq = histeq(cameraman); imhist(cameraman_eq) imshow(cameraman_eq)
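The negative LUT above maps each gray level g to 255 - g, and applying a LUT is just table lookup by pixel value. The same idea in Python, as a small sketch alongside the Octave code:

```python
# Build a 256-entry lookup table mapping gray level g -> 255 - g
negative_lut = [255 - g for g in range(256)]

# Applying the LUT is plain indexing, one lookup per pixel
pixels = [0, 50, 128, 255]
print([negative_lut[p] for p in pixels])  # [255, 205, 127, 0]
```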
octave_kernel.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import gdxpds import pandas as pd import matplotlib.pyplot as plt import numpy as np scenarios = list(['nodal', 'with_instrument', 'without_instrument']); nodes = list(['north', 'south']) # + scenarios = list(['nodal', 'with_instrument', 'without_instrument', 'agnostic_instrument']); nodes = list(['north', 'south']) # + def read_data(file, indicator): gams_dir='C:\Program Files\GAMS' df = gdxpds.to_dataframes(str(file) + '.gdx', gams_dir=gams_dir) df = df[indicator] df['model'] = file if 'Level' in df.columns: df['Value'] = df['Level'] df['Value'] = df['Value'].round(2) return(df.set_index(['model'])) def read(indicator): if (len(scenarios) == 4): df = read_data(scenarios[0], indicator).append(read_data(scenarios[1], indicator)).append(read_data(scenarios[2], indicator)).append(read_data(scenarios[3], indicator)) elif (len(scenarios) == 3): df = read_data(scenarios[0], indicator).append(read_data(scenarios[1], indicator)).append(read_data(scenarios[2], indicator)) return(df) def plot_distribution(df, axes, location): width = 0.8 #baseload tmp = df.loc[df['tec'] == 'base'][location].fillna(0) tmp = tmp.reindex(index = scenarios) leg = axes.barh(y_pos, tmp, width, align='center', color = 'darkred') left = tmp.fillna(0) #peaker tmp = df.loc[df['tec'] == 'peak'][location].fillna(0) tmp = tmp.reindex(index = scenarios) axes.barh(y_pos, tmp, width, left=left, align='center', color = 'dimgrey') left = left + tmp.fillna(0) #wind tmp = df.loc[df['tec'] == 'wind'][location].fillna(0) tmp = tmp.reindex(index = scenarios) axes.barh(y_pos, tmp, width, left=left, align='center', color = 'lightblue') left = left + tmp.fillna(0) #solar tmp = df.loc[df['tec'] == 'solar'][location].fillna(0) tmp = tmp.reindex(index = scenarios) axes.barh(y_pos, tmp, 
width, left=left, align='center', color = 'gold') left = left + tmp.fillna(0) #leg.legend() return(axes) # + df = read('o_gen') df = df.reset_index().set_index(['model','t', 'n','tec']) df = df.swaplevel().unstack() df.columns = df.columns.droplevel(0) df.columns.name = '' df = df.groupby(['model','tec']).sum() df = df.reset_index().set_index(['model']) gen = df # - gen gen['total'] = gen['north'] + gen['south'] gen['type'] = ['RE' if tec in ['wind', 'solar'] else 'non RE' for tec in gen['tec']] re_shares = gen[['total', 'type']].groupby(['model', 'type']).sum() re_shares = re_shares.unstack() re_shares re_shares['total', 'RE'] / (re_shares['total', 'RE'] + re_shares['total', 'non RE']) sums = gen.groupby(['model']).sum() sums['total'] = sums[nodes[0]] + sums[nodes[1]] sums = sums /48 * 8760 sums['share north'] = sums[nodes[0]] / sums['total'] sums # in GWh # # Installed capacities df = read('o_cap') df = df.reset_index().set_index(['model', 'n','tec']) df = df.swaplevel().unstack() df.columns = df.columns.droplevel(0) df.columns.name = '' df = df.reset_index().set_index(['model']) capacities = df capacities # + #plt.rcdefaults() fig = plt.figure(figsize=(9, 2.5)) ax1 = plt.subplot(1,2,1) ax2 = plt.subplot(1,2,2) y_pos = np.arange(len(scenarios)) #leg = pd.Dataframe() plot_distribution(capacities, ax1, 'north') axes = plot_distribution(capacities, ax2, 'south') ax1.set_xlim(0, 80) ax2.set_xlim(0, 80) ax1.set_title('North') ax2.set_title('South') ax2.legend(['Base', 'Peak', 'Wind', 'Solar'], bbox_to_anchor=(1.01, 0.87)) ax1.set_yticks(y_pos); ax1.set_yticklabels(['Integrated planer', 'Locational instrument', 'Reference']) ax2.set_yticklabels([]); ax1.set_xlabel('Installed capacity in GW') ax2.set_xlabel('Installed capacity in GW') plt.tight_layout() fig.savefig('Figures/capacity_distribution.jpeg', dpi=500) # + # plot generation #plt.rcdefaults() fig = plt.figure(figsize=(10, 3.3)) ax1 = plt.subplot(1,2,1) ax2 = plt.subplot(1,2,2) y_pos = np.arange(len(scenarios)) 
#leg = pd.Dataframe() plot_distribution(gen, ax1, nodes[0]) axes = plot_distribution(gen, ax2, nodes[1]) ax1.set_xlim(0, 1100) ax2.set_xlim(0, 1100) ax1.set_title('North') ax2.set_title('South') ax2.legend(['Base', 'Peak', 'Wind', 'Solar'], bbox_to_anchor=(1.0, 1)) #ax1.legend(['Base', 'Peak', 'Wind', 'Solar']) ax1.set_yticks(y_pos); if len(scenarios) == 3: ax1.set_yticklabels(['Nodal market', 'Locational instrument', 'Reference scenario']); if len(scenarios) == 4: ax1.set_yticklabels(['Nodal market', 'Locational instrument', 'Reference scenario', 'Agnostic instrument']); ax2.set_yticklabels([]); ax1.set_xlabel('Generation in GWh') ax2.set_xlabel('Generation in GWh') plt.tight_layout() fig.savefig('generation_distribution.jpeg', dpi=500) # - # # Welfare comparison # + scenarios = list(['nodal', 'with_instrument', 'without_instrument', 'small_instrument', 'large_instrument', 'agnostic_instrument', 'with_instrument_redispatch', 'without_instrument_redispatch']); welfare = pd.DataFrame(index = scenarios, columns = ['Generation cost', 'Network cost', 'Total costs', 'Gross consumer surplus']) for scenario in scenarios: welfare.loc[scenario, 'Generation cost'] = read_data(scenario, 'generation_costs').iloc[0,0] welfare.loc[scenario, 'Network cost'] = read_data(scenario, 'network_cost').iloc[0,0] welfare.loc[scenario, 'Total costs'] = welfare.loc[scenario, 'Network cost'] + welfare.loc[scenario, 'Generation cost'] welfare.loc[scenario, 'Gross consumer surplus'] = read_data(scenario, 'consumer_surplus').iloc[0,0] welfare['Welfare'] = welfare['Gross consumer surplus'] - welfare['Network cost'] - welfare['Generation cost']; welfare = welfare / 1000 welfare.style.format('{0:,.1f}') # + print('welfare gains through instrument in %') print(round((welfare['Welfare']['with_instrument'] - welfare['Welfare']['without_instrument']) / welfare['Welfare']['without_instrument'] * 100,1)) print('Cost savings through instrument in %') print(round((welfare['Total costs']['with_instrument'] 
- welfare['Total costs']['without_instrument']) / welfare['Total costs']['without_instrument'] * 100,1)) print('welfare gains through nodal pricing in %') print(round((welfare['Welfare']['nodal'] - welfare['Welfare']['without_instrument']) / welfare['Welfare']['without_instrument'] * 100,1)) print('Cost savings through nodal pricing in %') print(round((welfare['Total costs']['nodal'] - welfare['Total costs']['without_instrument']) / welfare['Total costs']['without_instrument'] * 100,1)) if len(scenarios) > 3: print('welfare gain through uniform signal in %') print(round((welfare['Welfare']['agnostic_instrument'] - welfare['Welfare']['without_instrument']) / (welfare['Welfare']['with_instrument'] - welfare['Welfare']['without_instrument']) * 100,1)) if len(scenarios) > 3: print('Cost savings through uniform signal in %') print(round((welfare['Total costs']['agnostic_instrument'] - welfare['Total costs']['without_instrument']) / (welfare['Total costs']['with_instrument'] - welfare['Total costs']['without_instrument']) * 100,1)) # - # # Instrument level fixed_costs = read_data('with_instrument', 'c_fix').reset_index(drop = True).set_index(['tec','n']).unstack() fixed_costs.columns = fixed_costs.columns.droplevel(0) fixed_costs instr = read_data('with_instrument', 'o_instrument').reset_index(drop = True).set_index(['tec','n']).unstack() instr.columns = instr.columns.droplevel(0) instr = instr.round(1) instr cap = read_data('with_instrument', 'o_cap').reset_index(drop = True).set_index(['tec','n']).unstack() cap.columns = cap.columns.droplevel(0) instr[cap.isna()] = np.nan shares = round(100 *instr / fixed_costs,1) shares = shares.rename(columns={'north':'north (share)', 'south': 'south (share)'}) #shares.columns = shares.columns.droplevel(0) shares instrument = pd.concat([instr, shares], axis=1) instrument = instrument.sort_index(axis = 1) #fixed_costs.columns = fixed_costs.columns.droplevel(0) instrument['Fixed cost'] = fixed_costs['north'] format_dict = 
{'north':'{0:,.0f} €', 'north (share)': '{0:,.0f}%', 'south': '{0:,.0f} €', 'south (share)': '{0:,.0f}%', 'Fixed cost': '{0:,.0f} €'} instrument.style.format(format_dict) # # Redispatch cap = read_data('with_instrument', 'o_cap') cap2 = read_data('with_instrument_redispatch', 'o_cap') cap cap2 dif = cap2.set_index(['tec', 'n']) - cap.set_index(['tec', 'n']) dif
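The welfare comparisons printed above are plain percentage-change computations: gains relative to the reference scenario, plus the share of the full instrument's gain that a partial (uniform) signal captures. As a bare sketch of those two formulas:

```python
def pct_change(new, ref):
    """Percentage change of `new` relative to `ref`, one decimal place."""
    return round((new - ref) / ref * 100, 1)

def share_of_gain(partial, full, ref):
    """Share (in %) of the full instrument's gain captured by a partial one."""
    return round((partial - ref) / (full - ref) * 100, 1)

print(pct_change(103.0, 100.0))            # welfare gain vs. reference
print(share_of_gain(102.0, 104.0, 100.0))  # uniform signal vs. full instrument
```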
Two node models/Output/Postprocessing.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Server Load Prediction # ### Import necessary modules import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt # %matplotlib inline # ### Read the train and test data into pandas dataframe # + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" train_df = pd.read_csv('server_train.csv') test_df = pd.read_csv('server_test.csv') train_df.head() # - ## Check the shape of the train and test dataframe print(f'Shape of train data frame: {train_df.shape}') print(f'Shape of test data frame: {test_df.shape}') # get basic information about the train dataset train_df.info() # - Data types present in the train dataframe are **int64, float64 and object**. # check the distribution of target variable in the train dataset train_df.cpu_load.value_counts() # - **Target variable is imbalanced**. # plot the distribution of target variable in the train dataset fig, ax = plt.subplots(figsize = (10, 6)) sns.countplot(x = 'cpu_load', data = train_df, ax = ax) ax.set_title('Count of target variable in each class', fontsize = 15); # ### Check the null values in the dataframe train_df[train_df.columns[train_df.isna().any()]].isnull().sum() # - **There are no null values present in the train dataset**. ## Check the number of servers in the train dataset train_df.m_id.value_counts() # - There are 7 servers in the dataset with almost equal number of entries. fig, ax = plt.subplots(figsize = (10, 6)) sns.countplot(x = 'm_id', data = train_df, hue = 'cpu_load', ax = ax) ax.set_title('Distribution of target variable in each server', fontsize = 15); # - All servers have **medium load** except the **server f** which is on **low load**. 
# - Servers **e & f** have been loaded **high** least number of times. # ### NEW WORK # Server a server_a = train_df[train_df.m_id == 'a'] server_a_low = server_a[server_a.cpu_load == 'low'] server_a_medium = server_a[server_a.cpu_load == 'medium'] server_a_high = server_a[server_a.cpu_load == 'high'] fig, ax = plt.subplots(1, 3, figsize = (16, 6)) server_a_low['syst_direct_ipo_rate'].plot(kind = 'hist', bins = 50, ax = ax[0]) server_a_medium['syst_direct_ipo_rate'].plot(kind = 'hist', bins = 50, ax = ax[1]) server_a_high['syst_direct_ipo_rate'].plot(kind = 'hist', bins = 50, ax = ax[2]); # set the values of server_a for high load server_a_high.loc[server_a_high.syst_direct_ipo_rate > 4000, 'syst_direct_ipo_rate'] = 4000 server_a_low.syst_direct_ipo_rate.sort_values()[:30] server_a_low.syst_direct_ipo_rate.sort_values()[-30:] server_a_medium.syst_direct_ipo_rate.sort_values()[:30] server_a_medium.syst_direct_ipo_rate.sort_values()[-30:] server_a_high.syst_direct_ipo_rate.sort_values()[:30] ## Check the number of servers in the test dataset test_df.m_id.value_counts() # - Same number of servers are present in the **train and test data set**. 
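The `.loc` assignment a few cells above caps `syst_direct_ipo_rate` at 4000 for the high-load slice, a simple winsorizing step for outliers. In plain Python the same capping looks like:

```python
def clip_upper(values, cap):
    """Cap each value at `cap`, mirroring the .loc assignment above."""
    return [min(v, cap) for v in values]

print(clip_upper([3500, 4200, 5000], 4000))  # [3500, 4000, 4000]
```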
# Description of the dataset pd.set_option('display.max_rows', 86) train_df.describe().T # ### Exploratory Data Analysis def box_plot(column_name): """ This function will draw box plot for the column_name if its datatype is float64 Args: column_name: name of the column in the train_df """ if train_df[column_name].dtypes == 'float64': fig, ax = plt.subplots(figsize = (15, 8)) sns.boxplot(train_df.m_id, train_df[column_name], hue = train_df.cpu_load, dodge = True, ax = ax) ax.set_title(f'Distribution of {column_name} column variable across the servers', fontsize = 15) else: pass # ### Explore each float type column in the train dataframe # set the order of categories in m_id column of the dataframe m_id_order = ['a', 'b', 'c', 'd', 'e', 'f', 'g'] ordered_cat = pd.api.types.CategoricalDtype(ordered = True, categories = m_id_order) train_df['m_id'] = train_df['m_id'].astype(ordered_cat) # set the order of categories in cpu_load column of the dataframe cpu_load_order = ['low', 'medium', 'high'] ordered_load = pd.api.types.CategoricalDtype(ordered = True, categories = cpu_load_order) train_df['cpu_load'] = train_df['cpu_load'].astype(ordered_load) box_plot('syst_direct_ipo_rate') # - **Server a** has highest values of **syst_direct_ipo_rate** when cpu is on **high_load** among all the servers. # - Server **b, e, f** have no outliers when cpu is on **high_load**, while other servers have outliers. # - Outliers are present in all the servers when cpu is on **low & medium** load. # - **low_load** on all the servers lies below 1000, **medium_load** lies around 2000 and **high_load** is around 4000 (except server a). box_plot('syst_buffered_ipo_rate') # - **Server a** has highest values of **syst_buffered_ipo_rate** on **all loading conditions of cpu** among all the servers. # - All servers, except server a has **similar syst_buffered_ipo_rate on all loading conditions of cpu**. 
box_plot('syst_page_fault_rate') # - **Server f** has highest values of **syst_page_fault_rate** when cpu is on **high_load**. # - All servers have similar values of **syst_page_fault_rate** when cpu is on **low_load**. box_plot('syst_page_read_ipo_rate') # - **Server a and c** has highest **syst_page_read_ipo_rate** when cpu is on **high_load**. # - **Server f** has highest **syst_page_read_ipo_rate** when cpu is on **medium_load**. box_plot('page_global_valid_fault_rate') # - **Server a and c** has highest **page_global_valid_fault_rate** when cpu is on **high_load**. # - **Server f** has highest **page_global_valid_fault_rate** when cpu is on **medium_load**. box_plot('io_mailbox_write_rate') # - **Server a** has highest values of **io_mailbox_write_rate** on **all loading conditions of cpu**. # - **Server f** has different behaviour on all loading conditions of cpu. box_plot('io_split_transfer_rate') # - All servers have similar values of **io_split_transfer_rate** when cpu is on **low_load**. # - **Server a and f** has least values of **io_split_transfer_rate** on **all loading conditons of cpu**. box_plot('io_file_open_rate') # - All servers have similar values of **io_file_open_rate** under **all loading condtions of cpu**. box_plot('io_logical_name_trans') # - **Server a** has highest values of **io_logical_name_trans** on **all loading conditions of cpu**. box_plot('io_page_reads') # - All servers have similar values of **io_page_reads** under **all loading condtions of cpu**. box_plot('io_page_writes') # - **Server f** has unusual values of **io_page_writes** on **all loading conditions of cpu**. # - **Server a** has highest values of **io_page_writes** when cpu is on **low and medium load**. box_plot('page_free_list_faults') # - **Server a** has large values of **page_free_list_faults** when cpu is on **medium and high load**. 
box_plot('page_modified_list_faults') # - **Server a** has highest values of **page_modified_list_faults** when cpu is on **high_load**. box_plot('page_demand_zero_faults') # - **Server f** has highest values of **page_demand_zero_faults** when cpu is on **high_load**. box_plot('app07_dirio') # - The above box plot of **app07_dirio** has very small values on all loading conditions of cpu. box_plot('app07_bufio') # - **Server b, d and g** have mean values of **app07_bufio** around 500 when cpu is on **medium and high load**. # - **Server a** has highest values of **app07_bufio** when cpu is on **medium and high load**. box_plot('app07_pgflts') # - All servers have similar values of **app07_pgflts** on all **loading conditions of cpu** except few outliers. col_list = ['app04_dirio', 'app04_bufio', 'app04_pgflts', 'app08_dirio', 'app08_bufio', 'app08_pgflts', 'app01_dirio', 'app01_bufio', 'app01_pgflts', 'app05_dirio', 'app05_bufio', 'app05_pgflts', 'app03_dirio', 'app03_bufio', 'app03_pgflts', 'app02_dirio', 'app02_bufio', 'app02_pgflts', 'lla0_pkts_recvpsec', 'lla0_pkts_sentpsec', 'llb0_pkts_recvpsec', 'llb0_pkts_sentpsec', 'ewc0_pkts_recvpsec', 'ewc0_pkts_sentpsec', 'ewd0_pkts_recvpsec', 'ewd0_pkts_sentpsec'] for col in col_list[:18]: box_plot(col) # ### app04_dirio # - Only **Server a and b** have values in **app04_dirio** column. # - Most of the values are present in **Server a**. # # ### app04_bufio # - Only **Server a and b** have values in **app04_bufio** column. # - Most of the values are present in **Server a**. # # ### app04_pgflts # - Almost all the values of **app04_pgflts** are present in **Server a**. # # ### app08_dirio # - Only **Server c and d** have values in **app08_dirio** column. # - Most of the values are present in **Server c**. # # ### app08_bufio # - Only **Server c and d** have values in **app08_bufio** column. # - Most of the values are present in **Server c**. 
# # ### app08_pgflts # - All data of **app08_pgflts** is present in **Server c**. # # ### app01_dirio # - All data of **app01_dirio** is present in **Server d**. # # ### app01_bufio # - All data of **app01_bufio** is present in **Server d**. # # ### app01_pgflts # - All data of **app01_pgflts** is present in **Server d**. # # ### app05_dirio # - **Server a and d** have least outliers in **app05_dirio**. # # ### app05_bufio # - **Server a and d** have similar values in **app05_bufi0** for all **loading conditions of cpu**, while all other servers have similar values, except **Server f**. # # ### app05_pgflts # - All servers have similar values in **app05_pgflts** except few outliers. # # ### app03_dirio # - All data of **app03_dirio** is present in **Server a**. # # ### app03_bufio # - All data of **app03_bufio** is present in **Server a**. # # ### app03_pgflts # - All data of **app03_pgflts** is present in **Server a**. # # ### app02_dirio # - **Server b** has highest values in **app02_dirio** on **all loading conditons of cpu**, while **Server f** has least values. # # ### app02_bufio # - **Server b** has highest values in **app02_bufio** on **all loading conditons of cpu**, while **Server f** has least values. # # ### app02_pgflts # - **Server f** has no values in **app02_pgflts**. for col in col_list[18:]: box_plot(col) # ### lla0_pkts_recvpsec # - All servers have similar values, except few outliers. Very large values are present in this column. # # ### lla0_pkts_sentpsec # - All servers have similar values, except few outliers. Very large values are present in this column. # # ### llb0_pkts_recvpsec # - **Server a and b** have very large values of **llb0_pkts_recvpsec**, while **Server f** has the lowest. # # # ### llb0_pkts_sentpsec # - **Server a and b** have very large values of **llb0_pkts_sentpsec**, while **Server f** has the lowest. 
# # ### ewc0_pkts_recvpsec, ewc0_pkts_sentpsec, ewd0_pkts_recvpsec and ewd0_pkts_sentpsec # - All servers have similar values in all these columns under all loading conditions, except for a few outliers. # - The **app08_pgflts, app01_pgflts & app02_pgflts** columns have very few unique values # filter the float64 columns from the train dataframe df_float = train_df.select_dtypes(include = ['float64']) df_float.shape # ### Check the unique values in each of the 3 columns [app08_pgflts, app01_pgflts & app02_pgflts] # value counts of app01_pgflts df_float.app01_pgflts.value_counts() # value counts of app02_pgflts df_float.app02_pgflts.value_counts() # value counts of app08_pgflts df_float.app08_pgflts.value_counts() # - The 3 columns above have a very large number of **zero values**. # ### Let's count the number of zeros in each column of the df_float dataframe df_float.isin([0]).sum().plot(kind = 'barh', figsize = (12, 14), ylabel = 'count', xlabel = 'columns', title = 'Count of zeros in the df_float dataframe', fontsize = 12); # - There are a large number of zeros in the columns of the df_float dataframe # ### Check correlations in the df_float dataframe ## plot the correlation matrix for the df_float dataset corr = df_float.corr() corr.style.background_gradient(cmap = 'coolwarm') # ### Highly correlated features in the df_float dataframe # - syst_page_fault_rate # - syst_page_read_ipo_rate # - page_page_write_ipo_rate # - page_global_valid_fault_rate # - io_page_reads # - io_page_writes # - page_modified_list_faults # - page_demand_zero_faults # - app08_dirio # - app08_bufio # - app01_dirio # - app01_bufio # - app02_dirio # - app02_bufio # - llb0_pkts_recvpsec # - llb0_pkts_sentpsec # ### Box plot of individual columns in the dataset def plot(column_name): """ Draw a box plot of column_name against cpu_load. Args: column_name: name of the column in train_df """ fig, ax = plt.subplots(figsize = (15, 8)) sns.boxplot(x = train_df.cpu_load, y = train_df[column_name], dodge = True, ax = ax)
ax.set_title(f'Distribution of {column_name} column variable across the target', fontsize = 15) for col in train_df.columns[1:15]: plot(col) for col in train_df.columns[15:30]: plot(col) for col in train_df.columns[30:45]: plot(col) for col in train_df.columns[45:60]: plot(col) for col in train_df.columns[60:75]: plot(col) ## filter the int64 columns from the train dataset df_int = train_df.select_dtypes(include = ['int64']) df_int.shape ## Check the number of unique values in the df_int dataframe df_int.nunique() # ### Explore each column of type int64 in the dataset by using countplot def count_plot(column): fig, ax = plt.subplots(figsize = (12, 6)) sns.countplot(x = column, hue = train_df.cpu_load, data = train_df, ax = ax) ax.set_title(f'Count plot of {column} column of the dataset', fontsize = 15) plt.legend(loc = 1); columns = df_int.columns[df_int.nunique() <= 12] for col in columns: count_plot(col) def print_value_counts(col): return df_int[col].value_counts() for col in columns: print(f'Value counts of column {col}:') print(print_value_counts(col)) print('-------') ## Count the number of zeros in each column df_int.isin([0]).sum().plot(kind = 'barh', figsize = (12, 14), ylabel = 'count', xlabel = 'columns', title = 'Count of zeros in the dataset', fontsize = 12); ## plot the correlation matrix for the df_int dataframe zero_cols = ['app06_dirio', 'app06_bufio', 'app06_pgflts', 'app03_proccount', 'app06_proccount'] corr = df_int.drop(zero_cols, axis = 1).corr() corr.style.background_gradient(cmap = 'coolwarm') # ### Highly correlated features in df_int dataframe are: # - syst_process_count # - page_modified_list_size # - state_lef # - app06_pagesgbl # - app07_proccount # - app07_pagesgbl # - app07_pagesproc # - app04_proccount # - app04_pagesgbl # - app04_pagesproc # - app08_proccount # - app08_pagesgbl # - app08_pagesproc # - app01_proccount # - app01_pagesgbl # - app01_pagesproc # -
app05_proccount # - app05_pagesgbl # - app05_pagesproc # - app03_pagesgbl # - app03_pagesproc # - app02_proccount # - app02_pagesgbl # - app02_pagesproc # - tcp_in # - tcp_out # - tcp_rxdup # - tcp_retxpk # - tcp_retxto # ### Baseline Random Forest Model from sklearn.model_selection import train_test_split, cross_val_score, KFold from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import confusion_matrix, classification_report, f1_score, accuracy_score # ### One hot encode the server id and cpu load column mid_encoding = pd.get_dummies(train_df.m_id, prefix = 'server_') target = train_df.cpu_load target_encoding = pd.get_dummies(train_df.cpu_load) train_df.drop(['m_id', 'cpu_load'], axis = 1, inplace = True) train_df = pd.concat([train_df, mid_encoding], axis = 1) train_df.head() # ### Split data into train and validation sets train_x, val_x, train_y, val_y = train_test_split(train_df, target, test_size = 0.20, stratify = target, random_state = 42) print(f'Shape of train_x: {train_x.shape}\nShape of train_y: {train_y.shape}') print(f'Shape of val_x: {val_x.shape}\nShape of val_y: {val_y.shape}') # create the model rf_model = RandomForestClassifier() rf_model.fit(train_x, train_y) pred = rf_model.predict(val_x) confusion_matrix(val_y, pred) # accuracy and f1_score accuracy_score(val_y, pred), f1_score(val_y, pred, average = None) rf_model.feature_importances_ # ### Make predictions on test data test_df = pd.get_dummies(test_df.iloc[:, 1:]) test_df.head() predictions = pd.DataFrame(rf_model.predict_proba(test_df), columns = rf_model.classes_) predictions.head()
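The "highly correlated features" lists above (here and in the earlier df_float section) were read off the coloured correlation matrices by eye. The same pairs can be pulled out programmatically. A minimal sketch, assuming a numeric DataFrame like `df_float` or `df_int`; the 0.9 threshold, the helper name and the toy frame are arbitrary choices, not part of the notebook:

```python
import numpy as np
import pandas as pd

def correlated_pairs(df, threshold=0.9):
    """Return (col_a, col_b, |r|) for column pairs whose absolute Pearson
    correlation exceeds the threshold."""
    corr = df.corr().abs()
    # Keep only the upper triangle (k=1 drops the diagonal) so each pair
    # is reported exactly once; everything else becomes NaN.
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    return [
        (a, b, round(float(upper.loc[a, b]), 3))
        for a in upper.index
        for b in upper.columns
        if pd.notna(upper.loc[a, b]) and upper.loc[a, b] > threshold
    ]

# Toy example: y is an exact linear function of x, z is unrelated noise.
toy = pd.DataFrame({"x": [1.0, 2.0, 3.0, 4.0],
                    "y": [2.0, 4.0, 6.0, 8.0],
                    "z": [1.0, -1.0, 1.0, -1.0]})
print(correlated_pairs(toy))  # only the (x, y) pair crosses the threshold
```

On the real data this could replace both hand-written lists, and the threshold can be tuned to decide which features to drop before fitting the random forest.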
Machine Learning/Jupyter Notebooks/Server Load Prediction.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # # Deploying Bokeh Apps import numpy as np import holoviews as hv hv.extension('bokeh') # ## Purpose # # HoloViews is an incredibly convenient way of working interactively and exploratorily within a notebook or command-line context; however, once you have implemented a polished interactive dashboard or some other complex interactive visualization, you often want to deploy it outside the notebook. The [bokeh server](http://bokeh.pydata.org/en/latest/docs/user_guide/server.html) provides a very convenient way of deploying HoloViews plots and interactive dashboards in a scalable and flexible manner. The bokeh server allows all the usual interactions that HoloViews lets you define and more, including: # # * responding to plot events and tool interactions via [Linked Streams](Linked_Stream.ipynb) # * generating and interacting with plots via the usual widgets that HoloViews supports for HoloMap and DynamicMap objects. # * using periodic and timeout events to drive plot updates # * combining HoloViews plots with custom bokeh plots to quickly write highly customized apps. # # ## Overview # # In this guide we will cover how we can deploy a bokeh app from a HoloViews plot in a number of different ways: # # 1. Inline from within the Jupyter notebook # # 2. Starting a server interactively and opening it in a new browser window. # # 3. From a standalone script file # # 4. Combining HoloViews and Bokeh models to create a more customized app # # If you have read a bit about HoloViews you will know that HoloViews objects are not themselves plots; instead they contain sufficient data and metadata allowing them to be rendered automatically in a notebook context. In other words, when a HoloViews object is evaluated, a backend-specific ``Renderer`` converts the HoloViews object into bokeh models, a matplotlib figure or a plotly graph.
This intermediate representation is then rendered as an image or as HTML with associated Javascript, which is what ends up being displayed. # # ## The workflow # # The most convenient way to work with HoloViews is to iteratively improve a visualization in the notebook. Once you have developed a visualization or dashboard that you would like to deploy, you can use the ``BokehRenderer`` to save the visualization or deploy it as a bokeh server app. # # Here we will create a small interactive plot, using [Linked Streams](Linked_Streams.html), which mirrors the points selected using box- and lasso-select tools in a second plot and computes some statistics: # + # %%opts Points [tools=['box_select', 'lasso_select']] # Declare some points points = hv.Points(np.random.randn(1000, 2)) # Declare points as source of selection stream selection = hv.streams.Selection1D(source=points) # Write function that uses the selection indices to slice points and compute stats def selected_info(index): arr = points.array()[index] if index: label = 'Mean x, y: %.3f, %.3f' % tuple(arr.mean(axis=0)) else: label = 'No selection' return points.clone(arr, label=label).opts(style=dict(color='red')) # Combine points and DynamicMap layout = points + hv.DynamicMap(selected_info, streams=[selection]) layout # - # <img src='http://assets.holoviews.org/gifs/examples/streams/bokeh/point_selection1d.gif'></img> # #### Working with the BokehRenderer # # When working with the bokeh server, or when you want to manipulate a backend-specific plot object, you will have to use a HoloViews ``Renderer`` directly to convert the HoloViews object into the backend-specific representation. Therefore we will start by getting a hold of a ``BokehRenderer``: renderer = hv.renderer('bokeh') print(renderer) # ```python # BokehRenderer() # ``` # All ``Renderer`` classes in HoloViews are so-called ParameterizedFunctions; they provide both classmethods and instance methods to render an object.
You can easily create a new ``Renderer`` instance using the ``.instance`` method: renderer = renderer.instance(mode='server') # Renderers can have different modes; in this case we will instantiate the renderer in ``'server'`` mode, which tells the Renderer to render the HoloViews object to a format that can easily be deployed as a server app. Before going into more detail about deploying server apps we will quickly remind ourselves how the renderer turns HoloViews objects into bokeh models. # ### Figures # The BokehRenderer converts the HoloViews object to a HoloViews ``Plot``, which holds the bokeh models that will be rendered to screen. As a very simple example we can convert our ``Layout`` from above into a HoloViews plot: hvplot = renderer.get_plot(layout) print(hvplot) # ``` # <LayoutPlot LayoutPlot01808> # ``` # Using the ``state`` attribute on the HoloViews plot we can access the bokeh ``Column`` model, which we can then work with directly. hvplot.state # **Column**( id = '5a8b7949-decd-4a96-b1f8-8f77ec90e5bf', …) # # In the background this is how HoloViews converts any HoloViews object into bokeh models, which can then be converted to embeddable or standalone HTML and be rendered in the browser. This conversion is usually done in the background using the ``figure_data`` method: html = renderer._figure_data(hvplot) # ### Bokeh Documents # In bokeh the [``Document``](http://bokeh.pydata.org/en/latest/docs/reference/document.html) is the basic unit at which Bokeh models (such as plots, layouts and widgets) are held and serialized. The serialized JSON representation is then sent to BokehJS on the client side browser.
When in ``'server'`` mode the BokehRenderer will automatically return a server Document: renderer(layout) # ``` # (<bokeh.document.Document at 0x11afc7590>, # {'file-ext': 'html', 'mime_type': u'text/html'}) # ``` # We can also easily use the ``server_doc`` method to get a bokeh ``Document``, which does not require you to make an instance in 'server' mode. doc = renderer.server_doc(layout) doc.title = 'HoloViews App' # ## Deploying with ``bokeh serve`` # Deployment from a script with bokeh serve is one of the most common ways to deploy a bokeh app. Any ``.py`` or ``.ipynb`` file that attaches a plot to bokeh's ``curdoc`` can be deployed using ``bokeh serve``. The easiest way to do this is using the ``BokehRenderer.server_doc`` method, which accepts any HoloViews object, generates the appropriate bokeh models and then attaches them to ``curdoc``. See below for a full standalone script: # ```python # import numpy as np # import holoviews as hv # import holoviews.plotting.bokeh # # renderer = hv.renderer('bokeh') # # points = hv.Points(np.random.randn(1000, 2)).opts(plot=dict(tools=['box_select', 'lasso_select'])) # selection = hv.streams.Selection1D(source=points) # # def selected_info(index): # arr = points.array()[index] # if index: # label = 'Mean x, y: %.3f, %.3f' % tuple(arr.mean(axis=0)) # else: # label = 'No selection' # return points.clone(arr, label=label).opts(style=dict(color='red')) # # layout = points + hv.DynamicMap(selected_info, streams=[selection]) # # doc = renderer.server_doc(layout) # doc.title = 'HoloViews App' # ``` # In just a few steps, i.e. by attaching our plot to a Document with ``renderer.server_doc``, we have gone from an interactive plot which we can iteratively refine in the notebook to a deployable bokeh app. Note that we can also deploy an app directly from a notebook.
By adding ``BokehRenderer.server_doc(holoviews_object)`` to the end of the notebook any regular ``.ipynb`` file can be made into a valid bokeh app, which can be served with ``bokeh serve example.ipynb``. # # In addition to starting a server from a script we can also start up a server interactively, so let's do a quick deep dive into bokeh ``Application`` and ``Server`` objects and how we can work with them from within HoloViews. # ## Bokeh Applications and Server # A bokeh ``Application`` encapsulates a Document and allows it to be deployed on a bokeh server. The ``BokehRenderer.app`` method provides an easy way to create an ``Application`` and either display it immediately in a notebook or manually include it in a server app. # # To try this out, we'll define a slightly simpler plot to deploy as a server app: a ``DynamicMap`` of a sine ``Curve`` varying by frequency, phase and amplitude. # + def sine(frequency, phase, amplitude): xs = np.linspace(0, np.pi*4) return hv.Curve((xs, np.sin(frequency*xs+phase)*amplitude)).opts(plot=dict(width=800)) ranges = dict(frequency=(1, 5), phase=(-np.pi, np.pi), amplitude=(-2, 2), y=(-2, 2)) dmap = hv.DynamicMap(sine, kdims=['frequency', 'phase', 'amplitude']).redim.range(**ranges) app = renderer.app(dmap) print(app) # - # ``` # <bokeh.application.application.Application object at 0x11c0ab090> # ``` # Once we have a bokeh Application we can manually create a ``Server`` instance to deploy it. To start a ``Server`` instance we simply define a mapping between the URL paths and apps that we want to deploy. Additionally we define a port (defining ``port=0`` will use any open port).
# + from tornado.ioloop import IOLoop from bokeh.server.server import Server loop = IOLoop.current() server = Server({'/': app}, port=0) # - # Next we can define a callback on the IOLoop that will open the server app in a new browser window and actually start the app (and, if outside the notebook, the IOLoop): def show_callback(): server.show('/') loop.add_callback(show_callback) server.start() # Outside the notebook the ioloop needs to be started # loop.start() # After running the cell above you should have noticed a new browser window popping up displaying our plot. Once you are done playing with it you can stop it with: server.stop() # The ``BokehRenderer.app`` method allows us to do the same thing automatically (but less flexibly) using the ``show=True`` and ``new_window=True`` arguments: server = renderer.app(dmap, show=True, new_window=True) # <img width='80%' src="http://assets.holoviews.org/gifs/guides/user_guide/Deploying_Bokeh_Apps/bokeh_server_new_window.png"></img> # We will once again stop this Server before continuing: server.stop()
The simplest way of achieving this is to attach a ``Counter`` stream to the plot, which is incremented on each callback. As a simple demo we'll compute a phase offset from the counter value, animating the sine wave: # + def sine(counter): phase = (counter*0.1) % (np.pi*2) xs = np.linspace(0, np.pi*4) return hv.Curve((xs, np.sin(xs+phase))).opts(plot=dict(width=800)) dmap = hv.DynamicMap(sine, streams=[hv.streams.Counter()]) app = renderer.app(dmap, show=True, websocket_origin='localhost:8888') # - # <img width='80%' src='http://assets.holoviews.org/gifs/guides/user_guide/Deploying_Bokeh_Apps/bokeh_server_periodic.gif'></img> # Once we have created the app we can start a periodic callback with the ``periodic`` method on the ``DynamicMap``. The first argument to the method is the period and the second argument the number of executions to trigger (we can set this value to ``None`` to set up an indefinite callback). As soon as we start this callback you should see the Curve above become animated. dmap.periodic(0.1, 100) # ## Combining HoloViews and Bokeh Plots/Widgets # While HoloViews provides very convenient ways of creating an app it is not as fully featured as bokeh itself is. Therefore we often want to extend a HoloViews-based app with bokeh plots and widgets created directly using the bokeh API. Using the ``BokehRenderer`` we can easily convert a HoloViews object into a bokeh model, which we can combine with other bokeh models as desired. # # To see what this looks like we will use the sine example again but this time connect a [Stream](Stream.ipynb) to a manually created bokeh slider widget and play button. To display this in the notebook we will reuse what we learned about creating a ``Server`` instance using a ``FunctionHandler``; you can of course run this in a script by calling the ``modify_doc`` function with the ``Document`` returned by the bokeh ``curdoc()`` function.
# + import numpy as np import holoviews as hv from bokeh.application.handlers import FunctionHandler from bokeh.application import Application from bokeh.io import show from bokeh.layouts import layout from bokeh.models import Slider, Button renderer = hv.renderer('bokeh').instance(mode='server') # Create the holoviews app again def sine(phase): xs = np.linspace(0, np.pi*4) return hv.Curve((xs, np.sin(xs+phase))).opts(plot=dict(width=800)) stream = hv.streams.Stream.define('Phase', phase=0.)() dmap = hv.DynamicMap(sine, streams=[stream]) # Define valid function for FunctionHandler # when deploying as script, simply attach to curdoc def modify_doc(doc): # Create HoloViews plot and attach the document hvplot = renderer.get_plot(dmap, doc) # Create a slider and play button def animate_update(): phase = slider.value + 0.2 if phase > end: phase = start slider.value = phase def slider_update(attrname, old, new): # Notify the HoloViews stream of the slider update stream.event(phase=new) start, end = 0, np.pi*2 slider = Slider(start=start, end=end, value=start, step=0.2, title="Phase") slider.on_change('value', slider_update) def animate(): if button.label == '► Play': button.label = '❚❚ Pause' doc.add_periodic_callback(animate_update, 50) else: button.label = '► Play' doc.remove_periodic_callback(animate_update) button = Button(label='► Play', width=60) button.on_click(animate) # Combine the holoviews plot and widgets in a layout plot = layout([ [hvplot.state], [slider, button]], sizing_mode='fixed') doc.add_root(plot) return doc # To display in the notebook show(modify_doc, notebook_url='localhost:8888') # To display in a script # doc = modify_doc(curdoc()) # - # <img width='80%' src='http://assets.holoviews.org/gifs/guides/user_guide/Deploying_Bokeh_Apps/bokeh_server_play.gif'></img> # If you had trouble following the last example, you will have noticed how verbose things can get when we drop down to the bokeh API.
The ability to customize the plot comes at the cost of some additional complexity, but the flexibility of composing plots manually is there when we need it.
examples/user_guide/Deploying_Bokeh_Apps.ipynb
# --- # jupyter: # jupytext: # notebook_metadata_filter: all,-language_info # split_at_heading: true # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # This page derives from the [equivalent # page](http://introtopython.org/introducing_functions.html) in the excellent # [introduction to Python](http://introtopython.org) by [Eric # Matthes](https://github.com/ehmatthes). The original page has an [MIT # license](https://github.com/ehmatthes/intro_programming/blob/master/LICENSE.md). # # # Introducing Functions # # One of the core principles of any programming language is, "Don't Repeat Yourself". If you have an action that should occur many times, you can define that action once and then call that code whenever you need to carry out that action. # # We are already repeating ourselves in our code, so this is a good time to introduce simple functions. Functions mean less work for us as programmers, and effective use of functions results in code that is less error-prone. # # ## What are functions? # # Functions are *named recipes*. The recipe is a procedure; a set of actions # that we group together. You have already used a number of functions from the # core Python language, such as `round`, and `type`. We can define our own # functions, which allows us to "teach" Python new behavior. # # ## General Syntax # # A function looks something like this: # Let's define a function. def function_name(argument_1, argument_2): # Do whatever we want this function to do, # using argument_1 and argument_2. # In this case we just add the two arguments. a_value = argument_1 + argument_2 # Send back (return) the calculated result. return a_value # We would call the function like this: # Just to define some values value_1 = 5 value_2 = 15 # Use function_name to call the function. 
function_name(value_1, value_2) # Notice that our new function **returns** a result. Jupyter displays the # result for us. # # This code isn't very useful, but it shows how functions are used in general. # # - **Defining a function** # - Give the keyword `def`, which tells Python that you are about to # *define* a function. # - Give your function a name. A variable name tells you what kind of value # the variable contains; a function name should tell you what the function # does. # - Give names for each value the function needs in order to do its work. # - These are basically variable names, but they are only used in the # function. # - They can be different names than what you use in the rest of your # program. # - These are called the function's *arguments*. # - Make sure the function definition line ends with a colon. # - Inside the function, write whatever code you need to make the function # do its work. # - **Using your function** # - To *call* your function, write its name followed by parentheses. # - Inside the parentheses, give the values you want the function to work # with. # - These can be variables such as `current_name` and `current_age`, or # they can be actual values such as "eric" and 5. # # ## Basic Example # # For a simple first example, we will look at a program that compliments people. # Let's look at the example, and then try to understand the code. First we will # look at a version of this program with no functions. # + # The "\n" below means start a new line. msg = "Great work, Adriana!\n" msg = msg + "Thanks for your efforts.\n" print(msg) msg = "Great work, Billy!\n" msg = msg + "Thanks for your efforts.\n" print(msg) msg = "Great work, Caroline!\n" msg = msg + "Thanks for your efforts.\n" print(msg) # - # Functions take repeated code, put it in one place, and then you call that code # when you want to use it. Here is a function which assembles the message for one person. 
def thank_you(name): # This function returns a two-line personalized thank you message. msg = "Great work, " + name + "\n" msg = msg + "Thanks for your efforts.\n" return msg # We can use our new function like this: msg = thank_you('Sidney') print(msg) # Or - to be even more brief, we can return the message and print it, all in the same line. print(thank_you('Sidney')) # We can now use our function to do all three thank you's above in a more # compact form: print(thank_you('Adriana')) print(thank_you('Billy')) print(thank_you('Caroline')) # In our original code, we assembled the message three times, and the only # difference was the name of the person being thanked. When you see repetition # like this, you can usually make your program more efficient by defining a # function. # # The keyword *def* tells Python that we are about to define a function. We give our function a name, *thank\_you()* in this case. A variable's name should tell us what kind of information it holds; a function's name should tell us what the function does. We then put parentheses. Inside these parentheses we create variable names for any variable the function will need to be given in order to do its job. In this case the function will need a name to include in the thank you message. The variable `name` will hold the value that is passed into the function `thank_you()`. # # To use a function we give the function's name, and then put any values the function needs in order to do its work. In this case we call the function three times, each time passing it a different name. # # ### A common error # # A function must be defined before you use it in your program or notebook. For # example, putting the function at the end of the program or notebook cell would # not work.
# + tags=["raises-exception"] print(thank_you_effusively('Adriana')) print(thank_you_effusively('Billy')) print(thank_you_effusively('Caroline')) def thank_you_effusively(name): # This function compiles another two-line personalized thank you message. msg = "EXCELLENT work, " + name + "!\n" msg = msg + "Thank you for your efforts.\n" return msg # - # On the first line we ask Python to run the function `thank_you_effusively()`, # but Python does not yet know how to do this function. We define our functions # at the beginning of our programs, and then we can use them when we need to. # # Here's what that should have looked like. We *first* define the function, *then* we call the function. # + def thank_you_effusively(name): # This function compiles another two-line personalized thank you message. msg = "EXCELLENT work, " + name + "!\n" msg = msg + "Thank you for your efforts.\n" return msg print(thank_you_effusively('Adriana')) print(thank_you_effusively('Billy')) print(thank_you_effusively('Caroline')) # - # ## More flexibility # # We can also make functions that get more than one argument. For example, let's say I wanted to customize the message, to tell the person how well they had done. def thank_you_specifically(name, quality): # This function compiles a more personalized thank you message. # Notice we use "name" and "quality" in this line. msg = quality + " work, " + name + "!\n" msg = msg + "Thank you for your efforts.\n" return msg # We can use our new function like this: print(thank_you_specifically('Matthew', 'Barely acceptable')) # Or like this: print(thank_you_specifically('Adriana', 'OK')) print(thank_you_specifically('Billy', 'Shocking')) print(thank_you_specifically('Caroline', 'AMAZING')) # ## Advantages of using functions # # You might be able to see some advantages of using functions, through this example: # # - We write a set of instructions once. We save some work in this simple # example, and we save even more work in larger programs. 
# - When our function works, we don't have to worry about that code anymore. # Every time you repeat code in your program, you introduce an opportunity to # make a mistake. Writing a function means there is one place to fix mistakes, # and when those bugs are fixed, we can be confident that this function will # continue to work correctly. # - We can modify our function's behavior, and that change takes effect every # time the function is called. This is much better than deciding we need some # new behavior, and then having to change code in many different places in our # program. # You can think of functions as a way to "teach" Python some new behavior. In # this case, we taught Python how to say thank you to someone; now we can tell # Python to do this with any name we choose, whenever we want to.
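The "one place to fix mistakes" point can be seen directly: if we later decide every message should end with an exclamation mark, we edit one line inside the function and every call site picks up the change. This sketch just restates the earlier `thank_you` function with that hypothetical fix applied:

```python
def thank_you(name):
    # Editing this one line changes every message printed below.
    msg = "Great work, " + name + "!\n"
    msg = msg + "Thanks for your efforts.\n"
    return msg

# All three messages now carry the new wording, without touching this loop.
for person in ["Adriana", "Billy", "Caroline"]:
    print(thank_you(person))
```

Without the function, the same fix would have meant editing three separate message-building blocks and hoping we didn't miss one.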
functions-conditionals/introducing_functions.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline import matplotlib import matplotlib.pyplot as plt import numpy as np # # 1D Example dx = 0.01 x = np.arange(0, 1, dx) y = np.sin(x * np.pi) pdf = y / y.sum() cdf = pdf.cumsum() fig = plt.figure(figsize=(9, 3), dpi=96) plt.subplot(121) plt.plot(pdf) plt.subplot(122) plt.plot(cdf) # + r = np.random.rand(5000) proj = np.interp(r, cdf, x) fig = plt.figure(figsize=(9, 3), dpi=96) p = plt.hist(proj, bins=50, edgecolor='black') # - proj_table = np.interp(x, cdf, x) y = np.interp(r, x, proj_table) fig = plt.figure(figsize=(9, 3), dpi=96) p = plt.hist(y, bins=50, edgecolor='black') # # 2D Example # + # dx, dy = 0.01, 0.01 dx, dy = 0.25, 0.25 # dx, dy = 2.0 / 3.0, 2.0 / 3.0 # dx, dy = 1.0, 1.0 nx = int(2.0 / dx) ny = int(2.0 / dy) x = np.arange(0, nx + 1) * dx - 1.0 y = np.arange(0, ny + 1) * dy - 1.0 # Generate a desired PDF & CDF xx, yy = np.meshgrid(x[:-1], y[:-1]) # f = np.exp(-(xx - 0.2) ** 2 / 0.2 + 0.1 * (xx + 0.1) * (yy - 0.1) / 0.02 - (yy + 0.2) ** 2 / 0.1) f = np.exp(-(xx - 0.4) ** 2 / 0.2 - 0.1 * (xx - 0.1) * (yy + 0.15) / 0.02 - (yy + 0.2) ** 2 / 0.1) # f = np.exp(-(xx - 0.1) ** 2 / 0.5 - (yy) ** 2 / 0.5) pdf = f / f.sum() cdf = pdf.cumsum() xe = np.arange(0, nx + 1) * dx - 1.0 ye = np.arange(0, ny + 1) * dy - 1.0 # - def show_results(pdf, vx, vy, xe, ye): # Compute the 2D histogram H, _, _ = np.histogram2d(vx, vy, bins=(xe, ye)) H = H.T # The histograms fig = plt.figure(figsize=(11, 4), dpi=96) plt.subplot(121) plt.imshow(pdf / pdf.max(), interpolation='nearest', origin='lower', extent=[x[0], x[-1], y[0], y[-1]]) plt.colorbar() plt.subplot(122) plt.imshow(H / H.max(), interpolation='nearest', origin='lower', extent=[xe[0], xe[-1], ye[0], ye[-1]]) plt.colorbar() # The points fig = plt.figure(figsize=(4, 4), dpi=96) plt.plot(vx,
vy, '.', markersize=1) _ = plt.xlim(-1, 1) _ = plt.ylim(-1, 1) # ## The Naive Way # + count = 10000 r = np.random.rand(count) fx = np.linspace(0.0, 1.0, nx * ny) ii = np.interp(r, cdf, fx) ii = ii * (nx * ny) ix = np.remainder(ii, nx) iy = np.floor(ii / nx) + np.random.rand(count) vx = ix * dx - 1.0 vy = iy * dy - 1.0 show_results(pdf, vx, vy, xe, ye) # - # ## A Necessary Small Change # + fx = np.arange(1, nx * ny + 1) / (nx * ny) ii = np.interp(r, cdf, fx) ii = ii * (nx * ny) ix = np.remainder(ii, nx) iy = np.floor(ii / nx) + np.random.rand(count) vx = ix * dx - 1.0 vy = iy * dy - 1.0 show_results(pdf, vx, vy, xe, ye) # + # Down-sampling ratio # d = 1 # xe = np.arange(0, nx / d + 1) * dx * d - 1.0 # ye = np.arange(0, ny / d + 1) * dy * d - 1.0 # H, _, _ = np.histogram2d(vx, vy, bins=(xe, ye)) # H = H.T
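Both the 1D and 2D cells rely on the same inverse-transform idea: normalize the target density, accumulate it into a CDF, and map uniform draws through the inverse CDF with `np.interp`. A compact, seeded sketch of the 1D case (the grid spacing, sample count and seed here are arbitrary choices, not taken from the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target density proportional to sin(pi * x) on [0, 1).
dx = 0.01
x = np.arange(0.0, 1.0, dx)
pdf = np.sin(np.pi * x)
pdf = pdf / pdf.sum()          # normalize so the CDF ends at 1
cdf = np.cumsum(pdf)

# Inverse-transform sampling: uniform draws mapped through the inverse CDF.
u = rng.random(100_000)
samples = np.interp(u, cdf, x)

# The density is symmetric about 0.5, so the sample mean should sit near 0.5
# (up to a half-bin discretization offset and Monte Carlo noise).
print(samples.mean())
```

The 2D cells are the same trick applied to the flattened 2D PDF, with the flat index then unraveled back into (ix, iy) grid coordinates.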
python/Generate Random Numbers With Targeted Distribution.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import matplotlib.pyplot as plt import numpy as np import pandas as pd import edo from edo.pdfs import Uniform # - def sample_mean(df, p=0.2, num_samples=10): """ Find the mean of a sample from the given dataset. """ means = [] for _ in range(num_samples): mean = df.sample(frac=p).mean().abs().iloc[0] means.append(mean) return max(means) # + Uniform.param_limits["bounds"] = [-1, 1] pop, fit, all_pops, all_fits = edo.run_algorithm( fitness=sample_mean, size=100, row_limits=[5, 50], col_limits=[1, 1], pdfs=[Uniform], max_iter=100, best_prop=0.1, mutation_prob=0.005, seed=0, ) # + fig, ax = plt.subplots(figsize=(14, 8), dpi=300) fs = 20 ax.boxplot(all_fits, positions=range(len(all_fits)), sym=".") ax.set_xlabel("Epoch", size=fs) ax.set_ylabel(r"Fitness", size=fs) ax.set_xticks(range(0, 101, 5)) ax.set_xticklabels(range(0, 101, 5)) for label in ax.get_xticklabels() + ax.get_yticklabels(): label.set_fontsize(fs) plt.tight_layout() plt.savefig("fitness.pdf", transparent=True) # + fig, ax = plt.subplots(figsize=(12, 8), dpi=300) fs = 24 best = np.argmin(fit) df = pop[best].dataframe ax.hist(df[0], bins=12) bbox_props = dict(boxstyle="round", fc="w", ec="0.5", alpha=0.9) ax.text( -0.35, 22, s=f"Mean: {np.round(df[0].mean(), 5)}", fontdict={"fontsize": fs}, bbox=bbox_props, ) ax.set_xlabel("Value", size=fs) ax.set_ylabel("Frequency", size=fs) for label in ax.get_xticklabels() + ax.get_yticklabels(): label.set_fontsize(0.8 * fs) plt.tight_layout() plt.savefig("sample_mean.pdf", transparent=True) # -
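The fitness function passed to `edo.run_algorithm` above can be reproduced standalone. This sketch (with an assumed fixed seed for reproducibility, which the original does not pass) shows what one evaluation computes: the worst-case absolute mean over repeated 20% row subsamples of the candidate dataset.

```python
import numpy as np
import pandas as pd

def sample_mean(df, p=0.2, num_samples=10, seed=0):
    """Worst-case (maximum) absolute sample mean over repeated row subsamples,
    mirroring the fitness function used with `edo` above."""
    rng = np.random.RandomState(seed)
    means = []
    for _ in range(num_samples):
        sample = df.sample(frac=p, random_state=rng)
        means.append(sample.mean().abs().iloc[0])
    return max(means)

# A constant column gives the same mean for every subsample
df = pd.DataFrame({0: np.full(50, 2.0)})
print(sample_mean(df))
```

Minimising this fitness drives the evolved datasets toward columns whose subsample means are all close to zero, which is what the histogram at the end of the notebook illustrates.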
nbs/sample_mean.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns

region = 'blr'
shops = pd.read_csv(f'csv/{region}/shops.csv')
centers = pd.read_csv(f'data/points.{region}.txt')

sns.scatterplot(x="lat", y="lon", data=shops)

from geopy.distance import geodesic

# Use tuples here: set() does not preserve the (lat, lon) order
geodistance = lambda u, v: geodesic(tuple(u), tuple(v)).km

y = [len(centers) for i in range(shops.shape[0])]
nodes = shops.values[:, :]
covered = 0
for i in range(shops.shape[0]):
    for j in range(len(centers)):
        if geodistance(nodes[i], centers.values[j, :]) <= 5:
            covered += 1
            y[i] = j
            break
print((100 * covered) / (shops.shape[0]))

sns.scatterplot(x="lat", y="lon", data=shops, hue=y)
plt.plot(centers.values[:, 0], centers.values[:, 1], 'o')
plt.savefig(f'centers.{region}.png', format='png', dpi='figure')
# -
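The coverage loop above asks, for each shop, whether any centre lies within 5 km. A self-contained sketch of the same computation, with a haversine great-circle distance as a lightweight stand-in for `geopy.distance.geodesic(...).km` and assumed toy coordinates:

```python
import numpy as np

def haversine_km(p, q):
    """Great-circle distance in km between (lat, lon) points in degrees —
    a lightweight stand-in for geopy.distance.geodesic(p, q).km."""
    lat1, lon1, lat2, lon2 = map(np.radians, (*p, *q))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def coverage(shops, centers, radius_km=5.0):
    """Percentage of shops within radius_km of at least one centre."""
    covered = sum(
        any(haversine_km(s, c) <= radius_km for c in centers) for s in shops
    )
    return 100.0 * covered / len(shops)

# Hypothetical (lat, lon) points: two shops near the centre, one far away
shops = [(12.97, 77.59), (13.00, 77.60), (12.50, 77.00)]
centers = [(12.98, 77.60)]
print(coverage(shops, centers))  # two of the three shops fall within 5 km
```

Geodesic (ellipsoidal) distance and haversine differ by well under 1% at city scale, so the stand-in is adequate for a 5 km threshold.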
draw.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # convert_raw
#
# This notebook applies various calibration values and conversion equations to the raw FlowMow2 data.

# #### Get UTM northing and easting values for nav
# Add UTM-9 meters x and y to nav and save to nav_converted.h5

import pandas as pd
nav = pd.read_hdf('../data/interim/nav_raw.h5', 'table')

from pyproj import Proj
p = Proj(proj='utm', zone=9)
nav['x'], nav['y'] = p(nav.lon.values, nav.lat.values)

nav.to_hdf('../data/interim/nav_converted.h5', 'table', append=False, data_columns=True)

# #### Convert Paros values to temperature and pressure
# Paros pressure values are in psia, which according to the [Paros manual](../docs/G8203_Digiquartz_Broadband_Pressure_Transducers_and_Depth_Sensors_with_Frequency_Outputs.pdf) can be converted to pascals by multiplying by 9806.650, or to m of H2O by multiplying by 0.7030696. Temperature is in degrees C. The SBE3 is likely the better instrument to use for temperature. The instrument [calibration constants](../data/info/paros_cals.yaml) are extracted directly from the raw DAT files. Results are saved to paros_converted.h5.

paros = pd.read_hdf('../data/interim/paros_raw.h5', 'table')
paros.head()

import yaml
import flowmow
with open('../data/info/paros_cals.yaml') as f:
    paros_cals = yaml.safe_load(f)
paros['temp'], paros['pressure'] = flowmow.convert_paros(paros.eta, paros.tau, **paros_cals)
paros.head()

import gc
paros.to_hdf('../data/interim/paros_converted.h5', 'table', append=False, data_columns=True)
paros = None
gc.collect();

# #### Convert SBE3 values to temperature
# The Seabird SBE3 [calibration constants](../data/info/sbe3_cals.yaml) are from the [official calibration sheets](../docs/SBE03_2014_cals.pdf) which also contain the conversion equations. More info on the SBE3 [here](../docs/datasheet-03plus-May15.pdf). We know that the SBE3 with serial number 2265 was on the stinger from this [image of the vehicle](../docs/IMG_4014.JPG-1.jpg). Results are saved to sbe3_converted.h5.

sbe3 = pd.read_hdf('../data/interim/sbe3_raw.h5', 'table')
sbe3.head()

with open('../data/info/sbe3_cals.yaml') as f:
    sbe3_cals = yaml.safe_load(f)

# sbe3 2265 was on the stinger and was recorded as counts_0
sbe3['temp_stinger'] = flowmow.convert_sbe3(sbe3.counts_0, **sbe3_cals[2265])
sbe3['temp_top'] = flowmow.convert_sbe3(sbe3.counts_1, **sbe3_cals[2446])
sbe3.head()

sbe3.to_hdf('../data/interim/sbe3_converted.h5', 'table', append=False, data_columns=True)
sbe3 = None
gc.collect();
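The psia-to-metres-of-H2O factor quoted from the Paros manual can be wrapped in a small helper. This is only a sketch using the 0.7030696 constant from the text; the full temperature/pressure equations live in the project's `flowmow.convert_paros` and are not reproduced here:

```python
import numpy as np

# Conversion factor quoted above from the Paros manual: psia -> metres of H2O
PSIA_TO_M_H2O = 0.7030696

def psia_to_depth_m(pressure_psia):
    """Convert Paros pressure readings in psia to metres of water column."""
    return np.asarray(pressure_psia, dtype=float) * PSIA_TO_M_H2O

# One standard atmosphere (~14.7 psia) is roughly 10.3 m of water
print(psia_to_depth_m([14.7]))
```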
notebooks/convert_raw.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Author : <NAME> # ## Task 1: Prediction using Supervised learning # # <a name="company">GRIP @ THE SPARKS FOUNDATION</a> # # ### Technologies: # #### - Programming Language: Python # #### - Libraries: Numpy, Pandas, Matplotlib, Scikitlearn # # - In this regression task I tried to predict the percentage of marks that a student is expected to score based upon the number of hours they studied. # # - This is a simple linear regression task as it involves just two variables. # ### 1. Importing the libraries # importing the required libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn import metrics from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression # ### 2. Reading the data from source #reading the data from the source data = pd.read_csv('https://raw.githubusercontent.com/AdiPersonalWorks/Random/master/student_scores%20-%20student_scores.csv') data.head() # ### 3. Data visualization # visualizing the data data.plot(x = 'Hours',y = 'Scores', style='*') plt.xlabel('Hours studied') plt.ylabel('Percentage Score') plt.title('Hours vs percentage') plt.grid() # ### 4. Data Preprocessing # - This step involved division of data into "attributes" (inputs) and "labels" (outputs) x = data.iloc[:, :-1].values y = data.iloc[:, 1].values # ### 5. Model Training # - Splitting the data into training and testing sets # - Training the algorithm X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=0) # + model = LinearRegression() model.fit(X_train.reshape(-1,1), y_train) print('Training done!') # - # ### 6. 
Line of regression # - visualization of the best-fit line of regression # + # plotting the regression line line = model.coef_*x + model.intercept_ # plotting the test data plt.scatter(x,y) plt.plot(x,line, color='red') # - # ### 7. Making predictions # - we will use test data for prediction # + print(X_test) y_pred = model.predict(X_test) # - # ### 8. Comparing actual vs predicted results # - Comparing actual data to the predicted data # - Plotting the bar graph for actual and predicted values df = pd.DataFrame({'Actual':y_test,'Predicted':y_pred}) df df.plot(kind='bar',color=['red','green']) plt.grid() # ### 9. Estimating Training and testing score print('Training Score: ', model.score(X_train,y_train)) print('Testing Score: ', model.score(X_test,y_test)) # Testing the model with our own data hours = 9.25 test = np.array([hours]) test = test.reshape(-1, 1) new_pred = model.predict(test) print("No of Hours = {}".format(hours)) print("Predicted Score = {}".format(new_pred[0])) # ### 10. Evaluating the model print('Mean Absolute Error:',metrics.mean_absolute_error(y_test, y_pred)) print('Mean Squared Error:', metrics.mean_squared_error(y_test, y_pred)) print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, y_pred))) # ## Conclusion # #### I was successfully able to carry-out Prediction using Supervised ML task and was able to evaluate the model's performance on various parameters # # ## Thank you:)
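With a single feature, `LinearRegression` solves ordinary least squares, which has a closed form: slope = cov(x, y) / var(x), intercept = ȳ − slope·x̄. A sketch of the same fit done by hand, with toy hours/scores values assumed for illustration:

```python
import numpy as np

def fit_line(x, y):
    """Closed-form simple linear regression — the same solution
    LinearRegression finds for one feature."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Toy data lying exactly on scores = 10 * hours + 2
hours = np.array([1.0, 2.0, 3.0, 4.0])
scores = np.array([12.0, 22.0, 32.0, 42.0])
slope, intercept = fit_line(hours, scores)
print(slope, intercept)  # → 10.0 2.0
```

On noiseless data the recovered coefficients are exact, so predicting for 9.25 hours gives 10 × 9.25 + 2 = 94.5.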
Task 1 Linear regression.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="TVdY4x-Zi9Db" colab_type="code" outputId="7f5c60d9-a1a6-4a32-bb3e-b5e682f986ca" colab={"base_uri": "https://localhost:8080/", "height": 303} from __future__ import print_function import matplotlib.pyplot as plt class AStarGraph(object): #Define a class board like grid with two barriers def __init__(self): self.barriers = [] self.barriers.append([(2,4),(2,5),(2,6),(3,6),(4,6),(5,6),(5,5),(5,4),(5,3),(5,2),(4,2),(3,2)]) def heuristic(self, start, goal): #Use Chebyshev distance heuristic if we can move one square either #adjacent or diagonal D = 1 D2 = 1 dx = abs(start[0] - goal[0]) dy = abs(start[1] - goal[1]) return D * (dx + dy) + (D2 - 2 * D) * min(dx, dy) def get_vertex_neighbours(self, pos): n = [] #Moves allow link a chess king for dx, dy in [(1,0),(-1,0),(0,1),(0,-1),(1,1),(-1,1),(1,-1),(-1,-1)]: x2 = pos[0] + dx y2 = pos[1] + dy if x2 < 0 or x2 > 7 or y2 < 0 or y2 > 7: continue n.append((x2, y2)) return n def move_cost(self, a, b): for barrier in self.barriers: if b in barrier: return 100 #Extremely high cost to enter barrier squares return 1 #Normal movement cost def AStarSearch(start, end, graph): G = {} #Actual movement cost to each position from the start position F = {} #Estimated movement cost of start to end going via this position #Initialize starting values G[start] = 0 F[start] = graph.heuristic(start, end) closedVertices = set() openVertices = set([start]) cameFrom = {} while len(openVertices) > 0: #Get the vertex in the open list with the lowest F score current = None currentFscore = None for pos in openVertices: if current is None or F[pos] < currentFscore: currentFscore = F[pos] current = pos #Check if we have reached the goal if current == end: #Retrace our route backward path = [current] while current in cameFrom: current = 
cameFrom[current] path.append(current) path.reverse() return path, F[end] #Done! #Mark the current vertex as closed openVertices.remove(current) closedVertices.add(current) #Update scores for vertices near the current position for neighbour in graph.get_vertex_neighbours(current): if neighbour in closedVertices: continue #We have already processed this node exhaustively candidateG = G[current] + graph.move_cost(current, neighbour) if neighbour not in openVertices: openVertices.add(neighbour) #Discovered a new vertex elif candidateG >= G[neighbour]: continue #This G score is worse than previously found #Adopt this G score cameFrom[neighbour] = current G[neighbour] = candidateG H = graph.heuristic(neighbour, end) F[neighbour] = G[neighbour] + H raise RuntimeError("A* failed to find a solution") if __name__=="__main__": graph = AStarGraph() result, cost = AStarSearch((0,0), (7,4), graph) print ("route", result) print ("cost", cost) plt.plot([v[0] for v in result], [v[1] for v in result]) for barrier in graph.barriers: plt.plot([v[0] for v in barrier], [v[1] for v in barrier]) plt.xlim(-1,8) plt.ylim(-1,8) plt.show()
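The heuristic above is the standard diagonal-distance formula; with `D == D2 == 1`, as set in the class, it reduces to the Chebyshev (king-move) distance `max(dx, dy)`, which is admissible when diagonal and straight moves both cost 1:

```python
def chebyshev(start, goal, D=1, D2=1):
    """Diagonal-distance heuristic as used in AStarGraph.heuristic.
    With D == D2 == 1 this equals max(dx, dy)."""
    dx = abs(start[0] - goal[0])
    dy = abs(start[1] - goal[1])
    return D * (dx + dy) + (D2 - 2 * D) * min(dx, dy)

# For the search above, start (0,0) and goal (7,4): at least 7 king moves
print(chebyshev((0, 0), (7, 4)))  # → 7
```

Because the heuristic never overestimates the true remaining cost on this grid, A* with it is guaranteed to return an optimal path.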
PathToParkingSlot.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from autogluon.tabular import TabularDataset, TabularPredictor import os os.listdir('../autox/data/ventilator') train_data = TabularDataset('../autox/data/ventilator/train.csv') train_data.shape train_data.head() label = 'pressure' save_path = 'agModels-kaggle_ventilator' # specifies folder to store trained models predictor = TabularPredictor(label=label, path=save_path).fit(train_data) test_data_nolab = TabularDataset('../autox/data/ventilator/test.csv') y_pred = predictor.predict(test_data_nolab) y_pred id_ = ['id'] sub = test_data_nolab[id_].copy() sub[label] = list(y_pred.values) sub.to_csv("./autogluon_sub_kaggle_ventilator.csv", index = False) # !zip -r autogluon_sub_kaggle_ventilator.csv.zip autogluon_sub_kaggle_ventilator.csv
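The submission-building step at the end follows a common Kaggle pattern: keep the `id` column and attach the predicted target. A standalone sketch with hypothetical stand-ins for `test_data_nolab` and `y_pred` (the real ones come from the ventilator test CSV and the trained predictor):

```python
import pandas as pd

# Hypothetical stand-ins for the real test frame and predictions above
test_data_nolab = pd.DataFrame({"id": [1, 2, 3], "u_in": [0.1, 0.2, 0.3]})
y_pred = pd.Series([5.0, 6.0, 7.0])

# Assemble the submission: the id column plus the predicted pressure
label = "pressure"
id_ = ["id"]
sub = test_data_nolab[id_].copy()
sub[label] = list(y_pred.values)
print(sub.shape)  # → (3, 2)
```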
demo/ventilator/autogluon_kaggle_ventilator.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Multiple-View Multiple People Campus # # Dataset: https://cvlab.epfl.ch/cms/site/cvlab2/lang/en/data/pom # + # %matplotlib inline import json; from pprint import pprint Settings = json.load(open('settings.txt')) import matplotlib.pyplot as plt import numpy as np import sys sys.path.insert(0,'../') from pak.datasets.EPFL_Campus import EPFL_Campus root = Settings['data_root'] campus = EPFL_Campus(root) FRAME = 800 X, Y, Calib = campus.get_frame(FRAME) fig = plt.figure(figsize=(16,16)) COLORS = ['red', 'green', 'blue', 'yellow'] for cid in [0, 1, 2]: ax = fig.add_subplot(1, 3, cid+1) P = Calib[cid] im = X[cid] ax.imshow(im) ax.axis('off') for pid, person in enumerate(Y): pts = person if pts is None: continue for x, y, z in pts: pt3d = np.array([x,y,z,1]) pt2d = P @ pt3d u = pt2d[0]/pt2d[2] v = pt2d[1]/pt2d[2] ax.scatter(u, v, color=COLORS[pid]) plt.show() # - # * 01 Right Ankle # * 02 Right Knee # * 03 Right Hip # * 04 Left Hip # * 05 Left Knee # * 06 Left Ankle # * 07 Right Wrist # * 08 Right Elbow # * 09 Right Shoulder # * 10 Left Shoulder # * 11 Left Elbow # * 12 Left Wrist # * 13 Bottom Head # * 14 Top Head
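The plotting loop above performs a standard pinhole projection: multiply the homogeneous 3D joint by the 3×4 camera matrix `P`, then divide by the third coordinate. A sketch of that step with an assumed toy matrix (the real `P` comes from `Calib[cid]`):

```python
import numpy as np

def project(P, point3d):
    """Project a 3D point with a 3x4 camera matrix P and apply the
    homogeneous divide, as in the plotting loop above."""
    x, y, z = point3d
    u, v, w = P @ np.array([x, y, z, 1.0])
    return u / w, v / w

# Toy camera [I | 0] for a sanity check: (x, y, z) maps to (x/z, y/z)
P = np.hstack([np.eye(3), np.zeros((3, 1))])
print(project(P, (2.0, 4.0, 2.0)))  # → (1.0, 2.0)
```

The divide by `w` is what makes more distant points project closer to the image centre; skipping it is a common bug when reusing calibration matrices.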
samples/Campus.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="xiqQ8uM_aoJr"
# # Homework 2 - Deep Learning
# ## <NAME>
#
#

# + id="T1mBsxfDia28"
import torch
import numpy as np

# + id="3zFe13ZXi5pJ"
# A class defining the model for the Multi Layer Perceptron
class MLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = torch.nn.Linear(in_features=6, out_features=2, bias=True)
        self.layer2 = torch.nn.Linear(in_features=2, out_features=1, bias=True)

    def forward(self, X):
        out = self.layer1(X)
        out = self.layer2(out)
        out = torch.sigmoid(out)  # torch.nn.functional.sigmoid is deprecated
        return out

# + id="MeE0usQ7lU53"
# Initialization of weights: uniformly distributed between -0.3 and 0.3
W = (0.3 + 0.3) * torch.rand(6, 1) - 0.3

# Initialization of data: 50% symmetric randomly generated tensors,
# 50% not necessarily symmetric
firsthalf = torch.rand([32, 3])
secondhalf = torch.zeros([32, 3])
secondhalf[:, 2:3] = firsthalf[:, 0:1]
secondhalf[:, 1:2] = firsthalf[:, 1:2]
secondhalf[:, 0:1] = firsthalf[:, 2:3]
y1 = torch.ones([32, 1])
y0 = torch.zeros([32, 1])
symmetric = torch.cat((firsthalf, secondhalf, y1), dim=1)
notsymmetric = torch.rand([32, 6])
notsymmetric = torch.cat((notsymmetric, y0), dim=1)
data = torch.cat((notsymmetric, symmetric), dim=0)

# Permutation of the concatenated dataset
data = data[torch.randperm(data.size()[0])]

# + id="wh4VluxboKYV"
def train_epoch(model, data, loss_fn, optimizer):
    X = data[:, 0:6]
    y = data[:, 6]
    # 1. reset the gradients previously accumulated by the optimizer;
    #    this will avoid re-using gradients from previous loops
    optimizer.zero_grad()
    # 2. get the predictions from the current state of the model
    #    (this is the forward pass)
    y_hat = model(X)
    # 3. calculate the loss on the current mini-batch
    loss = loss_fn(y_hat, y.unsqueeze(1))
    # 4. execute the backward pass given the current loss
    loss.backward()
    # 5. update the value of the params
    optimizer.step()
    return model

# + id="kRMOYpQ4ur_L"
def train_model(model, data, loss_fn, optimizer, num_epochs):
    model.train()
    for epoch in range(num_epochs):
        model = train_epoch(model, data, loss_fn, optimizer)
    for i in model.state_dict():
        print(model.state_dict()[i])

# + id="3Q6Y0AkWusEb"
# Parameters set as defined in the paper
learn_rate = 0.1
num_epochs = 1425
beta = 0.9
model = MLP()

# I judged the loss function (3) reported in the paper to be a generic choice.
# Since the problem of interest is a binary classification, and that loss is
# mostly suited to regression problems, I used a binary cross-entropy loss instead.
loss_fn = torch.nn.BCELoss()

# Gradient descent optimizer with momentum
optimizer = torch.optim.SGD(model.parameters(), lr=learn_rate, momentum=beta)

# + colab={"base_uri": "https://localhost:8080/"} id="L9hFlOIbfBmS" outputId="544595f3-aedf-43fd-cb0f-25f227055b04"
train_model(model, data, loss_fn, optimizer, num_epochs)

# + [markdown] id="b2-pZrI4f3RA"
# ## Some conclusions:
#
# Even though the original protocol was followed as closely as possible, the results obtained in the same number of epochs are far from the ones stated in the paper. Not only do the numbers differ: the learned weights are not even close to symmetric. I assume this could depend on the initialization of the data, which was not reported and was therefore a completely autonomous choice.
#
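The dataset construction above can be checked independently of PyTorch: half the rows are 6-vectors whose second half mirrors the first (label 1), half are unconstrained random rows (label 0). A NumPy sketch of the same construction, with a seed assumed for reproducibility:

```python
import numpy as np

def make_dataset(n=32, seed=0):
    """Half palindromic ('symmetric') 6-vectors labelled 1, half random
    vectors labelled 0 — a NumPy sketch of the torch construction above."""
    rng = np.random.default_rng(seed)
    first = rng.random((n, 3))
    # Mirror the first three columns to build [a, b, c, c, b, a] rows
    symmetric = np.hstack([first, first[:, ::-1], np.ones((n, 1))])
    random_rows = np.hstack([rng.random((n, 6)), np.zeros((n, 1))])
    data = np.vstack([symmetric, random_rows])
    return rng.permutation(data)  # shuffle rows

data = make_dataset()
sym = data[data[:, 6] == 1]
# Columns (0,1,2) mirror columns (5,4,3) in every positive row
print(np.allclose(sym[:, :3], sym[:, 5:2:-1]))
```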
homeworks/Homework_2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # # [ATM 623: Climate Modeling](../index.ipynb) # # [<NAME>](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany # # # Lecture 15: Insolation # - # ## Warning: content out of date and not maintained # # You really should be looking at [The Climate Laboratory book](https://brian-rose.github.io/ClimateLaboratoryBook) by <NAME>, where all the same content (and more!) is kept up to date. # # ***Here you are likely to find broken links and broken code.*** # + [markdown] slideshow={"slide_type": "skip"} # ### About these notes: # # This document uses the interactive [`Jupyter notebook`](https://jupyter.org) format. The notes can be accessed in several different ways: # # - The interactive notebooks are hosted on `github` at https://github.com/brian-rose/ClimateModeling_courseware # - The latest versions can be viewed as static web pages [rendered on nbviewer](http://nbviewer.ipython.org/github/brian-rose/ClimateModeling_courseware/blob/master/index.ipynb) # - A complete snapshot of the notes as of May 2017 (end of spring semester) are [available on Brian's website](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2017/Notes/index.html). # # [Also here is a legacy version from 2015](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/Notes/index.html). # # Many of these notes make use of the `climlab` package, available at https://github.com/brian-rose/climlab # - # Ensure compatibility with Python 2 and 3 from __future__ import print_function, division # + [markdown] slideshow={"slide_type": "skip"} # ## Contents # # 1. [Distribution of insolation](#section1) # 2. [Computing daily insolation with `climlab`](#section2) # 3. 
[Global, seasonal distribution of insolation (present-day orbital parameters)](#section3) # + [markdown] slideshow={"slide_type": "slide"} # ____________ # <a id='section1'></a> # # ## 1. Distribution of insolation # ____________ # # - # *These notes closely follow section 2.7 of <NAME>, "Global Physical Climatology", Academic Press 1994.* # # + [markdown] slideshow={"slide_type": "slide"} # The **amount of solar radiation** incident on the top of the atmosphere (what we call the "insolation") depends on # # - latitude # - season # - time of day # # This insolation is the primary driver of the climate system. Here we will examine the geometric factors that determine insolation, focussing primarily on the **daily average** values. # + [markdown] slideshow={"slide_type": "slide"} # ### Solar zenith angle # # We define the **solar zenith angle** $\theta_s$ as the angle between the local normal to Earth's surface and a line between a point on Earth's surface and the sun. # - # <img src='../images/Hartmann_Fig2.5.png'> # From the above figure (reproduced from Hartmann's book), the ratio of the shadow area to the surface area is equal to the cosine of the solar zenith angle. # + [markdown] slideshow={"slide_type": "slide"} # ### Instantaneous solar flux # # We can write the solar flux per unit surface area as # # $$ Q = S_0 \left( \frac{\overline{d}}{d} \right)^2 \cos \theta_s $$ # # where $\overline{d}$ is the mean distance for which the flux density $S_0$ (i.e. the solar constant) is measured, and $d$ is the actual distance from the sun. # # Question: # # - what factors determine $\left( \frac{\overline{d}}{d} \right)^2$ ? # - under what circumstances would this ratio always equal 1? # + [markdown] slideshow={"slide_type": "slide"} # ### Calculating the zenith angle # # Just like the flux itself, the solar zenith angle depends latitude, season, and time of day. 
# + [markdown] slideshow={"slide_type": "slide"}
# #### Declination angle
# The seasonal dependence can be expressed in terms of the **declination angle** of the sun: the latitude of the point on the surface of Earth directly under the sun at noon (denoted by $\delta$).
#
# $\delta$ currently varies between +23.45º at northern summer solstice (June 21) and -23.45º at northern winter solstice (Dec. 21).

# + [markdown] slideshow={"slide_type": "slide"}
# #### Hour angle
#
# The **hour angle** $h$ is defined as the longitude of the subsolar point relative to its position at noon.

# + [markdown] slideshow={"slide_type": "slide"}
# #### Formula for zenith angle
# With these definitions and some spherical geometry (see Appendix A of Hartmann's book), we can express the solar zenith angle for any latitude $\phi$, season, and time of day as
#
# $$ \cos \theta_s = \sin \phi \sin \delta + \cos\phi \cos\delta \cos h $$

# + [markdown] slideshow={"slide_type": "slide"}
# #### Sunrise and sunset
#
# If $\cos\theta_s < 0$ then the sun is below the horizon and the insolation is zero (i.e. it's night time!)
#
# Sunrise and sunset occur when the solar zenith angle is 90º and thus $\cos\theta_s=0$. The above formula then gives
#
# $$ \cos h_0 = - \tan\phi \tan\delta $$
#
# where $h_0$ is the hour angle at sunrise and sunset.

# + [markdown] slideshow={"slide_type": "slide"}
# #### Polar night
#
# Near the poles special conditions prevail. Latitudes poleward of 90º-$\delta$ are constantly illuminated in summer, when $\phi$ and $\delta$ are of the same sign. Right at the pole there is 6 months of perpetual daylight in which the sun moves around the compass at a constant angle $\delta$ above the horizon.
#
# In the winter, $\phi$ and $\delta$ are of opposite sign, and latitudes poleward of 90º-$|\delta|$ are in perpetual darkness. At the poles, six months of daylight alternate with six months of darkness.
#
# At the equator day and night are both 12 hours long throughout the year.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Daily average insolation
#
# Substituting the expression for solar zenith angle into the insolation formula gives the instantaneous insolation as a function of latitude, season, and time of day:
#
# $$ Q = S_0 \left( \frac{\overline{d}}{d} \right)^2 \Big( \sin \phi \sin \delta + \cos\phi \cos\delta \cos h \Big) $$
#
# which is valid only during daylight hours, $|h| < h_0$, and $Q=0$ otherwise (night).

# + [markdown] slideshow={"slide_type": "slide"}
# To get the daily average insolation, we integrate this expression between sunrise and sunset and divide by 24 hours (or $2\pi$ radians since we express the time of day in terms of hour angle):
#
# $$ \overline{Q}^{day} = \frac{1}{2\pi} \int_{-h_0}^{h_0} Q ~dh$$
#
# $$ = \frac{S_0}{2\pi} \left( \frac{\overline{d}}{d} \right)^2 \int_{-h_0}^{h_0} \Big( \sin \phi \sin \delta + \cos\phi \cos\delta \cos h \Big) ~ dh $$

# + [markdown] slideshow={"slide_type": "slide"}
# which is easily integrated to get our formula for daily average insolation:
#
# $$ \overline{Q}^{day} = \frac{S_0}{\pi} \left( \frac{\overline{d}}{d} \right)^2 \Big( h_0 \sin\phi \sin\delta + \cos\phi \cos\delta \sin h_0 \Big)$$
#
# where the hour angle at sunrise/sunset $h_0$ must be in radians.

# + [markdown] slideshow={"slide_type": "slide"}
# ### The daily average zenith angle
#
# It turns out that, due to optical properties of the Earth's surface (particularly bodies of water), the surface albedo depends on the solar zenith angle. It is therefore useful to consider the average solar zenith angle during daylight hours as a function of latitude and season.
#
# The appropriate daily average here is weighted with respect to the insolation, rather than weighted by time.
The formula is # # $$ \overline{\cos\theta_s}^{day} = \frac{\int_{-h_0}^{h_0} Q \cos\theta_s~dh}{\int_{-h_0}^{h_0} Q ~dh} $$ # - # <img src='../images/Hartmann_Fig2.8.png'> # + [markdown] slideshow={"slide_type": "-"} # The average zenith angle is much higher at the poles than in the tropics. This contributes to the very high surface albedos observed at high latitudes. # + [markdown] slideshow={"slide_type": "slide"} # ____________ # <a id='section2'></a> # # ## 2. Computing daily insolation with `climlab` # ____________ # + [markdown] slideshow={"slide_type": "slide"} # Here are some examples calculating daily average insolation at different locations and times. # # These all use a function called # ``` # daily_insolation # ``` # in the package # ``` # climlab.solar.insolation # ``` # to do the calculation. The code implements the above formulas to calculates daily average insolation anywhere on Earth at any time of year. # + [markdown] slideshow={"slide_type": "slide"} # The code takes account of *orbital parameters* to calculate current Sun-Earth distance. # # We can look up *past orbital variations* to compute their effects on insolation using the package # ``` # climlab.solar.orbital # ``` # See the [next lecture](./Lecture14 -- Orbital variations.ipynb)! # + [markdown] slideshow={"slide_type": "slide"} # ### Using the `daily_insolation` function # + slideshow={"slide_type": "-"} # %matplotlib inline import numpy as np import matplotlib.pyplot as plt from climlab import constants as const from climlab.solar.insolation import daily_insolation # + [markdown] slideshow={"slide_type": "slide"} # First, get a little help on using the `daily_insolation` function: # - help(daily_insolation) # + [markdown] slideshow={"slide_type": "slide"} # Here are a few simple examples. 
# # First, compute the daily average insolation at 45ºN on January 1: # - daily_insolation(45,1) # + [markdown] slideshow={"slide_type": "fragment"} # Same location, July 1: # - daily_insolation(45,181) # + [markdown] slideshow={"slide_type": "slide"} # We could give an array of values. Let's calculate and plot insolation at all latitudes on the spring equinox = March 21 = Day 80 # - lat = np.linspace(-90., 90., 30) Q = daily_insolation(lat, 80) fig, ax = plt.subplots() ax.plot(lat,Q) ax.set_xlim(-90,90); ax.set_xticks([-90,-60,-30,-0,30,60,90]) ax.set_xlabel('Latitude') ax.set_ylabel('W/m2') ax.grid() ax.set_title('Daily average insolation on March 21') # + [markdown] slideshow={"slide_type": "slide"} # ### In-class exercises # # Try to answer the following questions **before reading the rest of these notes**. # # - What is the daily insolation today here at Albany (latitude 42.65ºN)? # - What is the **annual mean** insolation at the latitude of Albany? # - At what latitude and at what time of year does the **maximum daily insolation** occur? # - What latitude is experiencing either **polar sunrise** or **polar sunset** today? # - # ____________ # <a id='section3'></a> # # ## 3. Global, seasonal distribution of insolation (present-day orbital parameters) # ____________ # + [markdown] slideshow={"slide_type": "-"} # Calculate an array of insolation over the year and all latitudes (for present-day orbital parameters). We'll use a dense grid in order to make a nice contour plot # - lat = np.linspace( -90., 90., 500) days = np.linspace(0, const.days_per_year, 365 ) Q = daily_insolation( lat, days ) # + [markdown] slideshow={"slide_type": "slide"} # And make a contour plot of Q as function of latitude and time of year. # - fig, ax = plt.subplots(figsize=(10,8)) CS = ax.contour( days, lat, Q , levels = np.arange(0., 600., 50.) 
) ax.clabel(CS, CS.levels, inline=True, fmt='%r', fontsize=10) ax.set_xlabel('Days since January 1', fontsize=16 ) ax.set_ylabel('Latitude', fontsize=16 ) ax.set_title('Daily average insolation', fontsize=24 ) ax.contourf ( days, lat, Q, levels=[-1000., 0.], colors='k' ) # + [markdown] slideshow={"slide_type": "slide"} # ### Time and space averages # - # Take the area-weighted global, annual average of Q... Qaverage = np.average(np.mean(Q, axis=1), weights=np.cos(np.deg2rad(lat))) print( 'The annual, global average insolation is %.2f W/m2.' %Qaverage) # + [markdown] slideshow={"slide_type": "slide"} # Also plot the zonally averaged insolation at a few different times of the year: # - summer_solstice = 170 winter_solstice = 353 fig, ax = plt.subplots(figsize=(10,8)) ax.plot( lat, Q[:,(summer_solstice, winter_solstice)] ); ax.plot( lat, np.mean(Q, axis=1), linewidth=2 ) ax.set_xbound(-90, 90) ax.set_xticks( range(-90,100,30) ) ax.set_xlabel('Latitude', fontsize=16 ); ax.set_ylabel('Insolation (W m$^{-2}$)', fontsize=16 ); ax.grid() # + [markdown] slideshow={"slide_type": "skip"} # <div class="alert alert-success"> # [Back to ATM 623 notebook home](../index.ipynb) # </div> # + [markdown] slideshow={"slide_type": "skip"} # ____________ # ## Version information # ____________ # # + slideshow={"slide_type": "skip"} # %load_ext version_information # %version_information numpy, matplotlib, climlab # + [markdown] slideshow={"slide_type": "slide"} # ____________ # # ## Credits # # The author of this notebook is [<NAME>](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany. 
# # It was developed in support of [ATM 623: Climate Modeling](http://www.atmos.albany.edu/facstaff/brose/classes/ATM623_Spring2015/), a graduate-level course in the [Department of Atmospheric and Environmental Sciences](http://www.albany.edu/atmos/index.php)
#
# Development of these notes and the [climlab software](https://github.com/brian-rose/climlab) is partially supported by the National Science Foundation under award AGS-1455071 to <NAME>. Any opinions, findings, conclusions or recommendations expressed here are mine and do not necessarily reflect the views of the National Science Foundation.
# ____________

# + slideshow={"slide_type": "skip"}
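The daily-average formula derived in section 1 can be implemented directly in a few lines, which is roughly what `climlab.solar.insolation.daily_insolation` does internally (though climlab also accounts for orbital parameters). A sketch assuming a fixed solar constant and unit Sun–Earth distance ratio:

```python
import numpy as np

S0 = 1365.2  # assumed solar constant (W/m2)

def daily_mean_insolation(lat_deg, delta_deg, dist_ratio=1.0):
    """Daily-average insolation from the formula derived above:
    Qbar = S0/pi * (dbar/d)^2 * (h0 sin(phi) sin(delta) + cos(phi) cos(delta) sin(h0))."""
    phi = np.radians(lat_deg)
    delta = np.radians(delta_deg)
    # Hour angle of sunrise/sunset; clipping handles polar day (h0 = pi)
    # and polar night (h0 = 0)
    cos_h0 = np.clip(-np.tan(phi) * np.tan(delta), -1.0, 1.0)
    h0 = np.arccos(cos_h0)
    return (S0 / np.pi) * dist_ratio ** 2 * (
        h0 * np.sin(phi) * np.sin(delta)
        + np.cos(phi) * np.cos(delta) * np.sin(h0)
    )

# Equator at equinox: h0 = pi/2, so Qbar = S0/pi ≈ 434 W/m2
print(daily_mean_insolation(0.0, 0.0))
```

At 80ºN with δ = −23.45º the clip sends `h0` to zero, reproducing the polar-night case discussed above (zero insolation all day).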
Lectures/Lecture15 -- Insolation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={} colab_type="code" id="-e0jnPxVR3ZL" # #!pip install bayesian-optimization # for google collab # #!pip3 install git+https://github.com/slremy/netsapi --user --upgrade # + colab={} colab_type="code" id="wje_R8mnNGqs" from bayes_opt import BayesianOptimization from bayes_opt.util import UtilityFunction import numpy as np import matplotlib.pyplot as plt from matplotlib import cm from matplotlib import mlab from matplotlib import gridspec # %matplotlib inline # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 779, "status": "ok", "timestamp": 1561628162197, "user": {"displayName": "<NAME>", "photoUrl": "https://lh5.googleusercontent.com/-Aaghu78j1FA/AAAAAAAAAAI/AAAAAAAAImI/of29pyh0eh4/s64/photo.jpg", "userId": "04364851670955414673"}, "user_tz": -120} id="crzCGgxjNGqw" outputId="5296003c-401f-48c5-f568-d1c0cec2e36f" #challenge import from netsapi.challenge import * from sys import exit, exc_info, argv from multiprocessing import Pool, current_process import random as rand import json import requests import numpy as np import pandas as pd import statistics from IPython.display import clear_output from contextlib import contextmanager import sys, os @contextmanager def suppress_stdout(): with open(os.devnull, "w") as devnull: old_stdout = sys.stdout sys.stdout = devnull try: yield finally: sys.stdout = old_stdout import matplotlib.pyplot as plt import numpy as np # %matplotlib inline print("done") # + colab={} colab_type="code" id="w02OmWsfNGq2" envSeqDec = ChallengeSeqDecEnvironment() x_start = 0.0 x_end = 1.0 VAL_Max = 1.2 # fel heat map def target1(x, y): x = np.asscalar(x) y = np.asscalar(y) envSeqDec.reset() action = [x , y] print("action",action) s,r,d,_ = 
envSeqDec.evaluateAction(action) return r/90.0 def target(x,y): if type(x) is np.ndarray: result = [] for a,b in zip(x,y): reward = target1(a,b) result.append( reward ) #print((len(result) % 30 ) ) #if ((len(result) % 30 ) > 25) : # global envSeqDec # envSeqDec = ChallengeSeqDecEnvironment() return result else: return target1(x,y) # + colab={} colab_type="code" id="EBQYUJ2gP5fB" # + colab={} colab_type="code" id="_mmY7jv3NGq4" # for the training n = 1e5 x = y = np.linspace(x_start, x_end, 300)# was 300 X, Y = np.meshgrid(x, y) x = X.ravel() y = Y.ravel() X = np.vstack([x, y]).T[:, [1, 0]] # + colab={} colab_type="code" id="yu7taWG8NGq6" def posterior(bo, X): #ur = unique_rows(bo.X) x_obs = np.array([[res["params"]["x"], res["params"]["y"]] for res in bo.res]) y_obs = np.array([res["target"] for res in bo.res]) bo._gp.fit(x_obs, y_obs) mu, sigma = bo._gp.predict(X, return_std=True) return mu, sigma def plot_2d(name=None): #mu, s, ut = posterior(bo, X) mu, s = posterior(bo, X) fig, ax = plt.subplots(2, 2, figsize=(14, 10)) gridsize=150 # fig.suptitle('Bayesian Optimization in Action', fontdict={'size':30}) x_obs = np.array([[res["params"]["x"], res["params"]["y"]] for res in bo.res]) # GP regression output ax[0][0].set_title('Gausian Process Predicted Mean', fontdict={'size':15}) im00 = ax[0][0].hexbin(x, y, C=mu, gridsize=gridsize, cmap=cm.jet, bins=None, vmin=-VAL_Max, vmax=VAL_Max) ax[0][0].axis([x.min(), x.max(), y.min(), y.max()]) ax[0][0].plot(x_obs[:, 1], x_obs[:, 0], 'D', markersize=4, color='k', label='Observations') ax[0][1].set_title('Target Function', fontdict={'size':15}) """ im10 = ax[0][1].hexbin(x, y, C=z, gridsize=gridsize, cmap=cm.jet, bins=None, vmin=-VAL_Max, vmax=VAL_Max) ax[0][1].axis([x.min(), x.max(), y.min(), y.max()]) #ax[0][1].plot(bo.X[:, 1], bo.X[:, 0], 'D', markersize=4, color='k') ax[0][1].plot(x_obs[:, 1], x_obs[:, 0], 'D', markersize=4, color='k') """ ax[1][0].set_title('Gausian Process Variance', fontdict={'size':15}) im01 = 
ax[1][0].hexbin(x, y, C=s, gridsize=gridsize, cmap=cm.jet, bins=None, vmin=0, vmax=1) ax[1][0].axis([x.min(), x.max(), y.min(), y.max()]) ax[1][1].set_title('Acquisition Function', fontdict={'size':15}) # acquisition func """ im11 = ax[1][1].hexbin(x, y, C=ut, gridsize=gridsize, cmap=cm.jet, bins=None, vmin=0, vmax=8) np.where(ut.reshape((300, 300)) == ut.max())[0] np.where(ut.reshape((300, 300)) == ut.max())[1] ax[1][1].plot([np.where(ut.reshape((300, 300)) == ut.max())[1]/50., np.where(ut.reshape((300, 300)) == ut.max())[1]/50.], [0, 6], 'k-', lw=2, color='k') ax[1][1].plot([0, 6], [np.where(ut.reshape((300, 300)) == ut.max())[0]/50., np.where(ut.reshape((300, 300)) == ut.max())[0]/50.], 'k-', lw=2, color='k') ax[1][1].axis([x.min(), x.max(), y.min(), y.max()]) """ for im, axis in zip([im00, im01], ax.flatten()):#, im10, im11 cb = fig.colorbar(im, ax=axis) # cb.set_label('Value') if name is None: name = '_' plt.tight_layout() # Save or show figure? # fig.savefig('bo_eg_' + name + '.png') plt.show() plt.close(fig) # + colab={"base_uri": "https://localhost:8080/", "height": 442} colab_type="code" executionInfo={"elapsed": 1453, "status": "error", "timestamp": 1561628162948, "user": {"displayName": "se<NAME>", "photoUrl": "https://lh5.googleusercontent.com/-Aaghu78j1FA/AAAAAAAAAAI/AAAAAAAAImI/of29pyh0eh4/s64/photo.jpg", "userId": "04364851670955414673"}, "user_tz": -120} id="wIHuv8EXNGq8" outputId="68989e8e-6b1d-4e81-def2-302d831f0f82" bo = BayesianOptimization(target, {'x': (x_start, x_end), 'y': (x_start, x_end)}) used_kappa = 10 bo.maximize(init_points=5, n_iter=0, acq='ucb', kappa=used_kappa) plot_2d() # + def invRL(policy): old_policy = [0.0, 0.0] tot = 0.0 for year in range(5): trans_policy = policy[year] trans_policy[0] *= (1.0 - old_policy[0]) trans_policy[1] *= (1.0 - old_policy[1]) r_mu, r_sigma = bo._gp.predict([trans_policy], return_std=True) print("invRL y", year+1 , ": ",r_mu," +- ", r_sigma) # ben lezem +- el racine bte3ha old_policy = policy[year] 
tot+= r_mu return tot def testHolePolicy(policy): global envSeqDec envSeqDec.reset() year=0 tot=0.0 while True: action = policy[year] nextstate, reward, done, _ = envSeqDec.evaluateAction(list(action)) print("test y", year+1 , ": ",reward) tot += reward if done: break year+=1 return tot # + colab={} colab_type="code" id="bOEu6X1INGq_" # Turn interactive plotting off plt.ioff() policy = [ [0.9991712478009906, 0.026881743252439638], [0.19702391566063626, 0.8142634903489118], [0.02287582405055888, 0.6499711714347374], [0.9944402045090077, 0.017507068898582667], [0.7295230492436623, 0.9926887775501024] ] for i in range(95): bo.maximize(init_points=0, n_iter=1, acq='ucb', kappa=used_kappa) print("myplot: ", "{:03}".format(len(bo.space)) ) plot_2d("{:03}".format(len(bo.space))) print("0.8 0.01") #mu, sigma = bo._gp.predict([0.8 0.01], return_std=True) #print(mu)# #print(sigma) invRL_score = invRL(policy) if (i % 10 == 1 ): test_score = testHolePolicy(policy) print("diff = ", invRL_score*90.0 - test_score) # + def myRandom(): return rand.random() # return round(rand.random(), 2)# TODO: remove this round thig # TODO run it more def GetRandPolicy(): policy=[] policy.append([myRandom(),myRandom()]) policy.append([myRandom(),myRandom()]) policy.append([myRandom(),myRandom()]) policy.append([myRandom(),myRandom()]) policy.append([myRandom(),myRandom()]) return policy evolution = [] policies = [] maxReward= 0.0 nbTestedPolicy = 0 while True: if (nbTestedPolicy % 1000 == 0 ): print('nbTestedPolicy',nbTestedPolicy) nbTestedPolicy += 1 policy = {} policy['nwemer'] = GetRandPolicy() potentialReward = invRL(policy['nwemer']) if potentialReward<maxReward*0.9: continue print(policy) policy['AvgReward'] = np.mean(potentialReward) policy['maxRewards'] = np.mean(potentialReward) # to change policies.append(policy) policies.sort(key=lambda r: -r['AvgReward']) evolution.append(policies[0]['AvgReward']) maxReward=policies[0]['AvgReward'] 
print("##################################################################") clear_output() print('rewards',policies[0]['AvgReward']) print('maxRewards',policies[0]['maxRewards']) print('nbTestedPolicy',nbTestedPolicy) if len(policies)>10: policies = policies[0:9] print(policies) print("##################################################################") print(evolution)
2 Bayesian Optimisation based solutions/Misc/other/BayesianCup_2D_2years_fixedTL-Copy1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd df1000 = pd.read_csv("../up1000.csv") df1000 df1000["alls"]= df1000.sum(axis = 1, skipna = True) df1000 df1000.sort_values(by=['alls'],ascending = False) df1000_b = df1000.astype('bool') df1000_b = df1000_b.astype(float) df1000_b["key_word"] = df1000["key_word"] df1000_b["alls"]=0 df1000_b df1000_b["alls"]= df1000_b.sum(axis = 1, skipna = True) df1000_b df1000_b = df1000_b.sort_values(by=['alls'],ascending = False).reset_index() df1000 = df1000.sort_values(by=['alls'],ascending = False).reset_index() df1000 df1000_b 3166.0/len(list(df1000_b)) df1000.to_csv("up1000_all.csv", index=False) df1000_b.to_csv("up1000_bool.csv", index=False) import pyLDAvis.gensim vis_data = pyLDAvis.gensim.prepare(lda, corpus, dictionary) pyLDAvis.display(vis_data)
Find-similar/여러번 나온 요소 찾기.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # CS 20 : TensorFlow for Deep Learning Research
# ## Lecture 04 : Eager execution
# ### Automatic differentiation and gradient tape
# * Reference
#     + https://www.tensorflow.org/tutorials/eager/automatic_differentiation?hl=ko

# ### Setup

# +
import tensorflow as tf
import numpy as np

tf.enable_eager_execution()
# -

# ### Gradient tapes
#
# TensorFlow provides the `tf.GradientTape` API for automatic differentiation - ***computing the gradient of a computation with respect to its input variables. TensorFlow "records" all operations executed inside the context of a `tf.GradientTape` onto a "tape". TensorFlow then uses that tape and the gradients associated with each recorded operation to compute the gradients of a "recorded" computation using reverse mode differentiation.***

# +
# Trainable variables (created by `tf.Variable` or `tf.get_variable`, where
# `trainable=True` is default in both cases) are automatically watched. Tensors
# can be manually watched by invoking the `watch` method on this context
# manager.
x = tf.constant(1, dtype = tf.float32)

# z = y^2, y = 2x, z = (2x)^2
with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.add(x, x)
    z = tf.multiply(y, y)

# Derivative of z with respect to the original input tensor x
dz_dx = tape.gradient(target = z, sources = x)
print(dz_dx)
# -

# You can also request gradients of the output with respect to intermediate values computed during a "recorded" `tf.GradientTape` context.

# +
x = tf.constant(1, dtype = tf.float32)

# z = y^2, y = 2x, z = (2x)^2
with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.add(x, x)
    z = tf.multiply(y, y)

# Use the tape to compute the derivative of z with respect to the
# intermediate value y.
dz_dy = tape.gradient(target = z, sources = y)
print(dz_dy)
# -

# By default, the resources held by a GradientTape are released as soon as the GradientTape.gradient() method is called. To compute multiple gradients over the same computation, create a persistent gradient tape. This allows multiple calls to the `gradient()` method, as resources are released only when the tape object is garbage collected. For example:

# +
x = tf.constant(1, dtype = tf.float32)

# z = y^2, y = 2x, z = (2x)^2
with tf.GradientTape(persistent = True) as tape:
    tape.watch(x)
    y = tf.add(x, x)
    z = tf.multiply(y, y)

dz_dy = tape.gradient(target = z, sources = y)
dy_dx = tape.gradient(target = y, sources = x)
dz_dx = tape.gradient(target = z, sources = x)
print(dz_dy, dy_dx, dz_dx)
# -

# #### Recording control flow
# Because tapes record operations as they are executed, Python control flow (using `if`s and `while`s for example) is naturally handled:

# +
def f(x, y):
    output = 1.0
    for i in range(y):
        if i > 1 and i < 5:
            output = tf.multiply(output, x)
    return output

def grad(x, y):
    with tf.GradientTape() as tape:
        tape.watch(x)
        out = f(x, y)
    return tape.gradient(out, x)

x = tf.convert_to_tensor(2.0)

print(grad(x, 6))  # out = x^3
print(grad(x, 5))  # out = x^3
print(grad(x, 4))  # out = x^2
# -

# #### Higher-order gradients
# Operations inside of the `GradientTape` context manager are recorded for automatic differentiation. If gradients are computed in that context, then the gradient computation is recorded as well. As a result, the exact same API works for higher-order gradients as well. For example:

# +
x = tf.Variable(1.0)  # Create a TensorFlow variable initialized to 1.0

with tf.GradientTape() as t:
    with tf.GradientTape() as t2:
        y = x * x * x
    # Compute the gradient inside the 't' context manager
    # which means the gradient computation is differentiable as well.
    dy_dx = t2.gradient(y, x)
d2y_dx2 = t.gradient(dy_dx, x)

print(dy_dx)
print(d2y_dx2)
Lec04_Eager execution/Lec04_Automatic differentiation and gradient tape.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Modeling Concept = balancing "bias" vs "variance" # looking for low bias and low variance, but in reality a trade off # linear fit vs quadratic vs spline (overfit?) # complexity selection = underfit vs overfit = (low = underfit,high bias,low variance) (high = overfit, low bias, high variance)) print('') # + # Logistic Regression Intro # A classification problem is when you try to predict discrete outcomes, such as whether someone died or has a disease # binary classification cannot use linear regression fit # transfer to logistic function (sigmoid function) between 0 and 1 # from y = b + mx or y = b0 + b1x to 1+e^(-z) where z = b0 + b1x # use a confusion matrix for evaluating the model # true positive (predict = yes, reality = yes) and true negative vs # false positive (we predict yes, reality = no) (we tell a man he is pregnant) and false negative (we predict no, reality yes) # type 1 error = false positive, type 2 = false negative # accuracy = (TP + TN) / total # misclassification (FP + FN) / total import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline # - train = pd.read_csv('titanic_train.csv') train.head() # exploratory data analysis # we are missing a lot of data but how do we see it train.isnull() # gets booleans of true and false sns.heatmap(train.isnull(),yticklabels=False,cbar=False,cmap='viridis') # + sns.set_style('whitegrid') # who survived sns.countplot(x='Survived',data=train) # - # who survived (male vs female) sns.countplot(x='Survived', hue='Sex', data=train, palette='RdBu_r') # looks like males lower likelihood to survive # who survived (passenger class) sns.countplot(x='Survived', hue='Pclass', data=train) # looks like third class lower survival rate # age 
sns.distplot(train['Age'].dropna(), kde=False, bins=30) # another way to do histogram train['Age'].plot.hist(bins=30) # sibling and spouse, most likely men in the third class, single sns.countplot(x='SibSp', data=train) # another way to do histogram train['Fare'].plot.hist(bins=30, figsize=(10,4)) # an interactive way to see it import cufflinks as cf cf.go_offline() train['Fare'].iplot(kind='hist',bins=50) # + # so now we can fill in the NaN values with averages (maybe average age by passenger class) plt.figure(figsize=(10,7)) sns.boxplot(x='Pclass', y='Age', data=train) # maybe older folks have more wealth over time # + # one way of filling in the gaps for age - using an average based on passenger class # the numbers are the eyeball of above bar charts # in reality could use mean() function def impute_age(cols): Age = cols[0] Pclass = cols[1] if pd.isnull(Age): if Pclass == 1: return 37 elif Pclass == 2: return 29 elif Pclass == 3: return 24 else: return Age # if you know the age train['Age'] = train[['Age','Pclass']].apply(impute_age,axis=1) # - # check if we did it correctly by filling in the ages sns.heatmap(train.isnull(),yticklabels=False,cbar=False,cmap='viridis') # throw out any more missing values since not useful train.drop('Cabin',axis=1,inplace=True) train.dropna(inplace=True) # check if we did it correctly sns.heatmap(train.isnull(),yticklabels=False,cbar=False,cmap='viridis') # ML Dummy Variables - convert gender letter to 0 or 1 # cannot keep both female and male due to multicollinearity (ex. 
female = 0 will perfectly predict male =1, so drop one) # female is first, drop first means female column is deleted but all you need is male = true/false pd.get_dummies(train['Sex'], drop_first=True) sex = pd.get_dummies(train['Sex'], drop_first=True) sex # do same for where embarked # here there are 3 possible locations for getting on, so drop the first one and check the last two embark = pd.get_dummies(train['Embarked'], drop_first=True) embark.head() # notice they are not perfect predictors # add these new columns to the data set train = pd.concat([train,sex,embark],axis=1) train.head(2) # drop columns you will not use train.drop(['Sex','Embarked','Name','Ticket'], axis=1, inplace=True) # axis = 1 is columns # looks good, all numbers, but wait, one column not useful, which is passenger ID train.tail() train.drop('PassengerId', axis=1,inplace=True) # finally finished cleaning data - so we are ready to begin # perhaps Pclass could be enhanced because 1 2 3 is still a linear variable train.head() # + # now scikit learn X = train.drop('Survived',axis=1) y = train['Survived'] from sklearn.model_selection import train_test_split # we split the data into a training set and a test set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101) # typically 40% = 0.4 or 0.3 # - X_train from sklearn.linear_model import LogisticRegression #logmodel = LogisticRegression(solver='liblinear') logmodel = LogisticRegression(solver='lbfgs',random_state=None, multi_class="auto", n_jobs=-1, C=1) # FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning. # liblinear is the default solver for Scikit-learn versions < 0.22.0, cannot be parallelized over multiple processor cores # lbfgs is the default solver for later versions # Regularization shifts your model toward the bias side of things in the bias/variance tradeoff. 
# To be safe, scale your data logmodel.fit(X_train, y_train) predictions = logmodel.predict(X_test) # + # precision table from sklearn.metrics import classification_report print(classification_report(y_test,predictions)) # to improve results: # 1) scale your data # 2) Removing outliers will generally improve model performance. Standardizing the inputs would reduce outlier effects # 3) Independent observations # 4) Higher order polynomial instead of linear. Risk of overfitting and finding global minimum # 5) PCA and Feature Reduction # 6) Multicollinearity (use Variance Inflation Factor (VIF). A VIF cutoff around 5 to 10 is common # Compute the VIF by taking the correlation matrix, inverting it, and taking the values on the diagonal for each feature.) # + # confusion matrix from sklearn.metrics import confusion_matrix print(confusion_matrix(y_test,predictions)) # many more enhancements available also - names, classes, location of cabin # ACTUAL # PREDICTED TRUE TP FP # PREDICTED FALSE FN TN # + # check model output print('Model outputs: \nIntercept = ', logmodel.intercept_, '\nCoefficients: ', logmodel.coef_) print('\n *******************************************') #pd.concat(logmodel.coef_, X_train.columns) print('DataFrame: ') print(pd.DataFrame(data=logmodel.coef_, columns=X_train.columns)) print('\n *******************************************') # list/zip print('List Zip: ') print(list(zip(X_train.columns, logmodel.coef_[0,:]))) print('\n *******************************************') # dict 1: print('Dict two flavors: ') coef_dict = {} for coef, feat in zip(logmodel.coef_[0,:],X_train.columns): coef_dict[feat] = coef print(coef_dict) print('\n *******************************************') # dict 2: coef_dict2 = dict(zip(X_train.columns, logmodel.coef_[0,:])) print(coef_dict2) # + # 6) Multicollinearity (use Variance Inflation Factor (VIF). 
A VIF cutoff around 5 to 10 is common # Compute the VIF by taking the correlation matrix, inverting it, and taking the values on the diagonal for each feature.) from statsmodels.stats.outliers_influence import variance_inflation_factor X_VIF = X.assign(const=1) pd.Series([variance_inflation_factor(X_VIF.values, i) for i in range(X_VIF.shape[1])], index=X_VIF.columns) # - # scatter plot the results sns.scatterplot(x=X_test['Pclass'],y=X_test['Age'],hue=y_test) # scatter plot the results sns.scatterplot(x=X_test['Age'],y=X_test['male'],hue=y_test) # + # Another way to visualize the output: using regions and colors (mlxtend library) # needs another library: pip install mlxtend from mlxtend.plotting import plot_decision_regions # Plotting decision regions # plot_decision_regions(x is a np array, y is a numpy array, cdf is a predict() method) # this would work if only 2 features: #plot_decision_regions(np.array(X_test[['male','Age']]), np.array(y_test), clf=logmodel, legend=2) # trying for >=2 features # Decision region for feature 3 = 0.5 value = 0.5 # Plot training sample with feature 3 = 0.5 +/- 1 width = 1 fig, ax = plt.subplots(figsize=(12,8)) plot_decision_regions(np.array(X_test), np.array(y_test), filler_feature_values={2: value, 3:0.4, 4:32, 5:0.6, 6:0.08, 7:0.7}, # i used mean filler_feature_ranges={2: width, 3:width, 4:20, 5:width, 6:width, 7:width}, # i used std dev clf=logmodel, legend=3, X_highlight=None, ax=ax) # notes: """ # how to set value and width for the features 2 to 7: X_train.mean() 0: Pclass 2.305466 1: Age 29.069534 2: SibSp 0.524116 3: Parch 0.377814 4: Fare 32.432388 5: male 0.639871 6: Q 0.085209 7: S 0.726688 X_train.std() Pclass 0.841708 Age 12.889703 SibSp 1.083575 Parch 0.810771 Fare 51.212127 male 0.480424 Q 0.279417 S 0.446018 pd.DataFrame(data=np.array(list(zip(X_train.mean(), X_train.std()))).T, index=['Mean','Std'],columns=X_train.columns) pd.DataFrame(data=list(zip(X_train.mean(), X_train.std())), 
columns=['Mean','Std'],index=X_train.columns) pd.DataFrame(data=list(zip(X_train.columns, X_train.mean(), X_train.std())), columns=['Feature','Mean','Std']) """ ax.set_xlabel('Feature 1: Pclass (3 = Third Class)') ax.set_ylabel('Feature 2: Age') ax.set_title('Feature 3: SibSp = {} +/- {}'.format(value, width)) # - # Diagram only shows 2 features: 0 to 2 (excluding 2) so just 0 and 1 # Assumes other features are held constant X_train.columns[0:2] # Features assumed held constant X_train.columns[2:] np.array(y_test) pd.DataFrame(data=list(zip(X_train.columns, X_train.mean(), X_train.std())), columns=['Feature','Mean','Std']) # + # VIF Test - Other ways to calculate it import pandas as pd import numpy as np a = [1, 1, 2, 3, 4] b = [2, 2, 3, 2, 1] c = [4, 6, 7, 8, 9] d = [4, 3, 4, 5, 4] df = pd.DataFrame({'a':a,'b':b,'c':c,'d':d}) df_cor = df.corr() vif_matrix = pd.DataFrame(np.linalg.inv(df.corr().values), index = df_cor.index, columns=df_cor.columns) np.diag(vif_matrix) # - from statsmodels.stats.outliers_influence import variance_inflation_factor X = df.assign(const=1) pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])], index=X.columns) # + # More VIF """ a <- c(1, 1, 2, 3, 4) b <- c(2, 2, 3, 2, 1) c <- c(4, 6, 7, 8, 9) d <- c(4, 3, 4, 5, 4) df <- data.frame(a, b, c, d) vif_df <- vif(df) print(vif_df) Variables VIF a 22.95 b 3.00 c 12.95 d 3.00 """ import pandas as pd import statsmodels.formula.api as smf def get_vif(exogs, data): '''Return VIF (variance inflation factor) DataFrame Args: exogs (list): list of exogenous/independent variables data (DataFrame): the df storing all variables Returns: VIF and Tolerance DataFrame for each exogenous variable Notes: Assume we have a list of exogenous variable [X1, X2, X3, X4]. To calculate the VIF and Tolerance for each variable, we regress each of them against other exogenous variables. 
For instance, the regression model for X3 is defined as: X3 ~ X1 + X2 + X4 And then we extract the R-squared from the model to calculate: VIF = 1 / (1 - R-squared) Tolerance = 1 - R-squared The cutoff to detect multicollinearity: VIF > 10 or Tolerance < 0.1 ''' # initialize dictionaries vif_dict, tolerance_dict = {}, {} # create formula for each exogenous variable for exog in exogs: not_exog = [i for i in exogs if i != exog] formula = f"{exog} ~ {' + '.join(not_exog)}" # extract r-squared from the fit r_squared = smf.ols(formula, data=data).fit().rsquared # calculate VIF vif = 1/(1 - r_squared) vif_dict[exog] = vif # calculate tolerance tolerance = 1 - r_squared tolerance_dict[exog] = tolerance # return VIF DataFrame df_vif = pd.DataFrame({'VIF': vif_dict, 'Tolerance': tolerance_dict}) return df_vif # + # take a diff boston data set from sklearn.datasets import load_boston boston = load_boston() # dictionary boston.keys() print(boston.keys()) print('\n *******************************************') print(boston['feature_names']) print('\n *******************************************') print(boston['DESCR']) print('\n *******************************************') print(boston['data']) print('\n *******************************************') # -
vm1_ML_logistic_regression.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Visualizing MNIST Dataset # No. of rowsx+columns in each image display_dimension = 10 # import required modules import numpy as np import matplotlib.pyplot as plt import pandas as pd import imageio from threading import Thread # Load the dataset df = pd.read_csv('./../../../datasets/mnist-digits-dataset/train.csv') df.head() # remove label from data for easy plotting d_labels = df['label'] df = df.drop('label', axis=1) df.head() # + # group digits digits = [] for i in range(10): d = df[d_labels == i] digits.append(d) for i in range(10): print(digits[i].shape, len(digits[i])) # - def genGif(digit, file_name, images_to_plot_in_a_row=10): ''' Function to generate images as subplots on a gif digit - list of digits that need to be plotted file_name - name of gif file without .gif extension images_to_plot_in_a_row - NxN matrix of images in a frame ''' frames = [] no_of_items = len(digit) #no_of_items = 200 print('No.of Digits: ', no_of_items) i = 0 fig, axes = plt.subplots(images_to_plot_in_a_row,images_to_plot_in_a_row, figsize=(10,10), sharex=True,sharey=True) while i < no_of_items: #print('Index : ', i) fig.suptitle('From {0}'.format( i - i%(images_to_plot_in_a_row*images_to_plot_in_a_row) )); for subplot_index in range(images_to_plot_in_a_row*images_to_plot_in_a_row): if i >= no_of_items: break subplot_row = subplot_index//images_to_plot_in_a_row subplot_col = subplot_index%images_to_plot_in_a_row ax = axes[subplot_row, subplot_col] ax.set_adjustable('box') ax.set_aspect('equal') # plot image on subplot img = np.reshape(digit.iloc[i,:].values, (28,28)) ax.imshow(img, cmap='gray_r') #ax.set_xbound([0,28]) i = i + 1 #plt.tight_layout() fig.canvas.draw() image = np.frombuffer(fig.canvas.tostring_rgb(), dtype='uint8') image = 
image.reshape(fig.canvas.get_width_height()[::-1] + (3,)) frames.append(image) #plt.show() kwargs_write = {'fps':1.0, 'quantizer':'nq'} imageio.mimsave('./{0}.gif'.format(file_name), frames, fps=1) # + #genGif(digits[0], 'digit-0') # Create a list of threads threads = [] # Generate GIF's for each digit for i in range(10): # Starting one process for each digit process = Thread(target=genGif,args=[digits[i],'digits-{0}'.format(i)]) process.start() threads.append(process) # Wait for all threads to complete for process in threads: process.join() # -
utils/mnist-dataset-viewer/MNIST-Digits-GIF.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # # Transducer Reducer: Resample time-series transducer data, filter outliers # ## by <NAME>, May 2019 # + [markdown] slideshow={"slide_type": "slide"} # Load the transducer data from Excel file; instantiate the DataFrame: # + slideshow={"slide_type": "fragment"} import pandas as pd from datetime import date import numpy as np f = 'transducer_data.xlsx' df = pd.read_excel(f, infer_datetime_format =True, encoding = 'UTF8') date = df['TimeStamp'] flow_datetimes = pd.to_datetime(date, infer_datetime_format= True) df['TimeStamp'] = flow_datetimes df.drop(df.head(60).index, inplace=True) df.drop(df.tail(60).index, inplace=True) df.reset_index(inplace=True) print(df.head()) print(df.shape) # + [markdown] slideshow={"slide_type": "slide"} # Import additional packages, some .plt housekeeping to make things look nice (this can take a while!). # + slideshow={"slide_type": "subslide"} # %matplotlib inline import matplotlib.pyplot as plt from pandas.plotting import register_matplotlib_converters register_matplotlib_converters() plt.rcParams['figure.figsize'] = (12.0, 7.0) plt.tight_layout() # + [markdown] slideshow={"slide_type": "slide"} # Plot GWE vs. 
time: # + slideshow={"slide_type": "subslide"} _0 = plt.plot(df['TimeStamp'], df['GWE']) plt.minorticks_on() plt.xticks(rotation=30) plt.xlabel('Time') plt.ylabel('GWE') plt.grid() plt.show() # + [markdown] slideshow={"slide_type": "slide"} # Filter outliers by calculating the difference between GWE values, then replacing outliers by linearly interpolating between adjacent values: # + slideshow={"slide_type": "fragment"} for n in np.arange(175): df['Diff'] = df['GWE'].diff(periods = n) df['GWE'].mask(df.Diff.abs() > 0.5, np.NaN, inplace = True) df['GWE'] = df['GWE'].interpolate(method='linear', limit=175) df1 = df print(df1.head()) print(df1.tail()) print(df1.shape) # + slideshow={"slide_type": "subslide"} _1 = plt.plot(df1['TimeStamp'], df1['GWE']) plt.minorticks_on() plt.xticks(rotation=30) plt.xlabel('Time') plt.ylabel('GWE') plt.grid() plt.show() # + [markdown] slideshow={"slide_type": "slide"} # Resample (in this case, downsample) the transducer data to 1-hour intervals on the mean value of the interval: # + slideshow={"slide_type": "fragment"} interval = '1H' df_resamp = df1.resample(interval, on='TimeStamp').mean() df_resamp.reset_index(inplace=True) df_resamp.dropna(inplace=True) df_resamp.reset_index(inplace=True) print(df_resamp.head()) print(df_resamp.shape) # + slideshow={"slide_type": "slide"} _2 = plt.plot(df_resamp['TimeStamp'], df_resamp['GWE']) plt.minorticks_on() plt.xticks(rotation=45) plt.xlabel('Time') plt.ylabel('GWE') plt.grid() plt.show() # + [markdown] slideshow={"slide_type": "subslide"} # We have now filtered out most of the outliers, and reduced the size of the transducer data set by a factor of nearly 70, all while retaining the changes in groundwater elevation!
Transducer Data Reducer.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import pandas as pd from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import train_test_split import bert import os os.getcwd() os.chdir("/home/apoorv/Desktop/Agriculture-main") chem_df = pd.read_excel("02chem.list.xlsx",engine='openpyxl', header=None) crop_df = pd.read_excel("02crop.list.xlsx",engine='openpyxl', header=None) pest_df = pd.read_excel("02pest.list.xlsx",engine='openpyxl', header=None) keyword_df = pd.concat([chem_df, crop_df, pest_df]) keyword_df = keyword_df.reset_index() del keyword_df['index'] keyword_df keyword_df.head() article_df = pd.read_csv("test3.csv") article_sr = article_df['Dimension2'] label_df = pd.read_csv("TrainLabel.csv", header = 0, names=["Test", "Reference"]) label_df article_df = pd.read_csv("test3.csv") article_sr = article_df['Dimension2'] article_df # + for a in range (len(label_df.index)): df_bert = pd.DataFrame({ 'id':range(len(label_df)), 'label':label_df['Test'], 'alpha':['a']*label_df.shape[0], 'text': label_df['Reference'].replace(r'\n', ' ', regex=True) }) df_bert.append(df_bert) df_bert_train, df_bert_dev = train_test_split(df_bert, test_size=1) df_bert_train # + df_bert_test = pd.DataFrame({ 'id':range(len(article_df)), 'text': article_df['Dimension2'].replace(r'\n', ' ', regex=True) }) df_bert_test.head() # - df_bert_train.to_csv('/home/apoorv/Desktop/Agriculture-main/bert/value/train.tsv', sep='\t', index=False, header=False) df_bert_dev.to_csv('/home/apoorv/Desktop/Agriculture-main/bert/value/dev.tsv', sep='\t', index=False, header=False) df_bert_test.to_csv('/home/apoorv/Desktop/Agriculture-main/bert/value/test.tsv', sep='\t', index=False, header=False) # + BERT_MODEL_HUB = "https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1" # - MAX_SEQ_LENGTH = 128 
train_features = bert.run_classifier.convert_examples_to_features(df_bert_train, label_list, MAX_SEQ_LENGTH, tokenizer) test_features = bert.run_classifier.convert_examples_to_features(df_bert_test, label_list, MAX_SEQ_LENGTH, tokenizer)
BT.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # # Introduction # # Classical mechanics is a topic which has been taught intensively over # several centuries. It is, with its many variants and ways of # presenting the educational material, normally the first **real** physics # course many of us meet and it lays the foundation for further physics # studies. Many of the equations and ways of reasoning about the # underlying laws of motion and pertinent forces, shape our approaches and understanding # of the scientific method and discourse, as well as the way we develop our insights # and deeper understanding about physical systems. # # There is a wealth of # well-tested (from both a physics point of view and a pedagogical # standpoint) exercises and problems which can be solved # analytically. However, many of these problems represent idealized and # less realistic situations. The large majority of these problems are # solved by paper and pencil and are traditionally aimed # at what we normally refer to as continuous models from which we may find an analytical solution. As a consequence, # when teaching mechanics, it implies that we can seldomly venture beyond an idealized case # in order to develop our understandings and insights about the # underlying forces and laws of motion. # # # On the other hand, numerical algorithms call for approximate discrete # models and much of the development of methods for continuous models # are nowadays being replaced by methods for discrete models in science and # industry, simply because **much larger classes of problems can be addressed** with discrete models, often by simpler and more # generic methodologies. 
# As we will see below, when properly scaling the equations at hand,
# discrete models open up more advanced abstractions and the possibility to
# study real-life systems, with the added bonus that we can explore and
# deepen our basic understanding of various physical systems.
#
# Analytical solutions are as important as before. In addition, such
# solutions provide us with invaluable benchmarks and tests for our
# discrete models. Such benchmarks, as we will see below, allow us
# to discuss possible sources of errors and their behaviors. And
# finally, since most of our models are based on various algorithms from
# numerical mathematics, we have a unique opportunity to gain a deeper
# understanding of the mathematical approaches we are using.
#
# With computing and data science as important elements in essentially
# all aspects of a modern society, we could then try to define Computing as
# **solving scientific problems using all possible tools, including
# symbolic computing, computers and numerical algorithms, and analytical
# paper and pencil solutions**.
# Computing provides us with the tools to develop our own understanding of the scientific method by enhancing algorithmic thinking.
#
# The way we will teach this course reflects
# this definition of computing. The course contains both classical paper-and-pencil
# exercises as well as computational projects and exercises. The
# hope is that this will allow you to explore the physics of systems
# governed by the degrees of freedom of classical mechanics at a deeper
# level, and that these insights about the scientific method will help
# you to develop a better understanding of how the underlying forces and
# equations of motion impact a given system. Furthermore, by introducing various numerical methods
# via computational projects and exercises, we aim at developing your competences and skills in these topics.
# These competences will enable you to
#
# * understand how algorithms are used to solve mathematical problems,
#
# * derive, verify, and implement algorithms,
#
# * understand what can go wrong with algorithms,
#
# * use these algorithms to construct reproducible scientific outcomes and to engage in science in ethical ways, and
#
# * think algorithmically for the purposes of gaining deeper insights about scientific problems.
#
# All these elements are central for maturing and gaining a better understanding of the modern scientific process *per se*.
#
# The power of the scientific method lies in identifying a given problem
# as a special case of an abstract class of problems, identifying
# general solution methods for this class of problems, and applying a
# general method to the specific problem (applying means, in the case of
# computing, calculations by pen and paper, symbolic computing, or
# numerical computing by ready-made and/or self-written software). This
# generic view on problems and methods is particularly important for
# understanding how to apply available, generic software to solve a
# particular problem.
#
# *However, verification of algorithms and understanding their limitations requires much of the classical knowledge about continuous models.*
#
# ## A well-known example to illustrate many of the above concepts
#
# Before we venture into a reminder on Python and mechanics-relevant applications, let us briefly outline some of the
# abovementioned topics using an example many of you may have seen before, for example in CMSE201.
# A simple algorithm for integration is the Trapezoidal rule.
# Integration of a function $f(x)$ by the Trapezoidal rule over an interval $x \in [a,b]$ is given by
#
# $$
# \int_a^b f(x)\, dx = \frac{h}{2}\left[f(a)+2f(a+h)+\dots+2f(b-h)+f(b)\right] + O(h^2),
# $$
#
# where $h$ is the so-called stepsize, defined by the number of integration points $n$ as $h=(b-a)/n$.
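# As a quick numerical sanity check of the $O(h^2)$ error term quoted above, halving the
# stepsize $h$ (i.e. doubling $n$) should reduce the error by roughly a factor of four.
# A minimal sketch (the helper names `trapez` and `error` are our own choices here):

```python
# Composite Trapezoidal rule with n subintervals on [a, b].
def trapez(a, b, f, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

# Absolute error for f(x) = x^2 on [0, 1], whose exact integral is 1/3.
def error(n):
    return abs(trapez(0.0, 1.0, lambda x: x * x, n) - 1.0 / 3.0)

# Doubling n should shrink the error by about 2^2 = 4.
ratio = error(100) / error(200)
print(round(ratio, 2))  # → 4.0
```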
# Python offers an extremely versatile programming environment, allowing for
# the inclusion of analytical studies in a numerical program. Here we show an
# example code with the **Trapezoidal rule**. We also use **SymPy** to evaluate the exact value of the integral and compute the absolute error
# of the numerically evaluated result for the integral
# $\int_0^1 x^2\, dx = 1/3$.
# The following code for the Trapezoidal rule allows you to plot the relative error by comparing with the exact result. By increasing the number of integration points one eventually arrives at a region where round-off errors start to accumulate.

# +
# %matplotlib inline

from math import log10
import numpy as np
from sympy import Symbol, integrate
import matplotlib.pyplot as plt

# function for the Trapezoidal rule
def Trapez(a, b, f, n):
    h = (b-a)/float(n)
    s = 0
    x = a
    for i in range(1, n, 1):
        x = x + h
        s = s + f(x)
    s = 0.5*(f(a) + f(b)) + s
    return h*s

# function to integrate
def function(x):
    return x*x

# define integration limits
a = 0.0
b = 1.0

# find the exact result from sympy;
# define x as a symbol to be used by sympy
x = Symbol('x')
exact = integrate(function(x), (x, a, b))

# set up the arrays for plotting the relative error
n = np.zeros(7)
y = np.zeros(7)

# find the relative error as function of integration points
for i in range(1, 8, 1):
    npts = 10**i
    result = Trapez(a, b, function, npts)
    RelativeError = abs((exact-result)/exact)
    n[i-1] = log10(npts)
    y[i-1] = log10(RelativeError)

plt.plot(n, y, 'ro')
plt.xlabel('n')
plt.ylabel('Relative error')
plt.show()
# -

# This example shows the potential of combining numerical algorithms with symbolic calculations, allowing us to
#
# * validate and verify our algorithms;
#
# * include concepts like unit testing, which gives us the possibility to test several or all parts of the code;
#
# * include validation and verification *naturally*, so that we can develop a better attitude to what is meant by an ethically sound scientific approach.
#
# * The above example allows the student to also test the mathematical error of the algorithm for the Trapezoidal rule by changing the number of integration points. The students get **trained from day one to think about error analysis**.
#
# * With a Jupyter notebook you can keep exploring similar examples and turn them in as your own notebooks.
#
# In this process we can easily bake in
# 1. How to structure a code in terms of functions
#
# 2. How to make a module
#
# 3. How to read input data flexibly from the command line
#
# 4. How to create graphical/web user interfaces
#
# 5. How to write unit tests (test functions or doctests)
#
# 6. How to refactor code in terms of classes (instead of functions only)
#
# 7. How to conduct and automate large-scale numerical experiments
#
# 8. How to write scientific reports in various formats (LaTeX, HTML)
#
# The conventions and techniques outlined here will save you a lot of time when you incrementally extend software over time from simpler to more complicated problems. In particular, you will benefit from many good habits:
# 1. New code is added in a modular fashion to a library (modules)
#
# 2. Programs are run through convenient user interfaces
#
# 3. It takes one quick command to let all your code undergo heavy testing
#
# 4. Tedious manual work with running programs is automated
#
# 5. Your scientific investigations are reproducible, and scientific reports with top-quality typesetting can be produced for both paper and electronic devices
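# To make the unit-testing point above concrete, here is what a couple of unit tests
# for a Trapezoidal-rule routine could look like. This is only a sketch: the test
# names and tolerances are illustrative choices, and in practice the tests would
# live in a separate file discovered and run by `pytest`.

```python
# Self-contained Trapezoidal-rule routine plus two unit tests for it.
def trapez(a, b, f, n):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def test_exact_for_linear():
    # The rule integrates linear functions exactly, for any n:
    # here the exact integral of 3x + 1 on [0, 2] is 8.
    assert abs(trapez(0.0, 2.0, lambda x: 3 * x + 1, 4) - 8.0) < 1e-12

def test_quadratic_error_bound():
    # For f(x) = x^2 on [0, 1] the error is h^2/6, so n = 1000 gives about 1.7e-7.
    assert abs(trapez(0.0, 1.0, lambda x: x * x, 1000) - 1.0 / 3.0) < 1e-6

# pytest would discover these automatically; here we just call them directly.
test_exact_for_linear()
test_quadratic_error_bound()
print("all tests passed")
```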
doc/src/LectureNotes/_build/jupyter_execute/testbook/chapter1.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))

import os
import random
import numpy as np
import pandas as pd

# +
def load_imdb_sentiment_analysis_dataset(data_path, seed=123):
    imdb_data_path = os.path.join(data_path, 'aclImdb')
    print(imdb_data_path)

    # Load the training data
    train_texts = []
    train_labels = []
    for category in ['pos', 'neg']:
        train_path = os.path.join(imdb_data_path, 'train', category)
        for fname in sorted(os.listdir(train_path)):
            if fname.endswith('.txt'):
                with open(os.path.join(train_path, fname)) as f:
                    train_texts.append(f.read())
                train_labels.append(0 if category == 'neg' else 1)

    # Load the validation data.
    test_texts = []
    test_labels = []
    for category in ['pos', 'neg']:
        test_path = os.path.join(imdb_data_path, 'test', category)
        for fname in sorted(os.listdir(test_path)):
            if fname.endswith('.txt'):
                with open(os.path.join(test_path, fname)) as f:
                    test_texts.append(f.read())
                test_labels.append(0 if category == 'neg' else 1)

    # Shuffle the training data and labels.
    random.seed(seed)
    random.shuffle(train_texts)
    random.seed(seed)
    random.shuffle(train_labels)

    return ((train_texts, np.array(train_labels)),
            (test_texts, np.array(test_labels)))
# -

# ## Using the word embeddings

# +
from tensorflow.python.keras.preprocessing import sequence
from tensorflow.python.keras.preprocessing import text

def sequence_vectorize(train_texts, val_texts):
    # Vectorization parameters

    # Limit on the number of features. We use the top 20K features.
    TOP_K = 20000

    # Limit on the length of text sequences. Sequences longer than this
    # will be truncated.
    MAX_SEQUENCE_LENGTH = 500

    # Create vocabulary with training texts.
    tokenizer = text.Tokenizer(num_words=TOP_K)
    tokenizer.fit_on_texts(train_texts)

    # Vectorize training and validation texts.
    x_train = tokenizer.texts_to_sequences(train_texts)
    x_val = tokenizer.texts_to_sequences(val_texts)

    # Get max sequence length.
    max_length = len(max(x_train, key=len))
    if max_length > MAX_SEQUENCE_LENGTH:
        max_length = MAX_SEQUENCE_LENGTH

    # Fix sequence length to max value. Sequences shorter than the length are
    # padded in the beginning and sequences longer are truncated
    # at the beginning.
    x_train = sequence.pad_sequences(x_train, maxlen=max_length)
    x_val = sequence.pad_sequences(x_val, maxlen=max_length)
    return x_train, x_val, tokenizer.word_index, tokenizer

# +
# load the data and get the sequences

# define useful global variables
TOP_K = 20000
MAX_SEQUENCE_LENGTH = 500
EMBEDDING_DIM = 100
data_path = '/home/rohit/Documents/Study/Projects/HACKATHON INNOVATE FOR IIT/aclImdb_v1'
(train_texts, train_labels), (test_texts, test_labels) = load_imdb_sentiment_analysis_dataset(data_path)

# sequences
x_train, x_test, tokenizer_word_index, tokenizer = sequence_vectorize(train_texts, test_texts)
num_words = min(TOP_K, len(tokenizer_word_index) + 1)

# +
import pickle

# saving
with open('tokenizer.pickle', 'wb') as handle:
    pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
# -

train_texts[0]

# +
# get the glove embeddings
glove_dir = '/home/rohit/Documents/Study/Projects/HACKATHON INNOVATE FOR IIT/'

embeddings_index = {}
with open(os.path.join(glove_dir, 'glove.6B.100d.txt')) as f:
    for line in f:
        word, coefs = line.split(maxsplit=1)
        coefs = np.fromstring(coefs, 'f', sep=' ')
        embeddings_index[word] = coefs

print('Found %s word vectors.' % len(embeddings_index))
# -

embedding_matrix = np.zeros((num_words, EMBEDDING_DIM))
for word, i in tokenizer_word_index.items():
    if i >= TOP_K:
        continue
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        # words not found in embedding index will be all-zeros.
        embedding_matrix[i] = embedding_vector

# keras layers import
import keras
import tensorflow as tf
from keras.models import load_model
from keras.layers import Embedding
from keras.initializers import Constant
from keras import models, initializers, regularizers
from keras.layers import Dense, Dropout, SeparableConv1D, MaxPooling1D
from keras.layers import GlobalAveragePooling1D

# """Creates an instance of a separable CNN model.
#
# # Arguments
#     blocks: int, number of pairs of sepCNN and pooling blocks in the model.
#     filters: int, output dimension of the layers.
#     kernel_size: int, length of the convolution window.
#     embedding_dim: int, dimension of the embedding vectors.
#     dropout_rate: float, percentage of input to drop at Dropout layers.
#     pool_size: int, factor by which to downscale input at MaxPooling layer.
#     input_shape: tuple, shape of input to the model.
#     num_classes: int, number of output classes.
#     num_features: int, number of words (embedding input dimension).
#     use_pretrained_embedding: bool, true if pre-trained embedding is on.
#     is_embedding_trainable: bool, true if embedding layer is trainable.
#     embedding_matrix: dict, dictionary with embedding coefficients.
#
# # Returns
#     A sepCNN model instance.
# """
def sepcnn_model(blocks,
                 filters=64,
                 kernel_size=3,
                 embedding_dim=EMBEDDING_DIM,
                 dropout_rate=0.2,
                 pool_size=2,
                 MAX_SEQUENCE_LENGTH=MAX_SEQUENCE_LENGTH,
                 num_classes=2,
                 num_features=num_words,
                 use_pretrained_embedding=True,
                 is_embedding_trainable=True,
                 embedding_matrix=embedding_matrix):
    op_units, op_activation = 1, 'sigmoid'
    model = models.Sequential()

    # Add embedding layer. If a pre-trained embedding is used, add weights to the
    # embedding layer and set trainable to the is_embedding_trainable flag.
    if use_pretrained_embedding:
        model.add(Embedding(input_dim=num_features,
                            output_dim=embedding_dim,
                            input_length=MAX_SEQUENCE_LENGTH,
                            weights=[embedding_matrix],
                            trainable=is_embedding_trainable))
    else:
        model.add(Embedding(input_dim=num_features,
                            output_dim=embedding_dim,
                            input_length=MAX_SEQUENCE_LENGTH))

    for _ in range(blocks-1):
        model.add(Dropout(rate=dropout_rate))
        model.add(SeparableConv1D(filters=filters,
                                  kernel_size=kernel_size,
                                  activation='relu',
                                  bias_initializer='random_uniform',
                                  depthwise_initializer='random_uniform',
                                  padding='same'))
        model.add(SeparableConv1D(filters=filters,
                                  kernel_size=kernel_size,
                                  activation='relu',
                                  bias_initializer='random_uniform',
                                  depthwise_initializer='random_uniform',
                                  padding='same'))
        model.add(MaxPooling1D(pool_size=pool_size))

    model.add(SeparableConv1D(filters=filters * 2,
                              kernel_size=kernel_size,
                              activation='relu',
                              bias_initializer='random_uniform',
                              depthwise_initializer='random_uniform',
                              padding='same'))
    model.add(SeparableConv1D(filters=filters * 2,
                              kernel_size=kernel_size,
                              activation='relu',
                              bias_initializer='random_uniform',
                              depthwise_initializer='random_uniform',
                              padding='same'))
    model.add(GlobalAveragePooling1D())
    model.add(Dropout(rate=dropout_rate))
    model.add(Dense(op_units, activation=op_activation))
    return model

temp = sepcnn_model(blocks=5)
temp.summary()

# +
# hyperparameters
learning_rate = 1e-3
epochs = 1000
batch_size = 128
blocks = 2
filters = 64
dropout_rate = 0.2
embedding_dim = EMBEDDING_DIM
kernel_size = 3
pool_size = 3
num_classes = 2
num_features = num_words

# Create model instance.
model = sepcnn_model(blocks=2)
model.summary()
# -

loss = 'binary_crossentropy'
optimizer = keras.optimizers.Adam(lr=learning_rate)
model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])

# Train and validate model.
history = model.fit(x_train, train_labels, epochs=5, verbose=1, batch_size=128)
history = history.history
history.keys()
print(history['loss'], '\n', history['acc'])

# Save model.
model.save('IMDBmodel.h5')

model.evaluate(x_test, test_labels)

# +
for i in range(100):
    t1 = x_test[i]
    t1 = t1.reshape(1, t1.shape[0])
    print(t1.shape)
    if model.predict(t1) > 0.6:
        print('positive')
    elif model.predict(t1) <= 0.4:
        print('negative')
    else:
        print('model is not sure. in neutral state')

# +
def sequence_of_single_sentence(sentence, tokenizer, MAX_SEQUENCE_LENGTH=500):
    sen = tokenizer.texts_to_sequences(sentence)
    return sequence.pad_sequences(sen, maxlen=MAX_SEQUENCE_LENGTH)

y = ['iI admit, the great majority of films released before say 1933 are just not for me. Of the dozen or so "major" silents I have viewed, one I loved (The Crowd), and two were very good (The Last Command and City Lights, that latter Chaplin circa 1931).<br /><br />So I was apprehensive about this one, and humor is often difficult to appreciate (uh, enjoy) decades later. I did like the lead actors, but thought little of the film.<br /><br />One intriguing sequence. Early on, the guys are supposed to get "de-loused" and for about three minutes, fully dressed, do some schtick. In the background, perhaps three dozen men pass by, all naked, white and black (WWI ?), and for most, their butts, part or full backside, are shown. Was this an early variation of beefcake courtesy of <NAME>?']
y = ['honey that movie was marvelous ']
y = [' today ']
y = ["<NAME> and <NAME> are the ""Two Arabian Knights"" referred to in the title, humorously. The pair start out as U.S. POWs trying to escape from the Germans during World War I. Eventually, they find themselves on board a ship bound for Arabia. While tripping out to the Middle East, they rescue an Arab woman, <NAME>, who turns out to be a Princess; and, of course, becomes a romantic interest for the Two Arabian Knights. No points for guessing who wins the veiled Ms. Astor!<br /><br />The film is very well photographed and directed; Lewis Milestone has wonderful sets, and stages scenes beautifully.
Of the performances, <NAME> stands out - he creates a character so understandable you can almost hear him speak, trough the film is silent. The story isn't as strong as it could be - there are some events and sequences which had me wondering how and why the characters' locale changed. The last looks, exchanged between one of the stars and an extra, is an example of something I didnt understand. Perhaps these were comic bits which had a particular appeal for the time.<br /><br />The film is damaged in several places; but there is enough preserved, in even these scenes, to allow your mind to fill in the visual blanks. <NAME> appears as the Purser; watch for his big scene on ship, when Wolheim goes into a room with him for some money (what actually happens is a mystery). Early in the film, there is a long scene with a lot of naked men shown from the waist up (or, thereabouts); they are POWs being herded to the showers. Director Milestone uses parades of soldiers moving to great effect; this shower scene is different in that several of the men don't look as Caucasian as you might expect - maybe not as many Caucasian men would agree to appear nude? <br /><br />******* Two Arabian Knights (9/23/27) Lewis Milestone ~ <NAME>, <NAME>, <NAME>"]
y = ["worst"]
y = ["food was okay only. but i am sad.it's okay "]
y = ["i am not sad with food of our college"]
y = ['sad is that he died. it was boring']
y = ['Right now Im mostly just sprouting this so my cats can eat the grass. They love it. I rotate it around with Wheatgrass and Rye too']
y = ['I dont know if its the cactus or the tequila or just the unique combination of ingredients, but the flavour of this hot sauce makes it one of a kind! We picked up a bottle once on a trip we were on...']
y = ['Product arrived labeled as Jumbo Salted Peanuts...the peanuts were actually small sized unsalted.
Not sure if this was an error or if the vendor intended to represent the product as "Jumbo".']
y = ['I love eating them and they are good for watching TV and looking at movies! It is not too sweet. I like to transfer them to a zip lock baggie so they stay fresh so I can take my time eating them.']
y = ['The candy is just red , No flavor . Just plan and chewy . I would never buy them again']
y = ['The flavors are good. However, I do not see any differce between this and Oaker Oats brand - they are both mushy']
y = ['Buyer Beware Please! This sweetener is not for everybody. Maltitol is an alcohol sugar and can be undigestible in the body. You will know a short time after consuming it if you are one of the unsusp...']
y = ['Food was very tasty. Though, they maintain less hygiene, but you can go for once and all for having tasty food while returning from tiring journey.']
y = ['mess secretary please improve']

k = sequence_of_single_sentence(y, tokenizer)
print(k.shape)
model.predict(k)
# -

x = []
y = 'hello mess improve'
x.append(y)
k = sequence_of_single_sentence(x, tokenizer)
model.predict(k)

from keras.models import load_model

def getModelFunny():
    return load_model('model_rnn_pretrained_trained.h5')

def getModelSentiment():
    return load_model('rnn_sentiment_86.h5')

model = getModelFunny()

import pickle
with open('tokenizer_on_new_train.pickle', 'rb') as handle:
    tokenizer = pickle.load(handle)

# +
def sequence_of_single_sentence(sentence, tokenizer, MAX_SEQUENCE_LENGTH=500):
    sen = tokenizer.texts_to_sequences(sentence)
    return sequence.pad_sequences(sen, maxlen=MAX_SEQUENCE_LENGTH)

Feedback = 'not good'
x = [str(Feedback)]
y = np.asarray(x)
k = sequence_of_single_sentence(y, tokenizer)
model_Sent = getModelSentiment()
print("model loaded")
out = model_Sent.predict(k)
print("out ", out)
sentiment_star = out[0][1]
print(sentiment_star)
# -

import numpy as np
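# The cutoffs used in the prediction loop earlier (scores above 0.6 as positive, at or
# below 0.4 as negative, anything in between as neutral) can be factored into a small
# helper. This is a sketch; the name `label_from_score` is our own choice, not part of
# the original notebook.

```python
def label_from_score(score, lo=0.4, hi=0.6):
    """Map a sigmoid output to a sentiment label using the cutoffs above."""
    if score > hi:
        return 'positive'
    if score <= lo:
        return 'negative'
    return 'neutral'

print([label_from_score(s) for s in (0.91, 0.50, 0.12)])
# → ['positive', 'neutral', 'negative']
```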
Basic on IMDB-Copy1.ipynb