```
##====================================================
## このセルを最初に実行せよ --- Run this cell initially.
##====================================================
import sys
if 'google.colab' in sys.modules:
    !wget -P ./text https://www.eidos.ic.i.u-tokyo.ac.jp/~sato/assignments/project2/text/test_data.csv
    !wget -P ./text https://www.eidos.ic.i.u-tokyo.ac.jp/~sato/assignments/project2/text/wiki_dataset.csv
```

# ミニプロジェクト(発展課題) / Miniproject (Advanced exercises)

## Project2. 自分のアイディアで手法を改良しよう(発展課題)

基礎課題では、ウィキペディアの6カテゴリの記事からなるデータセット $D$ を学習データとして、カテゴリが未知の記事6本を分類しました。これら6本では正しく分類できたと思いますが、いろいろな記事で試してみると、正しく分類できない記事もあることがわかります。

そこで発展課題では、基礎課題で実装した手法をベースライン(基準)とし、それよりも高い精度で分類できるよう、手法を改良してください。**皆さん自身で考えたアイディアを実装**し、**ベースラインの手法と皆さんの提案手法とで精度を比較評価した結果を報告**してください。適宜、図や表を使って構いません。また、Markdownセルを利用し、**なぜ提案手法がうまくいくのか(あるいはうまく行くと考えたのか)を分かりやすく説明し、分類に失敗した記事がある場合は、失敗した理由を議論**して下さい。

なお、精度 (Accuracy) は、未知の記事の総数を $N$、正しく分類できたものの数を $TP$ とすると、以下の式で評価するものとします。

$$\mbox{Accuracy} = \frac{TP}{N}$$

### ライセンス

本教材で使用するウィキペディアのコンテンツは Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA) および GNU Free Documentation License (GFDL) の下にライセンスされています。本データも同じくこれらのライセンスを継承します。詳しくは[こちら](https://ja.wikipedia.org/wiki/Wikipedia:%E3%83%87%E3%83%BC%E3%82%BF%E3%83%99%E3%83%BC%E3%82%B9%E3%83%80%E3%82%A6%E3%83%B3%E3%83%AD%E3%83%BC%E3%83%89)を参照してください。

![CC-BY-SA_icon.svg](https://upload.wikimedia.org/wikipedia/commons/d/d0/CC-BY-SA_icon.svg)

## Project2. Let's improve the baseline method with your own idea (Advanced exercises)

In the basic exercises, you implemented code to categorize six uncategorized articles, extracted from the Wikipedia data, using the dataset $D$ as the training data. Although your code assigns the correct category labels to the articles given in the basic exercises, you may notice that it is not always successful: some articles are mis-categorized when you apply the method to various articles.
In these advanced exercises, consider the method implemented in the basic exercises as a baseline and improve it so that it achieves higher accuracy than the original. **Please implement your own ideas and report the results of comparing the accuracy of the baseline method with that of your proposed method.** You can use diagrams and tables to illustrate, as appropriate. Also, using Markdown cells, **explain clearly why the proposed method works (or why you thought it would work), and if there are articles that failed to be categorized, discuss the reasons for the failure.**

The accuracy is evaluated by the following equation:

$$\mbox{Accuracy} = \frac{TP}{N},$$

where $N$ is the total number of uncategorized articles and $TP$ is the number of articles that are categorized correctly.

### Licenses

The Wikipedia contents used in this learning material are licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA) and the GNU Free Documentation License (GFDL). This dataset inherits these licenses as well. For details, refer to [this site](https://ja.wikipedia.org/wiki/Wikipedia:%E3%83%87%E3%83%BC%E3%82%BF%E3%83%99%E3%83%BC%E3%82%B9%E3%83%80%E3%82%A6%E3%83%B3%E3%83%AD%E3%83%BC%E3%83%89).

![CC-BY-SA_icon.svg](https://upload.wikimedia.org/wikipedia/commons/d/d0/CC-BY-SA_icon.svg)

## 準備：ウィキペディアデータセットの読み込み

以下のコードを使って、データセット $D$ を辞書 `Dw` に読み込んでください。これは基礎課題のものと同じです。`Dw` のキーがカテゴリ `cate` のとき、`Dw[cate]` は、カテゴリ `cate` のすべての記事の重要語リストを連結して得られるリストを与えます。ここで、カテゴリとは、冒頭で述べた6つのカテゴリ (`animal`, `art`, `economy`, `law`, `plant`, `politics`) のいずれかです。

## Preparation: Reading the Wikipedia dataset

Execute the following code to load the dataset $D$ into the dictionary `Dw`. This code is excerpted from the basic exercises.
If a key of `Dw` is a category `cate`, `Dw[cate]` gives the list obtained by concatenating the lists of important words of all the articles in the category `cate`. Here, a category is one of the six categories: `animal`, `art`, `economy`, `law`, `plant` and `politics`.

```
### Execute the following code to load the Wikipedia dataset:
Dw = {}
with open('text/wiki_dataset.csv', 'r', encoding='utf-8') as fi:
    fi.readline()
    for line in fi:
        tmp = line.replace('\\n', '\n').split('\t')
        if tmp[0] not in Dw:
            Dw[tmp[0]] = []
        Dw[tmp[0]].extend(tmp[2].split(' '))
```

各カテゴリごとに最初の10個の重要語を表示してみましょう。

Let's print the first ten important words of each category.

```
### Given code:
for cate in Dw:
    print('Category:', cate)
    print('Important words:', Dw[cate][:10])
```

## 準備：未知の記事集合およびその正解のカテゴリラベルの読み込み

CSVファイル `text/test_data.csv` には分類対象であるカテゴリが未知の記事60本が納められています。ただし、本文はあらかじめデータセット `D` と同様に重要語リストに変換されています。以下のコードを用いて、これらの記事を、各記事のタイトルをキー、その本文の重要語リストを値とする辞書 `Aw2` に読み込んでください。また同時に、各記事のタイトルをキー、その正解のカテゴリ名を値とする辞書 `Aw2_ans` に読み込んでください。

正解率は、推定したカテゴリラベルを正解のカテゴリラベルと比較することで評価します。よって当然ながら、この正解のカテゴリラベルを、ラベルの推定のために使ってはいけません。

## Preparation: Loading the set of uncategorized articles and their correct category labels

The CSV file `text/test_data.csv` contains 60 target articles whose categories are unknown. Their bodies have been converted to lists of important words in advance, in the same way as the dataset `D`. Execute the following code to load these uncategorized articles into the dictionary `Aw2`, with the title of each article as a key and the list of important words in its body as the corresponding value. At the same time, create the dictionary `Aw2_ans` with the title of each article as a key and its correct category label as the value.

The accuracy is evaluated by comparing the estimated category labels with the correct category labels. Needless to say, the correct category labels must not be used to estimate the category labels.
```
### Given code:
Aw2 = {}
Aw2_ans = {}
with open('text/test_data.csv', 'r', encoding='utf-8') as fi:
    fi.readline()
    for line in fi:
        tmp = line.replace('\\n', '').split('\t')
        Aw2[tmp[1]] = tmp[2].split(' ')
        Aw2_ans[tmp[1]] = tmp[0]
```

以下のコードで `Aw2` および `Aw2_ans` の内容を1つ書き出してみましょう。

The contents of `Aw2` and `Aw2_ans` are printed as follows.

```
### Given code:
for title in Aw2:
    print('title:', title)
    print('Correct answer label:', Aw2_ans[title])
    print('Important words:', Aw2[title])
    break
```

## 皆さんのコードおよび解説

以下で皆さんのコードやその解説、結果の評価および議論を行ってください。

- この 'project2.ipynb' は自動採点されません。答案検査システムもありません。教員やTAが一つずつ見て採点します。
- 解説や議論はMarkdownセルに、コードはCodeセルに記入してください。
- 提出されたipynbファイルは教員のPCで実行したうえで評価します。実行に必要な追加パッケージがあれば指定するなどして、実行できるファイルを提出してください。
- Codeセル、Markdownセルは必要に応じて増やして構いません。

## Describe your code and explain it

Describe your code, explanation and discussion below.

- This notebook 'project2.ipynb' will not be automatically graded at all. No automatic checking is provided for it. The faculty members and TAs will read and execute this notebook and grade it manually.
- Write the explanation and discussion of your method in Markdown cells. The code should be written in Code cells.
- The submitted notebook will be executed on a faculty member's PC before grading. Please submit an executable file, specifying any additional packages required for execution.
- You can add Code cells and Markdown cells as needed.
# 既存手法の精度検証

提案手法の性能評価のため、既存手法(基礎課題で実装した手法)を用いて、`./text/test_data.csv` をカテゴリ分類する。

```
def compute_word_frequency(dw):
    import itertools
    from collections import Counter
    important_word_list = list(itertools.chain.from_iterable(dw.values()))
    return dict(Counter(important_word_list))

W = compute_word_frequency(Dw)

def extract_frequent_words(word_frequency, coverage):
    total = sum(word_frequency.values())  # hoisted out of the loop; unchanged behavior
    n_freq = 0
    answer = []
    for word, freq in sorted(word_frequency.items(), key=lambda x: -x[1]):
        if 1.0 * n_freq / total < coverage:
            n_freq += freq
            answer.append(word)
        else:
            break
    return answer

F = extract_frequent_words(W, 0.5)

def words2vec(words, frequent_words):
    counter = dict([[fw, 0] for fw in frequent_words])
    for word in words:
        if counter.get(word) is not None:
            counter[word] += 1
    return [counter[fw] for fw in frequent_words]

Av2 = {}
for title, words in Aw2.items():
    Av2[title] = words2vec(words, F)

Dv = {}
for cate, words in Dw.items():
    Dv[cate] = words2vec(words, F)

import numpy as np

def guess_category(dv, v):
    def cos_sim(x, y):
        return np.sum(x * y) / (np.sqrt(np.sum(x * x)) * np.sqrt(np.sum(y * y)))
    cos_sim_dict = dict([[cate, cos_sim(np.array(x), np.array(v))] for cate, x in dv.items()])
    return max(cos_sim_dict, key=cos_sim_dict.get)

results = []
for title in Av2.keys():
    pred = guess_category(Dv, Av2[title])
    results.append(pred == Aw2_ans[title])

accuracy = 1.0 * sum(results) / len(results)
print("Accuracy: {:.2%}".format(accuracy))
print("TP={}, N={}".format(sum(results), len(results)))
```

# 提案手法の概要 / Outline of your proposed method

...

# 着想に至った経緯 / Background to the idea

...

# 処理の流れ / Processing flow

1. First step...
1. Second step...
1. Third step...

```
# 提案手法のコード / The code of your proposed method
# 注意: 適宜、コメント行として解説を書き込み、わかりやすいコードとなるように努めてください。
# Note: Write commentaries as comment lines where appropriate and try to make the code easy to understand.

...
```

# 評価 / Evaluation

...

```
# 提案手法の評価に関するコード / The code for evaluation of your method

...
```

# 議論と結論 / Discussion and conclusion

...
SOP038 - Install azure command line interface
=============================================

Steps
-----

### Common functions

Define helper functions used in this notebook.

```
# Define `run` function for transient fault handling, hyperlinked suggestions, and scrolling updates on Windows
import sys
import os
import re
import platform
import shlex
import shutil
import datetime

from subprocess import Popen, PIPE
from IPython.display import Markdown

def run(cmd, return_output=False, no_output=False, error_hints=[], retry_hints=[], retry_count=0):
    """
    Run shell command, stream stdout, print stderr and optionally return output
    """
    max_retries = 5
    install_hint = None
    output = ""
    retry = False

    # shlex.split is required on bash and for Windows paths with spaces
    #
    cmd_actual = shlex.split(cmd)

    # When running python, use the python in the ADS sandbox ({sys.executable})
    #
    if cmd.startswith("python "):
        cmd_actual[0] = cmd_actual[0].replace("python", sys.executable)

        # On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail
        # with:
        #
        #   UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)
        #
        # Setting it to a default value of "en_US.UTF-8" enables pip install to complete
        #
        if platform.system() == "Darwin" and "LC_ALL" not in os.environ:
            os.environ["LC_ALL"] = "en_US.UTF-8"

        python_retry_hints, python_error_hints, install_hint = python_hints()
        retry_hints += python_retry_hints
        error_hints += python_error_hints

    if (cmd.startswith("kubectl ")):
        kubectl_retry_hints, kubectl_error_hints, install_hint = kubectl_hints()
        retry_hints += kubectl_retry_hints
        error_hints += kubectl_error_hints

    if (cmd.startswith("azdata ")):
        azdata_retry_hints, azdata_error_hints, install_hint = azdata_hints()
        retry_hints += azdata_retry_hints
        error_hints += azdata_error_hints

    # Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this
    # seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound)
    #
    # NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.
    #
    which_binary = shutil.which(cmd_actual[0])
    if which_binary == None:
        if install_hint is not None:
            display(Markdown(f'SUGGEST: Use {install_hint} to resolve this issue.'))

        raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)")
    else:
        cmd_actual[0] = which_binary

    start_time = datetime.datetime.now().replace(microsecond=0)

    print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)")
    print(f"       using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})")
    print(f"       cwd: {os.getcwd()}")

    # Command-line tools such as CURL and AZDATA HDFS commands output
    # scrolling progress bars, which causes Jupyter to hang forever, to
    # workaround this, use no_output=True
    #
    try:
        if no_output:
            p = Popen(cmd_actual)
        else:
            p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)
            with p.stdout:
                for line in iter(p.stdout.readline, b''):
                    line = line.decode()
                    if return_output:
                        output = output + line
                    else:
                        if cmd.startswith("azdata notebook run"):  # Hyperlink the .ipynb file
                            regex = re.compile(' "(.*)"\: "(.*)"')
                            match = regex.match(line)
                            if match:
                                if match.group(1).find("HTML") != -1:
                                    display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"'))
                                else:
                                    display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"'))
                        else:
                            print(line, end='')
        p.wait()
    except FileNotFoundError as e:
        if install_hint is not None:
            display(Markdown(f'SUGGEST: Use {install_hint} to resolve this issue.'))

        raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e

    if not no_output:
        for line in iter(p.stderr.readline, b''):
            line_decoded = line.decode()

            # azdata emits a single empty line to stderr when doing an hdfs cp, don't
            # print this empty "ERR:" as it confuses.
            #
            if line_decoded == "":
                continue

            print(f"ERR: {line_decoded}", end='')

            for error_hint in error_hints:
                if line_decoded.find(error_hint[0]) != -1:
                    display(Markdown(f'SUGGEST: Use [{error_hint[2]}]({error_hint[1]}) to resolve this issue.'))

            for retry_hint in retry_hints:
                if line_decoded.find(retry_hint) != -1:
                    if retry_count < max_retries:
                        print(f"RETRY: {retry_count} (due to: {retry_hint})")
                        retry_count = retry_count + 1

                        output = run(cmd, return_output=return_output, error_hints=error_hints, retry_hints=retry_hints, retry_count=retry_count)

                        if return_output:
                            return output
                        else:
                            return

    elapsed = datetime.datetime.now().replace(microsecond=0) - start_time

    if p.returncode != 0:
        raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n')

    print(f'\nSUCCESS: {elapsed}s elapsed\n')

    if return_output:
        return output

def python_hints():
    retry_hints = []
    error_hints = [
        ["""Library not loaded: /usr/local/opt/unixodbc""", """../common/sop008-distcp-backup-to-adl-gen2.ipynb""", """SOP008 - Backup HDFS files to Azure Data Lake Store Gen2 with distcp"""],
        ["""WARNING: You are using pip version""", """../install/sop040-upgrade-pip.ipynb""", """SOP040 - Upgrade pip in ADS Python sandbox"""]
    ]
    return retry_hints, error_hints, None

print('Common functions defined successfully.')
```

### Install az CLI

```
run("python --version")
run('python -m pip install azure-cli')

print('Notebook execution complete.')
```
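The `run` helper above retries a command when one of the `retry_hints` strings appears in its stderr, up to `max_retries` times. A minimal, self-contained sketch of that retry-on-hint pattern; the `attempt` and `flaky` callables below are hypothetical stand-ins for launching a real process:

```python
# Sketch of the retry-on-transient-error pattern used by the `run` helper above.
# `attempt` is any callable returning (stdout, stderr); hypothetical, for illustration.

def run_with_retries(attempt, retry_hints, max_retries=5):
    """Call attempt(); retry while stderr contains a known transient-error hint."""
    for retry_count in range(max_retries + 1):
        stdout, stderr = attempt()
        if not any(hint in stderr for hint in retry_hints):
            return stdout  # success (or a non-transient failure)
        print(f"RETRY: {retry_count} (due to transient error)")
    return stdout

# A fake command that fails twice with a transient error, then succeeds
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        return "", "connection timed out"
    return "ok", ""

result = run_with_retries(flaky, retry_hints=["connection timed out"])
print(result)  # prints "ok" after two retries
```

The real `run` function additionally re-invokes itself with an incremented `retry_count`, so the hint lists accumulated for `python`, `kubectl` and `azdata` commands are preserved across retries.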
```
# cell used to import the libraries needed in this notebook
import numpy as np
import sys
from scipy import sparse
from scipy.spatial.distance import pdist, squareform
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import pandas as pd
import networkx as nx
from sklearn.preprocessing import StandardScaler
from utils import *  # contains all helper functions used in the project
import scipy as sci
from sklearn.cluster import KMeans
import sklearn.metrics as sm
```

# I. Load, clean, study and prepare the data for graph creation

## I.1 Data cleaning & preparation

**Preparing the IRS data**

```
# load the data
df_migrations = pd.read_csv("NTDS_Data/countyinflow1516.csv")

# create the combined fips county number of the destination
df_migrations['statefips_str'] = df_migrations['y2_statefips'].apply(lambda x: str(x).zfill(2))
df_migrations['countyfips_str'] = df_migrations['y2_countyfips'].apply(lambda x: str(x).zfill(3))
df_migrations['combined_fips-destination'] = df_migrations['statefips_str'].apply(lambda x: x.lstrip('0')) + df_migrations['countyfips_str']

# create the combined fips county number of the source
df_migrations['statefips_str1'] = df_migrations['y1_statefips'].apply(lambda x: str(x).zfill(2))
df_migrations['countyfips_str1'] = df_migrations['y1_countyfips'].apply(lambda x: str(x).zfill(3))
df_migrations['combined_fips-source'] = df_migrations['statefips_str1'].apply(lambda x: x.lstrip('0')) + df_migrations['countyfips_str1']

# keep only source and destination counties and add the "Unemployment rate" proportion as a new column
df_migrations = df_migrations[df_migrations['y1_statefips'] <= 56]
df_migrations["Unemployment rate"] = df_migrations["n1"] / (df_migrations["n2"] + df_migrations["n1"])

# drop unneeded columns
df_migrations = df_migrations.drop(columns=["y1_countyname", "y2_statefips", "y2_countyfips", "y1_statefips",
                                            "y1_countyfips", "y1_state", "statefips_str", "countyfips_str",
                                            "statefips_str1", "countyfips_str1"])

# remove rows where the data is undefined (n1 == -1)
df_migrations = df_migrations[df_migrations['n1'] != -1]

# convert combined fips to int64
df_migrations['combined_fips-destination'] = df_migrations['combined_fips-destination'].astype('int64')
df_migrations['combined_fips-source'] = df_migrations['combined_fips-source'].astype('int64')

# extract the combined fips of destination and source for the graph, as numpy arrays
df_graph = df_migrations.drop(columns=["n1", "n2", "agi", "Unemployment rate"])

# all county-to-county combinations that have occurred in the US
dest_source = df_graph.to_numpy()

# reset the index starting from 0 (because rows were dropped)
df_migrations = df_migrations.reset_index()
df_migrations = df_migrations.drop(columns=['index'])
```

**From the IRS dataset, create adjacency matrices**

In these adjacency matrices, the nodes are the counties and the edges are:

- `A_total[i, j]` := total number of people who migrated from county i to county j
- `A_returns[i, j]` := fraction of the migrants from county i to county j who paid taxes
- `A_exemptions[i, j]` := fraction of the migrants from county i to county j who did not pay taxes

```
nodes_index = np.unique(dest_source)
num_nodes = nodes_index.shape[0]

A_total = np.zeros((num_nodes, num_nodes))
A_returns = np.zeros((num_nodes, num_nodes))
A_exemptions = np.zeros((num_nodes, num_nodes))

count = 0
for dest, source in dest_source:
    i = np.where(nodes_index == dest)
    j = np.where(nodes_index == source)
    total = df_migrations["n1"][count] + df_migrations["n2"][count]
    A_total[j[0], i[0]] = total
    A_returns[j[0], i[0]] = df_migrations["n1"][count] / total
    A_exemptions[j[0], i[0]] = df_migrations["n2"][count] / total
    count += 1
```

**Preparing the presidential results by county dataset**

The main idea in this cell is to prepare the presidential results by county dataset.
To each county a label is given: $+1$ if the county has a Republican majority and $-1$ if the county has a Democrat majority.

```
df_presidential_result = pd.read_csv("NTDS_Data/2016_US_County_Level_Presidential_Results.csv")
df_presidential_result = df_presidential_result.drop(columns=["Unnamed: 0", "votes_dem", "votes_gop", "total_votes",
                                                              "diff", "per_point_diff", "state_abbr", "county_name"])

# Sorting according to the fips code to be consistent with the IRS migration data
df_presidential_result = df_presidential_result.sort_values(by=['combined_fips'])

# Adding a new column for the winners, with -1 corresponding to Democrat and 1 to Republican
df_presidential_result["Winner"] = np.where((df_presidential_result['per_dem'] > df_presidential_result['per_gop']), -1, 1)
df_presidential_result = df_presidential_result.drop(columns=["per_dem", "per_gop"])

# Reindex some FIPS codes due to differences between the FIPS codes of the two datasets
test = nodes_index - df_presidential_result["combined_fips"].values
df_presidential_result["combined_fips"] = df_presidential_result["combined_fips"] + test
```

## I.2 Study the datasets at hand

First we study the proportion of people paying taxes and not paying taxes in each migration flow. A histogram of these proportions is plotted below. As one can see, on average $35$% of the people in a migration flow are paying taxes (and conversely $65$% are exempt from paying taxes). At most, $50$% of the people in a migration flow pay taxes. Hence, it is interesting to note that most people who migrate are exempt from paying taxes. In subsequent parts of this notebook, we will try to see if we can use these proportions to predict whether a county votes Republican or Democrat.
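The per-flow proportions described above (fraction filing returns vs. fraction exempt) can also be computed without an explicit Python loop. A vectorized numpy sketch on hypothetical toy counts (not the actual IRS values):

```python
import numpy as np

# Hypothetical per-flow counts: n1 = migrants filing tax returns, n2 = exempt migrants
n1 = np.array([35.0, 20.0, 50.0])
n2 = np.array([65.0, 80.0, 50.0])

total = n1 + n2
pct_return = n1 / total   # fraction of each flow paying taxes
pct_exempt = n2 / total   # fraction of each flow not paying taxes

# the two proportions are complementary by construction
assert np.allclose(pct_return + pct_exempt, 1.0)
print(pct_return.mean())  # average fraction paying taxes across flows
```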
```
# <returns, exempt>
node_pct = np.zeros((df_migrations.shape[0], 2))
for i in range(0, df_migrations.shape[0]):
    total = df_migrations['n1'][i] + df_migrations['n2'][i]
    node_pct[i, 0] = df_migrations['n1'][i] / total
    node_pct[i, 1] = df_migrations['n2'][i] / total

df_node_pct = pd.DataFrame(node_pct, columns=["pct_return", "pct_exempt"])

# note: matplotlib's deprecated `normed` argument is replaced by `density`,
# and plt.show() separates the two histograms into distinct figures
plt.hist(df_node_pct["pct_return"].values, density=False, bins=30)
plt.title('Distribution of the proportion of migrants paying taxes per flow')
plt.ylabel('Number of migration flows')
plt.xlabel('Pct. of people paying tax and migrating')
plt.show()

plt.hist(df_node_pct["pct_exempt"].values, density=False, bins=30)
plt.title('Distribution of the proportion of migrants not paying taxes per flow')
plt.ylabel('Number of migration flows')
plt.xlabel('Pct. of people not paying tax and migrating')
plt.show()
```

One also wants to consider the proportions of Republican and Democrat counties in the US. Before doing the actual computation, a bit of historical background on the US electoral system is required. Historically, most of the states in the US are Republican. Hence, if one draws a simple geographic map of the US, one would color most states in red (the color of the Republicans). However, if one then scales the size of each state with the number of inhabitants in each county, the proportions of blue and red on the map become more or less equal, with coastal states (states on the Atlantic or Pacific coast) in blue and the inner states in red (Republican). Our computation verifies these historical proportions: more than $84$% of the counties are Republican.
```
num_republicans = df_presidential_result[df_presidential_result['Winner'] == 1].shape[0]
num_democrats = df_presidential_result[df_presidential_result['Winner'] == -1].shape[0]

pct_republican = num_republicans / df_presidential_result.shape[0]
pct_democrat = num_democrats / df_presidential_result.shape[0]

print("Pct. of counties Republican : ", pct_republican, " // Pct. of counties Democrat : ", pct_democrat)
```

# II. Creation of a simple graph following the structure of migration & first attempt to predict county type

## II.1 Creation of a simple graph

The first graphs studied in this notebook are simple to understand, as they follow the structure of a migration: if there is a migration between counties i and j, then an edge is set between these two counties.

Before moving on, it is interesting to note that in this section we are creating graphs that are supposed to show a correlation between a type of migration and a voting pattern in a county. When we refer to "type of migration", we mean the proportion of people paying taxes versus not paying taxes in a specific migration flow. For example, we say that a migration flow has a high proportion of people paying taxes if more than $40$% of the people in the migration flow are paying taxes. The idea is to correlate this migration with a specific voting pattern in the destination county. To achieve this task we will be creating two types of graphs:

- `graph_nonRGB_returns` : in these graphs there is an edge between two counties if (1) there is an actual migration between counties i and j and (2) the migration flow has a proportion of people paying taxes greater than a **specified threshold**.
- `graph_nonRGB_exempt` : the same type of graph as before, but now we are studying the proportion of exempted people in a migration flow.

In the subsequent cells, we code mainly two methods: one for creating `graph_nonRGB_returns` graphs and one for creating `graph_nonRGB_exempt` graphs.

**Note :** we refer to the graphs created in this section as "nonRGB", as in a later section we will be using RGB graphs. One can read this notation as denoting a raw graph built on migration without any kind of similarity extrapolation.

```
def create_adjacency_nonRGB_returns(threshold_returns, plot_adj_returns=False):
    """
    Create the adjacency matrix for a graph where there is an edge between two counties
    if the migration flow between them has a proportion of people paying taxes
    greater than threshold_returns
    """
    adjacency_nonRGB_returns = A_returns.copy()
    adjacency_nonRGB_returns[adjacency_nonRGB_returns >= threshold_returns] = 1
    adjacency_nonRGB_returns[adjacency_nonRGB_returns < threshold_returns] = 0
    if plot_adj_returns:
        plt.spy(adjacency_nonRGB_returns)
        plt.show()
    return adjacency_nonRGB_returns

def create_graph_nonRGB_returns(threshold_returns, plot_adj_returns=False):
    """
    Create a graph where there is an edge between two counties if the migration flow
    between them has a proportion of people paying taxes greater than threshold_returns.
    The argument plot_adj_returns is a boolean used if one wants to plot the
    adjacency matrix of the graph
    """
    graph_nonRGB_returns = nx.from_numpy_array(create_adjacency_nonRGB_returns(threshold_returns, plot_adj_returns))
    nodes = np.zeros((nodes_index.shape[0], 2))
    for fips, result in df_presidential_result.values:
        i = np.where(nodes_index == fips)
        index = i[0][0]
        nodes[index, 0] = index
        nodes[index, 1] = result
    node = pd.DataFrame(nodes, columns=["id", "result"])
    node_props = node.to_dict()
    for key in node_props:
        nx.set_node_attributes(graph_nonRGB_returns, node_props[key], key)
    nx.write_gexf(graph_nonRGB_returns, 'graph_nonRGB_returns_35.gexf')
    return graph_nonRGB_returns

def create_graph_nonRGB_returns_features(threshold_returns, plot_adj_returns=False):
    graph_nonRGB_returns = nx.from_numpy_array(create_adjacency_nonRGB_returns(threshold_returns, plot_adj_returns))
    nodes = np.zeros((nodes_index.shape[0], 4))
    for fips, result in df_presidential_result.values:
        i = np.where(nodes_index == fips)
        index = i[0][0]
        nodes[index, 0] = index
        nodes[index, 1] = result
    for j in range(0, df_migrations.shape[0]):
        fips = df_migrations['combined_fips-destination'][j]
        i = np.where(nodes_index == fips)
        index = i[0][0]
        nodes[index, 2] = df_migrations['agi'][j]
        nodes[index, 3] = df_migrations['Unemployment rate'][j]
    node = pd.DataFrame(nodes, columns=["id", "result", "agi", "unemployment_rate"])
    node_props = node.to_dict()
    for key in node_props:
        nx.set_node_attributes(graph_nonRGB_returns, node_props[key], key)
    nx.write_gexf(graph_nonRGB_returns, 'graph_nonRGB_returns_35.gexf')
    return graph_nonRGB_returns, node

def create_adjacency_nonRGB_exempt(threshold_exempt, plot_adj_exempt=False):
    """
    Create the adjacency matrix for a graph where there is an edge between two counties
    if the migration flow between them has a proportion of people not paying taxes
    greater than threshold_exempt
    """
    adjacency_nonRGB_exempt = A_exemptions.copy()
    adjacency_nonRGB_exempt[adjacency_nonRGB_exempt >= threshold_exempt] = 1
    adjacency_nonRGB_exempt[adjacency_nonRGB_exempt < threshold_exempt] = 0
    if plot_adj_exempt:
        plt.spy(adjacency_nonRGB_exempt)
        plt.show()
    return adjacency_nonRGB_exempt

def create_graph_nonRGB_exempt(threshold_exempt, plot_adj_exempt=False):
    """
    Create a graph where there is an edge between two counties if the migration flow
    between them has a proportion of people not paying taxes greater than threshold_exempt.
    The argument plot_adj_exempt is a boolean used if one wants to plot the
    adjacency matrix of the graph
    """
    graph_nonRGB_exempt = nx.from_numpy_array(create_adjacency_nonRGB_exempt(threshold_exempt, plot_adj_exempt))
    nodes = np.zeros((nodes_index.shape[0], 2))
    for fips, result in df_presidential_result.values:
        i = np.where(nodes_index == fips)
        index = i[0][0]
        nodes[index, 0] = index
        nodes[index, 1] = result
    node = pd.DataFrame(nodes, columns=["id", "result"])
    node_props = node.to_dict()
    for key in node_props:
        nx.set_node_attributes(graph_nonRGB_exempt, node_props[key], key)
    nx.write_gexf(graph_nonRGB_exempt, 'graph_nonRGB_exempt.gexf')
    return graph_nonRGB_exempt

def create_graph_nonRGB_exempt_features(threshold_exempt, plot_adj_exempt=False):
    graph_nonRGB_exempt = nx.from_numpy_array(create_adjacency_nonRGB_exempt(threshold_exempt, plot_adj_exempt))
    nodes = np.zeros((nodes_index.shape[0], 4))
    for fips, result in df_presidential_result.values:
        i = np.where(nodes_index == fips)
        index = i[0][0]
        nodes[index, 0] = index
        nodes[index, 1] = result
    for j in range(0, df_migrations.shape[0]):
        fips = df_migrations['combined_fips-destination'][j]
        i = np.where(nodes_index == fips)
        index = i[0][0]
        nodes[index, 2] = df_migrations['agi'][j]
        nodes[index, 3] = df_migrations['Unemployment rate'][j]
    node = pd.DataFrame(nodes, columns=["id", "result", "agi", "unemployment_rate"])
    node_props = node.to_dict()
    for key in node_props:
        nx.set_node_attributes(graph_nonRGB_exempt, node_props[key], key)
    nx.write_gexf(graph_nonRGB_exempt, 'graph_nonRGB_exempt.gexf')
    return graph_nonRGB_exempt, node
```

## II.2 First attempt at predicting election results

With the graphs built in the previous section, we want to see whether there is some sort of pattern between a particular structure of the graph and the voting pattern in each county.

### II.2.1 First observations using Gephi

The first hypotheses that can be stated are the following:

1. **Hypothesis 1** : a migration flow with more than 35% of people paying taxes will have a Republican county as its destination. One could think that people paying taxes would like to move to Republican counties, where taxes such as the property tax are lower.
2. **Hypothesis 2** : a migration flow with more than 70% of people not paying taxes will have a Democrat county as its destination. One could think that people with the lowest income would move to counties where charity is more developed (we are not considering help from the state, which is the same whatever the state).

To validate or reject these two hypotheses, we build two graphs. The first one considers only the migration flows between counties where more than $35$% of the migrants are paying taxes. The second graph considers only the migration flows between counties where more than $70$% of the migrants are not paying taxes. If hypothesis 1 is correct, then when observing the first graph in $Gephi$, most of the connections will point toward Republican counties. On the other hand, if hypothesis 2 is correct, most migrations will have a Democrat county as destination.

```
create_graph_nonRGB_exempt(0.7)
create_graph_nonRGB_returns(0.35)
```

Results of the observation in $Gephi$ :

- *observation on the exemption graph* : the exemption graph (i.e. the graph with edges between nodes where the migration is characterized by more than $70$% of migrants not paying taxes) does not have the expected structure. Edges go from Democrat to Republican and from Republican to Democrat in equal fashion. So hypothesis 2 cannot be validated.
- *observation on the return graph* : the return graph (i.e. the graph with edges between nodes where the migration is characterized by more than $35$% of migrants paying taxes) does not have the expected structure either. Most of the migration is concentrated between Democrat nodes. It appears that migration characterized by a high rate of people paying taxes is concentrated between Democrat counties.
So, as a conclusion, both hypotheses 1 and 2 are rejected. However, it appears from the return graph that by studying the degree of a node, one could be able to tell whether it is a Democrat or a Republican county.

### II.2.2 Prediction based on the degree of a county (i.e. a node of the graph)

The aforementioned observation tells us that by studying the degree of a node, we might be able to predict the label (i.e. Republican or Democrat) of that node. We will now verify this assumption.

The driving force behind our first prediction algorithm is quite simple: we believe that we can split the nodes into two categories, the first being nodes with high degree and the second being nodes with low degree. These two categories are then mapped to Democrat and Republican, respectively. However, the problem remains of finding the correct threshold to construct our graph (remember that our graphs are constructed using a threshold on the proportion of migrants paying or not paying taxes) and of choosing the degree that defines the limit between the two aforementioned categories. This limit is from now on referred to as the "cut". Finding the best possible tuple of hyper-parameters is a cross-validation problem. Hence, the subsequent cell implements a cross-validation to find the best possible cut and threshold for this problem, and computes the accuracy of predicting labels in such a way.
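The search over the (threshold, cut) pair can be illustrated in isolation: given node degrees and party labels (toy, hypothetical values below), sweep candidate cuts, keep the one that best separates the two degree distributions, and then classify high-degree nodes as Democrat and low-degree nodes as Republican.

```python
import numpy as np

# Toy, hypothetical data: node degrees and labels (-1 = Democrat, +1 = Republican)
degrees = np.array([9, 8, 7, 7, 1, 2, 1, 0, 2, 1])
labels = np.array([-1, -1, -1, -1, 1, 1, 1, 1, 1, 1])

d = degrees[labels == -1]  # Democrat node degrees
r = degrees[labels == 1]   # Republican node degrees

# Sweep cuts; the separation criterion is the gap between the fractions
# of each party lying above the cut
best_cut, best_sep = None, -1.0
for cut in range(int(degrees.min()), int(degrees.max())):
    sep = abs((d > cut).mean() - (r > cut).mean())
    if sep > best_sep:
        best_sep, best_cut = sep, cut

# Classify: degree above the cut -> Democrat, otherwise Republican
pred = np.where(degrees > best_cut, -1, 1)
accuracy = (pred == labels).mean()
print(best_cut, accuracy)
```

On real data the graph itself also depends on the construction threshold, so this inner cut search is repeated for each candidate threshold.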
```
def get_degree_attribute(G):
    # (degree, election result) pair for every node of the graph
    degree_attr = [(G.degree(n), G.nodes[n]['result']) for n in G.nodes()]
    return np.array(degree_attr)

def get_degree_party(degree_attr):
    # split the degrees into two lists according to the party label (-1 = Democrat)
    democrats = []
    republicans = []
    for degree, result in degree_attr:
        if result == -1:
            democrats.append(degree)
        else:
            republicans.append(degree)
    return democrats, republicans

def compute_accuracy(d, r, best_cut):
    # nodes above the cut are predicted Democrat, nodes below are predicted Republican
    pct_dem_above_cut = d[d > best_cut].shape[0] / d.shape[0]
    pct_rep_above_cut = r[r > best_cut].shape[0] / r.shape[0]
    accuracy = (num_democrats * pct_dem_above_cut
                + num_republicans * (1 - pct_rep_above_cut)) / (num_democrats + num_republicans)
    return accuracy

def cross_validation_returns(threshold_range_min, threshold_range_max, step=0.01, print_best=False):
    thresholds = np.arange(start=threshold_range_min, stop=threshold_range_max, step=step)
    max_global = 0
    best_cut = 0
    best_threshold = 0
    for threshold in thresholds:
        graph_nonRGB_returns = create_graph_nonRGB_returns(threshold)
        degree_attr = get_degree_attribute(graph_nonRGB_returns)
        d, r = get_degree_party(degree_attr)
        d = np.array(d)
        r = np.array(r)
        # search the cut between the first and third quartiles of the Democrat degrees
        cuts = np.arange(np.quantile(d, 0.25), np.quantile(d, 0.75), 1)
        max_local = 0
        cut_local = 0
        for cut in cuts:
            # maximize the separation between the two parties
            temp = np.abs(d[d > cut].shape[0]/d.shape[0] - r[r > cut].shape[0]/r.shape[0])
            if temp > max_local:
                max_local = temp
                cut_local = cut
        if max_local > max_global:
            max_global = max_local
            best_threshold = threshold
            best_cut = cut_local
    # rebuild the graph with the best threshold, so the accuracy is evaluated on it
    # (and not on the last threshold of the loop)
    graph_nonRGB_returns = create_graph_nonRGB_returns(best_threshold)
    degree_attr = get_degree_attribute(graph_nonRGB_returns)
    d, r = get_degree_party(degree_attr)
    d = np.array(d)
    r = np.array(r)
    if print_best:
        print(d[d > best_cut].shape[0]/d.shape[0])
        print(r[r > best_cut].shape[0]/r.shape[0])
        plt.hist(d, density=True, bins=100)
        plt.show()
        plt.hist(r, density=True, bins=100)
        plt.show()
    accuracy = compute_accuracy(d, r, best_cut)
    return best_cut, best_threshold, accuracy
best_cut_brute, best_threshold_brute, accuracy_brute = cross_validation_returns(0.3, 0.6, print_best=True)
print("The best cut is : ", best_cut_brute, "and the best threshold is : ", best_threshold_brute)
print("W/ overall accuracy : ", accuracy_brute)
```

The graphs above show that by constructing a graph with migration characterized by more than $38$% of people paying taxes, we can split the nodes of the graph into two categories: Republican nodes with a degree below 6, and Democrat nodes with a degree above 6. By doing so we correctly classify half of the Democrats and $92$% of the Republicans, giving an overall accuracy of $85$%. This is not great, as one could simply declare all counties Republican and get an overall accuracy of $81$%.

**Note:** we refer to this method as cross-validation, but we are not splitting the data into a proper training set and a validation set, so calling it cross-validation might be an overstatement. The term still captures the idea that we are searching for the best possible tuple (cut, threshold) for this prediction.

### II.2.3 Prediction based on the degree of a county's neighboring nodes

The previous technique, predicting the label of a node from its absolute degree, performed poorly: half of the Democrat nodes were wrongly predicted. Hence we try a new technique: predicting the label of a node from the average degree of its neighbors. The problem with the previous algorithm was that a clear cut between the two categories required a high threshold on the migration flows. A high threshold left most Republican nodes edge-free, but it also left a high proportion of Democrat nodes edge-free, and those were wrongly classified. To solve this problem, we study the neighboring nodes instead, which lets us reduce the threshold.
Hence, even though more Republican nodes will then have connections, we believe that by averaging the degrees of all their neighbors we will still get a lower average degree than for Democrat nodes. Because this method seemed promising, we developed it for both graphs: returns and exemptions.

**Studying neighbors on the returns graph**

```
def compute_mean(neigh_degree):
    # average degree of the neighbors; isolated nodes get 0
    if neigh_degree.shape[0] == 0:
        return 0
    return neigh_degree.mean()

def mean_degree_neighbors(G):
    degree_attr = get_degree_attribute(G)
    mean_degree_neigh = []
    for node in G.nodes:
        neigh_degree = [degree_attr[neigh][0] for neigh in G.neighbors(node)]
        mean_degree_neigh.append(compute_mean(np.array(neigh_degree)))
    # pair each node's mean neighbor degree with its party label
    return np.concatenate((np.array(mean_degree_neigh).reshape(degree_attr.shape[0], 1),
                           degree_attr[:, 1].reshape(degree_attr.shape[0], 1)), axis=1)

def cross_validation_neigh_returns(threshold_range_min, threshold_range_max, step=0.01, print_best=False):
    thresholds = np.arange(start=threshold_range_min, stop=threshold_range_max, step=step)
    max_global = 0
    best_cut = 0
    best_threshold = 0
    for threshold in thresholds:
        graph_nonRGB_returns = create_graph_nonRGB_returns(threshold)
        degree_attr = mean_degree_neighbors(graph_nonRGB_returns)
        d, r = get_degree_party(degree_attr)
        d = np.array(d)
        r = np.array(r)
        cuts = np.arange(np.quantile(d, 0.25), np.quantile(d, 0.75), 1)
        max_local = 0
        cut_local = 0
        for cut in cuts:
            # the log penalizes Republican nodes above the cut (see the note below)
            temp = np.abs(d[d > cut].shape[0]/d.shape[0] - np.log(r[r > cut].shape[0]/r.shape[0]))
            if temp > max_local:
                max_local = temp
                cut_local = cut
        if max_local > max_global:
            max_global = max_local
            best_threshold = threshold
            best_cut = cut_local
    # rebuild the graph with the best threshold before evaluating the accuracy
    graph_nonRGB_returns = create_graph_nonRGB_returns(best_threshold)
    degree_attr = mean_degree_neighbors(graph_nonRGB_returns)
    d, r = get_degree_party(degree_attr)
    d = np.array(d)
    r = np.array(r)
    if print_best:
        print(d[d > best_cut].shape[0]/d.shape[0])
        print(r[r > best_cut].shape[0]/r.shape[0])
        plt.hist(d, density=True, bins=100)
        plt.show()
        plt.hist(r, density=True, bins=100)
        plt.show()
    accuracy = compute_accuracy(d, r, best_cut)
    return best_cut, best_threshold, accuracy

best_cut_return, best_threshold_return, accuracy_returns = cross_validation_neigh_returns(0.3, 0.6, print_best=True)
print("The best cut is : ", best_cut_return, " // the best threshold is : ", best_threshold_return)
print("W/ overall accuracy : ", accuracy_returns)

def cross_validation_neigh_exempt(threshold_range_min, threshold_range_max, step=0.01, print_best=False):
    thresholds = np.arange(start=threshold_range_min, stop=threshold_range_max, step=step)
    max_global = 0
    best_cut = 0
    best_threshold = 0
    for threshold in thresholds:
        graph_nonRGB_exempt = create_graph_nonRGB_exempt(threshold)
        degree_attr = mean_degree_neighbors(graph_nonRGB_exempt)
        d, r = get_degree_party(degree_attr)
        d = np.array(d)
        r = np.array(r)
        cuts = np.arange(np.quantile(d, 0.25), np.quantile(d, 0.75), 1)
        max_local = 0
        cut_local = 0
        for cut in cuts:
            temp = np.abs(d[d > cut].shape[0]/d.shape[0] - np.log(r[r > cut].shape[0]/r.shape[0]))
            if temp > max_local:
                max_local = temp
                cut_local = cut
        if max_local > max_global:
            max_global = max_local
            best_threshold = threshold
            best_cut = cut_local
    # rebuild the graph with the best threshold before evaluating the accuracy
    graph_nonRGB_exempt = create_graph_nonRGB_exempt(best_threshold)
    degree_attr = mean_degree_neighbors(graph_nonRGB_exempt)
    d, r = get_degree_party(degree_attr)
    d = np.array(d)
    r = np.array(r)
    if print_best:
        print(d[d > best_cut].shape[0]/d.shape[0])
        print(r[r > best_cut].shape[0]/r.shape[0])
        plt.hist(d, density=True, bins=100)
        plt.show()
        plt.hist(r, density=True, bins=100)
        plt.show()
    accuracy = compute_accuracy(d, r, best_cut)
    return best_cut, best_threshold, accuracy

best_cut_exempt, best_threshold_exempt, accuracy_exempt = cross_validation_neigh_exempt(0.55, 0.8, print_best=True)
print("The best cut is : ", best_cut_exempt, " // the best threshold is : ", best_threshold_exempt)
print("W/ overall accuracy : ", accuracy_exempt)
```

A first try gave us an overall accuracy of $78$%, which is clearly a poor result: simply labeling all counties Republican would already give a better accuracy. Hence, a bit of inspiration was taken from machine learning. In machine learning, when one faces heavily imbalanced targets (i.e. most of the dataset is concentrated on one specific value), a common trick is to use a penalizing function such as the logarithm. This is exactly what we introduce here: when computing the absolute difference between the proportions of nodes above the cut, we penalize the Republican nodes with the log function. This forces the "loss function" (based on the degrees of the nodes) to keep the number of Republican nodes above the cut small (because they are the ones that cost us the most in the end). By doing so, we reach an overall accuracy of $87$%.

**Note:** the actual value of the new loss function is:

$$ loss = |pctOfDemocratAboveCut - \log(pctOfRepublicanAboveCut)| $$

Even with these changes, we only improved the result by a mere $2$%, which could be satisfying if the class imbalance toward Republican counties were not so strong. Hence, in the subsequent cells we try more sophisticated methods using Laplacian techniques and GCNs in order to reach a higher accuracy.

### II.2.4 Graph observation

Observation of the characteristics of the non_RGB_return and non_RGB_exempt graphs, such as the type of network, the clustering coefficient and the sparsity.
```
arr = df_graph.to_numpy()
possible_nodes = np.unique(arr)
A_migr = np.zeros((len(possible_nodes), len(possible_nodes)))
for dest, source in arr:
    i = np.where(possible_nodes == dest)
    j = np.where(possible_nodes == source)
    # undirected migration graph: set both entries of the adjacency matrix
    A_migr[j[0], i[0]] = 1
    A_migr[i[0], j[0]] = 1
G_migr = nx.from_numpy_matrix(A_migr)
G_exempt = create_graph_nonRGB_exempt(0.7)
G_returns = create_graph_nonRGB_returns(0.35)
```

Degree distribution of the graphs

```
fig, axes = plt.subplots(1, 3, figsize=(20, 6))
axes[0].set_title('Degree distribution of Non RGB exemption graph')
exemption_degrees = [degree for node, degree in G_exempt.degree()]
axes[0].hist(exemption_degrees);
axes[1].set_title('Degree distribution of Non RGB returns graph')
returns_degrees = [degree for node, degree in G_returns.degree()]
axes[1].hist(returns_degrees);
axes[2].set_title('Degree distribution of the migration graph')
migr_degrees = [degree for node, degree in G_migr.degree()]
axes[2].hist(migr_degrees);
```

We can clearly observe that the returns graph has nodes of higher degree than the exemption graph. The exemption graph has fewer edges because an edge exists between two nodes only when more than 70% of the migration flow is not paying taxes, a stricter requirement than the one used for the returns graph, which makes the exemption graph sparser. The degree distributions also show that most counties have a degree below 50 and only a few counties lie above it. The migration graph, as expected, has higher-degree nodes since it has an edge between every pair of counties with any migration between them. As the returns graph contains higher-degree nodes, it is expected to have a higher average clustering coefficient and a larger giant component. The degree distributions of both graphs approximately follow a power law, which suggests they are scale-free.
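The power-law claim can be checked quickly with a log-log fit of the degree histogram. This is a minimal sketch under the assumption that one would pass `G_returns` or `G_exempt` instead of the stand-in Barabási-Albert graph used here:

```python
import numpy as np
import networkx as nx

# Stand-in scale-free graph; in the notebook one would use G_returns or G_exempt.
G = nx.barabasi_albert_graph(2000, 3, seed=0)

degrees = np.array([d for _, d in G.degree()])
values, counts = np.unique(degrees, return_counts=True)

# Fit log(count) ~ alpha * log(degree): a roughly straight line with a
# clearly negative slope is consistent with a power-law degree distribution.
alpha, _ = np.polyfit(np.log(values), np.log(counts), 1)
print(f"estimated exponent: {alpha:.2f}")
```

A raw histogram fit like this is noisy in the tail; it is only meant as a visual sanity check of the scale-free claim, not a rigorous power-law test.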
Evaluating basic properties of the exemption graph

```
print('Number of nodes: {}, Number of edges: {}'.format(G_exempt.number_of_nodes(), G_exempt.number_of_edges()))
print('Number of connected components: {}'.format(nx.number_connected_components(G_exempt)))

exempt_connected_components = (G_exempt.subgraph(c) for c in nx.connected_components(G_exempt))
giant_exempt = max(exempt_connected_components, key=len)
print('The giant component of the exemption graph has {} nodes and {} edges.'.format(giant_exempt.number_of_nodes(), giant_exempt.size()))

# Calculating the clustering coefficient
print('The average clustering coefficient of the exemption graph is {}'.format(nx.average_clustering(G_exempt)))
```

Looking at the clustering coefficient, we can assume that the graph is not random and has an internal structure.

Simulating the graph with an Erdős-Rényi network

```
# It should have the same number of nodes
n1 = len(G_exempt.nodes())
# Edges of the exemption graph
m1 = G_exempt.size()
# The p parameter is adjusted to have the same number of edges
p1 = 2*m1 / (n1 * (n1-1))
G_exempt_er = nx.erdos_renyi_graph(n1, p1)
print('The Erdos-Rényi network that simulates the exemption graph has {} edges.'.format(G_exempt_er.size()))
```

The Erdos-Rényi network that simulates the exemption graph has 4089 edges.

```
# Calculating the clustering coefficient
nx.average_clustering(G_exempt_er)
```

The Erdős-Rényi graph has a very low clustering coefficient compared to the original graph, as the ER network is completely random while the original graph is not.
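The low value is expected: in an Erdős-Rényi graph $G(n, p)$, any two neighbors of a node are themselves connected with probability $p$, so the average clustering coefficient should be close to $p$ itself. A quick sanity check of this fact (with illustrative parameters, not the notebook's actual `n1`, `p1`):

```python
import networkx as nx

n, p = 500, 0.02
G_er = nx.erdos_renyi_graph(n, p, seed=0)

# In G(n, p) the expected clustering coefficient equals the edge probability p,
# so the measured average clustering should be close to p.
print(abs(nx.average_clustering(G_er) - p))
```

Since `p1` is chosen to match the edge count of a sparse real graph, it is tiny, and the simulated ER clustering is correspondingly tiny.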
Simulation of the graph using a BA network

```
q1 = 2
G_exempt_ba = nx.barabasi_albert_graph(n1, q1)
print('The Barabási-Albert network that simulates the exemption graph has {} edges.'.format(G_exempt_ba.size()))

# Calculating the clustering coefficient
nx.average_clustering(G_exempt_ba)
```

The BA network has its average clustering close to that of the original graph, but a significantly higher number of edges.

```
fig, axes = plt.subplots(1, 3, figsize=(20, 6))
axes[0].set_title('Degree distribution of the Simulated BA network')
exemption_degrees = [degree for node, degree in G_exempt_ba.degree()]
axes[0].hist(exemption_degrees);
axes[1].set_title('Degree distribution of the Simulated ER network')
exempt_er_degrees = [degree for node, degree in G_exempt_er.degree()]
axes[1].hist(exempt_er_degrees);
axes[2].set_title('Degree distribution of the original exemption graph')
exempt_degrees = [degree for node, degree in G_exempt.degree()]
axes[2].hist(exempt_degrees);
```

The degree distribution and the average clustering of the simulated BA network are very close to those of the original graph.

Performing a similar analysis for the returns graph

```
print('Number of nodes: {}, Number of edges: {}'.format(G_returns.number_of_nodes(), G_returns.number_of_edges()))
print('Number of connected components: {}'.format(nx.number_connected_components(G_returns)))

returns_connected_components = (G_returns.subgraph(c) for c in nx.connected_components(G_returns))
giant_returns = max(returns_connected_components, key=len)
print('The giant component of the returns graph has {} nodes and {} edges.'.format(giant_returns.number_of_nodes(), giant_returns.size()))

# Calculating the clustering coefficient
print('The average clustering coefficient of the returns graph is {}'.format(nx.average_clustering(G_returns)))

# It should have the same number of nodes
n2 = len(G_returns.nodes())
# Edges of the returns graph
m2 = G_returns.size()
# The p parameter is adjusted to have the same number of edges
p2 = 2*m2 / (n2 * (n2-1))
G_returns_er = nx.erdos_renyi_graph(n2, p2)
print('The Erdos-Rényi network that simulates the returns graph has {} edges.'.format(G_returns_er.size()))

# Calculating the clustering coefficient
nx.average_clustering(G_returns_er)

q2 = 6
G_returns_ba = nx.barabasi_albert_graph(n2, q2)
print('The Barabási-Albert network that simulates the returns graph has {} edges.'.format(G_returns_ba.size()))

# Calculating the clustering coefficient
nx.average_clustering(G_returns_ba)

fig, axes = plt.subplots(1, 3, figsize=(20, 6))
axes[0].set_title('Degree distribution of the Simulated BA network')
returnsba_degrees = [degree for node, degree in G_returns_ba.degree()]
axes[0].hist(returnsba_degrees);
axes[1].set_title('Degree distribution of the Simulated ER network')
returnser_degrees = [degree for node, degree in G_returns_er.degree()]
axes[1].hist(returnser_degrees);
axes[2].set_title('Degree distribution of the original returns graph')
returns_degrees = [degree for node, degree in G_returns.degree()]
axes[2].hist(returns_degrees);
```

Similar observations hold for the returns graph: the BA network simulates it more accurately than the ER graph.
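The attachment parameters above (`q1 = 2`, `q2 = 6`) can be read as matching the average degree of the original graphs: `barabasi_albert_graph(n, m)` attaches each new node with `m` edges, giving exactly `m * (n - m)` edges in total, so `m ≈ E/n` is the natural choice. A small sketch with illustrative values (not the notebook's actual `n1`/`n2`):

```python
import networkx as nx

n, m = 1000, 6
G_ba = nx.barabasi_albert_graph(n, m, seed=0)

# Each added node brings exactly m edges, so the edge count is m * (n - m).
print(G_ba.size(), m * (n - m))
```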
The degree distribution of the simulated network is similar to that of the original returns graph, but the average clustering coefficient is too low. This could be because the returns graph has more structure than the random BA network.

## II.3 Second attempt at predicting election results - GCN and graph signal processing

All functions used in this part are in the utils file. As mentioned above, we now try more sophisticated methods, namely GCNs and Laplacian / Fourier-transform techniques, on the returns and exemption graphs.

In both methods, 20% of the target labels (either +1 or -1) are randomly masked to zero, which constitutes the signal on which the filtering is performed. These masked nodes serve as the test set on which the performance is evaluated; the remaining target labels are used for training.

Graph signal processing method: the idea is to use Fourier analysis. Namely, we apply the graph Fourier transform to the masked signal, filter it with a low-pass filter or a heat kernel (which smooths the signal), and convert the signal back to the graph domain, which assigns values to the masked nodes. The final prediction for a masked node is obtained by averaging the value of the node itself with the values of its neighbors, then thresholding to get back to +1 and -1 entries, and comparing with the ground-truth labels using the F1 score.

GCN: the GCN method also requires a train set and a test set. Unlike the Fourier method, where the masked labels are modified directly in the original target, the GCN requires a different format for the labels. The train mask and the test mask have the same length as the original target and take the value 0 or 1. If the i-th value of the train mask is 0, then the i-th value of the test mask must be 1, and vice versa: the i-th node is masked and used for testing, not for training. In this way, the labels are separated into two different sets.
Afterwards, these masks are applied to the original target in order to form the train labels and the test labels, which are then ready to be fed to the GCN.

```
# creation of the graph // separation of the adjacency matrix and labels/features for later use
_, features1 = create_graph_nonRGB_returns_features(0.38)
adjacency_nonRGB_returns = create_adjency_nonRGB_returns(0.38, plot_adj_returns=False)
features1
```

### II.3.1 Graph signal processing and GCN on the returns graph

With this graph we use the Fourier method to predict the outcome of the election in the masked counties of the returns graph.

```
# prepare A_migration and the target label
A_migration = adjacency_nonRGB_returns.copy()

# prepare the target label
y_presidential_result = features1["result"].copy()

# compute lamb and U
laplacian_migration = compute_laplacian(A_migration, normalize=True)
lamb_migration, U_migration = spectral_decomposition(laplacian_migration)

# prepare the low-pass filter
ideal_lp_migration = np.ones((A_migration.shape[0],))
ideal_lp_migration[lamb_migration >= 0.1] = 0  # to tune

# heat kernel filter
ideal_ht_migration = np.exp(-0.1 * lamb_migration)  # 0.1 can be tuned

# apply the filters
x_lp_migration = ideal_graph_filter(y_presidential_result.copy(), ideal_lp_migration, U_migration)
x_ht_migration = ideal_graph_filter(y_presidential_result.copy(), ideal_ht_migration, U_migration)
```

In addition to the low-pass filter used previously, a heat kernel is also tested in the hope of improving the accuracy.

```
iters = 20
n = int(len(y_presidential_result)*0.2)
accuracy_mean_fourier, accuracy_var_fourier = pred_iteration(A_migration, iters, y_presidential_result, n, x_lp_migration)
accuracy_mean_ht, accuracy_var_ht = pred_iteration(A_migration, iters, y_presidential_result, n, x_ht_migration)
```

Using low-pass filtering and a heat kernel allows us to predict correctly $87$% of the election results, a result similar to the one found in part II.2.3, which we already judged unsatisfactory.
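The mask format described above can be sketched as follows. `pred_iteration` and the GCN wrapper live in the utils file, so this `make_masks` helper is purely illustrative (a hypothetical name, not part of the project code); it only shows the required shape of the masks, namely complementary 0/1 vectors of the same length as the target:

```python
import numpy as np

def make_masks(n_nodes, test_ratio=0.2, seed=0):
    """Illustrative helper: complementary train/test masks as 0/1 vectors."""
    rng = np.random.default_rng(seed)
    test_idx = rng.choice(n_nodes, size=int(n_nodes * test_ratio), replace=False)
    test_mask = np.zeros(n_nodes, dtype=int)
    test_mask[test_idx] = 1
    train_mask = 1 - test_mask  # every node is in exactly one of the two sets
    return train_mask, test_mask

train_mask, test_mask = make_masks(100)
y = np.random.default_rng(1).choice([-1, 1], size=100)
y_masked = y * train_mask  # test labels are zeroed out, as in the Fourier method
```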
So we move on to the GCN method.

**GCN method**

```
# determine the features to use in the GCN
X_migration = features1.drop(columns=['id', 'result']).values

# evaluate the GCN performance
accuracy_mean_GCN, accuracy_var_GCN = apply_gcn(iters, X_migration, y_presidential_result, A_migration, laplacian_migration, lamb_migration, U_migration)
```

### II.3.2 Graph signal processing and GCN on the exemption graph

We conduct the same study as in part II.3.1, but on the exemption graph (i.e. the graph whose flows are each characterized by at least $56$% of migrants not paying taxes).

```
# creation of the graph // separation of the adjacency matrix and labels/features for later use
_, features2 = create_graph_nonRGB_exempt_features(0.56)
adjacency_nonRGB_exempt = create_adjency_nonRGB_exempt(0.56, plot_adj_exempt=False)
```

**Fourier method**

```
# prepare A_migration2 and the target label
A_migration2 = adjacency_nonRGB_exempt.copy()

# prepare the target label
y_presidential_result2 = features2["result"].copy()

# compute lamb and U
laplacian_migration2 = compute_laplacian(A_migration2, normalize=True)
lamb_migration2, U_migration2 = spectral_decomposition(laplacian_migration2)

# low-pass filter
ideal_lp_migration2 = np.ones((A_migration2.shape[0],))
ideal_lp_migration2[lamb_migration2 >= 0.1] = 0  # to tune

# heat kernel filter
ideal_ht_migration2 = np.exp(-0.1 * lamb_migration2)

# apply the filters
x_lp_migration2 = ideal_graph_filter(y_presidential_result2.copy(), ideal_lp_migration2, U_migration2)
x_ht_migration2 = ideal_graph_filter(y_presidential_result2.copy(), ideal_ht_migration2, U_migration2)

iters = 20
n = int(len(y_presidential_result2)*0.2)
accuracy_mean_fourier2, accuracy_var_fourier2 = pred_iteration(A_migration2, iters, y_presidential_result2, n, x_lp_migration2)
accuracy_mean_ht2, accuracy_var_ht2 = pred_iteration(A_migration2, iters, y_presidential_result2, n, x_ht_migration2)
```

With the exemption graph we achieve an accuracy of $92$%, a result that starts to be rather conclusive.
**GCN method**

```
# determine the features to use in the GCN
X_migration2 = features2.drop(columns=['id', 'result']).values

# evaluate the GCN performance
accuracy_mean_GCN2, accuracy_var_GCN2 = apply_gcn(iters, X_migration2, y_presidential_result2, A_migration2, laplacian_migration2, lamb_migration2, U_migration2)
```

# III. Study of a similarity graph for prediction

The results of the previous section were good, but still far from great, so we now move on to constructing another type of graph: a similarity graph built with an RBF kernel. Such a graph is interesting because the IRS data lets us add another dimension: the origin of the migrants, i.e. whether they are US citizens or foreigners. This allows us to capture a polarizing aspect of migration: the immigration of foreigners. To construct the similarity graph we re-prepare the IRS dataset (we now use another part of the IRS data, the one that allows us to separate foreigners from US citizens).

## III.1. 
Clean and prepare the data

```
# load the data
df_migrations = pd.read_csv("./NTDS_Data/countyinflow1516.csv")

# keep only the summary information of each county
df_migrations = df_migrations[df_migrations['y1_countyname'].str.contains("County Total Migration")]

# create the combined fips county number
df_migrations['statefips_str'] = df_migrations['y2_statefips'].apply(lambda x: str(x).zfill(2))
df_migrations['countyfips_str'] = df_migrations['y2_countyfips'].apply(lambda x: str(x).zfill(3))
df_migrations['combined_fips'] = df_migrations['statefips_str'].apply(lambda x: x.lstrip('0')) + df_migrations['countyfips_str']

# drop useless information
df_migrations = df_migrations.drop(columns=["y2_statefips", "y2_countyfips", "y1_statefips", "y1_countyfips", "y1_state", "statefips_str", "countyfips_str"])

# separate each possible migration into three dataframes
df_migration_total = df_migrations[df_migrations['y1_countyname'].str.contains("County Total Migration-US and Foreign")]
df_migrations['y1_countyname'] = df_migrations['y1_countyname'].apply(lambda x: x if x.find("County Total Migration-US and Foreign") == -1 else "County Total Migration Both")
df_migration_us = df_migrations[df_migrations['y1_countyname'].str.contains("County Total Migration-US")]
df_migration_for = df_migrations[df_migrations['y1_countyname'].str.contains("County Total Migration-Foreign")]

# drop the county name column
df_migration_total = df_migration_total.drop(columns=["y1_countyname"])
df_migration_us = df_migration_us.drop(columns=["y1_countyname"])
df_migration_for = df_migration_for.drop(columns=["y1_countyname"])

# remove rows where the data is undefined (coded as -1)
df_migration_total = df_migration_total[df_migration_total['n1'] != -1]
df_migration_us = df_migration_us[df_migration_us['n1'] != -1]
df_migration_for = df_migration_for[df_migration_for['n1'] != -1]

# convert combined fips to int64
df_migration_total['combined_fips'] = df_migration_total['combined_fips'].astype('int64')
df_migration_us['combined_fips'] = df_migration_us['combined_fips'].astype('int64')
df_migration_for['combined_fips'] = df_migration_for['combined_fips'].astype('int64')

df_presidential_result = pd.read_csv("./NTDS_Data/2016_US_County_Level_Presidential_Results.csv")
df_presidential_result = df_presidential_result.drop(columns=["Unnamed: 0", "votes_dem", "votes_gop", "total_votes", "diff", "per_point_diff", "state_abbr", "county_name"])

# merge the two datasets, drop useless columns and add a new column winner
df_merged_total = pd.merge(df_migration_total, df_presidential_result, on="combined_fips", how='inner')
df_merged_us = pd.merge(df_migration_us, df_presidential_result, on="combined_fips", how='inner')
df_merged_for = pd.merge(df_migration_for, df_presidential_result, on="combined_fips", how='inner')

df_merged_total['difference'] = df_merged_total['per_dem'] - df_merged_total['per_gop']
df_merged_us['difference'] = df_merged_us['per_dem'] - df_merged_us['per_gop']
df_merged_for['difference'] = df_merged_for['per_dem'] - df_merged_for['per_gop']

# winner is -1 when the Democrats won the county, 1 otherwise
df_merged_total['winner'] = df_merged_total['difference'].apply(lambda x: -1 if x > 0 else 1)
df_merged_us['winner'] = df_merged_us['difference'].apply(lambda x: -1 if x > 0 else 1)
df_merged_for['winner'] = df_merged_for['difference'].apply(lambda x: -1 if x > 0 else 1)

df_merged_total = df_merged_total.drop(columns=['difference'])
df_merged_us = df_merged_us.drop(columns=['difference'])
df_merged_for = df_merged_for.drop(columns=['difference'])
```

## III.2 Creation of the similarity graphs

We create 3 similarity graphs:

- `total graph`: a graph that encapsulates the total inflow of migrants into a county (US citizens or not)
- `US graph`: a graph that only encapsulates the migration of US citizens
- `For graph`: a graph that only encapsulates the migration of foreigners

```
# compute the adjacency matrix of the total migration
X_total = df_merged_total.drop(columns=['combined_fips', 'per_dem', 'per_gop', 'winner'])
nodes_total = df_merged_total.drop(columns=['n1', 'n2', 'agi', 'per_dem', 'per_gop']).values
X_total['agi'] = (X_total['agi'] - X_total['agi'].mean()) / X_total['agi'].std()
X_total['prop_ret/exempt'] = X_total['n1'] / X_total['n2']
X_total = X_total.drop(columns=['n1', 'n2'])
adjacency_RGB_total = epsilon_similarity_graph(X_total, sigma=0.5284353963018223*0.1, epsilon=0.2)

# compute the adjacency matrix of the foreign migration
X_for = df_merged_for.drop(columns=['combined_fips', 'per_dem', 'per_gop', 'winner'])
nodes_for = df_merged_for.drop(columns=['n1', 'n2', 'agi', 'per_dem', 'per_gop']).values
X_for['agi'] = (X_for['agi'] - X_for['agi'].mean()) / X_for['agi'].std()
X_for['prop_ret/exempt'] = X_for['n1'] / X_for['n2']
X_for = X_for.drop(columns=['n1', 'n2'])
adjacency_RGB_for = epsilon_similarity_graph(X_for, sigma=0.6675252605174871*0.1, epsilon=0.5)

# compute the adjacency matrix of the US migration
X_us = df_merged_us.drop(columns=['combined_fips', 'per_dem', 'per_gop', 'winner'])
nodes_us = df_merged_us.drop(columns=['n1', 'n2', 'agi', 'per_dem', 'per_gop']).values
X_us['agi'] = (X_us['agi'] - X_us['agi'].mean()) / X_us['agi'].std()
X_us['prop_ret/exempt'] = X_us['n1'] / X_us['n2']
X_us = X_us.drop(columns=['n1', 'n2'])
adjacency_RGB_us = epsilon_similarity_graph(X_us, sigma=0.5310405705207334*0.1, epsilon=0.5)
```

## III.3 Graph signal processing on the similarity graphs

The approach is similar to the one in the previous section; only the graphs change, as three graphs are built according to the origin of the migrants. The features used are the normalized AGI and the ratio between the people who pay taxes and those who do not.
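`epsilon_similarity_graph` is defined in the utils file; as a rough sketch of what such a function typically does (an assumption about its behavior, not the project's exact implementation), it computes a Gaussian/RBF kernel on the pairwise feature distances and drops weights below `epsilon`:

```python
import numpy as np

def epsilon_similarity_graph_sketch(X, sigma, epsilon):
    """Illustrative RBF similarity graph: weights exp(-d^2 / (2 sigma^2)),
    thresholded at epsilon and with an empty diagonal (assumed behavior)."""
    X = np.asarray(X, dtype=float)
    # pairwise squared Euclidean distances, numpy only
    diff = X[:, None, :] - X[None, :, :]
    sq_distances = (diff ** 2).sum(axis=-1)
    W = np.exp(-sq_distances / (2 * sigma ** 2))
    W[W < epsilon] = 0          # sparsify the graph
    np.fill_diagonal(W, 0)      # no self-loops
    return W

X = np.random.default_rng(0).normal(size=(50, 2))
W = epsilon_similarity_graph_sketch(X, sigma=1.0, epsilon=0.2)
```

The resulting adjacency matrix is symmetric with weights in (0, 1]; lowering `sigma` or raising `epsilon` makes the graph sparser, which is presumably why the notebook scales `sigma` by 0.1.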
**Laplacian for total**

```
# prepare A (adjacency matrix)
A = adjacency_RGB_total.copy()

# prepare the target label
y = df_merged_total["winner"].copy()

# prepare the features
X_total = X_total.values

# compute the corresponding lamb and U
laplacian = compute_laplacian(A, normalize=True)
lamb, U = spectral_decomposition(laplacian)

# prepare the filters
# low-pass filter
n_nodes = A.shape[0]
ideal_lp = np.ones((n_nodes,))
ideal_lp[lamb >= 0.1] = 0  # to tune

# heat kernel filter
ideal_ht = np.exp(-0.1 * lamb)

# apply the filters
x_lp = ideal_graph_filter(y.copy(), ideal_lp, U)
x_ht = ideal_graph_filter(y.copy(), ideal_ht, U)

# determine the number of iterations
iters = 20
# determine the percentage of masks
n = int(len(y)*0.2)

# accuracy of the low-pass filter
print("Fourier method:")
accuracy_mean, accuracy_var = pred_iteration(A, iters, y, n, x_lp)

# accuracy of the heat kernel
print("With heat kernel:")
accuracy_mean_ht, accuracy_var_ht = pred_iteration(A, iters, y, n, x_ht)
```

**Laplacian for foreigner**

```
# prepare A_for (adjacency matrix)
A_for = adjacency_RGB_for.copy()

# prepare the target label
y_for = df_merged_for["winner"].copy()

# prepare the features
X_for = X_for.values

# compute the corresponding lamb and U
laplacian_for = compute_laplacian(A_for, normalize=True)
lamb_for, U_for = spectral_decomposition(laplacian_for)

# prepare the filters
ideal_lp_for = np.ones((A_for.shape[0],))
ideal_lp_for[lamb_for >= 0.1] = 0  # to tune

# heat kernel
ideal_ht_for = np.exp(-0.1 * lamb_for)

# apply the filters
x_lp_for = ideal_graph_filter(y_for.copy(), ideal_lp_for, U_for)
x_ht_for = ideal_graph_filter(y_for.copy(), ideal_ht_for, U_for)

# determine the number of iterations
iters_for = 20
# determine the percentage of masks
n_for = int(len(y_for)*0.2)

# apply the low-pass method
print("Fourier method:")
accuracy_mean_for, accuracy_var_for = pred_iteration(A_for, iters_for, y_for, n_for, x_lp_for)

# heat kernel (note: use the heat-kernel output, not the low-pass one)
print("With heat kernel:")
accuracy_mean_for_ht, accuracy_var_for_ht = pred_iteration(A_for, iters_for, y_for, n_for, x_ht_for)
```

**Laplacian for US**

```
# prepare A_us (adjacency matrix)
A_us = adjacency_RGB_us.copy()

# prepare the target label
y_us = df_merged_us["winner"].copy()

# prepare the features
X_us = X_us.values

# compute the corresponding lamb and U
laplacian_us = compute_laplacian(A_us, normalize=True)
lamb_us, U_us = spectral_decomposition(laplacian_us)

# prepare the filters
ideal_lp_us = np.ones((A_us.shape[0],))
ideal_lp_us[lamb_us >= 0.1] = 0  # to tune

# heat kernel
ideal_ht_us = np.exp(-0.1 * lamb_us)

# apply the filters (low-pass + heat kernel)
x_lp_us = ideal_graph_filter(y_us.copy(), ideal_lp_us, U_us)
x_ht_us = ideal_graph_filter(y_us.copy(), ideal_ht_us, U_us)

# determine the number of iterations
iters_us = 20
# determine the percentage of masks
n_us = int(len(y_us)*0.2)

# apply the Fourier method
print("Fourier method:")
accuracy_mean_us, accuracy_var_us = pred_iteration(A_us, iters_us, y_us, n_us, x_lp_us)

# accuracy of the heat kernel method
print("With heat kernel:")
accuracy_mean_us_ht, accuracy_var_us_ht = pred_iteration(A_us, iters_us, y_us, n_us, x_ht_us)
```

## III.4 GCN method on the similarity graphs

We now apply the GCN method to the three similarity graphs.

```
import time
import networkx as nx
from sklearn.linear_model import LogisticRegression

import torch
import torch.nn as nn
import torch.nn.functional as F

import dgl.function as fn
from dgl import DGLGraph
from dgl.data.citation_graph import load_cora

np.random.seed(0)
torch.manual_seed(1)
```

**GCN for foreigner**

```
mean_for, var_for = apply_gcn(iters_for, X_for, y_for, A_for, laplacian_for, lamb_for, U_for)
```

**GCN for total**

```
mean_total, var_total = apply_gcn(iters, X_total, y, A, laplacian, lamb, U)
```

**GCN for US citizens**

```
mean_us, var_us = apply_gcn(iters_us, X_us, y_us, A_us, laplacian_us, lamb_us, U_us)
```

At this stage we define a function that plots a signal on a graph using both a Laplacian embedding and the NetworkX force-directed layout. The spring layout is used for the force-directed layout.
This means that each node tries to get as far away from the others as it can, while being held back by the edges, which behave like springs whose spring constant is related to their corresponding weight. ``` graph_tot=nx.from_numpy_matrix(A) coords_tot = nx.spring_layout(graph_tot) # Force-directed layout. graph_us = nx.from_numpy_matrix(A_us) coords_us = nx.spring_layout(graph_us) # Force-directed layout. graph_for = nx.from_numpy_matrix(A_for) coords_for = nx.spring_layout(graph_for) # Force-directed layout. def embedding(adj,U): D_norm = np.diag(np.clip(np.sum(adj, 1), 1, None)**(-1/2)) network_emb = D_norm @ U[:,[1,3]] emb_x = network_emb[:,0] emb_y = network_emb[:,1] return emb_x,emb_y def coplot_network_signal(signal,emb_x,emb_y,graph,coords, title='Signal = ...'): ''' Plots a signal on a graph using both a Laplacian embedding and the NetworkX force-directed layout. Args: signal: The signal of each node to plot on the graph title: Plot title ''' fig, ax = plt.subplots(1, 2, figsize=(16,7)) vmax = max(-np.nanmin(signal), np.nanmax(signal)) vmin = -vmax # emb_x, emb_y=embedding(A_us, U_us) im = ax[0].scatter(emb_x, emb_y, c=signal, cmap='bwr', s=70, edgecolors='black', vmin=vmin, vmax=vmax) ax[0].set_title('Laplacian Embedding') ax[0].set_xlabel('Generalized eigenvector embedding $U_1$') ax[0].set_ylabel('Generalized eigenvector embedding $U_3$') nx.draw_networkx_nodes(graph, coords, node_size=60, node_color=signal, cmap='bwr', edgecolors='black', ax=ax[1], vmin=vmin, vmax=vmax) nx.draw_networkx_edges(graph, coords, alpha=0.2, ax=ax[1]) ax[1].set_title('NetworkX Force-directed layout') fig.suptitle(title, fontsize=16) fig.subplots_adjust(right=0.9) cbar_ax = fig.add_axes([0.925, 0.15, 0.025, 0.7]) fig.colorbar(im, cax=cbar_ax) # plot of the US immigration emb_x_us, emb_y_us=embedding(A_us, U_us) coplot_network_signal(y_us,emb_x_us, emb_y_us,graph_us,coords_us, title='Signal = truth labels') #plot of the foreign immigration emb_x_for,
emb_y_for=embedding(A_for, U_for) coplot_network_signal(y_for,emb_x_for, emb_y_for,graph_for,coords_for, title='Signal = truth labels') # plot of total immigration emb_x, emb_y=embedding(A, U) coplot_network_signal(y,emb_x, emb_y,graph_tot,coords_tot, title='Signal = truth labels') ```
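The cells above rely on helpers (`compute_laplacian`, `spectral_decomposition`, `ideal_graph_filter`) defined earlier in the notebook. As a self-contained sketch of the same spectral pipeline — normalized Laplacian, eigendecomposition, ideal low-pass filtering — the `graph_lowpass` function below is illustrative only, assuming those helpers do the standard thing:

```python
import numpy as np

def graph_lowpass(A, x, cutoff=0.1):
    """Filter a graph signal x by zeroing Fourier modes with eigenvalue >= cutoff.

    A is a symmetric adjacency matrix; the symmetric normalized Laplacian is used,
    so its eigenvalues lie in [0, 2].
    """
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d, 1.0) ** -0.5
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    lamb, U = np.linalg.eigh(L)           # graph Fourier basis, ascending eigenvalues
    h = (lamb < cutoff).astype(float)     # ideal low-pass frequency response
    return U @ (h * (U.T @ x))            # filter in the spectral domain

# Tiny demo: a 10-node path graph carrying a noisy step signal.
A = np.diag(np.ones(9), 1)
A = A + A.T
rng = np.random.default_rng(0)
x = np.concatenate([np.ones(5), -np.ones(5)]) + 0.3 * rng.standard_normal(10)
x_smooth = graph_lowpass(A, x, cutoff=0.5)
```

With a cutoff above 2 the filter keeps every mode and returns the signal unchanged, which is a convenient sanity check on the decomposition.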
---
# Preface Learning a new language (programming or spoken) can be extremely difficult. The first language you learn will probably be the hardest. Trying to learn a second language will also be challenging, but hopefully you will be able to rely on your experience from the first language you learned to help you along with the second; instead of learning all the concepts of a language from scratch, you are able to map the concepts from the old language and figure out how to write/say them in the new language. Our hope with this guide is to make learning Python easier by leveraging your experience with Java. In CSE 142/143, you might have felt like you were only learning Java, but we were actually teaching you computational thinking and how to write programs to solve problems. All of those skills transfer in full to solving problems in almost any language of your choice. All we have to teach you now is the _syntax_ of Python so you can solve problems using the ideas you learned in Java. This is easier said than done, but this guide should help by mapping concepts that you learned in Java to how they are "said" in Python. When I was first learning Python, I normally thought of the code in my head in Java and then translated to Python as I typed. By the end of this guide, you will not be a master at Python. To master a language you need to practice, and practice takes time. Our hope is that this guide will make practicing easier and act as a reference for when you are trying to write Python. You will become familiar with looking up "How do I do X in Python", but after practicing for long enough you will start to think in Python instead of thinking in Java. This guide has been modified for Python 3 for CSE416 SP19 ## Table of Contents 1. [Hello World](sections/HelloWorld.ipynb) 2. [Python Basics: Variables, Expressions, Methods](sections/PythonBasics.ipynb) 3.
[Control Structures: If Statements, While Loops, For Loops](sections/ControlStructures.ipynb) 4. [Lists, Sets, Tuples](sections/ListsSetsTuples.ipynb) 5. [Dictionaries](sections/Dictionaries.ipynb) 6. [Objects](sections/Objects.ipynb) 7. [File I/O](sections/Files.ipynb) ## Feedback This guide is relatively new and has not been rigorously proofread. If you notice any typos, grammar mistakes, or sections that were confusing to read, please submit feedback using our [issue tracker](https://github.com/hschafer/java-2-python/issues). Pull requests with proposed changes are also welcome! ## Other Resources * [Learn X in Y Minutes](https://learnxinyminutes.com/docs/python/)
---
``` %matplotlib inline import matplotlib.pyplot as plt import numpy as np import pandas as pd from sqlalchemy import create_engine from hold import connection_string engine = create_engine(f'{connection_string}', encoding='iso-8859-1', connect_args={'connect_timeout': 10}) gtdDF = pd.read_sql_table('global_terrorism', con= engine) gtdDF.head() casualties = gtdDF['nkill'] + gtdDF['nwound'] casualties.sum() gtdDF['casualties'] = gtdDF['nkill'] + gtdDF['nwound'] gtdDF.head() RegionCasualtiesDF = gtdDF[['index1','success','suicide','nkill','nwound','iyear','region_txt','casualties']] RegionCasualtiesDF.head() RegionCasualties2DF = pd.get_dummies(RegionCasualtiesDF) RegionCasualties2DF.head() RegionCasualtiesMergeDF = pd.merge(RegionCasualties2DF,RegionCasualtiesDF[['index1','region_txt']],on='index1') #RegionCasualtiesMergeDF RegionCasualtiesMergeDF.head() # Assign X (data) and y (target) X = RegionCasualtiesMergeDF.drop(["region_txt"], axis=1) y = RegionCasualtiesMergeDF["region_txt"] print(X.shape, y.shape) # Split the data into training and testing ### BEGIN SOLUTION from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, stratify=y) ### END SOLUTION from sklearn.linear_model import LogisticRegression classifier = LogisticRegression() classifier classifier.fit(X_train, y_train) print(f"Training Data Score: {classifier.score(X_train, y_train)}") print(f"Testing Data Score: {classifier.score(X_test, y_test)}") predictions = classifier.predict(X_test) print(f"First 10 Predictions: {predictions[:10]}") print(f"First 10 Actual labels: {y_test[:10].tolist()}") RegionPredictionDF = pd.DataFrame({"Prediction": predictions, "Actual": y_test}).reset_index(drop=True) Top30PredictionDF = RegionPredictionDF.head(30) Top30PredictionDF from pandas.plotting import table fig, ax = plt.subplots(figsize = (20,2)) # no visible frame ax.xaxis.set_visible(False) # hide the x axis ax.yaxis.set_visible(False)
# hide the y axis ax.set_frame_on(False) # no visible frame, uncomment if size is ok Ptable = table(ax, Top30PredictionDF, loc='center', colWidths=[0.17]*len(Top30PredictionDF.columns)) Ptable.auto_set_font_size(False) # activate manual font sizing Ptable.set_fontsize(12) # if ++fontsize is necessary, also ++colWidths Ptable.scale(1.2, 1.2) # change table size plt.savefig('../GTA/front_end/static/front_end/assets/visualizations/ML6_Casualties-Region-Logistic-Prediction.png', transparent = True) ```
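The split above passes `stratify=y` so that every region keeps the same share of rows in the training and test sets. The effect of stratification is easiest to see on a small synthetic example (a sketch on made-up data, not the terrorism dataset):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Two well-separated synthetic classes, 100 points each.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# stratify=y keeps the class ratio identical in both splits, which matters
# when classes are imbalanced (as with the region labels above).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42, stratify=y)
clf = LogisticRegression().fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)  # same metric as classifier.score(...) above
```

With the default 25% test size, the 50-row test set contains exactly 25 points from each class.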
---
# Mixture of Gaussians ``` import numpy as np import matplotlib.pyplot as plt %matplotlib inline ``` Mixture of Gaussians (usually fitted with the Expectation-Maximization, or EM, algorithm) is a clustering method. The idea of this model is simple: for a given dataset, each point is generated by linearly combining multiple multivariate Gaussians. ## What are Gaussians? A Gaussian is a function of the form: \begin{equation*} f(x)=a e^{-\frac{(x-b)^2}{2c^2}} \end{equation*} where - $a\in \mathbb{R}$ is the height of the curve's peak - $b \in \mathbb{R}$ is the position of the center of the peak, - $c \in \mathbb{R}$ is the [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation "The standard deviation σ is a measure that is used to quantify the amount of variation or dispersion of a set of data values") which controls the width of the bell The function is mathematically convenient and is often used to describe a dataset that typically has the normal [distribution](https://en.wikipedia.org/wiki/Frequency_distribution "A distribution is a listing of outcomes of an experiment and the probability associated with each outcome."). Its plot shape is called a bell curve. A univariate Gaussian can be represented by two variables ($\mu$ and $\sigma$) when it represents the probability density function: \begin{equation*} f(x)=\frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{(x-\mu)^2}{2 \sigma^2}} \end{equation*} where - $\mu$ is the mean of all data points. This specifies the center of the curve - $\sigma$ is the standard deviation. This describes the "spread" of the data Here are some plots of the univariate Gaussian distribution for various parameters of $\mu$ and $\sigma$: ``` X = np.linspace(-6, 12, 100) def gaussian(X, mu, sigma): a = 1 / (sigma * np.sqrt(2 * np.pi)) return a * np.exp(-np.power(X - mu, 2.)
/ (2 * sigma * sigma)) fig, ax = plt.subplots(figsize=(10, 5)) ax.plot(X, gaussian(X, mu=0, sigma=1), label='$\mu = 0, \sigma$ = 1') ax.plot(X, gaussian(X, mu=5, sigma=2), label='$\mu = 5, \sigma$ = 2') ax.plot(X, gaussian(X, mu=5, sigma=5), label='$\mu = 5, \sigma$ = 5') plt.legend() ``` The Gaussian distribution for a vector $x$ with $d$ dimensions (multivariate Gaussian) is defined as follows: \begin{equation*} f(x \mid \mu, \Sigma) = \frac{1}{ \sqrt{(2 \pi)^d |\Sigma|} } exp\left( -\frac{1}{2} (x-\mu)^T \Sigma^{-1} (x-\mu) \right) \end{equation*} where - $d$ -- number of dimensions in the vector $x$ - $\mu$ -- the mean - $\Sigma$ -- the covariance matrix We can also plot a 2D Gaussian distribution: ![](images/2d-gaussian-distribution.png) Source: Wikimedia ## Variance-Covariance Matrix Before we look at the Gaussian Mixture Model, let us first try to understand what the variance-covariance matrix is. Covariance is a measure of how changes in one variable are associated with changes in a second variable and tells us how two variables behave as a pair. In other words, covariance is a measure of the linear relationship between two variables. We are only interested in the sign of a covariance value: - A positive value indicates a direct or increasing linear relationship - A negative value indicates a decreasing relationship - Zero (or around zero) indicates that there is probably not a linear relationship between the two variables We are not interested in the number itself since covariance does not tell us anything about the strength of the relationship. To find the strength of the relationship, we need to find the correlation. Variance and covariance are often displayed together in a **variance-covariance** matrix, aka a covariance matrix. The diagonal of the covariance matrix provides the variance of each individual variable, whereas the off-diagonal entries provide the covariance between each pair of variables.
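The sign interpretation above can be checked directly with `np.cov` (a small illustrative example, not notebook code):

```python
import numpy as np

# x2 increases with x1 (positive covariance); x3 decreases with x1 (negative).
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = 2 * x1 + 1
x3 = -x1 + 10

# np.cov treats each row as one variable and each column as one observation.
C = np.cov(np.vstack([x1, x2, x3]))

# The diagonal holds each variable's variance; the off-diagonal entries hold
# the pairwise covariances, whose signs match the relationships above.
```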
## Gaussian Mixture Model (GMM) In GMM, we assume that each cluster $C_i$ is characterised by a multivariate normal distribution. Now, we can design a density function $f_i(x)$ which can tell us the probability that a data point $x$ (a vector with $d$ elements) belongs to the cluster $C_i$: \begin{equation*} f_i(x) = f(x \mid \mu_i, \Sigma_i) = \frac{1}{ \sqrt{(2 \pi)^d |\Sigma_i|} } exp\left( -\frac{1}{2} (x-\mu_i)^T \Sigma_i^{-1} (x-\mu_i) \right) \end{equation*} where - $d$ -- number of dimensions - $\mu_i$ -- the mean of the cluster $C_i$ - $\Sigma_i$ -- the covariance matrix for the cluster $C_i$ Before we can define the function, we need to learn the unknown parameters $\mu_i$ and $\Sigma_i$. **Our problem is as follows:** Given a dataset $X={x_1, x_2, \cdots, x_N}$ drawn from an unknown distribution (assume it is a mixture of Gaussians), estimate the parameters $\theta$ of the GMM that fits the data. To find the parameters $\theta$, we maximise the likelihood $p(X \mid \theta)$ of the data with regard to the model parameters $\theta$.
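The density $f(x \mid \mu, \Sigma)$ above transcribes almost line-for-line into numpy (a sketch; `mvn_pdf` is a hypothetical helper, not part of the notebook):

```python
import numpy as np

def mvn_pdf(x, mu, Sigma):
    """Multivariate Gaussian density f(x | mu, Sigma) for a d-vector x."""
    d = len(mu)
    diff = x - mu
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(Sigma))
    # solve(Sigma, diff) computes Sigma^{-1} diff without forming the inverse.
    return np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff)) / norm

# With d = 1, mu = 0, sigma = 1 this reduces to the univariate formula above:
p = mvn_pdf(np.array([0.0]), np.array([0.0]), np.array([[1.0]]))
```

At the mean of a standard 1-D Gaussian the density is $1/\sqrt{2\pi}$, and at the mean of a standard 2-D Gaussian it is $1/(2\pi)$, which checks the normalization constant.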
---
# Tutorial 5: ## Random Forest Regression Random Forest Regression is an ensemble learning method that combines multiple Decision Tree Regressions. The method uses a multitude of decision trees to train and predict values. Random Forests reduce over-fitting in comparison to using a single Decision Tree model. For a deeper understanding of Random Forest Regression, use the following resources: - ***Random Forests*** - ***Understanding Random Forests*** In this section, we will learn some of the key concepts in Random Forest. Inside the practice segment you will also learn to develop a Random Forest model for Regression Problems. ![Random_forest](images/Random_forest.png "Random_forest") --------------------------------------------- #### The following concepts are key to understanding the Random Forest Algorithm, - **Bagging** - **Ensemble Learning** #### Bagging Bagging is a paradigm that uses weak learners to create a strong learner. In bagging, multiple decision tree learners are trained on sub-sampled sets of data and the results from each are aggregated based on the nature of the problem statement. For classification, the results from every learner are used and a majority consensus is taken, whereas for regression the results are averaged across multiple decision trees. Random Forests are also used for feature selection; as every tree has a root node, it’s easy for an ensemble model such as Random Forest to find the most relevant features. #### Ensemble Learning Ensembling or ensemble learning is a technique to combine multiple models to generate a more robust model. It is used to develop algorithms such as Random Forest, Gradient Boosting, and XGBoost. ## In this practice session, we will learn to code Random Forest Regression. ### We will perform the following steps to build a simple regressor using a beer reviews dataset. - **Data Preprocessing** - Importing the libraries. - Dealing with the categorical variable. - Classifying dependent and independent variables.
- Splitting the data into a training set and test set. - Feature scaling. - **Random Forest Regression** - Create a Random Forest Regressor. - Feed the training data to the regression model. - Predicting the overall review score for the test set. - Using the RMSE to calculate the error metric. ``` import ipywidgets as widgets from IPython.display import display style = {'description_width': 'initial'} #1 Importing essential libraries import pandas as pd import numpy as np %matplotlib inline import matplotlib.pyplot as plt #2 Importing the dataset file_name = 'DataSets/beer_data.csv' dataset = pd.read_csv(file_name) #Displaying the dataset dataset.head(8) # Dealing with Categorical variables from sklearn.preprocessing import LabelEncoder le = LabelEncoder() #Making sure the type of the review_profilename column is str dataset["review_profilename"] = dataset["review_profilename"].astype(str) dataset["review_profilename"] = le.fit_transform(dataset["review_profilename"]) dataset.head() print(f"Dataset has {dataset.shape[0]} rows and {dataset.shape[1]} columns.") # classify dependent and independent variables X = dataset[[col for col in dataset.columns if col != 'review_overall']].values #independent variables y = dataset['review_overall'].values #dependent variable print("\nIndependent Variables :\n\n", X[:5]) print("\nDependent Variable (Score):\n\n", y[:5]) ``` ## Create Train and Test Sets ``` #4 Creating training set and testing set from sklearn.model_selection import train_test_split test_size = widgets.FloatSlider(min=0.01, max=0.6, value=0.2, description="Test Size :", tooltips=['Usually 20-30%']) display(test_size) #Divide the dataset into Train and Test sets X_train, X_test, y_train, y_test = train_test_split(X ,y, test_size=test_size.value, random_state = 0) print("Training Set :\n----------------\n") print("X = \n", X_train[:5]) print("y = \n", y_train[:5]) print("\n\nTest Set :\n----------------\n") print("X = \n",X_test[:5]) print("y = \n", y_test[:5])
print(f"Shape of Training set is {X_train.shape}") print(f"Shape of Testing set is {X_test.shape}") ``` ### Apply Random Forest Regression ``` # import random forest library from sklearn.ensemble import RandomForestRegressor # configure params for the model. max_feat_wig = widgets.ToggleButtons(options=['log2', 'sqrt', 'auto'], description='Number of features for the best split :', disabled=False, style=style) display(max_feat_wig) max_depth_wig = widgets.Dropdown(options=[10, 20, 30, 50], description='The maximum depth of the Tree. :', style=style) display(max_depth_wig) min_split_wig = widgets.Dropdown(options=[100, 200, 300, 500], description='Minimum samples to split a node :', style=style) display(min_split_wig) njobs_wig = widgets.Dropdown(options=[('One', 1), ('Two', 2), ('Three', 3), ('All Cores', -1)], description="Number of CPU Cores :", style=style) display(njobs_wig) ``` ### Predict and Evaluate the Model ``` # Train the Regressor with the training set regressor = RandomForestRegressor(max_features=max_feat_wig.value, max_depth=max_depth_wig.value, min_samples_split=min_split_wig.value, n_jobs=njobs_wig.value) # fit the model regressor.fit(X_train, y_train) #7 predict the outcome of test sets y_Pred = regressor.predict(X_test) print("\nPredictions = ", y_Pred) # Calculating a score from the Root Mean Squared Log Error def rmlse(y_test, y_pred): error = np.square(np.log10(y_pred +1) - np.log10(y_test +1)).mean() ** 0.5 score = 1 - error return score # Printing the score print("\n----------------------------\nRMLSE Score = ", rmlse(y_test, y_Pred)) #9 Comparing actual and predicted scores for the test set print("\nActual vs Predicted Scores \n------------------------------\n") error_df = pd.DataFrame({"Actual" : y_test, "Predicted" : y_Pred, "Abs.
Error" : np.abs(y_test - y_Pred)}) error_df ``` ## Feature Importance ``` feat_names = [col for col in dataset.columns if col != 'review_overall'] pd.Series(regressor.feature_importances_, \ index=feat_names).sort_values(ascending=True).plot(kind='barh', figsize=(16,9)); plt.title('Feature Importance Random Forest Regressor'); ``` ## Actual vs. Predicted ``` #Plotting Actual observation vs Predictions plt.figure(figsize=(16, 9)); plt.scatter(y_test, y_Pred, s = 70) plt.xlabel('Actual'); plt.ylabel('Predicted'); plt.grid(); plt.show(); ```
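The bagging procedure described earlier — fit each tree on a bootstrap resample, then average the predictions — can be sketched by hand. The helper `bagged_predict` below is illustrative, not the tutorial's code; `RandomForestRegressor` does this internally, plus random feature subsetting at each split:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def bagged_predict(X_train, y_train, X_test, n_trees=25, seed=0):
    """Average the predictions of n_trees trees, each fit on a bootstrap sample."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    preds = []
    for _ in range(n_trees):
        idx = rng.integers(0, n, size=n)  # sample rows with replacement
        tree = DecisionTreeRegressor(random_state=0).fit(X_train[idx], y_train[idx])
        preds.append(tree.predict(X_test))
    return np.mean(preds, axis=0)         # regression: average, not majority vote

# Noisy 1-D target; averaging smooths out the single-tree overfit.
rng = np.random.default_rng(1)
X = np.linspace(0, 1, 200)[:, None]
y = np.sin(4 * X[:, 0]) + 0.3 * rng.standard_normal(200)
y_hat = bagged_predict(X, y, X)
```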
---
``` from astropy.table import Table, Column import numpy as np import pandas as pd import matplotlib.pyplot as plt from matplotlib import colors import os import urllib.request from tqdm import tqdm import sys os.chdir("/calvin1/benardorci/SimulationData") os.getcwd() Halos = np.load("/calvin1/benardorci/SimulationData/halos.npy") DMP = np.load("/calvin1/benardorci/SimulationData/dm_cat_ds_1000.npy") CheckingEverythingIsAlright = 117 # MassBins, where: MassBin1 = [] # MB < 1e13 MassBin2 = [] # 1e13 <= MB <5e13 MassBin3 = [] # 5e13 <= MB <1e14 MassBin4 = [] # 1e14 <= MB <5e14 MassBin5 = [] # 5e14 <= MB <1e15 MassBin6 = [] # MB >= 1e15 for m, x, y, z, Vx, Vy, Vz in zip(Halos[:,0], Halos[:,1], Halos[:,2], Halos[:,3], Halos[:,4], Halos[:,5], Halos[:,6]) : if m >= 1.0*10**15 : MassBin1.append([m, x, y, z, Vx, Vy, Vz]) elif m >= 5.0*10**14 : MassBin2.append([m, x, y, z, Vx, Vy, Vz]) elif m >= 1.0*10**14 : MassBin3.append([m, x, y, z, Vx, Vy, Vz]) elif m >= 5.0*10**13 : MassBin4.append([m, x, y, z, Vx, Vy, Vz]) elif m >= 1.0*10**13 : MassBin5.append([m, x, y, z, Vx, Vy, Vz]) else : MassBin6.append([m, x, y, z, Vx, Vy, Vz]) MassBin1 = np.array(MassBin1) MassBin2 = np.array(MassBin2) MassBin3 = np.array(MassBin3) MassBin4 = np.array(MassBin4) MassBin5 = np.array(MassBin5) MassBin6 = np.array(MassBin6) print(CheckingEverythingIsAlright) Radius = 10 HalfHeight = 525 if ((Radius>525)&(HalfHeight>525)): sys.exit("The Shell will not be complete!") NewDMPBoxShell = [] for m, x, y, z, Vx, Vy, Vz in zip(DMP[:,0], DMP[:,1], DMP[:,2], DMP[:,3], DMP[:,4], DMP[:,5], DMP[:,6]) : if x < Radius : if y < Radius : if z < HalfHeight : NewDMPBoxShell.append([m, x+1050+Radius, y+1050+Radius, z+1050+HalfHeight, Vx, Vy, Vz]) #Vertice/Vertex elif z > 1050-HalfHeight : NewDMPBoxShell.append([m, x+1050+Radius, y+1050+Radius, z-1050+HalfHeight, Vx, Vy, Vz]) #Vertice/Vertex elif y > 1050-Radius : if z < HalfHeight : NewDMPBoxShell.append([m, x+1050+Radius, y-1050+Radius, z+1050+HalfHeight, Vx, Vy, 
Vz]) #Vertice/Vertex elif z > 1050-HalfHeight : NewDMPBoxShell.append([m, x+1050+Radius, y-1050+Radius, z-1050+HalfHeight, Vx, Vy, Vz]) #Vertice/Vertex elif x > 1050-Radius : if y < Radius : if z < HalfHeight : NewDMPBoxShell.append([m, x-1050+Radius, y+1050+Radius, z+1050+HalfHeight, Vx, Vy, Vz]) #Vertice/Vertex elif z > 1050-HalfHeight : NewDMPBoxShell.append([m, x-1050+Radius, y+1050+Radius, z-1050+HalfHeight, Vx, Vy, Vz]) #Vertice/Vertex elif y > 1050-Radius : if z < HalfHeight : NewDMPBoxShell.append([m, x-1050+Radius, y-1050+Radius, z+1050+HalfHeight, Vx, Vy, Vz]) #Vertice/Vertex elif z > 1050-HalfHeight : NewDMPBoxShell.append([m, x-1050+Radius, y-1050+Radius, z-1050+HalfHeight, Vx, Vy, Vz]) #Vertice/Vertex for m, x, y, z, Vx, Vy, Vz in zip(DMP[:,0], DMP[:,1], DMP[:,2], DMP[:,3], DMP[:,4], DMP[:,5], DMP[:,6]) : if x < Radius : if y < Radius : NewDMPBoxShell.append([m, x+1050+Radius, y+1050+Radius, z+HalfHeight, Vx, Vy, Vz]) #Arista/Edge elif y > 1050-Radius : NewDMPBoxShell.append([m, x+1050+Radius, y-1050+Radius, z+HalfHeight, Vx, Vy, Vz]) #Arista/Edge elif x > 1050-Radius : if y < Radius : NewDMPBoxShell.append([m, x-1050+Radius, y+1050+Radius, z+HalfHeight, Vx, Vy, Vz]) #Arista/Edge elif y > 1050-Radius : NewDMPBoxShell.append([m, x-1050+Radius, y-1050+Radius, z+HalfHeight, Vx, Vy, Vz]) #Arista/Edge for m, x, y, z, Vx, Vy, Vz in zip(DMP[:,0], DMP[:,1], DMP[:,2], DMP[:,3], DMP[:,4], DMP[:,5], DMP[:,6]) : if y < Radius : if z < HalfHeight : NewDMPBoxShell.append([m, x+Radius, y+1050+Radius, z+1050+HalfHeight, Vx, Vy, Vz]) #Arista/Edge elif z > 1050-HalfHeight : NewDMPBoxShell.append([m, x+Radius, y+1050+Radius, z-1050+HalfHeight, Vx, Vy, Vz]) #Arista/Edge elif y > 1050-Radius : if z < HalfHeight : NewDMPBoxShell.append([m, x+Radius, y-1050+Radius, z+1050+HalfHeight, Vx, Vy, Vz]) #Arista/Edge elif z > 1050-HalfHeight : NewDMPBoxShell.append([m, x+Radius, y-1050+Radius, z-1050+HalfHeight, Vx, Vy, Vz]) #Arista/Edge for m, x, y, z, Vx, Vy, Vz in zip(DMP[:,0], 
DMP[:,1], DMP[:,2], DMP[:,3], DMP[:,4], DMP[:,5], DMP[:,6]) : if x < Radius : if z < HalfHeight : NewDMPBoxShell.append([m, x+1050+Radius, y+Radius, z+1050+HalfHeight, Vx, Vy, Vz]) #Arista/Edge elif z > 1050-HalfHeight : NewDMPBoxShell.append([m, x+1050+Radius, y+Radius, z-1050+HalfHeight, Vx, Vy, Vz]) #Arista/Edge elif x > 1050-Radius : if z < HalfHeight : NewDMPBoxShell.append([m, x-1050+Radius, y+Radius, z+1050+HalfHeight, Vx, Vy, Vz]) #Arista/Edge elif z > 1050-HalfHeight : NewDMPBoxShell.append([m, x-1050+Radius, y+Radius, z-1050+HalfHeight, Vx, Vy, Vz]) #Arista/Edge for m, x, y, z, Vx, Vy, Vz in zip(DMP[:,0], DMP[:,1], DMP[:,2], DMP[:,3], DMP[:,4], DMP[:,5], DMP[:,6]) : if x < Radius : NewDMPBoxShell.append([m, x+1050+Radius, y+Radius, z+HalfHeight, Vx, Vy, Vz]) #Cara/Face elif x > 1050-Radius : NewDMPBoxShell.append([m, x-1050+Radius, y+Radius, z+HalfHeight, Vx, Vy, Vz]) #Cara/Face for m, x, y, z, Vx, Vy, Vz in zip(DMP[:,0], DMP[:,1], DMP[:,2], DMP[:,3], DMP[:,4], DMP[:,5], DMP[:,6]) : if y < Radius : NewDMPBoxShell.append([m, x+Radius, y+1050+Radius, z+HalfHeight, Vx, Vy, Vz]) #Cara/Face elif y > 1050-Radius : NewDMPBoxShell.append([m, x+Radius, y-1050+Radius, z+HalfHeight, Vx, Vy, Vz]) #Cara/Face for m, x, y, z, Vx, Vy, Vz in zip(DMP[:,0], DMP[:,1], DMP[:,2], DMP[:,3], DMP[:,4], DMP[:,5], DMP[:,6]) : if z < HalfHeight : NewDMPBoxShell.append([m, x+Radius, y+Radius, z+1050+HalfHeight, Vx, Vy, Vz]) #Cara/Face elif z > 1050-HalfHeight : NewDMPBoxShell.append([m, x+Radius, y+Radius, z-1050+HalfHeight, Vx, Vy, Vz]) #Cara/Face DMP[:,1] = DMP[:,1] + Radius DMP[:,2] = DMP[:,2] + Radius DMP[:,3] = DMP[:,3] + HalfHeight print(CheckingEverythingIsAlright) NewDMPBox = np.concatenate((DMP, NewDMPBoxShell)) print(CheckingEverythingIsAlright) ID = 0 FinalVelocities = [] for QQ in tqdm(MassBin1[:,0]) : # ------------------------------ Setting up limits of a box ------------------------------ x0 = MassBin1[ID,1].item() + Radius y0 = MassBin1[ID,2].item() + Radius z0 = 
MassBin1[ID,3].item() + HalfHeight x0SupLim = x0+Radius x0InfLim = x0-Radius y0SupLim = y0+Radius y0InfLim = y0-Radius z0SupLim = z0+HalfHeight z0InfLim = z0-HalfHeight DMPBoxIndex = np.where((NewDMPBox[:,1] >= x0InfLim)&(NewDMPBox[:,1] <= x0SupLim)&(NewDMPBox[:,2] >= y0InfLim)& (NewDMPBox[:,2] <= y0SupLim)&(NewDMPBox[:,3] >= z0InfLim)&(NewDMPBox[:,3] <= z0SupLim))[0] Box = np.zeros((np.size(np.where(DMPBoxIndex)),4)) Box[:,0] = NewDMPBox[DMPBoxIndex,1] Box[:,1] = NewDMPBox[DMPBoxIndex,2] Box[:,2] = NewDMPBox[DMPBoxIndex,3] Box[:,3] = NewDMPBox[DMPBoxIndex,6] #Box = np.array(Box) # ------------------------------ Setting up limits of a cylinder ------------------------------ Delta = np.zeros((np.size(Box[:,0]),2)) Delta[:,0]= Box[:,0]-x0 Delta[:,1]= Box[:,1]-y0 DistanceSquared = Delta[:,0]**2 + Delta[:,1]**2 NewDMPBoxIndex = np.zeros((np.size(np.where(DistanceSquared<=np.power(Radius,2))),1)) IndexNumbers = np.array(np.where(DistanceSquared<=np.power(Radius,2))[0]) NewDMPBoxIndex[:,0] = IndexNumbers Cylinder = np.zeros((np.size(IndexNumbers),4)) Cylinder[:,0] = Box[IndexNumbers,0] Cylinder[:,1] = Box[IndexNumbers,1] Cylinder[:,2] = Box[IndexNumbers,2] Cylinder[:,3] = Box[IndexNumbers,3] MBElementVelocity = MassBin1[ID,6] # ------------------------------ Saving velocities on an array ------------------------------ # # Note: If you wanted to also save the mass of the particles, add another column to the 'Box' # array (Box[:,4] = NewDMPBox[DMPBoxIndex,0]) # # # FinalVelocities = np.append(FinalVelocities,Cylinder[:,3]-MBElementVelocity) ID = ID+1 ######################################################################################################## #np.save("/calvin1/benardorci/MassBin3HistogramsDS1000Height2Radius0dot5.npy",FinalVelocities) # ---------------------- Velocity in a cylinder relative to a Halo's velocity Histogram ---------------------- plt.hist(FinalVelocities, bins=200) #density=True plt.xlim(-3000,3000) plt.show()
######################################################################################################## #plt.savefig("/calvin1/benardorci/MassBin3HistogramsDS1000Height2Radius0dot5.png") ```
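The shell of copied particles above implements periodic boundary conditions explicitly, face by face. An alternative sketch is the minimum-image convention, which wraps each coordinate difference with a modulo (assuming the same 1050-unit cubic box as the notebook):

```python
import numpy as np

BOX = 1050.0  # box side, in the same units as the particle coordinates

def periodic_delta(a, b, box=BOX):
    """Minimum-image separation a - b in a periodic box of side `box`."""
    d = a - b
    # Shift each component into (-box/2, box/2]; the nearest periodic image wins.
    return d - box * np.round(d / box)

# Two points near opposite faces are actually close through the boundary:
d = periodic_delta(np.array([1.0]), np.array([1049.0]))
```

This avoids duplicating particles at the cost of computing separations through the wrap for every pair, so the better choice depends on how the cylinder search is organized.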
---
``` import pandas as pd from tqdm import tqdm import spotipy from spotipy.oauth2 import SpotifyClientCredentials #To access authorised Spotify data client_id= "d8df752143c842238c774a4551d21546" client_secret= "de7220a3104a43a0aed809594a5d92b1" ``` # Import data ``` pitchfork_df = pd.read_csv('pitchfork_reviews.tsv', sep=('\t')) scrap_df = pitchfork_df[['title', 'artist']].drop_duplicates() artist_df = scrap_df['artist'].drop_duplicates() album_df = scrap_df['title'].drop_duplicates() ``` # Get artist URI for artist name ``` def initiate_connection(client_id, client_secret): client_credentials_manager = SpotifyClientCredentials(client_id=client_id, client_secret=client_secret) sp = spotipy.Spotify(client_credentials_manager=client_credentials_manager) #spotify object to access API return sp def get_artist_uri(sp, name): result = sp.search(name) #search query try: artist_uri = result['tracks']['items'][0]['artists'][0]['uri'] return artist_uri except Exception: print(result) return None sp = initiate_connection(client_id, client_secret) #artist_uri = get_artist_uri(sp, "Mystery Jets") #art_uri_list = [] #for name in list(artist_df): # try: # art_uri_list.append([name, get_artist_uri(sp, name)]) # except Exception: # break #art_uri_df = pd.DataFrame(art_uri_list, columns=['artist', 'artist_uri']) #art_uri_df.to_csv('art_uri.csv') art_uri_df = pd.read_csv('art_uri.csv', index_col=False) ``` # Get album names and URI for each artist ``` scrap_artist_df = scrap_df.merge(art_uri_df) #album_uri_list = [] def get_all_artist_albums(sp, artist, artist_uri): alb = [] #Pull all of the artist's albums sp_albums = sp.artist_albums(artist_uri, album_type='album') try: #Store artist's albums' names' and uris in separate lists for i in range(len(sp_albums['items'])): alb.append([artist, artist_uri, sp_albums['items'][i]['name'], sp_albums['items'][i]['uri']]) return alb except Exception: print(sp_albums) pass #for row in tqdm(art_uri_df.itertuples(index=False)): # try: # 
album_uri_list.extend(get_all_artist_albums(sp, row[0], row[1])) # except Exception: # pass #album_uri_df = pd.DataFrame(album_uri_list, columns=['artist', 'artist_uri', 'name', 'album_uri']) #album_uri_df.to_csv('album_uri.csv') album_uri_df = pd.read_csv('album_uri.csv', index_col=False) album_uri_df = album_uri_df.rename(columns={'name':'title'}) ``` # Clean album names, join ``` import jellyfish common = scrap_df.merge(album_uri_df, on=['artist', 'title']) common['not_common'] = False filter_not_common = scrap_df.merge(common[['title', 'artist', 'not_common']].drop_duplicates(), how='left')['not_common'].fillna(True) to_join_df = album_uri_df.merge(scrap_df.reset_index().loc[filter_not_common], on='artist') to_join_df['similarity'] = to_join_df.apply(lambda row: jellyfish.jaro_winkler(str(row['title_x']), str(row['title_y'])), axis=1) not_common = to_join_df.sort_values('similarity', ascending=False).groupby(['artist', 'title_y']).head(1) not_common = not_common[not_common.similarity > 0.8] not_common = not_common.rename(columns={'title_y':'title'}) common_clean = common[['title', 'artist', 'artist_uri', 'album_uri']].groupby(['artist', 'title']).head(1) common_clean = common_clean.append(not_common[['title', 'artist', 'artist_uri', 'album_uri']]) common_clean.to_csv('common_clean.csv') ``` # Get track information, aggregate and link to album ``` def get_album_features(sp, album_uri): tracks = sp.album_tracks(album_uri) acousticness = [] danceability = [] energy = [] instrumental = [] liveness = [] loudness = [] speechiness = [] tempo = [] valence = [] popularity = [] for i in range(len(tracks['items'])): track_uri = tracks['items'][i]['uri'] features = sp.audio_features(track_uri) acousticness.append(features[0]['acousticness']) danceability.append(features[0]['danceability']) energy.append(features[0]['energy']) instrumental.append(features[0]['instrumentalness']) liveness.append(features[0]['liveness']) loudness.append(features[0]['loudness']) 
speechiness.append(features[0]['speechiness']) tempo.append(features[0]['tempo']) valence.append(features[0]['valence']) pop = sp.track(track_uri) popularity.append(pop['popularity']) return {'album_uri': album_uri, 'acousticness': np.mean(acousticness), 'danceability': np.mean(danceability), 'energy': np.mean(energy), 'instrumental': np.mean(instrumental), 'liveness': np.mean(liveness), 'loudness': np.mean(loudness), 'speechiness': np.mean(speechiness), 'tempo': np.mean(tempo), 'valence': np.mean(valence), 'popularity': np.mean(popularity)} album_features = [] import time import numpy as np sleep_min = 2 sleep_max = 5 start_time = time.time() request_count = 0 for row in tqdm(common_clean.iloc[5000:5500].itertuples(index=False)): try: album_features.append(get_album_features(sp, row[3])) request_count+=1 if request_count % 10 == 0: time.sleep(np.random.uniform(sleep_min, sleep_max)) except Exception: time.sleep(np.random.uniform(sleep_min, sleep_max)) pass pd.DataFrame(album_features).to_csv('album_features_4999_to_5499.csv') len(album_features) common_clean.reset_index(drop=True).iloc[25:300] ```
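The fixed random sleeps above throttle requests preemptively; a common complementary pattern is retrying with exponential backoff when a request fails. This is a generic sketch — `fetch` stands in for any spotipy call and is not part of the spotipy API:

```python
import time

def with_backoff(fetch, max_tries=5, base_delay=1.0):
    """Call fetch(), retrying with exponentially growing delays on failure."""
    for attempt in range(max_tries):
        try:
            return fetch()
        except Exception:
            if attempt == max_tries - 1:
                raise  # out of retries: surface the last error
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

# Demo: a flaky call that succeeds on the third attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

result = with_backoff(flaky, base_delay=0.01)
```

Wrapping each `sp.audio_features(...)` call this way would make the scraping loop resilient to transient rate-limit errors instead of skipping the album.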
---
<a href="https://colab.research.google.com/github/khbae/trading/blob/master/Petersen_Jupyter_Notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Petersen - Estimating Standard Errors in Finance Panel Data Sets: Comparing Approaches (prepared by Jinkyu Kim) Objectives : provide standard-error code ------------- If you use the wrong standard errors, you raise the probability both of falsely rejecting your hypotheses and of your paper being rejected by referees. **:(** Let's use proper standard errors!!! Provided codes ---------- OLS, White Error, Newey-West, Pooled OLS (same as OLS), Clustered by Firm, Clustered by Time (R default), Clustered by Time (STATA default), Clustered by Firm and Time (STATA default), Fama and Macbeth When to use each code? ---------- **FIRM EFFECT**: USE Std. Error **Clustered by FIRM**, or if you are sure your firm effect is permanent, FE or RE (not provided here; if you need them, just search on Google) is okay **TIME EFFECT**: USE **Fama MacBeth**, or if T is sufficient, Std. Error clustered by Time is okay. **FIRM & TIME EFFECT**: if N, T are sufficient, **Double Clustering**; if not, consider using a combination of **Time Dummy + Std. Error Clustered by FIRM** ``` rm(list=ls()) #LIBRARY library(sandwich); library(plm); library(lmtest) #DATA Reading from PETERSEN website mydat<-read.table( "http://www.kellogg.northwestern.edu/faculty/petersen/htm/papers/se/test_data.txt", col.names=c("firm", "year","x", "y")) #OLS ols = lm(y~x, data=mydat) result = t(as.data.frame(summary(ols)$coefficients[2,1:3])) row.names(result) = c("ols") #OLS with White white = coeftest(ols, vcov = function(x) vcovHC(x, method="white1", type="HC1")) result = rbind(result, white[2,c("Estimate", "Std. Error", "t value")]) row.names(result)[2] = c("white") #OLS with Newey-West newey = coeftest(ols, vcov = NeweyWest(ols)) result = rbind(result, newey[2,c("Estimate", "Std.
Error", "t value")]) row.names(result)[3] = c("newey") #OLS clustered by Firm or Year p.ols = plm(y~x, model="pooling", index=c("firm", "year"), data=mydat) result=rbind(result, summary(p.ols)$coefficients[2 ,c("Estimate", "Std. Error","t-value")]) row.names(result)[4] = c("p.ols") cluster.firm = coeftest(p.ols, vcov = function(x) vcovHC(x, cluster="group", type="HC1")) result = rbind(result, cluster.firm[2,c("Estimate", "Std. Error", "t value")]) row.names(result)[5] = c("C.Firm") #Cluster by Time - R Default cluster.time = coeftest(p.ols, vcov = function(x) vcovHC(x, cluster="time", type="HC1")) #Different Result!!! result = rbind(result, cluster.time[2,c("Estimate", "Std. Error", "t value")]) row.names(result)[6] = c("C.Time.R") #Cluster by Time - STATA Default cluster.time = coeftest(p.ols, vcov = function(x) vcovHC(x, method=c("arellano"), type=c("sss"),cluster = c("time"))) #Different Result!!! result = rbind(result, cluster.time[2,c("Estimate", "Std. Error", "t value")]) row.names(result)[7] = c("C.Time.Stata") #OLS clustered by Firm and Year - STATA Default vcovDC = function(x, ...){ vcovHC(x, cluster="group", ...) + vcovHC(x, method=c("arellano"), type=c("sss"),cluster = c("time"), ...) - vcovHC(x, method="white1", ...) } cluster.double = coeftest(p.ols, vcov = function(x) vcovDC(x)) result = rbind(result, cluster.double[2,c("Estimate", "Std. Error", "t value")]) row.names(result)[8] = c("C.Double") #Fama-Macbeth fmb = pmg(y~x, mydat, index=c("year","firm")) FMB = coeftest(fmb) result = rbind(result, FMB[2,c("Estimate", "Std. Error", "t value")]) row.names(result)[9] = c("FMB") round(result, 4) ``` You can compare the results to Petersen's website. http://www.kellogg.northwestern.edu/faculty/petersen/htm/papers/se/test_data.htm The standard errors agree to at least 3 to 4 decimal places. 
If you want to see the full results, just enter the variable name, such as **ols, white, newey, p.ols, cluster.firm, cluster.time, cluster.double, FMB** ``` summary(ols) white newey cluster.firm cluster.time cluster.double FMB ``` Contact Info: Jinkyu Kim, Business School, Hanyang Univ. email: jkyu126@gmail.com
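For readers working in Python rather than R, the firm-clustered (Liang-Zeger) sandwich estimator is simple enough to implement directly with numpy. The data below are simulated (a hypothetical stand-in for Petersen's `test_data.txt`), and the small-sample correction follows the Stata convention; treat this as a sketch, not a drop-in replacement for the R code above:

```python
import numpy as np

def ols_cluster_se(X, y, groups):
    """OLS point estimates with cluster-robust (sandwich) standard errors."""
    n, k = X.shape
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    u = y - X @ beta                      # residuals
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((k, k))
    for g in np.unique(groups):           # sum X_g' u_g u_g' X_g over clusters
        s = X[groups == g].T @ u[groups == g]
        meat += np.outer(s, s)
    G = len(np.unique(groups))
    c = G / (G - 1) * (n - 1) / (n - k)   # Stata-style small-sample correction
    V = c * bread @ meat @ bread
    return beta, np.sqrt(np.diag(V))

# Simulated panel: 50 firms x 10 years, with a firm effect in the residual
rng = np.random.default_rng(0)
firm = np.repeat(np.arange(50), 10)
x = rng.normal(size=500)
y = 1 + 2 * x + rng.normal(size=50)[firm] + rng.normal(size=500)
X = np.column_stack([np.ones(500), x])
beta, se = ols_cluster_se(X, y, firm)
print(beta, se)
```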
``` # import plaidml.keras # plaidml.keras.install_backend() # import os # os.environ["KERAS_BACKEND"] = "plaidml.keras.backend" # Importing useful libraries import numpy as np import matplotlib.pyplot as plt import pandas as pd from sklearn.preprocessing import MinMaxScaler from keras.models import Sequential from keras.layers import Dense, LSTM, Dropout, GRU, Bidirectional, Conv1D, Flatten, MaxPooling1D from keras.optimizers import SGD import math from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split from keras import optimizers import time ``` ### Data Processing ``` df = pd.read_csv('../data/num_data.csv') dataset = df dataset.shape # Useful functions def plot_predictions(test, predicted): plt.figure(figsize=(30, 15)) plt.plot(test, color='red', alpha=0.5, label='Actual PM2.5 Concentration') plt.plot(predicted, color='blue', alpha=0.5, label='Predicted PM2.5 Concentration') plt.title('PM2.5 Concentration Prediction') plt.xlabel('Time') plt.ylabel('PM2.5 Concentration') plt.legend() plt.show() def return_rmse(test,predicted): rmse = math.sqrt(mean_squared_error(test, predicted)) return rmse data_size = dataset.shape[0] train_size=int(data_size * 0.6) test_size = 100 valid_size = data_size - train_size - test_size test_next_day = [12, 24, 48] training_set = dataset[:train_size].iloc[:,4:16].values valid_set = dataset[train_size:train_size+valid_size].iloc[:,4:16].values test_set = dataset[data_size-test_size:].iloc[:,4:16].values y = dataset.iloc[:,4].values y = y.reshape(-1,1) y.shape # Scaling the dataset: fit the scaler on the training set only, # then reuse it for the validation and test sets to avoid data leakage sc = MinMaxScaler(feature_range=(0,1)) training_set_scaled = sc.fit_transform(training_set) valid_set_scaled = sc.transform(valid_set) test_set_scaled = sc.transform(test_set) sc_y = MinMaxScaler(feature_range=(0,1)) y_scaled = sc_y.fit_transform(y) # split a multivariate sequence into samples def split_sequences(sequences, n_steps_in, n_steps_out): X_, y_ = list(), list() for i in range(len(sequences)): # 
find the end of this pattern end_ix = i + n_steps_in out_end_ix = end_ix + n_steps_out-1 # check if we are beyond the dataset if out_end_ix > len(sequences): break # gather input and output parts of the pattern seq_x, seq_y = sequences[i:end_ix, :], sequences[end_ix-1:out_end_ix, 0] X_.append(seq_x) y_.append(seq_y) return np.array(X_), np.array(y_) n_steps_in = 24 * 7 n_steps_out = 24 * 7 X_train, y_train = split_sequences(training_set_scaled, n_steps_in, n_steps_out) X_valid, y_valid = split_sequences(valid_set_scaled, n_steps_in, n_steps_out) X_test, y_test = split_sequences(test_set_scaled, n_steps_in, n_steps_out) GRU_3 = Sequential() LSTM_3 = Sequential() GRU_4 = Sequential() LSTM_4 = Sequential() GRU_3.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) GRU_3.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) GRU_3.add(GRU(units=50, activation='tanh')) GRU_3.add(Dense(units=n_steps_out)) GRU_4.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) GRU_4.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) GRU_4.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) GRU_4.add(GRU(units=50, activation='tanh')) GRU_4.add(Dense(units=n_steps_out)) LSTM_3.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) LSTM_3.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) LSTM_3.add(LSTM(units=50, activation='tanh')) LSTM_3.add(Dense(units=n_steps_out)) LSTM_4.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) LSTM_4.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) LSTM_4.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) 
LSTM_4.add(LSTM(units=50, activation='tanh')) LSTM_4.add(Dense(units=n_steps_out)) # Compiling the RNNs adam = optimizers.Adam(lr=0.01) GRU_3.compile(optimizer=adam,loss='mean_squared_error') GRU_4.compile(optimizer=adam,loss='mean_squared_error') LSTM_3.compile(optimizer=adam,loss='mean_squared_error') LSTM_4.compile(optimizer=adam,loss='mean_squared_error') LSTM_GRU_LSTM = Sequential() GRU_LSTM_GRU = Sequential() LSTM_LSTM_GRU_GRU = Sequential() GRU_GRU_LSTM_LSTM = Sequential() LSTM_GRU_LSTM_GRU = Sequential() GRU_LSTM_GRU_LSTM = Sequential() LSTM_GRU_LSTM.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) LSTM_GRU_LSTM.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) LSTM_GRU_LSTM.add(LSTM(units=50, activation='tanh')) LSTM_GRU_LSTM.add(Dense(units=n_steps_out)) GRU_LSTM_GRU.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) GRU_LSTM_GRU.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) GRU_LSTM_GRU.add(GRU(units=50, activation='tanh')) GRU_LSTM_GRU.add(Dense(units=n_steps_out)) LSTM_LSTM_GRU_GRU.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) LSTM_LSTM_GRU_GRU.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) LSTM_LSTM_GRU_GRU.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) LSTM_LSTM_GRU_GRU.add(GRU(units=50, activation='tanh')) LSTM_LSTM_GRU_GRU.add(Dense(units=n_steps_out)) GRU_GRU_LSTM_LSTM.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) GRU_GRU_LSTM_LSTM.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) GRU_GRU_LSTM_LSTM.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) 
GRU_GRU_LSTM_LSTM.add(LSTM(units=50, activation='tanh')) GRU_GRU_LSTM_LSTM.add(Dense(units=n_steps_out)) LSTM_GRU_LSTM_GRU.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) LSTM_GRU_LSTM_GRU.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) LSTM_GRU_LSTM_GRU.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) LSTM_GRU_LSTM_GRU.add(GRU(units=50, activation='tanh')) LSTM_GRU_LSTM_GRU.add(Dense(units=n_steps_out)) GRU_LSTM_GRU_LSTM.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) GRU_LSTM_GRU_LSTM.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) GRU_LSTM_GRU_LSTM.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],12), activation='tanh')) GRU_LSTM_GRU_LSTM.add(LSTM(units=50, activation='tanh')) GRU_LSTM_GRU_LSTM.add(Dense(units=n_steps_out)) # Compiling the RNNs adam = optimizers.Adam(lr=0.01) LSTM_GRU_LSTM.compile(optimizer=adam,loss='mean_squared_error') GRU_LSTM_GRU.compile(optimizer=adam,loss='mean_squared_error') LSTM_LSTM_GRU_GRU.compile(optimizer=adam,loss='mean_squared_error') GRU_GRU_LSTM_LSTM.compile(optimizer=adam,loss='mean_squared_error') LSTM_GRU_LSTM_GRU.compile(optimizer=adam,loss='mean_squared_error') GRU_LSTM_GRU_LSTM.compile(optimizer=adam,loss='mean_squared_error') RnnModelDict = {'LSTM_3': LSTM_3, 'GRU_3': GRU_3, 'LSTM_4': LSTM_4, 'GRU_4': GRU_4, 'LSTM_GRU_LSTM': LSTM_GRU_LSTM, 'GRU_LSTM_GRU': GRU_LSTM_GRU, 'LSTM_LSTM_GRU_GRU': LSTM_LSTM_GRU_GRU, 'GRU_GRU_LSTM_LSTM': GRU_GRU_LSTM_LSTM} X_test_24 = X_test[:24] y_test_24 = y_test[:24] rmse_df = pd.DataFrame() for model in RnnModelDict: regressor = RnnModelDict[model] print('training start for', model) start = time.process_time() regressor.fit(X_train,y_train,epochs=50,batch_size=32) train_time = round(time.process_time() - start, 2) print('results for 
training set') y_train_pred = regressor.predict(X_train) # plot_predictions(y_train,y_train_pred) train_rmse = return_rmse(y_train,y_train_pred) print('results for valid set') y_valid_pred = regressor.predict(X_valid) # plot_predictions(y_valid,y_valid_pred) valid_rmse = return_rmse(y_valid,y_valid_pred) # print('results for test set - 24 hours') # y_test_pred24 = regressor.predict(X_test_24) # plot_predictions(y_test_24,y_test_pred24) # test24_rmse = return_rmse(y_test_24,y_test_pred24) one_df = pd.DataFrame([[model, train_rmse, valid_rmse, train_time]], columns=['Model', 'train_rmse', 'valid_rmse', 'train_time']) rmse_df = pd.concat([rmse_df, one_df]) # save the rmse results rmse_df.to_csv('../deep_rnn_1week.csv') ```
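The windowing done by `split_sequences` above is easy to sanity-check on a toy array. The sketch below copies the function so it runs on its own; note that, as written, the target window starts at `end_ix - 1`, so the last input step is also the first target step:

```python
import numpy as np

# Copy of split_sequences as defined above, for a self-contained demo
def split_sequences(sequences, n_steps_in, n_steps_out):
    X_, y_ = list(), list()
    for i in range(len(sequences)):
        end_ix = i + n_steps_in
        out_end_ix = end_ix + n_steps_out - 1
        if out_end_ix > len(sequences):
            break
        # input window, and a target window overlapping its last step
        seq_x = sequences[i:end_ix, :]
        seq_y = sequences[end_ix - 1:out_end_ix, 0]
        X_.append(seq_x)
        y_.append(seq_y)
    return np.array(X_), np.array(y_)

# Toy data: 10 time steps, 3 features; 4-step inputs, 2-step targets
data = np.arange(30, dtype=float).reshape(10, 3)
X, y = split_sequences(data, n_steps_in=4, n_steps_out=2)
print(X.shape, y.shape)  # -> (6, 4, 3) (6, 2)
```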
``` import numpy as np import matplotlib.pyplot as plt ``` # Approximate derivatives It is not always possible to compute the derivative of a function. Sometimes the function in question is not given in explicit form. For example, $$f(x) = \min_{|y| < x} \Big( \frac{\cos(2x^2 - 3y)}{20x - y} \Big).$$ We will therefore have to _estimate_ the derivative of $f$ without computing it explicitly. The main idea is that $$ f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}, $$ that is, the derivative is the limit of the "fundamental quotient". We can use the computer to estimate this limit: ``` def df(f, x, h=1e-5): return (f(x+h) - f(x))/h ``` ### Exercise: Is this function "vectorized"? That is, if we pass a vector `x` as the argument, will it work? Under what conditions? ### Exercise Compute the derivative of the sine on the interval $[0,7]$ by this method, and compare with the theoretical result. ``` xs = np.arange(0, 7, 0.05) ### BEGIN SOLUTION dfx = np.cos(xs) dfx_approx = df(np.sin,xs) print(dfx_approx) _, [ax1, ax2] = plt.subplots(ncols=2, figsize=(15,4)) ax1.set_title('Computing the derivative') ax1.plot(xs, dfx_approx, label='approximation') ax1.plot(xs, dfx, label='true value') ax1.legend(loc='best') ax2.set_title('Approximation error') ax2.plot(xs, dfx_approx - dfx) plt.show() ### END SOLUTION ``` ### Exercise Often the function we want to use is "vectorized", such as `sin` or `exp`. But sometimes it is not so simple to write a vectorized form of a function. In those cases, we cannot use the functions above so directly to make plots; instead, we must build the lists (or, better, `array`s) ourselves. Let's look at an example: let $y = f(t)$ be the root of $t\cos(x) = x$. One way to compute $f$ would be, for example, to use the bisection method. 
For example: ``` def bissecaoStep(f,a,b): z = (a+b)/2 #if f(z) == 0: # return (z,z) if f(a)*f(z) < 0: return (a,z) else: return (z,b) ### - BASIC RECURSIVE BISECTION - ### ### CORRECT ### def bissection(f, a, b, tol=1e-6, i=0): if (b-a) < tol: if abs(f((b+a)/2)) < abs(f(b)) and abs(f((b+a)/2)) < abs(f(a)): retv = (b+a)/2 elif abs(f(b)) < abs(f(a)): retv = b else: retv = a return retv else: a, b = bissecaoStep(f, a, b) i += 1 return bissection(f, a, b, tol, i) def f(t): def g(x): return t*np.cos(x) - x return bissection(g,-np.pi/2,np.pi/2, tol=1e-8) ``` Now, write a function `fvect` that takes a numpy array and returns the array corresponding to $f(t)$ for each $t$ in the array. ``` ### Answer here def fvect(t): return np.array([f(ti) for ti in t]) ``` And now, look at the graph of $f$. ``` v = np.arange(-3,3,0.05) plt.plot(v, fvect(v)); plt.show() ``` With the help of `fvect`, make a plot of the derivative of $f$. ``` ### Answer here ``` ## Estimating the error An important activity when constructing a numerical method is to compute (or at least estimate) the error committed. In general, error estimates are made under more than the minimal hypotheses for the method. For example, in the case of Newton's method, the function only needs to be differentiable for us to use it, but to show quadratic convergence we must assume that it has two derivatives, and that the quotient $\frac{f''(\xi)}{2f'(x)}$ is bounded on the interval of convergence. We will therefore follow this pattern: we want to compute the first derivative, and to estimate the error we will assume the function is twice differentiable. Thus: $$ \frac{f(x+h) - f(x)}{h} - f'(x) = \frac{\big(f(x) + h f'(x) + \frac{h^2}{2} f''(\xi) \big) - f(x)}{h} - f'(x) = \frac{h f''(\xi)}{2}.$$ In the case of $f(x) = \sin(x)$, the error will lie approximately between $h (-\sin(x))/2$ and $h (-\sin(x+h))/2$. 
Let's see how close this actually is: ``` plt.title('Error in the error estimate ("error of the error")') plt.plot(xs, (dfx_approx - dfx) - (- 1e-5 * np.sin(xs) / 2)) plt.show() ``` The previous example shows that, if we wish to approximate the derivative of a "well-behaved" function by the fundamental quotient, the error will be proportional to the **step** and to the second derivative (which, in general, we do not know!). So, to reduce the error, we would have to reduce the step accordingly. But that can lead to round-off errors... ``` dfx_approx_2 = df(np.sin,xs, h=1e-10) _, [ax1, ax2] = plt.subplots(ncols=2, figsize=(15,4)) ax1.set_title('Approximation error') ax1.plot(xs, dfx_approx_2 - dfx) ax2.set_title('Error in the error estimate') ax2.plot(xs, (dfx_approx_2 - dfx) - (- 1e-10 * np.sin(xs)/2)) plt.show() ``` ### Exercise: seeing the round-off Why does it make sense, given the plots above, to attribute the approximation error to the numerical precision of the computer, and not to the second derivative? Note that the approximation error is no longer proportional to $h$. To fix this, we need a method of computation whose error is smaller!
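One such method is the central (symmetric) difference quotient: in $\frac{f(x+h)-f(x-h)}{2h}$ the first-order terms of the two Taylor expansions cancel, leaving an error of order $h^2$ instead of $h$. A minimal sketch comparing the two:

```python
import numpy as np

def df_forward(f, x, h=1e-5):
    return (f(x + h) - f(x)) / h

def df_central(f, x, h=1e-5):
    # symmetric quotient: the O(h) error terms cancel, leaving O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.0
err_forward = abs(df_forward(np.sin, x) - np.cos(x))
err_central = abs(df_central(np.sin, x) - np.cos(x))
print(err_forward)  # roughly h*sin(1)/2, on the order of 4e-6
print(err_central)  # roughly h^2*cos(1)/6, many orders of magnitude smaller
```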
# Introduction to Quantum Error Correction via the Repetition Code ## Introduction Quantum computing requires us to encode information in qubits. Most quantum algorithms developed over the past few decades have assumed that these qubits are perfect: they can be prepared in any state we desire, and be manipulated with complete precision. Qubits that obey these assumptions are often known as *logical qubits*. The last few decades have also seen great advances in finding physical systems that behave as qubits, with better quality qubits being developed all the time. However, the imperfections can never be removed entirely. These qubits will always be much too imprecise to serve directly as logical qubits. Instead, we refer to them as *physical qubits*. In the current era of quantum computing, we seek to use physical qubits despite their imperfections, by designing custom algorithms and using error mitigation techniques. For the future era of fault-tolerance, however, we must find ways to build logical qubits from physical qubits. This will be done through the process of quantum error correction, in which logical qubits are encoded in a large number of physical qubits. The encoding is maintained by constantly putting the physical qubits through a highly entangling circuit. Auxiliary degrees of freedom are also constantly measured, to detect signs of errors and allow their effects to be removed. The operations on the logical qubits required to implement quantum computation will be performed by essentially making small perturbations to this procedure. Because of the vast amount of effort required for this process, most operations performed in fault-tolerant quantum computers will be done to serve the purpose of error detection and correction. So when benchmarking our progress towards fault-tolerant quantum computation, we must keep track of how well our devices perform error correction. 
In this chapter we will look at a particular example of error correction: the repetition code. Though not a true example of quantum error correction - it uses physical qubits to encode a logical *bit*, rather than a qubit - it serves as a simple guide to all the basic concepts in any quantum error correcting code. We will also see how it can be run on current prototype devices. ## Introduction to the repetition code ### The basics of error correction The basic ideas behind error correction are the same for quantum information as for classical information. This allows us to begin by considering a very straightforward example: speaking on the phone. If someone asks you a question to which the answer is 'yes' or 'no', the way you give your response will depend on two factors: * How important is it that you are understood correctly? * How good is your connection? Both of these can be parameterized with probabilities. For the first, we can use $P_a$, the maximum acceptable probability of being misunderstood. If you are being asked to confirm a preference for ice cream flavours, and don't mind too much if you get vanilla rather than chocolate, $P_a$ might be quite high. If you are being asked a question on which someone's life depends, however, $P_a$ will be much lower. For the second we can use $p$, the probability that your answer is garbled by a bad connection. For simplicity, let's imagine a case where a garbled 'yes' doesn't simply sound like nonsense, but sounds like a 'no'. And similarly a 'no' is transformed into 'yes'. Then $p$ is the probability that you are completely misunderstood. A good connection or a relatively unimportant question will result in $p<P_a$. In this case it is fine to simply answer in the most direct way possible: you just say 'yes' or 'no'. If, however, your connection is poor and your answer is important, we will have $p>P_a$. A single 'yes' or 'no' is not enough in this case. The probability of being misunderstood would be too high. 
Instead we must encode our answer in a more complex structure, allowing the receiver to decode our meaning despite the possibility of the message being disrupted. The simplest method is the one that many would do without thinking: simply repeat the answer many times. For example, say 'yes, yes, yes' instead of 'yes' or 'no, no, no' instead of 'no'. If the receiver hears 'yes, yes, yes' in this case, they will of course conclude that the sender meant 'yes'. If they hear 'no, yes, yes', 'yes, no, yes' or 'yes, yes, no', they will probably conclude the same thing, since there is more positivity than negativity in the answer. To be misunderstood in this case, at least two of the replies need to be garbled. The probability for this, $P$, will be less than $p$. When encoded in this way, the message therefore becomes more likely to be understood. The code cell below shows an example of this. ``` p1 = 0.01 p3 = 3 * p1**2 * (1-p1) + p1**3 # probability of 2 or 3 errors print('Probability of a single reply being garbled: {}'.format(p1)) print('Probability of the majority of three replies being garbled: {:.4f}'.format(p3)) ``` If $P<P_a$, this technique solves our problem. If not, we can simply add more repetitions. The fact that $P<p$ above comes from the fact that we need at least two replies to be garbled to flip the majority, and so even the most likely possibilities have a probability of $\sim p^2$. For five repetitions we'd need at least three replies to be garbled to flip the majority, which happens with probability $\sim p^3$. The value for $P$ in this case would then be even lower. Indeed, as we increase the number of repetitions, $P$ will decrease exponentially. No matter how bad the connection, or how certain we need to be of our message getting through correctly, we can achieve it by just repeating our answer enough times. Though this is a simple example, it contains all the aspects of error correction. 
* There is some information to be sent or stored: In this case, a 'yes' or 'no'. * The information is encoded in a larger system to protect it against noise: In this case, by repeating the message. * The information is finally decoded, mitigating for the effects of noise: In this case, by trusting the majority of the transmitted messages. This same encoding scheme can also be used for binary, by simply substituting `0` and `1` for 'yes' and 'no'. It can therefore also be easily generalized to qubits by using the states $\left|0\right\rangle$ and $\left|1\right\rangle$. In each case it is known as the *repetition code*. Many other forms of encoding are also possible in both the classical and quantum cases, which outperform the repetition code in many ways. However, its status as the simplest encoding does lend it to certain applications. One is exactly what it is used for in Qiskit: as the first and simplest test of implementing the ideas behind quantum error correction. ### Correcting errors in qubits We will now implement these ideas explicitly using Qiskit. To see the effects of imperfect qubits, we can simply use the qubits of the prototype devices. We can also reproduce the effects in simulations. The function below creates a simple noise model in order to do this. These go beyond the simple case discussed earlier, of a single noise event which happens with a probability $p$. Instead we consider two forms of error that can occur. One is a gate error: an imperfection in any operation we perform. We model this here in a simple way, using so-called depolarizing noise. The effect of this will be, with probability $p_{gate}$, to replace the state of any qubit with a completely random state. For two qubit gates, it is applied independently to each qubit. The other form of noise is that for measurement. This simply flips a `0` to a `1` and vice-versa immediately before measurement with probability $p_{meas}$. 
``` from qiskit.providers.aer.noise import NoiseModel from qiskit.providers.aer.noise.errors import pauli_error, depolarizing_error def get_noise(p_meas,p_gate): error_meas = pauli_error([('X',p_meas), ('I', 1 - p_meas)]) error_gate1 = depolarizing_error(p_gate, 1) error_gate2 = error_gate1.tensor(error_gate1) noise_model = NoiseModel() noise_model.add_all_qubit_quantum_error(error_meas, "measure") # measurement error is applied to measurements noise_model.add_all_qubit_quantum_error(error_gate1, ["x"]) # single qubit gate error is applied to x gates noise_model.add_all_qubit_quantum_error(error_gate2, ["cx"]) # two qubit gate error is applied to cx gates return noise_model ``` With this we'll now create such a noise model with a probability of $1\%$ for each type of error. ``` noise_model = get_noise(0.01,0.01) ``` Let's see what effect this has when trying to store a `0` using three qubits in state $\left|0\right\rangle$. We'll repeat the process `shots=1024` times to see how likely different results are. ``` from qiskit import QuantumCircuit, execute, Aer qc0 = QuantumCircuit(3,3,name='0') # initialize circuit with three qubits in the 0 state qc0.measure(qc0.qregs[0],qc0.cregs[0]) # measure the qubits # run the circuit with the noise model and extract the counts counts = execute( qc0, Aer.get_backend('qasm_simulator'),noise_model=noise_model).result().get_counts() print(counts) ``` Here we see that almost all results still come out `'000'`, as they would if there was no noise. Of the remaining possibilities, those with a majority of `0`s are most likely. In total, far fewer than 10 samples come out with a majority of `1`s. When using this circuit to encode a `0`, this means that $P<1\%$. Now let's try the same for storing a `1` using three qubits in state $\left|1\right\rangle$. 
``` qc1 = QuantumCircuit(3, 3, name='1') # initialize circuit with three qubits in the 0 state qc1.x(qc1.qregs[0]) # flip each 0 to 1 qc1.measure(qc1.qregs[0],qc1.cregs[0]) # measure the qubits # run the circuit with the noise model and extract the counts counts = execute(qc1, Aer.get_backend('qasm_simulator'),noise_model=noise_model).result().get_counts() print(counts) ``` The number of samples that come out with a majority in the wrong state (`0` in this case) is again far fewer than 10, so $P<1\%$. Whether we store a `0` or a `1`, we can retrieve the information with a smaller probability of error than either of our sources of noise. This was possible because the noise we considered was relatively weak. As we increase $p_{meas}$ and $p_{gate}$, the probability $P$ will also increase. The extreme case of this is for either of them to have a $50/50$ chance of applying the bit flip error, `x`. For example, let's run the same circuit as before but with $p_{meas}=0.5$ and $p_{gate}=0$. ``` noise_model = get_noise(0.5,0.0) counts = execute(qc1, Aer.get_backend('qasm_simulator'),noise_model=noise_model).result().get_counts() print(counts) ``` With this noise, all outcomes occur with equal probability, with differences in results being due only to statistical noise. No trace of the encoded state remains. This is an important point to consider for error correction: sometimes the noise is too strong to be corrected. The optimal approach is to combine a good way of encoding the information you require, with hardware whose noise is not too strong. ### Storing qubits So far, we have considered cases where there is no delay between encoding and decoding. For qubits, this means that there is no significant amount of time that passes between initializing the circuit, and making the final measurements. However, there are many cases for which there will be a significant delay. 
As an obvious example, one may wish to encode a quantum state and store it for a long time, like a quantum hard drive. A less obvious but much more important example is performing fault-tolerant quantum computation itself. For this, we need to store quantum states and preserve their integrity during the computation. This must also be done in a way that allows us to manipulate the stored information in any way we need, and which corrects any errors we may introduce when performing the manipulations. In all cases, we need to account for the fact that errors do not only occur when something happens (like a gate or measurement), they also occur when the qubits are idle. Such noise is due to the fact that the qubits interact with each other and their environment. The longer we leave our qubits idle for, the greater the effects of this noise become. If we leave them for long enough, we'll encounter a situation like the $p_{meas}=0.5$ case above, where the noise is too strong for errors to be reliably corrected. The solution is to keep measuring throughout. No qubit is left idle for too long. Instead, information is constantly being extracted from the system to keep track of the errors that have occurred. For the case of classical information, where we simply wish to store a `0` or `1`, this can be done by just constantly measuring the value of each qubit. By keeping track of when the values change due to noise, we can easily deduce a history of when errors occurred. For quantum information, however, it is not so easy. For example, consider the case that we wish to encode the logical state $\left|+\right\rangle$. 
Our encoding is such that $$\left|0\right\rangle \rightarrow \left|000\right\rangle, \left|1\right\rangle \rightarrow \left|111\right\rangle.$$ To encode the logical $\left|+\right\rangle$ state we therefore need $$\left|+\right\rangle =\frac{1}{\sqrt{2}}\left(\left|0\right\rangle+\left|1\right\rangle\right)\rightarrow\frac{1}{\sqrt{2}}\left(\left|000\right\rangle+\left|111\right\rangle\right).$$ With the repetition encoding that we are using, a z measurement (which distinguishes between the $\left|0\right\rangle$ and $\left|1\right\rangle$ states) of the logical qubit is done using a z measurement of each physical qubit. The final result for the logical measurement is decoded from the physical qubit measurement results by simply looking at which output is in the majority. As mentioned earlier, we can keep track of errors on logical qubits that are stored for a long time by constantly performing z measurements of the physical qubits. However, note that this effectively corresponds to constantly performing z measurements of the logical qubit. This is fine if we are simply storing a `0` or `1`, but it has undesired effects if we are storing a superposition. Specifically: the first time we do such a check for errors, we will collapse the superposition. This is not ideal. If we wanted to do some computation on our logical qubit, or if we wish to perform a basis change before final measurement, we need to preserve the superposition. Destroying it is an error. But this is not an error caused by imperfections in our devices. It is an error that we have introduced as part of our attempts to correct errors. And since we cannot hope to recreate any arbitrary superposition stored in our quantum computer, it is an error that cannot be corrected. For this reason, we must find another way of keeping track of the errors that occur when our logical qubit is stored for long times. 
This should give us the information we need to detect and correct errors, and to decode the final measurement result with high probability. However, it should not cause uncorrectable errors to occur during the process by collapsing superpositions that we need to preserve. The way to do this is with the following circuit element. ``` from qiskit import QuantumRegister, ClassicalRegister cq = QuantumRegister(2,'code_qubit') lq = QuantumRegister(1,'ancilla_qubit') sb = ClassicalRegister(1,'syndrome_bit') qc = QuantumCircuit(cq,lq,sb) qc.cx(cq[0],lq[0]) qc.cx(cq[1],lq[0]) qc.measure(lq,sb) qc.draw() ``` Here we have three physical qubits. Two are called 'code qubits', and the other is called an 'ancilla qubit'. One bit of output is extracted, called the syndrome bit. The ancilla qubit is always initialized in state $\left|0\right\rangle$. The code qubits, however, can be initialized in different states. To see what effect different inputs have on the output, we can create a circuit `qc_init` that prepares the code qubits in some state, and then run the circuit `qc_init+qc`. First, the trivial case: `qc_init` does nothing, and so the code qubits are initially $\left|00\right\rangle$. ``` qc_init = QuantumCircuit(cq) (qc_init+qc).draw() counts = execute( qc_init+qc, Aer.get_backend('qasm_simulator')).result().get_counts() print('Results:',counts) ``` The outcome, in all cases, is `0`. Now let's try an initial state of $\left|11\right\rangle$. ``` qc_init = QuantumCircuit(cq) qc_init.x(cq) (qc_init+qc).draw() counts = execute(qc_init+qc, Aer.get_backend('qasm_simulator')).result().get_counts() print('Results:',counts) ``` The outcome in this case is also always `0`. Given the linearity of quantum mechanics, we can expect the same to be true also for any superposition of $\left|00\right\rangle$ and $\left|11\right\rangle$, such as the example below. 
```
qc_init = QuantumCircuit(cq)
qc_init.h(cq[0])
qc_init.cx(cq[0], cq[1])
(qc_init+qc).draw()

counts = execute(qc_init+qc, Aer.get_backend('qasm_simulator')).result().get_counts()
print('Results:', counts)
```

The opposite outcome will be found for an initial state of $\left|01\right\rangle$, $\left|10\right\rangle$ or any superposition thereof.

```
qc_init = QuantumCircuit(cq)
qc_init.h(cq[0])
qc_init.cx(cq[0], cq[1])
qc_init.x(cq[0])
(qc_init+qc).draw()

counts = execute(qc_init+qc, Aer.get_backend('qasm_simulator')).result().get_counts()
print('Results:', counts)
```

In such cases the output is always `1`. This measurement is therefore telling us about a collective property of multiple qubits. Specifically, it looks at the two code qubits and determines whether their state is the same or different in the z basis.

For basis states that are the same in the z basis, like $\left|00\right\rangle$ and $\left|11\right\rangle$, the measurement simply returns `0`. It also does so for any superposition of these. Since it does not distinguish between these states in any way, it also does not collapse such a superposition.

Similarly, for basis states that are different in the z basis it returns a `1`. This occurs for $\left|01\right\rangle$, $\left|10\right\rangle$ or any superposition thereof.

Now suppose we apply such a 'syndrome measurement' on all pairs of physical qubits in our repetition code. If their state is described by a repeated $\left|0\right\rangle$, a repeated $\left|1\right\rangle$, or any superposition thereof, all the syndrome measurements will return `0`. Given this result, we will know that our states are indeed encoded in the repeated states that we want them to be, and can deduce that no errors have occurred. If some syndrome measurements return `1`, however, it is a signature of an error. We can therefore use these measurement results to determine how to decode the result.
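The two classical ingredients described so far — reading out the logical value by majority vote, and checking each neighbouring pair for agreement — can be sketched as toy Python helpers (illustrative only, not part of the Qiskit API):

```python
from collections import Counter

def decode_majority(bits):
    """Decode a physical readout such as '010' to a logical bit by majority vote."""
    counts = Counter(bits)
    return '0' if counts['0'] > counts['1'] else '1'

def syndrome(bits):
    """Parity of each neighbouring pair: '0' if the pair agrees, '1' if it differs."""
    return ''.join('0' if a == b else '1' for a, b in zip(bits, bits[1:]))

print(decode_majority('010'))  # a single flipped qubit is outvoted -> '0'
print(syndrome('000'))         # no errors -> '00'
print(syndrome('010'))         # flipped middle qubit disagrees with both neighbours -> '11'
```

Note that, as discussed above, the syndrome only reveals whether neighbouring pairs agree, never the individual values, which is what allows superpositions to survive the check.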
### Quantum repetition code

We now know enough to understand exactly how the quantum version of the repetition code is implemented. We can use it in Qiskit by importing the required tools from Ignis.

```
from qiskit.ignis.verification.topological_codes import RepetitionCode
from qiskit.ignis.verification.topological_codes import lookuptable_decoding
from qiskit.ignis.verification.topological_codes import GraphDecoder
```

We are free to choose how many physical qubits we want the logical qubit to be encoded in. We can also choose how many times the syndrome measurements will be applied while we store our logical qubit, before the final readout measurement. Let us start with the smallest non-trivial case: three repetitions and one syndrome measurement round. The circuits for the repetition code can then be created automatically using the `RepetitionCode` object from Qiskit-Ignis.

```
n = 3
T = 1
code = RepetitionCode(n, T)
```

With this we can inspect various properties of the code, such as the names of the qubit registers used for the code and ancilla qubits.

The `RepetitionCode` contains two quantum circuits that implement the code: one for each of the two possible logical bit values. Here are those for logical `0` and `1`, respectively.

```
code.circuit['0'].draw()
code.circuit['1'].draw()
```

In these circuits, we have two types of physical qubits. There are the 'code qubits', which are the three physical qubits across which the logical state is encoded. There are also the 'link qubits', which serve as the ancilla qubits for the syndrome measurements.

Our single round of syndrome measurements in these circuits consists of just two syndrome measurements. One compares code qubits 0 and 1, and the other compares code qubits 1 and 2. One might expect that a further measurement, comparing code qubits 0 and 2, should be required to create a full set. However, these two are sufficient.
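Indeed, the result of the omitted comparison between code qubits 0 and 2 is just the XOR of the two syndrome bits that are measured. A quick classical check over all computational basis states (toy code, not Qiskit):

```python
for q0 in '01':
    for q1 in '01':
        for q2 in '01':
            s01 = int(q0 != q1)  # syndrome comparing qubits 0 and 1
            s12 = int(q1 != q2)  # syndrome comparing qubits 1 and 2
            s02 = int(q0 != q2)  # the comparison we did not measure
            assert s02 == s01 ^ s12
print('parity(0,2) = parity(0,1) XOR parity(1,2) for all basis states')
```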
This is because the information on whether code qubits 0 and 2 have the same z basis state can be inferred by combining the information for 0 and 1 with that for 1 and 2. Indeed, for $n$ qubits, we can get the required information from just $n-1$ syndrome measurements of neighbouring pairs of qubits.

Running these circuits on a simulator without any noise leads to very simple results.

```
def get_raw_results(code, noise_model=None):
    circuits = code.get_circuit_list()
    raw_results = {}
    for log in range(2):
        job = execute(circuits[log], Aer.get_backend('qasm_simulator'),
                      noise_model=noise_model)
        raw_results[str(log)] = job.result().get_counts(str(log))
    return raw_results

raw_results = get_raw_results(code)
for log in raw_results:
    print('Logical', log, ':', raw_results[log], '\n')
```

Here we see that the output comes in two parts. The part on the right holds the outcomes of the two syndrome measurements. That on the left holds the outcomes of the three final measurements of the code qubits.

For more measurement rounds, $T=4$ for example, we would have the results of more syndrome measurements on the right.

```
code = RepetitionCode(n, 4)

raw_results = get_raw_results(code)
for log in raw_results:
    print('Logical', log, ':', raw_results[log], '\n')
```

For more repetitions, $n=5$ for example, each set of measurements would be larger. The final measurement on the left would be of $n$ qubits. The $T$ syndrome measurements would each be of the $n-1$ possible neighbouring pairs.

```
code = RepetitionCode(5, 4)

raw_results = get_raw_results(code)
for log in raw_results:
    print('Logical', log, ':', raw_results[log], '\n')
```

### Lookup table decoding

Now let's return to the $n=3$, $T=1$ example and look at a case with some noise.
```
code = RepetitionCode(3, 1)

noise_model = get_noise(0.05, 0.05)

raw_results = get_raw_results(code, noise_model)
for log in raw_results:
    print('Logical', log, ':', raw_results[log], '\n')
```

Here we have created `raw_results`, a dictionary that holds the results both for a circuit encoding a logical `0` and for one encoding a logical `1`. Our task when confronted with any of the possible outcomes we see here is to determine what the outcome should have been, if there was no noise.

For an outcome of `'000 00'` or `'111 00'`, the answer is obvious. These are the results we just saw for a logical `0` and logical `1`, respectively, when no errors occur. The former is the most common outcome for the logical `0` even with noise, and the latter is the most common for the logical `1`. We will therefore conclude that the outcome was indeed that for logical `0` whenever we encounter `'000 00'`, and the same for logical `1` when we encounter `'111 00'`.

Though this tactic is optimal, it can nevertheless fail. Note that `'111 00'` typically occurs in a handful of cases for an encoded `0`, and `'000 00'` similarly occurs for an encoded `1`. In this case, through no fault of our own, we will incorrectly decode the output. In these cases, a large number of errors conspired to make it look like we had a noiseless case of the opposite logical value, and so correction becomes impossible.

We can employ a similar tactic to decode all other outcomes. The outcome `'001 00'`, for example, occurs far more for a logical `0` than a logical `1`. This is because it could be caused by just a single measurement error in the former case (which incorrectly reports a single `0` to be `1`), but would require at least two errors in the latter. So whenever we see `'001 00'`, we can decode it as a logical `0`.

Applying this tactic over all the strings is a form of so-called 'lookup table decoding'. This is where every possible outcome is analyzed, and the most likely value to decode it as is determined.
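A minimal sketch of the lookup-table idea (hypothetical helper and toy counts; Qiskit Ignis provides `lookuptable_decoding` for the real thing): each outcome is decoded as whichever logical value produced it more often in the table.

```python
def lookup_decode(outcome, table):
    """table maps each logical value to a dictionary of {outcome string: counts}."""
    return max(table, key=lambda log: table[log].get(outcome, 0))

# toy counts in the style of the n=3, T=1 results above
table = {'0': {'000 00': 900, '001 00': 50, '111 00': 1},
         '1': {'111 00': 880, '001 00': 2, '000 00': 1}}

print(lookup_decode('001 00', table))  # far more likely for logical '0'
print(lookup_decode('111 00', table))  # decoded as logical '1'
```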
For many qubits, this quickly becomes intractable, as the number of possible outcomes becomes so large. In these cases, more algorithmic decoders are needed. However, lookup table decoding works well for testing out small codes.

We can use tools in Qiskit to implement lookup table decoding for any code. For this we need two sets of results. One is the set of results that we actually want to decode, and for which we want to calculate the probability of incorrect decoding, $P$. We will use the `raw_results` we already have for this.

The other set of results is one to be used as the lookup table. This will need to be run for a large number of samples, to ensure that it gets good statistics for each possible outcome. We'll use `shots=10000`.

```
circuits = code.get_circuit_list()

table_results = {}
for log in range(2):
    job = execute(circuits[log], Aer.get_backend('qasm_simulator'),
                  noise_model=noise_model, shots=10000)
    table_results[str(log)] = job.result().get_counts(str(log))
```

With this data, which we call `table_results`, we can now use the `lookuptable_decoding` function from Qiskit. This takes each outcome from `raw_results` and decodes it with the information in `table_results`. Then it checks if the decoding was correct, and uses this information to calculate $P$.

```
P = lookuptable_decoding(raw_results, table_results)
print('P =', P)
```

Here we see that the values for $P$ are lower than those for $p_{meas}$ and $p_{gate}$, so we get an improvement in the reliability for storing the bit value. Note also that the value of $P$ for an encoded `1` is higher than that for `0`. This is because the encoding of `1` requires the application of `x` gates, which are an additional source of noise.

### Graph theoretic decoding

The decoding considered above produces the best possible results, and does so without needing to use any details of the code.
However, it has a major drawback that counters these advantages: the lookup table grows exponentially large as code size increases. For this reason, decoding is typically done in a more algorithmic manner that takes into account the structure of the code and its resulting syndromes.

For the codes of `topological_codes` this structure is revealed using post-processing of the syndromes. Instead of using the form shown above, with the final measurement of the code qubits on the left and the outputs of the syndrome measurement rounds on the right, we use the `process_results` method of the code object to rewrite them in a different form.

For example, below is the processed form of a `raw_results` dictionary, in this case for $n=3$ and $T=2$. Only results with 50 or more samples are shown for clarity.

```
code = RepetitionCode(3, 2)

raw_results = get_raw_results(code, noise_model)

results = code.process_results(raw_results)

for log in ['0', '1']:
    print('\nLogical ' + log + ':')
    print('raw results       ', {string: raw_results[log][string] for string in raw_results[log]
                                 if raw_results[log][string] >= 50})
    print('processed results ', {string: results[log][string] for string in results[log]
                                 if results[log][string] >= 50})
```

Here we can see that `'000 00 00'` has been transformed to `'0 0 00 00 00'`, and `'111 00 00'` to `'1 1 00 00 00'`, and so on.

In these new strings, the `0 0` to the far left for the logical `0` results and the `1 1` to the far left of the logical `1` results are the logical readout. Any code qubit could be used for this readout, since they should (without errors) all be equal. It would therefore be possible in principle to just have a single `0` or `1` at this position. We could also do as in the original form of the result and have $n$, one for each qubit. Instead we use two, from the two qubits at either end of the line. The reason for this will be shown later.
In the absence of errors, these two values will always be equal, since they represent the same encoded bit value.

After the logical values follow the $n-1$ results of the syndrome measurements for the first round. A `0` implies that the corresponding pair of qubits have the same value, and `1` implies they are different from each other. There are $n-1$ results because the line of $n$ code qubits has $n-1$ possible neighboring pairs. In the absence of errors, they will all be `0`. This is exactly the same as the first such set of syndrome results from the original form of the result.

The next block is the next round of syndrome results. However, rather than presenting these results directly, it instead gives us the syndrome change between the first and second rounds. It is therefore the bitwise `XOR` of the syndrome measurement results from the second round with those from the first. In the absence of errors, they will all be `0`.

Any subsequent blocks follow the same formula, though the last of all requires some comment. This is not measured using the standard method (with a link qubit). Instead it is calculated from the final readout measurement of all code qubits. Again it is presented as a syndrome change, and will be all `0` in the absence of errors. This is the $(T+1)$-th block of syndrome measurements since, as it is not done in the same way as the others, it is not counted among the $T$ syndrome measurement rounds.

The following examples further illustrate this convention.

**Example 1:** `0 0 0110 0000 0000` represents an $n=5$, $T=2$ repetition code with encoded `0`. The syndrome shows that (most likely) the middle code qubit was flipped by an error before the first measurement round. This causes it to disagree with both neighboring code qubits for the rest of the circuit. This is shown by the syndrome in the first round, but the blocks for subsequent rounds do not report it as it no longer represents a change.
Other sets of errors could also have caused this syndrome, but they would need to be more complex and so presumably less likely.

**Example 2:** `0 0 0010 0010 0000` represents an $n=5$, $T=2$ repetition code with encoded `0`. Here one of the syndrome measurements reported a difference between two code qubits in the first round, leading to a `1`. The next round did not see the same effect, and so resulted in a `0`. However, since this disagreed with the previous result for the same syndrome measurement, and since we track syndrome changes, this change results in another `1`. Subsequent rounds also do not detect anything, but this no longer represents a change and hence results in a `0` in the same position. Most likely the measurement result leading to the first `1` was an error.

**Example 3:** `0 1 0000 0001 0000` represents an $n=5$, $T=2$ repetition code with encoded `1`. A code qubit on the end of the line is flipped before the second round of syndrome measurements. This is detected by only a single syndrome measurement, because it is on the end of the line. For the same reason, it also disturbs one of the logical readouts.

Note that in all these examples, a single error causes exactly two characters in the string to change from the value they would have with no errors. This is the defining feature of the convention used to represent stabilizers in `topological_codes`. It is used to define the graph on which the decoding problem is defined.

Specifically, the graph is constructed by first taking the circuit encoding logical `0`, for which all bit values in the output string should be `0`. Many copies of this are then created and run on a simulator, with a different single Pauli operator inserted into each. This is done for each of the three types of Pauli operators on each of the qubits and at every circuit depth. The output from each of these circuits can be used to determine the effects of each possible single error.
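The 'exactly two changed characters' property can be checked directly on the three example strings above, by comparing each to the corresponding error-free string:

```python
examples = [
    ('0 0 0110 0000 0000', '0 0 0000 0000 0000'),  # Example 1, encoded 0
    ('0 0 0010 0010 0000', '0 0 0000 0000 0000'),  # Example 2, encoded 0
    ('0 1 0000 0001 0000', '1 1 0000 0000 0000'),  # Example 3, encoded 1
]
for string, no_error in examples:
    changed = sum(a != b for a, b in zip(string, no_error))
    print(changed)  # a single error changes exactly 2 characters
```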
Since the circuit contains only Clifford operations, the simulation can be performed efficiently. In each case, the error will change exactly two of the characters (unless it has no effect). A graph is then constructed for which each bit of the output string corresponds to a node, and the pairs of bits affected by the same error correspond to an edge.

The process of decoding a particular output string typically requires the algorithm to deduce which set of errors occurred, given the syndrome found in the output string. This can be done by constructing a second graph, containing only nodes that correspond to non-trivial syndrome bits in the output. An edge is then placed between each pair of nodes, with a corresponding weight equal to the length of the minimal path between those nodes in the original graph. A set of errors consistent with the syndrome then corresponds to finding a perfect matching of this graph.

To deduce the most likely set of errors to have occurred, a good tactic would be to find one with the least possible number of errors that is consistent with the observed syndrome. This corresponds to a minimum weight perfect matching of the graph.

Using minimal weight perfect matching is a standard decoding technique for the repetition code and surface code, and is implemented in Qiskit Ignis. It can also be used in other cases, such as color codes, but it does not find the best approximation of the most likely set of errors for every code and noise model. For that reason, other decoding techniques based on the same graph can be used. The `GraphDecoder` of Qiskit Ignis calculates these graphs for a given code, and will provide a range of methods to analyze it. At time of writing, only minimum weight perfect matching is implemented.

Note that, for codes such as the surface code, it is not strictly true that each single error will change the value of only two bits in the output string.
A $\sigma^y$ error, for example, would flip a pair of values corresponding to two different types of stabilizer, which are typically decoded independently. Output for these codes will therefore be presented in a way that acknowledges this, and analysis of such syndromes will correspondingly create multiple independent graphs to represent the different syndrome types.

## Running a repetition code benchmarking procedure

We will now run examples of repetition codes on real devices, and use the results as a benchmark. First, we will briefly summarize the process. This applies not only to this example of the repetition code, but also to other benchmarking procedures in `topological_codes`, and indeed to Qiskit Ignis in general. In each case, the following three-step process is used.

1. A task is defined. Qiskit Ignis determines the set of circuits that must be run and creates them.
2. The circuits are run. This is typically done using Qiskit. However, in principle any service or experimental equipment could be interfaced.
3. Qiskit Ignis is used to process the results from the circuits, to create the output required for the given task.

For `topological_codes`, step 1 requires the type and size of quantum error correction code to be chosen. Each type of code has a dedicated Python class. A corresponding object is initialized by providing the parameters required, such as `n` and `T` for a `RepetitionCode` object. The resulting object then contains the circuits corresponding to the given code encoding simple logical qubit states (such as $\left|0\right\rangle$ and $\left|1\right\rangle$), and then running the procedure of error detection for a specified number of rounds, before final readout in a straightforward logical basis (typically a standard $\left|0\right\rangle$ / $\left|1\right\rangle$ measurement).
For `topological_codes`, the main processing of step 3 is the decoding, which aims to mitigate any errors in the final readout by using the information obtained from error detection. The optimal algorithm for decoding typically varies between codes. However, codes with similar structure often make use of similar methods.

The aim of `topological_codes` is to provide a variety of decoding methods, implemented such that all the decoders can be used on all of the codes. This is done by restricting to codes for which decoding can be described as a graph-theoretic minimization problem. The classic examples of such codes are the toric and surface codes. The property is also shared by 2D color codes and matching codes. All of these are prominent examples of so-called topological quantum error correcting codes, which led to the name of the subpackage. However, note that not all topological codes are compatible with such a decoder. Also, some non-topological codes will be compatible, such as the repetition code.

The decoding is done by the `GraphDecoder` class. A corresponding object is initialized by providing the code object for which the decoding will be performed. This is then used to determine the graph on which the decoding problem will be defined. The results can then be processed using the various methods of the decoder object.

In the following we will see the above ideas put into practice for the repetition code. In doing this we will employ two Boolean variables, `step_2` and `step_3`. The variable `step_2` is used to show which parts of the program need to be run when taking data from a device, and `step_3` is used to show the parts which process the resulting data. Both are set to `False` by default, to ensure that all the program snippets below can be run using only previously collected and processed data. However, to obtain new data one only needs to use `step_2 = True`, and to perform decoding on any data one only needs to use `step_3 = True`.
```
step_2 = False
step_3 = False
```

To benchmark a real device we need the tools required to access that device over the cloud, and compile circuits suitable to run on it. These are imported as follows.

```
from qiskit import IBMQ
from qiskit.compiler import transpile
from qiskit.transpiler import PassManager
```

We can now create the backend object, which is used to run the circuits. This is done by supplying the string used to specify the device. Here `'ibmq_16_melbourne'` is used, which has 15 active qubits at time of writing. We will also consider the 53 qubit *Rochester* device, which is specified with `'ibmq_rochester'`.

```
device_name = 'ibmq_16_melbourne'

if step_2:
    IBMQ.load_account()
    for provider in IBMQ.providers():
        for potential_backend in provider.backends():
            if potential_backend.name() == device_name:
                backend = potential_backend
    coupling_map = backend.configuration().coupling_map
```

When running a circuit on a real device, a transpilation process is first implemented. This changes the gates of the circuit into the native gate set implemented by the device. In some cases these changes are fairly trivial, such as expressing each Hadamard as a single qubit rotation by the corresponding Euler angles. However, the changes can be more major if the circuit does not respect the connectivity of the device.

For example, suppose the circuit requires a controlled-NOT that is not directly implemented by the device. The effect must then be reproduced with techniques such as using additional controlled-NOT gates to move the qubit states around. As well as introducing additional noise, this also delocalizes any noise already present. A single qubit error in the original circuit could become a multiqubit monstrosity under the action of the additional transpilation. Such non-trivial transpilation must therefore be prevented when running quantum error correction circuits.

Tests of the repetition code require qubits to be effectively ordered along a line.
The only controlled-NOT gates required are between neighbours along that line. Our first job is therefore to study the coupling map of the device, and find a line.

![Fig. 1. The coupling map of the IBM Q Melbourne device.](images/melbourne.png)

For Melbourne it is possible to find a line that covers all 15 qubits. The one specified in the list `line` below is designed to avoid the most error prone `cx` gates. For the 53 qubit *Rochester* device, there is no single line that covers all 53 qubits. Instead we can use the following choice, which covers 43.

```
if device_name == 'ibmq_16_melbourne':
    line = [13,14,0,1,2,12,11,3,4,10,9,5,6,8,7]
elif device_name == 'ibmq_rochester':
    line = [10,11,17,23,22,21,20,19,16,7,8,9,5]#,0,1,2,3,4,6,13,14,15,18,27,26,25,29,36,37,38,41,50,49,48,47,46,45,44,43,42,39,30,31]
```

Now that we know how many qubits we have access to, we can create the repetition code objects for each code that we will run. Note that a code with `n` repetitions uses $n$ code qubits and $n-1$ link qubits, and so $2n-1$ in all.

```
n_min = 3
n_max = int((len(line)+1)/2)

code = {}
for n in range(n_min, n_max+1):
    code[n] = RepetitionCode(n, 1)
```

Before running the circuits from these codes, we need to ensure that the transpiler knows which physical qubits on the device it should use. This means using the qubit of `line[0]` to serve as the first code qubit, that of `line[1]` to be the first link qubit, and so on. This is done by the following function, which takes a repetition code object and a `line`, and creates a Python dictionary to specify which qubit of the code corresponds to which element of the line.

```
def get_initial_layout(code, line):
    initial_layout = {}
    n = len(code.code_qubit)  # number of code qubits in this code
    for j in range(n):
        initial_layout[code.code_qubit[j]] = line[2*j]
    for j in range(n-1):
        initial_layout[code.link_qubit[j]] = line[2*j+1]
    return initial_layout
```

Now we can transpile the circuits, to create the circuits that will actually be run by the device.
A check is also made to ensure that the transpilation indeed has not introduced non-trivial effects by increasing the number of qubits. Furthermore, the compiled circuits are collected into a single list, to allow them all to be submitted at once in the same batch job.

```
if step_2:
    circuits = []
    for n in range(n_min, n_max+1):
        initial_layout = get_initial_layout(code[n], line)
        for log in ['0', '1']:
            circuits.append(transpile(code[n].circuit[log],
                                      backend=backend,
                                      initial_layout=initial_layout))
            num_cx = dict(circuits[-1].count_ops())['cx']
            assert num_cx == 2*(n-1), str(num_cx) + ' instead of ' + str(2*(n-1)) + ' cx gates for n = ' + str(n)
```

We are now ready to run the job. As with the simulated jobs considered already, the results from this are extracted into a dictionary `raw_results`. However, in this case it is extended to hold the results from different code sizes. This means that `raw_results[n]` in the following is equivalent to one of the `raw_results` dictionaries used earlier, for a given `n`.

```
if step_2:
    job = execute(circuits, backend, shots=8192)

    raw_results = {}
    j = 0
    for n in range(n_min, n_max+1):
        raw_results[n] = {}
        for log in ['0', '1']:
            raw_results[n][log] = job.result().get_counts(j)
            j += 1
```

It can be convenient to save the data to file, so that the processing of step 3 can be done or repeated at a later time.

```
if step_2:  # save results
    with open('results/raw_results_'+device_name+'.txt', 'w') as file:
        file.write(str(raw_results))
elif step_3:  # read results
    with open('results/raw_results_'+device_name+'.txt', 'r') as file:
        raw_results = eval(file.read())
```

As we saw previously, the process of decoding first needs the results to be rewritten in order for the syndrome to be expressed in the correct form. As such, the `process_results` method of each repetition code object `code[n]` is used to create a results dictionary `results[n]` from each `raw_results[n]`.
```
if step_3:
    results = {}
    for n in range(n_min, n_max+1):
        results[n] = code[n].process_results(raw_results[n])
```

The decoding also needs us to set up the `GraphDecoder` object for each code. The initialization of these involves the construction of the graph corresponding to the syndrome, as described in the last section.

```
if step_3:
    dec = {}
    for n in range(n_min, n_max+1):
        dec[n] = GraphDecoder(code[n])
```

Finally, the decoder object can be used to process the results. Here the default algorithm, minimum weight perfect matching, is used. The end result is a calculation of the logical error probability. When running step 3, the following snippet also saves the logical error probabilities. Otherwise, it reads in previously saved probabilities.

```
if step_3:
    logical_prob_match = {}
    for n in range(n_min, n_max+1):
        logical_prob_match[n] = dec[n].get_logical_prob(results[n])

    with open('results/logical_prob_match_'+device_name+'.txt', 'w') as file:
        file.write(str(logical_prob_match))
else:
    with open('results/logical_prob_match_'+device_name+'.txt', 'r') as file:
        logical_prob_match = eval(file.read())
```

The resulting logical error probabilities are displayed in the following graph, which uses a log scale on the y axis. We would expect that the logical error probability decays exponentially with increasing $n$. If this is the case, it is a confirmation that the device is compatible with this basic test of quantum error correction. If not, it implies that the qubits and gates are not sufficiently reliable.

Fortunately, the results from IBM Q prototype devices typically do show the expected exponential decay. For the results below, we can see that small codes do represent an exception to this rule. Other deviations can also be expected, such as when an increase in the size of the code includes a group of qubits with either exceptionally low or high noise.
```
import matplotlib.pyplot as plt
import numpy as np

x_axis = range(n_min, n_max+1)
P = {log: [logical_prob_match[n][log] for n in x_axis] for log in ['0', '1']}

ax = plt.gca()
plt.xlabel('Code distance, n')
plt.ylabel('Logical error probability')  # plotted on a log scale below
ax.scatter(x_axis, P['0'], label="logical 0")
ax.scatter(x_axis, P['1'], label="logical 1")
ax.set_yscale('log')
ax.set_ylim(ymax=1.5*max(P['0']+P['1']), ymin=0.75*min(P['0']+P['1']))
plt.legend()
plt.show()
```

Another insight we can gain is to use the results to determine how likely certain error processes are to occur. To do this we use the fact that each edge in the syndrome graph represents a particular form of error, occurring on a particular qubit at a particular point within the circuit. This is the unique single error that causes the syndrome values corresponding to both of the adjacent nodes to change. Using the results to estimate the probability of such a syndrome therefore allows us to estimate the probability of such an error event. Specifically, to first order it is clear that

$$ \frac{p}{1-p} \approx \frac{C_{11}}{C_{00}}. $$

Here $p$ is the probability of the error corresponding to a particular edge, $C_{11}$ is the number of counts in `results[n]['0']` corresponding to the syndrome value of both adjacent nodes being `1`, and $C_{00}$ is the same for them both being `0`.

The decoder object has a method `weight_syndrome_graph` which determines these ratios, and assigns each edge the weight $-\ln(p/(1-p))$. By employing this method and inspecting the weights, we can easily retrieve these probabilities.
```
if step_3:
    dec[n_max].weight_syndrome_graph(results=results[n_max])

    probs = []
    for edge in dec[n_max].S.edges:
        ratio = np.exp(-dec[n_max].S.get_edge_data(edge[0], edge[1])['distance'])
        probs.append(ratio/(1+ratio))

    with open('results/probs_'+device_name+'.txt', 'w') as file:
        file.write(str(probs))
else:
    with open('results/probs_'+device_name+'.txt', 'r') as file:
        probs = eval(file.read())
```

Rather than display the full list, we can obtain a summary via the mean, standard deviation, minimum, maximum and quartiles.

```
import pandas as pd

pd.Series(probs).describe().to_dict()
```

The benchmarking of the devices does not produce any set of error probabilities that is exactly equivalent. However, the probabilities for readout errors and controlled-NOT gate errors could serve as a good comparison. Specifically, we can use the `backend` object to obtain these values from the benchmarking.

```
if step_3:
    gate_probs = []
    for j, qubit in enumerate(line):
        gate_probs.append(backend.properties().readout_error(qubit))
        if j > 0:
            gate_probs.append(backend.properties().gate_error('cx', [qubit, line[j-1]]))
        if j < len(line)-1:
            gate_probs.append(backend.properties().gate_error('cx', [qubit, line[j+1]]))

    with open('results/gate_probs_'+device_name+'.txt', 'w') as file:
        file.write(str(gate_probs))
else:
    with open('results/gate_probs_'+device_name+'.txt', 'r') as file:
        gate_probs = eval(file.read())

pd.Series(gate_probs).describe().to_dict()
```

```
import qiskit
qiskit.__qiskit_version__
```
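As a final numerical check of the weight convention used above, the relation $w = -\ln(p/(1-p))$ can be inverted exactly as in the snippet that extracted `probs`, giving $p = e^{-w}/(1+e^{-w})$:

```python
import numpy as np

def prob_to_weight(p):
    """Edge weight assigned to an error of probability p."""
    return -np.log(p / (1 - p))

def weight_to_prob(w):
    """Invert the weight back to an error probability."""
    ratio = np.exp(-w)
    return ratio / (1 + ratio)

p = 0.05
print(round(weight_to_prob(prob_to_weight(p)), 10))  # round trip recovers 0.05
```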
<a href="https://colab.research.google.com/github/rockerritesh/Artifical-Neural-Network/blob/master/NPL_and_SENTIMENT_CLASSIFICATION_using_simple_NEURAL_NETWORK.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

```
import nltk
from nltk.stem import PorterStemmer
from nltk.corpus import stopwords

nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')
```

### ***Below code is for Stemmer***

```
paragraph = """I have three visions for India. In 3000 years of our history, people from all over the world have come and invaded us, captured our lands, conquered our minds. From Alexander onwards, the Greeks, the Turks, the Moguls, the Portuguese, the British, the French, the Dutch, all of them came and looted us, took over what was ours. Yet we have not done this to any other nation. We have not conquered anyone. We have not grabbed their land, their culture, their history and tried to enforce our way of life on them. Why? Because we respect the freedom of others.That is why my first vision is that of freedom. I believe that India got its first vision of this in 1857, when we started the War of Independence. It is this freedom that we must protect and nurture and build on. If we are not free, no one will respect us. My second vision for India’s development. For fifty years we have been a developing nation. It is time we see ourselves as a developed nation. We are among the top 5 nations of the world in terms of GDP. We have a 10 percent growth rate in most areas. Our poverty levels are falling. Our achievements are being globally recognised today. Yet we lack the self-confidence to see ourselves as a developed nation, self-reliant and self-assured. Isn’t this incorrect? I have a third vision. India must stand up to the world. Because I believe that unless India stands up to the world, no one will respect us. Only strength respects strength. We must be strong not only as a military power but also as an economic power. Both must go hand-in-hand. My good fortune was to have worked with three great minds. Dr. Vikram Sarabhai of the Dept. of space, Professor Satish Dhawan, who succeeded him and Dr. Brahm Prakash, father of nuclear material. I was lucky to have worked with all three of them closely and consider this the great opportunity of my life. I see four milestones in my career"""

sentences = nltk.sent_tokenize(paragraph)
stemmer = PorterStemmer()

# Stemming
for i in range(len(sentences)):
    words = nltk.word_tokenize(sentences[i])
    words = [stemmer.stem(word) for word in words if word not in set(stopwords.words('english'))]
    sentences[i] = ' '.join(words)

sentences
```

## ***Below code is for Lemmatizer***

```
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords

sentences = nltk.sent_tokenize(paragraph)
lemmatizer = WordNetLemmatizer()

# Lemmatization
for i in range(len(sentences)):
    words = nltk.word_tokenize(sentences[i])
    words = [lemmatizer.lemmatize(word) for word in words if word not in set(stopwords.words('english'))]
    sentences[i] = ' '.join(words)

sentences
```

Below code is for TFIDF

```
# Cleaning the texts
import re
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem import WordNetLemmatizer

ps = PorterStemmer()
wordnet = WordNetLemmatizer()  # here I have used the Lemmatizer
sentences = nltk.sent_tokenize(paragraph)
corpus = []
for i in range(len(sentences)):
    review = re.sub('[^a-zA-Z]', ' ', sentences[i])
    review = review.lower()
    review = review.split()
    review = [wordnet.lemmatize(word) for word in review if not word in set(stopwords.words('english'))]
    review = ' '.join(review)
    corpus.append(review)
corpus

# Creating the TF-IDF model
from sklearn.feature_extraction.text import TfidfVectorizer
cv = TfidfVectorizer()
X = cv.fit_transform(corpus).toarray()
X
X.shape

import numpy as np
p = np.random.rand(31,).reshape(31,1)
for i in range(len(p)):
    if p[i] > 0.5:
        p[i] = 1
    else:
        p[i] = 0
u = p.astype(int)  # cast the 0/1 float array to integer labels
u

from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
x = pca.transform(X)
x

from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, u, test_size=0.1)
x_train.shape
y_train.shape
y_test[3]

from sklearn.linear_model import LogisticRegression
reg = LogisticRegression()
reg.fit(x_train, y_train)
reg.predict(x_test)
y_test

# From here we are going to start the mini project of sentiment classification.
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix, classification_report, mean_squared_error, mean_absolute_error, r2_score
from keras.models import Sequential
from keras.layers import Dense, Dropout, BatchNormalization, Activation
from keras.optimizers import Adam
from keras.callbacks import EarlyStopping
from keras.utils.np_utils import to_categorical
from sklearn.preprocessing import StandardScaler, LabelEncoder, OneHotEncoder, MinMaxScaler
from sklearn.model_selection import train_test_split, cross_val_score, StratifiedKFold, KFold
import keras.backend as K
from keras.wrappers.scikit_learn import KerasClassifier
import pandas as pd

df = pd.read_csv('train.csv')
df

def missing_value_of_data(data):
    total = data.isnull().sum().sort_values(ascending=False)
    percentage = round(total/data.shape[0]*100, 2)
    return pd.concat([total, percentage], axis=1, keys=['Total', 'Percentage'])

f = missing_value_of_data(df)
f

df = df.dropna()
x = df.iloc[:,2].values
x

# Creating the TF-IDF model
from sklearn.feature_extraction.text import TfidfVectorizer
cv = TfidfVectorizer()
X = cv.fit_transform(x).toarray()
X.shape
X

Y = df.iloc[:,3]
Y
cat = df['sentiment']
w = pd.get_dummies(cat)
w
t = w.iloc[:,[0,1,2]].values
t

from sklearn.decomposition import PCA
pca = PCA(n_components=8)
pca.fit(X)
x = pca.transform(X)
np.array(x[1])
t.shape

from numpy import asarray
from numpy import save
data = x
save('data.npy', data)
save('result.npy', t)

model = Sequential()
model.add(Dense(256, input_shape=(8,), activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(3, activation='softmax'))
model.compile('adam', 'categorical_crossentropy', metrics=['accuracy'])
model.summary()

history = model.fit(x, t, verbose=1, epochs=50)

tf = pd.read_csv('test.csv')
missing = missing_value_of_data(tf)
print(missing)
tf = tf.dropna()
tf
x_test = tf.iloc[:,1].values
x_test

# Creating the TF-IDF model
from sklearn.feature_extraction.text import TfidfVectorizer
cv = TfidfVectorizer()
X_test = cv.fit_transform(x_test).toarray()
X_test

from sklearn.decomposition import PCA
pca = PCA(n_components=8)
pca.fit(X_test)
x_test = pca.transform(X_test)
x_test.shape

y_pred = model.predict(x_test)
np.argmax(y_pred[2])

Y_test = tf.iloc[:,2]
cate = tf['sentiment']
y_test = pd.get_dummies(cate)
y_test = y_test.values
print(np.argmax(y_pred[4]))
print(np.argmax(y_test[4]))

g = model.predict_classes(x_test)
g.shape
h = np.argmax(y_test, axis=1)
h.shape

confusion_matrix(g, h)
```
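To make the TF-IDF step above less of a black box, here is a dependency-free sketch of what `TfidfVectorizer` computes in spirit: term frequency times inverse document frequency. This is an illustrative toy (the real sklearn implementation differs in smoothing and normalization details), with a made-up three-document corpus:

```python
import math
from collections import Counter

def tfidf(corpus):
    """Toy TF-IDF: term frequency * log(N / document frequency)."""
    docs = [doc.split() for doc in corpus]
    n = len(docs)
    df = Counter()
    for doc in docs:
        for term in set(doc):
            df[term] += 1          # in how many documents does each term appear?
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return vectors

vecs = tfidf(["india freedom", "india development", "india respect"])
# "india" appears in every document, so its idf is log(3/3) = 0
print(vecs[0])
```

A term shared by every document carries no discriminative weight, while rarer terms are weighted up; that is exactly why TF-IDF features work better than raw counts for classification.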
github_jupyter
# Navigation

---

In this notebook, you will learn how to use the Unity ML-Agents environment for the first project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893).

### 1. Start the Environment

We begin by importing some necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).

```
from unityagents import UnityEnvironment
import numpy as np
```

Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.

- **Mac**: `"path/to/Banana.app"`
- **Windows** (x86): `"path/to/Banana_Windows_x86/Banana.exe"`
- **Windows** (x86_64): `"path/to/Banana_Windows_x86_64/Banana.exe"`
- **Linux** (x86): `"path/to/Banana_Linux/Banana.x86"`
- **Linux** (x86_64): `"path/to/Banana_Linux/Banana.x86_64"`
- **Linux** (x86, headless): `"path/to/Banana_Linux_NoVis/Banana.x86"`
- **Linux** (x86_64, headless): `"path/to/Banana_Linux_NoVis/Banana.x86_64"`

For instance, if you are using a Mac, then you downloaded `Banana.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:

```
env = UnityEnvironment(file_name="Banana.app")
```

```
env = UnityEnvironment(file_name="Banana_Linux/Banana.x86_64")
```

Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.

```
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
```

### 2. Examine the State and Action Spaces

The simulation contains a single agent that navigates a large environment.
At each time step, it has four actions at its disposal:

- `0` - walk forward
- `1` - walk backward
- `2` - turn left
- `3` - turn right

The state space has `37` dimensions and contains the agent's velocity, along with ray-based perception of objects around the agent's forward direction. A reward of `+1` is provided for collecting a yellow banana, and a reward of `-1` is provided for collecting a blue banana.

Run the code cell below to print some information about the environment.

```
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]

# number of agents in the environment
print('Number of agents:', len(env_info.agents))

# number of actions
action_size = brain.vector_action_space_size
print('Number of actions:', action_size)

# examine the state space
state = env_info.vector_observations[0]
print('States look like:', state)
state_size = len(state)
print('States have length:', state_size)
```

### 3. Take Random Actions in the Environment

In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.

Once this cell is executed, you will watch the agent's performance, if it selects an action (uniformly) at random with each time step. A window should pop up that allows you to observe the agent, as it moves through the environment.

Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
```
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
state = env_info.vector_observations[0]            # get the current state
score = 0                                          # initialize the score
while True:
    action = np.random.randint(action_size)        # select an action
    env_info = env.step(action)[brain_name]        # send the action to the environment
    next_state = env_info.vector_observations[0]   # get the next state
    reward = env_info.rewards[0]                   # get the reward
    done = env_info.local_done[0]                  # see if episode has finished
    score += reward                                # update the score
    state = next_state                             # roll over the state to next time step
    if done:                                       # exit loop if episode finished
        break

print("Score: {}".format(score))
```

When finished, you can close the environment.

```
env.close()
```

### 4. It's Your Turn!

Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:

```python
env_info = env.reset(train_mode=True)[brain_name]
```
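When you replace the uniformly random policy above with a learned one, a common first step is epsilon-greedy action selection over estimated action values. Here is a minimal sketch, independent of the Unity environment; `q_values` is a hypothetical list of per-action value estimates (for the Banana environment it would have length 4):

```python
import random

def epsilon_greedy(q_values, epsilon):
    """Pick a random action with probability epsilon, otherwise the greedy one."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                    # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])   # exploit

# with epsilon=0.1, ~90% of steps pick the highest-valued action (index 1 here)
action = epsilon_greedy([0.1, 0.5, -0.2, 0.0], epsilon=0.1)
print(action)
```

Annealing `epsilon` from 1.0 toward a small floor over the course of training is the usual way to shift from exploration to exploitation.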
github_jupyter
## Introduction to Data Science

### Data Science Tasks: Recommender Systems

Based on [this](https://www.datacamp.com/community/tutorials/recommender-systems-python), [this](https://www.analyticsvidhya.com/blog/2016/06/quick-guide-build-recommendation-engine-python/) and [this](http://www.data-mania.com/blog/recommendation-system-python/) blog posts. Full version of data can be found [here](https://grouplens.org/datasets/movielens/)

```
import os
import sys
import re
import math
import time
import string
import datetime
from zipfile import ZipFile
from io import StringIO

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

from sklearn.neighbors import NearestNeighbors
```

Specifying the path to the files:

```
outputs = "../outputs/"
```

#### Recommendation engines

Recommendation engines are nothing but an automated form of a “shop counter guy”. You ask him for a product. Not only he shows that product, but also the related ones which you could buy. They are well trained in cross selling and up selling. So, does our recommendation engines. The ability of these engines to recommend personalized content, based on past behavior is incredible. It brings customer delight and gives them a reason to keep returning to the website.

#### Types of Recommendation Engines

a) Recommend the most popular items

A simple approach could be to recommend the items which are liked by most number of users. This is a blazing fast and dirty approach and thus has a major drawback. The thing is, there is no personalization involved with this approach. Basically the most popular items would be same for each user since popularity is defined on the entire user pool. So everybody will see the same results. It sounds like, ‘a website recommends you to buy microwave just because it’s been liked by other users and doesn’t care if you are even interested in buying or not’. Surprisingly, such approach still works in places like news portals.
Whenever you login to say bbcnews, you’ll see a column of “Popular News” which is subdivided into sections and the most read articles of each sections are displayed. This approach can work in this case because:

- There is division by section so the user can look at the section of his interest.
- At a time there are only a few hot topics and there is a high chance that a user wants to read the news which is being read by most others.

b) Using a classifier to make recommendation

We already know lots of classification algorithms. Let’s see how we can use the same technique to make recommendations. Classifiers are parametric solutions so we just need to define some parameters (features) of the user and the item. The outcome can be 1 if the user likes it or 0 otherwise. This might work out in some cases because of following advantages:

- Incorporates personalization
- It can work even if the user’s past history is short or not available

But has some major drawbacks as well because of which it is not used much in practice:

- The features might actually not be available or even if they are, they may not be sufficient to make a good classifier
- As the number of users and items grow, making a good classifier will become exponentially difficult

c) Recommendation Algorithms

Now lets come to the special class of algorithms which are tailor-made for solving the recommendation problem. There are typically two types of algorithms – Content Based and Collaborative Filtering. You should refer to our previous article to get a complete sense of how they work. I’ll give a short recap here.

+ Content based algorithms: If you like an item then you will also like a “similar” item based on similarity of the items being recommended. It generally works well when its easy to determine the context/properties of each item. For instance when we are recommending the same kind of item like a movie recommendation or song recommendation.
+ Collaborative filtering algorithms: If a person A likes item 1, 2, 3 and B likes 2, 3, 4 then they have similar interests and A should like item 4 and B should like item 1. This algorithm is entirely based on the past behavior and not on the context. This makes it one of the most commonly used algorithms as it is not dependent on any additional information. For instance: product recommendations by e-commerce players like Amazon and merchant recommendations by banks like American Express.
+ Hybrid recommendation systems – Hybrid recommendation systems combine both collaborative and content-based approaches. They help improve recommendations that are derived from sparse datasets. (Netflix is a prime example of a hybrid recommender)

Further, there are several types of collaborative filtering algorithms:

+ User-User Collaborative filtering: Here we find look-alike customers (based on similarity) and offer products which the first customer’s look-alike has chosen in the past. This algorithm is very effective but takes a lot of time and resources. It requires computing every customer-pair's information, which takes time. Therefore, for big base platforms, this algorithm is hard to implement without a very strong parallelizable system.
+ Item-Item Collaborative filtering: It is quite similar to the previous algorithm, but instead of finding customer look-alikes, we try finding item look-alikes. Once we have the item look-alike matrix, we can easily recommend alike items to a customer who has purchased any item from the store. This algorithm is far less resource consuming than user-user collaborative filtering. Hence, for a new customer the algorithm takes far less time than user-user collaborative filtering as we don’t need all similarity scores between customers. And with a fixed number of products, the product-product look-alike matrix is fixed over time.
+ Other simpler algorithms: There are other approaches like market basket analysis, which generally do not have higher predictive power than the algorithms described above.

#### First Example: quick and dirty similarity system

Collaborative systems often deploy a nearest neighbor method or an item-based collaborative filtering system – a simple system that makes recommendations based on simple regression or a weighted-sum approach.

The end goal of collaborative systems is to make recommendations based on customers’ behavior, purchasing patterns, and preferences, as well as product attributes, price ranges, and product categories.

Content-based systems can deploy methods as simple as averaging, or they can deploy advanced machine learning approaches in the form of Naive Bayes classifiers, clustering algorithms or artificial neural nets.

First let's create a dataset called X, with 6 records and 2 features each.

```
X = np.array([[-1, 2], [4, -4], [-2, 1], [-1, 3], [-3, 2], [-1, 4]])
print(X)
```

Next we will instantiate a nearest neighbor object, and call it nbrs. Then we will fit it to dataset X.

```
nbrs = NearestNeighbors(n_neighbors=3, algorithm='ball_tree').fit(X)
```

Let's find the k-neighbors of each point in object X. To do that we call the kneighbors() function on object X.

```
distances, indices = nbrs.kneighbors(X)
print(indices)
print(distances)
```

Imagine you have a new incoming data point. It contains the values -2 and 4. To search object X and identify the most similar record, all you need to do is call the kneighbors() function on the new incoming data point:

```
dist, idx = nbrs.kneighbors([[-2, 4]])
print('The closest are {}'.format(idx))
print('The distances are {}'.format(dist))
```

The results indicate that the record that has neighbors with the indices [5, 3, 0] is the most similar to the new incoming data point. If you look back at the records in X, that is the last record: [-1, 4].
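For intuition, the sklearn lookup above can be reproduced with a few lines of plain Python — a sketch using squared Euclidean distance, which (assuming ties are broken by index order, as Python's stable sort does) returns the same indices as `kneighbors`:

```python
def nearest(X, query, k=3):
    """Return the indices of the k rows of X closest to query (squared Euclidean)."""
    by_distance = sorted(range(len(X)),
                         key=lambda i: sum((a - b) ** 2 for a, b in zip(X[i], query)))
    return by_distance[:k]

X = [[-1, 2], [4, -4], [-2, 1], [-1, 3], [-3, 2], [-1, 4]]
print(nearest(X, [-2, 4]))  # [5, 3, 0], matching the sklearn result above
```

Squared distance suffices here because squaring is monotonic; skipping the square root does not change the ranking.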
Just based on a quick glance you can see that, indeed, the last record in object X is the one that is most similar to this new incoming data point [-2, 4]. In this way, you can use kNN to quickly classify new incoming data points and then make recommendations, all based on similarity.

#### Second Example: Movie Lens Database

The MovieLens dataset has been collected by the GroupLens Research Project at the University of Minnesota. MovieLens 100K dataset consists of:

+ 100,000 ratings (1-5) from 943 users on 1682 movies.
+ Each user has rated at least 20 movies.
+ Simple demographic info for the users (age, gender, occupation, zip)
+ Genre information of movies

```
with ZipFile('../datasets/CSVs/ml-100k.zip') as z:
    for filename in z.namelist():
        if not os.path.isdir(filename):
            print(filename)

# Reading users file:
u_cols = ['user_id', 'age', 'sex', 'occupation', 'zip_code']
with ZipFile('../datasets/CSVs/ml-100k.zip') as z:
    myfile = StringIO(z.read('ml-100k/u.user').decode('latin-1'))
users = pd.read_csv(myfile, sep='|', names=u_cols, encoding='latin-1')
users.head()

# Reading ratings file:
r_cols = ['user_id', 'movie_id', 'rating', 'unix_timestamp']
with ZipFile('../datasets/CSVs/ml-100k.zip') as z:
    myfile = StringIO(z.read('ml-100k/u.data').decode('latin-1'))
ratings = pd.read_csv(myfile, sep='\t', names=r_cols, encoding='latin-1')
ratings.head()

# Reading items file:
i_cols = ['movie id', 'movie title', 'release date', 'video release date', 'IMDb URL', 'unknown',
          'Action', 'Adventure', 'Animation', 'Children\'s', 'Comedy', 'Crime', 'Documentary',
          'Drama', 'Fantasy', 'Film-Noir', 'Horror', 'Musical', 'Mystery', 'Romance', 'Sci-Fi',
          'Thriller', 'War', 'Western']
with ZipFile('../datasets/CSVs/ml-100k.zip') as z:
    myfile = StringIO(z.read('ml-100k/u.item').decode('latin-1'))
items = pd.read_csv(myfile, sep='|', names=i_cols, encoding='latin-1')
items.head()
```
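A quick back-of-the-envelope check of the sparsity implied by the numbers above (100,000 ratings over 943 users × 1682 movies) explains why collaborative filtering on this data has to cope with a mostly empty user-item matrix:

```python
n_ratings, n_users, n_items = 100_000, 943, 1682

# fraction of user-item cells that actually contain a rating
density = n_ratings / (n_users * n_items)
print(f"{density:.1%} of the user-item matrix is filled")  # roughly 6.3%
```

In other words, about 94% of the possible (user, movie) pairs have no rating at all, which is why similarity-based methods and dimensionality reduction are standard tools here.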
github_jupyter
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
```

## Object Creation

Creating a Series by passing a list of values, letting pandas create a default integer index (for the list)

```
s = pd.Series([1,3,5,np.nan,6,8])
s
```

Creating a DataFrame by passing a numpy array, with a datetime index and labeled columns

```
dates = pd.date_range('20130101', periods=6)
dates

df = pd.DataFrame(np.random.randn(6,4), index=dates, columns=list('ABCD'))
df
```

Creating a DataFrame by passing a dict

```
df2 = pd.DataFrame({ 'A' : 1.,
                     'B' : pd.Timestamp('20130102'),
                     'C' : pd.Series(1, index=list(range(4)), dtype='float32'),
                     'D' : np.array([3] * 4, dtype='int32'),
                     'E' : pd.Categorical(["test","train","test","train"]),
                     'F' : 'foo'})
df2
df2.dtypes
```

## Viewing Data

```
df.head()    # see the top rows of the frame
df.tail(3)   # see the bottom rows of the frame
df.index
df.columns
df.values
df.describe()          # describe shows a quick statistic summary of data
df.T                   # Transposing your data
df.sort_values(by='B') # sorting by values
```

## Selection

### Getting

```
df['A']
df[0:3]
df['20130102':'20130104']
```

### Selection by Label

For production code, optimized pandas data access methods were recommended: `.at`, `.iat`, `.loc`, `.iloc`, and `.ix`

```
df.loc[dates[0]]
dates[0]
```

**Selecting on a multi-axis by label**

```
df.loc[:,['A','B']]                      # Selecting on a multi-axis by label
df.loc['20130102':'20130104',['A','B']]  # Showing label slicing, both endpoints are included
df.loc['20130102',['A','B']]             # Reduction in the dimensions of the returned object
df.loc[dates[0],'A']
```

**For getting fast access to a scalar (equiv to the prior method)**

```
df.at[dates[0],'A']
```

## Selection by Position

`.iloc`

```
df.iloc[3]
df.iloc[3:5,0:2]
df.iloc[[1,2,4],[0,2]]
df.iloc[1:3,:]
df.iloc[:,1:3]
df.iloc[1,1]
```

For getting fast access to a scalar (equiv to the prior method)

```
df.iat[1,1]
```

## Boolean indexing

```
df[df.A > 0]
```

A `where` operation for getting

```
df[df>0]
```

Using the `isin()` method for filtering:

```
df2 = df.copy()
df2['E'] = ['one', 'one', 'two', 'three', 'four', 'three']
df2
df2[df2['E'].isin(['two','four'])]
```

## Setting

```
s1 = pd.Series([1,2,3,4,5,6], index=pd.date_range('20130102', periods=6))
s1
df['F'] = s1
df.at[dates[0],'A'] = 0
df.iat[0,1] = 0
df.loc[:,'D'] = np.array([5] * len(df))
df
```

A `where` operation with setting

```
df2 = df.copy()
df2[df2 > 0] = -df2
df2
```

## Missing Data

pandas primarily uses the value `np.nan` to represent missing data. It's by default not included in computations. Reindexing allows you to change/add/delete index on a specified axis. This returns a copy of the data.

```
df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])
df1.loc[dates[0]:dates[1],'E'] = 1
df1
```

*To drop any rows that have missing data.*

```
df1.dropna(how='any')
```

*Filling missing data*

```
df1.fillna(value=5)
```

To get the boolean mask where values are `nan`

```
pd.isnull(df1)
```

## Operations

### Stats

Operations in general *exclude* missing data.

```
df.mean()
```

Same operation on the other axis

```
df.mean(1)
```

Operating with objects that have different dimensionality and need alignment. In addition, pandas automatically broadcasts along the specified dimension.
```
s = pd.Series([1,3,5,np.nan,6,8], index=dates).shift(2)
s
df.sub(s, axis='index')
df
```

## Apply

Applying functions to the data

```
df.apply(np.cumsum)
df.apply(lambda x: x.max() - x.min())
```

## Histogramming

```
s = pd.Series(np.random.randint(0,7, size=10))
s
s.value_counts()
```

## String Methods

```
s = pd.Series(['A','B','C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
s.str.lower()
```

## Merge

### Concat

pandas provides various facilities for easily combining together Series, DataFrame, and Panel objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations

Concatenating pandas objects together with `concat()`:

```
df = pd.DataFrame(np.random.randn(10,4))
df

# break it into pieces
pieces = [df[:3], df[3:7], df[7:]]
pd.concat(pieces)
```

### Join

```
left = pd.DataFrame({'key': ['foo','foo'], 'lval': [1,2]})
right = pd.DataFrame({'key': ['foo','foo'], 'rval': [4,5]})
left
right
pd.merge(left, right, on='key')

left = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1,2]})
right = pd.DataFrame({'key': ['foo', 'bar'], 'rval': [4,5]})
left
right
pd.merge(left, right, on='key')
```

### Append

```
df = pd.DataFrame(np.random.randn(8,4), columns=['A','B','C','D'])
df
s = df.iloc[3]
df.append(s, ignore_index=True)
```

## Grouping

By "group by" we are referring to a process involving one or more of the following steps

* **Splitting** the data into groups based on some criteria
* **Applying** a function to each group independently
* **Combining** the results into a data structure

```
df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
                   'B' : ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
                   'C' : np.random.randn(8),
                   'D' : np.random.randn(8)})
df
```

Grouping and then applying a function *sum* to the resulting groups

```
df.groupby('A').sum()
```

Grouping by multiple columns forms a hierarchical index, to which we then apply the function

```
df.groupby(['A','B']).sum()
```

## Reshaping

### Stack

```
tuples = list(zip(*[['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
                    ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]))
tuples
index = pd.MultiIndex.from_tuples(tuples, names=['first','second'])
df = pd.DataFrame(np.random.randn(8,2), index=index, columns=['A','B'])
df
df2 = df[:4]
df[:4]
```

The `stack()` method "compresses" a level in the DataFrame's columns

```
stacked = df2.stack()
stacked
stacked.unstack()
stacked.unstack(1)
stacked.unstack(0)
```

## Pivot Tables

```
df = pd.DataFrame({'A' : ['one', 'one', 'two', 'three'] * 3,
                   'B' : ['A', 'B', 'C'] * 4,
                   'C' : ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
                   'D' : np.random.randn(12),
                   'E' : np.random.randn(12)})
df
pd.pivot_table(df, values='D', index=['A','B'], columns=['C'])
```

## Time Series

```
rng = pd.date_range('1/1/2012', periods=100, freq='S')
ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
ts.resample('5Min').sum()

rng = pd.date_range('3/6/2012 00:00', periods=5, freq='D')
ts = pd.Series(np.random.randn(len(rng)), rng)
ts
ts_utc = ts.tz_localize('UTC')
ts_utc
ts_utc.tz_convert('US/Eastern')

rng = pd.date_range('1/1/2012', periods=5, freq='M')
ts = pd.Series(np.random.randn(len(rng)), index=rng)
ts
ps = ts.to_period()
ps
ps.to_timestamp()

prng = pd.period_range('1990Q1', '2000Q4', freq='Q-NOV')
prng
ts = pd.Series(np.random.randn(len(prng)), prng)
ts
ts.index = (prng.asfreq('M', 'e')+1).asfreq('M','s') + 9
ts.head()
```

## Categoricals

```
df = pd.DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a','b','b','a','a','e']})
```

Convert the raw grades to a categorical data type.

```
df["grade"] = df["raw_grade"].astype("category")
df["grade"]
```

Rename the categories to more meaningful names (assigning to `Series.cat.categories` is inplace!)
```
df["grade"].cat.categories = ["very good", "good", "very bad"]
df
```

Sorting is per order in the categories, not lexical order

```
df["grade"] = df["grade"].cat.set_categories(["very bad", "bad", "medium", "good", "very good"])
df["grade"]
df.sort_values(by="grade")
df.groupby("grade").size()
```

## Plotting

```
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
ts = ts.cumsum()
ts.plot()
```

**The following command is important to view matplotlib plots on a jupyter notebook**

```
%matplotlib inline
ts.plot()

df = pd.DataFrame(np.random.randn(1000,4), index=ts.index, columns=['A','B','C','D'])
df = df.cumsum()
plt.figure(); df.plot(); plt.legend(loc='best')
```

# Getting Data In/Out

## CSV

[Writing to a csv file](http://pandas.pydata.org/pandas-docs/stable/io.html#io-store-in-csv)

```
df.to_csv('foo.csv')
```

[Reading from a csv file](http://pandas.pydata.org/pandas-docs/stable/io.html#io-read-csv-table)

```
pd.read_csv('foo.csv')
```

## HDF5

Reading and writing to [HDFStores](http://pandas.pydata.org/pandas-docs/stable/io.html#io-hdf5)

Writing to a HDF5 Store

```
df.to_hdf('foo.h5','df')
```

Reading from a HDF5 Store

```
pd.read_hdf('foo.h5','df')
```

## Excel

```
df.to_excel('foo.xlsx', sheet_name='Sheet1')
pd.read_excel('foo.xlsx', 'Sheet1', index_col=None, na_values=['NA'])
```

## Gotchas

If you are trying an operation and you see an exception like:

```
if pd.Series([False, True, False]):
    print("I was true")
```

See [Comparisons](http://pandas.pydata.org/pandas-docs/stable/basics.html#basics-compare) for an explanation and what to do

See [Gotchas](http://pandas.pydata.org/pandas-docs/stable/gotchas.html#gotchas) as well.
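The split-apply-combine idea behind the `groupby` section above can be mimicked in plain Python, which makes the three steps explicit (a toy sketch, not how pandas implements it):

```python
from collections import defaultdict

rows = [('foo', 1.0), ('bar', 2.0), ('foo', 3.0), ('bar', 4.0)]

sums = defaultdict(float)
for key, value in rows:   # split by key...
    sums[key] += value    # ...apply sum, combining results as we go

print(dict(sums))  # {'foo': 4.0, 'bar': 6.0}
```

This is equivalent in spirit to `df.groupby('A').sum()` on a two-column frame; pandas adds vectorization, multi-key grouping, and the hierarchical index shown earlier.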
github_jupyter
---

_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-machine-learning/resources/bANLa) course resource._

---

## Assignment 4 - Understanding and Predicting Property Maintenance Fines

This assignment is based on a data challenge from the Michigan Data Science Team ([MDST](http://midas.umich.edu/mdst/)).

The Michigan Data Science Team ([MDST](http://midas.umich.edu/mdst/)) and the Michigan Student Symposium for Interdisciplinary Statistical Sciences ([MSSISS](https://sites.lsa.umich.edu/mssiss/)) have partnered with the City of Detroit to help solve one of the most pressing problems facing Detroit - blight. [Blight violations](http://www.detroitmi.gov/How-Do-I/Report/Blight-Complaint-FAQs) are issued by the city to individuals who allow their properties to remain in a deteriorated condition. Every year, the city of Detroit issues millions of dollars in fines to residents and every year, many of these fines remain unpaid. Enforcing unpaid blight fines is a costly and tedious process, so the city wants to know: how can we increase blight ticket compliance?

The first step in answering this question is understanding when and why a resident might fail to comply with a blight ticket. This is where predictive modeling comes in. For this assignment, your task is to predict whether a given blight ticket will be paid on time.

All data for this assignment has been provided to us through the [Detroit Open Data Portal](https://data.detroitmi.gov/). **Only the data already included in your Coursera directory can be used for training the model for this assignment.** Nonetheless, we encourage you to look into data from other Detroit datasets to help inform feature creation and model selection.
We recommend taking a look at the following related datasets:

* [Building Permits](https://data.detroitmi.gov/Property-Parcels/Building-Permits/xw2a-a7tf)
* [Trades Permits](https://data.detroitmi.gov/Property-Parcels/Trades-Permits/635b-dsgv)
* [Improve Detroit: Submitted Issues](https://data.detroitmi.gov/Government/Improve-Detroit-Submitted-Issues/fwz3-w3yn)
* [DPD: Citizen Complaints](https://data.detroitmi.gov/Public-Safety/DPD-Citizen-Complaints-2016/kahe-efs3)
* [Parcel Map](https://data.detroitmi.gov/Property-Parcels/Parcel-Map/fxkw-udwf)

___

We provide you with two data files for use in training and validating your models: train.csv and test.csv. Each row in these two files corresponds to a single blight ticket, and includes information about when, why, and to whom each ticket was issued. The target variable is compliance, which is True if the ticket was paid early, on time, or within one month of the hearing date, False if the ticket was paid after the hearing date or not at all, and Null if the violator was found not responsible. Compliance, as well as a handful of other variables that will not be available at test-time, are only included in train.csv.

Note: All tickets where the violators were found not responsible are not considered during evaluation. They are included in the training set as an additional source of data for visualization, and to enable unsupervised and semi-supervised approaches. However, they are not included in the test set.

<br>

**File descriptions** (Use only this data for training your model!)

    readonly/train.csv - the training set (all tickets issued 2004-2011)
    readonly/test.csv - the test set (all tickets issued 2012-2016)
    readonly/addresses.csv & readonly/latlons.csv - mapping from ticket id to addresses, and from
        addresses to lat/lon coordinates. Note: misspelled addresses may be incorrectly geolocated.
<br>

**Data fields**

train.csv & test.csv

    ticket_id - unique identifier for tickets
    agency_name - Agency that issued the ticket
    inspector_name - Name of inspector that issued the ticket
    violator_name - Name of the person/organization that the ticket was issued to
    violation_street_number, violation_street_name, violation_zip_code - Address where the violation occurred
    mailing_address_str_number, mailing_address_str_name, city, state, zip_code, non_us_str_code, country - Mailing address of the violator
    ticket_issued_date - Date and time the ticket was issued
    hearing_date - Date and time the violator's hearing was scheduled
    violation_code, violation_description - Type of violation
    disposition - Judgment and judgment type
    fine_amount - Violation fine amount, excluding fees
    admin_fee - $20 fee assigned to responsible judgments
    state_fee - $10 fee assigned to responsible judgments
    late_fee - 10% fee assigned to responsible judgments
    discount_amount - discount applied, if any
    clean_up_cost - DPW clean-up or graffiti removal cost
    judgment_amount - Sum of all fines and fees
    grafitti_status - Flag for graffiti violations

train.csv only

    payment_amount - Amount paid, if any
    payment_date - Date payment was made, if it was received
    payment_status - Current payment status as of Feb 1 2017
    balance_due - Fines and fees still owed
    collection_status - Flag for payments in collections
    compliance [target variable for prediction]
        Null = Not responsible
        0 = Responsible, non-compliant
        1 = Responsible, compliant
    compliance_detail - More information on why each ticket was marked compliant or non-compliant

___

## Evaluation

Your predictions will be given as the probability that the corresponding blight ticket will be paid on time.

The evaluation metric for this assignment is the Area Under the ROC Curve (AUC).

Your grade will be based on the AUC score computed for your classifier. A model with an AUROC of 0.7 passes this assignment; over 0.75 will receive full points.
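To see what the AUC metric above actually measures, here is a dependency-free sketch via the Mann-Whitney rank statistic: the AUC is the probability that a randomly chosen positive example is scored higher than a randomly chosen negative one. (Ties in scores are not handled here, unlike `sklearn.metrics.roc_auc_score`, so this is illustrative only.)

```python
def auc(labels, scores):
    """Rank-based AUC: P(random positive outscores random negative), no tie handling."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    rank = {i: r + 1 for r, i in enumerate(order)}   # 1-based ranks by score
    pos = [i for i, y in enumerate(labels) if y == 1]
    n_pos, n_neg = len(pos), len(labels) - len(pos)
    u = sum(rank[i] for i in pos) - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

A classifier that ranks every compliant ticket above every non-compliant one scores 1.0; random scoring hovers around 0.5, which is why the 0.7 passing threshold is meaningfully above chance.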
___

For this assignment, create a function that trains a model to predict blight ticket compliance in Detroit using `readonly/train.csv`. Using this model, return a series of length 61001 with the data being the probability that each corresponding ticket from `readonly/test.csv` will be paid, and the index being the ticket_id.

Example:

    ticket_id
    284932    0.531842
    285362    0.401958
    285361    0.105928
    285338    0.018572
              ...
    376499    0.208567
    376500    0.818759
    369851    0.018528
    Name: compliance, dtype: float32

### Hints

* Make sure your code is working before submitting it to the autograder.
* Print out your result to see whether there is anything weird (e.g., all probabilities are the same).
* Generally the total runtime should be less than 10 mins. You should NOT use Neural Network related classifiers (e.g., MLPClassifier) in this question.
* Try to avoid global variables. If you have other functions besides blight_model, you should move those functions inside the scope of blight_model.
* Refer to the pinned threads in Week 4's discussion forum when there is something you cannot figure out.
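The required return value is a `float32` Series named `compliance` indexed by `ticket_id`. A minimal sketch of constructing such a Series (the first three ticket ids and rounded probabilities are taken from the example above; in the real function they would come from `predict_proba`):

```python
import numpy as np
import pandas as pd

ticket_ids = [284932, 285362, 285361]
probs = [0.531842, 0.401958, 0.105928]

# Series shaped like the expected answer: float32 data, ticket_id index, name "compliance"
result = pd.Series(np.asarray(probs, dtype="float32"),
                   index=pd.Index(ticket_ids, name="ticket_id"),
                   name="compliance")
print(result.dtype)  # -> float32
```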
```
import pandas as pd
import numpy as np

def blight_model():
    # import the necessary packages; the hints rule out neural-network
    # classifiers, so use gradient boosting instead of an MLP
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.ensemble import GradientBoostingClassifier

    # load data
    train_df = pd.read_csv('train.csv', encoding='ISO-8859-1')
    test_df = pd.read_csv('test.csv', encoding='ISO-8859-1')
    addresses_df = pd.read_csv('addresses.csv', encoding='ISO-8859-1')
    latlons_df = pd.read_csv('latlons.csv', encoding='ISO-8859-1')

    # columns that leak the outcome and are only present in train.csv
    list_to_remove = ['balance_due', 'collection_status', 'compliance_detail',
                      'payment_amount', 'payment_date', 'payment_status']
    # high-cardinality or otherwise unused columns, dropped from both sets
    list_to_remove_all = ['violator_name', 'zip_code', 'country', 'city',
                          'inspector_name', 'violation_street_number',
                          'violation_street_name', 'violation_zip_code',
                          'violation_description', 'mailing_address_str_number',
                          'mailing_address_str_name', 'non_us_str_code',
                          'ticket_issued_date', 'hearing_date', 'admin_fee']
    train_df = train_df.drop(list_to_remove_all, axis=1)
    train_df = train_df.drop(list_to_remove, axis=1)
    test_df = test_df.drop(list_to_remove_all, axis=1)
    train_df = train_df.drop('grafitti_status', axis=1)
    test_df = test_df.drop('grafitti_status', axis=1)

    # join train and test dfs with the addresses and latlons dfs
    address_latlon_df = addresses_df.set_index('address').join(latlons_df.set_index('address'))
    train_df = train_df.set_index('ticket_id').join(address_latlon_df.set_index('ticket_id'))
    test_df = test_df.set_index('ticket_id').join(address_latlon_df.set_index('ticket_id'))

    # map the 10 most frequent violation codes to 0-9 and everything else to -1
    vio_code_freq10 = list(train_df['violation_code'].value_counts().index[:10])
    train_df['violation_code_freq10'] = [vio_code_freq10.index(c) if c in vio_code_freq10 else -1
                                         for c in train_df['violation_code']]
    train_df.drop('violation_code', axis=1, inplace=True)
    test_df['violation_code_freq10'] = [vio_code_freq10.index(c) if c in vio_code_freq10 else -1
                                        for c in test_df['violation_code']]
    test_df.drop('violation_code', axis=1, inplace=True)

    # remove rows with a null compliance value (violator found not responsible)
    train_df = train_df[train_df['compliance'].isnull() == False]

    # fill NaN values in the lat, lon, and state columns
    for col in ['lat', 'lon', 'state']:
        train_df[col].fillna(method='pad', inplace=True)
        test_df[col].fillna(method='pad', inplace=True)

    # one-hot encode the categorical columns
    columns_to_encode = ['agency_name', 'state', 'disposition', 'violation_code_freq10']
    train_df = pd.get_dummies(train_df, columns=columns_to_encode)
    test_df = pd.get_dummies(test_df, columns=columns_to_encode)

    # oversample the minority (compliant) class tenfold
    train_df = train_df.sort_values('compliance', ascending=False)
    num_of_comp = int(train_df['compliance'].sum())
    train_df = train_df.append([train_df[0:num_of_comp]] * 10, ignore_index=True)

    # split the train_df and test_df
    train_features = train_df.columns.drop('compliance')
    X_train = train_df[train_features]
    y_train = train_df['compliance']
    X_test = test_df

    # keep only the dummy columns common to both sets, in the same order
    l1 = list(set(X_test.columns) - set(X_train.columns))
    X_test = X_test.drop(l1, axis=1)
    l2 = list(set(X_train.columns) - set(X_test.columns))
    X_train = X_train.drop(l2, axis=1)
    X_test = X_test[X_train.columns]

    # normalize X_train and X_test between 0 and 1
    scaler = MinMaxScaler()
    X_train = scaler.fit_transform(X_train)
    X_test = scaler.transform(X_test)

    # build and train the model
    clf = GradientBoostingClassifier(random_state=0)
    clf.fit(X_train, y_train)

    # predict compliance probabilities for the test set
    test_proba = clf.predict_proba(X_test)[:, 1]
    test_df = pd.read_csv('test.csv', encoding="ISO-8859-1")
    test_df['compliance'] = test_proba
    test_df.set_index('ticket_id', inplace=True)
    return test_df['compliance']

blight_model()
```
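The hints above recommend sanity-checking the model before submitting. A minimal sketch of a hold-out AUC check; the feature matrix and labels here are synthetic stand-ins, not the real ticket data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the engineered feature matrix (made-up data)
rng = np.random.RandomState(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Hold out a validation split and score the classifier by AUC
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
val_auc = roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1])
print(round(val_auc, 2))
```

On the real data, a validation AUC near or above the 0.75 threshold is a reasonable signal that the pipeline is worth submitting.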
# Adstock Campaign/Channel Attribution Model

This is a reference implementation of the adstock model that estimates the contribution of individual campaigns/channels to the total outcome (number of conversions, site traffic, etc.). Assuming we observe activity data for several campaigns/channels as time series $x_{it}$, the total outcome is modeled as $\sum_{i} c(x_i)$, where $c(\cdot)$ is the convolution operation with an exp decay kernel.

### References
[1] [Introduction to Algorithmic Marketing](https://algorithmicweb.wordpress.com/) book

```
%matplotlib inline
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt

# The input of the model is a sequence of samples (time series)
# where each sample is represented by three values:
#   - Total revenue
#   - Intensity of campaign 01
#   - Intensity of campaign 02
revenue_series = [37, 89, 82, 58, 110, 77, 103, 78, 95, 106,
                  98, 96, 68, 96, 157, 198, 145, 132, 96, 135]
campaign_series_01 = [6, 27, 0, 0, 20, 0, 20, 0, 0, 18, 9, 0, 0, 0, 13, 25, 0, 15, 0, 0]
campaign_series_02 = [3, 0, 4, 0, 5, 0, 0, 0, 8, 0, 0, 5, 0, 11, 16, 11, 5, 0, 0, 15]

decay_length = 3
time = range(1, len(revenue_series) + 1)

plt.plot(time, revenue_series, time, campaign_series_01, time, campaign_series_02);

#
# Apply a time lag (memory effect) to a campaign intensity series.
# It is modeled as a convolution operation with an exp decay filter (kernel).
#
def lag(x, alpha):
    w = np.array([np.power(alpha, i) for i in range(decay_length)])
    xx = np.vstack([np.append(np.zeros(i), x[:len(x) - i]) for i in range(decay_length)])
    y = np.dot(w / np.sum(w), xx)
    return y

#
# Apply the lag operation to each campaign/channel
#
def adstock(x, alpha):
    return np.array([lag(x[i], alpha[i]) for i in range(len(x))]).T

def evaluate_ols_loss(x, y, alpha):
    x_transformed = adstock(x, alpha)
    reg_model = LinearRegression().fit(x_transformed, y)
    return -reg_model.score(x_transformed, y)

#
# Optimize the exp decay coefficients jointly with the linear regression model
#
y = np.array(revenue_series)
x = np.vstack([campaign_series_01, campaign_series_02])
solution = minimize(lambda alpha: evaluate_ols_loss(x, y, alpha),
                    x0=np.zeros(len(x)), tol=1e-6)
print(solution)

#
# Compare the model estimate with the actual revenue series
#
x_transformed = adstock(x, solution['x'])
reg_model = LinearRegression().fit(x_transformed, y)
revenue_predicted = reg_model.predict(x_transformed)
plt.plot(time, revenue_series, time, revenue_predicted);

#
# Plot the contribution of each channel on top of the intercept
#
plt.stackplot(time, [
    [reg_model.intercept_] * len(time),
    reg_model.coef_[0] * x_transformed[:, 0],
    reg_model.coef_[1] * x_transformed[:, 1]
]);
```
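The `lag` operation above is an ordinary causal convolution with a normalized exp-decay kernel; a short sketch showing the same computation via `np.convolve` (the input series is invented, `decay_length` matches the notebook):

```python
import numpy as np

decay_length = 3

def lag(x, alpha):
    # Normalized exp-decay kernel [1, alpha, alpha^2] / sum
    w = np.array([alpha ** i for i in range(decay_length)])
    w = w / w.sum()
    # Causal convolution truncated to the original series length
    return np.convolve(x, w)[:len(x)]

x = np.array([10.0, 0.0, 0.0, 5.0])
y = lag(x, 0.5)
# kernel weights are [4/7, 2/7, 1/7]: a single burst of 10 decays over 3 steps
print(np.round(y, 3))  # -> [5.714 2.857 1.429 2.857]
```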
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

ax11 = plt.subplot(2, 2, 1)
ax12 = plt.subplot(2, 2, 2)
ax21 = plt.subplot(2, 2, 3)
ax22 = plt.subplot(2, 2, 4)
ax11.set_title("ax11")
ax12.set_title("ax12")
ax21.set_title("ax21")
ax22.set_title("ax22")
plt.tight_layout()
plt.savefig("images/subplots2.png")  # save before show(), which finalizes the figure
plt.show()

fig, axes = plt.subplots(2, 2)
ax11, ax12, ax21, ax22 = axes.ravel()
ax11.set_title("ax11")
ax12.set_title("ax12")
ax21.set_title("ax21")
ax22.set_title("ax22")

plt.figure()
ax11 = plt.subplot(2, 2, 1)
ax12 = plt.subplot(2, 2, 2)
ax2 = plt.subplot(2, 1, 2)
ax11.set_title("ax11")
ax12.set_title("ax12")
ax2.set_title("ax2")
plt.tight_layout()
plt.savefig("images/complex_subplots.png")
plt.show()

sin = np.sin(np.linspace(-4, 4, 100))  # define before it is plotted

fig, axes = plt.subplots(2, 2)
axes[0, 0].plot(sin)

plt.figure()
plt.subplot(2, 2, 1)
plt.plot(sin)
plt.subplot(2, 2, 2)
plt.plot(sin, c='r')
plt.subplot(2, 2, 3)
plt.subplot(2, 2, 4)

fig, axes = plt.subplots(2, 2)
axes[0, 0].plot(sin)
axes[0, 1].plot(sin, c='r')
plt.savefig("images/subplots_sin.png", bbox_inches="tight", dpi=300)

fig, ax = plt.subplots(2, 4, figsize=(10, 5))
ax[0, 0].plot(sin)
ax[0, 1].plot(range(100), sin)  # same as above
ax[0, 2].plot(np.linspace(-4, 4, 100), sin)
ax[0, 3].plot(sin[::10], 'o')
ax[1, 0].plot(sin, c='r')
ax[1, 1].plot(sin, '--')
ax[1, 2].plot(sin, lw=3)
ax[1, 3].plot(sin[::10], '--o')
plt.tight_layout()  # makes stuff fit - usually works
plt.savefig("images/plot.png", bbox_inches="tight", dpi=300)

x = np.random.uniform(size=50)
y = x + np.random.normal(0, .1, size=50)
sizes = np.abs(np.random.normal(scale=20, size=50))
fig, ax = plt.subplots(1, 4, figsize=(10, 3), subplot_kw={'xticks': (), 'yticks': ()})
ax[0].plot(x, y, 'o')
ax[0].set_title("plot")
ax[1].scatter(x, y)
ax[1].set_title("scatter")
ax[2].scatter(x, y, c=x-y, cmap='bwr', edgecolor='k')
ax[2].set_title("scatter w/ color")
ax[3].scatter(x, y, c=x-y, s=sizes, cmap='bwr', edgecolor='k')
ax[3].set_title("scatter w/ size")
plt.tight_layout() plt.savefig("images/matplotlib_scatter.png", bbox_inches="tight", dpi=300) import pandas as pd df, = pd.read_html("""<table><tbody><tr><th>&nbsp;</th><th>&nbsp;</th><th>Movie</th><th>Distributor</th><th>Gross</th><th>Change</th><th>Thtrs.</th><th>Per Thtr.</th><th>Total Gross</th><th>Days</th></tr> <tr> <td class="data">1</td> <td class="data">(1)</td> <td><b><a href="/movie/Hidden-Figures#tab=box-office">Hidden Figures</a></b></td> <td><a href="/market/distributor/20th-Century-Fox">20th Century Fox</a></td> <td class="data">$33,605,651</td> <td class="data chart_up">+7%</td> <td class="data">3,286</td> <td class="data chart_grey">$10,227</td> <td class="data">&nbsp;&nbsp;$67,988,751</td> <td class="data">26</td> </tr> <tr bgcolor="#ffeeff"> <td class="data chart_up"><b>2</b></td> <td class="data">(5)</td> <td><b><a href="/movie/La-La-Land#tab=box-office">La La Land</a></b></td> <td><a href="/market/distributor/Lionsgate">Lionsgate</a></td> <td class="data">$21,748,928</td> <td class="data chart_up">+21%</td> <td class="data">1,848</td> <td class="data chart_grey">$11,769</td> <td class="data">&nbsp;&nbsp;$81,330,497</td> <td class="data">42</td> </tr> <tr> <td class="data">3</td> <td class="data">(3)</td> <td><b><a href="/movie/Sing-(2016)#tab=box-office">Sing</a></b></td> <td><a href="/market/distributor/Universal">Universal</a></td> <td class="data">$21,109,675</td> <td class="data chart_down">-17%</td> <td class="data">3,431</td> <td class="data chart_grey">$6,153</td> <td class="data">&nbsp;&nbsp;$240,325,195</td> <td class="data">30</td> </tr> <tr bgcolor="#ffeeff"> <td class="data chart_down">4</td> <td class="data">(2)</td> <td><b><a href="/movie/Rogue-One-A-Star-Wars-Story#tab=box-office">Rogue One: A Star Wars Story</a></b></td> <td><a href="/market/distributor/Walt-Disney">Walt Disney</a></td> <td class="data">$20,073,829</td> <td class="data chart_down">-33%</td> <td class="data">3,162</td> <td class="data chart_grey">$6,348</td> <td 
class="data">&nbsp;&nbsp;$505,165,563</td> <td class="data">35</td> </tr> <tr> <td class="data chart_up"><b>5</b></td> <td class="data">(28)</td> <td><b><a href="/movie/Patriots-Day#tab=box-office">Patriots Day</a></b></td> <td><a href="/market/distributor/Lionsgate">Lionsgate</a></td> <td class="data">$16,715,863</td> <td class="data chart_up">+10,435%</td> <td class="data">3,120</td> <td class="data chart_grey">$5,358</td> <td class="data">&nbsp;&nbsp;$17,639,945</td> <td class="data">30</td> </tr> <tr bgcolor="#ffeeff"> <td class="data">6</td> <td class="data"><b>new</b></td> <td><b><a href="/movie/Bye-Bye-Man-The#tab=box-office">The Bye Bye Man</a></b></td> <td><a href="/market/distributor/STX-Entertainment">STX Entertainment</a></td> <td class="data">$16,559,630</td> <td class="data">&nbsp;</td> <td class="data">2,220</td> <td class="data chart_grey">$7,459</td> <td class="data">&nbsp;&nbsp;$16,559,630</td> <td class="data">7</td> </tr> <tr> <td class="data">7</td> <td class="data"><b>new</b></td> <td><b><a href="/movie/Monster-Trucks#tab=box-office">Monster Trucks</a></b></td> <td><a href="/market/distributor/Paramount-Pictures">Paramount Pictures</a></td> <td class="data">$15,611,554</td> <td class="data">&nbsp;</td> <td class="data">3,119</td> <td class="data chart_grey">$5,005</td> <td class="data">&nbsp;&nbsp;$15,611,554</td> <td class="data">7</td> </tr> <tr bgcolor="#ffeeff"> <td class="data">8</td> <td class="data"><b>new</b></td> <td><b><a href="/movie/Sleepless-(2016)#tab=box-office">Sleepless</a></b></td> <td><a href="/market/distributor/Open-Road">Open Road</a></td> <td class="data">$11,486,904</td> <td class="data">&nbsp;</td> <td class="data">1,803</td> <td class="data chart_grey">$6,371</td> <td class="data">&nbsp;&nbsp;$11,486,904</td> <td class="data">7</td> </tr> <tr> <td class="data chart_down">9</td> <td class="data">(4)</td> <td><b><a href="/movie/Underworld-Blood-Wars#tab=box-office">Underworld: Blood Wars</a></b></td> <td><a 
href="/market/distributor/Sony-Pictures">Sony Pictures</a></td> <td class="data">$8,794,841</td> <td class="data chart_down">-51%</td> <td class="data">3,070</td> <td class="data chart_grey">$2,865</td> <td class="data">&nbsp;&nbsp;$26,910,959</td> <td class="data">14</td> </tr> <tr bgcolor="#ffeeff"> <td class="data chart_down">10</td> <td class="data">(6)</td> <td><b><a href="/movie/Passengers-(2016)#tab=box-office">Passengers</a></b></td> <td><a href="/market/distributor/Sony-Pictures">Sony Pictures</a></td> <td class="data">$7,853,457</td> <td class="data chart_down">-36%</td> <td class="data">2,447</td> <td class="data chart_grey">$3,209</td> <td class="data">&nbsp;&nbsp;$92,233,188</td> <td class="data">30</td> </tr> <tr> <td class="data chart_up"><b>11</b></td> <td class="data">(38)</td> <td><b><a href="/movie/Live-by-Night#tab=box-office">Live by Night</a></b></td> <td><a href="/market/distributor/Warner-Bros">Warner Bros.</a></td> <td class="data">$7,481,705</td> <td class="data chart_up">+16,845%</td> <td class="data">2,822</td> <td class="data chart_grey">$2,651</td> <td class="data">&nbsp;&nbsp;$7,667,349</td> <td class="data">26</td> </tr> <tr bgcolor="#ffeeff"> <td class="data chart_down">12</td> <td class="data">(8)</td> <td><b><a href="/movie/Moana#tab=box-office">Moana</a></b></td> <td><a href="/market/distributor/Walt-Disney">Walt Disney</a></td> <td class="data">$6,968,577</td> <td class="data chart_down">-16%</td> <td class="data">1,847</td> <td class="data chart_grey">$3,773</td> <td class="data">&nbsp;&nbsp;$234,274,702</td> <td class="data">58</td> </tr> <tr> <td class="data chart_down">13</td> <td class="data">(7)</td> <td><b><a href="/movie/Why-Him#tab=box-office">Why Him?</a></b></td> <td><a href="/market/distributor/20th-Century-Fox">20th Century Fox</a></td> <td class="data">$5,032,411</td> <td class="data chart_down">-49%</td> <td class="data">1,977</td> <td class="data chart_grey">$2,545</td> <td 
class="data">&nbsp;&nbsp;$56,865,458</td> <td class="data">28</td> </tr> <tr bgcolor="#ffeeff"> <td class="data chart_down">14</td> <td class="data">(9)</td> <td><b><a href="/movie/Fences#tab=box-office">Fences</a></b></td> <td><a href="/market/distributor/Paramount-Pictures">Paramount Pictures</a></td> <td class="data">$4,367,322</td> <td class="data chart_down">-39%</td> <td class="data">1,342</td> <td class="data chart_grey">$3,254</td> <td class="data">&nbsp;&nbsp;$47,499,684</td> <td class="data">35</td> </tr> <tr> <td class="data chart_down">15</td> <td class="data">(12)</td> <td><b><a href="/movie/Lion-(Australia)#tab=box-office">Lion</a></b></td> <td><a href="/market/distributor/Weinstein-Co">Weinstein Co.</a></td> <td class="data">$3,539,926</td> <td class="data chart_up">+9%</td> <td class="data">575</td> <td class="data chart_grey">$6,156</td> <td class="data">&nbsp;&nbsp;$14,582,530</td> <td class="data">56</td> </tr> <tr bgcolor="#ffeeff"> <td class="data chart_up"><b>16</b></td> <td class="data">(20)</td> <td><b><a href="/movie/Silence-(2016)#tab=box-office">Silence</a></b></td> <td><a href="/market/distributor/Paramount-Pictures">Paramount Pictures</a></td> <td class="data">$2,926,937</td> <td class="data chart_up">+319%</td> <td class="data">747</td> <td class="data chart_grey">$3,918</td> <td class="data">&nbsp;&nbsp;$4,008,701</td> <td class="data">28</td> </tr> <tr> <td class="data chart_down">17</td> <td class="data">(11)</td> <td><b><a href="/movie/Manchester-by-the-Sea#tab=box-office">Manchester-by-the Sea</a></b></td> <td><a href="/market/distributor/Roadside-Attractions">Roadside Attractions</a></td> <td class="data">$2,786,718</td> <td class="data chart_down">-27%</td> <td class="data">726</td> <td class="data chart_grey">$3,838</td> <td class="data">&nbsp;&nbsp;$37,948,496</td> <td class="data">63</td> </tr> <tr bgcolor="#ffeeff"> <td class="data chart_down">18</td> <td class="data">(10)</td> <td><b><a 
href="/movie/Assassins-Creed#tab=box-office">Assassin’s Creed</a></b></td> <td><a href="/market/distributor/20th-Century-Fox">20th Century Fox</a></td> <td class="data">$1,979,315</td> <td class="data chart_down">-66%</td> <td class="data">968</td> <td class="data chart_grey">$2,045</td> <td class="data">&nbsp;&nbsp;$53,482,956</td> <td class="data">30</td> </tr> <tr> <td class="data chart_up"><b>19</b></td> <td class="data">(21)</td> <td><b><a href="/movie/Moonlight-(2015)#tab=box-office">Moonlight</a></b></td> <td><a href="/market/distributor/A24">A24</a></td> <td class="data">$1,693,623</td> <td class="data chart_up">+185%</td> <td class="data">582</td> <td class="data chart_grey">$2,910</td> <td class="data">&nbsp;&nbsp;$15,192,382</td> <td class="data">91</td> </tr> <tr bgcolor="#ffeeff"> <td class="data chart_down">20</td> <td class="data">(14)</td> <td><b><a href="/movie/Fantastic-Beasts-and-Where-to-Find-Them#tab=box-office">Fantastic Beasts and Where …</a></b></td> <td><a href="/market/distributor/Warner-Bros">Warner Bros.</a></td> <td class="data">$1,406,667</td> <td class="data chart_down">-46%</td> <td class="data">502</td> <td class="data chart_grey">$2,802</td> <td class="data">&nbsp;&nbsp;$231,277,992</td> <td class="data">63</td> </tr> <tr> <td class="data chart_down">21</td> <td class="data">(16)</td> <td><b><a href="/movie/Jackie-(2016)#tab=box-office">Jackie</a></b></td> <td><a href="/market/distributor/Fox-Searchlight">Fox Searchlight</a></td> <td class="data">$1,149,751</td> <td class="data chart_down">-26%</td> <td class="data">353</td> <td class="data chart_grey">$3,257</td> <td class="data">&nbsp;&nbsp;$10,902,840</td> <td class="data">49</td> </tr> <tr bgcolor="#ffeeff"> <td class="data chart_down">22</td> <td class="data">(13)</td> <td><b><a href="/movie/Monster-Calls-A#tab=box-office">A Monster Calls</a></b></td> <td><a href="/market/distributor/Focus-Features">Focus Features</a></td> <td class="data">$887,171</td> <td class="data 
chart_down">-68%</td> <td class="data">1,513</td> <td class="data chart_grey">$586</td> <td class="data">&nbsp;&nbsp;$3,710,799</td> <td class="data">28</td> </tr> <tr> <td class="data chart_down">23</td> <td class="data">(17)</td> <td><b><a href="/movie/Arrival-(2016)#tab=box-office">Arrival</a></b></td> <td><a href="/market/distributor/Paramount-Pictures">Paramount Pictures</a></td> <td class="data">$829,052</td> <td class="data chart_down">-34%</td> <td class="data">247</td> <td class="data chart_grey">$3,356</td> <td class="data">&nbsp;&nbsp;$95,349,632</td> <td class="data">70</td> </tr> <tr bgcolor="#ffeeff"> <td class="data chart_down">24</td> <td class="data">(22)</td> <td><b><a href="/movie/Trolls#tab=box-office">Trolls</a></b></td> <td><a href="/market/distributor/20th-Century-Fox">20th Century Fox</a></td> <td class="data">$639,148</td> <td class="data chart_up">+15%</td> <td class="data">262</td> <td class="data chart_grey">$2,439</td> <td class="data">&nbsp;&nbsp;$152,041,839</td> <td class="data">77</td> </tr> <tr> <td class="data chart_down">25</td> <td class="data">(19)</td> <td><b><a href="/movie/Dangal-(India)#tab=box-office">Dangal</a></b></td> <td><a href="/market/distributor/UTV-Communications">UTV Communications</a></td> <td class="data">$537,498</td> <td class="data chart_down">-52%</td> <td class="data">95</td> <td class="data chart_grey">$5,658</td> <td class="data">&nbsp;&nbsp;$12,008,183</td> <td class="data">30</td> </tr> <tr bgcolor="#ffeeff"> <td class="data">26</td> <td class="data">(26)</td> <td><b><a href="/movie/20th-Century-Women#tab=box-office">20th Century Women</a></b></td> <td><a href="/market/distributor/A24">A24</a></td> <td class="data">$482,993</td> <td class="data chart_up">+153%</td> <td class="data">29</td> <td class="data chart_grey">$16,655</td> <td class="data">&nbsp;&nbsp;$926,641</td> <td class="data">26</td> </tr> <tr> <td class="data">27</td> <td class="data"><b>new</b></td> <td><b><a 
href="/movie/Ok-Jaanu-(India)#tab=box-office">Ok Jaanu</a></b></td> <td><a href="/market/distributor/FIP">FIP</a></td> <td class="data">$312,090</td> <td class="data">&nbsp;</td> <td class="data">121</td> <td class="data chart_grey">$2,579</td> <td class="data">&nbsp;&nbsp;$312,090</td> <td class="data">7</td> </tr> <tr bgcolor="#ffeeff"> <td class="data chart_down">28</td> <td class="data">(23)</td> <td><b><a href="/movie/Doctor-Strange-(2016)#tab=box-office">Doctor Strange</a></b></td> <td><a href="/market/distributor/Walt-Disney">Walt Disney</a></td> <td class="data">$309,972</td> <td class="data chart_down">-30%</td> <td class="data">162</td> <td class="data chart_grey">$1,913</td> <td class="data">&nbsp;&nbsp;$231,345,380</td> <td class="data">77</td> </tr> <tr> <td class="data chart_down">29</td> <td class="data">(15)</td> <td><b><a href="/movie/Collateral-Beauty#tab=box-office">Collateral Beauty</a></b></td> <td><a href="/market/distributor/Warner-Bros">Warner Bros.</a></td> <td class="data">$305,013</td> <td class="data chart_down">-83%</td> <td class="data">254</td> <td class="data chart_grey">$1,201</td> <td class="data">&nbsp;&nbsp;$30,621,252</td> <td class="data">35</td> </tr> <tr bgcolor="#ffeeff"> <td class="data chart_down">30</td> <td class="data">(24)</td> <td><b><a href="/movie/Hacksaw-Ridge#tab=box-office">Hacksaw Ridge</a></b></td> <td><a href="/market/distributor/Lionsgate">Lionsgate</a></td> <td class="data">$208,955</td> <td class="data chart_down">-34%</td> <td class="data">172</td> <td class="data chart_grey">$1,215</td> <td class="data">&nbsp;&nbsp;$65,411,438</td> <td class="data">77</td> </tr> <tr> <td class="data chart_down">31</td> <td class="data">(18)</td> <td><b><a href="/movie/Office-Christmas-Party#tab=box-office">Office Christmas Party</a></b></td> <td><a href="/market/distributor/Paramount-Pictures">Paramount Pictures</a></td> <td class="data">$165,146</td> <td class="data chart_down">-86%</td> <td class="data">141</td> <td 
class="data chart_grey">$1,171</td> <td class="data">&nbsp;&nbsp;$54,648,213</td> <td class="data">42</td> </tr> <tr bgcolor="#ffeeff"> <td class="data chart_down">32</td> <td class="data">(30)</td> <td><b><a href="/movie/Allied#tab=box-office">Allied</a></b></td> <td><a href="/market/distributor/Paramount-Pictures">Paramount Pictures</a></td> <td class="data">$161,201</td> <td class="data chart_up">+20%</td> <td class="data">174</td> <td class="data chart_grey">$926</td> <td class="data">&nbsp;&nbsp;$40,015,450</td> <td class="data">58</td> </tr> <tr> <td class="data chart_down">33</td> <td class="data">(29)</td> <td><b><a href="/movie/Nocturnal-Animals#tab=box-office">Nocturnal Animals</a></b></td> <td><a href="/market/distributor/Focus-Features">Focus Features</a></td> <td class="data">$112,841</td> <td class="data chart_down">-25%</td> <td class="data">54</td> <td class="data chart_grey">$2,090</td> <td class="data">&nbsp;&nbsp;$10,604,004</td> <td class="data">63</td> </tr> <tr bgcolor="#ffeeff"> <td class="data chart_up"><b>34</b></td> <td class="data">(36)</td> <td><b><a href="/movie/Neruda-(Chile)#tab=box-office">Neruda</a></b></td> <td><a href="/market/distributor/Orchard-The">The Orchard</a></td> <td class="data">$69,515</td> <td class="data chart_up">+25%</td> <td class="data">15</td> <td class="data chart_grey">$4,634</td> <td class="data">&nbsp;&nbsp;$296,307</td> <td class="data">35</td> </tr> <tr> <td class="data chart_down">35</td> <td class="data">(34)</td> <td><b><a href="/movie/Miss-Peregrines-Home-for-Peculiar-Children#tab=box-office">Miss Peregrine’s Home for…</a></b></td> <td><a href="/market/distributor/20th-Century-Fox">20th Century Fox</a></td> <td class="data">$68,755</td> <td class="data chart_down">-4%</td> <td class="data">84</td> <td class="data chart_grey">$819</td> <td class="data">&nbsp;&nbsp;$87,170,123</td> <td class="data">112</td> </tr> <tr bgcolor="#ffeeff"> <td class="data chart_down">36</td> <td class="data">(33)</td> 
<td><b><a href="/movie/Loving-(2016)#tab=box-office">Loving</a></b></td> <td><a href="/market/distributor/Focus-Features">Focus Features</a></td> <td class="data">$56,241</td> <td class="data chart_down">-28%</td> <td class="data">41</td> <td class="data chart_grey">$1,372</td> <td class="data">&nbsp;&nbsp;$7,679,676</td> <td class="data">77</td> </tr> <tr> <td class="data chart_down">37</td> <td class="data">(27)</td> <td><b><a href="/movie/Railroad-Tigers#tab=box-office">Railroad Tigers</a></b></td> <td><a href="/market/distributor/Well-Go-USA">Well Go USA</a></td> <td class="data">$39,136</td> <td class="data chart_down">-76%</td> <td class="data">13</td> <td class="data chart_grey">$3,010</td> <td class="data">&nbsp;&nbsp;$205,655</td> <td class="data">14</td> </tr> <tr bgcolor="#ffeeff"> <td class="data chart_up"><b>38</b></td> <td class="data">(40)</td> <td><b><a href="/movie/avenir-L-(france)#tab=box-office">Things to Come</a></b></td> <td><a href="/market/distributor/IFC-Films">IFC Films</a></td> <td class="data">$30,237</td> <td class="data chart_down">-10%</td> <td class="data">20</td> <td class="data chart_grey">$1,512</td> <td class="data">&nbsp;&nbsp;$326,869</td> <td class="data">49</td> </tr> <tr> <td class="data">39</td> <td class="data">(39)</td> <td><b><a href="/movie/A-Ga-ssi-(S-Korea)#tab=box-office">The Handmaiden</a></b></td> <td><a href="/market/distributor/Magnolia-Pictures">Magnolia Pictures</a></td> <td class="data">$29,808</td> <td class="data chart_down">-18%</td> <td class="data">16</td> <td class="data chart_grey">$1,863</td> <td class="data">&nbsp;&nbsp;$1,961,089</td> <td class="data">91</td> </tr> <tr bgcolor="#ffeeff"> <td class="data">40</td> <td class="data"><b>new</b></td> <td><b><a href="/movie/Enas-Allos-Kosmos-(Greece)#tab=box-office">Worlds Apart</a></b></td> <td><a href="/market/distributor/Cinema-Libre">Cinema Libre</a></td> <td class="data">$25,007</td> <td class="data">&nbsp;</td> <td class="data">1</td> <td class="data 
chart_grey">$25,007</td> <td class="data">&nbsp;&nbsp;$25,007</td> <td class="data">7</td> </tr> <tr> <td class="data">41</td> <td class="data">(41)</td> <td><b><a href="/movie/Sully#tab=box-office">Sully</a></b></td> <td><a href="/market/distributor/Warner-Bros">Warner Bros.</a></td> <td class="data">$19,427</td> <td class="data chart_down">-18%</td> <td class="data">35</td> <td class="data chart_grey">$555</td> <td class="data">&nbsp;&nbsp;$125,059,249</td> <td class="data">133</td> </tr> <tr bgcolor="#ffeeff"> <td class="data">42</td> <td class="data"><b>new</b></td> <td><b><a href="/movie/Jeder-stirbt-fur-sich-allein-(Germany)#tab=box-office">Alone in Berlin</a></b></td> <td><a href="/market/distributor/IFC-Films">IFC Films</a></td> <td class="data">$14,502</td> <td class="data">&nbsp;</td> <td class="data">2</td> <td class="data chart_grey">$7,251</td> <td class="data">&nbsp;&nbsp;$14,502</td> <td class="data">7</td> </tr> <tr> <td class="data">43</td> <td class="data"><b>new</b></td> <td><b><a href="/movie/Vince-Giordano-Theres-a-Future-in-the-Past#tab=box-office">Vince Giordano: There’s a…</a></b></td> <td><a href="/market/distributor/First-Run-Features">First Run Features</a></td> <td class="data">$10,625</td> <td class="data">&nbsp;</td> <td class="data">1</td> <td class="data chart_grey">$10,625</td> <td class="data">&nbsp;&nbsp;$10,625</td> <td class="data">7</td> </tr> <tr bgcolor="#ffeeff"> <td class="data chart_up"><b>44</b></td> <td class="data">(46)</td> <td><b><a href="/movie/tout-nouveau-testament-Le-(Belgium)#tab=box-office">The Brand New Testament</a></b></td> <td><a href="/market/distributor/Music-Box-Films">Music Box Films</a></td> <td class="data">$8,835</td> <td class="data chart_down">-31%</td> <td class="data">13</td> <td class="data chart_grey">$680</td> <td class="data">&nbsp;&nbsp;$103,977</td> <td class="data">42</td> </tr> <tr> <td class="data chart_down">45</td> <td class="data">(42)</td> <td><b><a 
href="/movie/Bad-Santa-2#tab=box-office">Bad Santa 2</a></b></td> <td><a href="/market/distributor/Broad-Green-Pictures">Broad Green Pictures</a></td> <td class="data">$5,777</td> <td class="data chart_down">-74%</td> <td class="data">19</td> <td class="data chart_grey">$304</td> <td class="data">&nbsp;&nbsp;$17,781,710</td> <td class="data">58</td> </tr> <tr bgcolor="#ffeeff"> <td class="data chart_down">46</td> <td class="data">(43)</td> <td><b><a href="/movie/man-som-heter-Ove-En#tab=box-office">A Man Called Ove</a></b></td> <td><a href="/market/distributor/Music-Box-Films">Music Box Films</a></td> <td class="data">$5,635</td> <td class="data chart_down">-69%</td> <td class="data">7</td> <td class="data chart_grey">$805</td> <td class="data">&nbsp;&nbsp;$3,375,381</td> <td class="data">112</td> </tr> <tr> <td class="data chart_down">47</td> <td class="data">(45)</td> <td><b><a href="/movie/Trilogie-Marseillaise-La-(France)#tab=box-office">The Marseille Trilogy</a></b></td> <td><a href="/market/distributor/Janus-Films">Janus Films</a></td> <td class="data">$4,173</td> <td class="data chart_down">-71%</td> <td class="data">1</td> <td class="data chart_grey">$4,173</td> <td class="data">&nbsp;&nbsp;$21,513</td> <td class="data">21</td> </tr> <tr bgcolor="#ffeeff"> <td class="data">48</td> <td class="data">(48)</td> <td><b><a href="/movie/Saisons-Les-(France)#tab=box-office">Seasons</a></b></td> <td><a href="/market/distributor/Music-Box-Films">Music Box Films</a></td> <td class="data">$3,763</td> <td class="data chart_down">-60%</td> <td class="data">4</td> <td class="data chart_grey">$941</td> <td class="data">&nbsp;&nbsp;$126,431</td> <td class="data">70</td> </tr> <tr> <td class="data chart_down">49</td> <td class="data">(44)</td> <td><b><a href="/movie/Tanpopo#tab=box-office">Tampopo</a></b></td> <td><a href="/market/distributor/Janus-Films">Janus Films</a></td> <td class="data">$2,716</td> <td class="data chart_down">-85%</td> <td class="data">1</td> <td 
class="data chart_grey">$2,716</td> <td class="data">&nbsp;&nbsp;$203,791</td> <td class="data">91</td> </tr> <tr bgcolor="#ffeeff"> <td class="data chart_up"><b>50</b></td> <td class="data">(55)</td> <td><b><a href="/movie/Bad-Kids-The-(2016)#tab=box-office">The Bad Kids</a></b></td> <td><a href="/market/distributor/FilmRise">FilmRise</a></td> <td class="data">$1,863</td> <td class="data chart_up">+34%</td> <td class="data">6</td> <td class="data chart_grey">$311</td> <td class="data">&nbsp;&nbsp;$6,226</td> <td class="data">35</td> </tr> <tr> <td class="data">51</td> <td class="data">(51)</td> <td><b><a href="/movie/Harry-Benson-Shoot-First#tab=box-office">Harry Benson: Shoot First</a></b></td> <td><a href="/market/distributor/Magnolia-Pictures">Magnolia Pictures</a></td> <td class="data">$1,344</td> <td class="data chart_down">-37%</td> <td class="data">5</td> <td class="data chart_grey">$269</td> <td class="data">&nbsp;&nbsp;$17,184</td> <td class="data">42</td> </tr> <tr bgcolor="#ffeeff"> <td class="data chart_up"><b>52</b></td> <td class="data">(53)</td> <td><b><a href="/movie/Ardennen-D-(Belgium)#tab=box-office">The Ardennes</a></b></td> <td><a href="/market/distributor/Film-Movement">Film Movement</a></td> <td class="data">$976</td> <td class="data chart_down">-32%</td> <td class="data">2</td> <td class="data chart_grey">$488</td> <td class="data">&nbsp;&nbsp;$2,415</td> <td class="data">14</td> </tr> <tr> <td class="data chart_down">53</td> <td class="data">(50)</td> <td><b><a href="/movie/Busanhaeng-(south-korea)#tab=box-office">Train to Busan</a></b></td> <td><a href="/market/distributor/Well-Go-USA">Well Go USA</a></td> <td class="data">$799</td> <td class="data chart_down">-66%</td> <td class="data">2</td> <td class="data chart_grey">$400</td> <td class="data">&nbsp;&nbsp;$2,128,963</td> <td class="data">182</td> </tr> </tbody></table>""", header=0) df.Gross = df.Gross.str.replace("[$,]", "").astype("int") df.head() gross = df.Gross.values[20:] movie 
= df.Movie.values[20:] plt.figure() plt.bar(range(len(gross)), gross) plt.xticks(range(len(gross)), movie, rotation=90) plt.tight_layout() plt.savefig("images/matplotlib_bar", bbox_inches="tight", dpi=300) plt.figure() plt.barh(range(len(gross)), gross) plt.yticks(range(len(gross)), movie, fontsize=8) ax = plt.gca() ax.set_frame_on(False) ax.tick_params(length=0) plt.tight_layout() plt.savefig("images/matplotlib_barh", bbox_inches="tight", dpi=300) data1 = np.random.laplace(loc=-2, size=100) data2 = np.random.laplace(loc=5, size=100) data3 = np.random.laplace(scale=6, size=200) data4 = np.random.laplace(loc=-15, scale=.1, size=10) data = np.hstack([data1, data2, data3, data4, [50]]) fig, ax = plt.subplots(1, 3, figsize=(20, 3)) ax[0].hist(data) ax[1].hist(data, bins=100) ax[2].hist(data, bins="auto") plt.savefig("images/matplotlib_histogram.png", bbox_inches="tight", dpi=300) from matplotlib.cbook import get_sample_data f = get_sample_data("axes_grid/bivariate_normal.npy", asfileobj=False) np.set_printoptions(suppress=True, precision=2) arr = np.load(f) arr = np.load("bivariate_normal.npy") fig, ax = plt.subplots(2, 2) im1 = ax[0, 0].imshow(arr) ax[0, 1].imshow(arr, interpolation='bilinear') im3 = ax[1, 0].imshow(arr, cmap='gray') im4 = ax[1, 1].imshow(arr, cmap='bwr', vmin=-1.5, vmax=1.5) plt.colorbar(im1, ax=ax[0, 0]) plt.colorbar(im3, ax=ax[1, 0]) plt.colorbar(im4, ax=ax[1, 1]) plt.savefig("images/matplotlib_heatmap.png", bbox_inches="tight", dpi=300) x1, y1 = 1 / np.random.uniform(-1000, 100, size=(2, 10000)) x2, y2 = np.dot(np.random.uniform(size=(2, 2)), np.random.normal(size=(2, 1000))) x = np.hstack([x1, x2]) y = np.hstack([y1, y2]) plt.figure() plt.xlim(-1, 1) plt.ylim(-1, 1) plt.scatter(x, y) fig, ax = plt.subplots(1, 3, figsize=(10, 4), subplot_kw={'xlim': (-1, 1), 'ylim': (-1, 1)}) ax[0].scatter(x, y) ax[1].scatter(x, y, alpha=.1) ax[2].scatter(x, y, alpha=.01) plt.savefig("images/matplotlib_overplotting.png", bbox_inches="tight", dpi=300) plt.figure() 
plt.hexbin(x, y, bins='log', extent=(-1, 1, -1, 1)) plt.colorbar() plt.axis("off") plt.savefig("images/matplotlib_hexgrid.png", bbox_inches="tight", dpi=300) ``` # Twinx ``` df = pd.DataFrame({'Math PhDs awareded (US)': {'2000': 1050, '2001': 1010, '2002': 919, '2003': 993, '2004': 1076, '2005': 1205, '2006': 1325, '2007': 1393, '2008': 1399, '2009': 1554}, 'Total revenue by arcades (US)': {'2000': 1196000000, '2001': 1176000000, '2002': 1269000000, '2003': 1240000000, '2004': 1307000000, '2005': 1435000000, '2006': 1601000000, '2007': 1654000000, '2008': 1803000000, '2009': 1734000000}}) # could also do df.plot() phds = df['Math PhDs awareded (US)'] revenue = df['Total revenue by arcades (US)'] years = df.index plt.figure() ax = plt.gca() ax.plot(years, phds, label="math PhDs awarded") ax.plot(years, revenue, c='r', label="revenue by arcades") ax.set_ylabel("Math PhDs awarded") ax.set_ylabel("revenue by arcades") ax.legend() plt.savefig("images/matplotlib_twinx1.png", bbox_inches="tight", dpi=300) plt.figure() ax1 = plt.gca() line1, = ax1.plot(years, phds) ax2 = ax1.twinx() line2, = ax2.plot(years, revenue, c='r') ax1.set_ylabel("Math PhDs awarded") ax2.set_ylabel("revenue by arcades") ax2.legend((line1, line2), ("math PhDs awarded", "revenue by arcades")) plt.savefig("images/matplotlib_twinx2.png", bbox_inches="tight", dpi=300) # DONT! # This import registers the 3D projection, but is otherwise unused. from mpl_toolkits.mplot3d import Axes3D # noqa: F401 unused import import matplotlib.pyplot as plt import numpy as np # Fixing random state for reproducibility np.random.seed(19680801) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') x, y = np.random.rand(2, 100) * 4 hist, xedges, yedges = np.histogram2d(x, y, bins=4, range=[[0, 4], [0, 4]]) # Construct arrays for the anchor positions of the 16 bars. 
xpos, ypos = np.meshgrid(xedges[:-1] + 0.25, yedges[:-1] + 0.25, indexing="ij") xpos = xpos.ravel() ypos = ypos.ravel() zpos = 0 # Construct arrays with the dimensions for the 16 bars. dx = dy = 0.5 * np.ones_like(zpos) dz = hist.ravel() ax.bar3d(xpos, ypos, zpos, dx, dy, dz, color='b', zsort='average') plt.savefig("images/3dhist.png", dpi=300) import numpy as np import matplotlib.pyplot as plt # This import registers the 3D projection, but is otherwise unused. from mpl_toolkits.mplot3d import Axes3D # noqa: F401 unused import def lorenz(x, y, z, s=10, r=28, b=2.667): ''' Given: x, y, z: a point of interest in three dimensional space s, r, b: parameters defining the lorenz attractor Returns: x_dot, y_dot, z_dot: values of the lorenz attractor's partial derivatives at the point x, y, z ''' x_dot = s*(y - x) y_dot = r*x - y - x*z z_dot = x*y - b*z return x_dot, y_dot, z_dot dt = 0.01 num_steps = 10000 # Need one more for the initial values xs = np.empty((num_steps + 1,)) ys = np.empty((num_steps + 1,)) zs = np.empty((num_steps + 1,)) # Set initial values xs[0], ys[0], zs[0] = (0., 1., 1.05) # Step through "time", calculating the partial derivatives at the current point # and using them to estimate the next point for i in range(num_steps): x_dot, y_dot, z_dot = lorenz(xs[i], ys[i], zs[i]) xs[i + 1] = xs[i] + (x_dot * dt) ys[i + 1] = ys[i] + (y_dot * dt) zs[i + 1] = zs[i] + (z_dot * dt) # Plot fig = plt.figure() ax = fig.gca(projection='3d') ax.plot(xs, ys, zs, lw=0.5) ax.set_xlabel("X Axis") ax.set_ylabel("Y Axis") ax.set_zlabel("Z Axis") ax.set_title("Lorenz Attractor") plt.savefig("images/lorenz.png", dpi=300) from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt from sklearn.datasets import load_iris iris = load_iris() X, y = iris.data, iris.target fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.scatter(X[:, 1], X[:, 2], X[:, 3], c=y) plt.savefig("images/3dscatter.png", dpi=300) ```
# Numerical Solution of the Wave Equation using the Finite Element Method

This notebook illustrates the numerical time-domain solution of the wave equation using the [Finite Element Method](https://en.wikipedia.org/wiki/Finite_element_method) (FEM). The method aims at an approximate solution by subdividing the domain of interest into smaller parts with simpler geometry, linking these parts together, and applying methods from the calculus of variations to solve the resulting problem numerically. The FEM is a well-established method for the numerical approximation of the solutions of partial differential equations (PDEs). The solutions of PDEs are often known analytically only for rather simple geometries; FEM-based simulations allow one to gain insight into more complex cases.

## Problem Statement

The linear and lossless propagation of sound is governed by the inhomogeneous linear [wave equation](https://en.wikipedia.org/wiki/Wave_equation)

\begin{equation}
\Delta p(\mathbf{x}, t) - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} p(\mathbf{x}, t) = - q(\mathbf{x}, t) ,
\end{equation}

where $p(\mathbf{x}, t)$ denotes the sound pressure at position $\mathbf{x}$, $c$ the speed of sound and $q(\mathbf{x}, t)$ the inhomogeneity. We aim in the following for a numerical solution of the wave equation on the domain $V$ with respect to the homogeneous Dirichlet boundary condition

\begin{equation}
p(\mathbf{x}, t) = 0 \qquad \text{for } \mathbf{x} \in \partial V ,
\end{equation}

or the homogeneous Neumann boundary condition

\begin{equation}
\frac{\partial}{\partial n} p(\mathbf{x}, t) = 0 \qquad \text{for } \mathbf{x} \in \partial V ,
\end{equation}

where $\partial V$ denotes the boundary of $V$.

## Variational Formulation

The FEM is based on expressing the partial differential equation (PDE) to be solved in its [variational](https://en.wikipedia.org/wiki/Calculus_of_variations) or weak form.
The first step towards this is to approximate the second-order temporal derivative in the wave equation by its backward [finite difference](https://en.wikipedia.org/wiki/Finite_difference)

\begin{equation}
\frac{\partial^2}{\partial t^2} p(\mathbf{x}, t) \approx \frac{p(\mathbf{x}, t) - 2 p(\mathbf{x}, t - T) + p(\mathbf{x}, t - 2 T)}{T^2} ,
\end{equation}

where $T$ denotes the temporal step size (i.e. the sampling interval). Introducing this approximation into the wave equation and rearranging terms yields

\begin{equation}
c^2 T^2 \Delta p(\mathbf{x}, t) - p(\mathbf{x}, t) = - 2 p(\mathbf{x}, t - T) + p(\mathbf{x}, t - 2 T) - c^2 T^2 q(\mathbf{x}, t) .
\end{equation}

In order to derive the variational formulation we follow the [procedure outlined for the Helmholtz equation](FEM_Helmholtz_equation_2D.ipynb#Variational-Formulation). Multiplication by the test function $v(\mathbf{x}, t)$, integration over the domain $V$ and application of Green's first identity yields

\begin{equation}
{-} \int_V \left( c^2 T^2 \nabla p(\mathbf{x}, t) \cdot \nabla v(\mathbf{x}, t) + p(\mathbf{x}, t) v(\mathbf{x}, t) \right) \mathrm{d}x = \int_V \left( - 2 p(\mathbf{x}, t - T) + p(\mathbf{x}, t - 2 T) - c^2 T^2 q(\mathbf{x}, t) \right) v(\mathbf{x}, t) \mathrm{d}x ,
\end{equation}

where the fact that $v(\mathbf{x}, t) = 0$ on those parts of $\partial V$ where $p(\mathbf{x}, t)$ is known - for instance due to fixed boundary conditions - was exploited in the case of a pure Dirichlet boundary condition, or $\frac{\partial}{\partial n} p(\mathbf{x}, t) = 0$ on $\partial V$ in the case of a pure Neumann boundary condition.
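As a quick plausibility check of the backward finite difference introduced above, the approximation can be compared against a function with a known second derivative. This is a minimal sketch; the test function and step sizes are chosen purely for illustration:

```python
import numpy as np

# Backward second difference: p''(t) ~ (p(t) - 2 p(t-T) + p(t-2T)) / T^2.
# For p(t) = sin(t) the exact second derivative is -sin(t); the one-sided
# stencil is first-order accurate, so the error shrinks roughly linearly in T.
p = np.sin
t = 1.0
for T in (1e-2, 1e-3, 1e-4):
    approx = (p(t) - 2 * p(t - T) + p(t - 2 * T)) / T**2
    print(T, abs(approx + np.sin(t)))  # error decreases roughly like T
```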
It is common to express the integral equation above in terms of the bilinear form $a(p, v)$ and the linear form $L(v)$

\begin{equation}
a(p, v) = \int_V \left( c^2 T^2 \nabla p(\mathbf{x}, t) \cdot \nabla v(\mathbf{x}, t) + p(\mathbf{x}, t) v(\mathbf{x}, t) \right) \mathrm{d}x ,
\end{equation}

\begin{equation}
L(v) = \int_V \left( 2 p(\mathbf{x}, t - T) - p(\mathbf{x}, t - 2 T) + c^2 T^2 q(\mathbf{x}, t) \right) v(\mathbf{x}, t) \mathrm{d}x ,
\end{equation}

where

\begin{equation}
a(p, v) = L(v) .
\end{equation}

## Numerical Solution

The numerical solution of the variational problem is based on [FEniCS](https://fenicsproject.org/), an open-source framework for the numerical solution of PDEs. Its high-level Python interface `dolfin` is used in the following to define the problem and compute its solution. The implementation is directly based on the variational formulation derived above. It is common in the FEM to denote the solution of the problem by $u$ and the test function by $v$. The definition of the problem in FEniCS is very close to the mathematical formulation.
```
import dolfin
import mshr
import matplotlib.pyplot as plt
%matplotlib inline

T = 1/40000  # temporal sampling interval


def FEM_wave_equation(mesh, T, N, xs, neumann_bc=True, c=343):
    # define function space
    V = dolfin.FunctionSpace(mesh, "CG", 1)

    # define previous and second-last solution
    u1 = dolfin.interpolate(dolfin.Constant(0.0), V)
    u0 = dolfin.interpolate(dolfin.Constant(0.0), V)

    # define boundary conditions
    if neumann_bc:
        bcs = None
    else:
        bcs = dolfin.DirichletBC(V, dolfin.Constant(0.), "on_boundary")

    # define variational problem
    u = dolfin.TrialFunction(V)
    v = dolfin.TestFunction(V)
    a = dolfin.inner(u, v) * dolfin.dx + dolfin.Constant(T**2 * c**2) * dolfin.inner(dolfin.nabla_grad(u), dolfin.nabla_grad(v)) * dolfin.dx
    L = 2*u1*v * dolfin.dx - u0*v * dolfin.dx

    # compute solution for all time-steps
    u = dolfin.Function(V)
    for n in range(N):
        A, b = dolfin.assemble_system(a, L, bcs)

        # define inhomogeneity
        if n == 0:
            delta = dolfin.PointSource(V, xs, 1)
            delta.apply(b)

        # solve variational problem
        dolfin.solve(A, u.vector(), b)

        u0.assign(u1)
        u1.assign(u)

    return u


def plot_soundfield(u):
    '''plots solution of FEM-based simulation'''
    fig = plt.figure(figsize=(10, 10))
    fig = dolfin.plot(u)
    plt.title(r'$p(\mathbf{x}, t)$')
    plt.xlabel(r'$x$ / m')
    plt.ylabel(r'$y$ / m')
    plt.colorbar(fig, fraction=0.038, pad=0.04);
```

### Sound Field in a Rectangular Room with Sound-Hard Boundaries

For a first validation of the FEM-based numerical simulation, the solution of the inhomogeneous two-dimensional wave equation for a point source $q(\mathbf{x}, t) = \delta(\mathbf{x}-\mathbf{x}_s) \cdot \delta(t)$ at position $\mathbf{x}_s = (2,2)$ m is considered, i.e. a Dirac-shaped excitation in the time domain. The simulated geometry is a two-dimensional rectangular room of size $4 \times 4$ meters with sound-hard boundaries (Neumann boundary condition).
Note that the free-field solution of the two-dimensional wave equation for a spatio-temporally Dirac-shaped excitation is given as

\begin{equation}
p(\mathbf{x}, t) = \frac{1}{2 \pi \sqrt{t^2 - (\frac{||\mathbf{x} - \mathbf{x}_s||}{c})^2}} \qquad \text{for } t > \frac{||\mathbf{x} - \mathbf{x}_s||}{c} .
\end{equation}

In order to validate the simulation against free-field propagation, the number of time steps $N=150$ and the sampling interval $T=\frac{1}{40000}$ s are chosen such that the incident wave has not yet been reflected by one of the walls. This corresponds to a simulated time of $t = N \cdot T = 3.75$ ms after excitation by the Dirac impulse.

```
# define geometry and mesh
domain = mshr.Rectangle(dolfin.Point(0, 0), dolfin.Point(4, 4))
mesh = mshr.generate_mesh(domain, 200)

# compute solution
u = FEM_wave_equation(mesh, T, 150, dolfin.Point(2, 2), neumann_bc=True)

# plot sound field
plot_soundfield(u)
plt.grid()
```

The result of the numerical simulation is compared to the theoretical result given above along a line parallel to the x-axis at $y = 2$.

```
import numpy as np

# extract simulation results on line
x = np.linspace(0 + 1E-14, 4 - 1E-14, 101)
points = [(x_, 2) for x_ in x]
ux = [u(point) for point in points]

# compute analytic result on line
px = np.zeros(len(x))
a = (150*T)**2 - (np.sqrt((x-2)**2 + (2-2)**2)/343)**2
px[a > 0] = 0.34 * 1/(2*np.pi * np.sqrt(a[a > 0]))

# plot comparison
plt.figure(figsize=(10, 5))
plt.plot(x, px, 'k--', label='analytic solution')
plt.plot(x, ux, label='numeric solution')
plt.xlabel(r'$x$ / m')
plt.ylabel(r'$p(\mathbf{x}, t)$')
plt.legend()
plt.ylim([0, 50]);
plt.grid()
```

### Sound Field in Two Coupled Rectangular Rooms

In order to illustrate the procedure for a more complex geometry, the sound field in two coupled rectangular rooms with sound-hard boundaries is computed. First, the geometry of the problem is defined and the mesh is plotted with a low number of elements for ease of illustration.
```
# define geometry and compute low resolution mesh for illustration
domain = mshr.Rectangle(dolfin.Point(0, 0), dolfin.Point(3, 4)) + \
    mshr.Rectangle(dolfin.Point(3, 1.5), dolfin.Point(3.5, 2.5)) + \
    mshr.Rectangle(dolfin.Point(3.5, 0), dolfin.Point(6, 4))
mesh = mshr.generate_mesh(domain, 20)

dolfin.plot(mesh);
```

Now the problem is defined and solved with FEniCS on a high-resolution mesh. The source position is chosen as $\mathbf{x}_s = (2,1)$ m and the total number of time steps as $N=300$.

```
# high resolution mesh for FEM simulation
mesh = mshr.generate_mesh(domain, 150)

# compute solution
u = FEM_wave_equation(mesh, T, 300, dolfin.Point(2, 1), neumann_bc=True)

# plot sound field
plot_soundfield(u)
```

**Copyright**

This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT).
# How to Use DeepBugs for Yourself

Follow along with this notebook to reproduce our replication of DeepBugs, tested on the switched-argument bug (i.e., the developer accidentally typed the arguments in reverse order). Or, feel free to just check out the pre-saved output - things can take a while to run. You can also use the functions we provide to deploy DeepBugs in your own code!

## 1. Round up the source code

Start by downloading the 150k JavaScript Dataset using the links below.

* [Training Data - 10.0GB](https://1drv.ms/u/s!AvvT9f1RiwGbh6hYNoymTrzQcNA46g?e=WeJf3K)
* [Testing Data - 4.8GB](https://1drv.ms/u/s!AvvT9f1RiwGbh6hXmjPOUS-kBARjFA?e=AJY1Xf)

Save them into the `demo_data` folder.

## 2. Convert the source code ASTs to tokens

For a given corpus of code, you should have a large list of source files, each of which is converted into an Abstract Syntax Tree (AST). In this example, we convert each AST from the 150k JavaScript Dataset into a list of tokens (e.g., "ID:setInterval" or "LIT:true"). Those lists are aggregated together into a master list of lists. This list-of-lists format is important for training Word2Vec, since each list of tokens corresponds to a single source file - tokens within a source file are closely related, but tokens across source files may not be.

Example:

```
[
    [  # Corresponds to first source file
        "ID:setInterval",
        "LIT:1000",
        "ID:callbackFn",
        "LIT:true",
        "LIT:http-mode",
        ...
    ],
    [  # Corresponds to second source file
        "ID:fadeIn",
        "LIT:300",
        "ID:css",
        "LIT:color:red;margin:auto",
        ...
    ]
]
```

### Note on using our code

If you organize your ASTs into one file, such that each line of the file corresponds to one AST, you can just call our ready-to-go `ast_token_extractor.get_tokens_from_corpus()` function as shown below. If you need more fine-grained control, you could use `ast_token_extractor.get_tokens_from()` to extract tokens from each node in a single AST.
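If you need to roll your own extractor, a stripped-down version might look as follows. This is a sketch, not the project's actual implementation: the node schema (flat node dicts with `type` and optional `value` fields, as in the 150k dataset) and the `tokens_from_ast` name are assumptions for the example.

```python
import json

def tokens_from_ast(ast_nodes):
    """Illustrative sketch: map AST nodes to "ID:"/"LIT:" tokens.
    Assumes each node is a dict with a "type" and optionally a "value"."""
    tokens = []
    for node in ast_nodes:
        value = node.get("value")
        if value is None:
            continue  # structural nodes carry no token
        if node["type"] == "Identifier":
            tokens.append("ID:" + str(value))
        elif node["type"].startswith("Literal"):
            tokens.append("LIT:" + str(value))
    return tokens

# one tiny AST in the assumed flat-JSON format
ast = json.loads('[{"id":0,"type":"ExpressionStatement","children":[1]},'
                 '{"id":1,"type":"Identifier","value":"setInterval"},'
                 '{"id":2,"type":"LiteralNumber","value":"1000"}]')
print(tokens_from_ast(ast))  # -> ['ID:setInterval', 'LIT:1000']
```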
```
from ast_token_extractor import get_tokens_from_corpus

TRAIN_DATA_PATH = "demo_data/150k_training.json"
TEST_DATA_PATH = "demo_data/150k_testing.json"

list_of_lists_of_tokens = get_tokens_from_corpus(TRAIN_DATA_PATH)

# Count the tokens extracted (sum over the token count of each source file)
num_tokens_extracted = sum(len(tokens_from_single_src_file) for tokens_from_single_src_file in list_of_lists_of_tokens)
print("Extracted {0} tokens".format(num_tokens_extracted))
print("A few examples...")
print(list_of_lists_of_tokens[0])
```

## Convert tokens to vectors: train a Word2Vec model

Now that you have reduced your dataset to lists of tokens, you can use them to train a Word2Vec model so that it predicts a vector for each token based on lexical similarity. In other words, a token of `LIT:true` will be lexically similar to a token of `LIT:1` but not `LIT:false`.

We train Word2Vec using the Continuous Bag of Words method with a 200-word window (i.e. for a given token, we use the previous 100 tokens and the following 100 tokens to learn the context of the token). Like the original authors, we limit the vocabulary size to the top 10,000 tokens from the dataset.

### Note on using our code

As long as you have one list of tokens per source file, aggregated into a master list of all source files, you can call our ready-made `token2vectorizer.train_word2vec()` function as shown below.

```
from token2vectorizer import train_word2vec

WORD2VEC_MODEL_SAVE_PATH = "demo_data/word2vec.model"

model = train_word2vec(list_of_lists_of_tokens, WORD2VEC_MODEL_SAVE_PATH)

print("Should be larger difference btwn LIT:true and LIT:false", model.wv.similarity("LIT:true", "LIT:false"))
print("Should be smaller difference btwn LIT:true and LIT:1", model.wv.similarity("LIT:true", "LIT:1"))
```

## Save your token-vector vocabulary for later

To speed things up when you're training and testing DeepBugs, you should save off your learned Word2Vec vocabulary in a dictionary for rapid lookup and sharing.
Our `token2vectorizer.save_token2vec_vocabulary()` handles this for you in a jiffy.

Example output:

```
{
    "LIT:true": [-5.174832 -4.9506106 1.6868128 1.476279 -3.211739 ...],
    ...
}
```

```
import json
from gensim.models import Word2Vec
from token2vectorizer import save_token2vec_vocabulary

WORD2VEC_MODEL_READ_PATH = "demo_data/word2vec.model"
VOCAB_SAVE_PATH = "demo_data/token2vec.json"

model = Word2Vec.load(WORD2VEC_MODEL_READ_PATH)
save_token2vec_vocabulary(model, VOCAB_SAVE_PATH)

with open(VOCAB_SAVE_PATH) as example_json:
    vocab = json.load(example_json)

print("A couple examples...")
print("ID:Date: ", vocab["ID:Date"], "\n")
print("ID:end: ", vocab["ID:end"])
```

## Generate positive/negative examples

In our example, we are testing for the switched-argument bug that the DeepBugs authors tested for, so we generate data by extracting all 2-argument function calls from the 150k dataset and then manually switching the arguments around to make "buggy" examples.

### Note on using our code

Our code is specific to switched-argument bugs. For your own bugs, you will need to write your own code to generate positive and negative training/testing examples. You can follow similar procedures to our `swarg_` scripts.

We save our examples as `.npz` files, where each file is a `Tuple[List, List]`: `(Data, Labels)`.
Both `Data` and `Labels` are numpy arrays of the same length, where `Labels[i]` is 1 for a positive example and 0 for a negative one.

```
import json
from swarg_gen_train_eval import gen_good_bad_fn_args
from swarg_fnargs2tokens import get_all_2_arg_fn_calls_from_ast

VOCAB_READ_PATH = "demo_data/token2vec.json"
SWARG_TRAIN_EXAMPLES_SAVE_PATH = "demo_data/switch_arg_train.npz"
SWARG_TEST_EXAMPLES_SAVE_PATH = "demo_data/switch_arg_test.npz"

gen_good_bad_fn_args(TRAIN_DATA_PATH, VOCAB_READ_PATH, SWARG_TRAIN_EXAMPLES_SAVE_PATH)
gen_good_bad_fn_args(TEST_DATA_PATH, VOCAB_READ_PATH, SWARG_TEST_EXAMPLES_SAVE_PATH)
```

## Train DeepBugs

We use examples generated from the training partition of the 150K JavaScript Dataset.

```
# TODO: Jordan and Abhi's code can just slot in here
```

## Test DeepBugs

We use examples generated from the test partition of the 150K JavaScript Dataset.

```
# TODO: Jordan and Abhi's code can just slot in here
```
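As a sketch of what generating such examples boils down to: for each 2-argument call, the original argument order yields a positive example and the swapped order a negative one. The helper name is hypothetical, and unlike the real `swarg_` scripts we take the argument token vectors as given and assume label 1 marks the original, correct order:

```python
import numpy as np

def make_swarg_examples(two_arg_calls):
    """Hypothetical sketch: build positive (original order) and negative
    (swapped order) switched-argument examples from pairs of argument
    token vectors."""
    data, labels = [], []
    for a1, a2 in two_arg_calls:
        data.append(np.concatenate([a1, a2]))  # original order -> label 1
        labels.append(1)
        data.append(np.concatenate([a2, a1]))  # swapped order  -> label 0
        labels.append(0)
    return np.array(data), np.array(labels)

# toy "token vectors" for a single 2-argument call
calls = [(np.array([1.0, 0.0]), np.array([0.0, 1.0]))]
X, y = make_swarg_examples(calls)
print(X.shape, y)  # one positive and one negative example per call
```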
# Learning to pivot, part 3

## Independence $\neq$ non-significant

This example demonstrates that statistical independence of classifier predictions from the nuisance parameter does not imply that the classifier does not use the nuisance parameter.

Main paper: https://arxiv.org/abs/1611.01046

```
try:
    import mlhep2019
except ModuleNotFoundError:
    import subprocess as sp
    result = sp.run(
        ['pip', 'install', 'git+https://github.com/yandexdataschool/mlhep2019.git'],
        stdout=sp.PIPE, stderr=sp.PIPE
    )
    if result.returncode != 0:
        print(result.stdout.decode('utf-8'))
        print(result.stderr.decode('utf-8'))
    import mlhep2019

%matplotlib inline
import matplotlib.pyplot as plt

from tqdm import tqdm_notebook as tqdm
from IPython import display

import numpy as np

import torch
import torch.utils.data

from mlhep2019.pivot import *

for i in range(torch.cuda.device_count()):
    print(torch.cuda.get_device_name(i))

if torch.cuda.is_available():
    device = torch.device("cuda:0")
else:
    device = "cpu"
    import warnings
    warnings.warn('Using CPU!')
```

## Toy data

```
def get_data(size=1024):
    labels = np.random.binomial(1, 0.5, size=(size, )).astype('float32')

    xs = np.random.uniform(0.1, 0.9, size=(size, ))
    xs = xs + 0.1 * np.sign(xs - 0.5)

    ys = np.where(
        labels > 0.5,
        xs + np.random.uniform(-1, 1, size=(size, )) * (xs - 0.5) ** 2,
        1 - xs + np.random.uniform(-1, 1, size=(size, )) * (xs - 0.5) ** 2,
    )

    data = np.stack([xs, ys], axis=1).astype('float32')

    return data, labels, xs.astype('float32')

data_train, labels_train, nuisance_train = get_data(size=1024)
data_test, labels_test, nuisance_test = get_data(size=128 * 1024)

plt.scatter(data_train[labels_train < 0.5, 0], data_train[labels_train < 0.5, 1], label='class 0')
plt.scatter(data_train[labels_train > 0.5, 0], data_train[labels_train > 0.5, 1], label='class 1')
plt.xlabel('$x_1$', fontsize=14)
plt.ylabel('$x_2$', fontsize=14)
plt.title('Toy data')
plt.legend()
plt.show()
```

## Utility functions

```
xs, ys, grid = make_grid(data_train)

X_train, y_train, z_train = [
    torch.from_numpy(tensor).to(device)
    for tensor in (data_train, labels_train, nuisance_train)
]

X_test, y_test, z_test = [
    torch.from_numpy(tensor).to(device)
    for tensor in (data_test, labels_test, nuisance_test)
]

G = torch.from_numpy(grid).to(device)

dataset_test = torch.utils.data.TensorDataset(X_test, y_test, z_test)
dataloader_test = torch.utils.data.DataLoader(dataset_test, batch_size=1024, shuffle=False)

dataset_grid = torch.utils.data.TensorDataset(G)
dataloader_grid = torch.utils.data.DataLoader(dataset_grid, batch_size=1024, shuffle=False)

def get_predictions(model, loader):
    with torch.no_grad():
        return np.concatenate([
            torch.sigmoid(model(batch[0])).to('cpu').detach().numpy()
            for batch in loader
        ], axis=0)

test_predictions = lambda model: get_predictions(model, dataloader_test)
grid_predictions = lambda model: get_predictions(model, dataloader_grid)
```

## Unmodified classification

Here we define a simple classifier:

```
Input(2 units) -> DenseLayer(64 units) -> DenseLayer(32 units) -> DenseLayer(1 unit)
```

**Note:** we don't use any activation function for the output layer; instead, we use the `BCEWithLogitsLoss` loss, as it is more numerically stable.
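The numerical point can be illustrated without PyTorch: computing the binary cross-entropy by applying the sigmoid first and taking the log afterwards overflows for extreme logits, while the "with logits" formulation $\log(1 + e^{\mp z})$ stays finite. This is a numpy sketch of the idea, not the actual `BCEWithLogitsLoss` implementation:

```python
import numpy as np

z = np.array([-800.0, 0.0, 800.0])  # logits, deliberately extreme
y = np.array([1.0, 1.0, 0.0])       # targets

# naive: sigmoid followed by log -> exp(800) overflows, log(0) gives -inf
with np.errstate(over='ignore', divide='ignore'):
    p = 1 / (1 + np.exp(-z))
    naive = -(y * np.log(p) + (1 - y) * np.log(1 - p))

# stable "with logits" form: -log sigmoid(z) = log(1 + exp(-z)) = logaddexp(0, -z)
stable = y * np.logaddexp(0, -z) + (1 - y) * np.logaddexp(0, z)

print(naive)   # contains inf for the extreme logits
print(stable)  # finite everywhere
```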
```
class Classifier(torch.nn.Module):
    def __init__(self, activation=torch.nn.Softplus()):
        super(Classifier, self).__init__()
        self.layer1 = torch.nn.Linear(2, 64)
        self.layer2 = torch.nn.Linear(64, 32)
        self.head = torch.nn.Linear(32, 1)
        self.activation = activation

    def forward(self, X):
        result = X
        result = self.activation(self.layer1(result))
        result = self.activation(self.layer2(result))
        return torch.flatten(
            self.head(result)
        )

classifier = Classifier().to(device)
loss_fn_classification = torch.nn.BCEWithLogitsLoss()

num_epoches = 128
num_batches = data_train.shape[0] // 32

losses = np.zeros(shape=(num_epoches, num_batches))
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

for i in tqdm(range(num_epoches)):
    for j in range(num_batches):
        optimizer.zero_grad()

        indx = torch.randint(0, data_train.shape[0], size=(32, ))
        X_batch, y_batch = X_train[indx], y_train[indx]

        predictions = classifier(X_batch)
        loss = loss_fn_classification(predictions, y_batch)
        losses[i, j] = loss.item()

        loss.backward()
        optimizer.step()

plot_losses(classifier=losses)
```

## Let's pivot

In order to make the predictions of the classifier independent of the nuisance parameters, an adversary is introduced. The idea is similar to the main principle of GANs - seek the solution that maximizes the minimum of the adversary loss.

If the classifier utilises information about the nuisance parameters to make predictions, then its predictions are dependent on the nuisance parameters. This information most probably comes from dependencies between the nuisance parameters and the training features; therefore, just excluding the nuisance parameters from the training features is typically not enough.

The adversary is trained to predict the nuisance parameters given the output of the classifier. A dependency between the nuisance parameters and the predictions means that the adversary is able to learn it (i.e. achieve a loss lower than that of the best constant prediction).
The maximum of the minimum of the adversary loss is achieved only when there are no dependencies between the predictions and the nuisances.

More formally, the adversary loss is given by:

$$\mathcal{L}_{\mathrm{adv}}(\theta, \psi) = -\mathbb{E}_{x, z} \log P_\psi(z \mid f_\theta(x)) \to_\psi \min;$$

while the classifier is trained to minimize the following loss:

$$\mathcal{L}_{\mathrm{clf}} = \left[-\mathbb{E}_{x, y} \log P_\theta(y \mid x)\right] - \left[ \min_\psi \mathcal{L}_\mathrm{adv}(\theta, \psi)\right] \to_\theta \min;$$

where:
- $f_\theta$ and $P_\theta$ - classifier with parameters $\theta$ and the probability distribution that corresponds to it;
- $P_\psi$ - probability distribution that corresponds to the output of the adversary.

Note the minus sign before the second term in $\mathcal{L}_{\mathrm{clf}}$. The training procedure is similar to that of a GAN.

```
class Adversary(torch.nn.Module):
    def __init__(self, activation=torch.nn.Softplus()):
        super(Adversary, self).__init__()
        self.layer1 = torch.nn.Linear(1, 128)
        self.head = torch.nn.Linear(128, 1)
        self.activation = activation

    def forward(self, X):
        result = X
        result = self.activation(self.layer1(result))
        return torch.squeeze(self.head(result), dim=1)

pivoted_classifier = Classifier().to(device)
adversary = Adversary().to(device)

loss_fn_pivoted_classification = torch.nn.BCEWithLogitsLoss()
loss_fn_adversary = torch.nn.MSELoss()
```

**Warning:** be careful using optimizers with an internal state for adversarial optimization problems ($\max \min$ problems): almost all popular optimizers have an internal state (except for SGD). After performing an optimization step for the generator, the optimization problem for the adversary changes; thus, the previously accumulated internal state might become invalid. This might lead to noticeable oscillations in the learning curves.
Alternatively, it might result in the generator (the classifier in our case) and the adversary going in circles, which looks as if they have converged and is especially difficult to detect; or in collapse of the generator, when an improper internal state of the discriminator optimizer slows its convergence. One can avoid these effects by setting the learning rate of the adversary optimizer to a low enough value and/or training the adversary longer. One can use any optimizer for the generator (the classifier in our case), provided that the adversary has enough time to converge. From practical experience, optimizers that use the $\ell_\infty$ norm (Adamax, AMSGrad, etc.) perform well. Nevertheless, when in doubt, use SGD for the adversary.

```
optimizer_pivoted_classifier = torch.optim.Adam(pivoted_classifier.parameters(), lr=1e-3)
optimizer_adversary = torch.optim.Adamax(adversary.parameters(), lr=1e-3)

num_epoches = 128
num_batches = data_train.shape[0] // 32

losses_clf = np.zeros(shape=(num_epoches, num_batches))
losses_adv = np.zeros(shape=(num_epoches, num_batches))

for i in tqdm(range(num_epoches)):
    for j in range(num_batches):
        ### training adversary
        for k in range(4):
            ### generating batch
            indx = torch.randint(0, data_train.shape[0], size=(32, ))
            X_batch, z_batch = X_train[indx], z_train[indx]

            optimizer_adversary.zero_grad()
            predictions = pivoted_classifier(X_batch)
            nuisance_predictions = adversary(torch.unsqueeze(predictions, dim=1))
            loss_adversary = loss_fn_adversary(nuisance_predictions, z_batch)
            loss_adversary.backward()
            optimizer_adversary.step()

        optimizer_pivoted_classifier.zero_grad()

        ### generating batch
        indx = torch.randint(0, data_train.shape[0], size=(32, ))
        X_batch, y_batch, z_batch = X_train[indx], y_train[indx], z_train[indx]

        ### training classifier
        predictions = pivoted_classifier(X_batch)
        nuisance_predictions = adversary(torch.unsqueeze(predictions, dim=1))

        loss_classifier = loss_fn_pivoted_classification(predictions, y_batch)
        loss_adversary = loss_fn_adversary(nuisance_predictions, z_batch)

        losses_clf[i, j] = loss_classifier.item()
        losses_adv[i, j] = loss_adversary.item()

        joint_loss = loss_classifier - loss_adversary
        joint_loss.backward()
        optimizer_pivoted_classifier.step()

    plot_losses(epoch=i, classifier=losses_clf, adversary=losses_adv)
```

If you look closely, you will see tiny (sometimes not so tiny) oscillations; note how Adamax stops them (or at least tries to). Try a different optimizer (e.g. Adam, Adagrad) or decrease the number of adversary training steps for a more pronounced effect.

### Conditional pivoting

Sometimes it is desirable to make predictions independent from the nuisance parameter within each class. Note that this might still leave some dependency between the nuisance parameter and the overall distribution of predictions. In this case we make the adversary **conditional**, which in practice simply means adding the target labels as an input.

```
class ConditionalAdversary(torch.nn.Module):
    def __init__(self, activation=torch.nn.Softplus()):
        super(ConditionalAdversary, self).__init__()
        self.layer1 = torch.nn.Linear(2, 128)
        self.head = torch.nn.Linear(128, 1)
        self.activation = activation

    def forward(self, X):
        result = X
        result = self.activation(self.layer1(result))
        return torch.squeeze(self.head(result), dim=1)

conditional_pivoted_classifier = Classifier().to(device)
conditional_adversary = ConditionalAdversary().to(device)

loss_fn_conditional_pivoted_classification = torch.nn.BCEWithLogitsLoss()
loss_fn_conditional_adversary = torch.nn.MSELoss()

optimizer_conditional_pivoted_classifier = torch.optim.Adam(
    conditional_pivoted_classifier.parameters(), lr=1e-3
)
optimizer_conditional_adversary = torch.optim.Adam(conditional_adversary.parameters(), lr=1e-3)

num_epoches = 128
num_batches = data_train.shape[0] // 32

losses_clf = np.zeros(shape=(num_epoches, num_batches))
losses_adv = np.zeros(shape=(num_epoches, num_batches))

for i in tqdm(range(num_epoches)):
    for j in range(num_batches):
        ### training adversary
        for k in range(4):
            optimizer_conditional_adversary.zero_grad()
            indx = torch.randint(0, data_train.shape[0], size=(32, ))
            X_batch, y_batch, z_batch = X_train[indx], y_train[indx], z_train[indx]

            predictions = conditional_pivoted_classifier(X_batch)
            nuisance_predictions = conditional_adversary(
                torch.stack([predictions, y_batch], dim=1)
            )
            loss_adversary = loss_fn_conditional_adversary(nuisance_predictions, z_batch)
            loss_adversary.backward()
            optimizer_conditional_adversary.step()

        optimizer_conditional_pivoted_classifier.zero_grad()
        indx = torch.randint(0, data_train.shape[0], size=(32, ))
        X_batch, y_batch, z_batch = X_train[indx], y_train[indx], z_train[indx]

        ### training classifier
        predictions = conditional_pivoted_classifier(X_batch)
        nuisance_predictions = conditional_adversary(
            torch.stack([predictions, y_batch], dim=1)
        )

        loss_classifier = loss_fn_conditional_pivoted_classification(predictions, y_batch)
        loss_adversary = loss_fn_conditional_adversary(nuisance_predictions, z_batch)

        losses_clf[i, j] = loss_classifier.item()
        losses_adv[i, j] = loss_adversary.item()

        joint_loss = loss_classifier - loss_adversary
        joint_loss.backward()
        optimizer_conditional_pivoted_classifier.step()

plot_losses(classifier=losses_clf, adversary=losses_adv)
```

## Results

```
from sklearn.metrics import roc_auc_score, log_loss

cross_entropy = lambda y, p: log_loss(y, p, eps=1e-6)
accuracy = lambda y, p: np.mean(np.where(y > 0.5, 1, 0) == np.where(p > 0.5, 1, 0))

plt.subplots(nrows=1, ncols=3, figsize=(23, 5))

plt.subplot(1, 3, 1)
plt.title('non-pivoted')
draw_response(xs, ys, grid_predictions(classifier), data_train, labels_train)

plt.subplot(1, 3, 2)
plt.title('pivoted, unconditional')
draw_response(xs, ys, grid_predictions(pivoted_classifier), data_train, labels_train)

plt.subplot(1, 3, 3)
plt.title('pivoted, conditional')
draw_response(xs, ys, grid_predictions(conditional_pivoted_classifier), data_train, labels_train)
```

The following figure shows the dependency between predictions and the nuisance parameter:

- each column corresponds to a different model;
- rows correspond to nuisance-parameter bins;
- each plot shows the distribution of model predictions within the corresponding nuisance bin;
- $\mathrm{MI}$ is the (unconditional) mutual information between the nuisance parameter and model predictions;
- $\mathrm{MI}_i$ is the mutual information between the nuisance parameter and model predictions **within** the $i$-th class.

**Note** that the following mutual-information estimates might be unreliable.

```
nuisance_prediction_hist([
        test_predictions(classifier),
        test_predictions(pivoted_classifier),
        test_predictions(conditional_pivoted_classifier)
    ], nuisance_test, labels=labels_test.astype('int'),
    names=['non-pivoted', 'pivoted, unconditional', 'pivoted, conditional']
)
```

Pivoted models tend to show worse (but flat) performance. If a pivoted model shows increased performance in some regions, then most likely the model is biased (i.e. has too low capacity).

```
nuisance_metric_plot([
        test_predictions(classifier),
        test_predictions(pivoted_classifier),
        test_predictions(conditional_pivoted_classifier)
    ], labels_test, nuisance_test,
    metric_fn=accuracy, metric_name='accuracy',
    names=['non-pivoted', 'pivoted', 'conditional-pivoted'],
)

nuisance_metric_plot([
        test_predictions(classifier),
        test_predictions(pivoted_classifier),
        test_predictions(conditional_pivoted_classifier)
    ], labels_test, nuisance_test,
    metric_fn=roc_auc_score, metric_name='ROC AUC',
    names=['non-pivoted', 'pivoted', 'conditional-pivoted'],
)

nuisance_metric_plot([
        test_predictions(classifier),
        test_predictions(pivoted_classifier),
        test_predictions(conditional_pivoted_classifier)
    ], labels_test, nuisance_test,
    metric_fn=cross_entropy, metric_name='cross-entropy', base_level=0.0,
    names=['non-pivoted', 'pivoted', 'conditional-pivoted'],
)
```
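As a rough illustration of how a histogram-based (plug-in) mutual-information estimate can be computed, and why such estimates can be unreliable, here is a minimal self-contained sketch on hypothetical toy data (this is not necessarily the estimator used by the notebook's plotting helpers):

```python
import numpy as np

def binned_mutual_information(x, y, bins=16):
    """Crude plug-in estimate of MI(x; y) in nats from a 2D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()               # empirical joint distribution
    px = pxy.sum(axis=1, keepdims=True)     # marginal of x
    py = pxy.sum(axis=0, keepdims=True)     # marginal of y
    nz = pxy > 0                            # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
z = rng.normal(size=10_000)                        # "nuisance" parameter
dependent = z + 0.1 * rng.normal(size=z.size)      # predictions that leak z
independent = rng.normal(size=z.size)              # "pivoted" predictions

print(binned_mutual_information(z, dependent))     # large
print(binned_mutual_information(z, independent))   # close to zero, but biased above it
```

The bias visible on the independent pair (the estimate is positive even though the true MI is zero) is exactly why the notebook warns that these estimates might be unreliable; it grows with the number of bins and shrinks with the sample size.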
github_jupyter
```
from micromlgen import port
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
import pandas as pd
import numpy as np
import random

# ONLY SO THAT THE COMPUTATIONS ARE ALWAYS THE SAME
random.seed(123)
np.random.seed(123)
np.set_printoptions(precision=7, suppress=True, threshold=5)

data = pd.read_csv('training.csv', sep='\t')
data
```

# READING THE DATA

```
X = data[['Left','Middle','Right','Distance','DistanceLeft','DistanceRight']]
Y = data['Action']
x_train, x_test, y_train, y_test = train_test_split(X, Y)
y_train
```

# TRAINING THE SVM

```
clf = SVC(kernel='linear', gamma=0.01).fit(x_train, y_train)
accuracy_score(y_test, clf.predict(x_test))

clf.predict(x_test)

print(port(clf))

output_file = port(clf) + "\n\nunsigned char svm_action_classes[] = " + str(list(clf.classes_)).replace("[","{").replace("]","}") + ";" + """
using namespace Eloquent::ML::Port;
SVM svm = SVM();

unsigned char svm_predict(float *x) {
    return svm_action_classes[svm.predict(x)];
}
"""
print(output_file)

with open("car_obstacle_train/obstacle_model_svm.h","w") as f:
    f.write(output_file)
```

# TENSORFLOW

```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.utils import to_categorical
print(tf.__version__)

tf.random.set_seed(123456)

from sklearn.preprocessing import LabelEncoder
import numpy as np

code = np.array(Y)

label_encoder = LabelEncoder()
vec = label_encoder.fit_transform(code)
label_encoder.classes_

from tensorflow.keras.utils import to_categorical

y_train_ = to_categorical(label_encoder.transform(y_train))
y_test_ = to_categorical(label_encoder.transform(y_test))

x_train_ = tf.convert_to_tensor(x_train.astype(float).values)
x_test_ = tf.convert_to_tensor(x_test.astype(float).values)

print(x_train_.shape, y_train_.shape)
print(x_test.shape, y_test_.shape)

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(3, input_shape=(6,), activation='relu'),
    tf.keras.layers.Dense(5, activation='softmax')
])

# one-hot multi-class targets -> categorical crossentropy
model.compile(optimizer="adam", loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()

model.fit(x_train_, y_train_, epochs=18)
model.predict([x_test_])

def encode_output(y):
    return [label_encoder.classes_[i] for i in np.argmax(y, axis=1)]

encode_output(model.predict([x_test_]))
```

# WEIGHTS

#### All model weights needed for the export to C++

```
model.weights
```

#### How our model computes each layer

```
# DISPLAY EACH LAYER SEPARATELY
from tensorflow.keras import backend as K

inp = model.input                                          # input placeholder
outputs = [layer.output for layer in model.layers]         # all layer outputs
functors = [K.function([inp], [out]) for out in outputs]   # evaluation functions

layer_outs = [func([x_test_]) for func in functors]
layer_outs
```

### Recomputing our model with numpy

```
weight_1 = model.weights[0].numpy()
bias_1 = model.weights[1].numpy()
weight_2 = model.weights[2].numpy()
bias_2 = model.weights[3].numpy()

def softmax(z):
    assert len(z.shape) == 2
    s = np.max(z, axis=1)
    s = s[:, np.newaxis]  # necessary step to do broadcasting
    e_x = np.exp(z - s)
    div = np.sum(e_x, axis=1)
    div = div[:, np.newaxis]  # ditto
    return e_x / div

def predict_np(x):
    arr = np.matmul(x, weight_1) + bias_1
    arr[arr < 0] = 0  # relu
    print("1 layer:")
    print(arr.shape, arr)
    arr = np.matmul(arr, weight_2) + bias_2
    print("2 layer:")
    print(arr.shape, arr)
    return softmax(arr)

print(x_test_)
predict_np(x_test_)
model.predict(x_test_)
```

#### Is the prediction from numpy identical to the one from tensorflow?

```
np.allclose(model.predict(x_test_), predict_np(x_test_))
```

## Conversion to .cpp

We save our new model to the file car_obstacle_train/model.h

https://repl.it/@alexiej/PoorAdmiredConference#main.cpp

```
## The values for each layer should be identical for the given example
print([x_test_[0]])
predict_np([x_test_[0]])
```

## Output

```shell
1st layer: 24.6334 0 0
2nd layer: -10.0696 11.86 -0.314387 19.1382 -14.3809
Softmax: 2.06497e-13 0.000689978 3.56087e-09 0.99931 2.77044e-15
Argmax: 3 = 'R'
R
```

![image.png](attachment:a15dff28-6588-497a-a07a-6ac4fc346636.png)

```
model.weights
```

![image.png](attachment:bec01c10-5926-478e-babc-5879e1c44952.png)

https://repl.it/@alexiej/PoorAdmiredConference#main.cpp

```cpp
/******************************************************************************
Online C++ Compiler.
Code, Compile, Run and Debug C++ program online.
Write your code in this editor and press "Run" button to compile and execute it.
*******************************************************************************/
#include <math.h>
#include <iostream>
#include <string>

/**
 * @brief Calculates the dot product of two float arrays
 * @param[in] float The first float array
 * @param[in] float The second float array
 * @param[in] int Number of elements in the array
 * @return float The result
 */
float dot(float v[], float u[], int n) {
    float result = 0.0;
    for (int i = 0; i < n; i++)
        result += v[i]*u[i];
    return result;
}

float weight_1[6][3] = {{ 0.69388396, -0.18363768, -0.48691928},
                        {-0.11122542,  0.09409314,  0.48345053},
                        {-0.6351829 ,  0.8122994 ,  0.34991562},
                        { 0.73035145, -0.2926492 , -0.09623587},
                        { 0.42388558,  0.291497  , -0.6941188 },
                        {-0.31986186,  0.42326367, -0.4152213 }};
float bias_1[3] = { -0.05082855, 0. , 0. };

float weight_2[3][5] = {{-0.4100235 ,  0.4793676 , -0.01442659,  0.7790407 , -0.58525103},
                        { 0.06853658,  0.59284264,  0.75364965, -0.42056793, -0.7062055 },
                        { 0.39540118,  0.46259648, -0.2965532 , -0.85764015, -0.45471957}};
float bias_2[5] = { 0.0306683, 0.0515581, 0.0409896, -0.052255 , 0.035845 };

void calculate_layer(float input[], float output[], float *weights, float *bias,
                     int dim_input, int dim_output) {
    for (int i = 0; i < dim_output; i++) {
        float result = 0.0;
        for (int k = 0; k < dim_input; k++) {
            result += input[k] * weights[k*dim_output + i];
        }
        output[i] = result + bias[i];
    }
    return;
}

void relu(float input[], int dim_input) {
    for (int i = 0; i < dim_input; i++) {
        input[i] = (input[i] < 0) ? 0 : input[i];
    }
}

void softmax(float input[], int dim_input) {
    float maxv = input[0];
    for (int i = 1; i < dim_input; i++) {
        maxv = (input[i] > maxv) ? input[i] : maxv;
    }
    // Serial.print("MAX: "); Serial.print(maxv);
    for (int i = 0; i < dim_input; i++) {
        input[i] = exp(input[i]-maxv);
    }
    float divv = 0;
    for (int i = 0; i < dim_input; i++) {
        divv += input[i];
    }
    // Serial.print(" DIV: "); Serial.print(divv);
    for (int i = 0; i < dim_input; i++) {
        input[i] = input[i]/divv;
    }
}

float layer1_output[3];
float output[5];
unsigned char decoder[5] = { '*', 'D', 'L', 'R', 'U' };

int argmax(float input[], int dim) {
    int max_i = 0;
    float max_v = input[0];
    for (int i = 1; i < dim; i++) {
        if (input[i] > max_v) {
            max_i = i;
            max_v = input[i];
        }
    }
    return max_i;
}

unsigned char model_predict(float *input) {
    // Serial.println(); Serial.print("INPUT: ");
    // for(int i=0;i<5;i++) {
    //     Serial.print(input[i]); Serial.print(", ");
    // }
    calculate_layer(input, layer1_output, &weight_1[0][0], &bias_1[0], 6, 3);
    relu(layer1_output, 3);
    calculate_layer(layer1_output, output, &weight_2[0][0], &bias_2[0], 3, 5);
    // Serial.println(); Serial.print("MODEL: ");
    // for(int i=0;i<5;i++) {
    //     Serial.print(output[i]); Serial.print(", ");
    // }
    // softmax(output,5);
    // Serial.print(" = "); Serial.print(argmax(output,5));
    return decoder[argmax(output,5)];
}

int main() {
    float input[6] = { 1. , 1., 0., 33., 0., 0. };
    // float input[6] = {1.00 ,1.00, 1.00, 100.00, 00 , 00 };

    calculate_layer(input, layer1_output, &weight_1[0][0], &bias_1[0], 6, 3);
    relu(layer1_output, 3);
    std::cout << "1st layer: ";
    for (int i = 0; i < 3; i++) { std::cout << layer1_output[i] << " "; }
    std::cout << "\n";

    calculate_layer(layer1_output, output, &weight_2[0][0], &bias_2[0], 3, 5);
    std::cout << "2nd layer: ";
    for (int i = 0; i < 5; i++) { std::cout << output[i] << " "; }
    std::cout << "\n";

    softmax(output, 5);
    std::cout << "Softmax: ";
    for (int i = 0; i < 5; i++) { std::cout << output[i] << " "; }
    std::cout << "\n";

    std::cout << "Argmax: " << argmax(output,5) << " = '" << decoder[argmax(output,5)] << "'\n";
    std::cout << model_predict(input) << "\n";
}
```

## Output

```shell
1st layer: 24.6334 0 0
2nd layer: -10.0696 11.86 -0.314387 19.1382 -14.3809
Softmax: 2.06497e-13 0.000689978 3.56087e-09 0.99931 2.77044e-15
Argmax: 3 = 'R'
R
```
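As a side note, the flat-array indexing `weights[k*dim_output + i]` in `calculate_layer` matches numpy's row-major memory layout, so the C++ loop reproduces `x @ W + b` exactly. A minimal Python cross-check using the first-layer weights listed above:

```python
import numpy as np

# same first-layer weights and bias as in the C++ code above
weight_1 = np.array([[ 0.69388396, -0.18363768, -0.48691928],
                     [-0.11122542,  0.09409314,  0.48345053],
                     [-0.6351829 ,  0.8122994 ,  0.34991562],
                     [ 0.73035145, -0.2926492 , -0.09623587],
                     [ 0.42388558,  0.291497  , -0.6941188 ],
                     [-0.31986186,  0.42326367, -0.4152213 ]], dtype=np.float32)
bias_1 = np.array([-0.05082855, 0., 0.], dtype=np.float32)

def calculate_layer(inp, weights_flat, bias, dim_input, dim_output):
    # mirrors the C++ loop: weights_flat is the row-major flattened matrix
    out = np.empty(dim_output, dtype=np.float32)
    for i in range(dim_output):
        out[i] = sum(inp[k] * weights_flat[k * dim_output + i]
                     for k in range(dim_input)) + bias[i]
    return out

x = np.array([1., 1., 0., 33., 0., 0.], dtype=np.float32)
manual = calculate_layer(x, weight_1.ravel(), bias_1, 6, 3)
vectorized = x @ weight_1 + bias_1
print(np.allclose(manual, vectorized))  # True
```

The first component of `manual` comes out as 24.6334, matching the `1st layer` line of the output above (the other two components are negative and get clipped to zero by the relu).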
# Distribution of insolation

**Note: this should be updated to take advantage of the new xarray capabilities of the `daily_insolation` code.**

Here are some examples calculating daily average insolation at different locations and times.

These all use a function called `daily_insolation` in the module `insolation.py` to do the calculation. The code calculates daily average insolation anywhere on Earth at any time of year for a given set of orbital parameters.

To look at past orbital variations and their effects on insolation, we use the module `orbital.py`, which accesses tables of values for the past 5 million years. We can easily look up parameters for any point in the past and pass these to `daily_insolation`.

```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from climlab import constants as const
from climlab.solar.insolation import daily_insolation
```

## Present-day orbital parameters

Calculate an array of insolation over the year and all latitudes (for present-day orbital parameters).

```
lat = np.linspace(-90., 90., 500)
days = np.linspace(0, const.days_per_year, 365)
Q = daily_insolation(lat, days)
```

And make a contour plot of Q as a function of latitude and time of year.

```
ax = plt.figure(figsize=(10,8)).add_subplot(111)
CS = ax.contour(days, lat, Q, levels=np.arange(0., 600., 50.))
ax.clabel(CS, CS.levels, inline=True, fmt='%r', fontsize=10)
ax.set_xlabel('Days since January 1', fontsize=16)
ax.set_ylabel('Latitude', fontsize=16)
ax.set_title('Daily average insolation', fontsize=24)
ax.contourf(days, lat, Q, levels=[-500., 0.])
plt.show()
```

Take the area-weighted global, annual average of Q...

```
print(np.sum(np.mean(Q, axis=1) * np.cos(np.deg2rad(lat))) / np.sum(np.cos(np.deg2rad(lat))))
```

Also plot the zonally averaged insolation at a few different times of the year:

```
summer_solstice = 170
winter_solstice = 353

ax = plt.figure(figsize=(10,8)).add_subplot(111)
ax.plot(lat, Q[:, (summer_solstice, winter_solstice)])
ax.plot(lat, np.mean(Q, axis=1), linewidth=2)
ax.set_xbound(-90, 90)
ax.set_xticks(range(-90, 100, 30))
ax.set_xlabel('Latitude', fontsize=16)
ax.set_ylabel('Insolation (W m$^{-2}$)', fontsize=16)
ax.grid()
plt.show()
```

## Past orbital parameters

The `orbital.py` code allows us to look up the orbital parameters for Earth over the last 5 million years. Make reference plots of the variation in the three orbital parameters over the last 1 million years.

```
from climlab.solar.orbital import OrbitalTable

kyears = np.arange(-1000., 1.)
#table = OrbitalTable()
orb = OrbitalTable.interp(kyear=kyears)
orb
```

The `xarray` object `orb` now holds 1 million years worth of orbital data, a total of 1001 data points for each element: eccentricity `ecc`, obliquity angle `obliquity`, and solar longitude of perihelion `long_peri`.

```
fig = plt.figure(figsize=(10,10))

ax1 = fig.add_subplot(3,1,1)
ax1.plot(kyears, orb['ecc'])
ax1.set_title('Eccentricity $e$', fontsize=18)

ax2 = fig.add_subplot(3,1,2)
ax2.plot(kyears, orb['ecc'] * np.sin(np.deg2rad(orb['long_peri'])))
ax2.set_title('Precessional parameter $e \sin(\Lambda)$', fontsize=18)

ax3 = fig.add_subplot(3,1,3)
ax3.plot(kyears, orb['obliquity'])
ax3.set_title('Obliquity (axial tilt) $\Phi$', fontsize=18)
ax3.set_xlabel('Thousands of years before present', fontsize=14)

plt.show()
```

### Annual mean insolation

Create a large array of insolation over the whole globe, whole year, and for every set of orbital parameters.

```
lat = np.linspace(-90, 90, 181)
days = np.linspace(1., 50.) / 50 * const.days_per_year
Q = daily_insolation(lat, days, orb)
print(Q.shape)

Qann = np.mean(Q, axis=1)  # time average over the year
print(Qann.shape)

Qglobal = np.empty_like(kyears)
for n in range(kyears.size):  # global area-weighted average
    Qglobal[n] = (np.sum(Qann[:, n] * np.cos(np.deg2rad(lat))) /
                  np.sum(np.cos(np.deg2rad(lat))))
print(Qglobal.shape)
```

We are going to create a figure showing past time variations in three quantities:

1. Global, annual mean insolation
2. Annual mean insolation at high northern latitudes
3. Summer solstice insolation at high northern latitudes

```
fig = plt.figure(figsize=(10,14))

ax1 = fig.add_subplot(3,1,1)
ax1.plot(kyears, Qglobal)
ax1.set_title('Global, annual mean insolation', fontsize=18)
ax1.ticklabel_format(useOffset=False)

ax2 = fig.add_subplot(3,1,2)
ax2.plot(kyears, Qann[160,:])
ax2.set_title('Annual mean insolation at 70N', fontsize=18)

ax3 = fig.add_subplot(3,1,3)
ax3.plot(kyears, Q[160,23,:])
ax3.set_title('Summer solstice insolation at 70N', fontsize=18)

plt.show()
```

And comparing with the plots of orbital variations above, we see that

1. Global annual mean insolation varies with eccentricity (slow), and the variations are very small!
2. Annual mean insolation varies with obliquity (medium). Annual mean insolation does NOT depend on precession!
3. Summer solstice insolation at high northern latitudes is affected by both precession and obliquity. The variations are large.

### Insolation changes between the Last Glacial Maximum and the end of the last ice age

The Last Glacial Maximum or "LGM" occurred around 23,000 years before present, when the ice sheets were at their greatest extent. By 10,000 years ago, the ice sheets were mostly gone and the last ice age was over. Let's plot the changes in the seasonal distribution of insolation from 23 kyrs to 10 kyrs.

```
orb_0 = OrbitalTable.interp(kyear=0)     # present-day orbital parameters
orb_10 = OrbitalTable.interp(kyear=-10)  # orbital parameters for 10 kyrs before present
orb_23 = OrbitalTable.interp(kyear=-23)  # 23 kyrs before present

# insolation arrays for each of the three sets of orbital parameters
Q_0 = daily_insolation(lat, days, orb_0)
Q_10 = daily_insolation(lat, days, orb_10)
Q_23 = daily_insolation(lat, days, orb_23)

fig = plt.figure(figsize=(20,8))

ax1 = fig.add_subplot(1,2,1)
Qdiff = Q_10 - Q_23
CS1 = ax1.contour(days, lat, Qdiff, levels=np.arange(-100., 100., 10.))
ax1.clabel(CS1, CS1.levels, inline=True, fmt='%r', fontsize=10)
ax1.contour(days, lat, Qdiff, levels=[0.], colors='k')
ax1.set_xlabel('Days since January 1', fontsize=16)
ax1.set_ylabel('Latitude', fontsize=16)
ax1.set_title('Insolation differences: 10 kyrs - 23 kyrs', fontsize=24)

ax2 = fig.add_subplot(1,2,2)
ax2.plot(np.mean(Qdiff, axis=1), lat)
ax2.set_xlabel('W m$^{-2}$', fontsize=16)
ax2.set_ylabel('Latitude', fontsize=16)
ax2.set_title('Annual mean differences', fontsize=24)
ax2.set_ylim((-90,90))
ax2.grid()

plt.show()
```

The annual mean plot shows a classic obliquity signal: at 10 kyrs, the axis was close to its maximum tilt, around 24.2º. At 23 kyrs, the tilt was much weaker, only about 22.7º. In the annual mean, a stronger tilt means more sunlight to the poles and less to the equator. This is very helpful if you are trying to melt an ice sheet.

Finally, take the area-weighted global average of the difference:

```
print(np.average(np.mean(Qdiff, axis=1), weights=np.cos(np.deg2rad(lat))))
```

This confirms that the difference is tiny (and due to very small changes in the eccentricity). **Ice ages are driven by seasonal and latitudinal redistributions of solar energy**, NOT by changes in the total global amount of solar energy!
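As a small aside, the two area-weighting idioms used in this notebook, the explicit `np.sum(...) / np.sum(...)` ratio and `np.average(..., weights=...)`, are the same computation. A quick check on a synthetic zonal-mean profile (the profile itself is made up for illustration):

```python
import numpy as np

lat = np.linspace(-90, 90, 181)
weights = np.cos(np.deg2rad(lat))  # grid-cell area shrinks as cos(latitude)

# hypothetical zonal-mean field: warm equator, cool poles
q = 300.0 - 200.0 * np.sin(np.deg2rad(lat))**2

explicit = np.sum(q * weights) / np.sum(weights)
shorthand = np.average(q, weights=weights)
print(np.allclose(explicit, shorthand))  # True
```

The cosine weighting matters: an unweighted mean of `q` would overweight the polar rows, which represent far less surface area than the equatorial ones.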
```
import csv
import os
import glob
import multiprocessing as mp
import pandas as pd
import numpy as np
pd.options.mode.chained_assignment = None

%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')

from scipy.interpolate import interp1d
pd.set_option('display.float_format', lambda x: '%.3f' % x)

def tf(x):
    return np.log10(x)

import pickle as pkl

def temp_reduce(x):
    return x - 1

def decader(x):
    return x - x % 10

from sklearn.preprocessing import Normalizer

fourgrampath = "4grams/"
fourgram_files = glob.glob(os.path.join(fourgrampath, "*.txt"))

from itertools import groupby
from operator import itemgetter
import pickle as pkl
import re

def patternmaker(x):
    x = np.array(x.notnull())
    x = x.astype(int)
    #print(x)
    val = ''.join(map(str, x))
    #print(val)
    return val

from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()

from sklearn.decomposition import PCA, TruncatedSVD, NMF

modifier_list = pkl.load(open("modifier_list.p", "rb"))
head_list = pkl.load(open("head_list.p", "rb"))
words = list(set(modifier_list).union(head_list))
print(len(words))

def lemma_maker(x, y):
    #print(x,y)
    return lemmatizer.lemmatize(x, y)

br_to_us = pd.read_excel("Book.xlsx")
br_to_us_dict = dict(zip(br_to_us.UK.tolist(), br_to_us.US.tolist()))

pos_dict = dict(zip(
    ['i', 'c', 'p', 't', 'm', 'r', 'd', 'j', 'f', 'x', 'e', 'u', 'b'],
    ['n', 'n', 'n', 'n', 'n', 'n', 'n', 'a', 'n', 'n', 'n', 'n', 'n']
))
replacements = {
    'r1_PoS': pos_dict, 'r2_PoS': pos_dict, 'l1_PoS': pos_dict, 'l2_PoS': pos_dict,
    'r1': br_to_us_dict, 'r2': br_to_us_dict, 'l1': br_to_us_dict, 'l2': br_to_us_dict,
    'modifier': br_to_us_dict, 'head': br_to_us_dict
}

leftgram = pd.concat(
    (pd.read_csv(f, header=None, encoding="cp1252", delim_whitespace=True) for f in fourgram_files),
    ignore_index=True
)
leftgram.columns = ['freq', 'modifier', 'head', 'r1', 'r2', 'mod_pos', 'head_pos', 'r1_PoS', 'r2_PoS', 'decade']

modifier_noun_tags = ["nn", "nn1", "nn2"]
head_noun_tags = ["nn", "nn1", "nn2"]

leftgram = leftgram[leftgram.mod_pos.isin(modifier_noun_tags) & leftgram.head_pos.isin(head_noun_tags)]
#leftgram = leftgram[~leftgram['decade'].isin([19,20])]
leftgram = leftgram[~leftgram['r1_PoS'].isin(head_noun_tags)]

leftgram['modifier'] = leftgram['modifier'].str.lower()
leftgram['head'] = leftgram['head'].str.lower()
leftgram['r1'] = leftgram['r1'].str.lower()
leftgram['r2'] = leftgram['r2'].str.lower()

leftgram['r1_PoS'] = leftgram.r1_PoS.str[0]
leftgram['r2_PoS'] = leftgram.r2_PoS.str[0]
leftgram['mod_pos'] = leftgram.mod_pos.str[0]
leftgram['head_pos'] = leftgram.head_pos.str[0]

leftgram.replace(replacements, inplace=True)

leftgram['modifier'] = np.vectorize(lemma_maker)(leftgram['modifier'], leftgram['mod_pos'])
leftgram['head'] = np.vectorize(lemma_maker)(leftgram['head'], leftgram['head_pos'])
leftgram.dropna(inplace=True)
leftgram['r1'] = np.vectorize(lemma_maker)(leftgram['r1'], leftgram['r1_PoS'])
leftgram['r2'] = np.vectorize(lemma_maker)(leftgram['r2'], leftgram['r2_PoS'])

leftgram['modifier'] = leftgram['modifier'] + "_n"
leftgram['head'] = leftgram['head'] + "_n"
leftgram['r1'] = leftgram['r1'] + "_" + leftgram['r1_PoS']
leftgram['r2'] = leftgram['r2'] + "_" + leftgram['r2_PoS']

leftgram.drop(['mod_pos', 'head_pos', "r1_PoS", "r2_PoS"], axis=1, inplace=True)
leftgram = leftgram.groupby(['modifier', 'head', 'r1', 'r2', 'decade'])['freq'].sum().to_frame()
display(leftgram.shape)
leftgram.head(10)

compound_left_counts = pd.melt(
    leftgram.reset_index(),
    id_vars=['modifier', 'head', 'decade', 'freq'],
    value_vars=['r1', 'r2']
)
compound_left = pd.pivot_table(
    compound_left_counts,
    index=['modifier', 'head', 'decade'],
    columns='value', values='freq', aggfunc=np.sum
)
display(compound_left.shape)
compound_left.head(10)

rightgram = pd.concat(
    (pd.read_csv(f, header=None, encoding="cp1252", delim_whitespace=True) for f in fourgram_files),
    ignore_index=True
)
rightgram.columns = ['freq', 'l1', 'l2', 'modifier', 'head', 'l1_PoS', 'l2_PoS', 'mod_pos', 'head_pos', 'decade']

rightgram = rightgram[rightgram.mod_pos.isin(modifier_noun_tags) & rightgram.head_pos.isin(head_noun_tags)]
#rightgram = rightgram[rightgram['decade'].isin([19,20])]
rightgram = rightgram[~rightgram['l2_PoS'].isin(head_noun_tags)]

rightgram['modifier'] = rightgram['modifier'].str.lower()
rightgram['head'] = rightgram['head'].str.lower()
rightgram['l1'] = rightgram['l1'].str.lower()
rightgram['l2'] = rightgram['l2'].str.lower()

rightgram['l1_PoS'] = rightgram.l1_PoS.str[0]
rightgram['l2_PoS'] = rightgram.l2_PoS.str[0]
rightgram['mod_pos'] = rightgram.mod_pos.str[0]
rightgram['head_pos'] = rightgram.head_pos.str[0]

rightgram.replace(replacements, inplace=True)

rightgram['modifier'] = np.vectorize(lemma_maker)(rightgram['modifier'], rightgram['mod_pos'])
rightgram['head'] = np.vectorize(lemma_maker)(rightgram['head'], rightgram['head_pos'])
rightgram.dropna(inplace=True)
rightgram['l1'] = np.vectorize(lemma_maker)(rightgram['l1'], rightgram['l1_PoS'])
rightgram['l2'] = np.vectorize(lemma_maker)(rightgram['l2'], rightgram['l2_PoS'])

rightgram['modifier'] = rightgram['modifier'] + "_n"
rightgram['head'] = rightgram['head'] + "_n"
rightgram['l1'] = rightgram['l1'] + "_" + rightgram['l1_PoS']
rightgram['l2'] = rightgram['l2'] + "_" + rightgram['l2_PoS']

rightgram.drop(['mod_pos', 'head_pos', "l1_PoS", "l2_PoS"], axis=1, inplace=True)
rightgram = rightgram.groupby(['modifier', 'head', 'l1', 'l2', 'decade'])['freq'].sum().to_frame()
display(rightgram.shape)
rightgram.head(10)

compound_right_counts = pd.melt(
    rightgram.reset_index(),
    id_vars=['modifier', 'head', 'decade', 'freq'],
    value_vars=['l1', 'l2']
)
compound_right = pd.pivot_table(
    compound_right_counts,
    index=['modifier', 'head', 'decade'],
    columns='value', values='freq', aggfunc=np.sum
)
display(compound_right.shape)
compound_right.head(10)

compounds = pd.concat([compound_left_counts, compound_right_counts])
compounds = pd.pivot_table(
    compounds,
    index=['modifier', 'head', 'decade'],
    columns='value', values='freq', aggfunc=np.sum
)
display(compounds.shape)
compounds.head(10)

compounds_list = list(set(compounds.reset_index(level="decade").index.tolist()))
len(compounds_list)

pkl.dump(compounds_list, open("compounds.p", "wb"))
```
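The melt-then-pivot pattern used repeatedly above (collapse the context columns into one, then sum frequencies per context word) can be seen on a tiny hypothetical frame:

```python
import pandas as pd
import numpy as np

# hypothetical miniature of the 4-gram frequency table
df = pd.DataFrame({
    'modifier': ['coal_n', 'coal_n'],
    'head':     ['mine_n', 'mine_n'],
    'decade':   [185, 185],
    'freq':     [3, 5],
    'r1':       ['near_i', 'owner_n'],
    'r2':       ['town_n', 'near_i'],
})

# melt: one row per (compound, context-slot) pair
long = pd.melt(df, id_vars=['modifier', 'head', 'decade', 'freq'], value_vars=['r1', 'r2'])

# pivot: sum frequencies per context word, regardless of which slot it came from
ctx = pd.pivot_table(long, index=['modifier', 'head', 'decade'],
                     columns='value', values='freq', aggfunc=np.sum)
print(ctx)
```

Here `near_i` appears once in the `r1` slot (freq 3) and once in the `r2` slot (freq 5), so its column in `ctx` sums to 8, while `owner_n` and `town_n` keep their single-slot counts of 5 and 3.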
# Completely optional ... but fun!

#### A geek-out about Pandas expanding rolling windows follows (a.k.a. `"Let's measure the Earth!!"`)

Rolling windows are cool, especially because they forget the far past and keep only the recent data "in mind" when performing operations.

There are [many types of rolling window](https://docs.scipy.org/doc/scipy/reference/signal.html#window-functions), which fall outside the scope of the Academy. I do however want to mention the expanding rolling window, as it is crazy cool!

_(Confession bear: This is not technically timeseries, but just about the rolling windows of Pandas.)_

![Al Biruni](https://www.thefamouspeople.com/profiles/images/ab-rayn-al-brn-1.jpg)

Let's say you are [Al-Biruni, and you are trying to calculate the radius of the earth in the 11th century](https://www.quora.com/When-and-how-did-scientists-measure-the-radius-of-the-earth) (for argument's sake). You take measurements using his rudimentary yet brilliant approach, but your instrument is not that precise. Our objective is to be wrong by at most 10 Km.

Let's go measure the earth!

```
import pandas as pd
from matplotlib import pyplot as plt
import numpy as np
import utils
import matplotlib
np.random.seed(1000)
%matplotlib inline

our_precision = .03
first_try = utils.measure_the_earth(our_precision, verbose=True);
```

Uff... ok, let's try again

```
second_try = utils.measure_the_earth(our_precision, verbose=True);
```

Ok, maybe third time is the charm...

```
third_try = utils.measure_the_earth(our_precision, verbose=True);
```

Oh boy... well, we know we can average stuff out... maybe that will help?

```
mean_measure = np.mean([first_try, second_try, third_try])
utils.measure_error(mean_measure, corect_measure=6371)
```

So... how many measurements do we need to get to our 10 Km mark? This is where expanding rolling windows come into play: when you are measuring something which you know is a constant, and yet you have a sequence of measures.

In this case, your first measure is in no way inferior to your most recent measure; all of them are equally useful.

```
measurements = pd.Series([utils.measure_the_earth(our_precision) for i in range(1000)])
```

Let's use an expanding window to see how our mean evolves with the number of experiments:

```
series_of_measurements = measurements.expanding().mean()
```

The number we are looking for is 6371 Km.

```
series_of_measurements.head()
```

And as a plot:

```
utils.plot_number_of_tries(series_of_measurements)
```

So as a summary, expanding rolling windows are super-useful when we are measuring something we know to be a constant, and we have a sequence of measures. So... cool!
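For readers without the course's `utils` module, here is a self-contained sketch of the same experiment; the `noisy_measurement` helper and its noise model below are stand-ins, not the actual `utils.measure_the_earth` implementation:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
TRUE_RADIUS_KM = 6371.0

def noisy_measurement(precision=0.03):
    # a crude instrument: multiplicative Gaussian noise around the true value
    return TRUE_RADIUS_KM * (1 + precision * rng.normal())

measurements = pd.Series([noisy_measurement() for _ in range(1000)])

# expanding window: the mean of ALL measurements seen so far, at every step
running_mean = measurements.expanding().mean()

# the error of the running mean shrinks roughly like precision / sqrt(n)
print(abs(running_mean.iloc[0] - TRUE_RADIUS_KM))
print(abs(running_mean.iloc[-1] - TRUE_RADIUS_KM))
```

The first element of the expanding mean is just the first measurement itself; by the thousandth element, the standard error has dropped by a factor of about `sqrt(1000) ≈ 32`, which is what gets us under the 10 Km target.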
# TOOLS_ELA

*tools_ela.py* is a command line tool that extracts information from original transcribed text. The provided information is:

1. Indexes of Words, Lemmas, Types
2. Frequencies of Lemmas, Types
3. Concordances
4. TTR
5. Collocations (for both words and lemmas)
6. N-grams
7. Min, Max and Mean lengths of Types
8. POS Tagging (both Bayesian and HMM - Hidden Markov Model)
9. TEI attributes (on TEI files)
10. TEI entity lists (on TEI files)

The tool can handle both *XML-TEI encoded* and pure-text files. By default it handles *XML-TEI*, but a command line argument can instruct the tool to treat files as text.

It is a multiplatform tool, actively tested on both Linux and Windows. It works in the current Python *virtual environment*, the same as this Jupyter Lab session.

In the same directory as *tools_ela.py* a configuration file is required, namely *tools_ela.cfg*, that specifies the working directories. The default configuration is

```
[Paths]
base = .
origin_subdir = SOURCE/ela_txt
result_subdir = RESULT/tools_ela
database = tools_ela.db
```

Apart from the database, it means that input files are expected to be in `SOURCE/ela_txt` and the results will be written in `RESULT/tools_ela`; the source directory is considered to be *flat*, thus no subdirectories are traversed. Results are stored in a flat directory as well.
Result files keep their base prefix and have suffixes such as the following:

* `BASEPREFIX_collocations.json`: word collocations
* `BASEPREFIX_lemma_collocations.json`: lemma collocations (latin only)
* `BASEPREFIX_ngrams.json`: ngrams (at various levels)
* `BASEPREFIX_concordances.txt`: concordances (*note: these are text lines*)
* `BASEPREFIX_statistics.json`: statistics on the latin part of text
* `BASEPREFIX_fulltext_statistics.json`: statistics on all text for all languages
* `BASEPREFIX_tei_attrs.json`: data from the XML-TEI header
* `BASEPREFIX_tei_lists.json`: lists from XML-TEI encoding (*persName*, *geogName*, *placeName*)

All results, except for *concordances*, are in JSON format.

**Note:** *tools_ela.py* requires CLTK to be installed, and the `latin_models_cltk` corpus to be loaded. This can be performed from a Python session as follows:

```
>>> from cltk.corpus.utils.importer import CorpusImporter
>>> corpus_importer = CorpusImporter('latin')
>>> corpus_importer.import_corpus('latin_models_cltk')
```

otherwise the tool will exit with a warning.

Also, in order to produce data related to *TEI lists* (namely places and people), the *tools_ela.db* has to be present, created from updated dumps of the *Pleiades* and *Geonames* databases. This can be done using the *nbk_dbload* notebook.

The following are the command line parameters:

```
!python tools_ela.py --help
```

To invoke the tool for a specific file, the `BASE_FILENAME` attribute has to be explicitly specified, *without* extension: the extension will be chosen by *tools_ela.py* between *.txt* and *.xml* depending on whether `--assume-text` has been provided or not. To process all files in a directory, just specify `--all`.

The `--full-monty` switch is a utility that generates most useful information (actually: everything but *POS tagging*) without having to specify all related command line switches.
Thus, for instance, to regenerate POS tagging for *Epistola.xml* in the source directory, issue the following:

```
$ python tools_ela.py -v -p -M -B Epistola
```

and to generate most useful information the following command can be launched, as we will do below:

```
$ python tools_ela.py --full-monty --verbose -a
```

**Note:** For optimal results this tool should be used on XML-TEI files processed by *RETAG*.

The following cell invokes *tools_ela.py* from the command line, in the above form.

```
!python tools_ela.py --full-monty --verbose -a
```

Now the `RESULT/tools_ela` directory contains all the requested information.
```
import sys
sys.path.append('../../nucleon_elastic_FF/scripts/area51_files')
sys.path.append('../../nucleon_elastic_FF/scripts')
import sources
import utils

from lattedb.wavefunction.models import Hadron as wavefunction_Hadron
from lattedb.wavefunction.models import Hadron4D as wavefunction_Hadron4D
from lattedb.fermionaction.models import Hisq as fermionaction_Hisq
from lattedb.gaugeaction.models import LuescherWeisz as gaugeaction_LuescherWeisz
from lattedb.fermionaction.models import MobiusDW as fermionaction_MobiusDW
from lattedb.gaugeconfig.models import Nf211 as gaugeconfig_Nf211
from lattedb.quarksmear.models import Point as quarksmear_Point
from lattedb.linksmear.models import WilsonFlow as linksmear_WilsonFlow
from lattedb.propagator.models import BaryonCoherentSeq as propagator_BaryonCoherentSeq
from lattedb.propagator.models import OneToAll as propagator_OneToAll

# change parameters starting here
short_tag = "a09m310"
stream = "e"

gaugeconfigs = gaugeconfig_Nf211.objects.filter(short_tag=short_tag, stream=stream)

import a09m310 as a51
p = a51.mpirun_params("summit")
p["STREAM"] = stream
cfgs_run, p['srcs'] = utils.parse_cfg_src_argument([p["cfg_i"], p["cfg_f"], p["cfg_d"]], '', p)

# sink smear of sequential propagator
# this should always be point
sinksmear, created = quarksmear_Point.objects.get_or_create(
    tag="point",  # (Optional) User defined tag for easy searches
    description="Point",  # (Optional) Description of the quark smearing operator
)

linksmear, created = linksmear_WilsonFlow.objects.get_or_create(
    flowtime=p["FLOW_TIME"],  # Flow time in lattice units
    flowstep=p["FLOW_STEP"],  # Number of diffusion steps
)

for quark_tag in ["up", "down"]:
    fermion_tag = f"M5{p['M5']}_L5{p['L5']}_a{p['alpha5']}"
    fermionaction, created = fermionaction_MobiusDW.objects.get_or_create(
        quark_mass=p["MV_L"],  # Input quark mass
        quark_tag=quark_tag,  # Type of quark
        l5=p["L5"],  # Length of 5th dimension
        m5=p["M5"],  # 5th dimensional mass
        b5=p["B5"],  # Mobius kernel parameter [a5 = b5 - c5, alpha5 * a5 =…
        c5=p["C5"],  # Mobius kernel parameter
        linksmear=linksmear,  # Foreign Key pointing to additional gauge `linksmear` outside of Monte Carlo.
        tag=fermion_tag,  # (Optional) User defined tag for easy searches
    )
    if quark_tag == "up":
        qtag = "UU"
    if quark_tag == "down":
        qtag = "DD"
    for parity in [1, -1]:
        for spin_z_x2 in [1, -1]:
            if parity == 1:
                paritytag = "pp"
            else:
                paritytag = "np"
            if spin_z_x2 == 1:
                spintag = "up"
            else:
                spintag = "dn"
            sinkwave, created = wavefunction_Hadron.objects.get_or_create(
                strangeness=0,  # Strangeness of hadronic operator
                irrep="G",  # Irreducible representations of O^D_h (octahedral group)
                embedding=1,  # k-th embedding of O^D_h irrep., can be blank
                parity=parity,  # Parity of hadronic operator
                spin_x2=1,  # Total spin times 2
                spin_z_x2=spin_z_x2,  # Spin in \(z\)-direction
                isospin_x2=1,  # Total isospin times 2
                isospin_z_x2=1,  # Isospin in \(z\)-direction times 2
                momentum=0,  # Momentum in units of 2 pi / L
                tag="proton",  # (Optional) User defined tag for easy searches
                description=f"G1 irrep {paritytag} spin {spintag} proton",  # (Optional) Description of the interpolating operator
            )
            if spin_z_x2 == 1:
                snkspntag = "up"
            else:
                snkspntag = "dn"
            sourcewave, created = wavefunction_Hadron4D.objects.get_or_create(
                strangeness=0,  # Strangeness of hadronic operator
                irrep="G",  # Irreducible representations of O^D_h (octahedral group)
                embedding=1,  # k-th embedding of O^D_h irrep., can be blank
                parity=parity,  # Parity of hadronic operator
                spin_x2=1,  # Total spin times 2
                spin_z_x2=spin_z_x2,  # Spin in \(z\)-direction
                isospin_x2=1,  # Total isospin times 2
                isospin_z_x2=1,  # Isospin in \(z\)-direction times 2
                tag="proton",  # (Optional) User defined tag for easy searches
                description=f"G1 irrep {paritytag} spin {spintag} proton",  # (Optional) Description of the interpolating operator
            )
            if spin_z_x2 == 1:
                srcspntag = "up"
            else:
                srcspntag = "dn"
            for gaugeconfig in gaugeconfigs:
                for sinksmear_bool in [False, True]:
                    propagators = propagator_OneToAll.objects.filter(
                        gaugeconfig=gaugeconfig,
                        fermionaction__mobiusdw__quark_tag="light",
                        sinksmear__point__isnull=sinksmear_bool,
                    )
                    if propagators[0].sourcesmear.type == "Point":
                        srcsmrtag = "P"
                    else:
                        srcsmrtag = "S"
                    if propagators[0].sinksmear.type == "Point":
                        snksmrtag = "P"
                    else:
                        snksmrtag = "S"
                    smrtag = f"{snksmrtag}{srcsmrtag}"
                    for sinksep in p["t_seps"]:
                        mq = propagators[0].tag.split('_')[2][2:]
                        paction = f"{propagators[0].tag.split('wflow')[1].split('_cfg')[0]}_{propagators[0].tag.split('wv_')[1]}"
                        pmom = f"px{sinkwave.momentum}py{sinkwave.momentum}pz{sinkwave.momentum}"
                        tag = f"seqprop_{gaugeconfig.short_tag}_{gaugeconfig.stream}_{gaugeconfig.config}_{sinkwave.tag}_{paritytag}_{qtag}_{snkspntag}_{srcspntag}_gf{paction}_mq{mq}_{pmom}_dt{parity*sinksep}_Srcs0-7_{smrtag}"
                        propagator_baryoncoherentseq, created = propagator_BaryonCoherentSeq.objects.get_or_create(
                            gaugeconfig=gaugeconfig,  # Foreign Key referencing specific `gaugeconfig` inverted on
                            fermionaction=fermionaction,  # Foreign Key referencing valence lattice `fermionaction`
                            sourcewave=sourcewave,
                            sinkwave=sinkwave,  # Foreign Key referencing sink interpolating operator `wavefunction`
                            sinksmear=sinksmear,  # Foreign Key pointing to sink `quarksmear` which should be Point unless some…
                            sinksep=sinksep,  # Source-sink separation time
                            tag=tag,  # (Optional) User defined tag for easy searches
                        )
                        propagator_baryoncoherentseq.propagator0.add(*propagators)
                        propagator_baryoncoherentseq.propagator1.add(*propagators)
                        print(f"\r>>{created} {tag}", end="")
```
# MadMiner particle physics tutorial

# Part 3b: Training a score estimator

Johann Brehmer, Felix Kling, Irina Espejo, and Kyle Cranmer 2018-2019

In part 3b of this tutorial we will finally train a neural network to estimate the score. We assume that you have run part 1 and 2a of this tutorial. If, instead of 2a, you have run part 2b, you just have to load a different filename later.

## Preparations

Make sure you've run the first tutorial before executing this notebook!

```
from __future__ import absolute_import, division, print_function, unicode_literals

import logging
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
%matplotlib inline

from madminer.sampling import SampleAugmenter
from madminer import sampling
from madminer.ml import ScoreEstimator

# MadMiner output
logging.basicConfig(
    format='%(asctime)-5.5s %(name)-20.20s %(levelname)-7.7s %(message)s',
    datefmt='%H:%M',
    level=logging.INFO
)

# Output of all other modules (e.g. matplotlib)
for key in logging.Logger.manager.loggerDict:
    if "madminer" not in key:
        logging.getLogger(key).setLevel(logging.WARNING)
```

## 1. Make (unweighted) training and test samples with augmented data

At this point, we have all the information we need from the simulations. But the data is not quite ready to be used for machine learning. The `madminer.sampling` class `SampleAugmenter` will take care of the remaining book-keeping steps before we can train our estimators:

First, it unweights the samples, i.e. for a given parameter vector `theta` (or a distribution `p(theta)`) it picks events `x` such that their distribution follows `p(x|theta)`. The selected samples will all come from the event file we have so far, but their frequency is changed -- some events will appear multiple times, some will disappear.

Second, `SampleAugmenter` calculates all the augmented data ("gold") that is the key to our new inference methods. Depending on the specific technique, these are the joint likelihood ratio and / or the joint score. It saves all these pieces of information for the selected events in a set of numpy files that can easily be used in any machine learning framework.

```
# sampler = SampleAugmenter('data/lhe_data_shuffled.h5')
sampler = SampleAugmenter('/data_CMS/cms/cortinovis/ewdim6/data_ew_1M_az/delphes_data_shuffled.h5')
```

The relevant `SampleAugmenter` function for local score estimators is `sample_train_local()`. As in part 3a of the tutorial, for the argument `theta` you can use the helper functions `sampling.benchmark()`, `sampling.benchmarks()`, `sampling.morphing_point()`, `sampling.morphing_points()`, and `sampling.random_morphing_points()`.

```
x, theta, t_xz, _ = sampler.sample_train_local(
    theta=sampling.benchmark('sm'),
    n_samples=500000,
    folder='/data_CMS/cms/cortinovis/ewdim6/data_ew_2M_az/samples',
    filename='train_score'
)
```

We can use the same data as in part 3a, so you only have to execute this if you haven't gone through tutorial 3a:

```
_ = sampler.sample_test(
    theta=sampling.benchmark('sm'),
    n_samples=1000,
    folder='/data_CMS/cms/cortinovis/ewdim6/data_ew_2M_az/samples',
    filename='test'
)
```

## 2. Train score estimator

It's now time to build a neural network. Only this time, instead of the likelihood ratio itself, we will estimate the gradient of the log likelihood with respect to the theory parameters -- the score. To be precise, the output of the neural network is an estimate of the score at some reference parameter point, for instance the Standard Model. A neural network that estimates this "local" score can be used to calculate the Fisher information at that point. The estimated score can also be used as a machine learning version of Optimal Observables, and likelihoods can be estimated based on density estimation in the estimated score space.
This method for likelihood ratio estimation is called SALLY, and there is a closely related version called SALLINO. Both are explained in ["Constraining Effective Field Theories With Machine Learning"](https://arxiv.org/abs/1805.00013) and ["A Guide to Constraining Effective Field Theories With Machine Learning"](https://arxiv.org/abs/1805.00020).

The central object for this is the `madminer.ml.ScoreEstimator` class:

```
estimator = ScoreEstimator(n_hidden=(30,30))

estimator.train(
    method='sally',
    x='/data_CMS/cms/cortinovis/ewdim6/data_ew_2M_az/samples/x_train_score.npy',
    t_xz='/data_CMS/cms/cortinovis/ewdim6/data_ew_2M_az/samples/t_xz_train_score.npy',
)

estimator.save('/data_CMS/cms/cortinovis/ewdim6/models_ew_2M_az/sally')
```

## 3. Evaluate score estimator

Let's evaluate the SM score on the test data:

```
estimator.load('/data_CMS/cms/cortinovis/ewdim6/models_ew_2M_az/sally')

t_hat = estimator.evaluate_score(
    x='/data_CMS/cms/cortinovis/ewdim6/data_ew_2M_az/samples/x_test.npy'
)
```

Let's have a look at the estimated score and how it is related to the observables:

```
x = np.load('/data_CMS/cms/cortinovis/ewdim6/data_ew_2M_az/samples/x_test.npy')

fig = plt.figure(figsize=(10,4))

for i in range(2):
    ax = plt.subplot(1,2,i+1)

    sc = plt.scatter(x[:,0], x[:,1], c=t_hat[:,i], s=25., cmap='viridis', vmin=-1., vmax=1.)
    cbar = plt.colorbar(sc)

    cbar.set_label(r'$\hat{t}_' + str(i) + r'(x | \theta_{ref})$')
    plt.xlabel(r'$p_{T,j1}$ [GeV]')
    plt.ylabel(r'$\Delta \phi_{jj}$')
    plt.xlim(10.,300.)
    plt.ylim(-3.15,3.15)

plt.tight_layout()
plt.show()
```
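As an aside, the unweighting step that `SampleAugmenter` performs in section 1 can be illustrated with a minimal NumPy sketch. The event values and weights below are invented for illustration only; MadMiner handles this bookkeeping internally.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five toy "events" with weights proportional to p(x|theta); all values are invented.
events = np.array([0.1, 0.5, 0.9, 1.3, 1.7])
weights = np.array([0.05, 0.1, 0.4, 0.4, 0.05])

# Unweighting: resample events with probability proportional to their weight.
# Heavily weighted events appear many times, lightly weighted ones may vanish.
sample = rng.choice(events, size=1000, p=weights / weights.sum())

# The relative frequency of each event now approximates its weight fraction.
print(np.round(np.mean(sample == 0.9), 2))
```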
```
%load_ext watermark
%watermark -d -u -a 'Andreas Mueller, Kyle Kastner, Sebastian Raschka'
```

The use of watermark (above) is optional, and we use it to keep track of the changes while developing the tutorial material. (You can install this IPython extension via "pip install watermark". For more information, please see: https://github.com/rasbt/watermark).

# SciPy 2016 Scikit-learn Tutorial

# 01.1 Introduction to Machine Learning in Python

## What is Machine Learning?

Machine learning is the process of extracting knowledge from data automatically, usually with the goal of making predictions on new, unseen data. A classical example is a spam filter, for which the user keeps labeling incoming mails as either spam or not spam. A machine learning algorithm then "learns" a predictive model from data that distinguishes spam from normal emails, a model which can predict for new emails whether they are spam or not.

Central to machine learning is the concept of **automating decision making** from data **without the user specifying explicit rules** how this decision should be made. For the case of emails, the user doesn't provide a list of words or characteristics that make an email spam. Instead, the user provides examples of spam and non-spam emails that are labeled as such.

The second central concept is **generalization**. The goal of a machine learning model is to predict on new, previously unseen data. In a real-world application, we are not interested in marking an already labeled email as spam or not. Instead, we want to make the user's life easier by automatically classifying new incoming mail.

<img src="figures/supervised_workflow.svg" width="100%">

The data is presented to the algorithm usually as a two-dimensional array (or matrix) of numbers.
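As a minimal sketch, such a data array can be built with NumPy; the measurement values below are invented for illustration, with three samples (rows) and four features (columns):

```python
import numpy as np

# Three hypothetical samples, four features each
# (e.g. flower measurements in centimeters).
X = np.array([
    [5.1, 3.5, 1.4, 0.2],
    [6.4, 3.2, 4.5, 1.5],
    [5.9, 3.0, 5.1, 1.8],
])

print(X.shape)  # (3, 4): n_samples x n_features
print(X[0])     # the feature vector of the first sample
```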
Each data point (also known as a *sample* or *training instance*) that we want to either learn from or make a decision on is represented as a list of numbers, a so-called feature vector, and the features it contains represent the properties of this point.

Later, we will lay our hands on a popular dataset called *Iris* -- among many other datasets. Iris, a classic benchmark dataset in the field of machine learning, contains the measurements of 150 iris flowers from 3 different species: Iris-Setosa, Iris-Versicolor, and Iris-Virginica.

Iris Setosa
<img src="figures/iris_setosa.jpg" width="50%">

Iris Versicolor
<img src="figures/iris_versicolor.jpg" width="50%">

Iris Virginica
<img src="figures/iris_virginica.jpg" width="50%">

We represent each flower sample as one row in our data array, and the columns (features) represent the flower measurements in centimeters. For instance, we can represent this Iris dataset, consisting of 150 samples and 4 features, as a 2-dimensional array or matrix $\mathbb{R}^{150 \times 4}$ in the following format:

$$\mathbf{X} = \begin{bmatrix}
    x_{1}^{(1)} & x_{2}^{(1)} & x_{3}^{(1)} & \dots & x_{4}^{(1)} \\
    x_{1}^{(2)} & x_{2}^{(2)} & x_{3}^{(2)} & \dots & x_{4}^{(2)} \\
    \vdots & \vdots & \vdots & \ddots & \vdots \\
    x_{1}^{(150)} & x_{2}^{(150)} & x_{3}^{(150)} & \dots & x_{4}^{(150)}
\end{bmatrix}.
$$

(The superscript denotes the *i*th row, and the subscript denotes the *j*th feature, respectively.)

There are two kinds of machine learning we will talk about today: ***supervised learning*** and ***unsupervised learning***.

### Supervised Learning: Classification and regression

In **Supervised Learning**, we have a dataset consisting of both input features and a desired output, such as in the spam / no-spam example. The task is to construct a model (or program) which is able to predict the desired output of an unseen object given the set of features.
Some more complicated examples are:

- Given a multicolor image of an object through a telescope, determine whether that object is a star, a quasar, or a galaxy.
- Given a photograph of a person, identify the person in the photo.
- Given a list of movies a person has watched and their personal rating of the movie, recommend a list of movies they would like.
- Given a person's age, education and position, infer their salary.

What these tasks have in common is that there are one or more unknown quantities associated with the object which need to be determined from other observed quantities.

Supervised learning is further broken down into two categories, **classification** and **regression**:

- **In classification, the label is discrete**, such as "spam" or "no spam". In other words, it provides a clear-cut distinction between categories. Furthermore, it is important to note that class labels are nominal, not ordinal, variables. Nominal and ordinal variables are both subcategories of categorical variables. Ordinal variables imply an order, for example, T-shirt sizes "XL > L > M > S". In contrast, nominal variables don't imply an order, for example, we (usually) can't assume "orange > blue > green".
- **In regression, the label is continuous**, that is a float output. For example, in astronomy, the task of determining whether an object is a star, a galaxy, or a quasar is a classification problem: the label is from three distinct categories. On the other hand, we might wish to estimate the age of an object based on such observations: this would be a regression problem, because the label (age) is a continuous quantity.

In supervised learning, there is always a distinction between a **training set** for which the desired outcome is given, and a **test set** for which the desired outcome needs to be inferred. The learning algorithm fits the predictive model to the training set, and we use the test set to evaluate its generalization performance.
### Unsupervised Learning

In **Unsupervised Learning** there is no desired output associated with the data. Instead, we are interested in extracting some form of knowledge or model from the given data. In a sense, you can think of unsupervised learning as a means of discovering labels from the data itself. Unsupervised learning is often harder to understand and to evaluate.

Unsupervised learning comprises tasks such as *dimensionality reduction*, *clustering*, and *density estimation*. For example, in the iris data discussed above, we can use unsupervised methods to determine combinations of the measurements which best display the structure of the data. As we'll see below, such a projection of the data can be used to visualize the four-dimensional dataset in two dimensions.

Some more involved unsupervised learning problems are:

- Given detailed observations of distant galaxies, determine which features or combinations of features best summarize the information.
- Given a mixture of two sound sources (for example, a person talking over some music), separate the two (this is called the [blind source separation](http://en.wikipedia.org/wiki/Blind_signal_separation) problem).
- Given a video, isolate a moving object and categorize it in relation to other moving objects which have been seen.
- Given a large collection of news articles, find recurring topics inside these articles.
- Given a collection of images, cluster similar images together (for example to group them when visualizing a collection).

Sometimes the two may even be combined: e.g. unsupervised learning can be used to find useful features in heterogeneous data, and then these features can be used within a supervised framework.
# Neural networks with PyTorch

Deep learning networks tend to be massive with dozens or hundreds of layers; that's where the term "deep" comes from. You can build one of these deep networks using only weight matrices as we did in the previous notebook, but in general it's very cumbersome and difficult to implement. PyTorch has a module `nn` that provides a nice way to efficiently build large neural networks.

```
# Import necessary packages

%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import numpy as np
import torch

import helper

import matplotlib.pyplot as plt
```

Now we're going to build a larger network that can solve a (formerly) difficult problem, identifying text in an image. Here we'll use the MNIST dataset which consists of greyscale handwritten digits. Each image is 28x28 pixels, you can see a sample below

<img src='assets/mnist.png'>

Our goal is to build a neural network that can take one of these images and predict the digit in the image.

First up, we need to get our dataset. This is provided through the `torchvision` package. The code below will download the MNIST dataset, then create training and test datasets for us. Don't worry too much about the details here, you'll learn more about this later.

```
### Run this cell

from torchvision import datasets, transforms

# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,)),
                              ])

# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
```

We have the training data loaded into `trainloader` and we make that an iterator with `iter(trainloader)`.
Later, we'll use this to loop through the dataset for training, like

```python
for image, label in trainloader:
    ## do things with images and labels
```

You'll notice I created the `trainloader` with a batch size of 64, and `shuffle=True`. The batch size is the number of images we get in one iteration from the data loader and pass through our network, often called a *batch*. And `shuffle=True` tells it to shuffle the dataset every time we start going through the data loader again. But here I'm just grabbing the first batch so we can check out the data. We can see below that `images` is just a tensor with size `(64, 1, 28, 28)`. So, 64 images per batch, 1 color channel, and 28x28 images.

```
dataiter = iter(trainloader)
images, labels = next(dataiter)
print(type(images))
print(images.shape)
print(labels.shape)
```

This is what one of the images looks like.

```
plt.imshow(images[7].numpy().squeeze(), cmap='Greys_r');
```

First, let's try to build a simple network for this dataset using weight matrices and matrix multiplications. Then, we'll see how to do it using PyTorch's `nn` module which provides a much more convenient and powerful method for defining network architectures.

The networks you've seen so far are called *fully-connected* or *dense* networks. Each unit in one layer is connected to each unit in the next layer. In fully-connected networks, the input to each layer must be a one-dimensional vector (which can be stacked into a 2D tensor as a batch of multiple examples). However, our images are 28x28 2D tensors, so we need to convert them into 1D vectors. Thinking about sizes, we need to convert the batch of images with shape `(64, 1, 28, 28)` to have a shape of `(64, 784)`; 784 is 28 times 28. This is typically called *flattening*: we flatten the 2D images into 1D vectors.

Previously you built a network with one output unit. Here we need 10 output units, one for each digit.
We want our network to predict the digit shown in an image, so what we'll do is calculate probabilities that the image is of any one digit or class. This ends up being a discrete probability distribution over the classes (digits) that tells us the most likely class for the image. That means we need 10 output units for the 10 classes (digits). We'll see how to convert the network output into a probability distribution next.

> **Exercise:** Flatten the batch of images `images`. Then build a multi-layer network with 784 input units, 256 hidden units, and 10 output units using random tensors for the weights and biases. For now, use a sigmoid activation for the hidden layer. Leave the output layer without an activation, we'll add one that gives us a probability distribution next.

```
## Your solution

def activation(x):
    return 1/(1 + torch.exp(-x))

# Flatten the input images
inputs = images.view(images.shape[0], -1)

w1 = torch.randn(784, 256)
b1 = torch.randn(256)

w2 = torch.randn(256, 10)
b2 = torch.randn(10)

h = activation(torch.mm(inputs, w1) + b1)

# out = output of your network, should have shape (64,10)
out = torch.mm(h, w2) + b2
```

Now we have 10 outputs for our network. We want to pass in an image to our network and get out a probability distribution over the classes that tells us the likely class(es) the image belongs to. Something that looks like this:

<img src='assets/image_distribution.png' width=500px>

Here we see that the probability for each class is roughly the same. This is representing an untrained network, it hasn't seen any data yet so it just returns a uniform distribution with equal probabilities for each class.

To calculate this probability distribution, we often use the [**softmax** function](https://en.wikipedia.org/wiki/Softmax_function).
Mathematically this looks like

$$
\Large \sigma(x_i) = \cfrac{e^{x_i}}{\sum_k^K{e^{x_k}}}
$$

What this does is squish each input $x_i$ between 0 and 1 and normalizes the values to give you a proper probability distribution where the probabilities sum up to one.

> **Exercise:** Implement a function `softmax` that performs the softmax calculation and returns probability distributions for each example in the batch. Note that you'll need to pay attention to the shapes when doing this. If you have a tensor `a` with shape `(64, 10)` and a tensor `b` with shape `(64,)`, doing `a/b` will give you an error because PyTorch will try to do the division across the columns (called broadcasting) but you'll get a size mismatch. The way to think about this is for each of the 64 examples, you only want to divide by one value, the sum in the denominator. So you need `b` to have a shape of `(64, 1)`. This way PyTorch will divide the 10 values in each row of `a` by the one value in each row of `b`. Pay attention to how you take the sum as well. You'll need to define the `dim` keyword in `torch.sum`. Setting `dim=0` takes the sum across the rows while `dim=1` takes the sum across the columns.

```
def softmax(x):
    ## TODO: Implement the softmax function here
    return torch.exp(x) / torch.sum(torch.exp(x), dim=1).view(-1, 1)

# Here, out should be the output of the network in the previous exercise with shape (64,10)
probabilities = softmax(out)

# Does it have the right shape? Should be (64, 10)
print(probabilities.shape)
# Does it sum to 1?
print(probabilities.sum(dim=1))
```

## Building networks with PyTorch

PyTorch provides a module `nn` that makes building networks much simpler. Here I'll show you how to build the same one as above with 784 inputs, 256 hidden units, 10 output units and a softmax output.
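One practical caveat the exercise above glosses over: exponentiating large inputs can overflow to `inf`, producing `nan` probabilities. A common trick is to subtract the row-wise maximum before exponentiating, which leaves the result mathematically unchanged (the shift cancels between numerator and denominator). The `softmax_stable` name below is our own addition, not part of the tutorial:

```python
import torch

def softmax_stable(x):
    # Subtracting the per-row max doesn't change the softmax output,
    # but keeps torch.exp from overflowing on large inputs.
    x = x - x.max(dim=1, keepdim=True).values
    e = torch.exp(x)
    return e / e.sum(dim=1, keepdim=True)

a = torch.tensor([[1000., 1001.], [1., 2.]])
print(softmax_stable(a))  # finite values, rows sum to 1
# The naive version overflows on the first row: exp(1000) is inf in float32,
# and inf / inf yields nan.
print(torch.exp(a) / torch.exp(a).sum(dim=1, keepdim=True))
```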
```
from torch import nn

class Network(nn.Module):
    def __init__(self):
        super().__init__()

        # Inputs to hidden layer linear transformation
        self.hidden = nn.Linear(784, 256)
        # Output layer, 10 units - one for each digit
        self.output = nn.Linear(256, 10)

        # Define sigmoid activation and softmax output
        self.sigmoid = nn.Sigmoid()
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        # Pass the input tensor through each of our operations
        x = self.hidden(x)
        x = self.sigmoid(x)
        x = self.output(x)
        x = self.softmax(x)

        return x
```

Let's go through this bit by bit.

```python
class Network(nn.Module):
```

Here we're inheriting from `nn.Module`. Combined with `super().__init__()` this creates a class that tracks the architecture and provides a lot of useful methods and attributes. It is mandatory to inherit from `nn.Module` when you're creating a class for your network. The name of the class itself can be anything.

```python
self.hidden = nn.Linear(784, 256)
```

This line creates a module for a linear transformation, $x\mathbf{W} + b$, with 784 inputs and 256 outputs and assigns it to `self.hidden`. The module automatically creates the weight and bias tensors which we'll use in the `forward` method. You can access the weight and bias tensors once the network (`net`) is created with `net.hidden.weight` and `net.hidden.bias`.

```python
self.output = nn.Linear(256, 10)
```

Similarly, this creates another linear transformation with 256 inputs and 10 outputs.

```python
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
```

Here I defined operations for the sigmoid activation and softmax output. Setting `dim=1` in `nn.Softmax(dim=1)` calculates softmax across the columns.

```python
def forward(self, x):
```

PyTorch networks created with `nn.Module` must have a `forward` method defined. It takes in a tensor `x` and passes it through the operations you defined in the `__init__` method.
```python
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
```

Here the input tensor `x` is passed through each operation and reassigned to `x`. We can see that the input tensor goes through the hidden layer, then a sigmoid function, then the output layer, and finally the softmax function. It doesn't matter what you name the variables here, as long as the inputs and outputs of the operations match the network architecture you want to build. The order in which you define things in the `__init__` method doesn't matter, but you'll need to sequence the operations correctly in the `forward` method.

Now we can create a `Network` object.

```
# Create the network and look at its text representation
model = Network()
model
```

You can define the network somewhat more concisely and clearly using the `torch.nn.functional` module. This is the most common way you'll see networks defined as many operations are simple element-wise functions. We normally import this module as `F`, `import torch.nn.functional as F`.

```
import torch.nn.functional as F

class Network(nn.Module):
    def __init__(self):
        super().__init__()
        # Inputs to hidden layer linear transformation
        self.hidden = nn.Linear(784, 256)
        # Output layer, 10 units - one for each digit
        self.output = nn.Linear(256, 10)

    def forward(self, x):
        # Hidden layer with sigmoid activation
        x = F.sigmoid(self.hidden(x))
        # Output layer with softmax activation
        x = F.softmax(self.output(x), dim=1)

        return x
```

### Activation functions

So far we've only been looking at the softmax activation, but in general any function can be used as an activation function. The only requirement is that for a network to approximate a non-linear function, the activation functions must be non-linear. Here are a few more examples of common activation functions: Tanh (hyperbolic tangent), and ReLU (rectified linear unit).
<img src="assets/activation.png" width=700px>

In practice, the ReLU function is used almost exclusively as the activation function for hidden layers.

### Your Turn to Build a Network

<img src="assets/mlp_mnist.png" width=600px>

> **Exercise:** Create a network with 784 input units, a hidden layer with 128 units and a ReLU activation, then a hidden layer with 64 units and a ReLU activation, and finally an output layer with a softmax activation as shown above. You can use a ReLU activation with the `nn.ReLU` module or `F.relu` function. It's good practice to name your layers by their type of network, for instance 'fc' to represent a fully-connected layer. As you code your solution, use `fc1`, `fc2`, and `fc3` as your layer names.

```
## Your solution here

import torch.nn.functional as F

class Network(nn.Module):
    def __init__(self):
        super().__init__()
        # Input to hidden layer linear transformation
        self.fc1 = nn.Linear(784, 128)
        # Second hidden layer with ReLU activation
        self.fc2 = nn.Linear(128, 64)
        # Output layer, 10 units - one for each digit
        self.fc3 = nn.Linear(64, 10)

    def forward(self, x):
        # First hidden layer with ReLU activation
        x = F.relu(self.fc1(x))
        # Second hidden layer with ReLU activation
        x = F.relu(self.fc2(x))
        # Output layer with softmax activation
        x = F.softmax(self.fc3(x), dim=1)

        return x

model = Network()
model
```

### Initializing weights and biases

The weights and such are automatically initialized for you, but it's possible to customize how they are initialized. The weights and biases are tensors attached to the layer you defined, you can get them with `model.fc1.weight` for instance. For custom initialization, we want to modify these tensors in place. These are actually autograd *Variables*, so we need to get back the actual tensors with `model.fc1.weight.data`. Once we have the tensors, we can fill them with zeros (for biases) or random normal values.
```
print(model.fc1.weight)
print(model.fc1.bias)

# Set biases to all zeros
model.fc1.bias.data.fill_(0)

# sample from random normal with standard dev = 0.01
model.fc1.weight.data.normal_(std=0.01)
```

### Forward pass

Now that we have a network, let's see what happens when we pass in an image.

```
# Grab some data
dataiter = iter(trainloader)
images, labels = next(dataiter)

# Resize images into a 1D vector, new shape is (batch size, color channels, image pixels)
images.resize_(64, 1, 784)
# or images.resize_(images.shape[0], 1, 784) to automatically get batch size

# Forward pass through the network
img_idx = 0
ps = model.forward(images[img_idx,:])

img = images[img_idx]
helper.view_classify(img.view(1, 28, 28), ps)
```

As you can see above, our network has basically no idea what this digit is. It's because we haven't trained it yet, all the weights are random!

### Using `nn.Sequential`

PyTorch provides a convenient way to build networks like this where a tensor is passed sequentially through operations, `nn.Sequential` ([documentation](https://pytorch.org/docs/master/nn.html#torch.nn.Sequential)). Using this to build the equivalent network:

```
# Hyperparameters for our network
input_size = 784
hidden_sizes = [128, 64]
output_size = 10

# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[0], hidden_sizes[1]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[1], output_size),
                      nn.Softmax(dim=1))
print(model)

# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0,:])
helper.view_classify(images[0].view(1, 28, 28), ps)
```

Here our model is the same as before: 784 input units, a hidden layer with 128 units, ReLU activation, 64 unit hidden layer, another ReLU, then the output layer with 10 units, and the softmax output. The operations are available by passing in the appropriate index.
For example, if you want to get the first Linear operation and look at its weights, you'd use `model[0]`.

```
print(model[0])
model[0].weight
```

You can also pass in an `OrderedDict` to name the individual layers and operations, instead of using incremental integers. Note that dictionary keys must be unique, so _each operation must have a different name_.

```
from collections import OrderedDict

model = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(input_size, hidden_sizes[0])),
    ('relu1', nn.ReLU()),
    ('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
    ('relu2', nn.ReLU()),
    ('output', nn.Linear(hidden_sizes[1], output_size)),
    ('softmax', nn.Softmax(dim=1))]))
model
```

Now you can access layers either by integer or by name:

```
print(model[0])
print(model.fc1)
```

In the next notebook, we'll see how we can train a neural network to accurately predict the numbers appearing in the MNIST images.
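A forward pass through a network like this is just alternating affine maps and nonlinearities. To make that concrete without any framework, here is a dependency-free sketch of the same computation on a tiny toy network; all weights and inputs below are made-up numbers for illustration, not the model above:

```python
import math

def relu(v):
    # Zero out negative activations, elementwise
    return [max(0.0, x) for x in v]

def softmax(v):
    # Exponentiate, then normalize so the outputs sum to 1
    exps = [math.exp(x) for x in v]
    total = sum(exps)
    return [e / total for e in exps]

def linear(x, weights, bias):
    # One output per row of `weights`: dot(row, x) + bias
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

# Toy network: 3 inputs -> 2 hidden units (ReLU) -> 2 outputs (softmax)
x = [1.0, -2.0, 0.5]
w1, b1 = [[0.1, 0.2, 0.3], [-0.3, 0.1, 0.2]], [0.5, 0.1]
w2, b2 = [[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0]

hidden = relu(linear(x, w1, b1))
probs = softmax(linear(hidden, w2, b2))
print(probs)  # two probabilities summing to 1
```

With PyTorch, this is the same computation that `model(x)` performs layer by layer, just vectorized over a whole batch on tensors.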
## Step 1. Load the files

```
# Download word vectors for the Gossiping board, one model every five years from 2005 to 2020
!gdown --id "1gEL4v3wGgvqJnpWspISZvLeIL3GQZLB1" -O "Gossiping_2005.model"  # Gossiping board, 2005
!gdown --id "1yB9WPVDJVmmLLxbEHZroZP_cYMP0JUpC" -O "Gossiping_2010.model"  # Gossiping board, 2010
!gdown --id "1Vh8meq6hdte02nQ2-djclgpEKxFUC0YU" -O "Gossiping_2015.model"  # Gossiping board, 2015
!gdown --id "1EiDgWcnDDSOy1bu_aRjbBk4JGIENNoGk" -O "Gossiping_2020.model"  # Gossiping board, 2020

# Download word vectors for the WomenTalk board, one model every five years from 2005 to 2020
!gdown --id "18rhI6VBnBXBji0YRplcL9bF31K2gFH9R" -O "WomenTalk_2005.model"  # WomenTalk board, 2005
!gdown --id "19XZ-SeZNUu515TZS3lW9kHASk_P6CYQJ" -O "WomenTalk_2010.model"  # WomenTalk board, 2010
!gdown --id "1CQtZ_5Tu8ML24es2vYfQcCoGcCTadzCp" -O "WomenTalk_2015.model"  # WomenTalk board, 2015
!gdown --id "1PqqW_5TyNKDU3WPubypIBED2GnlfFGTE" -O "WomenTalk_2020.model"  # WomenTalk board, 2020
```

## Step 2. Choose the PTT boards and years

```
board_lst = ['Gossiping', 'WomenTalk']
year_lst = ['2005', '2010', '2015', '2020']
```

## Step 3. Diachronic word vectors (and nearest neighbors)

```
import gensim

# Load the word vectors
# Define a class to hold the embedding-related data
class Embedding:
    def __init__(self, board, year_lst):
        self.board = board        # the chosen PTT board, as a string
        self.year_lst = year_lst  # the chosen years, as a list
        self.path_lst = [f'{board}_{year}.model' for year in self.year_lst]  # model file paths for each year of this board
        self.model_lst = [gensim.models.Word2Vec.load(path) for path in self.path_lst]  # load each model from its path

# TO-DO
# Build an Embedding for the Gossiping board with the years 2005 and 2015
embed_2005_2015 = Embedding('Gossiping', ['2005', '2015'])

# TO-DO
# Inspect the model_lst of embed_2005_2015
embed_2005_2015.model_lst

# TO-DO
# Find the nearest neighbors of '台灣' in model_lst[0]
embed_2005 = embed_2005_2015.model_lst[0]
embed_2005.wv.most_similar('台灣')

# TO-DO
# Find the top 35 nearest neighbors of '台灣' in model_lst[0]
embed_2005.wv.most_similar('台灣', topn=35)

# TO-DO
# Look up the word vector of '台灣' in model_lst[0]
# (use the .wv attribute; indexing the model directly is deprecated in gensim)
embed_2005.wv['台灣']
```

## Step 4. Visualization

```
import numpy as np
from sklearn.manifold import TSNE
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from matplotlib.font_manager import FontProperties
%matplotlib inline

# Download a Chinese font (the URL is quoted so the shell does not split it at '&')
!wget -O taipei_sans_tc_beta.ttf "https://drive.google.com/uc?id=1eGAsTN1HBpJAkeVM57_C7ccp7hbgSz3_&export=download"

# Configure matplotlib to use the Chinese font
matplotlib.font_manager.fontManager.addfont('taipei_sans_tc_beta.ttf')
matplotlib.rc('font', family = 'Taipei Sans TC Beta')

# Set the figure resolution
plt.rcParams['figure.dpi'] = 300

# source: https://github.com/sismetanin/word2vec-tsne
def tsne_plot_similar_words(labels, embedding_clusters, word_clusters, n1):
    plt.figure(figsize=(9, 9))  # set up a blank canvas
    colors = cm.Accent(np.linspace(0, 1, len(labels)))  # one color per label
    # source: https://matplotlib.org/3.1.1/gallery/color/colormap_reference.html
    arrow_lst = []
    for label, embeddings, words, color in zip(labels, embedding_clusters, word_clusters, colors):
        x = embeddings[:, 0]
        y = embeddings[:, 1]
        arrow_lst.append((x[0], y[0]))  # point 0 is the keyword itself; record its (x, y) in arrow_lst
        # plot the keyword's point
        plt.scatter(x[:1], y[:1], c=color, alpha=1, label=label)
        for i, word in enumerate(words):
            # the keyword itself
            if i == 0:
                a = 1      # opacity
                size = 28  # font size
            # tier the neighbors: adjust opacity and font size
            elif i >= 1 and i <= n1:
                a = 0.85
                size = 16
            else:
                a = 0.35
                size = 16
            # annotate the word
            plt.annotate(word, alpha=a, xy=(x[i], y[i]), xytext=(1, 1),
                         textcoords='offset points', ha='right', va='bottom',
                         size=size, c=color)
    for c, i in zip(colors, range(len(arrow_lst))):
        try:
            # draw an arrow from each year's keyword point to the next
            plt.annotate('', xy=(arrow_lst[i+1][0], arrow_lst[i+1][1]),
                         xytext=(arrow_lst[i][0], arrow_lst[i][1]),
                         arrowprops=dict(facecolor=c, edgecolor=c, width=5, shrink=0.01, alpha=0.5))
            # source: https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.annotate.html
        except:
            pass
    plt.legend(loc=4)
    plt.grid(True)
    plt.axis('off')
    plt.show()

class PlotTemporalData(Embedding):  # extend the Embedding class with more functionality
    def __init__(self, board, year_lst):
        super().__init__(board, year_lst)
        # self.vocab_lst = [model.wv.vocab for model in self.model_lst]  # the vocabulary of each model

    # collect the data points from the word vectors
    def create_datapoints(self, keyword, n1=10, n2=15):
        error_log = {}           # record error messages
        labels = []              # keyword_year labels
        word_clusters = []       # words
        embedding_clusters = []  # vectors
        # outer loop: over the years
        for year, model in zip(self.year_lst, self.model_lst):  # pair each year with its model
            label = f'{keyword}({year})'
            try:  # on any error (Exception as e), record the message (e) in the error_log dictionary
                # the keyword itself
                words = [label]
                embeddings = [model.wv[keyword]]
                # inner loop: that year's nearest neighbors
                # (the top n1+n2 nearest neighbors)
                for similar_word, _ in model.wv.most_similar(keyword, topn=n1+n2):
                    words.append(similar_word)
                    embeddings.append(model.wv[similar_word])
                embedding_clusters.append(embeddings)
                word_clusters.append(words)
                labels.append(label)
            except Exception as e:
                error_log[label] = e
        print(error_log)
        self.error_log = error_log
        self.keyword = keyword
        self.labels = labels
        self.n1 = n1
        self.n2 = n2
        self.embedding_clusters = embedding_clusters
        self.word_clusters = word_clusters

    # run the points through t-SNE
    def tsne(self):
        embedding_clusters = np.array(self.embedding_clusters)
        n, m, k = embedding_clusters.shape
        tsne_model_en_2d = TSNE(perplexity=15, n_components=2, init='pca',
                                n_iter=3500, random_state=32)
        embeddings_en_2d = np.array(
            tsne_model_en_2d.fit_transform(embedding_clusters.reshape(n * m, k))
        ).reshape(n, m, 2)
        self.embeddings_en_2d = embeddings_en_2d

    # visualize the processed points
    def tsne_plot(self):
        tsne_plot_similar_words(self.labels, self.embeddings_en_2d,
                                self.word_clusters, self.n1)
```

## Step 5. Choose a keyword to observe

```
keyword = '台灣'

for board in board_lst:
    data = PlotTemporalData(board, year_lst)
    data.create_datapoints(keyword, n1=5, n2=5)
    #data.create_datapoints(keyword)
    data.tsne()
    data.tsne_plot()
```
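Under the hood, `most_similar` ranks the rest of the vocabulary by cosine similarity to the query word's vector. A dependency-free sketch of that ranking on hypothetical 3-dimensional toy vectors (real models use far higher dimensions; the words and numbers below are made up for illustration):

```python
import math

# Hypothetical toy embeddings, 3-dimensional for readability
vectors = {
    '台灣': [0.9, 0.1, 0.2],
    '日本': [0.8, 0.2, 0.1],
    '香蕉': [0.1, 0.9, 0.3],
}

def cosine(u, v):
    # cos(u, v) = (u . v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def most_similar(word, topn=2):
    # Score every other word against `word`, then sort by similarity, descending
    scores = [(other, cosine(vectors[word], vectors[other]))
              for other in vectors if other != word]
    return sorted(scores, key=lambda item: item[1], reverse=True)[:topn]

result = most_similar('台灣')
print(result)
```

gensim's `wv.most_similar` does the same thing, only over the whole vocabulary with vectorized numpy operations (and on unit-normalized vectors, which does not change the ranking).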
### Markov decision process This week's methods are all built to solve __M__arkov __D__ecision __P__rocesses. In the broadest sense, an MDP is defined by how it changes states and how rewards are computed. State transition is defined by $P(s' |s,a)$ - how likely are you to end at state $s'$ if you take action $a$ from state $s$. Now there's more than one way to define rewards, but we'll use $r(s,a,s')$ function for convenience. _This notebook is inspired by the awesome_ [CS294](https://github.com/berkeleydeeprlcourse/homework/blob/36a0b58261acde756abd55306fbe63df226bf62b/hw2/HW2.ipynb) _by Berkeley_ For starters, let's define a simple MDP from this picture: <img src="https://upload.wikimedia.org/wikipedia/commons/a/ad/Markov_Decision_Process.svg" width="400px" alt="Diagram by Waldoalvarez via Wikimedia Commons, CC BY-SA 4.0"/> ``` import sys, os if 'google.colab' in sys.modules and not os.path.exists('.setup_complete'): !wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash !wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/week02_value_based/mdp.py !touch .setup_complete # This code creates a virtual display to draw game images on. # It will have no effect if your machine has a monitor. 
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
    !bash ../xvfb start
    os.environ['DISPLAY'] = ':1'

transition_probs = {
    's0': {
        'a0': {'s0': 0.5, 's2': 0.5},
        'a1': {'s2': 1}
    },
    's1': {
        'a0': {'s0': 0.7, 's1': 0.1, 's2': 0.2},
        'a1': {'s1': 0.95, 's2': 0.05}
    },
    's2': {
        'a0': {'s0': 0.4, 's2': 0.6},
        'a1': {'s0': 0.3, 's1': 0.3, 's2': 0.4}
    }
}
rewards = {
    's1': {'a0': {'s0': +5}},
    's2': {'a1': {'s0': -1}}
}

from mdp import MDP

mdp = MDP(transition_probs, rewards, initial_state='s0')
```

We can now use the MDP just as any other gym environment:

```
print('initial state =', mdp.reset())
next_state, reward, done, info = mdp.step('a1')
print('next_state = %s, reward = %s, done = %s' % (next_state, reward, done))
```

but it also has other methods that you'll need for Value Iteration:

```
print("mdp.get_all_states =", mdp.get_all_states())
print("mdp.get_possible_actions('s1') = ", mdp.get_possible_actions('s1'))
print("mdp.get_next_states('s1', 'a0') = ", mdp.get_next_states('s1', 'a0'))
print("mdp.get_reward('s1', 'a0', 's0') = ", mdp.get_reward('s1', 'a0', 's0'))
print("mdp.get_transition_prob('s1', 'a0', 's0') = ", mdp.get_transition_prob('s1', 'a0', 's0'))
```

### Optional: Visualizing MDPs

You can also visualize any MDP with the drawing function contributed by [neer201](https://github.com/neer201).

You have to install graphviz both for your system and for Python:

1. * For Ubuntu, run: `sudo apt-get install graphviz`
   * For OSX: `brew install graphviz`
2. `pip install graphviz`
3. Restart the notebook

__Note:__ Installing graphviz on some OSes (esp. Windows) may be tricky. However, you can ignore this part altogether and use the standard visualization.
``` from mdp import has_graphviz from IPython.display import display print("Graphviz available:", has_graphviz) if has_graphviz: from mdp import plot_graph, plot_graph_with_state_values, plot_graph_optimal_strategy_and_state_values display(plot_graph(mdp)) ``` ### Value Iteration Now let's build something to solve this MDP. The simplest algorithm so far is __V__alue __I__teration Here's the pseudo-code for VI: --- `1.` Initialize $V^{(0)}(s)=0$, for all $s$ `2.` For $i=0, 1, 2, \dots$ `3.` $ \quad V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$, for all $s$ --- First, let's write a function to compute the state-action value function $Q^{\pi}$, defined as follows $$Q_i(s, a) = \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$$ ``` def get_action_value(mdp, state_values, state, action, gamma): """ Computes Q(s,a) as in formula above """ <YOUR CODE> return <YOUR CODE> import numpy as np test_Vs = {s: i for i, s in enumerate(sorted(mdp.get_all_states()))} assert np.isclose(get_action_value(mdp, test_Vs, 's2', 'a1', 0.9), 0.69) assert np.isclose(get_action_value(mdp, test_Vs, 's1', 'a0', 0.9), 3.95) ``` Using $Q(s,a)$ we can now define the "next" V(s) for value iteration. $$V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = \max_a Q_i(s,a)$$ ``` def get_new_state_value(mdp, state_values, state, gamma): """ Computes next V(s) as in formula above. Please do not change state_values in process. 
""" if mdp.is_terminal(state): return 0 <YOUR CODE> return <YOUR CODE> test_Vs_copy = dict(test_Vs) assert np.isclose(get_new_state_value(mdp, test_Vs, 's0', 0.9), 1.8) assert np.isclose(get_new_state_value(mdp, test_Vs, 's2', 0.9), 1.08) assert np.isclose(get_new_state_value(mdp, {'s0': -1e10, 's1': 0, 's2': -2e10}, 's0', 0.9), -13500000000.0), \ "Please ensure that you handle negative Q-values of arbitrary magnitude correctly" assert test_Vs == test_Vs_copy, "Please do not change state_values in get_new_state_value" ``` Finally, let's combine everything we wrote into a working value iteration algo. ``` # parameters gamma = 0.9 # discount for MDP num_iter = 100 # maximum iterations, excluding initialization # stop VI if new values are this close to old values (or closer) min_difference = 0.001 # initialize V(s) state_values = {s: 0 for s in mdp.get_all_states()} if has_graphviz: display(plot_graph_with_state_values(mdp, state_values)) for i in range(num_iter): # Compute new state values using the functions you defined above. 
    # It must be a dict {state : float V_new(state)}
    new_state_values = <YOUR CODE>

    assert isinstance(new_state_values, dict)

    # Compute difference
    diff = max(abs(new_state_values[s] - state_values[s])
               for s in mdp.get_all_states())
    print("iter %4i | diff: %6.5f | " % (i, diff), end="")
    print(' '.join("V(%s) = %.3f" % (s, v) for s, v in state_values.items()))

    state_values = new_state_values

    if diff < min_difference:
        print("Terminated")
        break

if has_graphviz:
    display(plot_graph_with_state_values(mdp, state_values))

print("Final state values:", state_values)

assert abs(state_values['s0'] - 3.781) < 0.01
assert abs(state_values['s1'] - 7.294) < 0.01
assert abs(state_values['s2'] - 4.202) < 0.01
```

Now let's use those $V^{*}(s)$ to find optimal actions in each state

$$\pi^*(s) = argmax_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = argmax_a Q_i(s,a)$$

The only difference from $V(s)$ is that here we take the argmax instead of the max: find the action with the maximum $Q(s,a)$.

```
def get_optimal_action(mdp, state_values, state, gamma=0.9):
    """ Finds optimal action using formula above.
""" if mdp.is_terminal(state): return None <YOUR CODE> return <YOUR CODE> assert get_optimal_action(mdp, state_values, 's0', gamma) == 'a1' assert get_optimal_action(mdp, state_values, 's1', gamma) == 'a0' assert get_optimal_action(mdp, state_values, 's2', gamma) == 'a1' assert get_optimal_action(mdp, {'s0': -1e10, 's1': 0, 's2': -2e10}, 's0', 0.9) == 'a0', \ "Please ensure that you handle negative Q-values of arbitrary magnitude correctly" assert get_optimal_action(mdp, {'s0': -2e10, 's1': 0, 's2': -1e10}, 's0', 0.9) == 'a1', \ "Please ensure that you handle negative Q-values of arbitrary magnitude correctly" if has_graphviz: display(plot_graph_optimal_strategy_and_state_values(mdp, state_values, get_action_value)) # Measure agent's average reward s = mdp.reset() rewards = [] for _ in range(10000): s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma)) rewards.append(r) print("average reward: ", np.mean(rewards)) assert(0.40 < np.mean(rewards) < 0.55) ``` ### Frozen lake ``` from mdp import FrozenLakeEnv mdp = FrozenLakeEnv(slip_chance=0) mdp.render() def value_iteration(mdp, state_values=None, gamma=0.9, num_iter=1000, min_difference=1e-5): """ performs num_iter value iteration steps starting from state_values. Same as before but in a function """ state_values = state_values or {s: 0 for s in mdp.get_all_states()} for i in range(num_iter): # Compute new state values using the functions you defined above. 
        # It must be a dict {state : new_V(state)}
        new_state_values = <YOUR CODE>

        assert isinstance(new_state_values, dict)

        # Compute difference
        diff = max(abs(new_state_values[s] - state_values[s])
                   for s in mdp.get_all_states())

        print("iter %4i | diff: %6.5f | V(start): %.3f " %
              (i, diff, new_state_values[mdp._initial_state]))

        state_values = new_state_values
        if diff < min_difference:
            break

    return state_values

state_values = value_iteration(mdp)

s = mdp.reset()
mdp.render()
for t in range(100):
    a = get_optimal_action(mdp, state_values, s, gamma)
    print(a, end='\n\n')
    s, r, done, _ = mdp.step(a)
    mdp.render()
    if done:
        break
```

### Let's visualize!

It's usually interesting to see what your algorithm actually learned under the hood. To do so, we'll plot state value functions and optimal actions at each VI step.

```
import matplotlib.pyplot as plt
%matplotlib inline

def draw_policy(mdp, state_values):
    plt.figure(figsize=(3, 3))
    h, w = mdp.desc.shape
    states = sorted(mdp.get_all_states())
    V = np.array([state_values[s] for s in states])
    Pi = {s: get_optimal_action(mdp, state_values, s, gamma) for s in states}
    plt.imshow(V.reshape(w, h), cmap='gray', interpolation='none', clim=(0, 1))
    ax = plt.gca()
    ax.set_xticks(np.arange(h)-.5)
    ax.set_yticks(np.arange(w)-.5)
    ax.set_xticklabels([])
    ax.set_yticklabels([])
    Y, X = np.mgrid[0:4, 0:4]
    a2uv = {'left': (-1, 0), 'down': (0, -1), 'right': (1, 0), 'up': (0, 1)}
    for y in range(h):
        for x in range(w):
            plt.text(x, y, str(mdp.desc[y, x].item()),
                     color='g', size=12, verticalalignment='center',
                     horizontalalignment='center', fontweight='bold')
            a = Pi[y, x]
            if a is None:
                continue
            u, v = a2uv[a]
            plt.arrow(x, y, u*.3, -v*.3, color='m',
                      head_width=0.1, head_length=0.1)
    plt.grid(color='b', lw=2, ls='-')
    plt.show()

state_values = {s: 0 for s in mdp.get_all_states()}

for i in range(10):
    print("after iteration %i" % i)
    state_values = value_iteration(mdp, state_values, num_iter=1)
    draw_policy(mdp, state_values)
# please ignore iter 0 at each step

from IPython.display import clear_output
from time import sleep

mdp = FrozenLakeEnv(map_name='8x8', slip_chance=0.1)
state_values = {s: 0 for s in mdp.get_all_states()}

for i in range(30):
    clear_output(True)
    print("after iteration %i" % i)
    state_values = value_iteration(mdp, state_values, num_iter=1)
    draw_policy(mdp, state_values)
    sleep(0.5)
# please ignore iter 0 at each step
```

Massive tests

```
mdp = FrozenLakeEnv(slip_chance=0)
state_values = value_iteration(mdp)

total_rewards = []
for game_i in range(1000):
    s = mdp.reset()
    rewards = []
    for t in range(100):
        s, r, done, _ = mdp.step(
            get_optimal_action(mdp, state_values, s, gamma))
        rewards.append(r)
        if done:
            break
    total_rewards.append(np.sum(rewards))

print("average reward: ", np.mean(total_rewards))
assert(1.0 <= np.mean(total_rewards) <= 1.0)
print("Well done!")

# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.1)
state_values = value_iteration(mdp)

total_rewards = []
for game_i in range(1000):
    s = mdp.reset()
    rewards = []
    for t in range(100):
        s, r, done, _ = mdp.step(
            get_optimal_action(mdp, state_values, s, gamma))
        rewards.append(r)
        if done:
            break
    total_rewards.append(np.sum(rewards))

print("average reward: ", np.mean(total_rewards))
assert(0.8 <= np.mean(total_rewards) <= 0.95)
print("Well done!")

# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.25)
state_values = value_iteration(mdp)

total_rewards = []
for game_i in range(1000):
    s = mdp.reset()
    rewards = []
    for t in range(100):
        s, r, done, _ = mdp.step(
            get_optimal_action(mdp, state_values, s, gamma))
        rewards.append(r)
        if done:
            break
    total_rewards.append(np.sum(rewards))

print("average reward: ", np.mean(total_rewards))
assert(0.6 <= np.mean(total_rewards) <= 0.7)
print("Well done!")

# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.2, map_name='8x8')
state_values = value_iteration(mdp)

total_rewards = []
for game_i in range(1000):
    s = mdp.reset()
    rewards = []
    for t in range(100):
        s, r, done, _ = mdp.step(
get_optimal_action(mdp, state_values, s, gamma)) rewards.append(r) if done: break total_rewards.append(np.sum(rewards)) print("average reward: ", np.mean(total_rewards)) assert(0.6 <= np.mean(total_rewards) <= 0.8) print("Well done!") ``` # HW Part 1: Value iteration convergence ### Find an MDP for which value iteration takes long to converge (1 pts) When we ran value iteration on the small frozen lake problem, the last iteration where an action changed was iteration 6--i.e., value iteration computed the optimal policy at iteration 6. Are there any guarantees regarding how many iterations it'll take value iteration to compute the optimal policy? There are no such guarantees without additional assumptions--we can construct the MDP in such a way that the greedy policy will change after arbitrarily many iterations. Your task: define an MDP with at most 3 states and 2 actions, such that when you run value iteration, the optimal action changes at iteration >= 50. Use discount=0.95. (However, note that the discount doesn't matter here--you can construct an appropriate MDP with any discount.) Note: value function must change at least once after iteration >=50, not necessarily change on every iteration till >=50. 
```
transition_probs = {
    <YOUR CODE>
}
rewards = {
    <YOUR CODE>
}

from mdp import MDP
from numpy import random

mdp = MDP(transition_probs, rewards,
          initial_state=random.choice(tuple(transition_probs.keys())))
# Feel free to change the initial_state

state_values = {s: 0 for s in mdp.get_all_states()}
policy = np.array([get_optimal_action(mdp, state_values, state, gamma)
                   for state in sorted(mdp.get_all_states())])

for i in range(100):
    print("after iteration %i" % i)
    state_values = value_iteration(mdp, state_values, num_iter=1)

    new_policy = np.array([get_optimal_action(mdp, state_values, state, gamma)
                           for state in sorted(mdp.get_all_states())])

    n_changes = (policy != new_policy).sum()
    print("N actions changed = %i \n" % n_changes)
    policy = new_policy
# please ignore iter 0 at each step
```

### Value iteration convergence proof (1 pts)

**Note:** Assume that $\mathcal{S}, \mathcal{A}$ are finite.

The update of the value function in value iteration can be rewritten in the form of the Bellman operator:

$$(TV)(s) = \max_{a \in \mathcal{A}}\mathbb{E}\left[ r_{t+1} + \gamma V(s_{t+1}) | s_t = s, a_t = a\right]$$

Value iteration algorithm with the Bellman operator:

---

&nbsp;&nbsp; Initialize $V_0$

&nbsp;&nbsp; **for** $k = 0,1,2,...$ **do**

&nbsp;&nbsp;&nbsp;&nbsp; $V_{k+1} \leftarrow TV_k$

&nbsp;&nbsp;**end for**

---

In the [lecture](https://docs.google.com/presentation/d/1lz2oIUTvd2MHWKEQSH8hquS66oe4MZ_eRvVViZs2uuE/edit#slide=id.g4fd6bae29e_2_4) we established the contraction property of the Bellman operator:

$$ ||TV - TU||_{\infty} \le \gamma ||V - U||_{\infty} $$

for all $V, U$.

Using the contraction property of the Bellman operator, the Banach fixed-point theorem, and the Bellman equations, prove that the value function converges to $V^*$ in value iteration.

*<-- Your proof here -->*

### Asynchronous value iteration (2 pts)

Consider the following algorithm:

---

Initialize $V_0$

**for** $k = 0,1,2,...$ **do**

&nbsp;&nbsp;&nbsp;&nbsp; Select some state $s_k \in \mathcal{S}$

&nbsp;&nbsp;&nbsp;&nbsp; $V(s_k) := (TV)(s_k)$
**end for**

---

Note that unlike common value iteration, here we update only a single state at a time.

**Homework.** Prove the following proposition:

If for all $s \in \mathcal{S}$, $s$ appears in the sequence $(s_0, s_1, ...)$ infinitely often, then $V$ converges to $V^*$

*<-- Your proof here -->*

# HW Part 2: Policy iteration

## Policy iteration implementation (3 pts)

Let's implement exact policy iteration (PI), which has the following pseudocode:

---

Initialize $\pi_0$   `// random or fixed action`

For $n=0, 1, 2, \dots$

- Compute the state-value function $V^{\pi_{n}}$
- Using $V^{\pi_{n}}$, compute the state-action-value function $Q^{\pi_{n}}$
- Compute new policy $\pi_{n+1}(s) = \operatorname*{argmax}_a Q^{\pi_{n}}(s,a)$

---

Unlike VI, policy iteration has to maintain a policy - chosen actions from all states - and estimate $V^{\pi_{n}}$ based on this policy. It only changes the policy once the values have converged.

Below are a few helpers that you may or may not use in your implementation.

```
transition_probs = {
    's0': {
        'a0': {'s0': 0.5, 's2': 0.5},
        'a1': {'s2': 1}
    },
    's1': {
        'a0': {'s0': 0.7, 's1': 0.1, 's2': 0.2},
        'a1': {'s1': 0.95, 's2': 0.05}
    },
    's2': {
        'a0': {'s0': 0.4, 's1': 0.6},
        'a1': {'s0': 0.3, 's1': 0.3, 's2': 0.4}
    }
}
rewards = {
    's1': {'a0': {'s0': +5}},
    's2': {'a1': {'s0': -1}}
}

from mdp import MDP
mdp = MDP(transition_probs, rewards, initial_state='s0')
```

Let's write a function called `compute_vpi` that computes the state-value function $V^{\pi}$ for an arbitrary policy $\pi$.

Unlike VI, this time you must find the exact solution, not just a single iteration.

Recall that $V^{\pi}$ satisfies the following linear equation:

$$V^{\pi}(s) = \sum_{s'} P(s,\pi(s),s')[ R(s,\pi(s),s') + \gamma V^{\pi}(s')]$$

You'll have to solve a linear system in your code. (Find an exact solution, e.g., with `np.linalg.solve`.)

```
def compute_vpi(mdp, policy, gamma):
    """ Computes V^pi(s) FOR ALL STATES under given policy.
    :param policy: a dict of currently chosen actions {s : a}
    :returns: a dict {state : V^pi(state) for all states}
    """
    <YOUR CODE>
    return <YOUR CODE>

test_policy = {s: np.random.choice(
    mdp.get_possible_actions(s)) for s in mdp.get_all_states()}
new_vpi = compute_vpi(mdp, test_policy, gamma)

print(new_vpi)

assert type(
    new_vpi) is dict, "compute_vpi must return a dict {state : V^pi(state) for all states}"
```

Once we've got new state values, it's time to update our policy.

```
def compute_new_policy(mdp, vpi, gamma):
    """ Computes new policy as argmax of state values
    :param vpi: a dict {state : V^pi(state) for all states}
    :returns: a dict {state : optimal action for all states}
    """
    <YOUR CODE>
    return <YOUR CODE>

new_policy = compute_new_policy(mdp, new_vpi, gamma)

print(new_policy)

assert type(
    new_policy) is dict, "compute_new_policy must return a dict {state : optimal action for all states}"
```

__Main loop__

```
def policy_iteration(mdp, policy=None, gamma=0.9, num_iter=1000, min_difference=1e-5):
    """
    Run the policy iteration loop for num_iter iterations
    or until the difference between V(s) is below min_difference.
    If policy is not given, initialize it at random.
    """
    <YOUR CODE: a whole lot of it>

    return state_values, policy
```

__Your PI Results__

```
<YOUR CODE: compare PI and VI on the MDP from bonus 1, then on small & large FrozenLake>
```

## Policy iteration convergence (3 pts)

**Note:** Assume that $\mathcal{S}, \mathcal{A}$ are finite.

We can define another Bellman operator:

$$(T_{\pi}V)(s) = \mathbb{E}_{r, s'|s, a = \pi(s)}\left[r + \gamma V(s')\right]$$

And rewrite the policy iteration algorithm in operator form:

---

Initialize $\pi_0$

**for** $k = 0,1,2,...$ **do**

&nbsp;&nbsp;&nbsp;&nbsp; Solve $V_k = T_{\pi_k}V_k$

&nbsp;&nbsp;&nbsp;&nbsp; Select $\pi_{k+1}$ s.t. $T_{\pi_{k+1}}V_k = TV_k$

**end for**

---

To prove convergence of the algorithm, we need to prove two properties: contraction and monotonicity.
#### Monotonicity (0.5 pts) For all $V, U$ if $V(s) \le U(s)$ $\forall s \in \mathcal{S}$ then $(T_\pi V)(s) \le (T_\pi U)(s)$ $\forall s \in \mathcal{S}$ *<-- Your proof here -->* #### Contraction (1 pts) $$ ||T_\pi V - T_\pi U||_{\infty} \le \gamma ||V - U||_{\infty} $$ For all $V, U$ *<-- Your proof here -->* #### Convergence (1.5 pts) Prove that there exists iteration $k_0$ such that $\pi_k = \pi^*$ for all $k \ge k_0$ *<-- Your proof here -->*
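The contraction property can also be checked numerically: on any small MDP, successive Bellman sweeps must shrink the sup-norm difference between iterates by at least a factor of $\gamma$. A self-contained sketch on a hypothetical 2-state, 2-action MDP (toy numbers, independent of the notebook's `mdp` module):

```python
# Toy dynamics: P[s][a] = list of (next_state, probability, reward) triples
P = {
    0: {0: [(0, 0.5, 1.0), (1, 0.5, 0.0)], 1: [(1, 1.0, 0.0)]},
    1: {0: [(0, 1.0, 2.0)], 1: [(1, 1.0, 0.5)]},
}
gamma = 0.9

def bellman_update(V):
    # (TV)(s) = max_a sum_{s'} P(s'|s,a) * (r + gamma * V(s'))
    return {s: max(sum(p * (r + gamma * V[s2]) for s2, p, r in outcomes)
                   for outcomes in P[s].values())
            for s in P}

V = {0: 0.0, 1: 0.0}
diffs = []  # sup-norm distance between successive iterates
for _ in range(60):
    new_V = bellman_update(V)
    diffs.append(max(abs(new_V[s] - V[s]) for s in V))
    V = new_V

# Each difference should be at most gamma times the previous one,
# so the sequence of iterates is Cauchy and converges to the fixed point V*
print(V, diffs[-1])
```

This is exactly the argument the proof formalizes: contraction forces the iterates toward the unique fixed point of $T$, which the Bellman equations identify as $V^*$.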
# Turbine isentropic efficiency A steam turbine performs with an isentropic efficiency of $\eta_t = 0.84$. The inlet conditions are 4 MPa and 650°C, with a mass flow rate of 100 kg/s, and the exit pressure is 10 kPa. Assume the turbine is adiabatic. ![Turbine](../../images/turbine.png) **Problem:** - Determine the power produced by the turbine - Determine the rate of entropy generation ``` import numpy as np import cantera as ct from pint import UnitRegistry ureg = UnitRegistry() Q_ = ureg.Quantity ``` We can start by specifying state 1 and the other known quantities: ``` temp1 = Q_(650, 'degC') pres1 = Q_(4, 'MPa') state1 = ct.Water() state1.TP = temp1.to('K').magnitude, pres1.to('Pa').magnitude mass_flow_rate = Q_(100, 'kg/s') efficiency = 0.84 pres2 = Q_(10, 'kPa') ``` To apply the isentropic efficiency, we'll need to separately consider the real turbine and an equivalent turbine operating in a reversible manner. They have the same initial conditions and mass flow rate. For the reversible turbine, an entropy balance gives: \begin{equation} s_{s,2} = s_1 \end{equation} and then with $P_2$ and $s_{s,2}$ we can fix state 2 for the reversible turbine: ``` state2_rev = ct.Water() state2_rev.SP = state1.s, pres2.to('Pa').magnitude state2_rev() ``` Then, we can do an energy balance for the reversible turbine, which is also steady state and adiabatic: $$ \dot{m} h_1 = \dot{m} h_{s,2} + \dot{W}_{s,t} $$ Then, recall that the isentropic efficiency is defined as $$ \eta_t = \frac{\dot{W}_t}{\dot{W}_{s,t}} \;, $$ so we can obtain the actual turbine work using $\dot{W}_t = \eta_t \dot{W}_{s,t}$ : ``` work_isentropic = mass_flow_rate * ( Q_(state1.h, 'J/kg') - Q_(state2_rev.h, 'J/kg') ) work_actual = efficiency * work_isentropic print(f'Actual turbine work: {work_actual.to(ureg.megawatt): .2f}') ``` Then, we can perform an energy balance on the actual turbine: $$ \dot{m} h_1 = \dot{m} h_2 + \dot{W}_t \;, $$ which we can use with the exit pressure to fix state 2. 
``` enthalpy2 = Q_(state1.h, 'J/kg') - (work_actual / mass_flow_rate) state2 = ct.Water() state2.HP = enthalpy2.to('J/kg').magnitude, pres2.to('Pa').magnitude state2() ``` Finally, we can perform an entropy balance on the actual turbine: $$ \dot{m} s_1 + \dot{S}_{\text{gen}} = \dot{m} s_2 \;, $$ which allows us to find the rate of entropy generation. ``` entropy_gen = mass_flow_rate * ( Q_(state2.s, 'J/(kg K)') - Q_(state1.s, 'J/(kg K)') ) print(f'rate of entropy generation: {entropy_gen.to("kW/K"): .2f}') ```
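Since all the physics here reduces to property lookups plus algebra, the bookkeeping can be sketched without Cantera or Pint. The property values below are placeholders chosen for easy arithmetic, not real steam-table data:

```python
# Hypothetical property values (kJ/kg and kJ/(kg K)); real values come from steam tables
h1, s1 = 3800.0, 7.4  # turbine inlet enthalpy and entropy
h2s = 2300.0          # isentropic exit enthalpy at P2, where s2s = s1
s2 = 7.9              # actual exit entropy, found from (h2, P2)

mdot = 100.0          # mass flow rate, kg/s
eta = 0.84            # isentropic efficiency

w_isentropic = mdot * (h1 - h2s)  # kW, power of the reversible turbine
w_actual = eta * w_isentropic     # kW, actual power output
h2 = h1 - w_actual / mdot         # actual exit enthalpy from the energy balance
s_gen = mdot * (s2 - s1)          # kW/K, entropy generation rate

print(w_actual, h2, s_gen)
```

With these made-up numbers the ideal power is 150 MW, so the actual output is 0.84 × 150 MW = 126 MW; with Cantera the enthalpies and entropies simply come from the equation of state instead of being typed in by hand.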
**Note**: Click on "*Kernel*" > "*Restart Kernel and Clear All Outputs*" in [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/) *before* reading this notebook to reset its output. If you cannot run this file on your machine, you may want to open it [in the cloud <img height="12" style="display: inline-block" src="../static/link/to_mb.png">](https://mybinder.org/v2/gh/webartifex/intro-to-python/develop?urlpath=lab/tree/08_mfr/00_content.ipynb). # Chapter 8: Map, Filter, & Reduce In this chapter, we continue the study of sequential data by looking at memory efficient ways to process the elements in a sequence. That is an important topic for the data science practitioner who must be able to work with data that does *not* fit into a single computer's memory. As shown in [Chapter 4 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/04_iteration/02_content.ipynb#Containers-vs.-Iterables), both the `list` objects `[0, 1, 2, 3, 4]` and `[1, 3, 5, 7, 9]` on the one side and the `range` objects `range(5)` and `range(1, 10, 2)` on the other side allow us to loop over the same numbers. However, the latter two only create *one* `int` object in every iteration while the former two create *all* `int` objects before the loop even starts. In this aspect, we consider `range` objects to be "rules" in memory that know how to calculate the numbers *without* calculating them. In [Chapter 7 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/07_sequences/01_content.ipynb#The-list-Type), we see how the built-in [list() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#func-list) constructor **materializes** the `range(1, 13)` object into the `list` object `[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]`. 
In other words, we make `range(1, 13)` calculate *all* numbers at once and store them in a `list` object for further processing. In many cases, however, it is not necessary to do that, and, in this chapter, we look at other types of "rules" in memory and how we can compose different "rules" together to implement bigger computations. Next, we take a step back and continue with a simple example involving the familiar `numbers` list. Then, we iteratively exchange `list` objects with "rule"-like objects *without* changing the overall computation at all. As computations involving sequential data are commonly classified into three categories **map**, **filter**, or **reduce**, we do so too for our `numbers` example. ``` numbers = [7, 11, 8, 5, 3, 12, 2, 6, 9, 10, 1, 4] ``` ## Mapping **Mapping** refers to the idea of applying a transformation to every element in a sequence. For example, let's square each element in `numbers` and add `1` to the squares. In essence, we apply the transformation $y := x^2 + 1$ as expressed with the `transform()` function below. ``` def transform(element): """Map elements to their squares plus 1.""" return (element ** 2) + 1 ``` With the syntax we know so far, we revert to a `for`-loop that iteratively appends the transformed elements to an initially empty `transformed_numbers` list. ``` transformed_numbers = [] for old in numbers: new = transform(old) transformed_numbers.append(new) transformed_numbers ``` As this kind of data processing is so common, Python provides the [map() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#map) built-in. In its simplest usage form, it takes two arguments: A transformation `function` that takes exactly *one* positional argument and an `iterable` that provides the objects to be mapped. 
We call [map() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#map) with a reference to the `transform()` function and the `numbers` list as the arguments and store the result in the variable `transformer` to inspect it. ``` transformer = map(transform, numbers) ``` We might expect to get back a materialized sequence (i.e., all elements exist in memory), and a `list` object would feel the most natural because of the type of the `numbers` argument. However, `transformer` is an object of type `map`. ``` transformer type(transformer) ``` Like `range` objects, `map` objects generate a series of objects "on the fly" (i.e., one by one), and we use the built-in [next() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#next) function to obtain the next object in line. So, we should think of a `map` object as a "rule" stored in memory that only knows how to calculate the next object of possibly *infinitely* many. ``` next(transformer) next(transformer) next(transformer) ``` It is essential to understand that by creating a `map` object with the [map() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#map) built-in, *nothing* happens in memory except the creation of the `map` object. In particular, no second `list` object derived from `numbers` is created. Also, we may view `range` objects as a special case of `map` objects: They are constrained to generating `int` objects only, and the `iterable` argument is replaced with `start`, `stop`, and `step` arguments. 
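To make the analogy concrete, here is a small sketch of our own (not part of the chapter's running example; the names `start`, `stop`, `step`, and `n_steps` are ours) that rebuilds the numbers of a `range` object with a `map` object.

```python
# Reproduce the numbers of range(1, 10, 2) with a map object:
# the transformation turns an index i into start + i * step.
start, stop, step = 1, 10, 2
n_steps = (stop - start + step - 1) // step  # how many numbers the range contains

as_map = map(lambda i: start + i * step, range(n_steps))

print(list(as_map))                    # [1, 3, 5, 7, 9]
print(list(range(start, stop, step)))  # [1, 3, 5, 7, 9]
```

Both objects are lazy "rules": neither computes any number before we iterate over it.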
If we are sure that a `map` object generates a *finite* number of elements, we may materialize them into a `list` object with the built-in [list() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#func-list) constructor. Below, we "pull out" the remaining `int` objects from `transformer`, which itself is derived from a *finite* `list` object.

```
list(transformer)
```

In summary, instead of creating an empty list first and appending to it in a `for`-loop as above, we write the following one-liner and obtain an equal `transformed_numbers` list.

```
transformed_numbers = list(map(transform, numbers))

transformed_numbers
```

## Filtering

**Filtering** refers to the idea of creating a subset of a sequence with a **boolean filter** `function` that indicates if an element should be kept (i.e., `True`) or not (i.e., `False`). In the example, let's only keep the even elements in `numbers`. The `is_even()` function implements that as a filter.

```
def is_even(element):
    """Filter out odd numbers."""
    if element % 2 == 0:
        return True
    return False
```

As `element % 2 == 0` is already a boolean expression, we could shorten `is_even()` like so.

```
def is_even(element):
    """Filter out odd numbers."""
    return element % 2 == 0
```

As before, we first use a `for`-loop that appends the elements to be kept iteratively to an initially empty `even_numbers` list.

```
even_numbers = []

for number in transformed_numbers:
    if is_even(number):
        even_numbers.append(number)

even_numbers
```

Analogously to the `map` object above, we use the [filter() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#filter) built-in to create an object of type `filter` and assign it to `evens`.
```
evens = filter(is_even, transformed_numbers)

evens
type(evens)
```

`evens` works like `transformer` above: With the built-in [next() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#next) function we obtain the even numbers one by one. So, the "next" element in line is simply the next even `int` object the `filter` object encounters.

```
transformed_numbers

next(evens)
next(evens)
next(evens)
```

As above, we could create a materialized `list` object with the [list() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#func-list) constructor.

```
list(filter(is_even, transformed_numbers))
```

We may also chain `map` and `filter` objects derived from the original `numbers` list. As the entire cell is *one* big expression consisting of nested function calls, we read it from the inside out.

```
list(
    filter(
        is_even,
        map(transform, numbers),
    )
)
```

Using the [map() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#map) and [filter() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#filter) built-ins, we can quickly switch the order: Filter first and then transform the remaining elements. This variant equals the "*A simple Filter*" example in [Chapter 4 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/04_iteration/03_content.ipynb#Example:-A-simple-Filter). On the contrary, code with `for`-loops and `if` statements is more tedious to adapt. Additionally, `map` and `filter` objects loop "at the C level" and are a lot faster because of that. For these reasons, experienced Pythonistas tend to *not* use explicit `for`-loops so often.
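This preference often shows up as *generator expressions*, a closely related syntax that this chapter does not cover; the sketch below is our own side note. The chained filter-and-map from above can be written as one lazy expression.

```python
numbers = [7, 11, 8, 5, 3, 12, 2, 6, 9, 10, 1, 4]

def transform(element):
    """Map elements to their squares plus 1."""
    return (element ** 2) + 1

# A generator expression is lazy, just like chained filter and map
# objects: nothing is computed until we iterate over it.
lazy_evens = (transform(x) for x in numbers if x % 2 == 0)

print(list(lazy_evens))  # [65, 145, 5, 37, 101, 17]
```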
``` list( map( transform, filter(is_even, numbers), ) ) ``` ## Reducing Lastly, **reducing** sequential data means to summarize the elements into a single statistic. A simple example is the built-in [sum() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#sum) function. ``` sum( map( transform, filter(is_even, numbers), ) ) ``` Other straightforward examples are the built-in [min() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#min) or [max() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#max) functions. ``` min(map(transform, filter(is_even, numbers))) max(map(transform, filter(is_even, numbers))) ``` [sum() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#sum), [min() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#min), and [max() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#max) can be regarded as special cases. The generic way of reducing a sequence is to apply a function of *two* arguments on a rolling horizon: Its first argument is the reduction of the elements processed so far, and the second the next element to be reduced. For illustration, let's replicate [sum() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#sum) as such a function, called `sum_alt()`. Its implementation only adds two numbers. ``` def sum_alt(sum_so_far, next_number): """Reduce a sequence by addition.""" return sum_so_far + next_number ``` Further, we create a *new* `map` object derived from `numbers` ... 
``` evens_transformed = map(transform, filter(is_even, numbers)) ``` ... and loop over all *but* the first element it generates. The latter is captured separately as the initial `result` with the [next() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#next) function. We know from above that `evens_transformed` generates *six* elements. That is why we see *five* growing `result` values resembling a [cumulative sum](http://mathworld.wolfram.com/CumulativeSum.html). The first `210` is the sum of the first two elements generated by `evens_transformed`, `65` and `145`. So, we also learn that `map` objects, and analogously `filter` objects, are *iterable* as we may loop over them. ``` result = next(evens_transformed) for number in evens_transformed: result = sum_alt(result, number) print(result, end=" ") # line added for didactical purposes ``` The final `result` is the same `370` as above. ``` result ``` The [reduce() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functools.html#functools.reduce) function in the [functools <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functools.html) module in the [standard library <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/index.html) provides more convenience (and speed) replacing the `for`-loop. It takes two arguments, `function` and `iterable`, in the same way as the [map() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#map) and [filter() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#filter) built-ins. 
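One detail worth knowing, and an addition of ours rather than part of the chapter: [reduce()](https://docs.python.org/3/library/functools.html#functools.reduce) also accepts an *optional* third argument, an initial value, which seeds the rolling reduction and makes reducing an *empty* iterable safe.

```python
from functools import reduce

def sum_alt(sum_so_far, next_number):
    """Reduce a sequence by addition."""
    return sum_so_far + next_number

# With an initial value, the reduction starts from 0 instead of the
# first element, so an empty iterable no longer raises a TypeError.
total = reduce(sum_alt, [7, 11, 8, 5, 3, 12, 2, 6, 9, 10, 1, 4], 0)
empty_total = reduce(sum_alt, [], 0)

print(total)        # 78
print(empty_total)  # 0
```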
[reduce() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functools.html#functools.reduce) is **[eager <img height="12" style="display: inline-block" src="../static/link/to_wiki.png">](https://en.wikipedia.org/wiki/Eager_evaluation)** meaning that all computations implied by the contained `map` and `filter` "rules" are executed immediately, and the code cell evaluates to `370`. On the contrary, [map() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#map) and [filter() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#filter) create **[lazy <img height="12" style="display: inline-block" src="../static/link/to_wiki.png">](https://en.wikipedia.org/wiki/Lazy_evaluation)** `map` and `filter` objects, and we have to use the [next() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#next) function to obtain the elements, one by one. ``` from functools import reduce reduce( sum_alt, map( transform, filter(is_even, numbers), ) ) ``` ## Lambda Expressions [map() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#map), [filter() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#filter), and [reduce() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functools.html#functools.reduce) take a `function` object as their first argument, and we defined `transform()`, `is_even()`, and `sum_alt()` to be used precisely for that. Often, such functions are used *only once* in a program. However, the primary purpose of functions is to *reuse* them. 
In such cases, it makes more sense to define them "anonymously" right at the position where the first argument goes. As mentioned in [Chapter 2 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/02_functions/00_content.ipynb#Anonymous-Functions), we use `lambda` expressions to create `function` objects *without* a name referencing them. So, the above `sum_alt()` function could be rewritten as a `lambda` expression like so ... ``` lambda sum_so_far, next_number: sum_so_far + next_number ``` ... or even shorter. ``` lambda x, y: x + y ``` With the new concepts in this section, we can rewrite the entire example in just a few lines of code *without* any `for`, `if`, and `def` statements. The resulting code is concise, easy to read, quick to modify, and even faster in execution. Most importantly, it is optimized to handle big amounts of data as *no* temporary `list` objects are materialized in memory. ``` numbers = [7, 11, 8, 5, 3, 12, 2, 6, 9, 10, 1, 4] evens = filter(lambda x: x % 2 == 0, numbers) transformed = map(lambda x: (x ** 2) + 1, evens) sum(transformed) ``` If `numbers` comes as a sorted sequence of whole numbers, we may use the [range() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#func-range) built-in and get away *without* any materialized `list` object in memory at all! ``` numbers = range(1, 13) evens = filter(lambda x: x % 2 == 0, numbers) transformed = map(lambda x: (x ** 2) + 1, evens) sum(transformed) ``` To additionally save the temporary variables, `numbers`, `evens`, and `transformed`, we could write the entire computation as *one* expression. 
``` sum( map( lambda x: (x ** 2) + 1, filter( lambda x: x % 2 == 0, range(1, 13), ) ) ) ``` PythonTutor visualizes the differences in the number of computational steps and memory usage: - [Version 1 <img height="12" style="display: inline-block" src="../static/link/to_py.png">](http://pythontutor.com/visualize.html#code=def%20is_even%28element%29%3A%0A%20%20%20%20if%20element%20%25%202%20%3D%3D%200%3A%0A%20%20%20%20%20%20%20%20return%20True%0A%20%20%20%20return%20False%0A%0Adef%20transform%28element%29%3A%0A%20%20%20%20return%20%28element%20**%202%29%20%2B%201%0A%0Anumbers%20%3D%20list%28range%281,%2013%29%29%0A%0Aevens%20%3D%20%5B%5D%0Afor%20number%20in%20numbers%3A%0A%20%20%20%20if%20is_even%28number%29%3A%0A%20%20%20%20%20%20%20%20evens.append%28number%29%0A%0Atransformed%20%3D%20%5B%5D%0Afor%20number%20in%20evens%3A%0A%20%20%20%20transformed.append%28transform%28number%29%29%0A%0Aresult%20%3D%20sum%28transformed%29&cumulative=false&curInstr=0&heapPrimitives=nevernest&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false): With `for`-loops, `if` statements, and named functions -> **116** steps and **3** `list` objects - [Version 2 <img height="12" style="display: inline-block" src="../static/link/to_py.png">](http://pythontutor.com/visualize.html#code=numbers%20%3D%20range%281,%2013%29%0Aevens%20%3D%20filter%28lambda%20x%3A%20x%20%25%202%20%3D%3D%200,%20numbers%29%0Atransformed%20%3D%20map%28lambda%20x%3A%20%28x%20**%202%29%20%2B%201,%20evens%29%0Aresult%20%3D%20sum%28transformed%29&cumulative=false&curInstr=0&heapPrimitives=nevernest&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false): With named `map` and `filter` objects -> **58** steps and **no** `list` object - [Version 3 <img height="12" style="display: inline-block" 
src="../static/link/to_py.png">](http://pythontutor.com/visualize.html#code=result%20%3D%20sum%28map%28lambda%20x%3A%20%28x%20**%202%29%20%2B%201,%20filter%28lambda%20x%3A%20x%20%25%202%20%3D%3D%200,%20range%281,%2013%29%29%29%29&cumulative=false&curInstr=0&heapPrimitives=nevernest&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false): Everything in *one* expression -> **55** steps and **no** `list` object

Versions 2 and 3 are the same, except for the three additional steps required to create the temporary variables. The *major* downside of Version 1 is that, in the worst case, it may need *three times* the memory as compared to the other two versions! An experienced Pythonista would probably go with Version 2 in a production system to keep the code readable and maintainable.

The map-filter-reduce paradigm has caught attention in recent years as it enables **[parallel computing <img height="12" style="display: inline-block" src="../static/link/to_wiki.png">](https://en.wikipedia.org/wiki/Parallel_computing)**, which becomes important when dealing with large amounts of data. The memory behavior shown in this section provides an idea of why that is.
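To build some intuition for the parallelization claim, here is a sketch of our own (the chunking and the names `chunks`, `partials`, and `total` are not from the chapter): the map and filter steps touch each chunk of the data in isolation, so the partial results could be computed on different cores or machines, and only the final reduce step has to combine them.

```python
from functools import reduce

numbers = list(range(1, 13))

# Split the data into independent chunks, as a parallel framework would.
chunks = [numbers[i:i + 4] for i in range(0, len(numbers), 4)]

# The map and filter steps run on each chunk in isolation, so each
# partial sum could be computed by a different worker.
partials = [
    sum(map(lambda x: (x ** 2) + 1, filter(lambda x: x % 2 == 0, chunk)))
    for chunk in chunks
]

# Only the reduce step needs to see all partial results: because
# addition is associative, combining them in any order works.
total = reduce(lambda x, y: x + y, partials)

print(total)  # 370, the same result as before
```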
# 1) Intro to Groupby Module
+ Groupby is like a container where each bundle has a common theme

```
import pandas as pd

fortune = pd.read_csv('Data/fortune1000.csv', index_col=['Rank'])
fortune.head(3)
```

## Groupby TIPS
- When we consider Groupby conditions, we want to group by a column with the smallest number of unique categories (such as Sector). There are multiple companies that fall under each Sector. Similarly, Industry is a good candidate for groupby too.
- There is no point grouping by Company in this example because every Company value is unique.

```
# check the number of unique values
fortune.nunique()
```

## How does it work behind the scenes?
+ Pandas will look at a Sector value (for example: Retailing). It then loops through the dataset and collects all the rows that fall under that specific Sector (for example: it will collect Walmart, then proceed through all 1000 companies, taking any rows that fall under that same Sector). The same procedure continues for each Sector.
+ Once the collection is done, pandas bundles the groups into a larger object, which we see as a DataFrameGroupBy object.
+ So we can think of it like a container where each bundle has a common theme (in our case, a common Sector).

**The Groupby object itself doesn't do anything until we call methods upon it.**

```
sectors = fortune.groupby('Sector')
sectors
```

#### Take note that DataFrame and DataFrameGroupBy objects are completely different.

```
type(fortune), type(sectors)
```

------
# 2) The `.groupby()` Method

```
fortune = pd.read_csv('Data/fortune1000.csv', index_col=['Rank'])
sectors = fortune.groupby('Sector')
fortune.head(3)
```

## Calling `len()` on a Groupby object gives us the number of groupings
There are 21 unique sectors in our object.

```
len(sectors)
fortune['Sector'].nunique()
```

## `.size()` of a groupby object gives us the size of each grouping.

```
sectors.size()
```

### It is very similar to calling `value_counts()`
+ The only difference is the sorting.
```
fortune['Sector'].value_counts()
```

## `first()` gives us the first row of every grouping
We get 21 values as we have 21 groupings in our groupby object. Each value is the first row of each Sector.

```
fortune.head(3)
sectors.first()
```

## `last()` gives us the last row of each grouping

```
fortune.tail(3)
sectors.last()
```

## The `groups` attribute gives a dictionary object
+ keys are the grouping names (in our case, Sector names)
+ values are lists of index labels (in our case, the index labels of the companies that belong to each Sector)

```
fortune.head(1)
fortune.loc[24]  # we can check which Sector the company belongs to.
sectors.groups
```

------
# 3) Retrieve A Group with the `.get_group()` Method

```
fortune = pd.read_csv('Data/fortune1000.csv', index_col=['Rank'])
sectors = fortune.groupby('Sector')
fortune.head(3)

sectors.get_group('Energy')
sectors.get_group('Technology')
sectors.get_group('Apparel')

# same as calling like this
fortune[fortune['Sector'] == 'Apparel']
```

------
# 4) Methods on the `Groupby` Object and `DataFrame` Columns

```
fortune = pd.read_csv('Data/fortune1000.csv', index_col=['Rank'])
sectors = fortune.groupby('Sector')
fortune.head(3)
```

## Using the `max()` method
+ Pandas will give the largest value of each column in every grouping.
+ Example: for a string column such as 'Company', it will get the lexicographically largest value in each grouping.

```
sectors.get_group('Energy').head(1)
```

**We can see that 'Woodward' is the largest value (the last value in ascending order) in the Aerospace & Defense Sector.**

```
sectors.max()
```

## Using `min()`
+ This is the opposite of max() and gives the smallest value.
```
sectors.min()
```

## Using `sum()` and `mean()`
+ This will give the **Total/Mean of every numeric column** in each grouping.
+ In our case it will give the Sum/Mean of Revenue, Profits, and Employees.

```
# we can verify this like so
sectors.get_group('Apparel')['Employees'].sum()
sectors.get_group('Apparel')['Revenue'].mean()

sectors.sum()
sectors.mean()
```

## We can directly do mathematical calculations on the Groupby object too.

```
sectors['Revenue'].sum()
sectors['Profits'].mean()
sectors['Revenue'].min()
sectors['Employees'].max()

sectors[['Revenue', 'Profits']].sum()
sectors[['Employees', 'Profits']].max()
```

-------
# 5) Grouping by Multiple Columns

### Let's say we want to group by `Sector` and `Industry`

```
fortune = pd.read_csv('Data/fortune1000.csv', index_col=['Rank'])
sectors = fortune.groupby(['Sector', 'Industry'])  # group by multiple columns (in our case Sector and Industry)
fortune.head(3)

sectors.groups
```

### This allows us to go into more detail by breaking the data down further

```
sectors.size()
sectors.sum()
sectors['Revenue'].sum()
sectors['Employees'].mean()
```

-----
# 6) The `.agg()` Method

```
fortune = pd.read_csv('Data/fortune1000.csv', index_col=['Rank'])
sectors = fortune.groupby('Sector')
fortune.head(3)

sectors.mean()
sectors['Profits'].sum()
```

### The `.agg()` method gives us the best of both worlds by combining the above usages. We need to provide
+ Option 1) a dictionary with key-value pairs of **`column_name`: `aggregation method`**
+ Option 2) a **list of agg methods** which will be performed on the numerical columns
+ Option 3) a combination of the above two options

We can mix and match based on our requirements.
## Option 1)

```
sectors.agg({
    'Revenue': 'sum',
    'Profits': 'sum',
    'Employees': 'mean'
})
```

## Option 2)

```
sectors.agg(['size', 'sum', 'mean', 'count', 'max', 'min', 'std'])  # count is like size but excludes missing values
sectors.agg(['size', 'sum', 'mean'])
```

## Option 3)

```
sectors.agg({
    'Revenue': ['sum', 'mean'],
    'Profits': 'sum',
    'Employees': 'mean'
})
```

------
# 7) Iterating through Groups

```
fortune = pd.read_csv('Data/fortune1000.csv', index_col=['Rank'])
sectors = fortune.groupby('Sector')
fortune.head(3)
```

## We want to extract the company with the most Revenue in each group. How do we do that?

```
sectors['Revenue'].max()  # this doesn't give the whole row information.
```

### First create an empty dataframe with the same columns as the original DF

```
df = pd.DataFrame(columns=fortune.columns)
df
```

### Then we iterate through the groups

```
for sector, data in sectors:
    highest_revenue_company_in_group = data.nlargest(1, 'Revenue')  # get the single row with the highest revenue
    # DataFrame.append was removed in pandas 2.0; pd.concat is the replacement
    df = pd.concat([df, highest_revenue_company_in_group])

df
```

## Example 2) Let's say we want, for each Location (city), the company with the highest Revenue.

```
# first create a groupby object based on cities
cities = fortune.groupby('Location')
cities

# create empty df with columns
df = pd.DataFrame(columns=fortune.columns)
df

# loop through to find the largest one
for city, data in cities:
    highest_revenue_company_in_city = data.nlargest(1, 'Revenue')
    df = pd.concat([df, highest_revenue_company_in_city])

df
```

-------
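The same "top row per group" result can be obtained without an explicit loop. The sketch below is our own illustration on a toy DataFrame (the Fortune 1000 file is not needed): `idxmax()` returns, per group, the index label of the row with the highest Revenue, and `.loc` then pulls out those whole rows in one shot.

```python
import pandas as pd

# A toy stand-in for the Fortune 1000 data.
toy = pd.DataFrame({
    'Company': ['A', 'B', 'C', 'D'],
    'Sector': ['Tech', 'Tech', 'Energy', 'Energy'],
    'Revenue': [100, 250, 300, 50],
})

# Per Sector, idxmax() gives the index label of the max-Revenue row;
# .loc turns those labels back into full rows.
top_per_sector = toy.loc[toy.groupby('Sector')['Revenue'].idxmax()]
print(top_per_sector)
```

This is usually faster than looping over groups and concatenating, because all the work happens inside pandas.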
# Lab 3 Accuracy of Quantum Phase Estimation

Prerequisite
- [Ch.3.5 Quantum Fourier Transform](https://qiskit.org/textbook/ch-algorithms/quantum-fourier-transform.html)
- [Ch.3.6 Quantum Phase Estimation](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html)

Other relevant materials
- [QCQI] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information

```
from qiskit import *
import numpy as np
from qiskit.visualization import plot_histogram
import qiskit.tools.jupyter
from qiskit.tools.monitor import job_monitor
from qiskit.ignis.mitigation.measurement import *
import matplotlib.pyplot as plt
```

<h2 style="font-size:24px;">Part 1: Performance of Quantum Phase Estimation</h2>
<br>
<div style="background: #E8E7EB; border-radius: 5px; -moz-border-radius: 5px;">
<p style="background: #800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; "><b>Goal</b></p>
<p style=" padding: 0px 0px 10px 10px; font-size:16px;">Investigate the relationship between the number of qubits employed in QPE and the accuracy of the phase estimate achieved with high probability.</p>
</div>

The accuracy of the value estimated through Quantum Phase Estimation (QPE) and its probability of success depend on the number of qubits employed in QPE circuits. Therefore, one might want to know the number of qubits necessary to achieve the targeted level of QPE performance, especially when the phase that needs to be determined cannot be decomposed into a finite binary expansion. In Part 1 of this lab, we examine the number of qubits required to accomplish the desired accuracy and probability of success in determining the phase through QPE.

<h3 style="font-size: 20px">1. Find the probability of successfully obtaining an estimate of the phase value accurate to $2^{-2}$ with four counting qubits.</h3>

<h4 style="font-size: 17px">&#128211;Step A. 
Set up the QPE circuit with four counting qubits and save the circuit to the variable 'qc4'. Execute 'qc4' on a qasm simulator. Plot the histogram of the result.</h4>

Check the QPE chapter in the Qiskit textbook (go to the `3. Example: Getting More Precision` section [here](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html)) for the circuit.

```
def qft(n):
    """Creates an n-qubit QFT circuit"""
    circuit = QuantumCircuit(n)
    def swap_registers(circuit, n):
        for qubit in range(n//2):
            circuit.swap(qubit, n-qubit-1)
        return circuit
    def qft_rotations(circuit, n):
        """Performs qft on the first n qubits in circuit (without swaps)"""
        if n == 0:
            return circuit
        n -= 1
        circuit.h(n)
        for qubit in range(n):
            circuit.cp(np.pi/2**(n-qubit), qubit, n)
        qft_rotations(circuit, n)
    qft_rotations(circuit, n)
    swap_registers(circuit, n)
    return circuit

## Start your code to create the circuit, qc4


qc4.draw()

## Run this cell to simulate 'qc4' and to plot the histogram of the result
sim = Aer.get_backend('qasm_simulator')
shots = 20000
count_qc4 = execute(qc4, sim, shots=shots).result().get_counts()
plot_histogram(count_qc4, figsize=(9,5))
```

Having performed `Step A` successfully, you will have obtained a distribution similar to the one shown below, with the highest probability at `0101`, which corresponds to the estimated $\phi$ value, `0.3125`.

![](image/L3_qc4_hist.png)

Since the number of counting qubits used for the circuit is four, the best estimated value should be accurate to $\delta = 2^{-4} = 0.0625$. However, since $\phi = 1/3$ cannot be expressed in a finite number of bits, there are multiple possible outcomes, and the estimation by QPE here is not always bounded by this accuracy. Running the following cell shows the same histogram but with all possible estimated $\phi$ values on the x-axis. 
```
phi_est = np.array([round(int(key, 2)/2**t,3) for key in list(count_qc4.keys())])
key_new = list(map(str, phi_est))
count_new = dict(zip(key_new, count_qc4.values()))
plot_histogram(count_new, figsize=(9,5))
```

**Suppose the outcome of the final measurement is $m$, and let $b$ be the best estimation, which is `0.3125` in this case.**

<h4 style="font-size: 17px">&#128211;Step B. Find $e$, the maximum integer difference from the best estimation <code>0101</code>, so that all the outcomes $m$ approximate $\phi$ to an accuracy $2^{-2}$ when $|m - b| \leq \frac{e}{2^{t}}$. </h4>

In this case, the values of $t$ and $b$ are $4$ and $0.3125$, respectively. For example, under $e = 1$, the considered outcomes are `0100`, `0101`, `0110`, which correspond to the values of $m$: $0.25,~0.312,~0.375$, respectively, and all of them approximate the value $\frac{1}{3}$ to an accuracy $2^{-2}$.

```
## Your code goes here
```

<h4 style="font-size: 17px">&#128211;Step C. Compute the probability of obtaining an approximation correct to an accuracy $2^{-2}$. Verify that the computed probability value is larger than or equal to $1- \frac{1}{2(2^{(t-n)}-2)}$, where $t$ is the number of counting bits and $2^{-n}$ is the desired accuracy. </h4>

Now it is easy to evaluate the probability of success from the histogram, since all the outcomes that approximate $\phi$ to the accuracy $2^{-2}$ can be found based on the maximum difference $e$ from the best estimate.

```
## Your code goes here
```

<h3 style="font-size: 20px">2. Compute the probability of success for the accuracy $2^{-2}$ when the number of counting qubits, $t$, varies from four to nine. Compare your result with the equation $t=n+log(2+\frac{1}{2\epsilon})$ when $2^{-n}$ is the desired accuracy and $\epsilon$ is 1 - probability of success.</h3>

The following plot shows the relationship between the number of counting qubits, $t$, and the minimum probability of success to approximate the phase to an accuracy $2^{-2}$. 
Check Ch. 5.2.1, Performance and requirements, in `[QCQI]`.

```
y = lambda t, n: 1-1/(2*(2**(t-n)-2))
t_q = np.linspace(3.5, 9.5, 100 )
p_min = y(t_q, 2)

plt.figure(figsize=(7, 5))
plt.plot(t_q, p_min, label='$p_{min}$')
plt.xlabel('t: number of counting qubits')
plt.ylabel('probability of success for the accuracy $2^{-2}$')
plt.legend(loc='lower right')
plt.title('Probability of success for different number of counting qubits')
plt.show()
```

<h4 style="font-size: 17px">&#128211;Step A. Construct QPE circuits to estimate $\phi$ when $\phi = 1/3$ for the different numbers of counting qubits, $t = [4, 5, 6, 7, 8, 9]$. Store all the circuits in a list variable 'circ' to simulate all the circuits at once as we did in Lab 2. </h4>

```
## Your Code to create the list variable 'circ' goes here


# Run this cell to simulate `circ` and plot the histograms of the results
results = execute(circ, sim, shots=shots).result()
n_circ = len(circ)
counts = [results.get_counts(idx) for idx in range(n_circ)]

fig, ax = plt.subplots(n_circ,1,figsize=(25,40))
for idx in range(n_circ):
    plot_histogram(counts[idx], ax=ax[idx])
plt.tight_layout()
```

<h4 style="font-size: 17px">&#128211;Step B. Determine $e$, the maximum integer difference from the best estimation, for the different numbers of counting qubits, $t = [4, 5, 6, 7, 8, 9]$. Verify the relationship $e=2^{t-n}-1$, where $n=2$ since the desired accuracy is $2^{-2}$ in this case. </h4>

```
## Your Code goes here
```

If you successfully calculated the $e$ values for all the counting qubits, $t=[4,5,6,7,8,9]$, you will be able to generate the following graph, which verifies the relationship $e = 2^{t-2} -1$ with the $e$ values that you computed.

![](image/L3_e_max.png)

<h4 style="font-size: 17px">&#128211;Step C. Evaluate the probability of success estimating $\phi$ to an accuracy $2^{-2}$ for all the values of $t$, the number of counting qubits. Save the probabilities to the list variable, 'prob_success'. 
</h4>

```
## Your code to create the list variable, 'prob_success', goes here
```

<h4 style="font-size: 17px">&#128211;Step D. Overlay the results of Step C on the graph that shows the relationship between the number of counting qubits, $t$, and the minimum probability of success to approximate the phase to an accuracy $2^{-2}$. Understand the result. </h4>

```
## Your code goes here
```

![](image/L3_prob_t.png)

Your plot should be similar to the one above. The line plot in the left panel shows the minimum success probability to estimate $\phi$ within the accuracy $2^{-2}$ as the number of counting qubits varies. The overlaid orange dots are the same values, but from the simulation, which confirms that the line plot represents the lower bound of this relationship. The right panel displays the same result but zoomed in by adjusting the y-axis range.

The following graph exhibits the relationships with different accuracy levels. The relationship, $t=n+log(2+\frac{1}{2\epsilon})$, indicates the number of counting qubits $t$ needed to estimate $\phi$ to an accuracy $2^{-n}$ with probability of success at least $1-\epsilon$, as we validated above. 
```
t = np.linspace(5.1, 10, 100)
prob_success_n = [y(t, n) for n in [2, 3, 4]]
prob_n2, prob_n3, prob_n4 = prob_success_n[0], prob_success_n[1], prob_success_n[2]

plt.figure(figsize=(7, 5))
plt.plot(t, prob_n2, t, prob_n3, t, prob_n4, t, [1]*len(t),'--' )
plt.axis([5, 10, 0.7, 1.05])
plt.xlabel('t: number of counting qubits')
plt.ylabel('probability of success for the accuracy $2^{-n}$')
plt.legend(['n = 2', 'n = 3', 'n = 4'], loc='lower right')
plt.grid(True)
```

<h2 style="font-size:24px;">Part 2: QPE on Noisy Quantum System</h2>
<br>
<div style="background: #E8E7EB; border-radius: 5px; -moz-border-radius: 5px;">
<p style="background: #800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; "><b>Goal</b></p>
<p style=" padding: 0px 0px 10px 10px; font-size:16px;">Run the QPE circuit on a real quantum system to understand the result and limitations when using noisy quantum systems</p>
</div>

The accuracy analysis that we performed in Part 1 would not be correct when the QPE circuit is executed on present-day noisy quantum systems. In Part 2, we will obtain QPE results by running the circuit on a backend from IBM Quantum Experience to examine how noise affects the outcome and learn techniques to reduce its impact.

<h4 style="font-size: 17px">&#128211;Step A. Load your account and select the backend from your provider. </h4>

```
## Your code goes here.
```

<h4 style="font-size: 17px">&#128211;Step B. Generate multiple (as many as you want) transpiled circuits of <code>qc4</code> that you set up at the beginning of Part 1. Choose one with the minimum circuit depth and one with the maximum circuit depth.</h4>

Transpile the circuit with the parameter `optimization_level = 3` to reduce the error in the result. As we learned in Lab 1, Qiskit by default uses a stochastic swap mapper to place the needed SWAP gates, which varies the transpiled circuit results even under the same runtime settings. 
Therefore, to obtain a shorter-depth transpiled circuit, and hence a smaller error in the outcome, transpile `qc4` multiple times and choose the circuit with the minimum depth. Keep the maximum-depth circuit as well, for comparison. ``` ## Your code goes here ``` <h4 style="font-size: 17px">&#128211;Step C. Execute both circuits on the backend that you picked. Plot the histogram of the results and compare them with the simulation result from Part 1.</h4> ``` ## Your code goes here ``` The following shows a sample result. ![](image/L3_QPEresults.png) <h4 style="font-size: 17px">Step D. Measurement Error Mitigation </h4> In the previous step, we used our knowledge of the Qiskit transpiler to get the best result. Here, we try to mitigate the errors in the result further through the measurement error mitigation technique that we learned in Lab 2. <p>&#128211;Construct the circuits to profile the measurement errors of all basis states using the function 'complete_meas_cal'. Obtain the measurement filter object, 'meas_filter', which will be applied to the noisy results to mitigate readout (measurement) error. ``` ## Your Code goes here ``` <p>&#128211;Plot the histogram of the results before and after the measurement error mitigation to show the improvement. ``` ## Your Code goes here ``` The following plot shows a sample result. ![](image/L3_QPEresults_final.png) The figure below displays a simulation result together with sample final results from both the best and worst SWAP mapping cases after applying the measurement error mitigation. In Lab 2, the major source of error was measurement, so the outcomes improved significantly after the mitigation procedure. In the QPE case, however, measurement error does not seem to be the foremost cause of the noise in the result; CNOT gate errors dominate the noise profile. Here, choosing the transpiled circuit with the least depth was the crucial step for reducing the errors in the result. 
![](image/L3_QPE_final.png)
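The min/max-depth selection in Step B boils down to Python's `min`/`max` with a depth key. A sketch of that selection logic with lightweight stand-ins (with Qiskit, the candidates would come from `transpile(qc4, backend, optimization_level=3)`; the `FakeCircuit` class and its depths here are hypothetical):

```python
# Stand-in for a transpiled circuit; only the depth() method matters here.
class FakeCircuit:
    def __init__(self, depth):
        self._depth = depth

    def depth(self):
        return self._depth

# With Qiskit (not run here):
#   circuits = [transpile(qc4, backend, optimization_level=3) for _ in range(10)]
circuits = [FakeCircuit(d) for d in [87, 60, 73, 95, 66]]

qc_best = min(circuits, key=lambda c: c.depth())   # minimum-depth circuit
qc_worst = max(circuits, key=lambda c: c.depth())  # maximum-depth circuit
print(qc_best.depth(), qc_worst.depth())
```

Because the swap mapper is stochastic, generating more candidates increases the chance of finding a shallow mapping.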
# Plot Performance Analysis ``` %load_ext autoreload %autoreload 2 %matplotlib inline import json import matplotlib.pyplot as plt import numpy as np import pandas as pd plt.rcParams['font.family'] = 'sans-serif' plt.rcParams['font.sans-serif'] = 'Roboto Condensed' sizes = [200, 500, 1000, 2000, 5000] repeats = 10 ``` ## Animation FPS ``` animation_fps = pd.read_table('data/performance-analysis/animation-fps.csv', sep=',', skipinitialspace=True) scroll_fps_mean = np.array([ animation_fps['scroll200'].mean(), animation_fps['scroll500'].mean(), animation_fps['scroll1000'].mean(), animation_fps['scroll2000'].mean(), animation_fps['scroll5000'].mean() ]) scroll_fps_std = np.array([ animation_fps['scroll200'].std(), animation_fps['scroll500'].std(), animation_fps['scroll1000'].std(), animation_fps['scroll2000'].std(), animation_fps['scroll5000'].std() ]) panzoom_fps_mean = np.array([ animation_fps['panzoom200'].mean(), animation_fps['panzoom500'].mean(), animation_fps['panzoom1000'].mean(), animation_fps['panzoom2000'].mean(), animation_fps['panzoom5000'].mean() ]) panzoom_fps_std = np.array([ animation_fps['panzoom200'].std(), animation_fps['panzoom500'].std(), animation_fps['panzoom1000'].std(), animation_fps['panzoom2000'].std(), animation_fps['panzoom5000'].std() ]) arrange_fps_mean = np.array([ animation_fps['arrange200'].mean(), animation_fps['arrange500'].mean(), animation_fps['arrange1000'].mean(), animation_fps['arrange2000'].mean(), animation_fps['arrange5000'].mean() ]) arrange_fps_std = np.array([ animation_fps['arrange200'].std(), animation_fps['arrange500'].std(), animation_fps['arrange1000'].std(), animation_fps['arrange2000'].std(), animation_fps['arrange5000'].std() ]) lasso_fps_mean = np.array([ animation_fps['lasso200'].mean(), animation_fps['lasso500'].mean(), animation_fps['lasso1000'].mean(), animation_fps['lasso2000'].mean(), animation_fps['lasso5000'].mean() ]) lasso_fps_std = np.array([ animation_fps['lasso200'].std(), 
animation_fps['lasso500'].std(), animation_fps['lasso1000'].std(), animation_fps['lasso2000'].std(), animation_fps['lasso5000'].std() ]) fig, ax = plt.subplots(figsize=(10, 5),) idx = np.arange(len(sizes)) width = 0.15 ax.bar(idx - width * 1.5, scroll_fps_mean, yerr=scroll_fps_std, label='scroll', width=width, color='#c17da5') ax.bar(idx - width * 0.5, panzoom_fps_mean, yerr=panzoom_fps_std, label='pan-zoom', width=width, color='#6fb2e4') ax.bar(idx + width * 0.5, arrange_fps_mean, yerr=arrange_fps_std, label='arrange', width=width, color='#eee462') ax.bar(idx + width * 1.5, lasso_fps_mean, yerr=lasso_fps_std, label='lasso', width=width, color='#469b76') ax.set_xticklabels(['whatever...'] + sizes) ax.grid(axis='y', color='#8c8c8c', linestyle='--', linewidth=1) ax.set_axisbelow(True) ax.spines['top'].set_visible(False) ax.spines['left'].set_visible(False) ax.spines['right'].set_visible(False) ax.tick_params(axis='both', which='major', labelsize=24) fig.savefig( 'animation-fps.svg', bbox_inches="tight", pad_inches=0 ) ax.plot() ``` ## Initialization Times ``` load_times = 
pd.read_table('data/performance-analysis/initialization-times.csv', sep=',', skipinitialspace=True) photos_times_mean = np.array([ load_times['photos200'].mean() / 1000, load_times['photos500'].mean() / 1000, load_times['photos1000'].mean() / 1000, load_times['photos2000'].mean() / 1000, load_times['photos5000'].mean() / 1000 ]) photos_times_std = np.array([ load_times['photos200'].std() / 1000, load_times['photos500'].std() / 1000, load_times['photos1000'].std() / 1000, load_times['photos2000'].std() / 1000, load_times['photos5000'].std() / 1000 ]) drawings_times_mean = np.array([ load_times['drawings200'].mean() / 1000, load_times['drawings500'].mean() / 1000, load_times['drawings1000'].mean() / 1000, load_times['drawings2000'].mean() / 1000, load_times['drawings5000'].mean() / 1000 ]) drawings_times_std = np.array([ load_times['drawings200'].std() / 1000, load_times['drawings500'].std() / 1000, load_times['drawings1000'].std() / 1000, load_times['drawings2000'].std() / 1000, load_times['drawings5000'].std() / 1000 ]) matrices_times_mean = np.array([ load_times['matrices200'].mean() / 1000, load_times['matrices500'].mean() / 1000, load_times['matrices1000'].mean() / 1000, load_times['matrices2000'].mean() / 1000, load_times['matrices5000'].mean() / 1000 ]) matrices_times_std = np.array([ load_times['matrices200'].std() / 1000, load_times['matrices500'].std() / 1000, load_times['matrices1000'].std() / 1000, load_times['matrices2000'].std() / 1000, load_times['matrices5000'].std() / 1000 ]) load_times.head() fig, ax = plt.subplots(figsize=(10, 5),) idx = np.arange(len(sizes)) width = 0.2 ax.bar(idx - width, photos_times_mean, yerr=photos_times_std, label='photos', width=width, color='#c17da5') ax.bar(idx, drawings_times_mean, yerr=drawings_times_std, label='drawings', width=width, color='#6fb2e4') ax.bar(idx + width, matrices_times_mean, yerr=matrices_times_std, label='matrices', width=width, color='#eee462') ax.set_xticklabels(['whatever...'] + sizes) 
ax.grid(axis='y', color='#8c8c8c', linestyle='--', linewidth=1) ax.set_axisbelow(True) ax.spines['top'].set_visible(False) ax.spines['left'].set_visible(False) ax.spines['right'].set_visible(False) ax.tick_params(axis='both', which='major', labelsize=24) fig.savefig( 'init-times.svg', bbox_inches="tight", pad_inches=0 ) ax.plot() ```
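The repeated per-column mean/std blocks above can be collapsed into a single helper. A sketch (the `stats_for` name is hypothetical; a plain dict stands in for the loaded DataFrame):

```python
import numpy as np

def stats_for(columns, prefix, sizes):
    # columns: a mapping like {'scroll200': [...], 'scroll500': [...]},
    # standing in for the DataFrame columns. Returns per-size means and
    # sample standard deviations (ddof=1, matching pandas' default .std()).
    data = np.array([columns[f'{prefix}{s}'] for s in sizes], dtype=float)
    return data.mean(axis=1), data.std(axis=1, ddof=1)

cols = {'scroll200': [60.0, 58.0], 'scroll500': [50.0, 52.0]}
means, stds = stats_for(cols, 'scroll', [200, 500])
print(means, stds)
```

With the real data this would read `stats_for(animation_fps, 'scroll', sizes)`, and likewise for the other interaction types and the initialization times.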
# 18 - Support Vector Machines by [Alejandro Correa Bahnsen](albahnsen.com/) version 0.1, Apr 2016 ## Part of the class [Practical Machine Learning](https://github.com/albahnsen/PracticalMachineLearningClass) This notebook is licensed under a [Creative Commons Attribution-ShareAlike 3.0 Unported License](http://creativecommons.org/licenses/by-sa/3.0/deed.en_US). Special thanks go to [Jake Vanderplas](http://www.vanderplas.com) Previously we introduced supervised machine learning. There are many supervised learning algorithms available; here we'll go into brief detail on one of the most powerful and interesting methods: **Support Vector Machines (SVMs)**. ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt from scipy import stats plt.style.use('fivethirtyeight') ``` ## Motivating Support Vector Machines Support Vector Machines (SVMs) are a powerful supervised learning algorithm used for **classification** or for **regression**. SVMs are a **discriminative** classifier: that is, they draw a boundary between clusters of data. Let's show a quick example of support vector classification. First we need to create a dataset: ``` from sklearn.datasets import make_blobs # formerly sklearn.datasets.samples_generator X, y = make_blobs(n_samples=50, centers=2, random_state=0, cluster_std=0.60) plt.figure(figsize=(8,8)) plt.scatter(X[:, 0], X[:, 1], c=y, s=50); ``` A discriminative classifier attempts to draw a line between the two sets of data. Immediately we see a problem: such a line is ill-posed! For example, we could come up with several possibilities which perfectly discriminate between the classes in this example: ``` xfit = np.linspace(-1, 3.5) plt.figure(figsize=(8,8)) plt.scatter(X[:, 0], X[:, 1], c=y, s=50) for m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]: plt.plot(xfit, m * xfit + b, '-k') plt.xlim(-1, 3.5); ``` These are three *very* different separators which perfectly discriminate between these samples. 
Depending on which you choose, a new data point will be classified almost entirely differently! How can we improve on this? ### Support Vector Machines: Maximizing the *Margin* Support vector machines offer one way to address this. What support vector machines do is not only draw a line, but consider a *region* around the line of some given width. Here's an example of what it might look like: ``` xfit = np.linspace(-1, 3.5) plt.figure(figsize=(8,8)) plt.scatter(X[:, 0], X[:, 1], c=y, s=50) for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]: yfit = m * xfit + b plt.plot(xfit, yfit, '-k') plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none', color='#AAAAAA', alpha=0.4) plt.xlim(-1, 3.5); ``` Notice here that if we want to maximize this width, the middle fit is clearly the best. This is the intuition of **support vector machines**, which optimize a linear discriminant model together with a **margin** representing the perpendicular distance from the boundary to the nearest points of each class. #### Fitting a Support Vector Machine Now we'll fit a Support Vector Machine Classifier to these points. While the mathematical details of the underlying optimization are interesting, we'll let you read about those elsewhere. Instead, we'll just treat the scikit-learn algorithm as a black box which accomplishes the above task. 
``` from sklearn.svm import SVC # "Support Vector Classifier" clf = SVC(kernel='linear') clf.fit(X, y) ``` To better visualize what's happening here, let's create a quick convenience function that will plot SVM decision boundaries for us: ``` import warnings warnings.filterwarnings('ignore') def plot_svc_decision_function(clf, ax=None): """Plot the decision function for a 2D SVC""" if ax is None: ax = plt.gca() x = np.linspace(plt.xlim()[0], plt.xlim()[1], 30) y = np.linspace(plt.ylim()[0], plt.ylim()[1], 30) Y, X = np.meshgrid(y, x) P = np.zeros_like(X) for i, xi in enumerate(x): for j, yj in enumerate(y): P[i, j] = clf.decision_function([[xi, yj]])[0] # decision_function expects a 2D array # plot the margins ax.contour(X, Y, P, colors='k', levels=[-1, 0, 1], alpha=0.5, linestyles=['--', '-', '--']) plt.figure(figsize=(8,8)) plt.scatter(X[:, 0], X[:, 1], c=y, s=50) plot_svc_decision_function(clf); ``` Notice that the dashed lines touch a couple of the points: these points are the pivotal pieces of this fit, and are known as the *support vectors* (giving the algorithm its name). In scikit-learn, these are stored in the ``support_vectors_`` attribute of the classifier: ``` plt.figure(figsize=(8,8)) plt.scatter(X[:, 0], X[:, 1], c=y, s=50) plot_svc_decision_function(clf) plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1], s=200, facecolors='none'); ``` Let's use IPython's ``interact`` functionality to explore how the distribution of points affects the support vectors and the discriminative fit. 
(This requires a live notebook with widget support, and will not work in a static view) ``` from ipywidgets import interact # formerly IPython.html.widgets def plot_svm(N=10): X, y = make_blobs(n_samples=200, centers=2, random_state=0, cluster_std=0.60) X = X[:N] y = y[:N] clf = SVC(kernel='linear') clf.fit(X, y) plt.figure(figsize=(8,8)) plt.scatter(X[:, 0], X[:, 1], c=y, s=50) plt.xlim(-1, 4) plt.ylim(-1, 6) plot_svc_decision_function(clf, plt.gca()) plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1], s=200, facecolors='none') interact(plot_svm, N=[10, 200]); ``` Notice the unique thing about SVM is that only the support vectors matter: that is, if you moved any of the other points without letting them cross the decision boundaries, they would have no effect on the classification results! #### Going further: Kernel Methods Where SVM gets incredibly exciting is when it is used in conjunction with *kernels*. To motivate the need for kernels, let's look at some data which is not linearly separable: ``` from sklearn.datasets import make_circles # formerly sklearn.datasets.samples_generator X, y = make_circles(100, factor=.1, noise=.1) clf = SVC(kernel='linear').fit(X, y) plt.figure(figsize=(8,8)) plt.scatter(X[:, 0], X[:, 1], c=y, s=50) # plot_svc_decision_function(clf); ``` Clearly, no linear discrimination will ever separate these data. One way we can adjust this is to apply a **kernel**, which is some functional transformation of the input data. 
For example, one simple model we could use is a **radial basis function** ``` r = np.exp(-(X[:, 0] ** 2 + X[:, 1] ** 2)) ``` If we plot this along with our data, we can see the effect of it: ``` from mpl_toolkits import mplot3d def plot_3D(elev=30, azim=30): plt.figure(figsize=(8,8)) ax = plt.subplot(projection='3d') ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50) ax.view_init(elev=elev, azim=azim) ax.set_xlabel('x') ax.set_ylabel('y') ax.set_zlabel('r') interact(plot_3D, elev=[-90, 90], azim=(-180, 180)); ``` We can see that with this additional dimension, the data becomes trivially linearly separable! This is a relatively simple kernel; SVM has a more sophisticated version of this kernel built into the process. This is accomplished by using ``kernel='rbf'``, short for *radial basis function*: ``` clf = SVC(kernel='rbf') clf.fit(X, y) plt.figure(figsize=(8,8)) plt.scatter(X[:, 0], X[:, 1], c=y, s=50) plot_svc_decision_function(clf) ``` Here there are effectively $N$ basis functions: one centered at each point! Through a clever mathematical trick, this computation proceeds very efficiently using the "Kernel Trick", without ever explicitly constructing the high-dimensional feature representation.
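The kernel itself is a small function. A sketch of the RBF kernel the classifier uses internally (`gamma` fixed to 1 here; note the feature `r` computed above is exactly this kernel evaluated against the origin):

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    # k(x, z) = exp(-gamma * ||x - z||^2)
    x, z = np.asarray(x, dtype=float), np.asarray(z, dtype=float)
    return np.exp(-gamma * np.sum((x - z) ** 2))

print(rbf_kernel([0.0, 0.0], [0.0, 0.0]))  # identical points -> 1.0
print(rbf_kernel([0.0, 0.0], [1.0, 0.0]))  # decays with squared distance
```

The "kernel trick" works because the optimization only ever needs such pairwise evaluations $k(x_i, x_j)$, never the (here infinite-dimensional) feature vectors themselves.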
<h1 align="center">COMPUTER PROGRAMMING</h1> <h2 align="center">UNIVERSIDAD EAFIT</h2> <h3 align="center">MEDELLÍN - COLOMBIA</h3> <h2 align="center">Session 14 - Dictionaries</h2> ## Instructor: > <strong> *Carlos Alberto Álvarez Henao, I.C. Ph.D.* </strong> ### Dictionaries > A dictionary is a data structure and data type in *Python* with special characteristics: it lets us store values of any type, such as integers, strings, lists, and even functions. A dictionary also lets us identify each element by a *key*. > To define a dictionary, the collection of values is enclosed in curly braces ({}). *Key*-*value* pairs are separated by commas, and each *key* is separated from its *value* by a colon (:). > Dictionaries (called *associative arrays* or *hash tables* in other languages) are a very powerful data structure that associates a value with a key. > Keys must be of an immutable type; values may be of any type. > Dictionaries are conceptually unordered: they can be iterated over, but no particular order should be relied upon (although since Python 3.7 dictionaries do preserve insertion order). ``` diccionario = {'nombre' : 'Carlos', 'edad' : 48, 'cursos': ['Python','Fortran','Matlab'] } ``` We can access an element of a dictionary through its *key*, as shown below: ``` print(diccionario['nombre']) # Carlos print(diccionario['edad']) # 48 print(diccionario['cursos']) # ['Python','Fortran','Matlab'] ``` It is also possible to store a list inside a dictionary. 
To access each of the courses we use indices: ``` print(diccionario['cursos'][0:2]) # ['Python', 'Fortran'] print(diccionario['cursos'][1]) # Fortran print(diccionario['cursos'][2]) # Matlab ``` To iterate over the whole dictionary, we can use a for loop: ``` for a in diccionario: print(a, ":", diccionario[a]) ``` ### Dictionary methods *dict*() - Receives a representation of a dictionary as its argument and, when feasible, returns a dictionary. ``` dic = dict(nombre='Carlos', apellido='Alvarez', edad=48) print(dic) ``` *zip*() - Receives two iterables (strings, lists, or tuples) as arguments; both must have the same number of elements. Passed to dict(), it produces a dictionary pairing the $i$-th element of one iterable with the $i$-th element of the other. ``` dic = dict(zip('abcd',["z","y","x","w"])) print(dic) ``` *items*() - Returns a view of (key, value) tuples; in each tuple the first element is the *key* and the second its *value*. ``` dic = {'a' : 1, 'b': 2, 'c' : 3 , 'd' : 4} items = dic.items() print(items) ``` *keys*() - Returns a view containing the *keys* of the dictionary. ``` dic = {'a' : 1, 'b' : 2, 'c' : 3 , 'd' : 4} keys= dic.keys() print(keys) ``` *values*() - Returns a view containing the *values* of the dictionary. ``` dic = {'a' : 1, 'b' : 2, 'c' : 3 , 'd' : 4} values= dic.values() print(values) ``` *clear*() - Removes every item from the dictionary, leaving it empty. ``` dic1 = {'a' : 1, 'b' : 2, 'c' : 3 , 'd' : 4} dic1.clear() print(dic1) ``` *copy*() - Returns a copy of the original dictionary. ``` dic = {'a' : 1, 'b' : 2, 'c' : 3 , 'd' : 4} dic1 = dic.copy() print(dic1) ``` *fromkeys*() - Receives an iterable and a value as arguments, and returns a dictionary whose keys are the elements of the iterable, each mapped to the given value. If no value is given, every key maps to None. 
``` dic = dict.fromkeys(['a','b','c','d'],1) print(dic) ``` *get*() - Receives a key as its argument and returns the corresponding value. If the key is not found, it returns None. ``` dic = {'a' : 1, 'b' : 2, 'c' : 3 , 'd' : 4} valor = dic.get('z') print(valor) ``` *pop*() - Receives a key as its argument, removes that item, and returns its value. If the key is not found, it raises a KeyError. ``` dic = {'a' : 1, 'b' : 2, 'c' : 3 , 'd' : 4} valor = dic.pop('c') print(valor) print(dic) ``` *setdefault*() - Works in two ways. The first is like get: ``` dic = {'a' : 1, 'b' : 2, 'c' : 3 , 'd' : 4} valor = dic.setdefault('b') print(valor) ``` In the second, it adds a new element to the dictionary when the key does not exist yet: ``` dic = {'a' : 1, 'b' : 2, 'c' : 3 , 'd' : 4} valor = dic.setdefault('e',5) print(dic) print(valor) ``` *update*() - Receives another dictionary as its argument. For keys present in both dictionaries, the value is updated; key-value pairs whose keys are not yet present are added. ``` dic1 = {'a' : 1, 'b' : 2, 'c' : 3 , 'd' : 4} dic2 = {'c' : 6, 'b' : 5, 'e' : 9 , 'f' : 10} dic2.update(dic1) print(dic2) ``` > Dictionaries are a very versatile tool. A dictionary can be used, for example, to count how many times each word, or each letter, appears in a text. > A dictionary can also serve as an address book, where the key is a person's name and the value is a list with that person's data. > A dictionary could likewise hold the records of the students enrolled in a course, with the student ID as the key and a list of that student's grades as the value. > In general, dictionaries are useful for building very simple databases, in which the key identifies the element and the value holds all the data of that element. 
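A dictionary makes the word-counting use case mentioned above a few lines of code:

```python
# Count how many times each word appears in a text.
text = "the blue house and the red house"
counts = {}
for word in text.split():
    counts[word] = counts.get(word, 0) + 1
print(counts)  # {'the': 2, 'blue': 1, 'house': 2, 'and': 1, 'red': 1}
```

Using `get(word, 0)` avoids a KeyError the first time each word is seen.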
> Another possible use of a dictionary is translation, where the key is a word in the source language and the value is the word in the target language. This application is of little practical value, though, since word-by-word translation is a very poor way to translate. ### Example: We want to create a dictionary with the roster of the Spanish national football team that won the 2010 World Cup in South Africa, and run a series of queries on it. (*Example taken from the blog [Jarroba.com](https://jarroba.com/diccionario-python-ejemplos/)*) ``` futbolistas = dict() futbolistas = { 1 : "Casillas", 15 : "Ramos", 3 : "Pique", 5 : "Puyol", 11 : "Capdevila", 14 : "Xabi Alonso", 16 : "Busquets", 8 : "Xavi Hernandez", 18 : "Pedrito", 6 : "Iniesta", 7 : "Villa" } ``` Iterating over each element of the dictionary and printing the result: ``` for k,v in futbolistas.items(): print("player # {0} is {1} ".format(k,v)) ``` Let's determine the number of elements in the dictionary: ``` numElem = len(futbolistas) print("Number of elements in the dictionary len(futbolistas) = {0}".format(numElem)) ``` Now we want to look at the keys and the values of the dictionary separately: ``` # Print a list with the dictionary keys keys = futbolistas.keys(); print("The dictionary keys are \n {0}".format(keys)) # Print a list with the dictionary values values = futbolistas.values() print("\nThe dictionary values are \n {0}".format(values)) ``` If we want to know the value associated with a given key, we use the `get(key)` method: ``` elem = futbolistas.get(6) print("the name of the player wearing number '6' is {0}".format(elem)) ``` Next we'll look at two ways of inserting elements into the dictionary. 
The first is the simplest (as if it were an associative array): pass the key in square brackets and assign it a value: ``` # Add a new element to the dictionary futbolistas[22] = 'Navas' print("\nDictionary after adding an element: \n {0}".format(futbolistas)) numElem = len(futbolistas) print("Number of elements in the dictionary len(futbolistas) = {0}".format(numElem)) ``` The second way to insert an element is the `setdefault(key,default=valor)` method, which takes a key and a value as arguments. The peculiarity of this method is that it only inserts the element if no element with that key exists; if an element with that key already exists, it performs no insertion: ``` # Insert an element into the dictionary. If the key already exists, nothing is inserted elem2 = futbolistas.setdefault(10,'Cesc') print("\nInsert an element into the dictionary (if the key exists, it is not inserted): {0}".format(elem2)) numElem = len(futbolistas) print("Number of elements in the dictionary len(futbolistas) = {0}".format(numElem)) ``` The next method we'll look at, `pop(key)`, removes from the dictionary the element whose key we pass as an argument. 
For example, let's remove the element with *key = 22*: ``` # Remove an element from the dictionary given its key futbolistas.pop(22) print("\nDictionary after removing an element: {0}".format(futbolistas)) numElem = len(futbolistas) print("Number of elements in the dictionary len(futbolistas) = {0}".format(numElem)) ``` To make a copy of a dictionary, use the `copy()` method: ``` # Make a copy of the dictionary futbolistasCopy = futbolistas.copy(); print("\nMake a copy of the dictionary: \n {0}".format(futbolistasCopy)) ``` To remove the contents (i.e., the elements) of a dictionary, use the `clear()` method: ``` # Remove the elements of a dictionary futbolistasCopy.clear() print("\nRemove the elements of a dictionary: {0}".format(futbolistasCopy)) ``` With the `fromkeys(listKey,default=value)` method, we create a dictionary whose keys are those passed in as a list. If a second argument is given, it is used as the value of every element. Let's look at an example: ``` # Create a dictionary from a list with the keys keys = ['nombre', 'apellidos', 'edad'] dictList = dict.fromkeys(keys, 'nada') print("Create a dictionary from a list {0}".format(dictList)) ``` To check whether a key exists, Python 2 provided the `has_key(key)` method; it was removed in Python 3, where the `in` operator is used instead. Let's look at an example: ``` # Check whether a key exists exit2 = 2 in futbolistas exit8 = 8 in futbolistas print("\nCheck whether elements 2 and 8 exist: {0}, {1}".format(exit2,exit8)) ```
<div align="center"> <h1><img width="30" src="https://madewithml.com/static/images/rounded_logo.png">&nbsp;<a href="https://madewithml.com/">Made With ML</a></h1> Applied ML · MLOps · Production <br> Join 30K+ developers in learning how to responsibly <a href="https://madewithml.com/about/">deliver value</a> with ML. <br> </div> <br> <div align="center"> <a target="_blank" href="https://newsletter.madewithml.com"><img src="https://img.shields.io/badge/Subscribe-30K-brightgreen"></a>&nbsp; <a target="_blank" href="https://github.com/GokuMohandas/MadeWithML"><img src="https://img.shields.io/github/stars/GokuMohandas/MadeWithML.svg?style=social&label=Star"></a>&nbsp; <a target="_blank" href="https://www.linkedin.com/in/goku"><img src="https://img.shields.io/badge/style--5eba00.svg?label=LinkedIn&logo=linkedin&style=social"></a>&nbsp; <a target="_blank" href="https://twitter.com/GokuMohandas"><img src="https://img.shields.io/twitter/follow/GokuMohandas.svg?label=Follow&style=social"></a> <br> 🔥&nbsp; Among the <a href="https://github.com/topics/deep-learning" target="_blank">top ML</a> repositories on GitHub </div> <br> <hr> # Transformers In this lesson we will learn how to implement the Transformer architecture to extract contextual embeddings for our text classification task. 
<div align="left"> <a target="_blank" href="https://madewithml.com/courses/foundations/transformers/"><img src="https://img.shields.io/badge/📖 Read-blog post-9cf"></a>&nbsp; <a href="https://github.com/GokuMohandas/MadeWithML/blob/main/notebooks/15_Transformers.ipynb" role="button"><img src="https://img.shields.io/static/v1?label=&amp;message=View%20On%20GitHub&amp;color=586069&amp;logo=github&amp;labelColor=2f363d"></a>&nbsp; <a href="https://colab.research.google.com/github/GokuMohandas/MadeWithML/blob/main/notebooks/15_Transformers.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> </div> # Overview Transformers are a very popular architecture that leverage and extend the concept of self-attention to create very useful representations of our input data for a downstream task. - **advantages**: - better representation for our input tokens via contextual embeddings where the token representation is based on the specific neighboring tokens using self-attention. - sub-word tokens, as opposed to character tokens, since they can hold more meaningful representation for many of our keywords, prefixes, suffixes, etc. - attend (in parallel) to all the tokens in our input, as opposed to being limited by filter spans (CNNs) or memory issues from sequential processing (RNNs). 
- **disadvantages**: - computationally intensive - require large amounts of data (mitigated by using pretrained models) <div align="left"> <img src="https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/foundations/transformers/architecture.png" width="800"> </div> <div align="left"> <small><a href="https://arxiv.org/abs/1706.03762" target="_blank">Attention Is All You Need</a></small> </div> # Set up ``` !pip install transformers==3.0.2 -q import numpy as np import pandas as pd import random import torch import torch.nn as nn SEED = 1234 def set_seeds(seed=1234): """Set seeds for reproducibility.""" np.random.seed(seed) random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed(seed) torch.cuda.manual_seed_all(seed) # multi-GPU # Set seeds for reproducibility set_seeds(seed=SEED) # Set device cuda = True device = torch.device('cuda' if ( torch.cuda.is_available() and cuda) else 'cpu') torch.set_default_tensor_type('torch.FloatTensor') if device.type == 'cuda': torch.set_default_tensor_type('torch.cuda.FloatTensor') print (device) ``` ## Load data We will download the [AG News dataset](http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html), which consists of 120K text samples from 4 unique classes (`Business`, `Sci/Tech`, `Sports`, `World`). ``` import numpy as np import pandas as pd import re import urllib # Load data url = "https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/datasets/news.csv" df = pd.read_csv(url, header=0) # load df = df.sample(frac=1).reset_index(drop=True) # shuffle df.head() # Reduce data size (too large to fit in Colab's limited memory) df = df[:10000] print (len(df)) ``` ## Preprocessing We're going to clean up our input data first with operations such as lowercasing text, removing stop (filler) words, applying regular-expression filters, etc. 
``` import nltk from nltk.corpus import stopwords from nltk.stem import PorterStemmer import re nltk.download('stopwords') STOPWORDS = stopwords.words('english') print (STOPWORDS[:5]) porter = PorterStemmer() def preprocess(text, stopwords=STOPWORDS): """Conditional preprocessing on our text unique to our task.""" # Lower text = text.lower() # Remove stopwords pattern = re.compile(r'\b(' + r'|'.join(stopwords) + r')\b\s*') text = pattern.sub('', text) # Remove words in parentheses text = re.sub(r'\([^)]*\)', '', text) # Spacing and filters text = re.sub(r"([-;;.,!?<=>])", r" \1 ", text) text = re.sub('[^A-Za-z0-9]+', ' ', text) # remove non alphanumeric chars text = re.sub(' +', ' ', text) # remove multiple spaces text = text.strip() return text # Sample text = "Great week for the NYSE!" preprocess(text=text) # Apply to dataframe preprocessed_df = df.copy() preprocessed_df.title = preprocessed_df.title.apply(preprocess) print (f"{df.title.values[0]}\n\n{preprocessed_df.title.values[0]}") ``` ## Split data ``` import collections from sklearn.model_selection import train_test_split TRAIN_SIZE = 0.7 VAL_SIZE = 0.15 TEST_SIZE = 0.15 def train_val_test_split(X, y, train_size): """Split dataset into data splits.""" X_train, X_, y_train, y_ = train_test_split(X, y, train_size=train_size, stratify=y) # use the parameter, not the global X_val, X_test, y_val, y_test = train_test_split(X_, y_, train_size=0.5, stratify=y_) return X_train, X_val, X_test, y_train, y_val, y_test # Data X = preprocessed_df["title"].values y = preprocessed_df["category"].values # Create data splits X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split( X=X, y=y, train_size=TRAIN_SIZE) print (f"X_train: {X_train.shape}, y_train: {y_train.shape}") print (f"X_val: {X_val.shape}, y_val: {y_val.shape}") print (f"X_test: {X_test.shape}, y_test: {y_test.shape}") print (f"Sample point: {X_train[0]} → {y_train[0]}") ``` ## Label encoder ``` import json class LabelEncoder(object): """Label encoder for tag labels.""" def __init__(self, 
class_to_index=None):
        self.class_to_index = class_to_index or {}  # avoid a shared mutable default
        self.index_to_class = {v: k for k, v in self.class_to_index.items()}
        self.classes = list(self.class_to_index.keys())

    def __len__(self):
        return len(self.class_to_index)

    def __str__(self):
        return f"<LabelEncoder(num_classes={len(self)})>"

    def fit(self, y):
        classes = np.unique(y)
        for i, class_ in enumerate(classes):
            self.class_to_index[class_] = i
        self.index_to_class = {v: k for k, v in self.class_to_index.items()}
        self.classes = list(self.class_to_index.keys())
        return self

    def encode(self, y):
        y_one_hot = np.zeros((len(y), len(self.class_to_index)), dtype=int)
        for i, item in enumerate(y):
            y_one_hot[i][self.class_to_index[item]] = 1
        return y_one_hot

    def decode(self, y):
        classes = []
        for i, item in enumerate(y):
            index = np.where(item == 1)[0][0]
            classes.append(self.index_to_class[index])
        return classes

    def save(self, fp):
        with open(fp, 'w') as fp:
            contents = {'class_to_index': self.class_to_index}
            json.dump(contents, fp, indent=4, sort_keys=False)

    @classmethod
    def load(cls, fp):
        with open(fp, 'r') as fp:
            kwargs = json.load(fp=fp)
        return cls(**kwargs)

# Encode
label_encoder = LabelEncoder()
label_encoder.fit(y_train)
num_classes = len(label_encoder)
label_encoder.class_to_index

# Class weights
counts = np.bincount([label_encoder.class_to_index[class_] for class_ in y_train])
class_weights = {i: 1.0/count for i, count in enumerate(counts)}
print (f"counts: {counts}\nweights: {class_weights}")

# Convert labels to one-hot vectors
print (f"y_train[0]: {y_train[0]}")
y_train = label_encoder.encode(y_train)
y_val = label_encoder.encode(y_val)
y_test = label_encoder.encode(y_test)
print (f"y_train[0]: {y_train[0]}")
print (f"decode([y_train[0]]): {label_encoder.decode([y_train[0]])}")
```

## Tokenizer

We'll be using the [BertTokenizer](https://huggingface.co/transformers/model_doc/bert.html#berttokenizer) to tokenize our input text into sub-word tokens.
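Before calling the real tokenizer, it helps to see what sub-word tokenization does under the hood. Below is a toy WordPiece-style greedy longest-match sketch; the vocabulary is made up for illustration (the real SciBERT vocabulary has ~31K entries), and the function name is our own, not part of the `transformers` API.

```python
# Toy sketch of WordPiece-style greedy longest-match sub-word tokenization.
def wordpiece_tokenize(word, vocab, unk="[UNK]"):
    """Greedily match the longest vocab prefix, then continue on the '##' remainder."""
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        cur = None
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece  # continuation pieces carry a '##' prefix
            if piece in vocab:
                cur = piece
                break
            end -= 1
        if cur is None:  # no piece matched -> unknown token
            return [unk]
        tokens.append(cur)
        start = end
    return tokens

vocab = {"token", "##ization", "##s", "play", "##ing"}  # hypothetical vocabulary
print(wordpiece_tokenize("tokenization", vocab))  # ['token', '##ization']
print(wordpiece_tokenize("playing", vocab))       # ['play', '##ing']
```

Rare words therefore decompose into known pieces instead of mapping to a single unknown token, which is why the vocabulary can stay small.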
```
from transformers import DistilBertTokenizer
from transformers import BertTokenizer

# Load tokenizer and model
# tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
vocab_size = len(tokenizer)
print (vocab_size)

# Tokenize inputs
encoded_input = tokenizer(X_train.tolist(), return_tensors="pt", padding=True)
X_train_ids = encoded_input["input_ids"]
X_train_masks = encoded_input["attention_mask"]
print (X_train_ids.shape, X_train_masks.shape)
encoded_input = tokenizer(X_val.tolist(), return_tensors="pt", padding=True)
X_val_ids = encoded_input["input_ids"]
X_val_masks = encoded_input["attention_mask"]
print (X_val_ids.shape, X_val_masks.shape)
encoded_input = tokenizer(X_test.tolist(), return_tensors="pt", padding=True)
X_test_ids = encoded_input["input_ids"]
X_test_masks = encoded_input["attention_mask"]
print (X_test_ids.shape, X_test_masks.shape)

# Decode
print (f"{X_train_ids[0]}\n{tokenizer.decode(X_train_ids[0])}")

# Sub-word tokens
print (tokenizer.convert_ids_to_tokens(ids=X_train_ids[0]))
```

## Datasets

We're going to create Datasets and DataLoaders to be able to efficiently create batches with our data splits.
``` class TransformerTextDataset(torch.utils.data.Dataset): def __init__(self, ids, masks, targets): self.ids = ids self.masks = masks self.targets = targets def __len__(self): return len(self.targets) def __str__(self): return f"<Dataset(N={len(self)})>" def __getitem__(self, index): ids = torch.tensor(self.ids[index], dtype=torch.long) masks = torch.tensor(self.masks[index], dtype=torch.long) targets = torch.FloatTensor(self.targets[index]) return ids, masks, targets def create_dataloader(self, batch_size, shuffle=False, drop_last=False): return torch.utils.data.DataLoader( dataset=self, batch_size=batch_size, shuffle=shuffle, drop_last=drop_last, pin_memory=False) # Create datasets train_dataset = TransformerTextDataset(ids=X_train_ids, masks=X_train_masks, targets=y_train) val_dataset = TransformerTextDataset(ids=X_val_ids, masks=X_val_masks, targets=y_val) test_dataset = TransformerTextDataset(ids=X_test_ids, masks=X_test_masks, targets=y_test) print ("Data splits:\n" f" Train dataset:{train_dataset.__str__()}\n" f" Val dataset: {val_dataset.__str__()}\n" f" Test dataset: {test_dataset.__str__()}\n" "Sample point:\n" f" ids: {train_dataset[0][0]}\n" f" masks: {train_dataset[0][1]}\n" f" targets: {train_dataset[0][2]}") # Create dataloaders batch_size = 128 train_dataloader = train_dataset.create_dataloader( batch_size=batch_size) val_dataloader = val_dataset.create_dataloader( batch_size=batch_size) test_dataloader = test_dataset.create_dataloader( batch_size=batch_size) batch = next(iter(train_dataloader)) print ("Sample batch:\n" f" ids: {batch[0].size()}\n" f" masks: {batch[1].size()}\n" f" targets: {batch[2].size()}") ``` ## Trainer Let's create the `Trainer` class that we'll use to facilitate training for our experiments. 
```
import torch.nn.functional as F

class Trainer(object):
    def __init__(self, model, device, loss_fn=None, optimizer=None, scheduler=None):
        # Set params
        self.model = model
        self.device = device
        self.loss_fn = loss_fn
        self.optimizer = optimizer
        self.scheduler = scheduler

    def train_step(self, dataloader):
        """Train step."""
        # Set model to train mode
        self.model.train()
        loss = 0.0

        # Iterate over train batches
        for i, batch in enumerate(dataloader):
            # Step
            batch = [item.to(self.device) for item in batch]  # Set device
            inputs, targets = batch[:-1], batch[-1]
            self.optimizer.zero_grad()  # Reset gradients
            z = self.model(inputs)  # Forward pass
            J = self.loss_fn(z, targets)  # Define loss
            J.backward()  # Backward pass
            self.optimizer.step()  # Update weights

            # Cumulative Metrics
            loss += (J.detach().item() - loss) / (i + 1)

        return loss

    def eval_step(self, dataloader):
        """Validation or test step."""
        # Set model to eval mode
        self.model.eval()
        loss = 0.0
        y_trues, y_probs = [], []

        # Iterate over val batches
        with torch.no_grad():
            for i, batch in enumerate(dataloader):
                # Step
                batch = [item.to(self.device) for item in batch]  # Set device
                inputs, y_true = batch[:-1], batch[-1]
                z = self.model(inputs)  # Forward pass
                J = self.loss_fn(z, y_true).item()

                # Cumulative Metrics
                loss += (J - loss) / (i + 1)

                # Store outputs
                y_prob = F.softmax(z, dim=1).cpu().numpy()  # pass dim explicitly
                y_probs.extend(y_prob)
                y_trues.extend(y_true.cpu().numpy())

        return loss, np.vstack(y_trues), np.vstack(y_probs)

    def predict_step(self, dataloader):
        """Prediction step."""
        # Set model to eval mode
        self.model.eval()
        y_probs = []

        # Iterate over batches
        with torch.no_grad():
            for i, batch in enumerate(dataloader):
                # Forward pass w/ inputs
                batch = [item.to(self.device) for item in batch]  # Set device
                inputs, targets = batch[:-1], batch[-1]
                z = self.model(inputs)

                # Store outputs
                y_prob = F.softmax(z, dim=1).cpu().numpy()
                y_probs.extend(y_prob)

        return np.vstack(y_probs)

    def train(self, num_epochs, patience, train_dataloader, val_dataloader):
        best_val_loss = np.inf
        for epoch in range(num_epochs):
            # Steps
            train_loss = \
self.train_step(dataloader=train_dataloader)
            val_loss, _, _ = self.eval_step(dataloader=val_dataloader)
            self.scheduler.step(val_loss)

            # Early stopping
            if val_loss < best_val_loss:
                best_val_loss = val_loss
                best_model = self.model
                _patience = patience  # reset _patience
            else:
                _patience -= 1
            if not _patience:  # 0
                print("Stopping early!")
                break

            # Logging
            print(
                f"Epoch: {epoch+1} | "
                f"train_loss: {train_loss:.5f}, "
                f"val_loss: {val_loss:.5f}, "
                f"lr: {self.optimizer.param_groups[0]['lr']:.2E}, "
                f"_patience: {_patience}"
            )
        return best_model
```

# Transformer

## Scaled dot-product attention

The most popular type of self-attention is scaled dot-product attention from the widely-cited [Attention is all you need](https://arxiv.org/abs/1706.03762) paper. This type of attention involves projecting our encoded input sequences onto three matrices, queries (Q), keys (K) and values (V), whose weights we learn.

$ inputs \in \mathbb{R}^{N \times M \times H} $ ($N$ = batch size, $M$ = sequence length, $H$ = hidden dim)

$ Q = XW_q $ where $ W_q \in \mathbb{R}^{H \times d_q} $

$ K = XW_k $ where $ W_k \in \mathbb{R}^{H \times d_k} $

$ V = XW_v $ where $ W_v \in \mathbb{R}^{H \times d_v} $

$ attention(Q, K, V) = softmax\left( \frac{QK^{T}}{\sqrt{d_k}} \right)V \in \mathbb{R}^{M \times d_v} $

## Multi-head attention

Instead of applying self-attention only once across the entire encoded input, we can also separate the input and apply self-attention in parallel (heads) to each input section and concatenate the results. This allows the different heads to learn unique representations while maintaining the complexity, since we split the input into smaller subspaces.

$ MultiHead(Q, K, V) = concat({head}_1, ..., {head}_{h})W_O $

* ${head}_i = attention(Q_i, K_i, V_i) $
* $h$ = # of self-attention heads
* $W_O \in \mathbb{R}^{hd_v \times H} $
* $H$ = hidden dim. (or dimension of the model $d_{model}$)

## Positional encoding

With self-attention, we aren't able to account for the sequential position of our input tokens.
To address this, we can use positional encoding to create a representation of the location of each token with respect to the entire sequence. This can either be learned (with weights) or we can use a fixed function, which extends better to sequence lengths at inference time that were not observed during training.

$ PE_{(pos,2i)} = sin({pos}/{10000^{2i/H}}) $

$ PE_{(pos,2i+1)} = cos({pos}/{10000^{2i/H}}) $

where:

* $pos$ = position of the token $(1...M)$
* $i$ = hidden dim $(1..H)$

This effectively allows us to represent each token's relative position using a fixed function for very large sequences. And because we've constrained the positional encodings to have the same dimensions as our encoded inputs, we can simply add them element-wise before feeding them into the multi-head attention heads.

## Architecture

And here's how it all fits together! It's an end-to-end architecture that creates these contextual representations and uses an encoder-decoder architecture to predict the outcomes (one-to-one, many-to-one, many-to-many, etc.). Due to the complexity of the architecture, these models require massive amounts of data to train without overfitting; however, they can be leveraged as pretrained models and fine-tuned with smaller datasets that are similar to the larger set they were initially trained on.
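The scaled dot-product attention and sinusoidal positional-encoding formulas above can be sketched in NumPy. This is a single-head illustration with toy shapes, not the model used later in the notebook; all names and dimensions here are our own choices.

```python
import numpy as np

def positional_encoding(M, H):
    """Fixed sinusoidal positional encodings, shape (M, H)."""
    pos = np.arange(M)[:, None]                     # (M, 1)
    i = np.arange(H // 2)[None, :]                  # (1, H/2)
    angles = pos / np.power(10000, (2 * i) / H)     # (M, H/2)
    pe = np.zeros((M, H))
    pe[:, 0::2] = np.sin(angles)                    # even dims get sin
    pe[:, 1::2] = np.cos(angles)                    # odd dims get cos
    return pe

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V for a single head."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (M, M)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # (M, d_v)

M, H = 4, 8  # sequence length, hidden dim (toy values)
rng = np.random.default_rng(0)
X = rng.normal(size=(M, H)) + positional_encoding(M, H)  # add PE to inputs
W_q, W_k, W_v = (rng.normal(size=(H, H)) for _ in range(3))
out = scaled_dot_product_attention(X @ W_q, X @ W_k, X @ W_v)
print(out.shape)  # (4, 8)
```

Each output row is a convex combination of the value vectors, with weights determined by query-key similarity — exactly the $softmax(QK^T/\sqrt{d_k})V$ expression above.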
<div align="left">
<img src="https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/foundations/transformers/architecture.png" width="800">
</div>
<div align="left">
<small><a href="https://arxiv.org/abs/1706.03762" target="_blank">Attention Is All You Need</a></small>
</div>

> We're not going to implement the Transformer [from scratch](https://nlp.seas.harvard.edu/2018/04/03/attention.html), but we will use the [Hugging Face library](https://github.com/huggingface/transformers) to load a pretrained [BertModel](https://huggingface.co/transformers/model_doc/bert.html#bertmodel), which we'll use as a feature extractor and fine-tune on our own dataset.

## Model

We're going to use a pretrained [BertModel](https://huggingface.co/transformers/model_doc/bert.html#bertmodel) to act as a feature extractor. We'll only use the encoder to receive sequential and pooled outputs (`is_decoder=False` is the default).

```
from transformers import BertModel

# transformer = BertModel.from_pretrained("distilbert-base-uncased")
# embedding_dim = transformer.config.dim
transformer = BertModel.from_pretrained("allenai/scibert_scivocab_uncased")
embedding_dim = transformer.config.hidden_size

class Transformer(nn.Module):
    def __init__(self, transformer, dropout_p, embedding_dim, num_classes):
        super(Transformer, self).__init__()
        self.transformer = transformer
        self.dropout = torch.nn.Dropout(dropout_p)
        self.fc1 = torch.nn.Linear(embedding_dim, num_classes)

    def forward(self, inputs):
        ids, masks = inputs
        seq, pool = self.transformer(input_ids=ids, attention_mask=masks)
        z = self.dropout(pool)
        z = self.fc1(z)
        return z
```

> We decided to work with the pooled output, but we could have just as easily worked with the sequential output (encoder representation for each sub-token) and applied a CNN (or other decoder options) on top of it.
```
# Initialize model
dropout_p = 0.5
model = Transformer(
    transformer=transformer, dropout_p=dropout_p,
    embedding_dim=embedding_dim, num_classes=num_classes)
model = model.to(device)
print (model.named_parameters)
```

## Training

```
# Arguments
lr = 1e-4
num_epochs = 100
patience = 10

# Define loss
class_weights_tensor = torch.Tensor(np.array(list(class_weights.values())))
loss_fn = nn.BCEWithLogitsLoss(weight=class_weights_tensor)

# Define optimizer & scheduler
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=5)

# Trainer module
trainer = Trainer(
    model=model, device=device, loss_fn=loss_fn,
    optimizer=optimizer, scheduler=scheduler)

# Train
best_model = trainer.train(num_epochs, patience, train_dataloader, val_dataloader)
```

## Evaluation

```
import json
from sklearn.metrics import precision_recall_fscore_support

def get_performance(y_true, y_pred, classes):
    """Per-class performance metrics."""
    # Performance
    performance = {"overall": {}, "class": {}}

    # Overall performance
    metrics = precision_recall_fscore_support(y_true, y_pred, average="weighted")
    performance["overall"]["precision"] = metrics[0]
    performance["overall"]["recall"] = metrics[1]
    performance["overall"]["f1"] = metrics[2]
    performance["overall"]["num_samples"] = np.float64(len(y_true))

    # Per-class performance
    metrics = precision_recall_fscore_support(y_true, y_pred, average=None)
    for i in range(len(classes)):
        performance["class"][classes[i]] = {
            "precision": metrics[0][i],
            "recall": metrics[1][i],
            "f1": metrics[2][i],
            "num_samples": np.float64(metrics[3][i]),
        }

    return performance

# Get predictions
test_loss, y_true, y_prob = trainer.eval_step(dataloader=test_dataloader)
y_pred = np.argmax(y_prob, axis=1)

# Determine performance
performance = get_performance(
    y_true=np.argmax(y_true, axis=1), y_pred=y_pred,
    classes=label_encoder.classes)
print (json.dumps(performance['overall'], indent=2))

# Save artifacts
from pathlib import Path
dir = Path("transformers")
dir.mkdir(parents=True, exist_ok=True)
label_encoder.save(fp=Path(dir, "label_encoder.json"))
torch.save(best_model.state_dict(), Path(dir, "model.pt"))
with open(Path(dir, "performance.json"), "w") as fp:
    json.dump(performance, indent=2, sort_keys=False, fp=fp)
```

## Inference

```
def get_probability_distribution(y_prob, classes):
    """Create a dict of class probabilities from an array."""
    results = {}
    for i, class_ in enumerate(classes):
        results[class_] = np.float64(y_prob[i])
    sorted_results = {k: v for k, v in sorted(
        results.items(), key=lambda item: item[1], reverse=True)}
    return sorted_results

# Load artifacts
device = torch.device("cpu")
tokenizer = BertTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
label_encoder = LabelEncoder.load(fp=Path(dir, "label_encoder.json"))
transformer = BertModel.from_pretrained("allenai/scibert_scivocab_uncased")
embedding_dim = transformer.config.hidden_size
model = Transformer(
    transformer=transformer, dropout_p=dropout_p,
    embedding_dim=embedding_dim, num_classes=num_classes)
model.load_state_dict(torch.load(Path(dir, "model.pt"), map_location=device))
model.to(device);

# Initialize trainer
trainer = Trainer(model=model, device=device)

# Create datasets
train_dataset = TransformerTextDataset(ids=X_train_ids, masks=X_train_masks, targets=y_train)
val_dataset = TransformerTextDataset(ids=X_val_ids, masks=X_val_masks, targets=y_val)
test_dataset = TransformerTextDataset(ids=X_test_ids, masks=X_test_masks, targets=y_test)
print ("Data splits:\n"
    f"  Train dataset:{train_dataset.__str__()}\n"
    f"  Val dataset: {val_dataset.__str__()}\n"
    f"  Test dataset: {test_dataset.__str__()}\n"
    "Sample point:\n"
    f"  ids: {train_dataset[0][0]}\n"
    f"  masks: {train_dataset[0][1]}\n"
    f"  targets: {train_dataset[0][2]}")

# Dataloader
text = "The final tennis tournament starts next week."
X = preprocess(text) encoded_input = tokenizer(X, return_tensors="pt", padding=True).to(torch.device("cpu")) ids = encoded_input["input_ids"] masks = encoded_input["attention_mask"] y_filler = label_encoder.encode([label_encoder.classes[0]]*len(ids)) dataset = TransformerTextDataset(ids=ids, masks=masks, targets=y_filler) dataloader = dataset.create_dataloader(batch_size=int(batch_size)) # Inference y_prob = trainer.predict_step(dataloader) y_pred = np.argmax(y_prob, axis=1) label_encoder.index_to_class[y_pred[0]] # Class distributions prob_dist = get_probability_distribution(y_prob=y_prob[0], classes=label_encoder.classes) print (json.dumps(prob_dist, indent=2)) ``` ## Interpretability Let's visualize the self-attention weights from each of the attention heads in the encoder. ``` import sys !rm -r bertviz_repo !test -d bertviz_repo || git clone https://github.com/jessevig/bertviz bertviz_repo if not "bertviz_repo" in sys.path: sys.path += ["bertviz_repo"] from bertviz import head_view # Print input ids print (ids) print (tokenizer.batch_decode(ids)) # Get encoder attentions seq, pool, attn = model.transformer(input_ids=ids, attention_mask=masks, output_attentions=True) print (len(attn)) # 12 attention layers (heads) print (attn[0].shape) # HTML set up def call_html(): import IPython display(IPython.core.display.HTML(''' <script src="/static/components/requirejs/require.js"></script> <script> requirejs.config({ paths: { base: '/static/base', "d3": "https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.8/d3.min", jquery: '//ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min', }, }); </script> ''')) # Visualize self-attention weights call_html() tokens = tokenizer.convert_ids_to_tokens(ids[0]) head_view(attention=attn, tokens=tokens) ``` > Now you're ready to start the [MLOps lesson](https://madewithml.com/#mlops) to learn how to put all this foundational modeling knowledge to responsibly deliver value.
```
import numpy as np
import pandas as pd
import subprocess
import argparse
```

# Preprocessing RecSys 2017

For the RecSys 2017 dataset we first need to artificially create sessions out of the user interactions.

```
def make_sessions(data, session_th=30 * 60, is_ordered=False, user_key='user_id',
                  item_key='item_id', time_key='ts'):
    """Assigns session ids to the events in data without grouping keys"""
    if not is_ordered:
        # sort data by user and time
        data.sort_values(by=[user_key, time_key], ascending=True, inplace=True)
    # compute the time difference between queries
    tdiff = np.diff(data[time_key].values)
    # check which of them are bigger than session_th
    split_session = tdiff > session_th
    split_session = np.r_[True, split_session]
    # check when the user changes in the data
    new_user = data['user_id'].values[1:] != data['user_id'].values[:-1]
    new_user = np.r_[True, new_user]
    # a new session starts when at least one of the two conditions holds
    new_session = np.logical_or(new_user, split_session)
    # compute the session ids
    session_ids = np.cumsum(new_session)
    data['session_id'] = session_ids
    return data
```

# Test set

A test set can be created either by (1) taking the last session of every user for testing, or (2) making a time-based split.
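The session-splitting logic used by `make_sessions` above can be checked on a toy example. The event data below is made up; it mirrors the same `np.diff`/`cumsum` computation on already-sorted arrays with the 30-minute (1800 s) threshold.

```python
# Toy check of the session-splitting logic (events already sorted by user, then timestamp).
import numpy as np

user_ids = np.array([1, 1, 1, 2, 2])
ts = np.array([0, 60, 60 + 3600, 10, 20])  # third event is > 30 min after the second

tdiff = np.diff(ts)
split_session = np.r_[True, tdiff > 1800]              # long idle gap -> new session
new_user = np.r_[True, user_ids[1:] != user_ids[:-1]]  # user change -> new session
session_ids = np.cumsum(np.logical_or(new_user, split_session))
print(session_ids.tolist())  # [1, 1, 2, 3, 3]
```

User 1's long idle gap opens session 2, and the switch to user 2 opens session 3, exactly as the two conditions in `make_sessions` prescribe.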
```
def last_session_out_split(data, user_key='user_id', item_key='item_id',
                           session_key='session_id', time_key='ts',
                           clean_test=True, min_session_length=2):
    """
    last-session-out split
    assign the last session of every user to the test set and the remaining ones to the training set
    """
    sessions = data.sort_values(by=[user_key, time_key]).groupby(user_key)[session_key]
    last_session = sessions.last()
    train = data[~data.session_id.isin(last_session.values)].copy()
    test = data[data.session_id.isin(last_session.values)].copy()
    if clean_test:
        train_items = train[item_key].unique()
        test = test[test[item_key].isin(train_items)]
        # remove sessions in test shorter than min_session_length
        slen = test[session_key].value_counts()
        good_sessions = slen[slen >= min_session_length].index
        test = test[test[session_key].isin(good_sessions)].copy()
    return train, test

def last_n_days_out_split(data, n=1, user_key='user_id', item_key='item_id',
                          session_key='session_id', time_key='ts',
                          clean_test=True, min_session_length=2):
    """
    last n-days-out split
    assign the sessions in the last n days to the test set and the remaining ones to the training set
    """
    DAY = 24 * 60 * 60
    data.sort_values(by=[user_key, time_key], inplace=True)
    # start times of all sessions
    #sessions_start = data.groupby(session_key)[time_key].agg('min')
    # extract test start and end time
    end_time = data[time_key].max()
    test_start = end_time - n * DAY
    # get train and test indices
    session_max_times = data.groupby(session_key)[time_key].max()
    session_train = session_max_times[session_max_times < test_start].index
    session_test = session_max_times[session_max_times >= test_start].index
    # in1d: Returns a boolean array the same length as ar1 that is True where
    # an element of ar1 is in ar2 and False otherwise.
    train = data[np.in1d(data[session_key], session_train)].copy()
    test = data[np.in1d(data[session_key], session_test)].copy()
    #train = data[data.session_id.isin(sessions_start[sessions_start < test_start].index)].copy()
    #test = data[data.session_id.isin(sessions_start[sessions_start >= test_start].index)].copy()
    if clean_test:
        before_items = len(test[item_key].unique())
        # remove test items that do not occur in the training set
        test = test[np.in1d(test[item_key], train[item_key])]
        after_items = len(test[item_key].unique())
        print("Before item count: " + str(before_items))
        print("After item count: " + str(after_items))
        # remove sessions in test shorter than min_session_length
        tslength = test.groupby(session_key).size()
        test = test[np.in1d(test[session_key], tslength[tslength >= min_session_length].index)].copy()
    return train, test
```

# 1. RecSys17 processing

```
path = "../../data/"
dataset = "recsys17/"
raw_path = path + dataset + "raw/"
interim_path = path + dataset + "interim/"
processed_path = path + dataset + "processed/"
```

For the RecSys17 dataset, we:

* Remove the **delete recommendation** and **recruiter interest** interactions as these are not relevant in our setting
* **Discard** the **impression interactions** as these denote that XING showed the corresponding job to a user.
As stated by Bianchi et al., 2017, the **presence of an impression does not imply** that the **user interacted with the job** and would thus **introduce bias**, possibly leading to a model that mimics XING's recommender engine.

Sessions are partitioned by a **30-minute** idle time.

Keep sessions from all users (>= 1 session), removing only overly active ones (>= 200,000 sessions).

```
interactions = pd.read_csv(raw_path + "interactions.csv", header=0, sep='\t')
print("Start Time: {}".format(pd.to_datetime(interactions["created_at"].min(), unit="s")))
print("End Time: {}".format(pd.to_datetime(interactions["created_at"].max(), unit="s")))

# remove NaN values (should have only 1)
interactions = interactions[np.isfinite(interactions['created_at'])]
# convert back to long from float
interactions['created_at'] = interactions['created_at'].astype(np.int64)
# remove impressions
interactions = interactions[interactions.interaction_type >= 1].copy()
# remove delete and headhunter event types
interactions = interactions[interactions.interaction_type < 4].copy()
interactions['interaction_type'] = interactions['interaction_type'].fillna(0).astype('int')

print('Building sessions')
# partition interactions into sessions with 30-minutes idle time
interactions = make_sessions(interactions, session_th=30 * 60, time_key='created_at', is_ordered=False)
print(interactions.head(3))

# drop 189 duplicate interactions
interactions = interactions.drop_duplicates(['session_id','created_at'])

print('Original data:')
print('Num items: {}'.format(interactions.item_id.nunique()))
print('Num users: {}'.format(interactions.user_id.nunique()))
print('Num sessions: {}'.format(interactions.session_id.nunique()))

print('Filtering data')
# drop duplicate interactions within the same session
interactions.drop_duplicates(subset=['item_id', 'session_id', 'interaction_type'], keep='first', inplace=True)
# keep items with >= 1 interactions
item_pop = interactions.item_id.value_counts()
good_items = \
item_pop[item_pop >= 1].index
inter_dense = interactions[interactions.item_id.isin(good_items)]
# remove sessions with length < 3
session_length = inter_dense.session_id.value_counts()
good_sessions = session_length[session_length >= 3].index
inter_dense = inter_dense[inter_dense.session_id.isin(good_sessions)]
# keep users with >= 1 session and remove overly active ones (>= 200,000 sessions)
sess_per_user = inter_dense.groupby('user_id')['session_id'].nunique()
good_users = sess_per_user[(sess_per_user >= 1) & (sess_per_user < 200000)].index
inter_dense = inter_dense[inter_dense.user_id.isin(good_users)]

print('Filtered data:')
print('Num items: {}'.format(inter_dense.item_id.nunique()))
print('Num users: {}'.format(inter_dense.user_id.nunique()))
print('Num sessions: {}'.format(inter_dense.session_id.nunique()))

inter_dense.to_csv(interim_path + "interactions.csv", sep='\t')
```

# 2. Create train and test set by doing a time-based (2 weeks) split

```
print('Partitioning data')
# last-n-days-out partitioning
train_full_sessions, test_sessions = last_n_days_out_split(inter_dense, n=14,
    user_key='user_id', item_key='item_id', session_key='session_id',
    time_key='created_at', clean_test=True)
train_valid_sessions, valid_sessions = last_n_days_out_split(train_full_sessions, n=14,
    user_key='user_id', item_key='item_id', session_key='session_id',
    time_key='created_at', clean_test=True)

# print statistics
train_len = len(train_full_sessions.session_id.unique())
train_item_len = len(train_full_sessions.item_id.unique())
test_len = len(test_sessions.session_id.unique())
test_item_len = len(test_sessions.item_id.unique())
merged_items = train_full_sessions.append(test_sessions, ignore_index=True)
merged_item_len = len(merged_items.item_id.unique())
print("Training - Sessions: " + str(train_len))
print("Testing - Sessions: " + str(test_len))
print("Train + Test - Sessions: " + str(train_len + test_len))
print("Training - Items: " + str(train_item_len))
print("Testing - Items: " + str(test_item_len))
print("Train + Test - Items: " + str(merged_item_len))
print("Train Validating - Sessions: " + str(len(train_valid_sessions.session_id.unique())))
print("Test Validating - Sessions: " + str(len(valid_sessions.session_id.unique())))
```

# 3. Store train and test sets

```
train_full_sessions.to_csv(processed_path + "train_14d.csv", sep='\t')
test_sessions.to_csv(processed_path + "test_14d.csv", sep='\t')
train_valid_sessions.to_csv(processed_path + "valid_train_14d.csv", sep='\t')
valid_sessions.to_csv(processed_path + "valid_test_14d.csv", sep='\t')
```

# 4. Create train and test session vectors

```
# Create vocabulary from train set
unqiue_train_items = train_full_sessions.item_id.unique()

# store (or load)
unqiue_train_items_df = pd.DataFrame(unqiue_train_items, columns=["item_id"])
print(len(unqiue_train_items_df))
unqiue_train_items_df.to_csv(interim_path + 'vocabulary.csv', header=True)
unqiue_train_items_df = pd.read_csv(interim_path + 'vocabulary.csv', index_col=0)
unqiue_train_items_dict = unqiue_train_items_df.to_dict('dict')["item_id"]
# invert so that item_id is the key and index is the value
unqiue_train_items_dict_inv = {v: k for k, v in unqiue_train_items_dict.items()}
print(unqiue_train_items_dict_inv[864950])

# session_vectors = []
session_vectors_np = []
session_groups = train_full_sessions.groupby("session_id")
print(str(len(session_groups)) + " sessions to encode.")
s_counter = 0
for session_id, session_group in session_groups:
    # vector length = len(unqiue_train_items)
    session_vector = np.zeros((len(unqiue_train_items),), dtype=int)
    # fill 1s for session items
    for index, row in session_group.iterrows():
        item_index = unqiue_train_items_dict_inv[row["item_id"]]
        #item_index = unqiue_train_items.index(row["item_id"])
        # 1-hot encode
        session_vector[item_index] = 1
        #break
    # append session vector
    # session_vectors.append(session_vector)
    session_vectors_np.append(np.insert(session_vector, 0, s_counter))
    s_counter += 1
    if \
(s_counter % 10000 == 0): print(str(len(session_groups) - s_counter) + " sessions remaining to encode.") # session_vector_df = pd.DataFrame(session_vectors) # session_vector_df.head() # session_vector_df.to_csv(interim_path + 'train_session_interaction_vector.csv', header=True) a = np.vstack(session_vectors_np) header = ",".join(map(str, range(len(unqiue_train_items)))) np.savetxt(interim_path + 'train_session_interaction_vector.csv', a, header=header, delimiter=",", fmt="%d", comments=",") a ``` # Statistics ``` import matplotlib.pyplot as plt interactions.interaction_type.value_counts().plot(kind='bar') plt.show() print('Train Num items: {}'.format(train_full_sessions.item_id.nunique())) print('Train Num sessions: {}'.format(train_full_sessions.session_id.nunique())) print('Train Num events: {}'.format(len(train_full_sessions))) print('Test Num items: {}'.format(test_sessions.item_id.nunique())) print('Test Num sessions: {}'.format(test_sessions.session_id.nunique())) print('Test Num events: {}'.format(len(test_sessions))) interactions = pd.read_csv("../../data/recsys17/raw/interactions.csv", header=0, sep='\t') # remove NaN values (should have only 1) interactions = interactions[np.isfinite(interactions['created_at'])] # convert back to long from float interactions['created_at'] = interactions['created_at'].astype(np.int64) # remove impressions interactions = interactions[interactions.interaction_type >= 1].copy() # remove delete and headhunter event types interactions = interactions[interactions.interaction_type < 4].copy() interactions['interaction_type'] = interactions['interaction_type'].fillna(0).astype('int') print('Building sessions') # partition interactions into sessions with 30-minutes idle time interactions = make_sessions(interactions, session_th=30 * 60, time_key='created_at', is_ordered=False) print(interactions.head(3)) # drop 189 duplicate interactions interactions = interactions.drop_duplicates(['item_id','session_id','created_at']) 
print('Original data:')
print('Num items: {}'.format(interactions.item_id.nunique()))
print('Num users: {}'.format(interactions.user_id.nunique()))
print('Num sessions: {}'.format(interactions.session_id.nunique()))

print('Filtering data')
# keep items with >= 1 interactions
item_pop = interactions.item_id.value_counts()
good_items = item_pop[item_pop >= 1].index
inter_dense = interactions[interactions.item_id.isin(good_items)]
# remove sessions with length < 3
session_length = inter_dense.session_id.value_counts()
good_sessions = session_length[session_length >= 3].index
inter_dense = inter_dense[inter_dense.session_id.isin(good_sessions)]
# keep users with >= 1 session and remove overly active ones (>= 200,000 sessions)
sess_per_user = inter_dense.groupby('user_id')['session_id'].nunique()
good_users = sess_per_user[(sess_per_user >= 1) & (sess_per_user < 200000)].index
inter_dense = inter_dense[inter_dense.user_id.isin(good_users)]

print('Filtered data:')
print('Num items: {}'.format(inter_dense.item_id.nunique()))
print('Num users: {}'.format(inter_dense.user_id.nunique()))
print('Num sessions: {}'.format(inter_dense.session_id.nunique()))

store_path = "../../data/recsys17/"
inter_dense.to_csv(store_path + "filtered.csv", sep='\t')

print('Partitioning data')
# last-n-days-out partitioning
train_full_sessions, test_sessions = last_n_days_out_split(inter_dense, n=14,
    user_key='user_id', item_key='item_id', session_key='session_id',
    time_key='created_at', clean_test=True)
train_valid_sessions, valid_sessions = last_n_days_out_split(train_full_sessions, n=14,
    user_key='user_id', item_key='item_id', session_key='session_id',
    time_key='created_at', clean_test=True)

print("Data - Sessions: " + str(len(inter_dense.session_id.unique())))
print("Training - Sessions: " + str(len(train_full_sessions.session_id.unique())))
print("Testing - Sessions: " + str(len(test_sessions.session_id.unique())))
print("Train Validating - Sessions: " +
str(len(train_valid_sessions.session_id.unique()))) print("Test Validating - Sessions: " + str(len(valid_sessions.session_id.unique()))) train_full_sessions.to_csv(store_path + "train_d14.csv", sep='\t') test_sessions.to_csv(store_path + "test_d14.csv", sep='\t') train_valid_sessions.to_csv(store_path + "valid_train_d14.csv", sep='\t') valid_sessions.to_csv(store_path + "valid_test_d14.csv", sep='\t') print('Train Num items: {}'.format(train_full_sessions.item_id.nunique())) print('Train Num sessions: {}'.format(train_full_sessions.session_id.nunique())) print('Train Num events: {}'.format(len(train_full_sessions))) print('Test Num items: {}'.format(test_sessions.item_id.nunique())) print('Test Num sessions: {}'.format(test_sessions.session_id.nunique())) print('Test Num events: {}'.format(len(test_sessions))) ```
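The partitioning above uses `last_n_days_out_split`, which is defined earlier in the notebook and not shown in this excerpt. As a rough sketch of the idea, assuming `created_at` is a unix timestamp in seconds and that a session goes to the test set when its first event falls within the last `n` days (the real helper also takes `user_key`, `item_key` and `clean_test`, which this sketch ignores):

```python
import pandas as pd

DAY_SECONDS = 86400

def last_n_days_out_sketch(df, n, session_key='session_id', time_key='created_at'):
    # Sessions whose first event falls within the last n days form the test set.
    session_start = df.groupby(session_key)[time_key].min()
    split_point = df[time_key].max() - n * DAY_SECONDS
    test_ids = session_start[session_start > split_point].index
    test = df[df[session_key].isin(test_ids)]
    train = df[~df[session_key].isin(test_ids)]
    return train, test
```

Applying the same split twice, as the notebook does, carves a validation set out of the training data with identical semantics.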
# Feature Engineering - Business Attributes ``` import pandas as pd import numpy as np from sklearn.preprocessing import StandardScaler rev_busi_Pho= pd.read_csv('../data/filtered_reviews_in_Phonex.csv', parse_dates=["date"]) busi = pd.read_csv('../data/business_data_subset.csv') busi.head(1) ``` ### Change dict attribute to dummy variables ``` def convert_dict_into_dummy(data,feature): """ First change feature values from str to dict, then create variables according to dict keys. return: dataframe with dict keys as columns """ col_index = data.columns.get_loc(feature) get_dict = pd.Series(data.iloc[:,col_index].replace(np.nan,"None")).apply(eval) dummy_df = get_dict.replace("None",np.nan).replace("nan",np.nan).apply(pd.Series) return dummy_df attr = convert_dict_into_dummy(busi,"attributes") attr.head(1) ambience = convert_dict_into_dummy(attr,"Ambience") ambience.dropna().head(1) # change column name ambience.columns = ["Ambience_"+i for i in ambience.columns.tolist()] # concat attr = pd.concat([attr.drop(['Ambience'],1),ambience],axis=1) ``` ### GoodforMeal, latenights ``` goodformeal = convert_dict_into_dummy(attr,"GoodForMeal") goodformeal.dropna().head(5) ``` ### Change dict into boolean ``` def convert_dict_into_boolean(data,feature,new_name): """ For some features that have many nan, but still have several values, convert it into boolean. 
""" col_index = data.columns.get_loc(feature) data[new_name] = False for i in range(len(data)): if pd.isna(data.iloc[i,col_index]): continue elif "True" in data.iloc[i,col_index]: data.loc[i,new_name] = True return data attr = convert_dict_into_boolean(attr,"BusinessParking","Parking") attr = attr.drop("BusinessParking",axis=1) attr = convert_dict_into_boolean(attr,"Music","music") attr = attr.drop("Music",axis=1) ``` ### Hours ``` hours = convert_dict_into_dummy(busi,"hours") hours.notnull().head() ``` ### Concatenate to form final business features ``` bus_df = pd.concat([busi.drop(['attributes','hours'],1),attr,hours.notnull()],axis=1) bus_df.head(1) ``` ### Data cleaning ``` def delete_u(data,feature): col_index = data.columns.get_loc(feature) values = data.iloc[:,col_index].value_counts().index # print(values) for i in values: if i == "None": data.iloc[:,col_index].replace("None",np.nan,inplace=True) else: data.iloc[:,col_index].replace(i,i.split("'")[1],inplace=True) # for Alcohol data.iloc[:,col_index].replace("none",np.nan,inplace=True) return data for feature in ["RestaurantsAttire","Alcohol","NoiseLevel","Smoking","WiFi"]: # print(feature) bus_df = delete_u(bus_df,feature) bus_df.head(1) bus_df["RestaurantsAttire"].value_counts() ``` ### Drop non-related columns ``` bus_df = bus_df.drop(["DietaryRestrictions", "BYOB", "GoodForMeal", "AgesAllowed","Open24Hours","AcceptsInsurance", "HairSpecializesIn","BYOBCorkage"],axis=1) bus_df = bus_df.replace('True',True) bus_df = bus_df.replace('False',False) bus_df = bus_df.replace('None', np.nan) bus_df = bus_df.replace('nan', np.nan) bus_df.shape ``` ### Keep restaurants in Phoenix ``` bus_df_subset = bus_df[bus_df.business_id.isin(rev_busi_Pho["business_id"].unique())] bus_df_subset.shape features_ind = bus_df_subset.columns.get_loc("RestaurantsGoodForGroups") features = bus_df_subset.columns[features_ind:] features bus_df_subset = bus_df_subset.set_index("business_id").filter(features) bus_df_subset.head(1) ``` 
### Impute Missing Values ``` bus_df_subset = bus_df_subset.fillna(False) bus_df_subset = bus_df_subset * 1 ``` ### Correct object data types: ``` feature_dtypes = [] for i in features: # print(i) type_to_convert = type(bus_df_subset[i].iloc[0]) # print(type_to_convert) bus_df_subset[i] = bus_df_subset[i].astype(type_to_convert) ## Drop columns that remain objects col_index = bus_df_subset.columns[bus_df_subset.dtypes != "object"] bus_df_subset = bus_df_subset[col_index] bus_df_subset = busi[["business_id","latitude", "longitude", "stars", "review_count", "is_open"]].set_index("business_id").merge(\ bus_df_subset, left_index = True, right_index = True ) ``` ### Standardize non-boolean variables ``` scaler = StandardScaler() vars_to_scale = ["latitude", "longitude", "stars","review_count"] bus_df_subset[vars_to_scale] = scaler.fit_transform(bus_df_subset[vars_to_scale]) bus_df_subset.to_csv("../data/business_subset_cleaned.csv") bus_df_subset.head(4) ```
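The `convert_dict_into_dummy` helper above parses the stringified attribute dicts with `eval`. A safer variant of the same idea, using `ast.literal_eval`, which only accepts Python literals, looks like this (the helper name and exact NaN handling are illustrative, not the notebook's code):

```python
import ast

import numpy as np
import pandas as pd

def dict_column_to_columns(series):
    # Parse stringified dicts such as "{'WiFi': 'free'}"; NaN rows become empty
    # dicts, so their cells come out as NaN in every resulting column.
    parsed = series.apply(lambda v: ast.literal_eval(v) if isinstance(v, str) else {})
    return pd.DataFrame(list(parsed), index=series.index)
```

Each dict key becomes a column, just like the `.apply(pd.Series)` call in the notebook, but without executing arbitrary strings from the CSV.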
This notebook assumes Kafka (and ZooKeeper) have been started and are available at localhost:9092. https://medium.com/better-programming/your-local-event-driven-environment-using-dockerised-kafka-cluster-6e84af09cd95

```
$ docker-compose up -d
```

You can explicitly create Kafka topics with appropriate replication and partition config.

```
% docker exec -ti kafka bash
root@kafka:/# kafka-topics --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic whylogs-stream
```

```
%matplotlib inline
import warnings
warnings.simplefilter("ignore")

!pip install kafka-python

import datetime
import os.path
import pandas as pd
import numpy as np
```

Load some sample data that we will feed into a Kafka topic.

```
data_file = "lending_club_demo.csv"
full_data = pd.read_csv(os.path.join(data_file))
full_data['issue_d'].describe()

data = full_data[full_data['issue_d'] == 'Jan-2017']
```

Load some data into a Kafka topic.

```
from kafka import KafkaProducer
import json

producer = KafkaProducer(bootstrap_servers='localhost:9092',
                         value_serializer=lambda v: json.dumps(v).encode('utf-8'))
for i, row in data.iterrows():
    producer.send('whylogs-stream', row.to_dict())

import json
from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(bootstrap_servers='localhost:9092',
                         value_deserializer=lambda x: json.loads(x.decode('utf-8')))

# consumer.seek_to_beginning workaround
# https://github.com/dpkp/kafka-python/issues/601#issuecomment-331419097
assignments = []
topics = ['whylogs-stream']
for topic in topics:
    partitions = consumer.partitions_for_topic(topic)
    for p in partitions:
        print(f'topic {topic} - partition {p}')
        assignments.append(TopicPartition(topic, p))
consumer.assign(assignments)
```

A long-running, stand-alone Python consumer might use this code to read events from a Kafka topic. We don't use this in the notebook because it does not terminate.
```
import datetime

consumer.seek_to_beginning();
total = 0
with session.logger(dataset_name="another-dataset",
                    dataset_timestamp=datetime.datetime(2020, 9, 22, 0, 0)) as logger:
    for record in consumer:
        total += 1
        print(f'total {total}')
        logger.log(record.value)
```

For Notebooks it is better to poll for data and exit when the partition is exhausted. For demonstration purposes, we reset all partitions to the beginning.

```
from whylogs import get_or_create_session

session = get_or_create_session()

consumer.seek_to_beginning();
with session.logger(dataset_name="another-dataset") as logger:
    total = 0
    while True:
        finished = True
        record = consumer.poll(timeout_ms=500, max_records=100, update_offsets=True)
        for k,v in record.items():
            print(f'{k} - {len(v)}')
            total += len(v)
            df = pd.DataFrame([row.value for row in v])
            logger.log_dataframe(df)
            finished = False
        if finished:
            print(f"total {total}")
            break

!find whylogs-output -type f
```
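The producer and consumer above must agree on how record values are serialized and deserialized. A broker-free sanity check of that JSON round trip (the sample record is made up):

```python
import json

# Same lambdas as passed to KafkaProducer / KafkaConsumer above.
serialize = lambda v: json.dumps(v).encode('utf-8')
deserialize = lambda x: json.loads(x.decode('utf-8'))

row = {'issue_d': 'Jan-2017', 'loan_amnt': 10000}
payload = serialize(row)        # bytes, as Kafka expects on the wire
assert isinstance(payload, bytes)
assert deserialize(payload) == row
```

One caveat: `row.to_dict()` on a pandas row may contain NumPy scalar types, which `json.dumps` can reject; converting them to plain Python types first avoids surprises.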
# Charts for **REINFORCE** or Monte-Carlo policy gradient

My notes on the REINFORCE algorithm.

## Symbol Lookup Table

| Symbol | Definition |
|--------|------------|
| $s \in S$ | $s$ denotes a state. |
| $a \in A$ | $a$ denotes an action. |
| $r \in R$ | $r$ denotes a reward. |
| $ \pi(a \vert s) $ | Policy function, returns the probability of choosing action $a$ in state $s$. |
| $V(s)$ | State-value function; measures how good a state is (in terms of expected reward). |
| $V^\pi (s)$ | State-value function when we are using policy $\pi$. |
| $Q^\pi$ | Action-value function; measures how good an action is. |
| $Q^\pi (s, a)$ | Action-value function; how good it is to take action $a$ in state $s$ when we use policy $\pi$. |
| $\gamma$ | Discount factor. |
| $G_t$ | Total return value. |

## Definition

[REINFORCE (Monte-Carlo policy gradient)](https://lilianweng.github.io/lil-log/2018/04/08/policy-gradient-algorithms.html#reinforce) relies on an estimated return by Monte-Carlo methods using episode samples to update the policy parameter $\theta$. REINFORCE works because the expectation of the sample gradient is equal to the actual gradient:

$$
\begin{eqnarray}
\nabla_{\theta}J(\theta) &=& \mathbb{E}_{\pi} [ Q^{\pi} (s, a) \nabla_\theta \ln \pi_\theta(a \vert s) ] \nonumber \\
&=& \mathbb{E}_{\pi}[G_t \nabla_\theta \ln \pi_\theta ( A_t \vert S_t)] \nonumber
\end{eqnarray}
$$

(Because $ Q^\pi (S_t, A_t) = \mathbb{E}_{\pi}[G_t \vert S_t, A_t] $)

### Process

1. Initialize the policy parameter $\theta$ at random.
2. Generate one trajectory on policy $\pi_{\theta}: S_1, A_1, R_1, S_2, A_2, ... , S_T$.
3. For $t=1,2,...,T$:
    1. Estimate the return $G_t$.
    1.
Update policy parameters: $\theta \leftarrow \theta + \alpha \gamma^t G_t \nabla_{\theta} \ln \pi_{\theta}(A_t \vert S_t)$

## Sources

This is just a re-hash of what's already out there, nothing new per se.

1. [Lilian's Blog](https://lilianweng.github.io/lil-log/2018/04/08/policy-gradient-algorithms.html#reinforce)
1. [PyTorch's Github Repository](https://github.com/pytorch/examples/blob/master/reinforcement_learning/reinforce.py)

```
# Import all packages we want to use
from itertools import count
import numpy as np
import gym
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.distributions import Categorical
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
```

## Enough information about ``CartPole-v1``

A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of `+1` or `-1` to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of `+1` is provided for every timestep that the pole remains upright. The episode ends when the pole is more than `15` degrees from vertical, or the cart moves more than `2.4` units from the center.
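With the `+1`-per-timestep reward just described, step 3.1 of the process ("Estimate the return $G_t$") is a single backward pass over the episode's rewards, using the recurrence $G_t = r_t + \gamma G_{t+1}$, the same loop that `policy_optimize_step` runs below. A standalone sketch:

```python
def discounted_returns(rewards, gamma=0.99):
    # Walk the rewards backwards, accumulating G_t = r_t + gamma * G_{t+1}.
    G, returns = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        returns.insert(0, G)
    return returns

# For a 3-step CartPole episode (reward +1 every step) with gamma = 0.5:
print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))  # [1.75, 1.5, 1.0]
```

The notebook then normalizes these returns to zero mean and unit variance before weighting the log-probabilities, which reduces gradient variance.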
### Summary | Property | Default | Note | |-------------------- |------------ |------------------------------------------------------------------------------------------- | | Max Episode Length | `500` | Check out this [line](https://github.com/openai/gym/blob/master/gym/envs/__init__.py#L63) | | Action Space | `+1`, `-1` | The system is controlled by applying a force of `+1` or `-1` to the cart | | Default reward | `+1` | A reward of `+1` is provided for every time-step that the pole remains upright | ### Sample output <center><img src='./CartPole.gif'></center> [Source](https://gym.openai.com/envs/CartPole-v1/) ``` # Preparing the Cart Pole env = gym.make('CartPole-v1') env.seed(0) torch.manual_seed(0) gamma = 0.99 # A very simple NN with one hidden layer acts as a brain # We are simply mapping observation from environment to actions using one hidden layer! class REINFORCEBrain(nn.Module): def __init__(self): super(REINFORCEBrain, self).__init__() self.affine1 = nn.Linear(4, 128) self.affine2 = nn.Linear(128, 2) self.saved_log_probs = [] self.rewards = [] def forward(self, x): x = F.relu(self.affine1(x)) action_scores = self.affine2(x) return F.softmax(action_scores, dim=1) def total_reward_received(self): return np.sum(self.rewards) # No need to use GPU yet! you can call .cuda() after REINFORCEBrain() to instantiate CUDA version of Brain policy = REINFORCEBrain() #Defining an optimizer optimizer = optim.Adam(policy.parameters(), lr=1e-2) # Retrieving a value for epsilon using numpy's built-ins eps = np.finfo(np.float32).eps.item() # Sample from policy π and store some extra info for calculating Loss J(θ) def select_action(state): state = torch.from_numpy(state).float().unsqueeze(0) # Calculating probabilites of selection each actions probs = policy(state) # Using Categorical helper for sampling and log_probs m = Categorical(probs) action = m.sample() # Keeping log probs. 
Wee need this to calculate J(θ) policy.saved_log_probs.append(m.log_prob(action)) # converting tensor to python scalar and returning it return action.item() # just sample policy and return output # Used only for logging def sample_policy(state): state = torch.from_numpy(state).float().unsqueeze(0) # Calculating probabilites of selection each actions probs = policy(state) return probs.detach().numpy() def policy_optimize_step(): R = 0 policy_loss = [] rewards = [] # Discounted Reward Calculation for r in policy.rewards[::-1]: R = r + gamma * R rewards.insert(0, R) # List conversion to Tensor rewards = torch.tensor(rewards) # Normalizing Reward Tensor to have zero mean and unit variance rewards = (rewards - rewards.mean()) / (rewards.std() + eps) # Calculating Loss per action/reward for log_prob, reward in zip(policy.saved_log_probs, rewards): policy_loss.append(-log_prob * reward) optimizer.zero_grad() # converting list of tensors to array and summing all of them to create total loss policy_loss = torch.cat(policy_loss).sum() policy_loss.backward() optimizer.step() # Removing data from last episode del policy.rewards[:] del policy.saved_log_probs[:] def train(num_episodes, state_bank): # Length of each episode ep_history = [] # Total reward gathered in each episode rw_history = [] # Record Selected Actions policy_output_on_state_bank = {} for current_episode in range(num_episodes): # Reseting the Environment state = env.reset() # Gathering data, with max step of 500 for t in range(500): action = select_action(state) state, reward, done, _ = env.step(action) policy.rewards.append(reward) if done: break # Sample from our policy to log how it changes over training l = [] for sb in state_bank: probs = sample_policy(sb) l.append(probs) policy_output_on_state_bank[str(current_episode)] = l ep_history.append(t) rw_history.append(policy.total_reward_received()) # Optimize our policy after gathering a full episode policy_optimize_step() # Logging if (current_episode+1) % 50 
== 0: print('Episode {}\tLast Epsiode length: {:5d}\t'.format(current_episode, t)) return ep_history, rw_history, policy_output_on_state_bank state_bank = np.random.uniform(-1.5, 1.5, (180, 4)) episodes_to_train = 300 ep_history, rw_history, pout = train(episodes_to_train, state_bank) # Making plots larger! matplotlib.rcParams['figure.figsize'] = [15, 10] # X Axis of the plots xx = range(episodes_to_train) plt.subplot(2, 1, 1) plt.plot(xx, ep_history, '.-') plt.title('Reward and Episode Length') plt.ylabel('Length of each Episode') plt.subplot(2, 1, 2) plt.plot(xx, rw_history, '.-') plt.xlabel('Episode') plt.ylabel('Reward') plt.show() ``` ## Policy Evolution Chart Adding a very unintuitive plot to show how our policy decision changes over training ``` data_theta_rad = [float(x)*np.pi/180.0 for x in np.linspace(1, 360, 180)] data_theta_rad[0] = 0 data_theta_rad[-1] = 2 * np.pi ax1 = plt.subplot(121, polar=True) for i in np.linspace(1, 299, 25): data_r = np.array(pout[str(int(i))]).squeeze()[:, 0] ax1.plot(data_theta_rad, data_r, color='r', linewidth=0.5) ax1.set_rmax(95) ax1.grid(True) ax1.fill_between ax1.set_title("Choosing Action A", va='bottom') ax1.fill_between(data_theta_rad, 0, data_r, facecolor='r', alpha=0.01) ax1.axes.get_xaxis().set_visible(False) ax2 = plt.subplot(122, polar=True) for i in np.linspace(1, 299, 25): data_r = np.array(pout[str(int(i))]).squeeze()[:, 1] ax2.plot(data_theta_rad, data_r, color='b', linewidth=0.5) ax2.set_rmax(95) ax2.grid(True) ax2.fill_between ax2.set_title("Choosing Action B", va='bottom') ax2.fill_between(data_theta_rad, 0, data_r, facecolor='b', alpha=0.01) ax2.axes.get_xaxis().set_visible(False) plt.show() ``` ## Make A Movie !! Policy change over training. 
<center><img src='./PolicyChange.gif'></center> Bellow are some codes for generating this video ``` # import cv2 # from tqdm import tqdm # import io # from PIL import Image # import matplotlib.pyplot as plt # fourcc = cv2.VideoWriter_fourcc(*'DIVX') # video = cv2.VideoWriter('output1.avi', fourcc, 20.0, (1080, 720)) # for t in tqdm(np.linspace(1, 299, 35), desc='Generating Video ...'): # ax1 = plt.subplot(121, polar=True) # data_r = np.array(pout[str(int(t))]).squeeze()[:, 0] # ax1.plot(data_theta_rad, data_r, color='r', linewidth=0.5) # ax1.set_rmax(95) # ax1.grid(True) # ax1.fill_between # ax1.set_title("Choosing Action A", va='bottom') # ax1.fill_between(data_theta_rad, 0, data_r, facecolor='r', alpha=0.01) # ax1.axes.get_xaxis().set_visible(False) # axes = plt.gca() # axes.set_ylim([0, 1]) # buf = io.BytesIO() # plt.savefig(buf, format='png') # buf.seek(0) # img = Image.open(buf) # img_out_cv2 = np.array(img) # img_out_cv2 = img_out_cv2[:, :, ::-1].copy() # video.write(img_out_cv2) # buf.close() # video.release() # save_dir = "/src/rl-advantures/figs/" # for t in tqdm(range(300), desc='Generating Video ...'): # ax1 = plt.subplot(121, polar=True) # data_r = np.array(pout[str(int(t))]).squeeze()[:, 0] # ax1.plot(data_theta_rad, data_r, color='r', linewidth=0.5) # ax1.set_rmax(95) # ax1.grid(True) # ax1.fill_between # ax1.set_title("Choosing Action A", va='bottom') # ax1.fill_between(data_theta_rad, 0, data_r, facecolor='r', alpha=0.1) # ax1.axes.get_xaxis().set_visible(False) # axes = plt.gca() # axes.set_ylim([0, 1]) # ax2 = plt.subplot(122, polar=True) # data_r = np.array(pout[str(int(t))]).squeeze()[:, 1] # ax2.plot(data_theta_rad, data_r, color='b', linewidth=0.5) # ax2.set_rmax(95) # ax2.grid(True) # ax2.fill_between # ax2.set_title("Choosing Action B", va='bottom') # ax2.fill_between(data_theta_rad, 0, data_r, facecolor='b', alpha=0.1) # ax2.axes.get_xaxis().set_visible(False) # axes = plt.gca() # axes.set_ylim([0, 1]) # plt.savefig(save_dir + str(t) + 
'.png') # plt.clf()
```

## Notes

1. The reward plot is redundant: every value of `rw_history` is just the corresponding `ep_history` value plus `1`.
1. Continuing the training will hurt the performance of the model!

### Useful Tools

[Markdown Table Generator](https://www.tablesgenerator.com/markdown_tables)
``` from google.colab import drive drive.mount('/content/drive') cd /content/drive/My Drive/FYP/Sentiment Analysis/Implementation/Sentiment Analysis/Capsule/Dynamic _rounting_enhancement_Capsule_network !pip install tensorflow==1.14.0 !pip install keras==2.1.5 ``` # Dependencies ``` from __future__ import absolute_import from __future__ import division from __future__ import print_function import h5py import numpy as np from sklearn.metrics import precision_recall_fscore_support from sklearn.metrics import accuracy_score import keras from keras import backend as K from tensorflow.contrib.layers.python.layers import initializers import collections import pickle import gensim from gensim.models.keyedvectors import KeyedVectors import numpy as np import re import random import sys import os import pandas as pd from sklearn.model_selection import train_test_split from numpy import array from numpy import asarray from numpy import zeros import keras from keras.preprocessing.text import Tokenizer from keras.models import Sequential from keras.preprocessing.sequence import pad_sequences from keras.preprocessing import sequence from keras.layers import Dropout, Activation, Flatten,Embedding, Convolution1D, MaxPooling1D, AveragePooling1D, Input, Dense, merge,Add from keras.layers.recurrent import LSTM, GRU, SimpleRNN from keras.regularizers import l2 from keras.constraints import maxnorm from keras.datasets import imdb from keras import callbacks from keras.utils import generic_utils from keras.models import Model from keras.optimizers import Adadelta import time import numpy as np import pickle from collections import defaultdict import sys, re import pandas as pd from string import punctuation from os import listdir from collections import Counter from nltk.corpus import stopwords import string from string import punctuation from os import listdir from numpy import array from numpy import asarray from numpy import zeros from keras.preprocessing.text import Tokenizer from 
keras.preprocessing.sequence import pad_sequences from keras.models import Sequential from keras.layers import Dense from keras.layers import Flatten from keras.layers import Embedding from keras.layers.convolutional import Conv1D from keras.layers.convolutional import MaxPooling1D import nltk nltk.download('stopwords') import tensorflow as tf print(tf.__version__) ``` # Model ``` class Classifer(keras.layers.Layer): def get_hidden_states_before(self, hidden_states, step, shape, hidden_size): #padding zeros padding=tf.zeros((shape[0], step, hidden_size), dtype=tf.float32) #remove last steps displaced_hidden_states=hidden_states[:,:-step,:] #concat padding return tf.concat([padding, displaced_hidden_states], axis=1) #return tf.cond(step<=shape[1], lambda: tf.concat([padding, displaced_hidden_states], axis=1), lambda: tf.zeros((shape[0], shape[1], self.config.hidden_size_sum), dtype=tf.float32)) def get_hidden_states_after(self, hidden_states, step, shape, hidden_size): #padding zeros padding=tf.zeros((shape[0], step, hidden_size), dtype=tf.float32) #remove last steps displaced_hidden_states=hidden_states[:,step:,:] #concat padding return tf.concat([displaced_hidden_states, padding], axis=1) #return tf.cond(step<=shape[1], lambda: tf.concat([displaced_hidden_states, padding], axis=1), lambda: tf.zeros((shape[0], shape[1], self.config.hidden_size_sum), dtype=tf.float32)) def sum_together(self, l): combined_state=None for tensor in l: if combined_state==None: combined_state=tensor else: combined_state=combined_state+tensor return combined_state def slstm_cell(self, name_scope_name, hidden_size, lengths, initial_hidden_states, initial_cell_states, num_layers): with tf.name_scope(name_scope_name): #Word parameters #forget gate for left with tf.name_scope("f1_gate"): #current Wxf1 = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wxf") #left right Whf1 = tf.Variable(tf.random_normal([2*hidden_size, 
hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Whf") #initial state Wif1 = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wif") #dummy node Wdf1 = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wdf") #forget gate for right with tf.name_scope("f2_gate"): Wxf2 = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wxf") Whf2 = tf.Variable(tf.random_normal([2*hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Whf") Wif2 = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wif") Wdf2 = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wdf") #forget gate for inital states with tf.name_scope("f3_gate"): Wxf3 = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wxf") Whf3 = tf.Variable(tf.random_normal([2*hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Whf") Wif3 = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wif") Wdf3 = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wdf") #forget gate for dummy states with tf.name_scope("f4_gate"): Wxf4 = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wxf") Whf4 = tf.Variable(tf.random_normal([2*hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Whf") Wif4 = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, 
stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wif") Wdf4 = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wdf") #input gate for current state with tf.name_scope("i_gate"): Wxi = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wxi") Whi = tf.Variable(tf.random_normal([2*hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Whi") Wii = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wii") Wdi = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wdi") #input gate for output gate with tf.name_scope("o_gate"): Wxo = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wxo") Who = tf.Variable(tf.random_normal([2*hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Who") Wio = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wio") Wdo = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wdo") #bias for the gates with tf.name_scope("biases"): bi = tf.Variable(tf.random_normal([hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="bi") bo = tf.Variable(tf.random_normal([hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="bo") bf1 = tf.Variable(tf.random_normal([hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="bf1") bf2 = tf.Variable(tf.random_normal([hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="bf2") bf3 = tf.Variable(tf.random_normal([hidden_size], mean=0.0, stddev=0.1, 
dtype=tf.float32), dtype=tf.float32, name="bf3") bf4 = tf.Variable(tf.random_normal([hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="bf4") #dummy node gated attention parameters #input gate for dummy state with tf.name_scope("gated_d_gate"): gated_Wxd = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wxf") gated_Whd = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Whf") #output gate with tf.name_scope("gated_o_gate"): gated_Wxo = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wxo") gated_Who = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Who") #forget gate for states of word with tf.name_scope("gated_f_gate"): gated_Wxf = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Wxo") gated_Whf = tf.Variable(tf.random_normal([hidden_size, hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="Who") #biases with tf.name_scope("gated_biases"): gated_bd = tf.Variable(tf.random_normal([hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="bi") gated_bo = tf.Variable(tf.random_normal([hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="bo") gated_bf = tf.Variable(tf.random_normal([hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="bo") # print("first phase done!") #filters for attention mask_softmax_score=tf.cast(tf.sequence_mask(lengths), tf.float32)*1e25-1e25 # print("second phase done!") # print(mask_softmax_score.shape) mask_softmax_score_expanded=tf.expand_dims(mask_softmax_score, axis=2) # print("third phase done!") #filter invalid steps 
        sequence_mask = tf.expand_dims(tf.cast(tf.sequence_mask(lengths), tf.float32), axis=2)
        # filter embedding states
        initial_hidden_states = initial_hidden_states * sequence_mask
        initial_cell_states = initial_cell_states * sequence_mask
        # record shape of the batch
        shape = tf.shape(initial_hidden_states)
        # initial embedding states
        embedding_hidden_state = tf.reshape(initial_hidden_states, [-1, hidden_size])
        embedding_cell_state = tf.reshape(initial_cell_states, [-1, hidden_size])
        # randomly initialize the states
        if config.random_initialize:
            initial_hidden_states = tf.random_uniform(shape, minval=-0.05, maxval=0.05, dtype=tf.float32, seed=None, name=None)
            initial_cell_states = tf.random_uniform(shape, minval=-0.05, maxval=0.05, dtype=tf.float32, seed=None, name=None)
            # filter it
            initial_hidden_states = initial_hidden_states * sequence_mask
            initial_cell_states = initial_cell_states * sequence_mask
        # initial dummy node states
        dummynode_hidden_states = tf.reduce_mean(initial_hidden_states, axis=1)
        dummynode_cell_states = tf.reduce_mean(initial_cell_states, axis=1)
        for i in range(num_layers):
            # update dummy node states: average states
            combined_word_hidden_state = tf.reduce_mean(initial_hidden_states, axis=1)
            reshaped_hidden_output = tf.reshape(initial_hidden_states, [-1, hidden_size])
            # copy dummy states for computing forget gate
            transformed_dummynode_hidden_states = tf.reshape(
                tf.tile(tf.expand_dims(dummynode_hidden_states, axis=1), [1, shape[1], 1]), [-1, hidden_size])
            # input gate
            gated_d_t = tf.nn.sigmoid(
                tf.matmul(dummynode_hidden_states, gated_Wxd) + tf.matmul(combined_word_hidden_state, gated_Whd) + gated_bd)
            # output gate
            gated_o_t = tf.nn.sigmoid(
                tf.matmul(dummynode_hidden_states, gated_Wxo) + tf.matmul(combined_word_hidden_state, gated_Who) + gated_bo)
            # forget gate for hidden states
            gated_f_t = tf.nn.sigmoid(
                tf.matmul(transformed_dummynode_hidden_states, gated_Wxf) +
                tf.matmul(reshaped_hidden_output, gated_Whf) + gated_bf)
            # softmax on each hidden dimension
            reshaped_gated_f_t = tf.reshape(gated_f_t, [shape[0], shape[1], hidden_size]) + mask_softmax_score_expanded
            gated_softmax_scores = tf.nn.softmax(
                tf.concat([reshaped_gated_f_t, tf.expand_dims(gated_d_t, dim=1)], axis=1), dim=1)
            # split the softmax scores
            new_reshaped_gated_f_t = gated_softmax_scores[:, :shape[1], :]
            new_gated_d_t = gated_softmax_scores[:, shape[1]:, :]
            # new dummy states
            dummy_c_t = tf.reduce_sum(new_reshaped_gated_f_t * initial_cell_states, axis=1) + \
                tf.squeeze(new_gated_d_t, axis=1) * dummynode_cell_states
            dummy_h_t = gated_o_t * tf.nn.tanh(dummy_c_t)
            # update word node states: get states before
            initial_hidden_states_before = [
                tf.reshape(self.get_hidden_states_before(initial_hidden_states, step + 1, shape, hidden_size), [-1, hidden_size])
                for step in range(self.config.step)]
            initial_hidden_states_before = self.sum_together(initial_hidden_states_before)
            initial_hidden_states_after = [
                tf.reshape(self.get_hidden_states_after(initial_hidden_states, step + 1, shape, hidden_size), [-1, hidden_size])
                for step in range(self.config.step)]
            initial_hidden_states_after = self.sum_together(initial_hidden_states_after)
            # get states after
            initial_cell_states_before = [
                tf.reshape(self.get_hidden_states_before(initial_cell_states, step + 1, shape, hidden_size), [-1, hidden_size])
                for step in range(self.config.step)]
            initial_cell_states_before = self.sum_together(initial_cell_states_before)
            initial_cell_states_after = [
                tf.reshape(self.get_hidden_states_after(initial_cell_states, step + 1, shape, hidden_size), [-1, hidden_size])
                for step in range(self.config.step)]
            initial_cell_states_after = self.sum_together(initial_cell_states_after)
            # reshape for matmul
            initial_hidden_states = tf.reshape(initial_hidden_states, [-1, hidden_size])
            initial_cell_states = tf.reshape(initial_cell_states, [-1, hidden_size])
            # concat before and after hidden states
            concat_before_after = tf.concat([initial_hidden_states_before, initial_hidden_states_after], axis=1)
            # copy dummy node states
            transformed_dummynode_hidden_states = tf.reshape(
                tf.tile(tf.expand_dims(dummynode_hidden_states, axis=1), [1, shape[1], 1]), [-1, hidden_size])
            transformed_dummynode_cell_states = tf.reshape(
                tf.tile(tf.expand_dims(dummynode_cell_states, axis=1), [1, shape[1], 1]), [-1, hidden_size])
            f1_t = tf.nn.sigmoid(
                tf.matmul(initial_hidden_states, Wxf1) + tf.matmul(concat_before_after, Whf1) +
                tf.matmul(embedding_hidden_state, Wif1) + tf.matmul(transformed_dummynode_hidden_states, Wdf1) + bf1)
            f2_t = tf.nn.sigmoid(
                tf.matmul(initial_hidden_states, Wxf2) + tf.matmul(concat_before_after, Whf2) +
                tf.matmul(embedding_hidden_state, Wif2) + tf.matmul(transformed_dummynode_hidden_states, Wdf2) + bf2)
            f3_t = tf.nn.sigmoid(
                tf.matmul(initial_hidden_states, Wxf3) + tf.matmul(concat_before_after, Whf3) +
                tf.matmul(embedding_hidden_state, Wif3) + tf.matmul(transformed_dummynode_hidden_states, Wdf3) + bf3)
            f4_t = tf.nn.sigmoid(
                tf.matmul(initial_hidden_states, Wxf4) + tf.matmul(concat_before_after, Whf4) +
                tf.matmul(embedding_hidden_state, Wif4) + tf.matmul(transformed_dummynode_hidden_states, Wdf4) + bf4)
            i_t = tf.nn.sigmoid(
                tf.matmul(initial_hidden_states, Wxi) + tf.matmul(concat_before_after, Whi) +
                tf.matmul(embedding_hidden_state, Wii) + tf.matmul(transformed_dummynode_hidden_states, Wdi) + bi)
            o_t = tf.nn.sigmoid(
                tf.matmul(initial_hidden_states, Wxo) + tf.matmul(concat_before_after, Who) +
                tf.matmul(embedding_hidden_state, Wio) + tf.matmul(transformed_dummynode_hidden_states, Wdo) + bo)
            f1_t, f2_t, f3_t, f4_t, i_t = (tf.expand_dims(f1_t, axis=1), tf.expand_dims(f2_t, axis=1),
                                           tf.expand_dims(f3_t, axis=1), tf.expand_dims(f4_t, axis=1),
                                           tf.expand_dims(i_t, axis=1))
            five_gates = tf.concat([f1_t, f2_t, f3_t, f4_t, i_t], axis=1)
            five_gates = tf.nn.softmax(five_gates, dim=1)
            f1_t, f2_t, f3_t, f4_t, i_t = tf.split(five_gates, num_or_size_splits=5, axis=1)
            f1_t, f2_t, f3_t, f4_t, i_t = (tf.squeeze(f1_t, axis=1), tf.squeeze(f2_t, axis=1),
                                           tf.squeeze(f3_t, axis=1), tf.squeeze(f4_t, axis=1),
                                           tf.squeeze(i_t, axis=1))
            c_t = (f1_t * initial_cell_states_before) + (f2_t * initial_cell_states_after) + \
                (f3_t * embedding_cell_state) + (f4_t * transformed_dummynode_cell_states) + \
                (i_t * initial_cell_states)
            h_t = o_t * tf.nn.tanh(c_t)
            # update states
            initial_hidden_states = tf.reshape(h_t, [shape[0], shape[1], hidden_size])
            initial_cell_states = tf.reshape(c_t, [shape[0], shape[1], hidden_size])
            initial_hidden_states = initial_hidden_states * sequence_mask
            initial_cell_states = initial_cell_states * sequence_mask
            dummynode_hidden_states = dummy_h_t
            dummynode_cell_states = dummy_c_t
        initial_hidden_states = tf.nn.dropout(initial_hidden_states, self.dropout)
        initial_cell_states = tf.nn.dropout(initial_cell_states, self.dropout)
        return initial_hidden_states, initial_cell_states, dummynode_hidden_states

    def slstm_basic_layer(self, embedding, config, mask):
        embedding = tf.squeeze(embedding)
        initial_hidden_states = embedding
        initial_cell_states = tf.identity(initial_hidden_states)
        initial_hidden_states = tf.nn.dropout(initial_hidden_states, config.keep_prob)
        initial_cell_states = tf.nn.dropout(initial_cell_states, config.keep_prob)
        # create layers
        new_hidden_states, new_cell_state, dummynode_hidden_states = self.slstm_cell(
            "word_slstm", config.hidden_size, mask, initial_hidden_states, initial_cell_states, config.layer)
        softmax_w = tf.Variable(tf.random_normal([2 * config.hidden_size, config.num_label], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="softmax_w")
        softmax_b = tf.Variable(tf.random_normal([config.num_label], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="softmax_b")
        # representation = dummynode_hidden_states
        representation = tf.reduce_mean(tf.concat([new_hidden_states, tf.expand_dims(dummynode_hidden_states, axis=1)], axis=1), axis=1)
        representation1 = tf.reduce_mean(tf.concat([new_hidden_states, tf.expand_dims(dummynode_hidden_states, axis=1)], axis=1), axis=1)
        # new_hidden_states_concat = tf.concat([new_hidden_states, tf.expand_dims(dummynode_hidden_states, axis=1)], axis=1)
        softmax_w2 = tf.Variable(tf.random_normal([config.hidden_size, 2 * config.hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="softmax_w2")
        softmax_b2 = tf.Variable(tf.random_normal([2 * config.hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="softmax_b2")
        # note: these were originally also named "softmax_w2"/"softmax_b2"; renamed to avoid duplicate variable names
        softmax_w22 = tf.Variable(tf.random_normal([config.hidden_size, 2 * config.hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="softmax_w22")
        softmax_b22 = tf.Variable(tf.random_normal([2 * config.hidden_size], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32, name="softmax_b22")
        representation = tf.nn.tanh(tf.matmul(representation, softmax_w2) + softmax_b2)
        representation1 = tf.nn.tanh(tf.matmul(representation1, softmax_w22) + softmax_b22)
        return new_hidden_states, representation, representation1

    def __init__(self, config):
        self.config = config
        self.dropout = self.keep_prob = config.keep_prob


def _matmul_broadcast(x, y, name):
    """Compute x @ y, broadcasting over the first `N - 2` ranks."""
    with tf.variable_scope(name) as scope:
        return tf.reduce_sum(
            tf.nn.dropout(x[..., tf.newaxis] * y[..., tf.newaxis, :, :], 1), axis=-2)


z = tf.constant(0.1, shape=[2, 3, 4])
z[..., tf.newaxis]


def _conv2d_wrapper(inputs, shape, strides, padding, add_bias, activation_fn, name, stddev=0.1):
    """Wrapper over tf.nn.conv2d().
""" with tf.variable_scope(name) as scope: kernel = _get_weights_wrapper( name='weights', shape=shape, weights_decay_factor=0.0, ) output = tf.nn.conv2d(inputs, filter=kernel, strides=strides, padding=padding, name='conv') if add_bias: biases = _get_biases_wrapper(name='biases', shape=[shape[-1]] ) output = tf.add(output, biases, name='biasAdd') if activation_fn is not None: output = activation_fn(output, name='activation') return output def _get_weights_wrapper(name, shape, dtype=tf.float32, initializer=initializers.xavier_initializer(),weights_decay_factor=None): """Wrapper over _get_variable_wrapper() to get weights, with weights decay factor in loss. """ weights = _get_variable_wrapper(name=name, shape=shape, dtype=dtype, initializer=initializer) if weights_decay_factor is not None and weights_decay_factor > 0.0: weights_wd = tf.multiply(tf.nn.l2_loss(weights), weights_decay_factor, name=name + '/l2loss') tf.add_to_collection('losses', weights_wd) return weights def _get_biases_wrapper(name, shape, dtype=tf.float32, initializer=tf.constant_initializer(0.0)): """Wrapper over _get_variable_wrapper() to get bias. """ biases = _get_variable_wrapper(name=name, shape=shape, dtype=dtype, initializer=initializer) return biases def _get_variable_wrapper( name, shape=None, dtype=None, initializer=None, regularizer=None, trainable=True, collections=None, caching_device=None, partitioner=None, validate_shape=True, custom_getter=None ): """Wrapper over tf.get_variable(). 
""" with tf.device('/cpu:0'): var = tf.get_variable( name, shape=shape, dtype=dtype, initializer=initializer, regularizer=regularizer, trainable=trainable, collections=collections, caching_device=caching_device, partitioner=partitioner, validate_shape=validate_shape, custom_getter=custom_getter ) return var def softmax(x, axis=-1): ex = K.exp(x - K.max(x, axis=axis, keepdims=True)) return ex/K.sum(ex, axis=axis, keepdims=True) def squash_v1(x, axis=-1): s_squared_norm = K.sum(K.square(x), axis, keepdims=True) + K.epsilon() scale = K.sqrt(s_squared_norm)/ (0.5 + s_squared_norm) return scale * x def squash_v0(s, axis=-1, epsilon=1e-7, name=None): s_squared_norm = K.sum(K.square(s), axis, keepdims=True) + K.epsilon() safe_norm = K.sqrt(s_squared_norm) scale = 1 - tf.exp(-safe_norm) return scale * s / safe_norm def routing1(u_hat_vecs, beta_a, iterations, output_capsule_num, i_activations, context): b = keras.backend.zeros_like(u_hat_vecs[:,:,:,0])+ context if i_activations is not None: i_activations = i_activations[...,tf.newaxis] for i in range(iterations): if False: leak = tf.zeros_like(b, optimize=True) leak = tf.reduce_sum(leak, axis=1, keep_dims=True) leaky_logits = tf.concat([leak, b], axis=1) leaky_routing = tf.nn.softmax(leaky_logits, dim=1) c = tf.split(leaky_routing, [1, output_capsule_num], axis=1)[1] else: c = softmax(b, 1) # if i_activations is not None: # tf.transpose(tf.transpose(c, perm=[0,2,1]) * i_activations, perm=[0,2,1]) outputs = squash_v1(K.batch_dot(c, u_hat_vecs, [2, 2])) if i < iterations - 1: b = b + K.batch_dot(outputs, u_hat_vecs, [2, 3]) poses = outputs activations = K.sqrt(K.sum(K.square(poses), 2)) return poses, activations def routing(u_hat_vecs, beta_a, iterations, output_capsule_num, i_activations, context_sensitivity): b = keras.backend.zeros_like(u_hat_vecs[:,:,:,0]) + context_sensitivity if i_activations is not None: i_activations = i_activations[...,tf.newaxis] for i in range(iterations): if False: leak = tf.zeros_like(b, 
                                 optimize=True)
            leak = tf.reduce_sum(leak, axis=1, keep_dims=True)
            leaky_logits = tf.concat([leak, b], axis=1)
            leaky_routing = tf.nn.softmax(leaky_logits, dim=1)
            c = tf.split(leaky_routing, [1, output_capsule_num], axis=1)[1]
        else:
            c = softmax(b, 1)
        # if i_activations is not None:
        #     tf.transpose(tf.transpose(c, perm=[0, 2, 1]) * i_activations, perm=[0, 2, 1])
        outputs = K.batch_dot(c, u_hat_vecs, [2, 2])
        if i < iterations - 1:
            b = b + K.batch_dot(outputs, u_hat_vecs, [2, 3])
    poses = squash_v1(outputs)
    activations = K.sqrt(K.sum(K.square(poses), 2))
    return poses, activations


def vec_transformationByConv(poses, input_capsule_dim, input_capsule_num, output_capsule_dim, output_capsule_num):
    kernel = _get_weights_wrapper(
        name='weights',
        shape=[1, input_capsule_dim, output_capsule_dim * output_capsule_num],
        weights_decay_factor=0.0)
    u_hat_vecs = keras.backend.conv1d(poses, kernel)
    u_hat_vecs = keras.backend.reshape(u_hat_vecs, (-1, input_capsule_num, output_capsule_num, output_capsule_dim))
    u_hat_vecs = keras.backend.permute_dimensions(u_hat_vecs, (0, 2, 1, 3))
    return u_hat_vecs


def vec_transformationByMat(poses, input_capsule_dim, input_capsule_num, output_capsule_dim, output_capsule_num, shared=False):
    inputs_poses_shape = poses.get_shape().as_list()
    poses = poses[..., tf.newaxis, :]
    poses = tf.tile(poses, [1, 1, output_capsule_num, 1])
    if shared:
        kernel = _get_weights_wrapper(
            name='weights',
            shape=[1, 1, output_capsule_num, output_capsule_dim, input_capsule_dim],
            weights_decay_factor=0.0)
        kernel = tf.tile(kernel, [inputs_poses_shape[0], input_capsule_num, 1, 1, 1])
    else:
        kernel = _get_weights_wrapper(
            name='weights',
            shape=[1, input_capsule_num, output_capsule_num, output_capsule_dim, input_capsule_dim],
            weights_decay_factor=0.0)
        kernel = tf.tile(kernel, [inputs_poses_shape[0], 1, 1, 1, 1])
    u_hat_vecs = tf.squeeze(tf.matmul(kernel, poses[..., tf.newaxis]), axis=-1)
    u_hat_vecs = keras.backend.permute_dimensions(u_hat_vecs, (0, 2, 1, 3))
    return u_hat_vecs


def capsules_init(inputs, shape, strides, padding, pose_shape, add_bias, name):
    with tf.variable_scope(name):
        poses = _conv2d_wrapper(
            inputs,
            shape=shape[0:-1] + [shape[-1] * pose_shape],
            strides=strides,
            padding=padding,
            add_bias=add_bias,
            activation_fn=None,
            name='pose_stacked')
        poses_shape = poses.get_shape().as_list()
        poses = tf.reshape(poses, [-1, poses_shape[1], poses_shape[2], shape[-1], pose_shape])
        beta_a = _get_weights_wrapper(name='beta_a', shape=[1, shape[-1]])
        poses = squash_v1(poses, axis=-1)
        activations = K.sqrt(K.sum(K.square(poses), axis=-1)) + beta_a
    return poses, activations


def capsule_fc_layer(nets, output_capsule_num, iterations, name, representation, args):
    with tf.variable_scope(name):
        poses, i_activations = nets
        input_pose_shape = poses.get_shape().as_list()
        u_hat_vecs = vec_transformationByConv(
            poses, input_pose_shape[-1], input_pose_shape[1], input_pose_shape[-1], output_capsule_num)
        beta_a = _get_weights_wrapper(name='beta_a', shape=[1, output_capsule_num])
        representation = tf.reshape(representation, [input_pose_shape[0], 1, 1, 600])
        representation = tf.tile(representation, [1, args.num_classes, 1, 1])
        context = _get_weights_wrapper(name='context', shape=[input_pose_shape[0], args.num_classes, 600, input_pose_shape[1]])
        context_sensitivity = tf.matmul(representation, context)
        context_sensitivity = tf.squeeze(context_sensitivity)
        poses, activations = routing(u_hat_vecs, beta_a, iterations, output_capsule_num, i_activations, context_sensitivity)
    return poses, activations


def capsule_flatten(nets):
    poses, activations = nets
    input_pose_shape = poses.get_shape().as_list()
    poses = tf.reshape(poses, [-1, input_pose_shape[1] * input_pose_shape[2] * input_pose_shape[3], input_pose_shape[-1]])
    activations = tf.reshape(activations, [-1, input_pose_shape[1] * input_pose_shape[2] * input_pose_shape[3]])
    return poses, activations


def capsule_conv_layer(nets, shape, strides, iterations, name, representation):
    with tf.variable_scope(name):
        poses, i_activations = nets
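        # Editor's note (illustrative sketch, not part of the original code):
        # hk_offsets / wk_offsets below enumerate the sliding-window row and
        # column indices of each capsule patch. For example, with kernel height
        # shape[0] = 3, stride 1 and input height 5,
        #     hk_offsets = [[0, 1, 2], [1, 2, 3], [2, 3, 4]]
        # and the nested tf.gather calls pull those rows/columns out of `poses`
        # to form the patches that are routed together.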
        inputs_poses_shape = poses.get_shape().as_list()
        hk_offsets = [
            [(h_offset + k_offset) for k_offset in range(0, shape[0])]
            for h_offset in range(0, inputs_poses_shape[1] + 1 - shape[0], strides[1])]
        wk_offsets = [
            [(w_offset + k_offset) for k_offset in range(0, shape[1])]
            for w_offset in range(0, inputs_poses_shape[2] + 1 - shape[1], strides[2])]
        inputs_poses_patches = tf.transpose(
            tf.gather(
                tf.gather(poses, hk_offsets, axis=1, name='gather_poses_height_kernel'),
                wk_offsets, axis=3, name='gather_poses_width_kernel'),
            perm=[0, 1, 3, 2, 4, 5, 6], name='inputs_poses_patches')
        inputs_poses_shape = inputs_poses_patches.get_shape().as_list()
        inputs_poses_patches = tf.reshape(inputs_poses_patches, [-1, shape[0] * shape[1] * shape[2], inputs_poses_shape[-1]])
        i_activations_patches = tf.transpose(
            tf.gather(
                tf.gather(i_activations, hk_offsets, axis=1, name='gather_activations_height_kernel'),
                wk_offsets, axis=3, name='gather_activations_width_kernel'),
            perm=[0, 1, 3, 2, 4, 5], name='inputs_activations_patches')
        patches_dim = i_activations_patches.get_shape().as_list()
        i_activations_patches = tf.reshape(i_activations_patches, [-1, shape[0] * shape[1] * shape[2]])
        u_hat_vecs = vec_transformationByConv(
            inputs_poses_patches, inputs_poses_shape[-1], shape[0] * shape[1] * shape[2],
            inputs_poses_shape[-1], shape[3])
        # patches_dim = i_activations_patches.get_shape().as_list()
        representation = tf.expand_dims(representation, -1)
        representation = tf.tile(representation, [1, 1, 8])
        representation = tf.reshape(representation, [-1, 1, 16, 300])
        representation = tf.tile(representation, [1, patches_dim[1], 1, 1])
        representation = tf.reshape(representation, [-1, 16, 300])
        context = _get_weights_wrapper(name='context', shape=[patches_dim[1] * patches_dim[0], 300, 48])
        context_sensitivity = tf.matmul(representation, context)
        beta_a = _get_weights_wrapper(name='beta_a', shape=[1, shape[3]])
        poses, activations = routing(u_hat_vecs, beta_a, iterations, shape[3], i_activations_patches,
                                     context_sensitivity)
        poses = tf.reshape(poses, [inputs_poses_shape[0], inputs_poses_shape[1],
                                   inputs_poses_shape[2], shape[3], inputs_poses_shape[-1]])
        activations = tf.reshape(activations, [inputs_poses_shape[0], inputs_poses_shape[1],
                                               inputs_poses_shape[2], shape[3]])
        nets = poses, activations
    return nets
```

# **Load Data**

## From other repo

```
import numpy as np
import os
import pickle
import pandas as pd


def checkdirs(directory):
    if not os.path.exists(directory):
        os.makedirs(directory)


def read_text(filename):
    with open(filename, 'r') as f:
        best_epoch = int(f.read())
    return best_epoch


def write_text(text, filename):
    with open(filename, 'w') as f:
        f.write(text)


def load_pickle_data(pickle_dir, dataset):
    assert os.path.exists(pickle_dir)
    pickle_data = pickle.load(open(pickle_dir + dataset, "rb"))
    return pickle_data


def get_idx_from_sent(sent, word_idx_map, max_length):
    x = []
    words = sent.split()[:max_length]
    for word in words:
        if word in word_idx_map:
            x.append(word_idx_map[word])
    while len(x) < max_length:
        x.append(0)
    return x


def make_idx_data(raw_datas, word_idx_map, len_train, max_length):
    data = []
    for raw_data in raw_datas:
        sent = get_idx_from_sent(raw_data["text"], word_idx_map, max_length)
        sent.append(raw_data["y"])
        data.append(sent)
    split = len_train
    train = np.array(data[:split], dtype="int")
    test = np.array(data[split:], dtype="int")
    return train, test


def preprocessing(data_path, dataset, long_sent=800):
    """
    :param data_path: base directory
    :param dataset: select dataset {'20news', 'mr', 'trec', 'mpqa'}
    :param long_sent: if dataset has long sentences, set to be constant length value
    :return: seq_length, num_classes, vocab_size, x_train, y_train, x_test, y_test,
             pre-trained word vectors (GloVe 840B), word_idx
    """
    assert os.path.exists(data_path) is True
    x = load_pickle_data(data_path, dataset)
    data_frame, pretrain_word, len_train, n_exist_word, vocab, word_idx = x
    max_l = int(np.max(pd.DataFrame(data_frame)["num_words"]))
    if dataset in ["reuters", "20news",
"imdb", 'mr']: train, test = make_idx_data(data_frame, word_idx, len_train, long_sent) else: train, test = make_idx_data(data_frame, word_idx, len_train, max_l) # train[:, :-1] = word idx # train[:, -1] = true label x_train = train[:, :-1] y_train = train[:, -1] x_test = test[:, :-1] y_test = test[:, -1] sequence_length = len(x_train[0]) # make one-hot labels = sorted(list(set(y_train))) one_hot = np.zeros((len(labels), len(labels)), int) np.fill_diagonal(one_hot, 1) label_dict = dict(zip(labels, one_hot)) y_train = np.eye(len(label_dict))[y_train] num_class = y_train.shape[1] y_test = np.eye(len(label_dict))[y_test] vocab_size = pretrain_word.shape[0] print("sequence length :", sequence_length) print("vocab size :", vocab_size) print("num classes :", num_class) return sequence_length, num_class, vocab_size, x_train, y_train, x_test, y_test, pretrain_word, word_idx ``` ## load data ``` def text_preprocessing(train_data,test_data): train_data_texts = train_data['comment'] train_data_labels = train_data['label'] test_data_texts = test_data['comment'] test_data_labels = test_data['label'] comment_texts = [] comment_labels = [] train_text = [] test_text = [] train_labels=[] test_labels=[] for label in train_data_labels: if label == "POSITIVE": train_labels.append(1) else: train_labels.append(0) comment_labels.append(train_labels) for label in test_data_labels: if label == "POSITIVE": test_labels.append(1) else: test_labels.append(0) comment_labels.append(test_labels) for comment in train_data_texts: lines = [] try: words = comment.split() lines += words except: continue train_text.append(lines) comment_texts.append(train_text) for comment in test_data_texts: lines = [] try: words = comment.split() lines += words except: continue test_text.append(lines) comment_texts.append(test_text) return comment_texts,comment_labels EMBEDDING_SIZE = 300 #@param [50, 150, 200, 250, 300, 350, 400, 450, 500] embedding_type = "fasttext" #@param ["fasttext","word2vec"] experiment_no = 
"2000" #@param [] {allow-input: true} model_name = "DS_Caps_sinhala" folder_path = '/content/drive/My Drive/Final Year Project/FYP/Sentiment Analysis/Implementation/' lankadeepa_data_path = folder_path + 'corpus/new/preprocess_from_isuru/lankadeepa_tagged_comments.csv' gossip_lanka_data_path = folder_path + 'corpus/new/preprocess_from_isuru/gossip_lanka_tagged_comments.csv' # "corpus/new/preprocess_from_unicode_values/lankadeepa_tagged_comments.csv" # "corpus/new/preprocess_from_unicode_values/gossip_lanka_tagged_comments.csv" context = 5 embeds = "fasttext" word_embedding_path = folder_path + "word_embedding/"+embeds+"/source2_data_from_gosspiLanka_and_lankadeepa/"+str(EMBEDDING_SIZE)+"/"+embedding_type+"_"+str(EMBEDDING_SIZE)+"_"+str(context) word_embedding_keydvectors_path = folder_path + "word_embedding/"+embeds+"/source2_data_from_gosspiLanka_and_lankadeepa/"+str(EMBEDDING_SIZE)+"/keyed_vectors/keyed.kv" embedding_matrix_path = folder_path + 'Sentiment Analysis/CNN RNN/embedding_matrix/'+embedding_type+'_lankadeepa_gossiplanka_'+str(EMBEDDING_SIZE)+'_'+str(context) experiment_name = folder_path + "Sentiment Analysis/CNN RNN/experiments/" +str(experiment_no) + "_"+ model_name +"_"+embedding_type+"_"+str(EMBEDDING_SIZE)+"_"+str(context) model_save_path = folder_path + "Sentiment Analysis/CNN RNN/saved_models/"+str(experiment_no)+"_weights_best_"+model_name+"_"+embedding_type+"_"+str(experiment_no)+".hdf5" lankadeepa_data = pd.read_csv(lankadeepa_data_path)[:9059] gossipLanka_data = pd.read_csv(gossip_lanka_data_path) gossipLanka_data = gossipLanka_data.drop(columns=['Unnamed: 3']) all_data = pd.concat([lankadeepa_data,gossipLanka_data], ignore_index=True) all_data.shape all_data def text_preprocessing(data): comments = data['comment'] labels = data['label'] comments_splitted = [] for comment in comments: lines = [] try: words = comment.split() lines += words except: continue comments_splitted.append(lines) return comments_splitted,labels comment_texts, 
comment_labels = text_preprocessing(all_data) # prepare tokenizer t = Tokenizer() t.fit_on_texts(comment_texts) vocab_size = len(t.word_index) + 1 print(vocab_size) comment_labels for i in range(len(comment_labels)): comment_labels[i] = comment_labels[i]-2 comment_labels[15054] encoded_docs = t.texts_to_sequences(comment_texts) # max_length = len(max(encoded_docs, key=len)) max_length = 30 padded_docs = pad_sequences(encoded_docs, maxlen=max_length,padding='post') comment_labels = np.array(comment_labels) padded_docs = np.array(padded_docs) # comment_labels = pd.get_dummies(comment_labels).values print('Shape of label tensor:', comment_labels.shape) comment_labels = pd.get_dummies(comment_labels).values X_train, X_test, y_train, y_test = train_test_split(padded_docs, comment_labels, test_size=0.1, random_state=0) y_train[0] (unique, counts) = np.unique(y_test, return_counts = True) frequencies = np.asarray((unique, counts)).T print(frequencies) X_train # !pip install gensim # import gensim # from gensim.models.keyedvectors import KeyedVectors # from gensim.models.fasttext import FastText # from gensim.models import word2vec # word_embedding_model = FastText.load(word_embedding_path) # word_vectors = word_embedding_model.wv # word_vectors.vocab.items() # def generate_embedding_matrix(): # if (embedding_type == 'fasText'): # word_embedding_model = FastText.load(word_embedding_path) # else: # word_embedding_model = word2vec.Word2Vec.load(word_embedding_path) # word_vectors = word_embedding_model.wv # word_vectors.save(word_embedding_keydvectors_path) # word_vectors = KeyedVectors.load(word_embedding_keydvectors_path, mmap='r') # embeddings_index = dict() # for word, vocab_obj in word_vectors.vocab.items(): # embeddings_index[word]=word_vectors[word] # # create a weight matrix for words in training docs # embedding_matrix = zeros((vocab_size, EMBEDDING_SIZE)) # for word, i in t.word_index.items(): # embedding_vector = embeddings_index.get(word) # if embedding_vector is 
not None: # embedding_matrix[i] = embedding_vector # # pickle.dump(embedding_matrix, open(embedding_matrix_path, 'wb')) # return embedding_matrix def load_word_embedding_matrix(): f = open(embedding_matrix_path, 'rb') embedding_matrix= np.array(pickle.load(f)) return embedding_matrix embedding_matrix = load_word_embedding_matrix() ``` # **Loss functions** ``` def spread_loss(labels, activations, margin): activations_shape = activations.get_shape().as_list() mask_t = tf.equal(labels, 1) mask_i = tf.equal(labels, 0) activations_t = tf.reshape( tf.boolean_mask(activations, mask_t), [activations_shape[0], 1] ) activations_i = tf.reshape( tf.boolean_mask(activations, mask_i), [activations_shape[0], activations_shape[1] - 1] ) gap_mit = tf.reduce_sum(tf.square(tf.nn.relu(margin - (activations_t - activations_i)))) return gap_mit def cross_entropy(y, preds): y = tf.argmax(y, axis=1) loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=preds, labels=y) loss = tf.reduce_mean(loss) return loss def margin_loss(y, preds): y = tf.cast(y,tf.float32) loss = y * tf.square(tf.maximum(0., 0.9 - preds)) + \ 0.25 * (1.0 - y) * tf.square(tf.maximum(0., preds - 0.1)) loss = tf.reduce_mean(tf.reduce_sum(loss, axis=1)) # loss = tf.reduce_mean(loss) return loss def capsule_model_A(X, num_classes, args, context, context_sensitivity1): with tf.variable_scope('capsule_'+str(3), reuse=tf.AUTO_REUSE): nets = _conv2d_wrapper( X, shape=[3, 300, 1, 32], strides=[1, 2, 1, 1], padding='VALID', add_bias=True, activation_fn=tf.nn.relu, name='conv1' ) nets = capsules_init(nets, shape=[1, 1, 32, 16], strides=[1, 1, 1, 1], padding='VALID', pose_shape=16, add_bias=True, name='primary') nets = capsule_conv_layer(nets, shape=[3, 1, 16, 16], strides=[1, 1, 1, 1], iterations=3, name='conv2', representation=context_sensitivity1) nets = capsule_flatten(nets) poses, activations = capsule_fc_layer(nets, num_classes, 3, 'fc2', context, args) return poses, activations def capsule_model_B(X, num_classes, 
                    args, representation1, representation2):
    poses_list = []
    for _, ngram in enumerate([3, 4, 5]):
        with tf.variable_scope('capsule_' + str(ngram), reuse=tf.AUTO_REUSE):
            nets = _conv2d_wrapper(
                X, shape=[ngram, 300, 1, 32], strides=[1, 2, 1, 1], padding='VALID',
                add_bias=True, activation_fn=tf.nn.relu, name='conv1')
            nets = capsules_init(nets, shape=[1, 1, 32, 16], strides=[1, 1, 1, 1],
                                 padding='VALID', pose_shape=16, add_bias=True, name='primary')
            nets = capsule_conv_layer(nets, shape=[3, 1, 16, 16], strides=[1, 1, 1, 1],
                                      iterations=3, name='conv2', representation=representation1)
            nets = capsule_flatten(nets)
            poses, activations = capsule_fc_layer(nets, num_classes, 3, 'fc2', representation2, args)
        poses_list.append(poses)
    poses = tf.reduce_mean(tf.convert_to_tensor(poses_list), axis=0)
    activations = K.sqrt(K.sum(K.square(poses), 2))
    return poses, activations
```

# Set Config

## Config Capsule

```
class Args:
    embedding_type = "static"
    dataset = "reuters_multilabel_dataset"
    loss_type = "margin_loss"
    model_type = "capsule-B"
    has_test = 1
    has_dev = 1
    num_epochs = 50
    batch_size = 4
    use_orphan = True
    use_leaky = True
    learning_rate = 0.001
    margin = 0.2
    num_classes = 4
    vocab_size = vocab_size
    vec_size = 300
    max_sent = max_length


args = Args()
```

## Config SLSTM

```
class Config(object):
    vocab_size = vocab_size
    max_grad_norm = 5
    init_scale = 0.05
    hidden_size = 300
    lr_decay = 0.95
    valid_portion = 0.0
    batch_size = 16
    keep_prob = 0.8  # 0.05
    learning_rate = 0.0001
    max_epoch = 2
    # max_max_epoch = 40
    max_max_epoch = 15
    num_label = 5
    attention_iteration = 3
    random_initialize = True
    embedding_trainable = True
    l2_beta = 0.0
    layer = 4
    step = 1


config = Config()
```

# Tensorflow Graph

```
# X = tf.placeholder(tf.int32, [args.batch_size, args.max_sent], name="input_x")
# y = tf.placeholder(tf.int64, [args.batch_size, args.num_classes], name="input_y")
# is_training = tf.placeholder_with_default(False, shape=())
# learning_rate = tf.placeholder(dtype='float32')
# margin =
#     tf.placeholder(shape=(), dtype='float32')
# l2_loss = tf.constant(0.0)
# print(y)
# w2v = np.array(embedding_matrix, dtype=np.float32)
# w2v.shape
# W1 = tf.Variable(w2v, trainable=False)
# X_embedding = tf.nn.embedding_lookup(W1, X)
# X_embedding = X_embedding[..., tf.newaxis]
# tf.logging.info("input dimension:{}".format(X_embedding.get_shape()))
# X_embedding.shape
# classifier = Classifer(config)
# hidden_states, representation, representation1 = classifier.slstm_basic_layer(X_embedding, config, [max_length] * (args.batch_size))
# hidden_states.shape
# representation1.shape
# representation1 = tf.expand_dims(representation1, -1)
# representation1 = tf.tile(representation1, [1, 1, 2])
# representation1.shape
# representation1 = tf.reshape(representation1, [-1, 1, 16, 75])
# representation1.shape
# representation1 = tf.tile(representation1, [1, 23, 1, 1])
# representation1.shape
# representation1 = tf.reshape(representation1, [-1, 16, 75])
# representation1.shape
# context1 = tf.Variable(tf.random_normal([92, 75, 48], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32)
# context_sensitivity1 = tf.matmul(representation1, context1)
# context_sensitivity1.shape
# context_sensitivity1 = tf.reshape(context_sensitivity1, [92, 16, 48])
# context_sensitivity1.shape
# hidden_states = tf.expand_dims(hidden_states, -1)
# hidden_states.shape
# hidden_states.shape[0:-1]
# representation = tf.reshape(representation, [4, 2, 300, 1])
# representation.shape
# context = tf.Variable(tf.random_normal([4, 2, 368, 300], mean=0.0, stddev=0.1, dtype=tf.float32), dtype=tf.float32)
# context_sensitivity = tf.matmul(context, representation)
# context_sensitivity = tf.squeeze(context_sensitivity)
# context_sensitivity.shape
# poses, activations = capsule_model_A(X_embedding, args.num_classes)
# poses, activations = capsule_model_B(hidden_states, args.num_classes, args, representation, representation1)
# loss = margin_loss(y, activations)
# y_pred = tf.argmax(activations, axis=1, name="y_proba")
# correct =
#     tf.equal(tf.argmax(y, axis=1), y_pred, name="correct")
# accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")
# optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
# training_op = optimizer.minimize(loss, name="training_op")
# gradients, variables = zip(*optimizer.compute_gradients(loss))
# with tf.device('/cpu:0'):
#     global_step = tf.train.get_or_create_global_step()

args.max_sent = max_length
threshold = 0.5

# grad_check = [tf.check_numerics(g, message='Gradient NaN Found!')
#               for g in gradients if g is not None] + [tf.check_numerics(loss, message='Loss NaN Found')]
# with tf.control_dependencies(grad_check):
#     training_op = optimizer.apply_gradients(zip(gradients, variables), global_step=global_step)
# sess = tf.InteractiveSession()
# from keras import utils


class BatchGenerator(object):
    """Generate and hold batches."""

    def __init__(self, dataset, label, batch_size, input_size, is_shuffle=True):
        self._dataset = dataset
        self._label = label
        self._batch_size = batch_size
        self._cursor = 0
        self._input_size = input_size
        if is_shuffle:
            index = np.arange(len(self._dataset))
            np.random.shuffle(index)
            self._dataset = np.array(self._dataset)[index]
            self._label = np.array(self._label)[index]
        else:
            self._dataset = np.array(self._dataset)
            self._label = np.array(self._label)

    def next(self):
        """Generate a single batch from the current cursor position in the data."""
        if self._cursor + self._batch_size > len(self._dataset):
            self._cursor = 0
        batch_x = self._dataset[self._cursor:self._cursor + self._batch_size, :]
        batch_y = self._label[self._cursor:self._cursor + self._batch_size]
        self._cursor += self._batch_size
        return batch_x, batch_y


# n_iterations_per_epoch = len(X_train) // args.batch_size
# n_iterations_test = len(X_test) // args.batch_size
# n_iterations_dev = len(X_test) // args.batch_size
# mr_train = BatchGenerator(X_train, y_train, args.batch_size, 0)
# mr_dev = BatchGenerator(X_test, y_test, args.batch_size, 0)
# mr_test =
#     BatchGenerator(X_test, y_test, args.batch_size, 0, is_shuffle=False)

best_model = None
best_epoch = 0
best_acc_val = 0.

# init = tf.global_variables_initializer()
# sess.run(init)
lr = args.learning_rate
m = args.margin

# array1 = [2, 3, 0, 1]
# z = utils.to_categorical(array1, 4)
# z
```

# Train/ Validate

```
# loss_vals, acc_vals = [], []
# for epoch in range(args.num_epochs):
#     for iteration in range(1, n_iterations_per_epoch + 1):
#         X_batch, y_batch = mr_train.next()
#         y_batch = utils.to_categorical(y_batch, args.num_classes)
#         _, loss_train, probs, capsule_pose = sess.run(
#             [training_op, loss, activations, poses],
#             feed_dict={X: X_batch[:, :args.max_sent],
#                        y: y_batch,
#                        is_training: True,
#                        learning_rate: lr,
#                        margin: m})
#         print("\rIteration: ({:.1f}%)  Loss: {:.5f}".format(
#             iteration * 100 / n_iterations_per_epoch, loss_train), end="")
#     loss_vals, acc_vals = [], []
#     for iteration in range(1, n_iterations_dev + 1):
#         X_batch, y_batch = mr_dev.next()
#         y_batch = utils.to_categorical(y_batch, args.num_classes)
#         loss_val, acc_val = sess.run(
#             [loss, accuracy],
#             feed_dict={X: X_batch[:, :args.max_sent],
#                        y: y_batch,
#                        is_training: False,
#                        margin: m})
#         loss_vals.append(loss_val)
#         acc_vals.append(acc_val)
#     loss_val, acc_val = np.mean(loss_vals), np.mean(acc_vals)
#     print("\rEpoch: {}  Val accuracy: {:.1f}%  Loss: {:.4f}".format(
#         epoch + 1, acc_val * 100, loss_val))
#     # preds_list, y_list = [], []
#     # for iteration in range(1, n_iterations_test + 1):
#     #     X_batch, y_batch = mr_test.next()
#     #     probs = sess.run([activations],
#     #                      feed_dict={X: X_batch[:, :args.max_sent],
#     #                                 is_training: False})
#     #     preds_list = preds_list + probs[0].tolist()
#     #     y_list = y_list + y_batch.tolist()
#     # y_list = np.array(y_list)
#     # preds_probs = np.array(preds_list)
#     # preds_probs[np.where(preds_probs >= threshold)] = 1.0
#     # preds_probs[np.where(preds_probs < threshold)] = 0.0
#     # print(y_list)
#     # print(preds_probs)
#     # [precision, recall, F1, support] =
precision_recall_fscore_support(y_list, preds_probs, average='samples') # # acc = accuracy_score(y_list, preds_probs) # # print ('\rER: %.3f' % acc, 'Precision: %.3f' % precision, 'Recall: %.3f' % recall, 'F1: %.3f' % F1) # # # if args.model_type == 'CNN' or args.model_type == 'KIMCNN': # # # lr = max(1e-6, lr * 0.8) # if args.loss_type == 'margin_loss': # m = min(0.9, m + 0.1) # print(acc_vals) from sklearn.model_selection import train_test_split,cross_val_score, cross_val_predict, KFold, GridSearchCV from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, classification_report, confusion_matrix, precision_recall_fscore_support acc_per_fold = [] precision_per_fold = [] recall_per_fold = [] f1_per_fold = [] kfold = KFold(n_splits=10, shuffle=True) fold_no = 1 inputs = padded_docs targets = comment_labels for train, test in kfold.split(inputs, targets): n_iterations_per_epoch = len(inputs[train]) // args.batch_size n_iterations_test = len(inputs[test]) // args.batch_size mr_train1 = BatchGenerator(inputs[train], targets[train], args.batch_size, 0) mr_test1 = BatchGenerator(inputs[test], targets[test], args.batch_size, 0, is_shuffle=False) best_accuracy = 0. best_precision = 0. best_recall = 0. best_f1 = 0. 
X = tf.placeholder(tf.int32, [args.batch_size, args.max_sent], name="input_x") y = tf.placeholder(tf.int64, [args.batch_size, args.num_classes], name="input_y") is_training = tf.placeholder_with_default(False, shape=()) learning_rate = tf.placeholder(dtype='float32') margin = tf.placeholder(shape=(),dtype='float32') l2_loss = tf.constant(0.0) w2v = np.array(embedding_matrix,dtype=np.float32) W1 = tf.Variable(w2v, trainable = False) X_embedding = tf.nn.embedding_lookup(W1, X) X_embedding = X_embedding[...,tf.newaxis] classifier = Classifer(config) hidden_states,representation, representation1 = classifier.slstm_basic_layer(X_embedding, config, [max_length]*(args.batch_size)) hidden_states = tf.expand_dims(hidden_states, -1) poses, activations = capsule_model_A(hidden_states, args.num_classes, args, representation, representation1) loss = margin_loss(y, activations) y_pred = tf.argmax(activations, axis=1, name="y_proba") correct = tf.equal(tf.argmax(y, axis=1), y_pred, name="correct") accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy") optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, name = 'opt'+str(fold_no)) training_op = optimizer.minimize(loss, name="training_op") gradients, variables = zip(*optimizer.compute_gradients(loss)) with tf.Session() as sess: init = tf.global_variables_initializer() sess.run(init) for epoch in range(0,6): for iteration in range(1, n_iterations_per_epoch + 1): X_batch, y_batch = mr_train1.next() _, loss_train, probs, capsule_pose = sess.run( [training_op, loss, activations, poses], feed_dict={X: X_batch[:,:args.max_sent], y: y_batch, is_training: True, learning_rate:lr, margin:m}) print("\rIteration: {}/{} ({:.1f}%) epoch:{} Loss: {:.5f}".format(iteration, n_iterations_per_epoch, iteration * 100 / n_iterations_per_epoch, epoch+1, loss_train), end="") # print("\r ({}) epoch:{}".format( 'running', epoch+1), end="") preds_list, y_list = [], [] for iteration in range(1, n_iterations_test + 1): X_batch, 
y_batch = mr_test1.next() probs = sess.run([activations], feed_dict={X:X_batch[:,:args.max_sent], is_training: False}) preds_list = preds_list + probs[0].tolist() y_list = y_list + y_batch.tolist() y_list = np.array(y_list) preds_probs = np.array(preds_list) labels = np.argmax(y_list, axis=1) predictions = np.argmax(preds_probs, axis=1) accuracy_fold = accuracy_score(labels, predictions) precision_fold = precision_score(labels, predictions, average='weighted', zero_division = 0 ) recall_fold = recall_score(labels, predictions, average='weighted') f1_fold = f1_score(labels, predictions, average='weighted') if best_f1 <= f1_fold : best_accuracy = accuracy_fold best_precision = precision_fold best_recall = recall_fold best_f1 = f1_fold acc_per_fold.append(best_accuracy) precision_per_fold.append(best_precision) recall_per_fold.append(best_recall) f1_per_fold.append(best_f1) print("\rFold: {} accuracy: {:.4f}% Precision: {:.4f} recall: {:.4f} F1: {:.4f}".format(fold_no, best_accuracy, best_precision, best_recall, best_f1)) if args.loss_type == 'margin_loss': m = min(0.9, m + 0.1) fold_no += 1 accuracy = np.mean(acc_per_fold) print('Accuracy: %f' % accuracy) # precision tp / (tp + fp) precision = np.mean(precision_per_fold) print('Precision: %f' % precision) # recall: tp / (tp + fn) recall = np.mean(recall_per_fold) print('Recall: %f' % recall) # f1: 2 tp / (2 tp + fp + fn) f1 = np.mean(f1_per_fold) print('F1 score: %f' % f1) ``` # Test Model ``` from sklearn.metrics import accuracy_score from sklearn.metrics import precision_score from sklearn.metrics import recall_score from sklearn.metrics import f1_score preds_list, y_list = [], [] for iteration in range(1, n_iterations_test + 1): X_batch, y_batch = mr_test.next() probs = sess.run([activations], feed_dict={X:X_batch[:,:args.max_sent], is_training: False}) preds_list = preds_list + probs[0].tolist() y_list = y_list + y_batch.tolist() y_list = np.array(y_list) preds_probs = np.array(preds_list) preds_probs[np.where( 
preds_probs >= threshold )] = 1.0 preds_probs[np.where( preds_probs < threshold )] = 0.0 # [precision, recall, F1, support] = precision_recall_fscore_support(y_list, preds_probs, average='samples') # acc = accuracy_score(y_list, preds_probs) print(len(preds_list)) pred1 = preds_probs[:,1].tolist() pred11 = [int(i) for i in pred1] y_list1 = y_list.tolist() # y_list1 accuracy = accuracy_score(y_list1, pred11) print('Accuracy: %f' % accuracy) # precision tp / (tp + fp) precision = precision_score(y_list1, pred11) print('Precision: %f' % precision) # recall: tp / (tp + fn) recall = recall_score(y_list1, pred11) print('Recall: %f' % recall) # f1: 2 tp / (2 tp + fp + fn) f1 = f1_score(y_list1, pred11) print('F1 score: %f' % f1) y_list best_acc_val from google.colab import drive drive.mount('/content/drive') !pip install transformers[torch] from transformers import BertTokenizer, BertModel import torch folder_path = '/content/drive/My Drive/Final Year Project/FYP/Sentiment Analysis/Implementation/Sentiment Analysis/Bert/' folder_path_data = '/content/drive/My Drive/Final Year Project/FYP/Sentiment Analysis/Implementation/' lankadeepa_data_path = folder_path_data + 'corpus/new/preprocess_from_isuru/lankadeepa_tagged_comments.csv' gossip_lanka_data_path = folder_path_data + 'corpus/new/preprocess_from_isuru/gossip_lanka_tagged_comments.csv' def embeds(model_output): token_embeddings = model_output[0] #First element of model_output contains all token embeddings return token_embeddings tokenizer = BertTokenizer.from_pretrained(folder_path + 'xnli_output') model = BertModel.from_pretrained(folder_path + 'xnli_output', return_dict=True) import pandas as pd lankadeepa_data = pd.read_csv(lankadeepa_data_path)[:9059] gossipLanka_data = pd.read_csv(gossip_lanka_data_path) gossipLanka_data = gossipLanka_data.drop(columns=['Unnamed: 3']) all_data = pd.concat([lankadeepa_data,gossipLanka_data], ignore_index=True) sentences = all_data.comment list_of_sentences = [] for i in sentences: 
list_of_sentences.append(i) encoded_input = tokenizer(list_of_sentences, padding=True, truncation=True, max_length=20, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Note: no pooling is applied here; `embeds` returns the raw per-token embeddings sentence_embeddings = embeds(model_output) type(sentence_embeddings) Numpy_embedss = sentence_embeddings.numpy() type(Numpy_embedss) # use %cd rather than !cd: !cd runs in a subshell and does not change the notebook's working directory %cd '/content/drive/My Drive/Final Year Project/FYP/Sentiment Analysis/Implementation/Sentiment Analysis/Bert' import numpy as np np.save('context_embeds', Numpy_embedss) import pickle as pkl with open("contextual_embeds", 'wb') as file1: pkl.dump(Numpy_embedss, file1) !cp -r '/content/context_embeds.npy' '/content/drive/My Drive/Final Year Project/FYP/Sentiment Analysis/Implementation/Sentiment Analysis/Bert' !cp -r '/content/contextual_embeds' '/content/drive/My Drive/Final Year Project/FYP/Sentiment Analysis/Implementation/Sentiment Analysis/Bert' ```
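The notebook's `embeds` helper returns per-token embeddings, so a pooling step is still needed to turn them into fixed-size sentence vectors. Below is a hedged, self-contained NumPy sketch of masked mean pooling (the function name `mean_pool` and the toy batch are illustrative, not part of the notebook; the real pipeline would apply this to `model_output[0]` and `encoded_input['attention_mask']`):

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Average token embeddings over real (non-padding) tokens only."""
    mask = attention_mask[..., np.newaxis].astype(float)  # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(axis=1)        # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)        # avoid division by zero
    return summed / counts

# toy batch: 1 sentence, 3 token slots (the last one is padding), embedding dim 2
tokens = np.array([[[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]])
mask = np.array([[1, 1, 0]])
print(mean_pool(tokens, mask))  # → [[2. 3.]]
```

The padding token's `[9.0, 9.0]` vector is excluded by the mask, so only the two real tokens are averaged.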
github_jupyter
# Week 3 Assessment: Orthogonal Projections ## Learning Objectives In this week, we will write functions which perform orthogonal projections. By the end of this week, you should be able to 1. Write code that projects data onto lower-dimensional subspaces. 2. Understand the real-world applications of projections. We highlight some tips and tricks which would be useful when you implement numerical algorithms that you have never encountered before. You are invited to think about these concepts when you write your program. The important thing is to learn to map from mathematical equations to code. It is not always easy to do so, but you will get better at it with more practice. We will apply this to project high-dimensional face images onto a lower-dimensional basis which we call "eigenfaces". We will also revisit the problem of linear regression, but from the perspective of solving normal equations, the concept which you apply to derive the formula for orthogonal projections. We will apply this to predict housing prices for the Boston housing dataset, which is a classic example in machine learning. ``` # PACKAGE: DO NOT EDIT import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') import numpy as np from sklearn.datasets import fetch_olivetti_faces, fetch_lfw_people from ipywidgets import interact %matplotlib inline image_shape = (64, 64) # Load faces data dataset = fetch_olivetti_faces() faces = dataset.data ``` ### Advice for testing numerical algorithms Testing machine learning algorithms (or numerical algorithms in general) is sometimes really hard as it depends on the dataset to produce an answer, and you will never be able to test your algorithm on all the datasets we have in the world. Nevertheless, we have some tips for you to help you identify bugs in your implementations. #### 1. Test on small datasets Test your algorithms on small datasets: datasets of size 1 or 2 will sometimes suffice.
This is useful because you can (if necessary) compute the answers by hand and compare them with the answers produced by the computer program you wrote. In fact, these small datasets can even have special numbers, which will allow you to compute the answers by hand easily. #### 2. Find invariants Invariants refer to properties of your algorithm and functions that are maintained regardless of the input. We will highlight this point later in this notebook, where you will see functions which check invariants for some of the answers you produce. Invariants you may want to look for: 1. Does your algorithm always produce a positive/negative answer, or a positive definite matrix? 2. If the algorithm is iterative, do the intermediate results increase/decrease monotonically? 3. Does your solution relate to your input in some interesting way, e.g. orthogonality? When you have a set of invariants, you can generate random inputs and make assertions about these invariants. This is sometimes known as [fuzzing](https://en.wikipedia.org/wiki/Fuzzing), which has proven to be a very effective technique for identifying bugs in programs. Finding invariants is hard, and sometimes there simply isn't any invariant. However, DO take advantage of them if you can find them. They are the most powerful checks when you have them. ## 1. Orthogonal Projections Recall that for the projection of a vector $\boldsymbol x$ onto a 1-dimensional subspace $U$ with basis vector $\boldsymbol b$ we have $${\pi_U}(\boldsymbol x) = \frac{\boldsymbol b\boldsymbol b^T}{{\lVert \boldsymbol b \rVert}^2}\boldsymbol x $$ And for the general projection onto an M-dimensional subspace $U$ with basis vectors $\boldsymbol b_1,\dotsc, \boldsymbol b_M$ we have $${\pi_U}(\boldsymbol x) = \boldsymbol B(\boldsymbol B^T\boldsymbol B)^{-1}\boldsymbol B^T\boldsymbol x $$ where $$\boldsymbol B = (\boldsymbol b_1|...|\boldsymbol b_M)$$ Your task is to implement orthogonal projections. We can split this into two steps: 1.
Find the projection matrix $\boldsymbol P$ that projects any $\boldsymbol x$ onto $U$. 2. The projected vector $\pi_U(\boldsymbol x)$ of $\boldsymbol x$ can then be written as $\pi_U(\boldsymbol x) = \boldsymbol P\boldsymbol x$. Note that for orthogonal projections, we have the following invariants: ``` import numpy.testing as np_test def test_property_projection_matrix(P): """Test if the projection matrix satisfies certain properties. In particular, we should have P @ P = P, and P = P^T """ np_test.assert_almost_equal(P, P @ P) np_test.assert_almost_equal(P, P.T) def test_property_projection(x, p): """Test orthogonality of x and its projection p.""" np_test.assert_almost_equal(np.dot(p-x, p), 0) # GRADED FUNCTION: DO NOT EDIT THIS LINE # Projection 1d # ===YOU SHOULD EDIT THIS FUNCTION=== def projection_matrix_1d(b): """Compute the projection matrix onto the space spanned by `b` Args: b: ndarray of dimension (D,), the basis for the subspace Returns: P: the projection matrix """ # P = b b^T / ||b||^2, written with plain ndarrays instead of the deprecated np.matrix P = np.outer(b, b) / np.dot(b, b) return P # ===YOU SHOULD EDIT THIS FUNCTION=== def project_1d(x, b): """Compute the projection of `x` onto the space spanned by `b` Args: x: the vector to be projected b: ndarray of dimension (D,), the basis for the subspace Returns: y: projection of x in space spanned by b """ return projection_matrix_1d(b) @ x # Projection onto general subspace # ===YOU SHOULD EDIT THIS FUNCTION=== def projection_matrix_general(B): """Compute the projection matrix onto the space spanned by `B` Args: B: ndarray of dimension (D, M), the basis for the subspace Returns: P: the projection matrix """ # P = B (B^T B)^{-1} B^T P = B @ np.linalg.inv(B.T @ B) @ B.T return P # ===YOU SHOULD EDIT THIS FUNCTION=== def project_general(x, B): """Compute the projection of `x` onto the space spanned by `B` Args: B: ndarray of dimension (D, E), the basis for the subspace
Returns: y: projection of x in space spanned by b """ return projection_matrix_general(B) @ x ``` We have included some unit tests for you to test your implementation. ``` # Orthogonal projection in 2d # define basis vector for subspace b = np.array([2,1]).reshape(-1,1) # point to be projected later x = np.array([1,2]).reshape(-1, 1) # Test 1D np_test.assert_almost_equal(projection_matrix_1d(np.array([1, 2, 2])), np.array([[1, 2, 2], [2, 4, 4], [2, 4, 4]]) / 9) np_test.assert_almost_equal(project_1d(np.ones(3), np.array([1, 2, 2])), np.array([5, 10, 10]) / 9) B = np.array([[1, 0], [1, 1], [1, 2]]) # Test General np_test.assert_almost_equal(projection_matrix_general(B), np.array([[5, 2, -1], [2, 2, 2], [-1, 2, 5]]) / 6) np_test.assert_almost_equal(project_general(np.array([6, 0, 0]), B), np.array([5, 2, -1])) print('correct') # Write your own test cases here, use random inputs, utilize the invariants we have! ``` ## 2. Eigenfaces (optional) Next, we will take a look at what happens if we project some dataset consisting of human faces onto some basis we call the "eigenfaces". ``` from sklearn.datasets import fetch_olivetti_faces, fetch_lfw_people from ipywidgets import interact %matplotlib inline image_shape = (64, 64) # Load faces data dataset = fetch_olivetti_faces() faces = dataset.data mean = faces.mean(axis=0) std = faces.std(axis=0) faces_normalized = (faces - mean) / std ``` The data for the basis has been saved in a file named `eigenfaces.npy`; first we load it into the variable B. ``` B = np.load('eigenfaces.npy')[:50] # we use the first 50 dimensions of the basis, you should play around with the dimension here. print("the eigenfaces have shape {}".format(B.shape)) ``` Along the first dimension of B, each instance is a `64x64` image, an "eigenface". Let's visualize a few of them.
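Before moving on, a quick aside on the "write your own test cases" cell above: the invariants can be fuzzed over random inputs, as the testing advice earlier suggested. A self-contained sketch (it recomputes $P = B(B^TB)^{-1}B^T$ inline so it runs even without the graded functions; sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    D = int(rng.integers(3, 10))
    M = int(rng.integers(1, D))           # strictly fewer basis vectors than dimensions
    B = rng.standard_normal((D, M))
    P = B @ np.linalg.inv(B.T @ B) @ B.T  # projection onto span(B)
    x = rng.standard_normal(D)
    p = P @ x
    assert np.allclose(P @ P, P, atol=1e-6)        # idempotent
    assert np.allclose(P, P.T, atol=1e-6)          # symmetric
    assert np.allclose((p - x) @ p, 0.0, atol=1e-6)  # residual orthogonal to the projection
print("all invariants hold")
```

Random Gaussian matrices with fewer columns than rows are full rank almost surely, so the inverse is well defined for every draw.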
``` plt.figure(figsize=(10,10)) plt.imshow(np.hstack(B[:5]), cmap='gray'); plt.figure(figsize=(10,10)) plt.imshow(np.hstack(B[5:10]), cmap='gray'); plt.figure(figsize=(10,10)) plt.imshow(np.hstack(B[45:50]), cmap='gray'); ``` Take a look at what happens if we project our faces onto the basis spanned by these "eigenfaces". This requires us to reshape B into the same shape as the matrix representing the basis as we have done earlier. Then we can reuse the functions we implemented earlier to compute the projection matrix and the projection. Complete the code below to visualize the reconstructed faces that lie on the subspace spanned by the "eigenfaces". ``` @interact(i=(0, 10)) def show_eigenface_reconstruction(i): original_face = faces_normalized[i].reshape(64, 64) # project original_face onto the vector space spanned by B_basis, # you should take advantage of the functions you have implemented above # to perform the projection. First, reshape B such that it represents the basis # for the eigenfaces. Then perform orthogonal projection which would give you # `face_reconstruction`. B_basis = B.reshape(B.shape[0], -1).T face_reconstruction = project_general(original_face.reshape(-1), B_basis).reshape(64, 64) plt.figure() plt.imshow(np.hstack([original_face, face_reconstruction]), cmap='gray') plt.show() ``` __Question__: What would happen to the reconstruction as we increase the dimension of our basis? Modify the code above to visualize it. ``` B = np.load('eigenfaces.npy')[:100] def show_eigenface_reconstruction(i, dimension=50): original_face = faces_normalized[i].reshape(64, 64) # project original_face onto the vector space spanned by B_basis, # you should take advantage of the functions you have implemented above # to perform the projection. First, reshape B such that it represents the basis # for the eigenfaces. Then perform orthogonal projection which would give you # `face_reconstruction`.
B_basis = B[:dimension + 1].reshape(dimension + 1, -1).T face_reconstruction = project_general(original_face.reshape(-1), B_basis).reshape(64, 64) plt.figure() plt.imshow(np.hstack([original_face, face_reconstruction]), cmap='gray') plt.show() show_eigenface_reconstruction(0, 0) show_eigenface_reconstruction(0, 49) show_eigenface_reconstruction(0, 75) show_eigenface_reconstruction(0, 99) ``` ## 3. Least squares for predicting Boston housing prices (optional) Consider the case where we have a linear model for predicting housing prices. We are predicting the housing prices based on features in the housing dataset. Suppose we collect the features in a vector $\boldsymbol{x}$ and the price of the house in $y$, and assume that we have a prediction model of the form $\hat{y}_i = f(\boldsymbol {x}_i) = \boldsymbol \theta^T\boldsymbol{x}_i$. If we collect the dataset of $n$ datapoints $\boldsymbol x_i$ in a data matrix $\boldsymbol X$, we can write down our model like this: $$ \begin{bmatrix} \boldsymbol {x}_1^T \\ \vdots \\ \boldsymbol {x}_n^T \end{bmatrix} \boldsymbol {\theta} = \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}. $$ That is, $$ \boldsymbol X\boldsymbol{\theta} = \boldsymbol {y}, $$ where $\boldsymbol y$ collects all house prices $y_1,\dotsc, y_n$ of the training set. Our goal is to find the best $\boldsymbol \theta$ that minimizes the following (least squares) objective: $$ \begin{eqnarray} &\sum^n_{i=1}{\lVert \boldsymbol \theta^T\boldsymbol {x}_i - y_i \rVert^2} \\ &= (\boldsymbol X\boldsymbol {\theta} - \boldsymbol y)^T(\boldsymbol X\boldsymbol {\theta} - \boldsymbol y). \end{eqnarray} $$ Note that we aim to minimize the squared error between the prediction $\boldsymbol \theta^T\boldsymbol {x}_i$ of the model and the observed data point $y_i$ in the training set.
To find the optimal (maximum likelihood) parameters $\boldsymbol \theta^*$, we set the gradient of the least-squares objective to $\boldsymbol 0$: $$ \begin{eqnarray} \nabla_{\boldsymbol\theta}(\boldsymbol X{\boldsymbol \theta} - \boldsymbol y)^T(\boldsymbol X{\boldsymbol \theta} - \boldsymbol y) &=& \boldsymbol 0 \\ \iff \nabla_{\boldsymbol\theta}(\boldsymbol {\theta}^T\boldsymbol X^T - \boldsymbol y^T)(\boldsymbol X\boldsymbol {\theta} - \boldsymbol y) &=& \boldsymbol 0 \\ \iff \nabla_{\boldsymbol\theta}(\boldsymbol {\theta}^T\boldsymbol X^T\boldsymbol X\boldsymbol {\theta} - \boldsymbol y^T\boldsymbol X\boldsymbol \theta - \boldsymbol \theta^T\boldsymbol X^T\boldsymbol y + \boldsymbol y^T\boldsymbol y ) &=& \boldsymbol 0 \\ \iff 2\boldsymbol X^T\boldsymbol X\boldsymbol \theta - 2\boldsymbol X^T\boldsymbol y &=& \boldsymbol 0 \\ \iff \boldsymbol X^T\boldsymbol X\boldsymbol \theta &=& \boldsymbol X^T\boldsymbol y. \end{eqnarray} $$ The solution, which gives zero gradient, solves the __normal equation__ $$\boldsymbol X^T\boldsymbol X\boldsymbol \theta = \boldsymbol X^T\boldsymbol y.$$ If you recall from the lecture on projection onto n-dimensional subspace, this is exactly the same as the normal equation we have for projection (take a look at the notes [here](https://www.coursera.org/teach/mathematics-machine-learning-pca/content/edit/supplement/fQq8T/content) if you don't remember them). This means our optimal parameter vector, which minimizes our objective, is given by $$\boldsymbol \theta^* = (\boldsymbol X^T\boldsymbol X)^{-1}\boldsymbol X^T\boldsymbol y.$$ Let's put things into perspective and try to find the best parameter $\theta^*$ of the line $y = \theta x$, where $x,\theta\in\mathbb{R}$ for a given training set $\boldsymbol X\in\mathbb{R}^n$ and $\boldsymbol y\in\mathbb{R}^n$. Note that in our example, the features $x_i$ are only scalar, such that the parameter $\theta$ is also only a scalar.
The derivation above holds for general parameter vectors (not only for scalars). Note: This is exactly the same problem as linear regression which was discussed in [Mathematics for Machine Learning: Multivariate Calculus](https://www.coursera.org/teach/multivariate-calculus-machine-learning/content/edit/lecture/74ryq/video-subtitles). However, rather than finding the optimal $\theta^*$ with gradient descent, we can solve this using the normal equation. ``` x = np.linspace(0, 10, num=50) random = np.random.RandomState(42) # we use the same random seed so we get deterministic output theta = random.randn() # we use a random theta, our goal is to perform linear regression which finds theta_hat that minimizes the objective y = theta * x + random.rand(len(x)) # our observations are corrupted by some noise, so that we do not get (x,y) on a line plt.scatter(x, y); plt.xlabel('x'); plt.ylabel('y'); X = x.reshape(-1,1) Y = y.reshape(-1,1) theta_hat = np.linalg.solve(X.T @ X, X.T @ Y) ``` We can show how our $\hat{\theta}$ fits the line. ``` fig, ax = plt.subplots() ax.scatter(x, y); xx = [0, 10] yy = [0, 10 * theta_hat[0,0]] ax.plot(xx, yy, 'red', alpha=.5); ax.set(xlabel='x', ylabel='y'); print("theta = %f" % theta) print("theta_hat = %f" % theta_hat) ``` What would happen to $\lVert {\theta^*} - \theta \rVert$ if we increased the number of datapoints? Make your hypothesis, and write a small program to confirm it! ``` N = np.arange(10, 10000, step=10) # Your code here which calculates θ* for different sample sizes. ``` We see how we can find the best $\theta$. In fact, we can extend our methodology to higher-dimensional datasets. Let's now try applying the same methodology to the Boston housing prices dataset.
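Returning to the sample-size question above, one way to sketch the experiment is to re-create the synthetic line data for each sample size $N$. This snippet is self-contained: it uses a fixed stand-in value for the random `theta` drawn in the notebook, and tracks $|\hat\theta - \theta|$ rather than plotting it:

```python
import numpy as np

theta = 1.5  # fixed stand-in for the notebook's random ground-truth slope
Ns = np.arange(10, 10000, step=10)
errors = []
for N in Ns:
    x = np.linspace(0, 10, num=int(N))
    y = theta * x + np.random.rand(int(N))  # same uniform noise model as above
    X, Y = x.reshape(-1, 1), y.reshape(-1, 1)
    theta_hat = np.linalg.solve(X.T @ X, X.T @ Y)[0, 0]
    errors.append(abs(theta_hat - theta))
# The fluctuation shrinks with N, but the error does not go to zero: the
# uniform noise has mean 0.5, so theta_hat converges to a slightly biased value.
print(errors[0], errors[-1])
```

For this noise model the bias settles near $0.5\,\mathbb{E}[x]/\mathbb{E}[x^2]$, so the curve flattens out at a small positive value instead of decaying to zero.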
``` from sklearn.datasets import load_boston boston = load_boston() boston_X, boston_y = boston.data, boston.target print("The housing dataset has size {}".format(boston_X.shape)) print("The prices have size {}".format(boston_y.shape)) boston_theta_hat = np.zeros(3) ## EDIT THIS to predict boston_theta_hat ```
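The `## EDIT THIS` placeholder above calls for the same normal-equation solve as before; on the real data that would be `boston_theta_hat = np.linalg.solve(boston_X.T @ boston_X, boston_X.T @ boston_y)`. A hedged, self-contained sketch with synthetic data standing in for the Boston matrix (only the 506×13 shape is taken from the dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 506, 13                        # the Boston set has 506 rows and 13 features
X = rng.standard_normal((n, d))       # synthetic stand-in for boston_X
true_theta = rng.standard_normal(d)
y = X @ true_theta + 0.1 * rng.standard_normal(n)  # stand-in for boston_y

# theta* = (X^T X)^{-1} X^T y, solved without forming the explicit inverse
theta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(np.max(np.abs(theta_hat - true_theta)))
```

With 506 rows and mild noise, `theta_hat` recovers `true_theta` closely, which is a useful sanity check before applying the solve to the real feature matrix.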
# APTOS 2019 Blindness Detection New Idea Ref : https://www.kaggle.com/ratthachat/aptos-simple-preprocessing-decoloring-cropping ## Circle Crop ``` import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import collections import os from pathlib import Path from IPython.display import Image as display_image from IPython.display import display print(os.listdir("../input")) import cv2 from PIL import Image import imagesize from scipy import ndimage ``` ### data load ``` train_df = pd.read_csv("../input/train.csv") ``` ### define function ``` def load_image(id): img_path = Path().absolute().parent / "input" / "train_images" / "{}.png".format(id) if img_path.exists(): d_level = int(train_df.query("id_code == '{}'".format(id)).iloc[0, 1]) diagnosis_dict = {0: "No DR", 1: "Mild", 2: "Moderate", 3: "Severe", 4: "Proliferative DR"} else: img_path = Path().absolute().parent / "input" / "test_images" / "{}.png".format(id) return cv2.imread(str(img_path), 1) def edge_detection(img): dst = cv2.medianBlur(img, ksize=5) sub = cv2.addWeighted(dst, 4, cv2.GaussianBlur( dst , (7, 7) , 0) ,-1 ,80) _b, _g, sub = cv2.split(sub) _b, _g, dst = cv2.split(dst) dst = cv2.addWeighted(dst, 0.5, sub, 0.5, 0) _, dst = cv2.threshold(dst, np.mean(dst)/2, 255, cv2.THRESH_BINARY) dst = cv2.Canny(dst, 0, 100) dst = cv2.cvtColor(dst, cv2.COLOR_GRAY2RGB) _, dst = cv2.threshold(dst, 10, 255, cv2.THRESH_BINARY) return dst def calc_center_circle(edge_img, loop=5000): def calc_center_pixcel(A, B, C, D): def calc_lineparams(ax, ay, bx, by): if (by - ay) == 0: by = by + 1 slope = (ax - bx) / (by - ay) section = ((by**2 - ay**2) - (ax**2 - bx**2)) / (2 * (by - ay)) return slope, section A_slope, A_section = calc_lineparams(A[0], A[1], B[0], B[1]) B_slope, B_section = calc_lineparams(C[0], C[1], D[0], D[1]) if abs(A_slope - B_slope) < 0.01: return None, None X = (B_section - A_section) / (A_slope - B_slope) Y = (A_slope * X + A_section + B_slope * X + B_section) / 2 
return int(X), int(Y) edge_list = np.where(edge_img[:, :, 2] == 255) edge_list = [(edge_list[1][i], edge_list[0][i]) for i in range(len(edge_list[0]))] X_cand, Y_cand = [], [] for _ in range(loop): edge_sample = [] edge_sample.extend(edge_list[i] for i in np.random.randint(0, int(len(edge_list)/2), 2)) edge_sample.extend(edge_list[i] for i in np.random.randint(int(len(edge_list)/2), len(edge_list), 2)) x, y = calc_center_pixcel(edge_sample[0], edge_sample[2], edge_sample[1], edge_sample[3]) if x is not None: X_cand.append(x) Y_cand.append(y) X, Y = int(np.mean(X_cand)), int(np.mean(Y_cand)) r_list = [np.sqrt((X-e[0])**2+(Y-e[1])**2) for e in edge_list] radius = int(np.median(r_list)) return (X, Y), radius def center_crop(img, center, radius): height, width, _ = img.shape mask = np.zeros((height, width), np.uint8) mask = cv2.circle(mask, center, radius, (255, 255, 255), thickness=-1) mask_img = cv2.bitwise_and(img, img, mask=mask) crop_img = np.zeros((radius*2, radius*2, 3), np.uint8) cl, cr, ct, cb = 0, radius*2, 0, radius*2 il, ir, it, ib = 0, width, 0, height if center[1] - radius > 0: it = center[1] - radius else: ct = radius - center[1] if height - center[1] > radius: ib -= (height - center[1]) - radius else: cb -= radius - (height - center[1]) if center[0] - radius > 0: il = center[0] - radius else: cl = radius - center[0] if width - center[0] > radius: ir -= (width - center[0]) - radius else: cr -= radius - (width - center[0]) crop_img[ct:cb, cl:cr, :] = mask_img[it:ib, il:ir, :] return crop_img # file_name = "00cb6555d108" file_name = "005b95c28852" # file_name = "01499815e469" # file_name = "0167076e7089" f_list = ["005b95c28852", "00cb6555d108", "01499815e469", "0167076e7089"] class EdgeCrop(object): def __init__(self, center_search_loop=5000): self.loop = center_search_loop def _edge_detection(self, img): dst = cv2.medianBlur(img, ksize=5) sub = cv2.addWeighted(dst, 4, cv2.GaussianBlur( dst , (7, 7) , 0) ,-1 ,80) _b, _g, sub = cv2.split(sub) _b, _g, dst = 
cv2.split(dst) dst = cv2.addWeighted(dst, 0.5, sub, 0.5, 0) _, dst = cv2.threshold(dst, np.mean(dst)/2, 255, cv2.THRESH_BINARY) dst = cv2.Canny(dst, 0, 100) dst = cv2.cvtColor(dst, cv2.COLOR_GRAY2RGB) _, dst = cv2.threshold(dst, 10, 255, cv2.THRESH_BINARY) return dst def _calc_center_circle(self, edge_img, loop=5000): def calc_center_pixcel(A, B, C, D): def calc_lineparams(ax, ay, bx, by): if (by - ay) == 0: by = by + 1 slope = (ax - bx) / (by - ay) section = ((by**2 - ay**2) - (ax**2 - bx**2)) / (2 * (by - ay)) return slope, section A_slope, A_section = calc_lineparams(A[0], A[1], B[0], B[1]) B_slope, B_section = calc_lineparams(C[0], C[1], D[0], D[1]) if abs(A_slope - B_slope) < 0.01: return None, None X = (B_section - A_section) / (A_slope - B_slope) Y = (A_slope * X + A_section + B_slope * X + B_section) / 2 return int(X), int(Y) edge_list = np.where(edge_img[:, :, 2] == 255) edge_list = [(edge_list[1][i], edge_list[0][i]) for i in range(len(edge_list[0]))] X_cand, Y_cand = [], [] for _ in range(loop): edge = [] edge.extend(edge_list[i] for i in np.random.randint(0, int(len(edge_list)/2), 2)) edge.extend(edge_list[i] for i in np.random.randint(int(len(edge_list)/2), len(edge_list), 2)) x, y = calc_center_pixcel(edge[0], edge[2], edge[1], edge[3]) if x is not None: X_cand.append(x) Y_cand.append(y) X, Y = int(np.mean(X_cand)), int(np.mean(Y_cand)) r_list = [np.sqrt((X-e[0])**2+(Y-e[1])**2) for e in edge_list] radius = int(np.median(r_list)) return (X, Y), radius def _center_crop(self, img, center, radius): height, width, _ = img.shape mask = np.zeros((height, width), np.uint8) mask = cv2.circle(mask, center, radius, (255, 255, 255), thickness=-1) mask_img = cv2.bitwise_and(img, img, mask=mask) crop_img = np.zeros((radius*2, radius*2, 3), np.uint8) cl, cr, ct, cb = 0, radius*2, 0, radius*2 il, ir, it, ib = 0, width, 0, height if center[1] - radius > 0: it = center[1] - radius else: ct = radius - center[1] if height - center[1] > radius: ib -= (height - center[1]) 
- radius else: cb -= radius - (height - center[1]) if center[0] - radius > 0: il = center[0] - radius else: cl = radius - center[0] if width - center[0] > radius: ir -= (width - center[0]) - radius else: cr -= radius - (width - center[0]) crop_img[ct:cb, cl:cr, :] = mask_img[it:ib, il:ir, :] return crop_img def __call__(self, img): edge = self._edge_detection(img) center, radius = self._calc_center_circle(edge, loop=self.loop) img = self._center_crop(img, center=center, radius=radius) return img cropper = EdgeCrop() for file_name in f_list[:1]: img = load_image(file_name) img = cropper(img) plt.figure(figsize=(5, 5)) plt.imshow(img) img = load_image(file_name) edge = edge_detection(img) center, radius = calc_center_circle(edge) crop_img = center_crop(img, center, radius) plt.figure(figsize=(5, 5)) plt.imshow(crop_img, cmap="gray") ``` ## Debug ``` def center_crop_inv(img, center, radius): height, width, _ = img.shape mask = np.zeros((height, width), np.uint8) mask = cv2.circle(mask, center, radius, (255, 255, 255), thickness=-1) inv_img = cv2.bitwise_and(img, img, mask=cv2.bitwise_not(mask)) gray_img = cv2.cvtColor(inv_img, cv2.COLOR_BGR2GRAY) inv = gray_img.copy() inv[gray_img < 2] = 0 inv[gray_img >= 2] = 255 return inv inv = center_crop_inv(img, center, radius) plt.figure(figsize=(5, 5)) plt.imshow(img, "gray") plt.figure(figsize=(5, 5)) plt.imshow(inv, "gray") def debug_panel(img_id): img = load_image(img_id) edge = edge_detection(img) center, radius = calc_center_circle(edge) inv = center_crop_inv(img, center, radius) inv = cv2.resize(inv, (100, 100)) return inv indice = train_df["id_code"].tolist() panel = [] for i in range(30): temp = [] for j in range(30): inv = debug_panel(indice[i*30+j]) temp.append(inv) panel.append(np.hstack(temp)) panel = np.vstack(panel) plt.figure(figsize=(16, 16)) plt.imshow(panel, cmap="gray") indice[84] ```
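The geometric idea behind `calc_center_circle` is that every perpendicular bisector of a chord passes through the circle's center, so intersecting the bisectors of random point pairs and averaging the intersections recovers the center. This can be sanity-checked on a synthetic, noise-free circle; the standalone sketch below mirrors the `calc_lineparams` algebra (the `bisector` helper and all constants are illustrative, not taken from the notebook):

```python
import numpy as np

rng = np.random.default_rng(42)
true_center, true_r = np.array([120.0, 80.0]), 50.0
angles = rng.uniform(0, 2 * np.pi, size=400)
pts = true_center + true_r * np.c_[np.cos(angles), np.sin(angles)]  # synthetic "edge pixels"

def bisector(a, b):
    """Perpendicular bisector of chord ab, as y = slope * x + intercept."""
    dy = b[1] - a[1]
    if dy == 0:
        dy = 1e-9  # guard against a horizontal-chord division by zero
    slope = (a[0] - b[0]) / dy
    mid = (a + b) / 2
    return slope, mid[1] - slope * mid[0]

centers = []
for _ in range(2000):
    a, b, c, d = pts[rng.choice(len(pts), 4, replace=False)]
    s1, i1 = bisector(a, b)
    s2, i2 = bisector(c, d)
    if abs(s1 - s2) < 0.01:
        continue  # near-parallel bisectors give unstable intersections
    x = (i2 - i1) / (s1 - s2)
    centers.append((x, s1 * x + i1))

est_center = np.mean(centers, axis=0)
est_r = np.median(np.linalg.norm(pts - est_center, axis=1))
print(est_center, est_r)
```

On real retina images the edge points are noisy, which is why the notebook averages thousands of candidate intersections and takes the median of the distances for the radius.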
<h1>Quiz 1 : Comprehension</h1> 1. What are some common data preprocessing steps? 2. Explain several ways of imputing missing values. 3. When do we need to perform feature centering and scaling? 4. What does the data science workflow look like? 1. Common data preprocessing steps: - Binarization - Mean removal - Scaling - Normalization - Label encoding 2. Several ways to handle missing values: - Dropping missing values: when a row or column contains many missing (NaN) entries, that row or column is removed - Filling with the mean/median: applies to numeric data; the mean is computed as a float and can then be cast back to an integer - Filling with the mode: applies to categorical data; the most frequent category is used to fill the NaN values - Filling with bfill (backward fill) or ffill (forward fill): NaN entries are filled with the previous or the following value - KNN: missing values are imputed with the KNN algorithm based on the nearest neighboring records 3. When a predictor column has a large distribution scale it dominates model fitting, while a column with a small scale has little influence on the model, so the scales need to be adjusted so that the features are balanced. 4. The data science workflow: - Collect data from the data sources - Process the data for analysis - Build a model from that analysis - Once the model is finished, deploy it to production - Finally, monitor the model <h1>Quiz 2 : Application</h1> Congratulations, by this point you have learned a lot about data science, from Python, data manipulation, and visualization to model building. Now it is time to apply all of it. Download and use titanic.csv as the data for building an ML model.
Understand this data thoroughly by performing EDA (Exploratory Data Analysis), Visualization, Data Analysis, Data Preprocessing, and Modeling.

<b>(Optional)</b> Download and use titanic_test.csv to test your model by making predictions on it. Submit the predictions to Kaggle and check the score. https://www.kaggle.com/c/titanic/submit

![image.png](attachment:image.png)

```
# Read titanic.csv
import pandas as pd
df = pd.read_csv('titanic.csv')
df

# EDA - Columns of titanic.csv
# ============================
# PassengerId : int (Clean)
# Survived : int (Clean)
# Pclass : int (Clean)
# Name : string (Not Clean, since the data are unique strings)
# Sex : string (Not Clean, a categorical column that can be converted to numeric)
# Age : float (Not Clean, contains missing values)
# SibSp : int (Clean)
# Parch : int (Clean)
# Ticket : string (Not Clean, since the data are unique strings)
# Fare : float (Clean)
# Cabin : (Not Clean, contains missing values)
# Embarked : (Not Clean, a categorical column that can be converted to numeric, and contains missing values)

# My approach for the Not Clean columns: categorical columns are converted to numeric, while missing values
# are handled with the KNN algorithm, since the Age, Cabin, and Embarked columns are most likely correlated with the other columns

# Drop the Name and Ticket columns: they are unique and already represented by the PassengerId column
df = df.drop(['Name', 'Ticket'], axis=1)
df

# Encode the categorical columns Sex and Embarked
obj_sex = { 'male' : 0, 'female' : 1 }
obj_embarked = { 'C' : 0, 'Q' : 1, 'S' : 2 }
df['Sex'] = df['Sex'].replace(obj_sex)
df['Embarked'] = df['Embarked'].replace(obj_embarked)

# Encode the Cabin column as float
import numpy as np
df['Cabin'] = df['Cabin'].replace(np.nan, '0')
key_cabin = df['Cabin'].unique()
key_cabin.sort()
value_cabin = np.arange(0, len(df['Cabin'].unique()))
obj_cabin = dict(zip(key_cabin, value_cabin.T))
df['Cabin'] =
df['Cabin'].replace(obj_cabin)
df['Cabin'] = df['Cabin'].replace(0, np.nan)

# Impute missing values in the Age, Cabin, and Embarked columns
from sklearn.impute import KNNImputer
imp = KNNImputer(n_neighbors=5)
df[['Age', 'Cabin', 'Embarked']] = imp.fit_transform(df[['Age', 'Cabin', 'Embarked']])

# The NaN values are now filled
df
df['Survived'].value_counts()

from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_validate

X = df.drop('Survived', axis=1)
y = df['Survived']

from sklearn.preprocessing import StandardScaler
stdscalar = StandardScaler()
datascale = stdscalar.fit_transform(X)
X = pd.DataFrame(datascale, columns=X.columns)
X
X.describe()
df.describe()

def knn_predict(k):
    model = KNeighborsClassifier(n_neighbors=k)
    score = cross_validate(model, X, y, cv=10, return_train_score=True)
    train_score = score['train_score'].mean()
    test_score = score['test_score'].mean()
    return train_score, test_score

train_scores = []
test_scores = []
for k in range(2, 100):
    train_score, test_score = knn_predict(k)
    train_scores.append(train_score)
    test_scores.append(test_score)

import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(14, 8))
ax.plot(range(2, 100), train_scores, marker='x', color='b', label='Train Scores')
ax.plot(range(2, 100), test_scores, marker='o', color='g', label='Test Scores')
ax.set_xlabel('K value')
ax.set_ylabel('Score')
fig.legend()
plt.show()

from sklearn.model_selection import GridSearchCV
model = KNeighborsClassifier()
param_grid = {'n_neighbors':np.arange(5, 50), 'weights':['distance', 'uniform']}
gscv = GridSearchCV(model, param_grid=param_grid, scoring='accuracy', cv=5)
gscv.fit(X, y)
gscv.best_params_
gscv.best_score_

# Read titanic_test.csv
df_test = pd.read_csv('titanic_test.csv')
df_test
df_test = df_test.drop(['Name', 'Ticket'], axis=1)
df_test
df_test['Sex'] = df_test['Sex'].replace(obj_sex)
df_test['Embarked'] = df_test['Embarked'].replace(obj_embarked)
df_test['Cabin'] = df_test['Cabin'].replace(np.nan, '0')
key_cabin_test = df_test['Cabin'].unique()
key_cabin_test.sort()
value_cabin_test = np.arange(0, len(df_test['Cabin'].unique()))
obj_cabin_test = dict(zip(key_cabin_test, value_cabin_test.T))
df_test['Cabin'] = df_test['Cabin'].replace(obj_cabin_test)
df_test['Cabin'] = df_test['Cabin'].replace(0, np.nan)

# Missing values are now filled
df_test[['Age', 'Fare', 'Cabin', 'Embarked']] = imp.fit_transform(df_test[['Age', 'Fare', 'Cabin', 'Embarked']])

# Scaling
datascale_test = stdscalar.fit_transform(df_test)
X_test = pd.DataFrame(datascale_test, columns=df_test.columns)

# Prediction
y_pred = gscv.predict(X_test)
df_test['Survived'] = y_pred
df_test
df_test = df_test.drop(['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Cabin', 'Embarked'], axis=1)
df_test
df_test['Survived'].value_counts()
df_test.to_csv("titanic_test_mazharrasyad.csv", index=False)

# Walkthrough of Daily Assignment 5
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV

train = pd.read_csv('titanic.csv')
test = pd.read_csv('titanic_test.csv')
train = train.drop(['PassengerId', 'Name', 'Ticket', 'Cabin'], axis=1).dropna()
x = train.drop('Survived', axis=1)
y = train['Survived']
x = pd.get_dummies(x)
x = pd.DataFrame(StandardScaler().fit_transform(x), columns=list(x.columns.values))
test = test.drop('Cabin', axis=1).dropna()
x_test = test.drop(['PassengerId', 'Name', 'Ticket'], axis=1)
x_test = pd.get_dummies(x_test)
x_test = pd.DataFrame(StandardScaler().fit_transform(x_test), columns=list(x_test.columns.values))

model = KNeighborsClassifier()
params = {'n_neighbors':np.arange(1, 50), 'metric':['euclidean','manhattan','minkowski'], 'weights':['distance', 'uniform']}
gscv = GridSearchCV(model, param_grid=params, cv=5, scoring='accuracy')
gscv.fit(x, y.ravel())
print(gscv.best_params_)
print(gscv.best_score_)

model = KNeighborsClassifier(metric='euclidean', n_neighbors=10,
weights='uniform') model.fit(x,y.ravel()) y_pred = model.predict(x_test) y_pred test['Survived'] = y_pred test.Survived.value_counts() test[['PassengerId', 'Survived']].to_csv('titanic_test-2.csv', index=False) ```
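One caveat worth flagging about the solutions above: they call `fit_transform` on the test data for both the `KNNImputer` and the `StandardScaler`, which re-estimates imputation neighbours and scaling statistics from the test set itself. A safer pattern is to fit the preprocessing objects on the training data only and merely `transform` the test data, so train and test live on the same scale. A minimal sketch of that pattern, using made-up toy arrays rather than the actual Titanic columns:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy train/test features (stand-ins for the Titanic columns)
X_train = np.array([[0.0], [2.0], [4.0]])
X_test = np.array([[2.0]])

scaler = StandardScaler()
scaler.fit(X_train)                       # learn mean/std from the training set only
X_test_scaled = scaler.transform(X_test)  # reuse the train statistics on the test set

# The test point equals the train mean, so it maps to 0 under the train statistics;
# refitting the scaler on the one-row test set would instead zero out its own mean
# and leave the two datasets on incompatible scales.
print(X_test_scaled[0, 0])
```

The same `fit` on train / `transform` on test split applies to the `KNNImputer` in the solution above.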
# Introduction to Random Forest

## Introduction

Random forests (also called random decision forests) construct multiple decision trees at training time. The output of a random forest is often the mode of the classes predicted by the individual trees in a classification problem, or an average of the predictions of the individual trees in a regression problem. One major advantage of random forests is that they can correct the overfitting problem individual decision trees suffer from. (Reference: https://en.wikipedia.org/wiki/Random_forest)

During this tutorial we will first introduce the notions of entropy and mutual information, as prerequisites for decision trees. After that we are going to implement a decision tree class that can grow on some training data recursively, based on a split method called ID3, which we will explain later. Finally we are going to use our decision tree class to construct our random forest.

## Entropy and mutual information

### Entropy

The entropy of a distribution is the expected amount of information we get when we observe a possible outcome of the distribution. It is used to evaluate the uncertainty of the distribution. But how can we measure how much information we get when we observe a possible outcome? An intuitive answer is that the less likely the outcome, the more information we get. To be specific, we have the following definition (after [Abramson 63]): let $E$ be some event which occurs with probability $P(E)$; if we are told that $E$ has occurred, then we say that we have received $Info(E) = \log_2\frac{1}{P(E)}$ bits of information.

Complete the following function to calculate the bits of information we receive when we are told that some event with probability $p$ occurs.
```
import math

def Info(p):
    """given the probability of some event, return the bits of information we receive if it occurs
    Args:
        p(float): the probability of some event occurring
    Return:
        (float): the bits of information we receive
    """
    return math.log(1/p, 2)
```

The entropy of a distribution $D$, denoted by $H(D)$, is simply the expected amount of information we get when we observe a possible outcome from the distribution. It is given by the following equation:

$H(D) = \sum_{E \in D} P(E)Info(E)$

Complete the following function to calculate the entropy of a discrete distribution.

```
def H(p):
    """given a discrete probability distribution, return its entropy
    Args:
        p(list of float): the list of probabilities with which each event in the distribution occurs
    Return:
        (float): the entropy of this distribution
    """
    entropy_sum = 0
    for event in p:
        if event > 0:
            entropy_sum += event * Info(event)
    return entropy_sum

# Simple examples to test your code:
# The entropy of a fair coin should be 1.0
print H([0.5, 0.5])
# The entropy of a fair dice should be 2.58
print H([1.0/6] * 6)
```

### Mutual Information

To illustrate what mutual information is, let's look at an example first. Suppose the two variables $gosports$ and $weather$ have the following joint distribution:

| | weather(sunny) | weather(cloudy) | weather(rainy) |
|-----------|-----------|-----------|-----------|
|gosports(yes) | 0.3 | 0.2 | 0.1 |
|gosports(no) | 0.1 | 0.1 | 0.2 |

```
# Calculate the entropy of gosports (should be 0.97)
entropy_sports = H([0.3 + 0.2 + 0.1, 0.1 + 0.1 + 0.2])
print entropy_sports

# Calculate the entropy of gosports conditioned on weather = sunny, cloudy, and rainy (should be 0.81, 0.92, 0.92 respectively)
entropy_sports_sunny = H([0.3 / 0.4, 0.1 / 0.4])
print entropy_sports_sunny
entropy_sports_cloudy = H([0.2 / 0.3, 0.1 / 0.3])
print entropy_sports_cloudy
entropy_sports_rainy = H([0.1 / 0.3, 0.2 / 0.3])
print entropy_sports_rainy

# Calculate the expected entropy of gosports if we are told weather (should be 0.88)
entropy_sports_weather = entropy_sports_sunny * 0.4 + entropy_sports_cloudy * 0.3 + entropy_sports_rainy * 0.3
print entropy_sports_weather

# Calculate the expected reduced entropy of gosports if we know weather (should be 0.095)
entropy_reduced = entropy_sports - entropy_sports_weather
print entropy_reduced
```

From the example, we know that if we are told the information about weather, the entropy (uncertainty) of gosports is reduced by 0.095. We call the reduced entropy of distribution X given distribution Y the $mutual$ $information$ between X and Y, denoted by I(X, Y). Formally, mutual information can be calculated as:

$I(X, Y) = H(X) - H(X|Y)$

where $H(X|Y)$ is just a shorthand for $E_Y[H(X|Y = y)]$.

Complete the following function to calculate mutual information:

```
def I(joint_dist):
    """given the joint distribution of two variables, calculate the mutual information between them
    Args:
        joint_dist(list of list of float): the joint distribution of two variables; for example,
            for the example above joint_dist = [[0.3, 0.2, 0.1], [0.1, 0.1, 0.2]]
    Return:
        (float) the mutual information between these two variables
    """
    m = len(joint_dist)
    if m <= 0:
        return -1
    n = len(joint_dist[0])
    probs = []
    for i in range(m):
        probs.append(sum(joint_dist[i]))
    H_total = H(probs)
    H_reduced = 0.0
    for j in range(n):
        total_prob = 0.0
        probs = []
        for i in range(m):
            total_prob += joint_dist[i][j]
            probs.append(joint_dist[i][j])
        for i in range(m):
            probs[i] /= total_prob
        H_reduced += total_prob * H(probs)
    return H_total - H_reduced

# result should be 0.09546
print I([[0.3, 0.2, 0.1], [0.1, 0.1, 0.2]])
# result should also be 0.09546
print I([[0.3, 0.1], [0.2, 0.1], [0.1, 0.2]])
```

## Decision Tree

Consider the following dataset. Each row represents a single training example.

| outlook | humidity | wind | play sports?
| |--------|--------|--------|--------|
| overcast | high | strong | yes |
| overcast | normal | weak | yes |
| overcast | high | weak | yes |
| overcast | normal | strong | yes |
| sunny | high | strong | no |
| sunny | normal | weak | yes |
| rain | high | strong | no |
| rain | normal | weak | yes |

Based on the training data above, we want to predict whether play sports is yes or no based on the information of outlook, humidity, and wind. After a close look at the dataset, we may find that if the outlook is overcast, then play sports is yes. Otherwise, it also depends on other attributes. To be specific, if outlook is sunny, play sports = yes iff humidity is normal; if outlook is rain, play sports = yes iff wind is weak. We can express the if-else process above as a tree as follows:

outlook / | \ / | \ sunny overcast rain | | | humidity yes wind / \ / \ high normal strong weak | | | | no yes no yes

This is a simple example of a decision tree. Please note that the tree above is not the only tree consistent with our training dataset. In general, in a decision tree, at each node we look at one of the attributes and partition our dataset according to the different values taken on this attribute, thus splitting our dataset. When a split dataset contains only one kind of label, the process is stopped. Otherwise we select new attributes and partition our dataset recursively until the data in the same dataset all have the same label, or some other stop condition is reached. However, in the process above a central problem remains unsolved: when we need to split our dataset, which attribute should we use? One common approach is to select the attribute that most reduces the uncertainty (entropy) of the training data, which is also called the ID3 method. In other words:

1. View each attribute as a distribution. For example, in the dataset above, there are 4 overcast, 2 sunny, and 2 rain. Then the distribution of outlook is [0.5, 0.25, 0.25].
2.
Select the attribute (distribution) that has the highest mutual information with the label (which can also be viewed as a distribution).
3. Use the selected attribute to partition the current dataset. Do recursion if necessary.

Complete the following class DecisionTreeNode.

```
class DecisionTreeNode():
    def __init__(self, X, Y, used_attr_num = 0):
        """ return the tree node on dataset (X, Y). Build its children recursively.
        Pick the attribute which has maximum mutual information with our label Y, and use that
        attribute to split the data into several classes, each class corresponding to one possible
        value of that attribute. The stopping condition is that a node contains only one kind of label.
        If there are multiple attributes with the same mutual information, pick the one with the smallest index.
        args:
            X(list of list of integer): X[i][j] is the value of the jth attribute of the ith training data
                For any given attribute, we encode the set of possible values it can take as 0, 1, 2, ...
            Y(list of integer): Y[i] is the label of the ith training data. We encode the set of possible values as 0, 1, 2, ...
        For example, for the outlook/humidity/rain/play sports dataset above:
            X = [[0, 0, 0], [0, 1, 1], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 1], [2, 0, 0], [2, 1, 1]]
            Y = [0, 0, 0, 0, 1, 0, 1, 0]
        constraint: (X, Y) should represent at least one sample
            used_attr_num(integer): number of used attributes; if all of the attributes have been used
                but the dataset is not composed of a single label (which means there are conflicting data),
                predict the label as the majority and stop recursion
        recommended members:
            self.label (None or integer): if the node is a leaf, it has a non-None label indicating its label
            self.attr_index (integer, defined when self.label is None): the index of the splitting attribute for this node
            self.child (dict, mapping from attribute values to child nodes): used to find the next child if this node is not a leaf
        """
        if len(set(Y)) <= 1:
            self.label = Y[0]
            return
        if used_attr_num >= len(X[0]): # attrs have been used up
            self.label = max(set(Y), key=Y.count)
            return
        self.label = None
        # select a proper attribute as the split attribute for this node
        sample_num = len(X)
        attr_num = len(X[0])
        max_mutual_info = 0
        opt_attr_index = -1
        for attr_index in range(attr_num):
            d = {}
            for sample_index in range(sample_num):
                attr = X[sample_index][attr_index]
                label = Y[sample_index]
                if not attr in d:
                    d[attr] = {}
                if not label in d[attr]:
                    d[attr][label] = 0
                d[attr][label] += 1
            joint_dist = []
            for attr in d:
                row_dist = []
                for label in set(Y):
                    if not label in d[attr]:
                        row_dist.append(0.0)
                    else:
                        row_dist.append(float(d[attr][label]) / sample_num)
                joint_dist.append(row_dist)
            mutual_info = I(joint_dist)
            if mutual_info > max_mutual_info:
                max_mutual_info = mutual_info
                opt_attr_index = attr_index
        # opt_attr_index is selected, split children based on the index
        self.attr_index = opt_attr_index
        self.child = {}
        attrToData = {}
        for sample_index in range(sample_num):
            attr = X[sample_index][self.attr_index]
            if not attr in attrToData:
                attrToData[attr] = [[],[]] # map attr to [X, Y] pair for child
attrToData[attr][0].append(X[sample_index]) attrToData[attr][1].append(Y[sample_index]) for attr in attrToData: self.child[attr] = DecisionTreeNode(attrToData[attr][0], attrToData[attr][1], used_attr_num + 1) def printNode(self, depth = 0, attr_names = None, attr_values = None, output_names = None): """ print the tree recursively, mainly used for debug args: depth: (int) the depth of current node, depth of root is zero attr_names: a list of string, denoting the name for each attribute attr_values: a list of list of string, attr_values[i][j] is the name of the ith attribute, jth value output_names: a list of string, denoting the name for each kind of output """ if self.label is not None: print '\t' * depth, if output_names is None: print 'label = ' + str(self.label) else: print 'label = ' + output_names[self.label] else: for attr in self.child: print '\t' * depth, if attr_names is None or attr_values is None: print 'attr[' + str(self.attr_index) + '] = ' + str(attr) + ':' else: print attr_names[self.attr_index] + ' = ' + attr_values[self.attr_index][attr] + ':' self.child[attr].printNode(depth + 1, attr_names, attr_values, output_names) def predict(self, x): """ given a input data x, predict its label according to the tree. implement it recursively if some value of the splitting attribute has never been seen by the node, return None args: x (list of integers) an input sample: return: label(int) the predicted label of this input """ if self.label is not None: return self.label try: attr = x[self.attr_index] return self.child[attr].predict(x) except KeyError: return None ``` ### Construct Decision Tree Node In this subsection of decision tree you will implement the construction function of node. Please follow the following specifications: 1. If the dataset for the current node is composed of the same label, stop recursion and use that label as the label for this tree node 2. 
If the dataset for the current node has at least two kinds of labels, you should split this dataset using some attribute (feature). The selection of the feature should follow this principle: choose the attribute that has the maximum mutual information with the current labels. If multiple attributes have the same mutual information, choose the attribute with the smallest index. In terms of minimizing the depth of the tree, this may not be the optimal solution, but generally speaking it works well. Actually, computing the tree with minimum depth is NP-hard, so it is almost impossible to produce the optimal result unless P = NP.

After implementing the construction function, please test your code using the following simple test case:

```
# TestCode for node construction
Y = [0, 0, 0, 0, 1, 0, 1, 0]
X = [[0, 0, 0],[0, 1, 1],[0, 0, 1],[0, 1, 0],[1, 0, 0],[1, 1, 1],[2, 0, 0],[2, 1, 1]]
node = DecisionTreeNode(X, Y)
print node.attr_index
print node.child[0].label
print node.child[1].attr_index
print node.child[2].attr_index

# The code above should produce the following results:
# 0
# 0
# 1
# 1
```

### Print Tree Node

Implement the printNode(depth = 0) function to visualize the tree node structure recursively. This function can help you get an intuitive feeling for the decision tree. We do not have strict requirements for the implementation of this function. However, your function should clearly indicate the label, the splitting attribute, and the value of the splitting attribute. We recommend using indentation to represent the structure of the tree. Please run the following test code after implementation:

```
# TestCode for printNode()
node.printNode()

# The code above should give the following tree structure:
#
# attr[0] = 0:
#     label = 0
# attr[0] = 1:
#     attr[1] = 0:
#         label = 1
#     attr[1] = 1:
#         label = 0
# attr[0] = 2:
#     attr[1] = 0:
#         label = 1
#     attr[1] = 1:
#         label = 0
#
```

### Predict Label for Test Data

Now you have implemented the construction of the tree node.
The next step is to predict labels for test data. You should do it recursively. The recommended stopping condition is that self.label is not None for some node. Please run the following test code after implementation:

```
# Test code for predict()
print node.predict([2,0,1]) # output should be 1
print node.predict([0,0,0]) # output should be 0
```

## Random Forest

We train our random forest based on a general technique called bootstrap aggregating, or bagging. It means that given a training set X = x1, ..., xn with labels Y = y1, ..., yn, we repeatedly (B times) select a set of random samples with replacement and train a decision tree on our selection. (Reference: https://en.wikipedia.org/wiki/Random_forest)

Algorithm: For $b = 1, ..., B$:

1. Sample, with replacement, n training examples from $X$, $Y$; call these $X_b$, $Y_b$.
2. Train a decision or regression tree $f_b$ on $X_b$, $Y_b$.

After training, predictions on an unseen sample x can be made by averaging the predictions from all the individual regression trees on x:

$f(x) = \frac{1}{B}\sum_{b=1}^{B} f_b(x)$

or by taking the majority vote in the case of decision trees. You should implement the construction function and predict() of RandomForest in the following class:

```
import random

class RandomForest:
    def __init__(self, X, Y, B, n):
        """ Construct a Random Forest using DecisionTreeNode.
        args:
            X(list of list of integer): X[i][j] is the value of the jth attribute of the ith training data
                For any given attribute, we encode the set of possible values it can take as 0, 1, 2, ...
            Y(list of integer): Y[i] is the label of the ith training data. We encode the set of possible values as 0, 1, 2, ...
            B(integer): number of decision trees this forest has
            n(integer): number of samples used to train each decision tree. Samples are drawn from X, Y uniformly with replacement
        recommended member:
            roots (list of DecisionTreeNode): a list of decision tree roots.
        len(roots) = B
        """
        self.roots = []
        sample_num = len(X)
        for i in range(B):
            X_b = []
            Y_b = []
            # draw n training examples uniformly with replacement
            for j in range(n):
                k = random.randrange(sample_num)
                X_b.append(X[k])
                Y_b.append(Y[k])
            self.roots.append(DecisionTreeNode(X_b, Y_b))

    def predict(self, x):
        """ Predict the label of input x using majority voting; if multiple labels receive the same
        number of votes, return any one of them.
        Note: since each tree is trained on a random set of samples, it is possible that a decision
        tree has never seen some of the values of some features. In this case, you should catch the
        KeyError in your implementation and cancel the vote of this tree.
        If the votes of all trees have been cancelled, you should return None.
        args:
            x (list of integers): an input sample
        return:
            (int) the predicted label of this input, by majority voting of all of its decision trees
        """
        cnt = {}
        for root in self.roots:
            pred = root.predict(x)
            if pred != None:
                if pred in cnt:
                    cnt[pred] += 1
                else:
                    cnt[pred] = 1
        opt_pred = None
        max_cnt = 0
        for pred in cnt:
            if cnt[pred] > max_cnt:
                max_cnt = cnt[pred]
                opt_pred = pred
        return opt_pred

# Test code for random forest:
randomForest = RandomForest(X, Y, 4, 4)
print randomForest.predict([2,0,0]) # 1 with prob around 0.6, 0 with prob around 0.4
print randomForest.predict([1,1,1]) # 1 with prob around 0.1, 0 with prob around 0.9
# Write your code here to test their probabilities
```

## Play with Real Data!

Hopefully you have now implemented the RandomForest class. You can play with it on some real data instead of the artificial play-sports examples. We use a dataset from http://archive.ics.uci.edu/ml/datasets/Car+Evaluation. In this dataset, we want to predict the acceptability (unacc, acc, good, vgood) of a car based on the following properties:

1. buying price: vhigh, high, med, low.
2. maintaining price: vhigh, high, med, low.
3. number of doors: 2, 3, 4, 5more.
4. positions for person: 2, 4, more.
5. size of luggage boot: small, med, big.
6.
safety: low, med, high.

First we need to load the data and construct our training and test datasets. (There is some dirty work here; I recommend that we should not ask students to work on this part.)

```
# Dirty work start, you can ignore it
# load data, convert data to our representation, and split data into training and testing data
f = open('car_acceptability.csv')
train_sample_num = 1200
lines = f.readlines()
X = []
Y = []
convert_dict = [{}, {}, {}, {}, {}, {}, {}]
for line in lines:
    attrs = line.split(',')
    row = []
    for i in range(7):
        attr = attrs[i]
        if not attr in convert_dict[i]:
            convert_dict[i][attr] = len(convert_dict[i])
        row.append(convert_dict[i][attr])
    X.append(row[:6])
    Y.append(row[6])
sample_num = len(X)
index = range(sample_num)
random.shuffle(index)
Xtrain = []
Xtest = []
Ytrain = []
Ytest = []
for i in range(sample_num):
    if i < train_sample_num:
        Xtrain.append(X[index[i]])
        Ytrain.append(Y[index[i]])
    else:
        Xtest.append(X[index[i]])
        Ytest.append(Y[index[i]])

# construct attr_names, attr_values, and output_names to make printNode more readable
# not something important
attr_names = ['buy price', 'maintain price', 'num of doors', 'num of person', 'size of lug', 'safety']
output_names = [0] * (max(convert_dict[6].values()) + 1)
for key in convert_dict[6]:
    output_names[convert_dict[6][key]] = key
attr_values = [0] * 6
for i in range(6):
    attr_values[i] = [0] * (max(convert_dict[i].values()) + 1)
    for key in convert_dict[i]:
        attr_values[i][convert_dict[i][key]] = key
# Dirty work end, attention please
```

### Play with Decision Tree on Real Data

In this part you should use $Xtrain$ and $Ytrain$ to train your decision tree.
After training, print the tree and test the accuracy on the test data:

```
# Train your tree and print it here
root = DecisionTreeNode(Xtrain, Ytrain)
#root.printNode(0, attr_names, attr_values, output_names)

# The beginning of your tree should look like:
# safety = low:
#     label = unacc
# safety = med:
#     num of person = 2:
#         label = unacc
#     num of person = 4:
#         buy price = vhigh:
#             maintain price = vhigh:
#                 label = unacc
#             maintain price = high:
#                 label = unacc
#             maintain price = med:
#                 size of lug = small:
#                     label = unacc

# test your tree on Xtest and Ytest here:
pred = []
cnt = 0
for i in range(len(Xtest)):
    if Ytest[i] == root.predict(Xtest[i]):
        cnt += 1
print float(cnt) / len(Ytest) # our implementation has accuracy between 0.85-0.9
```

### Play with Random Forest on Real Data

In this part you should use $Xtrain$ and $Ytrain$ to train your random forest. After training, test the accuracy on the test data:

```
# Train your forest here
forest = RandomForest(Xtrain, Ytrain, 100, 1000)

## Test your forest on Xtest and Ytest here:
pred = []
cnt = 0
for i in range(len(Xtest)):
    if Ytest[i] == forest.predict(Xtest[i]):
        cnt += 1
print float(cnt) / len(Ytest)
# When B = 100 and n = 1000, our implementation has accuracy around 0.92.
# Compared with the accuracy of a single decision tree, you can see the improvement of the random forest
```

### How accuracy changes with the number of samples

We have noticed that as $n$ changes, the accuracy changes accordingly. In this last section you should plot how the accuracy changes with $n$, while the other parameters are fixed.
To get a relatively stable result, we recommend that you repeat the experiment at least 10 times and take the average.

```
# Write your code here:
import matplotlib.pyplot as plt
repeat = 10
ns = range(50, 1250, 50)
res = []
B = 100
for n in ns:
    total = 0.0
    for r in range(repeat):
        forest = RandomForest(Xtrain, Ytrain, B, n)
        pred = []
        cnt = 0
        for i in range(len(Xtest)):
            if Ytest[i] == forest.predict(Xtest[i]):
                cnt += 1
        total += float(cnt) / len(Ytest)
    res.append(total / repeat)
plt.plot(ns, res)
plt.xlabel('number of samples')
plt.ylabel('accuracy')
plt.show()
```

Our implementation shows that as n increases, the accuracy first increases and then decreases. This result is quite reasonable. As your final task, think about why it happens.

<img src="files/example.png">
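One ingredient to consider when thinking about this: with N training points and bootstrap samples of size n drawn with replacement, the expected fraction of distinct training points seen by each tree is 1 - (1 - 1/N)^n, so as n grows the trees' training sets (and hence the trees) become nearly identical, shrinking the diversity that averaging relies on. A small, self-contained check of that expectation (pure Python 3, hypothetical N rather than the car dataset, not using the tutorial's classes):

```python
import random

def expected_distinct_fraction(N, n):
    """Closed-form expected fraction of distinct points in a size-n bootstrap from N points."""
    return 1.0 - (1.0 - 1.0 / N) ** n

def simulated_distinct_fraction(N, n, trials=2000, seed=0):
    """Monte-Carlo estimate of the same quantity, drawing with replacement."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        # a set keeps only the distinct sampled indices
        total += len({rng.randrange(N) for _ in range(n)})
    return total / (trials * float(N))

N = 100
for n in (50, 100, 400):
    print(n, round(expected_distinct_fraction(N, n), 3),
          round(simulated_distinct_fraction(N, n), 3))
```

At n = N the expected distinct fraction is already about 1 - 1/e ≈ 0.63, and it approaches 1 quickly beyond that.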
# Midterm Answer Script

**Name**: Ferdous Zeaul Islam
**ID**: 173 1136 042
**Course**: CSE445 (Machine Learning)
**Faculty**: Dr. Sifat Momen (Sfm1)
**Section**: 01
**Semester**: Spring 2021

### N.B- please put the diabetes.csv dataset in the same directory as the ipynb file.

```
# only need this line in jupyter
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
```

## (a) Read the dataset (which is in the csv format) using a pandas dataframe.

```
diabetes_df = pd.read_csv('./diabetes.csv')
diabetes_df.shape
```

## (b) Find out the number of instances and the number of features (including the target class) in the dataset.

```
print('Number of instances in the dataset =', diabetes_df.shape[0])
print('Number of features in the dataset =', diabetes_df.shape[1])
```

## (c) Does the dataset have any missing entries? Show your workings.

```
diabetes_df.info()
```

### Explanation:
We can observe from the command above that all columns/features of the dataset have a non-null count equal to the total number of instances that we found in Question (b). Therefore, we can state that **to the naked eye there are no missing entries in this dataset.**

## (d) Here "Outcome" is the target class and contains the values zero or one. Determine how many instances have the outcome value zero and how many have the outcome value one. Hence or otherwise, comment on whether this dataset suffers from the class imbalance problem.
```
outcome_freq = diabetes_df.Outcome.value_counts()
outcome_freq

num_total_instances = diabetes_df.shape[0]
num_outcome_zero = outcome_freq[0]
num_outcome_one = outcome_freq[1]

outcome_zero_data_percentage = round((num_outcome_zero*100)/num_total_instances, 3)
print('Percentage of data with outcome zero =', outcome_zero_data_percentage)

outcome_one_data_percentage = round((num_outcome_one*100)/num_total_instances, 3)
print('Percentage of data with outcome one =', outcome_one_data_percentage)
```

### Explanation:
With respect to "Outcome" we see that **65.104% of the data have the value zero** and the remaining **34.896% have the value one**. Clearly, **the dataset suffers from class imbalance.**

## (e) Show the first 5 and the last 5 instances of the dataset.

```
diabetes_df.head()
diabetes_df.tail()
```

## (f) Often, in many datasets, it may appear that there are no missing entries. However, when you look at the dataset closely, it is often found that the missing entries are replaced by a zero (0). Check if this dataset has this issue or not. Show and explain your workings.

```
diabetes_df[30:35]
diabetes_df[342:347]
diabetes_df[706:711]
diabetes_df[(diabetes_df['DiabetesPedigreeFunction'] == 0)].shape[0]
diabetes_df[(diabetes_df['Age'] == 0)].shape[0]
```

### Explanation:
Apart from the 'Pregnancies' and 'Outcome' columns, any other column with the value 0 is nonsensical. By printing various segments of the data we see that some instances have a 0 value in the columns 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin' and 'BMI'.
So we can state that **there is missing data encoded as 0 in this dataset.** Further calculations are shown below:

```
missing_data_count = diabetes_df[
    (diabetes_df['Glucose']==0) |
    (diabetes_df['BloodPressure']==0) |
    (diabetes_df['BMI']==0) |
    (diabetes_df['Insulin']==0) |
    (diabetes_df['SkinThickness']==0)
].shape[0]

print('A total of', missing_data_count, 'instances have missing data (one or more columns invalidly contain zero).')
```

## (g) Draw a histogram for each numerical feature. You may use the hist() function of the pandas dataframe. Documentation on this can be found at https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.hist.html

### In order to make the histograms for each feature visually appealing, you are advised to tweak the bins and figsize parameters.

```
diabetes_df.hist(bins = 9, figsize = (15, 15))
plt.show()
```

## (h) One of the ways to visualize how each attribute is correlated with other attributes is by drawing a seaborn correlation heatmap. Read the documentation on how to generate a correlation heatmap using the seaborn library. The following link provides a quick overview on how to do this: https://www.geeksforgeeks.org/how-to-create-a-seaborn-correlation-heatmap-in-python/

### I strongly suggest adjusting the figure size before using the heatmap. For instance, you can write the code plt.figure(figsize = (a,b)) before using seaborn's heatmap [Here a and b are appropriate choices for the figure size that you need to decide on].
```
import seaborn

# help taken from ->
# https://medium.com/@szabo.bibor/how-to-create-a-seaborn-correlation-heatmap-in-python-834c0686b88e

plt.figure(figsize=(15, 8))
corr_matrix = diabetes_df.corr()

# mask to hide the upper triangle of the symmetric corr-matrix
# mask = np.triu(np.ones_like(corr_matrix, dtype=bool))

heatmap = seaborn.heatmap(
    # correlation matrix
    corr_matrix,
    # mask the top triangle of the matrix
    # mask=mask,
    # two-contrast colormap, different colors for + and -
    cmap="PiYG",
    # color map range
    vmin=-1, vmax=1,
    # show corr values in the cells
    annot=True
)

# set a title
heatmap.set_title('Correlation Heatmap', fontdict={'fontsize': 20}, pad=16)
plt.show()
```

## (i) If this dataset has the issue discussed in (f), you are now required to write a function in Python that will replace each zero with the corresponding median value of the feature. Note that you may need to use the numpy library.

We saw in (f) that there were some invalid zeros in the columns **'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin' and 'BMI'**.
```
column_with_invalid_zeroes = ['Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI']

for column in column_with_invalid_zeroes:
    # extract the column from the original dataframe
    column_data = diabetes_df[column]
    # replace zero values with np.nan
    column_data = column_data.replace(0, np.nan)
    # replace np.nan values with the median
    column_data = column_data.fillna(column_data.median())
    # put the column back into the original dataframe
    diabetes_df[column] = column_data
```

Now if we run the same code as in (f) to count the instances with missing values (i.e., containing invalid zeros):

```
missing_data_count = diabetes_df[
    (diabetes_df['Glucose'] == 0) |
    (diabetes_df['BloodPressure'] == 0) |
    (diabetes_df['BMI'] == 0) |
    (diabetes_df['Insulin'] == 0) |
    (diabetes_df['SkinThickness'] == 0)
].shape[0]

print('A total of', missing_data_count, 'instances have missing data (one or more columns invalidly contain zero).')
```

**The count is now zero, so we can confirm that the invalid zeros have been replaced by their columns' median values.**

## (j) Split the dataset into X and y, where X contains all the predictors and y contains only the entries in the target class.

```
X = diabetes_df.drop(columns=['Outcome'])
y = diabetes_df['Outcome']

diabetes_df.head()
X.head()
y.head()
```

## (k) Use the train_test_split function to split the dataset into a train set and a test set in the ratio 80:20.

```
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)

train_data_percentage = round((X_train.shape[0] / X.shape[0]) * 100, 2)
test_data_percentage = round((X_test.shape[0] / X.shape[0]) * 100, 2)
print("Test size = " + str(test_data_percentage) + "%" + " Train size = " + str(train_data_percentage) + "%")
```

## (l) Write a code to implement the ZeroR classifier (i.e. a baseline classifier) on this dataset. Determine the precision, recall, F1 score, train accuracy and the test accuracy.
```
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import classification_report

# ZeroR classifier: always predicts the most frequent class
model = DummyClassifier(strategy='most_frequent', random_state=42)

# the model is fit on the training data
model.fit(X_train, y_train)

y_train_predictions = model.predict(X_train)
y_test_predictions = model.predict(X_test)

print('For the train predictions:\n', classification_report(y_train, y_train_predictions))
print()
print('For the test predictions:\n', classification_report(y_test, y_test_predictions))
```

## (m) Apply the KNN classifier with the Euclidean distance as the distance metric on this dataset. You need to determine a suitable value of the hyperparameter k. One way to do this is to apply the KNN classifier with different values of k and determine the train and test accuracies. Plot a graph of train and test accuracy with respect to k, and determine the value of k for which the difference between the train and the test accuracy is minimum. You may need to do feature scaling before using the KNN classifier.

Before applying the KNN algorithm, we need to scale our dataset.
**We must scale both the train and test segments of the dataset using the same min and max values for the corresponding columns.**

```
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler

X_train.head()

scaler = MinMaxScaler()
X_train_scaled_using_library = pd.DataFrame(scaler.fit_transform(X_train), columns=X_train.columns)

X_train_scaled_using_library.hist(bins=9, figsize=(15, 15))
plt.show()

columns = X.columns

X_train_col_min = []
X_train_col_max = []
for column in columns:
    X_train_col_max.append(X_train[column].max())
    X_train_col_min.append(X_train[column].min())

X_train_scaled = X_train.copy()

# the columns must be cast to float, otherwise assigning the
# scaled values below silently truncates them to integers
X_train_scaled[list(columns)] = X_train_scaled[list(columns)].astype(float)

for (row_idx, data) in X_train.iterrows():
    col_idx = 0
    for val in data:
        column = columns[col_idx]
        scaled_val = (val - X_train_col_min[col_idx]) / (X_train_col_max[col_idx] - X_train_col_min[col_idx])
        X_train_scaled.at[row_idx, column] = float(scaled_val)
        col_idx += 1

X_train_scaled.hist(bins=9, figsize=(15, 15))
plt.show()
```

Of the two scaling runs above, the first was done with sklearn's MinMaxScaler() and the second by implementing the scaling manually. Since the per-column histograms of the two results match, we can conclude that our manual scaling process is as accurate as sklearn's MinMaxScaler().
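The element-wise double loop above works, but min-max scaling is naturally vectorized in pandas. Here is a minimal self-contained sketch of the same idea (on a tiny made-up frame, not the diabetes data) in which the *training* min/max are applied to the test set, which is the point emphasized above: the test set must never contribute to the scaling statistics.

```python
import pandas as pd

def min_max_scale(train_df, other_df):
    """Scale columns to [0, 1] using the training min/max only."""
    col_min = train_df.min()
    col_max = train_df.max()
    scaled_train = (train_df - col_min) / (col_max - col_min)
    # test values outside the training range fall outside [0, 1] -- expected
    scaled_other = (other_df - col_min) / (col_max - col_min)
    return scaled_train, scaled_other

# hypothetical toy data, just to exercise the function
train = pd.DataFrame({"Glucose": [80.0, 120.0, 200.0], "Age": [20.0, 40.0, 60.0]})
test = pd.DataFrame({"Glucose": [100.0], "Age": [30.0]})
train_s, test_s = min_max_scale(train, test)
```

Because the arithmetic is broadcast over whole columns, there is no need for the float cast or the nested `iterrows` loop.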
Now we can proceed to manually scale the test set, **using the minimum and maximum values of the train dataset**:

```
X_test_scaled = X_test.copy()
X_test_scaled[list(columns)] = X_test_scaled[list(columns)].astype(float)

for (row_idx, data) in X_test_scaled.iterrows():
    col_idx = 0
    for val in data:
        column = columns[col_idx]
        scaled_val = (val - X_train_col_min[col_idx]) / (X_train_col_max[col_idx] - X_train_col_min[col_idx])
        X_test_scaled.at[row_idx, column] = scaled_val
        col_idx += 1

X_test_scaled.head()
```

Now we implement a function that applies the KNN classifier for each k value in the range passed as a function parameter:

```
def check_k_in_range(left, right):
    k_values = []
    for i in range(left, right):
        k_values.append(i)

    train_accuracies = []
    test_accuracies = []

    for k in k_values:
        # k-NN classifier with k neighbours and Euclidean distance
        model = KNeighborsClassifier(n_neighbors=k, metric='minkowski', p=2)
        # train model
        model.fit(X_train_scaled, y_train)

        # train predictions
        y_train_predictions = model.predict(X_train_scaled)
        # train accuracy for the current k value
        train_accuracies.append(accuracy_score(y_train, y_train_predictions))

        # test predictions
        y_test_predictions = model.predict(X_test_scaled)
        # test accuracy for the current k value
        test_accuracies.append(accuracy_score(y_test, y_test_predictions))

    # plot train accuracy and test accuracy vs the k value
    plt.figure(figsize=(15, 8))
    plt.title('Train accuracy, Test accuracy vs K-values')
    plt.plot(k_values, train_accuracies, 'ro-', k_values, test_accuracies, 'bv--')
    plt.legend(['Training Accuracy', 'Test Accuracy'])
    plt.xlabel('K values')
    plt.ylabel('Accuracy')

min_k = 1
max_k = int(X_train.shape[0] / 5)
print('Minimum k = ', min_k, 'Maximum k = ', max_k)
check_k_in_range(min_k, max_k)
```

### Explanation:

From the figure we can observe that an optimal k-value lies in the range 10 to 20.
For this range the train and test accuracies are relatively close, which means a reduced chance of the model becoming too complex and overfitting. Let's test k values in the range 10 to 20 now.

```
check_k_in_range(10, 20)
```

From the graphs above we can state that **k = 17 should be the optimal choice for our k-nearest-neighbour classifier.**

## (n) Apply the decision tree classifier with the "gini" criterion on this dataset. One of the hyperparameters of the decision tree classifier is max_depth. Apply the decision tree classifier with different values of max_depth and find the train and test accuracies. Plot a graph showing how the train and test accuracy vary with max_depth. Determine the most suitable value of max_depth. For a suitable value of max_depth, draw the decision tree.

```
from sklearn import tree

def check_decision_tree_max_depth_in_range(left, right):
    max_depths = []
    for i in range(left, right):
        max_depths.append(i)

    train_accuracies = []
    test_accuracies = []

    for depth in max_depths:
        # decision tree classifier with the given max_depth and impurity measure 'gini'
        model = tree.DecisionTreeClassifier(criterion='gini', max_depth=depth)
        # train model
        model.fit(X_train, y_train)

        # train predictions
        y_train_predictions = model.predict(X_train)
        # train accuracy for the current depth
        train_accuracies.append(accuracy_score(y_train, y_train_predictions))

        # test predictions
        y_test_predictions = model.predict(X_test)
        # test accuracy for the current depth
        test_accuracies.append(accuracy_score(y_test, y_test_predictions))

    # plot train accuracy and test accuracy vs max_depth
    plt.figure(figsize=(15, 8))
    plt.title('Train accuracy, Test accuracy vs Max-Depths')
    plt.plot(max_depths, train_accuracies, 'ro-', max_depths, test_accuracies, 'bv--')
    plt.legend(['Training Accuracy', 'Test Accuracy'])
    plt.xlabel('Max Depths')
    plt.ylabel('Accuracy')

check_decision_tree_max_depth_in_range(1, 50)
```

It appears that our desired max_depth is somewhere in the range from 1 to 10.
Let's find out:

```
check_decision_tree_max_depth_in_range(1, 10)
```

From the graph we can state that **max_depth = 4 is the optimal choice.**

Now let's draw the decision tree for max_depth=4 with the impurity measure gini:

```
import pydotplus
from IPython.display import Image

# decision tree classifier with max_depth = 4 and impurity measure 'gini'
model = tree.DecisionTreeClassifier(criterion='gini', max_depth=4)
# train model
model.fit(X_train, y_train)

dot_data = tree.export_graphviz(model, feature_names=X_train.columns,
                                class_names=['non-diabetic', 'diabetic'],
                                filled=True, out_file=None)
graph = pydotplus.graph_from_dot_data(dot_data)
Image(graph.create_png())
```

## (o) Read the article "How to configure k-fold cross validation" and apply 10-fold cross validation using the classifiers in (m) and (n). Determine the performance of the classifiers (accuracy, precision, recall, f1-score and the area under the ROC curve) on this dataset. Link to the article: https://machinelearningmastery.com/how-to-configure-k-fold-cross-validation/

We have to initialize the 10-fold cross validator and our classifiers.

```
from sklearn.model_selection import StratifiedKFold, cross_val_score

# 10-fold cross validation
cv = StratifiedKFold(n_splits=10, random_state=42, shuffle=True)
```

Let's find the accuracy, precision, recall and area under the ROC curve for the decision tree classifier. To pick the max_depth hyperparameter for the decision tree, we will choose the value with the highest average accuracy under 10-fold cross validation.
```
max_depths = []
for i in range(1, 25):
    max_depths.append(i)

accuracies = []
for depth in max_depths:
    model = tree.DecisionTreeClassifier(criterion='gini', max_depth=depth)
    accuracy_segments = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=1)
    accuracies.append(np.mean(accuracy_segments))

plt.figure(figsize=(15, 8))
plt.title('Avg accuracy vs Max depths')
plt.plot(max_depths, accuracies, 'bv--')
plt.xlabel('Max depths')
plt.ylabel('Avg accuracy')
plt.show()
```

So, **max_depth = 5** gives the highest accuracy.

```
# decision tree classifier with max_depth=5 and impurity measure 'gini'
model_decision_tree = tree.DecisionTreeClassifier(criterion='gini', max_depth=5)

accuracies = cross_val_score(model_decision_tree, X, y, scoring='accuracy', cv=cv, n_jobs=1)
precisions = cross_val_score(model_decision_tree, X, y, scoring='precision', cv=cv, n_jobs=1)
recalls = cross_val_score(model_decision_tree, X, y, scoring='recall', cv=cv, n_jobs=1)
f1s = cross_val_score(model_decision_tree, X, y, scoring='f1', cv=cv, n_jobs=1)
aucs = cross_val_score(model_decision_tree, X, y, scoring='roc_auc', cv=cv, n_jobs=1)

accuracy_decision_tree = np.mean(accuracies)
precision_decision_tree = np.mean(precisions)
recall_decision_tree = np.mean(recalls)
f1_decision_tree = np.mean(f1s)
auc_decision_tree = np.mean(aucs)

print('For the Decision Tree classifier:')
print('accuracy =', round(accuracy_decision_tree, 2),
      'precision =', round(precision_decision_tree, 2),
      'recall =', round(recall_decision_tree, 2),
      'f1-score =', round(f1_decision_tree, 2),
      'AUC =', round(auc_decision_tree, 2))
```

Let's find the accuracy, precision, recall and area under the ROC curve for the k-NN classifier. To pick the hyperparameter k for the classifier, we will choose the value with the highest average accuracy under 10-fold cross validation.
```
X_scaled = pd.DataFrame(MinMaxScaler().fit_transform(X), columns=X.columns)

k_values = []
for i in range(1, 25):
    k_values.append(i)

accuracies = []
for k in k_values:
    model = KNeighborsClassifier(n_neighbors=k, metric='minkowski', p=2)
    accuracy_segments = cross_val_score(model, X_scaled, y, scoring='accuracy', cv=cv, n_jobs=1)
    accuracies.append(np.mean(accuracy_segments))

plt.figure(figsize=(15, 8))
plt.title('Avg accuracy vs K-values')
plt.plot(k_values, accuracies, 'bv--')
plt.xlabel('K values')
plt.ylabel('Avg accuracy')
plt.show()
```

So, **k = 17 (or 15)** gives the highest average accuracy.

```
# k-NN classifier with k=17 neighbours and Euclidean distance
model_knn = KNeighborsClassifier(n_neighbors=17, metric='minkowski', p=2)

accuracies = cross_val_score(model_knn, X_scaled, y, scoring='accuracy', cv=cv, n_jobs=1)
precisions = cross_val_score(model_knn, X_scaled, y, scoring='precision', cv=cv, n_jobs=1)
recalls = cross_val_score(model_knn, X_scaled, y, scoring='recall', cv=cv, n_jobs=1)
f1s = cross_val_score(model_knn, X_scaled, y, scoring='f1', cv=cv, n_jobs=1)
aucs = cross_val_score(model_knn, X_scaled, y, scoring='roc_auc', cv=cv, n_jobs=1)

accuracy_knn = np.mean(accuracies)
precision_knn = np.mean(precisions)
recall_knn = np.mean(recalls)
f1_knn = np.mean(f1s)
auc_knn = np.mean(aucs)

print('For the K-NN classifier:')
print('accuracy =', round(accuracy_knn, 2),
      ', precision =', round(precision_knn, 2),
      ', recall =', round(recall_knn, 2),
      ', f1-score =', round(f1_knn, 2),
      ', AUC =', round(auc_knn, 2))
```

To compare performance, let's draw a bar graph of the evaluation metrics for the two classifiers.
```
labels = ['accuracy', 'precision', 'recall', 'f1', 'auc']
decision_tree_evaluation_metrics = [accuracy_decision_tree, precision_decision_tree,
                                    recall_decision_tree, f1_decision_tree, auc_decision_tree]
knn_evaluation_metrics = [accuracy_knn, precision_knn, recall_knn, f1_knn, auc_knn]

x = np.arange(len(labels))  # the label locations
width = 0.35  # the width of the bars

fig, ax = plt.subplots()
ax.bar(x - width/2, decision_tree_evaluation_metrics, width, label='decision tree')
ax.bar(x + width/2, knn_evaluation_metrics, width, label='k-nn')

# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('Score')
ax.set_title('Decision Tree vs KNN comparison')
ax.set_xticks(x)
ax.set_xticklabels(labels)
ax.legend()

plt.show()
```
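As an aside, the five separate `cross_val_score` calls used in (o) refit the model once per metric; scikit-learn's `cross_validate` accepts a list of scorers and fits each fold only once. A sketch on a synthetic binary problem (the `make_classification` toy data here merely stands in for the diabetes frame):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.tree import DecisionTreeClassifier

# toy stand-in data, same shape of problem as the diabetes task
X_demo, y_demo = make_classification(n_samples=200, n_features=8, random_state=42)

cv = StratifiedKFold(n_splits=10, random_state=42, shuffle=True)
model = DecisionTreeClassifier(criterion="gini", max_depth=5, random_state=42)

# one pass over the folds computes every metric at once
metric_names = ["accuracy", "precision", "recall", "f1", "roc_auc"]
scores = cross_validate(model, X_demo, y_demo, cv=cv, scoring=metric_names)

summary = {name: float(np.mean(scores["test_" + name])) for name in metric_names}
```

This is purely a convenience/efficiency change; the fold-wise results are identical to running `cross_val_score` per metric with the same `cv` object.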
```
%matplotlib inline
%config InlineBackend.figure_format = "retina"

from matplotlib import rcParams
rcParams["savefig.dpi"] = 100
rcParams["figure.dpi"] = 100

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
session = tf.InteractiveSession()

from exoplanet import transit

T = tf.float64

c1 = tf.constant(0.5, dtype=T)
c2 = tf.constant(0.5, dtype=T)
ld = transit.QuadraticLimbDarkening(c1, c2)

N = 1000
r_ref = tf.constant(0.1 + np.zeros(N), dtype=T)
z_ref = (1 + r_ref) * tf.constant(np.linspace(0.0, 1.0, N), dtype=T)

r_var = tf.placeholder(T, (None, None))
z_var = tf.placeholder(T, (None, None))

delta_var = transit.transit_depth(ld, z_var, r_var, n_integrate=1000)
delta_exact = transit.transit_depth(ld, z_var, r_var, n_integrate=500000)

ns = 2**np.arange(1, 14)
deltas = []
for n in ns:
    deltas.append(tf.abs(delta_exact - transit.transit_depth(ld, z_var, r_var, n_integrate=n)))

# error of the n_integrate=1000 depth relative to the high-resolution reference
# (built on the placeholders so it can be evaluated for every radius ratio)
delta_z = delta_var - delta_exact

z_val = z_ref.eval()

rors = np.array([0.01, 0.04, 0.16, 0.64])
shape = np.zeros((len(rors), len(z_val)))
fd = {r_var: rors[:, None] + shape, z_var: z_val[None, :] + shape}

ld_cs = [(0.5, 0.5), (0.1, 0.8), (0.8, 0.1)]
err = np.empty((len(ld_cs), len(ns), len(rors), len(z_val)))
err_z = np.empty((len(ld_cs), len(rors), len(z_val)))
for i, c in enumerate(ld_cs):
    fd[c1] = c[0]
    fd[c2] = c[1]
    err[i], err_z[i] = session.run([deltas, delta_z], feed_dict=fd)

# worst case over z gives the infinity norm for each (limb darkening, n, r)
err_val = err.max(axis=-1)
err_val.shape

fig, axes = plt.subplots(2, 1, figsize=(5, 6), sharex=True)
for i, (c, s) in enumerate(zip(ld_cs, ["solid", "dashed", "dotted"])):
    for j, ror in enumerate(rors):
        color = "C{0}".format(j)
        label = "$r = {0:.2f}$".format(ror) if s == "solid" else None
        ax = axes[0]
        ax.loglog(ns, err_val[i, :, j], color=color, linestyle=s, label=label)
        ax = axes[1]
        ax.loglog(ns, err_val[i, :, j] / ror**2, color=color, linestyle=s, label=label)
axes[0].axhline(1e-6, color="k", lw=2.0, alpha=0.3)
axes[0].set_ylabel(r"$||\delta_n - \delta_\mathrm{exact}||_\infty$")
axes[1].set_ylabel(r"$||\delta_n - \delta_\mathrm{exact}||_\infty / r^2$")
axes[1].set_xlabel("number of integration annuli $[n]$")
axes[1].set_xlim(ns.min(), ns.max())
axes[0].legend()
fig.subplots_adjust(hspace=0.04)

fig, axes = plt.subplots(2, 1, figsize=(5, 6), sharex=True)
for i, (c, s) in enumerate(zip(ld_cs, ["solid", "dashed", "dotted"])):
    for j, ror in enumerate(rors):
        color = "C{0}".format(j)
        label = "$r = {0:.2f}$".format(ror) if s == "solid" else None
        ax = axes[0]
        ax.semilogy(z_val, np.abs(err_z[i, j]), color=color, linestyle=s, label=label)
        ax = axes[1]
        ax.semilogy(z_val, np.abs(err_z[i, j] / ror**2), color=color, linestyle=s, label=label)
axes[0].axhline(1e-6, color="k", lw=2.0, alpha=0.3)
axes[0].set_ylabel(r"$|\delta_\mathrm{1000} - \delta_\mathrm{exact}|$")
axes[1].set_ylabel(r"$|\delta_\mathrm{1000} - \delta_\mathrm{exact}| / r^2$")
axes[1].set_xlabel("impact parameter $z$")
axes[1].set_xlim(z_val.min(), z_val.max())
axes[1].legend()
fig.subplots_adjust(hspace=0.04)

# timing: how does the cost scale with the number of evaluation points?
ror = 0.1
r_const = tf.constant(ror, dtype=T)
nums = 2**np.arange(10, 22)
times = []
for n in nums[::-1]:
    z_grid = tf.constant(np.linspace(0.0, 1.0 + ror, n), dtype=T)
    delta_time = transit.transit_depth(ld, z_grid, r_const, n_integrate=1000)
    # warm-up run before timing
    session.run(delta_time)
    res = %timeit -o session.run(delta_time)
    times.append(res.best)

# the loop ran over nums in reverse order
times = times[::-1]
plt.loglog(nums, times, "o-")

nums
```
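The cells above measure how the transit-depth error shrinks as the number of integration annuli grows. The same convergence-study pattern can be illustrated without the `exoplanet` package; this hypothetical sketch uses a plain midpoint rule on an integral with a known exact value, and builds the same "error versus number of subdivisions" array that the log-log plots above display:

```python
import numpy as np

def midpoint_integral(f, a, b, n):
    """Midpoint-rule approximation of the integral of f over [a, b] with n panels."""
    x = a + (np.arange(n) + 0.5) * (b - a) / n
    return np.sum(f(x)) * (b - a) / n

# integral of cos over [0, 1] is sin(1)
exact = np.sin(1.0)
ns = 2 ** np.arange(1, 10)
errors = np.array([abs(midpoint_integral(np.cos, 0.0, 1.0, n) - exact) for n in ns])
```

Plotting `errors` against `ns` on log-log axes would show the characteristic straight line of an $O(n^{-2})$ method, which is the same diagnostic the notebook applies to the annulus integration.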
## Making decisions with pandas

### Quintile analysis: random data

Quintile analysis is a common framework for evaluating the efficacy of security factors.

#### What is a factor?

A factor is a method for scoring/ranking sets of securities. For a particular point in time and for a particular set of securities, a factor can be represented as a pandas Series where the index is an array of the security identifiers and the values are the scores or ranks.

### Quintiles/Buckets

If we take factor scores over time, we can, at each point in time, split the set of securities into 5 equal buckets, or quintiles, based on the order of the factor scores. There is nothing particularly sacred about the number 5; we could have used 3 or 10, but 5 is used often. Finally, we track the performance of each of the five buckets to determine whether there is a meaningful difference in the returns. We tend to focus most intently on the difference in returns between the bucket with the highest rank and the one with the lowest rank.

#### Generating time series data for the explanation

- Returns: generate random returns for a specified number of securities and periods.
- Signals: generate random signals for the same securities and periods, with a prescribed level of correlation with the returns.

For a factor to be useful, there must be some information (correlation) between the scores/ranks and subsequent returns. If there were no correlation, we would see no meaningful separation between the buckets. A good exercise for the reader: duplicate this analysis with random data generated with zero correlation.
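Concretely, a factor at a single point in time is just a pandas Series indexed by security identifier, and `pd.qcut` splits it into five equal-sized buckets by rank. A tiny sketch with made-up scores (only ten hypothetical securities, to keep the bucket sizes obvious):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
securities = ['s{:05d}'.format(i) for i in range(10)]
factor = pd.Series(rng.normal(size=10), index=securities, name='score')

# rank-order the scores into five equal buckets q1 (lowest) .. q5 (highest)
quintile = pd.qcut(factor, 5, labels=['q1', 'q2', 'q3', 'q4', 'q5'])
```

With 10 securities and 5 buckets, each quintile holds exactly 2 names; the highest-scoring security always lands in `q5` and the lowest in `q1`, which is what the bucket-tracking below relies on.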
```
import pandas as pd
import numpy as np

num_securities = 1000
num_periods = 1000
period_frequency = 'W'
start_date = "2000-12-31"

np.random.seed([3, 1415])

means = [0, 0]
covariance = [[1., 5e-3],
              [5e-3, 1.]]

# generate two sets of data, m[0] and m[1], with ~0.005 correlation
m = np.random.multivariate_normal(means, covariance, (num_periods, num_securities)).T

# generate the index
ids = pd.Index(['s{:05d}'.format(s) for s in range(num_securities)])
tidx = pd.date_range(start=start_date, periods=num_periods, freq=period_frequency)
```

I divide m[0] by 25 to scale it down to something that looks like stock returns. I also add 1e-7 to give a modest positive mean return.

```
security_returns = pd.DataFrame(m[0] / 25 + 1e-7, tidx, ids)
security_signals = pd.DataFrame(m[1], tidx, ids)
```

# pd.qcut - Create Quintile Buckets

```
def qcut(s, q=5):
    labels = ['q{}'.format(i) for i in range(1, q + 1)]
    return pd.qcut(s, q, labels=labels)

cut = security_signals.stack().groupby(level=0).apply(qcut)

# use these cuts as an index on our returns
returns_cut = security_returns.stack().rename('returns') \
    .to_frame().set_index(cut, append=True) \
    .swaplevel(2, 1).sort_index().squeeze() \
    .groupby(level=[0, 1]).mean().unstack()

import matplotlib.pyplot as plt

fig = plt.figure(figsize=(15, 5))
ax1 = plt.subplot2grid((1, 3), (0, 0))
ax2 = plt.subplot2grid((1, 3), (0, 1))
ax3 = plt.subplot2grid((1, 3), (0, 2))

# Cumulative Returns
returns_cut.add(1).cumprod() \
    .plot(colormap='jet', ax=ax1, title="Cumulative Returns")
leg1 = ax1.legend(loc='upper left', ncol=2, prop={'size': 10}, fancybox=True)
leg1.get_frame().set_alpha(.8)

# Rolling 50 Week Return
returns_cut.add(1).rolling(50).apply(lambda x: x.prod()) \
    .plot(colormap='jet', ax=ax2, title="Rolling 50 Week Return")
leg2 = ax2.legend(loc='upper left', ncol=2, prop={'size': 10}, fancybox=True)
leg2.get_frame().set_alpha(.8)

# Return Distribution
returns_cut.plot.box(vert=False, ax=ax3, title="Return Distribution")

fig.autofmt_xdate()
plt.show()
```

### Maximum draw down per quintile

```
def max_dd(returns):
    """returns is a series"""
    r = returns.add(1).cumprod()
    dd = r.div(r.cummax()).sub(1)
    mdd = dd.min()
    # idxmin/idxmax return the index labels; the old Series argmin/argmax
    # label behaviour is deprecated in newer pandas
    end = dd.idxmin()
    start = r.loc[:end].idxmax()
    return mdd, start, end

def max_dd_df(returns):
    """returns is a dataframe"""
    series = lambda x: pd.Series(x, ['Draw Down', 'Start', 'End'])
    return returns.apply(max_dd).apply(series)

# max_dd_df(returns_cut)
draw_downs = max_dd_df(returns_cut)

fig, axes = plt.subplots(5, 1, figsize=(10, 8))
for i, ax in enumerate(axes[::-1]):
    returns_cut.iloc[:, i].add(1).cumprod().plot(ax=ax)
    sd, ed = draw_downs[['Start', 'End']].iloc[i]
    ax.axvspan(sd, ed, alpha=0.1, color='r')
    ax.set_ylabel(returns_cut.columns[i])

fig.suptitle('Maximum Draw Down', fontsize=18)
fig.tight_layout()
plt.subplots_adjust(top=.95)
```
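The drawdown helper can be sanity-checked in isolation. Here is a self-contained sketch of the same cumprod/cummax construction on a hand-built return series whose worst drawdown is known to be -50% by design (it uses `idxmin`/`idxmax`, the label-returning equivalents of the Series `argmin`/`argmax` calls whose label behaviour was removed in newer pandas):

```python
import pandas as pd

def max_drawdown(returns):
    """Maximum peak-to-trough loss of the cumulative-return curve of a Series."""
    curve = returns.add(1).cumprod()
    drawdown = curve.div(curve.cummax()).sub(1)
    trough = drawdown.idxmin()          # date of the deepest trough
    peak = curve.loc[:trough].idxmax()  # preceding peak
    return drawdown.min(), peak, trough

# +10%, then -50%, then +20%: the only drawdown is the single -50% drop
r = pd.Series([0.10, -0.50, 0.20],
              index=pd.to_datetime(['2020-01-05', '2020-01-12', '2020-01-19']))
mdd, start, end = max_drawdown(r)
```

The recovery on the third date does not change the result, since the drawdown is measured from the running maximum of the curve.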
## 3.4 Editing paragraphs

### 3.4.1 Adjusting first-line indentation

Many publishers require the first line of each paragraph to be indented. To adjust the first-line indentation distance, use the `\setlength{\parindent}{length}` command, filling in the desired distance for `{length}`.

**Example 1.** Use the `\setlength{\parindent}{length}` command to set the first-line indentation to two characters.

```tex
\documentclass[12pt]{article}
\setlength{\parindent}{2em}

\begin{document}

In \LaTeX, We can use the setlength command to adjust the indentation distance of the first line. In this case, we set the indentation distance as 2em.

\end{document}
```

The compiled result is shown in Figure 3-4-1.

<p align="center">
<img align="middle" src="graphics/example3_4_1.png" width="600" />
</p>

<center><b>Figure 3-4-1</b> Compiled result</center>

Of course, if you do not want a paragraph's first line to be indented automatically, simply put the `\noindent` command before the paragraph.

**Example 2.** Use the `\noindent` command so that the first line of the second paragraph is not indented.

```tex
\documentclass[12pt]{article}
\setlength{\parindent}{2em}

\begin{document}

In \LaTeX, We can use the setlength command to adjust the indentation distance of the first line. In this case, we set the indentation distance as 2em.

\noindent In \LaTeX, We can use the setlength command to adjust the indentation distance of the first line. In this case, we set the indentation distance as 2em.

\end{document}
```

The compiled result is shown in Figure 3-4-2.

<p align="center">
<img align="middle" src="graphics/example3_4_2.png" width="600" />
</p>

<center><b>Figure 3-4-2</b> Compiled result</center>

Note that when paragraphs follow section headings, the first paragraph after each section is not indented by default. To indent the first paragraph like the others, you can put the `\hspace*{\parindent}` command at the start of the paragraph, or load the package `\usepackage{indentfirst}` in the preamble of the source file.

**Example 3.** Use the `\hspace*{\parindent}` command to indent the first line of the first paragraph after a section heading.

```tex
\documentclass[12pt]{article}
\setlength{\parindent}{2em}

\begin{document}

\section{Introduction}

\hspace*{\parindent}In \LaTeX, We can use the setlength command to adjust the indentation distance of the first line. In this case, we set the indentation distance as 2em.

In \LaTeX, We can use the setlength command to adjust the indentation distance of the first line. In this case, we set the indentation distance as 2em.
\end{document}
```

The compiled result is shown in Figure 3-4-3.

<p align="center">
<img align="middle" src="graphics/example3_4_3.png" width="600" />
</p>

<center><b>Figure 3-4-3</b> Compiled result</center>

**Example 4.** Use the `indentfirst` package to indent the first line of the first paragraph after a section heading.

```tex
\documentclass[12pt]{article}
\setlength{\parindent}{2em}
\usepackage{indentfirst}

\begin{document}

\section{Introduction}

In \LaTeX, We can use the setlength command to adjust the indentation distance of the first line. In this case, we set the indentation distance as 2em.

In \LaTeX, We can use the setlength command to adjust the indentation distance of the first line. In this case, we set the indentation distance as 2em.

\end{document}
```

The compiled result is shown in Figure 3-4-4.

<p align="center">
<img align="middle" src="graphics/example3_4_4.png" width="600" />
</p>

<center><b>Figure 3-4-4</b> Compiled result</center>

### 3.4.2 Adjusting paragraph spacing

When typesetting with LaTeX, we can make the separation between paragraphs more visible by adding some vertical space between them. The simplest way is to use commands such as `\smallskip`, `\medskip` and `\bigskip`.

> See [How to insert a blank line between any two paragraphs???](https://latex.org/forum/viewtopic.php?f=44&t=6934)

**Example 5.** Use the `\smallskip`, `\medskip` and `\bigskip` commands to set different amounts of paragraph spacing.

```tex
\documentclass[12pt]{article}

\begin{document}

How to set space between any two paragraphs?

\smallskip

How to set space between any two paragraphs?

\medskip

How to set space between any two paragraphs?

\bigskip

How to set space between any two paragraphs?
\end{document}
```

The compiled result is shown in Figure 3-4-5.

<p align="center">
<img align="middle" src="graphics/example3_4_5.png" width="600" />
</p>

<center><b>Figure 3-4-5</b> Compiled result</center>

Several other methods for setting paragraph spacing are discussed at [https://tex.stackexchange.com/questions/41476](https://tex.stackexchange.com/questions/41476).

### 3.4.3 Putting a box around a paragraph

When a document consists entirely of long blocks of text, the layout can look monotonous. Adding a frame around some of the text gives the page some variation. In LaTeX, we can use the `\fbox{}` command to draw a border around text.

> See [How to put a box around multiple lines](https://latex.org/forum/viewtopic.php?f=44&t=4117).

**Example 6.** Use `\fbox{}` to put a border around text.

```tex
\documentclass[12pt]{article}

\begin{document}

\fbox{
\parbox{0.8\linewidth}{
In \LaTeX, we can use fbox and parbox to put a box around multiple lines. In this case, we set the linewidth as 0.8.
}
}

\end{document}
```

The compiled result is shown in Figure 3-4-6.

<p align="center">
<img align="middle" src="graphics/example3_4_6.png" width="600" />
</p>

<center><b>Figure 3-4-6</b> Compiled result</center>

### 3.4.4 Adjusting paragraph alignment

LaTeX's default alignment is justified. Sometimes, to emphasize the content of a particular paragraph, we may want to center it; in LaTeX we can use the `center` environment to do so. Some publishers also require documents to be left- or right-aligned, in which case we can use the `flushleft` and `flushright` environments.

**Example 7.** Use the `center`, `flushleft` and `flushright` environments to center, left-align and right-align text.

```tex
\documentclass[12pt]{article}

\begin{document}

\begin{center}
This is latex-cookbook
\end{center}

\begin{flushleft}
This is latex-cookbook
\end{flushleft}

\begin{flushright}
This is latex-cookbook
\end{flushright}

\end{document}
```

The compiled result is shown in Figure 3-4-7.

<p align="center">
<img align="middle" src="graphics/example3_4_7.png" width="600" />
</p>

<center><b>Figure 3-4-7</b> Compiled result</center>

[Previous] [**3.3 Generating a table of contents**](https://nbviewer.jupyter.org/github/xinychen/latex-cookbook/blob/main/chapter-3/section3.ipynb)

[Next] [**3.5 Editing text**](https://nbviewer.jupyter.org/github/xinychen/latex-cookbook/blob/main/chapter-3/section5.ipynb)

### License

<div class="alert alert-block alert-danger">
<b>This work is released under the MIT license.</b>
</div>
```
from NADINEmainloop import NADINEmain, NADINEmainId
from NADINEbasic import NADINE
from utilsNADINE import dataLoader, plotPerformance
import random
import torch
import numpy as np

# random seed control
np.random.seed(0)
torch.manual_seed(0)
random.seed(0)

# load data
dataStreams = dataLoader('../dataset/pmnist2.mat')

print('All Data')
allMetrics = []

# initialization
NADINEnet = NADINE(dataStreams.nInput, dataStreams.nOutput)

NADINEnet0, performanceHistory0, allPerformance = NADINEmain(NADINEnet, dataStreams)
allMetrics.append(allPerformance)

plotPerformance(performanceHistory0[0], performanceHistory0[1], performanceHistory0[2],
                performanceHistory0[3], performanceHistory0[4], performanceHistory0[5])

# initialization
NADINEnet = NADINE(dataStreams.nInput, dataStreams.nOutput)

NADINEnet1, performanceHistory1, allPerformance = NADINEmain(NADINEnet, dataStreams)
allMetrics.append(allPerformance)

plotPerformance(performanceHistory1[0], performanceHistory1[1], performanceHistory1[2],
                performanceHistory1[3], performanceHistory1[4], performanceHistory1[5])

# initialization
NADINEnet = NADINE(dataStreams.nInput, dataStreams.nOutput)

NADINEnet2, performanceHistory2, allPerformance = NADINEmain(NADINEnet, dataStreams)
allMetrics.append(allPerformance)

plotPerformance(performanceHistory2[0], performanceHistory2[1], performanceHistory2[2],
                performanceHistory2[3], performanceHistory2[4], performanceHistory2[5])

# initialization
NADINEnet = NADINE(dataStreams.nInput, dataStreams.nOutput)

NADINEnet3, performanceHistory3, allPerformance = NADINEmain(NADINEnet, dataStreams)
allMetrics.append(allPerformance)

plotPerformance(performanceHistory3[0], performanceHistory3[1], performanceHistory3[2],
                performanceHistory3[3], performanceHistory3[4], performanceHistory3[5])

# initialization
NADINEnet = NADINE(dataStreams.nInput, dataStreams.nOutput)

NADINEnet4, performanceHistory4, allPerformance = NADINEmain(NADINEnet, dataStreams)
allMetrics.append(allPerformance)

plotPerformance(performanceHistory4[0], performanceHistory4[1], performanceHistory4[2],
                performanceHistory4[3], performanceHistory4[4], performanceHistory4[5])

# all results
# 0: accuracy
# 1: f1_score
# 2: precision_score
# 3: recall_score
# 4: training_time
# 5: testingTime
# 6: nHiddenLayer
# 7: nHiddenNode
meanResults = np.round(np.mean(allMetrics, 0), decimals=2)
stdResults = np.round(np.std(allMetrics, 0), decimals=2)

print('\n')
print('========== Performance pmnist ==========')
print('Preq Accuracy: ', meanResults[0].item(), '(+/-)', stdResults[0].item())
print('F1 score: ', meanResults[1].item(), '(+/-)', stdResults[1].item())
print('Precision: ', meanResults[2].item(), '(+/-)', stdResults[2].item())
print('Recall: ', meanResults[3].item(), '(+/-)', stdResults[3].item())
print('Training time: ', meanResults[4].item(), '(+/-)', stdResults[4].item())
print('Testing time: ', meanResults[5].item(), '(+/-)', stdResults[5].item())
print('\n')
print('========== Network ==========')
print('Number of hidden layers: ', meanResults[6].item(), '(+/-)', stdResults[6].item())
print('Number of features: ', meanResults[7].item(), '(+/-)', stdResults[7].item())
```

### 50% labeled data

```
## dataset
# sea
# hyperplane
# weather
# rfid
# permutedMnist
# rotatedMnist
# susy
# hepmass

print('50% Data')

# initialization
NADINEnet = NADINE(dataStreams.nInput, dataStreams.nOutput)

NADINEnet0, performanceHistory0 = NADINEmain(NADINEnet, dataStreams, labeled=False, nLabeled=0.5)

plotPerformance(performanceHistory0[0], performanceHistory0[1], performanceHistory0[2],
                performanceHistory0[3], performanceHistory0[4], performanceHistory0[5])

# initialization
NADINEnet = NADINE(dataStreams.nInput, dataStreams.nOutput)

NADINEnet1, performanceHistory1 = NADINEmain(NADINEnet, dataStreams, labeled=False, nLabeled=0.5)

plotPerformance(performanceHistory1[0], performanceHistory1[1], performanceHistory1[2],
                performanceHistory1[3], performanceHistory1[4], performanceHistory1[5])

# initialization
NADINEnet = NADINE(dataStreams.nInput, dataStreams.nOutput)

NADINEnet2, performanceHistory2 = NADINEmain(NADINEnet, dataStreams, labeled=False, nLabeled=0.5)

plotPerformance(performanceHistory2[0], performanceHistory2[1], performanceHistory2[2],
                performanceHistory2[3], performanceHistory2[4], performanceHistory2[5])

# initialization
NADINEnet = NADINE(dataStreams.nInput, dataStreams.nOutput)

NADINEnet3, performanceHistory3 = NADINEmain(NADINEnet, dataStreams, labeled=False, nLabeled=0.5)

plotPerformance(performanceHistory3[0], performanceHistory3[1], performanceHistory3[2],
                performanceHistory3[3], performanceHistory3[4], performanceHistory3[5])

# initialization
NADINEnet = NADINE(dataStreams.nInput, dataStreams.nOutput)

NADINEnet4, performanceHistory4 = NADINEmain(NADINEnet, dataStreams, labeled=False, nLabeled=0.5)

plotPerformance(performanceHistory4[0], performanceHistory4[1], performanceHistory4[2],
                performanceHistory4[3], performanceHistory4[4], performanceHistory4[5])

# average performance
print('Mean Accuracy: ', np.mean([performanceHistory0[1][1:] + performanceHistory1[1][1:] +
                                  performanceHistory2[1][1:] + performanceHistory3[1][1:] +
                                  performanceHistory4[1][1:]]))
print('Std Accuracy: ', np.std([performanceHistory0[1][1:] + performanceHistory1[1][1:] +
                                performanceHistory2[1][1:] + performanceHistory3[1][1:] +
                                performanceHistory4[1][1:]]))
print('Hidden Node mean: ', np.mean([performanceHistory0[3][1:] + performanceHistory1[3][1:] +
                                     performanceHistory2[3][1:] + performanceHistory3[3][1:] +
                                     performanceHistory4[3][1:]]))
print('Hidden Node std: ', np.std([performanceHistory0[3][1:] + performanceHistory1[3][1:] +
                                   performanceHistory2[3][1:] + performanceHistory3[3][1:] +
                                   performanceHistory4[3][1:]]))
print('Hidden Layer mean: ', np.mean([performanceHistory0[4][1:] + performanceHistory1[4][1:] +
                                      performanceHistory2[4][1:] + performanceHistory3[4][1:] +
                                      performanceHistory4[4][1:]]))
print('Hidden Layer std: ', np.std([performanceHistory0[4][1:] + performanceHistory1[4][1:] +
                                    performanceHistory2[4][1:] + performanceHistory3[4][1:] +
                                    performanceHistory4[4][1:]]))
```

### 25% Labeled Data

```
print('25% Data')

# initialization
NADINEnet = NADINE(dataStreams.nInput, dataStreams.nOutput)

NADINEnet0, performanceHistory0 = NADINEmain(NADINEnet, dataStreams, labeled=False, nLabeled=0.25)

plotPerformance(performanceHistory0[0], performanceHistory0[1], performanceHistory0[2],
                performanceHistory0[3], performanceHistory0[4], performanceHistory0[5])

# initialization
NADINEnet = NADINE(dataStreams.nInput, dataStreams.nOutput)

NADINEnet1, performanceHistory1 = NADINEmain(NADINEnet, dataStreams, labeled=False, nLabeled=0.25)

plotPerformance(performanceHistory1[0], performanceHistory1[1], performanceHistory1[2],
                performanceHistory1[3], performanceHistory1[4], performanceHistory1[5])

# initialization
NADINEnet = NADINE(dataStreams.nInput, dataStreams.nOutput)

NADINEnet2, performanceHistory2 = NADINEmain(NADINEnet, dataStreams, labeled=False, nLabeled=0.25)

plotPerformance(performanceHistory2[0], performanceHistory2[1], performanceHistory2[2],
                performanceHistory2[3], performanceHistory2[4], performanceHistory2[5])

# initialization
NADINEnet = NADINE(dataStreams.nInput, dataStreams.nOutput)

NADINEnet3, performanceHistory3 = NADINEmain(NADINEnet, dataStreams, labeled=False, nLabeled=0.25)

plotPerformance(performanceHistory3[0], performanceHistory3[1], performanceHistory3[2],
                performanceHistory3[3], performanceHistory3[4], performanceHistory3[5])

# initialization
NADINEnet = NADINE(dataStreams.nInput, dataStreams.nOutput)

NADINEnet4, performanceHistory4 = NADINEmain(NADINEnet, dataStreams, labeled=False, nLabeled=0.25)

plotPerformance(performanceHistory4[0], performanceHistory4[1], performanceHistory4[2],
                performanceHistory4[3], performanceHistory4[4], performanceHistory4[5])

# average performance
print('Mean Accuracy: ', np.mean([performanceHistory0[1][1:] + performanceHistory1[1][1:] +
                                  performanceHistory2[1][1:] + performanceHistory3[1][1:] +
                                  performanceHistory4[1][1:]]))
```
print('Std Accuracy: ', np.std([performanceHistory0[1][1:]+performanceHistory1[1][1:]+ performanceHistory2[1][1:]+performanceHistory3[1][1:]+ performanceHistory4[1][1:]])) print('Hidden Node mean', np.mean([performanceHistory0[3][1:]+performanceHistory1[3][1:]+ performanceHistory2[3][1:]+performanceHistory3[3][1:]+ performanceHistory4[3][1:]])) print('Hidden Node std: ', np.std([performanceHistory0[3][1:]+performanceHistory1[3][1:]+ performanceHistory2[3][1:]+performanceHistory3[3][1:]+ performanceHistory4[3][1:]])) print('Hidden Layer mean: ', np.mean([performanceHistory0[4][1:]+performanceHistory1[4][1:]+ performanceHistory2[4][1:]+performanceHistory3[4][1:]+ performanceHistory4[4][1:]])) print('Hidden Layer std: ', np.std([performanceHistory0[4][1:]+performanceHistory1[4][1:]+ performanceHistory2[4][1:]+performanceHistory3[4][1:]+ performanceHistory4[4][1:]])) ``` ### Infinite Delay ``` print('Infinite Delay') # initialization NADINEnet = NADINE(dataStreams.nInput,dataStreams.nOutput) NADINEnet0, performanceHistory0 = NADINEmainId(NADINEnet,dataStreams) plotPerformance(performanceHistory0[0],performanceHistory0[1],performanceHistory0[2], performanceHistory0[3],performanceHistory0[4],performanceHistory0[5]) # initialization NADINEnet = NADINE(dataStreams.nInput,dataStreams.nOutput) NADINEnet1, performanceHistory1 = NADINEmainId(NADINEnet,dataStreams) plotPerformance(performanceHistory1[0],performanceHistory1[1],performanceHistory1[2], performanceHistory1[3],performanceHistory1[4],performanceHistory1[5]) # initialization NADINEnet = NADINE(dataStreams.nInput,dataStreams.nOutput) NADINEnet2, performanceHistory2 = NADINEmainId(NADINEnet,dataStreams) plotPerformance(performanceHistory2[0],performanceHistory2[1],performanceHistory2[2], performanceHistory2[3],performanceHistory2[4],performanceHistory2[5]) # initialization NADINEnet = NADINE(dataStreams.nInput,dataStreams.nOutput) NADINEnet3, performanceHistory3 = NADINEmainId(NADINEnet,dataStreams) 
plotPerformance(performanceHistory3[0],performanceHistory3[1],performanceHistory3[2], performanceHistory3[3],performanceHistory3[4],performanceHistory3[5]) # initialization NADINEnet = NADINE(dataStreams.nInput,dataStreams.nOutput) NADINEnet4, performanceHistory4 = NADINEmainId(NADINEnet,dataStreams) plotPerformance(performanceHistory4[0],performanceHistory4[1],performanceHistory4[2], performanceHistory4[3],performanceHistory4[4],performanceHistory4[5]) # average performance print('Mean Accuracy: ', np.mean([performanceHistory0[1][1:]+performanceHistory1[1][1:]+ performanceHistory2[1][1:]+performanceHistory3[1][1:]+ performanceHistory4[1][1:]])) print('Std Accuracy: ', np.std([performanceHistory0[1][1:]+performanceHistory1[1][1:]+ performanceHistory2[1][1:]+performanceHistory3[1][1:]+ performanceHistory4[1][1:]])) print('Hidden Node mean', np.mean([performanceHistory0[3][1:]+performanceHistory1[3][1:]+ performanceHistory2[3][1:]+performanceHistory3[3][1:]+ performanceHistory4[3][1:]])) print('Hidden Node std: ', np.std([performanceHistory0[3][1:]+performanceHistory1[3][1:]+ performanceHistory2[3][1:]+performanceHistory3[3][1:]+ performanceHistory4[3][1:]])) print('Hidden Layer mean: ', np.mean([performanceHistory0[4][1:]+performanceHistory1[4][1:]+ performanceHistory2[4][1:]+performanceHistory3[4][1:]+ performanceHistory4[4][1:]])) print('Hidden Layer std: ', np.std([performanceHistory0[4][1:]+performanceHistory1[4][1:]+ performanceHistory2[4][1:]+performanceHistory3[4][1:]+ performanceHistory4[4][1:]])) ```
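Each configuration above repeats five independent runs and then reports the mean and standard deviation of the pooled per-batch metrics (dropping each run's first warm-up entry via the `[1:]` slice). A minimal stdlib-only sketch of that aggregation, using made-up per-batch accuracies (`statistics.pstdev` matches NumPy's population `np.std`):

```python
from statistics import mean, pstdev

def pool_and_summarize(runs):
    """Pool per-batch metric values from several runs, dropping each
    run's first (warm-up) entry, and return (mean, population std)."""
    pooled = []
    for run in runs:
        pooled.extend(run[1:])
    return mean(pooled), pstdev(pooled)

# hypothetical per-batch accuracies from three short runs
runs = [
    [0.50, 0.81, 0.79, 0.83],
    [0.48, 0.80, 0.82, 0.84],
    [0.52, 0.78, 0.80, 0.82],
]
m, s = pool_and_summarize(runs)
print(round(m, 4), round(s, 4))  # → 0.81 0.0183
```

Pooling the per-batch values before averaging (rather than averaging per-run means) weights every batch equally, which is what the notebook's list-concatenation inside `np.mean` does.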
# Logistic Regression in scikit-learn

```
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(666)
X = np.random.normal(0, 1, size=(200, 2))
y = np.array((X[:,0]**2 + X[:,1]) < 1.5, dtype='int')
# flip some labels to add noise
for _ in range(20):
    y[np.random.randint(200)] = 1
y

plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=666)
```

### Using logistic regression from scikit-learn

```
from sklearn.linear_model import LogisticRegression

log_reg = LogisticRegression()
log_reg.fit(X_train, y_train)
log_reg.score(X_train, y_train)
log_reg.score(X_test, y_test)

def plot_decision_boundary(model, axis):
    x0, x1 = np.meshgrid(
        np.linspace(axis[0], axis[1], int((axis[1]-axis[0])*100)).reshape(-1, 1),
        np.linspace(axis[2], axis[3], int((axis[3]-axis[2])*100)).reshape(-1, 1),
    )
    X_new = np.c_[x0.ravel(), x1.ravel()]
    y_predict = model.predict(X_new)
    zz = y_predict.reshape(x0.shape)
    from matplotlib.colors import ListedColormap
    custom_cmap = ListedColormap(['#EF9A9A','#FFF59D','#90CAF9'])
    plt.contourf(x0, x1, zz, cmap=custom_cmap)

plot_decision_boundary(log_reg, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])

from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

def PolynomialLogisticRegression(degree):
    return Pipeline([
        ('poly', PolynomialFeatures(degree=degree)),
        ('std_scaler', StandardScaler()),
        ('log_reg', LogisticRegression())
    ])

poly_log_reg = PolynomialLogisticRegression(degree=2)
poly_log_reg.fit(X_train, y_train)
poly_log_reg.score(X_train, y_train)
poly_log_reg.score(X_test, y_test)

plot_decision_boundary(poly_log_reg, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])

poly_log_reg2 = PolynomialLogisticRegression(degree=20)
poly_log_reg2.fit(X_train, y_train)
poly_log_reg2.score(X_train, y_train)
poly_log_reg2.score(X_test, y_test)

plot_decision_boundary(poly_log_reg2, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])

def PolynomialLogisticRegression(degree, C):
    return Pipeline([
        ('poly', PolynomialFeatures(degree=degree)),
        ('std_scaler', StandardScaler()),
        ('log_reg', LogisticRegression(C=C))
    ])

poly_log_reg3 = PolynomialLogisticRegression(degree=20, C=0.1)
poly_log_reg3.fit(X_train, y_train)
poly_log_reg3.score(X_train, y_train)
poly_log_reg3.score(X_test, y_test)

plot_decision_boundary(poly_log_reg3, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])

def PolynomialLogisticRegression(degree, C, penalty='l2'):
    return Pipeline([
        ('poly', PolynomialFeatures(degree=degree)),
        ('std_scaler', StandardScaler()),
        # penalty='l1' requires a solver that supports it (liblinear or saga)
        ('log_reg', LogisticRegression(C=C, penalty=penalty, solver='liblinear'))
    ])

poly_log_reg4 = PolynomialLogisticRegression(degree=20, C=0.1, penalty='l1')
poly_log_reg4.fit(X_train, y_train)
poly_log_reg4.score(X_train, y_train)
poly_log_reg4.score(X_test, y_test)

plot_decision_boundary(poly_log_reg4, axis=[-4, 4, -4, 4])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
```
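A quick sanity check on why `degree=20` overfits so easily here: `PolynomialFeatures` expands `n` input features into all monomials of total degree at most `d`, i.e. C(n+d, d) columns including the bias term, so the 2-feature dataset above grows combinatorially with the degree. A stdlib-only sketch of that count:

```python
from math import comb

def n_poly_features(n_features, degree):
    """Number of monomials of total degree <= degree in n_features
    variables, including the constant term -- the width of the design
    matrix produced by a degree-`degree` polynomial expansion."""
    return comb(n_features + degree, degree)

print(n_poly_features(2, 2))   # → 6
print(n_poly_features(2, 20))  # → 231
```

Going from 6 to 231 columns on only 150 training points is exactly the regime where the `C` and `penalty` knobs explored above start to matter.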
```
!date
```

# Preprocessing FASTQ files into a sample-by-gene matrix

### Download files and software

```
!git clone https://github.com/pachterlab/BLCSBGLKP_2020.git
!mkdir temporary
!chmod +x BLCSBGLKP_2020/data/kb/parseSS.py
!BLCSBGLKP_2020/data/kb/parseSS.py < BLCSBGLKP_2020/data/kb/SampleSheet.csv > temporary/metadata.txt
!cat temporary/metadata.txt | awk '{print $1}' > temporary/whitelist.txt

!head -16 BLCSBGLKP_2020/data/kb/SampleSheet.csv
!head -1 temporary/metadata.txt
!head -1 temporary/whitelist.txt
```

### Install kallisto and bustools from GitHub

```
# We need cmake to install kallisto and bustools from source
!apt update
!apt install -y cmake

!git clone https://github.com/pachterlab/kallisto.git
!mv kallisto/ temporary/
!cd temporary/kallisto && git checkout covid && mkdir build && cd build && cmake .. && make
!chmod +x temporary/kallisto/build/src/kallisto
!mv temporary/kallisto/build/src/kallisto /usr/local/bin/

!git clone https://github.com/BUStools/bustools.git
!mv bustools/ temporary/
!cd temporary/bustools && git checkout covid && mkdir build && cd build && cmake .. && make
!chmod +x temporary/bustools/build/src/bustools
!mv temporary/bustools/build/src/bustools /usr/local/bin/

!kallisto version
!bustools version

!pip install anndata
!pip install git+https://github.com/pachterlab/kb_python@count-kite
```

### Make the index

```
!kb ref -k 11 --workflow kite -i temporary/index.idx -g temporary/t2g.txt -f1 temporary/transcriptome.fa BLCSBGLKP_2020/data/kb/kite_11.txt
!rm temporary/index.idx
!echo ">RPP30\nAGATTTGGACCTGCGAGCGGGTTCTGACCTGAAGGCTCTGCGCGGACTTGTGGAGACAGCCGCTC" >> temporary/transcriptome.fa
!echo "RPP30\tRPP30\tRPP30" >> temporary/t2g.txt
!tail temporary/t2g.txt
```

### Download the FASTQs

```
!mkdir temporary/fastqs
!wget --quiet -O temporary/fastqs/Undetermined_S0_L001_I1_001.fastq.gz https://caltech.box.com/shared/static/3i46orxgtwlaho7f9z255hplg6tvfs6h.gz
!wget --quiet -O temporary/fastqs/Undetermined_S0_L001_R2_001.fastq.gz https://caltech.box.com/shared/static/lh0nyo1v95k1s7nvw4zj84yl6jwx3hpg.gz
!wget --quiet -O temporary/fastqs/Undetermined_S0_L001_R1_001.fastq.gz https://caltech.box.com/shared/static/0f3h3837xvo2dcqkax67njops5s4zxz0.gz
!wget --quiet -O temporary/fastqs/Undetermined_S0_L002_I1_001.fastq.gz https://caltech.box.com/shared/static/rxb4h3owka0x2deh0royge4w55u0bub5.gz
!wget --quiet -O temporary/fastqs/Undetermined_S0_L002_R2_001.fastq.gz https://caltech.box.com/shared/static/2eyqb989cohgv4h00mtjj3lrn3tpgi41.gz
!wget --quiet -O temporary/fastqs/Undetermined_S0_L002_R1_001.fastq.gz https://caltech.box.com/shared/static/orqpywdlryss9df49tha4i8yywlswrtj.gz
!wget --quiet -O temporary/fastqs/Undetermined_S0_L003_I1_001.fastq.gz https://caltech.box.com/shared/static/0r5ezocuh9mzxxj6nsf1fgfl38fdbfye.gz
!wget --quiet -O temporary/fastqs/Undetermined_S0_L003_R2_001.fastq.gz https://caltech.box.com/shared/static/d48e56j9qqxo4sveqiwa3lq9bwzxua4f.gz
!wget --quiet -O temporary/fastqs/Undetermined_S0_L003_R1_001.fastq.gz https://caltech.box.com/shared/static/7q3xgu2lp2t46638c1rg569duz5kdw9a.gz
!wget --quiet -O temporary/fastqs/Undetermined_S0_L004_I1_001.fastq.gz https://caltech.box.com/shared/static/pkgyve9ft7u09du66a0e3r4a3ae4mmhc.gz
!wget --quiet -O temporary/fastqs/Undetermined_S0_L004_R2_001.fastq.gz https://caltech.box.com/shared/static/nvfmriwe1891lfqrvedmoko6i5sd0mm6.gz
!wget --quiet -O temporary/fastqs/Undetermined_S0_L004_R1_001.fastq.gz https://caltech.box.com/shared/static/krcntl56mgt91ca08qvljfhohh9g197m.gz
```

Check the files

```
!zcat temporary/fastqs/Undetermined_S0_L001_I1_001.fastq.gz | awk '(NR-2)%4==0' | head -2
!zcat temporary/fastqs/Undetermined_S0_L001_R2_001.fastq.gz | awk '(NR-2)%4==0' | head -2
!zcat temporary/fastqs/Undetermined_S0_L001_R1_001.fastq.gz | awk '(NR-2)%4==0' | head -2
```

# Processing

### Build the kallisto index

```
!kallisto index -i temporary/index.idx -k 11 temporary/transcriptome.fa
```

# Align reads to the reference

```
%%time
# The SwabSeq technology expects the first index, then the second, then the biological read.
!kallisto bus -x SwabSeq -o temporary/out/ -t 16 -i temporary/index.idx \
temporary/fastqs/Undetermined_S0_L001_I1_001.fastq.gz \
temporary/fastqs/Undetermined_S0_L001_R2_001.fastq.gz \
temporary/fastqs/Undetermined_S0_L001_R1_001.fastq.gz \
temporary/fastqs/Undetermined_S0_L002_I1_001.fastq.gz \
temporary/fastqs/Undetermined_S0_L002_R2_001.fastq.gz \
temporary/fastqs/Undetermined_S0_L002_R1_001.fastq.gz \
temporary/fastqs/Undetermined_S0_L003_I1_001.fastq.gz \
temporary/fastqs/Undetermined_S0_L003_R2_001.fastq.gz \
temporary/fastqs/Undetermined_S0_L003_R1_001.fastq.gz \
temporary/fastqs/Undetermined_S0_L004_I1_001.fastq.gz \
temporary/fastqs/Undetermined_S0_L004_R2_001.fastq.gz \
temporary/fastqs/Undetermined_S0_L004_R1_001.fastq.gz
```

### Process the BUS file

```
# sort the BUS file by barcode
!bustools sort -t 2 -m 1G -o temporary/out/sort.bus temporary/out/output.bus

# Correct to the barcodes in the whitelist (obtained from the SampleSheet)
!bustools correct --split -d temporary/out/dump.txt -w temporary/whitelist.txt -o temporary/out/sort.correct.bus temporary/out/sort.bus

# Sort again to sum the amplicon counts
!bustools sort -t 2 -m 1G -o temporary/out/sort.correct.sort.bus temporary/out/sort.correct.bus

# write the BUS file to text output
!bustools text -p temporary/out/sort.correct.sort.bus > temporary/out/data.txt

# Write the sorted BUS file out for barcode QC
!bustools text -p temporary/out/sort.bus > temporary/out/sort.txt

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import string
import anndata

from collections import defaultdict, OrderedDict
from sklearn.preprocessing import normalize, scale
from sklearn.decomposition import TruncatedSVD

def nd(arr):
    return np.asarray(arr).reshape(-1)

def yex(ax):
    lims = [
        np.min([ax.get_xlim(), ax.get_ylim()]),  # min of both axes
        np.max([ax.get_xlim(), ax.get_ylim()]),  # max of both axes
    ]
    # now plot both limits against each other
    ax.plot(lims, lims, 'k-', alpha=0.75, zorder=0)
    ax.set_aspect('equal')
    ax.set_xlim(lims)
    ax.set_ylim(lims)
    return ax

fsize = 15
plt.rcParams.update({'font.size': fsize})
%config InlineBackend.figure_format = 'retina'

# Effectively a Python implementation of `bustools covid` (to be made)
def make_mtx(bcs, ecs, cnt, unique_ecs):
    bold = bcs[0]
    eold = ecs[0]
    cold = cnt[0]
    mtx = []
    d = defaultdict()
    #d[eold] = cold
    bold = 0
    for idx, b in enumerate(bcs):
        if b != bold and idx > 0:
            count = []
            for e in unique_ecs:
                count.append(d.get(e, 0))
            mtx.append(count)
            d = defaultdict()
        d[ecs[idx]] = cnt[idx]
        bold = b
    count = []
    for e in unique_ecs:
        count.append(d.get(e, 0))
    mtx.append(count)
    return np.asarray(mtx)
```

# Load Files

### BUS file in text format

```
df = pd.read_csv("temporary/out/data.txt", sep="\t", header=None, names=["bcs", "umi", "ecs", "cnt"])
df = df.sort_values("bcs")

t2g = pd.read_csv("temporary/t2g.txt", header=None, names=["tid", "gene"], sep="\t", usecols=[0,2], index_col=0)
ecs = pd.read_csv("temporary/out/matrix.ec", header=None, names=["ecs", "tid_list"], sep="\t")
tid = pd.read_csv("temporary/out/transcripts.txt", header=None, names=["tid"])

del df["umi"]

t2g = t2g["gene"].to_dict()
tid_map = tid["tid"].apply(lambda x: t2g.get(x, x)).to_dict()
ecmap = ecs.tid_list.apply(lambda x: [int(i) for i in x.split(",")]).to_dict()

e2g = defaultdict(list)
for k, v in ecmap.items():
    tmp = []
    for i in v:
        tmp.append(tid_map.get(i))
    e2g[k] = list(set(tmp))

df["gene_list"] = df["ecs"].map(e2g)
df["len"] = df['gene_list'].apply(lambda x: len(set(x)))
df = df.query("len==1")
df["gene"] = df["gene_list"].apply(lambda x: x[0])
df
```

### Genes from our reference

```
gene = pd.read_csv("temporary/out/transcripts.txt", header=None, names=["gene"])
gene = gene.gene.apply(lambda x: t2g.get(x, x))
gene
```

### Plate metadata (from the SampleSheet)

```
# I switch index 1 and index 2 since they are swapped in the BUS file
pmap = pd.read_csv("temporary/metadata.txt", sep="\t", header=None,
                   names=["bcs", "i1", "i2", "plate", "well", "gene",
                          "lysate", "Twist", "ATCC_RNA", "ATCC_viral"],
                   index_col=0)
pmap["bcs"] = pmap["i1"] + pmap["i2"]
pmap.index = pmap["bcs"]
pmap.head()
```

# Filter the BUS file and add relevant metadata

```
num_map = dict(zip(np.sort(df.gene.unique()), np.arange(len(np.sort(df.gene.unique())))))
df["ecs"] = df.gene.map(num_map)
df = df.groupby(["bcs", "ecs"])["cnt"].sum().reset_index()

#nodup = df.drop_duplicates("bcs")
nodup = df
nodup = nodup.sort_values("bcs")

nodup["plate"] = nodup["bcs"].map(pmap["plate"])
nodup["well"] = nodup["bcs"].map(pmap["well"])
nodup["lysate"] = nodup["bcs"].map(pmap["lysate"])
#nodup["gene"] = nodup["bcs"].map(pmap["gene"])
nodup["Twist"] = nodup["bcs"].map(pmap["Twist"])
nodup["ATCC_RNA"] = nodup["bcs"].map(pmap["ATCC_RNA"])
nodup["ATCC_viral"] = nodup["bcs"].map(pmap["ATCC_viral"])

# Drop the barcodes that do not have metadata (keep only ones in the platemap)
nodup = nodup.loc[nodup["ATCC_RNA"].dropna().index].sort_values("bcs")

nodup[nodup["plate"] == "Plate1"]["cnt"].sum()
nodup[nodup["plate"] == "Plate2"]["cnt"].sum()
nodup.cnt.sum()

var = pd.Series(list(num_map.keys()))
var
```

# Run the Python bustools covid (make a matrix from the BUS file)

```
# Well-by-gene matrix
bcs = nodup.bcs.values
ecs = nodup.ecs.values
cnt = nodup.cnt.values
unique_ecs = np.unique(ecs)

mtx = make_mtx(bcs, ecs, cnt, unique_ecs)
mtx.shape
mtx.sum()

nodup = nodup.drop_duplicates("bcs")
```

# Make an anndata object

```
adata = anndata.AnnData(X=mtx, obs=nodup, var={"gene": var.values})  #, var = gene)
adata.obs["Twist_bool"] = np.logical_and(adata.obs.ATCC_viral.values==0, adata.obs.ATCC_RNA.values==0)
adata.obs["ATCC_viral_bool"] = np.logical_and(adata.obs.Twist.values==0, adata.obs.ATCC_RNA.values==0)
adata.obs["ATCC_RNA_bool"] = np.logical_and(adata.obs.Twist.values==0, adata.obs.ATCC_viral.values==0)
adata.var
```

# Normalize per well (CPM), log1p, scale columns

```
adata.layers["raw"] = adata.X

scale_num = 1000000
adata.layers["norm"] = normalize(adata.X, norm="l1", axis=1)*scale_num
adata.layers["log1p"] = np.log1p(adata.layers["norm"])
adata.uns = OrderedDict([("log1p", {"base": None})])
adata.X = adata.layers["log1p"]

adata.layers["scale"] = scale(adata.layers["log1p"], axis=0, with_mean=True, with_std=True, copy=True)
adata.X = adata.layers["scale"]
```

# Make PCA

```
%%time
# PCA
X = adata.layers["scale"]
tsvd = TruncatedSVD(n_components=2)
adata.obsm["X_pca"] = tsvd.fit_transform(X)

fig, ax = plt.subplots(figsize=(7,7))
x = adata.obsm["X_pca"][:,0]
y = adata.obsm["X_pca"][:,1]
c = adata.obs["plate"].astype("category").cat.codes.astype(int)
ax.scatter(x, y, c=c, cmap='nipy_spectral')
ax.set_axis_off()
plt.tight_layout()
plt.show()
```

# Write anndata

```
adata.write("temporary/adata.h5ad")
```

# Barcode QC

We will use the RPP30 gene to check that each whitelist barcode has the highest count compared to its Hamming-distance-one variants. This is a useful QC step to ensure that there are no problems with the barcodes. Since the reference contains no shared sequences of length 11, each "gene" corresponds to exactly one equivalence class, and we use the equivalence class corresponding to the RPP30 gene.

```
s = pd.read_csv("temporary/out/sort.txt", header=None, names=["bcs", "umi", "ecs", "cnt"], sep="\t")
s = s[s["ecs"] == 316]

m = pd.read_csv("temporary/out/dump.txt", header=None, names=["old", "new"], sep="\t")
m = m.sort_values("new")

m["plate"] = m["new"].map(pmap["plate"])
m["well"] = m["new"].map(pmap["well"])
m["lysate"] = m["new"].map(pmap["lysate"])
m["gene"] = m["new"].map(pmap["gene"])
m["Twist"] = m["new"].map(pmap["Twist"])
m["ATCC_RNA"] = m["new"].map(pmap["ATCC_RNA"])
m["ATCC_viral"] = m["new"].map(pmap["ATCC_viral"])

m["old_cnt"] = m["old"].map(s.groupby("bcs")["cnt"].sum())
m["new_cnt"] = m["new"].map(s.groupby("bcs")["cnt"].sum())
m = m.dropna(subset=["old_cnt"])

bad = m[m["old_cnt"] > m["new_cnt"]]
```

### We find that many barcodes have fewer counts than their Hamming-distance variants

```
bad.new.unique().shape
bad[["old", "new", "plate", "well", "lysate", "old_cnt", "new_cnt"]]
```

## An example of an aberrant barcode

```
bad[bad["new"] == "AGCCAAGAGAGGGCAT"][["old", "new", "plate", "well", "lysate", "old_cnt", "new_cnt"]]
```
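The QC above compares each whitelist barcode's count against the counts of its Hamming-distance-one neighbors (the variants that barcode correction collapses into it). As a sketch of what those neighbors are, a 16-bp barcode over {A, C, G, T} has 16 × 3 = 48 single-substitution variants:

```python
def hamming1_variants(barcode, alphabet="ACGT"):
    """All sequences exactly one substitution away from `barcode`."""
    variants = []
    for i, base in enumerate(barcode):
        for b in alphabet:
            if b != base:
                variants.append(barcode[:i] + b + barcode[i + 1:])
    return variants

# the barcode flagged as aberrant above
v = hamming1_variants("AGCCAAGAGAGGGCAT")
print(len(v))  # → 48 (16 positions x 3 alternative bases)
```

If any of these 48 neighbors accumulates more reads than the whitelist barcode itself, the whitelist entry is suspect, which is exactly the `old_cnt > new_cnt` condition used to build `bad`.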
```
import pandas as pd
import numpy as np

# Visualization tools
from matplotlib import pyplot as plt
import seaborn as sns; sns.set()
import itertools

# ML steps structure
from sklearn.pipeline import FeatureUnion, Pipeline

# Preprocessing
from sklearn.preprocessing import FunctionTransformer, MinMaxScaler
from sklearn.feature_extraction.text import HashingVectorizer, CountVectorizer
from sklearn.base import TransformerMixin
from imblearn.under_sampling import RandomUnderSampler

# Dimensionality reduction
from sklearn.decomposition import TruncatedSVD

# Model validation
from sklearn.model_selection import train_test_split, cross_val_score, cross_val_predict
from sklearn.metrics.cluster import adjusted_rand_score
from sklearn.metrics import explained_variance_score

# Ensemble
from sklearn.ensemble import BaggingClassifier
#from vecstack import stacking
from mlxtend.classifier import StackingClassifier

# Models selected
from sklearn.naive_bayes import GaussianNB  # Naive Bayes
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

# Read the data using the Unnamed column (probably an id) as the index
url = 'https://s3.amazonaws.com/drivendata/data/4/public/81e8f2de-9915-4934-b9ae-9705685c9d50.csv'
training = pd.read_csv(url, index_col='Unnamed: 0')

labels = ['Function', 'Object_Type', 'Operating_Status', 'Position_Type',
          'Pre_K', 'Reporting', 'Sharing', 'Student_Type', 'Use']
numeric = ['FTE', 'Total']
categoric = ['Facility_or_Department', 'Function_Description', 'Fund_Description',
             'Job_Title_Description', 'Location_Description', 'Object_Description',
             'Position_Extra', 'Program_Description', 'SubFund_Description',
             'Sub_Object_Description', 'Text_1', 'Text_2', 'Text_3', 'Text_4']
```

## Pre-Processing

```
# Imputing data in the Total column
def impute_func_total(data):
    if(pd.isnull(data['Total'])):
        if(data['Object_Type'] == 'Base Salary/Compensation'):
            return 24146
        if(data['Object_Type'] == 'Benefits'):
            return 38163
        if(data['Object_Type'] == 'Contracted Services'):
            return 24146
        if(data['Object_Type'] == 'Equipment & Equipment Lease'):
            return 11257
        if(data['Object_Type'] == 'NO_LABEL'):
            return 58545
        if(data['Object_Type'] == 'Other Compensation/Stipend'):
            return 1605
        if(data['Object_Type'] == 'Other Non-Compensation'):
            return 10646
        if(data['Object_Type'] == 'Rent/Utilities'):
            return 46611
        if(data['Object_Type'] == 'Substitute Compensation'):
            return 1090
        if(data['Object_Type'] == 'Supplies/Materials'):
            return 7745
        if(data['Object_Type'] == 'Travel & Conferences'):
            return 1659
    else:
        return data['Total']

# Imputing data in the FTE column
def impute_func_FTE(data):
    if(pd.isnull(data['FTE'])):
        if(data['Object_Type'] == 'Base Salary/Compensation'):
            return 0.45
        if(data['Object_Type'] == 'Benefits'):
            return 0.0
        if(data['Object_Type'] == 'Contracted Services'):
            return 0.0
        if(data['Object_Type'] == 'Equipment & Equipment Lease'):
            return 0.0
        if(data['Object_Type'] == 'NO_LABEL'):
            return 0.75
        if(data['Object_Type'] == 'Other Compensation/Stipend'):
            return 0.000107
        if(data['Object_Type'] == 'Other Non-Compensation'):
            return 0.0
        if(data['Object_Type'] == 'Rent/Utilities'):
            return 0.0
        if(data['Object_Type'] == 'Substitute Compensation'):
            return 0.000059
        if(data['Object_Type'] == 'Supplies/Materials'):
            return 0.0
        if(data['Object_Type'] == 'Travel & Conferences'):
            return 0.0
    else:
        return data['FTE']

def preProcessing(training):
    # Remove inconsistent data
    training.loc[(training['FTE'] < 0) | (training['FTE'] > 1), 'FTE'] = np.nan
    training.loc[training['Total'] < 0, 'Total'] = np.nan
    training['Total'] = training.apply(impute_func_total, axis = 1)
    training['FTE'] = training.apply(impute_func_FTE, axis = 1)
    for category in categoric:
        training[category] = training[category].str.lower()
    training[categoric] = training[categoric].fillna("")
    return training

df_training = preProcessing(training)
df_training = df_training.reset_index(drop = True)

X = df_training.drop(columns=labels)
labels_data = pd.get_dummies(df_training['Object_Type'])
#col_names = list(range(0,11))
#labels_data.columns = col_names
#labels_data = labels_data.idxmax(axis=1)
labels_true = labels_data.idxmax(axis=1)
```

## Pipeline

```
def combine_text_columns(dataset):
    return dataset[categoric].apply(lambda x: " ".join(x), axis = 1)

get_text_data = FunctionTransformer(combine_text_columns, validate = False)

def combine_numeric_columns(dataset):
    return dataset[numeric]

get_numeric_data = FunctionTransformer(combine_numeric_columns, validate = False)

#def pipeline_(clf):
pl = Pipeline([
    ('union', FeatureUnion(
        transformer_list = [
            ('numeric_features', Pipeline([
                ('selector', get_numeric_data),
            ])),
            ('text_features', Pipeline([
                ('selector', get_text_data),
                ('vectorizer', HashingVectorizer(token_pattern="[A-Za-z0-9]+(?=\\s+)",
                                                 norm=None, binary=False,
                                                 ngram_range=(1,2),
                                                 stop_words = 'english'))
            ]))
        ]
    )),
    ('reduce_dim', TruncatedSVD(n_components = 150))
#    ('clf', clf)
])
#    return pl

sdv_data = pl.fit_transform(X, labels_true)

rus = RandomUnderSampler(replacement=True)
X_resampled, y_resampled = rus.fit_resample(sdv_data, labels_true)
#pd.DataFrame(y_resampled).to_csv("y_.csv")

#from imblearn.under_sampling import NearMiss
#nm1 = NearMiss(version=3)
#X_resampled_nm1, y_resampled = nm1.fit_resample(sdv_data, labels_true)

#from imblearn.under_sampling import AllKNN
#allknn = AllKNN()
#X_res, y_res = allknn.fit_resample(sdv_data, labels_true)
```

## Bagging with Naive Bayes method

```
d = {'NO_LABEL':1, 'Base Salary/Compensation':2, 'Benefits':3,
     'Substitute Compensation':4, 'Supplies/Materials':5, 'Rent/Utilities':6,
     'Other Compensation/Stipend': 7, 'Contracted Services' : 8,
     'Equipment & Equipment Lease':9, 'Other Non-Compensation':10,
     'Travel & Conferences':11}
Y = pd.DataFrame(y_resampled)
target = Y.applymap(lambda s: d.get(s) if s in d else s)

bagging_NB = GaussianNB(var_smoothing = 1e-30)
cross_val_score(bagging_NB, X_resampled, target, cv=10)
np.mean([0.10982496, 0.11405985, 0.11010728, 0.11716544, 0.1177301 , 0.11716544,
         0.10897798, 0.11377753, 0.11998871, 0.13438735])
np.std([0.10982496, 0.11405985, 0.11010728, 0.11716544, 0.1177301 , 0.11716544,
        0.10897798, 0.11377753, 0.11998871, 0.13438735])

#target = df_training[['Object_Type']].applymap(lambda s: d.get(s) if s in d else s)
#target = target['Object_Type'].values
#train = df_training.drop(columns=labels)
#labels_obT = df_training['Object_Type'].unique()
#X_train, X_test, y_train, y_test = train_test_split(train_NB,
#                                                    target_NB,
#                                                    test_size=0.3,
#                                                    random_state=42)
```

### Bagging NB 10 classifiers

```
bagging_NB = BaggingClassifier(GaussianNB(var_smoothing = 1e-30))
cross_val_score(bagging_NB, X_resampled, target, cv=10)
np.mean([0.11038961, 0.11490683, 0.10897798, 0.11801242, 0.11801242, 0.11716544,
         0.11462451, 0.18407679, 0.11970638, 0.11688312])
np.std([0.11038961, 0.11490683, 0.10897798, 0.11801242, 0.11801242, 0.11716544,
        0.11462451, 0.18407679, 0.11970638, 0.11688312])
```

### Bagging NB 15 classifiers

```
bagging_NB = BaggingClassifier(GaussianNB(var_smoothing = 1e-30), n_estimators = 15)
cross_val_score(bagging_NB, X_resampled, target, cv=10)
np.mean([0.15386787, 0.13184641, 0.10897798, 0.1177301 , 0.1284585 , 0.16883117,
         0.11038961, 0.11405985, 0.11998871, 0.18322981])
np.std([0.15386787, 0.13184641, 0.10897798, 0.1177301 , 0.1284585 , 0.16883117,
        0.11038961, 0.11405985, 0.11998871, 0.18322981])
```

### Bagging NB 20 classifiers

```
bagging_NB = BaggingClassifier(GaussianNB(), n_estimators = 20)
cross_val_score(bagging_NB, X_resampled, target, cv=10)
np.mean([0.11038961, 0.11490683, 0.12902315, 0.11744777, 0.12196499, 0.11801242,
         0.10982496, 0.1134952 , 0.19649915, 0.17080745])
np.std([0.11038961, 0.11490683, 0.12902315, 0.11744777, 0.12196499, 0.11801242,
        0.10982496, 0.1134952 , 0.19649915, 0.17080745])
```

## Bagging with KNN method

```
bagging_KNN = KNeighborsClassifier(n_neighbors=7)
cross_val_score(bagging_KNN, X_resampled, y_resampled, cv=10)
np.mean([0.53218521, 0.52484472, 0.52484472, 0.52315076, 0.51919819, 0.53303219,
         0.52004517, 0.52173913, 0.52879729, 0.51468097])
np.std([0.53218521, 0.52484472, 0.52484472, 0.52315076, 0.51919819, 0.53303219,
        0.52004517, 0.52173913, 0.52879729, 0.51468097])
```

### Bagging KNN 10 classifiers

```
#from imblearn.ensemble import BalancedBaggingClassifier
#bbc = BalancedBaggingClassifier(base_estimator=KNeighborsClassifier(n_neighbors=7),
#                                sampling_strategy='not majority',
#                                replacement=False,
#                                random_state=0)
bagging_KNN = BaggingClassifier(KNeighborsClassifier(n_neighbors=7))
cross_val_score(bagging_KNN, X_resampled, y_resampled, cv=10)
np.mean([0.53218521, 0.53303219, 0.53020892, 0.52484472, 0.52597403, 0.54009034,
         0.52597403, 0.52625635, 0.5364201 , 0.52399774])
np.std([0.53218521, 0.53303219, 0.53020892, 0.52484472, 0.52597403, 0.54009034,
        0.52597403, 0.52625635, 0.5364201 , 0.52399774])
```

### Bagging KNN 15 classifiers

```
bagging_KNN = BaggingClassifier(KNeighborsClassifier(n_neighbors=7), n_estimators = 15)
cross_val_score(bagging_KNN, X_resampled, y_resampled, cv=10)
np.mean([0.52851496, 0.53472614, 0.53783173, 0.52936194, 0.52823264, 0.54150198,
         0.5299266 , 0.52766798, 0.53811406, 0.52795031])
np.std([0.52851496, 0.53472614, 0.53783173, 0.52936194, 0.52823264, 0.54150198,
        0.5299266 , 0.52766798, 0.53811406, 0.52795031])
```

### Bagging KNN 20 classifiers

```
bagging_KNN = BaggingClassifier(KNeighborsClassifier(n_neighbors=7), n_estimators = 20)
cross_val_score(bagging_KNN, X_resampled, y_resampled, cv=10)
np.mean([0.53331451, 0.53416149, 0.53246753, 0.52625635, 0.52738566, 0.53811406,
         0.52597403, 0.53387916, 0.53020892, 0.52428007])
np.std([0.53331451, 0.53416149, 0.53246753, 0.52625635, 0.52738566, 0.53811406,
        0.52597403, 0.53387916, 0.53020892, 0.52428007])
```

## Bagging with AD method

```
bagging_AD = DecisionTreeClassifier(random_state=0, max_depth = 25)
cross_val_score(bagging_AD, X_resampled, y_resampled, cv=10)
np.mean([0.92010164, 0.92490119, 0.92546584, 0.93732355, 0.92744212, 0.91417278,
         0.91897233, 0.92603049, 0.92123094, 0.92151327])
np.std([0.92010164, 0.92490119, 0.92546584, 0.93732355, 0.92744212, 0.91417278,
        0.91897233, 0.92603049, 0.92123094, 0.92151327])
```

### Bagging AD 10 classifiers

```
bagging_AD = BaggingClassifier(DecisionTreeClassifier(random_state=0, max_depth = 25))
cross_val_score(bagging_AD, X_resampled, y_resampled, cv=10)
np.mean([0.95115754, 0.95736872, 0.95313382, 0.95878035, 0.95256917, 0.94522868,
         0.94833427, 0.95454545, 0.95200452, 0.94974591])
np.std([0.95115754, 0.95736872, 0.95313382, 0.95878035, 0.95256917, 0.94522868,
        0.94833427, 0.95454545, 0.95200452, 0.94974591])
```

### Bagging AD 15 classifiers

```
bagging_AD = BaggingClassifier(DecisionTreeClassifier(random_state=0, max_depth = 25), n_estimators = 15)
cross_val_score(bagging_AD, X_resampled, y_resampled, cv=10)
np.mean([0.9539808 , 0.95680407, 0.95736872, 0.959345  , 0.95652174, 0.95115754,
         0.9539808 , 0.95708639, 0.959345  , 0.95680407])
np.std([0.9539808 , 0.95680407, 0.95736872, 0.959345  , 0.95652174, 0.95115754,
        0.9539808 , 0.95708639, 0.959345  , 0.95680407])
```

### Bagging AD 20 classifiers

```
bagging_AD = BaggingClassifier(DecisionTreeClassifier(random_state=0, max_depth = 25), n_estimators = 20)
cross_val_score(bagging_AD, X_resampled, y_resampled, cv=10)
np.mean([0.95765104, 0.96216827, 0.95793337, 0.96640316, 0.95652174, 0.95143986,
         0.95708639, 0.95962733, 0.96103896, 0.95623941])
np.std([0.95765104, 0.96216827, 0.95793337, 0.96640316, 0.95652174, 0.95143986,
        0.95708639, 0.95962733, 0.96103896, 0.95623941])
```

## Bagging with ML method

```
mlp = MLPClassifier(hidden_layer_sizes=(50,), max_iter=200, alpha=1e-4,
                    solver='sgd', verbose=False, tol=1e-4, random_state=42,
                    momentum = 0.8, validation_fraction = 0.2,
                    early_stopping = True, n_iter_no_change = 50,
                    learning_rate_init=.1)
cross_val_score(mlp, X_resampled, y_resampled, cv=10)
np.mean([0.10700169, 0.10135517, 0.11038961, 0.108131  , 0.09514399, 0.10474308,
         0.10784867, 0.0920384 , 0.09232072, 0.12874082])
np.std([0.10700169, 0.10135517, 0.11038961, 0.108131  , 0.09514399, 0.10474308,
        0.10784867, 0.0920384 , 0.09232072, 0.12874082])
```

### Bagging ML 10 classifiers

```
bagging_ML = BaggingClassifier(mlp)
cross_val_score(bagging_ML, X_resampled, y_resampled, cv=10)
np.mean([0.11293055, 0.11660079, 0.12337662, 0.13184641, 0.12083569, 0.11236589,
         0.11518916, 0.11405985, 0.11716544, 0.13297572])
np.std([0.11293055, 0.11660079, 0.12337662, 0.13184641, 0.12083569, 0.11236589,
        0.11518916, 0.11405985, 0.11716544, 0.13297572])
```

### Bagging ML 15 classifiers

```
bagging_ML = BaggingClassifier(mlp, n_estimators = 15)
cross_val_score(bagging_ML, X_resampled, y_resampled, cv=10)
np.mean([0.1177301 , 0.11660079, 0.12563523, 0.11998871, 0.1177301 , 0.11857708,
         0.11490683, 0.11801242, 0.12027103, 0.12196499])
np.std([0.1177301 , 0.11660079, 0.12563523, 0.11998871, 0.1177301 , 0.11857708,
        0.11490683, 0.11801242, 0.12027103, 0.12196499])
```

### Bagging ML 20 classifiers

```
bagging_ML = BaggingClassifier(mlp, n_estimators = 20)
cross_val_score(bagging_ML, X_resampled, y_resampled, cv=10)
np.mean([0.1177301 , 0.1134952 , 0.12365895, 0.11518916, 0.11405985, 0.14426877,
         0.12930548, 0.11998871, 0.12140034, 0.12394128])
np.std([0.1177301 , 0.1134952 , 0.12365895, 0.11518916, 0.11405985, 0.14426877,
        0.12930548, 0.11998871, 0.12140034, 0.12394128])
```

## Stacking model

```
d = {'NO_LABEL':1, 'Base Salary/Compensation':2, 'Benefits':3,
     'Substitute Compensation':4, 'Supplies/Materials':5, 'Rent/Utilities':6,
     'Other Compensation/Stipend': 7, 'Contracted Services' : 8,
     'Equipment & Equipment Lease':9, 'Other Non-Compensation':10,
     'Travel & Conferences':11}
Y = pd.DataFrame(y_resampled)
target = Y.applymap(lambda s: d.get(s) if s in d else s)

# Method A
clfA1 = DecisionTreeClassifier(random_state=0, max_depth = 25)
clfA2 =
DecisionTreeClassifier(random_state=10, max_depth = 10, min_samples_split = 4) clfA3 = DecisionTreeClassifier(random_state=5, max_depth = 15) clfA4 = DecisionTreeClassifier(random_state=15, max_depth = 25, min_weight_fraction_leaf = 0.2) clfA5 = DecisionTreeClassifier(random_state=0, max_depth = 25, criterion = "entropy", max_features = 100) clfA6 = DecisionTreeClassifier(random_state=8, max_depth = 25, max_features = 100) clfA7 = DecisionTreeClassifier(random_state=0, max_depth = 20, max_features = "sqrt") clfA8 = DecisionTreeClassifier(random_state=0, max_depth = 20, max_features = "log2") clfA9 = DecisionTreeClassifier(random_state=0, max_depth = 25, max_features = 0.6, min_samples_split = 5) clfA10 = DecisionTreeClassifier(random_state=0, max_depth = 25, splitter = "random") clfA11 = DecisionTreeClassifier(random_state=0, max_depth = 5, splitter = "random") clfA12 = DecisionTreeClassifier(random_state=10, max_depth = 15, min_samples_split = 4) clfA13 = DecisionTreeClassifier(random_state=5, max_depth = 15, max_features = "sqrt") clfA14 = DecisionTreeClassifier(random_state=30, max_depth = 5, min_weight_fraction_leaf = 0.2) clfA15 = DecisionTreeClassifier(random_state=7, max_depth = 30, criterion = "entropy", max_features = 50) clfA16 = DecisionTreeClassifier(random_state=8, max_depth = 40, max_features = 100) clfA17 = DecisionTreeClassifier(random_state=50, max_depth = 20, max_features = "sqrt", criterion = "entropy") clfA18 = DecisionTreeClassifier(random_state=30, max_depth = 10, max_features = "log2") clfA19 = DecisionTreeClassifier(random_state=4, max_depth = 15, max_features = 0.5, min_samples_split = 3) clfA20 = DecisionTreeClassifier(random_state=19, max_depth = 25, splitter = "random", criterion = "entropy") #Método B clfB1 = KNeighborsClassifier(n_neighbors=7) clfB2 = KNeighborsClassifier(n_neighbors=5, weights = "distance") clfB3 = KNeighborsClassifier(n_neighbors=4, weights = "distance") clfB4 = KNeighborsClassifier(n_neighbors=8) clfB5 = 
KNeighborsClassifier(n_neighbors=7, metric = "minkowski", p = 1) clfB6 = KNeighborsClassifier(n_neighbors=4, algorithm = "ball_tree") clfB7 = KNeighborsClassifier(n_neighbors=3, algorithm = "brute") clfB8 = KNeighborsClassifier(n_neighbors=7, algorithm = "kd_tree", leaf_size = 50) clfB9 = KNeighborsClassifier(n_neighbors=5, algorithm = "kd_tree") clfB10 = KNeighborsClassifier(n_neighbors=7, algorithm = "brute") clfB11 = KNeighborsClassifier(n_neighbors=3) clfB12 = KNeighborsClassifier(n_neighbors=3, weights = "distance") clfB13 = KNeighborsClassifier(n_neighbors=4, weights = "distance", metric = "minkowski", p = 1) clfB14 = KNeighborsClassifier(n_neighbors=8, algorithm = "brute") clfB15 = KNeighborsClassifier(n_neighbors=7, metric = "minkowski", p = 1) clfB16 = KNeighborsClassifier(n_neighbors=7, algorithm = "ball_tree") clfB17 = KNeighborsClassifier(n_neighbors=6, algorithm = "ball_tree", leaf_size = 20) clfB18 = KNeighborsClassifier(n_neighbors=7, algorithm = "kd_tree", leaf_size = 50, metric = "minkowski", p = 1) clfB19 = KNeighborsClassifier(n_neighbors=5, algorithm = "kd_tree", metric = "minkowski", p = 1) clfB20 = KNeighborsClassifier(n_neighbors=7, algorithm = "brute") #Método C clfC1 = MLPClassifier(hidden_layer_sizes=(20,), max_iter=100, alpha=1e-4, solver='sgd', verbose=False, tol=1e-4, random_state=42, momentum = 0.8, validation_fraction = 0.3, early_stopping = True, n_iter_no_change = 10, learning_rate_init=.1) clfC2 = MLPClassifier(hidden_layer_sizes=(10,), max_iter=100, alpha=1e-4, solver='sgd', verbose=False, tol=1e-4, random_state=42, momentum = 0.8, validation_fraction = 0.3, early_stopping = True, n_iter_no_change = 20, learning_rate_init=.01) clfC3 = MLPClassifier(hidden_layer_sizes=(30,), max_iter=100, alpha=1e-4, solver='sgd', verbose=False, tol=1e-4, random_state=42, momentum = 0.8, validation_fraction = 0.3, early_stopping = True, n_iter_no_change = 25, learning_rate_init=.1) clfC4 = MLPClassifier(hidden_layer_sizes=(20,), max_iter=200, 
alpha=1e-4, solver='sgd', verbose=False, tol=1e-4, random_state=42, momentum = 0.8, validation_fraction = 0.3, early_stopping = True, n_iter_no_change = 10, learning_rate_init=.01) clfC5 = MLPClassifier(hidden_layer_sizes=(25,), max_iter=100, alpha=1e-4, solver='sgd', verbose=False, tol=1e-4, random_state=42, momentum = 0.8, validation_fraction = 0.3, early_stopping = True, n_iter_no_change = 10, learning_rate_init=.01) clfC6 = MLPClassifier(hidden_layer_sizes=(50,), max_iter=150, alpha=1e-4, solver='sgd', verbose=False, tol=1e-4, random_state=42, momentum = 0.8, validation_fraction = 0.2, early_stopping = True, n_iter_no_change = 40, learning_rate_init=.01) clfC7 = MLPClassifier(hidden_layer_sizes=(30,), max_iter=100, alpha=1e-4, solver='sgd', verbose=False, tol=1e-5, random_state=42, momentum = 0.8, validation_fraction = 0.2, early_stopping = True, n_iter_no_change = 30, learning_rate_init=.1) clfC8 = MLPClassifier(hidden_layer_sizes=(15,), max_iter=150, alpha=1e-4, solver='sgd', verbose=False, tol=1e-4, random_state=42, momentum = 0.8, validation_fraction = 0.3, early_stopping = False, learning_rate_init=.01) clfC9 = MLPClassifier(hidden_layer_sizes=(20,), max_iter=100, alpha=1e-4, solver='sgd', verbose=False, tol=1e-4, random_state=42, momentum = 0.8, validation_fraction = 0.3, early_stopping = False, learning_rate_init=.1) clfC10 = MLPClassifier(hidden_layer_sizes=(40,), max_iter=100, alpha=1e-4, solver='sgd', verbose=False, tol=1e-4, random_state=42, momentum = 0.8, validation_fraction = 0.2, early_stopping = False, learning_rate_init=.1) clfC11 = MLPClassifier(hidden_layer_sizes=(5,), max_iter=100, alpha=1e-4, solver='sgd', verbose=False, tol=1e-4, random_state=42, momentum = 0.8, validation_fraction = 0.4, early_stopping = True, n_iter_no_change = 10, learning_rate_init=.1) clfC12 = MLPClassifier(hidden_layer_sizes=(10,), max_iter=50, alpha=1e-4, solver='sgd', verbose=False, tol=1e-4, random_state=42, momentum = 0.8, validation_fraction = 0.4, 
early_stopping = True, n_iter_no_change = 20, learning_rate_init=.1) clfC13 = MLPClassifier(hidden_layer_sizes=(30,), max_iter=150, alpha=1e-4, solver='sgd', verbose=False, tol=1e-4, random_state=42, momentum = 0.8, validation_fraction = 0.5, early_stopping = True, n_iter_no_change = 25, learning_rate_init=.01) clfC14 = MLPClassifier(hidden_layer_sizes=(20,), max_iter=200, alpha=1e-4, solver='sgd', verbose=False, tol=1e-4, random_state=42, momentum = 0.8, validation_fraction = 0.4, early_stopping = True, n_iter_no_change = 10, learning_rate_init=.01) clfC15 = MLPClassifier(hidden_layer_sizes=(25,), max_iter=75, alpha=1e-4, solver='sgd', verbose=False, tol=1e-4, random_state=42, momentum = 0.8, validation_fraction = 0.2, early_stopping = False, n_iter_no_change = 10, learning_rate_init=.1) clfC16 = MLPClassifier(hidden_layer_sizes=(50,), max_iter=200, alpha=1e-4, solver='sgd', verbose=False, tol=1e-4, random_state=42, momentum = 0.8, validation_fraction = 0.3, early_stopping = True, n_iter_no_change = 40, learning_rate_init=.01) clfC17 = MLPClassifier(hidden_layer_sizes=(30,), max_iter=100, alpha=1e-4, solver='sgd', verbose=False, tol=1e-5, random_state=42, momentum = 0.8, validation_fraction = 0.2, early_stopping = True, n_iter_no_change = 30, learning_rate_init=.01) clfC18 = MLPClassifier(hidden_layer_sizes=(15,), max_iter=150, alpha=1e-4, solver='sgd', verbose=False, tol=1e-4, random_state=42, momentum = 0.8, validation_fraction = 0.3, early_stopping = False, learning_rate_init=.01) clfC19 = MLPClassifier(hidden_layer_sizes=(5,), max_iter=100, alpha=1e-4, solver='sgd', verbose=False, tol=1e-4, random_state=42, momentum = 0.8, validation_fraction = 0.3, early_stopping = False, learning_rate_init=.1) clfC20 = MLPClassifier(hidden_layer_sizes=(5,), max_iter=100, alpha=1e-4, solver='sgd', verbose=False, tol=1e-4, random_state=42, momentum = 0.8, validation_fraction = 0.3, early_stopping = False, learning_rate_init=.01) clfD1 = GaussianNB(priors=None, 
var_smoothing=1e-1) clfD2 = GaussianNB(priors=None, var_smoothing=10**-1.4) clfD3 = GaussianNB(priors=None, var_smoothing=10**-1.8) clfD4 = GaussianNB(priors=None, var_smoothing=10**-2.2) clfD5 = GaussianNB(priors=None, var_smoothing=10**-2.6) clfD6 = GaussianNB(priors=None, var_smoothing=1e-3) clfD7 = GaussianNB(priors=None, var_smoothing=10**-3.4) clfD8 = GaussianNB(priors=None, var_smoothing=10**-3.8) clfD9 = GaussianNB(priors=None, var_smoothing=10**-4.2) clfD10 = GaussianNB(priors=None, var_smoothing=10**-4.6) clfD11 = GaussianNB(priors=None, var_smoothing=1e-5) clfD12 = GaussianNB(priors=None, var_smoothing=10**-5.4) clfD13 = GaussianNB(priors=None, var_smoothing=10**-5.8) clfD14 = GaussianNB(priors=None, var_smoothing=10**-6.2) clfD15 = GaussianNB(priors=None, var_smoothing=10**-6.6) clfD16 = GaussianNB(priors=None, var_smoothing=1e-7) clfD17 = GaussianNB(priors=None, var_smoothing=10**-7.4) clfD18 = GaussianNB(priors=None, var_smoothing=10**-7.8) clfD19 = GaussianNB(priors=None, var_smoothing=10**-8.2) clfD20 = GaussianNB(priors=None, var_smoothing=10**-8.6) # Meta-classifier xgb = XGBClassifier(random_state=0, n_jobs=-1, learning_rate=0.1, n_estimators=100, max_depth=3) ``` ### Homogeneous stacking: *AD* - - 10 ``` sclf = StackingClassifier(classifiers=[clfA1, clfA2, clfA3, clfA4, clfA5, clfA6, clfA7, clfA8, clfA9, clfA10], meta_classifier=xgb) scores = cross_val_score(sclf, X_resampled, target, cv=5, scoring='accuracy') print("Accuracy: %f (+/- %f)" % (scores.mean(), scores.std())) ``` ### Homogeneous stacking: *AD* - - 15 ``` sclf = StackingClassifier(classifiers=[clfA1, clfA2, clfA3, clfA4, clfA5, clfA6, clfA7, clfA8, clfA9, clfA10, clfA11, clfA12, clfA13, clfA14, clfA15], meta_classifier=xgb) scores = cross_val_score(sclf, X_resampled, target, cv=5, scoring='accuracy') print("Accuracy: %f (+/- %f)" % (scores.mean(), scores.std())) ``` ### Homogeneous stacking: *AD* - - 20 ``` sclf = StackingClassifier(classifiers=[clfA1, clfA2, clfA3, clfA4, clfA5, clfA6, clfA7, clfA8, clfA9, clfA10,
clfA11, clfA12, clfA13, clfA14, clfA15, clfA16, clfA17, clfA18, clfA19, clfA20], meta_classifier=xgb) scores = cross_val_score(sclf, X_resampled, target, cv=5, scoring='accuracy') print("Accuracy: %f (+/- %f)" % (scores.mean(), scores.std())) ``` ### Homogeneous stacking: *KNN* - - 10 ``` sclf = StackingClassifier(classifiers=[clfB1, clfB2, clfB3, clfB4, clfB5, clfB6, clfB7, clfB8, clfB9, clfB10], meta_classifier=xgb) scores = cross_val_score(sclf, X_resampled, target, cv=5, scoring='accuracy') print("Accuracy: %f (+/- %f)" % (scores.mean(), scores.std())) ``` ### Homogeneous stacking: *KNN* - - 15 ``` sclf = StackingClassifier(classifiers=[clfB1, clfB2, clfB3, clfB4, clfB5, clfB6, clfB7, clfB8, clfB9, clfB10, clfB11, clfB12, clfB13, clfB14, clfB15], meta_classifier=xgb) scores = cross_val_score(sclf, X_resampled, target, cv=5, scoring='accuracy') print("Accuracy: %f (+/- %f)" % (scores.mean(), scores.std())) ``` ### Homogeneous stacking: *KNN* - - 20 ``` sclf = StackingClassifier(classifiers=[clfB1, clfB2, clfB3, clfB4, clfB5, clfB6, clfB7, clfB8, clfB9, clfB10, clfB11, clfB12, clfB13, clfB14, clfB15, clfB16, clfB17, clfB18, clfB19, clfB20], meta_classifier=xgb) scores = cross_val_score(sclf, X_resampled, target, cv=5, scoring='accuracy') print("Accuracy: %f (+/- %f)" % (scores.mean(), scores.std())) ``` ### Homogeneous stacking: *MLP* - - 10 ``` sclf = StackingClassifier(classifiers=[clfC1, clfC2, clfC3, clfC4, clfC5, clfC6, clfC7, clfC8, clfC9, clfC10], meta_classifier=xgb) scores = cross_val_score(sclf, X_resampled, target, cv=3, scoring='accuracy') print("Accuracy: %f (+/- %f)" % (scores.mean(), scores.std())) ``` ### Homogeneous stacking: *MLP* - - 15 ``` sclf = StackingClassifier(classifiers=[clfC1, clfC2, clfC3, clfC4, clfC5, clfC6, clfC7, clfC8, clfC9, clfC10, clfC11, clfC12, clfC13, clfC14, clfC15], meta_classifier=xgb) scores = cross_val_score(sclf, X_resampled, target, cv=5, scoring='accuracy') print("Accuracy: %f (+/- %f)" % (scores.mean(), scores.std())) ``` 
### Homogeneous stacking: *MLP* - - 20 ``` sclf = StackingClassifier(classifiers=[clfC1, clfC2, clfC3, clfC4, clfC5, clfC6, clfC7, clfC8, clfC9, clfC10, clfC11, clfC12, clfC13, clfC14, clfC15, clfC16, clfC17, clfC18, clfC19, clfC20], meta_classifier=xgb) scores = cross_val_score(sclf, X_resampled, target, cv=5, scoring='accuracy') print("Accuracy: %f (+/- %f)" % (scores.mean(), scores.std())) ``` ### Homogeneous stacking: *NB* - - 10 ``` sclf = StackingClassifier(classifiers=[clfD1, clfD2, clfD3, clfD4, clfD5, clfD6, clfD7, clfD8, clfD9, clfD10], meta_classifier=xgb) scores = cross_val_score(sclf, X_resampled, target, cv=5, scoring='accuracy') print("Accuracy: %f (+/- %f)" % (scores.mean(), scores.std())) ``` ### Homogeneous stacking: *NB* - - 15 ``` sclf = StackingClassifier(classifiers=[clfD1, clfD2, clfD3, clfD4, clfD5, clfD6, clfD7, clfD8, clfD9, clfD10, clfD11, clfD12, clfD13, clfD14, clfD15], meta_classifier=xgb) scores = cross_val_score(sclf, X_resampled, target, cv=5, scoring='accuracy') print("Accuracy: %f (+/- %f)" % (scores.mean(), scores.std())) ``` ### Homogeneous stacking: *NB* - - 20 ``` sclf = StackingClassifier(classifiers=[clfD1, clfD2, clfD3, clfD4, clfD5, clfD6, clfD7, clfD8, clfD9, clfD10, clfD11, clfD12, clfD13, clfD14, clfD15, clfD16, clfD17, clfD18, clfD19, clfD20], meta_classifier=xgb) scores = cross_val_score(sclf, X_resampled, target, cv=5, scoring='accuracy') print("Accuracy: %f (+/- %f)" % (scores.mean(), scores.std())) ```
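A recurring pattern in the bagging cells above is copying the printed fold scores back into `np.mean`/`np.std` by hand, which invites transcription errors. A minimal sketch of the safer pattern, working directly on the array that `cross_val_score` returns (the `summarize_scores` helper is a name introduced here, not part of the notebook):

```python
import numpy as np

def summarize_scores(scores):
    """Mean and standard deviation over cross-validation fold scores."""
    scores = np.asarray(scores, dtype=float)
    return scores.mean(), scores.std()

# works on any list/array of fold scores, e.g. the output of cross_val_score
mean, std = summarize_scores([0.11, 0.13, 0.12, 0.12])
print("Accuracy: %f (+/- %f)" % (mean, std))
```

With this helper, each evaluation cell reduces to `summarize_scores(cross_val_score(clf, X_resampled, target, cv=10))`, with no hand-copied numbers.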
# Import ``` import gc import os import random import numpy as np import pandas as pd ``` # Random seed initialize ``` def random_seed_initialize(seed=42): random.seed(seed) os.environ['PYTHONHASHSEED'] = str(seed) np.random.seed(seed) random_seed_initialize() ``` # Reduce memory Function ``` def reduce_mem_usage(df, verbose=True): numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64'] start_mem = df.memory_usage().sum() / 1024**2 for col in df.columns: col_type = df[col].dtypes if col_type in numerics: c_min = df[col].min() c_max = df[col].max() if str(col_type)[:3] == 'int': if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max: df[col] = df[col].astype(np.int8) elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max: df[col] = df[col].astype(np.int16) elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max: df[col] = df[col].astype(np.int32) elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max: df[col] = df[col].astype(np.int64) else: if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max: df[col] = df[col].astype(np.float16) elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max: df[col] = df[col].astype(np.float32) else: df[col] = df[col].astype(np.float64) end_mem = df.memory_usage().sum() / 1024**2 if verbose: print('Mem. 
usage decreased to {:5.2f} Mb ({:.1f}% reduction)'.format(end_mem, 100 * (start_mem - end_mem) / start_mem)) return df ``` # Read CSV data <https://www.kaggle.com/cdeotte/data-without-drift> ``` train_data = pd.read_csv('../input/data-without-drift/train_clean.csv') test_data = pd.read_csv('../input/data-without-drift/test_clean.csv') ``` # Add Feature ``` def set_index(df): df = df.sort_values(by=['time']).reset_index(drop=True) df.index = ((df.time * 10_000) - 1).values return df def set_batch_index(df, batch_size1=50_000, batch_size2=5_000): df['batch'] = df.index // batch_size1 df['batch_index'] = df.index - (df.batch * batch_size1) df['batch_slices'] = df['batch_index'] // batch_size2 df['batch_slices2'] = df.apply(lambda r: '_'.join( [str(r['batch']).zfill(3), str(r['batch_slices']).zfill(3)]), axis=1) return df def set_features_batch50000(df): df['signal_batch_min'] = df.groupby('batch')['signal'].transform('min') # batch minimum df['signal_batch_max'] = df.groupby('batch')['signal'].transform('max') # batch maximum df['signal_batch_std'] = df.groupby('batch')['signal'].transform('std') # standard deviation df['signal_batch_mean'] = df.groupby('batch')['signal'].transform('mean') # mean df['mean_abs_chg_batch'] = df.groupby(['batch'])['signal'].transform(lambda x: np.mean(np.abs(np.diff(x)))) # mean absolute change from the previous sample df['abs_max_batch'] = df.groupby(['batch'])['signal'].transform(lambda x: np.max(np.abs(x))) # maximum of absolute values df['abs_min_batch'] = df.groupby(['batch'])['signal'].transform(lambda x: np.min(np.abs(x))) # minimum of absolute values df['range_batch'] = df['signal_batch_max'] - df['signal_batch_min'] # gap between maximum and minimum df['maxtomin_batch'] = df['signal_batch_max'] / df['signal_batch_min'] # maximum divided by minimum df['abs_avg_batch'] = (df['abs_min_batch'] + df['abs_max_batch']) / 2 # mean of the absolute maximum and absolute minimum return df def set_features_batch5000(df): df['signal_batch_5k_min'] = df.groupby('batch_slices2')['signal'].transform('min') df['signal_batch_5k_max'] = df.groupby('batch_slices2')['signal'].transform('max') df['signal_batch_5k_std'] = 
df.groupby('batch_slices2')['signal'].transform('std') df['signal_batch_5k_mean'] = df.groupby('batch_slices2')['signal'].transform('mean') df['mean_abs_chg_batch_5k'] = df.groupby(['batch_slices2'])['signal'].transform(lambda x: np.mean(np.abs(np.diff(x)))) df['abs_max_batch_5k'] = df.groupby(['batch_slices2'])['signal'].transform(lambda x: np.max(np.abs(x))) df['abs_min_batch_5k'] = df.groupby(['batch_slices2'])['signal'].transform(lambda x: np.min(np.abs(x))) df['range_batch_5k'] = df['signal_batch_5k_max'] - df['signal_batch_5k_min'] df['maxtomin_batch_5k'] = df['signal_batch_5k_max'] / df['signal_batch_5k_min'] df['abs_avg_batch_5k'] = (df['abs_min_batch_5k'] + df['abs_max_batch_5k']) / 2 return df def set_shift_features(df): df['signal_shift+1'] = df.groupby(['batch']).shift(1)['signal'] df['signal_shift-1'] = df.groupby(['batch']).shift(-1)['signal'] df['signal_shift+2'] = df.groupby(['batch']).shift(2)['signal'] df['signal_shift-2'] = df.groupby(['batch']).shift(-2)['signal'] return df def set_difference_features(df, ignore=['open_channels', 'time', 'batch', 'batch_index', 'batch_slices', 'batch_slices2',]): for c in list(set(df.columns) ^ set(ignore)): df[f'{c}_msignal'] = df[c] - df['signal'] return df def set_gradients_features(df, n_grads=4): for i in range(n_grads): if i == 0: df['grad_' + str(i+1)] = df.groupby(['batch'])['signal'].transform(lambda x: np.gradient(x)) else: df['grad_' + str(i+1)] = df.groupby(['batch'])['grad_' + str(i)].transform(lambda x: np.gradient(x)) return df def set_features(df, is_test=False, memory_reduce=True): print('set_index()') df = set_index(df) print('set_batch_index()') df = set_batch_index(df) print('set_features_batch50000()') df = set_features_batch50000(df) print('set_features_batch5000()') df = set_features_batch5000(df) print('set_lag_features()') df = set_shift_features(df) print('set_gradients_features()') df = set_gradients_features(df) print('set_difference_features()') if not is_test: df = 
set_difference_features(df, ignore=['open_channels', 'time', 'batch', 'batch_index', 'batch_slices', 'batch_slices2']) else: df = set_difference_features(df, ignore=['time', 'batch', 'batch_index', 'batch_slices', 'batch_slices2']) df = df.fillna(0) if memory_reduce: print('reduce_mem_usage()') df = reduce_mem_usage(df) return df ``` ``` train_data = set_features(train_data) pd.set_option('display.max_columns', 200) train_data.head(10) ``` # Sampling ``` frac = 1.0 train_data = train_data.sample(frac=frac, random_state=42).reset_index(drop=True) ``` # PyCaret Setup ``` # !pip install pycaret IGNORE_FEATURES = [ 'time', 'batch', 'batch_index', 'batch_slices', 'batch_slices2', 'abs_max_batch', 'abs_min_batch', 'abs_avg_batch', 'signal_batch_min_msignal', 'signal_batch_mean_msignal', 'range_batch_5k_msignal' ] print('TARGET FEATURE LIST : ', end="") print([f for f in list(set(IGNORE_FEATURES) ^ set(train_data.columns))]) from pycaret.regression import * exp = setup(data = train_data, target = 'open_channels', silent=True, sampling = False, ignore_features = IGNORE_FEATURES, session_id=42) ``` # Create LGBM model ``` lgbm_model = create_model('lightgbm', fold=10) lgbm_model = finalize_model(lgbm_model) ``` # Predict ``` test_data = set_features(test_data, is_test=True) test_data.head() predictions = predict_model(lgbm_model, data=test_data) predictions['open_channels'] = predictions['Label'] sub = pd.read_csv("../input/liverpool-ion-switching/sample_submission.csv") submission = pd.DataFrame() submission['time'] = sub['time'] submission['open_channels'] = predictions['open_channels'] submission['open_channels'] = submission['open_channels'].round(decimals=0) submission['open_channels'] = submission['open_channels'].astype(int) submission.to_csv('submission.csv', float_format='%0.4f', index = False) ```
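`set_gradients_features` above feeds each gradient back into `np.gradient` to build higher-order gradient features per batch. A small self-contained illustration of what one pass of that chain produces (central differences in the interior, one-sided differences at the batch edges):

```python
import numpy as np

# a toy quadratic "signal": its first gradient grows linearly,
# except at the edges where np.gradient falls back to one-sided differences
signal = np.array([0.0, 1.0, 4.0, 9.0, 16.0])
grad_1 = np.gradient(signal)   # what set_gradients_features stores as grad_1
grad_2 = np.gradient(grad_1)   # ... and grad_2, fed from the previous pass
print(grad_1)  # [1. 2. 4. 6. 7.]
print(grad_2)
```

The edge behavior matters here: because gradients are taken per `batch` group, the first and last samples of every 50,000-sample batch get one-sided rather than central estimates.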
``` import os import numpy from numpy import arange, sin, exp, pi, diff, floor, asarray from scipy.io import savemat from utils.simulation_utils import add_noise, quantize from utils.gps_l1ca_utils import generate_GPS_L1CA_code from utils.acquisition_utils import coarse_acquire from utils.utils import PSKSignal, sample_sequence import matplotlib.pyplot as plt plt.rcParams.update({'font.size': 20}) T_sim = 4 # duration of simulated signal (s) fs = 5e6 # sampling rate (Hz) N = int(T_sim * fs) t = arange(N) / fs f_carrier = 1.57542e9 # L1 carrier frequency (Hz) f_center = 1.57542e9 - 1.25e6 # radio front-end center frequency (Hz) f_inter = f_carrier - f_center # intermediate frequency (Hz) f_code = 1.023e6 # L1 C/A code rate (chips/s) c = 299792458 # speed of light (m/s) G_flat = t * 0 - pi / (2 * pi * f_carrier / c) G_step = .1 * (t > T_sim / 2) G_linear = 0.5 * (t > T_sim / 2) * 3 * (t - T_sim / 2) G_quadratic = 0.5 * (t >= T_sim / 2) * 2 * (t - T_sim / 2)**2 G_sinusoid = 0.5 * (exp((t - T_sim / 2)) * (t < T_sim / 2) + (t >= T_sim / 2)) * sin(2 * pi * 4 / T_sim * t) fig = plt.figure(figsize=(12, 5), dpi=300) ax = fig.add_subplot(111) skip = 1000 args = {'linewidth': 3} ax.plot(t[:-1:skip], -1.3+diff(G_flat)[::skip] * fs * f_carrier / c, label='flat', **args) ax.plot(t[:-1:skip], diff(G_step)[::skip] * fs * f_carrier / c, label='impulse', **args) ax.plot(t[:-1:skip], diff(G_linear)[::skip] * fs * f_carrier / c, label='step', **args) ax.plot(t[:-1:skip], diff(G_quadratic)[::skip] * fs * f_carrier / c, label='ramp', **args) ax.plot(t[:-1:skip], diff(G_sinusoid)[::skip] * fs * f_carrier / c, label='growing sinusoid', **args) ax.set_xlim(t[0], t[-1]) ax.set_ylim(-40, 60) ax.set_xlabel('Time [s]') ax.set_ylabel('Doppler [Hz]') ax.legend(loc=2, fontsize=16, frameon=False) ax.grid() plt.tight_layout() plt.show() prns = [4, 7, 10, 15, 29, 32] n0s = [100, 1225, 2500, 4560, 2052.7846, 0.5] tau0s = [n0 / fs for n0 in n0s] fds = [1000, -1000, 2200, -3300, -3210, -4001] Gs = [c * 
tau0 + c / f_carrier * fd * t + G for tau0, fd, G in zip(tau0s, fds, [G_flat, G_step, G_linear, G_quadratic, G_sinusoid, G_flat])] cn0s = [45, 47, 49, 45, 41, 45 - 26 / 6 * t] signal_samples = [] chips = [] for prn, G in zip(prns, Gs): code_seq = generate_GPS_L1CA_code(prn) chip = f_code * G / c code_samples = exp(1j * pi * code_seq[(floor(t * f_code + chip) % len(code_seq)).astype(int)]) theta = 2 * pi * f_carrier * G / c samples = code_samples * exp(1j * (2 * pi * f_inter * t + theta)) chips.append(chip) signal_samples.append(samples) samples = add_noise(signal_samples, cn0s) samples = quantize(samples, bits=2) if not os.path.exists('../data'): os.makedirs('../data') filepath = '../data/sim-RF_GPS-L1CA_5000_1250_complex_{0:02}s.mat'.format(T_sim) savemat(filepath, {'samples': samples, 'chips': asarray(chips), 'prns': prns}) ```
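The plotting cell above converts the geometric range history `G` into Doppler via `diff(G) * fs * f_carrier / c`. A quick numerical check of that relation on a hypothetical constant range rate (the value `v` below is illustrative, not taken from the simulation):

```python
import numpy as np

c = 299792458.0        # speed of light (m/s)
f_carrier = 1.57542e9  # GPS L1 carrier frequency (Hz)
fs = 5e6               # sampling rate (Hz)

t = np.arange(1000) / fs
v = 10.0               # hypothetical constant range rate (m/s)
G = v * t              # range grows linearly in time
# same conversion used in the plotting cell: Doppler = dG/dt * f_carrier / c
doppler = np.diff(G) * fs * f_carrier / c
print(doppler[0])      # ~= v * f_carrier / c, about 52.5 Hz
```

This is why the "flat", "impulse", "step", and "ramp" labels describe the *derivatives* of `G_flat`, `G_step`, `G_linear`, and `G_quadratic`: the plot shows Doppler, not range.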
# SimSwap for videos Reference: [my changes to the official notebook](https://gist.github.com/woctezuma/78a98b73cbba8cba478d99c8c50bc359) ## Prepare code ``` %cd /content !git clone https://github.com/woctezuma/SimSwap %cd /content/SimSwap !git checkout no-logo !pip install insightface==0.2.1 onnxruntime moviepy > /dev/null !pip install googledrivedownloader > /dev/null !pip install imageio==2.4.1 > /dev/null ``` ## Prepare models ``` %cd /content/SimSwap from google_drive_downloader import GoogleDriveDownloader GoogleDriveDownloader.download_file_from_google_drive(file_id='1TLNdIufzwesDbyr_nVTR7Zrx9oRHLM_N', dest_path='./arcface_model/arcface_checkpoint.tar') GoogleDriveDownloader.download_file_from_google_drive(file_id='1PXkRiBUYbu1xWpQyDEJvGKeqqUFthJcI', dest_path='./checkpoints.zip') !wget --no-check-certificate \ https://sh23tw.dm.files.1drv.com/y4mmGiIkNVigkSwOKDcV3nwMJulRGhbtHdkheehR5TArc52UjudUYNXAEvKCii2O5LAmzGCGK6IfleocxuDeoKxDZkNzDRSt4ZUlEt8GlSOpCXAFEkBwaZimtWGDRbpIGpb_pz9Nq5jATBQpezBS6G_UtspWTkgrXHHxhviV2nWy8APPx134zOZrUIbkSF6xnsqzs3uZ_SEX_m9Rey0ykpx9w \ -O antelope.zip !unzip ./checkpoints.zip -d ./checkpoints !unzip antelope.zip -d ./insightface_func/models/ ``` ## Prepare data ### Download ``` %cd /content !wget https://i.imgur.com/iQtmj1N.png -O photo.png !wget https://i0.wp.com/john.do/wp-content/uploads/2019/07/james-franco-so-good-1.gif -O video.gif input_image_fname = '/content/photo.png' input_video_fname = '/content/video.gif' ``` ## Run ### Official code ``` %cd /content/SimSwap/ import cv2 import torch import fractions import numpy as np from PIL import Image import torch.nn.functional as F from torchvision import transforms from models.models import create_model from options.test_options import TestOptions from insightface_func.face_detect_crop_mutil import Face_detect_crop from util.videoswap import video_swap from util.add_watermark import watermark_image transformer = transforms.Compose([ transforms.ToTensor(), 
#transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) transformer_Arcface = transforms.Compose([ transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) detransformer = transforms.Compose([ transforms.Normalize([0, 0, 0], [1/0.229, 1/0.224, 1/0.225]), transforms.Normalize([-0.485, -0.456, -0.406], [1, 1, 1]) ]) # If the algorithm misses some faces, you could lower the detection threshold. # Reference: https://github.com/neuralchen/SimSwap/issues/39#issuecomment-873758730 det_thresh = 0.6 # You could also decrease the image size used for face detection: det_size = (640,640) opt = TestOptions() opt.initialize() opt.parser.add_argument('-f') ## dummy arg to avoid bug opt = opt.parse() opt.pic_a_path = input_image_fname ## or replace it with image from your own google drive opt.video_path = input_video_fname ## or replace it with video from your own google drive opt.output_path = '/content/output.mp4' opt.temp_path = './tmp' opt.Arc_path = './arcface_model/arcface_checkpoint.tar' opt.isTrain = False crop_size = 224 torch.nn.Module.dump_patches = True model = create_model(opt) model.eval() app = Face_detect_crop(name='antelope', root='./insightface_func/models') app.prepare(ctx_id= 0, det_thresh=det_thresh, det_size=det_size) pic_a = opt.pic_a_path # img_a = Image.open(pic_a).convert('RGB') img_a_whole = cv2.imread(pic_a) img_a_align_crop, _ = app.get(img_a_whole,crop_size) img_a_align_crop_pil = Image.fromarray(cv2.cvtColor(img_a_align_crop[0],cv2.COLOR_BGR2RGB)) img_a = transformer_Arcface(img_a_align_crop_pil) img_id = img_a.view(-1, img_a.shape[0], img_a.shape[1], img_a.shape[2]) # convert numpy to tensor img_id = img_id.cuda() #create latent id img_id_downsample = F.interpolate(img_id, scale_factor=0.5) latend_id = model.netArc(img_id_downsample) latend_id = latend_id.detach().to('cpu') latend_id = latend_id/np.linalg.norm(latend_id,axis=1,keepdims=True) latend_id = latend_id.to('cuda') try: 
video_swap(opt.video_path, latend_id, model, app, opt.output_path,temp_results_dir=opt.temp_path) except IndexError: print('[error] This is most likely due to the absence of audio from a GIF input.') ``` ### My fix for GIF If you want to apply SimSwap to a GIF, there will be an error because the input video has no audio. To fix this issue, aggregate the temporary output by yourself by running: ``` import cv2 def get_fps(video_path): video = cv2.VideoCapture(video_path) fps = video.get(cv2.CAP_PROP_FPS) return fps import os import glob from moviepy.video.io.ImageSequenceClip import ImageSequenceClip def collate_into_gif(temp_results_dir, output_fname, fps): path = os.path.join(temp_results_dir,'*.jpg') image_filenames = sorted(glob.glob(path)) clips = ImageSequenceClip(image_filenames,fps = fps) clips.write_gif(output_fname) return temp_results_dir = '/content/SimSwap/tmp/' output_fname = '/content/output.gif' collate_into_gif(temp_results_dir, output_fname, fps=get_fps(input_video_fname)) ``` To optimize the file size (in MB) of the GIF, you can upload it to a website like https://ezgif.com/optimize
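One caveat about `collate_into_gif`: it recovers playback order from `sorted(glob.glob(...))`, which is only correct if the temporary frame writer zero-pads the frame indices in the file names (an assumption about SimSwap's temp output, illustrated with hypothetical names):

```python
# zero-padded names sort lexicographically in numeric order...
padded = ["frame_{:04d}.jpg".format(i) for i in (2, 10, 1)]
# ...while unpadded names do not ('frame_10.jpg' sorts before 'frame_2.jpg')
unpadded = ["frame_{}.jpg".format(i) for i in (2, 10, 1)]
print(sorted(padded))    # ['frame_0001.jpg', 'frame_0002.jpg', 'frame_0010.jpg']
print(sorted(unpadded))  # ['frame_1.jpg', 'frame_10.jpg', 'frame_2.jpg']
```

If the frames ever come out unpadded, sort with a numeric key extracted from the filename instead of relying on plain lexicographic order.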
github_jupyter
# City street network orientations Compare the spatial orientations of city street networks with OSMnx. - [Overview of OSMnx](http://geoffboeing.com/2016/11/osmnx-python-street-networks/) - [GitHub repo](https://github.com/gboeing/osmnx) - [Examples, demos, tutorials](https://github.com/gboeing/osmnx-examples) - [Documentation](https://osmnx.readthedocs.io/en/stable/) - [Journal article/citation](http://geoffboeing.com/publications/osmnx-complex-street-networks/) ``` import matplotlib.pyplot as plt import numpy as np import osmnx as ox import pandas as pd ox.config(log_console=True, use_cache=True) weight_by_length = False # define the study sites as label : query places = {'Atlanta' : 'Atlanta, GA, USA', 'Boston' : 'Boston, MA, USA', 'Buffalo' : 'Buffalo, NY, USA', 'Charlotte' : 'Charlotte, NC, USA', 'Chicago' : 'Chicago, IL, USA', 'Cleveland' : 'Cleveland, OH, USA', 'Dallas' : 'Dallas, TX, USA', 'Houston' : 'Houston, TX, USA', 'Denver' : 'Denver, CO, USA', 'Detroit' : 'Detroit, MI, USA', 'Las Vegas' : 'Las Vegas, NV, USA', 'Los Angeles' : {'city':'Los Angeles', 'state':'CA', 'country':'USA'}, 'Manhattan' : 'Manhattan, NYC, NY, USA', 'Miami' : 'Miami, FL, USA', 'Minneapolis' : 'Minneapolis, MN, USA', 'Orlando' : 'Orlando, FL, USA', 'Philadelphia' : 'Philadelphia, PA, USA', 'Phoenix' : 'Phoenix, AZ, USA', 'Portland' : 'Portland, OR, USA', 'Sacramento' : 'Sacramento, CA, USA', 'San Francisco' : {'city':'San Francisco', 'state':'CA', 'country':'USA'}, 'Seattle' : 'Seattle, WA, USA', 'St Louis' : 'St. 
Louis, MO, USA', 'Tampa' : 'Tampa, FL, USA', 'Washington' : 'Washington, DC, USA'} # verify OSMnx geocodes each query to what you expect gdf = ox.gdf_from_places(places.values()) gdf ``` ## Get the street networks and their edge bearings ``` def reverse_bearing(x): return x + 180 if x < 180 else x - 180 bearings = {} for place in sorted(places.keys()): # get the graph query = places[place] G = ox.graph_from_place(query, network_type='drive') # calculate edge bearings Gu = ox.add_edge_bearings(ox.get_undirected(G)) if weight_by_length: # weight bearings by length (meters) city_bearings = [] for u, v, k, d in Gu.edges(keys=True, data=True): city_bearings.extend([d['bearing']] * int(d['length'])) b = pd.Series(city_bearings) bearings[place] = pd.concat([b, b.map(reverse_bearing)]).reset_index(drop='True') else: # don't weight bearings, just take one value per street segment b = pd.Series([d['bearing'] for u, v, k, d in Gu.edges(keys=True, data=True)]) bearings[place] = pd.concat([b, b.map(reverse_bearing)]).reset_index(drop='True') ``` ## Visualize it ``` def count_and_merge(n, bearings): # make twice as many bins as desired, then merge them in pairs # prevents bin-edge effects around common values like 0° and 90° n = n * 2 bins = np.arange(n + 1) * 360 / n count, _ = np.histogram(bearings, bins=bins) # move the last bin to the front, so eg 0.01° and 359.99° will be binned together count = np.roll(count, 1) return count[::2] + count[1::2] # function to draw a polar histogram for a set of edge bearings def polar_plot(ax, bearings, n=36, title=''): bins = np.arange(n + 1) * 360 / n count = count_and_merge(n, bearings) _, division = np.histogram(bearings, bins=bins) frequency = count / count.sum() division = division[0:-1] width = 2 * np.pi / n ax.set_theta_zero_location('N') ax.set_theta_direction('clockwise') x = division * np.pi / 180 bars = ax.bar(x, height=frequency, width=width, align='center', bottom=0, zorder=2, color='#003366', edgecolor='k', linewidth=0.5, 
alpha=0.7) ax.set_ylim(top=frequency.max()) title_font = {'family':'Century Gothic', 'size':24, 'weight':'bold'} xtick_font = {'family':'Century Gothic', 'size':10, 'weight':'bold', 'alpha':1.0, 'zorder':3} ytick_font = {'family':'Century Gothic', 'size': 9, 'weight':'bold', 'alpha':0.2, 'zorder':3} ax.set_title(title.upper(), y=1.05, fontdict=title_font) ax.set_yticks(np.linspace(0, max(ax.get_ylim()), 5)) yticklabels = ['{:.2f}'.format(y) for y in ax.get_yticks()] yticklabels[0] = '' ax.set_yticklabels(labels=yticklabels, fontdict=ytick_font) xticklabels = ['N', '', 'E', '', 'S', '', 'W', ''] ax.set_xticklabels(labels=xticklabels, fontdict=xtick_font) ax.tick_params(axis='x', which='major', pad=-2) # create figure and axes n = len(places) ncols = int(np.ceil(np.sqrt(n))) nrows = int(np.ceil(n / ncols)) figsize = (ncols * 5, nrows * 5) fig, axes = plt.subplots(nrows, ncols, figsize=figsize, subplot_kw={'projection':'polar'}) # plot each city's polar histogram for ax, place in zip(axes.flat, sorted(places.keys())): polar_plot(ax, bearings[place].dropna(), title=place) # add super title and save full image suptitle_font = {'family':'Century Gothic', 'fontsize':60, 'fontweight':'normal', 'y':1.07} fig.suptitle('City Street Network Orientation', **suptitle_font) fig.tight_layout() fig.subplots_adjust(hspace=0.35) fig.savefig('images/street-orientations.png', dpi=120, bbox_inches='tight') plt.close() ```
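The two helpers above can be sanity-checked on synthetic bearings without downloading any street network. This sketch (toy data, not city data) shows why the doubled-then-merged bins keep bearings just either side of 0° in the same bin:

```python
import numpy as np

def reverse_bearing(x):
    # opposite compass direction, as defined in the notebook
    return x + 180 if x < 180 else x - 180

def count_and_merge(n, bearings):
    # double the bin count, histogram, roll the wrap-around bin to the
    # front, then merge adjacent pairs, same logic as above
    n = n * 2
    bins = np.arange(n + 1) * 360 / n
    count, _ = np.histogram(bearings, bins=bins)
    count = np.roll(count, 1)
    return count[::2] + count[1::2]

# adding each bearing's reverse makes the histogram point-symmetric,
# and the roll bins 359.5 deg together with 0.5 deg
sample = np.array([359.5, 0.5, 90.0])
both = np.concatenate([sample, [reverse_bearing(b) for b in sample]])
counts = count_and_merge(36, both)
print(counts[0], counts[9], counts[18], counts[27])  # 2 1 2 1
```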
# Object Detection Demo Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md) before you start. # Imports ``` import numpy as np import os import six.moves.urllib as urllib import sys import tarfile import tensorflow as tf import zipfile from collections import defaultdict from io import StringIO from matplotlib import pyplot as plt from PIL import Image # This is needed since the notebook is stored in the object_detection folder. sys.path.append("..") from object_detection.utils import ops as utils_ops if tf.__version__ < '1.4.0': raise ImportError('Please upgrade your tensorflow installation to v1.4.* or later!') ``` ## Env setup ``` # This is needed to display the images. %matplotlib inline ``` ## Object detection imports Here are the imports from the object detection module. ``` from utils import label_map_util from utils import visualization_utils as vis_util ``` # Model preparation ## Variables Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_CKPT` to point to a new .pb file. By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies. ``` # What model to download. MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17' MODEL_FILE = MODEL_NAME + '.tar.gz' DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/' # Path to frozen detection graph. This is the actual model that is used for the object detection. 
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb' # List of the strings that is used to add correct label for each box. PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt') NUM_CLASSES = 90 ``` ## Download Model ``` opener = urllib.request.URLopener() opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE) tar_file = tarfile.open(MODEL_FILE) for file in tar_file.getmembers(): file_name = os.path.basename(file.name) if 'frozen_inference_graph.pb' in file_name: tar_file.extract(file, os.getcwd()) ``` ## Load a (frozen) Tensorflow model into memory. ``` detection_graph = tf.Graph() with detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') ``` ## Loading label map Label maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine ``` label_map = label_map_util.load_labelmap(PATH_TO_LABELS) categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True) category_index = label_map_util.create_category_index(categories) ``` ## Helper code ``` def load_image_into_numpy_array(image): (im_width, im_height) = image.size return np.array(image.getdata()).reshape( (im_height, im_width, 3)).astype(np.uint8) ``` # Detection ``` # For the sake of simplicity we will use only 2 images: # image1.jpg # image2.jpg # If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS. PATH_TO_TEST_IMAGES_DIR = 'test_images' TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ] # Size, in inches, of the output images. 
IMAGE_SIZE = (12, 8) def run_inference_for_single_image(image, graph): with graph.as_default(): with tf.Session() as sess: # Get handles to input and output tensors ops = tf.get_default_graph().get_operations() all_tensor_names = {output.name for op in ops for output in op.outputs} tensor_dict = {} for key in [ 'num_detections', 'detection_boxes', 'detection_scores', 'detection_classes', 'detection_masks' ]: tensor_name = key + ':0' if tensor_name in all_tensor_names: tensor_dict[key] = tf.get_default_graph().get_tensor_by_name( tensor_name) if 'detection_masks' in tensor_dict: # The following processing is only for single image detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0]) detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0]) # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size. real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32) detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1]) detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1]) detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks( detection_masks, detection_boxes, image.shape[0], image.shape[1]) detection_masks_reframed = tf.cast( tf.greater(detection_masks_reframed, 0.5), tf.uint8) # Follow the convention by adding back the batch dimension tensor_dict['detection_masks'] = tf.expand_dims( detection_masks_reframed, 0) image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0') # Run inference output_dict = sess.run(tensor_dict, feed_dict={image_tensor: np.expand_dims(image, 0)}) # all outputs are float32 numpy arrays, so convert types as appropriate output_dict['num_detections'] = int(output_dict['num_detections'][0]) output_dict['detection_classes'] = output_dict[ 'detection_classes'][0].astype(np.uint8) output_dict['detection_boxes'] = output_dict['detection_boxes'][0] output_dict['detection_scores'] = 
output_dict['detection_scores'][0] if 'detection_masks' in output_dict: output_dict['detection_masks'] = output_dict['detection_masks'][0] return output_dict for image_path in TEST_IMAGE_PATHS: image = Image.open(image_path) # the array based representation of the image will be used later in order to prepare the # result image with boxes and labels on it. image_np = load_image_into_numpy_array(image) # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) # Actual detection. output_dict = run_inference_for_single_image(image_np, detection_graph) # Visualization of the results of a detection. vis_util.visualize_boxes_and_labels_on_image_array( image_np, output_dict['detection_boxes'], output_dict['detection_classes'], output_dict['detection_scores'], category_index, instance_masks=output_dict.get('detection_masks'), use_normalized_coordinates=True, line_thickness=8) plt.figure(figsize=IMAGE_SIZE) plt.imshow(image_np) ```
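A quick shape check, with a toy array in place of a real image and no TensorFlow needed, for the two array manipulations used above: the reshape inside `load_image_into_numpy_array` and the batch-dimension expansion:

```python
import numpy as np

# PIL's getdata() yields pixels row by row, so a (width=4, height=2)
# image flattens to width*height*3 values and reshapes to (2, 4, 3)
im_width, im_height = 4, 2
flat = np.arange(im_width * im_height * 3)
image_np = flat.reshape((im_height, im_width, 3)).astype(np.uint8)

# the detector expects a batch axis: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
print(image_np.shape, image_np_expanded.shape)  # (2, 4, 3) (1, 2, 4, 3)
```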
# Predicting Boston Housing Prices ## Using XGBoost in SageMaker (Batch Transform) _Deep Learning Nanodegree Program | Deployment_ --- As an introduction to using SageMaker's High Level Python API we will look at a relatively simple problem. Namely, we will use the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the median value of a home in the area of Boston Mass. The documentation for the high level API can be found on the [ReadTheDocs page](http://sagemaker.readthedocs.io/en/latest/) ## General Outline Typically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons. 1. Download or otherwise retrieve the data. 2. Process / Prepare the data. 3. Upload the processed data to S3. 4. Train a chosen model. 5. Test the trained model (typically using a batch transform job). 6. Deploy the trained model. 7. Use the deployed model. In this notebook we will only be covering steps 1 through 5 as we just want to get a feel for using SageMaker. In later notebooks we will talk about deploying a trained model in much more detail. ## Step 0: Setting up the notebook We begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need. ``` %matplotlib inline import os import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.datasets import load_boston import sklearn.model_selection ``` In addition to the modules above, we need to import the various bits of SageMaker that we will be using. 
``` import sagemaker from sagemaker import get_execution_role from sagemaker.amazon.amazon_estimator import get_image_uri from sagemaker.predictor import csv_serializer # This is an object that represents the SageMaker session that we are currently operating in. This # object contains some useful information that we will need to access later such as our region. session = sagemaker.Session() # This is an object that represents the IAM role that we are currently assigned. When we construct # and launch the training job later we will need to tell it what IAM role it should have. Since our # use case is relatively simple we will simply assign the training job the role we currently have. role = get_execution_role() ``` ## Step 1: Downloading the data Fortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward. ``` boston = load_boston() ``` ## Step 2: Preparing and splitting the data Given that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets. ``` # First we package up the input data and the target variable (the median value) as pandas dataframes. This # will make saving the data to a file a little easier later on. X_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names) Y_bos_pd = pd.DataFrame(boston.target) # We split the dataset into 2/3 training and 1/3 testing sets. X_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33) # Then we split the training set further into 2/3 training and 1/3 validation sets. X_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33) ``` ## Step 3: Uploading the data files to S3 When a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. 
This means that we need to upload the data we want to use for training to S3. In addition, when we perform a batch transform job, SageMaker expects the input data to be stored on S3. We can use the SageMaker API to do this and hide some of the details. ### Save the data locally First we need to create the test, train and validation csv files which we will then upload to S3. ``` # This is our local data directory. We need to make sure that it exists. data_dir = '../data/boston' if not os.path.exists(data_dir): os.makedirs(data_dir) # We use pandas to save our test, train and validation data to csv files. Note that we make sure not to include header # information or an index as this is required by the built in algorithms provided by Amazon. Also, for the train and # validation data, it is assumed that the first entry in each row is the target variable. X_test.to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) ``` ### Upload to S3 Since we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project. ``` prefix = 'boston-xgboost-HL' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix) ``` ## Step 4: Train the XGBoost model Now that we have the training and validation data uploaded to S3, we can construct our XGBoost model and train it. 
We will be making use of the high level SageMaker API to do this which will make the resulting code a little easier to read at the cost of some flexibility. To construct an estimator, the object which we wish to train, we need to provide the location of a container which contains the training code. Since we are using a built in algorithm this container is provided by Amazon. However, the full name of the container is a bit lengthy and depends on the region that we are operating in. Fortunately, SageMaker provides a useful utility method called `get_image_uri` that constructs the image name for us. To use the `get_image_uri` method we need to provide it with our current region, which can be obtained from the session object, and the name of the algorithm we wish to use. In this notebook we will be using XGBoost however you could try another algorithm if you wish. The list of built in algorithms can be found in the list of [Common Parameters](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html). ``` # As stated above, we use this utility method to construct the image name for the training container. container = get_image_uri(session.boto_region_name, 'xgboost') # Now that we know which container to use, we can construct the estimator object. xgb = sagemaker.estimator.Estimator(container, # The image name of the training container role, # The IAM role to use (our current role in this case) train_instance_count=1, # The number of instances to use for training train_instance_type='ml.m4.xlarge', # The type of instance to use for training output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix), # Where to save the output (the model artifacts) sagemaker_session=session) # The current SageMaker session ``` Before asking SageMaker to begin the training job, we should probably set any model specific hyperparameters. There are quite a few that can be set when using the XGBoost algorithm, below are just a few of them. 
If you would like to change the hyperparameters below or modify additional ones you can find additional information on the [XGBoost hyperparameter page](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost_hyperparameters.html) ``` xgb.set_hyperparameters(max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, objective='reg:linear', early_stopping_rounds=10, num_round=200) ``` Now that we have our estimator object completely set up, it is time to train it. To do this we make sure that SageMaker knows our input data is in csv format and then execute the `fit` method. ``` # This is a wrapper around the location of our train and validation data, to make sure that SageMaker # knows our data is in csv format. s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb.fit({'train': s3_input_train, 'validation': s3_input_validation}) ``` ## Step 5: Test the model Now that we have fit our model to the training data, using the validation data to avoid overfitting, we can test our model. To do this we will make use of SageMaker's Batch Transform functionality. To start with, we need to build a transformer object from our fit model. ``` xgb_transformer = xgb.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge') ``` Next we ask SageMaker to begin a batch transform job using our trained model and applying it to the test data we previously stored in S3. We need to make sure to provide SageMaker with the type of data that we are providing to our model, in our case `text/csv`, so that it knows how to serialize our data. In addition, we need to make sure to let SageMaker know how to split our data up into chunks if the entire data set happens to be too large to send to our model all at once. Note that when we ask SageMaker to do this it will execute the batch transform job in the background. 
Since we need to wait for the results of this job before we can continue, we use the `wait()` method. An added benefit of this is that we get some output from our batch transform job which lets us know if anything went wrong. ``` xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line') xgb_transformer.wait() ``` Now that the batch transform job has finished, the resulting output is stored on S3. Since we wish to analyze the output inside of our notebook we can use a bit of notebook magic to copy the output file from its S3 location and save it locally. ``` !aws s3 cp --recursive $xgb_transformer.output_path $data_dir ``` To see how well our model works we can create a simple scatter plot between the predicted and actual values. If the model was completely accurate the resulting scatter plot would look like the line $x=y$. As we can see, our model seems to have done okay but there is room for improvement. ``` Y_pred = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) plt.scatter(Y_test, Y_pred) plt.xlabel("Median Price") plt.ylabel("Predicted Price") plt.title("Median Price vs Predicted Price") ``` ## Optional: Clean up The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook. ``` # First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir ```
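As a numeric complement to the scatter plot in Step 5, a single RMSE value summarizes how far the batch-transform predictions sit from the truth. A sketch with toy stand-ins for `Y_test` and `Y_pred` (the real values come from `test.csv.out`):

```python
import numpy as np

# toy stand-ins for Y_test and Y_pred from the batch transform job
y_true = np.array([24.0, 21.6, 34.7])
y_pred = np.array([25.1, 20.9, 33.0])

# root-mean-squared error, in the same units as the target variable
rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
print(round(rmse, 3))  # 1.237
```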
``` import numpy as np import pandas as pd import scipy import matplotlib.pyplot as plt from pandas.api.types import CategoricalDtype from plotnine import * from scipy.stats import * import scikit_posthocs as sp data = pd.read_csv("./NewCols.csv") dataControl = pd.read_csv("./control.csv") dataLps= pd.read_csv("./lps.csv") diffData = pd.read_csv("./diffData.csv") diff_data=diffData ``` ## TOTAL2 ``` removed_outliers = data.total2.between(data.total2.quantile(.05), data.total2.quantile(.95)) data_total= data[removed_outliers] ggplot(data_total, aes(x='treatment',y="total2" ), ) + geom_boxplot(outlier_shape = "") + geom_jitter(data_total,aes(y="total2",colour='treatment',shape='treatment') ) + ggtitle("QQ Plot of IRAK-1 expression per GbP") + xlab("Treatment") + ylab("Total IRAK-1 Levels per Gigabase pair") + ylim(data_total.total2.quantile(.05), data_total.total2.quantile(.95)) a = 0.05 wilcoxon(diffData["diff_total2"]) removed_outliers_diffData = diff_data.diff_total2.between(diff_data.diff_total2.quantile(.05), diff_data.diff_total2.quantile(.95)) difftotalData=diff_data[removed_outliers_diffData] ggplot(difftotalData, aes( x='0',y='diff_total2') ) + geom_boxplot() + geom_point(color="red") + ylim(difftotalData.diff_total2.quantile(.05), difftotalData.diff_total2.quantile(.95)) + ggtitle("QQ Plot of changes in IRAK-1 levels per Gbp") + xlab("Treatment") + ylab("Changes in IRAK-1 Levels per Gigabase pair") from sklearn.linear_model import LinearRegression model = LinearRegression().fit(dataLps.total2.to_numpy().reshape((-1, 1)), dataControl.total2) r_sq= model.score(dataLps.total2.to_numpy().reshape((-1, 1)), dataControl.total2) print('coefficient of determination:', r_sq) print('intercept:', model.intercept_) print('slope:', model.coef_) ``` ## - A ``` removed_outliers = data.totalA.between(data.totalA.quantile(.05), data.totalA.quantile(.95)) data_total= data[removed_outliers] ggplot(data, aes(x='treatment',y="totalA" ), ) + geom_boxplot(outlier_shape = "") + 
geom_jitter(data_total,aes(y="totalA",colour='treatment',shape='treatment') ) + ggtitle("Boxplot Plot of IRAK-1 A expression per GbP") + xlab("Treatment") + ylab("Total IRAK-1 Levels per Gigabase pair") + ylim(data_total.totalA.quantile(.05), data_total.totalA.quantile(.95)) shapiro_test = shapiro(diff_data['diff_totalA']) shapiro_test removed_outliers_diffData = diff_data.diff_totalA.between(diff_data.diff_totalA.quantile(.05), diff_data.diff_totalA.quantile(.95)) difftotalData=diff_data[removed_outliers_diffData] ggplot(difftotalData, aes( x='0',y='diff_totalA') ) + geom_boxplot() + geom_point(color="red") + ylim(difftotalData.diff_totalA.quantile(.05), difftotalData.diff_totalA.quantile(.95)) + ggtitle("QQ Plot of changes in IRAK-1A levels per Gbp") + xlab("Treatment") + ylab("Changes in IRAK-1A Levels per Gigabase pair") from sklearn.linear_model import LinearRegression model = LinearRegression().fit(dataLps.total2.to_numpy().reshape((-1, 1)), dataControl.total2) r_sq= model.score(dataLps.total2.to_numpy().reshape((-1, 1)), dataControl.total2) print('coefficient of determination:', r_sq) print('intercept:', model.intercept_) print('slope:', model.coef_) ``` # C ``` data = pd.read_csv("./NewCols.csv") removed_outliers_C = data.totalC.between(data.totalC.quantile(.05), data.totalC.quantile(.95)) data_total_C= data[removed_outliers_C] ggplot(data, aes(x='treatment',y="totalC" ), ) + geom_boxplot(outlier_shape = "") + geom_jitter(data_total_C ,aes(y="totalC",colour='treatment',shape='treatment') ) + ggtitle("QQ Plot of IRAK-1C expression per GbP") + xlab("Treatment") + ylab("Total IRAK-1C Levels per Gigabase pair") + ylim(data.totalC.quantile(.05), data.totalC.quantile(.95)) # min(data_total["totalC"]), max(data_total_C["totalC"]) ) shapiro_test = shapiro(diff_data['diff_totalC']) shapiro_test removed_outliers_diffData = diff_data.diff_totalC.between(diff_data.diff_totalC.quantile(.05), diff_data.diff_totalC.quantile(.95)) 
difftotalData=diff_data[removed_outliers_diffData] ggplot(difftotalData, aes( x='0',y='diff_totalC') ) + geom_boxplot() + geom_point(color="red") + ylim(difftotalData.diff_totalC.quantile(.05), difftotalData.diff_totalC.quantile(.95)) + ggtitle("QQ Plot of changes in IRAK-1C levels per Gbp") + xlab("Treatment") + ylab("Changes in IRAK-1 C Levels per Gigabase pair") ```
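The 5th-95th percentile trim applied before each plot above can be demonstrated on a toy series (pandas only; the real data come from `NewCols.csv`):

```python
import pandas as pd

# same pattern as data.total2.between(data.total2.quantile(.05),
#                                     data.total2.quantile(.95)):
# keep only the middle 90% of the distribution, dropping extremes
s = pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9, 100])
mask = s.between(s.quantile(.05), s.quantile(.95))
print(s[mask].tolist())  # [2, 3, 4, 5, 6, 7, 8, 9]
```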
# Challenge 3

In this challenge we will practice our knowledge of probability distributions. To do so, the challenge is split into two parts:

1. The first part has 3 questions about an artificial *data set* containing samples from a normal and a binomial distribution.
2. The second part, with 2 questions, analyzes the distribution of one variable from the [Pulsar Star](https://archive.ics.uci.edu/ml/datasets/HTRU2) _data set_.

> Note: please do not rename the answer functions.

## General _setup_

```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
import seaborn as sns
from statsmodels.distributions.empirical_distribution import ECDF

#%matplotlib inline

from IPython.core.pylabtools import figsize

figsize(12, 8)
sns.set()
```

## Part 1

### Part 1 _setup_

```
np.random.seed(42)

dataframe = pd.DataFrame({"normal": sct.norm.rvs(20, 4, size=10000),
                          "binomial": sct.binom.rvs(100, 0.2, size=10000)})
```

## Start your analysis of part 1 here

```
# Your analysis of part 1 starts here.
dataframe.head()
dataframe.info()
dataframe.describe()
```

## Question 1

What is the difference between the quartiles (Q1, Q2, and Q3) of the `normal` and `binomial` variables of `dataframe`? Answer as a tuple of three elements rounded to three decimal places.

In other words, let `q1_norm`, `q2_norm`, and `q3_norm` be the quantiles of the `normal` variable and `q1_binom`, `q2_binom`, and `q3_binom` the quantiles of the `binomial` variable: what is the difference `(q1_norm - q1_binom, q2_norm - q2_binom, q3_norm - q3_binom)`?

```
def q1():
    quantiles = dataframe.quantile([.25, .5, .75])
    quantiles_diff = quantiles['normal'] - quantiles['binomial']
    return tuple(quantiles_diff.round(3).to_list())

q1()
```

For reflection:

* Did you expect values of this magnitude?
* Can you explain how two apparently very different distributions (one discrete, one continuous, for example) end up giving such values?
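One way to answer the reflection above: the two columns were drawn so that their first two moments coincide, because a Binomial($n$, $p$) distribution has mean $np$ and standard deviation $\sqrt{np(1-p)}$:

```python
import numpy as np

# Binomial(100, 0.2) has mean n*p = 20 and std sqrt(n*p*(1-p)) = 4,
# matching the Normal(20, 4) column, hence the nearly equal quartiles
n, p = 100, 0.2
mean, std = n * p, np.sqrt(n * p * (1 - p))
print(round(mean, 3), round(std, 3))  # 20.0 4.0
```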
## Question 2

Consider the interval $[\bar{x} - s, \bar{x} + s]$, where $\bar{x}$ is the sample mean and $s$ is the sample standard deviation.

What is the probability of this interval, computed with the empirical cumulative distribution function (empirical CDF) of the `normal` variable? Answer as a single scalar rounded to three decimal places.

```
def q2():
    inferior = dataframe.normal.mean() - dataframe.normal.std()
    superior = dataframe.normal.mean() + dataframe.normal.std()
    ecdf = ECDF(dataframe.normal)
    # note: plain float() instead of the deprecated np.float alias
    return float(round(ecdf(superior) - ecdf(inferior), 3))

q2()
```

For reflection:

* Is this value close to the theoretical expectation?
* Also try the intervals $[\bar{x} - 2s, \bar{x} + 2s]$ and $[\bar{x} - 3s, \bar{x} + 3s]$.

## Question 3

What is the difference between the means and the variances of the `binomial` and `normal` variables? Answer as a tuple of two elements rounded to three decimal places.

In other words, let `m_binom` and `v_binom` be the mean and the variance of the `binomial` variable, and `m_norm` and `v_norm` the mean and the variance of the `normal` variable. What are the differences `(m_binom - m_norm, v_binom - v_norm)`?

```
def q3():
    mean_std = dataframe.describe()[1:3]
    mean_std.loc['std'] **= 2
    mean_std_diff = mean_std['binomial'] - mean_std['normal']
    return tuple(mean_std_diff.round(3).to_list())

q3()
```

For reflection:

* Did you expect values of this magnitude?
* What is the effect of increasing or decreasing $n$ (currently 100) on the distribution of the `binomial` variable?
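For the reflection after Question 2, the empirical rule can be checked on a fresh synthetic sample (NumPy only, independent of the challenge's `dataframe`): the mass within one sample standard deviation of the mean should land near the theoretical 0.683:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(20, 4, size=10_000)

# fraction of the sample inside [mean - std, mean + std]
lo, hi = x.mean() - x.std(), x.mean() + x.std()
frac = float(np.mean((x >= lo) & (x <= hi)))
print(round(frac, 2))
```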
## Part 2

### Part 2 _setup_

```
stars = pd.read_csv("pulsar_stars.csv")

stars.rename({old_name: new_name
              for (old_name, new_name)
              in zip(stars.columns,
                     ["mean_profile", "sd_profile", "kurt_profile", "skew_profile",
                      "mean_curve", "sd_curve", "kurt_curve", "skew_curve", "target"])
             },
             axis=1, inplace=True)

stars.loc[:, "target"] = stars.target.astype(bool)
```

## Start your analysis of part 2 here

```
stars.head()
stars.info()
stars.describe()
```

## Question 4

Considering the `mean_profile` variable of `stars`:

1. Filter only the values of `mean_profile` where `target == 0` (i.e., where the star is not a pulsar).
2. Standardize the filtered `mean_profile` variable to have mean 0 and variance 1. We will call the resulting variable `false_pulsar_mean_profile_standardized`.

Find the theoretical quantiles of a normal distribution with mean 0 and variance 1 for 0.80, 0.90, and 0.95 using the `norm.ppf()` function available in `scipy.stats`.

What are the probabilities associated with these quantiles according to the empirical CDF of the `false_pulsar_mean_profile_standardized` variable? Answer as a tuple of three elements rounded to three decimal places.

```
def standardization(x):
    return (x - x.mean()) / x.std()

def q4():
    false_pulsar_mean_profile = stars.loc[stars['target'] == False]['mean_profile']
    false_pulsar_mean_profile_standardized = standardization(false_pulsar_mean_profile)
    ecdf = ECDF(false_pulsar_mean_profile_standardized)
    ppf = pd.Series(ecdf(sct.norm.ppf([0.80, 0.90, 0.95])), [0.80, 0.90, 0.95])
    return tuple(ppf.round(3).to_list())

q4()
```

For reflection:

* Do the values you found make sense?
* What might this say about the distribution of the `false_pulsar_mean_profile_standardized` variable?

## Question 5

What is the difference between the Q1, Q2, and Q3 quantiles of `false_pulsar_mean_profile_standardized` and the corresponding theoretical quantiles of a normal distribution with mean 0 and variance 1?
Answer as a tuple of three elements rounded to three decimal places.

```
def standardization(x):
    return (x - x.mean()) / x.std()

def q5():
    false_pulsar_mean_profile = stars.loc[stars['target'] == False]['mean_profile']
    false_pulsar_mean_profile_standardized = standardization(false_pulsar_mean_profile)
    ppf = pd.Series(sct.norm.ppf([0.25, 0.50, 0.75]), [0.25, 0.50, 0.75])
    quantiles = false_pulsar_mean_profile_standardized.quantile([0.25, 0.50, 0.75])
    return tuple((quantiles - ppf).round(3).to_list())

q5()
```

For reflection:

* Do the values found make sense?
* What can this say about the distribution of the variable `false_pulsar_mean_profile_standardized`?
* Fun fact: some hypothesis tests for normality use this same approach.
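Question 5's comparison of sample quantiles against theoretical normal quantiles is essentially a numeric Q–Q check. A sketch on simulated data, using the standard library's `NormalDist` in place of `scipy.stats.norm.ppf`:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
sample = rng.normal(size=50_000)
# standardize, as done for false_pulsar_mean_profile_standardized
sample = (sample - sample.mean()) / sample.std()

probs = [0.25, 0.50, 0.75]
theoretical = np.array([NormalDist().inv_cdf(p) for p in probs])
empirical = np.quantile(sample, probs)

diffs = (empirical - theoretical).round(3)
print(diffs)  # near zero for normal data
```

Large, systematic differences (e.g., a heavier right tail pushing Q3 up) would signal a departure from normality.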
# Image classification transfer learning demo

1. [Introduction](#Introduction)
2. [Prerequisites and Preprocessing](#Prerequisites-and-Preprocessing)
3. [Fine-tuning the Image classification model](#Fine-tuning-the-Image-classification-model)
4. [Set up hosting for the model](#Set-up-hosting-for-the-model)
  1. [Import model into hosting](#Import-model-into-hosting)
  2. [Create endpoint configuration](#Create-endpoint-configuration)
  3. [Create endpoint](#Create-endpoint)
5. [Perform Inference](#Perform-Inference)

## Introduction

Welcome to our end-to-end example of the distributed image classification algorithm in transfer learning mode. In this demo, we will use the Amazon SageMaker image classification algorithm in transfer learning mode to fine-tune a pre-trained model (trained on ImageNet data) to classify a new dataset. In particular, the pre-trained model will be fine-tuned using the [caltech-256 dataset](http://www.vision.caltech.edu/Image_Datasets/Caltech256/).

To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on.

## Prerequisites and Preprocessing

### Permissions and environment variables

Here we set up the linkage and authentication to AWS services. There are three parts to this:

* The roles used to give learning and hosting access to your data.
This will automatically be obtained from the role used to start the notebook.
* The S3 bucket that you want to use for training and model data.
* The Amazon SageMaker image classification docker image, which need not be changed.

```
%%time
import boto3
import re
from sagemaker import get_execution_role

role = get_execution_role()

bucket = '<<bucket-name>>'  # customize to your bucket

containers = {'us-west-2': '433757028032.dkr.ecr.us-west-2.amazonaws.com/image-classification:latest',
              'us-east-1': '811284229777.dkr.ecr.us-east-1.amazonaws.com/image-classification:latest',
              'us-east-2': '825641698319.dkr.ecr.us-east-2.amazonaws.com/image-classification:latest',
              'eu-west-1': '685385470294.dkr.ecr.eu-west-1.amazonaws.com/image-classification:latest'}
training_image = containers[boto3.Session().region_name]
print(training_image)
```

## Fine-tuning the Image classification model

The caltech-256 dataset consists of images from 257 categories (the last one being a clutter category) and has 30k images, with a minimum of 80 images and a maximum of about 800 images per category.

The image classification algorithm can take two types of input formats. The first is a [recordio format](https://mxnet.incubator.apache.org/tutorials/basic/record_io.html) and the other is a [lst format](https://mxnet.incubator.apache.org/how_to/recordio.html?highlight=im2rec). Files for both these formats are available at http://data.dmlc.ml/mxnet/data/caltech-256/. In this example, we will use the recordio format for training and use the training/validation split [specified here](http://data.dmlc.ml/mxnet/data/caltech-256/).
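As a rough illustration of the alternative `lst` format (my understanding from the im2rec documentation — one line per image with a numeric index, a label, and a relative path, tab-separated; verify against the linked MXNet page), here is how such lines could be generated. The paths and label indices below are made up for illustration:

```python
def make_lst_lines(image_paths_with_labels):
    """Build im2rec-style .lst lines: "<index>\t<label>\t<relative path>"."""
    lines = []
    for idx, (path, label) in enumerate(image_paths_with_labels):
        # im2rec writes labels as floats
        lines.append("{}\t{}\t{}".format(idx, float(label), path))
    return lines

lines = make_lst_lines([
    ("008.bathtub/008_0001.jpg", 7),
    ("009.bear/009_0001.jpg", 8),
])
print("\n".join(lines))
```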
```
import os
import urllib.request
import boto3

def download(url):
    filename = url.split("/")[-1]
    if not os.path.exists(filename):
        urllib.request.urlretrieve(url, filename)

def upload_to_s3(channel, file):
    s3 = boto3.resource('s3')
    data = open(file, "rb")
    key = channel + '/' + file
    s3.Bucket(bucket).put_object(Key=key, Body=data)

# caltech-256
download('http://data.mxnet.io/data/caltech-256/caltech-256-60-train.rec')
download('http://data.mxnet.io/data/caltech-256/caltech-256-60-val.rec')
upload_to_s3('validation', 'caltech-256-60-val.rec')
upload_to_s3('train', 'caltech-256-60-train.rec')
```

Once we have the data available in the correct format for training, the next step is to actually train the model using the data. Before training the model, we need to set up the training parameters. The next section will explain the parameters in detail.

## Training parameters

There are two kinds of parameters that need to be set for training. The first are the parameters for the training job. These include:

* **Input specification**: These are the training and validation channels that specify the path where training data is present. These are specified in the "InputDataConfig" section. The main parameters that need to be set are the "ContentType", which can be "application/x-recordio" or "application/x-image" based on the input data format, and the "S3Uri", which specifies the bucket and the folder where the data is present.
* **Output specification**: This is specified in the "OutputDataConfig" section. We just need to specify the path where the output can be stored after training.
* **Resource config**: This section specifies the type of instance on which to run the training and the number of hosts used for training. If "InstanceCount" is more than 1, then training can be run in a distributed manner.

Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm.
These are:

* **num_layers**: The number of layers (depth) for the network. We use 18 in this sample, but other values such as 50 or 152 can be used.
* **num_training_samples**: This is the total number of training samples. It is set to 15420 for the caltech dataset with the current split.
* **num_classes**: This is the number of output classes for the new dataset. ImageNet was trained with 1000 output classes, but the number of output classes can be changed for fine-tuning. For caltech, we use 257 because it has 256 object categories + 1 clutter class.
* **epochs**: Number of training epochs.
* **learning_rate**: Learning rate for training.
* **mini_batch_size**: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size, where N is the number of hosts on which training is run.

After setting the training parameters, we kick off training and poll for status until training is completed, which in this example takes between 10 and 12 minutes per epoch on a p2.xlarge machine. The network typically converges after 10 epochs.

```
# The algorithm supports multiple network depths (number of layers).
# They are 18, 34, 50, 101, 152 and 200.
# For this training, we will use 18 layers
num_layers = 18
# we need to specify the input image shape for the training data
image_shape = "3,224,224"
# we also need to specify the number of training samples in the training set
# for caltech it is 15420
num_training_samples = 15420
# specify the number of output classes
num_classes = 257
# batch size for training
mini_batch_size = 128
# number of epochs
epochs = 2
# learning rate
learning_rate = 0.01
top_k = 2
# Since we are using transfer learning, we set use_pretrained_model to 1 so that weights can be
# initialized with pre-trained weights
use_pretrained_model = 1
```

# Training

Run the training using the Amazon SageMaker CreateTrainingJob API.

```
%%time
import time
import boto3
from time import gmtime, strftime

s3 = boto3.client('s3')
# create unique job name
job_name_prefix = 'sagemaker-imageclassification-notebook'
timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
job_name = job_name_prefix + timestamp
training_params = \
{
    # specify the training docker image
    "AlgorithmSpecification": {
        "TrainingImage": training_image,
        "TrainingInputMode": "File"
    },
    "RoleArn": role,
    "OutputDataConfig": {
        "S3OutputPath": 's3://{}/{}/output'.format(bucket, job_name_prefix)
    },
    "ResourceConfig": {
        "InstanceCount": 1,
        "InstanceType": "ml.p2.8xlarge",
        "VolumeSizeInGB": 50
    },
    "TrainingJobName": job_name,
    "HyperParameters": {
        "image_shape": image_shape,
        "num_layers": str(num_layers),
        "num_training_samples": str(num_training_samples),
        "num_classes": str(num_classes),
        "mini_batch_size": str(mini_batch_size),
        "epochs": str(epochs),
        "learning_rate": str(learning_rate),
        "use_pretrained_model": str(use_pretrained_model)
    },
    "StoppingCondition": {
        "MaxRuntimeInSeconds": 360000
    },
    # Training data should be inside a subdirectory called "train"
    # Validation data should be inside a subdirectory called "validation"
    # The algorithm currently only supports a fully replicated model (where data is copied onto
    # each machine)
    "InputDataConfig": [
        {
            "ChannelName": "train",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": 's3://{}/train/'.format(bucket),
                    "S3DataDistributionType": "FullyReplicated"
                }
            },
            "ContentType": "application/x-recordio",
            "CompressionType": "None"
        },
        {
            "ChannelName": "validation",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": 's3://{}/validation/'.format(bucket),
                    "S3DataDistributionType": "FullyReplicated"
                }
            },
            "ContentType": "application/x-recordio",
            "CompressionType": "None"
        }
    ]
}

print('Training job name: {}'.format(job_name))
print('\nInput Data Location: {}'.format(training_params['InputDataConfig'][0]['DataSource']['S3DataSource']))

# create the Amazon SageMaker training job
sagemaker = boto3.client(service_name='sagemaker')
sagemaker.create_training_job(**training_params)

# confirm that the training job has started
status = sagemaker.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print('Training job current status: {}'.format(status))

try:
    # wait for the job to finish and report the ending status
    sagemaker.get_waiter('training_job_completed_or_stopped').wait(TrainingJobName=job_name)
    training_info = sagemaker.describe_training_job(TrainingJobName=job_name)
    status = training_info['TrainingJobStatus']
    print("Training job ended with status: " + status)
except:
    print('Training failed to start')
    # if an exception is raised, that means the job has failed
    message = sagemaker.describe_training_job(TrainingJobName=job_name)['FailureReason']
    print('Training failed with the following error: {}'.format(message))

training_info = sagemaker.describe_training_job(TrainingJobName=job_name)
status = training_info['TrainingJobStatus']
print("Training job ended with status: " + status)
```

If you see the message,

> `Training job ended with status: Completed`

then that means training successfully completed and the output model was stored in the output path specified by `training_params['OutputDataConfig']`.
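In my experience the model artifact ends up under `<S3OutputPath>/<job name>/output/model.tar.gz` (an assumption about SageMaker's layout — `describe_training_job` reports the exact location, as used later in this notebook). A tiny helper to build that expected key, with a made-up bucket name for illustration:

```python
def expected_artifact_uri(s3_output_path, job_name):
    # SageMaker typically writes the artifact to
    # <S3OutputPath>/<job name>/output/model.tar.gz
    return "{}/{}/output/model.tar.gz".format(s3_output_path.rstrip("/"), job_name)

uri = expected_artifact_uri(
    "s3://my-bucket/sagemaker-imageclassification-notebook/output",
    "sagemaker-imageclassification-notebook-2020-01-01-00-00-00")
print(uri)
```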
You can also view information about and the status of a training job using the AWS SageMaker console. Just click on the "Jobs" tab.

# Inference

***

A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the class of a given image. This section involves several steps:

1. [Create Model](#CreateModel) - Create a model from the training output.
1. [Create Endpoint Configuration](#CreateEndpointConfiguration) - Create a configuration defining an endpoint.
1. [Create Endpoint](#CreateEndpoint) - Use the configuration to create an inference endpoint.
1. [Perform Inference](#Perform-Inference) - Perform inference on some input data using the endpoint.

## Create Model

We now create a SageMaker Model from the training output. Using the model, we can create an Endpoint Configuration.

```
%%time
import boto3
from time import gmtime, strftime

sage = boto3.Session().client(service_name='sagemaker')

model_name = "test-image-classification-model"
print(model_name)

info = sage.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)

containers = {'us-west-2': '433757028032.dkr.ecr.us-west-2.amazonaws.com/image-classification:latest',
              'us-east-1': '811284229777.dkr.ecr.us-east-1.amazonaws.com/image-classification:latest',
              'us-east-2': '825641698319.dkr.ecr.us-east-2.amazonaws.com/image-classification:latest',
              'eu-west-1': '685385470294.dkr.ecr.eu-west-1.amazonaws.com/image-classification:latest'}
hosting_image = containers[boto3.Session().region_name]

primary_container = {
    'Image': hosting_image,
    'ModelDataUrl': model_data,
}

create_model_response = sage.create_model(
    ModelName = model_name,
    ExecutionRoleArn = role,
    PrimaryContainer = primary_container)

print(create_model_response['ModelArn'])
```

### Create Endpoint Configuration

At launch, we will support configuring REST endpoints in hosting with multiple models, e.g.
for A/B testing purposes. In order to support this, customers create an endpoint configuration that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment, and at launch will describe the autoscaling configuration.

```
from time import gmtime, strftime

timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
endpoint_config_name = job_name_prefix + '-epc-' + timestamp
endpoint_config_response = sage.create_endpoint_config(
    EndpointConfigName = endpoint_config_name,
    ProductionVariants=[{
        'InstanceType':'ml.m4.xlarge',
        'InitialInstanceCount':1,
        'ModelName':model_name,
        'VariantName':'AllTraffic'}])

print('Endpoint configuration name: {}'.format(endpoint_config_name))
print('Endpoint configuration arn: {}'.format(endpoint_config_response['EndpointConfigArn']))
```

### Create Endpoint

Lastly, the customer creates the endpoint that serves up the model, by specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 9-11 minutes to complete.

```
%%time
import time

timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
endpoint_name = job_name_prefix + '-ep-' + timestamp
print('Endpoint name: {}'.format(endpoint_name))

endpoint_params = {
    'EndpointName': endpoint_name,
    'EndpointConfigName': endpoint_config_name,
}
endpoint_response = sagemaker.create_endpoint(**endpoint_params)
print('EndpointArn = {}'.format(endpoint_response['EndpointArn']))
```

Finally, now the endpoint can be created. It may take some time to create the endpoint...
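The `endpoint_in_service` waiter used next polls the endpoint status on our behalf. For environments without a suitable waiter, a generic poll loop makes the retry logic explicit — note this is a hypothetical helper, not part of the SageMaker SDK, demonstrated here against a simulated status source:

```python
import time

def wait_for_status(get_status, terminal_statuses, poll_seconds=30, timeout_seconds=900):
    """Call get_status() until it returns a terminal status or we time out."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = get_status()
        if status in terminal_statuses:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("status did not become terminal in time")

# simulated status source standing in for describe_endpoint(...)['EndpointStatus']
statuses = iter(["Creating", "Creating", "InService"])
final = wait_for_status(lambda: next(statuses),
                        {"InService", "Failed"}, poll_seconds=0)
print(final)  # InService
```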
``` # get the status of the endpoint response = sagemaker.describe_endpoint(EndpointName=endpoint_name) status = response['EndpointStatus'] print('EndpointStatus = {}'.format(status)) # wait until the status has changed sagemaker.get_waiter('endpoint_in_service').wait(EndpointName=endpoint_name) # print the status of the endpoint endpoint_response = sagemaker.describe_endpoint(EndpointName=endpoint_name) status = endpoint_response['EndpointStatus'] print('Endpoint creation ended with EndpointStatus = {}'.format(status)) if status != 'InService': raise Exception('Endpoint creation failed.') ``` If you see the message, > `Endpoint creation ended with EndpointStatus = InService` then congratulations! You now have a functioning inference endpoint. You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console. We will finally create a runtime object from which we can invoke the endpoint. ## Perform Inference Finally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate classifications from the trained model using that endpoint. 
``` import boto3 runtime = boto3.Session().client(service_name='runtime.sagemaker') ``` ### Download test image ``` !wget -O /tmp/test.jpg http://www.vision.caltech.edu/Image_Datasets/Caltech256/images/008.bathtub/008_0007.jpg file_name = '/tmp/test.jpg' # test image from IPython.display import Image Image(file_name) import json import numpy as np with open(file_name, 'rb') as f: payload = f.read() payload = bytearray(payload) response = runtime.invoke_endpoint(EndpointName=endpoint_name, ContentType='application/x-image', Body=payload) result = response['Body'].read() # result will be in json format and convert it to ndarray result = json.loads(result) # the result will output the probabilities for all classes # find the class with maximum probability and print the class index index = np.argmax(result) object_categories = ['ak47', 'american-flag', 'backpack', 'baseball-bat', 'baseball-glove', 'basketball-hoop', 'bat', 'bathtub', 'bear', 'beer-mug', 'billiards', 'binoculars', 'birdbath', 'blimp', 'bonsai-101', 'boom-box', 'bowling-ball', 'bowling-pin', 'boxing-glove', 'brain-101', 'breadmaker', 'buddha-101', 'bulldozer', 'butterfly', 'cactus', 'cake', 'calculator', 'camel', 'cannon', 'canoe', 'car-tire', 'cartman', 'cd', 'centipede', 'cereal-box', 'chandelier-101', 'chess-board', 'chimp', 'chopsticks', 'cockroach', 'coffee-mug', 'coffin', 'coin', 'comet', 'computer-keyboard', 'computer-monitor', 'computer-mouse', 'conch', 'cormorant', 'covered-wagon', 'cowboy-hat', 'crab-101', 'desk-globe', 'diamond-ring', 'dice', 'dog', 'dolphin-101', 'doorknob', 'drinking-straw', 'duck', 'dumb-bell', 'eiffel-tower', 'electric-guitar-101', 'elephant-101', 'elk', 'ewer-101', 'eyeglasses', 'fern', 'fighter-jet', 'fire-extinguisher', 'fire-hydrant', 'fire-truck', 'fireworks', 'flashlight', 'floppy-disk', 'football-helmet', 'french-horn', 'fried-egg', 'frisbee', 'frog', 'frying-pan', 'galaxy', 'gas-pump', 'giraffe', 'goat', 'golden-gate-bridge', 'goldfish', 'golf-ball', 'goose', 
'gorilla', 'grand-piano-101', 'grapes', 'grasshopper', 'guitar-pick', 'hamburger', 'hammock', 'harmonica', 'harp', 'harpsichord', 'hawksbill-101', 'head-phones', 'helicopter-101', 'hibiscus', 'homer-simpson', 'horse', 'horseshoe-crab', 'hot-air-balloon', 'hot-dog', 'hot-tub', 'hourglass', 'house-fly', 'human-skeleton', 'hummingbird', 'ibis-101', 'ice-cream-cone', 'iguana', 'ipod', 'iris', 'jesus-christ', 'joy-stick', 'kangaroo-101', 'kayak', 'ketch-101', 'killer-whale', 'knife', 'ladder', 'laptop-101', 'lathe', 'leopards-101', 'license-plate', 'lightbulb', 'light-house', 'lightning', 'llama-101', 'mailbox', 'mandolin', 'mars', 'mattress', 'megaphone', 'menorah-101', 'microscope', 'microwave', 'minaret', 'minotaur', 'motorbikes-101', 'mountain-bike', 'mushroom', 'mussels', 'necktie', 'octopus', 'ostrich', 'owl', 'palm-pilot', 'palm-tree', 'paperclip', 'paper-shredder', 'pci-card', 'penguin', 'people', 'pez-dispenser', 'photocopier', 'picnic-table', 'playing-card', 'porcupine', 'pram', 'praying-mantis', 'pyramid', 'raccoon', 'radio-telescope', 'rainbow', 'refrigerator', 'revolver-101', 'rifle', 'rotary-phone', 'roulette-wheel', 'saddle', 'saturn', 'school-bus', 'scorpion-101', 'screwdriver', 'segway', 'self-propelled-lawn-mower', 'sextant', 'sheet-music', 'skateboard', 'skunk', 'skyscraper', 'smokestack', 'snail', 'snake', 'sneaker', 'snowmobile', 'soccer-ball', 'socks', 'soda-can', 'spaghetti', 'speed-boat', 'spider', 'spoon', 'stained-glass', 'starfish-101', 'steering-wheel', 'stirrups', 'sunflower-101', 'superman', 'sushi', 'swan', 'swiss-army-knife', 'sword', 'syringe', 'tambourine', 'teapot', 'teddy-bear', 'teepee', 'telephone-box', 'tennis-ball', 'tennis-court', 'tennis-racket', 'theodolite', 'toaster', 'tomato', 'tombstone', 'top-hat', 'touring-bike', 'tower-pisa', 'traffic-light', 'treadmill', 'triceratops', 'tricycle', 'trilobite-101', 'tripod', 't-shirt', 'tuning-fork', 'tweezer', 'umbrella-101', 'unicorn', 'vcr', 'video-projector', 'washing-machine', 
'watch-101', 'waterfall', 'watermelon', 'welding-mask', 'wheelbarrow', 'windmill', 'wine-bottle', 'xylophone', 'yarmulke', 'yo-yo', 'zebra', 'airplanes-101', 'car-side-101', 'faces-easy-101', 'greyhound', 'tennis-shoes', 'toad', 'clutter'] print("Result: label - " + object_categories[index] + ", probability - " + str(result[index])) ``` ### Clean up When we're done with the endpoint, we can just delete it and the backing instances will be released. Run the following cell to delete the endpoint. ``` sage.delete_endpoint(EndpointName=endpoint_name) ```
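The inference cell above takes the class with maximum probability from the endpoint's probability vector. The same post-processing on a toy probability vector, with a truncated stand-in for the full `object_categories` list:

```python
import numpy as np

object_categories = ['ak47', 'american-flag', 'backpack']  # truncated toy list
result = [0.1, 0.7, 0.2]  # stand-in for the JSON-decoded endpoint response

# find the class with maximum probability and report it
index = int(np.argmax(result))
print("Result: label - {}, probability - {}".format(object_categories[index], result[index]))
```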
# Concise Implementation of Softmax Regression :label:`sec_softmax_concise` Just as high-level APIs of deep learning frameworks made it much easier to implement linear regression in :numref:`sec_linear_concise`, we will find it similarly (or possibly more) convenient for implementing classification models. Let us stick with the Fashion-MNIST dataset and keep the batch size at 256 as in :numref:`sec_softmax_scratch`. ``` from d2l import tensorflow as d2l import tensorflow as tf batch_size = 256 train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size) ``` ## Initializing Model Parameters As mentioned in :numref:`sec_softmax`, the output layer of softmax regression is a fully-connected layer. Therefore, to implement our model, we just need to add one fully-connected layer with 10 outputs to our `Sequential`. Again, here, the `Sequential` is not really necessary, but we might as well form the habit since it will be ubiquitous when implementing deep models. Again, we initialize the weights at random with zero mean and standard deviation 0.01. ``` net = tf.keras.models.Sequential() net.add(tf.keras.layers.Flatten(input_shape=(28, 28))) weight_initializer = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.01) net.add(tf.keras.layers.Dense(10, kernel_initializer=weight_initializer)) ``` ## Softmax Implementation Revisited :label:`subsec_softmax-implementation-revisited` In the previous example of :numref:`sec_softmax_scratch`, we calculated our model's output and then ran this output through the cross-entropy loss. Mathematically, that is a perfectly reasonable thing to do. However, from a computational perspective, exponentiation can be a source of numerical stability issues. Recall that the softmax function calculates $\hat y_j = \frac{\exp(o_j)}{\sum_k \exp(o_k)}$, where $\hat y_j$ is the $j^\mathrm{th}$ element of the predicted probability distribution $\hat{\mathbf{y}}$ and $o_j$ is the $j^\mathrm{th}$ element of the logits $\mathbf{o}$. 
If some of the $o_k$ are very large (i.e., very positive), then $\exp(o_k)$ might be larger than the largest number we can have for certain data types (i.e., *overflow*). This would make the denominator (and/or numerator) `inf` (infinity) and we wind up encountering either 0, `inf`, or `nan` (not a number) for $\hat y_j$. In these situations we do not get a well-defined return value for cross-entropy.

One trick to get around this is to first subtract $\max(o_k)$ from all $o_k$ before proceeding with the softmax calculation. You can verify that shifting each $o_k$ by a constant does not change the return value of softmax. After the subtraction and normalization step, it might be possible that some $o_j$ have large negative values and thus that the corresponding $\exp(o_j)$ will take values close to zero. These might be rounded to zero due to finite precision (i.e., *underflow*), making $\hat y_j$ zero and giving us `-inf` for $\log(\hat y_j)$. A few steps down the road in backpropagation, we might find ourselves faced with a screenful of the dreaded `nan` results.

Fortunately, we are saved by the fact that even though we are computing exponential functions, we ultimately intend to take their log (when calculating the cross-entropy loss). By combining these two operators, softmax and cross-entropy, we can escape the numerical stability issues that might otherwise plague us during backpropagation. As shown in the equation below, we avoid calculating $\exp(o_j)$ and can instead use $o_j$ directly due to the canceling in $\log(\exp(\cdot))$.

$$
\begin{aligned}
\log{(\hat y_j)} & = \log\left( \frac{\exp(o_j)}{\sum_k \exp(o_k)}\right) \\
& = \log{(\exp(o_j))}-\log{\left( \sum_k \exp(o_k) \right)} \\
& = o_j -\log{\left( \sum_k \exp(o_k) \right)}.
\end{aligned}
$$

We will want to keep the conventional softmax function handy in case we ever want to evaluate the output probabilities by our model.
But instead of passing softmax probabilities into our new loss function, we will just pass the logits and compute the softmax and its log all at once inside the cross-entropy loss function, which does smart things like the ["LogSumExp trick"](https://en.wikipedia.org/wiki/LogSumExp).

```
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
```

## Optimization Algorithm

Here, we use minibatch stochastic gradient descent with a learning rate of 0.1 as the optimization algorithm. Note that this is the same as we applied in the linear regression example and it illustrates the general applicability of the optimizers.

```
trainer = tf.keras.optimizers.SGD(learning_rate=.1)
```

## Training

Next we call the training function defined in :numref:`sec_softmax_scratch` to train the model.

```
num_epochs = 10
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
```

As before, this algorithm converges to a solution that achieves a decent accuracy, albeit this time with fewer lines of code than before.

## Summary

* Using high-level APIs, we can implement softmax regression much more concisely.
* From a computational perspective, implementing softmax regression has intricacies. Note that in many cases, a deep learning framework takes additional precautions beyond these most well-known tricks to ensure numerical stability, saving us from even more pitfalls that we would encounter if we tried to code all of our models from scratch in practice.

## Exercises

1. Try adjusting the hyperparameters, such as the batch size, number of epochs, and learning rate, to see what the results are.
1. Increase the number of epochs for training. Why might the test accuracy decrease after a while? How could we fix this?

[Discussions](https://discuss.d2l.ai/t/260)
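The numerical argument above is easy to demonstrate outside any framework: a naive log-softmax overflows for large logits, while the max-subtraction (LogSumExp) form stays finite. A small NumPy sketch:

```python
import numpy as np

def log_softmax_naive(o):
    # directly exponentiates the logits: overflows for large o
    return np.log(np.exp(o) / np.exp(o).sum())

def log_softmax_stable(o):
    # subtract max(o) first: log softmax(o)_j = (o_j - max(o)) - log(sum_k exp(o_k - max(o)))
    shifted = o - o.max()
    return shifted - np.log(np.exp(shifted).sum())

o = np.array([1000.0, 0.0, -1000.0])
with np.errstate(over="ignore", invalid="ignore", divide="ignore"):
    naive = log_softmax_naive(o)
stable = log_softmax_stable(o)
print(naive)   # contains nan/-inf due to exp overflow
print(stable)  # finite values
```

The stable version returns approximately `[0, -1000, -2000]`, exactly the $o_j - \log\sum_k \exp(o_k)$ form derived above.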
# Yelp dataset: CAVI vs noise Experiments that explore the performance of CAVI at different levels of noise. ``` import collabclass import json import matplotlib.pyplot as plt import numpy as np ``` ## Loading & preparing the data ``` classes = ( "AZ", "NV", "ON", "OH", "NC", "PA", "QC", "AB", "WI", "IL", ) cls2idx = {cls: idx for idx, cls in enumerate(classes)} k = len(cls2idx) %%time label = dict() with open("../_data/yelp/yelp_academic_dataset_business.json") as f: for line in f: biz = json.loads(line) if biz["state"] in classes: label[biz["business_id"]] = biz["state"] %%time user_cnt = 0 item_cnt = 0 user2idx = dict() item2idx = dict() edges = list() with open("../_data/yelp/yelp_academic_dataset_review.json") as f: for line in f: x = json.loads(line) uid = x["user_id"] bid = x["business_id"] if bid not in label: # We don't have the business -> skip. continue if bid not in item2idx: item2idx[bid] = item_cnt item_cnt += 1 if uid not in user2idx: user2idx[uid] = user_cnt user_cnt += 1 edges.append((user2idx[uid], item2idx[bid])) m = user_cnt n = item_cnt graph = collabclass.graph_from_edges(m, n, edges) print("Number of users: {:,}".format(m)) print("Number of items: {:,}".format(n)) print("Number of edges: {:,}".format(len(graph.user_edges))) idx2item = {v: k for k, v in item2idx.items()} vs = list() for j in range(n): cat = label[idx2item[j]] vs.append(cls2idx[cat]) vs = np.array(vs) ``` ## Plot results ``` %%time deltas = np.linspace(0.05, 0.85, num=9) alpha = np.ones((m, k)) np.random.seed(0) deltas2 = np.hstack(([0], deltas, [0.9])) res2a = np.zeros(len(deltas2)) res2b = np.zeros(len(deltas2)) inf_deltas = np.array([1e-7, 0.6, 0.8, 0.87, 0.87, 0.87, 0.87, 0.87, 0.87, 0.895, 0.895]) for i, delta in enumerate(deltas2): print(".", end="", flush=True) vs_hat = collabclass.symmetric_channel(vs, k, delta=delta) beta = collabclass.init_beta(k, vs_hat, delta=inf_deltas[i]) apost, bpost = collabclass.cavi(graph, alpha, beta, 3) rankings = np.argsort(bpost, 
axis=1)[:,::-1] top1 = (rankings[:,0] != vs) ps = np.percentile(graph.item_idx[:,1], (50, 90, 98)) mask = (graph.item_idx[:,1] >= ps[1]) res2a[i] = np.count_nonzero(top1) / len(vs) res2b[i] = np.count_nonzero(top1[mask]) / np.count_nonzero(mask) print() fig, ax = plt.subplots(figsize=(9, 6)) ax.plot(deltas2, res2a, marker="o", ms=5, label="CAVI all") ax.plot(deltas2, res2b, marker="o", ms=5, label="CAVI P90") ax.plot(deltas2, deltas2, ls="--") ax.axvline(0.9, ls=":") ax.set_ylim(bottom=0.0) ax.set_xlabel("Corruption rate") ax.set_ylabel("Error rate") ax.legend() ```
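`collabclass.symmetric_channel` is not shown here; my understanding (an assumption about the library) is that it corrupts each label with probability `delta`, replacing it with a uniformly random *different* class. A stand-alone sketch of that noise model:

```python
import numpy as np

def symmetric_channel(labels, k, delta, rng):
    """With probability delta, replace each label by a uniform draw over the other k-1 classes."""
    labels = np.asarray(labels)
    flip = rng.random(len(labels)) < delta
    # draw an offset in {1, ..., k-1} and add modulo k, so a flipped label always differs
    offsets = rng.integers(1, k, size=len(labels))
    return np.where(flip, (labels + offsets) % k, labels)

rng = np.random.default_rng(0)
vs = rng.integers(0, 10, size=100_000)
vs_hat = symmetric_channel(vs, k=10, delta=0.3, rng=rng)
print(round((vs_hat != vs).mean(), 2))  # empirical corruption rate ≈ 0.3
```

This matches how the experiment above measures the error rate against the corruption rate `delta`.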
```
# Source/Reference: https://www.tensorflow.org/tutorials/structured_data/time_series
# Reasoning for explaining TF guides/tutorials:
# You will become comfortable with reading other tutorials/guides on TF2.0/Keras
# Pre-req:
# - LSTMs, RNNs, GRUs chapter
# - Previous code-walkthrough sessions
```

## Dataset

```
# imports
import tensorflow as tf

import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd

# global params for all matplotlib plots
mpl.rcParams['figure.figsize'] = (8, 6)
mpl.rcParams['axes.grid'] = False

# get data
# data source: Max Planck Institute, https://www.bgc-jena.mpg.de/wetter/
# (origin must not be commented out: get_file needs it to download the archive)
zip_path = tf.keras.utils.get_file(
    origin='https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip',
    fname='jena_climate_2009_2016.csv.zip',
    extract=True)
print(zip_path)

csv_path, _ = os.path.splitext(zip_path)  # https://docs.python.org/3/library/os.path.html
print(csv_path)

! ls /root/.keras/datasets/

df = pd.read_csv(csv_path)
df.head()
```

Observations:

1. One reading every 10 mins
2. 1 day = 6*24 = 144 readings
3. 5 days = 144*5 = 720 readings

**Forecasting task:** Predict temperature (in deg C) in the future.

```
# univariate data: Temp vs Time
uni_data_df = df['T (degC)']
uni_data_df.index = df['Date Time']
uni_data_df.head()

uni_data_df.plot()

uni_data = uni_data_df.values  # numpy ndarray from pandas

TRAIN_SPLIT = 300000  # First 300000 obs will be used as train data and the rest as test data.
# 300,000 => ~2100 days worth of training data

tf.random.set_seed(13)  # random seed

# Normalize data: mean centering and variance-scaling.
# NOTE: use only train data to normalize all of the data.
# otherwise: leakage issue
uni_train_mean = uni_data[:TRAIN_SPLIT].mean()
uni_train_std = uni_data[:TRAIN_SPLIT].std()

uni_data = (uni_data - uni_train_mean) / uni_train_std
print(type(uni_data))
```

## Moving window average

### Pose a simple problem: Given the last 'k' values of temp-observations (only one feature <=> univariate), predict the next observation

### MWA: Average the previous k values to predict the next value.

```
# This function creates the data we need for the above problem
# dataset: numpy ndarray
# start_index, end_index: range of the series to window over
# history_size: k => take k values at a time
# target_size: 0 => next value in the time-series
# Output: data: (n, k, 1) and labels: (n,)
def univariate_data(dataset, start_index, end_index, history_size, target_size):
    data = []
    labels = []

    start_index = start_index + history_size
    if end_index is None:
        end_index = len(dataset) - target_size

    for i in range(start_index, end_index):
        indices = range(i - history_size, i)
        # Reshape data from (history_size,) to (history_size, 1)
        data.append(np.reshape(dataset[indices], (history_size, 1)))
        labels.append(dataset[i + target_size])
    return np.array(data), np.array(labels)

# use the above function to create the datasets.
univariate_past_history = 20
univariate_future_target = 0

x_train_uni, y_train_uni = univariate_data(uni_data, 0, TRAIN_SPLIT,
                                           univariate_past_history,
                                           univariate_future_target)
x_val_uni, y_val_uni = univariate_data(uni_data, TRAIN_SPLIT, None,
                                       univariate_past_history,
                                       univariate_future_target)

print(x_train_uni.shape)
print(y_train_uni.shape)
print(x_val_uni.shape)
print(y_val_uni.shape)

print('Single window of past history')
print(x_train_uni[0])
print('\n Target temperature to predict')
print(y_train_uni[0])

# utility function
def create_time_steps(length):
    return list(range(-length, 0))

print(create_time_steps(20))

# Plotting function
# plot_data: contains labels as list
# delta: 0 => next time step given last "k" steps.
# title: plot title # Usage: show_plot([x_train_uni[0], y_train_uni[0]], 0, 'Sample Example') def show_plot(plot_data, delta, title): labels = ['History', 'True Future', 'Model Prediction'] marker = ['.-', 'rx', 'go'] # dot-line, red-x, green-o refer: https://matplotlib.org/3.1.1/api/markers_api.html time_steps = create_time_steps(plot_data[0].shape[0]) if delta: future = delta else: future = 0 plt.title(title) for i, x in enumerate(plot_data): if i: plt.plot(future, plot_data[i], marker[i], markersize=10, label=labels[i]) else: plt.plot(time_steps, plot_data[i].flatten(), marker[i], label=labels[i]) plt.legend() plt.xlim([time_steps[0], (future+5)*2]) plt.xlabel('Time-Step') return plt show_plot([x_train_uni[0], y_train_uni[0]], 0, 'Sample Example') i=20 show_plot([x_train_uni[i], y_train_uni[i]], 0, 'Sample Example') def mwa(history): return np.mean(history) i=0 show_plot([x_train_uni[i], y_train_uni[i], mwa(x_train_uni[i])], 0, 'MWA Prediction Example') i=20 show_plot([x_train_uni[i], y_train_uni[i], mwa(x_train_uni[i])], 0, 'MWA Prediction Example') ``` ## Univariate time-series forecasting - Features from the history: only temperature => univariate - Problem definition: Given last "k=20" values of temp, predict the next temp value. 
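The sliding-window construction performed by `univariate_data` above can be checked on a toy series; this is a minimal standalone sketch (not part of the tutorial) that reproduces the shapes:

```python
import numpy as np

def make_windows(series, k):
    # Each window of k consecutive values is an input; the value right after
    # the window is its target, mirroring univariate_data with target_size=0.
    X = np.stack([series[i:i + k] for i in range(len(series) - k)])
    y = series[k:]
    return X[..., np.newaxis], y  # inputs shaped (n, k, 1) for the LSTM

series = np.arange(10, dtype=float)  # toy stand-in for the temperature series
X, y = make_windows(series, k=3)
print(X.shape, y.shape)    # (7, 3, 1) (7,)
print(X[0].ravel(), y[0])  # [0. 1. 2.] 3.0
```

With 10 values and k=3, exactly 7 windows fit, and the first window [0, 1, 2] predicts 3 — the same layout `x_train_uni` / `y_train_uni` have.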
```
# TF Dataset preparation
BATCH_SIZE = 256     # batch size in batch-SGD/variants
BUFFER_SIZE = 10000  # for shuffling the dataset

train_univariate = tf.data.Dataset.from_tensor_slices((x_train_uni, y_train_uni))
train_univariate = train_univariate.cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE).repeat()
# https://www.tensorflow.org/api_docs/python/tf/data/Dataset#repeat

val_univariate = tf.data.Dataset.from_tensor_slices((x_val_uni, y_val_uni))
val_univariate = val_univariate.batch(BATCH_SIZE).repeat()

print(train_univariate)
print(val_univariate)
```

<img src="https://www.tensorflow.org/tutorials/structured_data/images/time_series.png" width="50%" height="50%" />

```
# MODEL:
simple_lstm_model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(8, input_shape=x_train_uni.shape[-2:]),
    tf.keras.layers.Dense(1)
])

simple_lstm_model.compile(optimizer='adam', loss='mae')

# Why not GRUs?
# https://www.appliedaicourse.com/lecture/11/applied-machine-learning-online-course/3436/grus/8/module-8-neural-networks-computer-vision-and-deep-learning
# https://www.quora.com/Whats-the-difference-between-LSTM-and-GRU

# Train and evaluate
STEPS_PER_EPOCH = 200
EPOCHS = 10

# https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit
simple_lstm_model.fit(train_univariate, epochs=EPOCHS,
                      steps_per_epoch=STEPS_PER_EPOCH,
                      validation_data=val_univariate, validation_steps=50)

for x, y in val_univariate.take(5):  # take 5 batches from the validation data
  plot = show_plot([x[0].numpy(), y[0].numpy(),
                    simple_lstm_model.predict(x)[0]], 0, 'Simple LSTM model')
  plot.show()
```

## Multi-variate & single-step forecasting

- Problem definition: Given three features (p, T, rho) at each time stamp in the past, predict the temperature at a single time-stamp in the future.
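As in the univariate case, the three features are standardized column-wise using statistics computed on the training rows only. A small sketch of that step with synthetic numbers (the values and the `TRAIN_SPLIT` of 80 here are hypothetical, not the Jena data):

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical 3-feature dataset standing in for (p, T, rho)
dataset = rng.normal(loc=[1000.0, 10.0, 1200.0],
                     scale=[10.0, 5.0, 50.0], size=(100, 3))

TRAIN_SPLIT = 80  # statistics come from the first 80 rows only
mean = dataset[:TRAIN_SPLIT].mean(axis=0)
std = dataset[:TRAIN_SPLIT].std(axis=0)
dataset_std = (dataset - mean) / std

# training rows are now zero-mean and unit-variance, column by column
print(dataset_std[:TRAIN_SPLIT].mean(axis=0).round(6))
print(dataset_std[:TRAIN_SPLIT].std(axis=0).round(6))
```

Using train-only statistics keeps validation information from leaking into the scaling, at the cost of validation columns being only approximately zero-mean.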
```
# Features
features_considered = ['p (mbar)', 'T (degC)', 'rho (g/m**3)']
features = df[features_considered]
features.index = df['Date Time']
features.head()

features.plot(subplots=True)

# Standardize data
dataset = features.values
data_mean = dataset[:TRAIN_SPLIT].mean(axis=0)
data_std = dataset[:TRAIN_SPLIT].std(axis=0)

dataset = (dataset-data_mean)/data_std

# Same as univariate_data above.
# New params:
#   step: instead of taking data every 10 min, generate data once every 6 steps (60 min)
#   single_step: labels from a single timestamp or multiple timestamps
def multivariate_data(dataset, target, start_index, end_index, history_size,
                      target_size, step, single_step=False):
  data = []
  labels = []

  start_index = start_index + history_size
  if end_index is None:
    end_index = len(dataset) - target_size

  for i in range(start_index, end_index):
    indices = range(i-history_size, i, step)  # step used here
    data.append(dataset[indices])

    if single_step:  # single_step used here
      labels.append(target[i+target_size])
    else:
      labels.append(target[i:i+target_size])

  return np.array(data), np.array(labels)

# Generate data
past_history = 720  # 720*10 mins
future_target = 72  # 72*10 mins
STEP = 6  # one obs every 6x10 min = 60 min => 1 hr

# past_history: 7200 mins => 120 hrs, sampling one observation every hour
# future_target: 720 mins => 12 hrs in the future, not the next hour
x_train_single, y_train_single = multivariate_data(dataset, dataset[:, 1], 0,
                                                   TRAIN_SPLIT, past_history,
                                                   future_target, STEP,
                                                   single_step=True)
x_val_single, y_val_single = multivariate_data(dataset, dataset[:, 1],
                                               TRAIN_SPLIT, None, past_history,
                                               future_target, STEP,
                                               single_step=True)

print(x_train_single.shape)
print(y_train_single.shape)

# TF dataset
train_data_single = tf.data.Dataset.from_tensor_slices((x_train_single, y_train_single))
train_data_single = train_data_single.cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE).repeat()

val_data_single = tf.data.Dataset.from_tensor_slices((x_val_single,
y_val_single)) val_data_single = val_data_single.batch(BATCH_SIZE).repeat() print(train_data_single) print(val_data_single) # Model single_step_model = tf.keras.models.Sequential() single_step_model.add(tf.keras.layers.LSTM(32, input_shape=x_train_single.shape[-2:])) single_step_model.add(tf.keras.layers.Dense(1)) single_step_model.compile(optimizer=tf.keras.optimizers.RMSprop(), loss='mae') single_step_history = single_step_model.fit(train_data_single, epochs=EPOCHS, steps_per_epoch=STEPS_PER_EPOCH, validation_data=val_data_single, validation_steps=50) # Plot train and validation loss over epochs def plot_train_history(history, title): loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(loss)) plt.figure() plt.plot(epochs, loss, 'b', label='Training loss') plt.plot(epochs, val_loss, 'r', label='Validation loss') plt.title(title) plt.legend() plt.grid() plt.show() plot_train_history(single_step_history, 'Single Step Training and validation loss') # plot time series and predicted values for x, y in val_data_single.take(5): plot = show_plot([x[0][:, 1].numpy(), y[0].numpy(), single_step_model.predict(x)[0]], 12, 'Single Step Prediction') plot.show() ``` ## Multi-variate & multi-step forecasting - Generate multiple future values of temperature ``` # single_step=FALSE default value future_target = 72 # 72 future values x_train_multi, y_train_multi = multivariate_data(dataset, dataset[:, 1], 0, TRAIN_SPLIT, past_history, future_target, STEP) x_val_multi, y_val_multi = multivariate_data(dataset, dataset[:, 1], TRAIN_SPLIT, None, past_history, future_target, STEP) print(x_train_multi.shape) print(y_train_multi.shape) print(x_val_multi.shape) print(y_val_multi.shape) # TF DATASET train_data_multi = tf.data.Dataset.from_tensor_slices((x_train_multi, y_train_multi)) train_data_multi = train_data_multi.cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE).repeat() val_data_multi = tf.data.Dataset.from_tensor_slices((x_val_multi, y_val_multi)) 
val_data_multi = val_data_multi.batch(BATCH_SIZE).repeat() #plotting function def multi_step_plot(history, true_future, prediction): plt.figure(figsize=(12, 6)) num_in = create_time_steps(len(history)) num_out = len(true_future) plt.grid() plt.plot(num_in, np.array(history[:, 1]), label='History') plt.plot(np.arange(num_out)/STEP, np.array(true_future), 'bo', label='True Future') if prediction.any(): plt.plot(np.arange(num_out)/STEP, np.array(prediction), 'ro', label='Predicted Future') plt.legend(loc='upper left') plt.show() for x, y in train_data_multi.take(1): multi_step_plot(x[0], y[0], np.array([0])) multi_step_model = tf.keras.models.Sequential() multi_step_model.add(tf.keras.layers.LSTM(32, return_sequences=True, input_shape=x_train_multi.shape[-2:])) multi_step_model.add(tf.keras.layers.LSTM(16, activation='relu')) multi_step_model.add(tf.keras.layers.Dense(72)) # for 72 outputs multi_step_model.compile(optimizer=tf.keras.optimizers.RMSprop(clipvalue=1.0), loss='mae') multi_step_history = multi_step_model.fit(train_data_multi, epochs=EPOCHS, steps_per_epoch=STEPS_PER_EPOCH, validation_data=val_data_multi, validation_steps=50) plot_train_history(multi_step_history, 'Multi-Step Training and validation loss') for x, y in val_data_multi.take(3): multi_step_plot(x[0], y[0], multi_step_model.predict(x)[0]) ```
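With multi-step outputs of shape (n, 72), it is informative to average the absolute error over samples and inspect it per forecast horizon — error typically grows the further out you predict. A sketch on synthetic arrays (not the Jena results):

```python
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.normal(size=(100, 72))            # 100 samples, 72-step horizon
noise = rng.normal(scale=0.1, size=(100, 72))  # base noise level
y_pred = y_true + noise * np.arange(1, 73)     # error grows with the horizon

mae_per_step = np.abs(y_pred - y_true).mean(axis=0)  # average over samples
print(mae_per_step.shape)                  # (72,)
print(mae_per_step[0] < mae_per_step[-1])  # near steps are easier: True
```

The same axis-0 reduction applied to the model's validation predictions would show how quickly the LSTM's accuracy degrades across the 12-hour horizon.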
```
import geemap
import ee
import os
import geopandas as gpd
import json
import requests
from geemap import geojson_to_ee, ee_to_geojson
from ipyleaflet import GeoJSON

Map = geemap.Map()
Map

# Get ESA WorldCover
world_cover_collection = ee.ImageCollection("ESA/WorldCover/v100")
world_cover_image = world_cover_collection.first()
geemap.image_props(world_cover_image).getInfo()

Map.addLayer(world_cover_image, {}, "First image")
Map

# get city boundaries
city_boundary = ee.FeatureCollection('users/saifshabou/dataportal/boundaries/PHL-makati')
Map.centerObject(city_boundary, 12)
Map.addLayer(city_boundary, {}, "City boundary")

# convert a geojson boundary into an ee feature collection
def city_boundary_geojson_to_ee(city_id):
    # read geojson
    geojson_path = '../data/'+city_id+'.geojson'
    file_path = os.path.abspath(geojson_path)
    # open geojson
    with open(file_path) as f:
        json_data = json.load(f)
    # convert geojson to ee feature collection
    ee_data = geojson_to_ee(json_data)
    return ee_data

city_boundary_ee = city_boundary_geojson_to_ee(city_id='CRI-San_jose-boundary')
city_id = 'CRI-San_jose-boundary'

# clip raster
clip = world_cover_image.clipToCollection(city_boundary_ee)
# output name
output_file_name = 'data/world_cover_esa/world_cover_'+ city_id+'.tif'
# export
geemap.ee_export_image(clip, filename=output_file_name, region=city_boundary_ee.geometry())

# export clipped geotiff
def city_world_cover_clip(world_cover_image, city_id, city_boundary_ee):
    # clip raster using the boundary passed in as an argument
    clip = world_cover_image.clipToCollection(city_boundary_ee)
    # output name
    output_file_name = 'world_cover_'+ city_id+'.tif'
    # export
    geemap.ee_export_image(clip, filename=output_file_name, region=city_boundary_ee.geometry())

output_file_name = 'world_cover_'+ city_id+'.tif'
geemap.ee_export_image(world_cover_image,
                       filename=output_file_name,
                       #scale=90,
                       region=city_boundary_ee,
                       file_per_band=False)

city_id = 'CRI-San_jose-boundary'
geojson_path = '../data/'+city_id+'.geojson'
file_path = os.path.abspath(geojson_path)

with open(file_path)
as f: json_data = json.load(f) json_layer = GeoJSON(data=json_data, name='San Jose boundaries', hover_style={'fillColor': 'red' , 'fillOpacity': 0.5}) Map.add_layer(json_layer) ee_data = geojson_to_ee(json_data) Map.addLayer(ee_data, {}, "San Jose boundaries ee") world_cover_city = world_cover_collection \ .filterBounds(ee_data) geemap.ee_export_image(world_cover_image, filename="test wc.tif", #scale=90, region=ee_data, file_per_band=False) clip = world_cover_image.clipToCollection(ee_data) geemap.ee_export_image(clip, filename='test.tif', region=ee_data.geometry()) country = 'PHL' popASun = ee.ImageCollection("WorldPop/GP/100m/pop_age_sex_cons_unadj").filterBounds(city_boundary) pop = popASun \ .filter(ee.Filter.inList('country',[country])) \ .filter(ee.Filter.inList('year',[2020])) \ .select('population') population = popASun \ .select(['population']) \ .mean() \ .reduce(ee.Reducer.sum()) \ .rename('population') popChildren = popASun \ .select(['M_0','M_1','M_5','M_10','M_15','F_0','F_1','F_5','F_10','F_15']) \ .mean() \ .reduce(ee.Reducer.sum()) \ .rename('popChildren') # get children category data_popChildren = popASun.select(['M_0','M_1','M_5','M_10','M_15','F_0','F_1','F_5','F_10','F_15']) city_popChildren_img = data_popChildren.map(lambda image: image.clip(city_boundary)) city_popChildren_img = city_popChildren_img \ .mean() \ .reduce(ee.Reducer.sum()) \ .rename('popChildren') pop_city = ee.ImageCollection("WorldPop/GP/100m/pop_age_sex_cons_unadj") pop_city = population \ .filter(ee.Filter.geometry(city_boundary)) \ pop_city.getInfo() popviz = { 'min': 0.0, 'max': 150.0, 'palette': ['24126c', '1fff4f', 'd4ff50'] } Map.addLayer(population,popviz,'Population',False,0.5) Map.addLayer(popChildren,popviz,'Population - Children (<20)',False,0.5) Map.addLayer(pop_city,popviz,'pop_city',False,0.5) # export geotif out_dir = os.path.join(os.path.expanduser('~'), 'data') filename = os.path.join(out_dir, 'city_pop_children.tif') # load administrative boundaries: Makati 
boundaries_cities = gpd.read_file('https://storage.googleapis.com/data_portal_exposure/data/administrative_boundaries/mapped/PHL_makati.geojson') # get bbox xmin, ymin, xmax, ymax = boundaries_cities.total_bounds geometry = ee.Geometry.Rectangle([xmin, ymin, xmax, ymax]) geemap.ee_export_image(city_popChildren_img, filename=filename, #scale=90, region=geometry, file_per_band=False) popChildren = popASun \ .filterBounds(city_boundary) \ .select(['M_0','M_1','M_5','M_10','M_15','F_0','F_1','F_5','F_10','F_15']) \ .mean() \ .reduce(ee.Reducer.sum()) \ .rename('popChildren') city_popChildren_img.getInfo() popviz = { 'min': 0.0, 'max': 150.0, 'palette': ['24126c', '1fff4f', 'd4ff50'] } Map.addLayer(population,popviz,'Population',False,0.5) Map.addLayer(popChildren,popviz,'Population - Children (<20)',False,0.5) Map.addLayer(city_popChildren_img,popviz,'city_popChildren_img',False,0.5) # get city boundaries city_boundary = ee.FeatureCollection('users/saifshabou/dataportal/boundaries/PHL-makati') # get population data pop_age_sex = ee.ImageCollection("WorldPop/GP/100m/pop_age_sex_cons_unadj") visualization = { 'bands': ['population'], 'min': 0.0, 'max': 50.0, 'palette': ['24126c', '1fff4f', 'd4ff50'] } Map.centerObject(city_boundary,9) Map.addLayer(pop_age_sex, visualization, 'Population') # filter parameters year = 2020 country_code = 'PHL' city_pop_age_sex = pop_age_sex.filter(ee.Filter.bounds(city_boundary)) \ .filterMetadata('year', 'equals', year) \ .filterMetadata('country', 'equals', country_code) Map.addLayer(city_pop_age_sex, visualization, 'Population city') Map.centerObject(city_boundary,9) # process population data city_pop_dataset = ee.ImageCollection("WorldPop/GP/100m/pop_age_sex_cons_unadj").filterBounds(city_boundary) data_population = city_pop_dataset.select('M_0') #population city_population_img = data_population.map(lambda image: image.clip(city_boundary)) city_population_img.getInfo() # viz city_pop_vis = { 'min': 0.0, 'max': 50.0, 'palette': 
['24126c', '1fff4f', 'd4ff50'] } # adds image layers to map Map.addLayer(city_population_img, city_pop_vis, 'city_population_img') # process population data city_pop_dataset = ee.ImageCollection("WorldPop/GP/100m/pop_age_sex_cons_unadj").filterBounds(city_boundary) data_population_children = city_pop_dataset.select('M_0','M_1','M_5','M_10','M_15','F_0','F_1','F_5','F_10','F_15')\ .reduce(ee.Reducer.sum()) city_population_children_img = data_population_children.map(lambda image: image.clip(city_boundary)) popChildren = popASun \ .select(['M_0','M_1','M_5','M_10','M_15','F_0','F_1','F_5','F_10','F_15']) \ .mean() \ .reduce(ee.Reducer.sum()) \ .rename('popChildren') data_population_children.getInfo() # adds image layers to map Map.addLayer(city_population_children_img, city_pop_vis, 'city_population_children_img') ```
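The `total_bounds` used above returns the (xmin, ymin, xmax, ymax) bounding box that is then fed to `ee.Geometry.Rectangle`. The computation itself can be sketched in plain Python (the coordinates here are hypothetical, not the real Makati boundary):

```python
def bounding_box(coords):
    # axis-aligned bounding box of a list of (lon, lat) pairs,
    # i.e. what geopandas' total_bounds yields for a boundary polygon
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return min(xs), min(ys), max(xs), max(ys)

# hypothetical boundary ring
boundary = [(121.00, 14.54), (121.06, 14.54), (121.06, 14.58), (121.00, 14.58)]
xmin, ymin, xmax, ymax = bounding_box(boundary)
print(xmin, ymin, xmax, ymax)  # 121.0 14.54 121.06 14.58
```

Exporting against this rectangle grabs the whole bounding box, so pixels outside the actual polygon are included unless the image was clipped first, as done with `clipToCollection` above.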
```
###########################
# -*- coding: utf-8 -*-   #
# PHM_data_challenge_2019 #
# Author: Huet Zhu        #
# Date: 2019.5            #
# All Rights Reserved     #
###########################
#
# Machine-learning-based fault diagnosis for a flight control system

from __future__ import division
import numpy as np
import thundergbm
import pickle
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from tqdm import tqdm
import gc
from scipy import sparse
from scipy.sparse import csr_matrix, hstack, vstack
import warnings
import random
warnings.filterwarnings('ignore')

age_test = pd.read_csv("../data/age_test.csv", header=None)
age_train = pd.read_csv("../data/age_train.csv", header=None)

csr_trainData = sparse.load_npz('../trainTestData/trainData13100.npz')
csr_trainData = csr_trainData[:, :2100]
csr_trainData.shape
```

# Random Forest

```
k_fold = [[0, int(csr_trainData.shape[0]*0.5)],
          [int(csr_trainData.shape[0]*0.5), int(csr_trainData.shape[0])],
          [0, int(csr_trainData.shape[0]*0.5)],
          [int(csr_trainData.shape[0]*0.5), int(csr_trainData.shape[0])],
          [0, int(csr_trainData.shape[0]*0.5)],
          [int(csr_trainData.shape[0]*0.5), int(csr_trainData.shape[0])],]

# k_fold = [[0, int(csr_trainData.shape[0]*0.2)],
#           [int(csr_trainData.shape[0]*0.2), int(csr_trainData.shape[0]*0.4)],
#           [int(csr_trainData.shape[0]*0.4), int(csr_trainData.shape[0]*0.6)],
#           [int(csr_trainData.shape[0]*0.6), int(csr_trainData.shape[0]*0.8)],
#           [int(csr_trainData.shape[0]*0.8), int(csr_trainData.shape[0])]]

# k = 0
# trainData, valData, trainLabel, valLabel = train_test_split(csr_trainData[k_fold[k][0]:k_fold[k][1]],
#                                                             age_train.iloc[k_fold[k][0]:k_fold[k][1], 1].values,
#                                                             test_size=0.1,
#                                                             random_state=k)
# trainData.shape, valData.shape

k = 5
clf = thundergbm.TGBMClassifier(bagging=1,
                                lambda_tgbm=1,
                                learning_rate=0.07,
                                min_child_weight=1.2,
                                n_gpus=1,
                                verbose=0,
                                n_parallel_trees=40,
                                gamma=0.2,
                                depth=7,
                                n_trees=4000,
                                tree_method='auto',
                                objective='multi:softmax')
clf.fit(csr_trainData[k_fold[k][0]:k_fold[k][1]],
        age_train.iloc[k_fold[k][0]:k_fold[k][1], 1].values)
clf.save_model('../model/rf_part_'+str(k)+'.model')

k = 5-k
clf.score(csr_trainData[k_fold[k][0]:k_fold[k][1]],
          age_train.iloc[k_fold[k][0]:k_fold[k][1], 1].values)

clf.load_model('../model/rf_part_'+str(k)+'.model')
clf.score(csr_trainData[k_fold[k][0]:k_fold[k][1]],
          age_train.iloc[k_fold[k][0]:k_fold[k][1], 1].values)

predict_sigal = clf.predict(csr_trainData[k_fold[k][0]:k_fold[k][1]])
```

# Stacking data preparation

```
trainData = sparse.load_npz('../trainTestData/csr_trainData13100.npz')
testData = sparse.load_npz('../trainTestData/csr_testData13100.npz')
trainData = trainData[:, :2100]
testData = testData[:, :2100]
trainData.shape, testData.shape

clf = thundergbm.TGBMClassifier(bagging=1,
                                lambda_tgbm=1,
                                learning_rate=0.07,
                                min_child_weight=1.2,
                                n_gpus=1,
                                verbose=0,
                                n_parallel_trees=40,
                                gamma=0.2,
                                depth=7,
                                n_trees=4000,
                                tree_method='auto',
                                objective='multi:softmax')

k_fold = [[0, int(trainData.shape[0]*0.5)],
          [int(trainData.shape[0]*0.5), int(trainData.shape[0])],
          [0, int(trainData.shape[0]*0.5)],
          [int(trainData.shape[0]*0.5), int(trainData.shape[0])],
          [0, int(trainData.shape[0]*0.5)],
          [int(trainData.shape[0]*0.5), int(trainData.shape[0])],]

k_flod_predict = np.zeros((trainData.shape[0], 3))
for k in tqdm(range(6)):
    valData = trainData[k_fold[5-k][0]:k_fold[5-k][1]]
    clf.load_model('../model/rf_part_'+str(k)+'.model')
    predict_sigal = clf.predict(valData)
    k_flod_predict[k_fold[5-k][0]:k_fold[5-k][1], k//2] = predict_sigal

k_flod_predict = k_flod_predict.astype(int)
k_flod_predict

predict = np.zeros((k_flod_predict.shape[0]))
for i in range(k_flod_predict.shape[0]):
    predict[i] = np.argmax(np.bincount(k_flod_predict[i]))
predict = predict.astype(int)
predict

train_stacking_predict = np.zeros((k_flod_predict.shape[0], 6))
for i in range(train_stacking_predict.shape[0]):
    train_stacking_predict[i,
predict[i]-1] = 1
np.savetxt('../processed/stacking/rf_val.txt', train_stacking_predict,
           fmt='%s', delimiter=',', newline='\n')
```

# Prediction

```
testData = sparse.load_npz('../trainTestData/testData13100.npz')
testData = testData[:, :2100]
testData.shape

clf = thundergbm.TGBMClassifier(bagging=1,
                                learning_rate=0.1,
                                min_child_weight=1.2,
                                n_gpus=1,
                                n_parallel_trees=40,
                                gamma=0.2,
                                depth=7,
                                n_trees=3000,
                                tree_method='auto',
                                objective='multi:softmax')

k_flod_predict = np.zeros((testData.shape[0], 6))
for i in tqdm(range(6)):
    clf.load_model('../model/rf_part_'+str(i)+'.model')
    predict_sigal = clf.predict(testData)
    for j in range(len(predict_sigal)):
        k_flod_predict[j, i] = predict_sigal[j]

k_flod_predict = k_flod_predict.astype(int)
k_flod_predict

predict = np.zeros((k_flod_predict.shape[0]))
for i in range(k_flod_predict.shape[0]):
    predict[i] = np.argmax(np.bincount(k_flod_predict[i]))
predict = predict.astype(int)

# one-hot stacking features for the test set, mirroring train_stacking_predict above
test_stacking_predict = np.zeros((predict.shape[0], 6))
for i in range(test_stacking_predict.shape[0]):
    test_stacking_predict[i, predict[i]-1] = 1

np.savetxt('../processed/stacking/rf_test.txt', test_stacking_predict,
           fmt='%s', delimiter=',', newline='\n')
```
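The per-sample `bincount`/`argmax` used above is a plain majority vote across the six fold models. On a toy prediction matrix:

```python
import numpy as np

# each row: the labels the 6 fold models predicted for one sample
k_fold_predict = np.array([
    [2, 2, 3, 2, 1, 2],  # label 2 wins 4-1-1
    [4, 4, 4, 5, 4, 4],  # label 4 wins 5-1
])
# bincount counts occurrences of each label; argmax picks the most frequent
vote = np.array([np.argmax(np.bincount(row)) for row in k_fold_predict])
print(vote)  # [2 4]
```

Note that on a tie `argmax` returns the smallest winning label, which is a silent bias worth keeping in mind with an even number of voters.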
```
# Copyright 2020 Fagner Cunha
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import random
import json

import pandas as pd
```

#### Loading Snapshot Serengeti data

```
with open('../data/SnapshotSerengetiSplits_v0.json') as json_file:
    recommend_train_val_splits = json.load(json_file)

serengeti_annotations = pd.read_csv('../data/SnapshotSerengeti_v2_1_annotations.csv')
serengeti_annotations = serengeti_annotations[['capture_id', 'season', 'site', 'question__species']].copy()

serengeti_images = pd.read_csv('../data/SnapshotSerengeti_v2_1_images.csv')
serengeti_images = serengeti_images.drop('Unnamed: 0', axis=1)

serengeti_images_labeled = pd.merge(serengeti_images, serengeti_annotations, on='capture_id', how='outer')
```

We will only use seasons 1-6:

```
serengeti_images_labeled = serengeti_images_labeled[
    serengeti_images_labeled.season.isin(['S1', 'S2', 'S3', 'S4', 'S5', 'S6'])].copy()
```

#### Remove images with more than one species identified

```
non_single_spc_instances = serengeti_images_labeled[
    serengeti_images_labeled[['image_path_rel']].duplicated(keep=False)]
non_single_spc_instances = non_single_spc_instances.image_path_rel.unique()

serengeti_images_labeled = serengeti_images_labeled[
    ~serengeti_images_labeled.image_path_rel.isin(non_single_spc_instances)].copy()
```

### Split by site:

#### Mark train/val images:

```
val_dev = ['D09', 'J06', 'C12', 'B12', 'F11', 'C08', 'E07', 'O09', 'Q07',
           'C13', 'E04', 'I06', 'D10', 'I08', 'M11', 'F02', 'D06', 'G09',
'N03', 'E10', 'J09', 'H13', 'T13'] #val_dev = random.sample(recommend_train_val_splits['splits']['train'], 23) serengeti_images_labeled_split = serengeti_images_labeled.copy() def mark_split(row): if row['site'] in val_dev: return 'val_dev' elif row['site'] in recommend_train_val_splits['splits']['train']: return 'train' else: return 'val' serengeti_images_labeled_split['split'] = serengeti_images_labeled_split.apply(mark_split, axis=1) pd.crosstab(serengeti_images_labeled_split.question__species, serengeti_images_labeled_split.split) ``` Select instances: ``` serengeti_images_labeled_split def binarize_categories(row): if row['question__species'] == 'blank': return 0 else: return 1 instances = serengeti_images_labeled_split[['image_path_rel', 'question__species', 'split']].copy() instances['category'] = instances.apply(binarize_categories, axis=1) pd.crosstab(instances.category, instances.split) ``` Verify if images were sized correctly: ``` ss_path = '/data/fagner/coruja/datasets/serengeti/serengeti_600x1024/' all_images_download = [value['image_path_rel'] for key, value in instances.iterrows() if os.path.isfile(ss_path + value['image_path_rel'])] len(all_images_download) len(instances) instances = instances[instances.image_path_rel.isin(all_images_download)].copy() ``` ### Saving csv files ``` def save_split(data, split, col_file_name, col_category, file_patern): data_processed = data[data.split == split].copy() data_processed['file_name'] = data_processed[col_file_name] data_processed['category'] = data_processed[col_category] file_name = file_patern % split data_processed[['file_name', 'category']].to_csv(file_name, index=False) save_split(instances, 'train', 'image_path_rel', 'category', '../data/ss_%s_empty.csv') save_split(instances, 'val_dev', 'image_path_rel', 'category', '../data/ss_%s_empty.csv') save_split(instances, 'val', 'image_path_rel', 'category', '../data/ss_%s_empty.csv') save_split(instances, 'train', 'image_path_rel', 'question__species', 
'../data/ss_%s_species.csv') save_split(instances, 'val_dev', 'image_path_rel', 'question__species', '../data/ss_%s_species.csv') save_split(instances, 'val', 'image_path_rel', 'question__species', '../data/ss_%s_species.csv') ``` Balancing classes for empty/nonempty model: ``` train_empty_sample = instances[(instances.split == 'train') & (instances.category == 0)].sample(524804).copy() instances_bal = pd.concat([train_empty_sample, instances[(instances.split == 'train') & (instances.category == 1)]]) save_split(instances_bal, 'train', 'image_path_rel', 'category', '../data/ss_%s_empty_bal.csv') ``` ### Split by time: ``` serengeti_images_labeled_split_time = serengeti_images_labeled.copy() def mark_time_split(row): if row['season'] in ['S6'] : return 'val' elif row['season'] in ['S5']: return 'val_dev' else: return 'train' serengeti_images_labeled_split_time['split'] = serengeti_images_labeled_split_time.apply(mark_time_split, axis=1) serengeti_images_labeled_split_time pd.crosstab(serengeti_images_labeled_split_time.question__species, serengeti_images_labeled_split_time.split) def binarize_categories(row): if row['question__species'] == 'blank': return 0 else: return 1 instances = serengeti_images_labeled_split_time[['image_path_rel', 'question__species', 'split']].copy() instances['category'] = instances.apply(binarize_categories, axis=1) pd.crosstab(instances.category, instances.split) ss_path = '/data/fagner/coruja/datasets/serengeti/serengeti_600x1024/' all_images_download = [value['image_path_rel'] for key, value in instances.iterrows() if os.path.isfile(ss_path + value['image_path_rel'])] instances = instances[instances.image_path_rel.isin(all_images_download)].copy() len(instances) save_split(instances, 'train', 'image_path_rel', 'category', '../data/ss_time_%s_empty.csv') save_split(instances, 'val_dev', 'image_path_rel', 'category', '../data/ss_time_%s_empty.csv') save_split(instances, 'val', 'image_path_rel', 'category', '../data/ss_time_%s_empty.csv') 
save_split(instances, 'train', 'image_path_rel', 'question__species', '../data/ss_time_%s_species.csv') save_split(instances, 'val_dev', 'image_path_rel', 'question__species', '../data/ss_time_%s_species.csv') save_split(instances, 'val', 'image_path_rel', 'question__species', '../data/ss_time_%s_species.csv') train_empty_sample = instances[(instances.split == 'train') & (instances.category == 0)].sample(516635).copy() instances_bal = pd.concat([train_empty_sample, instances[(instances.split == 'train') & (instances.category == 1)]]) save_split(instances_bal, 'train', 'image_path_rel', 'category', '../data/ss_time_%s_empty_bal.csv') ```
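The balancing above undersamples the empty class down to the size of the non-empty class before saving. The same pattern on a toy frame (file names and counts here are hypothetical):

```python
import pandas as pd

df = pd.DataFrame({'file_name': [f'img{i}.jpg' for i in range(10)],
                   'category': [0]*7 + [1]*3})  # 7 empty, 3 non-empty

n_minority = (df.category == 1).sum()
# sample the majority class down to the minority size, then concatenate
majority_sample = df[df.category == 0].sample(n_minority, random_state=0)
balanced = pd.concat([majority_sample, df[df.category == 1]])
print(len(balanced), sorted(balanced.category.value_counts().to_dict().items()))
```

Fixing `random_state` makes the undersampled subset reproducible across runs, which matters when the same split files feed several training jobs.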
# Time series forecasting using ARIMA

### Import necessary libraries

```
%matplotlib notebook
import numpy
import pandas
import datetime
import sys
import time
import matplotlib.pyplot as ma
import statsmodels.tsa.seasonal as st
import statsmodels.tsa.arima_model as arima
import statsmodels.tsa.stattools as tools
```

### Load necessary CSV file

```
try:
    ts = pandas.read_csv('../../datasets/srv-1-art-5m.csv')
except:
    print("I am unable to read the .csv file")

ts.index = pandas.to_datetime(ts['ts'])

# delete unnecessary columns
del ts['id']
del ts['ts']
del ts['min']
del ts['max']
del ts['sum']
del ts['cnt']
del ts['p50']
del ts['p95']
del ts['p99']

# print table info
ts.info()
```

### Get values from specified range

```
ts = ts['2018-06-16':'2018-07-15']
```

### Remove possible zero and NA values (by interpolation)

We use the MAPE formula for computing the final score, so no zero values may occur in the time series. Replace them with NA values; the NA values are later explicitly removed by linear interpolation.

```
def print_values_stats():
    print("Zero Values:\n", sum([(1 if x == 0 else 0) for x in ts.values]),
          "\n\nMissing Values:\n", ts.isnull().sum(),
          "\n\nFilled in Values:\n", ts.notnull().sum(), "\n")

idx = pandas.date_range(ts.index.min(), ts.index.max(), freq="5min")
ts = ts.reindex(idx, fill_value=None)
print("Before interpolation:\n")
print_values_stats()

ts = ts.replace(0, numpy.nan)
ts = ts.interpolate(limit_direction="both")
print("After interpolation:\n")
print_values_stats()
```

### Plot values

```
# Idea: Plot the figure now and do not wait on ma.show() at the end of the notebook
ma.ion()
ma.show()
fig1 = ma.figure(1)
ma.plot(ts, color="blue")
ma.draw()
try:
    ma.pause(0.001)  # throws NotImplementedError, ignore it
except:
    pass
```

### Ignore timestamps, make the time series single-dimensional

From this point on, the time series is represented as a plain one-dimensional Python list.
ARIMA does not need timestamps or any irrelevant data.

```
dates = ts.index  # save dates for further use
ts = [x[0] for x in ts.values]
```

### Split time series into train and test series

We split the series after the first week (12 samples/hour * 24 hours * 7 days of 5-minute samples): the first week is used for training, the rest for testing.

```
train_data_length = 12*24*7
ts_train = ts[:train_data_length]
ts_test = ts[train_data_length+1:]
```

### Estimate integrated (I) parameter

Check the time series for stationarity and estimate its integration order (the maximum considered value is 2). The series itself is highly seasonal, so we can assume that the time series is not stationary.

```
def check_stationarity(ts, critic_value=0.05):
    try:
        result = tools.adfuller(ts)
        return result[0] < 0.0 and result[1] < critic_value
    except:
        # Program may raise an exception when there are NA values in TS
        return False

integrate_param = 0
ts_copy = pandas.Series(ts_train, copy=True)  # Create a copy for stationarizing

while not check_stationarity(ts_copy) and integrate_param < 2:
    integrate_param += 1
    ts_copy = ts_copy - ts_copy.shift()
    ts_copy.dropna(inplace=True)  # Remove initial NA values

print("Estimated integrated (I) parameter: ", integrate_param, "\n")
```

### Print ACF and PACF graphs for AR(p) and MA(q) order estimation

The AutoCorrelation and Partial AutoCorrelation Functions are necessary for ARMA order estimation. Configure the *NlagsACF* and *NLagsPACF* variables to set the number of lagged values in the ACF and PACF graphs.
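The differencing used in the loop above (`ts - ts.shift()`) removes one order of trend per pass, which is why at most two passes are tried. On a toy series with a quadratic-like trend:

```python
import pandas as pd

ts_toy = pd.Series([1.0, 3.0, 6.0, 10.0, 15.0])  # quadratic-like growth
d1 = (ts_toy - ts_toy.shift()).dropna()  # first difference: linear trend left
d2 = (d1 - d1.shift()).dropna()          # second difference: constant
print(d1.tolist())  # [2.0, 3.0, 4.0, 5.0]
print(d2.tolist())  # [1.0, 1.0, 1.0]
```

Each `.dropna()` discards the leading NA that `.shift()` introduces, matching the `dropna(inplace=True)` in the loop.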
```
def plot_bar(ts, horizontal_line=None):
    ma.bar(range(0, len(ts)), ts, width=0.5)
    ma.axhline(0)
    if horizontal_line != None:
        ma.axhline(horizontal_line, linestyle="-")
        ma.axhline(-horizontal_line, linestyle="-")
    ma.draw()
    try:
        ma.pause(0.001)  # throws NotImplementedError, ignore it
    except:
        pass

NlagsACF = 200
NLagsPACF = 50

# ACF
ma.figure(2)
plot_bar(tools.acf(ts_train, nlags=NlagsACF), 1.96 / numpy.sqrt(len(ts)))

# PACF
ma.figure(3)
plot_bar(tools.pacf(ts_train, nlags=NLagsPACF), 1.96 / numpy.sqrt(len(ts)))
```

### ARIMA order estimation and prediction configuration

According to the Box-Jenkins methodology (https://www.itl.nist.gov/div898/handbook/pmc/section4/pmc446.htm) we assumed that this time series should follow an MA(q) model. But just like in the Network-transport-time analysis, and after some trial-and-error analysis, it is better to use an ARIMA(2,1,0) model, because it produces better results.

You can specify how many values you want to use for ARIMA model fitting (by setting the *M_train_data* variable) and how many new values you want to predict in a single step (by setting the *N_values_to_forecast* variable).

```
ARIMA_order = (2,1,0)
M_train_data = sys.maxsize
N_values_to_forecast = 1
```

### Forecast new values

We have a very large time series (over 8 thousand samples), so refitting the model at every step takes a long time.
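The rolling scheme used for forecasting — fit on everything observed so far, predict the next step, then slide forward — can be sketched with a trivial last-value "model" standing in for ARIMA (a hypothetical stand-in, chosen only to show the loop structure):

```python
ts = [3.0, 4.0, 6.0, 5.0, 7.0, 8.0]  # toy series
train_len = 3
predictions = []
for i in range(train_len, len(ts)):
    history = ts[:i]                 # everything observed up to step i
    predictions.append(history[-1])  # naive stand-in for a fitted model's forecast
print(predictions)  # [6.0, 5.0, 7.0]
```

Swapping the naive forecast for a freshly fitted ARIMA at each iteration gives the real procedure; the cost comes from that per-step refit.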
``` predictions = [] confidence = [] print("Forecasting started...") start_time = time.time() ts_len = len(ts) for i in range(train_data_length+1, ts_len, N_values_to_forecast): try: start = i-M_train_data if i-M_train_data >= 0 else 0 arima_model = arima.ARIMA(ts[start:i], order=ARIMA_order).fit(disp=0) forecast = arima_model.forecast(steps=N_values_to_forecast) for j in range(0, N_values_to_forecast): predictions.append(forecast[0][j]) confidence.append(forecast[2][j]) except: print("Error during forecast: ", i, i+N_values_to_forecast) # Push back last successful predictions for j in range(0, N_values_to_forecast): predictions.append(predictions[-1] if len(predictions) > 0 else 0) confidence.append(confidence[-1] if len(confidence) > 0 else 0) print("Forecasting finished") print("Time elapsed: ", time.time() - start_time) ``` ### Count mean absolute percentage error We use MAPE (https://www.forecastpro.com/Trends/forecasting101August2011.html) instead of MSE because the result of MAPE does not depend on size of values. ``` values_sum = 0 for value in zip(ts_test, predictions): actual = value[0] predicted = value[1] values_sum += abs((actual - predicted) / actual) values_sum *= 100/len(predictions) print("MAPE: ", values_sum, "%\n") ``` ### Plot forecasted values ``` fig2 = ma.figure(4) ma.plot(ts_test, color="blue", label="Test") ma.plot(predictions, color="red", label="ARIMA") ts_len = len(ts) date_offset_indices = ts_len // 6 num_date_ticks = ts_len // date_offset_indices + 1 ma.xticks(range(0, ts_len, date_offset_indices), [x.date().strftime('%Y-%m-%d') for x in dates[::date_offset_indices]]) ma.xlabel("Timestamps") ma.ylabel("Response times") ma.legend(loc='best') ma.draw() ```
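As a closing check, the MAPE computed above can be packaged as a small function and verified with hand-checked numbers:

```python
def mape(actual, predicted):
    # mean absolute percentage error, as a percentage; actual values must be
    # non-zero, which the interpolation step earlier guarantees
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(predicted)

# errors of 10%, 5%, and 0% average to 5%
print(mape([100.0, 200.0, 400.0], [110.0, 190.0, 400.0]))  # 5.0
```

Because each term is normalized by its own actual value, the score is scale-independent, which is exactly why it was preferred over MSE here.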
github_jupyter
# A Cantera Simulation Using RMG-Py

```
from IPython.display import display, Image
from rmgpy.chemkin import load_chemkin_file
from rmgpy.tools.canteramodel import Cantera, get_rmg_species_from_user_species
from rmgpy.species import Species
```

Load the species and reactions from the RMG-generated chemkin files `chem_annotated.inp` and `species_dictionary.txt` found in your `chemkin` folder after running a job.

```
species_list, reaction_list = load_chemkin_file('data/ethane_model/chem_annotated.inp',
                                                'data/ethane_model/species_dictionary.txt',
                                                'data/ethane_model/tran.dat')
```

Set a few conditions for how to react the system

```
# Find the species: ethane
user_ethane = Species().from_smiles('CC')
species_dict = get_rmg_species_from_user_species([user_ethane], species_list)
ethane = species_dict[user_ethane]

reactor_type_list = ['IdealGasReactor']
mol_frac_list = [{ethane: 1}]
Tlist = ([1300, 1500, 2000], 'K')
Plist = ([1], 'bar')
reaction_time_list = ([0.5], 'ms')

# Create cantera object, loading in the species and reactions
job = Cantera(species_list=species_list, reaction_list=reaction_list, output_directory='temp')

# The cantera file must be created from an associated chemkin file
# We can either load the Model from the initialized set of rmg species and reactions
job.load_model()

# Or load it from a chemkin file by uncommenting the following line:
#job.load_chemkin_model('data/ethane_model/chem_annotated.inp', transport_file='data/ethane_model/tran.dat')

# Generate the conditions based on the settings we declared earlier
job.generate_conditions(reactor_type_list, reaction_time_list, mol_frac_list, Tlist, Plist)

# Simulate and plot
alldata = job.simulate()
job.plot(alldata)

# Show the plots in the ipython notebook
for i, condition in enumerate(job.conditions):
    print('Condition {0}'.format(i+1))
    display(Image(filename="temp/{0}_mole_fractions.png".format(i+1)))

# We can get the cantera model Solution's species and reactions
ct_species = job.model.species()
ct_reactions = job.model.reactions()

# We can view a cantera species or reaction object from this
ct_ethane = ct_species[4]
ct_rxn = ct_reactions[0]
print(ct_ethane)
print(ct_rxn)

# We can also do things like modifying the cantera species thermo and reaction kinetics through modifying the
# RMG objects first, then using the `modify_reaction_kinetics` or `modify_species_thermo` functions

# Alter the RMG objects in place, let's pick ethane and the first reaction
rmg_ethane = species_dict[user_ethane]
rmg_ethane.thermo.change_base_enthalpy(2*4184)  # Change base enthalpy by 2 kcal/mol
rmg_rxn = reaction_list[0]
rmg_rxn.kinetics.change_rate(4)  # Change A factor by multiplying by a factor of 4

# Take a look at the state of the cantera model before and after
print('Cantera Model: Before')
ct_species = job.model.species()
ct_reactions = job.model.reactions()
print('Ethane Thermo = {} kcal/mol'.format(ct_species[4].thermo.h(300)/1000/4184))
print('Reaction 1 Kinetics = {}'.format(ct_reactions[0].rate))

# Now use the altered RMG objects to modify the kinetics and thermo
job.modify_reaction_kinetics(0, rmg_rxn)
job.modify_species_thermo(4, rmg_ethane, use_chemkin_identifier=True)

# If we modify thermo, the cantera model must be refreshed. If only kinetics are modified, this does not need to be done.
job.refresh_model()

print('')
print('Cantera Model: After')
ct_species = job.model.species()
ct_reactions = job.model.reactions()
print('Ethane Thermo = {} kcal/mol'.format(ct_species[4].thermo.h(300)/1000/4184))
print('Reaction 1 Kinetics = {}'.format(ct_reactions[0].rate))

# Simulate and plot
alldata = job.simulate()
job.plot(alldata)

# Show the plots in the ipython notebook
for i, condition in enumerate(job.conditions):
    print('Condition {0}'.format(i+1))
    display(Image(filename="temp/{0}_mole_fractions.png".format(i+1)))
```
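The repeated `h(300)/1000/4184` above converts molar enthalpy, which Cantera reports in J/kmol, to kcal/mol; making the conversion a named helper keeps the factors straight (the helper name is ours, not part of the Cantera API):

```python
J_PER_KCAL = 4184.0

def j_per_kmol_to_kcal_per_mol(h):
    # J/kmol -> J/mol (divide by 1000) -> kcal/mol (divide by 4184)
    return h / 1000.0 / J_PER_KCAL

# the 2 kcal/mol enthalpy shift applied to ethane above, expressed in J/mol
delta_h = 2 * J_PER_KCAL  # 8368 J/mol
```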
*Poonam Ligade*

*1st Feb 2017*

----------

This notebook is like a note to self. I am trying to understand various components of Artificial Neural Networks aka Deep Learning. Hope it might be useful for someone else here.

I am designing a neural net on the MNIST handwritten digit images to identify their correct labels, i.e. the numbers in the images. You must have guessed it's an image recognition task. MNIST is called the Hello World of deep learning.

Let's start!!

This notebook is inspired from [Jeremy's][1] [Deep Learning][2] mooc and [Deep learning with python][3] book by Keras author [François Chollet][4].

[1]: https://www.linkedin.com/in/howardjeremy/
[2]: http://course.fast.ai/
[3]: https://www.manning.com/books/deep-learning-with-python
[4]: https://research.google.com/pubs/105096.html

**Import all required libraries**
===============================

```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in

import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
%matplotlib inline

from keras.models import Sequential
from keras.layers import Dense, Dropout, Lambda, Flatten
from keras.optimizers import Adam, RMSprop
from sklearn.model_selection import train_test_split
from keras import backend as K
from keras.preprocessing.image import ImageDataGenerator

# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
from subprocess import check_output
print(check_output(["ls", "../input"]).decode("utf8"))

# Any results you write to the current directory are saved as output.
```

**Load Train and Test data**
============================

```
# create the training & test sets
train = pd.read_csv("../input/train.csv")
print(train.shape)
train.head()

test = pd.read_csv("../input/test.csv")
print(test.shape)
test.head()

X_train = (train.iloc[:,1:].values).astype('float32') # all pixel values
y_train = train.iloc[:,0].values.astype('int32') # only labels, i.e. the target digits
X_test = test.values.astype('float32')

X_train
y_train
```

The output variable is an integer from 0 to 9. This is a **multiclass** classification problem.

## Data Visualization

Let's look at 3 images from the data set with their labels.

```
#Convert train dataset to (num_images, img_rows, img_cols) format
X_train = X_train.reshape(X_train.shape[0], 28, 28)

for i in range(6, 9):
    plt.subplot(330 + (i+1))
    plt.imshow(X_train[i], cmap=plt.get_cmap('gray'))
    plt.title(y_train[i]);

#expand one more dimension for the gray colour channel
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_train.shape

X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
X_test.shape
```

**Preprocessing the digit images**
==================================

**Feature Standardization**
-------------------------------------

It is an important preprocessing step, used to centre the data to zero mean and unit variance.

```
mean_px = X_train.mean().astype(np.float32)
std_px = X_train.std().astype(np.float32)

def standardize(x):
    return (x-mean_px)/std_px
```

*One Hot encoding of labels.*
-----------------------------

A one-hot vector is a vector which is 0 in most dimensions, and 1 in a single dimension. In this case, the nth digit will be represented as a vector which is 1 in the nth dimension. For example, 3 would be [0,0,0,1,0,0,0,0,0,0].

```
from keras.utils.np_utils import to_categorical
y_train = to_categorical(y_train)
num_classes = y_train.shape[1]
num_classes
```

Let's plot the 10th label.
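`to_categorical` does nothing more exotic than writing a 1 into the column given by each label; a plain-NumPy sketch of the same encoding (an illustration, not the Keras implementation itself):

```python
import numpy as np

def to_one_hot(labels, num_classes=None):
    labels = np.asarray(labels, dtype=int)
    if num_classes is None:
        num_classes = labels.max() + 1
    encoded = np.zeros((labels.size, num_classes), dtype=np.float32)
    # one 1 per row, at the column index given by the label
    encoded[np.arange(labels.size), labels] = 1.0
    return encoded

one_hot = to_one_hot([3, 0, 9], num_classes=10)
# row 0 encodes the digit 3: [0,0,0,1,0,0,0,0,0,0]
```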
```
plt.title(y_train[9])
plt.plot(y_train[9])
plt.xticks(range(10));
```

Oh, it's 3!

**Designing Neural Network Architecture**
=========================================

```
# fix random seed for reproducibility
seed = 43
np.random.seed(seed)
```

*Linear Model*
--------------

```
from keras.models import Sequential
from keras.layers.core import Lambda, Dense, Flatten, Dropout
from keras.callbacks import EarlyStopping
from keras.layers import BatchNormalization, Convolution2D, MaxPooling2D
```

Let's create a simple model with the Keras Sequential API.

1. The Lambda layer performs simple arithmetic operations like sum, average, exponentiation etc. In the 1st layer of the model we have to define the input dimensions of our data in (rows, columns, colour channel) format. (In Theano the colour channel comes first.)
2. Flatten will transform the input into a 1D array.
3. Dense is a fully connected layer, meaning all neurons in the previous layer are connected to all neurons in the fully connected layer. In the last layer we have to specify the output dimensions/classes of the model. Here it's 10, since we have to output 10 different digit labels.

```
model = Sequential()
model.add(Lambda(standardize, input_shape=(28,28,1)))
model.add(Flatten())
model.add(Dense(10, activation='softmax'))
print("input shape ", model.input_shape)
print("output shape ", model.output_shape)
```

***Compile network***
-------------------

Before making the network ready for training we have to add the following:

1. A loss function: to measure how good the network is
2. An optimizer: to update the network as it sees more data and reduce the loss value
3. Metrics: to monitor the performance of the network

```
from keras.optimizers import RMSprop
model.compile(optimizer=RMSprop(lr=0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

from keras.preprocessing import image
gen = image.ImageDataGenerator()
```

## Cross Validation

```
from sklearn.model_selection import train_test_split
X = X_train
y = y_train
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.10, random_state=42)
batches = gen.flow(X_train, y_train, batch_size=64)
val_batches = gen.flow(X_val, y_val, batch_size=64)

# batches.n is the sample count, so divide by the batch size to get the steps per epoch
history = model.fit_generator(generator=batches, steps_per_epoch=batches.n // 64, epochs=3,
                              validation_data=val_batches, validation_steps=val_batches.n // 64)

history_dict = history.history
history_dict.keys()

import matplotlib.pyplot as plt
%matplotlib inline

loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
epochs = range(1, len(loss_values) + 1)

# "bo" is for "blue dot"
plt.plot(epochs, loss_values, 'bo')
# b+ is for "blue crosses"
plt.plot(epochs, val_loss_values, 'b+')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.show()

plt.clf()  # clear figure
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']

plt.plot(epochs, acc_values, 'bo')
plt.plot(epochs, val_acc_values, 'b+')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.show()
```

## Fully Connected Model

Neurons in a fully connected layer have full connections to all activations in the previous layer, as seen in regular Neural Networks.

Adding another Dense layer to the model.
```
def get_fc_model():
    model = Sequential([
        Lambda(standardize, input_shape=(28,28,1)),
        Flatten(),
        Dense(512, activation='relu'),
        Dense(10, activation='softmax')
        ])
    model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model

fc = get_fc_model()
fc.optimizer.lr = 0.01

history = fc.fit_generator(generator=batches, steps_per_epoch=batches.n // 64, epochs=1,
                           validation_data=val_batches, validation_steps=val_batches.n // 64)
```

## Convolutional Neural Network

CNNs are extremely efficient for images.

```
from keras.layers import Convolution2D, MaxPooling2D

def get_cnn_model():
    model = Sequential([
        Lambda(standardize, input_shape=(28,28,1)),
        Convolution2D(32,(3,3), activation='relu'),
        Convolution2D(32,(3,3), activation='relu'),
        MaxPooling2D(),
        Convolution2D(64,(3,3), activation='relu'),
        Convolution2D(64,(3,3), activation='relu'),
        MaxPooling2D(),
        Flatten(),
        Dense(512, activation='relu'),
        Dense(10, activation='softmax')
        ])
    model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
    return model

model = get_cnn_model()
model.optimizer.lr = 0.01

history = model.fit_generator(generator=batches, steps_per_epoch=batches.n // 64, epochs=1,
                              validation_data=val_batches, validation_steps=val_batches.n // 64)
```

## Data Augmentation

It is a technique of showing slightly different or new images to the neural network to avoid overfitting, and to achieve better generalization. In case you have a very small dataset, you can use different kinds of data augmentation techniques to increase your data size. Neural networks perform better if you provide them more data.

Different data augmentation techniques are as follows:

1. Cropping
2. Rotating
3. Scaling
4. Translating
5. Flipping
6. Adding Gaussian noise to input images etc.
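Some of these transformations fit in a few lines of NumPy; a sketch of noise-adding and flipping on an image batch (note that horizontal flips can change a digit's identity, which is why the generator used in this notebook sticks to small rotations, shifts, shears and zooms):

```python
import numpy as np

def augment(images, rng, noise_std=0.05, flip=False):
    # images: (n, rows, cols, channels); returns noisy, optionally flipped copies
    noisy = images + rng.normal(0.0, noise_std, size=images.shape)
    if flip:
        noisy = noisy[:, :, ::-1, :]  # reversing the column axis = horizontal flip
    return noisy

rng = np.random.default_rng(0)
batch = np.zeros((4, 28, 28, 1), dtype=np.float32)
augmented = augment(batch, rng)  # same shape, zero-mean Gaussian noise added
```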
```
gen = ImageDataGenerator(rotation_range=8, width_shift_range=0.08, shear_range=0.3,
                         height_shift_range=0.08, zoom_range=0.08)
batches = gen.flow(X_train, y_train, batch_size=64)
val_batches = gen.flow(X_val, y_val, batch_size=64)

model.optimizer.lr = 0.001
history = model.fit_generator(generator=batches, steps_per_epoch=batches.n // 64, epochs=1,
                              validation_data=val_batches, validation_steps=val_batches.n // 64)
```

## Adding Batch Normalization

BN makes hyperparameter tuning easier and helps train really deep neural networks.

```
from keras.layers.normalization import BatchNormalization

def get_bn_model():
    model = Sequential([
        Lambda(standardize, input_shape=(28,28,1)),
        Convolution2D(32,(3,3), activation='relu'),
        BatchNormalization(axis=1),
        Convolution2D(32,(3,3), activation='relu'),
        MaxPooling2D(),
        BatchNormalization(axis=1),
        Convolution2D(64,(3,3), activation='relu'),
        BatchNormalization(axis=1),
        Convolution2D(64,(3,3), activation='relu'),
        MaxPooling2D(),
        Flatten(),
        BatchNormalization(),
        Dense(512, activation='relu'),
        BatchNormalization(),
        Dense(10, activation='softmax')
        ])
    model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
    return model

model = get_bn_model()
model.optimizer.lr = 0.01
history = model.fit_generator(generator=batches, steps_per_epoch=batches.n // 64, epochs=1,
                              validation_data=val_batches, validation_steps=val_batches.n // 64)
```

## Submitting Predictions to Kaggle.

Make sure you use the full training dataset here to train the model, and predict on the test set.

```
model.optimizer.lr = 0.01
gen = image.ImageDataGenerator()
batches = gen.flow(X, y, batch_size=64)
history = model.fit_generator(generator=batches, steps_per_epoch=batches.n // 64, epochs=3)

predictions = model.predict_classes(X_test, verbose=0)

submissions = pd.DataFrame({"ImageId": list(range(1, len(predictions)+1)),
                            "Label": predictions})
submissions.to_csv("DR.csv", index=False, header=True)
```

More to come. Please upvote if you find it useful.
You can increase the number of epochs on your GPU-enabled machine to get better results.
# Numpy and Matplotlib #

These are two of the most fundamental parts of the scientific python "ecosystem". Most everything else is built on top of them.

This is an introduction to python written by [Ryan Abernathey at Columbia University](https://ocean-transport.github.io/index.html) for his module on [Research Computing](https://rabernat.github.io/research_computing/).

```
import numpy as np
```

What did we just do? We _imported_ a package. This brings new variables (mostly functions) into our interpreter. We access them as follows.

```
# find out what is in our namespace
dir()

# find out what's in numpy
dir(np)

# find out what version we have
np.__version__
```

The numpy documentation is crucial! http://docs.scipy.org/doc/numpy/reference/

## NDArrays ##

The core class is the numpy ndarray (n-dimensional array).

```
from IPython.display import Image
Image(url='http://docs.scipy.org/doc/numpy/_images/threefundamental.png')

# create an array from a list
a = np.array([9,0,2,1,0])

# find out the datatype
a.dtype

# find out the shape
a.shape

# what type is the shape
type(a.shape)

# another array with a different datatype and shape
b = np.array([[5,3,1,9],[9,2,3,0]], dtype=np.float64)

# check dtype and shape
b.dtype, b.shape
```

__Important Concept__: The fastest varying dimension is the last dimension! The outer level of the hierarchy is the first dimension. (This is called "c-style" indexing)

## More array creation ##

There are lots of ways to create arrays.
```
# create some uniform arrays
c = np.zeros((9,9))
d = np.ones((3,6,3), dtype=np.complex128)
e = np.full((3,3), np.pi)
e = np.ones_like(c)
f = np.zeros_like(d)

# create some ranges
np.arange(10)

# arange is left inclusive, right exclusive
np.arange(2,4,0.25)

# linearly spaced
np.linspace(2,4,20)

# log spaced
np.logspace(1,2,10)

# two dimensional grids
x = np.linspace(-2*np.pi, 2*np.pi, 100)
y = np.linspace(-np.pi, np.pi, 50)
xx, yy = np.meshgrid(x, y)
xx.shape, yy.shape
```

## Indexing ##

Basic indexing is similar to lists

```
# get some individual elements of xx
xx[0,0], xx[-1,-1], xx[3,-5]

# get some whole rows and columns
xx[0].shape, xx[:,-1].shape

# get some ranges
xx[3:10,30:40].shape
```

There are many advanced ways to index arrays. You can read about them in the manual. Here is one example.

```
# use a boolean array as an index
idx = xx<0
yy[idx].shape

# the array got flattened
xx.ravel().shape
```

## Array Operations ##

There are a huge number of operations available on arrays. All the familiar arithmetic operators are applied on an element-by-element basis.

### Basic Math ##

```
f = np.sin(xx) * np.cos(0.5*yy)
```

At this point you might be getting curious what these arrays "look" like. So we need to introduce some visualization.
```
from matplotlib import pyplot as plt
%matplotlib inline
#This last line is important as otherwise the plots won't show in your notebook

plt.pcolormesh(f)
```

## Manipulating array dimensions ##

```
# transpose
plt.pcolormesh(f.T)

# reshape an array (wrong size)
g = np.reshape(f, (8,9))

# reshape an array (right size) and mess it up
print(f.size)
g = np.reshape(f, (200,25))
plt.pcolormesh(g)

# tile an array
plt.pcolormesh(np.tile(f,(6,1)))
```

## Broadcasting ##

Broadcasting is an efficient way to combine arrays of different shapes.

```
Image(url='http://scipy-lectures.github.io/_images/numpy_broadcasting.png', width=720)

# multiply f by x
print(f.shape, x.shape)
g = f * x
print(g.shape)

# multiply f by y
print(f.shape, y.shape)
h = f * y
print(h.shape)

# use newaxis special syntax
h = f * y[:,np.newaxis]
print(h.shape)

plt.pcolormesh(g)
```

## Reduction Operations ##

```
# sum
g.sum()

# mean
g.mean()

# std
g.std()

# apply on just one axis
g_ymean = g.mean(axis=0)
g_xmean = g.mean(axis=1)

plt.plot(x, g_ymean)
plt.plot(g_xmean, y)
```

## Fancy Plotting ##

Enough lessons, let's have some fun.
```
fig = plt.figure(figsize=(12,8))
ax1 = plt.subplot2grid((6,6),(0,1),colspan=5)
ax2 = plt.subplot2grid((6,6),(1,0),rowspan=5)

fig = plt.figure(figsize=(10,6))
ax1 = plt.subplot2grid((6,6),(0,1),colspan=5)
ax2 = plt.subplot2grid((6,6),(1,0),rowspan=5)
ax3 = plt.subplot2grid((6,6),(1,1),rowspan=5, colspan=5)

ax1.plot(x, g_ymean)
ax2.plot(g_xmean, y)
ax3.pcolormesh(x, y, g)

ax1.set_xlim([x.min(), x.max()])
ax3.set_xlim([x.min(), x.max()])
ax2.set_ylim([y.min(), y.max()])
ax3.set_ylim([y.min(), y.max()])

plt.tight_layout()
```

## Real Data ##

ARGO float profile from North Atlantic

```
# download with curl
!curl -O https://www.ldeo.columbia.edu/~rpa/argo_float_4901412.npz

# load numpy file and examine keys
data = np.load('argo_float_4901412.npz')
data.keys()

# access some data
T = data['T']

# there are "nans", missing data, which screw up our routines
T.min()

ar_w_mask = np.ma.masked_array([1, 2, 3, 4, 5], mask=[True, True, False, False, False])
ar_w_mask
ar_w_mask.mean()

T_ma = np.ma.masked_invalid(T)
T_ma.mean()
```

## Masked Arrays ##

This is how we deal with missing data in numpy

```
# create masked array
T = np.ma.masked_invalid(data['T'])
type(T)

# max and min
T.max(), T.min()

# load other data
S = np.ma.masked_invalid(data['S'])
P = np.ma.masked_invalid(data['P'])

# scatter plot
plt.scatter(S, T, c=P)
plt.grid()
plt.colorbar()
```
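The point of `masked_invalid` is that reductions simply skip the masked entries instead of propagating NaN; a quick self-contained check of that behaviour:

```python
import numpy as np

raw = np.array([1.0, 2.0, np.nan, 4.0])
assert np.isnan(raw.mean())          # NaN poisons the plain mean

masked = np.ma.masked_invalid(raw)   # mask wherever the value is NaN or inf
clean_mean = masked.mean()           # average of 1, 2 and 4 only
clean_max = masked.max()             # ignores the masked entry
```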
# Clustering CIML

Clustering experiment on CIML.

**Motivation:** During CIML supervised learning on multiple classification experiments, where the classes are cloud operators providing the VMs to run CI jobs, the classes predicted with the best metrics were those with the highest number of samples in the dataset. We want to evaluate whether unsupervised learning can group those cloud providers with high support in separate clusters.

Clustering algorithm: k-means.
<br>Method for deciding the number of clusters: elbow method and silhouette score.

```
from ciml import gather_results
from ciml import tf_trainer

from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.cm as cmx
import matplotlib.colors as pltcolors
import matplotlib.pyplot as plt
import plotly.express as px
from plotly.subplots import make_subplots

from sklearn import metrics
from scipy.spatial.distance import cdist
from sklearn.metrics import silhouette_samples, silhouette_score
import matplotlib.cm as cm
```

## Data loading and analysis

From the supervised learning experiments on multiple data classification on CIML data, the best results were obtained for the following experiment:

* Features from dstat data: User CPU `usr` and Average System Load `1m`.
* Data resolution: 1 minute
* Classes reduction: cloud providers with several regions were mapped to a single class.
* Model hyperparameters:
  * NW topology: DNN with 3 hidden layers and 100 units per layer.
  * Activation function: RELU.
  * Output layer: Sigmoid.
  * Initial learning rate: 0.05
  * Optimizer: Adagrad

We will load the dataset used for this experiment and analyse the distribution of samples per cloud provider.
```
#Define datapath
data_path = '/Users/kw/ciml_data/cimlodsceu2019seed'
#dataset = 'usr_1m-10s-node_provider'
dataset = 'usr_1m-1min-node_provider'

#Dataset including classes
labels = gather_results.load_dataset(dataset, 'labels', data_path=data_path)['labels']
training_data = gather_results.load_dataset(dataset, 'training', data_path=data_path)
test_data = gather_results.load_dataset(dataset, 'test', data_path=data_path)
config = gather_results.load_model_config(dataset, data_path=data_path)
classes = training_data['classes']
examples = training_data['examples']
example_ids = training_data['example_ids']

# Create an int representation of class
unique_classes = list(set(classes))
dict_classes = dict(zip(unique_classes, list(range(len(unique_classes)))))
int_classes = [dict_classes[x] for x in classes]

df_data = pd.DataFrame(examples, columns=labels, index=example_ids)
df_data['classes'] = int_classes
```

The dataset contains 185 features and 2377 samples. Each sample is a CI job run.

```
#Let's have a look at the data
df_data.shape
```

We now list the cloud provider classes in the dataset and see how many samples the dataset contains per class.

```
#Cloud providers in the dataset and their numerical mapping
classes_count = pd.DataFrame.from_dict(dict_classes, orient='index').reset_index()
classes_count = classes_count.rename(columns={'index':'cloud_prov', 0:'id'})
classes_count

#Add the total amount of samples in the dataset per cloud provider to have an overall view of the dataset
total_count = pd.DataFrame(df_data['classes'].value_counts()).add_suffix('_count').reset_index()
classes_count['count'] = classes_count.apply(
    lambda x: (total_count[total_count['index']==x['id']]['classes_count']).values[0],
    axis=1, result_type='expand')
classes_count.sort_values(by='count', ascending=False)
```

## Determine the optimal number of clusters

Next step is to determine the optimal number of clusters for training our k-means clustering model.
<br>We will use the elbow method and the silhouette score to find out their recommendation.

```
#Numpy representation of the dataframe df_data.
#This representation is needed for calculating the silhouette coefficients.
cluster_examples = df_data.to_numpy()
cluster_examples.shape
```

### Elbow method

In cluster analysis, the elbow method is a heuristic used in determining the number of clusters in a data set.
<br>The method consists of plotting the explained variation as a function of the number of clusters, and picking the elbow of the curve as the number of clusters to use.[1](https://en.wikipedia.org/wiki/Elbow_method_(clustering)#:~:text=In%20cluster%20analysis%2C%20the%20elbow,number%20of%20clusters%20to%20use.)

```
# k-means: determine k using the elbow method
distortions = []
K = range(1,10)
X = cluster_examples
for k in K:
    kmeanModel = KMeans(n_clusters=k).fit(X)
    distortions.append(sum(np.min(cdist(X, kmeanModel.cluster_centers_, 'euclidean'), axis=1)) / X.shape[0])

# Plot the elbow
plt.plot(K, distortions, 'bx-')
plt.xlabel('k')
plt.ylabel('Distortion')
plt.title('The Elbow Method showing the optimal k')
plt.show()
```

The elbow method suggests running k-means with 2 clusters.

### Silhouette score

The elbow method can be ambiguous; as an alternative, the average silhouette method can be used.
<br>The silhouette value is a measure of how similar an object is to its own cluster (cohesion) compared
<br>to other clusters (separation). The silhouette ranges from −1 to +1, where a high value indicates that
<br>the object is well matched to its own cluster and poorly matched to neighboring clusters.
<br>If most objects have a high value, then the clustering configuration is appropriate.
<br>If many points have a low or negative value, then the clustering configuration may have too many or too few clusters.
[2](https://en.wikipedia.org/wiki/Silhouette_(clustering)#:~:text=Silhouette%20refers%20to%20a%20method,consistency%20within%20clusters%20of%20data.&text=The%20silhouette%20ranges%20from%20%E2%88%921,poorly%20matched%20to%20neighboring%20clusters.)

```
X = cluster_examples
range_n_clusters = (2,3,4,5,6,7,8)

for n_clusters in range_n_clusters:
    # Create a subplot with 1 row and 2 columns
    fig, (ax1, ax2) = plt.subplots(1, 2)
    fig.set_size_inches(18, 7)

    # The 1st subplot is the silhouette plot
    # The silhouette coefficient can range from -1, 1 but in this example all
    # lie within [-0.1, 1]
    ax1.set_xlim([-0.1, 1])
    # The (n_clusters+1)*10 is for inserting blank space between silhouette
    # plots of individual clusters, to demarcate them clearly.
    ax1.set_ylim([0, len(X) + (n_clusters + 1) * 10])

    # Initialize the clusterer with the n_clusters value and a random generator
    # seed of 555 for reproducibility.
    clusterer = KMeans(n_clusters=n_clusters, random_state=555)
    cluster_labels = clusterer.fit_predict(X)

    # The silhouette_score gives the average value for all the samples.
    # This gives a perspective into the density and separation of the formed
    # clusters
    silhouette_avg = silhouette_score(X, cluster_labels)
    print("For n_clusters =", n_clusters,
          "The average silhouette_score is :", silhouette_avg)

    # Compute the silhouette scores for each sample
    sample_silhouette_values = silhouette_samples(X, cluster_labels)

    y_lower = 10
    for i in range(n_clusters):
        # Aggregate the silhouette scores for samples belonging to
        # cluster i, and sort them
        ith_cluster_silhouette_values = \
            sample_silhouette_values[cluster_labels == i]
        ith_cluster_silhouette_values.sort()

        size_cluster_i = ith_cluster_silhouette_values.shape[0]
        y_upper = y_lower + size_cluster_i

        color = cm.nipy_spectral(float(i) / n_clusters)
        ax1.fill_betweenx(np.arange(y_lower, y_upper),
                          0, ith_cluster_silhouette_values,
                          facecolor=color, edgecolor=color, alpha=0.7)

        # Label the silhouette plots with their cluster numbers at the middle
        ax1.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))

        # Compute the new y_lower for next plot
        y_lower = y_upper + 10  # 10 for the 0 samples

    ax1.set_title("The silhouette plot for the various clusters.")
    ax1.set_xlabel("The silhouette coefficient values")
    ax1.set_ylabel("Cluster label")

    # The vertical line for average silhouette score of all the values
    ax1.axvline(x=silhouette_avg, color="red", linestyle="--")

    ax1.set_yticks([])  # Clear the yaxis labels / ticks
    ax1.set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])

    # 2nd Plot showing the actual clusters formed
    colors = cm.nipy_spectral(cluster_labels.astype(float) / n_clusters)
    ax2.scatter(X[:, 0], X[:, 1], marker='.', s=30, lw=0, alpha=0.7,
                c=colors, edgecolor='k')

    # Labeling the clusters
    centers = clusterer.cluster_centers_
    # Draw white circles at cluster centers
    ax2.scatter(centers[:, 0], centers[:, 1], marker='o',
                c="white", alpha=1, s=200, edgecolor='k')

    for i, c in enumerate(centers):
        ax2.scatter(c[0], c[1], marker='$%d$' % i, alpha=1,
                    s=50, edgecolor='k')

    ax2.set_title("The visualization of the clustered data.")
    ax2.set_xlabel("Feature space for the 1st feature")
    ax2.set_ylabel("Feature space for the 2nd feature")

    plt.suptitle(("Silhouette analysis for KMeans clustering on sample data "
                  "with n_clusters = %d" % n_clusters),
                 fontsize=14, fontweight='bold')

plt.show()
```

For 2, 3, 5 and 6 clusters, the silhouette coefficient has higher values, with the best clustering separation for 2 clusters.

## Clustering Experiments

We now run the experiments using k-means with two, three, four, five and six clusters and evaluate how the cloud providers are grouped in them.
<br>First we define the functions to execute the training and create an overview of the results.

```
experiments = [2,3,4,5,6]
data_clusters = df_data.copy()
data_clusters.head()

def k_training(c):
    clusterer = KMeans(n_clusters=c, random_state=555)
    cluster_labels = clusterer.fit_predict(X)
    k_labels = clusterer.labels_
    #Extend the dataframe holding the original dataset with the cluster label found during k-means training
    data_clusters['clusters_'+str(c)] = k_labels

classes_totals = data_clusters['classes'].value_counts()
```

We define a function to produce an overview of the resulting clustering including:

* List of cloud providers in each cluster.
* Percentage of the overall samples of the cloud provider included in the cluster, `p_class`.
* Percentage of the cluster covered by the cloud provider, `p_cluster`.
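The two percentages reduce to simple ratios over a contingency count; a stdlib-only toy check (the class/cluster assignments here are made up for illustration):

```python
from collections import Counter

classes  = [0, 0, 0, 1, 1]   # two classes
clusters = [0, 0, 1, 1, 1]   # two clusters found by a hypothetical k-means run

pair_counts = Counter(zip(clusters, classes))
class_totals = Counter(classes)
cluster_totals = Counter(clusters)

def p_class(cluster, cls):
    # share of the class's samples that landed in this cluster
    return 100.0 * pair_counts[(cluster, cls)] / class_totals[cls]

def p_cluster(cluster, cls):
    # share of the cluster made up by this class
    return 100.0 * pair_counts[(cluster, cls)] / cluster_totals[cluster]

# class 0: 2 of its 3 samples fall in cluster 0, and they fill that cluster completely
```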
```
def statistics(c):
    clusters_totals = data_clusters['clusters_'+str(c)].value_counts()
    stats = pd.DataFrame(data_clusters.groupby(by=['clusters_'+str(c),'classes'])['classes'].count())
    stats = stats.add_suffix('_count').reset_index()
    stats['p_class'] = (stats.apply(
        lambda x: 100*x['classes_count']/classes_totals[x['classes']],
        axis=1, result_type='expand')).round(2)
    stats['p_cluster'] = (stats.apply(
        lambda x: 100*x['classes_count']/clusters_totals[x['clusters_'+str(c)]],
        axis=1, result_type='expand')).round(2)
    stats['cloud_prov'] = stats.apply(
        lambda x: (classes_count[classes_count['id']==x['classes']]['cloud_prov']).values[0],
        axis=1, result_type='expand')
    return stats
```

We define a function to highlight, in the table returned by `stats`, the class with the biggest coverage within a cluster.

```
def highlight_biggestclass(row):
    return ['background-color: orange' if (row.p_cluster > 50)
            else 'background-color: cyan' if (row.p_class > 50)
            else 'background-color: white']*6
```

# Experiment runs and results

Compare with the number of samples of each cloud provider in the original dataset:

```
classes_count.sort_values(by='count', ascending=False)
```

## Experiment with 2 clusters

```
k_training(2)
stats = statistics(2)
stats.style.apply(highlight_biggestclass, axis=1)
```

Besides cloud operator `vexxhost`, which is distributed across the two clusters, the remaining cloud operators are separated between the two clusters.
<br>However, this result is not significant for the aim of our experiments.

## Experiment with 3 clusters

```
k_training(3)
stats = statistics(3)
stats.style.apply(highlight_biggestclass, axis=1)
```

The cloud providers are split across the clusters and the result is not significant.
## Experiment with 4 clusters

```
k_training(4)
stats = statistics(4)
stats.style.apply(highlight_biggestclass, axis=1)
```

Three of the cloud operators have predominance in separate clusters.
<br>Cloud operator `rax` is the one with the highest support in the dataset and dominates cluster 2, even though with only 20% of the samples of its class.
<br>Cloud operator `inap` is grouped in a cluster with little noise and 99.69% of its samples.
<br>Cloud operator `ovh` is grouped in a separate cluster with little noise and 99.01% of its samples.

## Experiment with 5 clusters

```
k_training(5)
stats = statistics(5)
stats.style.apply(highlight_biggestclass, axis=1)
```

<br>Cloud operator `inap` is grouped in a cluster with 99.69% of its samples and even less noise than in the experiment with 4 clusters.
<br>Cloud operators `rax` and `ovh` also have separate clusters with high class and cluster coverage. However, they are also predominant in two other clusters, as they have more samples than the remaining operators.

## Experiment with 6 clusters

```
k_training(6)
stats = statistics(6)
stats.style.apply(highlight_biggestclass, axis=1)
```

The resulting clustering is noisy, with the exception of cloud operator `inap`.

### Conclusion

Although the elbow method suggested 2 clusters and the silhouette score recommended 2 or 3 clusters as the optimal number of clusters for training, in the resulting experiments the clustering with the best differentiation among cloud providers was the one with 4 clusters.
<br>We do not consider the experiment with 2 clusters the best result, as we wanted to evaluate how many operators with high support a clustering algorithm could separate.

For experiments with more than 3 clusters, the cloud operator `inap` was grouped in a separate cluster with very little noise and 99.69% of its samples. This result indicates that the dstat data generated when running CI jobs on `inap` VMs has a combination of values discernible enough for k-means to group them efficiently.
The top three cloud operators by support in the dataset (`rax`, `ovh` and `inap`) could be grouped into different clusters. Cloud operator `rax` has the highest support and had a unique cluster only in the experiment with 2 clusters; in the experiments with 3 and 4 clusters it was split across two clusters, with at most 79% of its samples in a single cluster. This might be due to its regions having been reduced to a single class. Cloud operator `ovh` had the best coverage of samples in a single cluster in the experiment with 4 clusters (99%). In general, the dstat data from the CI jobs has potential for further exploration using unsupervised learning. <br>In particular, clustering failed CI jobs could help engineers better triage failures coming from the gate pipeline of the OpenStack CI system. This approach could be used in other CI systems as well.
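The class coverage and cluster coverage figures quoted above can be computed with a small helper. The sketch below uses synthetic stand-in labels, since the dstat dataset and the notebook's `statistics` helper are not reproduced in this excerpt; only the two coverage definitions are taken from the text.

```python
from collections import Counter

def class_coverage(labels, clusters):
    """For each class, locate its dominant cluster and report:
    - class_coverage: share of the class's samples that land in that cluster
    - cluster_coverage: share of that cluster occupied by the class
    """
    stats = {}
    for cls in set(labels):
        idx = [i for i, lab in enumerate(labels) if lab == cls]
        dominant, count = Counter(clusters[i] for i in idx).most_common(1)[0]
        cluster_size = sum(1 for c in clusters if c == dominant)
        stats[cls] = {
            "cluster": dominant,
            "class_coverage": count / len(idx),
            "cluster_coverage": count / cluster_size,
        }
    return stats

# Synthetic stand-in labels: "inap" is concentrated in cluster 1
labels = ["rax", "rax", "rax", "inap", "inap", "ovh", "ovh"]
clusters = [0, 0, 1, 1, 1, 2, 2]
stats = class_coverage(labels, clusters)
print(stats["inap"])  # all "inap" samples fall in cluster 1, shared with one "rax" sample
```

An operator like `inap` with class coverage near 1.0 and high cluster coverage is exactly the "separate cluster with very little noise" case described in the conclusion.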
<a href="https://colab.research.google.com/github/finerbrighterlighter/myanmar_covid19/blob/master/exponential_growth.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> #Libraries ``` import statsmodels.api as sm import pandas as pd import numpy as np from math import pi import seaborn as sns import matplotlib.pyplot as plt import matplotlib.ticker as mticker from matplotlib.ticker import StrMethodFormatter from google.colab import files import warnings warnings.simplefilter(action='ignore', category=FutureWarning) %matplotlib inline ``` #Data Since the Government fails to provide a complete and open dataset for disease status in the country, several young doctors in Myanmar volunteered on their own to monitor announcements. Current data applied is collected by Dr. Nyein Chan Ko Ko. ``` data = "https://raw.githubusercontent.com/finerbrighterlighter/myanmar_covid19/master/mohs_announcement.csv" df = pd.read_csv(data,header= 0) df.insert(loc=0, column="case_id", value=np.arange(1,len(df)+1)) df["case_id"] = "case_" + df["case_id"].astype(str) df["first_date"] = pd.to_datetime(df["first_date"].values, dayfirst=True, utc=False).tz_localize("Asia/Yangon") df["qua_date"] = pd.to_datetime(df["qua_date"].values, dayfirst=True, utc=False).tz_localize("Asia/Yangon") df["ann_date"] = pd.to_datetime(df["ann_date"].values, dayfirst=True, utc=False).tz_localize("Asia/Yangon") df["exp_date"] = pd.to_datetime(df["exp_date"].values, dayfirst=True, utc=False).tz_localize("Asia/Yangon") df["dsc_date"] = pd.to_datetime(df["dsc_date"].values, dayfirst=True, utc=False).tz_localize("Asia/Yangon") df ``` # Basic Timeline ( Total cases, Daily new cases, infection spread) ``` case_df = df[["ann_date","travel"]].copy() case_df.columns = ["date", "travel"] case_df["overseas_inflow"] = np.where(df["travel"].isna(), 0, 1) case_df["local_spread"] = np.where(df["travel"].notna(), 0, 1) case_df["known_contact"] = 
np.where(df["travel"].notna(), 0, np.where(df["contact"]=="0", 0, np.where(df["contact"]=="1", 0, 1))) case_df["unknown_contact"] = np.where(df["travel"].notna(), 0, np.where(df["contact"]=="0", 1, 0)) case_df["contact_blinded"] = np.where(df["travel"].notna(), 0, np.where(df["contact"]=="1", 1, 0)) case_df["date"] = pd.to_datetime(case_df["date"]) case_df.drop("travel", axis=1 , inplace=True) case_df=case_df.groupby(["date"]).sum().reset_index() case_df timeline_df = pd.DataFrame(columns=["ndays","date"]) timeline_df["ndays"] = np.arange(len(pd.date_range(start=df.ann_date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")))) timeline_df.loc[0,"date"]=df.ann_date.min() for i in range(1,len(timeline_df)): timeline_df.loc[i,"date"] = timeline_df.loc[i-1,"date"] + pd.Timedelta(days=1) i=i+1 timeline_df["date"] = pd.to_datetime(timeline_df["date"]) timeline_df=timeline_df.merge(case_df,indicator=False,how='left') timeline_df["overseas_inflow"].fillna(0, inplace=True) timeline_df["local_spread"].fillna(0, inplace=True) timeline_df["known_contact"].fillna(0, inplace=True) timeline_df["unknown_contact"].fillna(0, inplace=True) timeline_df["contact_blinded"].fillna(0, inplace=True) timeline_df["overseas_inflow"]=timeline_df["overseas_inflow"].astype(int) timeline_df["local_spread"]=timeline_df["local_spread"].astype(int) timeline_df["known_contact"]=timeline_df["known_contact"].astype(int) timeline_df["unknown_contact"]=timeline_df["unknown_contact"].astype(int) timeline_df["contact_blinded"]=timeline_df["contact_blinded"].astype(int) timeline_df["total"] = (timeline_df["overseas_inflow"]+timeline_df["local_spread"]).cumsum().astype(int) timeline_df ``` # Pie (Donut) Chart ``` osf = timeline_df["overseas_inflow"].sum()/timeline_df["total"][timeline_df.index[-1]] ls = timeline_df["local_spread"].sum()/timeline_df["total"][timeline_df.index[-1]] ls_kc = timeline_df["known_contact"].sum()/timeline_df["total"][timeline_df.index[-1]] ls_ukc = 
timeline_df["unknown_contact"].sum()/timeline_df["total"][timeline_df.index[-1]] con_bli = timeline_df["contact_blinded"].sum()/timeline_df["total"][timeline_df.index[-1]] # First Ring (outside) fig, ax = plt.subplots() ax.axis('equal') mypie, _ = ax.pie([osf,ls], radius=2, labels=["Overseas Inflow = "+str("{:.2f}".format(osf*100))+" %", "Local Spread = "+str("{:.2f}".format(ls*100))+" %"], labeldistance=1,colors=["cornflowerblue", "lightcoral"]) plt.setp( mypie, width=0.7, edgecolor='white') # Second Ring (Inside) mypie2, _ = ax.pie([osf,ls_kc,ls_ukc,con_bli], radius=2-0.7, labels=[" ", "Known Contact = "+str("{:.2f}".format(ls_kc*100))+" %", "Unknown Contact = "+str("{:.2f}".format(ls_ukc*100))+" %","Contact Blinded = "+str("{:.2f}".format(con_bli*100))+" %"], labeldistance=0.8, colors=["lightsteelblue", "rosybrown", "firebrick", "coral"]) plt.setp( mypie2, width=0.5, edgecolor='white') plt.margins(0,0) plt.title("Total cases as of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y")),y=1.5) plt.text(0, -0.5, "* out of "+str(timeline_df["total"][timeline_df.index[-1]])+" patients confirmed as of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y")), horizontalalignment="left", verticalalignment="bottom", transform=ax.transAxes) # show it tot_dist = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_total_dist.svg" plt.savefig(tot_dist, bbox_inches = "tight", format="svg") plt.show() files.download(tot_dist) ``` ## Daily New Case ``` fig, ax = plt.subplots(figsize=(10,5)) ax.grid(linestyle=':', linewidth='0.5', color='silver') ax.set_axisbelow(True) xindex = np.arange(len(pd.date_range(start=timeline_df.date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")))) plt.xticks(xindex,pd.date_range(start=timeline_df.date.min(), 
end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")).strftime('%d/%m'), fontsize=10, rotation=90) plt.gca().yaxis.set_major_locator(mticker.MultipleLocator(5)) oi_case = plt.bar(xindex, timeline_df["overseas_inflow"], color = "cornflowerblue") ls_k_case = plt.bar(xindex, timeline_df["known_contact"], bottom=timeline_df["overseas_inflow"], color = "rosybrown") ls_bli = plt.bar(xindex, timeline_df["contact_blinded"], bottom=timeline_df["overseas_inflow"]+timeline_df["known_contact"], color = "coral") ls_uk_case = plt.bar(xindex, timeline_df["unknown_contact"], bottom=timeline_df["overseas_inflow"]+timeline_df["known_contact"]+timeline_df["contact_blinded"], color = "firebrick") """oi_case = plt.plot(xindex, timeline_df["overseas_inflow"], color = "cornflowerblue") ls_k_case = plt.plot(xindex, timeline_df["known_contact"], color = "rosybrown") ls_uk_case = plt.plot(xindex, timeline_df["unknown_contact"], color = "firebrick") total = plt.plot(xindex, timeline_df["overseas_inflow"]+timeline_df["known_contact"]+timeline_df["unknown_contact"], color = "teal")""" plt.title("Daily new cases as of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y"))) plt.legend((oi_case[0],ls_k_case[0],ls_bli[0],ls_uk_case[0]), ("Overseas Inflow", "Local Spread ( Known Contact )", "Local Spread ( Contact Blinded )", "Local Spread ( Unknown Contact )"),loc="lower left", bbox_to_anchor=(0, -0.4)) #plt.legend((oi_case[0],ls_k_case[0],ls_uk_case[0],total[0]), ("Overseas Inflow", "Local Spread ( Known Contact )", "Local Spread ( Unknown Contact ) or Local Spread ( Contact Blinded )","Total cases per day"),loc="lower left", bbox_to_anchor=(0, -0.4)) new_cases = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_new_cases.svg" plt.savefig(new_cases, bbox_inches = "tight") plt.show() files.download(new_cases) ``` #Mortality ``` exp_df = pd.DataFrame(columns=["ndays","date"]) exp_df["ndays"] 
= np.arange(len(pd.date_range(start=case_df.date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")))) exp_df.loc[0,"date"]=case_df.date.min() for i in range(1,len(exp_df)): exp_df.loc[i,"date"] = exp_df.loc[i-1,"date"] + pd.Timedelta(days=1) i=i+1 exp_df["date"] = pd.to_datetime(exp_df["date"]) exp_df=exp_df.merge(df.groupby(["exp_date"]).size().to_frame("expire"),left_on="date",right_on="exp_date",indicator=False,how='left') exp_df["expire"].fillna(0, inplace=True) exp_df["expire"]=exp_df["expire"].astype(int) exp_df["total"]=exp_df["expire"].cumsum().astype(int) exp_df fig, ax = plt.subplots(figsize=(10,5)) ax.grid(linestyle=':', linewidth='0.5', color='silver') ax.set_axisbelow(True) xindex = np.arange(len(pd.date_range(start=exp_df.date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")))) plt.xticks(xindex,pd.date_range(start=exp_df.date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")).strftime('%d/%m'), fontsize=10, rotation=90) plt.gca().yaxis.set_major_locator(mticker.MultipleLocator(1)) expire = plt.bar(xindex,exp_df["total"], linestyle=(0, (3, 1, 1, 1, 1, 1)), color="red") plt.title("Cumulative mortality as of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y"))) #plt.legend((expire),("Patient expired",),loc='upper left', bbox_to_anchor=(1, 1)) exp_cases = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_exp.svg" plt.savefig(exp_cases, bbox_inches = "tight") plt.show() files.download(exp_cases) ``` ## Radar chart for underlying conditions of expired patients ``` mort_data = "https://raw.githubusercontent.com/finerbrighterlighter/myanmar_covid19/master/expired_underlying.csv" mort_df = pd.read_csv(mort_data,header= 0) mort_df #mort_df.insert(loc=0, column="exp_id", value=np.arange(1,len(mort_df)+1)) #mort_df["exp_id"] = "exp_" + mort_df["exp_id"].astype(str)
mort_df.drop("case", axis=1 , inplace=True) mort_df.drop("date", axis=1 , inplace=True) mort_df.drop("underlying_con", axis=1 , inplace=True) mort_df.ht_ds = mort_df.ht_ds.cumsum() mort_df.dm = mort_df.dm.cumsum() mort_df.ht = mort_df.ht.cumsum() mort_df.ch_resp = mort_df.ch_resp.cumsum() mort_df.ca = mort_df.ca.cumsum() mort_df Attributes =list(mort_df) AttNo = len(Attributes) Attributes values = mort_df.iloc[-1].tolist() values += values [:1] values[:] = [x / len(mort_df) for x in values] values angles = [n / float(AttNo) * 2 * pi for n in range(AttNo)] angles += angles [:1] fig = plt.figure(figsize=(8,8)) ax = plt.subplot(111, polar=True) #Add the attribute labels to our axes plt.xticks(angles[:-1],["Heart Disease", "Diabetes Mellitus", "Hypertension", "Chronic Respiratory Disease", "Cancer"]) plt.gca().yaxis.set_major_formatter(mticker.PercentFormatter(1,decimals=0)) plt.ylim(0, 1.0) #Plot the line around the outside of the filled area, using the angles and values calculated before ax.plot(angles,values) #Fill in the area plotted in the last line ax.fill(angles, values, 'slategray', alpha=0.8) #Give the plot a title and show it ax.set_title("Underlying conditions of the expired patients", pad=20) plt.text(1, -0.15, "* out of "+str(len(mort_df))+" patients expired as of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y")), horizontalalignment="left", verticalalignment="bottom", transform=ax.transAxes) under = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_underlying.svg" plt.savefig(under, bbox_inches = "tight") plt.show() files.download(under) ``` #Status ``` dsc_df = pd.DataFrame(columns=["ndays","date"]) dsc_df["ndays"] = np.arange(len(pd.date_range(start=case_df.date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")))) dsc_df.loc[0,"date"]=case_df.date.min() for i in range(1,len(dsc_df)): dsc_df.loc[i,"date"] = dsc_df.loc[i-1,"date"] + 
pd.Timedelta(days=1) i=i+1 dsc_df["date"] = pd.to_datetime(dsc_df["date"]) dsc_df=dsc_df.merge(df.groupby(["dsc_date"]).size().to_frame("recovered"),left_on="date",right_on="dsc_date",indicator=False,how='left') dsc_df["recovered"].fillna(0, inplace=True) dsc_df["recovered"]=dsc_df["recovered"].astype(int) dsc_df["total"]=dsc_df["recovered"].cumsum().astype(int) dsc_df total_df = timeline_df[["date","total"]].copy() total_df["expire"] = exp_df["total"] total_df["recovered"] = dsc_df["total"] total_df["hosp"] = (total_df["total"]-total_df["expire"]-total_df["recovered"]) total_df["expire"] = total_df["expire"]/total_df["total"] total_df["recovered"] = total_df["recovered"]/total_df["total"] total_df["hosp"] = total_df["hosp"]/total_df["total"] total_df fig, ax = plt.subplots(figsize=(15,7.5)) ax.grid(linestyle=':', which="both", linewidth='0.5', color='silver') box = dict(boxstyle="square, pad=1", facecolor="skyblue", alpha=0.25) xindex = np.arange(len(pd.date_range(start=total_df.date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")))) plt.xticks(xindex,pd.date_range(start=total_df.date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")).strftime('%d/%m'), fontsize=10, rotation=90) plt.gca().yaxis.set_major_formatter(mticker.PercentFormatter(1)) plt.gca().yaxis.set_major_locator(mticker.MultipleLocator(0.2)) plt.gca().yaxis.set_minor_locator(mticker.MultipleLocator(0.05)) local_spread = plt.stackplot(xindex,[total_df["expire"],total_df["recovered"],total_df["hosp"]],labels=["Patients expired","Patients recovered","Currently under hospitalization"],colors=["black","limegreen","teal"]) plt.title("Confirmed patients' status as of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y")),fontsize=15) plt.legend(loc="lower left", bbox_to_anchor=(0, -0.3),fontsize=12) plt.text(0, 0, "As of 
"+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y"))+ ",\nthere is a "+str("%.2f" %((total_df.loc[len(total_df)-1,"recovered"])*100))+" % recovery rate and a "+ str("%.2f" %((total_df.loc[len(total_df)-1,"expire"])*100))+" % mortality rate.", fontsize=15, linespacing= 2, bbox=box , position=(0.45,-0.25), horizontalalignment="left", verticalalignment="bottom", transform=ax.transAxes) status = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_status.svg" plt.savefig(status, bbox_inches = "tight") plt.show() files.download(status) ``` ## Spread Trend ``` spread_trend_df = pd.DataFrame(columns=["ndays","date"]) spread_trend_df["ndays"] = np.arange(len(pd.date_range(start=case_df.date.min()-pd.Timedelta(days=5), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")))) spread_trend_df.loc[0,"date"]=case_df.date.min()-pd.Timedelta(days=5) for i in range(1,len(spread_trend_df)): spread_trend_df.loc[i,"date"] = spread_trend_df.loc[i-1,"date"] + pd.Timedelta(days=1) i=i+1 spread_trend_df["date"] = pd.to_datetime(spread_trend_df["date"]) spread_trend_df=spread_trend_df.merge(case_df,indicator=False,how='left') spread_trend_df["overseas_inflow"].fillna(0, inplace=True) spread_trend_df["local_spread"].fillna(0, inplace=True) spread_trend_df["overseas_inflow"]=spread_trend_df["overseas_inflow"].astype(int) spread_trend_df["local_spread"]=spread_trend_df["local_spread"].astype(int) spread_trend_df["tot_overseas_inflow"]=spread_trend_df["overseas_inflow"].cumsum() spread_trend_df["tot_local_spread"]=spread_trend_df["local_spread"].cumsum() spread_trend_df fig, ax = plt.subplots(figsize=(15,5)) ax.grid(linestyle=':', which="both" , linewidth='0.5', color='silver') ax.set_axisbelow(True) xindex = np.arange(len(pd.date_range(start=spread_trend_df.date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")))) 
plt.xticks(xindex,pd.date_range(start=spread_trend_df.date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")).strftime('%d/%m'), fontsize=12, rotation=90) plt.yticks(fontsize=12) plt.gca().yaxis.set_major_locator(mticker.MultipleLocator(20)) plt.gca().yaxis.set_minor_locator(mticker.MultipleLocator(10)) box = dict(boxstyle='square,pad=1', facecolor="indianred", alpha=0.25) land_close_fore = plt.axvline(x=1, color="springgreen", linestyle="--") com_qua = plt.axvline(x=4, color="plum", linestyle="--") visa_close = plt.axvline(x=11, color="skyblue", linestyle="--") air_close = plt.axvline(x=12, color="gold", linestyle="--") insein_cluster = plt.axvline(x=25, color="brown", linestyle="--") thingyan_over= plt.axvline(x=33, color="burlywood", linestyle="--") total = plt.plot(xindex, spread_trend_df["tot_overseas_inflow"]+spread_trend_df["tot_local_spread"], color="teal") overseas_inflow = plt.plot(xindex, spread_trend_df["tot_overseas_inflow"], color="cornflowerblue") local_spread = plt.plot(xindex, spread_trend_df["tot_local_spread"], color="lightcoral") plt.title("Cumulative COVID-19 cases in Myanmar as of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y")),fontsize=15) plt.legend((total[0],local_spread[0],overseas_inflow[0],land_close_fore,com_qua,visa_close,air_close,insein_cluster,thingyan_over), ("Cumulative Total", "Local Spread", "Overseas Inflow", "Land border closed to foreigners", "All foreign entries have to undergo 14 days quarantine", "Visa paused","International Flight Ban", "Insein religious cluster is discovered","Thingyan Holidays are over"), loc="lower left", bbox_to_anchor=(-0.025, -0.85),fontsize=12) plt.text(0, 0, "The first COVID-19 patient was identified on 23rd March 2020.\n \nAs of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y"))+ ",\nThere are 
"+str(spread_trend_df.loc[len(spread_trend_df)-1,"tot_overseas_inflow"]+spread_trend_df.loc[len(spread_trend_df)-1,"tot_local_spread"])+" confirmed patients in Myanmar.\n"+ str(spread_trend_df.loc[len(spread_trend_df)-1,"tot_overseas_inflow"])+" of the patients had returned from foreign countries and\n"+ str(spread_trend_df.loc[len(spread_trend_df)-1,"tot_local_spread"])+" patients contracted it within the country.", fontsize=15, linespacing= 2, bbox=box , position=(0.45,-0.793), horizontalalignment="left", verticalalignment="bottom", transform=ax.transAxes) tot_cases = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_total_cases.svg" plt.savefig(tot_cases, bbox_inches = "tight", format="svg") plt.show() files.download(tot_cases) ``` # Age Distribution ``` age_df = df[["sex","age"]].copy() age_df["age_gp"]=pd.cut(age_df.age,bins=[0,10.0,20.0,30.0,40.0,50.0,60.0,70.0,80.0,90.0,100.0],labels=["0-10","10-20","20-30","30-40","40-50","50-60","60-70","70-80","80-90","90-100"]) age_df.drop("age", axis=1 , inplace=True) #pd.set_option('display.max_rows', 100) # discharged age_df["f_dsc"] = np.where(df["sex"]!="Female", 0, np.where(df["dsc_date"].notna(), 1, 0)) age_df["m_dsc"] = np.where(df["sex"]!="Male", 0, np.where(df["dsc_date"].notna(), 1, 0)) # expired age_df["f_exp"] = np.where(df["sex"]!="Female", 0, np.where(df["dsc_date"].notna(), 0, np.where(df["exp_date"].notna(), 1,0))) age_df["m_exp"] = np.where(df["sex"]!="Male", 0, np.where(df["dsc_date"].notna(), 0, np.where(df["exp_date"].notna(), 1,0))) # in hospital age_df["f_hosp"] = np.where(df["sex"]!="Female", 0, np.where(df["dsc_date"].notna(), 0, np.where(df["exp_date"].notna(), 0,1))) age_df["m_hosp"] = np.where(df["sex"]!="Male", 0, np.where(df["dsc_date"].notna(), 0, np.where(df["exp_date"].notna(), 0,1))) age_df age_df = age_df.groupby(["sex","age_gp"]).sum().reset_index() age_df["f_dsc"].fillna(0, inplace=True) age_df["m_dsc"].fillna(0, inplace=True) 
age_df["f_exp"].fillna(0, inplace=True) age_df["m_exp"].fillna(0, inplace=True) age_df["f_hosp"].fillna(0, inplace=True) age_df["m_hosp"].fillna(0, inplace=True) age_df # these arrays are no longer necessary # leaving them here to see the map into the dataframe # age_gp=age_df.iloc[0:9,1].to_numpy() # f_dsc=age_df.iloc[0:9,2].to_numpy() # f_dsc=f_dsc*-1 # m_dsc=age_df.iloc[10:19,3].to_numpy() # f_exp=age_df.iloc[0:9,4].to_numpy() # f_exp=f_exp*-1 # m_exp=age_df.iloc[10:19,5].to_numpy() # f_hosp=age_df.iloc[0:9,6].to_numpy() # f_hosp=f_hosp*-1 # m_hosp=age_df.iloc[10:19,7].to_numpy() fig, ax = plt.subplots(figsize=(10,5)) # grids ax.grid(linestyle=":", which="both", linewidth="0.5", color="silver") ax.set_axisbelow(True) plt.axvline(x=0, color="snow") # ticks plt.gca().xaxis.set_major_formatter(mticker.PercentFormatter(len(df))) plt.gca().xaxis.set_major_locator(mticker.MultipleLocator(5)) plt.gca().xaxis.set_major_formatter(StrMethodFormatter('{x:,.0f} %')) plt.gca().xaxis.set_minor_locator(mticker.MultipleLocator(1)) plt.xticks(rotation=90) plt.xlim((-25, 25)) # data m_hosp = plt.barh(age_df.iloc[0:9,1], age_df.iloc[10:19,7], color = "teal") m_dsc = plt.barh(age_df.iloc[0:9,1], age_df.iloc[10:19,3], left=age_df.iloc[10:19,7], color = "limegreen") m_exp = plt.barh(age_df.iloc[0:9,1], age_df.iloc[10:19,5], left=age_df.iloc[10:19,7]+age_df.iloc[10:19,3], color = "black") f_hosp = plt.barh(age_df.iloc[0:9,1], age_df.iloc[0:9,6]*-1, color = "teal") f_dsc = plt.barh(age_df.iloc[0:9,1], age_df.iloc[0:9,2]*-1, left=age_df.iloc[0:9,6]*-1, color = "limegreen") f_exp = plt.barh(age_df.iloc[0:9,1], age_df.iloc[0:9,4]*-1, left=(age_df.iloc[0:9,6]+age_df.iloc[0:9,2])*-1, color = "black") # titles, subtitles, labels and legends plt.title("Distribution of confirmed patients as of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y"))) plt.xlabel("Gender Distribution") plt.ylabel("Age Distribution") plt.text(0.25, -0.2, "Female", 
horizontalalignment="left", verticalalignment="center", transform=ax.transAxes) plt.text(0.7, -0.2, "Male", horizontalalignment="left", verticalalignment="center", transform=ax.transAxes) plt.text(0.01, -0.3, "Please read the X axis values as absolute values.", horizontalalignment="left", verticalalignment="bottom", transform=ax.transAxes) plt.legend((m_hosp[0],m_dsc[0],m_exp[0]), ("Currently under Hospitalization", "Recovered", "Expired"),loc="lower left", bbox_to_anchor=(0, -0.55)) # download age = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_age.svg" plt.savefig(age, bbox_inches = "tight", format="svg") plt.show() files.download(age) ``` - travel -> travel history - region -> states and administrative regions of Myanmar where the case is quarantined - first_date -> entry into the country, or first symptom for cases with no travel history - qua_date -> first date of hospital quarantine - ann_date -> date of announcement by MOHS as positive - exp_date -> date of patient's death - dsc_date -> date of discharge # Per Patient Timeline ``` timeline_df = pd.DataFrame(columns=["case_id"]) timeline_df["case_id"] = df["case_id"] timeline_df["until_qua"] = (df["qua_date"]-df["first_date"]).dt.days timeline_df["until_ann"] = (df["ann_date"]-df["qua_date"]).dt.days timeline_df["until_first"] = (df["first_date"]-df["first_date"].min()).dt.days timeline_df["hosp"] = np.where(df["exp_date"].isna(), np.where(df["dsc_date"].isna(), (pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")-df["ann_date"]).dt.days, (df["dsc_date"]-df["ann_date"]).dt.days), (df["exp_date"]-df["ann_date"]).dt.days) timeline_df["until_dsc"] = np.where(df["dsc_date"].notna(), 0.5, 0) timeline_df["until_exp"] = np.where(df["exp_date"].notna(), 0.5, 0) timeline_df # case_39 expired on the same day as admission. 
adding 0.5 to visualize on the plot timeline_df.loc[38, "hosp"] = timeline_df.loc[38, "hosp"] + 0.75 timeline_df.fillna(0, inplace=True) ``` ##Timeline for each patient(Bar Plot) ``` fig, ax = plt.subplots(figsize=(10,30)) ax.grid(linestyle=':', linewidth='0.5', color='silver') ax.set_axisbelow(True) yindex = np.arange(len(timeline_df["case_id"])) xindex = np.arange(len(pd.date_range(start=df.first_date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")))) plt.yticks(yindex,timeline_df["case_id"], fontsize=10) plt.xticks(xindex,pd.date_range(start=df.first_date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")).strftime('%d/%m'), fontsize=10, rotation=90) plt.gca().invert_yaxis() until_qua = plt.barh(yindex, timeline_df["until_qua"], left= timeline_df["until_first"], color = "tomato") until_ann = plt.barh(yindex, timeline_df["until_ann"], left= timeline_df["until_qua"]+timeline_df["until_first"], color = "lightsalmon") hosp = plt.barh(yindex, timeline_df["hosp"], left= timeline_df["until_ann"]+timeline_df["until_qua"]+timeline_df["until_first"], color = "teal") until_exp = plt.barh(yindex, timeline_df["until_exp"], left= timeline_df["hosp"]+timeline_df["until_ann"]+timeline_df["until_qua"]+timeline_df["until_first"]-0.5, color = "black") until_dsc = plt.barh(yindex, timeline_df["until_dsc"], left= timeline_df["hosp"]+timeline_df["until_exp"]+timeline_df["until_ann"]+timeline_df["until_qua"]+timeline_df["until_first"]-0.5, color = "limegreen") plt.title("Timeline as of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y"))) plt.legend((until_qua[0], until_ann[0],hosp[0],until_dsc[0],until_exp[0]), ("Arrival in country or Contact with suspected carrier", "Under Hospital Quarantine","Under hospitalization","Patient recovered","Patient expired"),loc="lower left", bbox_to_anchor=(0, 0)) timeline = 
str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_timeline.svg" plt.savefig(timeline, bbox_inches = "tight") plt.show() files.download(timeline) ``` ##Time taken for action (Bar Plot) ***Deprecated*** ``` """fig, ax = plt.subplots(figsize=(40,10)) ax.grid(linestyle=':', linewidth='0.5', color='silver') ax.set_axisbelow(True) index = np.arange(len(timeline_df["case_id"])) pre_incub = plt.axhline(y=14, color="teal", linestyle="--") com_qua = plt.axhline(y=21, color="skyblue", linestyle="--") new_incub = plt.axhline(y=28, color="aqua", linestyle="--") p1 = plt.bar(index, timeline_df["until_qua"], color = "tomato") p2 = plt.bar(index, timeline_df["until_ann"], bottom=timeline_df["until_qua"], color = "lightsalmon") plt.ylabel("Days", fontsize=10) plt.title("Days until being announced positive as of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y"))) plt.xticks(index,timeline_df["case_id"], fontsize=10, rotation=90) plt.legend((p1[0], p2[0],pre_incub,com_qua,new_incub), ("Days until hospital quarantine", "Days until announcement", "Incubation period was assumed to be 14 days", "As of 11/4/2020, Community Quarantine is extended to 21 days, continuing with 7 days Home quarantine", "As of 11/4/2020, incubation period is readjusted to be 28 days "),loc="lower left", bbox_to_anchor=(0, -0.5)) days = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_time_for_action.svg" plt.savefig(days, bbox_inches = "tight") plt.show() files.download(days)""" ``` # Exponential Growth ``` sum_df = df[["ann_date","case_id"]].copy() sum_df.columns = ["Date", "id"] sum_df=sum_df.groupby(["Date"]).size().to_frame("Case").reset_index() sum_df["Date"] = pd.to_datetime(sum_df["Date"]) sum_df confirmed_df = pd.DataFrame(columns=["ndays","Date"]) #confirmed_df["ndays"] = np.arange(len(pd.date_range(start=sum_df.Date.min(), end=sum_df.Date.max()))) 
confirmed_df["ndays"] = np.arange(len(pd.date_range(start=sum_df.Date.min(), end=pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon")))) confirmed_df.loc[0,"Date"]=sum_df.Date.min() for i in range(1,len(confirmed_df)): confirmed_df.loc[i,"Date"] = confirmed_df.loc[i-1,"Date"] + pd.Timedelta(days=1) i=i+1 confirmed_df["Date"] = pd.to_datetime(confirmed_df["Date"]) confirmed_df=confirmed_df.merge(sum_df,indicator=False,how='left') confirmed_df["Case"].fillna(0, inplace=True) confirmed_df["Case"]=confirmed_df["Case"].astype(int) confirmed_df["Case"] = confirmed_df["Case"].cumsum() # Natural Log of Real Cases confirmed_df["logCase"] = np.log(confirmed_df.Case).astype(float) confirmed_df ``` Taking the natural log makes the data look more linear, which helps with visualization and long-term comparison. That is why I will be plotting both the real and natural-log line graphs. # Model of choice True exponential growth does not continue indefinitely, but it is assumed to hold until the inflection point arrives, so linear regression is applied to the log of the case counts. ## Log-Linear Regression ### Ordinary Least Squares Regression ``` X = confirmed_df.ndays X = sm.add_constant(X) y = confirmed_df.logCase model = sm.OLS(y, X) result = model.fit() result.summary() ``` Exponential Formula<br> y = ab<sup>x</sup> <br> a = Initial Value<br> b = Rate of Change<br> x = The feature ( Here it is time )<br> b = (1+r) = Growth Rate <- Before Inflection <br> b = (1-r) = Decay Rate <- After Inflection <br> In the summary, "const" stands for the initial value "a".<br> "ndays" is the coefficient of time: the amount log(y) increases as x increases by 1, i.e. from one day to the next. ``` def linear_predictions(t): return np.exp(result.params["const"]) * np.exp(result.params["ndays"]) ** t ``` As we fitted our model with natural-log values, we must exponentiate back to real numbers to predict.
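The fit-and-back-transform step just described can be checked on synthetic data. This sketch uses `numpy.polyfit` instead of `statsmodels` so it stays self-contained; the algebra is the same: fit log(y) = log(a) + x * log(b), then predict with y = a * b**x.

```python
import numpy as np

# Synthetic exponential series: y = a * b**x with a = 2, b = 1.5
x = np.arange(10)
y = 2.0 * 1.5 ** x

# Fit a straight line to the natural log: log(y) = log(a) + x * log(b)
log_b, log_a = np.polyfit(x, np.log(y), 1)
a = np.exp(log_a)  # initial value
b = np.exp(log_b)  # daily growth rate

def predict(t):
    # Back-transform to real case counts, mirroring linear_predictions() above
    return a * b ** t

print(a, b)         # recovers roughly 2.0 and 1.5
print(predict(10))  # extrapolates one day past the data
```

Because the synthetic data is exactly exponential, the fit recovers a and b almost perfectly; on real case counts the residuals in the OLS summary show how far the exponential assumption holds.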
# Next Week Prediction ``` ndays = len(confirmed_df)+7 nextweek_df = pd.DataFrame(columns=["ndays","Date"]) nextweek_df["ndays"] = np.arange(ndays) nextweek_df.loc[0,"Date"]=confirmed_df.loc[0,"Date"] for i in range(1,len(nextweek_df)): nextweek_df.loc[i,"Date"] = nextweek_df.loc[i-1,"Date"] + pd.Timedelta(days=1) i=i+1 nextweek_df["Predictions"] = nextweek_df.ndays.apply(linear_predictions) # Natural Log of Predicted Cases nextweek_df["logPredictions"] = np.log(nextweek_df.Predictions).astype(float) nextweek_df ``` The horizon here is only 7 days. Since our data and history are very short right now, they are not sufficient to predict further ahead without sacrificing accuracy. This is currently a proof of concept; as the data grows, we should pursue further analysis. # Real Number Plot ``` real = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_real.svg" log = str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d-%m-%Y"))+"_log.svg" confirmed_x = pd.date_range(start=confirmed_df["Date"][confirmed_df.index[0]], end=confirmed_df["Date"][confirmed_df.index[-1]]) confirmed_y = confirmed_df["Case"].tolist() confirmed_plot = pd.Series(data=confirmed_y, index=confirmed_x) nextweek_x = pd.date_range(start=nextweek_df["Date"][nextweek_df.index[0]], end=nextweek_df["Date"][nextweek_df.index[-1]]) nextweek_y = nextweek_df["Predictions"].tolist() nextweek_plot = pd.Series(data=nextweek_y, index=nextweek_x) fig, ax = plt.subplots() ax.plot(confirmed_plot, label="Confirmed", color="red") ax.plot(nextweek_plot, label="Predicted", color ="blue") ax.grid(linestyle=':', linewidth='0.5', color='silver') ax.set_axisbelow(True) legend = ax.legend(loc="upper left", fontsize="large") plt.xlabel("Date") plt.ylabel("Infections") plt.suptitle("Predicted number of cases vs confirmed number of cases") plt.title("As of 
"+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y"))) plt.xticks(rotation=90) plt.savefig(real, bbox_inches = "tight") plt.show() files.download(real) ``` # Natural Log Plot ``` confirmed_logy = confirmed_df["logCase"].tolist() confirmed_logplot = pd.Series(data=confirmed_logy, index=confirmed_x) nextweek_logy = nextweek_df["logPredictions"].tolist() nextweek_logplot = pd.Series(data=nextweek_logy, index=nextweek_x) fig, ax = plt.subplots() ax.plot(confirmed_logplot, label="Confirmed", color="red") ax.plot(nextweek_logplot, label="Predicted", color ="blue") ax.grid(linestyle=':', linewidth='0.5', color='silver') ax.set_axisbelow(True) legend = ax.legend(loc="upper left", fontsize="large") plt.xlabel("Date") plt.ylabel("Infections") plt.suptitle("Predicted number of cases vs confirmed number of cases (Natural Log)") plt.title("As of "+str(pd.to_datetime("today").tz_localize("UTC").tz_convert("Asia/Yangon").strftime("%d/%m/%Y"))) plt.xticks(rotation=90) plt.savefig(log, bbox_inches = "tight") plt.show() files.download(log) %reset ```
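As a side note, the row-by-row date loops used above can be replaced by a single vectorized call to `pd.date_range`, which is both shorter and faster. A minimal sketch with a hypothetical start date standing in for `sum_df.Date.min()`:

```python
import pandas as pd

# Hypothetical start date standing in for sum_df.Date.min()
start = pd.Timestamp("2020-03-23")
n = 10

df = pd.DataFrame({"ndays": range(n)})
# One vectorized call instead of assigning each row in a Python loop
df["Date"] = pd.date_range(start=start, periods=n, freq="D")
print(df.tail(1))
```

The same idea applies to building `nextweek_df`: pass `periods=len(confirmed_df) + 7` and the loop disappears.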
### I. Discover API Documentation

First, let's take a look at the [API Docs](https://www.weatherapi.com/docs/) for the free WeatherAPI. When performing discovery on an API, look for answers to the following questions:

1. What endpoints are available and what data sources do they offer?
2. How is authentication performed when making requests?
3. Is a software development kit (SDK) available?
4. How are requests to the API formatted?
5. How are request payloads formatted? What fields are included?

The WeatherAPI offers forecasted and historical weather data as well as location, astronomy, and other miscellaneous endpoints. For this lab, we will be focusing on current and forecasted weather, but feel free to explore the other endpoints available. Data is returned in JSON or XML format.

Authentication is performed via API key. We can get one by signing up for a free account. In general, it is not a best practice to include API keys directly in the code that is making the API request. We will cover storing and retrieving API keys in the next section.

[SDKs](https://github.com/weatherapicom/) are available in several programming languages. SDKs are useful tools as they provide shortcuts and integrations for developing in the programming language of your choice. We will not be using the WeatherAPI SDK for this lab, but much of the code we will write for this lab has already been replicated in some form via the SDK. In general, use SDKs when they are available; they will save you time.

API requests are formatted as follows:

`base url + <endpoint_name>.<file_extension> + auth + <query>`

With:

base url = `'http://api.weatherapi.com/v1/'`

auth = `'key=<api_key>'`

We'll look at some examples in Part III.

Finally, let's look at the docs for the forecast API. It looks like the payload contains three parts:

1. A day element containing the date and daily forecast information
2. An astro element containing sunrise and sunset data
3.
An hour element containing the datetime and hourly forecast information

We'll explore these later on in the lab.

### II. Storing and Retrieving API Keys

[YAML](https://yaml.org/) (YAML Ain't Markup Language) is a human-readable, [data-serialization](https://en.wikipedia.org/wiki/Serialization) language that is accessible to all programming languages. It is commonly used for config files, as it is extremely easy to write due to its lack of punctuation. We can store our API key for the WeatherAPI in a yaml file like so:

```yaml
weather_key: <key>
```

We can then access our API key in Python by using the pyyaml library. We can do this by:

1. Importing pyyaml
2. Reading our config file
3. Converting the config file to a Python dict via pyyaml and accessing the 'weather_key' element

```
import yaml

key_file = open('labClass4.yaml', 'r')
weather_key = yaml.safe_load(key_file)['weather_key']
```

### III. Making API Calls in Python

The Python [Requests](https://docs.python-requests.org/en/latest/) library is a lightweight tool for making HTTP and API requests. It is not part of the Python standard library, but it is the de facto standard for HTTP in Python and is easily installed via pip. Let's examine some of its features by using the Current Weather endpoint from WeatherAPI.

Recall that we need the following components to make a GET request from WeatherAPI:

1. The base url: `'http://api.weatherapi.com/v1/'`
2. The endpoint and return file format: `'current.json'`
3. Our API key: `'key=<weather_key>'`
4. And our query, which will be the location for which we want the current weather

Here is an example from the API Docs:

`http://api.weatherapi.com/v1/current.json?key=<YOUR_API_KEY>&q=London`

Notice how the pieces are combined:

1. There is a question mark ("?") between the endpoint and auth components
2. The query component begins with "q="
3. And there is an ampersand ("&") between the auth and query components.
We can thus parametrize our request as follows:

```
base_url = 'http://api.weatherapi.com/v1/'
current_api = 'current.json?'
auth = f'key={weather_key}'  # Using the weather key variable from the previous section
query = '11101'  # My Zip Code

request_body = base_url + current_api + auth + f'&q={query}'
print(request_body)
```

You might ask why we are storing our auth component as its own variable but use an [f-string](https://www.geeksforgeeks.org/formatted-string-literals-f-strings-python/) for the query component in the request body. We do this because our query component will change depending on the API endpoint we use, but our auth component will always be the same.

Now that we have our request body, let's make our first GET request. We can do so by:

1. Importing the requests library
2. Using the `requests.get()` method and passing our body as a parameter.

```
import requests

requests.get(request_body)
```

You should have received the following output:

`<Response [200]>`

That's great! A 200 code means your request was successful. But now what? The docs for the Requests library recommend storing your request as a variable so that you can easily access its methods and properties.

```
current_request = requests.get(request_body)

print(current_request.status_code)  # Should return the same 200 code
print(current_request.text)  # Returns the body of the payload as a string
```

Awesome! Our request was successful and we can now access the payload. But something is still missing. Can you think of it?

Our payload is currently formatted as a string. It would be more useful if it were converted into an iterable class, like a dict. We can do so by using the built-in JSON decoder, which converts JSON-formatted strings into Python dicts.

```
print(current_request.json())
print()  # Newline
print(current_request.json()['location'])
print()  # Newline
print(current_request.json()['location']['name'])
```

We'll take a deeper look at what we can do with our request payloads in the next section.
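As an aside, manual string concatenation works, but the standard library can assemble the query string for you and handle percent-escaping of special characters (requests can also do this automatically if you pass a `params=` dict to `requests.get()`). A small sketch with a placeholder key:

```python
from urllib.parse import urlencode

base_url = 'http://api.weatherapi.com/v1/'
endpoint = 'current.json'
weather_key = 'YOUR_API_KEY'  # placeholder, not a real key

# urlencode inserts the '&' separators and escapes characters like spaces
params = urlencode({'key': weather_key, 'q': 'New York'})
request_body = f'{base_url}{endpoint}?{params}'
print(request_body)
```

This becomes more valuable once queries contain spaces or other characters that are not URL-safe.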
But first, let's take a 5 minute break.

### Break

### IV. Parsing GET Request Payloads

Let's review. To this point, we have:

1. Reviewed the documentation for the WeatherAPI
2. Stored and retrieved our API key
3. Parametrized our GET request body
4. Made a GET request to the Current Weather endpoint of the WeatherAPI
5. Converted the request payload into a Python iterable.

That's a lot! Now let's see what we can do with our request payloads. We'll be using the Forecast endpoint for this section, as it has a lot more data in it than the Current endpoint.

What changes will we need to make to our request body from Section 3? We'll need to update the endpoint as well as our query. Requests to the Forecast endpoint are formatted like this:

`http://api.weatherapi.com/v1/forecast.json?key=<YOUR_API_KEY>&q=07112&days=7`

So now our query has two parameters: location and number of days.

```
forecast_api = 'forecast.json?'
query = {'zipcode': '11101', 'days': '3'}
```

Our query parameter is now a dict so we can keep all of its components together. Using the above and the request body from Section 3, try writing the body for our GET request.

```
request_body = base_url + forecast_api + auth + f"&q={query['zipcode']}&days={query['days']}"
print(request_body)
```

Great! Now, like before, we can pass our request body into a GET request using the `requests.get()` method and convert it into an iterable using `.json()`.

```
forecast_request = requests.get(request_body).json()
print(forecast_request)
```

Ok! There's a lot more here now. If we remember back to Section 1, the Forecast endpoint payload has three sections:

1. Day
2. Astro
3. Hour

We can more easily read our payload by using a [JSON Formatter](https://jsonformatter.curiousconcept.com/#) and feeding it our payload.
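If you'd rather stay in Python than paste into an online formatter, `json.dumps()` with `indent=2` produces the same readable view. A small sketch with a toy payload standing in for the real `forecast_request`:

```python
import json

# A tiny stand-in payload; the real forecast_request is much larger
payload = {'location': {'name': 'New York'},
           'forecast': {'forecastday': [{'date': '2022-01-16'}]}}

# indent=2 pretty-prints the nested structure, one level per indent
pretty = json.dumps(payload, indent=2)
print(pretty)
```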
Interestingly, it seems calling the Forecast API also makes a GET request to the Location and Current endpoints, as we have three nested objects:

- location
- current
- forecast

Inside the forecast object, we see an array of objects called 'forecastday'. Each forecastday object contains the date as well as Day, Astro, and Hour objects. The Hour object is an array containing forecast data for each hour of the day.

Let's start by isolating just the day and hour objects. For the daily forecast, we can start by creating a blank list. Then we can loop through our payload and append each day object as a new element in our list.

```
forecast_day = []  # create an empty list

for day in forecast_request['forecast']['forecastday']:  # loop through the forecastday array
    forecast_day.append(
        {'date': day['date'], 'forecast': day['day']}  # append an object containing the date and forecast for each day
    )

print(forecast_day[0])
```

So now, we have to loop through each object in the forecastday array, and then loop through each element in the hour array contained in those elements. Specifically, for each of the 3 elements in forecastday, we want to pull out the 24 elements in hour. So we should end up with a Python list containing 72 elements.

```
forecast_hour = []  # create an empty list

for day in forecast_request['forecast']['forecastday']:  # loop through the forecastday array
    for hour in day['hour']:  # loop through each hour array in forecastday
        forecast_hour.append(
            {'date': day['date'], 'hour': hour['time'], 'forecast': hour}  # append an object containing the date, hour, and forecast for each hour
        )

print(len(forecast_hour))  # Check that we have 72 elements
print(forecast_hour[0])  # Let's just look at the first one
```

### V. Writing Request Payloads to .csv Files

The Python [csv](https://docs.python.org/3/library/csv.html) library is a simple module for reading, writing, and interacting with csv files.
It also provides classes for converting csv files to Python dicts and vice versa.

We'll start by writing our daily forecast list to a CSV. The process is as follows:

1. Import the csv library
2. Create a new csv file called 'daily_forecast.csv'
3. Instantiate a new csv.writer() object for our daily forecast, using our 'daily_forecast.csv' file as a target
4. Write a header row containing the names of our forecast fields
5. Loop through each element of our `forecast_day` list from Part IV
6. Write a new row in 'daily_forecast.csv' for each element in `forecast_day`

For this example, we'll use these fields:

1. Date
2. Max temperature (in fahrenheit)
3. Min temperature
4. Chance of rain
5. And weather conditions

The end result should be a csv file with 3 rows.

```
import csv

with open('daily_forecast.csv', 'w', newline='') as file:  # Create a new csv file named 'daily_forecast'
    daily_forecast = csv.writer(file, delimiter=',')  # Instantiate a csv.writer() object, writing to 'daily_forecast.csv'
    daily_forecast.writerow(['date', 'maxtemp_f', 'mintemp_f', 'avgtemp_f', 'daily_chance_of_rain', 'condition_text'])  # Write the header for our csv file
    for day in forecast_day:  # Loop through our forecast_day list and write the following fields in our csv file
        daily_forecast.writerow([
            day['date'],
            day['forecast']['maxtemp_f'],
            day['forecast']['mintemp_f'],
            day['forecast']['avgtemp_f'],
            day['forecast']['daily_chance_of_rain'],
            day['forecast']['condition']['text']
        ])
```

Now let's do the same for hourly forecast data. This is going to be easier with the work we did in Section IV. We would have to write two for loops if we hadn't created our forecast_hour object. Now we can replicate the process for writing the daily forecast csv with the fields from forecast_hour.
Recall one element in forecast_hour looks like this:

```python
{'date': '2022-01-16',
 'hour': '2022-01-16 00:00',
 'forecast': {'time_epoch': 1642309200, 'time': '2022-01-16 00:00', 'temp_c': -8.4, 'temp_f': 16.9, 'is_day': 0, 'condition': {'text': 'Clear', 'icon': '//cdn.weatherapi.com/weather/64x64/night/113.png', 'code': 1000}, 'wind_mph': 8.5, 'wind_kph': 13.7, 'wind_degree': 356, 'wind_dir': 'N', 'pressure_mb': 1029.0, 'pressure_in': 30.4, 'precip_mm': 0.0, 'precip_in': 0.0, 'humidity': 28, 'cloud': 0, 'feelslike_c': -14.4, 'feelslike_f': 6.1, 'windchill_c': -14.4, 'windchill_f': 6.1, 'heatindex_c': -8.4, 'heatindex_f': 16.9, 'dewpoint_c': -23.7, 'dewpoint_f': -10.7, 'will_it_rain': 0, 'chance_of_rain': 0, 'will_it_snow': 0, 'chance_of_snow': 0, 'vis_km': 10.0, 'vis_miles': 6.0, 'gust_mph': 11.0, 'gust_kph': 17.6, 'uv': 1.0}}
```

Using the daily_forecast code block as an example, write a csv for the hourly forecast data with the following fields:

1. Date
2. Time
3. Temperature in fahrenheit
4. Humidity
5. Chance of rain
6. Chance of snow
7. Visibility in kilometers
8. Wind speed in kph
9. Wind direction
10. Weather condition

Remember, there should be 72 rows.
```
with open('hourly_forecast.csv', 'w', newline='') as file:  # Create a new csv file named 'hourly_forecast'
    hourly_forecast = csv.writer(file, delimiter=',')  # Instantiate a csv.writer() object, writing to 'hourly_forecast.csv'
    hourly_forecast.writerow(['date', 'hour', 'temp_f', 'humidity', 'chance_of_rain', 'chance_of_snow', 'vis_km', 'wind_kph', 'wind_dir', 'condition_text'])  # Write the header for our csv file
    for hour in forecast_hour:  # Loop through our forecast_hour list and write the following fields in our csv file
        hourly_forecast.writerow([
            hour['date'],
            hour['hour'],
            hour['forecast']['temp_f'],
            hour['forecast']['humidity'],
            hour['forecast']['chance_of_rain'],
            hour['forecast']['chance_of_snow'],
            hour['forecast']['vis_km'],
            hour['forecast']['wind_kph'],
            hour['forecast']['wind_dir'],
            hour['forecast']['condition']['text']
        ])
```
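As a closing aside, the csv library can also read these files back as dicts via `csv.DictReader`, one of the classes mentioned at the start of this section. A minimal sketch using an in-memory stand-in for `hourly_forecast.csv` (note that all values come back as strings):

```python
import csv
import io

# A tiny in-memory csv standing in for daily_forecast.csv / hourly_forecast.csv
text = 'date,maxtemp_f,mintemp_f\n2022-01-16,16.9,-10.7\n'

# DictReader maps each row to a dict keyed by the header fields
rows = list(csv.DictReader(io.StringIO(text)))
print(rows[0]['maxtemp_f'])
```

To read the real file, replace `io.StringIO(text)` with `open('hourly_forecast.csv', newline='')`.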
# Using LightGBM as designed (not through sklearn API)

## Automatically Encode Categorical Columns

I've been encoding the geo_level columns as numeric this whole time. Can it perform better by using categorical columns?

LGBM can handle categorical features directly. No need to OHE them. But they must be ints.

1. Load in X
2. Label encode all the categorical features
    - All `object` dtypes are categorical and need to be label encoded

```
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
import pickle
import lightgbm as lgb

from pathlib import Path

### USE FOR LOCAL JUPYTER NOTEBOOKS ###
DOWNLOAD_DIR = Path('../download')
DATA_DIR = Path('../data')
SUBMISSIONS_DIR = Path('../submissions')
MODEL_DIR = Path('../models')
#######################################

X = pd.read_csv(DOWNLOAD_DIR / 'train_values.csv', index_col='building_id')
categorical_columns = X.select_dtypes(include='object').columns
bool_columns = [col for col in X.columns if col.startswith('has')]
X_test = pd.read_csv(DOWNLOAD_DIR / 'test_values.csv', index_col='building_id')
y = pd.read_csv(DOWNLOAD_DIR / 'train_labels.csv', index_col='building_id')

sns.set()

from sklearn.preprocessing import OrdinalEncoder, LabelEncoder
from sklearn.compose import ColumnTransformer

label_enc = LabelEncoder()

t = [('ord_encoder', OrdinalEncoder(dtype=int), categorical_columns)]
ct = ColumnTransformer(transformers=t, remainder='passthrough')

X_all_ints = ct.fit_transform(X)
y = label_enc.fit_transform(np.ravel(y))  # np.ravel flattens the (n, 1) labels DataFrame to a 1D array

# Note that append for pandas objects works differently to append with
# python objects e.g. python append modifies the list in-place,
# pandas append returns a new object, leaving the original unmodified
not_categorical_columns = X.select_dtypes(exclude='object').columns
cols_ordered_after_ordinal_encoding = categorical_columns.append(not_categorical_columns)
cols_ordered_after_ordinal_encoding

geo_cols = pd.Index(['geo_level_1_id', 'geo_level_2_id', 'geo_level_3_id'])
cat_cols_plus_geo = categorical_columns.append(geo_cols)
list(cat_cols_plus_geo)

train_data = lgb.Dataset(X_all_ints, label=y,
                         feature_name=list(cols_ordered_after_ordinal_encoding),
                         categorical_feature=list(cat_cols_plus_geo))
# train_data = lgb.Dataset(X_all_ints, label=y)
validation_data = lgb.Dataset('validation.svm', reference=train_data)
```

After reading through the docs for [lgb.train](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.train.html) and [lgb.cv](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.cv.html), I had to make a separate function `get_ith_pred` and then call that repeatedly within `lgb_f1_micro`. The function's docstring explains how it works. I have used the same argument names as in the LightGBM docs.

This can work for any number of classes but does not work for binary classification. In the binary case, `preds` is a 1D array containing the probability of the positive class (it does not contain groups).

```
# Taken from the docs for lgb.train and lgb.cv
# Helpful Stackoverflow answer:
# https://stackoverflow.com/questions/50931168/f1-score-metric-in-lightgbm
from sklearn.metrics import f1_score

def get_ith_pred(preds, i, num_data, num_class):
    """
    preds: 1D NumPy array
        A 1D numpy array containing predicted probabilities. Has shape
        (num_data * num_class,). So, for binary classification with 100
        rows of data in your training set, preds is shape (200,),
        i.e. (100 * 2,).
    i: int
        The row/sample in your training data you wish to calculate
        the prediction for.
    num_data: int
        The number of rows/samples in your training data
    num_class: int
        The number of classes in your classification task.
        Must be greater than 2.

    LightGBM docs tell us that to get the probability of class 0 for
    the 5th row of the dataset we do preds[0 * num_data + 5].
    For class 1 prediction of 7th row, do preds[1 * num_data + 7].

    sklearn's f1_score(y_true, y_pred) expects y_pred to be of the form
    [0, 1, 1, 1, 1, 0...] and not probabilities. This function translates
    preds into the form sklearn's f1_score understands.
    """
    # Does not work for binary classification; preds has a different form
    # in that case
    assert num_class > 2

    preds_for_ith_row = [preds[class_label * num_data + i]
                         for class_label in range(num_class)]
    # The element with the highest probability is predicted
    return np.argmax(preds_for_ith_row)

def lgb_f1_micro(preds, train_data):
    y_true = train_data.get_label()

    num_data = len(y_true)
    num_class = 3

    y_pred = []
    for i in range(num_data):
        ith_pred = get_ith_pred(preds, i, num_data, num_class)
        y_pred.append(ith_pred)

    return 'f1', f1_score(y_true, y_pred, average='micro'), True

probs = [[.12, 0.18, 0.7],
         [0.2, 0.5, 0.3]]
[np.argmax(p) for p in probs]

param = {'num_leaves': 120,
         # 'num_iterations': 240,
         'min_child_samples': 40,
         'learning_rate': 0.2,
         'boosting_type': 'goss',
         'objective': 'multiclass',
         'num_class': 3}

# LGBM seem to hate using plurals. Why???
num_round = 10

evals_result = {}
booster = lgb.train(param, train_data, num_round,
                    categorical_feature=list(cat_cols_plus_geo),
                    feval=lgb_f1_micro,
                    evals_result=evals_result)

evals_result

lgb.plot_metric(evals_result)

booster.feature_importance()

# LGBM seem to hate using plurals. Why???
num_boost_round = 100
cv_results = lgb.cv(param, train_data, num_boost_round, nfold=5,
                    categorical_feature=list(cat_cols_plus_geo),
                    feval=lgb_f1_micro)

cv_results.keys()

plt.plot(cv_results['f1-mean'])

f1_mean = cv_results['f1-mean']
max(f1_mean)
np.argmax(f1_mean)

len(f1_mean[20:35])

plt.plot(range(20, 35), f1_mean[20:35])
plt.xticks()
plt.show()

evals_result = {}
booster = lgb.train(param, train_data, 28,
                    # valid_sets=[validation_data],
                    categorical_feature=list(cat_cols_plus_geo),
                    feval=lgb_f1_micro,
                    evals_result=evals_result)
                    # early_stopping_rounds=5)

booster.feature_importance()

data = {'name': booster.feature_name(),
        'importance': booster.feature_importance()}
df_booster = pd.DataFrame(data)
df_booster
```

## Submit this new model

This model with the native LightGBM API looks like an improvement over the sklearn implementation. Let's give it a whirl!

```
def make_submission_lgbm_api(booster, ct, title):
    """
    ct: ColumnTransformer
        The ColumnTransformer already fit to the training data to
        ordinal encode the features
    """
    X_test = pd.read_csv(DOWNLOAD_DIR / 'test_values.csv', index_col='building_id')
    X_test_ints = ct.transform(X_test)

    prediction_probabilities = booster.predict(X_test_ints)
    # Shift by 1 as submission is in format [1, 2, 3]
    predictions = [np.argmax(p) + 1 for p in prediction_probabilities]

    sub_format = pd.read_csv(DOWNLOAD_DIR / 'submission_format.csv', index_col='building_id')
    my_sub = pd.DataFrame(data=predictions,
                          columns=sub_format.columns,
                          index=sub_format.index)
    my_sub.to_csv(SUBMISSIONS_DIR / f'{title}.csv')

title = '02-26 LightGBM API - All features - 28 rounds - cat+geo are cat features'
make_submission_lgbm_api(booster, ct, title)
```

# Woop

That scored 0.7446 (with cv score of 0.7446) and pushed me up 100 places to 227.

Let's remove some unimportant features.

```
df_booster

with open(DATA_DIR / 'df_feature_importance_lgbm_api.pkl', 'wb') as f:
    pickle.dump(df_booster, f)
with open(DATA_DIR / 'df_feature_importance_lgbm_api.pkl', 'rb') as f:
    a = pickle.load(f)

with open(DATA_DIR / 'cat_cols_plus_geo.pkl', 'wb') as f:
    pickle.dump(cat_cols_plus_geo, f)
```

Want to test now which features give the best performance and then submit the best one(s):

1. fi > 0
2. fi > 10
3. fi > 20
4. fi > 50
5. fi > 70
6. fi > 100
7. fi > 200
8. fi > 500
9. fi > 800
10. fi > 1000

```
# These are the features we want to include
keep = df_booster[df_booster.importance > 500].name.values
keep

test = list(cols_ordered_after_ordinal_encoding)
test

def cv_with_fi_range(keep_features=-1, num_boost_round=50):
    """
    Perform cv with native LightGBM API

    keep_features: int
        Keep features with feature importance greater than this value.
        Default: -1 to keep all features
    """
    X_train = pd.read_csv(DOWNLOAD_DIR / 'train_values.csv', index_col='building_id')
    y_train = pd.read_csv(DOWNLOAD_DIR / 'train_labels.csv', index_col='building_id')

    # Must ordinal encode the categorical columns for LGBM to work
    categorical_columns = X_train.select_dtypes(include='object').columns
    t = [('ord_encoder', OrdinalEncoder(dtype=int), categorical_columns)]
    ct = ColumnTransformer(transformers=t, remainder='passthrough')
    X_train_ints = ct.fit_transform(X_train)

    # Must label encode y (LGBM expects [0, 1, 2] as classes)
    label_enc = LabelEncoder()
    y_train = label_enc.fit_transform(np.ravel(y_train))

    non_cat_cols = X_train.select_dtypes(exclude='object').columns
    cols_after_ord_encoding = categorical_columns.append(non_cat_cols)

    # Turn into DF
    df_ints = pd.DataFrame(data=X_train_ints, columns=cols_after_ord_encoding)

    with open(DATA_DIR / 'df_feature_importance_lgbm_api.pkl', 'rb') as f:
        df_feature_importance = pickle.load(f)

    # Only keep features with FI > keep_features
    mask = df_feature_importance.importance > keep_features
    features_to_use = df_feature_importance[mask].name.values
    X_train_kept_features = df_ints[features_to_use]
    feature_names = list(df_ints[features_to_use].columns)

    # Need this to check which of the remaining columns are categorical
    with open(DATA_DIR / 'cat_cols_plus_geo.pkl', 'rb') as f:
        cat_cols_plus_geo = pickle.load(f)

    # Create list of categorical features
    categorical_features = []
    for feature in feature_names:
        if feature in cat_cols_plus_geo:
            categorical_features.append(feature)

    train_dataset = lgb.Dataset(X_train_kept_features, label=y_train,
                                feature_name=feature_names,
                                categorical_feature=categorical_features)

    param = {'num_leaves': 120,
             # 'num_iterations': 240,
             'min_child_samples': 40,
             'learning_rate': 0.2,
             'boosting_type': 'goss',
             'objective': 'multiclass',
             'num_class': 3}

    cv_results = lgb.cv(param, train_dataset, num_boost_round, nfold=5,
                        categorical_feature=categorical_features,
                        feval=lgb_f1_micro)

    f1_mean = cv_results['f1-mean']

    fig, ax = plt.subplots()
    ax.plot(f1_mean)
    title = f'F1-score kept features above {keep_features} feature importance'
    ax.set(title=title, xlabel='Boosting round', ylabel='f1-score (micro)')
    plt.show()

    print(f'RESULTS KEEPING FEATURES > {keep_features} FEATURE IMPORTANCE')
    print('Best f1 score:          ', max(f1_mean))
    print('Best f1 score iteration:', np.argmax(f1_mean))
```

# TL;DR

Keeping features above feature importance 20 gives the best results: 0.7448 (in comparison to 0.7446 with all features). Do 26 iterations.

```
cv_with_fi_range(-1)
cv_with_fi_range(0)
cv_with_fi_range(10)
cv_with_fi_range(20)
cv_with_fi_range(50)
cv_with_fi_range(70)
cv_with_fi_range(100)
cv_with_fi_range(200)
cv_with_fi_range(500)
cv_with_fi_range(800)
cv_with_fi_range(1000)
```

## Results

Keeping features with importance > 20 gives the best results.
# Monte Carlo simulations of the 2D Ising model

*Authors: Enze Chen (University of California, Berkeley)*

![Ising model](https://raw.githubusercontent.com/enze-chen/learning_modules/master/fig/Ising_schematic.png)

This notebook guides you through the steps of setting up a Monte Carlo (MC) simulation of the 2D ferromagnetic [Ising model](https://en.wikipedia.org/wiki/Ising_model) and observing the characteristic phase transition at the critical temperature $T_c$. We will use the [Metropolis-Hastings algorithm](https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm). This notebook is far from being original, as several similar notebooks exist online, but I have tried to keep the code simple and explanations plentiful so that *beginners* to Python and computational MSE can successfully complete everything.

## How to run this notebook

If you are viewing this notebook on Google Colaboratory, then all the software is already set up for you (hooray). If you want to run the notebook locally, make sure all the Python libraries in the [`requirements.txt`](https://github.com/enze-chen/learning_modules/blob/master/requirements.txt) file are installed.

For pedagogical reasons, there are a few sections for you to complete the code in order to run the simulation. These are delineated with the dashed lines as follows, and you should **only change what's inside**. You don't have to edit the text or code anywhere else. I've also included "**TODO**" to separate the background context from the actual instructions.

```python
# ---------------------- #
# YOUR CODE HERE

# ---------------------- #
```

If you edit the code in a cell, just press `Shift+Enter` to run it again. You have to execute **all** the code cells in this notebook from top to bottom (so don't skip around). A number `[#]` will appear to the left of the code cell once it's done executing.
When all done successfully, you should be able to see a few images of your system and plots of the system properties as a function of $T$.

## Acknowledgements

I thank [Dr. Matthew Sherburne](https://mse.berkeley.edu/people_new/sherburne/) for teaching MATSCI 215: Computational Materials Science and my advisor [Prof. Mark Asta](https://mse.berkeley.edu/people_new/asta/) for encouraging me in my education-related pursuits. An interactive version of this notebook can be found online at [Google Colaboratory](https://colab.research.google.com/github/enze-chen/learning_modules/blob/master/mse/Monte_Carlo_Ising_model.ipynb). For more details, see the classical work by [Onsager, L. *Physical Review*, **65**, 1944](https://journals.aps.org/pr/abstract/10.1103/PhysRev.65.117) or the textbooks by [Newman and Barkema](https://global.oup.com/academic/product/monte-carlo-methods-in-statistical-physics-9780198517979?cc=us&lang=en&) or [Landau and Binder](https://www.cambridge.org/core/books/guide-to-monte-carlo-simulations-in-statistical-physics/2522172663AF92943C625056C14F6055).

## Important equations

The [Ising model](https://en.wikipedia.org/wiki/Ising_model) is one of the simplest models for magnetism, where the magnetic dipole moments ("spins") are discrete values from the set $\{+1, -1\}$ corresponding to `up` and `down` spins, respectively. We will work with a 2D square lattice, so the spin at location $(i,j)$ will be denoted by $\sigma_{ij}$. The 2D square lattice Ising model is also one of the simplest systems to display a *phase transition*.

The first equation we must write down is the **Hamiltonian**, which in our case will only feature an **exchange interaction** parameter $J$ that is positive for ferromagnetism (we ignore the magnetic field interaction parameter $h$). Furthermore, this exchange interaction only applies to nearest-neighbors, where we will also assume **periodic boundary conditions**.
Therefore,

$$ H(\sigma) = -J \sum_{\langle p,q \rangle} \sigma_{p_i,p_j} \sigma_{q_i,q_j}$$

where $\langle p,q \rangle$ represents a summation over adjacent lattice sites only.

We'll be using the [Metropolis-Hastings algorithm](https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm) for the MC simulation, which has the following steps for each iteration through the cycle. As we begin coding, we'll find ways of optimizing the calculations.

1. Choose a site $(i,j)$ at random.
1. Calculate the change in energy, $\Delta E$, if the sign at that site was flipped.
1. If $\Delta E \le 0$, accept the change by keeping the new spin because we've lowered the total energy of our system.
1. If $\Delta E > 0$, generate a random number $r \in [0, 1]$.
1. Accept the change only if $r < \exp (-\beta \Delta E)$ where $\beta = \frac{1}{k_BT}$ and $k_B$ is the Boltzmann constant.
1. Otherwise, return the spin to the original state.

Since our goal is to simulate a phase transition, there must be properties of the system that we can track. The properties that are relevant to us are the expected values of the system's **energy** ($E$) and **magnetization** ($M$), where the expected value of a quantity $Q$ over **both space and time** is given by

$$ \langle Q \rangle_{N,T} = \frac{1}{T} \sum_{t=1}^{T} \left[ \frac{1}{N} \sum_{s=1}^{N} Q_{s,t} \right] $$

Knowing the expected values will allow us to calculate the **heat capacity** per site and **magnetic susceptibility** per site, which are given respectively by

$$ C = \frac{\langle E^2 \rangle - \langle E \rangle^2}{k_B T^2 L^2} $$

$$ \chi = \frac{\langle M^2 \rangle - \langle M \rangle^2}{k_B T L^2} $$

where $L$ is the size of one dimension. At the phase transition, we should see a divergence in these two quantities (which might not be precise depending on the discretization, among other factors).
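Once you have per-sweep samples of $E$ and $M$, the two fluctuation formulas above translate directly into NumPy. A minimal sketch, using synthetic random samples as stand-ins for real simulation output:

```python
import numpy as np

kB = 1.0      # dimensionless units, as in this notebook
L, T = 16, 2.0

# Hypothetical per-sweep samples of total energy and magnetization
rng = np.random.default_rng(0)
E_samples = rng.normal(-400.0, 5.0, size=1000)
M_samples = rng.normal(200.0, 8.0, size=1000)

# Fluctuation formulas: <Q^2> - <Q>^2 is the variance of Q over the sweeps
C = (np.mean(E_samples**2) - np.mean(E_samples)**2) / (kB * T**2 * L**2)
chi = (np.mean(M_samples**2) - np.mean(M_samples)**2) / (kB * T * L**2)
print(C, chi)
```

In the actual simulation, you would collect `E_samples` and `M_samples` during the MC sweeps at each temperature and evaluate $C$ and $\chi$ the same way.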
## Known issues

* As the code isn't heavily optimized, it will be slow if you run it for too many iterations or for too large of a system. Please be gentle. ❤

## Import Python libraries

These are all the required Python libraries. As you can see, not that many!

```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```

## Normalized energies

To make our lives even simpler, we will work with dimensionless units, which can be achieved by setting $J = 1$ and $k_B = 1$. In the subsequent code, we'll still include $J$ and $k_B$ in the appropriate places to be as general as possible.

**TODO**: Set the correct constants below.

```
# ---------------------- #
# YOUR CODE HERE

# ---------------------- #
```

## Helper functions

We'll write a few helper functions that we'll call in the main body. For this first one, `create_spins()` will initialize the $L \times L$ grid of spins randomly.

**TODO**: Complete the function below, which will create an $L \times L$ array of spins randomly selected from the set $\{+1, -1\}$. You might find the [`np.random.choice()`](https://numpy.org/devdocs/reference/random/generated/numpy.random.choice.html) function helpful.

```
# ---------------------- #
# YOUR CODE HERE
def create_spins(L):
    '''Create an L x L array of spins randomly chosen from {+1, -1}.

    Args:
        L (int): The number of lattice sites along each dimension.

    Returns:
        A NumPy array of spins of dimension L x L.
    '''
    pass   # delete this and write your own code
# ---------------------- #
```

No code is complete without tests, so we'll test our function below with an easy case. I suggest running the following cell at least twice to make sure your spins are randomly being assigned.

```
# Test for the create_spins() function
create_spins(3)
```

### Plotting utility

Next we'll write a helper function that assists with plotting the spins. This part can be safely skipped, but completing it can help you visualize what's going on in your system.
Some hints: * We can use the [`ax.imshow()`](https://matplotlib.org/3.2.1/api/_as_gen/matplotlib.pyplot.imshow.html) function to display the spins as a heatmap. ``` # ---------------------- # # YOUR CODE HERE plt.rcParams.update({'figure.figsize':(4,4), 'font.size':14, 'image.cmap':'coolwarm'}) def plot_spins(spins, T): """Display the spin configurations in your system. Args: spins (numpy.ndarray): An array of spin values. Returns: None, but a pyplot is displayed. """ pass # delete this and write your own code # ---------------------- # # We'll test it here plot_spins(create_spins(16), T=1) ``` ### Total energy and magnetization Since we care about the total energy and total magnetization in our system, let's write helper functions to help us calculate those quantities. **TODO**: Finish the `compute_energy()` and `compute_mag()` functions to calculate the total energy and total magnetization, respectively, which are given by: $$ E = -J \sum_{\langle p,q \rangle} \sigma_{p_i,p_j} \sigma_{q_i,q_j} \qquad M = \sum_{i,j} \sigma_{ij}$$ Some hints: * Each lattice site $(i,j)$ has four nearest neighbors given by the following figure. However you do the sum, make sure to account for overcounting in your final answer for the energy. ![Ising neighbors](https://raw.githubusercontent.com/enze-chen/learning_modules/master/fig/Ising_neighbors.png) * You can index into a 2D NumPy array at site $(i,j)$ with the syntax `spins[i, j]`. * Since we are using periodic boundary conditions, you might find the [modulo operator](https://www.tutorialspoint.com/What-is-modulo-operator-in-Python) `%` handy. * Calculating the total magnetization should be very simple: you just have to add up all the spins in your array. Check out [`np.sum()`](https://numpy.org/doc/1.18/reference/generated/numpy.sum.html). ``` # ---------------------- # # YOUR CODE HERE def compute_energy(spins): '''Compute the total energy in the Ising model. Args: spins (numpy.ndarray): An array of spin values. 
    Returns:
        A float for the total energy.
    '''
    pass    # delete this and write your own code

def compute_mag(spins):
    '''Compute the total magnetization in the Ising model.

    Args:
        spins (numpy.ndarray): An array of spin values.

    Returns:
        A float for the total magnetization.
    '''
    pass    # delete this and write your own code
# ---------------------- #
```

Once again, we'll write some tests for known cases. I encourage you to write some more! My `print()` statements are using [f-strings](https://www.geeksforgeeks.org/formatted-string-literals-f-strings-python/), which are new as of Python 3.6 and wicked cool.

```
# Tests for compute_energy() and compute_mag()
spins_all_pos = np.array([[1, 1], [1, 1]])   # should return E = -4 and M = 4
print(f'The all positive case has a total energy of {compute_energy(spins_all_pos)} ' +
      f'and a total magnetization of {compute_mag(spins_all_pos)}')

spins_alt = np.array([[1, -1], [-1, 1]])   # should return E = 4 and M = 0
print(f'The alternating spins case has a total energy of {compute_energy(spins_alt)} ' +
      f'and a total magnetization of {compute_mag(spins_alt)}')
```

### Monte Carlo sweeps

The next helper function will perform one MC sweep, which means we'll choose a random site within our lattice and decide if we should flip the spin or not. We must do this $L^2$ times.

**TODO**: Finish the `mc_sweep()` function below. Some hints are:
* You might find the [`np.random.randint()`](https://numpy.org/devdocs/reference/random/generated/numpy.random.randint.html) method helpful.
* Note that while we could compute the total energy twice, we only really need the energy *difference* $\Delta E$. Furthermore, our system is discretized nicely such that the spins only take on $+1$ and $-1$. Can we leverage this somehow?
* Don't forget we are using the Metropolis-Hastings algorithm, so you might need functions like `np.random.random()` and `np.exp()`.
* Let's be cognizant of memory usage and make the changes to `spins` in-place (i.e., don't make excessive copies of the `spins` array).

```
# ---------------------- #
# YOUR CODE HERE
def mc_sweep(spins, beta):
    '''Perform one MC sweep through all the sites.

    Args:
        spins (numpy.ndarray): An array of spin values.
        beta (float): Inverse temperature.

    Returns:
        A NumPy array of spins.
    '''
    pass    # delete this and write your own code
# ---------------------- #
```

While we can't necessarily control which sites the MC algorithm will select, we can at least see if our code above is flipping signs at all using the following test code.

```
orig = create_spins(5)
dup = np.copy(orig)   # we have to make a copy because our changes are in-place
mc_sweep(spins=dup, beta=1)
print(f'The spins changed at {np.count_nonzero(dup - orig)} sites!')
```

## Main method

Now that we have created all the pieces, it's time to assemble them.

**TODO**: We need to write the main method, which we will call `mc_ising_model()`. Read the docstring for what the input arguments correspond to. Some hints:
* Don't forget to include an equilibration period!
* We should also average the properties over the number of `mcsteps` and normalize by the system size to get an intensive quantity.

```
# ---------------------- #
# YOUR CODE HERE
def mc_ising_model(L, Ts, eqsteps, mcsteps):
    '''Perform an MC simulation for the 2D ferromagnetic Ising model
       and calculate relevant physical properties.

    Args:
        L (int): Number of lattice sites along one side.
        Ts (numpy.ndarray): A list of temperatures to simulate.
        eqsteps (int): Number of equilibration MC steps.
        mcsteps (int): Number of additional MC steps.

    Returns:
        E_T (list): The average energy at each temperature.
        M_T (list): The average magnetization at each temperature.
        C_T (list): The average heat capacity at each temperature.
        X_T (list): The average susceptibility at each temperature.
    '''
    # Store the final values as a function of temperature
    E_T = []
    M_T = []
    C_T = []
    X_T = []

    # Return the four lists as a tuple
    return (E_T, M_T, C_T, X_T)
# ---------------------- #
```

Next we need to specify the parameters of our MC simulation, as described in the docstring above. Note that to start off, you might want to make all these values small to make sure your code runs correctly before using larger values for more accurate statistics.

**TODO**: Set the correct experimental parameters below. Again, I wouldn't recommend anything too crazy.

```
# ---------------------- #
# YOUR CODE HERE

# ---------------------- #

E_T, M_T, C_T, X_T = mc_ising_model(L=L, Ts=Ts, eqsteps=eqsteps, mcsteps=mcsteps)
```

**TODO**: Now we will plot the properties we obtained above as a function of $T$ and estimate $T_c$ based on our simulations. Hopefully your estimated $T_c$ and transitions are near the theoretical value of:

$$ T_c = \frac{2J}{k_B \ln \left( 1 + \sqrt{2} \right)} \approx 2.269 $$

Some hints:
* You'll want to reference the variables you used previously.
* You can plot on the different axes of the subplots using `ax[i].plot()`, where `i` is the index.
* For the magnetization, it might be easier for you to plot the absolute value, which you can obtain with [`np.absolute()`](https://numpy.org/doc/1.18/reference/generated/numpy.absolute.html).
* You can set the axis title with the function `ax[i].set_title('Title')` and the $x$-axis label with `ax[i].set_xlabel('Label')`.
* There are many ways of estimating the transition temperature. One way is to average the $T$s at which $C$ and $\chi$ are maximized. You might find [`np.argmax()`](https://numpy.org/doc/1.18/reference/generated/numpy.argmax.html) handy.

```
plt.rcParams.update({'figure.figsize':(16,3), 'lines.markersize':8})
# ---------------------- #
# YOUR CODE HERE

# ---------------------- #
```

How do the results change as you change $L$ and `eqsteps`?
---------------- ## Conclusion I hope you learned how we can use Monte Carlo and the Metropolis-Hastings algorithm to simulate the 2D Ising model and observe the characteristic phase transition. If you have any remaining questions or ideas for this and other modules, please don't hesitate to reach out. ## Extensions * One of the benefits of MC sampling is the ability to incorporate uncertainties into the calculations. Can you add this into the `mc_ising_model()` function above? You can then display the uncertainties with the [`plt.errorbar()`](https://matplotlib.org/3.2.1/api/_as_gen/matplotlib.pyplot.errorbar.html) function. * Can you modify the code to calculate other lattice geometries (e.g. triangular)? * Can you compute a radial distribution function? ## Answers If you found yourself stuck at certain points, I provide some sample answers [here](https://github.com/enze-chen/learning_modules/blob/master/data/answers.md#Monte_Carlo_Ising_model).
# Deep Learning & Art: Neural Style Transfer Welcome to the second assignment of this week. In this assignment, you will learn about Neural Style Transfer. This algorithm was created by Gatys et al. (2015) (https://arxiv.org/abs/1508.06576). **In this assignment, you will:** - Implement the neural style transfer algorithm - Generate novel artistic images using your algorithm Most of the algorithms you've studied optimize a cost function to get a set of parameter values. In Neural Style Transfer, you'll optimize a cost function to get pixel values! ``` import os import sys import scipy.io import scipy.misc import matplotlib.pyplot as plt from matplotlib.pyplot import imshow from PIL import Image from nst_utils import * import numpy as np import tensorflow as tf %matplotlib inline ``` ## 1 - Problem Statement Neural Style Transfer (NST) is one of the most fun techniques in deep learning. As seen below, it merges two images, namely, a "content" image (C) and a "style" image (S), to create a "generated" image (G). The generated image G combines the "content" of the image C with the "style" of image S. In this example, you are going to generate an image of the Louvre museum in Paris (content image C), mixed with a painting by Claude Monet, a leader of the impressionist movement (style image S). <img src="images/louvre_generated.png" style="width:750px;height:200px;"> Let's see how you can do this. ## 2 - Transfer Learning Neural Style Transfer (NST) uses a previously trained convolutional network, and builds on top of that. The idea of using a network trained on a different task and applying it to a new task is called transfer learning. Following the original NST paper (https://arxiv.org/abs/1508.06576), we will use the VGG network. Specifically, we'll use VGG-19, a 19-layer version of the VGG network. 
This model has already been trained on the very large ImageNet database, and thus has learned to recognize a variety of low level features (at the earlier layers) and high level features (at the deeper layers). Run the following code to load parameters from the VGG model. This may take a few seconds. ``` model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat") print(model) ``` The model is stored in a python dictionary where each variable name is the key and the corresponding value is a tensor containing that variable's value. To run an image through this network, you just have to feed the image to the model. In TensorFlow, you can do so using the [tf.assign](https://www.tensorflow.org/api_docs/python/tf/assign) function. In particular, you will use the assign function like this: ```python model["input"].assign(image) ``` This assigns the image as an input to the model. After this, if you want to access the activations of a particular layer, say layer `4_2` when the network is run on this image, you would run a TensorFlow session on the correct tensor `conv4_2`, as follows: ```python sess.run(model["conv4_2"]) ``` ## 3 - Neural Style Transfer We will build the NST algorithm in three steps: - Build the content cost function $J_{content}(C,G)$ - Build the style cost function $J_{style}(S,G)$ - Put it together to get $J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$. ### 3.1 - Computing the content cost In our running example, the content image C will be the picture of the Louvre Museum in Paris. Run the code below to see a picture of the Louvre. ``` content_image = scipy.misc.imread("images/louvre.jpg") imshow(content_image) ``` The content image (C) shows the Louvre museum's pyramid surrounded by old Paris buildings, against a sunny sky with a few clouds. 
**3.1.1 - How do you ensure the generated image G matches the content of the image C?**

As we saw in lecture, the earlier (shallower) layers of a ConvNet tend to detect lower-level features such as edges and simple textures, and the later (deeper) layers tend to detect higher-level features such as more complex textures as well as object classes.

We would like the "generated" image G to have similar content as the input image C. Suppose you have chosen some layer's activations to represent the content of an image. In practice, you'll get the most visually pleasing results if you choose a layer in the middle of the network--neither too shallow nor too deep. (After you have finished this exercise, feel free to come back and experiment with using different layers, to see how the results vary.)

So, suppose you have picked one particular hidden layer to use. Now, set the image C as the input to the pretrained VGG network, and run forward propagation. Let $a^{(C)}$ be the hidden layer activations in the layer you had chosen. (In lecture, we had written this as $a^{[l](C)}$, but here we'll drop the superscript $[l]$ to simplify the notation.) This will be an $n_H \times n_W \times n_C$ tensor.

Repeat this process with the image G: set G as the input, and run forward propagation. Let $a^{(G)}$ be the corresponding hidden layer activation. We will define the content cost function as:

$$J_{content}(C,G) = \frac{1}{4 \times n_H \times n_W \times n_C}\sum _{ \text{all entries}} (a^{(C)} - a^{(G)})^2\tag{1} $$

Here, $n_H, n_W$ and $n_C$ are the height, width, and number of channels of the hidden layer you have chosen, and appear in a normalization term in the cost. For clarity, note that $a^{(C)}$ and $a^{(G)}$ are the volumes corresponding to a hidden layer's activations. In order to compute the cost $J_{content}(C,G)$, it might also be convenient to unroll these 3D volumes into a 2D matrix, as shown below.
(Technically this unrolling step isn't needed to compute $J_{content}$, but it will be good practice for when you do need to carry out a similar operation later for computing the style cost $J_{style}$.)

<img src="images/NST_LOSS.png" style="width:800px;height:400px;">

**Exercise:** Compute the "content cost" using TensorFlow.

**Instructions**: The 3 steps to implement this function are:
1. Retrieve dimensions from a_G:
    - To retrieve dimensions from a tensor X, use: `X.get_shape().as_list()`
2. Unroll a_C and a_G as explained in the picture above
    - If you are stuck, take a look at [Hint1](https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/transpose) and [Hint2](https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/reshape).
3. Compute the content cost:
    - If you are stuck, take a look at [Hint3](https://www.tensorflow.org/api_docs/python/tf/reduce_sum), [Hint4](https://www.tensorflow.org/api_docs/python/tf/square) and [Hint5](https://www.tensorflow.org/api_docs/python/tf/subtract).

```
# GRADED FUNCTION: compute_content_cost

def compute_content_cost(a_C, a_G):
    """
    Computes the content cost

    Arguments:
    a_C -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image C
    a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image G

    Returns:
    J_content -- scalar that you compute using equation 1 above.
""" ### START CODE HERE ### # Retrieve dimensions from a_G (≈1 line) m, n_H, n_W, n_C = a_G.get_shape().as_list() # Reshape a_C and a_G (≈2 lines) a_C_unrolled = tf.reshape(a_C, [ n_C, n_W*n_H]) a_G_unrolled = tf.reshape(a_G, [ n_C, n_W*n_H]) # compute the cost with tensorflow (≈1 line) J_content = tf.reduce_sum(tf.squared_difference(a_C_unrolled, a_G_unrolled))/(4*n_W*n_C*n_H) ### END CODE HERE ### return J_content tf.reset_default_graph() with tf.Session() as test: tf.set_random_seed(1) a_C = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4) a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4) J_content = compute_content_cost(a_C, a_G) print("J_content = " + str(J_content.eval())) ``` **Expected Output**: <table> <tr> <td> **J_content** </td> <td> 6.76559 </td> </tr> </table> <font color='blue'> **What you should remember**: - The content cost takes a hidden layer activation of the neural network, and measures how different $a^{(C)}$ and $a^{(G)}$ are. - When we minimize the content cost later, this will help make sure $G$ has similar content as $C$. ### 3.2 - Computing the style cost For our running example, we will use the following style image: ``` style_image = scipy.misc.imread("images/monet_800600.jpg") imshow(style_image) ``` This painting was painted in the style of *[impressionism](https://en.wikipedia.org/wiki/Impressionism)*. Lets see how you can now define a "style" const function $J_{style}(S,G)$. ### 3.2.1 - Style matrix The style matrix is also called a "Gram matrix." In linear algebra, the Gram matrix G of a set of vectors $(v_{1},\dots ,v_{n})$ is the matrix of dot products, whose entries are ${\displaystyle G_{ij} = v_{i}^T v_{j} = np.dot(v_{i}, v_{j}) }$. In other words, $G_{ij}$ compares how similar $v_i$ is to $v_j$: If they are highly similar, you would expect them to have a large dot product, and thus for $G_{ij}$ to be large. Note that there is an unfortunate collision in the variable names used here. 
We are following common terminology used in the literature, but $G$ is used to denote the Style matrix (or Gram matrix) as well as to denote the generated image $G$. We will try to make sure which $G$ we are referring to is always clear from the context.

In NST, you can compute the Style matrix by multiplying the "unrolled" filter matrix with its transpose:

<img src="images/NST_GM.png" style="width:900px;height:300px;">

The result is a matrix of dimension $(n_C,n_C)$ where $n_C$ is the number of filters. The value $G_{ij}$ measures how similar the activations of filter $i$ are to the activations of filter $j$.

One important property of the Gram matrix is that the diagonal elements such as $G_{ii}$ also measure how active filter $i$ is. For example, suppose filter $i$ is detecting vertical textures in the image. Then $G_{ii}$ measures how common vertical textures are in the image as a whole: if $G_{ii}$ is large, this means that the image has a lot of vertical texture.

By capturing the prevalence of different types of features ($G_{ii}$), as well as how much different features occur together ($G_{ij}$), the Style matrix $G$ measures the style of an image.

**Exercise**: Using TensorFlow, implement a function that computes the Gram matrix of a matrix A. The formula is: the Gram matrix of A is $G_A = AA^T$. If you are stuck, take a look at [Hint 1](https://www.tensorflow.org/api_docs/python/tf/matmul) and [Hint 2](https://www.tensorflow.org/api_docs/python/tf/transpose).
``` # GRADED FUNCTION: gram_matrix def gram_matrix(A): """ Argument: A -- matrix of shape (n_C, n_H*n_W) Returns: GA -- Gram matrix of A, of shape (n_C, n_C) """ ### START CODE HERE ### (≈1 line) GA = tf.matmul(A, tf.transpose(A)) ### END CODE HERE ### return GA tf.reset_default_graph() with tf.Session() as test: tf.set_random_seed(1) A = tf.random_normal([3, 2*1], mean=1, stddev=4) GA = gram_matrix(A) print("GA = " + str(GA.eval())) ``` **Expected Output**: <table> <tr> <td> **GA** </td> <td> [[ 6.42230511 -4.42912197 -2.09668207] <br> [ -4.42912197 19.46583748 19.56387138] <br> [ -2.09668207 19.56387138 20.6864624 ]] </td> </tr> </table> ### 3.2.2 - Style cost After generating the Style matrix (Gram matrix), your goal will be to minimize the distance between the Gram matrix of the "style" image S and that of the "generated" image G. For now, we are using only a single hidden layer $a^{[l]}$, and the corresponding style cost for this layer is defined as: $$J_{style}^{[l]}(S,G) = \frac{1}{4 \times {n_C}^2 \times (n_H \times n_W)^2} \sum _{i=1}^{n_C}\sum_{j=1}^{n_C}(G^{(S)}_{ij} - G^{(G)}_{ij})^2\tag{2} $$ where $G^{(S)}$ and $G^{(G)}$ are respectively the Gram matrices of the "style" image and the "generated" image, computed using the hidden layer activations for a particular hidden layer in the network. **Exercise**: Compute the style cost for a single layer. **Instructions**: The 3 steps to implement this function are: 1. Retrieve dimensions from the hidden layer activations a_G: - To retrieve dimensions from a tensor X, use: `X.get_shape().as_list()` 2. Unroll the hidden layer activations a_S and a_G into 2D matrices, as explained in the picture above. - You may find [Hint1](https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/transpose) and [Hint2](https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/reshape) useful. 3. Compute the Style matrix of the images S and G. (Use the function you had previously written.) 4. 
Compute the Style cost: - You may find [Hint3](https://www.tensorflow.org/api_docs/python/tf/reduce_sum), [Hint4](https://www.tensorflow.org/api_docs/python/tf/square) and [Hint5](https://www.tensorflow.org/api_docs/python/tf/subtract) useful. ``` # GRADED FUNCTION: compute_layer_style_cost def compute_layer_style_cost(a_S, a_G): """ Arguments: a_S -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image S a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image G Returns: J_style_layer -- tensor representing a scalar value, style cost defined above by equation (2) """ ### START CODE HERE ### # Retrieve dimensions from a_G (≈1 line) m, n_H, n_W, n_C = a_G.get_shape().as_list() # Reshape the images to have them of shape (n_C, n_H*n_W) (≈2 lines) a_S = tf.transpose(tf.reshape(a_S, [n_H*n_W, n_C])) a_G = tf.transpose(tf.reshape(a_G, [n_H*n_W, n_C])) # Computing gram_matrices for both images S and G (≈2 lines) GS = tf.matmul(a_S, tf.transpose(a_S)) GG = tf.matmul(a_G, tf.transpose(a_G)) # Computing the loss (≈1 line) J_style_layer = tf.reduce_sum(tf.squared_difference(GS, GG))/(4*n_C*n_C*n_H*n_H*n_W*n_W) ### END CODE HERE ### return J_style_layer tf.reset_default_graph() with tf.Session() as test: tf.set_random_seed(1) a_S = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4) a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4) J_style_layer = compute_layer_style_cost(a_S, a_G) print("J_style_layer = " + str(J_style_layer.eval())) ``` **Expected Output**: <table> <tr> <td> **J_style_layer** </td> <td> 9.19028 </td> </tr> </table> ### 3.2.3 Style Weights So far you have captured the style from only one layer. We'll get better results if we "merge" style costs from several different layers. After completing this exercise, feel free to come back and experiment with different weights to see how it changes the generated image $G$. 
But for now, this is a pretty reasonable default: ``` STYLE_LAYERS = [ ('conv1_1', 0.2), ('conv2_1', 0.2), ('conv3_1', 0.2), ('conv4_1', 0.2), ('conv5_1', 0.2)] ``` You can combine the style costs for different layers as follows: $$J_{style}(S,G) = \sum_{l} \lambda^{[l]} J^{[l]}_{style}(S,G)$$ where the values for $\lambda^{[l]}$ are given in `STYLE_LAYERS`. We've implemented a compute_style_cost(...) function. It simply calls your `compute_layer_style_cost(...)` several times, and weights their results using the values in `STYLE_LAYERS`. Read over it to make sure you understand what it's doing. <!-- 2. Loop over (layer_name, coeff) from STYLE_LAYERS: a. Select the output tensor of the current layer. As an example, to call the tensor from the "conv1_1" layer you would do: out = model["conv1_1"] b. Get the style of the style image from the current layer by running the session on the tensor "out" c. Get a tensor representing the style of the generated image from the current layer. It is just "out". d. Now that you have both styles. Use the function you've implemented above to compute the style_cost for the current layer e. Add (style_cost x coeff) of the current layer to overall style cost (J_style) 3. Return J_style, which should now be the sum of the (style_cost x coeff) for each layer. 
!--> ``` def compute_style_cost(model, STYLE_LAYERS): """ Computes the overall style cost from several chosen layers Arguments: model -- our tensorflow model STYLE_LAYERS -- A python list containing: - the names of the layers we would like to extract style from - a coefficient for each of them Returns: J_style -- tensor representing a scalar value, style cost defined above by equation (2) """ # initialize the overall style cost J_style = 0 for layer_name, coeff in STYLE_LAYERS: # Select the output tensor of the currently selected layer out = model[layer_name] # Set a_S to be the hidden layer activation from the layer we have selected, by running the session on out a_S = sess.run(out) # Set a_G to be the hidden layer activation from same layer. Here, a_G references model[layer_name] # and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that # when we run the session, this will be the activations drawn from the appropriate layer, with G as input. a_G = out # Compute style_cost for the current layer J_style_layer = compute_layer_style_cost(a_S, a_G) # Add coeff * J_style_layer of this layer to overall style cost J_style += coeff * J_style_layer return J_style ``` **Note**: In the inner-loop of the for-loop above, `a_G` is a tensor and hasn't been evaluated yet. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below. <!-- How do you choose the coefficients for each layer? The deeper layers capture higher-level concepts, and the features in the deeper layers are less localized in the image relative to each other. So if you want the generated image to softly follow the style image, try choosing larger weights for deeper layers and smaller weights for the first layers. 
In contrast, if you want the generated image to strongly follow the style image, try choosing smaller weights for deeper layers and larger weights for the first layers.
-->

<font color='blue'>
**What you should remember**:
- The style of an image can be represented using the Gram matrix of a hidden layer's activations. However, we get even better results by combining this representation from multiple different layers. This is in contrast to the content representation, where usually using just a single hidden layer is sufficient.
- Minimizing the style cost will cause the image $G$ to follow the style of the image $S$.
</font>

### 3.3 - Defining the total cost to optimize

Finally, let's create a cost function that minimizes both the style and the content cost. The formula is:

$$J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$$

**Exercise**: Implement the total cost function which includes both the content cost and the style cost.

```
# GRADED FUNCTION: total_cost

def total_cost(J_content, J_style, alpha = 10, beta = 40):
    """
    Computes the total cost function

    Arguments:
    J_content -- content cost coded above
    J_style -- style cost coded above
    alpha -- hyperparameter weighting the importance of the content cost
    beta -- hyperparameter weighting the importance of the style cost

    Returns:
    J -- total cost as defined by the formula above.
""" ### START CODE HERE ### (≈1 line) J = alpha*J_content + beta*J_style ### END CODE HERE ### return J tf.reset_default_graph() with tf.Session() as test: np.random.seed(3) J_content = np.random.randn() J_style = np.random.randn() J = total_cost(J_content, J_style) print("J = " + str(J)) ``` **Expected Output**: <table> <tr> <td> **J** </td> <td> 35.34667875478276 </td> </tr> </table> <font color='blue'> **What you should remember**: - The total cost is a linear combination of the content cost $J_{content}(C,G)$ and the style cost $J_{style}(S,G)$ - $\alpha$ and $\beta$ are hyperparameters that control the relative weighting between content and style ## 4 - Solving the optimization problem Finally, let's put everything together to implement Neural Style Transfer! Here's what the program will have to do: <font color='purple'> 1. Create an Interactive Session 2. Load the content image 3. Load the style image 4. Randomly initialize the image to be generated 5. Load the VGG16 model 7. Build the TensorFlow graph: - Run the content image through the VGG16 model and compute the content cost - Run the style image through the VGG16 model and compute the style cost - Compute the total cost - Define the optimizer and the learning rate 8. Initialize the TensorFlow graph and run it for a large number of iterations, updating the generated image at every step. </font> Lets go through the individual steps in detail. You've previously implemented the overall cost $J(G)$. We'll now set up TensorFlow to optimize this with respect to $G$. To do so, your program has to reset the graph and use an "[Interactive Session](https://www.tensorflow.org/api_docs/python/tf/InteractiveSession)". Unlike a regular session, the "Interactive Session" installs itself as the default session to build a graph. This allows you to run variables without constantly needing to refer to the session object, which simplifies the code. Lets start the interactive session. 
```
# Reset the graph
tf.reset_default_graph()

# Start interactive session
sess = tf.InteractiveSession()
```

Let's load, reshape, and normalize our "content" image:

```
content_image = scipy.misc.imread("images/camp-nou.jpg")
content_image = reshape_and_normalize_image(content_image)
```

Let's load, reshape and normalize our "style" image (Claude Monet's painting):

```
style_image = scipy.misc.imread("images/monet.jpg")
style_image = reshape_and_normalize_image(style_image)
```

Now, we initialize the "generated" image as a noisy image created from the content_image. By initializing the pixels of the generated image to be mostly noise but still slightly correlated with the content image, we help the content of the "generated" image more rapidly match the content of the "content" image. (Feel free to look in `nst_utils.py` to see the details of `generate_noise_image(...)`; to do so, click "File-->Open..." at the upper-left corner of this Jupyter notebook.)

```
generated_image = generate_noise_image(content_image)
imshow(generated_image[0])
```

Next, as explained in part (2), let's load the VGG-19 model.

```
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
```

To get the program to compute the content cost, we will now assign `a_C` and `a_G` to be the appropriate hidden layer activations. We will use layer `conv4_2` to compute the content cost. The code below does the following:

1. Assign the content image to be the input to the VGG model.
2. Set a_C to be the tensor giving the hidden layer activation for layer "conv4_2".
3. Set a_G to be the tensor giving the hidden layer activation for the same layer.
4. Compute the content cost using a_C and a_G.

```
# Assign the content image to be the input of the VGG model.
sess.run(model['input'].assign(content_image))

# Select the output tensor of layer conv4_2
out = model['conv4_2']

# Set a_C to be the hidden layer activation from the layer we have selected
a_C = sess.run(out)

# Set a_G to be the hidden layer activation from same layer. Here, a_G references model['conv4_2']
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out

# Compute the content cost
J_content = compute_content_cost(a_C, a_G)
```

**Note**: At this point, a_G is a tensor and hasn't been evaluated. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below.

```
# Assign the input of the model to be the "style" image
sess.run(model['input'].assign(style_image))

# Compute the style cost
J_style = compute_style_cost(model, STYLE_LAYERS)
```

**Exercise**: Now that you have J_content and J_style, compute the total cost J by calling `total_cost()`. Use `alpha = 10` and `beta = 40`.

```
### START CODE HERE ### (1 line)
J = total_cost(J_content, J_style, 10, 40)
### END CODE HERE ###
```

You'd previously learned how to set up the Adam optimizer in TensorFlow. Let's do that here, using a learning rate of 2.0. [See reference](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer)

```
# define optimizer (1 line)
optimizer = tf.train.AdamOptimizer(2.0)

# define train_step (1 line)
train_step = optimizer.minimize(J)
```

**Exercise**: Implement the model_nn() function, which initializes the variables of the tensorflow graph, assigns the input image (initial generated image) as the input of the VGG model, and runs the train_step for a large number of steps.
```
def model_nn(sess, input_image, num_iterations = 200):

    # Initialize global variables (you need to run the session on the initializer)
    ### START CODE HERE ### (1 line)
    sess.run(tf.global_variables_initializer())
    ### END CODE HERE ###

    # Run the noisy input image (initial generated image) through the model. Use assign().
    ### START CODE HERE ### (1 line)
    sess.run(model['input'].assign(input_image))
    ### END CODE HERE ###

    for i in range(num_iterations):

        # Run the session on the train_step to minimize the total cost
        ### START CODE HERE ### (1 line)
        sess.run(train_step)
        ### END CODE HERE ###

        # Compute the generated image by running the session on the current model['input']
        ### START CODE HERE ### (1 line)
        generated_image = sess.run(model['input'])
        ### END CODE HERE ###

        # Print every 20 iterations.
        if i%20 == 0:
            Jt, Jc, Js = sess.run([J, J_content, J_style])
            print("Iteration " + str(i) + " :")
            print("total cost = " + str(Jt))
            print("content cost = " + str(Jc))
            print("style cost = " + str(Js))

            # save current generated image in the "/output" directory
            save_image("output/" + str(i) + ".png", generated_image)

    # save last generated image
    save_image('output/generated_image.jpg', generated_image)

    return generated_image
```

Run the following cell to generate an artistic image. It should take about 3 minutes on CPU for every 20 iterations, but you will start observing attractive results after ≈140 iterations. Neural Style Transfer is generally trained using GPUs.

```
model_nn(sess, generated_image)
```

**Expected Output**:

<table>
<tr>
<td>
**Iteration 0 : **
</td>
<td>
total cost = 5.05035e+09 <br>
content cost = 7877.67 <br>
style cost = 1.26257e+08
</td>
</tr>
</table>

You're done! After running this, in the upper bar of the notebook click on "File" and then "Open". Go to the "/output" directory to see all the saved images. Open "generated_image" to see the generated image!
:) You should see something like the image presented below on the right:

<img src="images/louvre_generated.png" style="width:800px;height:300px;">

We didn't want you to wait too long to see an initial result, and so had set the hyperparameters accordingly. To get the best-looking results, running the optimization algorithm longer (and perhaps with a smaller learning rate) might work better. After completing and submitting this assignment, we encourage you to come back and play more with this notebook, and see if you can generate even better-looking images.

Here are a few other examples:

- The beautiful ruins of the ancient city of Persepolis (Iran) with the style of Van Gogh (The Starry Night)

<img src="images/perspolis_vangogh.png" style="width:750px;height:300px;">

- The tomb of Cyrus the Great in Pasargadae with the style of a ceramic kashi from Isfahan.

<img src="images/pasargad_kashi.png" style="width:750px;height:300px;">

- A scientific study of a turbulent fluid with the style of an abstract blue fluid painting.

<img src="images/circle_abstract.png" style="width:750px;height:300px;">

## 5 - Test with your own image (Optional/Ungraded)

Finally, you can also rerun the algorithm on your own images!

To do so, go back to part 4 and change the content image and style image with your own pictures. In detail, here's what you should do:

1. Click on "File -> Open" in the upper tab of the notebook
2. Go to "/images" and upload your images (requirement: (WIDTH = 300, HEIGHT = 225)), rename them "my_content.jpg" and "my_style.jpg" for example.
3. Change the code in part (3.4) from:
```python
content_image = scipy.misc.imread("images/camp-nou.jpg")
style_image = scipy.misc.imread("images/monet.jpg")
```
to:
```python
content_image = scipy.misc.imread("images/my_content.jpg")
style_image = scipy.misc.imread("images/my_style.jpg")
```
4. Rerun the cells (you may need to restart the Kernel in the upper tab of the notebook).
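Step 2 above requires your pictures to be exactly 300×225 pixels. A minimal resizing sketch using Pillow — an assumption on my part; any image tool works, and the blank test image here just stands in for your own photo:

```python
from PIL import Image

# A blank 640x480 image stands in for your own picture;
# replace it with Image.open("your_photo.jpg") in practice.
img = Image.new("RGB", (640, 480), (128, 128, 128))

# Resize to the (WIDTH, HEIGHT) the notebook expects.
resized = img.resize((300, 225))
print(resized.size)
```

Save the result into the "/images" folder (e.g. `resized.save("images/my_content.jpg")`) before rerunning the cells.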
You can also tune your hyperparameters:
- Which layers are responsible for representing the style? STYLE_LAYERS
- How many iterations do you want to run the algorithm? num_iterations
- What is the relative weighting between content and style? alpha/beta

## 6 - Conclusion

Great job on completing this assignment! You are now able to use Neural Style Transfer to generate artistic images. This is also your first time building a model in which the optimization algorithm updates the pixel values rather than the neural network's parameters. Deep learning has many different types of models and this is only one of them!

<font color='blue'>
What you should remember:
- Neural Style Transfer is an algorithm that, given a content image C and a style image S, can generate an artistic image
- It uses representations (hidden layer activations) based on a pretrained ConvNet.
- The content cost function is computed using one hidden layer's activations.
- The style cost function for one layer is computed using the Gram matrix of that layer's activations. The overall style cost function is obtained using several hidden layers.
- Optimizing the total cost function results in synthesizing new images.

This was the final programming exercise of this course. Congratulations--you've finished all the programming exercises of this course on Convolutional Networks! We hope to also see you in Course 5, on Sequence models!

### References:

The Neural Style Transfer algorithm is due to Gatys et al. (2015). Harish Narayanan and GitHub user "log0" also have highly readable write-ups from which we drew inspiration. The pre-trained network used in this implementation is a VGG network, which is due to Simonyan and Zisserman (2015). Pre-trained weights are from the work of the MatConvNet team.

- Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, (2015). A Neural Algorithm of Artistic Style (https://arxiv.org/abs/1508.06576)
- Harish Narayanan, Convolutional neural networks for artistic style transfer.
https://harishnarayanan.org/writing/artistic-style-transfer/ - Log0, TensorFlow Implementation of "A Neural Algorithm of Artistic Style". http://www.chioka.in/tensorflow-implementation-neural-algorithm-of-artistic-style - Karen Simonyan and Andrew Zisserman (2015). Very deep convolutional networks for large-scale image recognition (https://arxiv.org/pdf/1409.1556.pdf) - MatConvNet. http://www.vlfeat.org/matconvnet/pretrained/
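The style representation at the heart of this algorithm is the Gram matrix of a layer's activations. As a rough NumPy illustration of the idea (`gram_matrix` is a hypothetical helper, not the notebook's TensorFlow implementation, which works on unrolled 4-D activation tensors):

```python
import numpy as np

def gram_matrix(A):
    """Gram matrix G = A . A^T of an (n_C, n_H*n_W) unrolled activation matrix."""
    return A @ A.T

# Two "channels" with three activations each.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 0.0]])
G = gram_matrix(A)
print(G)  # each entry is a dot product between two channels' activations
```

Entry G[i][j] measures how strongly channels i and j co-activate, which is why matching Gram matrices transfers texture/style rather than spatial content.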
# Git

Git is the version control software used in the workshop.

## Installation

- Go to the following link and choose the correct version for your operating system: https://git-scm.com/downloads.
- Following the download, run the installer as per usual on your machine.
- **Windows**: You may leave all selection widgets at their default values.

## Check installation

You can check that git has been successfully installed:

- Open a terminal/command prompt and type `git`. (Hit the `[Enter]` key to terminate the command entry.)

If git is installed then you should see the following:

![](static/git.png)

### Notes for Windows

In your Search bar, type `git-bash` and start the `git-bash` executable. This brings up a terminal window. If you cannot locate the `git-bash` launcher, go to "Start Menu -> Git -> Git Bash". You may want to create a shortcut on the Desktop or add it to the taskbar.

![Screenshot 2021-01-04 163805.png](attachment:bb163b1a-b129-49e2-9b5b-d47faaa4fbdb.png)

Enter

```shell
$ git
```

and observe the output given above.

# Python

This workshop uses Python as the example programming language; however, the ideas and principles covered extend and apply to any language.

To install Python we will use **Anaconda**. Anaconda is a Python distribution (similar to how Ubuntu is a distribution of Linux). It bundles the Python interpreter with numerous libraries, add-ons, and development environments such as Jupyter.

## Installation

- Go to the following link https://www.anaconda.com/download/ and select Python 3.
- Following the download, run the installer as per usual on your machine.

**Windows users**

Below are some screenshots for installing Anaconda under Windows. Most importantly, select "Install for: Just me".

![Screenshot 2021-01-04 155532.png](attachment:f02bed15-4df1-4665-b1ed-9c4f82143d8a.png)

Next, accept the suggested installation path.
![Screenshot 2021-01-04 155730.png](attachment:0db32393-438e-4993-bbd0-da9cb8a454ae.png)

Finally, select "Add Anaconda3 to my PATH environment variable" despite the warning.

![Screenshot 2021-01-04 155932.png](attachment:cd51a94d-3213-47c8-b74c-bbecff89f761.png)

## Test your Python installation

In a terminal window (Windows: `git-bash`), enter the command

```shell
$ python -c "import sys; print(sys.version)"
```

![Screenshot 2021-01-04 161145.png](attachment:1526b200-18c9-42b2-b3cc-c7777ad766e2.png)

# Editor

There are several editors that allow us to edit files such as Python scripts, LaTeX files, etc. For this workshop the suggested editor is VS Code. If you prefer to use a different editor, skip this part.

## Installation

- Go to the following link https://code.visualstudio.com and click download.

## Install python extension for VSCode

- Open VS Code.
- Bring up the Extensions view by clicking on the Extensions icon in the Activity Bar on the side of VS Code.
- There, type "Python", then select and install the first hit returned from the search.

![](static/vs_extension.png)

# GitLab

In this workshop, we will use GitLab, a code-sharing service. The GWDG offers free GitLab accounts to all MPI employees. To interact with the code on GitLab, we will use `git`, the version control system.

To log in to GitLab, use this website: https://gitlab.gwdg.de/users/sign_in. **Use the tab titled "eMail-address"**; do not use the tab "Standard". Enter your email address and password into the login dialog of the eMail-address tab.

![drawings.png](attachment:a6e7e367-bc6f-49a2-ae89-1f663e2df7c1.png)

# Check list

- [ ] Python
- [ ] Git
- [ ] VS code (or any other editor)
- [ ] GitLab
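The checklist above can also be ticked off from a single Python prompt, in the spirit of the test command shown earlier. This is just a convenience sketch: `shutil.which` only reports whether an executable is reachable on your PATH, and the VS Code command-line launcher (`code`) may legitimately be absent even when the editor is installed:

```python
import shutil
import sys

# Python itself: if this runs, Python is installed.
print(sys.version)

# Git and (optionally) the VS Code CLI on the PATH.
print(shutil.which("git") or "git not found on PATH")
print(shutil.which("code") or "VS Code CLI not found on PATH (optional)")
```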
# Cross Validation - Gabbar ``` %matplotlib inline %config InlineBackend.figure_format = 'retina' import warnings warnings.filterwarnings("ignore") import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns; sns.set_style('ticks') from sklearn.model_selection import cross_val_score pd.set_option('display.precision', 2) pd.set_option('display.max_columns', None) random_state = 5 cv = 10 non_training_attributes = ['changeset_id', 'changeset_harmful', 'feature_id', 'new_tags', 'old_tags'] labelled_path = '../downloads/highway-classifier/labelled/' labelled = pd.read_csv(labelled_path + 'attributes.csv') print(labelled.shape) # Sort the dataset randomly. labelled = labelled.sample(labelled.shape[0], random_state=random_state) labelled.head() # Drop all duplicate samples. print('Shape before dropping duplicates: {}'.format(labelled.shape)) labelled = labelled.drop_duplicates(subset=['changeset_id', 'feature_id']) print('Shape after dropping duplicates: {}'.format(labelled.shape)) # Fill null values in tags with empty string. labelled['old_tags'] = labelled['old_tags'].fillna('') labelled['new_tags'] = labelled['new_tags'].fillna('') from sklearn.externals import joblib model = joblib.load('../gabbar/trained/model.pkl') new_vectorizer = joblib.load('../gabbar/trained/new_vectorizer.pkl') old_vectorizer = joblib.load('../gabbar/trained/old_vectorizer.pkl') # Vectorize old_tags. old_vectorized = pd.DataFrame(old_vectorizer.transform(labelled['old_tags']).toarray(), columns=old_vectorizer.get_feature_names()) old_vectorized.columns = ['old_{}'.format(item) for item in old_vectorized.columns] # Vectorize new_tags. new_vectorized = pd.DataFrame(new_vectorizer.transform(labelled['new_tags']).toarray(), columns=new_vectorizer.get_feature_names()) new_vectorized.columns = ['new_{}'.format(item) for item in new_vectorized.columns] # Concatenate both initial validation data and vectorized data. 
labelled = pd.concat([labelled, new_vectorized, old_vectorized], axis=1) print(labelled.shape) labelled.head() X = labelled.drop(non_training_attributes, axis=1) y = labelled['changeset_harmful'] metrics = ['precision', 'recall', 'f1'] index = [] scores = [] for cv in [2, 3, 5, 10, 20, 40, 80, 160, 320]: temp_scores = [] for metric in metrics: temp_score = cross_val_score(model, X, y, cv=cv, scoring=metric) # Using just the mean. temp_scores.append(temp_score.mean()) scores.append(temp_scores) index.append(cv) scores = pd.DataFrame(scores, columns=['precision', 'recall', 'f1_score'], index=index) print(scores.shape) scores.head() axes = scores.plot(figsize=(8, 5)) axes.set_xlabel('Cross validation splitting strategy') axes.set_ylabel('Scores') axes.set_ylim(0, 0.5) # axes.set_xticks(np.arange(0, 100, 10)) plt.grid() plt.tight_layout() ```
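The `cv` values swept above (2, 3, 5, ..., 320) control how the samples are partitioned into train/test folds. The partitioning itself can be sketched in plain Python — `kfold_indices` is a hypothetical, simplified stand-in for scikit-learn's `KFold` (no shuffling), just to make the splitting strategy concrete:

```python
def kfold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation.

    The first n % k folds get one extra sample, like sklearn's KFold.
    """
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    idx = list(range(n))
    start = 0
    for size in fold_sizes:
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, test
        start += size

for train, test in kfold_indices(6, 3):
    print(test, "->", train)
```

Each sample lands in exactly one test fold, so larger `cv` means smaller (noisier) test folds but more training data per fit — which is the trade-off the plot above explores.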
# A lovely stochastic progressive infection model

## Introduction

A few days ago, I saw the following image on Facebook:

![Tree-shaped infection model with isolation](https://scontent.fcpq4-1.fna.fbcdn.net/v/t1.0-9/91112031_3683857418354413_569726385317216256_o.jpg?_nc_cat=100&_nc_sid=8bfeb9&_nc_ohc=T6sLKt5A6IkAX9CZbrx&_nc_pt=1&_nc_ht=scontent.fcpq4-1.fna&oh=17a20312c67a4307eeaf138076b39d5f&oe=5EA33542)

It describes a tree-shaped infection model in which one person has the potential to infect three others. That number comes from the canonical (?) reports on infectious diseases, but it represents a very loose model compared to the real scenario.

Below, I describe a model that I believe is more realistic, based on relationships extracted from a database of connections between Facebook users. The main differences are:

- A graph with a small radius (4.7) and, therefore, much more connected than the tree above (it better approximates human social relationships)
- The system constants are reasonably assumed from data extracted from stable sources (e.g. IBGE). Some of these are: the home-officing probability, the infection probability, and the number of days of home quarantine.

**Disclaimer:** even though social relationships are better represented by complex multi-cluster graphs than by trees or random connections, this problem has a fractal property in which each cluster behaves like an individual. The same properties hold, and events are recursively passed down to the member individuals. Therefore, an infection model based on realistic graphs (like this one) and a random infection model (more efficient) converge and have very similar error rates in practice.
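For reference, the tree model from the shared image grows geometrically: assuming every case infects exactly three new people per generation, the total case count is a geometric series. A toy calculation of that baseline (`tree_infections` is a hypothetical helper, not part of the simulation below):

```python
def tree_infections(generations):
    """Total cases after g generations of the 1-infects-3 tree model."""
    return sum(3 ** g for g in range(generations + 1))

print([tree_infections(g) for g in range(5)])  # → [1, 4, 13, 40, 121]
```

The graph-based model below grows much more slowly than this, because acquaintances overlap: many of the three "new" contacts are already infected.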
```
import os
from math import ceil

import numpy as np
import pandas as pd
from numpy.random import rand
import networkx as nx
from google.colab import drive

import matplotlib.pyplot as plt
import matplotlib.animation as animation
import seaborn as sns

sns.set()
```

## Reading the *Facebook Circles* dataset

Social relationships are simulated by the graph described in the *Facebook Circles* dataset, which contains Facebook friendship links between anonymized users. For this simulation, only the connected component with the largest cardinality was considered; the remaining nodes and edges were discarded.

```
FACEBOOK_DIR = '/content/gdrive/My Drive/datasets/facebook'

#@title
drive.mount('/content/gdrive')

FILES = os.listdir(FACEBOOK_DIR)
FILE_GROUPS = ('.edges', '.egofeat', '.feat', '.featnames', '.circles')
EDGES, EGO_FEAT, FEAT, FEAT_NAMES, CIRCLES = ([f for f in FILES if f.endswith(ext)]
                                              for ext in FILE_GROUPS)

G = (nx.read_edgelist(os.path.join(FACEBOOK_DIR, e)) for e in EDGES)
G = nx.compose_all(G)

# For simplicity, only consider the largest connected component.
largest_cc = max(nx.connected_components(G), key=len)
G = G.subgraph(largest_cc).copy()

pos = nx.spring_layout(G)

#@title
print('Individuals:', len(G))
print('Relationships:', len(G.edges))

#@title defining some useful functions
def show_contamination_graph(G, pos, contaminated, verbose=1, ax=None):
    if verbose:
        print('population:', len(G))
        print(f'contaminated: {len(contaminated)} ({len(contaminated) / len(G):.2%})')

    nx.draw_networkx_nodes(G, pos,
                           node_color=~np.isin(G.nodes, contaminated),
                           node_size=10, alpha=.8, cmap=plt.cm.Set1, ax=ax)
    nx.draw_networkx_edges(G, pos, width=1, alpha=.1, ax=ax)
    plt.axis('off')

def show_contamination_progress(G, pos, contaminated, verbose=1, ax=None):
    COLUMNS = 7

    for day, c in enumerate(contaminated):
        infection_rate = len(c) / len(G)
        plt.subplot(ceil(len(contaminated) / COLUMNS), COLUMNS, day + 1,
                    title=f'Day {day} {infection_rate:.0%}')
        show_contamination_graph(G, pos, list(c), verbose=0)

def random_uniform_sample(G, rate):
    return np.asarray(G.nodes)[np.random.rand(len(G)) <= rate]
```

### Initial contamination state

```
#@title
CONTAMINATED_RATE = .01

print('contamination rate:', CONTAMINATED_RATE)

contaminated_0 = random_uniform_sample(G, CONTAMINATED_RATE)
show_contamination_graph(G, pos, contaminated_0)
```

## Instantaneous infection model

In this model, an infected individual immediately infects its neighbors in a single *epoch*.

### Showing the extent of the infection

In this first scenario, individuals carry on with their interaction routines as usual.
```
#@title
def expand_contamination(G, contaminated):
    contamination_trees = [nx.algorithms.dfs_tree(G, c) for c in contaminated]
    return (nx.compose_all(contamination_trees)
              .nodes)

contaminated_1 = expand_contamination(G, contaminated_0)
show_contamination_graph(G, pos, contaminated_1)
```

### Random individuals are home-officing

According to [IBGE](https://biblioteca.ibge.gov.br/visualizacao/livros/liv101694_informativo.pdf), 5.2% of the working class would work from home in 2018. Let's top that up to 10% now that the pandemic is here.

```
#@title
def expand_contamination_ho(G, contaminated, home_officing):
    G_nho = G.copy()
    G_nho.remove_nodes_from(home_officing)

    contaminated_not_ho = contaminated[~np.isin(contaminated, home_officing)]

    if len(contaminated_not_ho):
        contaminated_not_ho = expand_contamination(G_nho, contaminated_not_ho)
    else:
        contaminated_not_ho = []

    return list(set(contaminated) | set(contaminated_not_ho))

def experiment(home_office_rate, contaminated_rate):
    home_officing_0 = random_uniform_sample(G, home_office_rate)
    contaminated_0 = random_uniform_sample(G, contaminated_rate)

    contaminated_1 = expand_contamination_ho(G, contaminated_0, home_officing_0)
    show_contamination_graph(G, pos, contaminated_1)

experiment(home_office_rate=0.1, contaminated_rate=0.01)
experiment(home_office_rate=0.2, contaminated_rate=0.01)
experiment(home_office_rate=0.5, contaminated_rate=0.01)
experiment(home_office_rate=0.9, contaminated_rate=0.01)
```

## Progressive infection model

In this model, epochs (in days) are executed and the infection covers the population progressively, respecting the *Facebook Circles* relationship graph and two factors:

- $E$: the probability that an individual meets one of their neighbors
- $p$: the probability that a meeting turns into an infection

The experiment also has some control variables. They are listed below together with their default values.
- `OUT_FOR_GROCERIES_PR`: probability that an individual goes out to the supermarket (1/7)
- `INFECTIOUS_AFTER_DAYS`: number of days after contamination at which a person becomes infectious (3)
- `INFECTIOUS_FOR_DAYS`: number of days during which a person stays infectious (14)
- `CONTAMINATED_PR`: initial probability that an individual in the population is contaminated (1%)
- `HOME_OFFICING_PR`: initial probability that an individual is home-officing (10%)

$\Delta N_d = E \cdot p \cdot N_d$

$N_{d+1} = N_d + E \cdot p \cdot N_d \implies N_{d+1} = (1 + E\cdot p) N_d$

[1] https://www.youtube.com/watch?v=Kas0tIxDvrg

```
def experiment():
    contaminated_0 = random_uniform_sample(G, CONTAMINATED_PR)
    home_officing_0 = random_uniform_sample(G, HOME_OFFICING_PR)

    cs, infs, c = expand_contamination(G, contaminated_0, home_officing_0, days=DAYS)

    print('Initial state:')
    print(f'  contaminated pr:  {CONTAMINATED_PR:.2%}')
    print(f'  home-officing pr: {HOME_OFFICING_PR:.2%}')
    print(f'contaminated after {DAYS} days: {cs[-1]} ({cs[-1] / len(G):.0%} of {len(G)})')

    d = pd.DataFrame({
        'day': np.arange(DAYS),
        'contaminated': cs,
        'infectous': infs
    }).melt(id_vars=['day'])

    plt.figure(figsize=(16, 6))
    plt.subplot(121)
    sns.lineplot(data=d, x='day', y='value', hue='variable').set(ylim=(0, len(G)))
    plt.subplot(122)
    show_contamination_graph(G, pos, np.asarray(list(G.nodes))[c >= 0], verbose=0);

def expand_contamination(G, c, ho, days=1):
    knows = np.asarray(nx.to_numpy_matrix(G)).astype(bool)  # know each other
    ho = np.isin(G.nodes, ho)                               # is home-officing
    c = np.isin(G.nodes, c).astype(float)                   # contaminated for # days
    c[c == 0] = -np.inf

    cs = []
    infs = []

    for day in range(days):
        left_home = (~ho | (rand(len(G)) <= OUT_FOR_GROCERIES_PR))
        infectious = ((c >= INFECTIOUS_AFTER_DAYS) &
                      (c < INFECTIOUS_AFTER_DAYS + INFECTIOUS_FOR_DAYS) &
                      left_home)

        c_day = ((knows[infectious, :] &
                  left_home.reshape(1, -1) &
                  (rand(infectious.sum(), len(G)) <= MEETING_PR * INFECTION_PR))
                 .any(axis=0))  # any of the acquaintances infected them

        c[c_day] = np.maximum(0, c[c_day])  # it is now infected
        c += 1                              # the day is over

        cs.append((c >= 0).sum())
        infs.append(infectious.sum())

    return cs, infs, c

DAYS = 365

MEETING_PR = 0.1
INFECTION_PR = 0.05

OUT_FOR_GROCERIES_PR = 1 / 7
# TODO: LEAVES_HOME_IF_INFECTOUS_PROBA = .1

INFECTIOUS_AFTER_DAYS = 3
INFECTIOUS_FOR_DAYS = 14

CONTAMINATED_PR=.01

HOME_OFFICING_PR=.1
experiment()

HOME_OFFICING_PR=.3
experiment()

HOME_OFFICING_PR=.5
experiment()
```

According to [an IBGE survey and a Folha article](https://www1.folha.uol.com.br/cotidiano/2020/04/28-dos-brasileiros-nao-fazem-isolamento-contra-coronavirus-diz-datafolha.shtml), 22% of Brazilians are not in quarantine.

```
HOME_OFFICING_PR=.78
experiment()
```
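The mean-field recurrence stated above, $N_{d+1} = (1 + E \cdot p)\,N_d$, has the closed form $N_d = (1 + E \cdot p)^d N_0$. A few lines verify that the day-by-day iteration and the closed form agree numerically; the constants are illustrative (in the spirit of `MEETING_PR`/`INFECTION_PR` above), not fitted values:

```python
E, p = 0.1, 0.05   # meeting and infection probabilities (illustrative)
N0, days = 10.0, 30

# Apply the recurrence day by day.
N = N0
for _ in range(days):
    N *= 1 + E * p

# Closed-form solution of the same recurrence.
closed_form = N0 * (1 + E * p) ** days
print(N, closed_form)
```

Unlike the graph simulation, this mean-field version grows without bound: it ignores both the finite population and the clustering of acquaintances, which is exactly why the simulated curves above flatten out.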
<a id="title"></a>
<a id="toc"></a>

![title](source/header2.png)

<div style="margin-top: 9px; background-color: #efefef; padding-top:10px; padding-bottom:10px;margin-bottom: 9px;box-shadow: 5px 5px 5px 0px rgba(87, 87, 87, 0.2);">
<center>
<h2>Table of Contents</h2>
</center>
<ol>
    <li><a href="#01" style="color: #37509b;">Initialization</a></li>
    <li><a href="#02" style="color: #37509b;">Dataset: Cleaning and Exploration</a></li>
    <li><a href="#03" style="color: #37509b;">Modelling</a></li>
    <li><a href="#04" style="color: #37509b;">Fourth Section</a></li>
    <li><a href="#05" style="color: #37509b;">Fifth Section</a></li>
</ol>
</div>

<a id="01" style=" background-color: #37509b; border: none; color: white; padding: 2px 10px; text-align: center; text-decoration: none; display: inline-block; font-size: 10px;" href="#toc">TOC ↻</a>

<div style="margin-top: 9px; background-color: #efefef; padding-top:10px; padding-bottom:10px;margin-bottom: 9px;box-shadow: 5px 5px 5px 0px rgba(87, 87, 87, 0.2);">
<center>
<h1>1.
Initialization</h1> </center> <ol type="i"> <!-- <li><a href="#0101" style="color: #37509b;">Inicialização</a></li> <li><a href="#0102" style="color: #37509b;">Pacotes</a></li> <li><a href="#0103" style="color: #37509b;">Funcoes</a></li> <li><a href="#0104" style="color: #37509b;">Dados de Indicadores Sociais</a></li> <li><a href="#0105" style="color: #37509b;">Dados de COVID-19</a></li> --> </ol> </div> <a id="0101"></a> <h2>1.1 Description <a href="#01" style=" border-radius: 10px; background-color: #f1f1f1; border: none; color: #37509b; text-align: center; text-decoration: none; display: inline-block; padding: 4px 4px; font-size: 14px; ">↻</a></h2> Dataset available in: <a href="https://www.kaggle.com/c/titanic/" target="_blank">https://www.kaggle.com/c/titanic/</a> ### Features <table> <tbody> <tr><th><b>Variable</b></th><th><b>Definition</b></th><th><b>Key</b></th></tr> <tr> <td>survival</td> <td>Survival</td> <td>0 = No, 1 = Yes</td> </tr> <tr> <td>pclass</td> <td>Ticket class</td> <td>1 = 1st, 2 = 2nd, 3 = 3rd</td> </tr> <tr> <td>sex</td> <td>Sex</td> <td></td> </tr> <tr> <td>Age</td> <td>Age in years</td> <td></td> </tr> <tr> <td>sibsp</td> <td># of siblings / spouses aboard the Titanic</td> <td></td> </tr> <tr> <td>parch</td> <td># of parents / children aboard the Titanic</td> <td></td> </tr> <tr> <td>ticket</td> <td>Ticket number</td> <td></td> </tr> <tr> <td>fare</td> <td>Passenger fare</td> <td></td> </tr> <tr> <td>cabin</td> <td>Cabin number</td> <td></td> </tr> <tr> <td>embarked</td> <td>Port of Embarkation</td> <td>C = Cherbourg, Q = Queenstown, S = Southampton</td> </tr> </tbody> </table> <a id="0102"></a> <h2>1.2 Packages <a href="#01" style=" border-radius: 10px; background-color: #f1f1f1; border: none; color: #37509b; text-align: center; text-decoration: none; display: inline-block; padding: 4px 4px; font-size: 14px; ">↻</a></h2> ``` import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt from tqdm import tqdm 
from time import time,sleep import nltk from nltk import tokenize from string import punctuation from nltk.stem import PorterStemmer, SnowballStemmer, LancasterStemmer from unidecode import unidecode from sklearn.dummy import DummyClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import GradientBoostingClassifier from sklearn.feature_extraction.text import CountVectorizer from sklearn.linear_model import LogisticRegression from sklearn.metrics import accuracy_score,f1_score from sklearn.model_selection import train_test_split from sklearn.model_selection import cross_validate,KFold,GridSearchCV from sklearn.model_selection import RandomizedSearchCV from sklearn.naive_bayes import GaussianNB from sklearn.neighbors import KNeighborsClassifier from sklearn.neural_network import MLPClassifier from sklearn.preprocessing import OrdinalEncoder,OneHotEncoder, LabelEncoder from sklearn.preprocessing import StandardScaler,Normalizer from sklearn.svm import SVC from sklearn.tree import DecisionTreeClassifier from scipy.stats import randint from numpy.random import uniform ``` <a id="0103"></a> <h2>1.3 Settings <a href="#01" style=" border-radius: 10px; background-color: #f1f1f1; border: none; color: #37509b; text-align: center; text-decoration: none; display: inline-block; padding: 4px 4px; font-size: 14px; ">↻</a></h2> ``` # pandas options pd.options.display.max_columns = 30 pd.options.display.float_format = '{:.2f}'.format # seaborn options sns.set(style="darkgrid") import warnings warnings.filterwarnings("ignore") SEED = 42 ``` <a id="0104"></a> <h2>1.4 Useful Functions <a href="#01" style=" border-radius: 10px; background-color: #f1f1f1; border: none; color: #37509b; text-align: center; text-decoration: none; display: inline-block; padding: 4px 4px; font-size: 14px; ">↻</a></h2> ``` def treat_words(df, col, language='english', inplace=False, tokenizer = tokenize.WordPunctTokenizer(), decode = True, stemmer = None, lower = True, 
remove_words = [], ): """ Description: ---------------- Receives a dataframe and the column name. Eliminates stopwords for each row of that column and apply stemmer. After that, it regroups and returns a list. tokenizer = tokenize.WordPunctTokenizer() tokenize.WhitespaceTokenizer() stemmer = PorterStemmer() SnowballStemmer() LancasterStemmer() nltk.RSLPStemmer() # in portuguese """ pnct = [string for string in punctuation] # from string import punctuation wrds = nltk.corpus.stopwords.words(language) unwanted_words = pnct + wrds + remove_words processed_text = list() for element in tqdm(df[col]): # starts a new list new_text = list() # starts a list with the words of the non precessed text text_old = tokenizer.tokenize(element) # check each word for wrd in text_old: # if the word are not in the unwanted words list # add to the new list if wrd.lower() not in unwanted_words: new_wrd = wrd if decode: new_wrd = unidecode(new_wrd) if stemmer: new_wrd = stemmer.stem(new_wrd) if lower: new_wrd = new_wrd.lower() if new_wrd not in remove_words: new_text.append(new_wrd) processed_text.append(' '.join(new_text)) if inplace: df[col] = processed_text else: return processed_text def list_words_of_class(df, col, language='english', inplace=False, tokenizer = tokenize.WordPunctTokenizer(), decode = True, stemmer = None, lower = True, remove_words = [] ): """ Description: ---------------- Receives a dataframe and the column name. Eliminates stopwords for each row of that column, apply stemmer and returns a list of all the words. 
""" lista = treat_words( df,col = col,language = language, tokenizer=tokenizer,decode=decode, stemmer=stemmer,lower=lower, remove_words = remove_words ) words_list = [] for string in lista: words_list += tokenizer.tokenize(string) return words_list def get_frequency(df, col, language='english', inplace=False, tokenizer = tokenize.WordPunctTokenizer(), decode = True, stemmer = None, lower = True, remove_words = [] ): list_of_words = list_words_of_class( df, col = col, decode = decode, stemmer = stemmer, lower = lower, remove_words = remove_words ) freq = nltk.FreqDist(list_of_words) df_freq = pd.DataFrame({ 'word': list(freq.keys()), 'frequency': list(freq.values()) }).sort_values(by='frequency',ascending=False) n_words = df_freq['frequency'].sum() df_freq['prop'] = 100*df_freq['frequency']/n_words return df_freq def common_best_words(df,col,n_common = 10,tol_frac = 0.8,n_jobs = 1): list_to_remove = [] for i in range(0,n_jobs): print('[info] Most common words in not survived') sleep(0.5) df_dead = get_frequency( df.query('Survived == 0'), col = col, decode = False, stemmer = False, lower = False, remove_words = list_to_remove ) print('[info] Most common words in survived') sleep(0.5) df_surv = get_frequency( df.query('Survived == 1'), col = col, decode = False, stemmer = False, lower = False, remove_words = list_to_remove ) words_dead = df_dead.nlargest(n_common, 'frequency') list_dead = list(words_dead['word'].values) words_surv = df_surv.nlargest(n_common, 'frequency') list_surv = list(words_surv['word'].values) for word in list(set(list_dead).intersection(list_surv)): prop_dead = words_dead[words_dead['word'] == word]['prop'].values[0] prop_surv = words_surv[words_surv['word'] == word]['prop'].values[0] ratio = min([prop_dead,prop_surv])/max([prop_dead,prop_surv]) if ratio > tol_frac: list_to_remove.append(word) return list_to_remove def just_keep_the_words(df, col, keep_words = [], tokenizer = tokenize.WordPunctTokenizer() ): """ Description: ---------------- 
Removes all words that is not in `keep_words` """ processed_text = list() # para cada avaliação for element in tqdm(df[col]): # starts a new list new_text = list() # starts a list with the words of the non precessed text text_old = tokenizer.tokenize(element) for wrd in text_old: if wrd in keep_words: new_text.append(wrd) processed_text.append(' '.join(new_text)) return processed_text class Classifier: ''' Description ----------------- Class to approach classification algorithm Example ----------------- classifier = Classifier( algorithm = ChooseTheAlgorith, hyperparameters_range = { 'hyperparameter_1': [1,2,3], 'hyperparameter_2': [4,5,6], 'hyperparameter_3': [7,8,9] } ) # Looking for best model classifier.grid_search_fit(X,y,n_splits=10) #dt.grid_search_results.head(3) # Prediction Form 1 par = classifier.best_model_params dt.fit(X_trn,y_trn,params = par) y_pred = classifier.predict(X_tst) print(accuracy_score(y_tst, y_pred)) # Prediction Form 2 classifier.fit(X_trn,y_trn,params = 'best_model') y_pred = classifier.predict(X_tst) print(accuracy_score(y_tst, y_pred)) # Prediction Form 3 classifier.fit(X_trn,y_trn,min_samples_split = 5,max_depth=4) y_pred = classifier.predict(X_tst) print(accuracy_score(y_tst, y_pred)) ''' def __init__(self,algorithm, hyperparameters_range={},random_state=42): self.algorithm = algorithm self.hyperparameters_range = hyperparameters_range self.random_state = random_state self.grid_search_cv = None self.grid_search_results = None self.hyperparameters = self.__get_hyperparameters() self.best_model = None self.best_model_params = None self.fitted_model = None def grid_search_fit(self,X,y,verbose=0,n_splits=10,shuffle=True,scoring='accuracy'): self.grid_search_cv = GridSearchCV( self.algorithm(), self.hyperparameters_range, cv = KFold(n_splits = n_splits, shuffle=shuffle, random_state=self.random_state), scoring=scoring, verbose=verbose ) self.grid_search_cv.fit(X, y) col = list(map(lambda par: 'param_'+str(par),self.hyperparameters))+[ 
'mean_fit_time', 'mean_test_score', 'std_test_score', 'params' ] results = pd.DataFrame(self.grid_search_cv.cv_results_) self.grid_search_results = results[col].sort_values( ['mean_test_score','mean_fit_time'], ascending=[False,True] ).reset_index(drop=True) self.best_model = self.grid_search_cv.best_estimator_ self.best_model_params = self.best_model.get_params() def best_model_cv_score(self,X,y,parameter='test_score',verbose=0,n_splits=10,shuffle=True,scoring='accuracy'): if self.best_model != None: cv_results = cross_validate( self.best_model, X = X, y = y, cv=KFold(n_splits = n_splits, shuffle=shuffle, random_state=self.random_state) ) return { parameter+'_mean': cv_results[parameter].mean(), parameter+'_std': cv_results[parameter].std() } def fit(self,X,y,params=None,**kwargs): model = None if len(kwargs) == 0 and params == 'best_model' and self.best_model != None: model = self.best_model elif type(params) == dict and len(params) > 0: model = self.algorithm(**params) elif len(kwargs) >= 0 and params==None: model = self.algorithm(**kwargs) else: print('[Error]') if model != None: model.fit(X,y) self.fitted_model = model def predict(self,X): if self.fitted_model != None: return self.fitted_model.predict(X) else: print('[Error]') return np.array([]) def predict_score(self,X_tst,y_tst,score=accuracy_score): if self.fitted_model != None: y_pred = self.predict(X_tst) return score(y_tst, y_pred) else: print('[Error]') return np.array([]) def hyperparameter_info(self,hyperpar): str_ = 'param_'+hyperpar return self.grid_search_results[ [str_,'mean_fit_time','mean_test_score'] ].groupby(str_).agg(['mean','std']) def __get_hyperparameters(self): return [hp for hp in self.hyperparameters_range] def cont_class_limits(lis_df,n_class): # equal-width class limits over the data range, anchored at the minimum value v_min = lis_df.quantile(0.0) ampl = lis_df.quantile(1.0)-lis_df.quantile(0.0) ampl_class = ampl/n_class limits = [[v_min+i*ampl_class,v_min+(i+1)*ampl_class] for i in range(n_class)] return limits def cont_classification(lis_df,limits): list_res = [] n_class = len(limits) for elem in lis_df:
for ind in range(n_class-1): if elem >= limits[ind][0] and elem < limits[ind][1]: list_res.append(ind+1) if elem >= limits[-1][0]: list_res.append(n_class) return list_res ``` <a id="02" style=" background-color: #37509b; border: none; color: white; padding: 2px 10px; text-align: center; text-decoration: none; display: inline-block; font-size: 10px;" href="#toc">TOC ↻</a> <div style="margin-top: 9px; background-color: #efefef; padding-top:10px; padding-bottom:10px;margin-bottom: 9px;box-shadow: 5px 5px 5px 0px rgba(87, 87, 87, 0.2);"> <center> <h1>2. Dataset: Cleaning and Exploration</h1> </center> <ol type="i"> <!-- <li><a href="#0101" style="color: #37509b;">Inicialização</a></li> <li><a href="#0102" style="color: #37509b;">Pacotes</a></li> <li><a href="#0103" style="color: #37509b;">Funcoes</a></li> <li><a href="#0104" style="color: #37509b;">Dados de Indicadores Sociais</a></li> <li><a href="#0105" style="color: #37509b;">Dados de COVID-19</a></li> --> </ol> </div> <a id="0101"></a> <h2>2.1 Import Dataset <a href="#02" style=" border-radius: 10px; background-color: #f1f1f1; border: none; color: #37509b; text-align: center; text-decoration: none; display: inline-block; padding: 4px 4px; font-size: 14px; ">↻</a></h2> ``` df_trn = pd.read_csv('data/train.csv') df_tst = pd.read_csv('data/test.csv') df = pd.concat([df_trn,df_tst]) df_trn = df_trn.drop(columns=['PassengerId']) df_tst = df_tst.drop(columns=['PassengerId']) df_tst.info() ``` ## Pclass Investigating if the class is related to the probability of survival ``` sns.barplot(x='Pclass', y="Survived", data=df_trn) ``` ## Name ``` treat_words(df_trn,col = 'Name',inplace=True) treat_words(df_tst,col = 'Name',inplace=True) %matplotlib inline from wordcloud import WordCloud import matplotlib.pyplot as plt all_words = ' '.join(list(df_trn['Name'])) word_cloud = WordCloud().generate(all_words) plt.figure(figsize=(10,7)) plt.imshow(word_cloud, interpolation='bilinear') plt.axis("off") plt.show() 
common_best_words(df_trn,col='Name',n_common = 10,tol_frac = 0.5,n_jobs = 1) ``` We can see that Master and William are words with equivalent proportion between both survived and not survived cases. So, they are not good descriptive words ``` df_comm = get_frequency(df_trn,col = 'Name',remove_words=['("','")','master', 'william']).reset_index(drop=True) surv_prob = [ df_trn['Survived'][df_trn['Name'].str.contains(row['word'])].mean() for index, row in df_comm.iterrows()] df_comm['survival_prob (%)'] = 100*np.array(surv_prob) print('Survival Frequency related to words in Name') df_comm.head(10) df_comm_surv = get_frequency(df_trn[df_trn['Survived']==1],col = 'Name',remove_words=['("','")']).reset_index(drop=True) sleep(0.5) print('Most frequent words within those who survived') df_comm_surv.head(10) df_comm_dead = get_frequency(df_trn[df_trn['Survived']==0],col = 'Name',remove_words=['("','")']).reset_index(drop=True) sleep(0.5) print("Most frequent words within those that did not survive") df_comm_dead.head(10) ``` ### Feature Engineering ``` min_occurrences = 2 df_comm = get_frequency(df,col = 'Name', remove_words=['("','")','john','henry', 'william','h','j','jr'] ).reset_index(drop=True) words_to_keep = list(df_comm[df_comm['frequency'] > min_occurrences]['word']) df_trn['Name'] = just_keep_the_words(df_trn, col = 'Name', keep_words = words_to_keep ) df_tst['Name'] = just_keep_the_words(df_tst, col = 'Name', keep_words = words_to_keep ) vectorize = CountVectorizer(lowercase=True,max_features = 4) vectorize.fit(df_trn['Name']) bag_of_words = vectorize.transform(df_trn['Name']) X = pd.DataFrame(vectorize.fit_transform(df_trn['Name']).toarray(), columns=list(map(lambda word: 'Name_'+word,vectorize.get_feature_names())) ) y = df_trn['Survived'] from sklearn.model_selection import train_test_split X_trn,X_tst,y_trn,y_tst = train_test_split( X, y, test_size = 0.25, random_state=42 ) from sklearn.linear_model import LogisticRegression classifier = 
LogisticRegression(C=100) classifier.fit(X_trn,y_trn) accuracy = classifier.score(X_tst,y_tst) print('Accuracy = %.3f%%' % (100*accuracy)) df_trn = pd.concat([ df_trn , pd.DataFrame(vectorize.fit_transform(df_trn['Name']).toarray(), columns=list(map(lambda word: 'Name_'+word,vectorize.get_feature_names())) ) ],axis=1).drop(columns=['Name']) df_tst = pd.concat([ df_tst , pd.DataFrame(vectorize.fit_transform(df_tst['Name']).toarray(), columns=list(map(lambda word: 'Name_'+word,vectorize.get_feature_names())) ) ],axis=1).drop(columns=['Name']) ``` ## Sex ``` from sklearn.preprocessing import LabelEncoder Sex_Encoder = LabelEncoder() df_trn['Sex'] = Sex_Encoder.fit_transform(df_trn['Sex']).astype(int) df_tst['Sex'] = Sex_Encoder.transform(df_tst['Sex']).astype(int) ``` ## Age ``` mean_age = df['Age'][df['Age'].notna()].mean() df_trn['Age'].fillna(mean_age,inplace=True) df_tst['Age'].fillna(mean_age,inplace=True) age_limits = cont_class_limits(df['Age'],3) df_trn['Age'] = cont_classification(df_trn['Age'],age_limits) df_tst['Age'] = cont_classification(df_tst['Age'],age_limits) ``` ## Family Size ``` # df_trn['FamilySize'] = df_trn['SibSp'] + df_trn['Parch'] + 1 # df_tst['FamilySize'] = df_tst['SibSp'] + df_tst['Parch'] + 1 # df_trn = df_trn.drop(columns = ['SibSp','Parch']) # df_tst = df_tst.drop(columns = ['SibSp','Parch']) ``` ## Cabin Feature There is very little data about the cabin ``` # df_trn['Cabin'] = df_trn['Cabin'].fillna('N000') # df_cab = df_trn[df_trn['Cabin'].notna()] # df_cab = pd.concat( # [ # df_cab, # df_cab['Cabin'].str.extract( # '([A-Za-z]+)(\d+\.?\d*)([A-Za-z]*)', # expand = True).drop(columns=[2]).rename( # columns={0: 'Cabin_Class', 1: 'Cabin_Number'} # ) # ], axis=1) # df_trn = df_cab.drop(columns=['Cabin','Cabin_Number']) # df_trn = pd.concat([ # df_trn.drop(columns=['Cabin_Class']), # pd.get_dummies(df_trn['Cabin_Class'],prefix='Cabin').drop(columns=['Cabin_N']) # # pd.get_dummies(df_trn['Cabin_Class'],prefix='Cabin') # ],axis=1) # 
df_tst['Cabin'] = df_tst['Cabin'].fillna('N000') # df_cab = df_tst[df_tst['Cabin'].notna()] # df_cab = pd.concat( # [ # df_cab, # df_cab['Cabin'].str.extract( # '([A-Za-z]+)(\d+\.?\d*)([A-Za-z]*)', # expand = True).drop(columns=[2]).rename( # columns={0: 'Cabin_Class', 1: 'Cabin_Number'} # ) # ], axis=1) # df_tst = df_cab.drop(columns=['Cabin','Cabin_Number']) # df_tst = pd.concat([ # df_tst.drop(columns=['Cabin_Class']), # pd.get_dummies(df_tst['Cabin_Class'],prefix='Cabin').drop(columns=['Cabin_N']) # # pd.get_dummies(df_tst['Cabin_Class'],prefix='Cabin') # ],axis=1) ``` ## Ticket ``` df_trn = df_trn.drop(columns=['Ticket']) df_tst = df_tst.drop(columns=['Ticket']) ``` ## Fare ``` mean_fare = df['Fare'][df['Fare'].notna()].mean() df_trn['Fare'].fillna(mean_fare,inplace=True) df_tst['Fare'].fillna(mean_fare,inplace=True) fare_limits = cont_class_limits(df['Fare'],3) df_trn['Fare'] = cont_classification(df_trn['Fare'],fare_limits) df_tst['Fare'] = cont_classification(df_tst['Fare'],fare_limits) ``` ## Embarked ``` most_frequent_emb = df['Embarked'].value_counts()[:1].index.tolist()[0] df_trn['Embarked'] = df_trn['Embarked'].fillna(most_frequent_emb) df_tst['Embarked'] = df_tst['Embarked'].fillna(most_frequent_emb) df_trn = pd.concat([ df_trn.drop(columns=['Embarked']), pd.get_dummies(df_trn['Embarked'],prefix='Emb').drop(columns=['Emb_C']) # pd.get_dummies(df_trn['Embarked'],prefix='Emb') ],axis=1) df_tst = pd.concat([ df_tst.drop(columns=['Embarked']), pd.get_dummies(df_tst['Embarked'],prefix='Emb').drop(columns=['Emb_C']) # pd.get_dummies(df_tst['Embarked'],prefix='Emb') ],axis=1) ``` <a id="03" style=" background-color: #37509b; border: none; color: white; padding: 2px 10px; text-align: center; text-decoration: none; display: inline-block; font-size: 10px;" href="#toc">TOC ↻</a> <div style="margin-top: 9px; background-color: #efefef; padding-top:10px; padding-bottom:10px;margin-bottom: 9px;box-shadow: 5px 5px 5px 0px rgba(87, 87, 87, 0.2);"> <center> <h1>3. 
Modelling</h1> </center> <ol type="i"> <!-- <li><a href="#0101" style="color: #37509b;">Inicialização</a></li> <li><a href="#0102" style="color: #37509b;">Pacotes</a></li> <li><a href="#0103" style="color: #37509b;">Funcoes</a></li> <li><a href="#0104" style="color: #37509b;">Dados de Indicadores Sociais</a></li> <li><a href="#0105" style="color: #37509b;">Dados de COVID-19</a></li> --> </ol> </div> ### Classification Approach ``` Model_Scores = {} Model_Scores = {} def print_model_scores(): return pd.DataFrame([[ model, Model_Scores[model]['test_accuracy_score'], Model_Scores[model]['cv_score_mean'], Model_Scores[model]['cv_score_std'] ] for model in Model_Scores.keys()], columns=['model','test_accuracy_score','cv_score','cv_score_std'] ).sort_values(by='cv_score',ascending=False) def OptimizeClassification(X,y, model, hyperparametric_space, cv = KFold(n_splits = 10, shuffle=True,random_state=SEED), model_description = 'classifier', n_iter = 20, test_size = 0.25 ): X_trn,X_tst,y_trn,y_tst = train_test_split( X, y, test_size = test_size, random_state=SEED ) start = time() # Searching the best setting print('[info] Searching for the best hyperparameter') search_cv = RandomizedSearchCV( model, hyperparametric_space, n_iter = n_iter, cv = cv, random_state = SEED) search_cv.fit(X, y) results = pd.DataFrame(search_cv.cv_results_) print('[info] Search Timing: %.2f seconds'%(time() - start)) # Evaluating Test Score For Best Estimator start = time() print('[info] Test Accuracy Score') gb = search_cv.best_estimator_ gb.fit(X_trn, y_trn) y_pred = gb.predict(X_tst) # Evaluating K Folded Cross Validation print('[info] KFolded Cross Validation') cv_results = cross_validate(search_cv.best_estimator_,X,y, cv = cv ) print('[info] Cross Validation Timing: %.2f seconds'%(time() - start)) Model_Scores[model_description] = { 'test_accuracy_score':gb.score(X_tst,y_tst), 'cv_score_mean':cv_results['test_score'].mean(), 'cv_score_std':cv_results['test_score'].std(), 
'best_params':search_cv.best_estimator_.get_params() } pd.options.display.float_format = '{:,.5f}'.format print('\t\t test_accuracy_score: {:.3f}'.format(gb.score(X_tst,y_tst))) print('\t\t cv_score: {:.3f}±{:.3f}'.format( cv_results['test_score'].mean(),cv_results['test_score'].std())) params_list = ['mean_test_score']+list(map(lambda var: 'param_'+var,search_cv.best_params_.keys()))+['mean_fit_time'] return results[params_list].sort_values( ['mean_test_score','mean_fit_time'], ascending=[False,True] ) ``` ## Scaling DataSet ``` scaler = StandardScaler() # caler = Normalizer() scaler.fit(df_trn.drop(columns=['Survived','Cabin'])) X = scaler.transform(df_trn.drop(columns=['Survived','Cabin'])) y = df_trn['Survived'] ``` ## Logistic Regression ``` results = OptimizeClassification(X,y, model = LogisticRegression(random_state=SEED), hyperparametric_space = { 'solver' : ['newton-cg', 'lbfgs', 'liblinear'],# 'C' : uniform(0.075,0.125,200) #10**uniform(-2,2,200) }, cv = KFold(n_splits = 50, shuffle=True,random_state=SEED), model_description = 'LogisticRegression', n_iter = 20 ) results.head(5) ``` ## Support Vector Classifier ``` results = OptimizeClassification(X,y, model = SVC(random_state=SEED), hyperparametric_space = { 'kernel' : ['linear', 'poly','rbf','sigmoid'], 'C' : 10**uniform(-1,1,200), 'decision_function_shape' : ['ovo', 'ovr'], 'degree' : [1,2,3,4] }, cv = KFold(n_splits = 50, shuffle=True,random_state=SEED), model_description = 'SVC', n_iter = 20 ) results.head(5) ``` ## Decision Tree Classifier ``` results = OptimizeClassification(X,y, model = DecisionTreeClassifier(), hyperparametric_space = { 'min_samples_split': randint(10,30), 'max_depth': randint(10,30), 'min_samples_leaf': randint(1,10) }, cv = KFold(n_splits = 50, shuffle=True,random_state=SEED), model_description = 'DecisionTree', n_iter = 100 ) results.head(5) print_model_scores() ``` ## Random Forest Classifier ``` results = OptimizeClassification(X,y, model = RandomForestClassifier(random_state 
= SEED,oob_score=True), hyperparametric_space = { 'n_estimators': randint(190,250), 'min_samples_split': randint(10,15), 'min_samples_leaf': randint(1,6) # 'max_depth': randint(1,100), # , # 'min_weight_fraction_leaf': uniform(0,1,100), # 'max_features': uniform(0,1,100), # 'max_leaf_nodes': randint(10,100), }, cv = KFold(n_splits = 20, shuffle=True,random_state=SEED), model_description = 'RandomForestClassifier', n_iter = 20 ) results.head(5) print_model_scores() ``` ## Gradient Boosting Classifier ``` results = OptimizeClassification(X,y, model = GradientBoostingClassifier(), hyperparametric_space = { 'loss': ['exponential'], #'deviance', 'min_samples_split': randint(130,170), 'max_depth': randint(6,15), 'learning_rate': uniform(0.05,0.15,100), 'random_state' : randint(0,10), 'tol': 10**uniform(-5,-3,100) }, cv = KFold(n_splits = 20, shuffle=True,random_state=SEED), model_description = 'GradientBoostingClassifier', n_iter = 20 ) results.head(5) ``` ## Multi Layer Perceptron Classifier ``` def random_layer(max_depth=4,max_layer=100): res = list() depth = np.random.randint(1,1+max_depth) for i in range(1,depth+1): res.append(np.random.randint(2,max_layer)) return tuple(res) results = OptimizeClassification(X,y, model = MLPClassifier(random_state=SEED), hyperparametric_space = { 'hidden_layer_sizes': [random_layer(max_depth=4,max_layer=40) for i in range(10)], 'solver' : ['lbfgs', 'sgd', 'adam'], 'learning_rate': ['adaptive'], 'activation' : ['identity', 'logistic', 'tanh', 'relu'] }, cv = KFold(n_splits = 20, shuffle=True,random_state=SEED), model_description = 'MLPClassifier', n_iter = 20 ) results.head(5) ``` # Best Model **Best Model (until now)** ``` GradientBoostingClassifier(ccp_alpha=0.0, criterion='friedman_mse', init=None, learning_rate=0.10165006218060142, loss='exponential', max_depth=7, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=134, min_weight_fraction_leaf=0.0, 
n_estimators=100, n_iter_no_change=None, presort='deprecated', random_state=42, subsample=1.0, tol=0.0001, validation_fraction=0.1, verbose=0, warm_start=False) ``` ``` print_model_scores() model = GradientBoostingClassifier(**Model_Scores['GradientBoostingClassifier']['best_params']) X_trn,X_tst,y_trn,y_tst = train_test_split( X, y, test_size = 0.25 ) model.fit(X_trn,y_trn) y_pred = model.predict(X_tst) cv_results = cross_validate(model,X,y, cv = KFold(n_splits = 20, shuffle=True) ) print('test_accuracy_score: {:.3f}'.format(model.score(X_tst,y_tst))) print('cv_score: {:.3f}±{:.3f}'.format( cv_results['test_score'].mean(),cv_results['test_score'].std())) pass_id = pd.read_csv('data/test.csv')['PassengerId'] model = GradientBoostingClassifier(**Model_Scores['GradientBoostingClassifier']['best_params']) model.fit(X,y) X_sub = scaler.transform(df_tst.drop(columns=['Cabin'])) y_pred = model.predict(X_sub) sub = pd.Series(y_pred,index=pass_id,name='Survived') sub.to_csv('gbc_model_2.csv',header=True) y_pred model ```
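The final cells above are tied to the engineered Titanic features and the tuned `best_params`. As a minimal, self-contained sketch of the same pattern — hold-out split, fit, then a k-folded cross-validation estimate — here is the workflow on synthetic data (the model uses sklearn defaults here, not the tuned parameters from the notebook):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import KFold, cross_validate, train_test_split

# Synthetic stand-in for the engineered feature matrix
X, y = make_classification(n_samples=400, n_features=8, random_state=42)

X_trn, X_tst, y_trn, y_tst = train_test_split(X, y, test_size=0.25, random_state=42)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_trn, y_trn)

# Hold-out accuracy plus a k-folded cross-validation estimate, as in the cells above
cv_results = cross_validate(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=42))
print('test_accuracy_score: {:.3f}'.format(model.score(X_tst, y_tst)))
print('cv_score: {:.3f}±{:.3f}'.format(cv_results['test_score'].mean(), cv_results['test_score'].std()))
```

Reporting the cross-validated mean and standard deviation alongside the single hold-out score guards against an optimistic split.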
# Example InfluxDB Jupyter notebook - stream data This example demonstrates how to query data from InfluxDB 2.0 using Flux and display results in real time. Prerequisites: 1. Start InfluxDB: `./scripts/influxdb-restart.sh` 2. Start Telegraf: `telegraf -config ./notebooks/telegraf.conf` 3. Install the following dependencies: `rx`, `pandas`, `streamz`, `hvplot` ``` # Import a Client import os import sys sys.path.insert(0, os.path.abspath('../')) from datetime import timedelta from typing import List import hvplot.streamz import pandas as pd import rx from rx import operators as ops from streamz.dataframe import Random, DataFrame from streamz import Stream from influxdb_client import InfluxDBClient def source_data(auto_refresh: int, query: str, sink: Stream): rx \ .interval(period=timedelta(seconds=auto_refresh)) \ .pipe(ops.map(lambda start: f'from(bucket: "my-bucket") ' f'|> range(start: -{auto_refresh}s, stop: now()) ' f'{query}')) \ .pipe(ops.map(lambda query: client.query_api().query_data_frame(query, data_frame_index=['_time']))) \ .pipe(ops.map(lambda data_frame: data_frame.drop(columns=['result', 'table']))) \ .subscribe(observer=lambda data_frame: sink.emit(data_frame), on_error=lambda error: print(error)) pass client = InfluxDBClient(url='http://localhost:8086', token='my-token', org='my-org') cpu_query = '|> filter(fn: (r) => r._measurement == "cpu") ' \ '|> filter(fn: (r) => r._field == "usage_user") ' \ '|> filter(fn: (r) => r.cpu == "cpu-total") ' \ '|> keep(columns: ["_time", "_value"])' cpu_sink = Stream() cpu_example = pd.DataFrame({'_value': []}, columns=['_value']) cpu_df = DataFrame(cpu_sink, example=cpu_example) source_data(auto_refresh=5, sink=cpu_sink, query=cpu_query) mem_query = '|> filter(fn: (r) => r._measurement == "mem") ' \ '|> filter(fn: (r) => r._field == "available" or r._field == "free" or r._field == "total" or r._field == "used") ' \ '|> map(fn: (r) => ({ r with _value: r._value / 1024 / 1024 }))' \ '|> pivot(rowKey:["_time"], 
columnKey: ["_field"], valueColumn: "_value")' \ '|> keep(columns: ["_time", "used", "total", "free", "available"])' mem_sink = Stream() mem_example = pd.DataFrame({'used': [], 'total': [], 'free': [], 'available': []}, columns=['available', 'free', 'total', 'used']) mem_df = DataFrame(mem_sink, example=mem_example) source_data(auto_refresh=5, sink=mem_sink, query=mem_query) from bokeh.models.formatters import DatetimeTickFormatter # Time formatter formatter = DatetimeTickFormatter( microseconds = ["%H:%M:%S"], milliseconds = ["%H:%M:%S"], seconds = ["%H:%M:%S"], minsec = ["%H:%M:%S"], minutes = ["%H:%M:%S"], hourmin = ["%H:%M:%S"], hours=["%H:%M:%S"], days=["%H:%M:%S"], months=["%H:%M:%S"], years=["%H:%M:%S"], ) cpu_df.hvplot(width=450, backlog=50, title='CPU % usage', xlabel='Time', ylabel='%', xformatter=formatter) +\ mem_df.hvplot.line(width=450, backlog=50, title='Memory', xlabel='Time', ylabel='MiB', xformatter=formatter, legend='top_left') ```
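The Flux query passed to `query_data_frame` above is assembled by plain string concatenation inside `source_data`. That step can be sketched standalone, without the InfluxDB client or the Rx pipeline (`build_flux_query` is an illustrative helper, not part of the `influxdb_client` API):

```python
def build_flux_query(bucket: str, auto_refresh: int, query: str) -> str:
    """Prepend the bucket and a rolling time range to a Flux filter pipeline."""
    return (f'from(bucket: "{bucket}") '
            f'|> range(start: -{auto_refresh}s, stop: now()) '
            f'{query}')

cpu_query = '|> filter(fn: (r) => r._measurement == "cpu") ' \
            '|> keep(columns: ["_time", "_value"])'

flux = build_flux_query('my-bucket', 5, cpu_query)
print(flux)
```

Each refresh tick re-queries only the last `auto_refresh` seconds, so every emitted data frame contains just the new points since the previous tick.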
# Eel imports Now let's take a look at a cut of data on eel product imports. The data come from [a foreign trade database maintained by NOAA](https://www.st.nmfs.noaa.gov/commercial-fisheries/foreign-trade/). The CSV file lives here: `../data/eels.csv`. We'll start by importing pandas and creating a data frame. ``` import pandas as pd df = pd.read_csv('../data/eels.csv') df.head() ``` ### Check out the values Now let's poke through the values in each column to see what we're working with using a combination of `unique()`, `min()` and `max()`. Questions we're trying to answer here: What years and months are represented? What countries are in the data? Do the numeric data make sense? Are there any obvious errors or typos to handle? Are there any holes in our data (use `info()`)? ``` df.info() df.year.unique() df.month.unique() df.country.unique() # have to use bracket notation bc "product" is a pandas function df['product'].unique() ``` **Question:** What does "ATC" stand for? _Always ask, never assume._ ![atc](../img/eel-q.png "atc") ``` print(df.kilos.max()) print(df.kilos.min()) print(df.dollars.max()) print(df.dollars.min()) ``` ### Time-series data: Check for completeness Each row in our data is one month's worth of shipments of a particular eel product from a particular country to the U.S. Which means we might want to do some time-based comparisons, which means we need to check that we're dealing with complete years. So first let's think about what we want to see: For each year that's present in our data, we want a unique list of months for those records. If we were in Excel, we might pivot to group the data by `year` and then throw `month` in the "columns" section to see what months are present for each year. 
We could also do a pivot table in pandas (stay tuned!), but here we're going to do something similar: - Select just the columns we're interested in - Use the pandas `groupby()` method to group the records by year ([see this notebook for reference](../reference/Grouping%20data%20in%20pandas.ipynb)) - For each set of grouped data, use the pandas `unique()` method on the month column to see what months are present When we call `groupby()` on a data frame, it returns a collection of items; each item in that collection is a Python tuple with two elements: the _grouping_ value (year, in this case) and a data frame of records that belong to that group (all records where year == that year). For our purposes, that means we can use a _for loop_ to iterate over the results and check each year. 👉 For more details on _for loops_, [see this notebook](../reference/Python%20data%20types%20and%20basic%20syntax.ipynb#for-loops). ``` yearmonth = df[['year', 'month']] for yeargroup in yearmonth.groupby('year'): print(yeargroup[0], yeargroup[1].month.unique()) ``` So now we know that we have incomplete data for 2017 -- _news we can use_ as we start our analysis. ### Come up with a list of questions - In this data, what country ships the most eel products of any type to the U.S.? - Same question but broken out by year. - For each country, what was the percent change in eel shipments of all types from 2010-2016? - What type of product is most popular? - How many kilos of eels were imported from South America in 2016? ### Q: Who ships the most eels to the U.S. (in kilos)? We'll use our good friends `groupby()`, `sum()` and `sort_values()` to find out. ``` df[['country', 'kilos']].groupby('country') \ .sum() \ .sort_values('kilos', ascending=False) ``` ### Q: Who ships the most? (Broken out by year) Now we want to create a table where the rows are countries, the columns are years, and the values are sums for that country, that year.
If we were doing this in an Excel pivot table, we'd just add "year" to the columns section. To do this in pandas, we're ... also going to use a pivot table. (Yes! Pandas has a [pivot table function](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html).) We are going to hand the `pivot_table()` function five arguments: - The data we're pivoting (`df`) - The type of aggregation to apply to the values -- default is `mean` and we do not want that (`aggfunc='sum'`) - The name of the column whose values we're doing math on (`values='kilos'`) - The name of the column we're grouping on (`index='country'`) - The name of the column whose values will become the columns of our table (`columns='year'`) Then we'll use `sort_values()` to sort the pivot table that results by our most recent year of data. ``` pivoted_sums = pd.pivot_table(df, aggfunc='sum', values='kilos', index='country', columns='year') pivoted_sums.sort_values(2017, ascending=False) ``` ### Q: What was the percent change in shipments for each country from 2010-2016? For this question, we'll re-use the pivot table we just made and add a calculated column. First, though, we need to filter the table to include only records where the `2010` and `2016` values are not null, using the [`notnull()` method](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.notnull.html). (Looks like just filtering for "2010 is not null" does the trick.) **If you didn't know about `.notnull()` already, how would you Google to find the answer?** ``` pivoted_sums_notnull = pivoted_sums[pivoted_sums[2010].notnull()] pivoted_sums_notnull ``` Now we can add a column -- `10to16pctchange`. The syntax, and the math -- new value minus old value divided by old value -- are relatively straightforward: `dataframe['new_column'] = (dataframe['new_value'] - dataframe['old_value']) / dataframe['old_value'] * 100` You might get a warning about slices vs. copies. You can ignore that for now. 
``` pivoted_sums_notnull['10to16pctchange'] = (pivoted_sums_notnull[2016] - pivoted_sums_notnull[2010]) / pivoted_sums_notnull[2010] pivoted_sums_notnull.sort_values('10to16pctchange') ``` ### Q: What type of product is most popular (in kilos)? We'll use `groupby()`, `sum()` and `sort_values()` again. ``` pop_products = df[['product', 'kilos']].groupby('product') \ .sum() \ .sort_values('kilos', ascending=False) pop_products ``` ### How many kilos of eels were imported from Central and South America? We'll need to filter the data to get just shipments from Central and South American countries; for this, we'll use a filtering method called [`isin()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isin.html). 👉 For more details on filtering in pandas, [check out this notebook](../reference/Filtering%20columns%20and%20rows%20in%20pandas.ipynb). First, let's filter our main data frame to get just Central and South American countries. Let's take a look at what countries are present in our data, again: ``` df.country.unique() ``` So it looks like we want `PANAMA`, `CHILE` and `COSTA RICA`. The `isin()` method expects a list, so that's what we'll hand it. 👉 For more information on lists, [check out this notebook](../reference/Python%20data%20types%20and%20basic%20syntax.ipynb#Lists). ``` csa_countries = ['PANAMA', 'CHILE', 'COSTA RICA'] csa = df[df.country.isin(csa_countries)] csa.head() ``` Now we can run the `sum()` of the `kilos` column to get our answer: ``` csa.kilos.sum() ```
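Since the steps above depend on `../data/eels.csv`, here is a self-contained sketch of the same `isin()` filter-and-sum pattern on a tiny synthetic frame (the country rows and kilo values below are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    'country': ['PANAMA', 'CHINA', 'CHILE', 'COSTA RICA', 'JAPAN'],
    'kilos':   [100,      5000,    250,     50,           900],
})

csa_countries = ['PANAMA', 'CHILE', 'COSTA RICA']
csa = df[df.country.isin(csa_countries)]  # keep only rows whose country is in the list
total = csa.kilos.sum()
print(total)  # 100 + 250 + 50 -> 400
```

`isin()` returns a boolean Series, so the bracket expression is the same row-filtering idiom used throughout the notebook — the list just replaces a chain of `|` comparisons.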
# Documentation for the creation and usage of the heat pump library (hplib)

This documentation covers the database preparation, validation and usage for simulation. If you're only interested in using hplib for simulation purposes, you should have a look at chapter [4. How to simulate](#how-to-simulate).

1. [Definitions](#definitions)
2. [Database preparation](#database-preparation)
3. [Work with database](#work-with-database)
    1. [Load database](#load-database)
    2. [Load specific model](#load-specific-model)
    3. [Load generic model](#load-generic-model)
4. [How to simulate](#how-to-simulate)
    1. [Simulate one time step](#simulate-one-timestep)
    2. [Simulate a time series](#simulate-a-timeseries)
5. [Example heat pump](#example-heat-pump)
6. [Validation](#validation)
    1. [Air/Water | on/off](#air-water-onoff)
    2. [Brine/Water | on/off](#brine-water-onoff)
    3. [Water/Water | on/off](#water-water-onoff)
    4. [Air/Water | regulated](#air-water-regulated)
        1. [Heating](#air-water-regulated-heating)
        2. [Cooling](#air-water-regulated-cooling)
    5. [Brine/Water | regulated](#brine-water-regulated)
7. [Conclusion](#conclusion)

```
import hplib as hpl
import hplib_database as db
import pandas as pd
import matplotlib.pyplot as plt
import warnings
import numpy
warnings.filterwarnings("ignore")

%%html
<style> table {float:left} </style>
```

## **1. Definitions** <a class="anchor" id="definitions" name="definitions"></a>

**Abbreviations**

| Abbreviation | Meaning |
| :--- | :--- |
| P_th | Thermal output power in W |
| P_el | Electrical input power in W |
| COP | Coefficient of performance |
| EER | Energy Efficiency Ratio |
| T_in | Input temperature in °C at primary side of the heat pump |
| T_out | Output temperature in °C at secondary side of the heat pump |
| T_amb | Ambient temperature in °C |
| P_th_h_ref | Thermal heating output power in W at T_in = -7 °C and T_out = 52 °C |
| P_el_h_ref | Electrical input power (heating) in W at T_in = -7 °C and T_out = 52 °C |
| COP_ref | Coefficient of performance at T_in = -7 °C and T_out = 52 °C |
| P_th_c_ref | Thermal cooling output power in W at T_in = 35 °C and T_out = 7 °C |
| P_el_c_ref | Electrical input power (cooling) in W at T_in = 35 °C and T_out = 7 °C |
| p1-p4 | Fit-Parameters for Fit-Function |

**Group IDs**

| Group ID | Type | Subtype |
| :--- | :--- | :--- |
| 1 | Outdoor Air / Water | Regulated |
| 2 | Brine / Water | Regulated |
| 3 | Water / Water | Regulated |
| 4 | Outdoor Air / Water | On-Off |
| 5 | Brine / Water | On-Off |
| 6 | Water / Water | On-Off |

## **2. Database preparation** <a class="anchor" id="database-preparation" name="database-preparation"></a>

This section is only for documentation regarding the development of the final database. It's not necessary to run this code again.

1. We downloaded all manufacturer data from https://keymark.eu/en/products/heatpumps/certified-products .
2. Then we unzipped the files to the `input` folder and used the bash script `pdf2txt.sh` to convert PDF into TXT.
3. Afterwards we used the following functions to create and extend the heat pump keymark database.
```
# Import keymark data and save to csv database
db.import_heating_data() # -> this creates /output/database_heating.csv
db.import_cooling_data() # -> this creates /output/database_cooling.csv

# Reduce to climate measurement series with average climate, delete redundant entries and save to csv sub-database
db.reduce_heating_data('database_heating.csv','average') # -> this creates /output/database_heating_average.csv
```

**Process heating database**

```
# Normalize electrical and thermal power from the keymark database to values from setpoint T_in = -7 °C and T_out = 52 °C
db.normalize_heating_data('database_heating_average.csv') # -> this creates /output/database_heating_average_normalized.csv

# Identify subtypes like on-off or regulated heat pumps and assign group id depending on its type and subtype
db.identify_subtypes('database_heating_average_normalized.csv') # -> this creates /output/database_heating_average_normalized_subtypes.csv

# Calculate parameters p1-p4 for P_th, P_el and COP with a least-square fit approach
# based on K. Schwamberger: „Modellbildung und Regelung von Gebäudeheizungsanlagen
# mit Wärmepumpen“, VDI Verlag, Düsseldorf, Fortschrittsberichte VDI Reihe 6 Nr. 263, 1991.
db.calculate_heating_parameters('database_heating_average_normalized_subtypes.csv')
# -> this creates /output/hplib_database_heating.csv
# -> this creates /src/hplib_database.csv

# Many heat pump models have redundant entries because of different controller or storage configurations.
# Reduce to unique heat pump models.
db.reduce_to_unique() # -> this overwrites the /output/hplib_database_heating.csv # Calculate the relative error (RE) for each set point of every heat pump db.validation_relative_error_heating() # -> this creates /output/database_heating_average_normalized_subtypes_validation.csv # Calculate the mean absolute percentage error (MAPE) for every heat pump db.validation_mape_heating() # -> this overwrites the /output/hplib_database_heating.csv ``` **Process cooling database** Overall there are not so many unique Keymark heat pumps for cooling (34 models) in comparison to heating (505 models). Out of the 34 models only 4 heat pumps had set points at different outflow temperature. With our fit method it is not possible to fit only over one outflow temperature. For that reason we added another set point at 18°C output temperature based on the heat pumps we had with this condition in Keymark. For that purpose, we identified multiplication factors for eletrical power and eer between 7°C and 18°C secondary output temperature. 
The mean value of that are used to calculate the electrical power and EER at 18°C for other heat pumps: ``` P_el at 18°C = P_el at 7°C * multiplication factor EER at 18°C = EER at 7°C * multiplication factor ``` | Outside Tempertature | Multiplication factor for P_el | Multiplication factor for EER | | :--- | :--- | :--- | | 35 | 0.85 | 1.21 | | 30 | 0.82 | 1.21 | | 25 | 0.77 | 1.20 | | 20 | 0.63 | 0.95 | ``` #Only use heatpumps which are unique and also in the heating library db.reduce_cooling_data() #this generates /output/database_cooling_reduced.csv # Normalize electrical and thermal power from the keymark database to values from setpoint T_outside = 35 °C and T_out = 7 °C db.normalize_and_add_cooling_data() #-> this creates /output/database_cooling_reduced_normalized.csv # Used the same fit method like in heating # except: for P_el the point 20/7 is ignored db.calculate_cooling_parameters() # -> this overwrites /src/hplib_database.csv # Calculate the relative error (RE) for each set point of every heat pump db.validation_relative_error_cooling() # this creates /output/database_cooling_reduced_normalized_subtypes_validation.csv # Calculate the mean absolute percentage error (MAPE) for every heat pump db.validation_mape_cooling() # -> this overwrites the /src/hplib_database.csv ``` **Create generic heat pump models** ``` # Calculate generic heat pump models for each group id # for cooling: there is only a generic heat pump of type air/water | regulated available db.add_generic() # -> this overwrites the /src/hplib_database.csv ``` **Hint:** The csv files in the `output` folder are for documentation and validation purpose. The code `hplib.py` and database `hplib_database` files, which are meant to be used for simulations, are located in the `src` folder. ## **3. 
Work with database** <a class="anchor" id="work-with-database" name="work-with-database"></a>

### **3.1 Load database** <a class="anchor" id="load-database" name="load-database"></a>

Simply execute the command without arguments and you will get a DataFrame with the complete list of manufacturers and models. You can then view, filter or sort the database.

```
database = hpl.load_database()
database
```

### **3.2 Load specific model** <a class="anchor" id="load-specific-model" name="load-specific-model"></a>

To get the parameters of a specific heat pump model, use the `get_parameters()` method with a model name from the database. You will get a DataFrame with all parameters, including the mean absolute percentage errors (MAPE) for this model.

```
parameters = hpl.get_parameters('i-SHWAK V4 06')
parameters
```

### **3.3 Load generic model** <a class="anchor" id="load-generic-model" name="load-generic-model"></a>

To get the parameters of a generic heat pump model, use the `get_parameters()` method with the following keyword arguments of a freely chosen set point:

* model='Generic'
* group_id: 1,2,4,5,6
* t_in: choose primary input temperature in °C
* t_out: choose secondary output temperature in °C
* p_th: choose thermal output power in W

You will get a DataFrame with all parameters for this generic model. For every group id the parameter set is based on the average parameters of all heat pumps of its group with an MAPE of less than 25%.

```
parameters = hpl.get_parameters(model='Generic', group_id=1, t_in=-7, t_out=52, p_th=10000)
parameters
```

## **4. How to simulate** <a class="anchor" id="how-to-simulate" name="how-to-simulate"></a>

With the Fit-Parameters p1-p4 for P_th, P_el and COP it is possible to calculate the results with the following methods:

1. P_th and P_el with Fit-Functions and `COP = P_th / P_el` or
2. P_th and COP with Fit-Functions and `P_el = P_th / COP` or
3. P_el and COP with Fit-Functions and `P_th = P_el * COP`

While the model by Schwamberger [1] uses the first method, our validation showed that the third method leads to better results. Therefore we decided to implement it in the `simulate` definition.

### **4.1 Simulate one timestep** <a class="anchor" id="simulate-one-timestep" name="simulate-one-timestep"></a>

Please define a primary input temperature (t_in_primary), a secondary input temperature (t_in_secondary) and an ambient / outdoor temperature (t_amb) in °C, together with the parameters from the previous step. The t_in_secondary is assumed to be heated up by 5 K, which then gives the output temperature.

Important:
* for air/water heat pumps, t_in_primary and t_amb are supposed to be the same values.
* for brine/water or water/water heat pumps, t_in_primary and t_amb are independent values.

```
# Load parameters
parameters = hpl.get_parameters(model='Generic', group_id=1, t_in=-7, t_out=52, p_th=10000)

# Create heat pump object with parameters
heatpump = hpl.HeatPump(parameters)

# Simulate with values
# whereas mode = 1 is for heating and mode = 2 is for cooling
results = heatpump.simulate(t_in_primary=-7, t_in_secondary=47, t_amb=-7, mode=1)
print(pd.DataFrame([results]))
```

### **4.2 Simulate a timeseries** <a class="anchor" id="simulate-a-timeseries" name="simulate-a-timeseries"></a>

One simulation approach could be a `for` loop around the single-timestep method, but it is faster to call that method with arrays as inputs. You may pass pandas.Series or arrays as input values, and it is also possible to combine single values with pandas.Series / arrays.

*The next example uses a measured timeseries of ambient / outdoor temperature and secondary input temperature from a real heating system to demonstrate the simulation approach. The data represents one year and has a temporal resolution of 1 minute.*

```
# Read input file with temperatures
df = pd.read_csv('../input/TestYear.csv')
df['T_amb'] = df['T_in_primary'] # air/water heat pump -> T_amb = T_in_primary

# Load parameters
parameters = hpl.get_parameters(model='Generic', group_id=1, t_in=-7, t_out=52, p_th=10000)

# Create heat pump object with parameters
heatpump = hpl.HeatPump(parameters)

# Simulate with values
# whereas mode = 1 is for heating and mode = 2 is for cooling
results = heatpump.simulate(t_in_primary=df['T_in_primary'].values,
                            t_in_secondary=df['T_in_secondary'].values,
                            t_amb=df['T_amb'].values, mode=1)

# Plot / print some results
# example: distribution of COP values
results = pd.DataFrame.from_dict(results)
results['COP'].plot.hist(bins=50, title='Distribution of COP values')

# Calculate seasonal performance factor (SPF)
SPF = results['P_th'].mean() / results['P_el'].mean()
print('The seasonal performance factor (SPF) for one year is = ' + str(round(SPF, 1)))
```

## **5.
Example heat pump** <a class="anchor" id="example-heat-pump" name="example-heat-pump"></a>

To get an overview of the different operating conditions, this section plots the electrical and thermal power as well as the COP for all possible primary and secondary input temperatures.

HEATING: **Schematic plot** of COP, P_el and P_th for a **generic air/water** heat pump: subtype = **on/off**

```
# Define temperatures
T_in_primary = range(-20, 31)
T_in_secondary = range(20, 56)
T_in = numpy.array([])
T_out = numpy.array([])

# Load parameters of generic air/water | on/off
parameters = hpl.get_parameters('Generic', group_id=4, t_in=-7, t_out=52, p_th=10000)
heatpump = hpl.HeatPump(parameters)
results = pd.DataFrame()

# Create input series
for t1 in T_in_primary:
    for t2 in T_in_secondary:
        T_in = numpy.append(T_in, t1)
        T_out = numpy.append(T_out, t2)

results = heatpump.simulate(t_in_primary=T_in, t_in_secondary=T_out, t_amb=T_in, mode=1)
results = pd.DataFrame.from_dict(results)

# Plot COP
fig1, ax1 = plt.subplots()
plot = plt.tricontourf(results['T_in'], results['T_out'], results['COP'])
ax1.tricontour(results['T_in'], results['T_out'], results['COP'], colors='k')
fig1.colorbar(plot)
ax1.set_title('COP')
ax1.set_xlabel('Primary input temperature [°C]')
ax1.set_ylabel('Secondary output temperature [°C]')
fig1.show()

# Plot electrical input power
fig1, ax1 = plt.subplots()
plot = plt.tricontourf(results['T_in'], results['T_out'], results['P_el'])
ax1.tricontour(results['T_in'], results['T_out'], results['P_el'], colors='k')
fig1.colorbar(plot)
ax1.set_title('Electrical input power [W]')
ax1.set_xlabel('Primary input temperature [°C]')
ax1.set_ylabel('Secondary output temperature [°C]')
fig1.show()

# Plot thermal output power
fig1, ax1 = plt.subplots()
plot = plt.tricontourf(results['T_in'], results['T_out'], results['P_th'])
ax1.tricontour(results['T_in'], results['T_out'], results['P_th'], colors='k')
fig1.colorbar(plot)
ax1.set_title('Thermal output power [W]')
ax1.set_xlabel('Primary input temperature [°C]')
ax1.set_ylabel('Secondary output temperature [°C]')
fig1.show()
```

COOLING: **Schematic plot** of EER, P_el and P_th for a **generic air/water** heat pump: subtype = **regulated**

```
# Define temperatures
T_in_primary = range(20, 36)
T_in_secondary = range(10, 30)
T_in = numpy.array([])
T_out = numpy.array([])

# Load parameters of generic air/water | regulated
parameters = hpl.get_parameters('Generic', group_id=1, t_in=-7, t_out=52, p_th=10000)
heatpump = hpl.HeatPump(parameters)
results = pd.DataFrame()

# Create input series
for t1 in T_in_primary:
    for t2 in T_in_secondary:
        T_in = numpy.append(T_in, t1)
        T_out = numpy.append(T_out, t2)

results = heatpump.simulate(t_in_primary=T_in, t_in_secondary=T_out, t_amb=T_in, mode=2)
results = pd.DataFrame.from_dict(results)

# Plot EER
fig1, ax1 = plt.subplots()
plot = plt.tricontourf(results['T_in'], results['T_out'], results['EER'])
ax1.tricontour(results['T_in'], results['T_out'], results['EER'], colors='k')
fig1.colorbar(plot)
ax1.set_title('EER')
ax1.set_xlabel('Primary input temperature [°C]')
ax1.set_ylabel('Secondary output temperature [°C]')
fig1.show()

# Plot electrical input power
fig1, ax1 = plt.subplots()
plot = plt.tricontourf(results['T_in'], results['T_out'], results['P_el'])
ax1.tricontour(results['T_in'], results['T_out'], results['P_el'], colors='k')
fig1.colorbar(plot)
ax1.set_title('Electrical input power [W]')
ax1.set_xlabel('Primary input temperature [°C]')
ax1.set_ylabel('Secondary output temperature [°C]')
fig1.show()

# Plot thermal output power
fig1, ax1 = plt.subplots()
plot = plt.tricontourf(results['T_in'], results['T_out'], results['P_th'])
ax1.tricontour(results['T_in'], results['T_out'], results['P_th'], colors='k')
fig1.colorbar(plot)
ax1.set_title('Thermal output power [W]')
ax1.set_xlabel('Primary input temperature [°C]')
ax1.set_ylabel('Secondary output temperature [°C]')
fig1.show()
```

## **6.
Validation** <a class="anchor" id="validation" name="validation"></a> The following plots will give you a detailed view on the differences between simulation and measurement from heat pump keymark. Therefore, all set points for all heat pumps are loaded from the file `database_heating_average_normalized_subtypes_validation.csv`. ``` df = pd.read_csv('../output/database_heating_average_normalized_subtypes_validation.csv') ``` ### **6.1 Air/Water | on/off** <a class="anchor" id="air-water-onoff" name="air-water-onoff"></a> * The mean absolute percentage error (MAPE) over all heat pumps is * 5.4 % for COP * 2.5 % for P_el * 5.8 % for P_th * The errors come from deviation mostly at low ambient / outside temperatures ``` # Plot relative error for all heat pumps of type air/water | on-off Group = 4 data = df.loc[(df['Group']==Group)] data = data[['T_amb [°C]','RE_COP', 'RE_P_el', 'RE_P_th']] ax = data.boxplot(by='T_amb [°C]', layout=(1,3), figsize=(10,5), showfliers=False) ax[0].set_ylim(-60,60) ax[0].set_ylabel('relative error in %') data.abs().mean()[1:4] # mean absolute percentage error (MAPE) # Plot absolute values all heat pumps of type air/water | on-off as scatter plot Group = 4 fig, ax = plt.subplots(1,3) data = df.loc[(df['Group']==Group)] data.plot.scatter(ax=ax[0], x='COP',y='COP_sim', alpha=0.3, figsize=(12,5), grid=True, title='Coefficient of performance') data.plot.scatter(ax=ax[1], x='P_el [W]',y='P_el_sim', alpha=0.3, figsize=(15,5), grid=True, title='Electrical input power [W]') data.plot.scatter(ax=ax[2], x='P_th [W]',y='P_th_sim', alpha=0.3, figsize=(15,5), grid=True, title='Thermal output power [W]') ax[0].set_xlim(0,10) ax[0].set_ylim(0,10) ax[1].set_xlim(0,20000) ax[1].set_ylim(0,20000) ax[2].set_xlim(0,50000) ax[2].set_ylim(0,50000) ``` ### **6.2 Brine/Water | on/off** <a class="anchor" id="brine-water-onoff" name="brine-water-onoff"></a> * The mean absolute percentage error (MAPE) over all heat pumps is * 3.9 % for COP * 1.7 % for P_el * 2.7 % for 
P_th * **Important**: Validation data is only available at primary input temperature of 0 °C. ``` # Plot relative error for all heat pumps of type brine/water | on-off Group = 5 data = df.loc[(df['Group']==Group)] data = data[['T_amb [°C]','RE_COP', 'RE_P_el', 'RE_P_th']] ax = data.boxplot(by='T_amb [°C]', layout=(1,3), figsize=(10,5), showfliers=False) ax[0].set_ylim(-60,60) ax[0].set_ylabel('relative error in %') data.abs().mean()[1:4] # mean absolute percentage error (MAPE) # Plot absolute values all heat pumps of type brine/water | on-off as scatter plot Group = 5 fig, ax = plt.subplots(1,3) data = df.loc[(df['Group']==Group)] data.plot.scatter(ax=ax[0], x='COP',y='COP_sim', alpha=0.3, figsize=(12,5), grid=True, title='Coefficient of performance') data.plot.scatter(ax=ax[1], x='P_el [W]',y='P_el_sim', alpha=0.3, figsize=(15,5), grid=True, title='Electrical input power [W]') data.plot.scatter(ax=ax[2], x='P_th [W]',y='P_th_sim', alpha=0.3, figsize=(15,5), grid=True, title='Thermal output power [W]') ax[0].set_xlim(0,10) ax[0].set_ylim(0,10) ax[1].set_xlim(0,20000) ax[1].set_ylim(0,20000) ax[2].set_xlim(0,50000) ax[2].set_ylim(0,50000) ``` ### **6.3 Water/Water | on/off** <a class="anchor" id="water-water-onoff" name="water-water-onoff"></a> * The mean absolute percentage error (MAPE) over all heat pumps is * 1.6 % for COP * 1.6 % for P_el * 2.4 % for P_th * **Important**: Validation data is only available at primary input temperature of 10 °C. 
``` # Plot relative error for all heat pumps of type water/water | on-off Group = 6 data = df.loc[(df['Group']==Group)] data = data[['T_amb [°C]','RE_COP', 'RE_P_el', 'RE_P_th']] ax = data.boxplot(by='T_amb [°C]', layout=(1,3), figsize=(10,5), showfliers=False) ax[0].set_ylim(-60,60) ax[0].set_ylabel('relative error in %') data.abs().mean()[1:4] # mean absolute percentage error (MAPE) # Plot absolute values all heat pumps of type water/water | on-off as scatter plot Group = 6 fig, ax = plt.subplots(1,3) data = df.loc[(df['Group']==Group)] data.plot.scatter(ax=ax[0], x='COP',y='COP_sim', alpha=0.3, figsize=(12,5), grid=True, title='Coefficient of performance') data.plot.scatter(ax=ax[1], x='P_el [W]',y='P_el_sim', alpha=0.3, figsize=(15,5), grid=True, title='Electrical input power [W]') data.plot.scatter(ax=ax[2], x='P_th [W]',y='P_th_sim', alpha=0.3, figsize=(15,5), grid=True, title='Thermal output power [W]') ax[0].set_xlim(0,10) ax[0].set_ylim(0,10) ax[1].set_xlim(0,20000) ax[1].set_ylim(0,20000) ax[2].set_xlim(0,50000) ax[2].set_ylim(0,50000) ``` ### **6.4 Air/Water | regulated** <a class="anchor" id="air-water-regulated" name="air-water-regulated"></a> **Heating** * The mean absolute percentage error (MAPE) over all heat pumps is * 12.1 % for COP * 19.5 % for P_el * 23.5 % for P_th **Cooling** * The mean absolute percentage error (MAPE) over all heat pumps is * 4.8 % for EER * 16.5 % for P_el * 17.0 % for P_th Because of different control strategies, the deviation over different heat pump models is much higher compared to on/off types. 
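The relation between the relative errors plotted in these sections and the quoted MAPE figures can be sketched in a few lines. This is an illustrative example with made-up numbers (not the Keymark data): the relative error is computed per set point, and the MAPE is the mean of its absolute values.

```python
import numpy as np

# made-up measured and simulated values for a handful of set points
measured = np.array([3.2, 4.1, 2.8, 5.0])
simulated = np.array([3.0, 4.3, 2.9, 4.6])

re = (simulated - measured) / measured * 100  # relative error (RE) per set point, in %
mape = np.abs(re).mean()                      # mean absolute percentage error (MAPE)

print(np.round(re, 1))
print(round(float(mape), 2))
```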
#### **6.4.1 Heating** <a class="anchor" id="air-water-regulated-heating" name="air-water-regulated-heating"></a> ``` # Plot relative error for all heat pumps of type air/water | regulated Group = 1 data = df.loc[(df['Group']==Group)] data = data[['T_amb [°C]','RE_COP', 'RE_P_el', 'RE_P_th']] ax = data.boxplot(by='T_amb [°C]', layout=(1,3), figsize=(10,5), showfliers=False) ax[0].set_ylim(-60,60) ax[0].set_ylabel('relative error in %') data.abs().mean()[1:4] # mean absolute percentage error (MAPE) # Plot absolute values all heat pumps of type air/water | regulated as scatter plot Group = 1 fig, ax = plt.subplots(1,3) data = df.loc[(df['Group']==Group)] data.plot.scatter(ax=ax[0], x='COP',y='COP_sim', alpha=0.3, figsize=(12,5), grid=True, title='Coefficient of performance') data.plot.scatter(ax=ax[1], x='P_el [W]',y='P_el_sim', alpha=0.1, figsize=(15,5), grid=True, title='Electrical input power [W]') data.plot.scatter(ax=ax[2], x='P_th [W]',y='P_th_sim', alpha=0.1, figsize=(15,5), grid=True, title='Thermal output power [W]') ax[0].set_xlim(0,10) ax[0].set_ylim(0,10) ax[1].set_xlim(0,20000) ax[1].set_ylim(0,20000) ax[2].set_xlim(0,50000) ax[2].set_ylim(0,50000) ``` #### **6.4.2 Cooling** <a class="anchor" id="air-water-regulated-cooling" name="air-water-regulated-cooling"></a> ``` data = pd.read_csv('../output/database_cooling_reduced_normalized_validation.csv') data = data[['T_outside [°C]','RE_Pdc', 'RE_P_el', 'RE_EER']] ax = data.boxplot(by='T_outside [°C]', layout=(1,3), figsize=(10,5), showfliers=False) ax[0].set_ylim(-60,60) ax[0].set_ylabel('relative error in %') data.abs().mean()[1:4] # mean absolute percentage error (MAPE) data = pd.read_csv('../output/database_cooling_reduced_normalized_validation.csv') fig, ax = plt.subplots(1,3) data.plot.scatter(ax=ax[0], x='EER',y='EER_sim', alpha=0.3, figsize=(12,5), grid=True, title='Coefficient of performance') data.plot.scatter(ax=ax[1], x='P_el [W]',y='P_el_sim', alpha=0.1, figsize=(15,5), grid=True, 
title='Electrical input power [W]')
data.plot.scatter(ax=ax[2], x='Pdc [W]', y='Pdc_sim', alpha=0.1, figsize=(15,5), grid=True, title='Thermal output power [W]')
ax[0].set_xlim(0,10)
ax[0].set_ylim(0,10)
ax[1].set_xlim(0,6000)
ax[1].set_ylim(0,6000)
ax[2].set_xlim(0,15000)
ax[2].set_ylim(0,15000)
```

### **6.5 Brine/Water | regulated** <a class="anchor" id="brine-water-regulated" name="brine-water-regulated"></a>

* The mean absolute percentage error (MAPE) over all heat pumps is
  * 4.2 % for COP
  * 17.7 % for P_el
  * 19.7 % for P_th
* **Important**: Validation data is only available at a primary input temperature of 0 °C.

```
# Plot relative error for all heat pumps of type brine/water | regulated
Group = 2
data = df.loc[(df['Group']==Group)]
data = data[['T_amb [°C]','RE_COP', 'RE_P_el', 'RE_P_th']]
ax = data.boxplot(by='T_amb [°C]', layout=(1,3), figsize=(10,5), showfliers=False)
ax[0].set_ylim(-60,60)
ax[0].set_ylabel('relative error in %')
data.abs().mean()[1:4] # mean absolute percentage error (MAPE)

# Plot absolute values of all heat pumps of type brine/water | regulated as scatter plot
Group = 2
fig, ax = plt.subplots(1,3)
data = df.loc[(df['Group']==Group)]
data.plot.scatter(ax=ax[0], x='COP', y='COP_sim', alpha=0.3, figsize=(12,5), grid=True, title='Coefficient of performance')
data.plot.scatter(ax=ax[1], x='P_el [W]', y='P_el_sim', alpha=0.1, figsize=(15,5), grid=True, title='Electrical input power [W]')
data.plot.scatter(ax=ax[2], x='P_th [W]', y='P_th_sim', alpha=0.1, figsize=(15,5), grid=True, title='Thermal output power [W]')
ax[0].set_xlim(0,10)
ax[0].set_ylim(0,10)
ax[1].set_xlim(0,20000)
ax[1].set_ylim(0,20000)
ax[2].set_xlim(0,50000)
ax[2].set_ylim(0,50000)
```

## **7. Conclusion** <a class="anchor" id="conclusion" name="conclusion"></a>

- On/off heat pumps can be simulated very well (mean relative error < 6 %).
- Regulated heat pumps show good values for COP and EER, but mean relative errors of about 15-20 % for electrical and thermal power, because of the non-linearity with respect to different primary / secondary temperatures.
- Despite that, the generic heat pumps should work well, because the median shows only a small relative error.
# Day 6 - Voronoi diagram

<figure style="float: right; max-width: 25em; margin: 1em"> <img src="https://upload.wikimedia.org/wikipedia/commons/6/6d/Manhattan_Voronoi_Diagram.svg" alt="Manhattan Voronoi diagram illustration from Wikimedia"/> <figcaption style="font-style: italic; font-size: smaller"> Manhattan Voronoi diagram Balu Ertl [<a href="https://creativecommons.org/licenses/by-sa/1.0">CC BY-SA 1.0</a>],<br/><a href="https://commons.wikimedia.org/wiki/File:Manhattan_Voronoi_Diagram.svg">from Wikimedia Commons</a> </figcaption> </figure>

* [Day 6](https://adventofcode.com/2018/day/6)

Another computational geometry problem! This time we are asked to find the largest area in a [Voronoi diagram](https://en.wikipedia.org/wiki/Voronoi_diagram). The most efficient algorithm to produce the boundaries between the points ($O(n \log_2 n)$) is [Fortune's algorithm](https://en.wikipedia.org/wiki/Fortune%27s_algorithm), which (like [day 3](./Day%2003.ipynb)) is a [sweep line algorithm](https://en.wikipedia.org/wiki/Sweep_line_algorithm) that reduces the problem from 2 dimensions to 1. *But* we don't need boundaries, we need *area*. A simpler method is to just use an $O(kn)$ double loop to find which of the $n$ elements is closest for each of the $k$ `(x, y)` points in the map.

There are three important aspects to remember here:

1. We need to use [Manhattan distance](https://en.wikipedia.org/wiki/Taxicab_geometry), not Euclidean distance, when doing our calculations.
2. If 2 or more coordinates are equidistant from a given `(x, y)` point, that point doesn't count as area for any of the coordinates. This means we can't just use `min()` here, and `sort()` would be needlessly precise. Instead, we only need to know the *top 2 smallest distances*; if these two are equidistant, we know we can't give the area to anyone. To get the smallest N of anything, you'd want to use a [heap queue](https://docs.python.org/3/library/heapq.html#heapq.nsmallest), which gives us the result in $O(n)$ time rather than $O(n \log_2 n)$. Or, for numpy arrays, use the [`numpy.partition()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.partition.html) function; we just want to separate the top two from the remainder, it doesn't matter if they are ordered any further, after all.
3. Per coordinate we need to track whether their area stretches to infinity, so we can disqualify them from consideration when we ask for the maximum area. Any coordinate that can claim an `(x, y)` point on the boundaries (defined by the min and max x and y coordinates) can be expected to stretch to infinity.

All computations can be done with numpy arrays, and the distance calculations can be done for all points of the whole matrix in one step with the SciPy [`scipy.spatial.distance.cdist()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html) function, which directly supports calculating Manhattan distances.
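As a small illustration of these two building blocks — with a made-up 3-coordinate example, not the puzzle input — `cdist(..., metric='cityblock')` computes all Manhattan distances at once, and `numpy.partition()` exposes the two smallest distances per point so ties can be masked out:

```python
import numpy as np
from scipy.spatial import distance

# three coordinates and four probe points (made-up values for illustration)
coords = np.array([[0, 0], [4, 0], [0, 4]])
points = np.array([[2, 0], [1, 1], [4, 4], [0, 1]])

# all pairwise Manhattan distances: shape (len(coords), len(points))
dists = distance.cdist(coords, points, metric='cityblock')

closest = dists.argmin(axis=0)       # index of the nearest coordinate per point
best, second = np.partition(dists, 1, axis=0)[:2]
closest[best == second] = -1         # -1 marks contested (equidistant) points

print(closest)  # -> [-1  0 -1  0]
```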
As inputs we need - an array for all the `(x, y)` positions for the coordinates <table> <thead> <tr><th>#</th><th>x</th><th>y</th></tr> </thead> <tbody> <tr><th>0</th><td>1</td><td>1</td></tr> <tr><th>1</th><td>1</td><td>6</td></tr> <tr><th>⋮</th><td>⋮</td><td>⋮</td></tr> <tr><th>5</th><td>8</td><td>9</td></tr> </tbody> </table> - an array of all possible `(x, y)` coordinates for the grid bounded the min and max x, y coordinates of the input coordinates <table> <thead> <tr><th>#</th><th>x</th><th>y</th></tr> </thead> <tbody> <tr><th>0</th><td>1</td><td>1</td></tr> <tr><th>1</th><td>2</td><td>1</td></tr> <tr><th>2</th><td>3</td><td>1</td></tr> <tr><th>⋮</th><td>⋮</td><td>⋮</td></tr> <tr><th>55</th><td>7</td><td>8</td></tr> </tbody> </table> Given these, `cdist()` will give us a big matrix with all distances: <table> <thead> <tr><th>#</th><th>distances</th></tr> </thead> <tbody> <tr><th>0</th><td> <table> <thead> <tr><th></th><th>0</th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th></tr></thead> <tbody> <tr><th>0</th><td>0.0</td><td>1.0</td><td>2.0</td><td>3.0</td><td>4.0</td><td>5.0</td><td>6.0</td></tr> <tr><th>1</th><td>1.0</td><td>2.0</td><td>3.0</td><td>4.0</td><td>5.0</td><td>6.0</td><td>7.0</td></tr> <tr><th>⋮</th><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td></tr> <tr><th>7</th><td>7.0</td><td>8.0</td><td>9.0</td><td>10.0</td><td>11.0</td><td>12.0</td><td>13.0</td></tr> </tbody> </table> </td></tr> <tr><th>1</th><td> <table> <thead> <tr><th></th><th>0</th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th></tr> </thead> <tbody> <tr><th>0</th><td>5.0</td><td>6.0</td><td>7.0</td><td>8.0</td><td>9.0</td><td>10.0</td><td>11.0</td></tr> <tr><th>1</th><td>4.0</td><td>5.0</td><td>6.0</td><td>7.0</td><td>8.0</td><td>9.0</td><td>10.0</td></tr> <tr><th>⋮</th><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td></tr> 
<tr><th>7</th><td>2.0</td><td>3.0</td><td>4.0</td><td>5.0</td><td>6.0</td><td>7.0</td><td>8.0</td></tr> </tbody> </table> </td></tr> <tr><th>⋮</th><td>⋮</td></tr> <tr><th>5</th><td> <table> <thead> <tr><th></th><th>0</th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th></tr> </thead> <tbody> <tr><th>0</th><td>15.0</td><td>14.0</td><td>13.0</td><td>12.0</td><td>11.0</td><td>10.0</td><td>9.0</td></tr> <tr><th>1</th><td>14.0</td><td>13.0</td><td>12.0</td><td>11.0</td><td>10.0</td><td>9.0</td><td>8.0</td></tr> <tr><th>⋮</th><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td><td>⋮</td></tr> <tr><th>7</th><td>8.0</td><td>7.0</td><td>6.0</td><td>5.0</td><td>4.0</td><td>3.0</td><td>2.0</td></tr> </tbody> </table> </td></tr> </tbody> </table>

All that then remains to be done is find the *ids* of the input coordinates (integer index) that have the lowest distance at each point, remove the points at which the lowest and second-lowest distances are equal (contested distances), remove the ids that claim area at the edges (those have infinite area), then count the ids and return the highest of those counts.

```
import numpy as np
from scipy.spatial import distance


def _manhattan_distances_matrix(coords):
    """Produce a len(coords) matrix of manhattan distances at all possible x, y"""
    x = np.arange(coords[..., 0].min(), coords[..., 0].max() + 1)
    y = np.arange(coords[..., 1].min(), coords[..., 1].max() + 1)
    # arrays with len(x) x len(y) x and y coordinates
    xx, yy = np.meshgrid(x, y)
    # array of all possible [x, y]
    all_xy = np.stack((xx, yy), axis=-1).reshape(-1, 2)
    # calculate distances for all points at all coordinates; essentially a
    # len(coordinates) set of matrices of distances
    return distance.cdist(coords, all_xy, metric='cityblock').reshape(-1, *xx.shape)


def _claimed_area(coords):
    """matrix of claimed areas by id; -1 is used to mark equidistant areas."""
    distances = _manhattan_distances_matrix(coords)
    # What coordinate ids win for a given x, y position?
    coord_ids = distances.argmin(axis=0)
    # wherever the top and second best distance are the same, clear the
    # claim for a candidate id
    candidate, next_ = np.partition(distances, 2, axis=0)[:2]
    coord_ids[candidate == next_] = -1
    return coord_ids


def find_max_area(coords):
    """How large is the largest non-infinite area covered?"""
    coord_ids = _claimed_area(coords)
    # Any candidate id that's at the edge has infinite area, clear those
    # from consideration
    is_infinite = np.union1d(
        # top and bottom
        np.unique(coord_ids[[0, -1], :]),
        # left and right
        np.unique(coord_ids[:, [0, -1]]),
    )
    coord_ids[np.isin(coord_ids, is_infinite)] = -1
    # now we have a matrix of all positions on the infinite grid (bounded
    # by the min and max coordinates) with non-infinite areas marked by
    # the id of the coordinates that claim that position. -1 marks spots
    # not claimable or part of an infinite area. All we need to do now is
    # count the ids != -1, and return the maximum
    _, counts = np.unique(coord_ids[coord_ids != -1], return_counts=True)
    return counts.max()


testcoords = np.genfromtxt('''\
1, 1
1, 6
8, 3
3, 4
5, 5
8, 9'''.splitlines(), delimiter=',', dtype=int)

assert find_max_area(testcoords) == 17

%matplotlib inline
from PIL import Image
import matplotlib.pyplot as plt


def visualise(coords, cmap='rainbow', unclaimed='black', centers='white', ratio=1):
    coord_ids = _claimed_area(coords)
    vmin, vmax = coord_ids[coord_ids != -1].min(), coord_ids.max()
    # mark the coordinate centers with a separate colour; coordinates
    # must first be normalised as we don't necessarily start at 0, 0
    # anymore.
    normalised_coords = coords - coords.min(axis=0)
    coord_ids[tuple(normalised_coords.T)[::-1]] = vmax + 1
    # Generate a PIL image, using a matplotlib palette
    # resample a matplotlib colour map to cover our coords count (vmax + 1)
    p = plt.get_cmap(cmap)._resample(vmax + 1)
    # -1 is given one colour
    p.set_under(unclaimed)
    # vmax + 1 another
    p.set_over(centers)
    # map points through the resampled palette
    img = Image.fromarray(p(coord_ids, bytes=True))
    if ratio != 1:
        img = img.resize((int(img.size[0] * ratio), int(img.size[1] * ratio)))
    return img


visualise(testcoords, ratio=35)

import aocd

data = aocd.get_data(day=6, year=2018)
coords = np.genfromtxt(data.splitlines(), delimiter=',', dtype=np.uint)

print('Part 1:', find_max_area(coords))

# All the coordinates mapped out:
visualise(coords)
```

## Part 2

This part is actually easier: we can sum all the distances that `cdist()` gives us for all possible positions, and count how many of these fall below the threshold. Numpy was made for this kind of work.

```
def area_within_threshold(coords, threshold):
    distances = _manhattan_distances_matrix(coords)
    return (distances.sum(axis=0) < threshold).sum()


testthreshold = 32
assert area_within_threshold(testcoords, testthreshold) == 16


def plot_area(coords, threshold):
    cmap = plt.get_cmap('cool')
    cmap.set_over('black')
    distance = _manhattan_distances_matrix(coords).sum(axis=0)
    plt.axis('off')
    plt.imshow(distance, vmax=threshold - 1, cmap=cmap)


plot_area(testcoords, testthreshold)

threshold = 10000
print('Part 2:', area_within_threshold(coords, threshold))
plot_area(coords, threshold)
```
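The numpy one-liner can also be cross-checked with a plain-Python brute force over the same bounding-box grid, using the example coordinates and threshold from above:

```python
# Brute-force cross-check of the Part 2 example, without numpy/scipy:
# sum the Manhattan distances to every coordinate for each grid point
# and count the points whose total falls below the threshold.
coords = [(1, 1), (1, 6), (8, 3), (3, 4), (5, 5), (8, 9)]
threshold = 32

xs = [x for x, _ in coords]
ys = [y for _, y in coords]
region_size = sum(
    1
    for x in range(min(xs), max(xs) + 1)
    for y in range(min(ys), max(ys) + 1)
    if sum(abs(x - cx) + abs(y - cy) for cx, cy in coords) < threshold
)
print(region_size)  # -> 16, the region size given for the example
```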
# A Simple Understanding of DIIS in SCF

> Created: 2019-10-23

This document gives a brief account of DIIS for SCF, taking a GGA-type functional as the representative case.

DIIS is an algorithm used (almost) exclusively to accelerate self-consistent field convergence. We will not elaborate on the algorithm and its mathematical background here; C. David Sherrill's notes [^Sherrill-note] and the Psi4NumPy Jupyter Notebook [^psi4numpy-note] are recommended as further reading.

This note uses PySCF's DIIS implementation to extrapolate the Fock matrix. We will describe how, after the $t$-th DIIS step, the Fock matrix of step $t+1$ is updated. We will **not** write a DIIS program from scratch. On the one hand, this is a matter of program complexity: a general DIIS program should allow convenient insertion and removal of vectors during the iterations, handle the linear dependence that can arise when solving the matrix equation, and maintain reasonable efficiency. On the other hand, once we understand the DIIS update process, we have in principle understood a DIIS program; the remaining details are merely a matter of time and patience.

```
import numpy as np
import scipy
from pyscf import gto, dft, lib
from functools import partial

np.einsum = partial(np.einsum, optimize=["greedy", 1024 ** 3 * 2 / 8])
np.set_printoptions(5, linewidth=150, suppress=True)
```

## Using DIIS in PySCF

### Molecular system definition and the DIIS program

First, our molecular system is an asymmetric hydrogen peroxide molecule.

```
mol = gto.Mole()
mol.atom = """
O  0.0  0.0  0.0
O  0.0  0.0  1.5
H  1.0  0.0  0.0
H  0.0  0.7  1.0
"""
mol.basis = "6-31G"
mol.verbose = 0
mol.build()

nao = nmo = mol.nao
nocc = mol.nelec[0]
nvir = nmo - nocc
so, sv, sa = slice(0, nocc), slice(nocc, nmo), slice(0, nmo)
```

To simplify the program, we use PySCF's DFT self-consistent-field class instance `scf_eng` to generate the Fock matrix and the density matrix.

```
scf_eng = dft.RKS(mol)
scf_eng.xc = "B3LYPg"
S = mol.intor("int1e_ovlp")
mo_occ = np.zeros(nmo)
mo_occ[:nocc] = 2
```

We first use the program below to show briefly how DIIS in PySCF is used. Specific functions of the DIIS class will be introduced later; the DIIS object used for the demonstration is also generated by this program.

The self-consistent-field program below is taken from the pyxdh documentation [^pyxdh-note], with some modifications and simplifications. Its overall approach agrees with the description in Chapter 3 of Szabo [^Szabo-Ostlund.Dover.1996], and it is also fairly similar to many demonstration programs in Psi4NumPy and PySCF.

There are two places that differ from the description in Szabo's Chapter 3. One line of code is

```python
D = coef * D + (1 - coef) * D_old
```

This line is only a modification to the naive SCF. Chapter 3 of Szabo can be called naive SCF: one simply diagonalizes the Fock matrix to obtain the molecular orbitals, and substitutes the resulting density back into the Fock matrix. This line instead mixes the density of the previous iteration $D_{\mu \nu}^{t-1}$ with that of the current iteration $D_{\mu \nu}^{t}$, and substitutes the mixed density into the Fock matrix. This is done only to prevent oscillatory convergence of the naive SCF and is not used in the DIIS-accelerated algorithm.

The other line of code is

```python
F = func(diis=diis, F=F, C=C, mo_occ=mo_occ)
```

This line specifies the DIIS update approach. In this document, the DIIS update approaches are:

- `func_no_special`: naive SCF, without DIIS
- `diis_err_deviation`: update the DIIS state with the difference of Fock matrices between iterations, $\Delta F_{\mu \nu}^{t} = F_{\mu \nu}^{t} - F_{\mu \nu}^{t - 1}$
- `diis_err_gradient`: update the DIIS state with the occupied-virtual Fock matrix $F_{ai}^{t}$

This somewhat unusual and unintuitive way of specifying the DIIS update is chosen purely to save code space in the document and avoid redundancy.

```
def scf_process(func, coef=1.0, maxcycle=128):
    diis = lib.diis.DIIS()
    C = e = NotImplemented               # Orbital (canonical) coefficient
    D = np.zeros((nao, nao))             # Density in this iteration
    D_old = np.zeros((nao, nao)) + 1e-4  # Density in last iteration
    count = 0                            # Iteration count (1, 2, ...)
    while (not np.allclose(D, D_old)):   # atol=1e-8, rtol=1e-5
        if count > maxcycle:
            raise ValueError("SCF not converged!")
        count += 1
        D_old = D
        F = scf_eng.get_fock(dm=D)       # Generate initial Fock matrix from Density
        if count > 1:                    # avoid the case: C = NotImplemented
            F = func(diis=diis, F=F, C=C, mo_occ=mo_occ)  # Different DIIS approaches
            # func_no_special    : nothing happens
            # diis_err_deviation : F = diis.update(F)
            # diis_err_gradient  : F = diis.update(F, scf_eng.get_grad(C, mo_occ))
        e, C = scipy.linalg.eigh(F, S)   # Solve FC = SCe
        D = scf_eng.make_rdm1(mo_coeff=C, mo_occ=mo_occ)  # D = 2 * C(occ).T @ C(occ)
        D = coef * D + (1 - coef) * D_old  # For convergence of original SCF
        # func_no_special : D = 0.3 * D + 0.7 * D_old
        # other cases     : nothing happens
    E_tot = scf_eng.energy_tot(dm=D)
    print("SCF Converged in     ", count, " loops")
    print("Total energy (B3LYP) ", E_tot, " a.u.")
```

### Acceleration effect of the DIIS approaches

Now we can look at the effect of each DIIS update approach.

**Naive SCF**

```
def func_no_special(*args, **kwargs):
    return kwargs["F"]

scf_process(func_no_special, coef=0.3)
```

**DIIS: Fock matrix difference**

```
def diis_err_deviation(*args, **kwargs):
    diis, F = kwargs["diis"], kwargs["F"]
    return diis.update(F)

scf_process(diis_err_deviation)
```

**DIIS: occupied-virtual Fock matrix**

```
def diis_err_gradient(*args, **kwargs):
    diis, F, C, mo_occ = kwargs["diis"], kwargs["F"], kwargs["C"], kwargs["mo_occ"]
    return diis.update(F, scf_eng.get_grad(C, mo_occ))

scf_process(diis_err_gradient)
```

Although in principle the occupied-virtual Fock matrix approach should be better, for the current system the Fock-matrix-difference approach converges faster.

We can see that with PySCF's DIIS class (the `DIIS_helper` class in Psi4NumPy's [helper_HF.py](https://github.com/psi4/psi4numpy/blob/master/Self-Consistent-Field/helper_HF.py) is similar), accelerating convergence in practice only requires one extra line relative to naive SCF, which is very convenient:

```python
F = diis.update(F)  # diis_err_deviation
```

or

```python
F = diis.update(F, scf_eng.get_grad(C, mo_occ))  # diis_err_gradient
```

Simply put, this updates the Fock matrix once more in every iteration, using information from the Fock matrices stored previously.

## DIIS Details

In this section, we analyze in some detail the DIIS state at the sixth iteration, mainly through the occupied-virtual Fock matrix update program, and from it derive the Fock matrix after the sixth DIIS update.

The DIIS object `diis` at the sixth iteration, the pre-update Fock matrix `F_old` $F_{\mu \nu}^{t=6}$, and the updated Fock matrix `F` $\mathscr{F}_{\mu \nu}$ are obtained by limiting the number of iterations:

```
diis = lib.diis.DIIS()
C = e = NotImplemented               # Orbital (canonical) coefficient
D = np.zeros((nao, nao))             # Density in this iteration
D_old = np.zeros((nao, nao)) + 1e-4  # Density in last iteration
count = 0                            # Iteration count (1, 2, ...)
F_old = NotImplemented               # Variable in last iteration
while (not np.allclose(D, D_old)):   # atol=1e-8, rtol=1e-5
    count += 1
    D_old = D
    F = scf_eng.get_fock(dm=D)       # Generate initial Fock matrix from Density
    if count == 6:
        F_old = F.copy()
        F = diis.update(F, scf_eng.get_grad(C, mo_occ))
        break
    elif count > 1:                  # avoid the case: C = NotImplemented
        F = diis.update(F, scf_eng.get_grad(C, mo_occ))
    e, C = scipy.linalg.eigh(F, S)   # Solve FC = SCe
    D = scf_eng.make_rdm1(mo_coeff=C, mo_occ=mo_occ)  # D = 2 * C(occ).T @ C(occ)
```

One additional implementation detail: the updated Fock matrix $\mathscr{F}_{\mu \nu}$ can also be obtained through `diis.extrapolate`:

```
np.allclose(F, diis.extrapolate().reshape(nmo, nmo))
```

Besides returning the updated Fock matrix, `diis.update` also stores the current Fock matrix and error information into `diis`, in preparation for the DIIS update of the next iteration.

### What DIIS stores

In general, DIIS stores two kinds of content: the information to be extrapolated $p_I^t$ and the error information $e_J^t$.

Our purpose in using DIIS during SCF is to extrapolate the Fock matrix with the help of information from several previous iterations, obtaining a better Fock matrix at the current iteration step $t$. Therefore, the information to be extrapolated $p_I^t$ is the Fock matrix $F_{\mu \nu}^t$ computed in the $t$-th iteration.

A slightly odd point here is that $p_I^t$ is a vector with a single index $I$, while the Fock matrix in the atomic-orbital basis $F_{\mu \nu}^t$ is a matrix with two indices. In practice, $p_I^t$ is simply $F_{\mu \nu}^t$ flattened into a one-dimensional vector. The information to be extrapolated $p_I^t$ can be obtained through `diis.get_vec`; we collect these vectors into the variable `vecs`, whose indices are stored as $(t, I)$:

```
vecs = np.array([diis.get_vec(i) for i in range(diis.get_num_vec())])
```

We denote the error information at each iteration as $e_J^t$. For the occupied-virtual Fock matrix update approach, $e_J^t$ is the virtual-occupied Fock matrix $F_{ai}^t$ in the molecular-orbital basis. We know that after SCF convergence, $F_{ai} = 0$; but during the SCF iterations this quantity is generally not zero — it would not be an exaggeration to say that the whole point of the SCF process is to achieve $F_{ai} = 0$. Hence the state of $F_{ai}^t$ can be regarded as a criterion for how well the SCF has converged, and we define $e_J^t$ as the flattened $F_{ai}^t$.

The error information $e_J^t$ can be obtained through `diis.get_err_vec`; we collect these vectors into the variable `err_vecs`, whose indices are stored as $(t, J)$:

```
err_vecs = np.array([diis.get_err_vec(i) for i in range(diis.get_num_vec())])
```

We point out that the dimensions referred to by the indices of $p_I^t$ and $e_J^t$ do not have to be the same.

```
print(vecs.shape)
print(err_vecs.shape)
```

:::{note}

From the narrative and code above, we can see that only 6 iterations were performed, during which the error information and the information to be extrapolated were stored only 5 times ($t$ counted from 1, with $t \in [2, 6]$ for the extrapolation information). We define the current set of iteration-count superscripts $t$ as $\mathscr{T} = \{2, 3, 4, 5, 6\}$; but when retrieving these vectors programmatically from PySCF's DIIS class `diis`, the indices `0, 1, 2, 3, 4` should be used.

To carry out extrapolation, DIIS stores many extrapolation and error vectors; for large molecules, this can occupy a lot of memory. For this reason (and for convergence considerations), DIIS usually keeps only a small number of extrapolation and error vectors. PySCF's DIIS by default stores information from only 6 iterations. This means that if we performed 15 iterations, at most 6 extrapolation matrices would be kept, and the rest of the extrapolation and error information would be discarded.

To simplify the discussion, this document does not cover how already-stored extrapolation and error information is discarded.

:::

### DIIS extrapolation: theory

Once we have all the extrapolation information $p_I^t$ and error information $e_J^t$, we can form the extrapolated result $\mathscr{p}_I = \mathscr{F}_{\mu \nu}$. The seemingly problematic conversion between a single index and a double index in this formula is realized through a matrix reshape.

The extrapolation means

$$
\mathscr{p}_I = \sum_{t \in \mathscr{T}} w_t p_I^t
$$

$\mathscr{T}$ denotes the set of iteration counts corresponding to the extrapolation information currently stored by DIIS; here it happens to be all iteration counts starting from 2. If the current iteration count were very large while DIIS was allowed to store no more than 6 extrapolation vectors, then the range $\mathscr{T}$ of the summation index $t$ would drop the earlier iteration counts, keeping the number of elements $|\mathscr{T}|$ no larger than 6.

We artificially introduce weights $w_t$ subject to the normalization condition:

$$
\sum_{t \in \mathscr{T}} w_t = 1
$$

If we assume a linear relationship between the information to be extrapolated $p_I^t$ and the corresponding error information $e_J^t$, then the error $\mathscr{e}_J$ of the extrapolated information $\mathscr{p}_I$ should satisfy

$$
\mathscr{e}_J = \sum_{t \in \mathscr{T}} w_t e_J^t
$$

We want to minimize the error $\Vert \mathscr{e}_J \Vert_2^2$ while still satisfying the normalization condition on $w_t$; by the method of Lagrange multipliers, we therefore construct the following loss function

$$
\begin{align}
\mathscr{L} (\{w_t\}_{t \in \mathscr{T}}, \lambda) &= \Vert \mathscr{e}_J \Vert_2^2 + 2 \lambda \left( \sum_{t \in \mathscr{T}} w_t - 1 \right) \\
&= \sum_J \sum_{t \in \mathscr{T}} w_t e_J^t \cdot \sum_{s \in \mathscr{T}} w_s e_J^s + 2 \lambda \left( \sum_{t \in
\mathscr{T}} w_t - 1 \right) \end{align} $$ 我们现在定义 $$ B_{ts} = \sum_{J} e_J^t e_J^s $$ 那么损失函数可以写为 $$ \mathscr{L} (\{w_t\}_{t \in \mathscr{T}}, \lambda) = \sum_{t, s \in \mathscr{T}} w_t B_{ts} w_s + 2 \lambda \left( \sum_{t \in \mathscr{T}} w_t - 1 \right) $$ 对上述损失函数求取关于 $w_t$ 的偏导数,则得到 $$ \frac{\partial \mathscr{L}}{\partial w_t} = 2 \sum_{s \in \mathscr{T}} B_{ts} w_s + 2 \lambda $$ 我们显然是希望让损失函数对关于 $w_t$ 的偏导数为零;那么联立归一化条件 $\sum_{t \in \mathscr{T}} w_t = 1$,我们应当得到以下矩阵方程: $$ \begin{align} \begin{pmatrix} 0 & 1 & 1 & \cdots \\ 1 & B_{t_0 t_0} & B_{t_0 t_1} & \cdots \\ 1 & B_{t_1 t_0} & B_{t_1 t_1} & \\ \vdots & \vdots & & \ddots \\ \end{pmatrix} \begin{pmatrix} \lambda \\ w_{t_0} \\ w_{t_1} \\ \vdots \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \\ \vdots \end{pmatrix} \end{align} $$ 其中,$t_0, t_1, \cdots \in \mathscr{T}$ 是互不相同的指标。求解上述方程,就可以获得权重 $w_t$,进而给出 $\mathscr{F}_{\mu \nu} = \mathscr{p}_I = \sum_{t \in \mathscr{T}} w_t p_I^t$,达成目标。 ### DIIS 外推:实现 首先,我们出,`diis` 的一个隐含变量 `diis._H` 储存的就是矩阵方程 LHS 的矩阵部分: ``` A = diis._H[:diis.get_num_vec()+1, :diis.get_num_vec()+1] A ``` 我们能很方便地构建上述矩阵的第 1 行以下、第 1 列以右的子矩阵 $B_{ts} = \sum_{J} e_J^t e_J^s$: ``` np.einsum("tI, sI -> ts", err_vecs, err_vecs) ``` 我们可以直接解上述的矩阵方程: ``` b = np.zeros(diis.get_num_vec() + 1) b[0] = 1 w = np.linalg.solve(A, b) w = w[1:] w ``` 那么我们可以通过 $\mathscr{F}_{\mu \nu} = \mathscr{p}_I = \sum_{t \in \mathscr{T}} w_t p_I^t$ 给出外推 Fock 矩阵 `F_ex`,并且与 `diis` 给出的外推的 Fock 矩阵 `F` 进行比较: ``` F_ex = np.einsum("t, tI -> I", w, vecs).reshape(nmo, nmo) np.allclose(F_ex, F) ``` :::{tip} 在求解 DIIS 所给出的权重 $w_t$ 向量的过程中,会遇到线性依赖关系,或者说会遇到 $B_{ts}$ 数值上不满秩的情况。在这种情况下,求解矩阵方程可能会失败。 一种解决方案是,干脆关闭 DIIS,使用 Naive SCF 作最后的收尾工作。由于 DIIS 已经将电子态密度收敛到相当不错的状态了,因此应当能预期这种情况下 Naive SCF 可以正常地进行收敛。 另一种解决方式是对矩阵方程 $\mathbf{A} \boldsymbol{x} = \boldsymbol{b}$ 的矩阵 $\mathbf{A}$ 作对角化,并舍去其中绝对值极小的本征值与本征向量,求解一个子空间的线性方程组问题。这种解决方案应用在 PySCF 的 DIIS 程序中。 ::: ### 对 Fock 矩阵差值方法的补充说明 Fock 矩阵差值方法的计算过程与占据-非占 Fock 矩阵方法的实现过程几乎是相同的。唯一的区别是: - 占据-非占 Fock 矩阵方法 $e_J^t = F_{ai}^t$ - 
Fock 矩阵差值方法 $e_J^t = \Delta F_{\mu \nu}^{t} = F_{\mu \nu}^{t} - F_{\mu \nu}^{t - 1}$ [^Sherrill-note]: <http://vergil.chemistry.gatech.edu/notes/diis/diis.pdf> [^psi4numpy-note]: <https://github.com/psi4/psi4numpy/blob/master/Tutorials/03_Hartree-Fock/3b_rhf-diis.ipynb> [^pyxdh-note]: <https://py-xdh.readthedocs.io/zh_CN/latest/qcbasic/proj_xyg3.html> [^Szabo-Ostlund.Dover.1996]: Szabo, A.; Ostlund, N. S. *Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory (Dover Books on Chemistry)*; Dover Publications, 1996.
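Although this document deliberately stops short of a full DIIS implementation, the extrapolation step itself fits in a few lines. Below is a toy sketch of exactly the $B_{ts}$ construction and bordered linear system described above, run on fabricated vectors (no vector eviction, no handling of linear dependence — the two complications the text mentions):

```python
import numpy as np

def diis_extrapolate(ps, es):
    """Toy DIIS step: given stacked extrapolation vectors ps[t] and error
    vectors es[t], solve the Lagrange system and return sum_t w_t * p_t."""
    n = len(ps)
    B = np.einsum("tJ, sJ -> ts", es, es)  # B_ts = sum_J e^t_J e^s_J
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0              # normalization row/column
    A[1:, 1:] = B
    b = np.zeros(n + 1)
    b[0] = 1.0                             # enforces sum_t w_t = 1
    w = np.linalg.solve(A, b)[1:]          # drop the multiplier lambda
    return np.einsum("t, tI -> I", w, ps)

# Fabricated data: vectors approaching a fixed point, with known errors.
rng = np.random.default_rng(0)
target = rng.standard_normal(6)
es = np.array([0.5 ** t * rng.standard_normal(6) for t in (1, 2, 3)])
ps = target + es

p_ex = diis_extrapolate(ps, es)
# The extrapolated residual is never worse than the best single vector.
print(np.linalg.norm(p_ex - target), np.linalg.norm(es, axis=1).min())
```

Since each stored vector is itself a feasible weight choice (all weight on one $t$), the minimized residual is bounded by the best single error norm.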
```
import os
import torch
import torch.nn as nn
import torchvision.models as models
from torchvision import transforms
from PIL import Image
import numpy as np
from tqdm.notebook import tqdm
from sklearn.metrics import confusion_matrix
from IPython.display import display

from src.data_loader import read_dataset, get_loader
from facenet_pytorch import InceptionResnetV1, fixed_image_standardization


class FaceRecognitionCNN(nn.Module):
    def __init__(self):
        super(FaceRecognitionCNN, self).__init__()
        self.resnet = InceptionResnetV1(pretrained='vggface2')
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(512, 1)

    def forward(self, images):
        out = self.resnet(images)
        out = self.relu(out)
        out = self.dropout(out)
        out = self.fc(out)
        return out.squeeze()

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device

model_dir = '/home/nika/Desktop/DeepFaceForgeryDetection/cloud_models/Jan11_20-35-47_gpu-training/'
model = FaceRecognitionCNN()
state_dict = torch.load(os.path.join(model_dir, 'model.pt'), map_location=device)
model.load_state_dict(state_dict)
model.eval();

# Image preprocessing, normalization for the pretrained resnet
transform = transforms.Compose([
    transforms.Resize((160, 160)),
    np.float32,
    transforms.ToTensor(),
    fixed_image_standardization
])

_, val_dataset = read_dataset(
    '/home/nika/Desktop/FaceForensics/original',
    '/home/nika/Desktop/FaceForensics/neural_textures/',
    transform=transform,
    max_images_per_video=1,
    splits_path='dataset/splits',
    window_size=1
)
print(f'validation data size: {len(val_dataset)}')

val_loader = get_loader(
    val_dataset,
    batch_size=32,
    shuffle=True,
    num_workers=4
)

all_predictions = []
all_targets = []
all_video_ids = []
with torch.no_grad():
    for video_ids, images, targets in tqdm(val_loader):
        images = images.to(device)
        targets = targets.to(device)
        outputs = model(images)
        predictions = outputs > 0.0
        all_predictions.append(predictions)
        all_targets.append(targets)
        all_video_ids.append(video_ids)

all_predictions = torch.cat(all_predictions)
all_targets = torch.cat(all_targets)
all_video_ids = torch.cat(all_video_ids)

all_predictions.shape

(all_predictions[1:] == all_predictions[:-1]).sum().float() / all_predictions.shape[0]

set(all_video_ids.tolist())

# All misclassified video ids
set(all_video_ids[~(all_predictions == all_targets)].tolist())

all_predictions.shape, all_targets.shape

# sklearn's confusion_matrix expects (y_true, y_pred) in that order
confusion_matrix(all_targets, all_predictions)

tn, fp, fn, tp = confusion_matrix(all_targets, all_predictions).ravel()
tn, fp, fn, tp

total = float(tn + fp + fn + tp)
accuracy = (tp + tn) / total
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print('accuracy', accuracy)
print('precision', precision)
print('recall', recall)

# original accuracy
tn / (tn + fp)

# neural textures accuracy
tp / (tp + fn)

# overall accuracy
(tn + tp) / (tp + tn + fp + fn)

incorrect_labels = []
for index, (pred, target) in enumerate(zip(all_predictions, all_targets)):
    if pred != target:
        incorrect_labels.append((index, pred, target))

all_targets

all_predictions.int()

_, val_dataset = read_dataset(
    'dataset/images_large/original',
    'dataset/images_large/tampered',
    transform=None,
    max_images_per_video=999999,
    splits_path='dataset/splits'
)

def display_sample(label_index):
    img_index, pred, target = incorrect_labels[label_index]
    print('label_index: {}, pred: {}, target: {}'.format(label_index, pred, target))
    return val_dataset[img_index][0]

display(*[x[0] for x in val_dataset][:-100])

y_pred = torch.Tensor([0])
y_true = torch.Tensor([1.])
nn.BCELoss()(y_pred, y_true)
```

$$ \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right] $$

```
images.shape, targets.shape

targets

with torch.no_grad():
    scores = model(images)
nn.Sigmoid()(scores)

img = images[0]
reverse_transform = transforms.Compose([
    transforms.Normalize(mean=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
                         std=[1/0.229, 1/0.224, 1/0.225]),
    transforms.ToPILImage()
])
transforms.ToPILImage()(img)
```
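The evaluation loop above thresholds the raw model outputs at `0.0`. Since the network's final layer returns logits, this is equivalent to thresholding the sigmoid probabilities at 0.5 (because the sigmoid maps 0 to exactly 0.5). A dependency-free sketch of that equivalence, with fabricated logit values:

```python
import math

def sigmoid(x):
    # Standard logistic function: maps 0.0 to exactly 0.5.
    return 1.0 / (1.0 + math.exp(-x))

logits = [-2.0, -0.1, 0.0, 0.3, 4.0]  # fabricated example scores

pred_from_logits = [x > 0.0 for x in logits]
pred_from_probs = [sigmoid(x) > 0.5 for x in logits]

print(pred_from_logits)  # [False, False, False, True, True]
assert pred_from_logits == pred_from_probs
```

This is also why the notebook applies `nn.Sigmoid()` only when it wants probabilities for inspection, not for the hard predictions themselves.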
# DO NOT RUN THIS FILE

This file extracts data from multiple raw XML data files and creates a JSON file as needed for the model.

## Load packages

```
import warnings
warnings.filterwarnings('ignore')

import xml.etree.ElementTree as ET
import sys
import os
import pandas as pd
import json
import xmltodict
import nltk, re
```

## Get list of XML data files

```
DATA_DIR = "../../dataset/raw/1_CancerGov_QA"
DATA_FILES = sorted(os.listdir(DATA_DIR))
```

## Get the list of XML tags in DATASET

```
elemList = []
for each_file in DATA_FILES:
    tree = ET.parse(DATA_DIR + '/' + each_file)
    for elem in tree.iter():
        elemList.append(elem.tag)
elemList = list(set(elemList))
print(elemList)
```

## Parsing XML to Data Frame with necessary tags

```
data_column_names = ["Focus", 'qtype', "Question", "Answer"]
QA_DATAFRAME = pd.DataFrame(columns=data_column_names)
QA_SIZE = 0

for each_file in DATA_FILES:
    tree = ET.parse(DATA_DIR + '/' + each_file)
    root = tree.getroot()
    SemanticTypes = []
    SemanticGroups = []
    QTypes = []
    Questions = []
    Answers = []
    doc_focus = []
    for each_passage in root.iter('Document'):
        doc_focus.append(each_passage.find('Focus').text)
        for each_Question in each_passage.iter('Question'):
            QTypes.append(each_Question.attrib['qtype'])
            Questions.append(each_Question.text)
        for each_Answer in each_passage.iter('Answer'):
            temp_answer = each_Answer.text.replace('\n', ' ').replace('\t', '')
            Answers.append(temp_answer)
    doc_df = pd.DataFrame(columns=data_column_names)
    if (len(Questions) == len(Answers)):
        for index in range(len(Questions)):
            if (len(SemanticTypes) == 1):
                SemanticType = SemanticTypes[0]
            elif (len(SemanticTypes) == 2):
                SemanticType = SemanticTypes[0] + ',' + SemanticTypes[1]
            temp_df = pd.DataFrame([[doc_focus[0], QTypes[index], Questions[index], Answers[index]]],
                                   columns=data_column_names)
            doc_df = doc_df.append(temp_df, ignore_index=True)
        QA_SIZE += len(Questions)
    QA_DATAFRAME = QA_DATAFRAME.append(doc_df, ignore_index=True)

QA_DATAFRAME
```

## Saving Data Frames to a Tab-Separated File

```
QA_DATAFRAME.to_csv("../../dataset/raw/new-formatted_1_CancerGov_QA/xml_table.tsv", sep="\t", index=False)
```

## Saving data into editable format

Currently, the data is uncleaned, and the `Answers` to the `Questions` are not as useful as they need to be. So we have to manually correct and create the `Answers`.

```
QA_DATAFRAME = QA_DATAFRAME[["qtype", "Question", "Answer"]]
LOL = QA_DATAFRAME.values.tolist()

fh = open("../../dataset/raw/new-formatted_1_CancerGov_QA/QA_RAW_text.txt", "w")
ID = 1
for L in LOL:
    #print(L)
    qtype, Question, Answer = L
    print(">", ID, file=fh)
    print("$" + qtype, file=fh)
    print("$" + Question, file=fh)
    Answer = Answer.replace("Key Points - ", "").replace("-", "")
    Answer = re.sub(' +', ' ', Answer)  # get rid of extra spaces
    print("$" + Answer, file=fh)
    Answers = ["#" + i for i in nltk.sent_tokenize(Answer)]
    for a in Answers:
        print(a, file=fh)
    ID += 1
    #break
fh.close()
```
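The text file written above uses a simple line-prefixed format: a `>` line with the record ID, three `$` lines (qtype, Question, Answer), then one `#` line per answer sentence. A minimal sketch of parsing a record back out of that format (the inline `raw` string is a fabricated stand-in for the real file, and the toy split assumes `>` never occurs inside the text):

```python
# One toy record in the format written by the notebook above.
raw = """> 1
$information
$What is cancer?
$Cancer is a disease. It can spread.
#Cancer is a disease.
#It can spread.
"""

records = []
for block in raw.strip().split(">"):
    if not block.strip():
        continue
    lines = block.strip().splitlines()
    rec_id = int(lines[0])                                  # "> 1" -> 1
    fields = [l[1:] for l in lines if l.startswith("$")]    # qtype, Question, Answer
    sentences = [l[1:] for l in lines if l.startswith("#")] # tokenized answer sentences
    records.append({"id": rec_id, "qtype": fields[0],
                    "question": fields[1], "answer": fields[2],
                    "sentences": sentences})

print(records[0]["question"])  # What is cancer?
```

A round trip like this is useful for re-importing the manually corrected `Answers` after editing.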
![Callysto.ca Banner](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-top.jpg?raw=true) <a href="https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fcurriculum-notebooks&branch=master&subPath=Science/ReflectionsOfLightByPlaneAndSphericalMirrors/reflections-of-light-by-plane-and-spherical-mirrors.ipynb&depth=1" target="_parent"><img src="https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true" width="123" height="24" alt="Open in Callysto"/></a> ``` from IPython.display import display, Math, Latex, HTML HTML('''<script> function code_toggle() { if (code_shown){ $('div.input').hide('500'); $('#toggleButton').val('Show Code') } else { $('div.input').show('500'); $('#toggleButton').val('Hide Code') } code_shown = !code_shown } $( document ).ready(function(){ code_shown=false; $('div.input').hide() }); </script> <form action="javascript:code_toggle()"><input type="submit" id="toggleButton" value="Show Code"></form>''') from helper import * %matplotlib inline ``` # Reflection of Light ## by Plane and Spherical Mirrors ## Introduction When light shines onto the surface of an object, some of the light is reflected, while the rest is either absorbed or transmitted. We can imagine the light consisting of many narrow beams that travel in straight-line paths called **rays**. The light rays that strike the surface are called the **incident rays**. The light rays that reflect off the surface are called the **reflected rays**. This model of light is called the **ray model**, and it can be used to describe many aspects of light, including the reflection and formation of images by plane and spherical mirrors. ## Law of Reflection <img src="Images/law_of_reflection.svg" width="50%"/> To measure the angles of the incident and reflected rays, we first draw the **normal**, which is the line perpendicular to the surface. 
The **angle of incidence, $\theta_{i}$,** is the angle between the incident ray and the normal. Likewise, the **angle of reflection, $\theta_{r}$,** is the angle between the reflected ray and the normal. The incident ray, the reflected ray, and the normal to the reflecting surface all lie within the same plane. This is shown in the figure above. Notice that the angle of reflection is equal to the angle of incidence. This is known as the **law of reflection**, and it can be expressed by the following equation: $$\theta_{r} = \theta_{i}$$ Use the slider below to change the angle of incidence. This changes the angle between the incident ray and the normal. Notice how the angle of reflection also changes when the slider is moved. ``` interactive_plot = widgets.interactive(f, Angle=widgets.IntSlider(value=45,min=0,max=90,step=15,continuous_update=False)) output = interactive_plot.children[-1] output.layout.height = '280px' interactive_plot ``` **Question:** *When the angle of incidence increases, what happens to the angle of reflection?* ``` #Assign each multiple choice to these four variables #Option_1 contains the answer option_1 = "The angle of reflection increases." option_2 = "The angle of reflection decreases." option_3 = "The angle of reflection remains constant." option_4 = "The angle of reflection equals zero." multiple_choice(option_1, option_2, option_3, option_4) ``` ## Specular and Diffuse Reflections For a very smooth surface, such as a mirror, almost all of the light is reflected to produce a **specular reflection**. In a specular reflection, the reflected light rays are parallel to one another and point in the same direction. This allows specular reflections to form images. If the surface is not very smooth, then the light may bounce off of the surface in various directions. This produces a **diffuse reflection**. Diffuse reflections cannot form images. 
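The law of reflection can also be checked numerically. The sketch below uses the standard vector form of reflection, $\vec{r} = \vec{d} - 2(\vec{d}\cdot\hat{n})\hat{n}$ (a standard identity, not derived in this notebook), with a fabricated 45° incident ray on a horizontal mirror:

```python
import numpy as np

def reflect(d, n):
    """Reflect direction d off a surface with unit normal n: r = d - 2 (d . n) n."""
    n = n / np.linalg.norm(n)
    return d - 2 * np.dot(d, n) * n

def angle_to_normal(v, n):
    """Angle (degrees) between a ray direction and the normal."""
    n = n / np.linalg.norm(n)
    v = v / np.linalg.norm(v)
    return np.degrees(np.arccos(abs(np.dot(v, n))))

normal = np.array([0.0, 1.0])  # flat, horizontal mirror
# Incident ray travelling down onto the mirror at 45 degrees to the normal
incident = np.array([np.sin(np.radians(45)), -np.cos(np.radians(45))])

reflected = reflect(incident, normal)
print(round(angle_to_normal(incident, normal), 6))   # 45.0
print(round(angle_to_normal(reflected, normal), 6))  # 45.0
```

Changing the incident angle (as the slider above does) always leaves the two printed angles equal, which is exactly $\theta_{r} = \theta_{i}$.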
<img src="Images/specular_diffuse_reflections.svg" width="70%"/> **Note:** The law of reflection still applies to diffuse reflections, even though the reflected rays are pointing in various directions. We can imagine that each small section of the rough surface is like a flat plane orientated differently than the sections around it. Since each of these sections is orientated differently, the angle of incidence is different at each section. This causes the reflected rays to scatter. **Question:** *Which of the following is an example of a specular reflection?* ``` #Assign each multiple choice to these four variables #Option_1 contains the answer option_1 = "The reflection off a clean window." option_2 = "The reflection off a wooden deck." option_3 = "The reflection off a carpet floor." option_4 = "The reflection off a table cloth." multiple_choice(option_1, option_2, option_3, option_4) ``` **Question:** *Which of the following is an example of a diffuse reflection?* ``` #Assign each multiple choice to these four variables #Option_1 contains the answer option_1 = "The reflection off a concrete sidewalk." option_2 = "The reflection off a mirror." option_3 = "The reflection off the surface of a still lake." option_4 = "The reflection off a polished sheet of metal." multiple_choice(option_1, option_2, option_3, option_4) ``` ## Image Formation by Plane Mirrors A **plane mirror** is simply a mirror made from a flat (or planar) surface. These types of mirrors are commonly found in bedroom or bathroom fixtures. When an object is reflected in a plane mirror, the image of the object appears to be located behind the mirror. This is because our brains interpret the reflected light rays entering our eyes as having travelled in straight-line paths. The light rays entering our eyes simply do not contain enough information for our brains to differentiate between a straight-line path and a path that changed direction due to a reflection. 
<img src="Images/plane_mirror_reflection.svg" width="60%"/> Notice in the figure above that the light rays do not actually converge at the location where the image appears to be formed (behind the mirror). Since the light rays do not actually go behind the mirror, they are represented as projections using dashed lines. If a film were placed at the image location behind the mirror, it would not be able to capture the image. As a result, this type of image is called a **virtual image**. For objects reflected in a plane mirror, the distance of the image from the mirror, $d_{i}$, is always equal to the distance of the object from the mirror, $d_{o}$. If the object is moved toward the mirror, the image of the object will also move toward the mirror such that the object and the image are always equidistant from the surface of the mirror. Use the slider below to change the object distance. Notice how the image distance also changes when the slider is moved. ``` interactive_plot = widgets.interactive(f,Distance=widgets.IntSlider(value=30,min=10,max=50,step=10,continuous_update=False)) output = interactive_plot.children[-1] output.layout.height = '280px' interactive_plot #Print question distance = round(random.uniform(5,10),1) print("If you stand " + str(distance) + " m in front of a plane mirror, how many metres behind the mirror is your virtual image?") #Answer calculation answer = distance #Assign each multiple choice to these four variables #Option_1 contains the answer option_1 = str(round((answer),1)) + " m" option_2 = str(round((answer * 2),1)) + " m" option_3 = str(round((answer / 2),1)) + " m" option_4 = str(round((answer / 4),1)) + " m" multiple_choice(option_1, option_2, option_3, option_4) #Print question distance = round(random.uniform(5,10),1) print("If you stand " + str(distance) + " m in front of a plane mirror, how many metres will separate you from your virtual image?") #Answer calculation answer = (distance * 2) #Assign each multiple choice to these four 
variables #Option_1 contains the answer option_1 = str(round((answer),1)) + " m" option_2 = str(round((answer * 2),1)) + " m" option_3 = str(round((answer / 2),1)) + " m" option_4 = str(round((answer / 4),1)) + " m" multiple_choice(option_1, option_2, option_3, option_4) ``` ## Spherical Mirrors Two common types of curved mirror are formed from a section of a sphere. If the reflection takes place on the inside of the spherical section, then the mirror is called a **concave mirror**. The reflecting surface of a concave mirror curves inward and away from the viewer. If the reflection takes place on the outside of the spherical section, then the mirror is called a **convex mirror**. The reflecting surface of a convex mirror curves outward and toward the viewer. <img src="Images/concave_convex_mirrors.svg" width="75%"/> The **centre of curvature, $C$,** is the point located at the centre of the sphere used to create the mirror. The **vertex, $V$,** is the point located at the geometric centre of the mirror itself. The **focus, $F$,** is the point located midway between the centre of curvature and the vertex. The line passing through the centre of curvature and the vertex is called the **principal axis**. Notice that the focus also lies on the principal axis. When an incident ray parallel to the principal axis strikes the mirror, the reflected ray always passes through the focus. When an incident ray passes through the focus and strikes the mirror, the reflected ray is always parallel to the principal axis. (In the above diagrams, reverse the arrow directions to see this case). These properties make the focus particularly useful when examining spherical mirrors. **Note:** The distance from the centre of curvature to the vertex is equal to the **radius, $R$,** of the sphere used to create the mirror. Any straight line drawn from the centre to any point on the surface of a spherical mirror will have a length equal to the radius. 
The distance from the vertex to the focus is called the **focal length, $f$**. This distance is equal to half of the radius. $$f = \frac{R}{2}$$ ``` #Print question radius = round(random.uniform(10,30),1) print("If the radius of a curved mirror is " + str(radius) + " cm, how many centimetres is the focal length?") #Answer calculation answer = radius/2 #Assign each multiple choice to these four variables #Option_1 contains the answer option_1 = str(round((answer),1)) + " cm" option_2 = str(round((answer * 2),1)) + " cm" option_3 = str(round((answer / 2),1)) + " cm" option_4 = str(round((answer * 4),1)) + " cm" multiple_choice(option_1, option_2, option_3, option_4) #Print question focal_length = round(random.uniform(5,15),1) print("If the focal length of a curved mirror is " + str(focal_length) + " cm, how many centimetres is the radius of curvature?") #Answer calculation answer = focal_length*2 #Assign each multiple choice to these four variables #Option_1 contains the answer option_1 = str(round((answer),1)) + " cm" option_2 = str(round((answer * 2),1)) + " cm" option_3 = str(round((answer / 2),1)) + " cm" option_4 = str(round((answer / 4),1)) + " cm" multiple_choice(option_1, option_2, option_3, option_4) ``` ## Image Formation by Spherical Mirrors A simple way to determine the position and characteristics of an image formed by the rays reflected from a spherical mirror is to construct a **ray diagram**. A ray diagram is used to show the path taken by light rays as they reflect from an object or mirror. This was used to find the image created by a plane mirror in the previous section. When constructing a ray diagram, we need only concern ourselves with finding the location of a single point on the reflected image. To do this, any point on the object may be chosen, but for consistency, we will choose the topmost point for the diagrams shown below. 
Any rays may be chosen, but there are three particular rays that are easy to draw: * **Ray 1:** This ray is drawn parallel to the principal axis from a point on the object to the surface of the mirror. Since the incident ray is parallel to the principal axis, the reflected ray must pass through the focus. * **Ray 2:** This ray is drawn from a point on the object and through the focus. Since the incident ray passes through the focus, the reflected ray must be parallel to the principal axis. * **Ray 3:** This ray is drawn from a point on the object and through the centre of curvature. This ray is therefore perpendicular to the mirror's surface (incident angle = 0). As such, the reflected ray must return along the same path and pass through the centre of curvature. The point at which any two of these three rays converge can be used to find the location and characteristics of the reflected image. ### Concave Mirrors The characteristics of an image formed in a concave mirror depend on the position of the object. There are essentially five cases. Each of these five cases is demonstrated below: ### Case 1: Object Located at a Distance Greater than $C$ In the first case, the distance of the object from the mirror is greater than the radius used to define the centre of curvature. In other words, the object is further away from the mirror than the centre of curvature. In this example, we can draw any two of the three rays mentioned above to find the image of the reflected object. **Note:** You only need to draw two of the three rays to find the image of the reflected object. 
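Although this notebook locates images graphically with ray diagrams, Case 1 can also be checked numerically using the mirror equation, $\frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i}$, and the magnification $m = -\frac{d_i}{d_o}$ (standard results not derived in this notebook; the numbers below are fabricated for illustration):

```python
# Concave mirror with focal length f = 10 cm, so C sits at R = 2f = 20 cm.
f = 10.0
d_o = 30.0  # object distance: beyond C, as in Case 1 (30 > 20)

d_i = 1 / (1 / f - 1 / d_o)  # mirror equation rearranged for image distance
m = -d_i / d_o               # magnification

print(round(d_i, 6))  # 15.0 -> image lies between F (10 cm) and C (20 cm)
print(round(m, 6))    # -0.5 -> inverted (negative) and smaller than the object
```

The positive image distance indicates a real image, matching the Case 1 ray diagram: real, inverted, reduced, and located between $C$ and $F$.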
``` output_case_1 = widgets.Output() frame_case_1 = 1 #Toggle images def show_svg_case_1(): global frame_case_1 if frame_case_1 == 0: display(SVG("Images/case_1_0.svg")) frame_case_1 = 1 elif frame_case_1 == 1: display(SVG("Images/case_1_1.svg")) frame_case_1 = 2 elif frame_case_1 == 2: display(SVG("Images/case_1_2.svg")) frame_case_1 = 3 elif frame_case_1 == 3: display(SVG("Images/case_1_3.svg")) frame_case_1 = 0 button_case_1 = widgets.Button(description="Toggle rays", button_style = 'success') display(button_case_1) def on_submit_button_case_1_clicked(b): with output_case_1: clear_output(wait=True) show_svg_case_1() with output_case_1: display(SVG("Images/case_1_0.svg")) button_case_1.on_click(on_submit_button_case_1_clicked) display(output_case_1) ``` **Question:** *Which options from the dropdown menus best describe the image formed by the reflected rays shown above?* ``` #Create dropdown menus dropdown1_1 = widgets.Dropdown(options={' ':0,'Beyond C': 1, 'At C': 2, 'Between C and F': 3, 'At F': 4, 'Between F and V': 5, 'Beyond V': 6, 'Not applicable (no image)': 7}, value=0, description='Position',) dropdown1_2 = widgets.Dropdown(options={' ':0,'Larger than the object': 1, 'Same size as the object': 2, 'Smaller than the object': 3, 'Not applicable (no image)': 4}, value=0, description='Relative size',) dropdown1_3 = widgets.Dropdown(options={' ':0,'Upright': 1, 'Inverted': 2, 'Not applicable (no image)': 3}, value=0, description='Orientation',) dropdown1_4 = widgets.Dropdown(options={' ':0,'Real': 1, 'Virtual': 2, 'Not applicable (no image)': 3}, value=0, description='Type',) #Display menus as 2x2 table container1_1 = widgets.VBox(children=[dropdown1_1,dropdown1_2]) container1_2 = widgets.VBox(children=[dropdown1_3,dropdown1_4]) display(widgets.HBox(children=[container1_1, container1_2])), print(" ", end='\r') #Evaluate input def check_answer1(b): answer1_1 = dropdown1_1.label answer1_2 = dropdown1_2.label answer1_3 = dropdown1_3.label answer1_4 = 
dropdown1_4.label if answer1_1 == "Between C and F" and answer1_2 == "Smaller than the object" and answer1_3 == "Inverted" and answer1_4 == "Real": print("Correct! ", end='\r') elif answer1_1 != ' ' and answer1_2 != ' ' and answer1_3 != ' ' and answer1_4 != ' ': print("Try again.", end='\r') else: print(" ", end='\r') dropdown1_1.observe(check_answer1, names='value') dropdown1_2.observe(check_answer1, names='value') dropdown1_3.observe(check_answer1, names='value') dropdown1_4.observe(check_answer1, names='value') ``` ### Case 2: Object Located at $C$ In the second case, the distance of the object from the mirror is equal to the radius used to define the centre of curvature. In other words, the object is located at the centre of curvature. In this case, we can draw only two rays to find the image of the reflected object. We cannot draw a ray passing through the centre of curvature because the object is located at $C$. ``` output_case_2 = widgets.Output() frame_case_2 = 1 #Toggle images def show_svg_case_2(): global frame_case_2 if frame_case_2 == 0: display(SVG("Images/case_2_0.svg")) frame_case_2 = 1 elif frame_case_2 == 1: display(SVG("Images/case_2_1.svg")) frame_case_2 = 2 elif frame_case_2 == 2: display(SVG("Images/case_2_2.svg")) frame_case_2 = 0 button_case_2 = widgets.Button(description="Toggle rays", button_style = 'success') display(button_case_2) def on_submit_button_case_2_clicked(b): with output_case_2: clear_output(wait=True) show_svg_case_2() with output_case_2: display(SVG("Images/case_2_0.svg")) button_case_2.on_click(on_submit_button_case_2_clicked) display(output_case_2) ``` **Question:** *Which options from the dropdown menus best describe the image formed by the reflected rays shown above?* ``` #Create dropdown menus dropdown2_1 = widgets.Dropdown(options={' ':0,'Beyond C': 1, 'At C': 2, 'Between C and F': 3, 'At F': 4, 'Between F and V': 5, 'Beyond V': 6, 'Not applicable (no image)': 7}, value=0, description='Position',) dropdown2_2 = 
widgets.Dropdown(options={' ':0,'Larger than the object': 1, 'Same size as the object': 2, 'Smaller than the object': 3, 'Not applicable (no image)': 4}, value=0, description='Relative size',) dropdown2_3 = widgets.Dropdown(options={' ':0,'Upright': 1, 'Inverted': 2, 'Not applicable (no image)': 3}, value=0, description='Orientation',) dropdown2_4 = widgets.Dropdown(options={' ':0,'Real': 1, 'Virtual': 2, 'Not applicable (no image)': 3}, value=0, description='Type',) #Display menus as 2x2 table container2_1 = widgets.VBox(children=[dropdown2_1,dropdown2_2]) container2_2 = widgets.VBox(children=[dropdown2_3,dropdown2_4]) display(widgets.HBox(children=[container2_1, container2_2])), print(" ", end='\r') #Evaluate input def check_answer2(b): answer2_1 = dropdown2_1.label answer2_2 = dropdown2_2.label answer2_3 = dropdown2_3.label answer2_4 = dropdown2_4.label if answer2_1 == "At C" and answer2_2 == "Same size as the object" and answer2_3 == "Inverted" and answer2_4 == "Real": print("Correct! ", end='\r') elif answer2_1 != ' ' and answer2_2 != ' ' and answer2_3 != ' ' and answer2_4 != ' ': print("Try again.", end='\r') else: print(" ", end='\r') dropdown2_1.observe(check_answer2, names='value') dropdown2_2.observe(check_answer2, names='value') dropdown2_3.observe(check_answer2, names='value') dropdown2_4.observe(check_answer2, names='value') ``` ### Case 3: Object Located between $C$ and $F$ In the third case, the distance of the object from the mirror is less than the radius used to define the centre of curvature, but greater than the focal length. In other words, the object is located between $F$ and $C$. In this case, we can find the image of the reflected object using two rays as shown below. If the mirror is large enough, a third ray that passes through $C$ can also be drawn. 
``` output_case_3 = widgets.Output() frame_case_3 = 1 #Toggle images def show_svg_case_3(): global frame_case_3 if frame_case_3 == 0: display(SVG("Images/case_3_0.svg")) frame_case_3 = 1 elif frame_case_3 == 1: display(SVG("Images/case_3_1.svg")) frame_case_3 = 2 elif frame_case_3 == 2: display(SVG("Images/case_3_2.svg")) frame_case_3 = 0 button_case_3 = widgets.Button(description="Toggle rays", button_style = 'success') display(button_case_3) def on_submit_button_case_3_clicked(b): with output_case_3: clear_output(wait=True) show_svg_case_3() with output_case_3: display(SVG("Images/case_3_0.svg")) button_case_3.on_click(on_submit_button_case_3_clicked) display(output_case_3) ``` **Question:** *Which options from the dropdown menus best describe the image formed by the reflected rays shown above?* ``` #Create dropdown menus dropdown3_1 = widgets.Dropdown(options={' ':0,'Beyond C': 1, 'At C': 2, 'Between C and F': 3, 'At F': 4, 'Between F and V': 5, 'Beyond V': 6, 'Not applicable (no image)': 7}, value=0, description='Position',) dropdown3_2 = widgets.Dropdown(options={' ':0,'Larger than the object': 1, 'Same size as the object': 2, 'Smaller than the object': 3, 'Not applicable (no image)': 4}, value=0, description='Relative size',) dropdown3_3 = widgets.Dropdown(options={' ':0,'Upright': 1, 'Inverted': 2, 'Not applicable (no image)': 3}, value=0, description='Orientation',) dropdown3_4 = widgets.Dropdown(options={' ':0,'Real': 1, 'Virtual': 2, 'Not applicable (no image)': 3}, value=0, description='Type',) #Display menus as 2x2 table container3_1 = widgets.VBox(children=[dropdown3_1,dropdown3_2]) container3_2 = widgets.VBox(children=[dropdown3_3,dropdown3_4]) display(widgets.HBox(children=[container3_1, container3_2])), print(" ", end='\r') #Evaluate input def check_answer3(b): answer3_1 = dropdown3_1.label answer3_2 = dropdown3_2.label answer3_3 = dropdown3_3.label answer3_4 = dropdown3_4.label if answer3_1 == "Beyond C" and answer3_2 == "Larger than the object" 
and answer3_3 == "Inverted" and answer3_4 == "Real": print("Correct! ", end='\r') elif answer3_1 != ' ' and answer3_2 != ' ' and answer3_3 != ' ' and answer3_4 != ' ': print("Try again.", end='\r') else: print(" ", end='\r') dropdown3_1.observe(check_answer3, names='value') dropdown3_2.observe(check_answer3, names='value') dropdown3_3.observe(check_answer3, names='value') dropdown3_4.observe(check_answer3, names='value') ``` ### Case 4: Object Located at $F$ In the fourth case, the distance of the object from the mirror is equal to the focal length. In other words, the object is located at the focus. In this case, we can draw only two rays to find the image of the reflected object. We cannot draw a ray passing through the focus because the object is located at $F$. Notice that the reflected rays are parallel and therefore do not intersect. As a consequence, no image is formed. ``` output_case_4 = widgets.Output() frame_case_4 = 1 #Toggle images def show_svg_case_4(): global frame_case_4 if frame_case_4 == 0: display(SVG("Images/case_4_0.svg")) frame_case_4 = 1 elif frame_case_4 == 1: display(SVG("Images/case_4_1.svg")) frame_case_4 = 2 elif frame_case_4 == 2: display(SVG("Images/case_4_2.svg")) frame_case_4 = 0 button_case_4 = widgets.Button(description="Toggle rays", button_style = 'success') display(button_case_4) def on_submit_button_case_4_clicked(b): with output_case_4: clear_output(wait=True) show_svg_case_4() with output_case_4: display(SVG("Images/case_4_0.svg")) button_case_4.on_click(on_submit_button_case_4_clicked) display(output_case_4) ``` **Question:** *Which options from the dropdown menus best describe the image formed by the reflected rays shown above?* ``` #import ipywidgets as widgets #Create dropdown menus dropdown4_1 = widgets.Dropdown(options={' ':0,'Beyond C': 1, 'At C': 2, 'Between C and F': 3, 'At F': 4, 'Between F and V': 5, 'Beyond V': 6, 'Not applicable (no image)': 7}, value=0, description='Position',) dropdown4_2 = 
widgets.Dropdown(options={' ':0,'Larger than the object': 1, 'Same size as the object': 2, 'Smaller than the object': 3, 'Not applicable (no image)': 4}, value=0, description='Relative size',) dropdown4_3 = widgets.Dropdown(options={' ':0,'Upright': 1, 'Inverted': 2, 'Not applicable (no image)': 3}, value=0, description='Orientation',) dropdown4_4 = widgets.Dropdown(options={' ':0,'Real': 1, 'Virtual': 2, 'Not applicable (no image)': 3}, value=0, description='Type',) #Display menus as 2x2 table container4_1 = widgets.VBox(children=[dropdown4_1,dropdown4_2]) container4_2 = widgets.VBox(children=[dropdown4_3,dropdown4_4]) display(widgets.HBox(children=[container4_1, container4_2])), print(" ", end='\r') #Evaluate input def check_answer4(b): answer4_1 = dropdown4_1.label answer4_2 = dropdown4_2.label answer4_3 = dropdown4_3.label answer4_4 = dropdown4_4.label if answer4_1 == "Not applicable (no image)" and answer4_2 == "Not applicable (no image)" and answer4_3 == "Not applicable (no image)" and answer4_4 == "Not applicable (no image)": print("Correct! ", end='\r') elif answer4_1 != ' ' and answer4_2 != ' ' and answer4_3 != ' ' and answer4_4 != ' ': print("Try again.", end='\r') else: print(" ", end='\r') dropdown4_1.observe(check_answer4, names='value') dropdown4_2.observe(check_answer4, names='value') dropdown4_3.observe(check_answer4, names='value') dropdown4_4.observe(check_answer4, names='value') ``` ### Case 5: Object Located between $F$ and $V$ In the fifth case, the distance of the object from the mirror is less than the focal length. In other words, the object is located between $F$ and $V$. In this case, we can find the image of the reflected object using two rays as shown below. Notice that the reflected rays do not actually converge. However, the projections of the reflected rays *do* converge behind the mirror. Therefore, a virtual image is formed. 
``` output_case_5 = widgets.Output() frame_case_5 = 1 #Toggle images def show_svg_case_5(): global frame_case_5 if frame_case_5 == 0: display(SVG("Images/case_5_0.svg")) frame_case_5 = 1 elif frame_case_5 == 1: display(SVG("Images/case_5_1.svg")) frame_case_5 = 2 elif frame_case_5 == 2: display(SVG("Images/case_5_2.svg")) frame_case_5 = 0 button_case_5 = widgets.Button(description="Toggle rays", button_style = 'success') display(button_case_5) def on_submit_button_case_5_clicked(b): with output_case_5: clear_output(wait=True) show_svg_case_5() with output_case_5: display(SVG("Images/case_5_0.svg")) button_case_5.on_click(on_submit_button_case_5_clicked) display(output_case_5) ``` **Question:** *Which options from the dropdown menus best describe the image formed by the reflected rays shown above?* ``` #Create dropdown menus dropdown5_1 = widgets.Dropdown(options={' ':0,'Beyond C': 1, 'At C': 2, 'Between C and F': 3, 'At F': 4, 'Between F and V': 5, 'Beyond V': 6, 'Not applicable (no image)': 7}, value=0, description='Position',) dropdown5_2 = widgets.Dropdown(options={' ':0,'Larger than the object': 1, 'Same size as the object': 2, 'Smaller than the object': 3, 'Not applicable (no image)': 4}, value=0, description='Relative size',) dropdown5_3 = widgets.Dropdown(options={' ':0,'Upright': 1, 'Inverted': 2, 'Not applicable (no image)': 3}, value=0, description='Orientation',) dropdown5_4 = widgets.Dropdown(options={' ':0,'Real': 1, 'Virtual': 2, 'Not applicable (no image)': 3}, value=0, description='Type',) #Display menus as 2x2 table container5_1 = widgets.VBox(children=[dropdown5_1,dropdown5_2]) container5_2 = widgets.VBox(children=[dropdown5_3,dropdown5_4]) display(widgets.HBox(children=[container5_1, container5_2])), print(" ", end='\r') #Evaluate input def check_answer5(b): answer5_1 = dropdown5_1.label answer5_2 = dropdown5_2.label answer5_3 = dropdown5_3.label answer5_4 = dropdown5_4.label if answer5_1 == "Beyond V" and answer5_2 == "Larger than the object" 
and answer5_3 == "Upright" and answer5_4 == "Virtual": print("Correct! ", end='\r') elif answer5_1 != ' ' and answer5_2 != ' ' and answer5_3 != ' ' and answer5_4 != ' ': print("Try again.", end='\r') else: print(" ", end='\r') dropdown5_1.observe(check_answer5, names='value') dropdown5_2.observe(check_answer5, names='value') dropdown5_3.observe(check_answer5, names='value') dropdown5_4.observe(check_answer5, names='value') ``` ### Convex Mirrors For reflections in convex mirrors, the location of the object does not change the general characteristics of the image. The image will always be between F and V, smaller than the object, upright, and virtual. ``` output_convex = widgets.Output() frame_convex = 1 #Toggle images def show_svg_convex(): global frame_convex if frame_convex == 0: display(SVG("Images/convex_mirror_reflection_0.svg")) frame_convex = 1 elif frame_convex == 1: display(SVG("Images/convex_mirror_reflection_1.svg")) frame_convex = 2 elif frame_convex == 2: display(SVG("Images/convex_mirror_reflection_2.svg")) frame_convex = 0 button_convex = widgets.Button(description="Toggle rays", button_style = 'success') display(button_convex) def on_submit_button_convex_clicked(b): with output_convex: clear_output(wait=True) show_svg_convex() with output_convex: display(SVG("Images/convex_mirror_reflection_0.svg")) button_convex.on_click(on_submit_button_convex_clicked) display(output_convex) ``` **Question:** *Which options from the dropdown menus best describe the image formed by the reflected rays shown above?* ``` #Create dropdown menus dropdown6_1 = widgets.Dropdown(options={' ':0,'Beyond C': 1, 'At C': 2, 'Between C and F': 3, 'At F': 4, 'Between F and V': 5, 'Beyond V': 6, 'Not applicable (no image)': 7}, value=0, description='Position',) dropdown6_2 = widgets.Dropdown(options={' ':0,'Larger than the object': 1, 'Same size as the object': 2, 'Smaller than the object': 3, 'Not applicable (no image)': 4}, value=0, description='Relative size',) dropdown6_3 = 
widgets.Dropdown(options={' ':0,'Upright': 1, 'Inverted': 2, 'Not applicable (no image)': 3}, value=0, description='Orientation',) dropdown6_4 = widgets.Dropdown(options={' ':0,'Real': 1, 'Virtual': 2, 'Not applicable (no image)': 3}, value=0, description='Type',) #Display menus as 2x2 table container6_1 = widgets.VBox(children=[dropdown6_1,dropdown6_2]) container6_2 = widgets.VBox(children=[dropdown6_3,dropdown6_4]) display(widgets.HBox(children=[container6_1, container6_2])), print(" ", end='\r') #Evaluate input def check_answer6(b): answer6_1 = dropdown6_1.label answer6_2 = dropdown6_2.label answer6_3 = dropdown6_3.label answer6_4 = dropdown6_4.label if answer6_1 == "Between F and V" and answer6_2 == "Smaller than the object" and answer6_3 == "Upright" and answer6_4 == "Virtual": print("Correct! ", end='\r') elif answer6_1 != ' ' and answer6_2 != ' ' and answer6_3 != ' ' and answer6_4 != ' ': print("Try again.", end='\r') else: print(" ", end='\r') dropdown6_1.observe(check_answer6, names='value') dropdown6_2.observe(check_answer6, names='value') dropdown6_3.observe(check_answer6, names='value') dropdown6_4.observe(check_answer6, names='value') ``` ## Conclusions In this notebook, the reflection of light off of plane and spherical mirrors was examined. In summary: * Light can be thought of as a collection of narrow beams called **rays** which travel in straight-line paths. This conceptualization of light is called the **ray model**. * The **law of reflection** states that the angle of reflection is equal to the angle of incidence. $$\theta_{r} = \theta_{i}$$ * A **specular reflection** is characterized by having reflected rays that are parallel and pointing in the same direction. * A **diffuse reflection** is characterized by having reflected rays pointing in various directions. * **Plane mirrors** always produce a virtual image behind the mirror. 
This image has the same size and orientation as the object, and the image and object are always equidistant from the mirror. * **A spherical mirror** is formed from a section of a sphere. If the reflecting surface is on the inside of the spherical section, the mirror is **concave**. If it is on the outside, the mirror is **convex**. * A **ray diagram** can be used to find the location and characteristics of a reflection in concave and convex mirrors. For concave mirrors, the characteristics of the possible images are summarized as follows: Object position | Image position | Relative size | Orientation | Type --- | --- | --- | --- | --- Beyond C | Between C and F | Smaller than the object | Inverted | Real At C | At C | Same size as the object | Inverted | Real Between C and F | Beyond C | Larger than the object | Inverted | Real At F | (No image) | (No image) | (No image) | (No image) Between F and V | Beyond V | Larger than the object | Upright | Virtual * The images formed by a convex mirror are always between F and V, smaller than the object, upright, and virtual. Images in this notebook represent original artwork. [![Callysto.ca License](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-bottom.jpg?raw=true)](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
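The concave-mirror summary above can also be checked numerically with the mirror equation, which is not derived in this notebook but is consistent with the ray diagrams: $1/f = 1/d_o + 1/d_i$, with magnification $m = -d_i/d_o$. Below is a minimal sketch with hypothetical helper names, using the usual sign convention: $d_i > 0$ means a real image in front of the mirror, $d_i < 0$ a virtual image behind it, and $m < 0$ an inverted image.

```python
# Mirror equation check: 1/f = 1/d_o + 1/d_i, magnification m = -d_i/d_o.
# Sign convention: d_i > 0 -> real image in front of the mirror,
# d_i < 0 -> virtual image behind it; m < 0 -> inverted image.

def image_distance(f, d_o):
    """Return the image distance d_i, or None when no image forms (object at F)."""
    if d_o == f:
        return None  # reflected rays are parallel -> no image
    return 1.0 / (1.0 / f - 1.0 / d_o)

def magnification(f, d_o):
    d_i = image_distance(f, d_o)
    return None if d_i is None else -d_i / d_o

f = 10.0            # focal length; the centre of curvature C is at 2f = 20
cases = {
    "beyond C":        30.0,
    "at C":            20.0,
    "between C and F": 15.0,
    "at F":            10.0,
    "between F and V":  5.0,
}
for name, d_o in cases.items():
    print(name, image_distance(f, d_o), magnification(f, d_o))
```

Each case reproduces the corresponding table row, e.g. an object beyond C ($d_o = 30$) gives $d_i = 15$ (between F and C) with $m = -0.5$ (smaller, inverted, real). For a convex mirror the focal length is negative, so `image_distance(-10.0, d_o)` is always negative (virtual) with $0 < m < 1$ (upright, smaller), matching the convex-mirror summary.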
# 1. Sequence Manager ``` %matplotlib inline %load_ext autoreload %autoreload 2 import sys sys.path.append('../..') from vimms.SequenceManager import * data_dir = os.path.join(os.path.abspath(os.path.join(os.path.join(os.getcwd(),".."),"..")),'tests','fixtures') dataset_file = os.path.join(data_dir, 'QCB_22May19_1.p') dataset = load_obj(dataset_file) ps = load_obj(Path(data_dir,'peak_sampler_mz_rt_int_beerqcb_fragmentation.p')) url = 'http://researchdata.gla.ac.uk/870/2/example_data.zip' base_dir = os.path.abspath(os.path.join(os.getcwd(),'..','01. Data', 'example_data')) if not os.path.isdir(base_dir): # if not exist then download the example data and extract it print('Creating %s' % base_dir) out_file = 'example_data.zip' download_file(url, out_file) extract_zip_file(out_file, delete=True) else: print('Found %s' % base_dir) mzml_file = os.path.join(base_dir, 'beers', 'fullscan', 'mzML', 'Beer_multibeers_1_fullscan1.mzML') mzml_file_list=[None, mzml_file, None, mzml_file] set_log_level_info() ``` ### Set some default parameters ``` experiment_dir = os.path.join(os.getcwd(), 'results') DEFAULT_SCAN_TIME_DICT = {1: 0.4, 2: 0.2} mass_spec_params = {'ionisation_mode': POSITIVE, 'peak_sampler': ps, 'mz_noise': None, 'intensity_noise': None, 'isolation_transition_window': 'rectangular', 'isolation_transition_window_params': None, 'scan_duration_dict': DEFAULT_SCAN_TIME_DICT} controller_params = {"ionisation_mode": POSITIVE, "N": 10, "mz_tol": 10, "rt_tol":30, "min_ms1_intensity": 1.75E5, "rt_range": [(200, 400)], "isolation_width": 1} ``` Note: you will need to install the same version of MZMine2 and put it in the same location as ViMMS ``` evaluation_methods = [] mzmine_command = os.path.abspath(os.path.join(os.getcwd(),'..','..','..','MZmine-2.40.1','MZmine-2.40.1','startMZmine_Windows.bat')) ``` ### Set up some simple schedules ``` d = { 'Sample ID' : ['blank1', 'sample1', 'blank2', 'sample2'], 'Controller Method': [None, 'TopNController', None, 'TopNController'], 
'Controller Params': [None, controller_params, None, controller_params], 'MassSpec Params' : [None, mass_spec_params, None, mass_spec_params], 'Dataset' : [None, dataset_file, None, dataset_file] } controller_schedule = pd.DataFrame(data=d) controller_schedule d2 = { 'Sample ID' : ['blank1', 'sample1', 'blank2', 'sample2'], 'Controller Method': [None, 'TopNController', None, 'TopNController'], 'Controller Params': [None, controller_params, None, controller_params], 'MassSpec Params' : [None, mass_spec_params, None, mass_spec_params], 'Dataset' : [None, None, None, None] } controller_schedule2 = pd.DataFrame(data=d2) controller_schedule2 ``` ### Example 1 - Seed with dataset, non-parallel ``` output_dir = os.path.join(experiment_dir, 'sequence_manager_example_1') parallel = False # note: true is not yet implemented vsm = VimmsSequenceManager(controller_schedule, evaluation_methods, output_dir, ms1_picked_peaks_file=None, progress_bar=True, mzmine_command=mzmine_command) experiment = BasicExperiment(vsm, parallel=parallel) experiment.results ``` ### Example 2 - Seed with mzml ``` output_dir = os.path.join(experiment_dir, 'sequence_manager_example_2') parallel = False vsm = VimmsSequenceManager(controller_schedule2, evaluation_methods, output_dir, ms1_picked_peaks_file=None, progress_bar=True, mzmine_command=mzmine_command) experiment = BasicExperiment(vsm, parallel=parallel, mzml_file_list=mzml_file_list, ps=ps) ```
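The controller schedule passed to `VimmsSequenceManager` is an ordinary pandas DataFrame, so it can be inspected or filtered before an experiment is run. A minimal sketch, independent of the ViMMS internals, that separates the blank injections (rows with no controller method) from the samples:

```python
import pandas as pd

# Hypothetical miniature schedule mirroring the structure used above
controller_params = {"N": 10, "mz_tol": 10}
d = {
    'Sample ID':         ['blank1', 'sample1', 'blank2', 'sample2'],
    'Controller Method': [None, 'TopNController', None, 'TopNController'],
    'Controller Params': [None, controller_params, None, controller_params],
}
schedule = pd.DataFrame(data=d)

# Rows with a controller method set are the actual samples
to_run = schedule[schedule['Controller Method'].notna()]
print(list(to_run['Sample ID']))  # ['sample1', 'sample2']
```

This kind of pre-filtering is also a quick way to sanity-check a longer schedule before committing to a simulation run.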
``` seed = 42 import pandas as pd import numpy as np import random np.random.seed(seed) random.seed(seed) ``` Load the data ``` data = pd.read_csv('data/EEG_data.csv') data.columns """ load subtitle vectors """ sub_vec_path = 'data/subtitles/subtitle_vecs.npy' sub_vecs = np.load(sub_vec_path) sub_vec_dim = sub_vecs.shape[1] """ Make a dataset of original data combined with sub vecs """ dataset = np.hstack((data.values.astype('float32'), sub_vecs)) ``` PCA to reduce dimension of the word average vectors (might give better results) to reduce training speed ``` from sklearn.decomposition import PCA sub_vec_dim = 12 pca = PCA(n_components=sub_vec_dim) pcad_sub_vecs = pca.fit_transform(sub_vecs) dataset = np.hstack((data.values.astype('float32'), pcad_sub_vecs)) ``` Define target variable and training variables ``` # The student's confusion column index is 14, predefined label index is 13 y_col = 13 orig_train_data_cols = list(range(2,y_col)) vector_cols = list(np.arange(sub_vec_dim) + 15) train_cols = orig_train_data_cols #+ vector_cols n_dims = len(train_cols) ``` Functions to make interval counts even for all data points ``` def min_max_rows_per_subject_vid(X): VideoID = list(set(X[:,1])) SubjectID = list(set(X[:,0])) max_intervals = 0 # length of signal min_intervals = len(X) for subId in SubjectID: for vidId in VideoID: X_tmp=X[(X[:, 0] == subId) & (X[:, 1] == vidId)] max_intervals = max(len(X_tmp), max_intervals) min_intervals = min(len(X_tmp), min_intervals) print(max_intervals) print(min_intervals) assert max_intervals == 144 return min_intervals, max_intervals min_intervals, max_intervals = min_max_rows_per_subject_vid(dataset) def zero_pad_data(data, max_intervals, train_cols, y_col): X_pad = None y = [] VideoID = list(set(data[:,1])) SubjectID = list(set(data[:,0])) for subId in SubjectID: for vidId in VideoID: data_sv = data[(data[:,0]==subId) & (data[:,1]==vidId)] y.append(data_sv[:, y_col].mean()) X_sv = data_sv[:, train_cols] pad_len = max_intervals - 
X_sv.shape[0] z = np.zeros((pad_len, X_sv.shape[1]), dtype=X_sv.dtype) z[:,0] = X_sv[:,0][pad_len] z[:,1] = X_sv[:,1][pad_len] X_sv_pad = np.concatenate((X_sv, z), axis=0) X_sv_pad = X_sv_pad.reshape(1, -1) X_pad = X_sv_pad if X_pad is None else np.vstack((X_pad, X_sv_pad)) return X_pad, np.array(y) def truncate_data(data, min_intervals, train_cols, y_col): X_trunc = None y = [] VideoID = list(set(data[:,1])) SubjectID = list(set(data[:,0])) for vidId in VideoID: for subId in SubjectID: data_sv = data[(data[:,0]==subId) & (data[:,1]==vidId)] y.append(data_sv[:, y_col].mean()) X_sv = data_sv[:, train_cols] trunc_len = min_intervals X_sv_trunc = X_sv[0:trunc_len].reshape(1, -1) X_trunc = X_sv_trunc if X_trunc is None else np.vstack((X_trunc, X_sv_trunc)) return X_trunc, np.array(y) """ Cross-validate """ from time import time from sklearn.metrics import precision_recall_fscore_support as pr_f1, accuracy_score as accur, matthews_corrcoef, \ roc_auc_score from sklearn.base import clone, BaseEstimator from time import time import numpy as np def get_scores(modelname, y, y_hat): acc = accur(y, y_hat) matt = matthews_corrcoef(y, y_hat) p, r, f1, sup = pr_f1(y, y_hat) roc_auc = roc_auc_score(y, y_hat) return acc, f1[1], f1[0], roc_auc def sklearn_cross_validate(modelname, model, data, intervals, even_data, n_test=2): """ even_data: either truncate_data or zero_pad_data to make number of intervals even for each data point """ results = [] start = time() for i in range(0, 10, n_test): cv_model = clone(model) X_train, y_train = even_data(data[np.in1d(data[:,0], (i, i+1), invert=True)], intervals, train_cols, y_col) X_test, y_test = even_data(data[np.in1d(data[:,0], (i, i+1))], intervals, train_cols, y_col) print(X_train.shape) #print(y_test) cv_model.fit(X_train, y_train) print('{}-fold cross validation for model {}, iteration {}' .format(int(10/n_test), modelname, len(results) +1)) acc, f1, f1_flip, roc_auc = get_scores('svm', y_test, np.round(cv_model.predict(X_test))) 
results.append({'acc': acc, 'F1': f1, 'F1-flipped': f1_flip, 'roc-auc': roc_auc}) result_means = {key:np.mean([r[key] for r in results]) for key in results[0].keys()} print('current cross-validation mean accuracy: {:.3f}, f1: {:.3f}, f1 flipped: {:.3f} and roc-auc: {:.3f}'.format( *result_means.values())) print('cross-validation total time: {:.3f} seconds'.format(time() - start)) return result_means, results ``` Test cross-validation with Support Vector Classifier ``` print(dataset.shape) from sklearn.svm import SVC model = SVC(C=1, kernel='linear') result_means, results = sklearn_cross_validate('svc', model, dataset, max_intervals, zero_pad_data) ``` Helper functions to save models in a format that can be used to create a pandas DataFrame ``` def init_scores(): return {'Model':[], 'Accuracy':[], 'F1':[], 'F1 flipped':[], 'ROC-AUC':[]} def save_model_score(scores, name, results): scores['Model'].append(name) scores['Accuracy'].append(results['acc']) scores['F1'].append(results['F1']) scores['F1 flipped'].append(results['F1-flipped']) scores['ROC-AUC'].append(results['roc-auc']) class Naive(BaseEstimator): def __init__(self): self.X = None self.y = None def fit(self, X, y): self.X = X self.y = y def predict(self, X): return np.zeros(len(X)) def predict_proba(self, X): return np.zeros(len(X)) def decision_function(self, X): return np.zeros(len(X)) ``` Perform cross-validation on multiple models ``` import warnings def warn(*args, **kwargs): pass old_warn = warnings.warn warnings.warn = warn from sklearn.neighbors import KNeighborsClassifier from sklearn.naive_bayes import GaussianNB from sklearn.naive_bayes import BernoulliNB from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import GradientBoostingClassifier from sklearn import linear_model as lm from sklearn.svm import SVC from sklearn.neural_network import MLPClassifier as MLP from sklearn.model_selection import cross_val_score scores = init_scores() sk_scores = init_scores() seed = 
np.random.randint(0, 2000) models = {'Naive': Naive(), 'Logreg': lm.LogisticRegression(random_state=seed), 'Ptron': lm.Perceptron(random_state=seed), 'KNN': KNeighborsClassifier(n_neighbors=5), 'GNB': GaussianNB(), 'BNB': BernoulliNB(), 'RF': RandomForestClassifier(random_state=seed), 'GBT': GradientBoostingClassifier(n_estimators=777, random_state=seed), 'MLP2': MLP(hidden_layer_sizes=((32,)*2), random_state=seed, solver='adam', activation='logistic'), 'MLP3': MLP(hidden_layer_sizes=((16, 8, 16)), random_state=seed, solver='adam', activation='logistic'), 'SVC Linear': SVC(C=1, kernel='linear', random_state=seed)} for name, model in models.items(): result_means, results = sklearn_cross_validate(name, model, dataset, max_intervals, zero_pad_data) save_model_score(scores, name, result_means) X, y = zero_pad_data(dataset, max_intervals, train_cols, y_col) result_means={key:np.mean(cross_val_score(model, X,y, cv=5, n_jobs=-1, scoring=metric)) for key, metric in {'F1':'f1', 'acc':'accuracy'}.items()} result_means['F1-flipped'] = 0. result_means['roc-auc'] = 0. 
save_model_score(sk_scores, name, result_means) ``` Plot the results ``` %matplotlib notebook from matplotlib import pyplot as plt scores_df = pd.DataFrame(data=scores) print(scores_df) print(scores_df.Model.values) ax = scores_df.plot(kind='line', x='Model', y=['Accuracy', 'ROC-AUC', 'F1'], xticks=range(len(models) + 1), yticks=np.arange(10) / 10, color=['black', 'orange', 'green']) ax.set_xticklabels(list(scores_df.Model)) for p in ax.patches: ax.annotate('{:.2f}'.format(p.get_height()), (p.get_x() * 1.005, p.get_height() * 1.02)) ax.axhline(y=0.7, xmin=0.0, xmax=1.0, color='black', linewidth=.1) ax.axhline(y=0.6, xmin=0.0, xmax=1.0, color='black', linewidth=.1) plt.tight_layout() plt.show() scores_df = pd.DataFrame(data=sk_scores) print(scores_df) print(scores_df.Model.values) ax = scores_df.plot(kind='line', x='Model', y=['Accuracy', 'ROC-AUC', 'F1'], xticks=range(len(models) + 1), yticks=np.arange(10) / 10, color=['black', 'orange', 'green']) ax.set_xticklabels(list(scores_df.Model)) for p in ax.patches: ax.annotate('{:.2f}'.format(p.get_height()), (p.get_x() * 1.005, p.get_height() * 1.02)) ax.axhline(y=0.7, xmin=0.0, xmax=1.0, color='black', linewidth=.1) ax.axhline(y=0.6, xmin=0.0, xmax=1.0, color='black', linewidth=.1) plt.tight_layout() plt.show() print(dataset.shape) print(len(train_cols)) ```
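The `zero_pad_data` helper above flattens each (subject, video) block of intervals into a single fixed-length feature row. The core padding step can be sketched in isolation with toy shapes (this simplified version ignores the notebook's extra step of copying boundary values into the padded rows):

```python
import numpy as np

def pad_rows(X, target_rows):
    """Zero-pad a 2-D block along axis 0 so it ends up with target_rows rows."""
    pad_len = target_rows - X.shape[0]
    z = np.zeros((pad_len, X.shape[1]), dtype=X.dtype)
    return np.concatenate((X, z), axis=0)

block = np.arange(6, dtype=float).reshape(3, 2)  # a block with only 3 intervals
padded = pad_rows(block, 5)                      # pad to the maximum, 5 intervals
print(padded.shape)                 # (5, 2)
print(padded.reshape(1, -1).shape)  # (1, 10) -- one fixed-length feature row
```

Padding (rather than truncating) keeps all observed intervals at the cost of feeding the classifier trailing zeros, which is why the notebook compares both strategies.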
# Neighbors of neighbors In this notebook, we demonstrate how neighbor-based filters work in the context of measurements of cells in tissues. We also determine neighbors of neighbors and extend the radius of such filters. ``` import pyclesperanto_prototype as cle import numpy as np import matplotlib from numpy.random import random cle.select_device("RTX") # Generate artificial cells as test data tissue = cle.artificial_tissue_2d() touch_matrix = cle.generate_touch_matrix(tissue) cle.imshow(tissue, labels=True) ``` # Associate artificial measurements to the cells ``` centroids = cle.label_centroids_to_pointlist(tissue) coordinates = cle.pull_zyx(centroids) values = random([coordinates.shape[1]]) for i, y in enumerate(coordinates[1]): if (y < 128): values[i] = values[i] * 10 + 45 else: values[i] = values[i] * 10 + 90 measurements = cle.push_zyx(np.asarray([values])) # visualize measurements in space parametric_image = cle.replace_intensities(tissue, measurements) cle.imshow(parametric_image, min_display_intensity=0, max_display_intensity=100, color_map='jet') ``` # Local averaging smoothes edges By averaging measurements locally, we can reduce the noise, but we also introduce a stripe where the regions touch. ``` local_mean_measurements = cle.mean_of_touching_neighbors(measurements, touch_matrix) parametric_image = cle.replace_intensities(tissue, local_mean_measurements) cle.imshow(parametric_image, min_display_intensity=0, max_display_intensity=100, color_map='jet') ``` # Edge-preserving filters: median By averaging using a median filter, we can also reduce noise while keeping the edge between the regions sharp. ``` local_median_measurements = cle.median_of_touching_neighbors(measurements, touch_matrix) parametric_image = cle.replace_intensities(tissue, local_median_measurements) cle.imshow(parametric_image, min_display_intensity=0, max_display_intensity=100, color_map='jet') ``` # Increasing filter radius: neighbors of neighbors In order to increase the radius of the
operation, we need to determine neighbors of touching neighbors. ``` neighbor_matrix = cle.neighbors_of_neighbors(touch_matrix) local_median_measurements = cle.median_of_touching_neighbors(measurements, neighbor_matrix) parametric_image = cle.replace_intensities(tissue, local_median_measurements) cle.imshow(parametric_image, min_display_intensity=0, max_display_intensity=100, color_map='jet') ``` ## Short-cuts for visualisation only If you're not so much interested in the vectors of measurements, there are shortcuts: for example, for visualizing the mean value of neighboring pixels with different radii: ``` # visualize measurements in space measurement_image = cle.replace_intensities(tissue, measurements) print('original') cle.imshow(measurement_image, min_display_intensity=0, max_display_intensity=100, color_map='jet') for radius in range(0, 5): print('Radius', radius) # note: this function takes a parametric image and the label map instead of a vector and the touch_matrix used above parametric_image = cle.mean_of_touching_neighbors_map(measurement_image, tissue, radius=radius) cle.imshow(parametric_image, min_display_intensity=0, max_display_intensity=100, color_map='jet') ```
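The touching-neighbor filters above have a simple linear-algebra core: with a binary touch (adjacency) matrix `A` and a measurement vector `v`, the mean over each neighborhood is a matrix product, and neighbors-of-neighbors is the boolean square of the matrix. A plain NumPy sketch of that idea on a hand-made 4-cell example (not the GPU implementation used by pyclesperanto):

```python
import numpy as np

# Adjacency for 4 cells in a row: cell pairs 0-1, 1-2, 2-3 touch
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
v = np.array([10.0, 20.0, 30.0, 40.0])

# Include each cell itself in its neighborhood
A_self = A + np.eye(4)

# Mean of touching neighbors: neighborhood sum / neighborhood size
mean_neighbors = (A_self @ v) / A_self.sum(axis=1)
print(mean_neighbors)  # [15. 20. 30. 35.]

# Neighbors of neighbors: any cell reachable within two touches
A2 = ((A_self @ A_self) > 0).astype(float)
mean_radius2 = (A2 @ v) / A2.sum(axis=1)
print(mean_radius2)    # [20. 25. 25. 30.]
```

Squaring the matrix again would widen the radius by one more ring of neighbors, which is what repeated applications of `neighbors_of_neighbors` (or the `radius` parameter below) achieve.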
## Linear Regression with the Eager API (eager execution) ``` import matplotlib.pyplot as plt import numpy as np import tensorflow as tf import tensorflow.contrib.eager as tfe tfe.enable_eager_execution() # Training Data train_X = [3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, 2.167, 7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1] train_Y = [1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53, 1.221, 2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3] n_samples = len(train_X) # Parameters learning_rate = 0.01 display_step = 100 num_steps = 1000 W = tfe.Variable(np.random.randn()) b = tfe.Variable(np.random.randn()) def linear_regression(inputs): return inputs * W + b def mean_square_fn(model_fn, inputs, labels): return tf.reduce_sum(tf.pow(model_fn(inputs) - labels, 2) / (2 * n_samples)) # Stochastic gradient descent optimizer optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) # Compute gradients grad = tfe.implicit_gradients(mean_square_fn) print("Initial cost={:.9f}".format(mean_square_fn(linear_regression, train_X, train_Y)), "W=", W.numpy(), "b=", b.numpy()) # Training for step in range(num_steps): optimizer.apply_gradients(grad(linear_regression, train_X, train_Y)) if (step + 1) % display_step == 0: print("Epoch:" '%04d' % (step + 1), "cost=", "{:.9f}".format(mean_square_fn(linear_regression, train_X, train_Y)), "W=", W.numpy(), "b=", b.numpy()) # Graphic display plt.plot(train_X, train_Y, 'ro', label='Original data') plt.plot(train_X, np.array(W * train_X + b), label='Fitted line') plt.legend() plt.show() ``` ## Logistic Regression for Handwritten Digit Recognition ``` from __future__ import absolute_import, division, print_function import tensorflow as tf import tensorflow.contrib.eager as tfe from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("/tmp/data", one_hot=False) # Parameters learning_rate = 0.1 batch_size = 128 num_steps = 1000 display_step = 100 dataset = tf.data.Dataset.from_tensor_slices((mnist.train.images, mnist.train.labels)).batch(batch_size) dataset_iter = tfe.Iterator(dataset) W =
tfe.Variable(tf.zeros([784, 10]), name="weights") b = tfe.Variable(tf.zeros([10]), name="bias") def logistic_regression(inputs): return tf.matmul(inputs, W) + b def loss_fn(inference_fn, inputs, labels): # Use cross-entropy loss return tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits( logits=inference_fn(inputs), labels=labels)) # Compute accuracy def accuracy_fn(inference_fn, inputs, labels): prediction = tf.nn.softmax(inference_fn(inputs)) correct_pred = tf.equal(tf.argmax(prediction, 1), labels) return tf.reduce_mean(tf.cast(correct_pred, tf.float32)) # Stochastic gradient descent optimizer optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) grad = tfe.implicit_gradients(loss_fn) # Training average_loss = 0. average_acc = 0. for step in range(num_steps): try: d = dataset_iter.next() except StopIteration: dataset_iter = tfe.Iterator(dataset) d = dataset_iter.next() x_batch = d[0] y_batch = tf.cast(d[1], dtype=tf.int64) # Compute the batch loss batch_loss = loss_fn(logistic_regression, x_batch, y_batch) average_loss += batch_loss # Compute the batch accuracy batch_accuracy = accuracy_fn(logistic_regression, x_batch, y_batch) average_acc += batch_accuracy if step == 0: print("Initial loss={:.9f}".format(average_loss)) optimizer.apply_gradients(grad(logistic_regression, x_batch, y_batch)) # Display info if (step + 1) % display_step == 0 or step == 0: if step > 0: average_loss /= display_step average_acc /= display_step print("Step:", '%04d' % (step + 1), " loss=", "{:.9f}".format(average_loss), " accuracy=", "{:.4f}".format(average_acc)) average_loss = 0. average_acc = 0. # Evaluate model on the test image set testX = mnist.test.images testY = mnist.test.labels test_acc = accuracy_fn(logistic_regression, testX, testY) print("Test set accuracy: {:.4f}".format(test_acc)) ```
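A note on portability: `tf.contrib.eager` no longer exists in TensorFlow 2.x, where eager execution is on by default. The same gradient-descent loop for the linear-regression cost `sum((W*x + b - y)**2) / (2*n)` can be sketched with plain NumPy to sanity-check the fitted values of `W` and `b`:

```python
import numpy as np

train_X = np.array([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59,
                    2.167, 7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1])
train_Y = np.array([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53,
                    1.221, 2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])
n = len(train_X)
W, b = 0.0, 0.0
lr = 0.01

def cost(W, b):
    return np.sum((W * train_X + b - train_Y) ** 2) / (2 * n)

for _ in range(1000):
    err = W * train_X + b - train_Y
    # Gradients of the cost with respect to W and b
    gW = np.sum(err * train_X) / n
    gb = np.sum(err) / n
    W -= lr * gW
    b -= lr * gb

print(W, b, cost(W, b))
```

Starting from zeros (rather than random initialization) makes the run deterministic, so the final cost can be compared directly against what the TensorFlow loop prints.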