code
stringlengths
2.5k
150k
kind
stringclasses
1 value
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/tensorflow-install-mac-metal-jul-2021.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # T81-558: Applications of Deep Neural Networks **Manual Python Setup** * Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx) * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). # Software Installation (Mac on Apple Metal M1) This class is technically oriented. A successful student needs to be able to compile and execute Python code that makes use of TensorFlow for deep learning. There are two options for you to accomplish this: * Install Python, TensorFlow, and some IDE (Jupyter, TensorFlow, and others) * Use Google CoLab in the cloud ## Installing Python and TensorFlow Is your Mac Intel or Apple Metal (ARM)? The newer Mac ARM M1-based machines have considerably better deep learning support than their older Intel-based counterparts. Mac has not supported NVIDIA GPUs since 2016; however, the new M1 chips offer similar capabilities that will allow you to run most of the code in this course. You can run any code not supported by the Apple M1 chip through Google CoLab, a free GPU-based Python environment. If you are running an older Intel Mac, there still are some options. Refer to my [Intel Mac installation guide](tensorflow-install-mac-jan-2021.ipynb). With the introduction of the M1 chip, Apple introduced a system on a chip. The new Mac M1 contains CPU, GPU, and deep learning hardware support, all on a single chip. The Mac M1 can run software created for the older Intel Mac's using an emulation layer called [Rosetta](https://en.wikipedia.org/wiki/Rosetta_(software)). To leverage the new M1 chip from Python, you must use a special Python distribution called [Miniforge](https://github.com/conda-forge/miniforge). Miniforge replaces other Python distributions that you might have installed, such as Anaconda or Miniconda. Apple instructions suggest that you remove Anaconda or Miniconda before installing Miniforge. Because the Mac M1 is a very different architecture than Intel, the Miniforge distribution will maximize your performance. Be aware that once you install Miniforge, it will become your default Python interpreter. ## Install Miniforge There are a variety of methods for installing Miniforge. If you have trouble following my instructions, you may refer to this [installation process](https://developer.apple.com/metal/tensorflow-plugin/), upon which I base these instructions. I prefer to use [Homebrew](https://brew.sh/) to install Miniforge. Homebrew is a package manager for the Mac, similar to **yum** or **apt-get** for Linux. To install Homebrew, follow this [link](https://brew.sh/) and copy/paste the installation command into a Mac terminal window. Once you have installed Homebrew, I suggest closing the terminal window and opening a new one to complete the installation. Next, you should install the xcode-select command-line utilities. Use the following command to install: ``` xcode-select --install ``` If the above command gives an error, you should install XCode from the App Store. You will now use Homebrew to install Miniforge with the following command: ``` brew install miniforge ``` You should note which directory ## Initiate Miniforge Run the following command to initiate your conda base environment: ``` conda init ``` This will set the python `PATH` to the Miniforge base in your profile (`~/.bash_profile` if bash or `~/.zshenv` if zsh) and create the base virtual environment. ## Make Sure you Have the Correct Python (when things go wrong) Sometimes previous versions of Python might have been installed, and when you attempt to run the install script below you will recieve an error: ``` Collecting package metadata (repodata.json): done Solving environment: failed ResolvePackageNotFound: - tensorflow-deps ``` To verify that you have the correct Python version registered, close and reopen your terminal window. Issue the following command: ``` which python ``` This command should respond with something similar to: ``` /opt/homebrew/Caskroom/miniforge/base/bin/python ``` The key things to look for in the above response are "homebrew" and "miniforge". If you see "anaconda" or "miniconda" your path is pointing to the wrong Python. You will need to modify your ".zshrc", make sure that the three Python paths match the path that "brew" installed it into earlier. Most likely your "miniforge" is installed in one of these locations: * /usr/local/Caskroom/miniforge/base * /opt/homebrew/Caskroom/miniforge/base More info [here](https://github.com/conda-forge/miniforge/issues/127). ## Install Jupyter and Create Environment Next, lets install Jupyter, which is the editor you will use in this course. ``` conda install -y jupyter ``` We will actually launch Jupyter later. First, we deactivate the base environment. ``` conda deactivate ``` Next, we will install the Mac M1 [tensorflow-apple-metal.yml](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/tensorflow-apple-metal.yml) file that I provide. Run the following command from the same directory that contains **tensorflow-apple-metal.yml**. ``` conda env create -f tensorflow-apple-metal.yml -n tensorflow ``` # Issues Creating Environment (when things go wrong) Due to some [recent changes](https://github.com/grpc/grpc/issues/25082) in one of the TensorFlow dependancies you may get the following error when installing the YML file. ``` Collecting grpcio Using cached grpcio-1.34.0.tar.gz (21.0 MB) ERROR: Command errored out with exit status 1: ``` If you encounter this error, remove your environment, define two environmental variables, and try again: ``` conda env remove --name tensorflow export GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1 export GRPC_PYTHON_BUILD_SYSTEM_ZLIB=1 conda env create -f tensorflow-apple-metal.yml -n tensorflow ``` # Activating New Environment To enter this environment, you must use the following command: ``` conda activate tensorflow ``` For now, let's add Jupyter support to your new environment. ``` conda install nb_conda ``` ## Register your Environment The following command registers your **tensorflow** environment. Again, make sure you "conda activate" your new **tensorflow** environment. ``` python -m ipykernel install --user --name tensorflow --display-name "Python 3.9 (tensorflow)" ``` ## Testing your Environment You can now start Jupyter notebook. Use the following command. ``` jupyter notebook ``` You can now run the following code to check that you have the versions expected. ``` # What version of Python do you have? import sys import tensorflow.keras import pandas as pd import sklearn as sk import tensorflow as tf print(f"Tensor Flow Version: {tf.__version__}") print(f"Keras Version: {tensorflow.keras.__version__}") print() print(f"Python {sys.version}") print(f"Pandas {pd.__version__}") print(f"Scikit-Learn {sk.__version__}") gpu = len(tf.config.list_physical_devices('GPU'))>0 print("GPU is", "available" if gpu else "NOT AVAILABLE") ```
github_jupyter
# Introduction to Graph Matching ``` import numpy as np import matplotlib.pyplot as plt %matplotlib inline ``` The graph matching problem (GMP), is meant to find an allignment of nodes between two graphs that minimizes the number of edge disagreements between those two graphs. Therefore, the GMP can be formally written as an optimization problem: \begin{equation} \begin{aligned} \min & {\;-trace(APB^T P^T)}\\ \text{s.t. } & {\;P \: \epsilon \: \mathcal{P}} \\ \end{aligned} \end{equation} Where $\mathcal{P}$ is the set of possible permutation matrices. The Quadratic Assignment problem is a combinatorial opimization problem, modeling following the real-life problem: "Consider the problem of allocating a set of facilities to a set of locations, with the cost being a function of the distance and flow between the facilities, plus costs associated with a facility being placed at a certain location. The objective is to assign each facility to a location such that the total cost is minimized." [1] When written as an optimization problem, the QAP is represented as: \begin{equation} \begin{aligned} \min & {\; trace(APB^T P^T)}\\ \text{s.t. } & {\;P \: \epsilon \: \mathcal{P}} \\ \end{aligned} \end{equation} Since the GMP objective function is the negation of the QAP objective function, any algorithm that solves one can solve the other. This class is an implementation of the Fast Approximate Quadratic Assignment Problem (FAQ), an algorithm designed to efficiently and accurately solve the QAP, as well as GMP. [1] Optimierung, Diskrete & Er, Rainer & Ela, A & Burkard, Rainer & Dragoti-Cela, Eranda & Pardalos, Panos & Pitsoulis, Leonidas. (1998). The Quadratic Assignment Problem. Handbook of Combinatorial Optimization. 10.1007/978-1-4613-0303-9_27. ``` from graspy.match import GraphMatch as GMP from graspy.simulations import er_np ``` For the sake of tutorial, we will use FAQ to solve the GMP for two graphs where we know a solution exists. Below, we sample a binary graph (undirected and no self-loops) $G_1 \sim ER_{NP}(50, 0.3)$. Then, we randomly shuffle the nodes of $G_1$ to initiate $G_2$. The number of edge disagreements as a result of the node shuffle is printed below. ``` n = 50 p = 0.3 np.random.seed(1) G1 = er_np(n=n, p=p) node_shuffle_input = np.random.permutation(n) G2 = G1[np.ix_(node_shuffle_input, node_shuffle_input)] print("Number of edge disagreements: ", sum(sum(abs(G1-G2)))) ``` ## Visualize the graphs using heat mapping ``` from graspy.plot import heatmap heatmap(G1, cbar=False, title = 'G1 [ER-NP(50, 0.3) Simulation]') heatmap(G2, cbar=False, title = 'G2 [G1 Randomly Shuffled]') ``` Below, we create a model to solve GMP. The model is then fitted for the two graphs $G_1$ and $G_2$. One of the option for the algorithm is the starting position of $P$. In this case, the class default of barycenter intialization is used, or the flat doubly stochastic matrix. The number of edge disagreements is printed below. With zero edge disagreements, we see that FAQ is successful in unshuffling the graph. ``` gmp = GMP() gmp = gmp.fit(G1,G2) G2 = G2[np.ix_(gmp.perm_inds_, gmp.perm_inds_)] print("Number of edge disagreements: ", sum(sum(abs(G1-G2)))) heatmap(G1, cbar=False, title = 'G1[ER-NP(50, 0.3) Simulation]') heatmap(G2, cbar=False, title = 'G2[ER-NP(50, 0.3) Randomly Shuffled] unshuffled') ```
github_jupyter
``` cat ratings_train.txt | head -n 10 def read_data(filename): with open(filename, 'r') as f: data = [line.split('\t') for line in f.read().splitlines()] # txt ํŒŒ์ผ์˜ ํ—ค๋”(id document label)๋Š” ์ œ์™ธํ•˜๊ธฐ data = data[1:] return data train_data = read_data('ratings_train.txt') test_data = read_data('ratings_test.txt') print(len(train_data)) print(train_data[0]) print(len(test_data)) print(len(test_data[0])) from konlpy.tag import Okt okt = Okt() print(okt.pos(u'์ด ๋ฐค ๊ทธ๋‚ ์˜ ๋ฐ˜๋”ง๋ถˆ์„ ๋‹น์‹ ์˜ ์ฐฝ ๊ฐ€๊นŒ์ด ๋ณด๋‚ผ๊ฒŒ์š”')) import json import os from pprint import pprint def tokenize(doc): # norm์€ ์ •๊ทœํ™”, stem์€ ๊ทผ์–ด๋กœ ํ‘œ์‹œํ•˜๊ธฐ๋ฅผ ๋‚˜ํƒ€๋ƒ„ return ['/'.join(t) for t in okt.pos(doc, norm=True, stem=True)] if os.path.isfile('train_docs.json'): with open('train_docs.json') as f: train_docs = json.load(f) with open('test_docs.json') as f: test_docs = json.load(f) else: train_docs = [(tokenize(row[1]), row[2]) for row in train_data] test_docs = [(tokenize(row[1]), row[2]) for row in test_data] # JSON ํŒŒ์ผ๋กœ ์ €์žฅ with open('train_docs.json', 'w', encoding="utf-8") as make_file: json.dump(train_docs, make_file, ensure_ascii=False, indent="\t") with open('test_docs.json', 'w', encoding="utf-8") as make_file: json.dump(test_docs, make_file, ensure_ascii=False, indent="\t") # ์˜ˆ์˜๊ฒŒ(?) ์ถœ๋ ฅํ•˜๊ธฐ ์œ„ํ•ด์„œ pprint ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ์‚ฌ์šฉ pprint(train_docs[0]) tokens = [t for d in train_docs for t in d[0]] print(len(tokens)) import nltk text = nltk.Text(tokens, name='NMSC') # ์ „์ฒด ํ† ํฐ์˜ ๊ฐœ์ˆ˜ print(len(text.tokens)) # ์ค‘๋ณต์„ ์ œ์™ธํ•œ ํ† ํฐ์˜ ๊ฐœ์ˆ˜ print(len(set(text.tokens))) # ์ถœํ˜„ ๋นˆ๋„๊ฐ€ ๋†’์€ ์ƒ์œ„ ํ† ํฐ 10๊ฐœ pprint(text.vocab().most_common(10)) import matplotlib.pyplot as plt from matplotlib import font_manager, rc %matplotlib inline font_fname = '/Library/Fonts/NanumGothic.ttf' font_name = font_manager.FontProperties(fname=font_fname).get_name() rc('font', family=font_name) plt.figure(figsize=(20,10)) text.plot(50) selected_words = [f[0] for f in text.vocab().most_common(100)] def term_frequency(doc): return [doc.count(word) for word in selected_words] train_x = [term_frequency(d) for d, _ in train_docs] test_x = [term_frequency(d) for d, _ in test_docs] train_y = [c for _, c in train_docs] test_y = [c for _, c in test_docs] import tensorflow as tf checkpoint_path = "training_1/cp.ckpt" checkpoint_dir = os.path.dirname(checkpoint_path) cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path, save_weights_only=True, verbose=1) import numpy as np x_train = np.asarray(train_x).astype('float32') x_test = np.asarray(test_x).astype('float32') y_train = np.asarray(train_y).astype('float32') y_test = np.asarray(test_y).astype('float32') from tensorflow.keras import models from tensorflow.keras import layers from tensorflow.keras import optimizers from tensorflow.keras import losses from tensorflow.keras import metrics model = models.Sequential() model.add(layers.Dense(64, activation='relu', input_shape=(100,))) model.add(layers.Dense(64, activation='relu')) model.add(layers.Dense(1, activation='sigmoid')) model.compile(optimizer=optimizers.RMSprop(lr=0.001), loss=losses.binary_crossentropy, metrics=[metrics.binary_accuracy]) model.fit(x_train, y_train, epochs=10, batch_size=512, callbacks=[cp_callback]) results = model.evaluate(x_test, y_test) results def predict_pos_neg(review): token = tokenize(review) tf = term_frequency(token) data = np.expand_dims(np.asarray(tf).astype('float32'), axis=0) score = float(model.predict(data)) if(score > 0.5): #print("[{}]๋Š” {:.2f}% ํ™•๋ฅ ๋กœ ๊ธ์ • ๋ฆฌ๋ทฐ์ด์ง€ ์•Š์„๊นŒ ์ถ”์ธกํ•ด๋ด…๋‹ˆ๋‹ค.^^\n".format(review, score * 100)) return score else: #print("[{}]๋Š” {:.2f}% ํ™•๋ฅ ๋กœ ๋ถ€์ • ๋ฆฌ๋ทฐ์ด์ง€ ์•Š์„๊นŒ ์ถ”์ธกํ•ด๋ด…๋‹ˆ๋‹ค.^^;\n".format(review, (1 - score) * 100)) return -score ''' predict_pos_neg("์˜ฌํ•ด ์ตœ๊ณ ์˜ ์˜ํ™”! ์„ธ ๋ฒˆ ๋„˜๊ฒŒ ๋ด๋„ ์งˆ๋ฆฌ์ง€๊ฐ€ ์•Š๋„ค์š”.") predict_pos_neg("๋ฐฐ๊ฒฝ ์Œ์•…์ด ์˜ํ™”์˜ ๋ถ„์œ„๊ธฐ๋ž‘ ๋„ˆ๋ฌด ์•ˆ ๋งž์•˜์Šต๋‹ˆ๋‹ค. ๋ชฐ์ž…์— ๋ฐฉํ•ด๊ฐ€ ๋ฉ๋‹ˆ๋‹ค.") predict_pos_neg("์ฃผ์—ฐ ๋ฐฐ์šฐ๊ฐ€ ์‹ ์ธ์ธ๋ฐ ์—ฐ๊ธฐ๋ฅผ ์ง„์งœ ์ž˜ ํ•˜๋„ค์š”. ๋ชฐ์ž…๊ฐ ใ…Žใ„ทใ„ท") predict_pos_neg("๋ฏฟ๊ณ  ๋ณด๋Š” ๊ฐ๋…์ด์ง€๋งŒ ์ด๋ฒˆ์—๋Š” ์•„๋‹ˆ๋„ค์š”") predict_pos_neg("์ฃผ์—ฐ๋ฐฐ์šฐ ๋•Œ๋ฌธ์— ๋ดค์–ด์š”") ''' ''' predict_pos_neg("ํ˜น์‹œ UCPC ๋ณธ์„ ") predict_pos_neg("์ง„์ถœํ•œ ํŒ€ ์žˆ๋‚˜์š”") predict_pos_neg("3๋ถ„ ํ›„ ๋ณธ์„  ๋๋‚˜๋Š”๋ฐ") predict_pos_neg("์šฐ๋ฆฌํŒ€๋งŒ ํ–ˆ๋‚˜ ํ•ด์„œ..") ''' def read_chat_data(filename): with open(filename, 'r') as f: data = [line.split('\t') for line in f.read().splitlines()] return data def make_score_dictionary(data): score_list = {} for chat in data: if(chat[0] in score_list): score_list[chat[0]].append(predict_pos_neg(chat[1])) else: score_list[chat[0]] = [predict_pos_neg(chat[1])] return score_list def make_average_score_dictionary(score_dictionary): average_score_dictionary = {} for key in score_dictionary.keys(): average_score_dictionary[key] = sum(score_dictionary[key])/len(score_dictionary[key]) return average_score_dictionary print(make_average_score_dictionary(make_score_dictionary(read_chat_data('test.txt')))) ```
github_jupyter
# VEHICLE_NUMBER_PLATE_RECOGNITION ## PART-1(DETECTION) #### 1. Importing required Libraries ``` ! pip install pydrive import os from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials # 1. Authenticate and create the PyDrive client. auth.authenticate_user() gauth = GoogleAuth() gauth.credentials = GoogleCredentials.get_application_default() drive = GoogleDrive(gauth) local_download_path = os.path.expanduser('~/data') try: os.makedirs(local_download_path) except: pass from google.colab import drive drive.mount('/content/drive',force_remount=True) %cd 'drive/My Drive' ``` #### 2. Installing required configs for darknet to use YOLOv3 ``` !apt-get update > /dev/null !apt-get upgrade > /dev/null !apt-get install build-essential > /dev/null !apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev > /dev/null !apt-get install libopencv-dev > /dev/null !apt-get install libavcodec-dev libavformat-dev libswscale-d > /dev/null %cd darknet !sed -i 's/OPENCV=1/OPENCV=0/g' Makefile !sed -i 's/GPU=0/GPU=1/g' Makefile !sed -i 's/CUDNN=0/CUDNN=1/g' Makefile !sed -i 's/CUDNN_HALF=0/CUDNN_HALF=1/g' Makefile !make !apt install g++-5 !apt install gcc-5 !apt update !apt upgrade !nvidia-smi !nvcc --version %cd darknet !sed -i 's/OPENCV=1/OPENCV=0/g' Makefile !sed -i 's/GPU=0/GPU=1/g' Makefile !sed -i 's/CUDNN=0/CUDNN=1/g' Makefile !sed -i 's/CUDNN_HALF=0/CUDNN_HALF=1/g' Makefile !make ``` #### 3. Testing the validate dataset using YOLOv3 with pre-trained weights and storing the coordinates in a json file ``` !./darknet detector test data/obj.data yolo-obj.cfg yolo-obj_final_vehicle.weights -ext_output -dont_show -out result_vehicle_plates.json < data/valid.txt import pandas as pd import json with open('result_vehicle_plates.json') as file: data=json.load(file) ``` #### 4. Coordinates DataFrame ``` dataset = pd.DataFrame(data) dataset ``` #### 5. Precision, Recall, F1-Score, Avg, map ``` ITERATION = "2500" !./darknet detector map data/obj.data yolo-obj.cfg yolo-obj_final_vehicle.weights -points 0 -iou_thresh 0.1 -thresh 0.2 > yolo-test_{ITERATION}.log !./darknet detector map data/obj.data yolo-obj.cfg yolo-obj_final_vehicle.weights -points 0 -iou_thresh 0.1 -thresh 0.15 >> yolo-test_{ITERATION}.log !./darknet detector map data/obj.data yolo-obj.cfg yolo-obj_final_vehicle.weights -points 0 -iou_thresh 0.1 -thresh 0.1 >> yolo-test_{ITERATION}.log !./darknet detector map data/obj.data yolo-obj.cfg yolo-obj_final_vehicle.weights -points 0 -iou_thresh 0.1 -thresh 0.05 >> yolo-test_{ITERATION}.log !./darknet detector map data/obj.data yolo-obj.cfg yolo-obj_final_vehicle.weights -points 0 -iou_thresh 0.05 -thresh 0.2 >> yolo-test_{ITERATION}.log !./darknet detector map data/obj.data yolo-obj.cfg yolo-obj_final_vehicle.weights -points 0 -iou_thresh 0.1 -thresh 0.2 >> yolo-test_{ITERATION}.log !./darknet detector map data/obj.data yolo-obj.cfg yolo-obj_final_vehicle.weights -points 0 -iou_thresh 0.15 -thresh 0.2 >> yolo-test_{ITERATION}.log !./darknet detector map data/obj.data yolo-obj.cfg yolo-obj_final_vehicle.weights -points 0 -iou_thresh 0.2 -thresh 0.2 >> yolo-test_{ITERATION}.log !./darknet detector map data/obj.data yolo-obj.cfg yolo-obj_final_vehicle.weights -points 0 -iou_thresh 0.3 -thresh 0.2 >> yolo-test_{ITERATION}.log !./darknet detector map data/obj.data yolo-obj.cfg yolo-obj_final_vehicle.weights -points 0 -iou_thresh 0.5 -thresh 0.2 >> yolo-test_{ITERATION}.log from subprocess import Popen, PIPE cmd = '''grep 'conf_thresh\|mAP@' yolo-test_{}.log'''.format(ITERATION) + ''' | awk -F' ' 'BEGIN{l="conf,pre,rec,f1-score,avg_iou,iou_thr,map"}{if($1=="for"){if($5=="precision"){l=l "\\n" $4 $7 $10 $13} else{l=l "," $17}} else{l=l "," $4 "," $8}}END{print l}' ''' p=Popen(cmd, shell=True, stdout=PIPE) p.wait() print(p.communicate()[0].decode()) %cd .. ```
github_jupyter
``` import pandas as pd import numpy as np import zucaml.zucaml as ml import matplotlib.pyplot as plt %matplotlib inline pd.set_option('display.max_columns', None) ``` #### gold ``` df_gold = ml.get_csv('data/gold/', 'gold', []) df_gold = df_gold.sort_values(['date', 'x', 'y', 'z'], ascending = [True, True, True, True]) results_grid = pd.DataFrame({}, index = []) ml.print_memory(df_gold) df_gold[:5] ``` #### features ``` target = 'target' time_ref = 'date' pid = 'zone_frame' all_features = { 'x': { 'class': 'location', 'type': 'categorical', 'subtype': 'onehot', 'level': 0, }, 'y': { 'class': 'location', 'type': 'categorical', 'subtype': 'onehot', 'level': 0, }, 'z': { 'class': 'location', 'type': 'categorical', 'subtype': 'onehot', 'level': 0, }, 'energy|rolling.mean#30': { 'class': 'energy.ma', 'type': 'numeric', 'subtype': 'float', 'level': 0, }, 'energy|rolling.mean#90': { 'class': 'energy.ma', 'type': 'numeric', 'subtype': 'float', 'level': 0, }, 'energy|rolling.mean#180': { 'class': 'energy.ma', 'type': 'numeric', 'subtype': 'float', 'level': 0, }, 'energy|rolling.mean#330': { 'class': 'energy.ma', 'type': 'numeric', 'subtype': 'float', 'level': 0, }, 'energy|rolling.mean#360': { 'class': 'energy.ma', 'type': 'numeric', 'subtype': 'float', 'level': 0, }, 'energy_neighbours|rolling.mean#30': { 'class': 'energy.ma', 'type': 'numeric', 'subtype': 'float', 'level': 0, }, 'energy_neighbours|rolling.mean#90': { 'class': 'energy.ma', 'type': 'numeric', 'subtype': 'float', 'level': 0, }, 'energy_neighbours|rolling.mean#180': { 'class': 'energy.ma', 'type': 'numeric', 'subtype': 'float', 'level': 0, }, 'energy_neighbours|rolling.mean#330': { 'class': 'energy.ma', 'type': 'numeric', 'subtype': 'float', 'level': 0, }, 'energy_neighbours|rolling.mean#360': { 'class': 'energy.ma', 'type': 'numeric', 'subtype': 'float', 'level': 0, }, 'energy|rolling.mean#30||ratio||energy|rolling.mean#360': { 'class': 'energy.ratio', 'type': 'numeric', 'subtype': 'float', 'level': 0, }, 'energy|rolling.mean#90||ratio||energy|rolling.mean#360': { 'class': 'energy.ratio', 'type': 'numeric', 'subtype': 'float', 'level': 0, }, 'energy|rolling.mean#180||ratio||energy|rolling.mean#360': { 'class': 'energy.ratio', 'type': 'numeric', 'subtype': 'float', 'level': 0, }, 'energy|rolling.mean#330||ratio||energy|rolling.mean#360': { 'class': 'energy.ratio', 'type': 'numeric', 'subtype': 'float', 'level': 0, }, 'days.since.last': { 'class': 'info', 'type': 'numeric', 'subtype': 'int', 'level': 0, }, } discarded_features = [feat for feat in df_gold if feat not in [feat2 for feat2 in all_features] + [target, time_ref, pid]] onehot_features = [feat for feat in all_features if all_features[feat]['type'] == 'categorical' and all_features[feat]['subtype'] == 'onehot'] print('Total features\t\t ' + str(len(all_features))) if len(discarded_features) > 0: print('Discarded features\t ' + str(len(discarded_features)) + '\t\t' + str(discarded_features)) print('Numerical features\t ' + str(sum([all_features[i]['type'] == 'numeric' for i in all_features]))) print('Categorical features\t ' + str(sum([all_features[i]['type'] == 'categorical' for i in all_features]))) if len(onehot_features) > 0: print('One-hot features\t ' + str(len(onehot_features)) + '\t\t' + str(onehot_features)) ``` #### Problem ``` this_problem = ml.problems.BINARY metrics = ['F0.5', 'precision', 'recall', 'roc_auc'] ``` #### split train test ``` df_train, df_test = ml.split_by_time_ref(df_gold, 0.88, target, time_ref, this_problem, True) ``` #### model ``` level_0_features = [feat for feat in all_features if all_features[feat]['level'] == 0] level_0_features_numeric = [feat for feat in level_0_features if feat not in onehot_features] level_0_features_onehot = [feat for feat in level_0_features if feat in onehot_features] level_0_features_energy_ma = [feat for feat in level_0_features if all_features[feat]['class'] == 'energy.ma'] level_0_features_energy_ratio = [feat for feat in level_0_features if all_features[feat]['class'] == 'energy.ratio'] number_categorical_onehot = df_train['x'].nunique() + df_train['y'].nunique() + df_train['z'].nunique() level_0_features_numeric_clip = [feat for feat in level_0_features_numeric if df_train[feat].abs().max() == np.inf] level_0_features_numeric_not_clip = [feat for feat in level_0_features_numeric if feat not in level_0_features_numeric_clip] level_0_features_location = [feat for feat in all_features if all_features[feat]['level'] == 0 and all_features[feat]['class'] == 'location'] %%time # ########################## # # linear models # ########################## lin_basic_config = { 'features': level_0_features, 'target': target, 'family': ml.lin(this_problem), 'algo': { 'penalty': 'l2', 'class_weight': 'balanced', }, 'preprocess': { 'original': { 'features': level_0_features, 'transformer': ['filler', 'clipper', 'standard_scaler'], }, }, } lin_params = { 'algo:C' : [0.01, 0.1, 1.0], 'algo:solver': ['lbfgs', 'newton-cg'], 'preprocess:iforest': [None, { 'features': level_0_features, 'transformer': ['filler', 'clipper', 'iforest_score'], }, ], 'preprocess:kmeans': [None, { 'features': [feat for feat in level_0_features if all_features[feat]['type'] == 'numeric' or all_features[feat]['subtype'] == 'bool'], 'transformer': ['filler', 'clipper', 'kmeans_distances'], }, ], } # ########################## # # rft models # ########################## rft_basic_config = { 'features': level_0_features, 'target': target, 'family': ml.rft(this_problem), 'algo': { 'criterion': 'entropy', 'class_weight': 'balanced', }, 'preprocess': { 'original': { 'features': level_0_features, 'transformer': ['filler', 'clipper'], }, }, } rft_params = { 'algo:max_depth': [6, 7, 8], 'algo:n_estimators': [25, 50, 75, 100, 150], 'preprocess:iforest': [None, { 'features': level_0_features, 'transformer': ['filler', 'clipper', 'iforest_score'], }, ], 'preprocess:kmeans': [None, { 'features': [feat for feat in level_0_features if all_features[feat]['type'] == 'numeric' or all_features[feat]['subtype'] == 'bool'], 'transformer': ['filler', 'clipper', 'kmeans_distances'], }, ], } # ########################## # # xgb models # ########################## xgb_basic_config = { 'features': level_0_features, 'target': target, 'family': ml.xgb(this_problem), 'algo': { 'criterion': 'entropy', 'scale_pos_weight': 'balanced', }, 'preprocess': { 'original': { 'features': level_0_features, 'transformer': ['filler', 'clipper'], }, }, } xgb_params = { 'algo:max_depth': [5, 6, 7], 'algo:n_estimators': [15, 25, 35], 'preprocess:iforest': [None, { 'features': level_0_features, 'transformer': ['filler', 'clipper', 'iforest_score'], }, ], 'preprocess:kmeans': [None, { 'features': [feat for feat in level_0_features if all_features[feat]['type'] == 'numeric' or all_features[feat]['subtype'] == 'bool'], 'transformer': ['filler', 'clipper', 'kmeans_distances'], }, ], } # ########################## # # search # ########################## basic_configs_and_params = [] basic_configs_and_params.append((lin_basic_config, lin_params)) basic_configs_and_params.append((rft_basic_config, rft_params)) basic_configs_and_params.append((xgb_basic_config, xgb_params)) grid_board, best_model = ml.grid_search( train = df_train, target = target, time_ref = time_ref, problem = this_problem, metrics = metrics, cv_strategy = ml.cv_strategies.TIME, k_fold = 3, percentage_test = 0.1, basic_configs_and_params = basic_configs_and_params, ) grid_board.sort_values(metrics[0], ascending = False).style.format(ml.results_format) label, results, register, model, final_features = ml.train_score_model(best_model, df_train, df_test, metrics) ml.results_add_all(results_grid, label, results, register) results_grid.style.format(ml.results_format) importances_groups = { '|kmeans': 'sum', } ml.plot_features_importances(model, final_features, importances_groups, True, False) shap_values = ml.get_shap_values(df_test[level_0_features], model, None) ml.plot_beeswarm(shap_values, final_features) residuals = ml.get_residuals(df_test, level_0_features, target, model, results['Threshold']) residuals['tp'].sum(), residuals['fp'].sum(), residuals['fn'].sum() notes = {} notes['number_features'] = len(level_0_features) for name_df, df in {'train': df_train, 'test': df_test}.items(): notes[name_df + '_lenght'] = len(df) notes[name_df + '_balance'] = df[target].sum() / len(df) notes[name_df + '_number_id'] = df[pid].nunique() notes[name_df + '_number_time'] = df[time_ref].nunique() notes[name_df + '_min_time'] = df[time_ref].min().strftime("%Y%m%d") notes[name_df + '_max_time'] = df[time_ref].max().strftime("%Y%m%d") for feature in ['x', 'y', 'z']: notes[name_df + '_' + feature] = list(np.sort(df[feature].unique().astype(str))) ml.save_model(model, label, results, df_train, notes) ```
github_jupyter
<a href="https://colab.research.google.com/github/LambdaTheda/DS-Unit-2-Linear-Models/blob/master/10_45p_Copy_of_latest_sun_mar_01_unit_2_sprint_3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> _Lambda School Data Science, Unit 2_ # Applied Modeling Sprint Challenge: Predict Chicago food inspections ๐Ÿ• For this Sprint Challenge, you'll use a dataset with information from inspections of restaurants and other food establishments in Chicago from January 2010 to March 2019. [See this PDF](https://data.cityofchicago.org/api/assets/BAD5301B-681A-4202-9D25-51B2CAE672FF) for descriptions of the data elements included in this dataset. According to [Chicago Department of Public Health โ€” Food Protection Services](https://www.chicago.gov/city/en/depts/cdph/provdrs/healthy_restaurants/svcs/food-protection-services.html), "Chicago is home to 16,000 food establishments like restaurants, grocery stores, bakeries, wholesalers, lunchrooms, mobile food vendors and more. Our business is food safety and sanitation with one goal, to prevent the spread of food-borne disease. We do this by inspecting food businesses, responding to complaints and food recalls." #### Your challenge: Predict whether inspections failed The target is the `Fail` column. - When the food establishment failed the inspection, the target is `1`. - When the establishment passed, the target is `0`. #### Run this cell to install packages in Colab: ``` %%capture import sys if 'google.colab' in sys.modules: # Install packages in Colab !pip install category_encoders==2.* !pip install eli5 !pip install pandas-profiling==2.* !pip install pdpbox !pip install shap ``` #### Run this cell to load the data: ``` import pandas as pd train_url = 'https://drive.google.com/uc?export=download&id=13_tP9JpLcZHSPVpWcua4t2rY44K_s4H5' test_url = 'https://drive.google.com/uc?export=download&id=1GkDHjsiGrzOXoF_xcYjdzBTSjOIi3g5a' train = pd.read_csv(train_url) test = pd.read_csv(test_url) assert train.shape == (51916, 17) assert test.shape == (17306, 17) ``` ### Part 1: Preprocessing You may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding. _To earn a score of 3 for this part, find and explain leakage. The dataset has a feature that will give you an ROC AUC score > 0.90 if you process and use the feature. Find the leakage and explain why the feature shouldn't be used in a real-world model to predict the results of future inspections._ ### Part 2: Modeling **Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score. Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.** _To earn a score of 3 for this part, get an ROC AUC test score >= 0.70 (without using the feature with leakage)._ ### Part 3: Visualization Make visualizations for model interpretation. (You may use any libraries.) Choose two of these types: - Confusion Matrix - Permutation Importances - Partial Dependence Plot, 1 feature isolation - Partial Dependence Plot, 2 features interaction - Shapley Values _To earn a score of 3 for this part, make four of these visualization types._ ## Part 1: Preprocessing > You may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding. ``` # Exploratory Data Analyses train.describe() test.describe() train.head() test.head() # check Nulls train.isnull().sum() test.isnull().sum() # check 'Fail' class imbalance via a Plot: Pass Vs Fail (train['Fail'].map({0: 'Passed', 1: 'Failed'}).value_counts(normalize=True) * 100)\ .plot.bar(title='Percentages of Inspection Results', figsize=(10, 5)) # Drop 'AKA Name' column in train set since I will use the "DBA Name" column and the former has nulls while the latter does not, # and both serve similar enough functions as a business identifier for my purposes train = train.drop(columns=['AKA Name']) y = train['Fail'] y.unique() ''' Which evaluation measure is appropriate to use for a classification model with imbalanced classes? Precision metric tells us how many predicted samples are relevant i.e. our mistakes into classifying sample as a correct one if it's not true. this metric is a good choice for the imbalanced classification scenario.May 9, 2019 Metrics for Imbalanced Classification - Towards Data Science ''' # May use PRECISION METRIC? (instead of Accuracy in ntbk) for validation because our 2 class ratio is about 3:1; ~significant imbalance # TEST INSTRUCTION: estimate your ROC AUC validation score # find how many of Pass and Failed in our train['Fail'] y.value_counts(normalize=True) import pandas as pd # from LS_DSPT4_231.ipynb (Mod 1) ''' Next, do a time-based split: Brief Description: This dataset contains information from inspections of restaurants and other food establishments in Chicago from January 1, 2010 to the present. ''' train['Inspection Date'] = pd.to_datetime(train['Inspection Date']) # TRIED to split val from train, but got AttributeError: Can only use .dt accessor with datetimelike values.. # may have to feature engineer Inspection Date to parse out only date! # Attempt 2: Parsing out only YEAR from train['Inspection Date'] - works! train['Inspection Date'] = pd.to_datetime(train['Inspection Date']) train['Inspection Year'] = train['Inspection Date'].dt.year test['Inspection Date'] = pd.to_datetime(test['Inspection Date']) test['Inspection Year'] = test['Inspection Date'].dt.year # split_train = train[train['Inspection Date'].dt.year <= 2016] # val = train[train['Inspection Date'].dt.year > 2017] # Check if ~80 % train; 20% val split was chosen #split_train.shape, val.shape # May fine tune split using months additionally ''' val.value_counts(normalize=True) (gives err df obj has no val_cnts attrib..) # check 'Fail' class imbalance via a Plot: Pass Vs Fail # ?!?! # (train_split['Inspection Year'].map({ ('Inspection Year'<= 2016): 'train_split', 1: 'Failed'}).value_counts(normalize=True) * 100)\ # .plot.bar(title='Percentages of Inspection Results', figsize=(10, 5)) ''' train['Any Failed'] = train.groupby('Address')['Fail'].transform(lambda x: int((x == 1).any())) test['Any Failed'] = test.groupby('Address')['Fail'].transform(lambda x: int((x == 1).any())) ``` ## Part 2: Modeling > **Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score. > > Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.** ``` #ATTEMPT 2: getting invalid type promotion err # Try a shallow decision tree as a fast, first model import category_encoders as ce from sklearn.pipeline import make_pipeline from sklearn.impute import SimpleImputer from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import roc_auc_score from sklearn.model_selection import train_test_split target = 'Fail' # features = ['Inspection Type', 'Any Failed', 'Facility Type', 'Latitude', 'Longitude'] features = ['Inspection Type', 'Zip', 'Any Failed', 'License #', 'Facility Type', 'Latitude', 'Longitude'] X_train, X_test, y_train, y_test = train_test_split(train[features], train[target]) pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='most_frequent'), RandomForestClassifier() ) pipeline.fit(X_train, y_train) acc_score = pipeline.score(test[features], test[target]) ra_score = roc_auc_score(test[target], pipeline.predict(test[features])) print(f'Test Accuracy: {pipeline.score(X_test, y_test)}') print(f'Test ROC AUC: {roc_auc_score(y_test, pipeline.predict(X_test))}\n') print(f'Val Accuracy: {acc_score}') print(f'Val ROC AUC: {ra_score}') ``` ## Part 3: Visualization > Make visualizations for model interpretation. (You may use any libraries.) Choose two of these types: > > - Permutation Importances > - Partial Dependence Plot, 1 feature isolation > - Partial Dependence Plot, 2 features interaction > - Shapley Values ``` #Perm Impt: https://colab.research.google.com/drive/1z1R0m3XsaZMjukynx2Ub-531Sh32xPln#scrollTo=QxhmJFxvKDbM (u2s3m3) # 1) PERMUTATION IMPORTANCES # a) just to peek at which features are important to our model, get feature importances rf = pipeline.named_steps['randomforestclassifier'] importances = pd.Series(rf.feature_importances_, X_train.columns) # Plot feature importances %matplotlib inline import matplotlib.pyplot as plt n = 20 plt.figure(figsize=(10,n/2)) plt.title(f'Top {n} features') importances.sort_values()[-n:].plot.barh(color='grey'); # BEFORE: Sequence of the feature to be permuted: from Features Importance above, chose Latitude adn Inspection type columns/features to Permute import numpy as np for feature in ['Latitude', 'Inspection Type']: # PERMUTE X_train_permuted = X_train.copy() #copy whole df to submit all at once X_train_permuted[feature] = np.random.permutation(X_train[feature]) X_test_permuted = X_test.copy() X_test_permuted[feature] = np.random.permutation(X_test[feature]) score = pipeline.score(X_test, y_test) score_permuted = pipeline.score(X_test_permuted, y_test) #Calc. accuracy on the permuted val dataset print(f'Validation accuracy with {feature}: {score}') print(f'Validation accuracy with {feature} permuted: {score_permuted}') print(f'Permutation importance: {acc_score - score_permuted}\n') #2) Shapley Values: SHAP Values (an acronym from SHapley Additive exPlanations) break down a prediction to show the impact of each feature. # from https://colab.research.google.com/drive/1r2VFMtBAt3sLVIQfsMWyQXt8hB9gziRA#scrollTo=Ep1aBVpVcrDj (FINAL VERSION 234 u2s3m4.ipynb) import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.pipeline import make_pipeline from xgboost import XGBClassifier processor = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='median') ) val = train[train['Inspection Date'].dt.year > 2017] X_val = val[features] y_val = val[target] X_train_processed = processor.fit_transform(X_train) X_val_processed = processor.transform(X_val) eval_set = [(X_train_processed, y_train), (X_val_processed, y_val)] model = XGBClassifier(n_estimators=1000, n_jobs=-1) model.fit(X_train_processed, y_train, eval_set=eval_set, eval_metric='auc', early_stopping_rounds=10) from sklearn.metrics import roc_auc_score X_test_processed = processor.transform(X_test) class_index = 1 y_pred_proba = model.predict_proba(X_test_processed)[:, class_index] print(f'Test ROC AUC for class {class_index}:') print(roc_auc_score(y_test, y_pred_proba)) # Ranges from 0-1, higher is better import shap explainer = shap.TreeExplainer(model) row_processed = processor.transform(row) shap_values = explainer.shap_values(row_processed) shap.initjs() shap.force_plot( base_value=explainer.expected_value, shap_values=shap_values, features=row, link='logit' # For classification, this shows predicted probabilities ) #******************** FROM 1ST TEST TAKING ************************** # 2) CONFUSION MATRIX -NM #2) #Partial Dependence Plot, 1 feature interaction ''' Later, when you save matplotlib images to include in blog posts or web apps, increase the dots per inch (double it), so the text isn't so fuzzy ''' import matplotlib.pyplot as plt plt.rcParams['figure.dpi'] = 72 from sklearn.metrics import r2_score from xgboost import XGBRegressor gb = make_pipeline( ce.OrdinalEncoder(), XGBRegressor(n_estimators=200, objective='reg:squarederror', n_jobs=-1) ) gb.fit(X_train, y_train) y_pred = gb.predict(X_val) print('Gradient Boosting R^2', r2_score(y_val, y_pred)) from pdpbox.pdp import pdp_isolate, pdp_plot feature = 'DBA' isolated = pdp_isolate( model = gb, dataset=X_val, model_features=X_val.columns, feature=feature ) pdp_plot(isolated, feature_name=feature); pdp_plot(isolated, feature_name=feature); from pdpbox.pdp import pdp_interact, pdp_interact_plot import category_encoders as ce import seaborn as sns from sklearn.ensemble import RandomForestClassifier target = 'Fail' features = df.columns.drop(['Fail']) X = train[features] y = train[target] # Use Ordinal Encoder, outside of a pipeline encoder = ce.OrdinalEncoder() X_encoded = encoder.fit_transform(X) model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1) model.fit(X_encoded, y) # Use Pdpbox %matplotlib inline import matplotlib.pyplot as plt from pdpbox import pdp feature = 'Violation' pdp_dist = pdp.pdp_isolate(model=model, dataset=X_encoded, model_features=features, feature=feature) pdp.pdp_plot(pdp_dist, feature); # Look at the encoder's mappings encoder.mapping pdp.pdp_plot(pdp_dist, feature) # Manually change the xticks labels plt.xticks([1, 2], ['Violations', 'Fail']); # Let's automate it feature = 'Violation' for item in encoder.mapping: if item['col'] == feature: feature_mapping = item['mapping'] feature_mapping = feature_mapping[feature_mapping.index.dropna()] category_names = feature_mapping.index.tolist() category_codes = feature_mapping.values.tolist() pdp.pdp_plot(pdp_dist, feature) # Automatically change the xticks labels plt.xticks(category_codes, category_names); pdp.pdp_plot(pdp_dist, feature) # Automatically change the xticks labels plt.xticks(category_codes, category_names); pdp = interaction.pdp.pivot_table( values='preds', columns=features[0], # First feature on x axis index=features[1] # Next feature on y axis )[::-1] # Reverse the index order so y axis is ascending pdp = pdp.rename(columns=dict(zip(category_codes, category_names))) plt.figure(figsize=(10,8)) sns.heatmap(pdp, annot=True, fmt='.2f', cmap='viridis') plt.title('Partial Dependence of Inspection Failure, on Violation & Fails'); #Shapley # Assign to X, y features = ['Risk', 'Violations', 'Inspection Type'] target = 'Fail' X_train = train[features] y_train = train[target] X_test = test[features] y_test = test[target] from scipy.stats import randint, uniform from sklearn.ensemble import RandomForestRegressor from sklearn.model_selection import RandomizedSearchCV param_distributions = { 'n_estimators': randint(50, 500), 'max_depth': [5, 10, 15, 20, None], 'max_features': uniform(0, 1), } search = RandomizedSearchCV( RandomForestRegressor(random_state=42), #want CLassifier though? param_distributions=param_distributions, n_iter=5, cv=2, scoring='neg_mean_absolute_error', verbose=10, return_train_score=True, n_jobs=-1, random_state=42 ) search.fit(X_train, y_train); print('Best hyperparameters', search.best_params_) print('Cross-validation MAE', -search.best_score_) model = search.best_estimator_ ```
github_jupyter
``` #format the book %matplotlib inline from __future__ import division, print_function import sys sys.path.insert(0, '..') import book_format book_format.set_style() ``` # Converting the Multivariate Equations to the Univariate Case The multivariate Kalman filter equations do not resemble the equations for the univariate filter. However, if we use one dimensional states and measurements the equations do reduce to the univariate equations. This section will provide you with a strong intuition into what the Kalman filter equations are actually doing. While reading this section is not required to understand the rest of the book, I recommend reading this section carefully as it should make the rest of the material easier to understand. Here are the multivariate equations for the prediction. $$ \begin{aligned} \mathbf{\bar{x}} &= \mathbf{F x} + \mathbf{B u} \\ \mathbf{\bar{P}} &= \mathbf{FPF}^\mathsf{T} + \mathbf Q \end{aligned} $$ For a univariate problem the state $\mathbf x$ only has one variable, so it is a $1\times 1$ matrix. Our motion $\mathbf{u}$ is also a $1\times 1$ matrix. Therefore, $\mathbf{F}$ and $\mathbf B$ must also be $1\times 1$ matrices. That means that they are all scalars, and we can write $$\bar{x} = Fx + Bu$$ Here the variables are not bold, denoting that they are not matrices or vectors. Our state transition is simple - the next state is the same as this state, so $F=1$. The same holds for the motion transition, so, $B=1$. Thus we have $$x = x + u$$ which is equivalent to the Gaussian equation from the last chapter $$ \mu = \mu_1+\mu_2$$ Hopefully the general process is clear, so now I will go a bit faster on the rest. We have $$\mathbf{\bar{P}} = \mathbf{FPF}^\mathsf{T} + \mathbf Q$$ Again, since our state only has one variable $\mathbf P$ and $\mathbf Q$ must also be $1\times 1$ matrix, which we can treat as scalars, yielding $$\bar{P} = FPF^\mathsf{T} + Q$$ We already know $F=1$. The transpose of a scalar is the scalar, so $F^\mathsf{T} = 1$. This yields $$\bar{P} = P + Q$$ which is equivalent to the Gaussian equation of $$\sigma^2 = \sigma_1^2 + \sigma_2^2$$ This proves that the multivariate prediction equations are performing the same math as the univariate equations for the case of the dimension being 1. These are the equations for the update step: $$ \begin{aligned} \mathbf{K}&= \mathbf{\bar{P}H}^\mathsf{T} (\mathbf{H\bar{P}H}^\mathsf{T} + \mathbf R)^{-1} \\ \textbf{y} &= \mathbf z - \mathbf{H \bar{x}}\\ \mathbf x&=\mathbf{\bar{x}} +\mathbf{K\textbf{y}} \\ \mathbf P&= (\mathbf{I}-\mathbf{KH})\mathbf{\bar{P}} \end{aligned} $$ As above, all of the matrices become scalars. $H$ defines how we convert from a position to a measurement. Both are positions, so there is no conversion, and thus $H=1$. Let's substitute in our known values and convert to scalar in one step. The inverse of a 1x1 matrix is the reciprocal of the value so we will convert the matrix inversion to division. $$ \begin{aligned} K &=\frac{\bar{P}}{\bar{P} + R} \\ y &= z - \bar{x}\\ x &=\bar{x}+Ky \\ P &= (1-K)\bar{P} \end{aligned} $$ Before we continue with the proof, I want you to look at those equations to recognize what a simple concept these equations implement. The residual $y$ is nothing more than the measurement minus the prediction. The gain $K$ is scaled based on how certain we are about the last prediction vs how certain we are about the measurement. We choose a new state $x$ based on the old value of $x$ plus the scaled value of the residual. Finally, we update the uncertainty based on how certain we are about the measurement. Algorithmically this should sound exactly like what we did in the last chapter. Let's finish off the algebra to prove this. Recall that the univariate equations for the update step are: $$ \begin{aligned} \mu &=\frac{\sigma_1^2 \mu_2 + \sigma_2^2 \mu_1} {\sigma_1^2 + \sigma_2^2}, \\ \sigma^2 &= \frac{1}{\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}} \end{aligned} $$ Here we will say that $\mu_1$ is the state $x$, and $\mu_2$ is the measurement $z$. Thus it follows that that $\sigma_1^2$ is the state uncertainty $P$, and $\sigma_2^2$ is the measurement noise $R$. Let's substitute those in. $$\begin{aligned} \mu &= \frac{Pz + Rx}{P+R} \\ \sigma^2 &= \frac{1}{\frac{1}{P} + \frac{1}{R}} \end{aligned}$$ I will handle $\mu$ first. The corresponding equation in the multivariate case is $$ \begin{aligned} x &= x + Ky \\ &= x + \frac{P}{P+R}(z-x) \\ &= \frac{P+R}{P+R}x + \frac{Pz - Px}{P+R} \\ &= \frac{Px + Rx + Pz - Px}{P+R} \\ &= \frac{Pz + Rx}{P+R} \end{aligned} $$ Now let's look at $\sigma^2$. The corresponding equation in the multivariate case is $$ \begin{aligned} P &= (1-K)P \\ &= (1-\frac{P}{P+R})P \\ &= (\frac{P+R}{P+R}-\frac{P}{P+R})P \\ &= (\frac{P+R-P}{P+R})P \\ &= \frac{RP}{P+R}\\ &= \frac{1}{\frac{P+R}{RP}}\\ &= \frac{1}{\frac{R}{RP} + \frac{P}{RP}} \\ &= \frac{1}{\frac{1}{P} + \frac{1}{R}} \quad\blacksquare \end{aligned} $$ We have proven that the multivariate equations are equivalent to the univariate equations when we only have one state variable. I'll close this section by recognizing one quibble - I hand waved my assertion that $H=1$ and $F=1$. In general we know this is not true. For example, a digital thermometer may provide measurement in volts, and we need to convert that to temperature, and we use $H$ to do that conversion. I left that issue out to keep the explanation as simple and streamlined as possible. It is very straightforward to add that generalization to the equations above, redo the algebra, and still have the same results.\\\
github_jupyter
``` %load_ext autoreload %autoreload 2 import numpy as np import matplotlib.pyplot as plt import matplotlib.colors as colors from matplotlib import cm from matplotlib import rc import os, sys import astropy.constants as const import astropy.units as u from astropy.cosmology import z_at_value from astropy.cosmology import WMAP9 as cosmo from fractions import Fraction import hasasia.sensitivity as hassens import hasasia.sim as hassim rc('text',usetex=True) rc('font',**{'family':'serif','serif':['Times New Roman'],'size':14})#,'weight':'bold'}) current_path = os.getcwd() splt_path = current_path.split("/") top_path_idx = splt_path.index('DetectorDesignSensitivities') top_directory = "/".join(splt_path[0:top_path_idx+1]) load_directory = top_directory + '/LoadFiles/InstrumentFiles/' sys.path.insert(0,top_directory + '/Functions') import StrainandNoise_v2 as SnN import SNRcalc_v3 as SnC LISA_Other_filedirectory = load_directory + 'LISA_Other/StrainFiles/' LISA_Neil_filedirectory = load_directory + 'LISA_Neil/StrainFiles/' LISA_ESA_filedirectory = load_directory + 'LISA_ESA/StrainFiles/' ET_filedirectory = load_directory + 'EinsteinTelescope/StrainFiles/' aLIGO_filedirectory = load_directory + 'aLIGO/StrainFiles/' NANOGrav_filedirectory = load_directory + 'NANOGrav/StrainFiles/' EOBdiff_filedirectory = top_directory + '/LoadFiles/DiffStrain/EOBdiff/' fig_save_idx = splt_path.index('Research') fig_save_location = "/".join(splt_path[0:fig_save_idx+1]) fig_save_location += '/paperfigs' axissize = 14 labelsize = 16 legendsize = 12 figsize = (10,8) colornorm = colors.Normalize(vmin=0.0, vmax=5.0) linesize = 3 ``` #################################################################### # Initialize different instruments ### aLIGO ``` #aLIGO aLIGO_filename = 'aLIGODesign.txt' aLIGO_filelocation = aLIGO_filedirectory + aLIGO_filename aLIGO = SnN.GroundBased('aLIGO') aLIGO.Default_Setup(aLIGO_filelocation) ``` ### Einstein Telescope ``` #Einstein Telescope ET_filename = 'ET_B_data.txt' ET_filelocation = ET_filedirectory + ET_filename ET_data = np.loadtxt(ET_filelocation) ET = SnN.GroundBased('ET') ET.Default_Setup(ET_filelocation) ``` ### Plots of Ground Detectors ``` fig = plt.figure(figsize=(10,5)) plt.loglog(ET.fT,ET.h_n_f,label='Einsteing Telescope B') plt.loglog(aLIGO.fT,aLIGO.h_n_f,label='Advanced LIGO') plt.xlabel(r'Frequency $[Hz]$',fontsize = labelsize) plt.ylabel('Characteristic Strain',fontsize = labelsize) plt.legend() ######################### #Save Figure to File figname = '/Ground_Char_Strain.pdf' figloc = fig_save_location+figname isitsavetime = False if isitsavetime: fig.savefig(figloc, bbox_inches='tight') plt.show() ``` ### LISA Martin data ``` #Martin data LISA_Martin_filename = 'LISA_Allocation_S_h_tot.txt' LISA_Martin_filelocation = LISA_Other_filedirectory + LISA_Martin_filename LISA_Martin = SnN.SpaceBased('LISA_Martin') LISA_Martin.Load_Data(LISA_Martin_filelocation) LISA_Martin.Get_Strain() ``` ### LISA Neil Cornish data ``` #Neil Cornish data LISA_Neil_filename = 'LISA_sensitivity.txt' LISA_Neil_filelocation = LISA_Neil_filedirectory + LISA_Neil_filename LISA_Neil = SnN.SpaceBased('LISA_Neil') LISA_Neil.Load_Data(LISA_Neil_filelocation) LISA_Neil.Get_Strain() ``` ### LISA Larson Sensitivity Curve ``` #Larson Sensitivity Curve LISA_Larson_filename = 'scg_6981.dat' LISA_Larson_filelocation = LISA_Other_filedirectory + LISA_Larson_filename LISA_Larson = SnN.SpaceBased('LISA_Larson') LISA_Larson.Load_Data(LISA_Larson_filelocation) LISA_Larson.Get_Strain() ``` ### Below is wrong, not strain ``` fig = plt.figure(figsize=(10,5)) plt.loglog(LISA_Martin.fT,LISA_Martin.h_n_f,label='LISA Martin file') plt.loglog(LISA_Neil.fT,LISA_Neil.h_n_f,label='LISA Neil file') plt.loglog(LISA_Larson.fT,LISA_Larson.h_n_f**2/np.sqrt(LISA_Larson.fT),label='LISA Larson file') plt.xlabel(r'Frequency $[Hz]$',fontsize = labelsize) plt.ylabel('Characteristic Strain',fontsize = labelsize) plt.xlim([5e-6,3]) plt.legend() ######################### #Save Figure to File figname = '/Ground_Char_Strain.pdf' figloc = fig_save_location+figname isitsavetime = False if isitsavetime: fig.savefig(figloc, bbox_inches='tight') plt.show() ``` ### Numerical Relativity from EOB subtraction #### Diff0002 ``` diff0002 = SnN.TimeDomain('diff0002') diff0002.Load_Strain() diff0002.Get_hf_from_hcross_hplus() ``` #### Diff0114 ``` diff0114 = SnN.TimeDomain('diff0114') diff0114.Load_Strain() diff0114.Get_hf_from_hcross_hplus() ``` #### Diff0178 ``` diff0178 = SnN.TimeDomain('diff0178') diff0178.Load_Strain() diff0178.Get_hf_from_hcross_hplus() ``` #### Diff0261 ``` diff0261 = SnN.TimeDomain('diff0261') diff0261.Load_Strain() diff0261.Get_hf_from_hcross_hplus() ``` #### Diff0303 ``` diff0303 = SnN.TimeDomain('diff0303') diff0303.Load_Strain() diff0303.Get_hf_from_hcross_hplus() plt.figure() plt.plot(diff0002.t,diff0002.h_plus_t) plt.plot(diff0002.t,diff0002.h_cross_t) plt.show() plt.figure(figsize=(10,5)) plt.plot(diff0002.natural_f,diff0002.natural_h_f) plt.xscale('log') plt.yscale('log') plt.show() ``` ### NANOGrav continuous wave sensitivity ``` #NANOGrav continuous wave sensitivity NANOGrav_background = 4e-16 # Unsubtracted GWB amplitude: 0,4e-16 NANOGrav_dp = 0.95 #Detection Probablility: 0.95,0.5 NANOGrav_fap = 0.0001 #False Alarm Probability: 0.05,0.003,0.001,0.0001 NANOGrav_Tobs = 15 #Observation years: 15,20,25 NANOGrav_filename = 'cw_simulation_Ared_' + str(NANOGrav_background) + '_dp_' + str(NANOGrav_dp) \ + '_fap_' + str(NANOGrav_fap) + '_T_' + str(NANOGrav_Tobs) + '.txt' NANOGrav_filelocation = NANOGrav_filedirectory + NANOGrav_filename NANOGrav_Mingarelli_no_GWB = SnN.PTA('NANOGrav_Mingarelli_no_GWB') NANOGrav_Mingarelli_no_GWB.Load_Data(NANOGrav_filelocation) #NANOGrav continuous wave sensitivity NANOGrav_background_2 = 0 # Unsubtracted GWB amplitude: 0,4e-16 NANOGrav_dp_2 = 0.95 #Detection Probablility: 0.95,0.5 NANOGrav_fap_2 = 0.0001 #False Alarm Probability: 0.05,0.003,0.001,0.0001 NANOGrav_Tobs_2 = 15 #Observation years: 15,20,25 NANOGrav_filename_2 = 'cw_simulation_Ared_' + str(NANOGrav_background_2) + '_dp_' + str(NANOGrav_dp_2) \ + '_fap_' + str(NANOGrav_fap_2) + '_T_' + str(NANOGrav_Tobs_2) + '.txt' NANOGrav_filelocation_2 = NANOGrav_filedirectory + NANOGrav_filename_2 NANOGrav_Mingarelli_GWB = SnN.PTA('NANOGrav_Mingarelli_GWB') NANOGrav_Mingarelli_GWB.Load_Data(NANOGrav_filelocation_2) ``` ### SKA parameters and methods from arXiv:0804.4476 section 7.1 ``` ############################################### #SKA calculation using parameters and methods from arXiv:0804.4476 section 7.1 sigma_SKA = 10*u.ns.to('s')*u.s #sigma_rms timing residuals in nanoseconds to seconds T_SKA = 15*u.yr #Observing time in years N_p_SKA = 20 #Number of pulsars cadence_SKA = 1/(u.wk.to('yr')*u.yr) #Avg observation cadence of 1 every week in [number/yr] SKA_Hazboun = SnN.PTA('SKA_Hazboun') SKA_Hazboun.Default_Setup_Hazboun_2019(T_SKA,N_p_SKA,sigma_SKA,cadence_SKA) SKA_Moore = SnN.PTA('SKA_Moore') SKA_Moore.Default_Setup_Moore_2014(T_SKA,N_p_SKA,sigma_SKA,cadence_SKA) ``` #### Using Jeff's Methods/code https://arxiv.org/abs/1907.04341 ### NANOGrav 11.5yr parameters https://arxiv.org/abs/1801.01837 ``` ############################################### #NANOGrav calculation using 11.5yr parameters https://arxiv.org/abs/1801.01837 sigma_nano = 100*u.ns.to('s')*u.s #rms timing residuals in nanoseconds to seconds T_nano = 15*u.yr #Observing time in years N_p_nano = 18 #Number of pulsars cadence_nano = 1/(2*u.wk.to('yr')*u.yr) #Avg observation cadence of 1 every 2 weeks in number/year NANOGrav_Hazboun = SnN.PTA('NANOGrav Hazboun') NANOGrav_Hazboun.Default_Setup_Hazboun_2019(T_nano,N_p_nano,sigma_nano,cadence_nano) NANOGrav_Moore = SnN.PTA('NANOGrav Moore') NANOGrav_Moore.Default_Setup_Moore_2014(T_nano,N_p_nano,sigma_nano,cadence_nano) fig = plt.figure(figsize=(10,8)) plt.loglog(NANOGrav_Hazboun.fT,NANOGrav_Hazboun.h_n_f, linewidth = linesize,label = r'NANOGrav Hazboun') plt.loglog(NANOGrav_Moore.fT,NANOGrav_Moore.h_n_f,linestyle = '--', linewidth = linesize,label = r'NANOGrav Moore') plt.loglog(SKA_Moore.fT,SKA_Moore.h_n_f,linestyle = '--', linewidth = linesize,label = r'SKA Moore') plt.loglog(SKA_Hazboun.fT,SKA_Hazboun.h_n_f, linewidth = linesize,label = r'SKA Hazboun') plt.loglog(NANOGrav_Mingarelli_GWB.fT,NANOGrav_Mingarelli_GWB.h_n_f,linestyle = ':', linewidth = linesize,\ label = r'Mingarelli, et al. (2017) with GWB') plt.loglog(NANOGrav_Mingarelli_no_GWB.fT,NANOGrav_Mingarelli_no_GWB.h_n_f,linestyle = ':', linewidth = linesize,\ label = r'Mingarelli, et al. (2017) w/o GWB') plt.tick_params(axis = 'both',which = 'major', labelsize = axissize) plt.ylim([5e-19,1e-12]) plt.xlim([1e-10,1e-6]) #plt.title('NANOGrav (15yr)',fontsize=labelsize) plt.xlabel(r'Frequency $[Hz]$',fontsize = labelsize) plt.ylabel('Characteristic Strain',fontsize = labelsize) plt.legend(loc='lower right', fontsize = 12) ######################### #Save Figure to File figname = '/PTA_Char_Strain.pdf' figloc = fig_save_location+figname isitsavetime = False if isitsavetime: fig.savefig(figloc, bbox_inches='tight') plt.show() ``` #################################################################### # Calculate LISA amplitude spectral densities for various models ``` L = 2.5*u.Gm #armlength in Gm L = L.to('m') LISA_T_obs = 4*u.yr.to('s')*u.s Ground_T_obs = 4*u.yr.to('s')*u.s ``` ### LISA Calculation from https://arxiv.org/pdf/1702.00786.pdf (Amaro-Seaone 2017) ``` f_acc_break_low = .4*u.mHz.to('Hz')*u.Hz f_acc_break_high = 8.*u.mHz.to('Hz')*u.Hz f_IMS_knee = 2.*u.mHz.to('Hz')*u.Hz A_acc = 3e-15*u.m/u.s/u.s A_IMS = 10e-12*u.m Background = False ESA_LISA = SnN.SpaceBased('ESA_LISA') ESA_LISA.Default_Setup(LISA_T_obs,L,A_acc,f_acc_break_low,f_acc_break_high,A_IMS,f_IMS_break,Background) ``` ### Neil Calculation from https://arxiv.org/pdf/1803.01944.pdf ``` #Neil Calculation from https://arxiv.org/pdf/1803.01944.pdf f_acc_break_low = .4*u.mHz.to('Hz')*u.Hz f_acc_break_high = 8.*u.mHz.to('Hz')*u.Hz f_IMS_knee = 2.*u.mHz.to('Hz')*u.Hz A_acc = 3e-15*u.m/u.s/u.s A_IMS = 1.5e-11*u.m Background = False Neil_LISA = SnN.SpaceBased('Neil_LISA') Neil_LISA.Default_Setup(LISA_T_obs,L,A_acc,f_acc_break_low,f_acc_break_high,A_IMS,f_IMS_break,Background) ``` ### Plots of Space-Based Detectors ``` fig = plt.figure(figsize=(10,5)) plt.loglog(ESA_LISA.fT,ESA_LISA.h_n_f,label='ESA LISA') plt.loglog(Neil_LISA.fT,Neil_LISA.h_n_f,label='Neil LISA') plt.xlabel(r'Frequency $[Hz]$',fontsize = labelsize) plt.ylabel('Characteristic Strain',fontsize = labelsize) plt.legend() ######################### #Save Figure to File figname = '/LISA_Char_Strain.pdf' figloc = fig_save_location+figname isitsavetime = False if isitsavetime: fig.savefig(figloc, bbox_inches='tight') plt.show() ``` ####################################################################### # BBH strain calculation ``` #Vars = [M,q,chi1,chi2,z] M = [1e6,65.0,1e10] q = [1.0,18.0,1.0] x1 = [0.95,0.0,-0.95] x2 = [0.95,0.0,-0.95] z = [3.0,0.093,20.0] inc = 0.0 #Doesn't really work... Vars1 = [M[0],q[0],x1[0],x2[0],z[0]] Vars2 = [M[1],q[1],x1[1],x2[1],z[1]] Vars3 = [M[2],q[2],x1[2],x2[2],z[2]] Vars4 = [M[1],q[0],x1[1],x2[1],z[1]] source_1 = SnN.BlackHoleBinary() source_1.Default_Setup(M[0],q[0],x1[0],x2[0],z[0],inc,ESA_LISA) source_2 = SnN.BlackHoleBinary() source_2.Default_Setup(M[1],q[1],x1[1],x2[1],z[1],inc,aLIGO) source_3 = SnN.BlackHoleBinary() source_3.Default_Setup(M[2],q[2],x1[2],x2[2],z[2],inc,SKA) source_4 = SnN.BlackHoleBinary() source_4.Default_Setup(M[1],q[0],x1[1],x2[1],z[1],inc,ET) diff0002.Default_Setup(M[1],q[0],z[1]) diff0114.Default_Setup(M[1],q[0],z[1]) diff0178.Default_Setup(M[1],q[0],z[1]) diff0261.Default_Setup(M[1],q[0],z[1]) diff0303.Default_Setup(M[1],q[0],z[1]) fig,ax = plt.subplots(figsize = figsize) plt.loglog(ET.fT,ET.h_n_f, linewidth = linesize,color = cm.hsv(colornorm(1.75)),label = 'ET') plt.loglog(diff0002.f,diff0002.Get_CharStrain(),label = 'diff0002') plt.loglog(diff0114.f,diff0114.Get_CharStrain(),label = 'diff0114') plt.loglog(diff0178.f,diff0178.Get_CharStrain(),label = 'diff0178') plt.loglog(diff0261.f,diff0261.Get_CharStrain(),label = 'diff0261') plt.loglog(diff0303.f,diff0303.Get_CharStrain(),label = 'diff0303') plt.xlabel(r'Frequency $[Hz]$',fontsize = labelsize) plt.ylabel('Characteristic Strain',fontsize = labelsize) plt.legend() plt.show() fig,ax = plt.subplots(figsize = figsize) #plt.loglog(NANOGrav_f,NANOGrav_h_f) ax.loglog(SKA_no_GWB.fT,SKA_no_GWB.h_n_f, linewidth = linesize,color = cm.hsv(colornorm(0.0)),label = 'IPTA ~2030s') ax.loglog(NANOGrav_approx_no_GWB.fT,NANOGrav_approx_no_GWB.h_n_f, linewidth = linesize,color = cm.hsv(colornorm(0.5)),label = 'NANOGrav (15yr)') ax.loglog(ESA_LISA.fT,ESA_LISA.h_n_f, linewidth = linesize,color = cm.hsv(colornorm(1.75)),label = 'LISA') ax.loglog(aLIGO.fT,aLIGO.h_n_f,color = cm.hsv(colornorm(2.8)),label = 'aLIGO') ax.loglog(ET.fT,ET.h_n_f, linewidth = linesize,color = cm.hsv(colornorm(2.5)),label = 'Einstein Telescope') ax.loglog(source_1.f,source_1.Get_CharStrain(), linewidth = linesize,color = cm.hsv(colornorm(0.8)),label = r'$M = %.1e$ $M_{\odot}$, $q = %.1f$, $z = %.1f$, $\chi_{i} = %.2f$' %(M[0],q[0],z[0],x1[0])) ax.loglog(source_2.f,source_2.Get_CharStrain(), linewidth = linesize,color = cm.hsv(colornorm(3.0)),label = r'$M = %.1e$ $M_{\odot}$, $q = %.1f$, $z = %.1f$, $\chi_{i} = %.0f$' %(M[1],q[1],z[1],x1[1])) ax.loglog(source_3.f,source_3.Get_CharStrain(), linewidth = linesize,color = cm.hsv(colornorm(4.5)),label = r'$M = %.1e$ $M_{\odot}$, $q = %.1f$, $z = %.1f$, $\chi_{i} = %.2f$' %(M[2],q[2],z[2],x1[2])) '''ax.scatter(fT[mono_idx_1],h_mono_1) ax.scatter(ET_f[mono_idx_2],h_mono_2) ax.scatter(NANOGrav_f[mono_idx_3],h_mono_3)''' ax.set_xlim([8e-10, 1e4]) ax.set_ylim([1e-24, 1e-11]) ax.tick_params(axis = 'both',which = 'major', labelsize = axissize) ax.set_xlabel(r'Frequency $[Hz]$',fontsize = labelsize) ax.set_ylabel('Characteristic Strain',fontsize = labelsize) ax.legend(loc='upper right', fontsize = legendsize) plt.show() ######################### #Save Figure to File figname = '/Char_Strain_v1.pdf' figloc = current_path+figname isitsavetime = False if isitsavetime: fig.savefig(figloc, bbox_inches='tight') ```
github_jupyter
``` import glob, sys from IPython.display import HTML import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation from astropy.io import fits from pyflowmaps.flow import flowLCT import warnings warnings.filterwarnings("ignore") ``` # Load the data We include in the folder *data/* a cube fits file with the data coaligned and centered in a active region NOAA 1757. ``` cube = fits.getdata('data/cube_sunspot.fits') print(cube.shape) ``` Look into one of the frames in the cube. ``` fig, ax = plt.subplots(figsize=(10,10)) im=ax.imshow(cube[15],origin='lower',cmap='gray') ax.set_title('NOAA 1757 frame no. 15') ax.set_xlabel('X-axis [pix]') ax.set_ylabel('Y-axis [pix]') fig.colorbar(im,ax=ax,label='Intensity',shrink=0.82,aspect=15) ``` The shape of the data corresponds to 30 images with 128x128 pix dimesions per image. The frames are cut off from HMI/SDO data from 2013-01-05, intensity product, wtih a cadence of $720 s$, and the pixel size is around $\sim 0.504$. Other parameter we need is the size of the apodization window $FWHM$ which for this example will be $3\, arcsec$. This size depends on the size of the feature you want to study, as well as the resolution of your instrument. Other parameter that is neccesary is the average time over which the velocities will be calculated, but actually, it is included on the size of the input cube. For this example, the time over the average will be calculate is 6 hours ($30\times720 s=21600 s=6 h$). ``` flows = flowLCT(cube, 3, 0.504, 720,method='square',interpolation='fivepoint',window='boxcar') ``` We extract the velocities ``` vx = flows.vx vy = flows.vy vz = flows.vz ``` Velocities are returned in $kms^{-1}$. The velocity $v_z$ comes from $$ v_z = h_m\nabla\cdot v_h(v_x,v_y) $$ where $v_h$ are the horizontal velocities which depends on $v_x$ and $v_y$, whereas $h_m=150\,km$ is the mass-flux scale-heigth [(November 1989, ApJ,344,494)](https://ui.adsabs.harvard.edu/abs/1989ApJ...344..494N/abstract). Some authors prefer to show the divergences instead of the $v_z$, so the user just need to divide $v_z/h_m$. Next, the users can also create colormaps and personlize them. ``` from matplotlib import cm from matplotlib.colors import ListedColormap top = cm.get_cmap('Reds_r', 128) bottom = cm.get_cmap('YlGn', 128) newcolors = np.vstack((top(np.linspace(0.3, 1, 128)), bottom(np.linspace(0, 0.75, 128)))) newcmp = ListedColormap(newcolors, name='RdYlGn') ``` Now, we will plot the flows in each horizontal direction, and the divergence. ``` fig, ax = plt.subplots(1,3,figsize=(15,8),sharey=True) plt.subplots_adjust(wspace=0.03) flowx=ax[0].imshow(vx,origin='lower',cmap='RdYlGn',vmin = vx.mean()-3*vx.std(),vmax=vx.mean()+3*vx.std()) ax[0].set_title('Horizontal flowmap vx') ax[0].set_xlabel('X-axis [pix]') ax[0].set_ylabel('Y-axis [pix]') flowy=ax[1].imshow(vy,origin='lower',cmap='RdYlGn',vmin = vy.mean()-3*vy.std(),vmax=vy.mean()+3*vy.std()) ax[1].set_title('Horizontal flowmap vy') ax[1].set_xlabel('X-axis [pix]') div = vz/150 flowz=ax[2].imshow(div,origin='lower',cmap='RdYlGn',vmin = div.mean()-3*div.std(),vmax=div.mean()+3*div.std()) ax[2].set_title('Horizontal flowmap divergence') ax[2].set_xlabel('X-axis [pix]') fig.colorbar(flowx,ax=ax[0],orientation='horizontal',shrink=1,label='vx [km/s]') fig.colorbar(flowy,ax=ax[1],orientation='horizontal',shrink=1,label='vy [km/s]') fig.colorbar(flowz,ax=ax[2],orientation='horizontal',shrink=1,label='divergence') fig.savefig('/Users/joseivan/pyflowmaps/images/flowmaps.jpg',format='jpeg',bbox_inches='tight') ``` Finally, we can also plot the arrows associated with the horizontal velocities ``` xx,yy = np.meshgrid(np.arange(128),np.arange(128)) # we create a grid dense = 2 # each how many pixels you want to plot arrows fig,ax = plt.subplots(figsize=(10,10)) Q = ax.quiver(xx[::dense,::dense],yy[::dense,::dense],vx[::dense,::dense],vy[::dense,::dense], color='k', scale=8, headwidth= 4, headlength=4, width=0.0012) im = ax.imshow(cube[15],cmap='gray',origin='lower') ax.set_title('Flowmap horizontal velocities overplotted') ax.set_xlabel('X-axis [pix]') ax.set_ylabel('Y-axis [pix]') fig.colorbar(im,ax=ax,label='Intensity',shrink=0.82,aspect=15) fig.savefig('/Users/joseivan/pyflowmaps/images/flowmaps_arrows.jpg',format='jpeg',bbox_inches='tight') ```
github_jupyter
# Loops Loops is a basic statement in any programming language. Python supports the two typical loops: - for --> Loops in a pre-defined number of iterations - while --> Loops until a condition is reached # For ``` # For for i in range(1, 20): print(i) # iterates a list to retrieve data my_list = [1, 2, 2, 4, 8, 16] for i in range(0, len(my_list)): value = my_list[i] print("iteration number: {}, value: {}".format(i, value)) # Changes all the values from a list for i in range(0, len(my_list)): my_list[i] = 0 print("new list: {}".format(my_list)) ``` ## While ``` # while # iterates a list to retrieve data my_list = [1, 2, 2, 4, 8, 16] i = 0 while i < len(my_list): value = my_list[i] print("iteration number: {}, value: {}".format(i, value)) i += 1 # Changes all the values from a list i = 0 while i < len(my_list): my_list[i] = 0 i += 1 print("new list: {}".format(my_list)) ``` # Conditionals and control structures One of the most important topics to any programmer, control structures allows your code to take decisions based on pre-set conditions. Python supports the below control and conditionals definitions: - if --> Evaluates the value of a variable - is --> Complements any if condition, _is_ is the conector to the conditional to evaluate a variable's value - else --> Complements any if condition, the else statement is executed if the previous _if_ does not reach the condition - Logic operators: <, >, <=, >=, ==, != - in --> Mostly used when working with lists, allows to the programmer to know if a value is present inside a given list or tuple. ``` text = "this is a text" if text == "this is a text": print("text matches!") text = "this is a text" if text == "this text does not match": print("text matches!") else: print("texts are different") # Verifies if a number is between the range 1 < x < 5 def is_in_range(my_num): if my_num > 1 and my_num < 5: print("number {} is in range".format(my_num)) else: print("number {} is not in range".format(my_num)) # Tests is_in_range(10) is_in_range(2) is_in_range(-1) # iterates a list and prints values only if index is odd my_list = [1, 2, 2, 4, 8, 16] for i in range(0, len(my_list)): value = my_list[i] if i%2 > 0: print("iteration number: {}, value: {}".format(i, value)) # verifies if a number is in a list my_list = [1, 2, 3, 4, 5] def is_in_list(my_num): if my_num in my_list: print("number {} is inside the list".format(my_num)) else: print("number {} is not inside the list".format(my_num)) # Tests is_in_list(5) is_in_list(0) ``` ## Exercises 1. Write a Python function that takes two lists and returns True if they have at least one common 2. Write a Python program to print a specified list after removing the 0th, 4th and 5th elements. ``` Sample List : ['Red', 'Green', 'White', 'Black', 'Pink', 'Yellow'] Expected Output : ['Green', 'White', 'Black'] ``` 3. Write a Python program to print the numbers of a specified list after removing even numbers 4. **Challenge** Write a Python program to find those numbers which are divisible by 7 and multiple of 5, between 1500 and 2700 (both included) 5. **Challenge** Write a Python program to construct the following pattern, using a nested for loop. o o o o o o o o o o o o o o o o o o o o o o o o o 6. Write a Python program which takes two digits m (row) and n (column) as input and generates a two-dimensional array. The element value in the i-th row and j-th column of the array should be i*j. ``` Note : i = 0,1.., m-1 j = 0,1, n-1. Test Data : Rows = 3, Columns = 4 Expected Result : [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 4, 6]] ```
github_jupyter
``` import pandas as pd import numpy as np import matplotlib.pyplot as plt # TODO Read in weight_loss.csv # Assign variables to columns mean_group_a = np.mean(weight_lost_a) mean_group_b = np.mean(weight_lost_b) plt.hist(weight_lost_a) plt.show() plt.hist(weight_lost_b) plt.show() mean_difference = mean_group_b - mean_group_a print(mean_difference) mean_difference = 2.52 print(all_values) mean_differences = [] for i in range(1000): group_a = [] group_b = [] for value in all_values: assignment_chance = np.random.rand() if assignment_chance >= 0.5: group_a.append(value) else: group_b.append(value) iteration_mean_difference = np.mean(group_b) - np.mean(group_a) mean_differences.append(iteration_mean_difference) plt.hist(mean_differences) plt.show() sampling_distribution = {} for df in mean_differences: if sampling_distribution.get(df, False): sampling_distribution[df] = sampling_distribution[df] + 1 else: sampling_distribution[df] = 1 frequencies = [] for sp in sampling_distribution.keys(): if sp >= 2.52: frequencies.append(sampling_distribution[sp]) p_value = np.sum(frequencies) / 1000 ``` Chi-squared tests - creating distribution ``` chi_squared_values = [] from numpy.random import random import matplotlib.pyplot as plt for i in range(1000): sequence = random((32561,)) sequence[sequence < .5] = 0 sequence[sequence >= .5] = 1 male_count = len(sequence[sequence == 0]) female_count = len(sequence[sequence == 1]) male_diff = (male_count - 16280.5) ** 2 / 16280.5 female_diff = (female_count - 16280.5) ** 2 / 16280.5 chi_squared = male_diff + female_diff chi_squared_values.append(chi_squared) plt.hist(chi_squared_values) chi_squared_values = [] from numpy.random import random import matplotlib.pyplot as plt # loop 1000 times for i in range(1000): # numpy random generating 300 numbers between 0.0 and 1.0. # get a vector with 300 elements. sequence = random((300,)) # # if it is less than .5, replace it with 0 sequence[sequence < .5] = 0 # otherwise replace it with 1 sequence[sequence >= .5] = 1 # Compute the male_diff by subtracting the expected Male count (150) # from the observed Male count, squaring it, #and dividing by the expected Male count. Do the same for female_diff male_count = len(sequence[sequence == 0]) female_count = len(sequence[sequence == 1]) male_diff = (male_count - 150) ** 2 / 150 female_diff = (female_count - 150) ** 2 / 150 # find the chi squared chi_squared = male_diff + female_diff # append the values chi_squared_values.append(chi_squared) plt.hist(chi_squared_values) diffs = [] observed = [27816, 3124, 1039, 311, 271] expected = [26146.5, 3939.9, 944.3, 260.5, 1269.8] for i, obs in enumerate(observed): exp = expected[i] diff = (obs - exp) ** 2 / exp diffs.append(diff) race_chisq = sum(diffs) from scipy.stats import chisquare observed = np.array([27816, 3124, 1039, 311, 271]) expected = np.array([26146.5, 3939.9, 944.3, 260.5, 1269.8]) chisquare_value, race_pvalue = chisquare(observed, expected) table = pd.crosstab(income["sex"], [income["race"]]) print(table) ```
github_jupyter
# Assignment 09 Solutions #### 1. YouTube offers different playback speed options for users. This allows users to increase or decrease the speed of the video content. Given the actual duration and playback speed of the video, calculate the playback duration of the video. **Examples:** `playback_duration("00:30:00", 2) โžž "00:15:00"` `playback_duration("01:20:00", 1.5) โžž "00:53:20"` `playback_duration("51:20:09", 0.5) โžž "102:40:18"` ``` def playback_duration(in_time,playback_speed): time = in_time.split(":") time_in_secs = (3600*int(time[0])+60*int(time[1])+int(time[2]))/playback_speed f_time_in_hours = str(int(time_in_secs/3600)) if time_in_secs > 3600 else '00' f_time_in_mins = str(int((time_in_secs%3600)/60)) if (time_in_secs)%3600 > 60 else '00' f_time_in_secs = str(int((time_in_secs%3600)%60)) if ((time_in_secs)%3600)%60 > 0 else '00' output = f'{f_time_in_hours}:{f_time_in_mins}:{f_time_in_secs}' print(f'playback_duration{in_time, playback_speed} โžž {output}') playback_duration("00:30:00", 2) playback_duration("01:20:00", 1.5) playback_duration("51:20:09", 0.5) ``` #### 2. We needs your help to construct a building which will be a pile of n cubes. The cube at the bottom will have a volume of n^3, the cube above will have volume of (n-1)^3 and so on until the top which will have a volume of 1^3. Given the total volume m of the building, can you find the number of cubes n required for the building? In other words, you have to return an integer n such that: `n^3 + (n-1)^3 + ... + 1^3 == m` Return None if there is no such number. **Examples:** `pile_of_cubes(1071225) โžž 45` `pile_of_cubes(4183059834009) โžž 2022` `pile_of_cubes(16) โžž None` ``` def pile_of_cubes(in_volume): out_volume = 0 output = 0 for cube in range(1,in_volume): out_volume += pow(cube,3) if in_volume <= out_volume: output = cube if in_volume == out_volume else None break print(f'pile_of_cubes({in_volume}) โžž {output}') pile_of_cubes(1071225) pile_of_cubes(4183059834009) pile_of_cubes(16) ``` #### 3. A fulcrum of a list is an integer such that all elements to the left of it and all elements to the right of it sum to the same value. Write a function that finds the fulcrum of a list. **To illustrate:** `find_fulcrum([3, 1, 5, 2, 4, 6, -1]) โžž 2` // Since [3, 1, 5] and [4, 6, -1] both sum to 9 **Examples:** `find_fulcrum([1, 2, 4, 9, 10, -10, -9, 3]) โžž 4` `find_fulcrum([9, 1, 9]) โžž 1` `find_fulcrum([7, -1, 0, -1, 1, 1, 2, 3]) โžž 0` `find_fulcrum([8, 8, 8, 8]) โžž -1` ``` def find_fulcrum(in_list): output = -1 for ele in in_list: index_of_ele =in_list.index(ele) if sum(in_list[:index_of_ele]) == sum(in_list[index_of_ele+1:]): output = ele break print(f'find_fulcrum({in_list}) โžž {output}') find_fulcrum([3, 1, 5, 2, 4, 6, -1]) find_fulcrum([1, 2, 4, 9, 10, -10, -9, 3]) find_fulcrum([9, 1, 9]) find_fulcrum([7, -1, 0, -1, 1, 1, 2, 3]) find_fulcrum([8, 8, 8, 8]) ``` #### 4. Given a list of integers representing the color of each sock, determine how many pairs of socks with matching colors there are. For example, there are 7 socks with colors [1, 2, 1, 2, 1, 3, 2]. There is one pair of color 1 and one of color 2. There are three odd socks left, one of each color. The number of pairs is 2. Create a function that returns an integer representing the number of matching pairs of socks that are available. **Examples:** `sock_merchant([10, 20, 20, 10, 10, 30, 50, 10, 20]) โžž 3` `sock_merchant([50, 20, 30, 90, 30, 20, 50, 20, 90]) โžž 4` `sock_merchant([]) โžž 0` ``` def sock_merchant(in_list): paired_socks = {} output = 0 for ele in in_list: if ele in paired_socks: paired_socks[ele]+=1 else: paired_socks[ele]=1 for pair in paired_socks.values(): output += pair//2 print(f'sock_merchant({in_list}) โžž {output}') sock_merchant([10, 20, 20, 10, 10, 30, 50, 10, 20]) sock_merchant([50, 20, 30, 90, 30, 20, 50, 20, 90]) sock_merchant([]) ``` #### 5. Create a function that takes a string containing integers as well as other characters and return the sum of the negative integers only. **Examples:** `negative_sum("-12 13%14&-11") โžž -23` `# -12 + -11 = -23` `negative_sum("22 13%14&-11-22 13 12") โžž -33` `# -11 + -22 = -33` ``` import re def negative_sum(in_string): pattern = '-\d+' output = sum([int(ele) for ele in re.findall(pattern,in_string)]) print(f'negative_sum("{in_string}") โžž {output}') negative_sum("-12 13%14&-11") negative_sum("22 13%14&-11-22 13 12") ```
github_jupyter
![logo.png](image/logo.png) # Functions & Modules ### You can access this notebook on: [colab](https://colab.com/py/), [github](https://github.com/chisomloius/iLearnPy/), [kaggle](https://kaggle.com/chisomloius/ilearnPy/), [medium](https://medium.com/@chisomloius/ilearnPy/), [web](https://chisomloius.github.io), [zindi](https://zindi.com/@chisomloius/ilearnPy/) # Table of Contents ### Click on the links to go directly to specific sections on the notebook. 1. [Import Dependencies](#dependencies) <br> 2. [File Handling: Read /Write/Delete Functions](#read-write) <br> 3. [Functions: Definition, Arguments, Scopes](#functions) <br> 4. [Functions: Concatenation, Lambdas, Iterators, Generators](#lambdas) <br> 5. [Error Handling: Try and Except Handling, Error blocking and Error Tracing](#errors) <br> 6. [Assignment Link](#assignment-link) <br> 7. [After Thoughts](#after-thoughts) <br> 8. [About Author](#about) <br> 9. [More Info](#more-info) <br> <p>Estimated time needed: <strong>50 min</strong></p> ---- ### Python - Let's get you writing some Python Code now!</h1> <p><strong>Welcome!</strong> This notebook will teach you the basics of the Python programming language. Although the information presented here is quite basic, it is an important foundation that will help you read and write Python code. By the end of this notebook, you'll know the basics of Python, including how to write basic commands, understand some basic types, and how to perform simple operations on them.</p> <div class="alert alert-block alert-info" style="margin-top: 20px"> <a id='dependencies'></a> ### Import Dependencies </div> <div class="alert alert-block alert-info" style="margin-top: 20px"> <a id='read-write'></a> ### File Handling: Read /Write/Delete Functions </div> <p><strong>Welcome!</strong> This notebook will teach you about read and write the text to file in the Python Programming Language. By the end of this lab, you'll know how to write to file and copy the file.</p> #### Reading Text Files One way to read or write a file in Python is to use the built-inย <code>open</code>ย function.ย The <code>open</code> function provides a <b>File object</b> thatย contains the methods and attributes you need in order to read, save, and manipulate the file. In this notebook, we will only cover <b>.txt</b> files. The first parameter you need is the file path and the file name. An example is shown as follow: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%204/Images/ReadOpen.png" width="500" /> The mode argument is optional and the default value is <b>r</b>. In this notebook we only cover two modes:ย  <ul> <li><b>r</b> Read mode for reading files </li> <li><b>w</b> Write mode for writing files</li> </ul> For the next example, we will use the text file <b>Example1.txt</b>. The file is shown as follow: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%204/Images/ReadFile.png" width="200" /> We read the file: ``` # assigned_name1 = "filepath" # assigned_name2 = open(assigned_name, 'r') # Read the Example1.txt example1 = r"..\data\Example_1.txt" file1 = open(example1, "r") file1 ``` We can view the attributes of the file. The name of the file: ``` # Print the path of file file1.name ``` The mode the file object is in: ``` # Print the mode of file, either 'r' or 'w' file1.mode ``` We can read the file and assign it to a variable : ``` # Read the file FileContent = file1.read() FileContent ``` The <b>/n</b> means that there is a new line. We can print the file: ``` # Print the file with '\n' as a new line print(FileContent) ``` The file is of type string: ``` # Type of file content type(FileContent) ``` We must close the file object: ``` # Close file after finish file1.close() ``` ##### A Better Way to Open a File Using the <code>with</code> statement is better practice, it automatically closes the file even if the code encounters an exception. The code will run everything in the indent block then close the file object. ``` # Open file using with with open(example1, "r") as f: Content = f.read() print(Content) ``` The file object is closed, you can verify it by running the following cell: ``` # Verify if the file is closed f.closed ``` We can see the info in the file: ``` # See the content of file print(Content) ``` The syntax is a little confusing as the file object is after the <code>as</code> statement. We also donโ€™t explicitly close the file. Therefore we summarize the steps in a figure: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%204/Images/ReadWith.png" width="500" /> We donโ€™t have to read the entire file, for example, we can read the first 4 characters by entering three as a parameter to the method **.read()**: ``` # Read first four characters with open(example1, "r") as f: print(f.read(4)) ``` Once the method <code>.read(4)</code> is called the first 4 characters are called. If we call the method again, the next 4 characters are called. The output for the following cell will demonstrate the process for different inputs to the method <code>read()</code>: ``` # Read certain amount of characters with open(example1, "r") as f: print(f.read(2)) print(f.read(4)) print(f.read(7)) print(f.read(15)) ``` The process is illustrated in the below figure, and each color represents the part of the file read after the method <code>read()</code> is called: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%204/Images/ReadChar.png" width="500" /> Here is an example using the same file, but instead we read 16, 5, and then 9 characters at a time: ``` # Read certain amount of characters with open(example1, "r") as f: print(f.read(16)) print(f.read(5)) print(f.read(9)) ``` We can also read one line of the file at a time using the method <code>readline()</code>: ``` # Read one line with open(example1, "r") as f: print("first line: " + f.readline()) ``` We can use a loop to iterate through each line: ``` # Iterate through the lines with open(example1,"r") as f: i = 0; for line in f: print("Iteration", str(i), ": ", line) i = i + 1; ``` We can use the method <code>readlines()</code> to save the text file to a list: ``` # Read all lines and save as a list with open(example1, "r") as f: FileasList = f.readlines() ``` Each element of the list corresponds to a line of text: ``` # Print the first line FileasList[0] # Print the second line FileasList[1] # Print the third line FileasList[2] ``` #### Writing Files We can open a file object using the method <code>write()</code> to save the text file to a list. To write the mode, argument must be set to write <b>w</b>. Letโ€™s write a file <b>Example2.txt</b> with the line: <b>โ€œThis is line Aโ€</b> ``` # Write line to file path = r'..\data\Example_2.txt' with open(path, 'w') as w: w.write("This is line A") ``` We can read the file to see if it worked: ``` # Read file with open(path, 'r') as t: print(t.read()) ``` We can write multiple lines: ``` # Write lines to file with open(r'..\data\Example2.txt', 'w') as w: w.write("This is line A\n") w.write("This is line B\n") ``` The method <code>.write()</code> works similar to the method <code>.readline()</code>, except instead of reading a new line it writes a new line. The process is illustrated in the figure , the different colour coding of the grid represents a new line added to the file after each method call. <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%204/Images/WriteLine.png" width="500" /> You can check the file to see if your results are correct ``` # Check whether write to file with open(r'..\data\Example_2.txt', 'r') as t: print(t.read()) ``` By setting the mode argument to append **a** you can append a new line as follows: ``` # Write a new line to text file with open(r'..\data\Example_2.txt', 'a') as t: t.write("This is line C\n") t.write("This is line D\n") ``` You can verify the file has changed by running the following cell: ``` # Verify if the new line is in the text file with open(r'..\data\Example_2.txt', 'r') as f: print(f.read()) ``` We write a list to a <b>.txt</b> file as follows: ``` # Sample list of text Lines = ["This is line A\n", "This is line B\n", "This is line C\n"] Lines # Write the strings in the list to text file with open('Example2.txt', 'w') as f: for line in Lines: print(line) f.write(line) ``` We can verify the file is written by reading it and printing out the values: ``` # Verify if writing to file is successfully executed with open('Example_2.txt', 'r') as f: print(f.read()) ``` We can again append to the file by changing the second parameter to <b>a</b>. This adds the code: ``` # Append the line to the file with open('Example_2.txt', 'a') as t: t.write("This is line D\n") ``` We can see the results of appending the file: ``` # Verify if the appending is successfully executed with open('Example2.txt', 'r') as t: print(t.read()) ``` #### Copy a File Let's copy the file <b>Example2.txt</b> to the file <b>Example3.txt</b>: ``` # Copy file to another with open('Example2.txt','r') as r: with open('Example3.txt','w') as w: for line in r: w.write(line) ``` We can read the file to see if everything works: ``` # Verify if the copy is successfully executed with open('Example3.txt','r') as t: print(t.read()) # add an extra line to example3.txt with open('Example3.txt', 'a') as t: t.write('This is line E \n') # confirm that the extra line has been added with open('Example3.txt', 'r') as t: print(t.read()) ``` After reading files, we can also write data into files and save them in different file formats like **.txt, .csv, .xls (for excel files) etc**. Let's take a look at some examples. Now go to the directory to ensure the <b>.txt</b> file exists and contains the summary data that we wrote. <div class="alert alert-block alert-info" style="margin-top: 20px"> <a id='functions'></a> ### Functions: Definition, Arguments, Scopes </div> #### Functions in Python <p><strong>Welcome!</strong> This notebook will teach you about the functions in the Python Programming Language. By the end of this lab, you'll know the basic concepts about function, variables, and how to use functions.</p> A function is a reusable block of code which performs operations specified in the function. They let you break down tasks and allow you to reuse your code in different programs. There are two types of functions : - <b>Pre-defined functions</b> - <b>User defined functions</b> <h3 id="content">What is a Function?</h3> You can define functions to provide the required functionality. Here are simple rules to define a function in Python: - Functions blocks begin <code>def</code>ย followed by the function <code>name</code> and parentheses <code>()</code>. - There are input parameters or arguments that should be placed within these parentheses. - You can also define parameters inside these parentheses. - There is a body within every function that starts with a colon (<code>:</code>) and is indented. - You can also place documentation before the body - The statement <code>return</code> exits a function, optionally passing back a value An example of a function that adds on to the parameter <code>a</code> prints and returns the output as <code>b</code>: ``` # First function example: Add 1 to a and store as b def add(a): """ this function add a value to argument a """ b = a + 1 print(a, "if you add one", b) return b ``` The figure below illustrates the terminology: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/FuncsDefinition.png" width="500" /> We can obtain help about a function : ``` # Get a help on add function help(add) ``` We can call the function: ``` # Call the function add() add(16) ``` If we call the function with a new input we get a new result: ``` # Call the function add() add(2) ``` We can create different functions. For example, we can create a function that multiplies two numbers. The numbers will be represented by the variables <code>a</code> and <code>b</code>: ``` # Define a function for multiple two numbers def Mult(a, b): c = a * b return(c) ``` The same function can be used for different data types. For example, we can multiply two integers: ``` # Use mult() multiply two integers Mult(2, 3) ``` Two Floats: ``` # Use mult() multiply two floats Mult(10.0, 3.14) ``` We can even replicate a string by multiplying with an integer: ``` # Use mult() multiply two different type values together Mult(2, "Michael Jackson ") ``` #### Variables The input to a function is called a formal parameter. A variable that is declared inside a function is called a local variable. The parameter only exists within the function (i.e. the point where the function starts and stops). A variable that is declared outside a function definition is a global variable, and its value is accessible and modifiable throughout the program. We will discuss more about global variables at the end of the lab. ``` # Function Definition def square(a): # Local variable b b = 1 c = a * a + b print(a, "if you square a + 1", c) return(c) square(3) ``` The labels are displayed in the figure: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/FuncsVar.png" width="500" /> We can call the function with an input of <b>3</b>: ``` # Initializes Global variable x = 3 # Makes function call and return function a y y = square(x) y ``` We can call the function with an input of <b>2</b> in a different manner: ``` # Directly enter a number as parameter square(2) ``` If there is no <code>return</code> statement, the function returns <code>None</code>. The following two functions are equivalent: ``` # Define functions, one with return value None and other without return value def MJ(): print('Michael Jackson') def MJ1(): print('Michael Jackson') return(None) # See the output MJ() # See the output MJ1() ``` Printing the function after a call reveals a **None** is the default return statement: ``` # See what functions returns are print(MJ()) print(MJ1()) ``` Create a function <code>con</code> that concatenates two strings using the addition operation: ``` # Define the function for combining strings def con(a, b): return(a + b) # Test on the con() function con("This ", "is ") ``` <hr/> <div class="alert alert-success alertsuccess" style="margin-top: 20px"> <h4> [Tip] How do I learn more about the pre-defined functions in Python? </h4> <p>We will be introducing a variety of pre-defined functions to you as you learn more about Python. There are just too many functions, so there's no way we can teach them all in one sitting. But if you'd like to take a quick peek, here's a short reference card for some of the commonly-used pre-defined functions: <a href="http://www.astro.up.pt/~sousasag/Python_For_Astronomers/Python_qr.pdf">Reference</a></p> </div> <hr/> #### Functions Make Things Simple Consider the two lines of code in <b>Block 1</b> and <b>Block 2</b>: the procedure for each block is identical. The only thing that is different is the variable names and values. <h4>Block 1:</h4> ``` # a and b calculation block1 a1 = 4 b1 = 5 c1 = a1 + b1 + 2 * a1 * b1 - 1 if(c1 < 0): c1 = 0 else: c1 = 5 c1 ``` <h4>Block 2:</h4> ``` # a and b calculation block2 a2 = 0 b2 = 0 c2 = a2 + b2 + 2 * a2 * b2 - 1 if(c2 < 0): c2 = 0 else: c2 = 5 c2 ``` We can replace the lines of code with a function. A function combines many instructions into a single line of code. Once aย functionย is defined, it can be used repeatedly. You can invoke the sameย functionย many times in yourย program. You can save your function and use it in another program or use someone elseโ€™s function. The lines of code in code <b>Block 1</b> and code <b>Block 2</b> can be replaced by the following function: ``` # Make a Function for the calculation above def Equation(a,b): c = a + b + 2 * a * b - 1 if(c < 0): c = 0 else: c = 5 return(c) ``` This function takes two inputs, a and b, then applies several operations to return c. We simply define the function, replace the instructions with the function, and input the new values of <code>a1</code>, <code>b1</code> and <code>a2</code>, <code>b2</code> as inputs. The entire process is demonstrated in the figure: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%203/Images/FuncsPros.gif" width="850" /> Code **Blocks 1** and **Block 2** can now be replaced with code **Block 3** and code **Block 4**. <h4>Block 3:</h4> ``` a1 = 4 b1 = 5 c1 = Equation(a1, b1) c1 ``` <h4>Block 4:</h4> ``` a2 = 0 b2 = 0 c2 = Equation(a2, b2) c2 ``` <hr> #### Pre-defined functions There are many pre-defined functions in Python, so let's start with the simple ones. The <code>print()</code> function: ``` # Build-in function print() album_ratings = [10.0, 8.5, 9.5, 7.0, 7.0, 9.5, 9.0, 9.5] print(album_ratings) ``` The <code>sum()</code> function adds all the elements in a list or tuple: ``` # Use sum() to add every element in a list or tuple together sum(album_ratings) ``` The <code>len()</code> function returns the length of a list or tuple: ``` # Show the length of the list or tuple len(album_ratings) ``` #### Using <code>if</code>/<code>else</code> Statements and Loops in Functions The <code>return()</code> function is particularly useful if you have any IF statements in the function, when you want your output to be dependent on some condition: ``` # Function example def type_of_album(artist, album, year_released): print(artist, album, year_released) if year_released > 1980: return "Modern" else: return "Oldie" x = type_of_album("Michael Jackson", "Thriller", 1980) print(x) y = type_of_album("Michael Jackson", "Thriller", 1990) print(y) ``` We can use a loop in a function. For example, we can <code>print</code> out each element in a list: ``` # Print the list using for loop def PrintList(the_list): for element in the_list: print(element[i]) # Implement the printlist function #PrintList(['1', 1, 'the man', "abc"]) ``` <hr> ##### Setting default argument values in your custom functions You can set a default value for arguments in your function. For example, in the <code>isGoodRating()</code> function, what if we wanted to create a threshold for what we consider to be a good rating? Perhaps by default, we should have a default rating of 4: ``` # Example for setting param with default value def isGoodRating(rating=4): if(rating < 7): print("this album sucks it's rating is",rating) else: print("this album is good its rating is",rating) # Test the value with default value and with input isGoodRating(4) isGoodRating(10) ``` <hr> #### Global variables So far, we've been creating variables within functions, but we have not discussed variables outside the function. These are called global variables. <br> Let's try to see what <code>printer1</code> returns: ``` # Example of global variable artist = "Michael Jackson" def printer1(artist): internal_var = artist print(artist, "is an artist") printer1(internal_var) ``` If we print <code>internal_var</code> we get an error. <b>We got a Name Error: <code>name 'internal_var' is not defined</code>. Why?</b> It's because all the variables we create in the function is a <b>local variable</b>, meaning that the variable assignment does not persist outside the function. But there is a way to create <b>global variables</b> from within a function as follows: ``` artist = "Michael Jackson" def printer(artist): global internal_var internal_var= "Whitney Houston" print(artist,"is an artist") printer(artist) printer(internal_var) ``` #### Scope of a Variable The scope of a variable is the part of that program where that variable is accessible. Variables that are declared outside of all function definitions, such as the <code>myFavouriteBand</code> variable in the code shown here, are accessible from anywhere within the program. As a result, such variables are said to have global scope, and are known as global variables. <code>myFavouriteBand</code> is a global variable, so it is accessible from within the <code>getBandRating</code> function, and we can use it to determine a band's rating. We can also use it outside of the function, such as when we pass it to the print function to display it: ``` # Example of global variable myFavouriteBand = "AC/DC" def getBandRating(bandname): if bandname == myFavouriteBand: return 10.0 else: return 0.0 print("AC/DC's rating is:", getBandRating("AC/DC")) print("Deep Purple's rating is:",getBandRating("Deep Purple")) print("My favourite band is:", myFavouriteBand) ``` Take a look at this modified version of our code. Now the <code>myFavouriteBand</code> variable is defined within the <code>getBandRating</code> function. A variable that is defined within a function is said to be a local variable of that function. That means that it is only accessible from within the function in which it is defined. Our `getBandRating` function will still work, because <code>myFavouriteBand</code> is still defined within the function. However, we can no longer print <code>myFavouriteBand</code> outside our function, because it is a local variable of our <code>getBandRating</code> function; it is only defined within the <code>getBandRating</code> function: ``` # Example of local variable def getBandRating(bandname): myFavouriteBand = "AC/DC" if bandname == myFavouriteBand: return 10.0 else: return 0.0 print("AC/DC's rating is: ", getBandRating("AC/DC")) print("Deep Purple's rating is: ", getBandRating("Deep Purple")) print("My favourite band is", myFavouriteBand) ``` Finally, take a look at this example. We now have two <code>myFavouriteBand</code> variable definitions. The first one of these has a global scope, and the second of them is a local variable within the <code>getBandRating</code> function. Within the <code>getBandRating</code> function, the local variable takes precedence. **Deep Purple** will receive a rating of 10.0 when passed to the <code>getBandRating</code> function. However, outside of the <code>getBandRating</code> function, the <code>getBandRating</code> s local variable is not defined, so the <code>myFavouriteBand</code> variable we print is the global variable, which has a value of **AC/DC**: ``` # Example of global variable and local variable with the same name myFavouriteBand = "AC/DC" def getBandRating(bandname): myFavouriteBand = "Deep Purple" if bandname == myFavouriteBand: return 10.0 else: return 0.0 print("AC/DC's rating is:",getBandRating("AC/DC")) print("Deep Purple's rating is: ",getBandRating("Deep Purple")) print("My favourite band is:", myFavouriteBand) ``` <div class="alert alert-block alert-info" style="margin-top: 20px"> <a id='lambdas'></a> ### Functions: String Formats, Concatenation, Lambdas, Iterators, Generators </div> #### String Format function ``` name = "okocha" age = 20 jersey = 20 #using the f string format will help you format the print statement print(f"{name} is {age} years old. His jersey number is {jersey}") #the above code can be replaicated as this: print("{} is {} is 20 years old. His jersey number is {}".format(name, age, jersey)) print(f"I like curly brackets {{}}") print(f"Neuer is the 'best' goalkeeper in the \n world") ``` #### Concatenation #### Lambda #### Iterators #### Generators #### Time it Magic ``` %%timeit sum([1, 2, 3]) ``` <div class="alert alert-block alert-info" style="margin-top: 20px"> <a id='errors'></a> ### Errors: Try and Except Handling, Error blocking and Error Tracing </div> This is how to simply handle error and move your program cells. ``` my_tuple = (1, 2, 3) my_tuple[0] = -1 try: my_tuple[0] = -1 except TypeError: print("This can't be done") print("program will not be stopped") my_list3 = [1, 3, 5, 7] try: print(my_list3[4]) except IndexError: print("out of range selection") a = 1 b = 0 try: print(c = a / b) except ZeroDivisionError: print("stopped") def division(a, b): try: return a/b except Exception: return -1 ``` it is good practice to state the particular error type to get *Copyright &copy; 2020 The Data Incubator. All rights reserved.* **Edited by Chisiomloius** <div class="alert alert-block alert-info" style="margin-top: 20px"> <a id='assignment-link'></a> ### Assignment Link </div> Now we would try out some practical examples with what we have learnt so far ! Let us try out this [notebook](https://typei.com) <div class="alert alert-block alert-info" style="margin-top: 20px"> <a id='after-thoughts'></a> ### After Thoughts ?? </div> What do you think so far ? <div class="alert alert-block alert-info" style="margin-top: 20px"> <a id='about'></a> ### About this Instructor: </div> <p><a href="https://github.com/chisomloius/" target= "_blank"> ChisomLoius</a> is very passionate about Data Analysis and Machine Learning and does lot of free lance teaching and learning. Holding a B.Eng. in Petroleum Engineering, my focused is leveraging the knowledge of Data Science and Machine Learning to help build solutions in Education and High Tech Security. I currently work as a Petrochemist. </p> <div class="alert alert-block alert-info" style="margin-top: 20px"> <a id='more-info'></a> ### More Info </div> <p> Visit our <a href="https://techorigin.alisutechnology.com" target= "_blank">website</a>, or further enquire more information via our <a href="info@techorigin.alisutechnology.com" target= "_blank">email</a>. <hr> <p>Copyright &copy; 2021 TechOrigin. This notebook and its source code are released under the terms of the <a href="https://cognitiveclass.ai/mit-license/">MIT License</a>.</p>
github_jupyter
``` from config import * from utils import * import os import sys import copy import numpy as np import collections import multiprocessing import pickle import numpy as np import scipy # Suppress pandas future warning, which messes tqdm import warnings warnings.simplefilter(action='ignore', category=FutureWarning) import pandas as pd from tqdm.notebook import tqdm %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt import seaborn as sns from Bio import pairwise2 ``` # Check for inDelphi training data leakage inDelphi (Shen et al. 2018) is trained on a dataset of 55-bp sequences (available from their [GitHub](https://github.com/maxwshen/indelphi-dataprocessinganalysis/blob/master/SupplementaryData.xlsx)), referred to as "lib-A" in its paper. We are evaluating the performance of inDelphi on our lib-SA library of 61-bp sequences (specifically, the dat-A subset). For the evaluation to be meaningful, we need to make sure inDelphi lib-A sequences do not overlap and are not homologous to sequences from our dat-A. ## inDelphi's Lib-A ``` libA_df = pd.read_excel(os.path.join(DATA_DIR, 'indelphiLibA', 'SupplementaryData.xlsx'), header=1, sheet_name='Supplementary Table 2') libA_df.head() libA_seqs = libA_df['Sequence Context'].unique().tolist() len(libA_seqs) ``` ## Our dat-A Target Sequences ``` exp_design.head() # lib-SA datA_df = pd.read_csv(os.path.join(TABLES_DIR, 'datA_table.csv.gz'), compression='gzip') datA_df datA_seqs = exp_design.loc[datA_df['gRNA ID'].unique()]['Designed 61-bp target site (37i-24e, AG)'].unique().tolist() len(datA_seqs) ``` ## Sequence Identity Analysis For each target sequence in our dat-A, align it with every sequence in inDelphi's lib-A to determine the most similar sequence, and record the sequence identity. Plot distribution of such max sequence identities. If lib-SA sequences are dissimilar to inDelphi's lib-A sequences, then the distribution should be skewed towards lower max sequence identities. Local alignment (Smith Waterman) parameters: +1 match, -3 mismatch, -5 gap open, -2 gap extend. These are the same as the default parameters of BLAST's blastn-short program. Sequence identity is the definition used by BLAST: (# match positions in alignment(seq1, seq2))/(min(len(seq1), len(seq2)) ``` def sequence_identity(seq1, seq2, alignment): num_matches = pairwise2.format_alignment(*alignment).split('\n')[1].count('|') return num_matches / min(len(seq1), len(seq2)) def max_seq_identity_libA(our_seq): max_seq_identity = -1 for inDelphi_seq in libA_seqs: # Using BLAST suite's blastn-short defaults: # +1 match # -3 mismatch # -5 gap open # -2 gap extend alignment = pairwise2.align.localms(inDelphi_seq, our_seq, 1, -3, -5, -2) identity = sequence_identity(inDelphi_seq, our_seq, alignment[0]) max_seq_identity = max(max_seq_identity, identity) return max_seq_identity def compute_max_sequence_identities(): max_sequence_identities = [] try: p = multiprocessing.Pool(NUM_PROCESSES) for max_seq_identity in tqdm(p.imap_unordered(max_seq_identity_libA, datA_seqs, chunksize=2), total=len(datA_seqs)): max_sequence_identities.append(max_seq_identity) finally: p.close() p.join() return max_sequence_identities if not pickle_exists(DAT_A_INDELPHI_SEQUENCE_IDENTITY): max_sequence_identities = compute_max_sequence_identities() save_var(max_sequence_identities, DAT_A_INDELPHI_SEQUENCE_IDENTITY) else: max_sequence_identities = load_var(DAT_A_INDELPHI_SEQUENCE_IDENTITY) ``` ## S2 FigA ``` def plot_max_sequence_identities(max_sequence_identities): plt.rcParams.update({'font.size': 12}) fig, ax = plt.subplots(figsize=(5,5)) sns.distplot(max_sequence_identities, kde=False, ax = ax) ax.set(xlabel="Sequence Identity", ylabel='# of dat-A Target Sequences (' + str(len(datA_seqs)) + ' Total)', title="Distribution of pairwise best aligned\nsequence identity\nbetween dat-A & inDelphi's Lib-A") median = np.median(max_sequence_identities) plt.axvline(median, color='gray', linestyle='dotted') plt.text(median + 0.01, 450, 'Median = ' + "{:.2f}".format(median)) plt.savefig(os.path.join(IMAGES_DIR, 'datA_indelphi_sequence_identity.png'), dpi=300, bbox_inches='tight') plt.show() print("Median sequence identity:", np.median(max_sequence_identities)) print("Mean sequence identity:", np.mean(max_sequence_identities)) print("Min sequence identity:", np.min(max_sequence_identities)) print("Max sequence identity:", np.max(max_sequence_identities)) print("Second largest sequence identity:", np.sort(max_sequence_identities)[::-1][1]) plot_max_sequence_identities(max_sequence_identities) ```
github_jupyter
# Practical Quantum Computing Approach for Sustainable Workflow Optimization in Cloud Infrastructures by [Valter Uotila](https://researchportal.helsinki.fi/en/persons/valter-johan-edvard-uotila), PhD student, [Unified Database Management Systems](https://www2.helsinki.fi/en/researchgroups/unified-database-management-systems-udbms/news), University of Helsinki This is just a specified shortest path finding application applied to the problem presented in the [document](https://github.com/valterUo/Quantum-Computing-based-Optimization-for-Sustainable-Data-Workflows-in-Cloud/blob/main/Quantum_Computing__based_Optimization_for_Sustainable_Data_Workflows_in_Cloud.pdf) that comes along with this implementation. Possible quantum software-harware combinations to solve the problem: 1. Amazon Braket: Ocean implementation of this code 2. D-wave's Leap Advantage: Ocean implementation of this code 3. IBM Quantum systems 1. Simulator in cloud 2. NISQ device in cloud 4. Local machine 1. Ocean's imulated annealing 2. Qiskit's local qasm simulator ## Part 1: Implementation with Ocean connecting to Amazon Braket and D-wave Leap quantum annealers ``` # Install a pip package in the current Jupyter kernel #import sys #!{sys.executable} -m pip install numpy #!{sys.executable} -m pip install ocean_plugin import dimod from dimod.generators.constraints import combinations from dwave.system import LeapHybridSampler, DWaveSampler from hybrid.reference import KerberosSampler from dwave.system.composites import EmbeddingComposite from braket.aws import AwsDevice from braket.ocean_plugin import BraketSampler, BraketDWaveSampler import numpy as np import json import itertools import os import math import random import networkx as nx import matplotlib.pyplot as plt notebook_path = os.path.abspath("main.ipynb") def append_linear_safe(variable, value, linear_dict): if variable in linear_dict.keys(): linear_dict[variable] = linear_dict[variable] + value else: linear_dict[variable] = value def append_quadratic_safe(variable, value, quadratic_dict): if variable in quadratic_dict.keys(): quadratic_dict[variable] = quadratic_dict[variable] + value else: quadratic_dict[variable] = value ``` ## Importing data This demonstration implements three different sized data sets. Comment and uncomment the data sets you want to use. ``` # Demonstration 1 #cloud_partners_data = "cloud_partners_small.json" #workload_data = "workload_small.json" #strength = 1500.0 #num_reads = 10 #annealing_time = 1.0 # This is the minimal possible annealing time # Demonstration 2 #cloud_partners_data = "cloud_partners_medium.json" #workload_data = "workload_medium.json" #strength = 90.0 #num_reads = 900 #annealing_time = 20.0 # Demonstration 3 cloud_partners_data = "cloud_partners_large.json" workload_data = "workload_large.json" strength = 100.0 cloud_partners_file_path = os.path.join(os.path.dirname(notebook_path), "data/single_round_data/cloud_partners/" + cloud_partners_data) f = open(cloud_partners_file_path) partners_root = json.load(f) cloud_partners = partners_root["cloud_partners"] workload_file_path = os.path.join(os.path.dirname(notebook_path), "data/single_round_data/workloads/" + workload_data) f = open(workload_file_path) workload_root = json.load(f) workload = workload_root["workload"] #print("Cloud partners: ", json.dumps(cloud_partners, indent=1)) #print("Workloads: ", json.dumps(workload, indent=1)) ``` ## Emission simulator This section implements an emission simulator which simulates emission changes in data center operations. Note that it is relatively hard to get accurate data from individual data centers. This simulator is just for demonstration and it does not have an actual scientific background. ``` def emission_simulator(variable1, variable2, cloud_partners, workload): simulated_carbon_footprint = 1 emission_factor = 1 workload_type_in_process = None source_data_center_id = variable1[1] work_in_process = variable2[0] target_data_center_id = variable2[1] for work in workload: if work["work_id"] == int(work_in_process): emission_factor = work["emission_factor"] workload_type_in_process = work["work_type"] for partner in cloud_partners: for center in partner["data_centers"]: # Find correct target center if target_data_center_id == center["center_id"]: for workload_type in center["workload_dependent_emissions"]: # Find correct workload type i.e. Big Data, IoT, ML, etc. if workload_type_in_process == workload_type["workload_type"]: center_emission_factor = workload_type["center_emission_factor"] #print(center_emission_factor) simulated_carbon_footprint = emission_factor*center_emission_factor return simulated_carbon_footprint ``` ## Creating variables for the binary quadratic model In the demo paper we defined variables to be $ x_{i,j} = (w_i, d_j) $. ``` #%%timeit variables = dict() workload_order = [] for work in workload: variables[str(work["work_id"])] = list() workload_order.append(str(work["work_id"])) for partner in cloud_partners: for center in partner["data_centers"]: # The each key in the variables dictionary corresponds to a level in a tree i.e. a time step in the workflow variables[str(work["work_id"])].append((str(work["work_id"]), center["center_id"])) #print(json.dumps(variables, indent=1)) ``` ## Constructing constraints ### Constraint 1 This constraint implements the requirement that for every work $ w_i $ we have exactly one variable $ x_{i,j} = (w_i, d_j) = 1$. In other words, this means that every work is executed exactly on a single data center. ``` def construct_bqm_constraint1(bqm, variables, strength): for work_id in variables: one_work_bqm = combinations(variables[work_id], 1, strength=strength) bqm.update(one_work_bqm) return bqm ``` ### Constraint 2 This constraint implements the requirement that for every pair of variables $x_{i,j} = (w_i, d_j)$ and $x_{i+1,k} = (w_{i+1}, d_k)$ we associate the estimated emission coefficient $e(x_{i,j}, x_{i+1,k})$. This coefficient is calculated in emission_simulator function. Note that we need to calculate this only for those pairs, where the works $w_i$ and $w_{i+1}$ are consecutive works in the workload. To evaluate the algorithm we store the tree in a networkx graph. ``` def construct_bqm_constraint2(bqm, variables, workload_order): vartype = dimod.BINARY A = 1 linear = dict() quadratic = dict() offset = 0.0 tree = nx.Graph() for work_id_current in range(len(workload_order) - 1): work_id_next = work_id_current + 1 key_current = workload_order[work_id_current] key_next = workload_order[work_id_next] for work1 in variables[key_current]: for work2 in variables[key_next]: coeff = emission_simulator(work1, work2, cloud_partners, workload) append_quadratic_safe((work1, work2), coeff, quadratic) tree.add_edge(work1, work2, weight=coeff) #print("Works", work1, work2) #print("Coefficient", coeff) bqm_c2 = dimod.BinaryQuadraticModel(linear, quadratic, offset, vartype) bqm_c2.scale(A) bqm.update(bqm_c2) return bqm, tree ``` ## Demonstrating algorithm ``` def compare_to_optimal(solution, tree, optimal_weight): current_total = 0 try: for i in range(len(solution) - 1): edge_weight = tree.get_edge_data(solution[i], solution[i+1]) current_total += edge_weight["weight"] except: print("The quantum result contains edges which are not in the tree.") return np.abs(optimal_weight - current_total)/optimal_weight def print_solution(sample, tree, optimal_weight = -1): positive_solution = [] for varname, value in sample.items(): if value == 1: positive_solution.append(varname) print(varname, value) positive_solution = sorted(positive_solution, key=lambda x: int(x[0])) if optimal_weight != -1: print("Difference from the optimal ", compare_to_optimal(positive_solution, tree, optimal_weight)) ``` ### Wrapping up various methods to solve the QUBO ``` def solve_bqm_in_leap(bqm, sampler = "DWaveSampler"): bqm.normalize() if sampler == "DWaveSampler": num_reads = 900 annealing_time = 20.0 sampler = DWaveSampler() sampler = EmbeddingComposite(sampler) sampleset = sampler.sample(bqm, num_reads=num_reads, annealing_time = annealing_time, label = 'Data workflow optimization with DWaveSampler') elif sampler == "Kerberos": kerberos_sampler = KerberosSampler() sampleset = kerberos_sampler.sample(bqm, max_iter=10, convergence=3, qpu_params={'label': 'Data workflow optimization with Kerberos'}) elif sampler == "LeapHybrid": sampler = LeapHybridSampler() sampleset = sampler.sample(bqm) print(json.dumps(sampleset.info, indent=1)) sample = sampleset.first.sample return sample #print(sampleset) #print(best_solution) #sample = best_solution #energy = sampleset.first.energy def solve_bqm_in_amazon_braket(bqm, system = "Advantage"): device = None num_reads = 900 annealing_time = 20.0 if system == "Advantage": device = "arn:aws:braket:::device/qpu/d-wave/Advantage_system4" elif system == "2000Q": device = "arn:aws:braket:::device/qpu/d-wave/DW_2000Q_6" sampler = BraketDWaveSampler(device_arn = device) sampler = EmbeddingComposite(sampler) sampleset = sampler.sample(bqm, num_reads=num_reads, annealing_time = annealing_time) sample = sampleset.first.sample # print timing info for the previous D-Wave job print(json.dumps(sampleset.info['additionalMetadata']['dwaveMetadata']['timing'], indent=1)) return sample def solve_with_simulated_annealing(bqm): num_reads = 200 sampler = dimod.SimulatedAnnealingSampler() sampleset = sampler.sample(bqm, num_reads=num_reads) sample = sampleset.first.sample return sample def solve_exactly(bqm): sampler = dimod.ExactSolver() sampleset = sampler.sample(bqm) sample = sampleset.first.sample return sample def solve_with_networkx(tree, variables, start_work): possible_solutions = [] best_solution = None min_weight = float('Inf') for source_var in variables[start_work]: for target_var in variables[str(len(variables) - 1)]: possible_solutions.append(nx.dijkstra_path(tree, source=source_var, target=target_var)) for sol in possible_solutions: current_total = 0 for i in range(len(sol) - 1): edge_weight = tree.get_edge_data(sol[i], sol[i+1]) current_total += edge_weight["weight"] #print("Shortest path ", sol) #print("Current total ", current_total) if min_weight > current_total: min_weight = current_total best_solution = sol return best_solution, min_weight ``` ## Run single time step ``` vartype = dimod.BINARY bqm = dimod.BinaryQuadraticModel({}, {}, 0.0, vartype) ``` Timing the construction of the model ``` #%timeit construct_bqm_constraint1(bqm, variables, strength) #%timeit construct_bqm_constraint2(bqm, variables, workload_order) ``` Constructing the model ``` bqm = construct_bqm_constraint1(bqm, variables, strength) bqm, tree = construct_bqm_constraint2(bqm, variables, workload_order) #print(bqm) #print("The problem is to find the minimum path from some of the nodes ('0', x) to some of the nodes ('5', y). The weight of the edges are defined by carbon footprint associated to the computation.") #nx.draw(tree, with_labels = True) ``` #### Optimal and correct solution for evaluation Timing the classical solution ``` #%timeit solve_with_networkx(tree, variables, '0') ``` Solving the problem classically ``` print("Size of the problem") print("Number of nodes: ", tree.number_of_nodes()) print("Number of edges: ", tree.number_of_nodes()) best_solution, optimal_weight = solve_with_networkx(tree, variables, '0') print("Best solution: ", best_solution) print("Optimal weight: ", optimal_weight) ``` The following results we obtain with annealing. Ideally we would be close to the results we obtain from the function solve_with_networkx. ``` #print("Solution with Amazon Braket using Advantage") #solution = solve_bqm_in_amazon_braket(bqm) #print_solution(solution, tree, optimal_weight) #print("Solution with Amazon Braket using 2000Q") #solution = solve_bqm_in_amazon_braket(bqm, "2000Q") #print_solution(solution, tree, optimal_weight) #print("Solution with D-wave Leap with DWaveSampler") #solution = solve_bqm_in_leap(bqm, "DWaveSampler") #print_solution(solution, tree, optimal_weight) #print("Solution with D-wave Leap with LeapHybridSampler") #solution = solve_bqm_in_leap(bqm, "LeapHybrid") #print_solution(solution, tree, optimal_weight) Kerberos #print("Solution with D-wave Leap with KerberosSampler") #solution = solve_bqm_in_leap(bqm, "Kerberos") #print_solution(solution, tree, optimal_weight) print("Solution with simulated annealing") %timeit solve_with_simulated_annealing(bqm) solution = solve_with_simulated_annealing(bqm) print_solution(solution, tree, optimal_weight) #print("Exact solution (takes time)") #solve_exactly() ``` ## Part 2: Transfering problem to Qiskit In this part of the code I rely on the [Qiskit Tutorials](https://qiskit.org/documentation/optimization/tutorials/index.html). I want to learn to understand the connection between Ocean implementation and Qiskit. The formulation in Qiskit enables solving the problem using IBM Quantum systems. Although Amazon Braket does not implement the following kind of approach, it might be possible to translate the Qiskit into the equivalent Pennylane code and run it in Braket. ### Importing Qiskit and IBM Quantum Systems ``` from qiskit import IBMQ, BasicAer from qiskit.providers.basicaer import QasmSimulatorPy from qiskit.utils import algorithm_globals, QuantumInstance from qiskit.algorithms import QAOA, NumPyMinimumEigensolver from qiskit_optimization.algorithms import ( MinimumEigenOptimizer, RecursiveMinimumEigenOptimizer, SolutionSample, OptimizationResultStatus, ) from qiskit_optimization import QuadraticProgram provider = IBMQ.load_account() ``` ### Transforming QUBO in Ocean to QUBO in Qiskit Function for evaluating Qiskit result: ``` def evaluate_qiskit_solution(result, tree, optimal): #print(result.variables_dict) path = [] for key in result.variables_dict: if result.variables_dict[key] == 1.0: path.append(eval(key)) print("Difference (in [0,1]) between the optimal solution and the solution found with Qiskit:") print(compare_to_optimal(path, tree, optimal)) ``` Transforming the QUBO in Qiskit. We use QAOA module in order to understand the details of the process better. ``` qubo = QuadraticProgram() qubo_variables = [] for var in bqm.variables: qubo.binary_var(str(var)) qubo_variables.append(str(var)) constant = bqm.offset linear = [] quadratic = {} for var in bqm.variables: linear.append(bqm.linear[var]) for key in bqm.quadratic: quadratic[(str(key[0]), str(key[1]))] = bqm.quadratic[key] #print("Variables: ", qubo_variables) #print("Offset ", constant) #print("Linear ", linear) #print("Quadratic ", quadratic) qubo.minimize(constant = constant, linear=linear, quadratic=quadratic) # Local qasm simulator backend = BasicAer.get_backend("qasm_simulator") # ibmq_quito real universal QPU #backend = provider.get_backend('ibmq_quito') # IBM QASM simulator in cloud #backend = provider.get_backend('ibmq_qasm_simulator') algorithm_globals.random_seed = 10598 quantum_instance = QuantumInstance( backend = backend, seed_simulator=algorithm_globals.random_seed, seed_transpiler=algorithm_globals.random_seed, ) qaoa_mes = QAOA(quantum_instance=quantum_instance) exact_mes = NumPyMinimumEigensolver() qaoa = MinimumEigenOptimizer(qaoa_mes) # using QAOA exact = MinimumEigenOptimizer(exact_mes) # using the exact classical numpy minimum eigen solver qaoa_result = qaoa.solve(qubo) print(qaoa_result) print() evaluate_qiskit_solution(qaoa_result, tree, optimal_weight) if type(backend) == QasmSimulatorPy: %timeit qaoa.solve(qubo) #rqaoa = RecursiveMinimumEigenOptimizer(qaoa, min_num_vars=1, min_num_vars_optimizer=exact) #rqaoa_result = rqaoa.solve(qubo) #print(rqaoa_result) #print() #evaluate_qiskit_solution(rqaoa_result, tree, optimal_weight) ```
github_jupyter
## MatplotLib Tutorial Matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK+. Some of the major Pros of Matplotlib are: * Generally easy to get started for simple plots * Support for custom labels and texts * Great control of every element in a figure * High-quality output in many formats * Very customizable in general ``` import matplotlib.pyplot as plt %matplotlib inline import numpy as np ## Simple Examples x=np.arange(0,10) y=np.arange(11,21) a=np.arange(40,50) b=np.arange(50,60) ##plotting using matplotlib ##plt scatter plt.scatter(x,y,c='g') plt.xlabel('X axis') plt.ylabel('Y axis') plt.title('Graph in 2D') plt.savefig('Test.png') y=x*x ## plt plot plt.plot(x,y,'r*',linestyle='dashed',linewidth=2, markersize=12) plt.xlabel('X axis') plt.ylabel('Y axis') plt.title('2d Diagram') ## Creating Subplots plt.subplot(2,2,1) plt.plot(x,y,'r--') plt.subplot(2,2,2) plt.plot(x,y,'g*--') plt.subplot(2,2,3) plt.plot(x,y,'bo') plt.subplot(2,2,4) plt.plot(x,y,'go') x = np.arange(1,11) y = 3 * x + 5 plt.title("Matplotlib demo") plt.xlabel("x axis caption") plt.ylabel("y axis caption") plt.plot(x,y) plt.show() np.pi # Compute the x and y coordinates for points on a sine curve x = np.arange(0, 4 * np.pi, 0.1) y = np.sin(x) plt.title("sine wave form") # Plot the points using matplotlib plt.plot(x, y) plt.show() #Subplot() # Compute the x and y coordinates for points on sine and cosine curves x = np.arange(0, 5 * np.pi, 0.1) y_sin = np.sin(x) y_cos = np.cos(x) # Set up a subplot grid that has height 2 and width 1, # and set the first such subplot as active. plt.subplot(2, 1, 1) # Make the first plot plt.plot(x, y_sin,'r--') plt.title('Sine') # Set the second subplot as active, and make the second plot. plt.subplot(2, 1, 2) plt.plot(x, y_cos,'g--') plt.title('Cosine') # Show the figure. plt.show() ## Bar plot x = [2,8,10] y = [11,16,9] x2 = [3,9,11] y2 = [6,15,7] plt.bar(x, y) plt.bar(x2, y2, color = 'g') plt.title('Bar graph') plt.ylabel('Y axis') plt.xlabel('X axis') plt.show() ``` ## Histograms ``` a = np.array([22,87,5,43,56,73,55,54,11,20,51,5,79,31,27]) plt.hist(a) plt.title("histogram") plt.show() ``` ## Box Plot using Matplotlib ``` data = [np.random.normal(0, std, 100) for std in range(1, 4)] # rectangular box plot plt.boxplot(data,vert=True,patch_artist=False); data ``` ## Pie Chart ``` # Data to plot labels = 'Python', 'C++', 'Ruby', 'Java' sizes = [215, 130, 245, 210] colors = ['gold', 'yellowgreen', 'lightcoral', 'lightskyblue'] explode = (0.4, 0, 0, 0) # explode 1st slice # Plot plt.pie(sizes, explode=explode, labels=labels, colors=colors, autopct='%1.1f%%', shadow=False) plt.axis('equal') plt.show() ```
github_jupyter
``` import numpy as np docs = ["I enjoy playing TT", "I like playing TT"] docs[0][0].split() from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer(min_df=0, token_pattern=r"\b\w+\b") vectorizer.fit(docs) print(vectorizer.vocabulary_) # encode document vector = vectorizer.transform(docs) # summarize encoded vector print(vector.shape) print(type(vector)) print(vector.toarray()) print(vectorizer.vocabulary_) print(vector.shape) print(vector.toarray()) x = [] y = [] for i in range(len(docs)): for j in range(len(docs[i].split())): t_x = [] t_y = [] for k in range(4): if(j==k): t_y.append(docs[i].split()[k]) continue else: t_x.append(docs[i].split()[k]) x.append(t_x) y.append(t_y) x y x2 = [] y2 = [] for i in range(len(x)): x2.append(' '.join(x[i])) y2.append(' '.join(y[i])) x2 y2 vector_x = vectorizer.transform(x2) vector_x.toarray() vector_y = vectorizer.transform(y2) vector_y.toarray() from keras.models import Sequential from keras.layers import Dense, Embedding from keras.layers import LSTM , Bidirectional,Dropout from keras import backend as K from keras.layers.advanced_activations import LeakyReLU from keras import regularizers model = Sequential() model.add(Dense(3, activation='linear', input_shape=(5,))) model.add(Dense(5,activation='sigmoid')) model.summary() model.compile(loss='binary_crossentropy',optimizer='adam') model.fit(vector_x, vector_y, epochs=1000, batch_size=4,verbose=1) model.predict(vector_x) [list(vectorizer.vocabulary_.keys())[0]] vectorizer.transform([list(vectorizer.vocabulary_.keys())[1]]).toarray() model.summary() from keras.models import Model layer_name = 'dense_1' intermediate_layer_model = Model(inputs=model.input, outputs=model.get_layer(layer_name).output) for i in range(len(vectorizer.vocabulary_)): word = list(vectorizer.vocabulary_.keys())[i] word_vec = vectorizer.transform([list(vectorizer.vocabulary_.keys())[i]]).toarray() print(word, '\t', intermediate_layer_model.predict(word_vec)) ``` # Measuring similarity between word vectors ``` a = word = list(vectorizer.vocabulary_.keys())[1] word_vec_a = intermediate_layer_model.predict(vectorizer.transform([list(vectorizer.vocabulary_.keys())[1]]).toarray()) b = word = list(vectorizer.vocabulary_.keys())[4] word_vec_b = intermediate_layer_model.predict(vectorizer.transform([list(vectorizer.vocabulary_.keys())[4]]).toarray()) word_vec_a np.sum(word_vec_a*word_vec_b)/((np.sqrt(np.sum(np.square(word_vec_a))))*np.sqrt(np.sum(np.square(word_vec_b)))) np.sum(np.square(word_vec_a - word_vec_b)) ```
github_jupyter
# Lista de Exercรญcios 4 Mรฉtodos Numรฉricos para Engenharia - Turma C Nome: Vinรญcius de Castro Cantuรกria Matrรญcula: 14/0165169 Observaรงรตes: 0. A lista de exercรญcios deve ser entregue no moodle da disciplina. 0. A lista de exercรญcios deve ser respondida neste รบnico arquivo (.ipynb). Responda a cada questรฃo na cรฉlula imediatamente abaixo do seu enunciado. 0. Nรฃo se esqueรงa de alterar o nome do arquivo e o cabeรงalho acima, colocando seu nome e matrรญcula. 0. A lista รฉ uma atividade avaliativa e individual. Nรฃo serรก tolerado qualquer tipo de plรกgio. ``` # Deixe-me incluir o conjunto de mรณdulos do Python cientรญfico para vocรช. %pylab inline ``` --- ## Questรฃo 01 Para implementar o Mรฉtodo da Eliminaรงรฃo de Gauss, um mรฉtodo analรญtico para solucionar sistemas lineares, รฉ preciso primeiro gerar uma matriz aumentada, resultado da concatenaรงรฃo da matriz de coeficientes com o vetor de termos independentes. Leia um valor `N`, uma matriz de coeficientes e um vetor de termos independentes de um sistema linear de tamanho `N` e imprima a matriz aumentada gerada. ``` N = int(input()) A = np.zeros((N,N)) for i in range(N): A[i] = [float(x) for x in input().split()] b = np.array([float(x) for x in input().split()]) G = np.hstack((A, b[:, None])) print(G) ``` --- ## Questรฃo 02 O Mรฉtodo da Eliminaรงรฃo de Gauss รฉ um mรฉtodo analรญtico de resoluรงรฃo de sistemas lineares. Dado o sistema linear $Ax = b$, onde: $$ A = \begin{bmatrix} 13 & 7 & 3 \\ 5 & 19 & 1 \\ 2 & 11 & 23 \end{bmatrix},\ b = \begin{bmatrix} 31 \\ 17 \\ 29 \end{bmatrix} $$ Encontre a matriz aumentada e calcule a matriz triangular superior utilizando o pivoteamento parcial do Mรฉtodo da Eliminaรงรฃo de Gauss. Nรฃo utilize funรงรตes prontas, como "`np.linalg.solve()`", para realizar o cรกlculo. Este exercรญcio nรฃo contรฉm entradas. #### Saรญda Esperada ``` [[ 13. 7. 3. 31. ] [ 0. 16.30769231 -0.15384615 5.07692308] [ 0. 0. 22.63207547 21.14150943]] ``` ``` A = np.array([[13, 7, 3], [ 5, 19, 1], [ 2, 11, 23.0]]) b = np.array([31, 17, 29.0]) N = len(b) G = np.hstack((A, b[:, None])) for i in range(N-1): for j in range(i+1, N): G[j,:] -= G[j,i] / G[i,i] * G[i,:] print(G) ``` --- ## Questรฃo 03 O Mรฉtodo da Eliminaรงรฃo de Gauss com pivoteamento completo, tambรฉm chamado de Mรฉtodo de Gauss-Jordan, รฉ um mรฉtodo analรญtico utilizado para encontrar a soluรงรฃo de sistemas lineares, mas tambรฉm pode ser utilizado para para encontrar a inversa de matrizes quadradas nรฃo-singulares. Dado o sistema linear $Ax = b$, onde: $$ A = \begin{bmatrix} 13 & 7 & 3 \\ 5 & 19 & 1 \\ 2 & 11 & 23 \end{bmatrix},\ b = \begin{bmatrix} 31 \\ 17 \\ 29 \end{bmatrix} $$ Utilize o Mรฉtodo de Gauss-Jordan para encontrar o vetor soluรงรฃo do sistema linear acima. Nรฃo utilize funรงรตes prontas, como "`np.linalg.solve()`", para realizar o cรกlculo. Este exercรญcio nรฃo contรฉm entradas. #### Saรญda Esperada ``` [ 1.99666528 0.32013339 0.93413922] ``` ``` A = np.array([[13, 7, 3], [ 5, 19, 1], [ 2, 11, 23.0]]) b = np.array([31, 17, 29.0]) N = len(b) G = np.hstack((A, b[:, None])) for i in range(N): G[i] = G[i] / G[i,i] for j in range(i+1, N): G[j,:] -= G[j,i] * G[i,:] for i in range(N-1, -1, -1): for j in range(i-1, -1, -1): G[j,:] -= G[j,i] * G[i,:] print(G[:,N]) ``` --- ## Questรฃo 04 O Mรฉtodo de Gauss-Jordan tambรฉm pode ser utilizado para encontrar a inversa de matrizes quadradas nรฃo-singulares. Para isso, รฉ preciso concatenar ร  matriz de coeficientes, uma matriz identidade ao invรฉs de o vetor de termos independentes. Utilizando a mesma matriz do exercรญcio acima: $$ A = \begin{bmatrix} 13 & 7 & 3 \\ 5 & 19 & 1 \\ 2 & 11 & 23 \end{bmatrix} $$ Calcule $A^{-1}$ utilizando o mรฉtodo de Gauss-Jordan. Nรฃo utilize funรงรตes prontas para realizar o cรกlculo. #### Saรญda Esperada ``` [[ 0.08878699 -0.02667778 -0.01042101] [-0.02355148 0.06106711 0.00041684] [ 0.00354314 -0.0268862 0.04418508]] ``` ``` N = 3 A = np.array([[13, 7, 3], [ 5, 19, 1], [ 2, 11, 23.0]]) G = np.hstack((A, np.identity(N))) for i in range(N): G[i] = G[i] / G[i,i] for j in range(i+1, N): G[j,:] -= G[j,i] * G[i,:] for i in range(N-1, -1, -1): for j in range(i-1, -1, -1): G[j,:] -= G[j,i] * G[i,:] print(G[:,N:]) ``` --- ## Questรฃo 05 O Mรฉtodo de Jacobi รฉ um mรฉtodo utilizado para encontrar a soluรงรฃo de sistemas lineares de forma numรฉrica, ou seja, por aproximaรงรตes sucessivas. Uma condiรงรฃo suficiente para que a soluรงรฃo aproximada convirja para a soluรงรฃo correta do sistema linear รฉ: a matriz de coeficientes do sistema linear deve ser estritamente diagonal dominante. Uma matriz estritamente diagonal dominante satisfaz a seguinte equaรงรฃo: $$ |a_{ii}| > \sum_{j \neq i}{|a_{ij}|} $$ Utilizando a mesma matriz dos exercรญcios acima: $$ A = \begin{bmatrix} 13 & 7 & 3 \\ 5 & 19 & 1 \\ 2 & 11 & 23 \end{bmatrix} $$ Verifique se ela รฉ uma matriz estritamente diagonal dominante. #### Saรญda Esperada A matriz "A" รฉ uma matriz estritamente diagonal dominante. ## N = 3 A = np.array([[13, 7, 3], [ 5, 19, 1], [ 2, 11, 23.0]]) D = np.zeros(N) R = np.zeros(N) for i in range(N): D[i] = abs(A[i,i]) for j in range(N): R[i] += abs(A[i,j]) if i != j else 0 is_diag_dom = True for i in range(N): if D[i] <= R[i]: is_diag_dom = False break if is_diag_dom: print('A matriz "A" รฉ uma matriz estritamente diagonal dominante.') else: print('A matriz "A" nรฃo รฉ uma matriz estritamente diagonal dominante.') --- ## Questรฃo 06 O Mรฉtodo de Jacobi รฉ um mรฉtodo utilizado para encontrar a soluรงรฃo de sistemas lineares de forma numรฉrica, ou seja, por aproximaรงรตes sucessivas. O mรฉtodo funciona da seguinte forma: dado um sistema linear $Ax = b$, divimos a matriz de coeficientes $A$ em $D+R$, onde $D$ รฉ a matriz diagonal formada pela diagonal principal da matriz $A$, e $R$ รฉ a matriz de resรญduos, que รฉ a matriz $A$ com todos os elementos da diagonal principal iguais a zero. A prรณxima aproximaรงรฃo para $x$, รฉ calculada pela equaรงรฃo: $$ x^{(novo)} = D^{-1}(b - Rx^{(antigo)}) $$ Utilizando o mesmo sistema linear $Ax = b$ dos exercรญcios anteriores: $$ A = \begin{bmatrix} 13 & 7 & 3 \\ 5 & 19 & 1 \\ 2 & 11 & 23 \end{bmatrix}, b = \begin{bmatrix} 31 \\ 17 \\ 29 \end{bmatrix} $$ Iniciando pela aproximaรงรฃo `x = np.zeros(3)`, encontre as 10 (dez) primeiras aproximaรงรตes da soluรงรฃo do sistema linear acima, utilizando o Mรฉtodo de Jacobi. Nรฃo utilize funรงรตes prontas na implementaรงรฃo deste exercรญcio. Este exercรญcio nรฃo contรฉm entradas. #### Saรญda Esperada ``` [ 0. 0. 0.] [ 2.38461538 0.89473684 1.26086957] [ 1.61186411 0.20084492 0.62559409] [ 2.13210025 0.43763607 1.0246512 ] [ 1.91250722 0.27972882 0.86616533] [ 2.03410787 0.34585782 0.96078124] [ 1.9766655 0.30887786 0.91858036] [ 2.00631645 0.32621537 0.94126141] [ 1.99174678 0.31721875 0.93039122] [ 1.99909962 0.32162499 0.93596088] ``` ``` A = np.array([[13, 7, 3], [ 5, 19, 1], [ 2, 11, 23.0]]) b = np.array([31, 17, 29.0]) N = len(b) x = np.zeros(N) D = np.zeros(N) R = np.zeros((N,N)) for i in range(N): for j in range(N): R[i,j] = A[i,j] for i in range(N): D[i] = 1.0 / A[i,i] R[i,i] = 0 x_next = D * (b - R.dot(x[:, None])[:,0]) for i in range(10): print(x) x = x_next x_next = D * (b - R.dot(x[:, None])[:,0]) ``` --- ## Questรฃo 07 Questรตes criadas para treinar a utilizaรงรฃo da biblioteca "numpy" para manipulaรงรฃo de matrizes. Resolva os exercรญcios abaixo. **7.1.** Declare uma matriz `A` de tamanho `3x4` utilizando a funรงรฃo `np.array()` com quaisquer valores e, em seguida, imprima-a. ``` A = np.array([[13, 7, 3, 4], [ 5, 19, 1, 4], [ 2, 11, 23.0, 6]]) print(A) ``` **7.2.** Utilizando a matriz definida acima, agora imprima sua transposta. ``` print(A.T) ``` **7.3.** Imprima toda a segunda linha da matriz $A$ original definida no item 7.1. ``` print(A[1,:]) ``` **7.4.** Imprima a segunda e a terceira colunas da segunda linha da matriz transposta $A^T$ gerada no item 7.2. ``` print(A.T[1,1:3]) ``` **7.5.** Imprima o seguinte produto matricial: $AA^T$. ``` print(A.dot(A.T)) ``` --- ## Questรฃo 08 O Mรฉtodo de Gauss-Seidel รฉ um mรฉtodo numรฉrico utilizado para encontrar a soluรงรฃo de sistemas lineares. O mรฉtodo funciona da seguinte forma: dado um sistema linear $Ax = b$, divimos a matriz de coeficientes $A$ em $L_*+U$, onde $L_*$ รฉ uma matriz triangular inferior formada pelos elementos da diagonal principal e abaixo da diagonal principal da matriz $A$, e $U$ รฉ uma matriz triangular superior formada pelos elementos estritamente acima da diagonal princiapl da matriz $A$. $x$ รฉ calculado iterativamente da forma: $$ x^{(k+1)} = L_*^{-1}(b - Ux^{(k)}) $$ Utilizando o mesmo sistema linear abaixo: $$ A = \begin{bmatrix} 13 & 7 & 3 \\ 5 & 19 & 1 \\ 2 & 11 & 23 \end{bmatrix},\ b = \begin{bmatrix} 31 \\ 17 \\ 29 \end{bmatrix} $$ Iniciando pela aproximaรงรฃo `x = np.zeros(3)`, encontre as 10 (dez) primeiras aproximaรงรตes da soluรงรฃo do sistema linear, utilizando o Mรฉtodo de Gauss-Seidel. Nรฃo utilize funรงรตes prontas na implementaรงรฃo deste exercรญcio. Este exercรญcio nรฃo contรฉm entradas. #### Saรญda Esperada ``` [ 0. 0. 0.] [ 2.38461538 0.89473684 1.26086957] [ 1.61186411 0.20084492 0.62559409] [ 2.13210025 0.43763607 1.0246512 ] [ 1.91250722 0.27972882 0.86616533] [ 2.03410787 0.34585782 0.96078124] [ 1.9766655 0.30887786 0.91858036] [ 2.00631645 0.32621537 0.94126141] [ 1.99174678 0.31721875 0.93039122] [ 1.99909962 0.32162499 0.93596088] ``` --- ## Questรฃo 09 Na mesma linha das questรตes 06 e 08, utilize os Mรฉtodos de Jacobi e Gauss-Seidel para encontrar a soluรงรฃo do sistema abaixo: $$ A = \begin{bmatrix} 13 & 7 & 3 \\ 5 & 19 & 1 \\ 2 & 11 & 23 \end{bmatrix},\ b = \begin{bmatrix} 31 \\ 17 \\ 29 \end{bmatrix} $$ Iniciando pela aproximaรงรฃo `x = np.zeros(3)`. Mostre a primeira aproximaรงรฃo $x^k$ tal que nenhuma das diferenรงas absolutas $|x_i^k - x_i^{k-1}|$ sejam maiores que $10^{-8}$. Mostre tambรฉm a quantidade de iteraรงรตes necessรกrias e o vetor de resรญduos ($e = Ax^k - b$) encontrado, para ambos os mรฉtodos. #### Saรญda Esperada ``` Mรฉtodo de Jacobi: 27 iteraรงรตes x = [1.99666527 0.32013339 0.93413922] e = [-9.07208886e-08 -8.03053837e-08 -1.20908751e-07] Mรฉtodo de Gauss-Seidel: 9 iteraรงรตes x = [1.99666528 0.32013339 0.93413922] e = [ 4.54047218e-08 -1.17981358e-09 0.00000000e+00] ```
github_jupyter
``` import numpy as np from pyspark import SparkConf, SparkContext from pyspark.sql import SparkSession import time import re spark=SparkSession.builder\ .config("spark.debug.maxToStringFields", 100000)\ .config("spark.local.dir", '/home/osboxes/hw/hw3/')\ .appName("hw3")\ .getOrCreate() sc=spark.sparkContext path = "/home/osboxes/hw/hw3/" k=3 data = sc.wholeTextFiles("file:"+path+"datasets/reut2-*") ``` # Q1 ``` #data = sc.wholeTextFiles('reut2-*') articles = data.map(lambda x:x[1]).flatMap(lambda x:x.split('<BODY>')[1:]).map(lambda x:x.split('</BODY>')[0])\ .map(lambda x:re.sub(' +', ' ', x.replace('\n', ' '))) articles.take(3) k = 3 shingles = articles.flatMap(lambda x:[x[i:i+k] for i in range(len(x)-k+1)]).distinct() shingles.take(5) shingles_count = shingles.count() articles_count = articles.count() print(shingles_count, ': # of distinct shingles.') print(articles_count, ': # of distinct articles.') articles = articles.collect() kShinglingMatrix = shingles.map(lambda s:[1 if s in a else 0 for a in articles]) kShinglingMatrix.coalesce(1).saveAsTextFile("file:"+path+'outputs/kshingling') ``` # Q2 ``` import random def biggerThanNFirstPrime(N): p = 2 while True: isPrime = True for i in range(2,p//2+1): if(p%i==0): isPrime = False break if isPrime and p > N: return p else: p+=1 h = 100 a = [random.randint(0, 100) for i in range(h)] b = [random.randint(0, 100) for i in range(h)] p = biggerThanNFirstPrime(articles_count) N = articles_count def rowHash(row, a, b, p, N): return ((a*row+b)%p)%N kShinglesMatrixZipWithIndex = kShinglingMatrix.zipWithIndex().cache() minHashSignatures = [] kShinglesMatrixZipWithIndex = kShinglingMatrix.zipWithIndex().cache() for i in range(h): minHashSignatures.append(kShinglesMatrixZipWithIndex\ .map(lambda x:[rowHash(x[1], a[i], b[i], p ,N) if c == 1 else (articles_count + 10) for c in x[0]])\ .reduce(lambda x, y:[Mx if Mx < My else My for Mx, My in zip(x, y)])) print(str(i) + '\n') count = 0 with open(path+'outputs/minHashSignatures.txt', 'w') as result: for row in minHashSignatures: result.write(str(row) + '\n') print(count) count += 1 ``` # Q3 ``` import numpy as np from operator import add bands = 20 r = 5 similarRate = 0.8 buckets = articles_count hashFuct = [[random.randint(0, 100) for i in range(r + 1)] for j in range(bands)] with open(path+'outputs/candidatePairs.txt', 'w') as result: for i in range(articles_count):#12 candidatePairs = list() for j in range(bands): band = np.array(minHashSignatures[j*r:j*r+r]).T band = [(np.array(article).dot(np.array(hashFuct[j][:r])) + hashFuct[j][-1]) % buckets for article in band] for k, article in enumerate(band): if k > i and (article == band[i]).all(): candidatePairs.append(k) candidatePairs = [(article, candidatePairs.count(article)) for article in set(candidatePairs)] candidatePairsTreshold = list() for candidatePair in candidatePairs: if candidatePair[1] >= bands*similarRate: candidatePairsTreshold.append(candidatePair[0]) result.write('Articles' + str(i) + ':' + str(candidatePairsTreshold) + '\n') print(str(i)) ```
github_jupyter
# Video Super Resolution with OpenVINO Super Resolution is the process of enhancing the quality of an image by increasing the pixel count using deep learning. This notebook applies Single Image Super Resolution (SISR) to frames in a 360p (480ร—360) video in 360p resolution. We use a model called [single-image-super-resolution-1032](https://github.com/openvinotoolkit/open_model_zoo/tree/develop/models/intel/single-image-super-resolution-1032) which is available from the Open Model Zoo. It is based on the research paper cited below. Y. Liu et al., ["An Attention-Based Approach for Single Image Super Resolution,"](https://arxiv.org/abs/1807.06779) 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760. **NOTE:** The Single Image Super Resolution (SISR) model used in this demo is not optimized for video. Results may vary depending on the video. We are looking for a more suitable Multi Image Super Resolution (MISR) model, so if you know of a great open source model, please let us know! You can start a [discussion](https://github.com/openvinotoolkit/openvino_notebooks/discussions) or create an [issue](https://github.com/openvinotoolkit/openvino_notebooks/issues) on GitHub. ## Preparation ### Imports ``` import os import time import urllib from pathlib import Path import cv2 import numpy as np from IPython.display import ( HTML, FileLink, Pretty, ProgressBar, Video, clear_output, display, ) from openvino.inference_engine import IECore from pytube import YouTube ``` ### Settings ``` # Device to use for inference. For example, "CPU", or "GPU" DEVICE = "CPU" # 1032: 4x superresolution, 1033: 3x superresolution MODEL_FILE = "model/single-image-super-resolution-1032.xml" model_name = os.path.basename(MODEL_FILE) model_xml_path = Path(MODEL_FILE).with_suffix(".xml") ``` ### Functions ``` def write_text_on_image(image: np.ndarray, text: str) -> np.ndarray: """ Write the specified text in the top left corner of the image as white text with a black border. :param image: image as numpy arry with HWC shape, RGB or BGR :param text: text to write :return: image with written text, as numpy array """ font = cv2.FONT_HERSHEY_PLAIN org = (20, 20) font_scale = 4 font_color = (255, 255, 255) line_type = 1 font_thickness = 2 text_color_bg = (0, 0, 0) x, y = org image = cv2.UMat(image) (text_w, text_h), _ = cv2.getTextSize( text=text, fontFace=font, fontScale=font_scale, thickness=font_thickness ) result_im = cv2.rectangle( img=image, pt1=org, pt2=(x + text_w, y + text_h), color=text_color_bg, thickness=-1 ) textim = cv2.putText( img=result_im, text=text, org=(x, y + text_h + font_scale - 1), fontFace=font, fontScale=font_scale, color=font_color, thickness=font_thickness, lineType=line_type, ) return textim.get() def load_image(path: str) -> np.ndarray: """ Loads an image from `path` and returns it as BGR numpy array. :param path: path to an image filename or url :return: image as numpy array, with BGR channel order """ if path.startswith("http"): # Set User-Agent to Mozilla because some websites block requests # with User-Agent Python request = urllib.request.Request(url=path, headers={"User-Agent": "Mozilla/5.0"}) response = urllib.request.urlopen(url=request) array = np.asarray(bytearray(response.read()), dtype="uint8") image = cv2.imdecode(buf=array, flags=-1) # Loads the image as BGR else: image = cv2.imread(filename=path) return image def convert_result_to_image(result) -> np.ndarray: """ Convert network result of floating point numbers to image with integer values from 0-255. Values outside this range are clipped to 0 and 255. :param result: a single superresolution network result in N,C,H,W shape """ result = result.squeeze(0).transpose(1, 2, 0) result *= 255 result[result < 0] = 0 result[result > 255] = 255 result = result.astype(np.uint8) return result ``` ## Load the Superresolution Model Load the model in Inference Engine with `ie.read_network` and load it to the specified device with `ie.load_network` ``` ie = IECore() net = ie.read_network(model=model_xml_path) exec_net = ie.load_network(network=net, device_name=DEVICE) ``` Get information about network inputs and outputs. The Super Resolution model expects two inputs: 1) the input image, 2) a bicubic interpolation of the input image to the target size 1920x1080. It returns the super resolution version of the image in 1920x1800. ``` # Network inputs and outputs are dictionaries. Get the keys for the # dictionaries. original_image_key = list(exec_net.input_info)[0] bicubic_image_key = list(exec_net.input_info)[1] output_key = list(exec_net.outputs.keys())[0] # Get the expected input and target shape. `.dims[2:]` returns the height # and width. OpenCV's resize function expects the shape as (width, height), # so we reverse the shape with `[::-1]` and convert it to a tuple input_height, input_width = tuple(exec_net.input_info[original_image_key].tensor_desc.dims[2:]) target_height, target_width = tuple(exec_net.input_info[bicubic_image_key].tensor_desc.dims[2:]) upsample_factor = int(target_height / input_height) print(f"The network expects inputs with a width of {input_width}, " f"height of {input_height}") print(f"The network returns images with a width of {target_width}, " f"height of {target_height}") print( f"The image sides are upsampled by a factor {upsample_factor}. " f"The new image is {upsample_factor**2} times as large as the " "original image" ) ``` ## Superresolution on Video Download a YouTube\* video with PyTube and enhance the video quality with superresolution. By default only the first 100 frames of the video are processed. Change NUM_FRAMES in the cell below to modify this. **Note:** - The resulting video does not contain audio. - The input video should be a landscape video and have an an input resultion of 360p (640x360) for the 1032 model, or 480p (720x480) for the 1033 model. ### Settings ``` VIDEO_DIR = "data" OUTPUT_DIR = "output" os.makedirs(name=str(OUTPUT_DIR), exist_ok=True) # Number of frames to read from the input video. Set to 0 to read all frames. NUM_FRAMES = 100 # The format for saving the result video's # vp09 is slow, but widely available. If you have FFMPEG installed, you can # change the FOURCC to `*"THEO"` to improve video writing speed FOURCC = cv2.VideoWriter_fourcc(*"vp09") ``` ### Download and Prepare Video ``` # Use pytube to download a video. It downloads to the videos subdirectory. # You can also place a local video there and comment out the following lines VIDEO_URL = "https://www.youtube.com/watch?v=V8yS3WIkOrA" yt = YouTube(VIDEO_URL) # Use `yt.streams` to see all available streams. See the PyTube documentation # https://python-pytube.readthedocs.io/en/latest/api.html for advanced # filtering options try: os.makedirs(name=VIDEO_DIR, exist_ok=True) stream = yt.streams.filter(resolution="360p").first() filename = Path(stream.default_filename.encode("ascii", "ignore").decode("ascii")).stem stream.download(output_path=OUTPUT_DIR, filename=filename) print(f"Video {filename} downloaded to {OUTPUT_DIR}") # Create Path objects for the input video and the resulting videos video_path = Path(stream.get_file_path(filename, OUTPUT_DIR)) except Exception: # If PyTube fails, use a local video stored in the VIDEO_DIR directory video_path = Path(rf"{VIDEO_DIR}/CEO Pat Gelsinger on Leading Intel.mp4") # Path names for the result videos superres_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres.mp4") bicubic_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_bicubic.mp4") comparison_video_path = Path(f"{OUTPUT_DIR}/{video_path.stem}_superres_comparison.mp4") # Open the video and get the dimensions and the FPS cap = cv2.VideoCapture(filename=str(video_path)) ret, image = cap.read() if not ret: raise ValueError(f"The video at '{video_path}' cannot be read.") fps = cap.get(cv2.CAP_PROP_FPS) original_frame_height, original_frame_width = image.shape[:2] cap.release() print( f"The input video has a frame width of {original_frame_width}, " f"frame height of {original_frame_height} and runs at {fps:.2f} fps" ) ``` Create superresolution video, bicubic video and comparison video. The superresolution video contains the enhanced video, upsampled with superresolution, the bicubic video is the input video upsampled with bicubic interpolation, the combination video sets the bicubic video and the superresolution side by side. ``` superres_video = cv2.VideoWriter( filename=str(superres_video_path), fourcc=FOURCC, fps=fps, frameSize=(target_width, target_height), ) bicubic_video = cv2.VideoWriter( filename=str(bicubic_video_path), fourcc=FOURCC, fps=fps, frameSize=(target_width, target_height), ) comparison_video = cv2.VideoWriter( filename=str(comparison_video_path), fourcc=FOURCC, fps=fps, frameSize=(target_width * 2, target_height), ) ``` ### Do Inference Read video frames and enhance them with superresolution. Save the superresolution video, the bicubic video and the comparison video to file. The code in this cell reads the video frame by frame. Each frame is resized and reshaped to network input shape and upsampled with bicubic interpolation to target shape. Both the original and the bicubic image are propagated through the network. The network result is a numpy array with floating point values, with a shape of (1,3,1920,1080). This array is converted to an 8-bit image with shape (1080,1920,3) and written to `superres_video`. The bicubic image is written to `bicubic_video` for comparison. Lastly, the bicubic and result frames are combined side by side and written to `comparison_video`. A progress bar shows the progress of the process. Inference time is measured, as well as total time to process each frame, which includes inference time as well as the time it takes to process and write the video. ``` start_time = time.perf_counter() frame_nr = 1 total_inference_duration = 0 total_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT) if NUM_FRAMES == 0 else NUM_FRAMES progress_bar = ProgressBar(total=total_frames) progress_bar.display() cap = cv2.VideoCapture(filename=str(video_path)) try: while cap.isOpened(): ret, image = cap.read() if not ret: cap.release() break if NUM_FRAMES > 0 and frame_nr == NUM_FRAMES: break # Resize the input image to network shape and convert from (H,W,C) to # (N,C,H,W) resized_image = cv2.resize(src=image, dsize=(input_width, input_height)) input_image_original = np.expand_dims(resized_image.transpose(2, 0, 1), axis=0) # Resize and reshape the image to the target shape with bicubic # interpolation bicubic_image = cv2.resize( src=image, dsize=(target_width, target_height), interpolation=cv2.INTER_CUBIC ) input_image_bicubic = np.expand_dims(bicubic_image.transpose(2, 0, 1), axis=0) # Do inference inference_start_time = time.perf_counter() result = exec_net.infer( inputs={ original_image_key: input_image_original, bicubic_image_key: input_image_bicubic, } )[output_key] inference_stop_time = time.perf_counter() inference_duration = inference_stop_time - inference_start_time total_inference_duration += inference_duration # Transform inference result into an image result_frame = convert_result_to_image(result=result) # Write resulting image and bicubic image to video superres_video.write(image=result_frame) bicubic_video.write(image=bicubic_image) stacked_frame = np.hstack((bicubic_image, result_frame)) comparison_video.write(image=stacked_frame) frame_nr = frame_nr + 1 # Update progress bar and status message progress_bar.progress = frame_nr progress_bar.update() if frame_nr % 10 == 0: clear_output(wait=True) progress_bar.display() display( Pretty( f"Processed frame {frame_nr}. Inference time: " f"{inference_duration:.2f} seconds " f"({1/inference_duration:.2f} FPS)" ) ) except KeyboardInterrupt: print("Processing interrupted.") finally: superres_video.release() bicubic_video.release() comparison_video.release() end_time = time.perf_counter() duration = end_time - start_time print(f"Video's saved to {comparison_video_path.parent} directory.") print( f"Processed {frame_nr} frames in {duration:.2f} seconds. Total FPS " f"(including video processing): {frame_nr/duration:.2f}. " f"Inference FPS: {frame_nr/total_inference_duration:.2f}." ) ``` ### Show Side-by-Side Video of Bicubic and Superresolution Version ``` if not comparison_video_path.exists(): raise ValueError("The comparison video does not exist.") else: video_link = FileLink(comparison_video_path) video_link.html_link_str = "<a href='%s' download>%s</a>" display( HTML( f"Showing side by side comparison. If you cannot see the video in " "your browser, please click on the following link to download " f"the video<br>{video_link._repr_html_()}" ) ) display(Video(comparison_video_path, width=800, embed=True)) ```
github_jupyter
**Reinforcement Learning with TensorFlow & TRFL: Q Learning** * This notebook shows how to apply the classic Reinforcement Learning (RL) idea of Q learning with TRFL. * In TD learning we estimated state values: V(s). In Q learning we estimate action values: Q(s,a). Here we'll go over Q learning in the simple tabular case. Next section we will use this same Q learning function in powerful Deep Learning algorithms like Deep Q Network. * A key concept in RL is exploration. We'll introduce and use epsilon greedy exploration, which is often used with Q learning. Outline: 1. Install TRFL 2. Define the GridWorld environment 3. Discuss Epsilon-Greedy Exploration 4. Find the value of each state-action value in the environment using Q learning ``` #TRFL has issues on Colab with TensorFlow version tensorflow-1.13.0rc1 #install TensorFlow 1.12 and restart run time !pip install tensorflow==1.12 import os os.kill(os.getpid(), 9) #install TRFL !pip install trfl==1.0 #install Tensorflow Probability !pip install tensorflow-probability==0.5.0 ``` **GridWorld** The GridWorld environment is a four by four grid. The agent randomly starts on the grid and can move either up, left, right, or down. If the agent reaches the upper left or lower right the episode is over. Every action the agent takes gets a reward of -1 until you reach the upper left or over right. ``` #Environment from: https://github.com/dennybritz/reinforcement-learning/blob/cee9e78652f8ce98d6079282daf20680e5e17c6a/lib/envs/gridworld.py #define the environment import io import numpy as np import sys from gym.envs.toy_text import discrete import pprint UP = 0 RIGHT = 1 DOWN = 2 LEFT = 3 class GridworldEnv(discrete.DiscreteEnv): """ Grid World environment from Sutton's Reinforcement Learning book chapter 4. You are an agent on an MxN grid and your goal is to reach the terminal state at the top left or the bottom right corner. For example, a 4x4 grid looks as follows: T o o o o x o o o o o o o o o T x is your position and T are the two terminal states. You can take actions in each direction (UP=0, RIGHT=1, DOWN=2, LEFT=3). Actions going off the edge leave you in your current state. You receive a reward of -1 at each step until you reach a terminal state. """ metadata = {'render.modes': ['human', 'ansi']} def __init__(self, shape=[4,4]): if not isinstance(shape, (list, tuple)) or not len(shape) == 2: raise ValueError('shape argument must be a list/tuple of length 2') self.shape = shape nS = np.prod(shape) nA = 4 MAX_Y = shape[0] MAX_X = shape[1] P = {} grid = np.arange(nS).reshape(shape) it = np.nditer(grid, flags=['multi_index']) while not it.finished: s = it.iterindex y, x = it.multi_index # P[s][a] = (prob, next_state, reward, is_done) P[s] = {a : [] for a in range(nA)} is_done = lambda s: s == 0 or s == (nS - 1) reward = 0.0 if is_done(s) else -1.0 #reward = 1.0 if is_done(s) else 0.0 # We're stuck in a terminal state if is_done(s): P[s][UP] = [(1.0, s, reward, True)] P[s][RIGHT] = [(1.0, s, reward, True)] P[s][DOWN] = [(1.0, s, reward, True)] P[s][LEFT] = [(1.0, s, reward, True)] # Not a terminal state else: ns_up = s if y == 0 else s - MAX_X ns_right = s if x == (MAX_X - 1) else s + 1 ns_down = s if y == (MAX_Y - 1) else s + MAX_X ns_left = s if x == 0 else s - 1 P[s][UP] = [(1.0, ns_up, reward, is_done(ns_up))] P[s][RIGHT] = [(1.0, ns_right, reward, is_done(ns_right))] P[s][DOWN] = [(1.0, ns_down, reward, is_done(ns_down))] P[s][LEFT] = [(1.0, ns_left, reward, is_done(ns_left))] it.iternext() # Initial state distribution is uniform isd = np.ones(nS) / nS # We expose the model of the environment for educational purposes # This should not be used in any model-free learning algorithm self.P = P super(GridworldEnv, self).__init__(nS, nA, P, isd) def _render(self, mode='human', close=False): """ Renders the current gridworld layout For example, a 4x4 grid with the mode="human" looks like: T o o o o x o o o o o o o o o T where x is your position and T are the two terminal states. """ if close: return outfile = io.StringIO() if mode == 'ansi' else sys.stdout grid = np.arange(self.nS).reshape(self.shape) it = np.nditer(grid, flags=['multi_index']) while not it.finished: s = it.iterindex y, x = it.multi_index if self.s == s: output = " x " elif s == 0 or s == self.nS - 1: output = " T " else: output = " o " if x == 0: output = output.lstrip() if x == self.shape[1] - 1: output = output.rstrip() outfile.write(output) if x == self.shape[1] - 1: outfile.write("\n") it.iternext() pp = pprint.PrettyPrinter(indent=2) ``` **An Introduction to Exploration: Epsilon-Greedy Exploration** Exploration is a key concept in RL. In order to find the best policies, an agent needs to explore the environment. By exploring, the agent can experience new states and rewards. In the last notebook, the agent explored GridWorld by taking a random action at every step. While random action explorations can work in some environments, the downside is the agent can spend too much time exploring bad states or states that have already been explored fully and not enough time exploring promising states. A simple--yet surprisingly effective--approach to exploration is Epsilon-Greedy exploration. A epsilon percentage of the time, the agent chooses a random action. The remaining amount of the time (1-epsilon) the agent choose the best estimated action aka the* greedy action*. Epsilon can be a fixed value between 0 and 1 or can start at a high value and gradually decay over time (ie start at .99 and decay to 0.01). In this notebook we will used a fixed epsilon value of 0.1. Below is a simple example of epsilon-greedy exploration. ``` #declare the environment env = GridworldEnv() #reset the environment and get the agent's current position (observation) current_state = env.reset() env._render() print("") action_dict = {0:"UP",1:"RIGHT", 2:"DOWN",3:"LEFT"} greedy_dict = {0:3,1:3,2:3,3:3, 4:0,5:0,6:0,7:0, 8:2,9:2,10:2,11:2, 12:1,13:1,14:1,15:1} epsilon = 0.1 for i in range(10): #choose random action epsilon amount of the time if np.random.rand() < epsilon: action = env.action_space.sample() action_type = "random" else: #Choose a greedy action. We will learn greedy actions with Q learning in the following cells. action = greedy_dict[current_state] action_type = "greedy" current_state,reward,done,info = env.step(action) print("Agent took {} action {} and is now in state {} ".format(action_type, action_dict[action], current_state)) env._render() print("") if done: print("Agent reached end of episode, resetting the env") print(env.reset()) print("") env._render() print("") ``` ** TRFL Usage ** Once again, the three main TRFL steps are: 1. In the TensorFlow graph, define the necessary TensorFlow tensors 2. In the graph, feed the tensors into the trfl method 3. In the TensorFlow session, run the graph operation We saw this in the last notebook. Here in Q learning there are some slight differences. We use the trfl.qlearning() method and we input the action and action values (instead of state values) into the method. Note for the action values q_t and q_next_t the shape is batch size X number of actions. ``` #set up TRFL graph import tensorflow as tf import trfl #https://github.com/deepmind/trfl/blob/master/docs/trfl.md#qlearningq_tm1-a_tm1-r_t-pcont_t-q_t-nameqlearning # Args: # q_tm1: Tensor holding Q-values for first timestep in a batch of transitions, shape [B x num_actions]. # a_tm1: Tensor holding action indices, shape [B]. # r_t: Tensor holding rewards, shape [B]. # pcont_t: Tensor holding pcontinue values, shape [B]. # q_t: Tensor holding Q-values for second timestep in a batch of transitions, shape [B x num_actions]. # name: name to prefix ops created within this op. num_actions = env.action_space.n batch_size = 1 q_t = tf.placeholder(dtype=tf.float32,shape=[batch_size,num_actions],name="q_value") action_t = tf.placeholder(dtype=tf.int32,shape=[batch_size],name="action") reward_t = tf.placeholder(dtype=tf.float32,shape=[batch_size],name='reward') gamma_t = tf.placeholder(dtype=tf.float32,shape=[batch_size],name='discount_factor') q_next_t= tf.placeholder(dtype=tf.float32,shape=[batch_size,num_actions],name='q_next_value') qloss_t, q_extra_t = trfl.qlearning(q_t,action_t,reward_t,gamma_t,q_next_t) ``` ** The RL Training Loop ** In the next cell we are going to define the training loop and then run it in the following cell. The goal is to estimate the action value of each state (the value of each state-action combination) using Q learning. action_value_array holds the estimated values. After each step the agent takes in the env, we update the action_value_array with the Q learning formula. ** TRFL Usage ** The TRFL usage here is to run the trfl operation q_learning_t in sess.run(). We then take the output (q_learning_output) and extract the td_error part of that tensor. Using the td_error we update the action_value_array. For reference, the code below shows the full output of trfl.qlearning and the classic RL method of performing tabular Q learning updates. ``` def q_learning_action_value_estimate(env,episodes=1000,alpha=0.05,discount_factor=1.0,epsilon=0.1): """ Args: env: OpenAI env. env.P represents the transition probabilities of the environment. env.P[s][a] is a list of transition tuples (prob, next_state, reward, done). env.nS is a number of states in the environment. env.nA is a number of actions in the environment. episodes: number of episodes to run alpha: learning rate for state value updates discount_factor: Gamma discount factor. pcont_t TRFL argument Returns: Value of each state with random policy """ with tf.Session() as sess: #initialize the estimated state values to zero action_value_array = np.zeros((env.nS,env.nA)) #reset the env current_state = env.reset() #env._render() #run through each episode taking a random action each time #upgrade estimated state value after each action current_episode = 0 while current_episode < episodes: #choose action based on epsilon-greedy policy if np.random.rand() < epsilon: eg_action = env.action_space.sample() else: #Choose a greedy action. We will learn greedy actions with Q learning in the following cells. eg_action = np.argmax(action_value_array[current_state]) #take a step using epsilon-greedy action next_state, rew, done, info = env.step(eg_action) #run TRFL operation in the session q_learning_output = sess.run([q_extra_t],feed_dict={q_t:np.expand_dims(action_value_array[current_state],axis=0), action_t:np.expand_dims(eg_action,axis=0), reward_t:np.expand_dims(rew,axis=0), gamma_t:np.expand_dims(discount_factor,axis=0), q_next_t:np.expand_dims(action_value_array[next_state],axis=0)}) # trfl.qlearning() returns: # A namedtuple with fields: # loss: a tensor containing the batch of losses, shape [B]. # extra: a namedtuple with fields: # target: batch of target values for q_tm1[a_tm1], shape [B]. # td_error: batch of temporal difference errors, shape [B]. # Here we are using the td_error to update our action values. We will use the loss with a gradient descent optimizer in Deep Q Network session. #Use the Q learning TD error to update estimated state-action values action_value_array[current_state,eg_action] = action_value_array[current_state,eg_action] + alpha * q_learning_output[0].td_error #For reference, here is the tabular Q learning update method # max_q_value = np.max(action_value_array[next_state]) # action_value_array[current_state,eg_action] = action_value_array[current_state,eg_action] + \ # alpha * (rew + discount_factor*max_q_value - action_value_array[current_state,eg_action]) #if the epsiode is done, reset the env, if not the next state becomes the current state and the loop repeats if done: current_state = env.reset() current_episode += 1 else: current_state = next_state return action_value_array #run episodes with Q learning and get the state value estimates action_values = q_learning_action_value_estimate(env,episodes=2000,alpha=0.1) print("All Action Value Estimates:") print(np.round(action_values.reshape((16,4)),1)) print("each row is a state, each column is an action") print("") optimal_action_estimates = np.max(action_values,axis=1) print("Optimal Action Value Estimates:") print(np.round(optimal_action_estimates.reshape(env.shape),1)) print("estimate of the optimal State value at each state") print("") ``` The first output shows the estimated value for each action in each state. Ie row 4 column 4 is the value if the agent was in the upper right grid cell and took that action left. In the second output, we take the best action for each of the 16 states and show the agent's estimate of the state value assuming the agent always acts greedily. ``` ```
github_jupyter
# Data Preparation and Advanced Model Evaluation ## Agenda **Data preparation** - Handling missing values - Handling categorical features (review) **Advanced model evaluation** - ROC curves and AUC - Bonus: ROC curve is only sensitive to rank order of predicted probabilities - Cross-validation ## Part 1: Handling missing values scikit-learn models expect that all values are **numeric** and **hold meaning**. Thus, missing values are not allowed by scikit-learn. ``` # read the Titanic data import pandas as pd url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/titanic.csv' titanic = pd.read_csv(url, index_col='PassengerId') titanic.shape # check for missing values titanic.isnull().sum() ``` One possible strategy is to **drop missing values**: ``` # drop rows with any missing values titanic.dropna().shape # drop rows where Age is missing titanic[titanic.Age.notnull()].shape ``` Sometimes a better strategy is to **impute missing values**: ``` # mean Age titanic.Age.mean() # median Age titanic.Age.median() # most frequent Age titanic.Age.mode() # fill missing values for Age with the median age titanic.Age.fillna(titanic.Age.median(), inplace=True) ``` Another strategy would be to build a **KNN model** just to impute missing values. How would we do that? If values are missing from a categorical feature, we could treat the missing values as **another category**. Why might that make sense? How do we **choose** between all of these strategies? ## Part 2: Handling categorical features (Review) How do we include a categorical feature in our model? - **Ordered categories:** transform them to sensible numeric values (example: small=1, medium=2, large=3) - **Unordered categories:** use dummy encoding (0/1) ``` titanic.head(10) # encode Sex_Female feature titanic['Sex_Female'] = titanic.Sex.map({'male':0, 'female':1}) # create a DataFrame of dummy variables for Embarked embarked_dummies = pd.get_dummies(titanic.Embarked, prefix='Embarked') embarked_dummies.drop(embarked_dummies.columns[0], axis=1, inplace=True) # concatenate the original DataFrame and the dummy DataFrame titanic = pd.concat([titanic, embarked_dummies], axis=1) titanic.head(1) ``` - How do we **interpret** the encoding for Embarked? - Why didn't we just encode Embarked using a **single feature** (C=0, Q=1, S=2)? - Does it matter which category we choose to define as the **baseline**? - Why do we only need **two dummy variables** for Embarked? ``` # define X and y feature_cols = ['Pclass', 'Parch', 'Age', 'Sex_Female', 'Embarked_Q', 'Embarked_S'] X = titanic[feature_cols] y = titanic.Survived # train/test split from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1) # train a logistic regression model from sklearn.linear_model import LogisticRegression logreg = LogisticRegression(C=1e9) logreg.fit(X_train, y_train) # make predictions for testing set y_pred_class = logreg.predict(X_test) # calculate testing accuracy from sklearn import metrics print metrics.accuracy_score(y_test, y_pred_class) ``` ## Part 3: ROC curves and AUC ``` # predict probability of survival y_pred_prob = logreg.predict_proba(X_test)[:, 1] %matplotlib inline import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = (8, 6) plt.rcParams['font.size'] = 14 # plot ROC curve fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred_prob) plt.plot(fpr, tpr) plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.0]) plt.xlabel('False Positive Rate (1 - Specificity)') plt.ylabel('True Positive Rate (Sensitivity)') # calculate AUC print metrics.roc_auc_score(y_test, y_pred_prob) ``` Besides allowing you to calculate AUC, seeing the ROC curve can help you to choose a threshold that **balances sensitivity and specificity** in a way that makes sense for the particular context. ``` # histogram of predicted probabilities grouped by actual response value df = pd.DataFrame({'probability':y_pred_prob, 'actual':y_test}) df.hist(column='probability', by='actual', sharex=True, sharey=True) ``` What would have happened if you had used **y_pred_class** instead of **y_pred_prob** when drawing the ROC curve or calculating AUC? ``` # ROC curve using y_pred_class - WRONG! fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred_class) plt.plot(fpr, tpr) # AUC using y_pred_class - WRONG! print metrics.roc_auc_score(y_test, y_pred_class) ``` If you use **y_pred_class**, it will interpret the zeros and ones as predicted probabilities of 0% and 100%. ## Bonus: ROC curve is only sensitive to rank order of predicted probabilities ``` # print the first 10 predicted probabilities y_pred_prob[:10] # take the square root of predicted probabilities (to make them all bigger) import numpy as np y_pred_prob_new = np.sqrt(y_pred_prob) # print the modified predicted probabilities y_pred_prob_new[:10] # histogram of predicted probabilities has changed df = pd.DataFrame({'probability':y_pred_prob_new, 'actual':y_test}) df.hist(column='probability', by='actual', sharex=True, sharey=True) # ROC curve did not change fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred_prob_new) plt.plot(fpr, tpr) # AUC did not change print metrics.roc_auc_score(y_test, y_pred_prob_new) ``` ## Part 4: Cross-validation ``` # calculate cross-validated AUC from sklearn.cross_validation import cross_val_score cross_val_score(logreg, X, y, cv=10, scoring='roc_auc').mean() # add Fare to the model feature_cols = ['Pclass', 'Parch', 'Age', 'Sex_Female', 'Embarked_Q', 'Embarked_S', 'Fare'] X = titanic[feature_cols] # recalculate AUC cross_val_score(logreg, X, y, cv=10, scoring='roc_auc').mean() ```
github_jupyter
# Tensorboard example ``` import time from collections import namedtuple import numpy as np import tensorflow as tf with open('anna.txt', 'r') as f: text=f.read() vocab = set(text) vocab_to_int = {c: i for i, c in enumerate(vocab)} int_to_vocab = dict(enumerate(vocab)) encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32) text[:100] encoded[:100] ``` Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from. ``` len(vocab) def get_batches(arr, n_seqs, n_steps_per_seq): '''Create a generator that returns batches of size n_seqs x n_steps from arr. Arguments --------- arr: Array you want to make batches from n_seqs: Batch size, the number of sequences per batch n_steps: Number of sequence steps per batch ''' # Get the batch size and number of batches we can make # ie n_seq = 10, n_steps_per_sew = 2, batch_size = 20 batch_size = n_seqs * n_steps_per_seq # ie arr= 40, over 20, so 2 batches n_batches = len(arr) // batch_size # Keep only enough characters to make full batches # n_batches = 2 * batch_size = 20 = 40?? # why not simply use len(arr)? arr = arr[ : n_batches * batch_size] # Reshape into n_seqs rows arr = arr.reshape((n_seqs, -1)) for n in range(0, arr.shape[1], n_steps_per_seq): # The features x = arr[ :, n: n + n_steps_per_seq] # The targets, shifted by one y = np.zeros_like(x) y[ :, : -1], y[ : , -1] = x[ :, 1: ], x[ :, 0] yield x, y batches = get_batches(encoded, 10, 50) x, y = next(batches) def build_inputs(batch_size, num_steps): ''' Define placeholders for inputs, targets, and dropout Arguments --------- batch_size: Batch size, number of sequences per batch num_steps: Number of sequence steps in a batch ''' with tf.name_scope('inputs'): # Declare placeholders we'll feed into the graph inputs = tf.placeholder(tf.int32, (batch_size, num_steps), name="inputs") targets = tf.placeholder(tf.int32, (batch_size, num_steps), name="targets") # Keep probability placeholder for drop out layers keep_prob = tf.placeholder(tf.float32, name='keep_prob') return inputs, targets, keep_prob def single_lstm_cell(lstm_size, keep_prob): with tf.name_scope("RNN_layers"): lstm = tf.contrib.rnn.NASCell(lstm_size, reuse = tf.get_variable_scope().reuse) # Add dropout to the cell outputs drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob = keep_prob) return drop def build_lstm(lstm_size, num_layers, batch_size, keep_prob): ''' Build LSTM cell. Arguments --------- keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability lstm_size: Size of the hidden layers in the LSTM cells num_layers: Number of LSTM layers batch_size: Batch size ''' ### Build the LSTM Cell # Stack up multiple LSTM layers, for deep learning with tf.name_scope("RNN_layers"): rnn_cells = tf.contrib.rnn.MultiRNNCell([single_lstm_cell(lstm_size, keep_prob) for _ in range(num_layers)], state_is_tuple = True) with tf.name_scope("RNN_init_state"): initial_state = rnn_cells.zero_state(batch_size, tf.float32) return rnn_cells, initial_state def build_output(lstm_output, in_size, out_size): ''' Build a softmax layer, return the softmax output and logits. Arguments --------- lstm_output: List of output tensors from the LSTM layer in_size: Size of the input tensor, for example, size of the LSTM cells out_size: Size of this softmax layer ''' # Reshape output so it's a bunch of rows, one row for each step for each sequence. # Concatenate lstm_output over axis 1 (the columns) # ie t1 = t1 = [[1, 2, 3], [4, 5, 6]] # t2 = [[7, 8, 9], [10, 11, 12]] # tf.concat([t1, t2], 1) ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]] seq_output = tf.concat(lstm_output, axis=1) # Reshape seq_output to a 2D tensor with lstm_size columns x = tf.reshape(lstm_output, [-1, in_size]) # Connect the RNN outputs to a softmax layer with tf.variable_scope('softmax'): # Create the weight and bias variables here softmax_w = tf.Variable(tf.truncated_normal( (in_size, out_size), stddev=0.1)) softmax_b = tf.Variable(tf.zeros( out_size )) # tensorboard tf.summary.histogram("softmax_w", softmax_w) # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch # of rows of logit outputs, one for each step and sequence logits = tf.matmul(x, softmax_w) + softmax_b # Use softmax to get the probabilities for predicted characters out = tf.nn.softmax(logits, name="predictions") tf.summary.histogram("predictions", out) return out, logits def build_loss(logits, targets, lstm_size, num_classes): ''' Calculate the loss from the logits and the targets. Arguments --------- logits: Logits from final fully connected layer targets: Targets for supervised learning lstm_size: Number of LSTM hidden units num_classes: Number of classes in targets ''' # One-hot encode targets and reshape to match logits, one row per sequence per step y_one_hot = tf.one_hot(targets, num_classes) y_reshaped = tf.reshape( y_one_hot, logits.get_shape() ) # Softmax cross entropy loss loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped) loss = tf.reduce_mean(loss) # tensorboard tf.summary.scalar('loss', loss) return loss def build_optimizer(loss, learning_rate, grad_clip): ''' Build optmizer for training, using gradient clipping. Arguments: loss: Network loss learning_rate: Learning rate for optimizer ''' # Optimizer for training, using gradient clipping to control exploding gradients tvars = tf.trainable_variables() grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip) train_op = tf.train.AdamOptimizer(learning_rate) optimizer = train_op.apply_gradients(zip(grads, tvars)) return optimizer class CharRNN: def __init__(self, num_classes, batch_size=64, num_steps=50, lstm_size=128, num_layers=2, learning_rate=0.001, grad_clip=5, sampling=False): # When we're using this network for sampling later, we'll be passing in # one character at a time, so providing an option for that if sampling == True: batch_size, num_steps = 1, 1 else: batch_size, num_steps = batch_size, num_steps tf.reset_default_graph() # Build the input placeholder tensors self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps) x_one_hot = tf.one_hot(self.inputs, num_classes, name="x_one_hot") with tf.name_scope("RNN_layers"): # Build the LSTM cell cells, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob) ### Run the data through the RNN layers with tf.name_scope("RNN_forward"): # Run each sequence step through the RNN with tf.nn.dynamic_rnn outputs, state = tf.nn.dynamic_rnn(cells, x_one_hot, initial_state=self.initial_state) self.final_state = state # Get softmax predictions and logits self.prediction, self.logits = build_output(outputs, lstm_size, num_classes) # Loss and optimizer (with gradient clipping) self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes) self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip) batch_size = 64 # Sequences per batch num_steps = 128 # Number of sequence steps per batch lstm_size = 512 # Size of hidden layers in LSTMs num_layers = 2 # Number of LSTM layers learning_rate = 0.001 # Learning rate keep_prob = 0.5 # Dropout keep probability model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps, lstm_size=lstm_size, num_layers=num_layers, learning_rate=learning_rate) epochs = 3 # Save every N iterations save_every_n = 200 saver = tf.train.Saver(max_to_keep=100) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # Tensoboard train_writer = tf.summary.FileWriter('./logs/1/train', sess.graph) test_writer = tf.summary.FileWriter('./logs/1/test') # Use the line below to load a checkpoint and resume training #saver.restore(sess, 'checkpoints/______.ckpt') counter = 0 for e in range(epochs): # Train network new_state = sess.run(model.initial_state) loss = 0 for x, y in get_batches(encoded, batch_size, num_steps): counter += 1 start = time.time() feed = {model.inputs: x, model.targets: y, model.keep_prob: keep_prob, model.initial_state: new_state} merged = tf.summary.merge_all() # Tensorboard summary, batch_loss, new_state, _ = sess.run([merged, model.loss, model.final_state, model.optimizer], feed_dict=feed) train_writer.add_summary(summary, counter) end = time.time() print('Epoch: {}/{}... '.format(e+1, epochs), 'Training Step: {}... '.format(counter), 'Training loss: {:.4f}... '.format(batch_loss), '{:.4f} sec/batch'.format((end-start))) if (counter % save_every_n == 0): saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size)) saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size)) ``` #### Saved checkpoints Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables ``` tf.train.get_checkpoint_state('checkpoints') ``` ## Sampling Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that. The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters. ``` def pick_top_n(preds, vocab_size, top_n=5): p = np.squeeze(preds) p[np.argsort(p)[:-top_n]] = 0 p = p / np.sum(p) c = np.random.choice(vocab_size, 1, p=p)[0] return c def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "): samples = [c for c in prime] model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True) saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, checkpoint) new_state = sess.run(model.initial_state) for c in prime: x = np.zeros((1, 1)) x[0,0] = vocab_to_int[c] feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.prediction, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) for i in range(n_samples): x[0,0] = c feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.prediction, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) return ''.join(samples) ``` Here, pass in the path to a checkpoint and sample from the network. ``` tf.train.latest_checkpoint('checkpoints') checkpoint = tf.train.latest_checkpoint('checkpoints') samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = 'checkpoints/i200_l512.ckpt' samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = 'checkpoints/i600_l512.ckpt' samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = 'checkpoints/i1200_l512.ckpt' samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) ```
github_jupyter
# The Python ecosystem - The pandas library The [pandas library](https://pandas.pydata.org/) was created by [Wes McKinney](http://wesmckinney.com/) in 2010. pandas provides **data structures** and **functions** for manipulating, processing, cleaning and crunching data. In the Python ecosystem pandas is the state-of-the-art tool for working with tabular or spreadsheet-like data in which each column may be a different type (`string`, `numeric`, `date`, or otherwise). pandas provides sophisticated indexing functionality to make it easy to reshape, slice and dice, perform aggregations, and select subsets of data. pandas relies on other packages, such as [NumPy](http://www.numpy.org/) and [SciPy](https://scipy.org/scipylib/index.html). Further pandas integrates [matplotlib](https://matplotlib.org/) for plotting. If you are new to pandas we strongly recommend to visit the very well written [__pandas tutorials__](https://pandas.pydata.org/pandas-docs/stable/tutorials.html), which cover all relevant sections for new users to properly get started. Once installed (for details refer to the [documentation](https://pandas.pydata.org/pandas-docs/stable/install.html)), pandas is imported by using the canonical alias `pd`. ``` import pandas as pd ``` The pandas library has two workhorse data structures: __*Series*__ and __*DataFrame*__. * one dimensional `pd.Series` object * two dimensional `pd.DataFrame` object *** ## The `pd.Series` object Data generation ``` # import the random module from numpy from numpy import random # set seed for reproducibility random.seed(123) # generate 26 random integers between -10 and 10 my_data = random.randint(low=-10, high=10, size=26) # print the data my_data ``` A Series is a one-dimensional array-like object containing an array of data and an associated array of data labels, called its _index_. We create a `pd.Series` object by calling the `pd.Series()` function. ``` # Uncomment to look up the documentation # ?pd.Series # docstring # ??pd.Series # source # create a pd.Series object s = pd.Series(data=my_data) s type(s) ``` *** ### `pd.Series` attributes Python objects in general and the `pd.Series` in particular offer useful object-specific *attributes*. * _attribute_ $\to$ `OBJECT.attribute` $\qquad$ _Note that the attribute is called without parenthesis_ ``` s.dtypes s.index ``` We can use the `index` attribute to assign an index to a `pd.Series` object. Consider the letters of the alphabet.... ``` import string letters = string.ascii_uppercase letters ``` By providing an array-type object we assign a new index to the `pd.Series` object. ``` s.index = [l for l in letters] s.index s ``` *** ### `pd.Series` methods Methods are functions that are called using the attribute notation. Hence they are called by appending a dot (`.`) to the Python object, followed by the name of the method, parentheses `()` and in case one or more arguments (`arg`). * _method_ $\to$ `OBJECT.method_name(arg1, arg2, ...)` ``` s.sum() s.mean() s.max() s.min() s.median() s.quantile(q=0.5) s.quantile(q=[0.25, 0.5, 0.75]) ``` *** ### Element-wise arithmetic A very useful feature of `pd.Series` objects is that we may apply arithmetic operations *element-wise*. ``` s*0.1 #s+10 #10/s #s**2 #(2+s)*1**3 ``` *** ### Selection and Indexing Another main data operation is indexing and selecting particular subsets of the data object. pandas comes with a very [rich set of methods](https://pandas.pydata.org/pandas-docs/stable/indexing.html) for these type of tasks. In its simplest form we index a Series numpy-like, by using the `[]` operator to select a particular `index` of the Series. ``` s[3] s[2:6] s["C"] s["C":"K"] ``` *** ## The `pd.DataFrame` object The primary pandas data structure is the `DataFrame`. It is a two-dimensional size-mutable, potentially heterogeneous tabular data structure with both row and column labels. Arithmetic operations align on both row and column labels. Basically, the `DataFrame` can be thought of as a `dictionary`-like container for Series objects. **Generate a `DataFrame` object from scratch** pandas facilitates the import of different data types and sources, however, for the sake of this tutorial we generate a `DataFrame` object from scratch. Source: http://duelingdata.blogspot.de/2016/01/the-beatles.html ``` df = pd.DataFrame({"id" : range(1,5), "Name" : ["John", "Paul", "George", "Ringo"], "Last Name" : ["Lennon", "McCartney", "Harrison", "Star"], "dead" : [True, False, True, False], "year_born" : [1940, 1942, 1943, 1940], "no_of_songs" : [62, 58, 24, 3] }) df ``` *** ### `pd.DataFrame` attributes ``` df.dtypes # axis 0 df.columns # axis 1 df.index ``` *** ### `pd.DataFrame` methods **Get a quick overview of the data set** ``` df.info() df.describe() df.describe(include="all") ``` **Change index to the variable `id`** ``` df df.set_index("id") df ``` Note that nothing changed!! For the purpose of memory and computation efficiency `pandas` returns a view of the object, rather than a copy. Hence, if we want to make a permanent change we have to assign/reassign the object to a variable: df = df.set_index("id") or, some methods have the `inplace=True` argument: df.set_index("id", inplace=True) ``` df = df.set_index("id") df ``` **Arithmetic methods** ``` df.sum() df.sum(axis=1) ``` #### `groupby` method [Hadley Wickham 2011: The Split-Apply-Combine Strategy for Data Analysis, Journal of Statistical Software, 40(1)](https://www.jstatsoft.org/article/view/v040i01) <img src="./_img/split-apply-combine.svg" width="600"> Image source: [Jake VanderPlas 2016, Data Science Handbook](https://jakevdp.github.io/PythonDataScienceHandbook/) ``` df df.groupby("dead") df.groupby("dead").sum() df.groupby("dead")["no_of_songs"].sum() df.groupby("dead")["no_of_songs"].mean() df.groupby("dead")["no_of_songs"].agg(["mean", "max", "min"]) ``` #### Family of `apply`/`map` methods * `apply` works on a row (`axis=0`, default) / column (`axis=1`) basis of a `DataFrame` * `applymap` works __element-wise__ on a `DataFrame` * `map` works __element-wise__ on a `Series`. ``` df # (axis=0, default) df[["Last Name", "Name"]].apply(lambda x: x.sum()) # (axis=1) df[["Last Name", "Name"]].apply(lambda x: x.sum(), axis=1) ``` _... maybe a more useful case..._ ``` df.apply(lambda x: " ".join(x[["Name", "Last Name"]]), axis=1) ``` *** ### Selection and Indexing **Column index** ``` df["Name"] df[["Name", "Last Name"]] df.dead ``` **Row index** In addition to the `[]` operator pandas ships with other indexing operators such as `.loc[]` and `.iloc[]`, among others. * `.loc[]` is primarily __label based__, but may also be used with a boolean array. * `iloc[]` is primarily __integer position based__ (from 0 to length-1 of the axis), but may also be used with a boolean array. ``` df.head(2) df.loc[1] df.iloc[1] ``` **Row and Columns indices** `df.loc[row, col]` ``` df.loc[1, "Last Name"] df.loc[2:4, ["Name", "dead"]] ``` **Logical indexing** ``` df df.no_of_songs > 50 df.loc[df.no_of_songs > 50] df.loc[(df.no_of_songs > 50) & (df.year_born >= 1942)] df.loc[(df.no_of_songs > 50) & (df.year_born >= 1942), ["Last Name", "Name"]] ``` *** ### Manipulating columns, rows and particular entries **Add a row to the data set** ``` from numpy import nan df.loc[5] = ["Mouse", "Mickey", nan, nan, 1928] df df.dtypes ``` _Note that the variable `dead` changed. Its values changed from `True`/`False` to `1.0`/`0.0`. Consequently its `dtype` changed from `bool` to `float64`._ **Add a column to the data set** ``` pd.datetime.today() now = pd.datetime.today().year now df["age"] = now - df.year_born df ``` **Change a particular entry** ``` df.loc[5, "Name"] = "Mini" df ``` *** ## Plotting The plotting functionality in pandas is built on top of matplotlib. It is quite convenient to start the visualization process with basic pandas plotting and to switch to matplotlib to customize the pandas visualization. ### `plot` method ``` # this call causes the figures to be plotted below the code cells % matplotlib inline df df[["no_of_songs", "age"]].plot() df["dead"].plot.hist() df["age"].plot.bar() ``` ## ...some notes on plotting with Python Plotting is an essential component of data analysis. However, the Python visualization world can be a frustrating place. There are many different options and choosing the right one is a challenge. (If you dare take a look at the [Python Visualization Landscape](https://github.com/rougier/python-visualization-landscape).) [matplotlib](https://matplotlib.org/) is probably the most well known 2D plotting Python library. It allows to produce publication quality figures in a variety of formats and interactive environments across platforms. However, matplotlib is the cause of frustration due to the complex syntax and due to existence of two interfaces, a __MATLAB like state-based interface__ and an __object-oriented interface__. Hence, __there is always more than one way to build a visualization__. Another source of confusion is that matplotlib is well integrated into other Python libraries, such as [pandas](http://pandas.pydata.org/index.html), [seaborn](http://seaborn.pydata.org/index.html), [xarray](http://xarray.pydata.org/en/stable/), among others. Hence, there is confusion as when to use pure matplotlib or a tool that is built on top of matplotlib. We import the `matplotlib` library and matplotlib's `pyplot` module using the canonical commands import matplotlib as mpl import matplotlib.pyplot as plt With respect to matplotlib terminology it is important to understand that the __`Figure`__ is the final image that may contain one or more axes, and that the __`Axes`__ represents an individual plot. To create a `Figure` object we call fig = plt.figure() However, a more convenient way to create a `Figure` object and an `Axes` object at once, is to call fig, ax = plt.subplots() Then we can use the `Axes` object to add data for ploting. ``` import matplotlib.pyplot as plt # create a Figure and Axes object fig, ax = plt.subplots(figsize=(10,5)) # plot the data and reference the Axes object df["age"].plot.bar(ax=ax) # add some customization to the Axes object ax.set_xticklabels(df.Name, rotation=0) ax.set_xlabel("") ax.set_ylabel("Age", size=14) ax.set_title("The Beatles and ... something else", size=18); ``` Note that we are only scratching the surface of the plotting capabilities with pandas. Refer to the pandas online documentation ([here](https://pandas.pydata.org/pandas-docs/stable/visualization.html)) for a comprehensive overview.
github_jupyter
# Run AwareDX ad-hoc on any drug and adverse event ``` from os import path from collections import Counter, defaultdict from tqdm.notebook import tqdm import numpy as np import pandas as pd import feather import scipy.stats from scipy import stats import pymysql import pymysql.cursors from database import Database from utils import Utils from drug import Drug u = Utils() db = Database('Mimir from Munnin') np.random.seed(u.RANDOM_STATE) def compile(results): results = results.dropna() results = results.reset_index() num_tests = results.shape[0] results.loc[:,'bonf_p_value'] = results.get('p_value') * num_tests #results = results.query('bonf_p_value<1') drug_adr_pairs = results.get(['drug','itr','adr']).groupby(by=['drug','adr']).count().query('itr==25').reset_index().get(['drug', 'adr']) scores = pd.DataFrame(columns=['drug', 'adr', 'p_val_min', 'p_val_med', 'p_val_max', 'logROR_avg','logROR_ci95_low', 'logROR_ci95_upp']).set_index(['drug','adr']) def mean_confidence_interval(data, confidence=0.95): a = 1.0 * np.array(data) n = len(a) m, se = np.mean(a), scipy.stats.sem(a) h = se * scipy.stats.t.ppf((1 + confidence) / 2., n-1) return m, m-h, m+h for _, (drug, adr) in tqdm(drug_adr_pairs.iterrows(), total=drug_adr_pairs.shape[0]): data = results.query('drug==@drug and adr==@adr') bonf_p = data['bonf_p_value'].values scores.at[(drug, adr), 'p_val_min'] = np.min(bonf_p) scores.at[(drug, adr), 'p_val_med'] = np.median(bonf_p) scores.at[(drug, adr), 'p_val_max'] = np.max(bonf_p) logROR = data['logROR'].values mean, lower, upper = mean_confidence_interval(logROR) scores.at[(drug, adr), 'logROR_avg'] = mean scores.at[(drug, adr), 'logROR_ci95_low'] = lower scores.at[(drug, adr), 'logROR_ci95_upp'] = upper scores = scores.reset_index() name_atc4, name_atc5, name_hlgt, name_soc, name_pt = defaultdict(str), defaultdict(str), defaultdict(str), defaultdict(str), defaultdict(str) for id_, name in db.run('select * from atc_4_name'): name_atc4[str(id_)] = name for id_, name in db.run('select * from atc_5_name'): name_atc5[str(id_)] = name for id_, name in db.run('select * from hlgt_name'): name_hlgt[id_] = name for id_, name in db.run('select * from soc_name'): name_soc[id_] = name for id_, name in db.run('select * from pt_name'): name_pt[id_] = name scores['drug_name'] = '' scores['drug_class'] = 0 scores = scores.set_index('drug') for id_ in np.unique(scores.index): if name_atc4[id_]: scores.at[id_, 'drug_name'] = name_atc4[id_] scores.at[id_, 'drug_class'] = 4 else: scores.at[id_, 'drug_name'] = name_atc5[id_] scores.at[id_, 'drug_class'] = 5 scores = scores.reset_index() scores['adr_name'] = '' scores['adr_class'] = '' scores = scores.set_index('adr') for id_ in np.unique(scores.index): if name_soc[id_]: scores.at[id_, 'adr_name'] = name_soc[id_] scores.at[id_, 'adr_class'] = 'soc' elif name_hlgt[id_]: scores.at[id_, 'adr_name'] = name_hlgt[id_] scores.at[id_, 'adr_class'] = 'hlgt' elif name_pt[id_]: scores.at[id_, 'adr_name'] = name_pt[id_] scores.at[id_, 'adr_class'] = 'pt' scores = scores.reset_index() return scores drug_name = input(' Enter ATC drug name: ') q_atc5 = "select atc_5_id from atc_5_name where atc_5_name=\'"+drug_name+"\'" q_atc4 = "select atc_4_id from atc_4_name where atc_4_name=\'"+drug_name+"\'" try: if db.get_list(q_atc5): drugID = db.get_list(q_atc5)[0] else: drugID = db.get_list(q_atc4)[0] except: raise NameError("drug not found") if not drugID: raise NameError("drug not found") adr_name = input(' Enter MedDRA outcome name: ') q = "select meddra_concept_id from pt_name where meddra_concept_name=\'"+adr_name+"\'" try: adrID = db.get_list(q) except: raise NameError("adr not found") if not adrID: raise NameError("adr not found") filename = 'Ad_Hoc/'+str(drugID)+'_'+str(adrID) print("Checking for {}".format(filename)) if path.exists(u.DATA_PATH+filename+'.feather'): results = u.load_df(filename) print("Found!") else: print("Not found, running ad-hoc") iterations=25 drug = Drug(drugID, adrID) for itr in tqdm(range(1, iterations+1)): drug.match() drug.count_adr() drug.assign_abcd(itr) drug.do_chi_square() drug.calc_logROR() drug.reset_for_next_itr() assert drug.ensure_results(itr) results = compile(drug.results) u.save_df(results, filename) u.print_table(results) results ```
github_jupyter
**Notas para contenedor de docker:** Comando de docker para ejecuciรณn de la nota de forma local: nota: cambiar `dir_montar` por la ruta de directorio que se desea mapear a `/datos` dentro del contenedor de docker. ``` dir_montar=<ruta completa de mi mรกquina a mi directorio>#aquรญ colocar la ruta al directorio a montar, por ejemplo: #dir_montar=/Users/erick/midirectorio. ``` Ejecutar: ``` $docker run --rm -v $dir_montar:/datos --name jupyterlab_prope_r_kernel_tidyverse -p 8888:8888 -d palmoreck/jupyterlab_prope_r_kernel_tidyverse:2.1.4 ``` Ir a `localhost:8888` y escribir el password para jupyterlab: `qwerty` Detener el contenedor de docker: ``` docker stop jupyterlab_prope_r_kernel_tidyverse ``` Documentaciรณn de la imagen de docker `palmoreck/jupyterlab_prope_r_kernel_tidyverse:2.1.4` en [liga](https://github.com/palmoreck/dockerfiles/tree/master/jupyterlab/prope_r_kernel_tidyverse). --- Para ejecuciรณn de la nota usar: [docker](https://www.docker.com/) (instalaciรณn de forma **local** con [Get docker](https://docs.docker.com/install/)) y ejecutar comandos que estรกn al inicio de la nota de forma **local**. [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/palmoreck/dockerfiles-for-binder/jupyterlab_prope_r_kernel_tidyerse?urlpath=lab/tree/Propedeutico/Python/clases/3_algebra_lineal/2_interpolacion.ipynb) esta opciรณn crea una mรกquina individual en un servidor de Google, clona el repositorio y permite la ejecuciรณn de los notebooks de jupyter. [![Run on Repl.it](https://repl.it/badge/github/palmoreck/dummy)](https://repl.it/languages/python3) Esta opciรณn no clona el repositorio, no ejecuta los notebooks de jupyter pero permite ejecuciรณn de instrucciones de Python de forma colaborativa con [repl.it](https://repl.it/). Al dar click se crearรกn nuevos ***repl*** debajo de sus users de ***repl.it***. **Nota importante: Para esta nota hay que usar el jupyter notebook clรกsico. Si estรกn en jupyterlab deben dar click en la tab de *Help* y ahรญ estรก la opciรณn de usar el *notebook* clรกsico. Tambiรฉn asegรบrense que sรณlo estรฉn usando de forma local el notebook clรกsico de jupyter y no al mismo tiempo con el jupyterlab.** <img src="https://dl.dropboxusercontent.com/s/41fjwmyxzk5ocgn/launch_classic_jupyter_notebook.png?dl=0" heigth="300" width="300"> **Se utiliza la versiรณn clรกsica pues se usarรก el comando de magic `%matplotlib notebook`** # Interpolaciรณn Dados $n+1$ puntos $x_0,x_1,\dots,x_n$ el objetivo es construir una funciรณn $f(x)$ tal que $f(x_i) = y_i$ con $y_i$ conocido $\forall i=0,1,\dots,n$. <img src="https://dl.dropboxusercontent.com/s/m0gks881yffz85f/interpolacion.jpg?dl=0" heigth="300" width="300"> Entre las aplicaciones en interpolaciรณn se encuentran: * Reconstrucciรณn de funciones. * Aproximaciรณn a derivadas e integrales. * Estimaciรณn de funciones en cantidades no conocidas. ## Modelo en interpolaciรณn Tรญpicamente el modelo $f$ es de la forma $f(x|w) = \displaystyle \sum_{j=0}^nw_j \phi_j(x)$ con $\phi_j:\mathbb{R} \rightarrow \mathbb{R}$ funciones conocidas y $w_j$ parรกmetros desconocidos por determinar $\forall j=0,1,\dots,n$. **Obs:** * Comรบnmente las $\phi_j$'s son funciones polinomiales, trigonomรฉtricas, racionales y exponenciales. * La notaciรณn $f(x|w)$ se utiliza para denotar que $w$ es un vector de parรกmetros a estimar. ## ยฟCรณmo ajustar el modelo anterior? El problema de interpolaciรณn conduce a plantear y posteriormente resolver un sistema de ecuaciones lineales de la forma $Aw = y$ pues la condiciรณn de interpolaciรณn es: $f(x_i|w_i) = y_i$, $\forall i=0,1,\dots,n$ con $A \in \mathbb{R}^{{n+1}x{n+1}}$, $w,y \in \mathbb{R}^{n+1}$ definidas como sigue: $$A = \left[\begin{array}{cccc} \phi_0(x_0) &\phi_1(x_0)&\dots&\phi_n(x_0)\\ \phi_0(x_1) &\phi_1(x_1)&\dots&\phi_n(x_1)\\ \vdots &\vdots& \vdots&\vdots\\ \phi_0(x_n) &\phi_1(x_n)&\dots&\phi_n(x_n) \end{array} \right], w= \left[\begin{array}{c} w_0\\ w_1\\ \vdots \\ w_n \end{array} \right] , y= \left[\begin{array}{c} y_0\\ y_1\\ \vdots \\ y_n \end{array} \right] $$ Esto es, hay que resolver: $$\begin{array}{ccc} \phi_0(x_0)w_0 + \phi_1(x_0)w_1 + \cdots + \phi_n(x_0)w_n &= & y_0 \\ \phi_0(x_1)w_0 + \phi_1(x_1)w_1 + \cdots + \phi_n(x_1)w_n &= & y_1\\ \vdots & & \\ \phi_0(x_n)w_0 + \phi_1(x_n)w_1 + \cdots + \phi_n(x_n)w_n &= & y_n \end{array}$$ que es la condiciรณn de interpolaciรณn $f(x_i|w) = y_i \forall i=0,1,\dots,n$ bajo el modelo: $f(x|w) = \displaystyle \sum_{j=0}^nw_j \phi_j(x)$ en notaciรณn **matricial**. ## Interpolaciรณn polinomial: funciones $\phi_j$'s son polinomios **En numpy ...** ``` import numpy as np import matplotlib.pyplot as plt import pprint ``` Supongamos que queremos realizar la interpolaciรณn a los siguientes puntos: ``` #pseudorandom array np.random.seed(2000) #for reproducibility npoints = 6 x = np.random.randn(npoints) + 10 y = np.random.randn(npoints) - 10 pprint.pprint('x:') pprint.pprint(x) pprint.pprint('y:') pprint.pprint(y) ``` ver: [numpy.random.randn](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.randn.html#numpy.random.randn) **Los datos ejemplo** ``` plt.plot(x,y, 'r*') plt.xlabel('x') plt.ylabel('y') plt.title('Puntos ejemplo') plt.show() ``` Con numpy podemos usar la funciรณn `polyfit` en el paquete de `numpy` para realizar lo anterior: (ver [numpy.polyfit](https://docs.scipy.org/doc/numpy/reference/generated/numpy.polyfit.html)) El tercer argumento de polyfit especifica el grado del polinomio a ajustar. Como tenemos `npoints = 6` puntos, entonces debemos generar un polinomio de grado $5$ ``` ndegree = npoints -1 coefficients = np.polyfit(x,y,ndegree) ``` Una vez realizado el llamado a la funciรณn `polyfit` se regresan los coeficientes de $x$ ordenados del mayor grado al menor. ``` np.set_printoptions(precision = 2) #sรณlo dos decimales que se muestren pprint.pprint(coefficients) ``` Entonces nuestro polinomio es: $$p_{npoints}(x) = .0816x^5 -4.26x^4 +87.8x^3-895x^2+4500x-8980$$ **Nota: si queremos utilizar una representaciรณn con la matriz de [Vandermonde](https://en.wikipedia.org/wiki/Vandermonde_matrix) para el sistema de ecuaciones que se resolviรณ se tiene la siguiente representaciรณn matricial:** $$\left[\begin{array}{ccccc} 1 & x_0 & x_0^2 & x_0^3 & x_0^4 & x_0^5 \\ 1 & x_1 & x_1^2 & x_1^3 & x_1^4 & x_1^5\\ \vdots &\vdots& \vdots&\vdots\\ 1 & x_5 & x_5^2 & x_5^3 & x_5^4 & x_5^5 \end{array} \right] \left[\begin{array}{c} -8980\\ 4500\\ \vdots \\ .0816 \end{array} \right] = \left[\begin{array}{c} y_0\\ y_1\\ \vdots \\ y_5 \end{array} \right] $$ **Obs: hay diferentes representaciones matriciales para el problema de interpolaciรณn, por ejemplo representaciรณn por [Newton](https://en.wikipedia.org/wiki/Newton_polynomial) o por [Lagrange](https://en.wikipedia.org/wiki/Lagrange_polynomial). Cualquiera de las representaciones que se utilicen obtienen el mismo interpolador, la diferencia consiste en propiedades que tienen las matrices de cada representaciรณn (la matriz de Vandermonde para un grado alto conduce a tener sistemas de ecuaciones lineales muy sensibles a perturbaciones en los datos).** **La grรกfica** Ahora nos gustarรญa graficarlo en el intervalo `[min(x),max(x)]` con `min(x)` la entrada con valor mรญnimo del numpy array `x` y `max(x)` su entrada con valor mรกximo. Para lo anterior debemos evaluar $p_{npoints}(x)$ en diferentes valores de $x$. Para esto, generamos un numpy array con un nรบmero de puntos `neval`: ``` neval = 100 xeval = np.linspace(min(x),max(x), neval) yeval = np.polyval(coefficients,xeval) print('xeval.shape:', xeval.shape[0]) print('yeval.shape:', yeval.shape[0]) plt.plot(x, y, 'r*', xeval, yeval, 'k-') plt.legend(['datos','interpolador'], loc='best') plt.show() max(yeval) ``` Si tuviรฉramos que estimar cantidades negativas con nuestro interpolador, entonces la siguiente estimaciรณn calcularรญamos: ``` np.polyval(coefficients, 8.5) ``` ### Problema con: nรบmero de puntos y la interpolaciรณn polinomial Si incrementamos a 9 puntos por los que deseamos hacer pasar un interpolador tenemos: ``` #pseudorandom array np.random.seed(2000) #for reproducibility npoints = 9 x = np.random.randn(npoints) + 10 y = np.random.randn(npoints) - 10 pprint.pprint('x:') pprint.pprint(x) pprint.pprint('y:') pprint.pprint(y) ``` **Los datos** ``` plt.plot(x,y, 'r*') plt.xlabel('x') plt.ylabel('y') plt.title('Puntos ejemplo') plt.show() ndegree = npoints -1 new_coefficients = np.polyfit(x,y,ndegree) pprint.pprint(new_coefficients) ``` Nuestro polinomio ahora es (considerando dos dรญgitos a la derecha del punto decimal de los resultados anteriores): $$p_{npoints}(x) = 2.55x^8 -201x^7 + 6940x^6-1.36*10^5x^5+1.66*10^6x^4-1.3*10^7x^3 +6.31*10^7x^2-1.75*10^8x+2.11*10^8$$ **La grรกfica** ``` neval = 100 xeval = np.linspace(min(x),max(x), neval) yeval = np.polyval(new_coefficients,xeval) print('xeval.shape:', xeval.shape[0]) print('yeval.shape:', yeval.shape[0]) ``` Obsรฉrvese la oscilaciรณn que debe tener el polinomio de grado $9$ para pasar por los $10$ puntos: ``` plt.plot(x, y, 'r*',xeval, yeval, 'k-') plt.legend(['datos','interpolador'], loc='best') plt.show() max(yeval) ``` Este tipo de oscilaciรณn es tรญpica al tener un polinomio mayor o igual a $6$ (mรกs de $7$ puntos). Si tuviรฉramos que estimar cantidades negativas con nuestro interpolador, entonces la siguiente estimaciรณn serรญa errรณrena: ``` np.polyval(new_coefficients,8.5) ``` lo cual es errรณneo. **Nota** Los interpoladores obtenidos con alguno de los mรฉtodos anteriores se utilizan para estimar cantidades en el intervalo con el que fueron construรญdos. Si deseamos estimar fuera del intervalo debe de realizarse con cuidado pues se pueden tener estimaciones incorrectas. ``` np.polyval(coefficients, 15) np.polyval(new_coefficients, 15) ``` ### Polinomios piecewise Para arreglar la oscilaciรณn de interpoladores de grado alto, una soluciรณn es interpolar con polinomios de grado bajo en cada subintervalo compuesto por las $x$'s, esto es, una forma *piecewise*. En python se realiza con el mรฉtodo `interpolate` del paquete `scipy`: **Lineal** ``` from scipy.interpolate import interp1d pw_l = interp1d(x, y) #linear piecewise neval = 100 xeval = np.linspace(min(x),max(x), neval) yeval = pw_l(xeval) print('xeval.shape:', xeval.shape[0]) print('yeval.shape:', yeval.shape[0]) plt.plot(x, y, 'r*',xeval, yeval, 'k-') plt.legend(['datos','interpolador lineal piecewise'], loc='best') plt.show() ``` Aunque se ha resuelto la estimaciรณn: ``` print(pw_l(8.5)) ``` **Splines** Los *splines* cรบbicos *piecewise* resuelven la no diferenciabilidad del interpolador lineal en los puntos dados: ``` pw_spline = interp1d(x, y, kind = 'cubic') #spline piecewise neval = 100 xeval = np.linspace(min(x),max(x), neval) yeval = pw_spline(xeval) print('xeval.shape:', xeval.shape[0]) print('yeval.shape:', yeval.shape[0]) plt.plot(x, y, 'r*',xeval, yeval, 'k-') plt.legend(['datos','cubic splines piecewise'], loc='best') plt.show() print(pw_spline(8.5)) ``` Ver: [Interpolation (scipy.interpolate)](https://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html) **(Tarea)Ejercicio: Interpolar con 10 puntos generados de la funciรณn de [Runge](https://en.wikipedia.org/wiki/Runge%27s_phenomenon) $f(x) = \frac{1}{1+25x^2}$ en el intervalo $[-1,1]$ equidistantes. Hacer la grรกfica con $10,000$ puntos en el mismo intervalo. Utilizar polyfit para el polinomio interpolador y splines cรบbicos.** # Curvas paramรฉtricas e interpolaciรณn Ninguna de las tรฉcnicas vistas anteriormente pueden usarse **directamente** para generar curvas como la de una circunferencia: ``` radius = 1 npoints = 100 x = np.linspace(-radius,radius,npoints) y1 = np.sqrt(radius-x**2) y2 = -np.sqrt(radius-x**2) plt.plot(x,y1,'m', x,y2,'m') plt.title("Circunferencia") plt.show() ``` pues no puede expresarse como una funciรณn del tipo: $y = f(x)$. Obsรฉrvese que para la grรกfica anterior se han usado dos funciones: $y_1 = f_1(x) = \sqrt{r-x^2}$, $y_2 = f_2(x) = -\sqrt{r-x^2}$. Lo anterior puede resolverse definiendo una funciรณn, $f: \mathbb{R} \rightarrow \mathbb{R}^2$, de un parรกmetro $t$ que tome valores en el intervalo $[0,2\pi)$ y definida por $f(t) = (\cos(t), \sin(t))$. Obsรฉrvese que para $t=0$ se obtiene el punto $(1,0)$, para $t=\frac{\pi}{2}$ se obtiene $(0,1)$ y asรญ sucesivamente hasta $t=2\pi$ en el que obtendrรญamos nuevamente el punto $(1,0)$. Para este caso se cumple: $$f(t) = (x(t), y(t))$$ con $x(t) = \cos(t)$, $y(t) = \sin(t)$ funciones tales que $x : \mathbb{R} \rightarrow \mathbb{R}$, $y: \mathbb{R} \rightarrow \mathbb{R}$. ``` import time npoints = 100 a = 0 b = 2*np.pi t = np.linspace(a,b,npoints) x = np.cos(t) y = np.sin(t) x_min = np.min(x) y_min = np.min(y) x_max = np.max(x) y_max = np.max(y) ``` Ver [plt.draw](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.draw.html#matplotlib-pyplot-draw) ``` def make_plot(ax, idx): ax.plot(x[:idx], y[:idx]) window = 0.5 plt.xlim(x_min-window, x_max+window) plt.ylim(y_min-window, y_max+window) plt.plot(x[:idx], y[:idx], 'mo') fig.canvas.draw() #redraw the current figure ``` Ver: [matplotlib magic command](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-matplotlib), [plt.subplots](https://matplotlib.org/3.2.1/api/_as_gen/matplotlib.pyplot.subplots.html#matplotlib-pyplot-subplots) ``` %matplotlib notebook #for interactive plotting fig, ax = plt.subplots() #create figure that will be used #in make_plot func. Also retrieve axes for idx,_ in enumerate(t): #enumerate creates tuples #in a sequentially way make_plot(ax, idx) time.sleep(0.2) ``` **Nota: Hay que dar click en el botรณn arriba de la figura de apagar interactividad.** ## Ejemplo **Nota importante: si estรกn usando el botรณn de binder para ejecuciรณn de forma interactiva no utilicen el comando de `wget` para descargar su imagen, mejor utilicen la funcionalidad del jupyter notebook clรกsico para subir archivos:** <img src="https://dl.dropboxusercontent.com/s/1v78rge4ehylmi2/upload_in_classic_jupyter_notebooks.png?dl=0" heigth="900" width="900"> **Y asegรบrense que estรกn subiendo la imagen en la ruta `/Propedeutico/Python/clases/3_algebra_lineal`** **No olviden dar click en `Upload` dos veces:** <img src="https://dl.dropboxusercontent.com/s/oa1rnxf5ryxdigg/upload_in_classic_jupyter_notebooks_2.png?dl=0" heigth="300" width="300"> Usemos la imagen siguiente para realizar una interpolaciรณn a una curva paramรฉtrica con *splines*: ``` !wget https://www.dropbox.com/s/25zbthmsco6u1u6/hummingbird.png?dl=0 -O hummingbird.png %%bash ls img=plt.imread('hummingbird.png') plt.imshow(img) plt.title('Colibrรญ') plt.show() ``` **Nota: Hay que dar click en el botรณn arriba de la figura de apagar interactividad.** **De manera interactiva vamos dando click a la imagen anterior con la siguiente celda, en la lista `pos` se irรกn guardando las coordenadas en donde hagamos click.** ``` %matplotlib notebook fig, ax = plt.subplots() pos = [] def onclick(event): pos.append([event.xdata,event.ydata]) fig.canvas.mpl_connect('button_press_event', onclick) plt.title('Colibrรญ') plt.imshow(img) pos ``` **Nota: una vez obtenida la lista `pos` dar click en el botรณn de apagado de interactividad.** ``` pos_array = np.array(pos) x = pos_array[:,0] ``` Algunas entradas imprimimos de $x$: ``` x[0:10] y = pos_array[:,1] ``` Algunas entradas imprimimos de $y$: ``` y[0:10] ``` Definamos nuestro parรกmetro $t$ en el intervalo $[0,1]$: ``` t = np.linspace(0,1, len(x)) t ``` Construyamos el spline para las curvas $x(t)$, $y(t)$ que nos definirรกn las coordenadas. ``` pw_spline_x = interp1d(t, x, kind = 'cubic') #spline piecewise pw_spline_y = interp1d(t,y, kind = 'cubic') #spline piecewise ``` Realicemos interpolaciรณn en $100$ puntos: ``` neval = 100 teval = np.linspace(min(t),max(t), neval) xeval = pw_spline_x(teval) yeval = pw_spline_y(teval) print('xeval.shape:', xeval.shape[0]) print('yeval.shape:', yeval.shape[0]) xeval[0:10] yeval[0:10] window_y = 50 window_x = 500 x_min = np.min(x) y_min = np.min(y) x_max = np.max(x) y_max = np.max(y) fig, ax = plt.subplots() ax.plot(xeval,yeval) ax.set_ylim(np.max(y)+window_y,np.min(y)-window_y) plt.xlim(np.min(x)-window_x,np.max(x)+window_x) plt.title('Colibrรญ con interpolaciรณn vรญa curva paramรฉtrica') plt.show() def make_plot(ax, idx): ax.plot(x[:idx], y[:idx]) ax.set_ylim(y_max+window_y,y_min-window_y) plt.xlim(x_min-window_x,x_max+window_x) plt.plot(x[:idx], y[:idx], 'bo-') plt.title('Colibrรญ con interpolaciรณn vรญa curva paramรฉtrica') fig.canvas.draw() %matplotlib notebook fig, ax = plt.subplots() for idx,_ in enumerate(t): make_plot(ax, idx) time.sleep(0.2) ``` **(Tarea) elegir una imagen y realizar interpolaciรณn con una curva paramรฉtrica.** **Referencias:** * [animated_matplotlib-binder](https://github.com/fomightez/animated_matplotlib-binder) * [how-get-a-x-y-position-pointing-with-mouse-in-a-interactive-plot-python](https://stackoverflow.com/questions/29379502/how-get-a-x-y-position-pointing-with-mouse-in-a-interactive-plot-python) * [matplotlib: invert_axes](https://matplotlib.org/3.1.1/gallery/subplots_axes_and_figures/invert_axes.html)
github_jupyter
``` import os import findspark findspark.find() findspark.init(os.environ.get("SPARK_HOME")) import sys sys.path.append("/Users/minjungchoi/spark/spark-2.4.0-bin-hadoop2.7/python/pyspark") from pyspark import SparkConf from pyspark import SparkContext from pyspark.sql import SparkSession from pyspark.sql.types import * from pyspark.sql.functions import col, collect_list,udf,concat, lit conf = SparkConf().setAppName("comstat-test").set("spark.yarn.driver.memoryOverhead", "2048") \ .set("spark.yarn.executor.memoryOverhead", "2048") \ .set("spark.default.parallelism", "116") \ .set("spark.shuffle.compress", "true") \ .set("spark.io.compression.codec", "snappy") spark = SparkSession.builder.config(conf=conf).getOrCreate() sc = spark.sparkContext sampleData = [ ('A','WD','1','Z001','S001'), ('A','WD','1','Z002','S020'), ('A','WD','2','Z005','S100'), ('A','WE','3','Z001','S001'), ('A','WE','2','Z002','S000'), ('A','WE','1','Z001','S001'), ('A','WD','3','Z001','S001'), ('A','WD','4','Z001','S002'), ('A','WD','4','Z002','S030'), ('A','WD','3','Z003','S009'), ('A','WD','1','Z001','S002'), ('A','WD','2','Z002','S030'), ('A','WD','3','Z001','S001'), ('A','WD','4','Z003','S003'), ('A','WD','4','Z003','S005'), ('A','WD','4','Z001','S001'), ('A','WD','3','Z005','S006'), ('A','WE','2','Z006','S007'), ('A','WE','3','Z001','S002'), ('B','WD','1','Z001','S001'), ('B','WD','1','Z002','S020'), ('B','WD','2','Z005','S100'), ('B','WE','3','Z001','S001'), ('B','WE','2','Z002','S000'), ('B','WE','1','Z001','S001'), ('B','WD','3','Z001','S001'), ('C','WD','4','Z001','S002'), ('C','WD','4','Z002','S030'), ('C','WD','3','Z003','S009'), ('C','WD','1','Z001','S002'), ('D','WD','2','Z002','S030'), ('D','WD','3','Z001','S001'), ('D','WD','4','Z003','S003'), ('D','WD','4','Z003','S005'), ('D','WD','4','Z001','S001'), ('D','WD','3','Z005','S006'), ('D','WE','2','Z006','S007'), ('D','WE','3','Z001','S002'), ('E','WE','1','Z003','S001'), ] field = [ StructField('CID',StringType(), True),\ StructField('WEEKDAY', StringType(), True),\ StructField('TIMESEG', StringType(), True),\ StructField('LOCATION', StringType(), True),\ StructField('SHOP', StringType(), True)] schema = StructType(field) sampleRDD = sc.parallelize(sampleData) sampleDF = spark.createDataFrame(sampleRDD,schema) print('Table') print(sampleDF.show()) # cidByShopMap = sampleDF.rdd.map(lambda x : (x['CID'] +':'+ x['SHOP'], 1)) # cidByShop = cidByShopMap.reduceByKey(lambda x,y : x+y) # cidByShop.take(20) #%% getTop3List = udf(lambda x:x[0:3],StringType()) #%% #cross tab์œผ๋กœ ๋ณด๊ธฐ sampleDF.crosstab('CID', 'SHOP').show() #%% #๊ณ ๊ฐ๋ณ„ ์ตœ๋‹ค ๋ฐฉ๋ฌธ SHOP TOP3 BySHOP = sampleDF.groupby('CID', 'SHOP').count().orderBy('CID', 'count',ascending=False) BySHOP.show() BySHOP = BySHOP.groupby('CID').agg(collect_list('SHOP').alias('SHOP3'))\ .withColumn('SHOP3', getTop3List('SHOP3')) BySHOP.show() #%% #๊ณ ๊ฐ๋ณ„ ์ตœ๋‹ค ๋ฐฉ๋ฌธ ์ง€์—ญ TOP3 ByLOCATION = sampleDF.groupby('CID', 'LOCATION').count().orderBy('CID', 'count',ascending=False) ByLOCATION.show() ByLOCATION = ByLOCATION.groupby('CID').agg(collect_list('LOCATION').alias('Location3'))\ .withColumn('Location3', getTop3List('Location3')) ByLOCATION.show() #%% #๊ณ ๊ฐ๋ณ„ ์ตœ๋‹ค ๋ฐฉ๋ฌธ ์ฃผ์ค‘/์ฃผ๋ง ๋ฐฉ๋ฌธ SHOP TOP3 ByWEEKDAY_SHOP = sampleDF.groupby('CID', 'WEEKDAY', 'SHOP').count().orderBy('CID', 'count',ascending=False)\ .select('CID', (concat(col('WEEKDAY') ,lit('|'),col('SHOP')).alias('WEEKDAY_SHOP')), 'count') ByWEEKDAY_SHOP.show() ByWEEKDAY_SHOP = ByWEEKDAY_SHOP.groupby('CID').agg(collect_list('WEEKDAY_SHOP').alias('WEEKDAY_SHOP3'))\ .withColumn('WEEKDAY_SHOP3', getTop3List('WEEKDAY_SHOP3')) ByWEEKDAY_SHOP.show() #%% #๊ณ ๊ฐ๋ณ„๋กœ ํ•œํ…Œ์ด๋ธ”๋กœ ๋ชจ์œผ์ž (๊ตณ์ด ๋ชจ์„ ํ•„์š”๋Š” ์—†๊ณ  ๊ฐ์ž ํ…Œ์ด๋ธ”๋กœ ๊ฐ€์ง€๊ณ  ์žˆ๋Š”๊ฒŒ ๋‚˜์„ ๊ฑธ?) #๋ณ„๋„์˜ ๊ณ ๊ฐ๋ฆฌ์ŠคํŠธ๊ฐ€ ์—†์œผ๋ฉด ํŠธ๋žœ์žญ์…˜์—์„œ ๋นผ๋‚ด์ž users = sampleDF.select('CID').distinct() totalSummary = users.join(BySHOP, BySHOP.CID == users.CID, how='left').drop(BySHOP.CID) totalSummary = totalSummary.join(ByLOCATION, ByLOCATION.CID == totalSummary.CID, how='left').drop(ByLOCATION.CID) totalSummary = totalSummary.join(ByWEEKDAY_SHOP, ByWEEKDAY_SHOP.CID == totalSummary.CID, how='left').drop(ByWEEKDAY_SHOP.CID) totalSummary.show() #%% ## map reduce๋กœ ๊ทธ๋ƒฅ ๊ตฌํ˜„ cidByShopMap = sampleDF.rdd.map(lambda x : (x['CID'] +':'+ x['SHOP'], 1)) cidByShop = cidByShopMap.reduceByKey(lambda x,y : x+y) print('Customer by shop') print(cidByShop.take(50)) print('Customer by shop TOP 3') print( cidByShop.sortBy(lambda x: x[1], ascending= False).take(3) ) #%% sc.stop() ```
github_jupyter
``` import transformers from transformers import BertModel, BertTokenizer, AdamW, get_linear_schedule_with_warmup import torch import gc gc.collect() torch.cuda.empty_cache() import numpy as np import pandas as pd import seaborn as sns from pylab import rcParams import matplotlib.pyplot as plt from matplotlib import rc from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix, classification_report from collections import defaultdict from textwrap import wrap from torch import nn, optim from torch.utils.data import Dataset, DataLoader import torch.nn.functional as F RANDOM_SEED = 42 np.random.seed(RANDOM_SEED) torch.manual_seed(RANDOM_SEED) device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") device df_movies = pd.read_csv('datasets/classifiedMovies.txt', sep="\t", header=None, low_memory=False) df_movies.columns = ["message", "originalClassification","scoreP","scoreZ","polarity", "termsP", "termsN"] df = df_movies df['message_len'] = df['message'].astype(str).apply(len) df.shape df.info() df.polarity.value_counts() df.describe() df["message_len"].describe().apply(lambda x: format(x, 'f')) sns.displot( df , x= "message_len" ); df.head() df.originalClassification.value_counts() df.polarity.value_counts() sns.countplot(df.polarity) plt.xlabel('classification'); df['label'] = pd.factorize(df['polarity'])[0] sns.countplot(df.label) plt.xlabel('classification'); df['label'] = pd.factorize(df['polarity'])[0] # Class count count_class_0, count_class_1,count_class_2 = df.polarity.value_counts() # Divide by class df_class_0 = df[df['label'] == 0] df_class_1 = df[df['label'] == 1] df_class_2 = df[df['label'] == 2] df_class_2_over = pd.concat([df_class_2]*20, ignore_index=False) df_class_2_over df_class_0_under = df_class_0.sample(count_class_1) df_class_2_over = df_class_2.sample(n=5596, replace=True) df_test_over = pd.concat([df_class_1, df_class_0_under,df_class_2_over], axis=0) print('Random over-sampling:') print(df_test_over.polarity.value_counts()) df_test_overdf_test_over.polarity.value_counts().plot(kind='bar', title='Count (polarity)'); test_df_rest =pd.merge(df,df_test_over, indicator=True, how='outer').query('_merge=="left_only"').drop('_merge', axis=1) test_df_rest.label.value_counts() test_df_rest = pd.concat([test_df_rest, df_class_1,df_class_2], axis=0) test_df_rest.label.value_counts() PRE_TRAINED_MODEL_NAME = 'bert-base-cased' tokenizer = BertTokenizer.from_pretrained(PRE_TRAINED_MODEL_NAME, return_dict=False) sample_txt = 'When was I last outside? I am stuck at home for 2 weeks.' encoding = tokenizer.encode_plus( sample_txt, max_length=32, # sequence length add_special_tokens=True, # Add '[CLS]' and '[SEP]' return_token_type_ids=False, padding='max_length', return_attention_mask=True, return_tensors='pt', # Return PyTorch tensors(use tf for tensorflow and keras) ) encoding.keys() token_lens = [] for txt in df.message: tokens = tokenizer.encode(txt, truncation=True) token_lens.append(len(tokens)) %matplotlib inline %config InlineBackend.figure_format='retina' sns.set(style='whitegrid', palette='muted', font_scale=1.2) HAPPY_COLORS_PALETTE = ["#01BEFE", "#FFDD00", "#FF7D00", "#FF006D", "#ADFF02", "#8F00FF"] sns.set_palette(sns.color_palette(HAPPY_COLORS_PALETTE)) rcParams['figure.figsize'] = 12, 8 sns.distplot(token_lens) plt.xlim([0, 256]); plt.xlabel('Token count'); MAX_LEN = 90 class ExtremsSentiDataset(Dataset): def __init__(self, reviews, targets, tokenizer, max_len): self.reviews = reviews self.targets = targets self.tokenizer = tokenizer self.max_len = max_len def __len__(self): return len(self.reviews) def __getitem__(self, item): review = str(self.reviews[item]) target = self.targets[item] encoding = self.tokenizer.encode_plus( review, add_special_tokens=True, max_length=self.max_len, return_token_type_ids=False, padding='max_length', return_attention_mask=True, return_tensors='pt', ) return { 'review_text': review, 'input_ids': encoding['input_ids'].flatten(), 'attention_mask': encoding['attention_mask'].flatten(), 'targets': torch.tensor(target, dtype=torch.long) } df_train, df_test = train_test_split( df_test_over, test_size=0.2, random_state=RANDOM_SEED ) df_val, df_test = train_test_split( df_test, test_size=0.5, random_state=RANDOM_SEED ) df_train.shape, df_val.shape, df_test.shape def create_data_loader(df, tokenizer, max_len, batch_size): ds = ExtremsSentiDataset( reviews=df.message.to_numpy(), targets=df.label.to_numpy(), tokenizer=tokenizer, max_len=max_len ) return DataLoader( ds, batch_size=batch_size, num_workers=2 ) rest_df = pd.merge(a,b, indicator=True, how='outer').query('_merge=="left_only"').drop('_merge', axis=1) #rest_test = df1['Email'].isin(df2['Email']) #df1.drop(df1[cond].index, inplace = True) BATCH_SIZE = 16 train_data_loader = create_data_loader(df_train, tokenizer, MAX_LEN, BATCH_SIZE) val_data_loader = create_data_loader(df_val, tokenizer, MAX_LEN, BATCH_SIZE) test_data_loader = create_data_loader(df_test, tokenizer, MAX_LEN, BATCH_SIZE) all_test_data_loader = create_data_loader(test_df_rest, tokenizer, MAX_LEN, BATCH_SIZE) data = next(iter(train_data_loader)) data.keys() bert_model = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME) last_hidden_state, pooled_output = bert_model( input_ids=encoding['input_ids'], attention_mask=encoding['attention_mask'], return_dict=False ) encoding['input_ids'] last_hidden_state.shape bert_model.config.hidden_size pooled_output.shape class SentimentClassifier(nn.Module): def __init__(self, n_classes): super(SentimentClassifier, self).__init__() self.bert = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME, return_dict=False) self.drop = nn.Dropout(p=0.3) self.out = nn.Linear(self.bert.config.hidden_size, n_classes) def forward(self, input_ids, attention_mask): _, pooled_output = self.bert( input_ids=input_ids, attention_mask=attention_mask ) output = self.drop(pooled_output) return self.out(output) class_names = ["Inconclusive","Positive Extreme","Negative Extreme"] model = SentimentClassifier(len(class_names)) model = model.to(device) input_ids = data['input_ids'].to(device) attention_mask = data['attention_mask'].to(device) print(input_ids.shape) # batch size x seq length print(attention_mask.shape) # batch size x seq length F.softmax(model(input_ids, attention_mask), dim=1) EPOCHS = 10 optimizer = AdamW(model.parameters(), lr=2e-5, correct_bias=False) total_steps = len(train_data_loader) * EPOCHS scheduler = get_linear_schedule_with_warmup( optimizer, num_warmup_steps=0, num_training_steps=total_steps ) loss_fn = nn.CrossEntropyLoss().to(device) def train_epoch( model, data_loader, loss_fn, optimizer, device, scheduler, n_examples ): model = model.train() losses = [] correct_predictions = 0 for d in data_loader: input_ids = d["input_ids"].to(device) attention_mask = d["attention_mask"].to(device) targets = d["targets"].to(device) outputs = model( input_ids=input_ids, attention_mask=attention_mask ) _, preds = torch.max(outputs, dim=1) loss = loss_fn(outputs, targets) correct_predictions += torch.sum(preds == targets) losses.append(loss.item()) loss.backward() nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0) optimizer.step() scheduler.step() optimizer.zero_grad() return correct_predictions.double() / n_examples, np.mean(losses) def eval_model(model, data_loader, loss_fn, device, n_examples): model = model.eval() losses = [] correct_predictions = 0 with torch.no_grad(): for d in data_loader: input_ids = d["input_ids"].to(device) attention_mask = d["attention_mask"].to(device) targets = d["targets"].to(device) outputs = model( input_ids=input_ids, attention_mask=attention_mask ) _, preds = torch.max(outputs, dim=1) loss = loss_fn(outputs, targets) correct_predictions += torch.sum(preds == targets) losses.append(loss.item()) return correct_predictions.double() / n_examples, np.mean(losses) %%time history = defaultdict(list) best_accuracy = 0 for epoch in range(EPOCHS): print(f'Epoch {epoch + 1}/{EPOCHS}') print('-' * 10) train_acc, train_loss = train_epoch( model, train_data_loader, loss_fn, optimizer, device, scheduler, len(df_train) ) print(f'Train loss {train_loss} accuracy {train_acc}') val_acc, val_loss = eval_model( model, val_data_loader, loss_fn, device, len(df_val) ) print(f'Val loss {val_loss} accuracy {val_acc}') print() history['train_acc'].append(train_acc) history['train_loss'].append(train_loss) history['val_acc'].append(val_acc) history['val_loss'].append(val_loss) if val_acc > best_accuracy: torch.save(model.state_dict(), 'best_model_state.bin') best_accuracy = val_acc plt.plot(history['train_acc'], label='train accuracy') plt.plot(history['val_acc'], label='validation accuracy') plt.title('Training history') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend() plt.ylim([0, 1]); test_acc, _ = eval_model( model, test_data_loader, loss_fn, device, len(test_df_rest) ) test_acc.item() def get_predictions(model, data_loader): model = model.eval() review_texts = [] predictions = [] prediction_probs = [] real_values = [] with torch.no_grad(): for d in data_loader: texts = d["review_text"] input_ids = d["input_ids"].to(device) attention_mask = d["attention_mask"].to(device) targets = d["targets"].to(device) outputs = model( input_ids=input_ids, attention_mask=attention_mask ) _, preds = torch.max(outputs, dim=1) review_texts.extend(texts) predictions.extend(preds) prediction_probs.extend(outputs) real_values.extend(targets) predictions = torch.stack(predictions).cpu() prediction_probs = torch.stack(prediction_probs).cpu() real_values = torch.stack(real_values).cpu() return review_texts, predictions, prediction_probs, real_values y_review_texts, y_pred, y_pred_probs, y_test = get_predictions( model, all_test_data_loader ) print(classification_report(y_test, y_pred, target_names=class_names)) def show_confusion_matrix(confusion_matrix): hmap = sns.heatmap(confusion_matrix, annot=True, fmt="d", cmap="Blues") hmap.yaxis.set_ticklabels(hmap.yaxis.get_ticklabels(), rotation=0, ha='right') hmap.xaxis.set_ticklabels(hmap.xaxis.get_ticklabels(), rotation=30, ha='right') plt.ylabel('True sentiment') plt.xlabel('Predicted sentiment'); cm = confusion_matrix(y_test, y_pred) df_cm = pd.DataFrame(cm, index=class_names, columns=class_names) show_confusion_matrix(df_cm) idx = 5 review_text = y_review_texts[idx] true_sentiment = y_test[idx] pred_df = pd.DataFrame({ 'class_names': class_names, 'values': y_pred_probs[idx] }) print("\n".join(wrap(review_text))) print() print(f'True sentiment: {class_names[true_sentiment]}') sns.barplot(x='values', y='class_names', data=pred_df, orient='h') plt.ylabel('sentiment') plt.xlabel('probability') plt.xlim([0, 1]); ```
github_jupyter
# Importing libraries and utils ``` import utils import pandas as pd ``` # Loading the data ``` offense, defense = utils.get_data("stats") salary = utils.get_data("salary") AFC, NFC = utils.get_data("wins") ``` # Verifying the data loaded correctly ``` offense[2] defense[3] salary AFC[0] NFC[2] ``` # Cleaning the data ``` Salary = utils.clean_data("salary", test = salary) Stats = utils.clean_data("stats", offense = offense, defense = defense) Wins = utils.clean_data("wins", AFCl = AFC, NFCl = NFC) ``` # Verifying the data cleaned correctly ``` Salary Stats Wins ``` # Beginning cluster analysis ``` CSalary = Salary.drop(["YEAR", "TEAM"], axis = 1) utils.find_clusters(CSalary) SCSalary = utils.scale_data(CSalary) utils.find_clusters(SCSalary) #The scores after scaling are significantly worse. Considering that all of the salary numbers are in the same unit (% of cap), maybe it is best not to scale here afterall ``` # Using PCA ``` utils.pca_exp_var(CSalary) PCSalary = utils.pca(CSalary, .99) utils.find_clusters(PCSalary) # 6 clusters appears to be a good choice for silhouette score, I choose this number ``` # Clustering the data using KMeans ``` clusters = utils.cluster_data(PCSalary, 6) clusters ``` # Adding the cluster assignments to the unscaled data for easier interpretation ``` SalaryClustered = utils.add_clusters(Salary, clusters) SalaryClustered ``` # Graphing components from PCA ``` pcadf = pd.DataFrame(PCSalary) pcadf.columns = ("PC1", "PC2","PC3", "PC4", "PC5", "PC6", "PC7", "PC8", "PC9", "PC10", "PC11", "PC12", "PC13", "PC14", "PC15", "PC16", "PC17") pcadf = utils.add_clusters(pcadf, clusters) cluster0, cluster1, cluster2, cluster3, cluster4, cluster5 = utils.break_clusters(pcadf) utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "PC1", "PC2", "Component 1", "Component 2", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5") utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "PC2", "PC3", "Component 2", "Component 3", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5") ``` # Examining the clustered salary data ``` SalaryClustered.groupby(["cluster"]).count() SalaryClustered.groupby(["cluster"]).mean() SalaryClustered.groupby(["cluster"]).std() SalaryClustered["Offense"] = SalaryClustered["QB"] + SalaryClustered["RB"] + SalaryClustered["FB"] + SalaryClustered["WR"] + SalaryClustered["TE"] + SalaryClustered["T"] + SalaryClustered["RT"] + SalaryClustered["LT"] + SalaryClustered["G"] + SalaryClustered["C"] SalaryClustered["Defense"] = SalaryClustered["DE"] + SalaryClustered["DT"] + SalaryClustered["OLB"] + SalaryClustered["ILB"] + SalaryClustered["LB"] + SalaryClustered["CB"] + SalaryClustered["SS"] + SalaryClustered["FS"] + SalaryClustered["S"] SalaryClustered["Special Teams"] = SalaryClustered["K"] + SalaryClustered["P"] + SalaryClustered["LS"] SalaryClustered.groupby(["cluster"]).mean() cluster0, cluster1, cluster2, cluster3, cluster4, cluster5 = utils.break_clusters(SalaryClustered) utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "Offense", "Defense", "% Of Cap Spent on Offense", "% Of Cap Spent on Defense", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5") ``` # Adding win % and stats to check spending value ``` WSalaryClustered = (SalaryClustered.merge(Wins, how='inner', on=["YEAR","TEAM"])) SWSalaryClustered = (WSalaryClustered.merge(Stats, how='inner', on=["YEAR","TEAM"])) SWSalaryClustered.groupby(["cluster"]).mean() cluster0, cluster1, cluster2, cluster3, cluster4, cluster5 = utils.break_clusters(SWSalaryClustered) utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "Yds_x", "Offense", "Offensive Yards Per Game", "% Of Cap Spent on Offense", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5") utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "Yds_y", "Defense", "Deffensive Yards Per Game", "% Of Cap Spent on Defense", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5") utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "Yds.1_x", "QB", "Offensive Passing Yards Per Game", "% Of Cap Spent on QB", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5") utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "Yds.1_x", "WR", "Offensive Passing Yards Per Game", "% Of Cap Spent on WR", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5") utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "Yds.2_x", "RB", "Offensive Rushing Yards Per Game", "% Of Cap Spent on RB", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5") utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "Yds.1_y", "CB", "Defensive Passing Yards Per Game", "% Of Cap Spent on CB", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5") utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "Offense", "W%", "% Of Cap Spent on Offense", "Win Percentage", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5") utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "Defense", "W%", "% Of Cap Spent on Defense", "Win Percentage", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5") utils.plot(cluster0, cluster1, cluster2, cluster3, cluster4, cluster5, "QB", "W%", "% Of Cap Spent on QB", "Win Percentage", "Cluster 0", "Cluster 1", "Cluster 2", "Cluster 3", "Cluster 4", "Cluster 5") ```
github_jupyter
[View in Colaboratory](https://colab.research.google.com/github/nishi1612/SC374-Computational-and-Numerical-Methods/blob/master/Set_3.ipynb) Set 3 --- **Finding roots of polynomial by bisection method** ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import math from google.colab import files def iterations(n, arr , i): plt.plot(range(n),arr) plt.xlabel('No. of iterations') plt.ylabel('Value of c') plt.grid(True) plt.savefig("Iterations" + str(i) + ".png") files.download("Iterations" + str(i) + ".png") plt.show() def graph(i): plt.xlabel('x') plt.ylabel('y') plt.grid(True) plt.legend(loc='upper right') plt.savefig("Graph" + str(i) + ".png") files.download("Graph" + str(i) + ".png") plt.show() def bissection( a,b,epsilon,k): table = pd.DataFrame(columns=['a','b','c','b-c','f(a)*f(c)','Assign']) c = (a+b)/2; dist = b-c; i = 0 arr = [] while(dist>epsilon): ans_a = func(a,k); ans_b = func(b,k); ans_c = func(c,k); ans = "" if(ans_a*ans_c < 0): b=c; ans = "b=c" else: a=c; ans = "a=c"; table.loc[i] = [a,b,c,dist,ans_a*ans_c,ans] arr.append(c) i = i+1 c = (a+b) / 2 dist = b-c return (a+b)/2 ,i , arr , table; def func(x,k): if k==1: return x**6 - x - 1; elif k==2: return x**3 - x**2 - x - 1; elif k==3: return x - 1 - 0.3*math.cos(x); elif k==4: return 0.5 + math.sin(x) - math.cos(x); elif k==5: return x - math.e**(-x); elif k==6: return math.e**(-x) - math.sin(x); elif k==7: return x**3 - 2*x - 2; elif k==8: return x**4 - x - 1; elif k==9: return math.e**(x) - x - 2; elif k==10: return 1- x + math.sin(x); elif k==11: return x - math.tan(x); x = np.arange(-2,3,0.001) plt.plot(x,x**6,label='$x^6$') plt.plot(x,x+1,label="x+1") graph(1) plt.plot(x**6-x-1,label='$x^6$ - x - 1') graph(1) a , n , arr , table = bissection(1,2,0.001,1) iterations(n,arr,1) print(str(a) + "\n" + str(func(a,1))) table b , n , arr , table = bissection(-1,0,0.001,1) iterations(n,arr,1) print(str(b) + "\n" + str(func(b,1))) table x = np.arange(-2,3,0.001) plt.plot(x,x**3,label='$x^3$') plt.plot(x,x**2 + x + 1,label='$x^2 + x + 1$') graph(2) plt.plot(x**3 - (x**2 + x + 1),label='$x^3 - x^2 - x - 1$') graph(2) a , n , arr, table = bissection(1,2,0.0001,2) iterations(n,arr,2) print(str(a) + "\n" + str(func(a,2))) table x = np.arange(-3,5,0.001) plt.plot(x,x-1,label='$x-1$') plt.plot(x,0.3*np.cos(x),label='$0.3cos(x)$') graph(3) plt.plot(x,x-1-0.3*np.cos(x) , label='$x - 1 - 0.3cos(x)$') graph(3) a , n , arr , table = bissection(0,2,0.0001,3) iterations(n,arr,3) print(str(a) + "\n" + str(func(a,3))) table x = np.arange(-10,10,0.001) plt.plot(x,0.5 + np.sin(x),label='$0.5 + sin(x)$') plt.plot(x,np.cos(x),label='$cos(x)$') graph(4) plt.plot(x,0.5 + np.sin(x) - np.cos(x),label='$0.5 + sin(x) - cos(x)$') graph(4) a , n , arr , table = bissection(0,2,0.0001,4) iterations(n,arr,4) print(str(a) + "\n" + str(func(a,4))) table x = np.arange(-0,5,0.001) plt.plot(x,x,label='$x$') plt.plot(x,np.e**(-x),label='$e^{-x}$') graph(5) plt.plot(x,x - np.e**(-x),label='$x - e^{-x}$') graph(5) a , n , arr , table = bissection(0,1,0.0001,5) iterations(n,arr,5) print(str(a) + "\n" + str(func(a,5))) table x = np.arange(0,5,0.001) plt.plot(x,np.sin(x),label='$sin(x)$') plt.plot(x,np.e**(-x),label='$e^{-x}$') graph(6) plt.plot(x,np.sin(x) - np.e**(-x),label='$sin(x) - e^{-x}$') graph(6) a , n , arr , table = bissection(0,1,0.0001,6) iterations(n,arr,6) print(str(a) + "\n" + str(func(a,6))) table a , n , arr , table = bissection(3,4,0.0001,6) iterations(n,arr,6) print(str(a) + "\n" + str(func(a,6))) table x = np.arange(-2,4,0.001) plt.plot(x,x**3,label='$x^3$') plt.plot(x,2*x+2,label='$2x + 2$') graph(7) plt.plot(x,x**3 - 2*x - 2,label='$x^3 - 2x - 2$') graph(7) a , n , arr , table = bissection(1,2,0.0001,7) iterations(n,arr,7) print(str(a) + "\n" + str(func(a,7))) table x = np.arange(-2,4,0.001) plt.plot(x,x**4,label='$x^4$') plt.plot(x,x+1,label='$x+1$') graph(8) plt.plot(x,x**4 - x - 1,label='$x^4 - x - 1$') graph(8) a , n , arr , table = bissection(-1,0,0.0001,8) iterations(n,arr,8) print(str(a) + "\n" + str(func(a,8))) table a , n , arr , table = bissection(1,2,0.0001,8) iterations(n,arr,8) print(str(a) + "\n" + str(func(a,8))) table x = np.arange(-5,4,0.001) plt.plot(x,np.e**(x),label='$e^x$') plt.plot(x,x+2,label='$x+2$') graph(9) plt.plot(x,np.e**(x) - x - 2,label='$e^2 - x - 2$') graph(9) a , n , arr , table = bissection(1,2,0.0001,9) iterations(n,arr,9) print(str(a) + "\n" + str(func(a,9))) table x = np.arange(-5,4,0.001) plt.plot(x,-np.sin(x),label='$-sin(x)$') plt.plot(x,1-x,label='$1 - x$') graph(10) plt.plot(x,-np.sin(x) - 1 + x,label='$-sin(x) - 1 + x$') graph(10) a , n , arr , table = bissection(0,2,0.0001,10) iterations(n,arr,10) print(str(a) + "\n" + str(func(a,10))) table x = np.arange(-10,10,.001) plt.plot(np.tan(x),label='$tan(x)$') plt.plot(x,label='$x$') graph(11) plt.plot(np.tan(x) - x,label='$x - tan(x)$') graph(11) a , n , arr , table = bissection(4,5,0.0001,11) iterations(n,arr,11) print(str(a) + "\n" + str(func(a,11))) table a , n , arr , table = bissection(80,120,0.0001,11) iterations(n,arr,11) print(str(a) + "\n" + str(func(a,11))) table ```
github_jupyter
# PyTorch Introduction This is an introduction of PyTorch. Itโ€™s a Python-based scientific computing package targeted at two sets of audiences: - A replacement for NumPy to use the power of GPUs; - a deep learning research platform that provides maximum flexibility and speed. - [`torch.Tensor`](https://pytorch.org/docs/stable/tensors.html) is the central class of PyTorch. - Central to all neural networks in PyTorch is the [`autograd`](https://pytorch.org/docs/stable/autograd.html) package. It provides automatic differentiation for all operations on Tensors. If we set the attribute `.requires_grad` of `torch.Tensor` as `True`, it starts to track all operations on it. When finishing computation, we can call `.backward()` and have all the gradients computed automatically. The gradient for this tensor will be accumulated into `.grad` attribute. ## Goals of this tutorial - Understanding PyTorch's Tensor library and neural networks at a high level; - Training a small network with PyTorch; ## Preparation - Install [PyTorch](https://pytorch.org/) and [torchvision](https://github.com/pytorch/vision) (CPU version); (**If you want to install a cuda version, remember to change the type of the following cell into markdown**) ``` # Linux and probably Windows, remove the "> /dev/null" if you want to see the output # !pip install torch==1.4.0+cpu torchvision==0.5.0+cpu -f https://download.pytorch.org/whl/torch_stable.html > /dev/null # Mac !pip install torch==1.4.0 torchvision==0.5.0 > /dev/null ``` - <div class="alert alert-block alert-info"><b>(Optional)</b> You can also install a <a href="https://developer.nvidia.com/cuda-downloads">Cuda</a> version if an Nvidia GPU and Cuda setup is installed on your machine, e.g.</div> ```python # CUDA 10.0 pip install torch==1.4.0+cu100 torchvision==0.5.0+cu100 -f https://download.pytorch.org/whl/torch_stable.html ``` - <div class="alert alert-block alert-danger">Make sure you've installed the <b>same version of PyTorch and torchvision</b>. If you install your own version, there might be some issues.</div> ``` import torch import torchvision print(f"Torch version: {torch.__version__}\nTorchvision version: {torchvision.__version__}\n") if not torch.__version__.startswith("1.4.0"): print("you are using an another version of PyTorch. We expect PyTorch 1.4.0. You can continue with your version but it" " might cause some issues") if not torchvision.__version__.startswith("0.5.0"): print("you are using an another version of torchvision. We expect torchvision 0.5.0. You can continue with your version but it" " might cause some issues") ``` ## 1. Getting Started In this session you will learn the basic element Tensor and some simple oprations of PyTorch. ``` import numpy as np import matplotlib.pyplot as plt import torchvision.transforms as transforms from torch.utils.data.sampler import SubsetRandomSampler import os import pandas as pd pd.options.mode.chained_assignment = None # default='warn' %load_ext autoreload %autoreload 2 %matplotlib inline ``` ### 1.1 Tensors Tensors are similar to NumPyโ€™s ndarrays, with the addition being that Tensors can also be used on a GPU to accelerate computing. ``` # Construct a (2,3) NumPy array and a (2,3) tensor directly from data # [[1 2 3] # [4 5 6]] a_np = np.array([[1,2,3],[5,6,7]]) #NumPy array a_ts = torch.tensor([[1,2,3],[4,5,6]]) # Tensor print("a_np:\n {},\n Shape: {}".format(type(a_np), a_np.shape)) # print(a_np) print("a_ts:\n {},\n Shape: {}".format(type(a_ts), a_ts.shape) ) print(a_ts) ``` ### 1.2 Conversion btw. NumPy ndarray and Tensor The conversion between NumPy ndarray and PyTorh tensor is quite easy. ``` # Conversion m_np = np.array([1, 2, 3]) n_ts = torch.from_numpy(m_np) #Convert a numpy array to a Tensor v_np = n_ts.numpy() #Tensor to numpy v_np[1] = -1 #Numpy and Tensor share the same memory assert(m_np[1] == v_np[1]) #Change Numpy will also change the Tensor ``` <div class="alert alert-block alert-info"><b>Hint:</b> During the conversion, both ndarray and Tensor share the same memory storage. Change value from either side will affect the other.</div> ### 1.3 Operations #### 1.3.1 Indexing We can use the NumPy indexing in Tensors: ``` a_ts # Let us take the first two columns from the original array and save it in a new one b = a_ts[:2, :2] #Use numpy type indexing #b.shape b[:, 0] = 0 #For assignment print(b) # Select elements which satisfy a condition # Using numpy array makes such a selection trivial mask = a_ts > 1 new_array = a_ts[mask] print(new_array) # Do the same thing in a single step c = a_ts[a_ts>1] print(c == new_array) #Why assert doesn't work here ##assert np.all(new_array == c) # np.all() to indicate that all the values need to match ``` #### 1.3.2 Mathematical operations ``` torch.empty(2, 2) # Mathematical operations x = torch.tensor([[1,2],[3,4]]) y = torch.tensor([[5,6],[7,8]]) # Elementwise Addition # [[ 6.0 8.0] # [10.0 12.0]] #Addition: syntax 1 print("x + y: {}".format(x + y)) #Addition: syntax 2 print("x + y: {}".format(torch.add(x, y))) #Addition: syntax 3 result_add = torch.empty(2, 2) torch.add(x, y, out=result_add) print("x + y: {}".format(result_add)) # Elementwise Subtraction # [[-4.0 -4.0] # [-4.0 -4.0]] # Subtraction: syntax 1 print("x - y: {}".format(x - y)) # Subtraction: syntax 2 print("x - y: {}".format(torch.sub(x, y))) # Subtraction: syntax 3 result_sub = torch.empty(2, 2) torch.sub(x, y, out=result_sub) print("x - y: {}".format(result_sub)) # Elementwise Multiplication # [[ 5.0 12.0] # [21.0 32.0]] # Multiplication: syntax 1 print("x * y: {}".format(x * y)) # Multiplication: syntax 2 print("x * y: {}".format(torch.mul(x, y))) # Multiplication: syntax 3 result_mul = torch.empty(2, 2) torch.mul(x, y, out=result_mul) print("x * y: {}".format(result_mul)) ``` When dividing two ints in NumPy, the result is always a **float**, e.g. ``` x_np = np.array([[1,2],[3,4]]) y_np = np.array([[5,6],[7,8]]) print(x_np / y_np) ``` **However, in PyTorch 1.4.0 `torch.div` calculates floor division if both operands have integer types**; If you want **true division** for integers, pleases convert the integers into floats first or specify the output as `torch.div(a, b, out=c)`. <div class="alert alert-block alert-danger">In PyTorch 1.5.0 you can use <b>true_divide</b> or <b>floor_divide</b> to calculate true division or floor division. And in future release div will perform true division as in Python 3. </div> ``` # Elementwise Division # Floor Division: syntax 1 print("x // y: {}".format(x / y)) # Floor Division: syntax 2 print("x // y: {}".format(torch.div(x, y))) # True Division: syntax 1 result_true_div = torch.empty(2, 2) torch.div(x, y, out=result_true_div) print("x / y: {}".format(result_true_div)) ``` ### 1.4 Devices When training a neural network, make sure that all the tensors are on the same device. Tensors can be moved onto any device using `.to` method. ``` # We will use ``torch.device`` objects to move tensors in and out of GPU device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") print(device) print(f"Original device: {x.device}") # "cpu", integer tensor = x.to(device) print(f"Current device: {tensor.device}") #"cpu" or "cuda", double ``` So `x` has been moved onto cuda for those who have a GPU; otherwise it's still on the CPU. <div class="alert alert-block alert-info"><b>Tip:</b> Include the <b>.to(device)</b> calls for every project such that you can easily port it to a GPU version.</div> ## 2. Training a classifier with PyTorch In this session, you'll have an overview about how we could use PyTorch to load data, define neural networks, compute loss and make updates to the weights of the network. We will do the following steps in order: a) Dataloading in Pytorch compared to our previous datasets b) Define a two-layer network c) Define a loss function and optimizer d) Train the network e) Test the network ### 2.1 Datasets and Loading The general procedure of dataloading is: a) Extract: Get the data from the source b) Transform: Put our data into suitable form (e.g. tensor form) c) Load: Put our data into an object to make it easily accessible #### 2.1.1 House price We'll use our dataloader and the dataloader of PyTorch to load the house price dataset separately. First, let's initialize our csv dataset from exercise 3: ``` from exercise_code.data.csv_dataset import CSVDataset, get_exercise5_transform from exercise_code.data.dataloader import DataLoader as our_DataLoader # dataloading and preprocessing steps as in ex04 2_logistic_regression.ipynb target_column = 'SalePrice' i2dl_exercises_path = os.path.dirname(os.path.abspath(os.getcwd())) root_path = os.path.join(i2dl_exercises_path, "datasets", 'housing') housing_file_path = os.path.join(root_path, "housing_train.csv") download_url = 'https://cdn3.vision.in.tum.de/~dl4cv/housing_train.zip' # Set up the transform to get two prepared columns select_two_columns_transform = get_exercise5_transform() # Set up the dataset our_csv_dataset = CSVDataset(target_column=target_column, root=root_path, download_url=download_url, mode="train", transform=select_two_columns_transform) ``` Now we can set up our dataloader similar to Exercise 5 ``` # Set up our old dataloader batch_size = 4 our_dataloader = our_DataLoader(our_csv_dataset, batch_size=batch_size) for i, item in enumerate(our_dataloader): print('Starting item {}'.format(i)) print('item contains') for key in item: print(key) print(type(item[key])) print(item[key].shape) if i+1 >= 1: break ``` In pyTorch we can directly use a [`Dataloader` class](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) and simply initalize it. And it also provides more parameters than ours, such as easy multiprocessing using `num_workers`. You can refer to the link to learn those additional supports. ``` from torch.utils.data import DataLoader pytorch_dataloader = DataLoader(our_csv_dataset, batch_size=batch_size) # We can use the exact same way to iterate over samples for i, item in enumerate(pytorch_dataloader): print('Starting item {}'.format(i)) print('item contains') for key in item: print(key) print(type(item[key])) print(item[key].shape) if i+1 >= 1: break ``` <div class="alert alert-block alert-info">As you can see, both dataloaders load the data with batch_size 4 and the data contains 2 features and 1 target. The only <b>difference</b> here is that the Dataloader of PyTorch will automatically transform the dataset into tensor format.</div> #### 2.1.2 Torchvision Specifically for vision, there's a package called `torchvision`, that has data loaders for common datasets such as Imagenet, FashionMNIST, MNIST, etc. and data transformers for images: `torchvision.datasets` and `torch.utils.data.DataLoader`. This provides a huge convenience and avoids writing boilerplate code. For this tutorial, we will use FashionMNIST dataset. It has 10 classes: 'T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'. The images in FashionMNIST are of size $1 \times 28 \times 28 $, i.e. 1-channel color images of $ 28 \times 28 $ pixels in size. ``` transforms #Define a transform to convert images to tensor transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,),(0.5,))]) # mean and std have to be sequences (e.g. tuples), # therefore we should add a comma after the values fashion_mnist_dataset = torchvision.datasets.FashionMNIST(root='../datasets', train=True, download=True, transform=transform) fashion_mnist_test_dataset = torchvision.datasets.FashionMNIST(root='../datasets', train=False, download=True, transform=transform) fashion_mnist_dataloader = DataLoader(fashion_mnist_dataset, batch_size=8) fashion_mnist_test_dataloader = DataLoader(fashion_mnist_test_dataset, batch_size=8) classes = ('T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot') ``` - `transforms.Compose` creates a series of transformation to prepare the dataset. - `transforms.ToTenser` convert `PIL image` or numpy.ndarray $(H \times W\times C)$ in the range [0,255] to a `torch.FloatTensor` of shape $(C \times H \times W)$ in the range [0.0, 1.0]. - `transforms.Normalize` normalize a tensor image with mean and standard deviation. - `datasets.FashionMNIST` to download the Fashion MNIST datasets and transform the data. `train=True` if we want to get the training set; otherwise set `train=False` to get the test set. - `torch.utils.data.Dataloader` takes our training data or test data with parameter `batch_size` and `shuffle`. `batch_size` defines how many samples per batch to load. `shuffle=True` makes the data reshuffled at every epoch. ``` # We can use the exact same way to iterate over samples for i, item in enumerate(fashion_mnist_dataloader): print('Starting item {}'.format(i)) print('item contains') image, label = item print(f"Type of input: {type(image)}") print(f"Shape of the input: {image.shape}") print(f"label: {label}") if i+1 >= 1: break ``` Since we loaded the data with `batch_size` 8, the shape of the input is (8, 1, 28, 28). So before we push it into the affine layer, we need to flatten it with `x = x.view(-1, x.size[0)` (It will be shown later in 2.2) Let's show some of the training images. ``` def imshow(img): img = img / 2 + 0.5 # unormalize npimg = img.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.show() # get some random training images dataiter = iter(fashion_mnist_dataloader) images, labels = dataiter.next() # show images imshow(torchvision.utils.make_grid(images)) # print labels print(' '.join('%5s' % classes[labels[j]] for j in range(8))) ``` ### 2.2 Define a Two-Layer Network In exercise_06 we've defined the forward and backward pass for an affine layer and a Sigmoid layer (`exercise_code/networks/layer.py`) and completed the implementation of the `ClassificationionNet` class (`exercise_code/networks/classifiation_net.py`). ``` from exercise_code.networks.classification_net import ClassificationNet hidden_size = 100 std = 1.0 model_ex06 = ClassificationNet(input_size=2, hidden_size=hidden_size, std=std) ``` Have a look at your lengthy implementation first ;). Now, we can use `torch.nn.Module` to define our network class, e.g. ``` import torch.nn as nn class Net(nn.Module): def __init__(self, activation=nn.Sigmoid(), input_size=1*28*28, hidden_size=100, classes=10): super(Net, self).__init__() self.input_size = input_size # Here we initialize our activation and set up our two linear layers self.activation = activation self.fc1 = nn.Linear(input_size, hidden_size) self.fc2 = nn.Linear(hidden_size, classes) def forward(self, x): x = x.view(-1, self.input_size) # flatten x = self.fc1(x) x = self.activation(x) x = self.fc2(x) return x ``` Similar to the `ClassificationNet` in exercise_06, here we defined a network with PyTorch. - PyTorch provides a `nn.Module` that builds neural networks - `super().__init__` creates a class that inherits attributes and behaviors from another class - `self.fc1` creates an affine layer with `input_size` inputs and `hidden_size` outputs. - `self.fc2` is similar to `self.fc1`. - `Forward` pass: - first flatten the `x` with `x = x.view(-1, self.input_size)` - 'Sandwich layer' by applying `fc1`, `activation`, `fc2` sequentially. <div class="alert alert-block alert-info">Thanks to <b>autograd</b> package, we just have to define the <b>forward</b> function. And the <b>backward</b> function (where gradients are computed) is automatically defined. We can use any of the Tensor operations in the <b>forward</b> function.</div> <div class="alert alert-block alert-info"> We can use <b>print</b> to see all difined layers (but it won't show the information of the forward pass). And all the learnable parameters of a model are returned by <b>[model_name].parameters()</b>. We also have access to the parameters of different layers by <b>[model_name].[layer_name].parameters()</b> </div> ``` # create model net = Net() net = net.to('cpu') #always remember to move the network to the device print(net) for parameter in net.parameters(): print(parameter.shape) ``` ### 2.3 Define a Loss function and optimizer Let's use a Classification Cross-Entropy loss and SGD with momentum. Recall that we've implemented SGD and MSE in exercise_04. Have a look at their implementations in `exercise_code/networks/optimizer.py` and `exercise_code/networks/loss.py` ``` from exercise_code.networks.optimizer import SGD from exercise_code.networks.loss import MSE, L1 ``` Now we can import the loss function and optimizer directly from `torch.nn` and `torch.optim` respectively, e.g. ``` import torch.optim as optim criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) ``` ### 2.4 Train the network This is when things start to get interesting. We simply have to loop over our data iterator, and feed the inputs to the network and optimize. ``` device = 'cpu' train_loss_history = [] # loss train_acc_history = [] # accuracy for epoch in range(2): # TRAINING running_loss = 0.0 correct = 0.0 total = 0 for i, data in enumerate(fashion_mnist_dataloader, 0): # get the inputs; data is a list of [inputs, labels] X, y = data X = X.to(device) y = y.to(device) # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize y_pred = net(X) # input x and predict based on x loss = criterion(y_pred, y) # calculate the loss loss.backward() # backpropagation, compute gradients optimizer.step() # apply gradients # loss and acc running_loss += loss.item() _, preds = torch.max(y_pred, 1) #convert output probabilities to predicted class correct += preds.eq(y).sum().item() total += y.size(0) # print statistics if i % 1000 == 999: # print every 1000 mini-batches running_loss /= 1000 correct /= total print("[Epoch %d, Iteration %5d] loss: %.3f acc: %.2f %%" % (epoch+1, i+1, running_loss, 100*correct)) train_loss_history.append(running_loss) train_acc_history.append(correct) running_loss = 0.0 correct = 0.0 total = 0 print('FINISH.') ``` So the general training pass is as fowllows: - `zero_grad()`: zero the gradient buffers of all parameters and backprops with random gradient - `y_pred = net(X)`: make a forward pass through the network to getting log probabilities by passing the images to the model. - `loss = criterion(y_pred, y)`: calculate the loss - `loss.backward()`: perform a backward pass through the network to calculate the gradients for model parameters. - `optimizer.step()`: take a step with the optimizer to update the model parameters. We keep tracking the training loss and accuracy over time. The following plot shows averages values for train loss and accuracy. ``` plt.plot(train_acc_history) plt.plot(train_loss_history) plt.title("FashionMNIST") plt.xlabel('iteration') plt.ylabel('acc/loss') plt.legend(['acc', 'loss']) plt.show() ``` ### 2.5 Test the network on the test data We have trained the network for 2 passes over the training dataset. Now we want to check the model by predicting the class label that the neural network outputs, and checking it against the ground-truth. If the prediction is correct, we add the sample to the list of correct predictions. And we'll visualize the data to display test images and their labels in the following format: `predicted (ground-truth)`. The text will be green for accurately classified examples and red for incorrect predictions. ``` #obtain one batch of test images dataiter = iter(fashion_mnist_test_dataloader) images, labels = dataiter.__next__() images, labels = images.to(device), labels.to(device) # get sample outputs outputs = net(images) # convert output probabilites to predicted class _, predicted = torch.max(outputs, 1) # prep images for display images = images.cpu().numpy() # plot the images in the batch, along with predicted and true labels fig = plt.figure(figsize=(25,4)) for idx in range(8): ax = fig.add_subplot(2, 8/2, idx+1, xticks=[], yticks=[]) ax.imshow(np.squeeze(images[idx]), cmap='gray') ax.set_title(f"{classes[predicted[idx]]} ({classes[labels[idx]]})", color="green" if predicted[idx]==labels[idx] else "red") ``` We can also show what are the classes that performed well, and the classes that did not perform well: ``` class_correct = list(0. for i in range(10)) class_total = list(0. for i in range(10)) with torch.no_grad(): for data in fashion_mnist_test_dataloader: images, labels = data images, labels = images.to(device), labels.to(device) outputs = net(images) _, predicted = torch.max(outputs, 1) c = (predicted == labels).squeeze() for i in range(4): label = labels[i] class_correct[label] += c[i].item() class_total[label] += 1 for i in range(10): print('Accuracy of %11s: %2d %%' % (classes[i], 100 * class_correct[i] / class_total[i])) ``` ## Reference 1. [PyTorch Tutorial](https://pytorch.org/tutorials/) 2. [Fashion MNIST dataset training using PyTorch](https://medium.com/@aaysbt/fashion-mnist-data-training-using-pytorch-7f6ad71e96f4)
github_jupyter
## Regression with gradient-boosted trees and MLlib pipelines This notebook uses a bike-sharing dataset to illustrate MLlib pipelines and the gradient-boosted trees machine learning algorithm. The challenge is to predict the number of bicycle rentals per hour based on the features available in the dataset such as day of the week, weather, season, and so on. Demand prediction is a common problem across businesses; good predictions allow a business or service to optimize inventory and to match supply and demand to make customers happy and maximize profitability. ## Load the dataset The dataset is from the UCI Machine Learning Repository and is provided with Databricks Runtime. The dataset includes information about bicycle rentals from the Capital bikeshare system in 2011 and 2012. Load the data using the CSV datasource for Spark, which creates a Spark DataFrame. ``` from pyspark.sql.types import DoubleType, StringType, StructField, StructType from pyspark.sql import SparkSession spark = SparkSession \ .builder \ .appName("Regression ") \ .getOrCreate() df = spark.read.csv("/home/robin/datatsets/bikeSharing/hour.csv", header="true", inferSchema="true") # The following command caches the DataFrame in memory. This improves performance since subsequent calls to the DataFrame can read from memory instead of re-reading the data from disk. df.cache() ``` ## Data description The following columns are included in the dataset: ### Index column: - instant: record index ### Feature columns: - dteday: date - season: season (1:spring, 2:summer, 3:fall, 4:winter) - yr: year (0:2011, 1:2012) - mnth: month (1 to 12) - hr: hour (0 to 23) - holiday: 1 if holiday, 0 otherwise - weekday: day of the week (0 to 6) - workingday: 0 if weekend or holiday, 1 otherwise - weathersit: (1:clear, 2:mist or clouds, 3:light rain or snow, 4:heavy rain or snow) - temp: normalized temperature in Celsius - atemp: normalized feeling temperature in Celsius - hum: normalized humidity - windspeed: normalized wind speed ### Label columns: - casual: count of casual users - registered: count of registered users - cnt: count of total rental bikes including both casual and registered Call display() on a DataFrame to see a sample of the data. The first row shows that 16 people rented bikes between midnight and 1am on January 1, 2011. ``` display(df) print("The dataset has %d rows." % df.count()) ``` ## Preprocess data This dataset is well prepared for machine learning algorithms. The numeric input columns (temp, atemp, hum, and windspeed) are normalized, categorial values (season, yr, mnth, hr, holiday, weekday, workingday, weathersit) are converted to indices, and all of the columns except for the date (dteday) are numeric. The goal is to predict the count of bike rentals (the cnt column). Reviewing the dataset, you can see that some columns contain duplicate information. For example, the cnt column equals the sum of the casual and registered columns. You should remove the casual and registered columns from the dataset. The index column instant is also not useful as a predictor. You can also delete the column dteday, as this information is already included in the other date-related columns yr, mnth, and weekday ``` df = df.drop("instant").drop("dteday").drop("casual").drop("registered") display(df) ``` Print the dataset schema to see the type of each column. ``` df.printSchema() ``` Split data into training and test sets Randomly split data into training and test sets. By doing this, you can train and tune the model using only the training subset, and then evaluate the model's performance on the test set to get a sense of how the model will perform on new data. ``` # Split the dataset randomly into 70% for training and 30% for testing. Passing a seed for deterministic behavior train, test = df.randomSplit([0.7, 0.3], seed = 0) print("There are %d training examples and %d test examples." % (train.count(), test.count())) ``` ## Visualize the data You can plot the data to explore it visually. The following plot shows the number of bicycle rentals during each hour of the day. As you might expect, rentals are low during the night, and peak at commute hours. To create plots, call display() on a DataFrame in Databricks and click the plot icon below the table. To create the plot shown, run the command in the following cell. The results appear in a table. From the drop-down menu below the table, select "Line". Click Plot Options.... In the dialog, drag hr to the Keys field, and drag cnt to the Values field. Also in the Keys field, click the "x" next to <id> to remove it. In the Aggregation drop down, select "AVG". ``` display(train.select("hr", "cnt")) ``` ## Train the machine learning pipeline Now that you have reviewed the data and prepared it as a DataFrame with numeric values, you're ready to train a model to predict future bike sharing rentals. Most MLlib algorithms require a single input column containing a vector of features and a single target column. The DataFrame currently has one column for each feature. MLlib provides functions to help you prepare the dataset in the required format. MLlib pipelines combine multiple steps into a single workflow, making it easier to iterate as you develop the model. In this example, you create a pipeline using the following functions: - VectorAssembler: Assembles the feature columns into a feature vector. - VectorIndexer: Identifies columns that should be treated as categorical. This is done heuristically, identifying any column with a small number of distinct values as categorical. In this example, the following columns are considered categorical: yr (2 values), season (4 values), holiday (2 values), workingday (2 values), and weathersit (4 values). - GBTRegressor: Uses the Gradient-Boosted Trees (GBT) algorithm to learn how to predict rental counts from the feature vectors. - CrossValidator: The GBT algorithm has several hyperparameters. This notebook illustrates how to use hyperparameter tuning in Spark. This capability automatically tests a grid of hyperparameters and chooses the best resulting model. For more information: - VectorAssembler - VectorIndexer The first step is to create the VectorAssembler and VectorIndexer steps. ``` from pyspark.ml.feature import VectorAssembler, VectorIndexer # Remove the target column from the input feature set. featuresCols = df.columns featuresCols.remove('cnt') # vectorAssembler combines all feature columns into a single feature vector column, "rawFeatures". vectorAssembler = VectorAssembler(inputCols=featuresCols, outputCol="rawFeatures") # vectorIndexer identifies categorical features and indexes them, and creates a new column "features". vectorIndexer = VectorIndexer(inputCol="rawFeatures", outputCol="features", maxCategories=4) ``` ### Next, define the model. ``` from pyspark.ml.regression import GBTRegressor # The next step is to define the model training stage of the pipeline. # The following command defines a GBTRegressor model that takes an input column "features" by default and learns to predict the labels in the "cnt" column. gbt = GBTRegressor(labelCol="cnt") ``` The third step is to wrap the model you just defined in a CrossValidator stage. CrossValidator calls the GBT algorithm with different hyperparameter settings. It trains multiple models and selects the best one, based on minimizing a specified metric. In this example, the metric is root mean squared error (RMSE). ``` from pyspark.ml.tuning import CrossValidator, ParamGridBuilder from pyspark.ml.evaluation import RegressionEvaluator # Define a grid of hyperparameters to test: # - maxDepth: maximum depth of each decision tree # - maxIter: iterations, or the total number of trees paramGrid = ParamGridBuilder()\ .addGrid(gbt.maxDepth, [2, 5])\ .addGrid(gbt.maxIter, [10, 100])\ .build() # Define an evaluation metric. The CrossValidator compares the true labels with predicted values for each combination of parameters, and calculates this value to determine the best model. evaluator = RegressionEvaluator(metricName="rmse", labelCol=gbt.getLabelCol(), predictionCol=gbt.getPredictionCol()) # Declare the CrossValidator, which performs the model tuning. cv = CrossValidator(estimator=gbt, evaluator=evaluator, estimatorParamMaps=paramGrid) ``` ### Create the pipeline. ``` from pyspark.ml import Pipeline pipeline = Pipeline(stages=[vectorAssembler, vectorIndexer, cv]) ``` ### Train the pipeline. Now that you have set up the workflow, you can train the pipeline with a single call. When you call fit(), the pipeline runs feature processing, model tuning, and training and returns a fitted pipeline with the best model it found. This step takes several minutes. ``` pipelineModel = pipeline.fit(train) ``` MLlib will automatically track trials in MLflow. After your tuning fit() call has completed, view the MLflow UI to see logged runs. ## Make predictions and evaluate results The final step is to use the fitted model to make predictions on the test dataset and evaluate the model's performance. The model's performance on the test dataset provides an approximation of how it is likely to perform on new data. For example, if you had weather predictions for the next week, you could predict bike rentals expected during the next week. Computing evaluation metrics is important for understanding the quality of predictions, as well as for comparing models and tuning parameters. The transform() method of the pipeline model applies the full pipeline to the input dataset. The pipeline applies the feature processing steps to the dataset and then uses the fitted GBT model to make predictions. The pipeline returns a DataFrame with a new column predictions. ``` predictions = pipelineModel.transform(test) display(predictions.select("cnt", "prediction", *featuresCols)) ``` A common way to evaluate the performance of a regression model is the calculate the root mean squared error (RMSE). The value is not very informative on its own, but you can use it to compare different models. CrossValidator determines the best model by selecting the one that minimizes RMSE. ``` rmse = evaluator.evaluate(predictions) print("RMSE on our test set: %g" % rmse) ``` You can also plot the results, as you did the original dataset. In this case, the hourly count of rentals shows a similar shape. ``` display(predictions.select("hr", "prediction")) ``` It's also a good idea to examine the residuals, or the difference between the expected result and the predicted value. The residuals should be randomly distributed; if there are any patterns in the residuals, the model may not be capturing something important. In this case, the average residual is about 1, less than 1% of the average value of the cnt column. ``` import pyspark.sql.functions as F predictions_with_residuals = predictions.withColumn("residual", (F.col("cnt") - F.col("prediction"))) display(predictions_with_residuals.agg({'residual': 'mean'})) ``` Plot the residuals across the hours of the day to look for any patterns. In this example, there are no obvious correlations. ``` display(predictions_with_residuals.select("hr", "residual")) ``` ## Improving the model Here are some suggestions for improving this model: - The count of rentals is the sum of registered and casual rentals. These two counts may have different behavior, as frequent cyclists and casual cyclists may rent bikes for different reasons. Try training one GBT model for registered and one for casual, and then add their predictions together to get the full prediction. - For efficiency, this notebook used only a few hyperparameter settings. You might be able to improve the model by testing more settings. A good start is to increase the number of trees by setting maxIter=200; this takes longer to train but might more accurate. This notebook used the dataset features as-is, but you might be able to improve performance with some feature engineering. For example, the weather might have more of an impact on the number of rentals on weekends and holidays than on workdays. You could try creating a new feature by combining those two columns. MLlib provides a suite of feature transformers; find out more in the ML guide.
github_jupyter
# Compiling and running C programs As in [the example](https://github.com/tweag/funflow/tree/v1.5.0/funflow-examples/compile-and-run-c-files) in funflow version 1, we can construct a `Flow` which compiles and executes a C program. As in the older versions of this example, we will use the `gcc` Docker image to run our compilation step. ``` :opt no-lint {-# LANGUAGE Arrows #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE QuasiQuotes #-} -- Funflow libraries import qualified Data.CAS.ContentStore as CS import Funflow ( Flow, dockerFlow, ioFlow, getDirFlow, pureFlow, putDirFlow, runFlow, ) import qualified Funflow.Tasks.Docker as DE -- Other libraries import Path (toFilePath, Abs, Dir, Path, File, absdir, parseAbsDir, relfile, reldir, (</>)) import System.Directory (getCurrentDirectory) import System.Process (runCommand, ProcessHandle) ``` Similar to in Funflow version 1.x, inputs to Docker tasks are mounted in from the content store. This means that we need to copy our example c files to the content store before we can compile them: ``` -- | Helper for getting the absolute path to the src directory srcDir :: () -> IO (Path Abs Dir) srcDir _ = do cwd <- getCurrentDirectory cwdAbs <- parseAbsDir cwd return $ cwdAbs </> [reldir|./src|] -- | A `Flow` which copies the c sources to the content store copyExampleToStore :: Flow () CS.Item copyExampleToStore = proc _ -> do exampleDir <- ioFlow srcDir -< () putDirFlow -< exampleDir ``` Now we can define a task which compiles the example C files using `gcc`: ``` config :: DE.DockerTaskConfig config = DE.DockerTaskConfig { DE.image = "gcc:9.3.0", DE.command = "gcc", DE.args = [ "/example/double.c", "/example/square.c", "/example/main.c"] } -- | Compile our C program and get the path to the output executable compile :: Flow CS.Item CS.Item compile = proc exampleItem -> do -- Define a volume for the example directory let exampleVolume = DE.VolumeBinding {DE.item = exampleItem, DE.mount = [absdir|/example/|]} dockerFlow config -< DE.DockerTaskInput {DE.inputBindings = [exampleVolume], DE.argsVals = mempty} ``` And finally, we can construct our full Flow graph and execute it! ``` flow :: Flow Integer ProcessHandle flow = proc input -> do -- 1. Add the example to the content store example <- copyExampleToStore -< () -- 2. Compile the C sources and get the path to the new executable output <- compile -< example outputDir <- getDirFlow -< output exe <- pureFlow (\x -> toFilePath (x </> [relfile|a.out|])) -< outputDir -- 3. Call the executable command <- pureFlow (\(c, n) -> c <> " " <> show n) -< (exe, input) ioFlow runCommand -< command -- Our C program defined in `src/main.c` defines a function f(x) = 2*x + x^2 -- For input 3 this should output 15. runFlow flow 3 :: IO ProcessHandle ```
github_jupyter
# Pivot ## Formato Wide y Formato Long Dentro del mundo de los dataframe (o datos tabulares) existen dos formas de presentar la naturaleza de los datos: formato wide y formato long. Por ejemplo, el conjunto de datos [Zoo Data Set](http://archive.ics.uci.edu/ml/datasets/zoo) presenta las caracterรญsticas de diversos animales, de los cuales presentamos las primeras 5 columnas. |animal_name|hair|feathers|eggs|milk| |-----------|----|--------|----|----| |antelope|1|0|0|1| |bear|1|0|0|1| |buffalo|1|0|0|1| |catfish|0|0|1|0| La tabla asรญ presentada se encuentra en **wide format**, es decir, donde los valores se extienden a travรฉs de las columnas. Serรญa posible representar el mismo contenido anterior en **long format**, es decir, donde los mismos valores se indicaran a travรฉs de las filas: |animal_name|characteristic|value| |-----------|----|--------| |antelope|hair |1| |antelope|feathers|0| |antelope|eggs|0| |antelope|milk|1| |...|...|...|...|..| |catfish|hair |0| |catfish|feathers|0| |catfish|eggs|1| |catfish|milk|0| <img src="images/wide_and_long.png" align="center"/> En python existen maneras de pasar del formato **wide** al formato **long** y viceversa. ## Pivotear y despivotear tablas ### Pivot El pivoteo de una tabla corresponde al paso de una tabla desde el formato **long** al formato **wide**. Tรญpicamente esto se realiza para poder comparar los valores que se obtienen para algรบn registro en particular, o para utilizar algunas herramientas de visualizaciรณn bรกsica que requieren dicho formato. Para ejemplificar estos resultados, ocupemos el conjunto de datos **terremotos.csv**, con contiene los registros de terremotos de distintos paises desde el aรฑo 2000 al 2011. <img src="./images/logo_terremoto.png" alt="" align="center" width="300"/> ``` import pandas as pd import numpy as np import os # formato long df = pd.read_csv(os.path.join("data","terremotos.csv"), sep=",") df.head() ``` Por ejemplo, se quiere saber el terremoto de mayor magnitud a nivel de paรญs aรฑo. Tenemos dos formas de mostrar la informaciรณn. ``` # formato long df.groupby(['Pais','Aรฑo']).max() # formato wide df.pivot_table(index="Pais", columns="Aรฑo", values="Magnitud", fill_value='', aggfunc=pd.np.max) ``` ### Despivotear un tabla El despivotear una tabla corresponde al paso de una tabla desde el formato **wide** al formato **long**. Se reconocen dos situaciones: 1. El valor indicado para la columna es รบnico, y sรณlo se requiere definir correctamente las columnas. 2. El valor indicado por la columna no es รบnico o requiere un procesamiento adicional, y se requiere una iteraciรณn mรกs profunda. Para ejemplificar esto, crearemos un conjunto de datos con los horarios de los ramos que se tiene que dictar en un determinado dรญa, hora y lugar. <img src="./images/logo_classroom.png" alt="" align="center" width="400px"/> **a) El valor indicado para la columna es รบnico** ``` columns = ["sala","dia","08:00","09:00","10:00"] data = [ ["C201","Lu", "mat1","mat1", ""], ["C201","Ma", "","",""], ["C202","Lu", "","",""], ["C202","Ma", "mat1","mat1", ""], ["C203","Lu", "fis1","fis1","fis1"], ["C203","Ma", "fis1","fis1","fis1"], ] df = pd.DataFrame(data=data, columns=columns) df # Despivotear incorrectamente la tabla df.melt(id_vars=["sala"], var_name="hora", value_name="curso") # Despivotear correctamente la tabla df_melt = df.melt(id_vars=["sala", "dia"], var_name="hora", value_name="curso") df_melt[df_melt.curso!=""].sort_values(["sala","dia","hora"]) ``` **b) Relaciones no รบnicas** ``` columns = ["sala","curso","Lu","Ma","hora"] data = [ ["C201","mat1","X","","8:00-10:00"], ["C202","mat1","","X","8:00-10:00"], ["C203","fis1","X","X","8:00-11:00"], ] df = pd.DataFrame(data=data, columns=columns) df ``` #### Mรฉtodos **mรฉtodo 01: Despivotear manualmente y generar un nuevo dataframe** * **Ventajas**: Si se puede es una soluciรณn directa y rรกpida. * **Desventaja**: requiere programaciรณn explรญcita de la tarea, no es reutilizable. ``` # Obtener el dรญa lunes df_Lu = df.loc[df.Lu=="X", ["sala","curso","hora"]] df_Lu["dia"] = "Lu" df_Lu # Obtener el dรญa martes df_Ma = df.loc[df.Ma=="X", ["sala","curso","hora"]] df_Ma["dia"] = "Ma" df_Ma # Juntar pd.concat([df_Lu,df_Ma]) ``` **mรฉtodo 02: Iterar sobre las filas y generar contenido para un nuevo dataframe** * **Ventajas**: En general, fรกcil de codificar. * **Desventaja**: puede ser lento, es ineficiente. ``` my_columns = ["sala","curso","dia","hora"] my_data = [] for i, df_row in df.iterrows(): # Procesar cada fila if df_row.Lu=="X": my_row = [df_row.sala, df_row.curso, "Lu", df_row.hora] my_data.append(my_row) if df_row.Ma=="X": my_row = [df_row.sala, df_row.curso, "Ma", df_row.hora] my_data.append(my_row) new_df = pd.DataFrame(data=my_data, columns=my_columns) new_df ``` ## Referencia 1. [Reshaping and pivot tables](https://pandas.pydata.org/pandas-docs/stable/user_guide/reshaping.html)
github_jupyter
<a href="https://colab.research.google.com/github/cshah13/workforce-opportunities-baltimore-denver/blob/main/Shah_Baltimore_Denver_Job_Analysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Import Libraries ``` # import libraries # data analysis import pandas as pd # data visualization import matplotlib.pyplot as plt import plotly.express as px # download files to our computer from google.colab import files ``` # Import Data for Baltimore Job Availability Imported data from [this github here](https://github.com/cshah13/workforce-opportunities-baltimore-denver) ``` # import csv of baltimore city job availability data # save github csv link job_data = "https://raw.githubusercontent.com/cshah13/workforce-opportunities-baltimore-denver/main/Original%20Baltimore%20Job%20Data%20CSV.csv" #define our initial dataframe df_job = pd.read_csv(job_data) ``` # Look at Our Data ``` # preview the first five rows df_job.head() # preview last five rows of the data df_job.tail() # general stats to help us understand the data df_job.describe() # delete the tract column del df_job["tract"] df_job.head() ``` Remove Non-Baltimore Data ``` df_job['location'].str.contains("baltimore") df_baltimore = df_job[df_job['location'].str.contains("Baltimore")] df_baltimore.head() df_baltimore = df_baltimore.rename(columns = {"location" : "location", "availability_of_jobs_in_2013" : "jobs_per_sq_mile"}) df_baltimore.head() # general stats to help us understand the data df_baltimore.describe() ``` # Create a Bar Graph for Baltimore Job Availability ``` # plot average number of jobs available per square mile in a bar graph df_baltimore.plot(x = "location", y = "jobs_per_sq_mile", kind = "bar", figsize = (45,8)) # add graph labels bmore_jobs_fig = plt.figure() df_baltimore.plot(x = "location", y = "jobs_per_sq_mile", kind = "bar", figsize = (45,8), title = "Average Number of Jobs Available per Square Mile in Baltimore, MD") plt.xlabel("Areas in Baltimore") plt.ylabel("Average Number of Jobs Available Per Square Mile") # save our graph bmore_jobs_fig.savefig("jobs_bmore.png") #downloading the files from google colab files.download("jobs_bmore.png") # melt dataframe to work easier with plotly express df_agg_melt = pd.melt(df_baltimore, id_vars = ["location"]) df_agg_melt.head() # make bar graph in plotly express bmore_job_fig = px.bar(df_baltimore, x = 'location', y = 'jobs_per_sq_mile', title="Average Number of Jobs Available per Square Mile in Baltimore, MD", labels = {"location": "Areas in Baltimore", "jobs_per_sq_mile": "Average Number of Jobs Available Per Square Mile"}, width=1200, height=1000) bmore_job_fig.show() # save an html file bmore_job_fig.write_html("plotly_bar_bmorejobs.html") #download from google files.download("plotly_bar_bmorejobs.html") #remove repeat locations df_avgbmorejob = df_baltimore.groupby("location")["jobs_per_sq_mile"].agg(["mean"]).reset_index() df_avgbmorejob.head() #recreate graph bmorejob__fig = px.bar(df_avgbmorejob, x = 'location', y = 'mean', title="Average Number of Jobs Available per Square Mile in Baltimore, MD", labels = {"location": "Areas in Baltimore", "mean": "Average Number of Jobs Available Per Square Mile"}, width=1200, height=1000,) bmorejob__fig.update_layout(barmode='stack', xaxis={'categoryorder':'total descending'}) bmorejob__fig.show() ``` Import Data for Denver Job Availability Imported data from [this github here](https://github.com/cshah13/workforce-opportunities-baltimore-denver) ``` # import csv of denver job availability data # save github csv link denverjob_data = "https://raw.githubusercontent.com/cshah13/workforce-opportunities-baltimore-denver/main/Original%20Denver%20Job%20Data%20CSV.csv" #define our initial dataframe df_denverjob = pd.read_csv(denverjob_data) # preview the first five rows df_denverjob.head() # preview last five rows of the data df_denverjob.tail() # delete the tract column del df_denverjob["tract"] df_denverjob.head() # general stats to help us understand the data df_denverjob.describe() ``` Remove Non-Denver Data ``` df_denver = df_denverjob[df_denverjob['location'].str.contains("Denver")] df_denverjob.head() # general stats to help us understand the data df_denver.describe() ``` # Create a Bar Graph with Denver Job Availability Data ``` # melt dataframe to work easier with plotly express df_agg_meltt = pd.melt(df_denver, id_vars = ["location"]) df_agg_meltt.head() # make bar graph in plotly express den_job_fig = px.bar(df_denver, x = 'location', y = 'availability_of_jobs_in_2013', title="Average Number of Jobs Available per Square Mile in Denver, CO", labels = {"location": "Areas in Denver", "availability_of_jobs_in_2013": "Average Number of Jobs Available Per Square Mile"}, width=1200, height=1200) den_job_fig.show() # save an html file den_job_fig.write_html("plotly_bar_denjobs.html") #download from google files.download("plotly_bar_denjobs.html") #remove repeat locations df_avgdenverjob = df_denver.groupby("location")["availability_of_jobs_in_2013"].agg(["mean"]).reset_index() df_avgdenverjob.head() # redo bar graph in plotly express dennew_fig = px.bar(df_avgdenverjob, x = 'location', y = 'mean', title="Average Number of Jobs Available per Square Mile in Denver, CO", labels = {"location": "Areas in Denver", "mean": "Average Number of Jobs Available Per Square Mile"}, width=1200, height=1200) dennew_fig.update_layout(barmode='stack', xaxis={'categoryorder':'total descending'}) dennew_fig.show() ```
github_jupyter
## Basic Training UC Berkeley Python Bootcamp ``` print("Hello, world.") ``` # Calculator # > there are `int` and `float` (but not doubles) ``` print(2 + 2) 2 + 2 print(2.1 + 2) 2.1 + 2 == 4.0999999999999996 %run talktools ``` - Python stores floats as their byte representation so is limited by the same 16-bit precision issues as most other languages - In doing calculations, unless you specify otherwise, Python will store the results in the smallest-byte representation > 1. Indentation matters! > 2. When you mess up, Python is gentle > 3. \# starts a comments (until the end of the line) ``` print(2 + 2) 2 + 2 2 # this is a comment and is not printed # this is also a comment ``` &nbsp; ** Calculator ** - In Python 3, there is no distinction between `int` and `long` ``` 42**42 (42**42).bit_length() bin(42**42) ``` Division always leads to a float ``` 2 / 2 2 / 2.0 ``` Note: This is an important difference between Python 2 and Python 3. Old-style division between `int`s can be done with a double slash `\\` ``` 2 // 2 3 // 2 2.5 // 2 # egad, dont do this. ``` There is also `complex` types ``` complex(1,2) 1+2j 1 + 2j - 2j ``` Note: Access to [`decimal`](https://docs.python.org/3/library/decimal.html#module-decimal) (decimal fixed point and floating point arithmetic) and [`fraction`](https://docs.python.org/3/library/fractions.html#module-fractions) types/operations is through built-in `modules`. &nbsp; Let's do some math ``` (3.0*10.0 - 25.0)/5.0 print(3.085e18*1e6) # this is a Megaparsec in units of cm! t = 1.0 # declare a variable t (time) accel = 9.8 # acceleration in units of m/s^2 # distance travelled in time t seconds is 1/2 a*t**2 dist = 0.5*accel*t*t print(dist) # this is the distance in meters dist1 = accel*(t**2)/2 print(dist1) dist2 = 0.5*accel*pow(t,2) print(dist2) ``` - **variables** are assigned on the fly - multiplication, division, exponents as you expect ``` print(6 / 5) ; print(9 / 5) print(6 // 5) ; print(9 // 5) # remember double-slash integer division returns the floor 6 % 5 # mod operator 1 << 2 ## shift: move the number 1 by two bits to the left ## that is make a new number 100 (base 2) 5 >> 1 ## shift: move the number 5 = 101 (base 2) one to ## to the right (10 = 2) x = 2 ; y = 3 ## assign two variables on the same line! x | y ## bitwise OR x ^ y ## exclusive OR (10 ^ 11 = 01) x & y ## bitwise AND x = x ^ y ; print(x) x += 3 ; print(x) x /= 2.0 ; print(x) ``` we'll see a lot more mathy operators and functions later ## Relationships ## ``` # from before dist1 = 4.9 and dist = 4.9 dist1 == dist dist < 10 dist <= 4.9 dist < (10 + 2j) dist < -2.0 dist != 3.1415 ``` &nbsp; ** More on Variables & Types ** ``` 0 == False not False 0.0 == False not (10.0 - 10.0) not -1 not 3.1415 x = None # None is something special. Not true or false None == False None == True False or True False and True float("nan") == True ``` &nbsp; ** More on Variables & Types ** ``` print(type(1)) x = 2 ; type(x) type(2) == type(1) print(type(True)) print(type(type(1))) print(type(pow)) ``` &nbsp; we can test whether something is a certain type with **`isinstance()`** ``` isinstance(1,int) isinstance(1,(int,float)) isinstance("spam",str) isinstance(1.212,int) isinstance(1.212,int) ``` We'll see later than numbers are an instance of an object, which have methods that can act upon itself: ``` (1.212).is_integer() (1.0).is_integer() ``` builtin-types: **`int`**, **`bool`**, **`str`**, **`float`**, **`complex`** # Strings Strings are a sequence of characters - they can be indexed and sliced up as if they were an array - you can glue strings together with + signs Strings are **immutable** (unlike in C), so you cannot change a string in place (this isn't so bad...) Strings can be formatted and compared ``` >>> x = "spam" ; print(type(x)) print("hello!\n...my sire.") "hello!\n...my sire." "wah?!" == 'wah?!' print("'wah?!' said the student") print("\"wah?!\" said the student") ``` backslashes (\\) start special (escape) characters: ``` \n = newline (\r = return) \t = tab \a = bell ``` string literals are defined with double quotes or quotes. The outermost quote type cannot be used inside the string (unless it's escaped with a backslash) See: http://docs.python.org/reference/lexical_analysis.html#string-literals ``` print("\a\a\a") # raw strings don't escape characters print(r'This is a raw string...newlines \r\n are ignored.') # Triple quotes are real useful for multiple line strings y = '''For score and seven minutes ago, you folks all learned some basic mathy stuff with Python and boy were you blown away!''' print(y) ``` - prepending ``r`` makes that string "raw" - triple quotes allow you to compose long strings https://docs.python.org/3.4/reference/lexical_analysis.html#literals ``` print("\N{RIGHT CURLY BRACKET}") print("\N{BLACK HEART SUIT}") ``` http://www.fileformat.info/info/unicode/char/search.htm ``` s = "spam" ; e = "eggs" print(s + e) print("spam" "eggs" "Trumpkins") print(s "eggs") print(s + " and " + e) print(s,"and",e, sep=" ") print("green " + e + " and\n " + s + "\n\t ... and Trumpkins") print(s*3 + e) print(s*3,e,sep="->") print("*"*50) print("spam" == "good") ; print("spam" == "spam") "spam" < "zoo" "s" < "spam" ``` - you can concatenate strings with ``+`` sign - you can do multiple concatenations with the ``*`` sign - strings can be compared ``` print('I want' + 3 + ' eggs and no ' + s) print('I want ' + str(3) + ' eggs and no ' + s) pi = 3.14159 print('I want ' + str(pi) + ' eggs and no ' + s) print(str(True) + ":" + ' I want ' + str(pi) + ' eggs and no ' + s) ``` you must concatenate only strings, coercing ("casting") other variable types to `str` there's a cleaner way to do this, with string formatting. we'll see that tomorrow. ### Getting input from the user: always a string response ``` faren = input("Enterย theย temperature (in Fahrenheit):ย ") cent = (5.0/9.0)*(faren - 32.0) faren = float(faren) cent = (5.0/9.0)*(faren - 32.0) ; print(cent) faren = float(input("Enterย theย temperature (in Fahrenheit):ย ")) print((5.0/9.0)*(faren - 32.0)) ``` &nbsp; #### We can think of strings as arrays (although, unlike in C you never really need to deal with directly addressing character locations in memory) ``` s ="spam" len(s) len("eggs\n") len("") s[0] s[-1] ``` - ``len()`` gives us the length of an array - strings are zero indexed - can also count backwards We can think of strings as arrays (although, unlike in C you never really need to deal with directly addressing character locations in memory) <img src="https://raw.github.com/profjsb/python-bootcamp/master/Lectures/01_BasicTraining/spam.png"> useful for slicing: indices are between the characters <img src="https://raw.github.com/profjsb/python-bootcamp/master/Lectures/01_BasicTraining/spam.png"> ``` s[0:1] # get every character between 0 and 1 s[1:4] # get every character between 1 and 4 s[-2:-1] ## slicing [m:n] will return abs(n-m) characters s[0:100] # if the index is beyond the len(str), you dont segfault! s[1:] # python runs the index to the end s[:2] # python runs the index to the beginning s[::-1] # print it out backwards ``` s = s[:n] + s[n:] for all n ## Basic Control (Flow) Python has pretty much all of what you use: if...elif...else, for, while As well as: break, continue (within loops) Does not have: case (explicitly), goto Does have: `pass` ### Flow is done within blocks (where indentation matters) ``` x = 1 if x > 0: print("yo") else: print("dude") ``` Note: if you are doing this within the Python interpreter you'll see the ... ``` >>> x = 1 >>> if x > 0: ... print "yo" ... else: ... print "dude" ... yo ``` Note colons & indentations (tabbed or spaced) ``` x = 1 if x > 0: print("yo") else: print("dude") ``` Indentations with the same block must be the same but not within different blocks (though this is ugly) one-liners ``` print("yo" if x > 0 else "dude") ``` a small program... Do Control-C to stop (in Python/IPython) or "Kernel->Interrupt" in IPython notebook ``` x = 1 y = 0 while True: print("yo" if x > 0 else "dude") x *= -1 y += 1 if y > 42: break ``` case statements can be constructed with just a bunch of if, elif,...else ``` if x < 1: print("t") elif x > 100: print("yo") else: print("dude") ``` ordering matters. The first block of `True` in an if/elif gets executed then everything else does not. blocks cannot be empty ``` x = "fried goldfish" if x == "spam for dinner": print("I will destroy the universe") else: # I'm fine with that. I'll do nothing ``` `pass` is a "do nothing" statement ``` if x == "spam for dinner": print("I will destroy the universe") else: # I'm fine with that. I'll do nothing pass ``` The double percent sign at the top of an IPython/Jupyter cell is a cell-level "magic". It's not Python itself, but defined as part of IPython/Jupyter. We'll see more on this later in the bootcamp. ``` %%file temp1.py # set some initial variables. Set the initial temperature low faren = -1000 # we dont want this going on forever, let's make sure we cannot have too many attempts max_attempts = 6 attempt = 0 while faren < 100: # let's get the user to tell us what temperature it is newfaren = float(input("Enter the temperature (in Fahrenheit): ")) if newfaren > faren: print("It's getting hotter") elif newfaren < faren: print("It's getting cooler") else: # nothing has changed, just continue in the loop continue faren = newfaren # now set the current temp to the new temp just entered attempt += 1 # bump up the attempt number if attempt >= max_attempts: # we have to bail out break if attempt >= max_attempts: # we bailed out because of too many attempts print("Too many attempts at raising the temperature.") else: # we got here because it's hot print("it's hot here, people.") %run temp1 %run temp1 %%file temp2.py # set some initial variables. Set the initial temperature low faren = -1000 # we dont want this going on forever, let's make sure we cannot have too many attempts max_attempts = 6 attempt = 0 while faren < 100 and (attempt < max_attempts): # let's get the user to tell us what temperature it is newfaren = float(input("Enter the temperature (in Fahrenheit): ")) if newfaren > faren: print("It's getting hotter") elif newfaren < faren: print("It's getting cooler") else: # nothing has changed, just continue in the loop continue faren = newfaren # now set the current temp to the new temp just entered attempt += 1 # bump up the attempt number if attempt >= max_attempts: # we bailed out because of too many attempts print("Too many attempts at raising the temperature.") else: # we got here because it's hot print("it's hot here, people.") ``` UC Berkeley Python Bootcamp - Basic Training (c) J. Bloom 2008-2016 All Rights Reserved
github_jupyter
``` import pandas as pd import numpy as np def load_forces(forces): df_streets = dict() for force in forces: file_path_streets = './Data/force_data/' + force + '_street.csv' df_streets[force] = pd.read_csv(file_path_streets, low_memory=False, index_col=0) return df_streets ``` The forces around London are: \ Metropolitan Police Service \ City of London Police \ Kent Police \ Sussex Police \ Surrey Police \ Essex Police \ Hertfordshire Police \ Thames Valley Police \ Bedfordshire Police \ Hampshire Police ``` forces = ['metropolitan', 'city-of-london', 'kent', 'sussex', 'surrey', 'essex', 'hertfordshire', 'thames-valley', 'bedfordshire', 'hampshire'] df_streets = load_forces(forces) df_streets_all = pd.DataFrame() for key in forces: df_streets_all = pd.concat([df_streets_all, df_streets[key]], ignore_index=True) df_streets_all.dtypes file_path_employment = './Data/2019_employment.csv' df_employment = pd.read_csv(file_path_employment, low_memory=False, sep=';') df_employment.dtypes df_employment[df_employment['LSOA Code (2011)'] == 'E01000027'] df_employment.columns df_employment[['LSOA Code (2011)', 'Local Authority District code (2019)', 'Local Authority District name (2019)', 'Employment Domain Score']] df_streets_all = df_streets_all.merge(df_employment[['LSOA Code (2011)', 'Local Authority District code (2019)', 'Local Authority District name (2019)', 'Employment Domain Score']] , how = 'left', left_on = 'LSOA code', right_on = 'LSOA Code (2011)') df_streets_all = df_streets_all.drop(['LSOA Code (2011)'], axis=1) file_path_income = './Data/2019_income.csv' df_income = pd.read_csv(file_path_income, low_memory=False, sep=';') df_income.dtypes df_income[df_income['LSOA Code (2011)'] == 'E01000027'] df_income.columns df_income[['LSOA Code (2011)', 'Income Domain Score', 'IDACI Score', 'IDAOPI Score']] df_streets_all = df_streets_all.merge(df_income[['LSOA Code (2011)', 'Income Domain Score', 'IDACI Score', 'IDAOPI Score']], how = 'left', left_on = 'LSOA code', right_on = 'LSOA Code (2011)') df_streets_all = df_streets_all.drop(['LSOA Code (2011)'], axis=1) df_streets_all.head() file_path_police_strength = './Data/police_strength.csv' df_police_strength = pd.read_csv(file_path_police_strength, low_memory=False, sep=';') df_police_strength.head() df_police_strength.dtypes df_police_strength.columns df_police_strength[['force_name', '2019']] df_streets_all['Reported by'].unique() force_conv = {'Metropolitan Police':'Metropolitan Police Service', 'London, City of':'City of London Police', 'Kent':'Kent Police', 'Hampshire':'Hampshire Constabulary', 'Avon & Somerset':'Avon and Somerset Constabulary', 'Sussex':'Sussex Police', 'Surrey':'Surrey Police', 'Essex':'Essex Police', 'Hertfordshire':'Hertfordshire Constabulary', 'Thames Valley':'Thames Valley Police', 'Bedfordshire':'Bedfordshire Police'} df_police_strength['force_name'] = df_police_strength['force_name'].map(force_conv, na_action='ignore') df_police_strength[df_police_strength['force_name'] == 'Bedfordshire Police']['2003'] df_streets_all.head() df_streets_all[] for col in df_police_strength.columns[2:]: df_streets_all = df_streets_all.merge(df_police_strength[['force_name', col]], how = 'left', left_on = 'Reported by', right_on = 'force_name') df_streets_all = df_streets_all.drop(['force_name'], axis=1) df_streets_all = df_streets_all.merge(df_police_strength[['force_name', '2019']], how = 'left', left_on = 'Reported by', right_on = 'force_name') df_streets_all = df_streets_all.drop(['force_name'], axis=1) file_path_police_funding = './Data/police_funding.csv' df_police_funding = pd.read_csv(file_path_police_funding, low_memory=False, sep=';') df_police_funding df_police_funding['Police force'] = df_police_funding['Police force'].map(force_conv, na_action='ignore') df_streets_all = df_streets_all.merge(df_police_funding[['Police force', '2018-19']], how = 'left', left_on = 'Reported by', right_on = 'Police force') df_streets_all.head(5) df_streets_all = df_streets_all.drop(['Police force'], axis=1) file_path_population = './Data/2018_population_data.csv' df_population = pd.read_csv(file_path_population, low_memory=False, sep=';') df_population df_streets_all = df_streets_all.merge(df_population[['CODE', 'POPULATION (2018)']], how = 'left', left_on = 'Local Authority District code (2019)', right_on = 'CODE') df_streets_all.head() df_streets_all = df_streets_all.drop(['CODE'], axis=1) df_streets_all.rename(columns = {'2019':'Police Strength', '2018-19':'Police Funding', 'POPULATION (2018)':'Population'}, inplace = True) df_2019 = df_streets_all[df_streets_all['Month'].str.contains('2019')] df_2019.to_csv('./Data/2019_data.csv', index=False) ```
github_jupyter
# 1millionwomentotech SummerOfCode ## Intro to AI: Week 4 Day 3 ``` print(baby_train[50000]['reviewText']) from nltk.sentiment.vader import SentimentIntensityAnalyzer sia = SentimentIntensityAnalyzer() text = baby_train[50000]['reviewText'] for s in sent_tokenize(text): print(s) print(sia.polarity_scores(s)) def sia_features(dataset): """For each review text in the dataset, extract: (1) the mean positive sentiment over all sentences (2) the mean neutral sentiment over all sentences (3) the mean negative sentiment over all sentences (4) the maximum positive sentiment over all sentences (5) the maximum neutral sentiment over all sentences (6) the maximum negative sentiment over all sentences""" feat_matrix = numpy.empty((len(dataset), 6)) for i in range(len(dataset)): sentences = sent_tokenize(dataset[i]['reviewText']) nsent = len(sentences) if nsent: sentence_polarities = numpy.empty((nsent, 3)) for j in range(nsent): polarity = sia.polarity_scores(sentences[j]) sentence_polarities[j, 0] = polarity['pos'] sentence_polarities[j, 1] = polarity['neu'] sentence_polarities[j, 2] = polarity['neg'] feat_matrix[i, 0:3] = numpy.mean(sentence_polarities, axis=0) # mean over the columns feat_matrix[i, 3:6] = numpy.max(sentence_polarities, axis=0) # maximum over the columns else: feat_matrix[i, 0:6] = 0.0 return feat_matrix sia_tr = sia_features(baby_train) testmat = numpy.arange(12.).reshape((3, 4)) print(testmat) print(numpy.max(testmat, axis=0)) print(numpy.mean(testmat, axis=1)) def len_features(dataset): """Add two features: (1) length of review (in thousands of characters) - truncate at 2,500 (2) percentage of exclamation marks (in %)""" feat_matrix = numpy.empty((len(dataset), 2)) for i in range(len(dataset)): text = dataset[i]['reviewText'] feat_matrix[i, 0] = len(text) / 1000. if text: feat_matrix[i, 1] = 100. * text.count('!') / len(text) else: feat_matrix[i, 1] = 0.0 feat_matrix[feat_matrix>2.5] = 2.5 return feat_matrix len_tr = len_features(baby_train) print(X_train_neg.shape, sia_tr.shape, len_tr.shape) X_train_augmented = numpy.concatenate((X_train_neg, sia_tr, len_tr), axis=1) # stack horizontally lreg_augmented = LinearRegression().fit(X_train_augmented, Y_train) pred_train_augmented = lreg_augmented.predict(X_train_augmented) mae_train_augmented = mean_absolute_error(pred_train_augmented, Y_train) print("Now the mean absolute error on the training data is %f stars" % mae_train_augmented) rf_augmented = RandomForestRegressor().fit(X_train_augmented, Y_train) rfpred_train_augmented = rf_augmented.predict(X_train_augmented) mae_train_rf_augmented = mean_absolute_error(rfpred_train_augmented, Y_train) print("For the RF, it is %f stars" % mae_train_rf_augmented) X_valid_neg = dataset_to_matrix_with_neg(baby_valid) sia_valid = sia_features(baby_valid) len_valid = len_features(baby_valid) X_valid_augmented = numpy.concatenate((X_valid_neg, sia_valid, len_valid), axis=1) pred_valid_augmented = lreg_augmented.predict(X_valid_augmented) pred_valid_rf_augmented = rf_augmented.predict(X_valid_augmented) mae_valid_augmented = mean_absolute_error(pred_valid_augmented, Y_valid) print("On the validation set, we get %f error for the linear regression" % mae_valid_augmented) mae_valid_rf_augmented = mean_absolute_error(pred_valid_rf_augmented, Y_valid) print("And %f for the random forest regression" % mae_valid_rf_augmented) print(baby_train[50000]['reviewText']) from nltk.sentiment.vader import SentimentIntensityAnalyzer sia = SentimentIntensityAnalyzer() text = baby_train[50000]['reviewText'] for s in sent_tokenize(text): print(s) print(sia.polarity_scores(s)) def sia_features(dataset): """For each review text in the dataset, extract: (1) mean positive sentiment over all sentences (2) mean neutral sentiment over all sentences (3) mean negative sentiment over all sentences (4) maximum positive sentiment over all sentences (5) maximum neutral sentiment over all sentences (6) maximum negative sentiment over all sentences """ feat_matrix = numpy.empty((len(dataset), 6)) for i in range(len(dataset)): sentences = sent_tokenize(dataset[i]['reviewText']) nsent = len(sentences) if nsent: sentence_polarities = numpy.empty((nsent, 3)) for j in range(nsent): polarity = sia.polarity_scores(sentences[j]) sentence_polarities[j, 0] = polarity['pos'] sentence_polarities[j, 1] = polarity['neu'] sentence_polarities[j, 2] = polarity['neg'] feat_matrix[i, 0:3] = numpy.mean(sentence_polarities, axis = 0) # mean over the columns feat_matrix[i, 3:6] = numpy.max(sentence_polarities, axis = 0) # maximum over the columns else: feat_matrix[i, 0:6] = 0.0 return feat_matrix sia_tr = sia_features(baby_train) print(sia_tr[:10]) testmat = numpy.arange(12.).reshape((3,4)) print(testmat) print(numpy.max(testmat, axis = 0)) print(numpy.mean(testmat, axis = 1)) # Homework - required for Certification def len_features(dataset): """Add two features: (1) length of review (in thousands of character) - truncate at 2,500 (2) percentage of exclamation marks (in %) """ len_tr = len_features(baby_train) print(X_train_neg.shape, sia_tr.shape) # stack horizontally X_train_augmented = numpy.concatenate( (X_train_neg, sia_tr), axis = 1) lreg_augmented = LinearRegression().fit(X_train_augmented, Y_train) pred_train_augmented = lreg_augmented.predict(X_train_augmented) mae_train_augmented = mean_absolute_error(pred_train_augmented, Y_train) print("Now the mean absolute error on the training data is %f starts" % mae_train_augmented) # random forest rf_augmented = RandomForestRegressor().fit(X_train_augmented, Y_train) rfpred_train_augmented = rf_augmented.predict(X_train_augmented) mae_train_rf_augmented = mean_absolute_error(rfpred_train_augmented, Y_train) print("For the RF, MAE is %f stars" % mae_train_rf_augmented) X_valid_neg = dataset_to_matrix_with_neg(baby_valid) sia_valid = sia_features(baby_valid) # len_valid = X_valid_augmented = numpy.concatenate((X_valid_neg, sia_valid), axis = 1) pred_valid_augmented = pred_valid_rfaugmented = mae_valid_augmented = mae_valid_rfaugmented = ``` # Homework for certification Refactor the code above: - "Be lazy. Not just lazy but proactively, agressively lazy." Remove duplication. - create a single function that takes in data and spits out all success metrics across all of your algos. # Where to go from here? - unigrams (NLTK) - word vector (gensim, [glove](https://nlp.stanford.edu/projects/glove/), word2vec) - recurrent neural net - convolutional neural net https://www.oreilly.com/learning/perform-sentiment-analysis-with-lstms-using-tensorflow http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/ https://machinelearningmastery.com/develop-n-gram-multichannel-convolutional-neural-network-sentiment-analysis/
github_jupyter
# ่‡ชๅŠจๆฑ‚ๅฏผ็š„็›ธๅ…ณ่ฎพ็ฝฎ - Tensor็š„ๅฑžๆ€ง๏ผš - requires_grad=True - ๆ˜ฏๅฆ็”จๆฅๆฑ‚ๅฏผ - is_leaf๏ผš - ๅถๅญ่Š‚็‚นๅฟ…้กปๆ˜ฏ่ฎก็ฎ—็š„็ป“ๆžœ๏ผ› - ็”จๆˆทๅˆ›ๅปบ็š„Tensor็š„is_leaf=True๏ผˆๅฐฝ็ฎกrequires_grad=True๏ผŒไนŸis_leaf=True๏ผ‰๏ผ› - requires_grad=False็š„Tensor็š„is_leaf=True๏ผ› - grad_fn๏ผš - ็”จๆฅๆŒ‡ๅฎšๆฑ‚ๅฏผๅ‡ฝๆ•ฐ๏ผ› - grad - ็”จๆฅ่ฟ”ๅ›žๅฏผๆ•ฐ๏ผ› - dtype - ๅชๆœ‰torch.float็š„ๅผ ้‡ๆ‰่ƒฝๆฑ‚ๅฏผ๏ผ› 1. ๆฑ‚ๅฏผ็š„ไพ‹ๅญ ``` import torch # x่‡ชๅ˜้‡ x = torch.Tensor([5]) x.requires_grad=True # yๅ› ๅ˜้‡ y = x ** 2 # ๆฑ‚ๅฏผ y.backward() # ๅฏผๆ•ฐ็š„็ป“ๆžœ print(x.grad) ``` 2. ๆฑ‚ๅฏผ็š„ๅฏ่ง†ๅŒ–(ๅฏผๆ•ฐๅ‡ฝๆ•ฐ็š„ๆ›ฒ็บฟ) ``` %matplotlib inline import matplotlib.pyplot as plt import torch # x่‡ชๅ˜้‡ x = torch.linspace(0, 10, 100) x.requires_grad=True # yๅ› ๅ˜้‡ y = (x - 5) ** 2 + 3 z = y.sum() # ๆฑ‚ๅฏผ z.backward() print() # ๅฏ่ง†ๅŒ– plt.plot(x.detach(), y.detach(), color=(1, 0, 0, 1), label='$y=(x-5)^2 + 3$') plt.plot(x.detach(), x.grad.detach(), color=(1, 0, 1, 1), label='$y=2(x-5)$') plt.legend() plt.show() # print(x.grad) # print(x) ``` 3. ๆฑ‚ๅฏผ็›ธๅ…ณ็š„ๅฑžๆ€งๅ€ผ ``` import torch # x่‡ชๅ˜้‡ x = torch.Tensor([5]) x.requires_grad=True # ๆฑ‚ๅฏผๅ‰็š„ๅฑžๆ€ง print("-------------ๆฑ‚ๅฏผๅ‰x") print("leaf:", x.is_leaf) print("grad_fn:", x.grad_fn) print("grad:", x.grad) # yๅ› ๅ˜้‡ y = x ** 2 print("-------------ๆฑ‚ๅฏผๅ‰y") print("requires_grad:", y.requires_grad) print("leaf:", y.is_leaf) print("grad_fn:", y.grad_fn) print("grad:", y.grad) # ๆฑ‚ๅฏผ y.backward() # ๅชๅฏนๆ ‡้‡่ฟ็ฎ— print("-------------ๆฑ‚ๅฏผๅŽx") # ๆฑ‚ๅฏผๅŽ็š„ๅฑžๆ€ง print("leaf:", x.is_leaf) print("grad_fn:", x.grad_fn) print("grad:", x.grad) print("-------------ๆฑ‚ๅฏผๅŽy") print("requires_grad:", y.requires_grad) print("leaf:", y.is_leaf) print("grad_fn:", y.grad_fn) print("grad:", y.grad) ``` # Tensor็š„backwardๅ‡ฝๆ•ฐ ## backwardๅ‡ฝๆ•ฐๅฎšไน‰ - ๅ‡ฝๆ•ฐๅฎšไน‰๏ผš ```python backward(self, gradient=None, retain_graph=None, create_graph=False) ``` - ๅ‚ๆ•ฐ่ฏดๆ˜Ž๏ผš - gradient=None๏ผš้œ€่ฆๆฑ‚ๅฏผ็š„ๅพฎๅˆ†ๅผ ้‡๏ผ› - retain_graph=None๏ผšไฟ็•™ๅ›พ๏ผ›ๅฆๅˆ™ๆฏๆฌก่ฎก็ฎ—ๅฎŒๆฏ•๏ผŒๅบŠๅˆ›ๅปบ็š„ๅ›พ้ƒฝไผš่ขซ้‡Šๆ”พใ€‚ - create_graph=False๏ผšๅˆ›ๅปบๅฏผๆ•ฐๅ›พ๏ผŒไธป่ฆ็”จๆฅๆฑ‚้ซ˜้˜ถๅฏผๆ•ฐ๏ผ› ## ๆฑ‚ๅฏผ็š„้€š็”จๆจกๅผ - ๅ‡ฝๆ•ฐ่กจ่พพๅผ๏ผš - $z = 2x + 3y$ - ๆ‰‹ๅทฅๆฑ‚ๅฏผ๏ผš - $\dfrac{\partial{z}}{\partial{x}} = 2$ ``` import torch x = torch.Tensor([1, 2, 3]) x.requires_grad=True # ่ฟ™ไธชๅฑžๆ€งๅฟ…้กปๅœจ z = 2*x + 3*y ่กจ่พพๅผๆž„ๅปบๅ›พ็š„ๆ—ถๅ€™่ฎพ็ฝฎ y = torch.Tensor([4, 5, 6]) z = 2*x + 3*y z.backward(x) # ๅฏนxๆฑ‚ๅฏผ๏ผŒๅพ—ๅˆฐ็š„็ป“ๆžœ๏ผŒ่‡ช็„ถๆ˜ฏ 2๏ผŒไฝ†ๆ˜ฏx็š„gradๆ˜ฏ 2 * x print(x.grad, y.grad, z.grad) # ๆฒกๆœ‰ๅฏนyๆฑ‚ๅฏผ๏ผŒๆ‰€ไปฅๅฏนyๆฒกๆœ‰่ฆๆฑ‚ ``` ## ็†่งฃๅฏผๆ•ฐ - ๅ‡ฝๆ•ฐ่กจ่พพๅผ๏ผš - $z = x^2$ - ๆ‰‹ๅทฅๆฑ‚ๅฏผ๏ผš - $\dfrac{\partial{z}}{\partial{x}} = 2x$ - $\color{red}{ไธŠ้ข่ฟ‡็จ‹ๆ€Žไนˆ่ฎก็ฎ—็š„ๅ‘ข๏ผŸ}$ ### ็ป“ๆžœๅผ ้‡ไธบๆ ‡้‡็š„ๆƒ…ๅ†ต - ๅฆ‚ๆžœzๆ˜ฏๆ ‡้‡๏ผŒๅˆ™็›ดๆŽฅ่ฎก็ฎ—ๅฏผๆ•ฐ๏ผš$\dfrac{\partial{z}}{\partial{x}} = 2x$ ``` import torch x = torch.Tensor([2]) x.requires_grad=True z = x**2 # ๆฑ‚ๅฏผๅ‡ฝๆ•ฐ z.backward() # ๅฏนxๆฑ‚ๅฏผ๏ผŒ2 * x ๏ผŒๅฏผๆ•ฐไธบ2x=4 print(x.grad, z.grad) ``` ### ็ป“ๆžœๅผ ้‡ไธบๅ‘้‡็š„ๆƒ…ๅ†ต - ๅฆ‚ๆžœzๆ˜ฏๅ‘้‡๏ผŒๅˆ™้œ€่ฆๅ…ˆ่ฎก็ฎ—zไธŽx็š„ๅ†…็งฏ๏ผŒๅพ—ๅˆฐๆ ‡้‡็ป“ๆžœ๏ผŒ็„ถๅŽๅ†ๆฑ‚ๅฏผใ€‚ - $z = x^2$ - $l = z \cdot x$ - $\dfrac{\partial{l}}{\partial{x}} = \dfrac{\partial{l}}{\partial{z}} \dfrac{\partial{z}}{\partial{x}} = x \dfrac{\partial{z}}{\partial{x}} = x 2x$ ``` import torch x = torch.Tensor([2]) x.requires_grad=True y = x**2 # ๆฑ‚ๅฏผๅ‡ฝๆ•ฐ y.backward(x) # 2 x x = 8 print(x.grad, y.grad) print(x.grad/x) # ๆญฃๅฎ—็ป“ๆžœ ``` ### ๅ–ๆฑ‚ๅฏผๅ‘้‡ไธบ1ๅ‘้‡ - ๆ นๆฎไธŠ้ข็š„ๆŽจๅฏผ๏ผŒๅœจ่‡ชๅŠจๆฑ‚ๅฏผไธญๅŒ…ๅซๅ‡ ไธช้ป˜่ฎคๅŠจไฝœ๏ผš - 1. ไฝฟ็”จz.backward()๏ผŒๆฒกๆœ‰ๆŒ‡ๅฎšๅพฎๅˆ†้‡็š„ๆƒ…ๅ†ตไธ‹๏ผŒๅฎž้™…ไธŠๆ˜ฏๅฏนๅ›พ็š„ๆ‰€ๆœ‰ๆ ‡่ฎฐไธบrequires_grad=True็š„ๅถๅญๅผ ้‡ๅฎž็Žฐๆฑ‚ๅฏผ๏ผ› - ๅฝ“ๅถๅญ่Š‚็‚น้ƒฝๆ˜ฏrequires_grad=False๏ผŒไผšๆŠ›ๅ‡บๅผ‚ๅธธใ€‚ - `RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn` - 2. ไฝฟ็”จz.backward(x)๏ผŒ็›ดๆŽฅๆŒ‡ๅฎš้œ€่ฆ็š„ๆฑ‚ๅฏผ๏ผ› - ๅ…ถๅฎž่ฟ™็งๆŒ‡ๅฎš๏ผŒๆ˜ฏๆฒกๆœ‰ๆ„ไน‰็š„๏ผŒๅ› ไธบๆŒ‡ๅฎšx๏ผŒไนŸๆ˜ฏๅฏนๆ‰€ๆœ‰requires_grad=True็š„ๅถๅญ่Š‚็‚นๆฑ‚ๅฏผใ€‚ - ไธ‹้ขไพ‹ๅญไฝ“ไผšไธ‹๏ผŒๅคšไธชๅถๅญ่Š‚็‚น็š„่‡ชๅŠจๆฑ‚ๅฏผ๏ผ› - ๅฐฑ็ฎ—ๅชๅฏนxๆฑ‚ๅฏผ๏ผŒๅฎž้™…ๅฏนyไนŸไผšๆฑ‚ๅฏผ๏ผ› ``` import torch x = torch.Tensor([1, 2, 3]) y = torch.Tensor([4, 5, 6]) x.requires_grad=True y.requires_grad=True z = 3*x + 2*y # ๆฑ‚ๅฏผๅ‡ฝๆ•ฐ z.backward(x) # ๅฏนxๆฑ‚ๅฏผ print(x.grad, y.grad) # [3., 6., 9.] ๏ผšๅฏผๆ•ฐๆ˜ฏ3 ไธŽ [2., 4., 6.]๏ผšๅฏผๆ•ฐๆ˜ฏ2 print(x.grad/x, y.grad/x) # [3., 6., 9.] ๏ผšๅฏผๆ•ฐๆ˜ฏ3 ไธŽ [2., 4., 6.]๏ผšๅฏผๆ•ฐๆ˜ฏ2 ``` - ไปŽไธŠ้ขไพ‹ๅญ็œ‹ๅ‡บ๏ผšbackward็š„ๅ‚ๆ•ฐๅผ ้‡๏ผŒไป…ไป…ๆ˜ฏๆŠŠๆฑ‚ๅฏผๅ‡ฝๆ•ฐไปŽๅ‘้‡่ฝฌๆขๆˆๆ ‡้‡ๆฑ‚ๅฏผ๏ผŒ ๆœฌ่บซๅนถๆฒกๆœ‰ๆŒ‡ๅฎšๅฏนๅ“ชไธชๅ˜้‡๏ผˆๅผ ้‡ๆฑ‚ๅฏผ็š„๏ผ‰็š„ๅซไน‰ใ€‚ - ็”ฑไบŽbackward็š„ๅ‚ๆ•ฐไป…ไป…ๆ˜ฏๅ‘้‡ๅˆฐๅ˜้‡็š„่ฝฌๅŒ–ๅทฅไฝœ๏ผŒๆ‰€ไปฅๆˆ‘ไปฌๅŽป่ฟ™ไธชๅ‚ๆ•ฐไธบ1ๅณๅฏใ€‚ไธ‹้ขๆ˜ฏๆŽจ็†็†่ฎบใ€‚ - $z = x^2$ - $l = z \cdot 1$ - $\dfrac{\partial{l}}{\partial{x}} = \dfrac{\partial{l}}{\partial{z}} \dfrac{\partial{z}}{\partial{x}} = \dfrac{\partial{z \cdot 1 }}{\partial{z}} \dfrac{\partial{z}}{\partial{x}} = \dfrac{\partial{z}}{\partial{x}} = 2x$ - ๅ–1ๅผ ้‡ไฝœไธบๆขฏๅบฆๆฑ‚ๅฏผ ``` import torch x = torch.Tensor([1, 2, 3]) x.requires_grad=True z = x**2 # ๆฑ‚ๅฏผๅ‡ฝๆ•ฐ z.backward(torch.ones_like(x)) print(x.grad, z.grad) ``` - ไธ‹้ข็š„ๆ“ไฝœไธŽๅ–1ๅผ ้‡็š„ๅŽŸ็†ๅฎŒๅ…จไธ€่‡ด - ๅชๆ˜ฏ็”จๆˆท่‡ชๅทฑๅšไบ†่ฟ™ไธชๅ†…็งฏ่ฟ็ฎ—่€Œๅทฒใ€‚ ``` import torch x = torch.Tensor([1, 2, 3]) x.requires_grad=True z = (x**2).sum() # ็›ดๆŽฅๆฑ‚ๅ’Œ z.backward() print(x.grad, z.grad) ``` ## ๅคๆ‚็š„ๆฑ‚ๅฏผ่ฟ็ฎ—ไพ‹ๅญ - ไธ‹้ขๆ˜ฏ่ฎก็ฎ—็š„ๅ›พ็คบๆ„ๅ›พ๏ผš - ![image.png](attachment:image.png) ``` import torch # ๅถๅญ่Š‚็‚น x = torch.Tensor([1, 2, 3]) y = torch.Tensor([3, 4, 5]) z = torch.Tensor([1, 2, 3]) x.requires_grad=True y.requires_grad=True z.requires_grad=True # ไธญ้—ด่Š‚็‚น xy = x + y xy2 = xy ** 2 z3 = z ** 3 xy2z3=xy2 * z3 # ๆฑ‚ๅฏผๆ•ฐ xy2z3.backward(torch.Tensor([1.0, 1.0, 1.0])) print(x.grad, y.grad, z.grad) print(xy.grad, xy2.grad, z3.grad, xy2z3.grad) # ๆฒกๆœ‰ๆขฏๅบฆ๏ผŒๅ› ไธบไธๆ˜ฏๅถๅญ่Š‚็‚น print(xy.grad_fn, xy2.grad_fn, z3.grad_fn, xy2z3.grad_fn) print(xy.requires_grad, xy2.requires_grad, z3.requires_grad, xy2z3.requires_grad) ``` ## ไธญ้—ดๅฏผๆ•ฐ - ไฝฟ็”จไธŠ้ขๆจกๅผ็ผ–็จ‹๏ผŒๅฏไปฅๅ‘็Žฐๅ…ถไธญๅช่ฎก็ฎ—ๅ‡บ่พ“ๅ…ฅๅ˜้‡็š„ๅฏผๆ•ฐ๏ผŒไธญ้—ดๅ˜้‡็š„ๅฏผๆ•ฐๆ˜ฏๆ— ๆณ•่Žทๅ–็š„๏ผŒๅฆ‚ๆžœๆƒณ่Žทๅ–ไธญ้—ดๅ˜้‡็š„ๅฏผๆ•ฐ๏ผŒ้œ€่ฆๆณจๅ†Œไธ€ไธชๅ›ž่ฐƒ้’ฉๅญๅ‡ฝๆ•ฐ๏ผŒ้€š่ฟ‡่ฟ™ไธชๅ‡ฝๆ•ฐ่ฟ”ๅ›žใ€‚ - ่Žทๅ–ไธญ้—ดๅ˜้‡ๅฏผๆ•ฐ็š„ไพ‹ๅญ ``` import torch # ๅถๅญ่Š‚็‚น x = torch.Tensor([1, 2, 3]) y = torch.Tensor([3, 4, 5]) z = torch.Tensor([1, 2, 3]) x.requires_grad=True y.requires_grad=True z.requires_grad=True # ไธญ้—ด่Š‚็‚น xy = x + y # xyz = xy * z # xyz.backward(torch.Tensor([1, 1, 1])) xyz = torch.dot(xy, z) # ==================== def get_xy_grad(grad): print(F"xy็š„ๅฏผๆ•ฐ๏ผš{ grad }") # ๅฏไปฅไฟๅญ˜ๅˆฐๅ…จๅฑ€ๅ˜้‡ไฝฟ็”จใ€‚ xy.register_hook(get_xy_grad) # ==================== xyz.backward() print(x.grad, y.grad, z.grad) print(xy.grad, y.grad, z.grad) ``` ## ้ซ˜้˜ถๅฏผๆ•ฐ 1. ๆไพ›create_graphๅ‚ๆ•ฐ็”จๆฅไฟ็•™ๅฏผๆ•ฐ็š„ๅ›พ๏ผŒ็”จๆฅๅฎž็Žฐ้ซ˜็บงๅฏผๆ•ฐ็š„่ฎก็ฎ—ใ€‚ 2. ้ซ˜้˜ถๅฏผๆ•ฐๅ› ไธบไธๆ˜ฏๅถๅญ่Š‚็‚น๏ผŒ้œ€่ฆ้€š่ฟ‡ๅ›ž่ฐƒ้’ฉๅญ่Žทๅ– ``` import torch x = torch.Tensor([1]) x.requires_grad=True z = x**6 # ๆฑ‚ๅฏผๅ‡ฝๆ•ฐ z.backward(create_graph=True) # retain_graphไฟ็•™็š„ๆ˜ฏๆœฌ่บซ็š„่ฟ็ฎ—ๅ›พ๏ผŒcreate_graphๆ˜ฏไฟ็•™ๅพฎๅˆ†ๅ›พ print(x.grad) # ๅฏผๆ•ฐ3 # ==================== def get_xy_grad(grad): print(F"x.grad็š„้ซ˜้˜ถๅฏผๆ•ฐ๏ผš{ grad }") # ๅฏไปฅไฟๅญ˜ๅˆฐๅ…จๅฑ€ๅ˜้‡ไฝฟ็”จใ€‚ x.register_hook(get_xy_grad) # ==================== x.grad.backward(create_graph=True) ``` # Tensor็š„่‡ชๅŠจๆฑ‚ๅฏผ - ๆœ‰ไบ†ไธŠ้ข็š„ๅŸบ็ก€๏ผŒไธ‹้ข็œ‹torch.autogradไธญ็š„่‡ชๅŠจๆฑ‚ๅฏผ๏ผŒๅฐฑๅŸบๆœฌไธŠ้žๅธธ็ฎ€ๅ•ใ€‚ - Torchๆไพ›ไบ†torch.autogradๆจกๅ—ๆฅๅฎž็Žฐ่‡ชๅŠจๆฑ‚ๅฏผ๏ผŒ่ฏฅๆจกๅ—ๆšด้œฒ็š„่ฐƒ็”จๅฆ‚ไธ‹๏ผš - `['Variable', 'Function', 'backward', 'grad_mode']` ## backward็š„ไฝฟ็”จ - autogradๆไพ›็š„backwardๆ˜ฏTensor็š„backward็š„้™ๆ€ๅ‡ฝๆ•ฐ็‰ˆๆœฌ๏ผŒไฝฟ็”จ่ฐˆไธไธŠไพฟๆท๏ผŒไฝ†ๅคšไบ†ไธ€ไธช้€‰ๆ‹ฉ๏ผ› ```python torch.autograd.backward( tensors, grad_tensors=None, retain_graph=None, create_graph=False, grad_variables=None) ``` - ๅ‚ๆ•ฐ่ฏดๆ˜Ž๏ผš - tensors๏ผš่ขซๆฑ‚ๅฏผ็š„ๅ‘้‡๏ผˆๅฟ…้กปๅ…ทๆœ‰grad_fn๏ผ‰๏ผ› - grad_tensors=None๏ผšๆขฏๅบฆๅ‘้‡๏ผ› - retain_graph=None๏ผšไฟ็•™่ฎก็ฎ—ๅ›พ๏ผ› - create_graph=False๏ผšๅˆ›ๅปบไธช้ซ˜้˜ถๅพฎๅˆ†ๅ›พ๏ผˆๅฏไปฅ่‡ชๅทฑๆ‰‹ๅทฅๅพ—ๅˆฐ้ซ˜้˜ถๅฏผๆ•ฐ๏ผŒไนŸๅฏไปฅไฝฟ็”จไธ‹้ข็š„gradๅฐ่ฃ…ๅ‡ฝๆ•ฐ๏ผ‰๏ผ› - grad_variables=None๏ผšๅ…ผๅฎนๅŽŸๆฅVariable็‰ˆๆœฌ็š„ๅ‚ๆ•ฐ๏ผŒๅœจๆ–ฐ็š„็‰ˆๆœฌไธญไธๅ†ไฝฟ็”จใ€‚ - torch.autograd.backwardๅ‡ฝๆ•ฐ็š„ไฝฟ็”จไพ‹ๅญ - ๅ‚ๆ•ฐgrad_variablesๅœจๆˆ‘็š„่ฟ™ไธช็‰ˆๆœฌไธญ๏ผŒๅทฒ็ปไธ่ƒฝไฝฟ็”จใ€‚ ``` import torch # ๅถๅญ่Š‚็‚น x = torch.Tensor([1, 2, 3]) y = torch.Tensor([3, 4, 5]) z = torch.Tensor([1, 2, 3]) x.requires_grad=True y.requires_grad=True z.requires_grad=True # ไธญ้—ด่Š‚็‚น xy = x + y # xyz = xy * z # xyz.backward(torch.Tensor([1, 1, 1])) xyz = torch.dot(xy, z) # ==================== def get_xy_grad(grad): print(F"xy็š„ๅฏผๆ•ฐ๏ผš{ grad }") # ๅฏไปฅไฟๅญ˜ๅˆฐๅ…จๅฑ€ๅ˜้‡ไฝฟ็”จใ€‚ xy.register_hook(get_xy_grad) # ==================== torch.autograd.backward(xyz) print(x.grad, y.grad, z.grad) print(xy.grad, y.grad, z.grad) ``` ## grad็š„ไฝฟ็”จ - ็”จๆฅ่ฎก็ฎ—่พ“ๅ‡บๅ…ณไบŽ่พ“ๅ…ฅ็š„ๆขฏๅบฆ็š„ๅ’Œ๏ผŒไธๆ˜ฏ่ฟ”ๅ›žๆ‰€ๆœ‰็š„ๆขฏๅบฆ๏ผŒ่€Œๆ˜ฏๅฏนๆŸไธช่พ“ๅ…ฅๅ˜้‡็š„ๆฑ‚ๅฏผ๏ผš$\dfrac{\partial{z}}{\partial{x}}$ - ่ฟ™ไธชๅ‡ฝๆ•ฐ็š„ๅŠŸ่ƒฝๅบ”่ฏฅไธŽhookๅŠŸ่ƒฝ็ฑปไผผใ€‚ - gradๅ‡ฝๆ•ฐ็š„ๅฎšไน‰๏ผš ```python torch.autograd.grad( outputs, inputs, grad_outputs=None, retain_graph=None, create_graph=False, only_inputs=True, allow_unused=False) ``` - ๅ‚ๆ•ฐ่ฏดๆ˜Ž๏ผš - outputs๏ผš่พ“ๅ‡บๅผ ้‡ๅˆ—่กจ๏ผŒไธŽbackwardๅ‡ฝๆ•ฐไธญ็š„tensorsไฝœ็”จไธ€ๆ ท๏ผ› - inputs๏ผš่พ“ๅ…ฅๅผ ้‡ๅˆ—่กจ๏ผŒ็”จๆฅ่ฐƒ็”จregister_hook็š„ๅผ ้‡๏ผ› - grad_outputs๏ผšๆขฏๅบฆๅผ ้‡ๅˆ—่กจ๏ผŒไธŽbackwardๅ‡ฝๆ•ฐไธญ็š„grad_tensorsไฝœ็”จไธ€ๆ ท๏ผ› - retain_graph๏ผš้€ป่พ‘ๅ€ผ๏ผŒ็”จๆฅๆŒ‡ๅฎš่ฟ็ฎ—ๅฎŒๆฏ•ๆ˜ฏๅฆๆธ…้™ค่ฎก็ฎ—ๅ›พ๏ผ› - create_graph๏ผš้€ป่พ‘ๅ€ผ๏ผŒ็”จๆฅๅˆ›ๅปบๆขฏๅบฆ็š„่ฎก็ฎ—ๅ›พ๏ผˆๆขฏๅบฆ็š„ๆขฏๅบฆๅฐฑๆ˜ฏ้ซ˜้˜ถๅฏผๆ•ฐ๏ผ‰ - only_inputs๏ผš้€ป่พ‘ๅ€ผ๏ผŒ็”จๆฅๆŒ‡ๅฎš่ฟ”ๅ›ž็š„่ฎก็ฎ—็ป“ๆžœ๏ผŒไธไป…ไป…ๆ˜ฏinputsๆŒ‡ๅฎš็š„ๅผ ้‡๏ผŒ่€Œๆ˜ฏ่ฎก็ฎ—ๆ‰€ๆœ‰ๅถๅญ่Š‚็‚น็š„ๅฏผๆ•ฐใ€‚้ป˜่ฎคๅ€ผTrue๏ผš่ฟ™ไธชๅ‚ๆ•ฐๅทฒ็ปไธๆŽจ่ไฝฟ็”จ๏ผŒ่€Œไธ”ๅทฒ็ปๆฒกๆœ‰ไฝœ็”จไบ†๏ผŒๅ‘่ฎก็ฎ—ๅถๅญ่Š‚็‚น็š„ๅฏผๆ•ฐๆฒกไฝฟ็”จbackwardๅ‡ฝๆ•ฐใ€‚ - allow_unused๏ผš้€ป่พ‘ๅ€ผ๏ผŒ็”จๆฅๆฃ€ๆต‹ๆ˜ฏๅฆๆฏไธช่พ“ๅ…ฅ้ƒฝ็”จๆฅ่ฎก็ฎ—่พ“ๅ‡บ๏ผŒFalse่กจ็คบไธ้œ€่ฆ๏ผŒTrue่กจ็คบๅฆ‚ๆžœๆœ‰่พ“ๅ…ฅๆฒกๆœ‰็”จไบŽ่พ“ๅ‡บ่ฎก็ฎ—๏ผŒๅˆ™ๆŠ›ๅ‡บ้”™่ฏฏใ€‚ๅฆ‚ๆžœๆฒกๆœ‰่พ“ๅ…ฅ้ƒฝๆ˜ฏ็”จ๏ผŒๅˆ™TrueไธŽFalse็ป“ๆžœ้ƒฝไธ€ๆ ทใ€‚้ป˜่ฎคๅ€ผFalse - grad็š„ไฝฟ็”จไพ‹ๅญ ``` import torch # ๅถๅญ่Š‚็‚น x = torch.Tensor([1, 2, 3]) y = torch.Tensor([3, 4, 5]) z = torch.Tensor([1, 2, 3]) x.requires_grad=True y.requires_grad=True z.requires_grad=True # ไธญ้—ด่Š‚็‚น xy = x + y xyz = torch.dot(xy, z) # ==================== gd = torch.autograd.grad(xyz, x, retain_graph=True) print(x.grad, y.grad, z.grad) print(xy.grad, y.grad, z.grad) print(gd) print(torch.autograd.grad(xyz, xy,retain_graph=True)) print(torch.autograd.grad(xyz, y,retain_graph=True)) print(torch.autograd.grad(xyz, z,retain_graph=True, allow_unused=True)) # ==================== ``` ### grad็š„้ซ˜้˜ถๆฑ‚ๅฏผ - ไฝฟ็”จcreate_graphๅˆ›ๅปบๅฏผๆ•ฐ็š„ๅ›พ๏ผŒๅนถๅฏนๅฏผๆ•ฐๅ†ๆฑ‚ๅฏผ๏ผŒไปŽ่€Œๅฎž็Žฐ้ซ˜้˜ถๆฑ‚ๅฏผใ€‚ ``` import torch x = torch.Tensor([1]) x.requires_grad=True z = x**6 # ๆฑ‚ๅฏผๅ‡ฝๆ•ฐ gd_1 = torch.autograd.grad(z, x, create_graph=True) gd_2 = torch.autograd.grad(gd_1, x) print(F"ไธ€้˜ถๅฏผๆ•ฐ๏ผš{gd_1},\nไบŒ้˜ถๅฏผๆ•ฐ๏ผš {gd_2}") ``` # ๆฑ‚ๅฏผ็š„ๆŽงๅˆถ ## set_grad_enabled็ฑป - set_grad_enabledๅ‡ฝๆ•ฐๅฏไปฅๅผ€ๅฏไธŽๅ…ณ้—ญๅฏผๆ•ฐ่ฎก็ฎ— - ไธ€ไธชไธŠไธ‹ๆ–‡็ฎก็†ๅฏน่ฑก - ๅ‡ฝๆ•ฐๅฃฐๆ˜Žๅฆ‚ไธ‹๏ผš ```python torch.autograd.set_grad_enabled(mode) ``` - ๅ‚ๆ•ฐ๏ผš - mode๏ผš้€ป่พ‘ๅ€ผ๏ผŒTrueๅผ€ๅฏ๏ผŒFalseๅ…ณ้—ญ ### ้€šๅธธไฝฟ็”จไพ‹ๅญ ``` import torch x = torch.Tensor([1, 2, 3]) y = torch.Tensor([3, 4, 5]) z = torch.Tensor([1, 2, 3]) x.requires_grad=True y.requires_grad=True z.requires_grad=True torch.autograd.set_grad_enabled(False) # ๅ…จๅฑ€ไธŠไธ‹ๆ–‡ xy = x + y xyz = torch.dot(xy, z) torch.autograd.set_grad_enabled(True) print(xy.requires_grad, xyz.requires_grad, z.requires_grad) ``` ### ไธŠไธ‹ๆ–‡ไฝฟ็”จไพ‹ๅญ ``` import torch x = torch.Tensor([1, 2, 3]) y = torch.Tensor([3, 4, 5]) z = torch.Tensor([1, 2, 3]) x.requires_grad=True y.requires_grad=True z.requires_grad=True with torch.autograd.set_grad_enabled(False) as grad_ctx: # ๅฑ€้ƒจไธŠไธ‹ๆ–‡ xy = x + y # ๅ—็ป“ๆŸ๏ผŒไฝœ็”จ่Œƒๅ›ด่‡ชๅŠจ็ป“ๆŸ xyz = torch.dot(xy, z) print(xy.requires_grad, xyz.requires_grad, z.requires_grad) ``` ## enable_grad็ฑป - ่ฟ™ไธช็ฑปๆ˜ฏไธ€ไธช่ฃ…้ฅฐๅ™จ็ฑป๏ผŒๆไพ›ๆ›ดๅŠ ็ฎ€ๆท็š„ๅผ€ๅฏๆ–นๅผใ€‚ - ไนŸๆ˜ฏไธ€ไธชไธŠไธ‹ๆ–‡็ฎก็†ๅ™จ๏ผ› - ่ฃ…้ฅฐๅ™จ็”จไบŽๅ‡ฝๆ•ฐไธŽ็ฑป๏ผ› ```python torch.autograd.enable_grad() ``` ### ่ฃ…้ฅฐๅ™จไฝฟ็”จไพ‹ๅญ ``` import torch x = torch.Tensor([1, 2, 3]) y = torch.Tensor([3, 4, 5]) z = torch.Tensor([1, 2, 3]) x.requires_grad=True y.requires_grad=True z.requires_grad=True @ torch.autograd.enable_grad() def func_xy(x, y): return x + y # ๅ—็ป“ๆŸ๏ผŒไฝœ็”จ่Œƒๅ›ด่‡ชๅŠจ็ป“ๆŸ xy = func_xy(x, y) xyz = torch.dot(xy, z) print(xy.requires_grad, xyz.requires_grad, z.requires_grad) ``` ### ไธŠไธ‹ๆ–‡ไฝฟ็”จไพ‹ๅญ ``` import torch x = torch.Tensor([1, 2, 3]) y = torch.Tensor([3, 4, 5]) z = torch.Tensor([1, 2, 3]) x.requires_grad=True y.requires_grad=True z.requires_grad=True with torch.autograd.enable_grad(): xy = x + y xyz = torch.dot(xy, z) print(xy.requires_grad, xyz.requires_grad, z.requires_grad) ``` ## no_grad็ฑป - ไธŽenable_grad็ฑปไธ€ๆ ท็š„ไฝฟ็”จๆ–นๅผ๏ผŒไฝœ็”จๅด็›ธๅใ€‚ - ๆณจๆ„๏ผš - no_gradไธŽenable_gradๆ˜ฏๅ‡ฝๆ•ฐ่ฃ…้ฅฐๅ™จ๏ผŒไธๆ˜ฏ็ฑป่ฃ…้ฅฐๅ™จ๏ผ› ### ่ฃ…้ฅฐๅ™จไฝฟ็”จๆ–นๅผ - ๅฏนๆ•ดไธชๅ‡ฝๆ•ฐไฝœ็”จ๏ผŒ้€‚ๅˆๅ‡ฝๆ•ฐๆจกๅผ๏ผŒๅฆ‚ๆžœๅ‡ฝๆ•ฐไธญๆœ‰็‰นๆฎŠ็š„ๆƒ…ๅ†ต๏ผŒๅฏไปฅๅตŒๅฅ—ไฝฟ็”จใ€‚ ``` import torch x = torch.Tensor([1, 2, 3]) y = torch.Tensor([3, 4, 5]) z = torch.Tensor([1, 2, 3]) x.requires_grad=True y.requires_grad=True z.requires_grad=True @ torch.autograd.no_grad() def func_xy(x, y): return x + y # ๅ—็ป“ๆŸ๏ผŒไฝœ็”จ่Œƒๅ›ด่‡ชๅŠจ็ป“ๆŸ xy = func_xy(x, y) xyz = torch.dot(xy, z) print(xy.requires_grad, xyz.requires_grad, z.requires_grad) ``` ### ไธŠไธ‹ๆ–‡ไฝฟ็”จๆ–นๅผ - ้€‚ๅˆไบŽๅœจ้žๅ‡ฝๆ•ฐๆƒ…ๅ†ตไธ‹ไฝฟ็”จ ``` import torch x = torch.Tensor([1, 2, 3]) y = torch.Tensor([3, 4, 5]) z = torch.Tensor([1, 2, 3]) x.requires_grad=True y.requires_grad=True z.requires_grad=True with torch.autograd.no_grad(): xy = x + y xyz = torch.dot(xy, z) print(xy.requires_grad, xyz.requires_grad, z.requires_grad) ``` ### no_gradไธŽenable_gradๆททๅˆไฝฟ็”จ - ่ฟ™็งๆททๅˆไฝฟ็”จ๏ผŒๅฏไปฅๆปก่ถณๅผ€ๅ‘็š„ไปปไฝ•ๆƒ…ๅ†ต็š„้œ€ๆฑ‚๏ผ› ``` import torch x = torch.Tensor([1, 2, 3]) y = torch.Tensor([3, 4, 5]) z = torch.Tensor([1, 2, 3]) x.requires_grad=True y.requires_grad=True z.requires_grad=True with torch.autograd.no_grad(): xy = x + y with torch.autograd.enable_grad(): z3 = z **3 xy2 = xy ** 2 # ๅ› ไธบxy็š„requires_grad=False๏ผŒๆ•ดไธช่ฟ็ฎ—ไนŸๆ˜ฏFalse print(xy.requires_grad, z3.requires_grad, xy2.requires_grad) ``` ----
github_jupyter
``` import numpy as np import os from PIL import Image from sklearn.preprocessing import LabelBinarizer import sys import glob import argparse import matplotlib.pyplot as plt import pickle as pkl from keras.applications.inception_v3 import InceptionV3, preprocess_input, decode_predictions from keras.models import Model, Sequential from keras.layers import Dropout, Flatten,Dense, GlobalAveragePooling2D, LeakyReLU from keras import optimizers from tensorflow.keras.optimizers import Adam,SGD from keras.preprocessing import image from keras.callbacks import ModelCheckpoint, EarlyStopping from sklearn.model_selection import train_test_split import tensorflow as tf from keras.utils.np_utils import to_categorical from keras import backend as K print('done') Data_X = np.load() # path for Major training image data Data_Y = np.load() # path for Major training class data Data_Y = np.array(Data_Y) print(np.unique(Data_Y)) Y_new = [] count_bcc = 0 count_bkl = 0 count_mel = 0 count_nv = 0 count_other = 0 for i in Data_Y: if i == 'bcc': Y_new.append(0) count_bcc = count_bcc + 1 elif i == 'bkl': Y_new.append(1) count_bkl = count_bkl + 1 elif i == 'mel': Y_new.append(2) count_mel = count_mel + 1 elif i == 'nv': Y_new.append(3) count_nv = count_nv + 1 elif i == 'other': Y_new.append(4) count_other = count_other + 1 print('bcc - ',count_bcc) print('bkl - ',count_bkl) print('mel - ', count_mel) print('nv - ',count_nv) print('other - ',count_other) print(Y_new) Y_new = np.array(Y_new) X_train, X_test, Y_train, Y_test = train_test_split(Data_X, Y_new, test_size=0.15, random_state=69,stratify= Y_new) print(X_train.shape) print(Y_train.shape) Y_train = to_categorical(Y_train, num_classes=5) Y_test = to_categorical(Y_test, num_classes=5) print('Train dataset shape',X_train.shape) print('Test dataset shape',X_test.shape) print(Y_train.shape) from tensorflow.keras.applications import ResNet50 base_model = ResNet50(input_shape=(76, 76,3), include_top=False, weights="imagenet") for layer in base_model.layers: layer.trainable = False from tensorflow.keras.applications import ResNet50 from tensorflow.python.keras.models import Sequential from tensorflow.python.keras.layers import Dense, Flatten, GlobalAveragePooling2D base_model.add(Dropout(0.1)) base_model = Sequential() base_model.add(ResNet50(include_top=False, weights='imagenet', pooling='max')) base_model.add(Dense(5, activation='softmax')) base_model.summary() import time from tensorflow.keras.preprocessing.image import ImageDataGenerator start = time.time() early_stopping_monitor = EarlyStopping(patience=100,monitor='val_accuracy') model_checkpoint_callback = ModelCheckpoint(filepath='resnet.h5', save_weights_only=False, monitor='val_accuracy', mode='auto', save_best_only=True, verbose=1) batch_size = 64 epochs = 250 optimizer = SGD(learning_rate=0.0001) base_model.compile(optimizer = optimizer, loss = 'categorical_crossentropy', metrics=['accuracy']) datagen = ImageDataGenerator(zoom_range = 0.2, shear_range=0.2) datagen.fit(X_train) history=base_model.fit(datagen.flow(X_train,Y_train), epochs=epochs, batch_size=batch_size, shuffle=True, callbacks=[early_stopping_monitor,model_checkpoint_callback], validation_data=(X_test, Y_test)) end = time.time() print("////////////////////////////Time Taken////////////////////////////////////////") print(end-start) ```
github_jupyter
##### Copyright 2020 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Taking advantage of context features <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/recommenders/examples/context_features"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/recommenders/blob/main/docs/examples/context_features.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/recommenders/blob/main/docs/examples/context_features.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/recommenders/docs/examples/context_features.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> In [the featurization tutorial](featurization) we incorporated multiple features beyond just user and movie identifiers into our models, but we haven't explored whether those features improve model accuracy. Many factors affect whether features beyond ids are useful in a recommender model: 1. __Importance of context__: if user preferences are relatively stable across contexts and time, context features may not provide much benefit. If, however, users preferences are highly contextual, adding context will improve the model significantly. For example, day of the week may be an important feature when deciding whether to recommend a short clip or a movie: users may only have time to watch short content during the week, but can relax and enjoy a full-length movie during the weekend. Similarly, query timestamps may play an important role in modelling popularity dynamics: one movie may be highly popular around the time of its release, but decay quickly afterwards. Conversely, other movies may be evergreens that are happily watched time and time again. 2. __Data sparsity__: using non-id features may be critical if data is sparse. With few observations available for a given user or item, the model may struggle with estimating a good per-user or per-item representation. To build an accurate model, other features such as item categories, descriptions, and images have to be used to help the model generalize beyond the training data. This is especially relevant in [cold-start](https://en.wikipedia.org/wiki/Cold_start_(recommender_systems)) situations, where relatively little data is available on some items or users. In this tutorial, we'll experiment with using features beyond movie titles and user ids to our MovieLens model. ## Preliminaries We first import the necessary packages. ``` !pip install -q tensorflow-recommenders !pip install -q --upgrade tensorflow-datasets import os import tempfile import numpy as np import tensorflow as tf import tensorflow_datasets as tfds import tensorflow_recommenders as tfrs ``` We follow [the featurization tutorial](featurization) and keep the user id, timestamp, and movie title features. ``` ratings = tfds.load("movielens/100k-ratings", split="train") movies = tfds.load("movielens/100k-movies", split="train") ratings = ratings.map(lambda x: { "movie_title": x["movie_title"], "user_id": x["user_id"], "timestamp": x["timestamp"], }) movies = movies.map(lambda x: x["movie_title"]) ``` We also do some housekeeping to prepare feature vocabularies. ``` timestamps = np.concatenate(list(ratings.map(lambda x: x["timestamp"]).batch(100))) max_timestamp = timestamps.max() min_timestamp = timestamps.min() timestamp_buckets = np.linspace( min_timestamp, max_timestamp, num=1000, ) unique_movie_titles = np.unique(np.concatenate(list(movies.batch(1000)))) unique_user_ids = np.unique(np.concatenate(list(ratings.batch(1_000).map( lambda x: x["user_id"])))) ``` ## Model definition ### Query model We start with the user model defined in [the featurization tutorial](featurization) as the first layer of our model, tasked with converting raw input examples into feature embeddings. However, we change it slightly to allow us to turn timestamp features on or off. This will allow us to more easily demonstrate the effect that timestamp features have on the model. In the code below, the `use_timestamps` parameter gives us control over whether we use timestamp features. ``` class UserModel(tf.keras.Model): def __init__(self, use_timestamps): super().__init__() self._use_timestamps = use_timestamps self.user_embedding = tf.keras.Sequential([ tf.keras.layers.experimental.preprocessing.StringLookup( vocabulary=unique_user_ids, mask_token=None), tf.keras.layers.Embedding(len(unique_user_ids) + 1, 32), ]) if use_timestamps: self.timestamp_embedding = tf.keras.Sequential([ tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()), tf.keras.layers.Embedding(len(timestamp_buckets) + 1, 32), ]) self.normalized_timestamp = tf.keras.layers.experimental.preprocessing.Normalization() self.normalized_timestamp.adapt(timestamps) def call(self, inputs): if not self._use_timestamps: return self.user_embedding(inputs["user_id"]) return tf.concat([ self.user_embedding(inputs["user_id"]), self.timestamp_embedding(inputs["timestamp"]), self.normalized_timestamp(inputs["timestamp"]), ], axis=1) ``` Note that our use of timestamp features in this tutorial interacts with our choice of training-test split in an undesirable way. Because we have split our data randomly rather than chronologically (to ensure that events that belong to the test dataset happen later than those in the training set), our model can effectively learn from the future. This is unrealistic: after all, we cannot train a model today on data from tomorrow. This means that adding time features to the model lets it learn _future_ interaction patterns. We do this for illustration purposes only: the MovieLens dataset itself is very dense, and unlike many real-world datasets does not benefit greatly from features beyond user ids and movie titles. This caveat aside, real-world models may well benefit from other time-based features such as time of day or day of the week, especially if the data has strong seasonal patterns. ### Candidate model For simplicity, we'll keep the candidate model fixed. Again, we copy it from the [featurization](featurization) tutorial: ``` class MovieModel(tf.keras.Model): def __init__(self): super().__init__() max_tokens = 10_000 self.title_embedding = tf.keras.Sequential([ tf.keras.layers.experimental.preprocessing.StringLookup( vocabulary=unique_movie_titles, mask_token=None), tf.keras.layers.Embedding(len(unique_movie_titles) + 1, 32) ]) self.title_vectorizer = tf.keras.layers.experimental.preprocessing.TextVectorization( max_tokens=max_tokens) self.title_text_embedding = tf.keras.Sequential([ self.title_vectorizer, tf.keras.layers.Embedding(max_tokens, 32, mask_zero=True), tf.keras.layers.GlobalAveragePooling1D(), ]) self.title_vectorizer.adapt(movies) def call(self, titles): return tf.concat([ self.title_embedding(titles), self.title_text_embedding(titles), ], axis=1) ``` ### Combined model With both `UserModel` and `MovieModel` defined, we can put together a combined model and implement our loss and metrics logic. Note that we also need to make sure that the query model and candidate model output embeddings of compatible size. Because we'll be varying their sizes by adding more features, the easiest way to accomplish this is to use a dense projection layer after each model: ``` class MovielensModel(tfrs.models.Model): def __init__(self, use_timestamps): super().__init__() self.query_model = tf.keras.Sequential([ UserModel(use_timestamps), tf.keras.layers.Dense(32) ]) self.candidate_model = tf.keras.Sequential([ MovieModel(), tf.keras.layers.Dense(32) ]) self.task = tfrs.tasks.Retrieval( metrics=tfrs.metrics.FactorizedTopK( candidates=movies.batch(128).map(self.candidate_model), ), ) def compute_loss(self, features, training=False): # We only pass the user id and timestamp features into the query model. This # is to ensure that the training inputs would have the same keys as the # query inputs. Otherwise the discrepancy in input structure would cause an # error when loading the query model after saving it. query_embeddings = self.query_model({ "user_id": features["user_id"], "timestamp": features["timestamp"], }) movie_embeddings = self.candidate_model(features["movie_title"]) return self.task(query_embeddings, movie_embeddings) ``` ## Experiments ### Prepare the data We first split the data into a training set and a testing set. ``` tf.random.set_seed(42) shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False) train = shuffled.take(80_000) test = shuffled.skip(80_000).take(20_000) cached_train = train.shuffle(100_000).batch(2048) cached_test = test.batch(4096).cache() ``` ### Baseline: no timestamp features We're ready to try out our first model: let's start with not using timestamp features to establish our baseline. ``` model = MovielensModel(use_timestamps=False) model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1)) model.fit(cached_train, epochs=3) train_accuracy = model.evaluate( cached_train, return_dict=True)["factorized_top_k/top_100_categorical_accuracy"] test_accuracy = model.evaluate( cached_test, return_dict=True)["factorized_top_k/top_100_categorical_accuracy"] print(f"Top-100 accuracy (train): {train_accuracy:.2f}.") print(f"Top-100 accuracy (test): {test_accuracy:.2f}.") ``` This gives us a baseline top-100 accuracy of around 0.2. ### Capturing time dynamics with time features Do the result change if we add time features? ``` model = MovielensModel(use_timestamps=True) model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1)) model.fit(cached_train, epochs=3) train_accuracy = model.evaluate( cached_train, return_dict=True)["factorized_top_k/top_100_categorical_accuracy"] test_accuracy = model.evaluate( cached_test, return_dict=True)["factorized_top_k/top_100_categorical_accuracy"] print(f"Top-100 accuracy (train): {train_accuracy:.2f}.") print(f"Top-100 accuracy (test): {test_accuracy:.2f}.") ``` This is quite a bit better: not only is the training accuracy much higher, but the test accuracy is also substantially improved. ## Next Steps This tutorial shows that even simple models can become more accurate when incorporating more features. However, to get the most of your features it's often necessary to build larger, deeper models. Have a look at the [deep retrieval tutorial](deep_recommenders) to explore this in more detail.
github_jupyter
``` %run ../Python_files/load_dicts.py %run ../Python_files/util.py #!/usr/bin/env python __author__ = "Jing Zhang" __email__ = "jingzbu@gmail.com" __status__ = "Development" from util import * import numpy as np from numpy.linalg import inv, matrix_rank import json # load logit_route_choice_probability_matrix P = zload('../temp_files/logit_route_choice_probability_matrix_ext.pkz') P = np.matrix(P) print(matrix_rank(P)) print(np.size(P,0), np.size(P,1)) # load path-link incidence matrix A = zload('../temp_files/path-link_incidence_matrix_ext.pkz') # assert(1 == 2) # load link counts data with open('../temp_files/link_day_minute_Apr_dict_ext_JSON_insert_links_adjusted.json', 'r') as json_file: link_day_minute_Apr_dict_ext_JSON = json.load(json_file) week_day_Apr_list = [2, 3, 4, 5, 6, 9, 10, 11, 12, 13, 16, 22, 18, 19, 20, 23, 24, 25, 26, 27, 30] # week_day_Apr_list = [22, 18, 19, 20, 23] link_day_minute_Apr_list = [] for link_idx in range(74): for day in week_day_Apr_list: for minute_idx in range(120): key = 'link_' + str(link_idx) + '_' + str(day) link_day_minute_Apr_list.append(link_day_minute_Apr_dict_ext_JSON[key] ['MD_flow_minute'][minute_idx]) print(len(link_day_minute_Apr_list)) x = np.matrix(link_day_minute_Apr_list) x = np.matrix.reshape(x, 74, 2520) x[x < 1] = 200 # x = np.nan_to_num(x) # y = np.array(np.transpose(x)) # y = y[np.all(y != 0, axis=1)] # x = np.transpose(y) # x = np.matrix(x) # print(np.size(x,0), np.size(x,1)) # print(x[:,:2]) # print(np.size(A,0), np.size(A,1)) # load node-link incidence matrix N = zload('../temp_files/node_link_incidence_MA_ext.pkz') N.shape n = 22 # number of nodes m = 74 # number of links x_0 = [x[:,2][i, 0] for i in range(m)] # x_0 OD_pair_label_dict = zload('../temp_files/OD_pair_label_dict_ext.pkz') len(OD_pair_label_dict) L = 22 * (22 - 1) # dimension of lam # od pair correspondence OD_pair_label_dict_MA_small = zload('../temp_files/OD_pair_label_dict__MA.pkz') # create a dictionary mapping nodes of small network to nodes of bigger network nodeToNode = {} nodeList = range(9)[1:] nodeListExt = [1, 4, 7, 12, 13, 14, 16, 17] for i in nodeList: nodeToNode[str(i)] = nodeListExt[i-1] # nodeToNode['1'] = 1 # nodeToNode['2'] nodeToNode odMap = {} for i in range(len(OD_pair_label_dict_MA_small)): key = str(i) origiSmall = OD_pair_label_dict_MA_small[key][0] destiSmall = OD_pair_label_dict_MA_small[key][1] origiExt = nodeToNode[str(origiSmall)] destiExt = nodeToNode[str(destiSmall)] odMap[key] = (origiExt, destiExt) # odMap odIdxExt = [] # OD pair idx in the extended network corresponding to the OD pairs in smaller network for i in range(len(odMap)): odIdxExt.append(OD_pair_label_dict[str(odMap[str(i)])]) with open('../temp_files/OD_demand_matrix_Apr_weekday_MD.json', 'r') as json_file: demandsSmall = json.load(json_file) # demandsSmall # implement GLS method to estimate OD demand matrix def GLS_Apr(x, A, P, L): """ x: sample matrix, each column is a link flow vector sample; 24 * K A: path-link incidence matrix P: logit route choice probability matrix L: dimension of lam ---------------- return: lam ---------------- """ K = np.size(x, 1) S = samp_cov(x) #print("rank of S is: \n") #print(matrix_rank(S)) #print("sizes of S are: \n") #print(np.size(S, 0)) #print(np.size(S, 1)) inv_S = inv(S).real A_t = np.transpose(A) P_t = np.transpose(P) # PA' PA_t = np.dot(P, A_t) #print("rank of PA_t is: \n") #print(matrix_rank(PA_t)) #print("sizes of PA_t are: \n") #print(np.size(PA_t, 0)) #print(np.size(PA_t, 1)) # AP_t AP_t = np.transpose(PA_t) Q_ = np.dot(np.dot(PA_t, inv_S), AP_t) Q = adj_PSD(Q_).real # Ensure Q to be PSD # Q = Q_ #print("rank of Q is: \n") #print(matrix_rank(Q)) #print("sizes of Q are: \n") #print(np.size(Q, 0)) #print(np.size(Q, 1)) b = sum([np.dot(np.dot(PA_t, inv_S), x[:, k]) for k in range(K)]) # print(b[0]) # assert(1==2) model = Model("OD_matrix_estimation") lam = [] for l in range(L): lam.append(model.addVar(name='lam_' + str(l))) model.update() # Set objective: (K/2) lam' * Q * lam - b' * lam obj = 0 for i in range(L): for j in range(L): obj += (1.0 / 2) * K * lam[i] * Q[i, j] * lam[j] for l in range(L): obj += - b[l] * lam[l] model.setObjective(obj) # Add constraint: lam >= 0 for l in range(L): model.addConstr(lam[l] >= 0) #model.addConstr(lam[l] <= 5000) fictitious_OD_list = zload('../temp_files/fictitious_OD_list') for l in fictitious_OD_list: model.addConstr(lam[l] == 0) for j in range(len(odMap)): model.addConstr(lam[odIdxExt[j]] - demandsSmall[str(j)] <= 0.2 * demandsSmall[str(j)]) model.addConstr(demandsSmall[str(j)] - lam[odIdxExt[j]] <= 0.2 * demandsSmall[str(j)]) # x = {} # for k in range(m): # for i in range(n+1)[1:]: # for j in range(n+1)[1:]: # if i != j: # key = str(k) + '->' + str(i) + '->' + str(j) # x[key] = model.addVar(name='x_' + key) # model.update() # for k in range(m): # s = 0 # for i in range(n+1)[1:]: # for j in range(n+1)[1:]: # if i != j: # key = str(k) + '->' + str(i) + '->' + str(j) # s += x[key] # model.addConstr(x[key] >= 0) # model.addConstr(s - x_0[k] == 0) # model.addConstr(x_0[k] - s == 0) # for l in range(n): # for i in range(n+1)[1:]: # for j in range(n+1)[1:]: # if i != j: # key_ = str(i) + '->' + str(j) # key__ = '(' + str(i) + ', ' + str(j) + ')' # s = 0 # for k in range(m): # key = str(k) + '->' + str(i) + '->' + str(j) # s += N[l, k] * x[key] # if (l+1 == i): # model.addConstr(s + lam[OD_pair_label_dict[key__]] == 0) # elif (l+1 == j): # model.addConstr(s - lam[OD_pair_label_dict[key__]] == 0) # else: # model.addConstr(s == 0) # if (i == 1 and j == 2): # print(s) model.update() model.setParam('OutputFlag', False) model.optimize() lam_list = [] for v in model.getVars(): # print('%s %g' % (v.varName, v.x)) lam_list.append(v.x) # print('Obj: %g' % obj.getValue()) return lam_list lam_list = GLS_Apr(x, A, P, L) # write estimation result to file n = 22 # number of nodes with open('../temp_files/OD_demand_matrix_Apr_weekday_MD_ext.txt', 'w') as the_file: idx = 0 for i in range(n + 1)[1:]: for j in range(n + 1)[1:]: if i != j: the_file.write("%d,%d,%f\n" %(i, j, lam_list[idx])) idx += 1 ```
github_jupyter
<h2> Basics of Python: Lists </h2> We review using Lists in Python here. Please run each cell and check the results. A list (or array) is a collection of objects (variables) separated by comma. The order is important, and we can access each element in the list with its index starting from 0. ``` # here is a list holding all even numbers between 10 and 20 L = [10, 12, 14, 16, 18, 20] # let's print the list print(L) # let's print each element by using its index but in reverse order print(L[5],L[4],L[3],L[2],L[1],L[0]) # let's print the length (size) of list print(len(L)) # let's print each element and its index in the list # we use a for-loop, and the number of iteration is determined by the length of the list # everthing is automatical :-) L = [10, 12, 14, 16, 18, 20] for i in range(len(L)): print(L[i],"is the element in our list with the index",i) # let's replace each number in the above list with its double value # L = [10, 12, 14, 16, 18, 20] # let's print the list before doubling operation print("the list before doubling operation is",L) for i in range(len(L)): current_element=L[i] # get the value of the i-th element L[i] = 2 * current_element # update the value of the i-th element # let's shorten the code as #L[i] = 2 * L[i] # or #L[i] *= 2 # let's print the list after doubling operation print("the list after doubling operation is",L) # after each execution of this cell, the latest values will be doubled # so the values in the list will be exponentially increased # let's define two lists L1 = [1,2,3,4] L2 = [-5,-6,-7,-8] # two lists can be concatenated # the result is a new list print("the concatenation of L1 and L2 is",L1+L2) # the order of terms is important print("the concatenation of L2 and L1 is",L2+L1) # this is a different list than L1+L2 # we can add a new element to a list, which increases its length/size by 1 L = [10, 12, 14, 16, 18, 20] print(L,"the current length is",len(L)) # we add two values by showing two different methods # L.append(value) directly adds the value as a new element to the list L.append(-4) # we can also use concatenation operator + L = L + [-8] # here [-8] is a list having a single element print(L,"the new length is",len(L)) # a list can be multiplied with an integer L = [1,2] # we can consider the multiplication of L by an integer as a repeated summation (concatenation) of L by itself # L * 1 is the list itself # L * 2 is L + L (the concatenation of L with itself) # L * 3 is L + L + L (the concatenation of L with itself twice) # L * m is L + ... + L (the concatenation of L with itself m-1 times) # L * 0 is the empty list # L * i is the same as i * L # let's print the different cases for i in range(6): print(i,"* L is",i*L) # this operation can be useful when initializing a list with the same value(s) # let's create a list of prime numbers less than 100 # here is a function that determines whether a given number is prime or not def prime(number): if number < 2: return False if number == 2: return True if number % 2 == 0: return False for i in range(3,number,2): if number % i == 0: return False return True # end of a function # let's start with an empty list L=[] # what can the length of this list be? print("my initial length is",len(L)) for i in range(2,100): if prime(i): L.append(i) # alternative methods: #L = L + [i] #L += [i] # print the final list print(L) print("my final length is",len(L)) ``` For a given integer $n \geq 0$, $ S(0) = 0 $, $ S(1)=1 $, and $ S(n) = 1 + 2 + \cdots + n $. We define list $ L(n) $ such that the element with index $n$ holds $ S(n) $. In other words, the elements of $ L(n) $ are $ [ S(0)~~S(1)~~S(2)~~\cdots~~S(n) ] $. Let's build the list $ L(20) $. ``` # let's define the list with S(0) L = [0] # let's iteratively define n and S # initial values n = 0 S = 0 # the number of iterations N = 20 while n <= N: # we iterate all values from 1 to 20 n = n + 1 S = S + n L.append(S) # print the final list print(L) ``` <h3> Task 1 </h3> Fibonacci sequence starts with $ 1 $ and $ 1 $. Then, each next element is equal to the summation of the previous two elements: $$ 1, 1, 2 , 3 , 5, 8, 13, 21, 34, 55, \ldots $$ Find the first 30 elements of the Fibonacci sequence, store them in a list, and then print the list. You can verify the first 10 elements of your result with the above list. ``` # # your solution is here # the first and second elements are 1 and 1 F = [1,1] for i in range(2,30): F.append(F[i-1] + F[i-2]) # print the final list print(F) ``` <h3> Lists of different objects </h3> A list can have any type of values. ``` # the following list stores certain information about Asja # name, surname, age, profession, height, weight, partner(s) if any, kid(s) if any, the creation date of list ASJA = ['Asja','Sarkane',34,'musician',180,65.5,[],['Eleni','Fyodor'],"October 24, 2018"] print(ASJA) # Remark that an element of a list can be another list as well. ``` <h3> Task 2 </h3> Define a list $ N $ with 11 elements such that $ N[i] $ is another list with four elements such that $ [i, i^2, i^3, i^2+i^3] $. The index $ i $ should be between $ 0 $ and $ 10 $. ``` # # your solution is here # # define an empty list N = [] for i in range(11): N.append([ i , i*i , i*i*i , i*i + i*i*i ]) # a list having four elements is added to the list N # Alternatively: #N.append([i , i**2 , i**3 , i**2 + i**3]) # ** is the exponent operator #N = N + [ [i , i*i , i*i*i , i*i + i*i*i] ] # Why using double brackets? #N = N + [ [i , i**2 , i**3 , i**2 + i**3] ] # Why using double brackets? # In the last two alternative solutions, you may try with a single bracket, # and then see why double brackets are needed for the exact answer. # print the final list print(N) # let's print the list N element by element for i in range(len(N)): print(N[i]) # let's print the list N element by element by using an alternative method for el in N: # el will iteratively takes the values of elements in N print(el) ``` <h3> Dictionaries </h3> The outcomes of a quantum program (circuit) will be stored in a dictionary. Therefore, we very shortly mention about the dictionary data type. A dictionary is a set of paired elements. Each pair is composed by a key and its value, and any value can be accessed by its key. ``` # let's define a dictionary pairing a person with her/his age ages = { 'Asja':32, 'Balvis':28, 'Fyodor':43 } # let print all keys for person in ages: print(person) # let's print the values for person in ages: print(ages[person]) ```
github_jupyter
# lab2 Logisitic Regression ``` %matplotlib inline import numpy as np import matplotlib import pandas as pd import matplotlib.pyplot as plt import scipy.optimize as op ``` ## 1. Load Data ``` data = pd.read_csv('ex2data1.txt') X = np.array(data.iloc[:,0:2]) y = np.array(data.iloc[:,2]) print('X.shape = ' + str(X.shape)) print('y.shape = ' + str(y.shape)) def plotData(X, y): k1 = (y==1) k2 = (y==0) plt.scatter(X[k1,0], X[k1,1], c='r',marker='+') plt.scatter(X[k2,0], X[k2,1], c='b',marker='o') plt.xlabel('Exam 1 score') plt.ylabel('Exam 2 score') plt.legend(['Admitted', 'Not admitted']) plotData(X, y) plt.show() # ๅœจXๅทฆไพงๆทปๅŠ ๅ…จ1็š„ๅˆ— m = X.shape[0] n = X.shape[1] X = np.hstack((np.ones((m,1)), X)) print('X.shape = ' + str(X.shape)) ini_theta = np.zeros((n+1, 1)) ``` ## 2. Cost and Gradient $$ g(z)=\frac{1}{1+e^{-z}} $$ $$ J(\theta)=\frac{1}{m}\sum_{i=1}^{m}[-y^{(i)}log(h_\theta(x^{(i)}))-(1-y^{(i)})log(1-h_\theta(x^{(i)}))] $$ $$ \frac{\partial J(\theta)}{\partial\theta_j}=\frac{1}{m}\sum_{i=1}^{m} [(h_\theta(x^{(i)})-y^{(i)})x^{(i)}_j] $$ ``` def sigmoid(z): return 1 / (1+np.exp(-z)) def gradient(theta, X, y): '''compute gradient args: X - X.shape = (m,n) theta - theta.shape = (n,1) y - y.shape = (m,1) return: grade - the gradient ''' m = X.shape[0] n = X.shape[1] theta = theta.reshape((n,1)) y = y.reshape((m,1)) h = sigmoid(np.dot(X, theta)) tmp = np.sum((h-y)*X, axis=0) / m grade = tmp.reshape(theta.shape) return grade def costFunction(theta, X, y): '''compute cost args: X - X.shape = (m,n) theta - theta.shape = (n,1) y - y.shape = (m,1) return: J - the cost ''' m = X.shape[0] n = X.shape[1] theta = theta.reshape((n,1)) y = y.reshape((m,1)) h = sigmoid(np.dot(X, theta)) term1 = y * np.log(h) term2 = (1-y) * np.log(1-h) J = sum(- term1 - term2) / m return J grade = gradient(ini_theta, X, y) cost= costFunction(ini_theta, X, y) print('cost = ' + str(cost)) grade test_theta = [[-24], [0.2], [0.2]] test_theta = np.array(test_theta) grade = gradient(test_theta, X, y) cost = costFunction(test_theta, X, y) print('cost = ' + str(cost)) grade ``` ## 3. predict ่ฟ™้‡Œไฝฟ็”จscipyไธญ็š„ๆ›ฟไปฃไผ˜ๅŒ–ๅ™จ ``` result = op.minimize(fun=costFunction, x0=ini_theta, args=(X, y), method='TNC', jac=gradient) optimal_theta = result.x optimal_theta def plotDecisionBoundary(theta, X, y): '''็ป˜ๅˆถ่พน็•Œ็›ด็บฟ ''' plotData(X[:,1:3], y) plot_x = np.array([np.min(X[:,1])-2, np.max(X[:,1])+2]) # theta0 + theta1 * x1 + theta2 * x2 == 0 # ไปฃๅ…ฅsigmoidๅ‡ฝๆ•ฐ # g(z) = 1/2 ๆ˜ฏๅˆคๆ–ญ1ๅ’Œ0็š„้˜ˆๅ€ผ plot_y = -1 / theta[2] * (theta[1]*plot_x + theta[0]) plt.plot(plot_x, plot_y) plotDecisionBoundary(optimal_theta, X, y) plt.show() def predict(theta, X): m = X.shape[0] pred = np.zeros((m,1)) h = sigmoid(np.dot(X, theta)) pred[h>=0.5] = 1 return pred.flatten() prob = np.array([1, 45, 85]) prob = sigmoid(np.dot(prob, optimal_theta)) prob # ่ฎก็ฎ—ๅ‡†็กฎ็އ๏ผŒ่ฟ™้‡Œ็š„meanๅ‡ฝๆ•ฐไฝฟ็”จๅทงๅฆ™ p = predict(optimal_theta, X) print('Train accuracy = {}%'.format(100 * np.mean(p==y)) ) ```
github_jupyter
``` import numpy as np import pandas as pd from pandas import Series ``` ## Series ``` Series? animals = ['tiger','shetta','monkey'] capitals = { 'Egypt' : 'Cairo', 'UK' : 'London', 'France' : 'Paris' } _series = Series([1,2,3,4,5]) _series Series([1,2,3,4],index=['one','two','three','four']) animals = Series(animals) animals capitals = Series(capitals) print (capitals) print (capitals.index) capitals.name animals.name animals capitals animals.append(capitals) animals ``` ### Quering in Pandas ``` sports = {'Archery': 'Bhutan', 'Golf': 'Scotland', 'Sumo': 'Japan', 'Taekwondo': 'South Korea'} s = pd.Series(sports) s s.index s[0] s['Sumo'] sports = {99: 'Bhutan', 100: 'Scotland', 101: 'Japan', 102: 'South Korea'} s = pd.Series(sports) # s[0] will throw err s.iloc[0] ``` ### Operation over series ``` l = (np.random.rand(10)*20).astype(int) _series = Series(l) # LOOPs ARE SLOW !!!! sum = 0 for x in _series: sum += x print (sum) sum == l.sum() ``` ### Using Vectorization ``` import numpy as np np.sum(_series) # or use _series.sum() ``` #### Testing Speed ``` test = pd.Series(np.random.randint(0,10,1000)) test.head() len(test) %%timeit -n 100 sum = 0; for item in test: sum += item %%timeit -n 100 np.sum(test) ``` ### Broadcasting ``` # apply operation to every value to the series, and change it _series ** 2 # braodcasting -- exponentail by 2 ``` ## DataFrame ``` from pandas import DataFrame _df = DataFrame([ {'Cost':1,'Name':2,'Total':3}, {'Cost':5,'Name':3,'Total':9}, {'Cost':3,'Name':5}, {'Cost':4,'Name':5}],index=['Store 1','Store 1','Store 2','Store 3']) _df df = DataFrame([_series.values,_series.values**2,_series.values**3],index=['x','x*2','x*3']) df df.T ``` ### Querying Dataframe ``` _df.loc['Store 2'] _df.loc['Store 1','Cost'] ``` #### Column Selection ``` _df.loc[:,['Total','Name']] ``` ### Transpose Dataframe ``` _df.T _df.loc[:,['Cost','Total']] # or we can directly get column from df df[['Cost']] return dataframe _df[['Cost','Total']] # will return Series / _df[['Cost']] will retrn DataFrame _df.T.loc['Cost'] # modification in chanining is the original _df.loc['Store 1']['Cost'].iloc[1] = 21 _df _df.drop('Store 3') # return a copy rather than change dataframe # make a copy copy_df = _df.copy() # copy_df.drop(1,inplace=True,axis=1) copy_df.drop('Cost',axis=1) # Appending new column with default value is None or List of Value # calculated column for other df columns _df['Revenue'] = [None, 2, 23, 5] _df['Calculated Column'] = _df['Cost'] + _df['Total'] # Series from other Series _df ``` ### Querying the Columns ``` _df[['Cost','Total']] ``` ### Quering the Rows ``` _df.loc['Store 1'] ```
github_jupyter
``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from sklearn import preprocessing from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelEncoder from sklearn.decomposition import PCA df = pd.read_csv("wat-all.csv") df dff = pd.DataFrame(df['packet_address'], columns=['packet_address']) le = LabelEncoder() encode = dff[dff.columns[:]].apply(le.fit_transform) df['packet_address_id'] = encode df df.corr() plt.figure(figsize=(15,15)) sns.heatmap(df.corr(), annot = True) plt.show() train_X = df.drop(columns=['target','time','packet_address','packet_address_id']) #standardization x = train_X.values min_max_scaler = preprocessing.MinMaxScaler() x_scaled = min_max_scaler.fit_transform(x) train_X = pd.DataFrame(x_scaled) train_X pca = PCA(0.95) pca.fit(train_X) principal_components = pca.transform(train_X) principal_components pca.explained_variance_ratio_ features = range(pca.n_components_) plt.bar(features, pca.explained_variance_) plt.xticks(features) plt.xlabel("PCs") plt.ylabel("Variance") principal_df = pd.DataFrame(data = principal_components) principal_df final_df = pd.concat([principal_df, df[['target']]], axis = 1) final_df final_df.corr() fig = plt.figure(figsize = (8,8)) ax = fig.add_subplot(1,1,1) ax.set_xlabel('Principal Component 1', fontsize = 15) ax.set_ylabel('Principal Component 2', fontsize = 15) ax.set_title('2D PCA', fontsize = 20) targets = [1, 0] colors = ['r', 'g'] for target, color in zip(targets,colors): indicesToKeep = final_df['target'] == target ax.scatter(final_df.loc[indicesToKeep, 0] , final_df.loc[indicesToKeep, 1] , c = color , s = 50) ax.legend(targets) ax.grid() sns.distplot(df['router'], kde = False, bins=30, color='blue') sns.distplot(df['src_router'], kde = False, bins=30, color='orange') sns.distplot(df['dst_router'], kde = False, bins=30, color='red') sns.distplot(df['inport'], kde = False, bins=30, color='green') direction = {'Local': 0,'North': 1, 'East': 2, 'South':3,'West':4} sns.distplot(df['outport'], kde = False, bins=30, color='black') direction = {'Local': 0,'North': 1, 'East': 2, 'South':3,'West':4} sns.distplot(df['packet_type'], kde = False, bins=30, color='grey') data = {'GETX': 0,'DATA': 1, 'PUTX': 2, 'WB_ACK':3} # scatter plot fig, ax = plt.subplots() ax.scatter(df['router'], df['time']) # set a title and labels ax.set_xlabel('router') ax.set_ylabel('time') df_500 = pd.read_csv('wat-all.csv',nrows=500) # scatter plot 500 fig, ax = plt.subplots() ax.scatter(df_500['router'], df_500['time']) # set a title and labels ax.set_xlabel('router') ax.set_ylabel('time') # bar chart by router fig, ax = plt.subplots() # count the occurrence of each class data = df_500['router'].value_counts() # get x and y data points = data.index frequency = data.values # create bar chart ax.bar(points, frequency) # set title and labels ax.set_xlabel('Routers') ax.set_ylabel('Frequency') # bar chart by time fig, ax = plt.subplots() # count the occurrence of each class data = df_500['time'].value_counts() # get x and y data points = data.index frequency = data.values # create bar chart ax.bar(points, frequency) # set title and labels ax.set_xlabel('Time') ax.set_ylabel('Frequency') corr_df = pd.concat([train_X, df[['target']]], axis = 1) corr_df.corr() train_Y = corr_df['target'] train_Y.value_counts() ``` #### machine learning models ``` X_train, X_test, y_train, y_test = train_test_split(principal_df, train_Y, test_size=0.3, random_state=0, shuffle = True) #logistic regression from sklearn.linear_model import LogisticRegression from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix import statsmodels.api as sm from sklearn import metrics from sklearn.metrics import roc_auc_score from sklearn.metrics import roc_curve from sklearn.metrics import accuracy_score logit_model=sm.Logit(train_Y,train_X) result=logit_model.fit() print(result.summary2()) logreg = LogisticRegression(C=1,penalty='l2',random_state=42) logreg.fit(X_train, y_train) y_pred = logreg.predict(X_test) print('Accuracy {:.2f}'.format(accuracy_score(y_test,y_pred))) logreg_score_train = logreg.score(X_train,y_train) print("Train Prediction Score",logreg_score_train*100) logreg_score_test = accuracy_score(y_test,y_pred) print("Test Prediction ",logreg_score_test*100) cm = confusion_matrix(y_test, y_pred) print(cm) print(classification_report(y_test, y_pred)) logit_roc_auc = roc_auc_score(y_test, logreg.predict(X_test)) fpr, tpr, thresholds = roc_curve(y_test, logreg.predict_proba(X_test)[:,1]) plt.figure() plt.plot(fpr, tpr, label='Logistic Regression (area = %0.2f)' % logit_roc_auc) plt.plot([0, 1], [0, 1],'r--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic') plt.legend(loc="lower right") plt.savefig('Log_ROC') plt.show() #KNeighborsClassifier from sklearn.neighbors import KNeighborsClassifier knn = KNeighborsClassifier() knn.fit(X_train,y_train) y_pred_knn= knn.predict(X_test) knn_score_train = knn.score(X_train,y_train) print("Train Prediction Score",knn_score_train*100) knn_score_test = accuracy_score(y_test,y_pred_knn) print("Test Prediction ",knn_score_test*100) cm = confusion_matrix(y_test, y_pred_knn) print(cm) print(classification_report(y_test,y_pred_knn)) logit_roc_auc = roc_auc_score(y_test, y_pred_knn) fpr, tpr, thresholds = roc_curve(y_test, knn.predict_proba(X_test)[:,1]) plt.figure() plt.plot(fpr, tpr, label='KNeighbors (area = %0.2f)' % logit_roc_auc) plt.plot([0, 1], [0, 1],'r--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic') plt.legend(loc="lower right") plt.savefig('Log_ROC') plt.show() #supportvectormachines from sklearn.svm import SVC ksvc = SVC(kernel = 'rbf',random_state = 42,probability=True) ksvc.fit(X_train,y_train) y_pred_ksvc= ksvc.predict(X_test) ksvc_score_train = ksvc.score(X_train,y_train) print("Train Prediction Score",ksvc_score_train*100) ksvc_score_test = accuracy_score(y_test,y_pred_ksvc) print("Test Prediction Score",ksvc_score_test*100) cm = confusion_matrix(y_test, y_pred_ksvc) print(cm) print(classification_report(y_test,y_pred_ksvc)) #naive_bayes from sklearn.naive_bayes import GaussianNB nb = GaussianNB() nb.fit(X_train,y_train) y_pred_nb= nb.predict(X_test) nb_score_train = nb.score(X_train,y_train) print("Train Prediction Score",nb_score_train*100) nb_score_test = accuracy_score(y_test,y_pred_nb) print("Test Prediction Score",nb_score_test*100) cm = confusion_matrix(y_test, y_pred_nb) print(cm) print(classification_report(y_test,y_pred_nb)) #neuralnetwork from keras.models import Sequential from keras.layers import Dense from keras.utils import to_categorical from keras.callbacks import EarlyStopping #2layer model = Sequential() n_cols = X_train.shape[1] n_cols model.add(Dense(2, activation='relu', input_shape=(n_cols,))) model.add(Dense(1, activation='sigmoid')) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) early_stopping_monitor = EarlyStopping(patience=20) model.fit(X_train, y_train, epochs=10, validation_split=0.4 ) #3layer model = Sequential() n_cols = X_train.shape[1] n_cols model.add(Dense(4, activation='relu', input_shape=(n_cols,))) model.add(Dense(2, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(optimizer='sgd', loss='mean_squared_error', metrics=['accuracy']) early_stopping_monitor = EarlyStopping(patience=20) model.fit(X_train, y_train, epochs=20, validation_split=0.4 ) #4layer model = Sequential() n_cols = X_train.shape[1] n_cols model.add(Dense(8, activation='relu', input_shape=(n_cols,))) model.add(Dense(4, activation='relu')) model.add(Dense(2, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(optimizer='sgd', loss='mean_squared_error', metrics=['accuracy']) early_stopping_monitor = EarlyStopping(patience=20) model.fit(X_train, y_train, epochs=20, validation_split=0.4 ) #4layer model = Sequential() n_cols = X_train.shape[1] n_cols model.add(Dense(16, activation='relu', input_shape=(n_cols,))) model.add(Dense(8, activation='relu')) model.add(Dense(4, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(optimizer='sgd', loss='mean_squared_error', metrics=['accuracy']) early_stopping_monitor = EarlyStopping(patience=20) model.fit(X_train, y_train, epochs=20, validation_split=0.4 ) ```
github_jupyter
# Inference This notebook is dedicated to testing and visualizing results for both the wiki and podcast datasets Note: Apologies for the gratuitous warnings. Tensorflow is aware of these issues and has rectified them in later versions of TensorFlow. Unfortunately, they persist for version 1.13. ``` from src.SliceNet import SliceNet from src.netUtils import getSingleExample import matplotlib.pyplot as plt from pathlib import Path import numpy as np import pandas as pd import seaborn as sns import random import math import warnings warnings.filterwarnings('ignore') import tensorflow as tf tf.logging.set_verbosity(tf.logging.ERROR) import os os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' if type(tf.contrib) != type(tf): tf.contrib._warning = None %load_ext autoreload %autoreload 2 # Choose whether to use the base network or the network with self-attention attention = True # Current best networks best_base_wiki = '/home/bmmidei/SliceCast/models/04_20_2019_2300_final.h5' best_base_podcast = '/home/bmmidei/SliceCast/models/04_26_2019_1000_podcast.h5' best_attn_wiki = '/home/bmmidei/SliceCast/models/05_03_2019_0800_attn.h5' best_attn_podcast = '/home/bmmidei/SliceCast/models/05_02_2019_2200_attn_podcast.h5' if attention: weights_wiki = best_attn_wiki weights_podcast = best_attn_podcast else: weights_wiki = best_base_wiki weights_podcast = best_base_podcast net = SliceNet(classification=True, class_weights=[1.0,10,0.2], attention=attention) ``` ## Sample predictions on unseen wiki articles Note that this section relies on wikipedia ``` dataPath = Path('/home/bmmidei/SliceCast/data/wiki-sample/') files = [str(x) for x in dataPath.glob('**/*') if x.suffix=='.hdf5'] mask = random.sample(range(0,len(files)), 1) # randomly select a file to test test_file = [x for (i,x) in enumerate(files) if i in mask][0] k = 4 num_samples = 16 preds, labels, pk = net.predict(test_file=test_file, num_samples=num_samples, weights_path=weights_wiki, k=k) print('Average PK score with k={} on {} examples is: {:0.3f}'.format(k, num_samples, pk)) np.set_printoptions(suppress=True) preds = np.argmax(preds, axis=2) labels = np.argmax(labels, axis=2) # Choose the index of the document you want to examine idx = 2 # You can keep running this cell with different indices to visualize different # documents within this batch of testing # Note: The graph displays n sentences where n is the length of the longest # document in the batch. As such, there may be padding sections at the beginning # of the document with label and prediction of value 2 df = pd.DataFrame() df['preds'] = preds[idx,:] df['labels'] = labels[idx,:] df['sent_number'] = df.index fig, axes = plt.subplots(nrows=2, ncols=1) df.plot(x='sent_number', y='preds', figsize=(10,5), grid=True, ax=axes[0]) df.plot(x='sent_number', y='labels', figsize=(10,5), grid=True, ax=axes[1], color='green') ``` ## Sample predictions on unseen podcast data ``` test_file = '/home/bmmidei/SliceCast/data/podcasts/hdf5/batch0_0.hdf5' k = 33 num_samples = 2 preds, labels, pk = net.predict(test_file=test_file, num_samples=num_samples, weights_path=weights_podcast, k=k) print('Average PK score with k={} on {} examples is: {:0.3f}'.format(k, num_samples, pk)) np.set_printoptions(suppress=True) preds = np.argmax(preds, axis=2) labels = np.argmax(labels, axis=2) # Choose the document you want to examine idx = 1 df = pd.DataFrame() df['preds'] = preds[idx,:] df['labels'] = labels[idx,:] df['sent_number'] = df.index fig, axes = plt.subplots(nrows=2, ncols=1) df.plot(x='sent_number', y='preds', figsize=(10,5), grid=True, ax=axes[0]) df.plot(x='sent_number', y='labels', figsize=(10,5), grid=True, ax=axes[1], color='green') ``` ## Predictions on a single text file ``` text_file = '/home/bmmidei/SliceCast/data/podcasts/with_timestamps/joe1254.txt' is_labeled = True weights_path = weights_podcast # transfer learning sents, labels = getSingleExample(fname=text_file, is_labeled=is_labeled) sents = np.expand_dims(sents, axis=0) preds = net.singlePredict(sents, weights_path=weights_path) # Place data into a pandas dataframe for analysis df = pd.DataFrame() preds = np.argmax(np.squeeze(preds), axis=-1) df['raw_sentences'] = sents[0] if is_labeled: df['labels'] = labels df['preds'] = preds df['sent_number'] = df.index fig, axes = plt.subplots(nrows=2, ncols=1) df.plot(x='sent_number', y='preds', figsize=(10,5), grid=True, ax=axes[0]) df.plot(x='sent_number', y='labels', figsize=(10,5), grid=True, ax=axes[1], color='green') ``` ## Keyword Extraction The following cells are experimental code to extract keywords for each segment in order to provide context for each segment. ``` from src.postprocess import getSummaries, getTimeStamps import nltk nltk.download('stopwords') keywords = getSummaries(sents[0], preds) stamps = getTimeStamps(sents[0], '/home/bmmidei/SliceCast/data/podcasts/with_timestamps/joe1254.json', preds) seconds = [x%60 for x in stamps] minutes = [math.floor(x/60) for x in stamps] for i, (x, y)in enumerate(zip(minutes, seconds)): print("{}:{}".format(x, y), end="") print([x[0] for x in keywords[i]]) ```
github_jupyter
<a href="https://colab.research.google.com/github/naufalhisyam/TurbidityPrediction-thesis/blob/main/convert2png.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## **BATCH CONVERT FROM DNG TO PNG** INITIALIZATION ``` from google.colab import drive drive.mount('/content/gdrive') import os import os.path import imageio import numpy as np !pip install rawpy import rawpy from skimage import exposure ``` DEFINE DIRECTORIES ``` dataset_name = "0deg_LED_png" your_dir_path = '/content/gdrive/MyDrive/DATA/ORI/0_degree_LED/BATCH1' # String. The path of your directory containing all your subdirectories new_dir_path = f'/content/{dataset_name}' # The path of your new directory with the same architecture as the previous one, but with cropped and aligned faces ``` MAIN CODE ``` for subfolder in next(os.walk(your_dir_path))[1] : # Gives the list of all subdirectories inside the parent directory os.makedirs(os.path.join(new_dir_path, subfolder)) # Creates the new subdirectory. Note that it will also create new_dir_path, so there's no need to add a line os.makedirs(new_dir_path) for file in os.listdir(os.path.join(your_dir_path, subfolder)) : # Gives the list of all files inside the 'subfolder' directory if file.endswith('.dng'): print("converting " + str(file)) size = len(file) filename = file[:size - 4] raw = rawpy.imread(os.path.join(your_dir_path, subfolder, file)) # Postprocessing, i.e demosaicing here, will always #change the original pixel values. Typically what you want # is to get a linearly postprocessed image so that roughly #the number of photons are in linear relation to the pixel values. #You can do that with: rgb = raw.postprocess(gamma=(1,1), no_auto_bright=True, output_bps=8) #Apply gamma corrections: gamma values greater than 1 will shift the image histogram towards left and the output image will be darker than the input image. On the other hand, for gamma values less than 1, the histogram will shift towards right and the output image will be brighter than the input image. #gamma_corrected_rgb = exposure.adjust_gamma(rgb, gamma=0.5, gain=1) image=rgb imageio.imsave(os.path.join(new_dir_path, subfolder, f'{filename}.png'), image) ``` COPY TO DRIVE ``` save_path = f"/content/gdrive/MyDrive/DATA/PNG/{dataset_name}" if not os.path.exists(save_path): os.makedirs(save_path) oripath = new_dir_path + "/." !cp -a "{oripath}" "{save_path}" # copies files to google drive ``` DELETE TEST FOLDER ``` from shutil import rmtree # deletes a folder rmtree(new_dir_path) ```
github_jupyter
#Aula 01 ``` import pandas as pd url_dados = 'https://github.com/alura-cursos/imersaodados3/blob/main/dados/dados_experimentos.zip?raw=true' dados = pd.read_csv(url_dados, compression = 'zip') dados dados.head() dados.shape dados['tratamento'] dados['tratamento'].unique() dados['tempo'].unique() dados['dose'].unique() dados['droga'].unique() dados['g-0'].unique() dados['tratamento'].value_counts() dados['dose'].value_counts() dados['tratamento'].value_counts(normalize = True) dados['dose'].value_counts(normalize = True) dados['tratamento'].value_counts().plot.pie() dados['tempo'].value_counts().plot.pie() dados['tempo'].value_counts().plot.bar() dados_filtrados = dados[dados['g-0'] > 0] dados_filtrados.head() ``` #Aula 02 ``` dados # nome da coluna : novo nome da coluna mapa = {'droga': 'composto'} # inplace=True faz com que realmente seja renomeado dados.rename(columns=mapa, inplace=True) # mostrando as 5 primeiras linhas dados.head() # vai pegar os 5 primeiros elementos da tabela 'composto' dos valores somados cod_compostos = dados['composto'].value_counts().index[0:5] # definindo co cod_compostos cod_compostos # vai pegar tudo que tiver dentro de 'composto' que for igual a '@cod_compostos' # o @ diz para a query que ela foi definida no cรณdigo acima dados.query('composto in @cod_compostos') import seaborn as sns import matplotlib.pyplot as plt # salva as configuraรงรตes de imagens do seaborn sns.set() # aumentando a imagem plt.figure(figsize=(8, 6)) ax = sns.countplot(x = 'composto', data=dados.query('composto in @cod_compostos')) # adicionando tรญtulo ax.set_title('Top 5 compostos') # mostrando o grรกfico plt.show() # o total dos tamanhos รบnicos da coluna 'g-0' len(dados['g-0'].unique()) # o valor mรญnimo da coluna 'g-0' dados['g-0'].min() # o valor mรกximo da coluna 'g-0' dados['g-0'].max() # .hist gera o histograma, para aumentar o tamanho de caixinhas para enxergar melhor: bins dados['g-0'].hist(bins = 100) dados['g-19'].hist(bins = 100) # ele plota um dataframe com dados de estatรญstica dados.describe() # fatiamento, pegando apenas o 'g-0' e o 'g-1' dados[['g-0', 'g-1']] # selecionando todas as linhas (:) do primeiro elemento atรฉ o รบltimo elemento # selecionando do 'g-0' atรฉ o 'g-771' dados.loc[:,'g-0':'g-771'].describe() # copiou e colocou o cรณdigo acima, mas adicionou alguns recursos # T = transforma as colunhas em linhas e linhas em colunas calculando a mรฉdia # plotando o histograma com caixinhas maiores dados.loc[:,'g-0':'g-771'].describe().T['mean'].hist(bins=30) # vendo os menores valores dados.loc[:,'g-0':'g-771'].describe().T['min'].hist(bins=30) dados.loc[:,'g-0':'g-771'].describe().T['max'].hist(bins=30) dados.loc[:,'c-0':'c-99'].describe().T['mean'].hist(bins=50) # o boxplot faz comparaรงรตes, รฉ necessรกrio passar os valores dos elementos 'x' e a 'data' # ele tem um รบnico eixo sns.boxplot(x='g-0' , data=dados) # mostrando a figura plt.figure(figsize=(10,8)) # passando mais um valor (valor 'y') sns.boxplot(y='g-0', x='tratamento' , data=dados) dados.head() ``` Conforme nossa busca na [documentaรงรฃo do Pandas](https://pandas.pydata.org), encontramos uma maneira para construir uma tabela de frequรชncias, a funรงรฃo ```crosstab```. Esta funรงรฃo recebe como argumentos os dados que gostarรญamos de correlacionar de uma maneira bem simples: ```crosstab(dataframe['coluna1'], dataframe['coluna2'])``` e entรฃo, como retorno, temos uma matriz que relaciona essas variรกveis a partir da frequรชncia. Podemos ver que as categorias da variรกvel ```dose``` transformaram-se em linhas e as categorias da variรกvel ```tempo``` sรฃo colunas. ``` # passando os dados que queremos pegar, ele cruza as informaรงรตes e cria uma tabela de frequรชncia pd.crosstab(dados['dose'], dados['tempo']) ``` Entretanto, na matriz acima nรฃo estamos considerando o tratamento usado, apesar desta variรกvel ser de suma importรขncia, visto que, decide se hรก ou nรฃo presenรงa de um composto no evento. Por isso, vamos construir uma nova tabela com o ```crosstab``` considerando essa nova variรกvel. Para isso, usaremos a mesma sintaxe anterior, adicionando ```dados['tratamento']```ao final e, adicionalmente, deixamos as duas primeiras colunas declaradas entre colchetes, pois assim garantimos que suas informaรงรตes estarรฃo distribuรญdas nas linhas, ou seja, o cรณdigo final para este comando serรก: ```crosstab(dataframe[['coluna1'], dataframe['coluna2']], dataframe['coluna3'])```. O resultado serรก uma tabela de frequรชncias multi-index (mais de um รญndice) sendo que o index da esquerda se refere aos dados da variรกvel ```dose``` e o index da direita sรฃo as informaรงรตes de ```tempo```, respeitando a ordem na qual declaramos as respectivas variรกveis. ``` # 'dose' e 'tempo' precisam ficar entre [] porque a colunas vai ser tratamento e as linhas tenham informaรงรตes de dose e de tempo pd.crosstab([dados['dose'], dados['tempo']], dados['tratamento']) ``` Apesar de jรก termos construรญdo uma tabela bem interessante atravรฉs da frequรชncia de algumas variรกveis, podemos explorar a proporรงรฃo destes dados entre si. Para fazer isso vamos, novamente, copiar o nosso comando acrescentando um novo parรขmetro ao final, o ```normalize```. Entรฃo o cรณdigo ficarรก: ```crosstab([dados['dose'], dados['tempo']], dados['tratamento'], normalize='index')```. Esse parรขmetro normaliza a nossa tabela e escolhemos que ela faรงa isso a partir do รญndice, ou seja, ela farรก a comparaรงรฃo entre as categorias, isto รฉ, a soma de cada linha serรก igual a 1. Fazer este tipo de anรกlise possibilita que faรงamos algumas suposiรงรตes acerca do balanceamento entre as categorias e, analisando a nossa matriz, podemos concluir que hรก proporcionalidade na nossa base de dados. ``` # o cรณdigo for normalizado pelo index pd.crosstab([dados['dose'], dados['tempo']], dados['tratamento'], normalize='index') ``` Podemos tambรฉm agregar ร  nossa matriz uma mรฉtrica estatรญstica associada a uma coluna. Para que isso seja cumprido, adicionamos mais dois parรขmetros ร  nossa funรงรฃo ```crosstab```: o primeiro รฉ o ```values = dataframe['variavel']``` e o segundo รฉ o ```aggfunc``` que recebe como parรขmetro alguma mรฉtrica estatรญstica, como a mรฉdia. Logo, ```aggfunc = 'mean'```. Isso quer dizer que queremos comparar entre as diferentes categorias (```com_controle``` e ```com_droga```) a mรฉdia de valores associados a variรกvel ```g-0```. Aqui, podemos perceber algumas diferenรงas entre essas mรฉdias e podemos traรงar algumas hipรณteses a serem verificadas. Lembrando que a mรฉdia de uma variรกvel, รฉ uma conta feita a partir dos valores que aquela amostra apresenta e nรฃo รฉ o valor que ela assume, de fato. Por isso, nรฃo podemos concluir nada somente olhando a mรฉdia, mas entender seu comportamento nos dรก indรญcios por quais caminhos podemos seguir. ``` pd.crosstab([dados['dose'], dados['tempo']], dados['tratamento'], values=dados['g-0'], aggfunc='mean') ``` Para variรกveis contรญnuas, fazer tabelas de frequรชncias nรฃo รฉ a melhor estratรฉgia para analisรก-las. Mas, construir um novo tipo de grรกfico pode ser muito interessante para o nosso processo. Entรฃo, para fins de visualizaรงรฃo, o primeiro passo รฉ filtrar a nossa base de dados com as colunas que queremos investigar. No nosso caso, vamos analisar a relaรงรฃo entre as colunas ```g-0``` e ```g-3``` e, por isso, definimos uma lista de arrays com os nomes dessas colunas (```dataframe[['coluna1', 'coluna2']]```) e, como retorno, teremos nosso conjunto somente com as variรกveis alvo. ``` dados[['g-0', 'g-3']] ``` O ```scatterplot``` รฉ um tipo de grรกfico prรฉ programado da biblioteca Seaborn e recebe como parรขmetros a variรกvel que vai ser usada no eixo x, a variรกvel do eixo y e, por fim, o conjunto de dados. O cรณdigo ficarรก: ```sns.scatterplot(x = 'variavel para o eixo x', y = 'variavel para o eixo y', data = base de dados)``` E, como queremos investigar as variรกveis ```g-0``` e ```g-3```, atribuรญmos cada uma delas a um eixo. O grรกfico de dispersรฃo utiliza os dados como uma coleรงรฃo de pontos cartesianos e ele รฉ usado para apurar se hรก relaรงรฃo de causa e efeito entre duas variรกveis quantitativas. No nosso caso, cada linha serรก um par ordenado de acordo com o que declaramos no cรณdigo, ou seja, o valor de ```g-0``` serรก a cordenada x e o valor de ```g-3``` serรก a coordenada y. Por exemplo: para a linha 0 da base de dados teremos (1,0620 , -0,6208) Mas, por outro lado, a partir do grรกfico de dispersรฃo, nรฃo podemos dizer que uma variรกvel afeta a outra, podemos apenas definir se hรก relaรงรฃo entre elas e qual a intensidade disso. ``` sns.scatterplot(x='g-0', y = 'g-3', data=dados) ``` Observando o grรกfico que construรญmos acima, nรฃo parecemos encontrar nenhum padrรฃo tรฃo definido. Entรฃo, vamos confrontar mais duas colunas para verificar se encontramos algum padrรฃo melhor definido. Aqui, vamos usar a variรกvel ```g-0``` para o eixo x e a variรกvel ```g-8``` para o eixo y para construir o nosso novo grรกfico. Como retorno, recebemos um grรกfico de dispersรฃo onde a nuvem de pontos cartesianos parece desenhar melhor um padrรฃo: conforme o ```g-0``` aumenta, o valor de ```g-8``` diminui. Aparentemente, a relaรงรฃo entre essas duas variรกveis desenha uma curva com inclinaรงรฃo negativa. ``` sns.scatterplot(x='g-0', y = 'g-8', data=dados) ``` E, como parte do nosso trabalho รฉ levantar hipรณteses e confirmรก-las (ou nรฃo), precisamos verificar se a nossa suspeita de que a relaรงรฃo entre as variรกveis ```g-0``` e ```g-8```desenha uma curva com inclinaรงรฃo negativa. Para isso, vamos utilizar uma outra funรงรฃo do Seaborn, a ```lmplot```. A ```lmplot``` vai desenhar no nosso grรกfico de dispersรฃo uma linha de tendรชncia e, assim, poderemos confirmar o padrรฃo daquele conjunto de dados. Os parรขmetros a serem recebidos, sรฃo muito parecidos com aqueles usados no ```scatterplot```. Entรฃo teremos ```sns.lmplot(data=base de dados, x='variavel para o eixo x', y='variavel para o eixo y', line_kws={'color': 'cor da linha de tendencia'})``` Utilizamos o parรขmetro ```line_kws = {'color': 'red'}``` para criar um bom contraste entre os pontos do grรกfico de dispersรฃo e a linha de tendรชncia. Observando o nosso grรกfico, podemos concluir a nossa hipรณtese inicial, mas ele ainda nรฃo รฉ suficiente para finalizarmos a nossa anรกlise. ``` # criou o grรกfico de dispersรฃo e adicionou uma linha vermelha sns.lmplot(data=dados, x='g-0', y='g-8', line_kws={'color': 'red'}) ``` Para uma anรกlise mais real e completa, รฉ interessante que separemos ainda mais o nosso conjunto de dados. Isso porque, na imagem acima, apesar de termos uma linha de tendรชncia para a relaรงรฃo entre os dados ```g-0``` e ```g-8```, nรฃo hรก filtros para a dosagem, o tratamento e o tempo. E, pesando em drug discorevy, รฉ extremamente importante que faรงamos a separaรงรฃo desses conjuntos. Entรฃo, vamos acrescentar mais alguns parรขmetros para executar a separaรงรฃo. Acrescentamos o parรขmetro ```col = tramento``` para que sejam plotados grรกficos de acordo com as categorias da variรกvel em questรฃo nas colunas (```com_droga``` e ```com_controle```) e tambรฉm incluรญmos o parรขmetro ```row = 'tempo'``` para que mais uma subdivisรฃo seja feita e, as linhas apresentem novos grรกficos com as diferentes categorias (```24```,```48``` e ```72```). Assim, podemos perceber as nuances de cada grรกfico e o comportamento de determinado subconjunto. ``` # vai separar o grรกfico pelo parรขmetro 'col' e 'row' sns.lmplot(data=dados, x='g-0', y='g-8', line_kws={'color': 'red'}, col='tratamento', row='tempo') ``` Outra medida para analisar como as variรกveis estรฃo associadas รฉ a correlaรงรฃo. Para isso, vamos usar uma funรงรฃo jรก conhecida do Pandas, o ```loc``` e, vamos agregar o ```.corr```. O ```loc``` serve para definirmos o intervalo em que a correlaรงรฃo vai ser calculada. Aqui, estamos calculando a correlaรงรฃo entre todos os genes. Como retorno, temos uma tabela bem grande que correlaciona a variรกvel e apresenta valores entre 1 e -1. Por exemplo, o primeiro valor numรฉrico apresentado na primeira linha รฉ o resultado da correlaรงรฃo entre a variรกvel que estรก nesta linha e nesta coluna, no nosso caso, o ```g-0``` em ambas as extremidades. No primeiro valor numรฉrico apresentado na segunda linha, temos a correlaรงรฃo entre ```g-1``` e ```g-0``` e assim por diante. Mas, como interpretar esses valores? Bom, temos a seguinte divisรฃo: - Valores muito prรณximos de 1 ou -1: variรกveis altamente correlacionadas - Valores muito prรณximos de 0: variรกveis pouco ou nรฃo correlacionadas E, o que diferencia se essa correlaรงรฃo serรก proporcional ou inversamente proporcional, serรก o sinal. Quer dizer: - Valores muito prรณximos de 1: variรกveis proporcionalmente correlacionadas - Valores muito prรณximos de -1: variรกveis correlacionadas inversamente proporcionais Agora que jรก sabemos como analisar essa tabela, podemos voltar para o nosso grรกfico de dispersรฃo construรญdo com ```g-0``` e ```g-8`` e perceber que a nossa tabela confirma que ambas as variรกveis estรฃo correlacionadas e sรฃo inversamente proporcionais, visto que o valor apresentado na tabela รฉ de -0,604212. ``` # pegou todas as linhas da coluna # pegou o 'g-0' atรฉ o 'g-771' e calculou a correlaรงรฃo dados.loc[:,'g-0':'g-771'].corr() ``` Analisar essa grande tabela รฉ um desafio bem grande. Entรฃo, como auxรญlio visual costumamos plotar um mapa de calor para que possamos identificar com maior facilidade a correlaรงรฃo entre as variรกveis. E, como esse cรณdigo jรก estรก construรญdo na prรณpria documentaรงรฃo do Seaborn, vamos copiar o [cรณdigo](https://seaborn.pydata.org/examples/many_pairwise_correlations.html) de lรก, fazendo apenas algumas pequenas alteraรงรตes. Entรฃo, de ```corr = d.corr()``` mudamos para ```corr = dados.loc[:,'g-0':'g-50'].corr()``` pois ajustamos o ```d``` para a nossa base de dados (```dados```) e decidimos incluir um ```loc``` para fazer o mapa de calor apenas do ```g-0``` ao ```g-50```. Tambรฉm retiramos o parรขmetro ```vmax=.3``` da รบltima parte do cรณdigo pois este era um limitador da correlaรงรฃo que nรฃo nos interessa no momento. Adicionalmente, tambรฉm fizemos a importaรงรฃo da biblioteca Numpy que รฉ usada para gerar este mapa de calor (```import numpy as np```). ``` corr = dados.loc[:,'g-0':'g-50'].corr() ``` O mapa de calor mostra uma escala de cores em sua lateral direita, a legenda e, para cada pontinho, podemos perceber a forรงa da correlaรงรฃo sendo mostrada atravรฉs de uma cor associada. Olhando para o nosso grรกfico, percebemos que, em sua maioria, as expressรตes genicas nรฃo apresentam correlaรงรตes tรฃo altas entre si (podemos deduzir isso observando que o grรกfico em grande parte รฉ translรบcido). ร‰ importante destacar que nรฃo podemos inferir causalidade a partir da correlaรงรฃo, como jรก descrevemos anteriormente no grรกfico de dispersรฃo. Exemplificando: vimos que ```g-0``` e ```g-8``` tรชm correlaรงรฃo inversamente proporcional entre si mas nรฃo podemos concluir que รฉ o ```g-0``` que faz o ```g-8``` diminuir, ou seja, a causa. ``` import numpy as np # Generate a mask for the upper triangle mask = np.triu(np.ones_like(corr, dtype=bool)) # Set up the matplotlib figure f, ax = plt.subplots(figsize=(11, 9)) # Generate a custom diverging colormap cmap = sns.diverging_palette(230, 20, as_cmap=True) # Draw the heatmap with the mask and correct aspect ratio sns.heatmap(corr, mask=mask, cmap=cmap, center=0, square=True, linewidths=.5, cbar_kws={"shrink": .5}) ``` Agora, vamos repetir o processo de construรงรฃo do mapa de calor para a a viabilidade celular (```c```). Definimos uma nova variรกvel ```corr_celular``` e ajustamos os parรขmetros de acordo com os nossos ```cs```. Observando o grรกfico de saรญda, podemos perceber uma grande diferenรงa entre os dois mapas de calor que construรญmos. A escala deste novo grรกfico รฉ bem diferente da escala anterior, temos valores apenas entre 0,65 e 0,90, correlaรงรตes altamente proporcionais. ``` corr_celular = dados.loc[:,'c-0':'c-50'].corr() # Generate a mask for the upper triangle mask = np.triu(np.ones_like(corr_celular, dtype=bool)) # Set up the matplotlib figure f, ax = plt.subplots(figsize=(11, 9)) # Generate a custom diverging colormap cmap = sns.diverging_palette(230, 20, as_cmap=True) # Draw the heatmap with the mask and correct aspect ratio sns.heatmap(corr_celular, mask=mask, cmap=cmap, center=0, square=True, linewidths=.5, cbar_kws={"shrink": .5}) ```
github_jupyter
``` # This script should be revised to an object-oriented program, which defines TS extraction class, trajectories classfication class, etc. import numpy as np import sys import os import glob import shutil import matplotlib as mpl mpl.use('Agg') import matplotlib.pyplot as plt import matplotlib.patches as mpatches from matplotlib.collections import LineCollection from matplotlib.collections import PatchCollection from matplotlib.colors import colorConverter from matplotlib.ticker import FuncFormatter from scipy import stats from scipy.stats import norm # Modify as needed for trajectories. This should be whatever index comes after "runpoint" in the trajectory header RUNPOINT_IDX = 3 def mkdir(): if os.path.exists('./trajTS'): shutil.rmtree('./trajTS') if os.path.exists('./reorder'): shutil.rmtree('./reorder') if os.path.exists('./TDD'): shutil.rmtree('./TDD') if os.path.exists('./TDD_r2pA'): shutil.rmtree('./TDD_r2pA') if os.path.exists('./TDD_r2pB'): shutil.rmtree('./TDD_r2pB') if os.path.exists('./TDD_r2r'): shutil.rmtree('./TDD_r2r') if os.path.exists('./TDD_p2p'): shutil.rmtree('./TDD_p2p') if os.path.exists('./TDD_inter'): shutil.rmtree('./TDD_inter') os.mkdir('./trajTS') os.mkdir('./reorder') os.mkdir('./TDD') os.mkdir('./TDD_r2pA') os.mkdir('./TDD_r2pB') os.mkdir('./TDD_r2r') os.mkdir('./TDD_p2p') os.mkdir('./TDD_inter') def read_conf_file(): f = open('traj.conf') lines = [a.strip() for a in f.readlines()] f.close() reset = lines[0] if not lines[1].isdigit(): print('Invalid mode in conf file, must be a number') exit(1) else: mode = int(lines[1]) if mode < 1 or mode > 2: print('Invalid mode in conf file, too large or too small') exit(1) atoms = [] for num in lines[2].split(): if not num.isdigit(): print('Invalid atom index') exit(1) else: atoms.append(int(num)) if (len(atoms) % 2) != 0: raise TypeError('Odd number of atomic indices have been receivedโ€“this is ODD!') elif (mode == 1 and len(atoms) != 6) or (mode == 2 and len(atoms) != 4): print('Invalid number of atoms. Exiting program') exit(1) print ('Run ...') return reset, mode, atoms def Get_mode(): print('Available modes:\nMode 1:\t1 bond always forms, then 1 of 2 other bonds forms to create 2 products\nMode 2:\t1 of 2 possible bonds form to create 2 products\n') mode = input('Please choose analysis mode: ') if not mode.isdigit() or int(mode) < 1 or int(mode) > 2: print('Invalid mode selection. Exiting program') exit(1) return int(mode) def Get_atomindex(mode): if mode == 1: print('Enter indices for bond that always forms, then for the bonds corresponding to products A and B') elif mode == 2: print('Enter indices for bonds corresponding to product A, then product B') atoms = [int(x) for x in input('Input atomic indices corresponding to N bonds, using ' ' as delimiter. Make sure the forming bond goes first\n').split()] for i in range(0,len(atoms)): print('atom ', str(i+1), ' ', str(atoms[i])) judge = input('Do you think the indices are reasonable?(y/n): ') if judge != 'y'or 'Y': print('Working') exit(1) if (len(atoms) % 2) != 0: raise TypeError('Odd number of atomic indices have been receivedโ€“this is ODD!') elif (mode == 1 and len(atoms) != 6) or (mode == 2 and len(atoms) != 4): print('Invalid number of atoms. Exiting program') exit(1) print ('Run ...') return atoms class Trajectories: def __init__(self, file, atoms, mode): # trajectory format for ProgDyn output self.name = os.path.basename(file) print ('Working on '+self.name) if '.xyz' not in file: raise TypeError('A ProgDyn .xyz file must be provided') if os.stat(file).st_size == 0: print('The file '+self.name+' is empty.') return #Creating new folders for the following analysis #Open new file handles and get parameters self.atoms = atoms self.mode = mode self.lines = open(file).readlines() self.n_lines = len(self.lines) if self.n_lines == 0: print('The file ' + self.name + ' is empty.') return self.n_atoms = int(self.lines[0].split()[0]) self.n_idx = int((self.n_lines / (self.n_atoms + 2))) # This function is used to get distance from coordinates def Get_distance(self,n): if not hasattr(self, 'atoms'): print('The xyz file ' + self.name + ' has not been successfully initiated.') return elif self.atoms == 0: print('The xyz file ' + self.name + ' has zero atoms.') return X = np.zeros(len(self.atoms)) Y = np.zeros(len(self.atoms)) Z = np.zeros(len(self.atoms)) Bonds = np.zeros(int(len(self.atoms)/2)) for i in range(0, len(self.atoms)): X[i] = float(self.lines[(self.n_atoms + 2) * n + self.atoms[i] + 1].split()[1]) Y[i] = float(self.lines[(self.n_atoms + 2) * n + self.atoms[i] + 1].split()[2]) Z[i] = float(self.lines[(self.n_atoms + 2) * n + self.atoms[i] + 1].split()[3]) for j in range(0, len(Bonds)): Bonds[j] = round(((X[j*2+1] - X[j*2]) ** 2 + (Y[j*2+1] - Y[j*2]) ** 2 + (Z[j*2+1] - Z[j*2]) ** 2) ** .5, 3) return Bonds # TS finder is used to collect the sampled TS geometries from trajectories. The sampled TS is usually the starting point of each trajectory def TS_finder(self): if not hasattr(self, 'n_idx'): print('The xyz file ' + self.name + ' has not been successfully initiated.') return elif self.n_idx == 0: print('The xyz file ' + self.name + ' has zero snapshots.') return fileout_TS_xyz = open('./trajTS/trajTs.xyz', 'a') fileout_TS = open('./trajTS/trajTs.txt', 'a') for i in range(0, self.n_idx): if len(self.lines[1].split()) < RUNPOINT_IDX + 1: print('The xyz file ' + self.name + ' does not have the snapshot numeration on the 4th word of the title line.') break elif int(self.lines[1 + i * (self.n_atoms + 2)].split()[RUNPOINT_IDX]) == 1: bond_TS= self.Get_distance(i) fileout_TS.write(self.name + ', ') fileout_TS.write(', '.join([str(bond_TS[j]) for j in range(len(bond_TS))])) fileout_TS.write('\n') fileout_TS.close() for i in range(0, self.n_atoms + 2): fileout_TS_xyz.write(self.lines[i]) fileout_TS_xyz.close() break else: print('The xyz file ' + self.name + ' does not have the TS geometry!') def Rearrangement(self): if not hasattr(self, 'lines'): print('The xyz file ' + self.name + ' has not been successfully initiated.') return elif len(self.lines) == 0: print('The xyz file ' + self.name + ' has zero lines.') return fileout_reorder = open('./reorder/' + self.name, 'w') if len(self.lines[1].split()) < RUNPOINT_IDX + 1: print('The xyz file ' + self.name + ' does not have the snapshot numeration on the 4th word of the title line.') return elif int(self.lines[1].split()[RUNPOINT_IDX]) != 1: print('I cannot find the first TS and reorder is not feasible; break!') return else: for i in range(1, self.n_idx): if self.lines[1].split()[RUNPOINT_IDX] == self.lines[1 + i * (self.n_atoms + 2)].split()[RUNPOINT_IDX]: break n1 = i n2 = self.n_idx - i if n1 == n2 == 1: print('The file ' + self.name + ' only has two TS points.') return else: bond_TS = self.Get_distance(0) bond_D1 = self.Get_distance(n1 - 1) bond_D2 = self.Get_distance(self.n_idx - 1) print('Bond 1 changes from D1:', bond_D1[0], ' to TS:', bond_TS[0], ' then to D2:', bond_D2[0]) print('Assuming bond 1 forms from R to P') if (bond_D2[0] > bond_D1[0]): for i in range(0, n2): for j in range(0, self.n_atoms + 2): fileout_reorder.write(self.lines[(self.n_idx - 1 - i) * (self.n_atoms + 2) + j]) for i in range(0, n1): for j in range(0, self.n_atoms + 2): fileout_reorder.write(self.lines[i * (self.n_atoms + 2) + j]) if (bond_D1[0] > bond_D2[0]): for i in range(0, n1): for j in range(0, self.n_atoms + 2): fileout_reorder.write(self.lines[(n1 - 1 - i) * (self.n_atoms + 2) + j]) for i in range(0, n2): for j in range(0, self.n_atoms + 2): fileout_reorder.write(self.lines[(i + n1) * (self.n_atoms + 2) + j]) fileout_reorder.close() ## classification function take reordered trajectories to prcess, generating distance/angle/dihedral time series that inform where the trajectories come from and end up. def Classification(self): if not hasattr(self, 'name'): print('The xyz file ' + self.name + ' has not been successfully initiated.') return elif self.n_idx == 0: print('The xyz file ' + self.name + ' has zero snapshots.') return fileout_traj = open('./TDD/' + self.name + '.txt', 'w') for i in range(0, self.n_idx): if int(self.lines[1 + i * (self.n_atoms + 2)].split()[RUNPOINT_IDX]) == 1: break n1=i bond_R = self.Get_distance(0) bond_TS = self.Get_distance(n1) bond_P = self.Get_distance(self.n_idx-1) # now writing every snapshots to TDD for i in range(0,self.n_idx): runpoint = int(self.lines[1 + i * (self.n_atoms + 2)].split()[RUNPOINT_IDX]) bond = self.Get_distance(i) if i<n1: fileout_traj.write(str(-runpoint+1)+ ', ') fileout_traj.write(', '.join([str(bond[j]) for j in range(len(bond))])) fileout_traj.write('\n') elif i>n1: fileout_traj.write(str(runpoint-1) + ', ') fileout_traj.write(', '.join([str(bond[j]) for j in range(len(bond))])) fileout_traj.write('\n') fileout_traj.close() #Now start classifying trajectories if self.mode == 1: if (bond_R[0] > bond_TS[0] > bond_P[0]): if (bond_P[1] < bond_P[2]): shutil.copyfile('./reorder/' + self.name ,'./TDD_r2pA/' + self.name ) print('go to r2pA') return 'A' else: shutil.copyfile('./reorder/' + self.name , './TDD_r2pB/' + self.name ) os.system('cp ./ntraj/' + self.name +' ./TDD_r2pB/' + self.name) print('go to r2pB') return 'B' elif (bond_R[0] >= bond_TS[0]) and (bond_P[0] >= bond_TS[0]): shutil.copyfile( './reorder/' + self.name , './TDD_r2r/' + self.name ) print('go to r2r') return 're_R' elif (bond_R[0] <= bond_TS[0]) and (bond_P[0] <= bond_TS[0]): shutil.copyfile( './reorder/' + self.name ,'./TDD_p2p/' + self.name ) print('go to p2p') return 're_P' elif self.mode == 2: if (bond_R[0] > bond_TS[0] > bond_P[0]) or (bond_R[1] > bond_TS[1] > bond_P[1]): if (bond_P[0] < bond_P[1]): os.system('cp ./ntraj/' + self.name +' ./TDD_r2pA/' + self.name) print('go to r2pA') return 'A' else: os.system('cp ./ntraj/' + self.name +' ./TDD_r2pB/' + self.name) print('go to r2pB') return 'B' elif (bond_R[0] >= bond_TS[0] and bond_P[0] >= bond_TS[0] and bond_R[1] >= bond_TS[1] and bond_P[1] >= bond_TS[1]): shutil.copyfile( './reorder/' + self.name , './TDD_r2r/' + self.name ) print('go to r2r') return 'R' elif (bond_R[0] <= bond_TS[0]) and (bond_P[0] <= bond_TS[0]) or (bond_R[1] <= bond_TS[1]) and (bond_P[1] <= bond_TS[1]): shutil.copyfile( './reorder/' + self.name ,'./TDD_p2p/' + self.name ) print('go to p2p') return 'P' def log_results(total, A, B, re_R, re_P): out = open('./trajTS/traj_log', 'w+') out.write('Results\nTotal number of trajectories: '+str(total)+'\nTotal forming product: '+str(A+B)+'\nA: '+str(A)+' B: '+str(B)+' Reactant: '+str(re_R)+' \nPercent product A: '+str(A*100/(A+B))+'%\nPercent product B: '+str(B*100/(A+B))+'%\n') out.close() # main func def main(): # Remember to add a choice function regarding the removal of current folders if os.path.exists('traj.conf'): reset, mode, atom = read_conf_file() if reset == 'y': mkdir() else: judge = input('Do you want to start analyzing from the very beginning? (y/n) Type y to remove all analysis folders and n to keep the current folder (e.g. reorder, etc.) for analysis: ') if judge == 'y': mkdir() mode = Get_mode() atom = Get_atomindex(mode) for filename in glob.glob('./ntraj/*.xyz'): T = Trajectories(filename,atom,mode) T.TS_finder() T.Rearrangement() total, A, B, re_R, re_P = 0, 0, 0, 0, 0 for filename in glob.glob('./reorder/*.xyz'): T = Trajectories(filename,atom,mode) result = T.Classification() total += 1 if result == 'A': A += 1 elif result == 'B': B += 1 elif result == 're_R': re_R += 1 elif result == 're_P': re_P += 1 print('Trajectory analysis complete!') if (A+B == 0): print('Neither product A nor B was formed') else: log_results(total, A, B, re_R, re_P) print('Results\nTotal number of trajectories: '+str(total)+'\nTotal forming product: '+str(A+B)+'\nA: '+str(A)+' B: '+str(B)+' Recrossing_R_R: '+str(re_R)+' Recrossing_P_P: '+str(re_P)+'\nPercent product A: '+str(A*100/(A+B))+'%\nPercent product B: '+str(B*100/(A+B))+'%\n') if __name__ == '__main__': main() ```
github_jupyter
# "The Role of Wide Baseline Stereo in the Deep Learning World" > "Short history of wide baseline stereo in computer vision" - toc: false - image: images/doll_wbs_300.png - branch: master - badges: true - comments: true - hide: false - search_exclude: false ## Rise of Wide Multiple Baseline Stereo The *wide multiple baseline stereo (WxBS)* is a process of establishing a sufficient number of pixel or region correspondences from two or more images depicting the same scene to estimate the geometric relationship between cameras, which produced these images. Typically, WxBS relies on the scene rigidity -- the assumption that there is no motion in the scene except the motion of the camera itself. The stereo problem is called wide multiple baseline if the images are significantly different in more than one aspect: viewpoint, illumination, time of acquisition, and so on. Historically, people were focused on the simpler problem with a single baseline, which was geometrical, i.e., viewpoint difference between cameras, and the area was known as wide baseline stereo. Nowadays, the field is mature and research is focused on solving more challenging multi-baseline problems. WxBS is a building block of many popular computer vision applications, where spatial localization or 3D world understanding is required -- panorama stitching, 3D reconstruction, image retrieval, SLAM, etc. If the wide baseline stereo is a new concept for you, I recommend checking the [examplanation in simple terms](https://ducha-aiki.github.io/wide-baseline-stereo-blog/2021/01/09/wxbs-in-simple-terms.html). ![](00_intro_files/match_doll.png "Correspondences between two views found by wide baseline stereo algorithm. Photo and doll created by Olha Mishkina") **Where does wide baseline stereo come from?** As often happens, a new problem arises from the old -- narrow or short baseline stereo. In the narrow baseline stereo, images are taken from nearby positions, often exactly at the same time. One could find correspondence for the point $(x,y)$ from the image $I_1$ in the image $I_2$ by simply searching in some small window around $(x,y)$\cite{Hannah1974ComputerMO, Moravec1980} or, assuming that camera pair is calibrated and the images are rectified -- by searching along the epipolar line\cite{Hartley2004}. ![](2020-03-27-intro_files/att_00003.png "Correspondence search in narrow baseline stereo, from Moravec 1980 PhD thesis.") <!--- ![Wide baseline stereo model. "Baseline" is the distance between cameras. Image by Arne Nordmann (WikiMedia)](00_intro_files/Epipolar_geometry.svg) --> One of the first, if not the first, approaches to the wide baseline stereo problem was proposed by Schmid and Mohr \cite{Schmid1995} in 1995. Given the difficulty of the wide multiple baseline stereo task at the moment, only a single --- geometrical -- baseline was considered, thus the name -- wide baseline stereo (WBS). The idea of Schmid and Mohr was to equip each keypoint with an invariant descriptor. This allowed establishing tentative correspondences between keypoints under viewpoint and illumination changes, as well as occlusions. One of the stepping stones was the corner detector by Harris and Stevens \cite{Harris88}, initially used for the application of tracking. It is worth a mention, that there were other good choices for the local feature detector at the time, starting with the Forstner \cite{forstner1987fast}, Moravec \cite{Moravec1980} and Beaudet feature detectors \cite{Hessian78}. The Schmid and Mohr approach was later extended by Beardsley, Torr and Zisserman \cite{Beardsley96} by adding RANSAC \cite{RANSAC1981} robust geometry estimation and later refined by Pritchett and Zisserman \cite{Pritchett1998, Pritchett1998b} in 1998. The general pipeline remains mostly the same until now \cite{WBSTorr99, CsurkaReview2018, IMW2020}, which is shown in Figure below. <!--- ![image.png](00_intro_files/att_00002.png) --> ![](00_intro_files/matching-filtering.png "Commonly used wide baseline stereo pipeline") Let's write down the WxBS algorithm: 1. Compute interest points/regions in all images independently 2. For each interest point/region compute a descriptor of their neigborhood (local patch). 3. Establish tentative correspondences between interest points based on their descriptors. 4. Robustly estimate geometric relation between two images based on tentative correspondences with RANSAC. The reasoning behind each step is described in [this separate post](https://ducha-aiki.github.io/wide-baseline-stereo-blog/2021/02/11/WxBS-step-by-step.html). ## Quick expansion This algorithm significantly changed computer vision landscape for next forteen years. Soon after the introduction of the WBS algorithm, it became clear that its quality significantly depends on the quality of each component, i.e., local feature detector, descriptor, and geometry estimation. Local feature detectors were designed to be as invariant as possible, backed up by the scale-space theory, most notable developed by Lindenberg \cite{Lindeberg1993, Lindeberg1998, lindeberg2013scale}. A plethora of new detectors and descriptors were proposed in that time. We refer the interested reader to these two surveys: by Tuytelaars and Mikolajczyk \cite{Tuytelaars2008} (2008) and by Csurka \etal \cite{CsurkaReview2018} (2018). Among the proposed local features is one of the most cited computer vision papers ever -- SIFT local feature \cite{Lowe99, SIFT2004}. Besides the SIFT descriptor itself, Lowe's paper incorporated several important steps, proposed earlier with his co-authors, to the matching pipeline. Specifically, they are quadratic fitting of the feature responses for precise keypoint localization \cite{QuadInterp2002}, using the Best-Bin-First kd-tree \cite{aknn1997} as an approximate nearest neightbor search engine to speed-up the tentative correspondences generation, and using second-nearest neighbor (SNN) ratio to filter the tentative matches. It is worth noting that SIFT feature became popular only after Mikolajczyk benchmark paper \cite{MikoDescEval2003, Mikolajczyk05} that showed its superiority to the rest of alternatives. Robust geometry estimation was also a hot topic: a lot of improvements over vanilla RANSAC were proposed. For example, LO-RANSAC \cite{LOransac2003} proposed an additional local optimization step into RANSAC to significantly decrease the number of required steps. PROSAC \cite{PROSAC2005} takes into account the tentative correspondences matching score during sampling to speed up the procedure. DEGENSAC \cite{Degensac2005} improved the quality of the geometry estimation in the presence of a dominant plane in the images, which is the typical case for urban images. We refer the interested reader to the survey by Choi \etal \cite{RANSACSurvey2009}. Success of wide baseline stereo with SIFT features led to aplication of its components to other computer vision tasks, which were reformulated through wide baseline stereo lens: - **Scalable image search**. Sivic and Zisserman in famous "Video Google" paper\cite{VideoGoogle2003} proposed to treat local features as "visual words" and use ideas from text processing for searching in image collections. Later even more WBS elements were re-introduced to image search, most notable -- **spatial verification**\cite{Philbin07}: simplified RANSAC procedure to verify if visual word matches were spatially consistent. ![](00_intro_files/att_00004.png "Bag of words image search. Image credit: Filip Radenovic http://cmp.felk.cvut.cz/ radenfil/publications/Radenovic-CMPcolloq-2015.11.12.pdf") - **Image classification** was performed by placing some classifier (SVM, random forest, etc) on top of some encoding of the SIFT-like descriptors, extracted sparsely\cite{Fergus03, CsurkaBoK2004} or densely\cite{Lazebnik06}. ![](00_intro_files/att_00005.png "Bag of local features representation for classification from Fergus03") - **Object detection** was formulated as relaxed wide baseline stereo problem\cite{Chum2007Exemplar} or as classification of SIFT-like features inside a sliding window \cite{HoG2005} ![](00_intro_files/att_00003.png "Exemplar-representation of the classes using local features, cite{Chum2007Exemplar}") <!--- ![HoG-based pedestrian detection algorithm](00_intro_files/att_00006.png) ![Histogram of gradient visualization](00_intro_files/att_00007.png) --> - **Semantic segmentation** was performed by classicication of local region descriptors, typically, SIFT and color features and postprocessing afterwards\cite{Superparsing2010}. Of course,wide baseline stereo was also used for its direct applications: - **3D reconstruction** was based on camera poses and 3D points, estimated with help of SIFT features \cite{PhotoTourism2006, RomeInDay2009, COLMAP2016} ![](00_intro_files/att_00008.png "SfM pipeline from COLMAP") - **SLAM(Simultaneous localization and mapping)** \cite{Se02, PTAM2007, Mur15} were based on fast version of local feature detectors and descriptors. <!--- ![ORBSLAM pipeline](00_intro_files/att_00009.png) --> - **Panorama stiching** \cite{Brown07} and, more generally, **feature-based image registration**\cite{DualBootstrap2003} were initalized with a geometry obtained by WBS and then further optimized ## Deep Learning Invasion: retreal to the geometrical fortress In 2012 the deep learning-based AlexNet \cite{AlexNet2012} approach beat all methods in image classification at the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Soon after, Razavian et al.\cite{Astounding2014} have shown that convolutional neural networks (CNNs) pre-trained on the Imagenet outperform more complex traditional solutions in image and scene classification, object detection and image search, see Figure below. The performance gap between deep leaning and "classical" solutions was large and quickly increasing. In addition, deep learning pipelines, be it off-the-shelf pretrained, fine-tuned or the end-to-end learned networks, are simple from the engineering perspective. That is why the deep learning algorithms quickly become the default option for lots of computer vision problems. ![](00_intro_files/att_00010.png "CNN representation beats complex traditional pipelines. Reds are CNN-based and greens are the handcrafted. From Astounding2014") However, there was still a domain, where deep learned solutions failed, sometimes spectacularly: geometry-related tasks. Wide baseline stereo \cite{Melekhov2017relativePoseCnn}, visual localization \cite{PoseNet2015} and SLAM are still areas, where the classical wide baseline stereo dominates \cite{sattler2019understanding, zhou2019learn, pion2020benchmarking}. The full reasons why convolution neural network pipelines are struggling to perform tasks that are related to geometry, and how to fix that, are yet to be understood. The observations from the recent papers are following: - CNN-based pose predictions predictions are roughly equivalent to the retrieval of the most similar image from the training set and outputing its pose \cite{sattler2019understanding}. This kind of behaviour is also observed in a related area: single-view 3D reconstruction performed by deep networks is essentially a retrieval of the most similar 3D model from the training set \cite{Tatarchenko2019}. - Geometric and arithmetic operations are hard to represent via vanilla neural networks (i.e., matrix multiplication followed by non-linearity) and they may require specialized building blocks, approximating operations of algorithmic or geometric methods, e.g. spatial transformers \cite{STN2015} and arithmetic units \cite{NALU2018,NAU2020}. Even with such special-purpose components, the deep learning solutions require "careful initialization, restricting parameter space, and regularizing for sparsity" \cite{NAU2020}. - Vanilla CNNs suffer from sensitivity to geometric transformations like scaling and rotation \cite{GroupEqCNN2016} or even translation \cite{MakeCNNShiftInvariant2019}. The sensitivity to translations might sound counter-intuitive, because the concolution operation by definition is translation-covariant. However, a typical CNN contains also zero-padding and downscaling operations, which break the covariance \cite{MakeCNNShiftInvariant2019, AbsPositionCNN2020}. Unlike them, classical local feature detectors are grounded on scale-space \cite{lindeberg2013scale} and image processing theories. Some of the classical methods deal with the issue by explicit geometric normalization of the patches before description. - CNNs predictions can be altered by a change in a small localized area \cite{AdvPatch2017} or even a single pixel \cite{OnePixelAttack2019}, while the wide baseline stereo methods require the consensus of different independent regions. ## Today: assimilation and merging ### Wide baseline stereo as a task: formulate differentiably and learn modules This leads us to the following question -- **is deep learning helping WxBS today?** The answer is yes. After the quick interest in the black-box-style models, the current trend is to design deep learning solutions for the wide baseline stereo in a modular fashion \cite{cv4action2019}, resembling the one in Figure below. Such modules are learned separately. For example, the HardNet \cite{HardNet2017} descriptor replaces SIFT local descriptor. The Hessian detector can be replaced by deep learned detectors like KeyNet \cite{KeyNet2019} or the joint detector-descriptor \cite{SuperPoint2017, R2D22019, D2Net2019}. The matching and filtering are performed by the SuperGlue \cite{sarlin2019superglue} matching network, etc. There have been attempts to formulate the full pipeline solving problem like SLAM \cite{gradslam2020} in a differentiable way, combining the advantages of structured and learning-based approaches. ![](00_intro_files/att_00011.png "SuperGlue: separate matching module for handcrafter and learned features") ![](00_intro_files/gradslam.png "gradSLAM: differentiable formulation of SLAM pipeline") ### Wide baseline stereo as a idea: consensus of local independent predictions On the other hand, as an algorithm, wide baseline stereo is summarized into two main ideas 1. Image should be represented as set of local parts, robust to occlusion, and not influencing each other. 2. Decision should be based on spatial consensus of local feature correspondences. One of modern revisit of wide baseline stereo ideas is Capsule Networks\cite{CapsNet2011,CapsNet2017}. Unlike vanilla CNNs, capsule networks encode not only the intensity of feature response, but also its location. Geometric agreement between "object parts" is a requirement for outputing a confident prediction. Similar ideas are now explored for ensuring adversarial robustness of CNNs\cite{li2020extreme}. Another way of using "consensus of local independent predictions" is used in [Cross-transformers](https://arxiv.org/abs/2007.11498) paper: spatial attention helps to select relevant feature for few-shot learning, see Figure below. While wide multiple baseline stereo is a mature field now and does not attract even nearly as much attention as before, it continues to play an important role in computer vision. ![](2020-03-27-intro_files/att_00000.png "Cross-transformers: spatial attention helps to select relevant feature for few-shot learning") ![](00_intro_files/capsules.png "Capsule networks: revisiting the WBS idea. Each feature response is accompanied with its pose. Poses should be in agreement, otherwise object would not be recognized. Image by Aurรฉlien Gรฉron https://www.oreilly.com/content/introducing-capsule-networks/") # References [<a id="cit-Hannah1974ComputerMO" href="#call-Hannah1974ComputerMO">Hannah1974ComputerMO</a>] M. J., ``_Computer matching of areas in stereo images._'', 1974. [<a id="cit-Moravec1980" href="#call-Moravec1980">Moravec1980</a>] Hans Peter Moravec, ``_Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover_'', 1980. [<a id="cit-Hartley2004" href="#call-Hartley2004">Hartley2004</a>] R.~I. Hartley and A. Zisserman, ``_Multiple View Geometry in Computer Vision_'', 2004. [<a id="cit-Schmid1995" href="#call-Schmid1995">Schmid1995</a>] Schmid Cordelia and Mohr Roger, ``_Matching by local invariants_'', , vol. , number , pp. , 1995. [online](https://hal.inria.fr/file/index/docid/74046/filename/RR-2644.pdf) [<a id="cit-Harris88" href="#call-Harris88">Harris88</a>] C. Harris and M. Stephens, ``_A Combined Corner and Edge Detector_'', Fourth Alvey Vision Conference, 1988. [<a id="cit-forstner1987fast" href="#call-forstner1987fast">forstner1987fast</a>] W. F{\"o}rstner and E. G{\"u}lch, ``_A fast operator for detection and precise location of distinct points, corners and centres of circular features_'', Proc. ISPRS intercommission conference on fast processing of photogrammetric data, 1987. [<a id="cit-Hessian78" href="#call-Hessian78">Hessian78</a>] P.R. Beaudet, ``_Rotationally invariant image operators_'', Proceedings of the 4th International Joint Conference on Pattern Recognition, 1978. [<a id="cit-Beardsley96" href="#call-Beardsley96">Beardsley96</a>] P. Beardsley, P. Torr and A. Zisserman, ``_3D model acquisition from extended image sequences_'', ECCV, 1996. [<a id="cit-RANSAC1981" href="#call-RANSAC1981">RANSAC1981</a>] Fischler Martin A. and Bolles Robert C., ``_Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography_'', Commun. ACM, vol. 24, number 6, pp. 381--395, jun 1981. [<a id="cit-Pritchett1998" href="#call-Pritchett1998">Pritchett1998</a>] P. Pritchett and A. Zisserman, ``_Wide baseline stereo matching_'', ICCV, 1998. [<a id="cit-Pritchett1998b" href="#call-Pritchett1998b">Pritchett1998b</a>] P. Pritchett and A. Zisserman, ``_"Matching and Reconstruction from Widely Separated Views"_'', 3D Structure from Multiple Images of Large-Scale Environments, 1998. [<a id="cit-WBSTorr99" href="#call-WBSTorr99">WBSTorr99</a>] P. Torr and A. Zisserman, ``_Feature Based Methods for Structure and Motion Estimation_'', Workshop on Vision Algorithms, 1999. [<a id="cit-CsurkaReview2018" href="#call-CsurkaReview2018">CsurkaReview2018</a>] {Csurka} Gabriela, {Dance} Christopher R. and {Humenberger} Martin, ``_From handcrafted to deep local features_'', arXiv e-prints, vol. , number , pp. , 2018. [<a id="cit-IMW2020" href="#call-IMW2020">IMW2020</a>] Jin Yuhe, Mishkin Dmytro, Mishchuk Anastasiia <em>et al.</em>, ``_Image Matching across Wide Baselines: From Paper to Practice_'', arXiv preprint arXiv:2003.01587, vol. , number , pp. , 2020. [<a id="cit-Lindeberg1993" href="#call-Lindeberg1993">Lindeberg1993</a>] Lindeberg Tony, ``_Detecting Salient Blob-like Image Structures and Their Scales with a Scale-space Primal Sketch: A Method for Focus-of-attention_'', Int. J. Comput. Vision, vol. 11, number 3, pp. 283--318, December 1993. [<a id="cit-Lindeberg1998" href="#call-Lindeberg1998">Lindeberg1998</a>] Lindeberg Tony, ``_Feature Detection with Automatic Scale Selection_'', Int. J. Comput. Vision, vol. 30, number 2, pp. 79--116, November 1998. [<a id="cit-lindeberg2013scale" href="#call-lindeberg2013scale">lindeberg2013scale</a>] Lindeberg Tony, ``_Scale-space theory in computer vision_'', , vol. 256, number , pp. , 2013. [<a id="cit-Tuytelaars2008" href="#call-Tuytelaars2008">Tuytelaars2008</a>] Tuytelaars Tinne and Mikolajczyk Krystian, ``_Local Invariant Feature Detectors: A Survey_'', Found. Trends. Comput. Graph. Vis., vol. 3, number 3, pp. 177--280, July 2008. [<a id="cit-Lowe99" href="#call-Lowe99">Lowe99</a>] D. Lowe, ``_Object Recognition from Local Scale-Invariant Features_'', ICCV, 1999. [<a id="cit-SIFT2004" href="#call-SIFT2004">SIFT2004</a>] Lowe David G., ``_Distinctive Image Features from Scale-Invariant Keypoints_'', International Journal of Computer Vision (IJCV), vol. 60, number 2, pp. 91--110, 2004. [<a id="cit-QuadInterp2002" href="#call-QuadInterp2002">QuadInterp2002</a>] M. Brown and D. Lowe, ``_Invariant Features from Interest Point Groups_'', BMVC, 2002. [<a id="cit-aknn1997" href="#call-aknn1997">aknn1997</a>] J.S. Beis and D.G. Lowe, ``_Shape Indexing Using Approximate Nearest-Neighbour Search in High-Dimensional Spaces_'', CVPR, 1997. [<a id="cit-MikoDescEval2003" href="#call-MikoDescEval2003">MikoDescEval2003</a>] K. Mikolajczyk and C. Schmid, ``_A Performance Evaluation of Local Descriptors_'', CVPR, June 2003. [<a id="cit-Mikolajczyk05" href="#call-Mikolajczyk05">Mikolajczyk05</a>] Mikolajczyk K., Tuytelaars T., Schmid C. <em>et al.</em>, ``_A Comparison of Affine Region Detectors_'', IJCV, vol. 65, number 1/2, pp. 43--72, 2005. [<a id="cit-LOransac2003" href="#call-LOransac2003">LOransac2003</a>] O. Chum, J. Matas and J. Kittler, ``_Locally Optimized RANSAC_'', Pattern Recognition, 2003. [<a id="cit-PROSAC2005" href="#call-PROSAC2005">PROSAC2005</a>] O. Chum and J. Matas, ``_Matching with PROSAC -- Progressive Sample Consensus_'', Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Volume 1 - Volume 01, 2005. [<a id="cit-Degensac2005" href="#call-Degensac2005">Degensac2005</a>] O. Chum, T. Werner and J. Matas, ``_Two-View Geometry Estimation Unaffected by a Dominant Plane_'', CVPR, 2005. [<a id="cit-RANSACSurvey2009" href="#call-RANSACSurvey2009">RANSACSurvey2009</a>] S. Choi, T. Kim and W. Yu, ``_Performance Evaluation of RANSAC Family._'', BMVC, 2009. [<a id="cit-VideoGoogle2003" href="#call-VideoGoogle2003">VideoGoogle2003</a>] J. Sivic and A. Zisserman, ``_Video Google: A Text Retrieval Approach to Object Matching in Videos_'', ICCV, 2003. [<a id="cit-Philbin07" href="#call-Philbin07">Philbin07</a>] J. Philbin, O. Chum, M. Isard <em>et al.</em>, ``_Object Retrieval with Large Vocabularies and Fast Spatial Matching_'', CVPR, 2007. [<a id="cit-Fergus03" href="#call-Fergus03">Fergus03</a>] R. Fergus, P. Perona and A. Zisserman, ``_Object Class Recognition by Unsupervised Scale-Invariant Learning_'', CVPR, 2003. [<a id="cit-CsurkaBoK2004" href="#call-CsurkaBoK2004">CsurkaBoK2004</a>] C.D. G. Csurka, J. Willamowski, L. Fan <em>et al.</em>, ``_Visual Categorization with Bags of Keypoints_'', ECCV, 2004. [<a id="cit-Lazebnik06" href="#call-Lazebnik06">Lazebnik06</a>] S. Lazebnik, C. Schmid and J. Ponce, ``_Beyond Bags of Features: Spatial Pyramid Matching for Recognizing Natural Scene Categories_'', CVPR, 2006. [<a id="cit-Chum2007Exemplar" href="#call-Chum2007Exemplar">Chum2007Exemplar</a>] O. {Chum} and A. {Zisserman}, ``_An Exemplar Model for Learning Object Classes_'', CVPR, 2007. [<a id="cit-HoG2005" href="#call-HoG2005">HoG2005</a>] N. {Dalal} and B. {Triggs}, ``_Histograms of oriented gradients for human detection_'', CVPR, 2005. [<a id="cit-Superparsing2010" href="#call-Superparsing2010">Superparsing2010</a>] J. Tighe and S. Lazebnik, ``_SuperParsing: Scalable Nonparametric Image Parsing with Superpixels_'', ECCV, 2010. [<a id="cit-PhotoTourism2006" href="#call-PhotoTourism2006">PhotoTourism2006</a>] Snavely Noah, Seitz Steven M. and Szeliski Richard, ``_Photo Tourism: Exploring Photo Collections in 3D_'', ToG, vol. 25, number 3, pp. 835โ€“846, 2006. [<a id="cit-RomeInDay2009" href="#call-RomeInDay2009">RomeInDay2009</a>] Agarwal Sameer, Furukawa Yasutaka, Snavely Noah <em>et al.</em>, ``_Building Rome in a day_'', Communications of the ACM, vol. 54, number , pp. 105--112, 2011. [<a id="cit-COLMAP2016" href="#call-COLMAP2016">COLMAP2016</a>] J. Sch\"{o}nberger and J. Frahm, ``_Structure-From-Motion Revisited_'', CVPR, 2016. [<a id="cit-Se02" href="#call-Se02">Se02</a>] Se S., G. D. and Little J., ``_Mobile Robot Localization and Mapping with Uncertainty Using Scale-Invariant Visual Landmarks_'', IJRR, vol. 22, number 8, pp. 735--758, 2002. [<a id="cit-PTAM2007" href="#call-PTAM2007">PTAM2007</a>] G. {Klein} and D. {Murray}, ``_Parallel Tracking and Mapping for Small AR Workspaces_'', IEEE and ACM International Symposium on Mixed and Augmented Reality, 2007. [<a id="cit-Mur15" href="#call-Mur15">Mur15</a>] Mur-Artal R., Montiel J. and Tard{\'o}s J., ``_ORB-Slam: A Versatile and Accurate Monocular Slam System_'', IEEE Transactions on Robotics, vol. 31, number 5, pp. 1147--1163, 2015. [<a id="cit-Brown07" href="#call-Brown07">Brown07</a>] Brown M. and Lowe D., ``_Automatic Panoramic Image Stitching Using Invariant Features_'', IJCV, vol. 74, number , pp. 59--73, 2007. [<a id="cit-DualBootstrap2003" href="#call-DualBootstrap2003">DualBootstrap2003</a>] V. C., Tsai} {Chia-Ling and {Roysam} B., ``_The dual-bootstrap iterative closest point algorithm with application to retinal image registration_'', IEEE Transactions on Medical Imaging, vol. 22, number 11, pp. 1379-1394, 2003. [<a id="cit-AlexNet2012" href="#call-AlexNet2012">AlexNet2012</a>] Alex Krizhevsky, Ilya Sutskever and Geoffrey E., ``_ImageNet Classification with Deep Convolutional Neural Networks_'', 2012. [<a id="cit-Astounding2014" href="#call-Astounding2014">Astounding2014</a>] A. S., H. {Azizpour}, J. {Sullivan} <em>et al.</em>, ``_CNN Features Off-the-Shelf: An Astounding Baseline for Recognition_'', CVPRW, 2014. [<a id="cit-Melekhov2017relativePoseCnn" href="#call-Melekhov2017relativePoseCnn">Melekhov2017relativePoseCnn</a>] I. Melekhov, J. Ylioinas, J. Kannala <em>et al.</em>, ``_Relative Camera Pose Estimation Using Convolutional Neural Networks_'', , 2017. [online](https://arxiv.org/abs/1702.01381) [<a id="cit-PoseNet2015" href="#call-PoseNet2015">PoseNet2015</a>] A. Kendall, M. Grimes and R. Cipolla, ``_PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization_'', ICCV, 2015. [<a id="cit-sattler2019understanding" href="#call-sattler2019understanding">sattler2019understanding</a>] T. Sattler, Q. Zhou, M. Pollefeys <em>et al.</em>, ``_Understanding the limitations of cnn-based absolute camera pose regression_'', CVPR, 2019. [<a id="cit-zhou2019learn" href="#call-zhou2019learn">zhou2019learn</a>] Q. Zhou, T. Sattler, M. Pollefeys <em>et al.</em>, ``_To Learn or Not to Learn: Visual Localization from Essential Matrices_'', ICRA, 2020. [<a id="cit-pion2020benchmarking" href="#call-pion2020benchmarking">pion2020benchmarking</a>] !! _This reference was not found in biblio.bib _ !! [<a id="cit-Tatarchenko2019" href="#call-Tatarchenko2019">Tatarchenko2019</a>] M. Tatarchenko, S.R. Richter, R. Ranftl <em>et al.</em>, ``_What Do Single-View 3D Reconstruction Networks Learn?_'', CVPR, 2019. [<a id="cit-STN2015" href="#call-STN2015">STN2015</a>] M. Jaderberg, K. Simonyan and A. Zisserman, ``_Spatial transformer networks_'', NeurIPS, 2015. [<a id="cit-NALU2018" href="#call-NALU2018">NALU2018</a>] A. Trask, F. Hill, S.E. Reed <em>et al.</em>, ``_Neural arithmetic logic units_'', NeurIPS, 2018. [<a id="cit-NAU2020" href="#call-NAU2020">NAU2020</a>] A. Madsen and A. Rosenberg, ``_Neural Arithmetic Units_'', ICLR, 2020. [<a id="cit-GroupEqCNN2016" href="#call-GroupEqCNN2016">GroupEqCNN2016</a>] T. Cohen and M. Welling, ``_Group equivariant convolutional networks_'', ICML, 2016. [<a id="cit-MakeCNNShiftInvariant2019" href="#call-MakeCNNShiftInvariant2019">MakeCNNShiftInvariant2019</a>] R. Zhang, ``_Making convolutional networks shift-invariant again_'', ICML, 2019. [<a id="cit-AbsPositionCNN2020" href="#call-AbsPositionCNN2020">AbsPositionCNN2020</a>] M. Amirul, S. Jia and N. D., ``_How Much Position Information Do Convolutional Neural Networks Encode?_'', ICLR, 2020. [<a id="cit-AdvPatch2017" href="#call-AdvPatch2017">AdvPatch2017</a>] T. Brown, D. Mane, A. Roy <em>et al.</em>, ``_Adversarial patch_'', NeurIPSW, 2017. [<a id="cit-OnePixelAttack2019" href="#call-OnePixelAttack2019">OnePixelAttack2019</a>] Su Jiawei, Vargas Danilo Vasconcellos and Sakurai Kouichi, ``_One pixel attack for fooling deep neural networks_'', IEEE Transactions on Evolutionary Computation, vol. 23, number 5, pp. 828--841, 2019. [<a id="cit-cv4action2019" href="#call-cv4action2019">cv4action2019</a>] Zhou Brady, Kr{\"a}henb{\"u}hl Philipp and Koltun Vladlen, ``_Does computer vision matter for action?_'', Science Robotics, vol. 4, number 30, pp. , 2019. [<a id="cit-HardNet2017" href="#call-HardNet2017">HardNet2017</a>] A. Mishchuk, D. Mishkin, F. Radenovic <em>et al.</em>, ``_Working Hard to Know Your Neighbor's Margins: Local Descriptor Learning Loss_'', NeurIPS, 2017. [<a id="cit-KeyNet2019" href="#call-KeyNet2019">KeyNet2019</a>] A. Barroso-Laguna, E. Riba, D. Ponsa <em>et al.</em>, ``_Key.Net: Keypoint Detection by Handcrafted and Learned CNN Filters_'', ICCV, 2019. [<a id="cit-SuperPoint2017" href="#call-SuperPoint2017">SuperPoint2017</a>] Detone D., Malisiewicz T. and Rabinovich A., ``_Superpoint: Self-Supervised Interest Point Detection and Description_'', CVPRW Deep Learning for Visual SLAM, vol. , number , pp. , 2018. [<a id="cit-R2D22019" href="#call-R2D22019">R2D22019</a>] J. Revaud, ``_R2D2: Repeatable and Reliable Detector and Descriptor_'', NeurIPS, 2019. [<a id="cit-D2Net2019" href="#call-D2Net2019">D2Net2019</a>] M. Dusmanu, I. Rocco, T. Pajdla <em>et al.</em>, ``_D2-Net: A Trainable CNN for Joint Detection and Description of Local Features_'', CVPR, 2019. [<a id="cit-sarlin2019superglue" href="#call-sarlin2019superglue">sarlin2019superglue</a>] P. Sarlin, D. DeTone, T. Malisiewicz <em>et al.</em>, ``_SuperGlue: Learning Feature Matching with Graph Neural Networks_'', CVPR, 2020. [<a id="cit-gradslam2020" href="#call-gradslam2020">gradslam2020</a>] J. Krishna Murthy, G. Iyer and L. Paull, ``_gradSLAM: Dense SLAM meets Automatic Differentiation _'', ICRA, 2020 . [<a id="cit-CapsNet2011" href="#call-CapsNet2011">CapsNet2011</a>] G.E. Hinton, A. Krizhevsky and S.D. Wang, ``_Transforming auto-encoders_'', ICANN, 2011. [<a id="cit-CapsNet2017" href="#call-CapsNet2017">CapsNet2017</a>] S. Sabour, N. Frosst and G.E. Hinton, ``_Dynamic routing between capsules_'', NeurIPS, 2017. [<a id="cit-li2020extreme" href="#call-li2020extreme">li2020extreme</a>] Li Jianguo, Sun Mingjie and Zhang Changshui, ``_Extreme Values are Accurate and Robust in Deep Networks_'', , vol. , number , pp. , 2020. [online](https://openreview.net/forum?id=H1gHb1rFwr)
github_jupyter
``` ##### Comparisons between different plots import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib.ticker as ticker plt.rcParams['figure.figsize'] = [20, 15] df1 = pd.read_csv('1.txt', sep='\t') df2 = pd.read_csv('3.txt', sep='\t') # scale func to show x-axis in years scale_x = 12 ticks_x = ticker.FuncFormatter(lambda x, pos: '{0:g}'.format(x/scale_x)) # blood_slide_prevalence plot fig = plt.figure() ax1 = fig.add_subplot(211) ax1.plot(df1['blood_slide_prev']) ax1.xaxis.set_major_locator(ticker.MultipleLocator(12*5)) ax1.xaxis.set_major_formatter(ticks_x) ax1.set_xlabel('years') ax1.set_ylabel('blood slide prev') ax1.set_title('Blood Slide Prev. Plot #0') # sum up total num of parasites in the silumation tot_para = df.iloc[:,22:150].sum(axis=1) # parasite freq plot ax2 = fig.add_subplot(212) # grouped comparisons for DHA-PPQ resistance ax2.plot(df1.iloc[:,0:151].filter(regex='.....C1.', axis=1).sum(axis=1)/tot_para, label='*C1.(wild)') ax2.plot(df1.iloc[:,0:151].filter(regex='.....C2.', axis=1).sum(axis=1)/tot_para, label='*C2.(PPQ)') ax2.plot(df1.iloc[:,0:151].filter(regex='.....Y1.', axis=1).sum(axis=1)/tot_para, label='*Y1.(Artim.)') ax2.plot(df1.iloc[:,0:151].filter(regex='.....Y2.', axis=1).sum(axis=1)/tot_para, label='*Y2.(double)') ax2.plot(df1.iloc[:,0:151].filter(regex='TY......', axis=1).sum(axis=1)/tot_para, label='TY*(AQ)') ax2.plot(df1.iloc[:,0:151].filter(regex='KN......', axis=1).sum(axis=1)/tot_para, label='KN*(Lum)') # format x-axis ax2.xaxis.set_major_locator(ticker.MultipleLocator(12*5)) ax2.xaxis.set_major_formatter(ticks_x) # format y-axis #ax2.yaxis.set_major_formatter(ticker.PercentFormatter(xmax=1.0)) ax2.set_xlabel('years') ax2.set_ylabel('parasite freq') ax2.set_title('Parasite Freq. Plot #0') ax2.legend() # DHAPPQ ASAQ AL DHAPPQ ASAQ AL DHAPPQ ASAQ AL # plot 1s # blood_slide_prevalence plot fig = plt.figure() ax1 = fig.add_subplot(211) ax1.plot(df1['blood_slide_prev']) ax1.xaxis.set_major_locator(ticker.MultipleLocator(60)) ax1.xaxis.set_major_formatter(ticks_x) ax1.set_xlabel('years') ax1.set_ylabel('blood slide prev') ax1.set_title('Blood Slide Prev. Plot #1') # sum up total num of parasites in the silumation tot_para = df1.iloc[:,22:150].sum(axis=1) # parasite freq plot ax2 = fig.add_subplot(212) # grouped comparisons for DHA-PPQ resistance ax2.plot(df2.iloc[:,0:151].filter(regex='.....C1.', axis=1).sum(axis=1)/tot_para, label='*C1.(wild)') ax2.plot(df2.iloc[:,0:151].filter(regex='.....C2.', axis=1).sum(axis=1)/tot_para, label='*C2.(PPQ)') ax2.plot(df2.iloc[:,0:151].filter(regex='.....Y1.', axis=1).sum(axis=1)/tot_para, label='*Y1.(Artim.)') ax2.plot(df2.iloc[:,0:151].filter(regex='.....Y2.', axis=1).sum(axis=1)/tot_para, label='*Y2.(double)') ax2.plot(df2.iloc[:,0:151].filter(regex='TY......', axis=1).sum(axis=1)/tot_para, label='TY*(AQ)') ax2.plot(df2.iloc[:,0:151].filter(regex='KN......', axis=1).sum(axis=1)/tot_para, label='KN*(Lum)') # format x-axis ax2.xaxis.set_major_locator(ticker.MultipleLocator(12*5)) ax2.xaxis.set_major_formatter(ticks_x) # format y-axis #ax2.yaxis.set_major_formatter(ticker.PercentFormatter(xmax=1.0)) ax2.set_xlabel('years') ax2.set_ylabel('parasite freq') ax2.set_title('Parasite Freq. Plot #1') ax2.legend() # ASAQ AL ASAQ AL ASAQ AL ASAQ AL ASAQ # plot 2s # blood_slide_prevalence plot majors = [10,13,16,19,22,25,28,31,34,37,40,43,46,49] fig = plt.figure() ax1 = fig.add_subplot(211) ax1.plot(df2['blood_slide_prev']) ax1.xaxis.set_major_locator(ticker.FixedLocator([i*12 for i in majors])) # 3-yr rotation ax1.xaxis.set_major_formatter(ticks_x) ax1.set_xlabel('years') ax1.set_ylabel('blood slide prev') ax1.set_title('Blood Slide Prev. Plot #2') # sum up total num of parasites in the silumation tot_para = df2.iloc[:,22:150].sum(axis=1) # parasite freq plot ax2 = fig.add_subplot(212) # grouped comparisons for DHA-PPQ resistance ax2.plot(df2.iloc[:,0:151].filter(regex='.....C1.', axis=1).sum(axis=1)/tot_para, label='wild') ax2.plot(df2.iloc[:,0:151].filter(regex='.....C2.', axis=1).sum(axis=1)/tot_para, label='PPQ-res') ax2.plot(df2.iloc[:,0:151].filter(regex='.....Y1.', axis=1).sum(axis=1)/tot_para, label='Artim.-res') ax2.plot(df2.iloc[:,0:151].filter(regex='.....Y2.', axis=1).sum(axis=1)/tot_para, label='PPQ-Art-double') ax2.plot(df2.iloc[:,0:151].filter(regex='TY......', axis=1).sum(axis=1)/tot_para, label='AQ-res') ax2.plot(df2.iloc[:,0:151].filter(regex='KN......', axis=1).sum(axis=1)/tot_para, label='Lum-res') # format x-axis ax2.xaxis.set_major_locator(ticker.FixedLocator([i*12 for i in majors])) # 3-yr rotation ax2.xaxis.set_major_formatter(ticks_x) # format y-axis #ax2.yaxis.set_major_formatter(ticker.PercentFormatter(xmax=1.0)) ax2.set_xlabel('years') ax2.set_ylabel('parasite freq') ax2.set_title('Parasite Freq. Plot #2') ax2.legend() # DHAPPQ ASAQ AL DHAPPQ ASAQ AL DHAPPQ ASAQ AL DHAPPQ ASAQ AL DHAPPQ ASAQ ```
github_jupyter
# Plotting with [cartopy](https://scitools.org.uk/cartopy/docs/latest/) From Cartopy website: * Cartopy is a Python package designed for geospatial data processing in order to produce maps and other geospatial data analyses. * Cartopy makes use of the powerful PROJ.4, NumPy and Shapely libraries and includes a programmatic interface built on top of Matplotlib for the creation of publication quality maps. * Key features of cartopy are its object oriented projection definitions, and its ability to transform points, lines, vectors, polygons and images between those projections. * You will find cartopy especially useful for large area / small scale data, where Cartesian assumptions of spherical data traditionally break down. If youโ€™ve ever experienced a singularity at the pole or a cut-off at the dateline, it is likely you will appreciate cartopyโ€™s unique features! ``` import numpy as np import matplotlib.pyplot as plt import xarray as xr import cartopy.crs as ccrs ``` # Read in data using xarray - Read in the Saildrone USV file either from a local disc `xr.open_dataset(file)` - change latitude and longitude to lat and lon `.rename({'longitude':'lon','latitude':'lat'})` ``` file = '../data/saildrone-gen_5-antarctica_circumnavigation_2019-sd1020-20190119T040000-20190803T043000-1440_minutes-v1.1564857794963.nc' ds_usv = ``` # Open the dataset, mask land, plot result * `ds_sst = xr.open_dataset(url)` * use `ds_sst = ds_sst.where(ds_sst.mask==1)` to mask values equal to 1 ``` #If you are offline use the first url #url = '../data/20111101120000-CMC-L4_GHRSST-SSTfnd-CMC0.2deg-GLOB-v02.0-fv02.0.nc' url = 'https://podaac-opendap.jpl.nasa.gov/opendap/allData/ghrsst/data/GDS2/L4/GLOB/CMC/CMC0.2deg/v2/2011/305/20111101120000-CMC-L4_GHRSST-SSTfnd-CMC0.2deg-GLOB-v02.0-fv02.0.nc' ``` ## explore the in situ data and quickly plot using cartopy * first set up the axis with the projection you want: https://scitools.org.uk/cartopy/docs/latest/crs/projections.html * plot to that axis and tell the projection that your data is in #### Run the cell below and see what the image looks like. Then try adding in the lines below, one at a time, and re-run cell to see what happens * set a background image `ax.stock_img()` * draw coastlines `ax.coastlines(resolution='50m')` * add a colorbary and label it `cax = plt.colorbar(cs1)` `cax.set_label('SST (K)')` ``` #for polar data, plot temperature datamin = 0 datamax = 12 ax = plt.axes(projection=ccrs.SouthPolarStereo()) #here is where you set your axis projection (ds_sst.analysed_sst-273.15).plot(ax=ax, transform=ccrs.PlateCarree(), #set data projection vmin=datamin, #data min vmax=datamax) #data min cs1 = ax.scatter(ds_usv.lon, ds_usv.lat, transform=ccrs.PlateCarree(), #set data projection s=10.0, #size for scatter point c=ds_usv.TEMP_CTD_MEAN, #make the color of the scatter point equal to the USV temperature edgecolor='none', #no edgecolor cmap='jet', #colormap vmin=datamin, #data min vmax=datamax) #data max ax.set_extent([-180, 180, -90, -45], crs=ccrs.PlateCarree()) #data projection ``` # Plot the salinity * Take the code from above but use `c=ds_usv.SAL_MEAN` * Run the code, what looks wrong? * Change `datamin` and `datamax` ``` ``` # Let's plot some data off California * Read in data from a cruise along the California / Baja Coast * `ds_usv = xr.open_dataset(url).rename({'longitude':'lon','latitude':'lat'})` ``` #use the first URL if you are offline #url = '../data/saildrone-gen_4-baja_2018-sd1002-20180411T180000-20180611T055959-1_minutes-v1.nc' url = 'https://podaac-opendap.jpl.nasa.gov/opendap/hyrax/allData/insitu/L2/saildrone/Baja/saildrone-gen_4-baja_2018-sd1002-20180411T180000-20180611T055959-1_minutes-v1.nc' ``` * Plot the data using the code from above, but change the projection `ax = plt.axes(projection=ccrs.PlateCarree())` ``` ``` * Zoom into the region of the cruise * First calculate the lat/lon box<br> `lonmin,lonmax = ds_usv.lon.min().data-2,ds_usv.lon.max().data+2`<br> `latmin,latmax = ds_usv.lat.min().data-2,ds_usv.lat.max().data+2` * Then, after plotting the data, change the extent `ax.set_extent([lonmin,lonmax,latmin,latmax], crs=ccrs.PlateCarree())`
github_jupyter
# Initialization Welcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well chosen initialization can: - Speed up the convergence of gradient descent - Increase the odds of gradient descent converging to a lower training (and generalization) error To get started, run the following cell to load the packages and the planar dataset you will try to classify. ``` import numpy as np import matplotlib.pyplot as plt import sklearn import sklearn.datasets from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec %matplotlib inline plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # load image dataset: blue/red dots in circles train_X, train_Y, test_X, test_Y = load_dataset() ``` You would like a classifier to separate the blue dots from the red dots. ## 1 - Neural Network model You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument. - *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values. - *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls. ``` def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"): """ Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID. Arguments: X -- input data, of shape (2, number of examples) Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples) learning_rate -- learning rate for gradient descent num_iterations -- number of iterations to run gradient descent print_cost -- if True, print the cost every 1000 iterations initialization -- flag to choose which initialization to use ("zeros","random" or "he") Returns: parameters -- parameters learnt by the model """ grads = {} costs = [] # to keep track of the loss m = X.shape[1] # number of examples layers_dims = [X.shape[0], 10, 5, 1] # Initialize parameters dictionary. if initialization == "zeros": parameters = initialize_parameters_zeros(layers_dims) elif initialization == "random": parameters = initialize_parameters_random(layers_dims) elif initialization == "he": parameters = initialize_parameters_he(layers_dims) # Loop (gradient descent) for i in range(0, num_iterations): # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID. a3, cache = forward_propagation(X, parameters) # Loss cost = compute_loss(a3, Y) # Backward propagation. grads = backward_propagation(X, Y, cache) # Update parameters. parameters = update_parameters(parameters, grads, learning_rate) # Print the loss every 1000 iterations if print_cost and i % 1000 == 0: print("Cost after iteration {}: {}".format(i, cost)) costs.append(cost) # plot the loss plt.plot(costs) plt.ylabel('cost') plt.xlabel('iterations (per hundreds)') plt.title("Learning rate =" + str(learning_rate)) plt.show() return parameters ``` ## 2 - Zero initialization There are two types of parameters to initialize in a neural network: - the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$ - the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$ **Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but lets try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes. ``` # GRADED FUNCTION: initialize_parameters_zeros def initialize_parameters_zeros(layers_dims): """ Arguments: layer_dims -- python array (list) containing the size of each layer. Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": W1 -- weight matrix of shape (layers_dims[1], layers_dims[0]) b1 -- bias vector of shape (layers_dims[1], 1) ... WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1]) bL -- bias vector of shape (layers_dims[L], 1) """ parameters = {} L = len(layers_dims) # number of layers in the network for l in range(1, L): ### START CODE HERE ### (โ‰ˆ 2 lines of code) parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l-1])) parameters['b' + str(l)] = np.zeros((layers_dims[l], 1)) ### END CODE HERE ### return parameters parameters = initialize_parameters_zeros([3,2,1]) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ``` **Expected Output**: <table> <tr> <td> **W1** </td> <td> [[ 0. 0. 0.] [ 0. 0. 0.]] </td> </tr> <tr> <td> **b1** </td> <td> [[ 0.] [ 0.]] </td> </tr> <tr> <td> **W2** </td> <td> [[ 0. 0.]] </td> </tr> <tr> <td> **b2** </td> <td> [[ 0.]] </td> </tr> </table> Run the following code to train your model on 15,000 iterations using zeros initialization. ``` parameters = model(train_X, train_Y, initialization = "zeros") print ("On the train set:") predictions_train = predict(train_X, train_Y, parameters) print ("On the test set:") predictions_test = predict(test_X, test_Y, parameters) ``` The performance is really bad, and the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Lets look at the details of the predictions and the decision boundary: ``` print ("predictions_train = " + str(predictions_train)) print ("predictions_test = " + str(predictions_test)) plt.title("Model with Zeros initialization") axes = plt.gca() axes.set_xlim([-1.5,1.5]) axes.set_ylim([-1.5,1.5]) plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) ``` The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. <font color='blue'> **What you should remember**: - The weights $W^{[l]}$ should be initialized randomly to break symmetry. - It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. ## 3 - Random initialization To break symmetry, lets intialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are intialized randomly, but to very large values. **Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running several times your code gives you always the same initial values for the parameters. ``` # GRADED FUNCTION: initialize_parameters_random def initialize_parameters_random(layers_dims): """ Arguments: layer_dims -- python array (list) containing the size of each layer. Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": W1 -- weight matrix of shape (layers_dims[1], layers_dims[0]) b1 -- bias vector of shape (layers_dims[1], 1) ... WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1]) bL -- bias vector of shape (layers_dims[L], 1) """ np.random.seed(3) # This seed makes sure your "random" numbers will be the as ours parameters = {} L = len(layers_dims) # integer representing the number of layers for l in range(1, L): ### START CODE HERE ### (โ‰ˆ 2 lines of code) parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1])*10 parameters['b' + str(l)] = np.zeros((layers_dims[l], 1)) ### END CODE HERE ### return parameters parameters = initialize_parameters_random([3, 2, 1]) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ``` **Expected Output**: <table> <tr> <td> **W1** </td> <td> [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] </td> </tr> <tr> <td> **b1** </td> <td> [[ 0.] [ 0.]] </td> </tr> <tr> <td> **W2** </td> <td> [[-0.82741481 -6.27000677]] </td> </tr> <tr> <td> **b2** </td> <td> [[ 0.]] </td> </tr> </table> Run the following code to train your model on 15,000 iterations using random initialization. ``` parameters = model(train_X, train_Y, initialization = "random") print ("On the train set:") predictions_train = predict(train_X, train_Y, parameters) print ("On the test set:") predictions_test = predict(test_X, test_Y, parameters) ``` If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results. than before. The model is no longer outputting all 0s. ``` print (predictions_train) print (predictions_test) plt.title("Model with large random initialization") axes = plt.gca() axes.set_xlim([-1.5,1.5]) axes.set_ylim([-1.5,1.5]) plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) ``` **Observations**: - The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity. - Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization. <font color='blue'> **In summary**: - Initializing weights to very large random values does not work well. - Hopefully intializing with small random values does better. The important question is: how small should be these random values be? Lets find out in the next part! ## 4 - He initialization Finally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.) **Exercise**: Implement the following function to initialize your parameters with He initialization. **Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation. ``` # GRADED FUNCTION: initialize_parameters_he def initialize_parameters_he(layers_dims): """ Arguments: layer_dims -- python array (list) containing the size of each layer. Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": W1 -- weight matrix of shape (layers_dims[1], layers_dims[0]) b1 -- bias vector of shape (layers_dims[1], 1) ... WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1]) bL -- bias vector of shape (layers_dims[L], 1) """ import math np.random.seed(3) parameters = {} L = len(layers_dims) - 1 # integer representing the number of layers for l in range(1, L + 1): ### START CODE HERE ### (โ‰ˆ 2 lines of code) parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1])*math.sqrt(2./layers_dims[l-1]) parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))*math.sqrt(2./layers_dims[l-1]) ### END CODE HERE ### return parameters parameters = initialize_parameters_he([2, 4, 1]) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ``` **Expected Output**: <table> <tr> <td> **W1** </td> <td> [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] </td> </tr> <tr> <td> **b1** </td> <td> [[ 0.] [ 0.] [ 0.] [ 0.]] </td> </tr> <tr> <td> **W2** </td> <td> [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] </td> </tr> <tr> <td> **b2** </td> <td> [[ 0.]] </td> </tr> </table> Run the following code to train your model on 15,000 iterations using He initialization. ``` parameters = model(train_X, train_Y, initialization = "he") print ("On the train set:") predictions_train = predict(train_X, train_Y, parameters) print ("On the test set:") predictions_test = predict(test_X, test_Y, parameters) plt.title("Model with He initialization") axes = plt.gca() axes.set_xlim([-1.5,1.5]) axes.set_ylim([-1.5,1.5]) plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) ``` **Observations**: - The model with He initialization separates the blue and the red dots very well in a small number of iterations. ## 5 - Conclusions You have seen three different types of initializations. For the same number of iterations and same hyperparameters the comparison is: <table> <tr> <td> **Model** </td> <td> **Train accuracy** </td> <td> **Problem/Comment** </td> </tr> <td> 3-layer NN with zeros initialization </td> <td> 50% </td> <td> fails to break symmetry </td> <tr> <td> 3-layer NN with large random initialization </td> <td> 83% </td> <td> too large weights </td> </tr> <tr> <td> 3-layer NN with He initialization </td> <td> 99% </td> <td> recommended method </td> </tr> </table> <font color='blue'> **What you should remember from this notebook**: - Different initializations lead to different results - Random initialization is used to break symmetry and make sure different hidden units can learn different things - Don't intialize to values that are too large - He initialization works well for networks with ReLU activations.
github_jupyter
# Gazebo proxy The Gazebo proxy is an implementation of interfaces with all services provided by the `gazebo_ros_pkgs`. It allows easy use and from of the simulation through Python. It can be configured for different `ROS_MASTER_URI` and `GAZEBO_MASTER_URI` environment variables to access instances of Gazebo running in other hosts/ports. The tutorial below will make use of the simulation manager to start instances of Gazebo. ``` # Importing the Gazebo proxy from pcg_gazebo.task_manager import GazeboProxy ``` The Gazebo proxy may also work with an instance of Gazebo that has been started external to the scope of this package, for example by running ``` roslaunch gazebo_ros empty_world.launch ``` The only instance will be found by using the input hostname and ports for which they are running. Here we will use the simulation manager. ``` # If there is a Gazebo instance running, you can spawn the box into the simulation from pcg_gazebo.task_manager import Server # First create a simulation server server = Server() # Create a simulation manager named default server.create_simulation('default') simulation = server.get_simulation('default') # Run an instance of the empty.world scenario # This is equivalent to run # roslaunch gazebo_ros empty_world.launch # with all default parameters if not simulation.create_gazebo_empty_world_task(): raise RuntimeError('Task for gazebo empty world could not be created') # A task named 'gazebo' the added to the tasks list print(simulation.get_task_list()) # But it is still not running print('Is Gazebo running: {}'.format(simulation.is_task_running('gazebo'))) # Run Gazebo simulation.run_all_tasks() ``` Adding some models to the simulation to demonstrate the Gazebo proxy methods. ``` # Now create the Gazebo proxy with the default parameters. # If these input arguments are not provided, they will be used per default. gazebo_proxy = simulation.get_gazebo_proxy() # The timeout argument will be used raise an exception in case Gazebo # fails to start from pcg_gazebo.simulation import create_object from pcg_gazebo.generators import WorldGenerator generator = WorldGenerator(gazebo_proxy=gazebo_proxy) box = create_object('box') box.add_inertial(mass=20) print(box.to_sdf('model')) generator.spawn_model( model=box, robot_namespace='box_1', pos=[-2, -2, 3]) generator.spawn_model( model=box, robot_namespace='box_2', pos=[2, 2, 3]) ``` ## Pausing/unpausing the simulation ``` from time import time, sleep pause_timeout = 10 # seconds start_time = time() # Pausing simulation gazebo_proxy.pause() print('Simulation time before pause={}'.format(gazebo_proxy.sim_time)) while time() - start_time < pause_timeout: print('Gazebo paused, simulation time={}'.format(gazebo_proxy.sim_time)) sleep(1) print('Unpausing simulation!') gazebo_proxy.unpause() sleep(2) print('Simulation time after pause={}'.format(gazebo_proxy.sim_time)) ``` ## Get world properties The world properties return * Simulation time (`sim_time`) * List of names of models (`model_names`) * Is rendering enabled flag (`rendering_enabled`) The return of this function is simply the service object [`GetWorldProperties`](https://github.com/ros-simulation/gazebo_ros_pkgs/blob/kinetic-devel/gazebo_msgs/srv/GetWorldProperties.srv). ``` # The world properties returns the following gazebo_proxy.get_world_properties() ``` ## Model properties ``` # Get list of models gazebo_proxy.get_model_names() # Get model properties for model in gazebo_proxy.get_model_names(): print(model) print(gazebo_proxy.get_model_properties(model)) print('-----------------') # Get model state for model in gazebo_proxy.get_model_names(): print(model) print(gazebo_proxy.get_model_state(model_name=model, reference_frame='world')) print('-----------------') # Check if model exists print('Does ground_plane exist? {}'.format(gazebo_proxy.model_exists('ground_plane'))) print('Does my_model exist? {}'.format(gazebo_proxy.model_exists('my_model'))) # Get list of link names for a model for model in gazebo_proxy.get_model_names(): print(model) print(gazebo_proxy.get_link_names(model)) print('-----------------') # Test if model has a link print('Does ground_plane have a link named link? {}'.format(gazebo_proxy.has_link(model_name='ground_plane', link_name='link'))) # Get link properties for model in gazebo_proxy.get_model_names(): print(model) for link in gazebo_proxy.get_link_names(model_name=model): print(' - ' + link) print(gazebo_proxy.get_link_properties(model_name=model, link_name=link)) print('-----------------') print('==================') # Get link state for model in gazebo_proxy.get_model_names(): print(model) for link in gazebo_proxy.get_link_names(model_name=model): print(' - ' + link) print(gazebo_proxy.get_link_state(model_name=model, link_name=link)) print('-----------------') print('==================') ``` ## Get physics properties The physics properties returns the [GetPhysicsProperties](https://github.com/ros-simulation/gazebo_ros_pkgs/blob/kinetic-devel/gazebo_msgs/srv/GetPhysicsProperties.srv) response with the current parameters for the physics engine. Currently only the parameters for the ODE engine can be retrieved. ``` print(gazebo_proxy.get_physics_properties()) ``` ## Apply wrench ``` # Applying wrench to a link in the simulation # The input arguments are # - model_name # - link_name # - force: force vector [x, y, z] # - torque: torque vector [x, y, z] # - start_time: in seconds, if it is a value lower than simulation time, the wrench will be applied as soon as possible # - duration: in seconds # if duration < 0, apply wrench continuously without end # if duration = 0, do nothing # if duration < step size, apply wrench for one step size # - reference_point: [x, y, z] coordinate point where wrench will be applied wrt the reference frame # - reference_frame: reference frame for the reference point, if None it will be set as the provided model_name::link_name gazebo_proxy.apply_body_wrench( model_name='box_1', link_name='box', force=[100, 0, 0], torque=[0, 0, 100], start_time=0, duration=5, reference_point=[0, 0, 0], reference_frame=None) gazebo_proxy.apply_body_wrench( model_name='box_2', link_name='box', force=[10, 0, 200], torque=[0, 0, 150], start_time=0, duration=4, reference_point=[0, 0, 0], reference_frame=None) start_time = time() while time() - start_time < 10: sleep(1) ``` ## Move models in the simulation ``` gazebo_proxy.move_model( model_name='box_1', pos=[2, 2, 15], rot=[0, 0, 0], reference_frame='world') gazebo_proxy.move_model( model_name='box_2', pos=[-2, -1, 4], rot=[0, 0, 0], reference_frame='world') # End the simulation by killing the Gazebo task simulation.kill_all_tasks() ```
github_jupyter
``` import tensorflow as tf import numpy as np from copy import deepcopy epoch = 20 batch_size = 64 size_layer = 64 dropout_rate = 0.5 n_hops = 2 class BaseDataLoader(): def __init__(self): self.data = { 'size': None, 'val':{ 'inputs': None, 'questions': None, 'answers': None,}, 'len':{ 'inputs_len': None, 'inputs_sent_len': None, 'questions_len': None, 'answers_len': None} } self.vocab = { 'size': None, 'word2idx': None, 'idx2word': None, } self.params = { 'vocab_size': None, '<start>': None, '<end>': None, 'max_input_len': None, 'max_sent_len': None, 'max_quest_len': None, 'max_answer_len': None, } class DataLoader(BaseDataLoader): def __init__(self, path, is_training, vocab=None, params=None): super().__init__() data, lens = self.load_data(path) if is_training: self.build_vocab(data) else: self.demo = data self.vocab = vocab self.params = deepcopy(params) self.is_training = is_training self.padding(data, lens) def load_data(self, path): data, lens = bAbI_data_load(path) self.data['size'] = len(data[0]) return data, lens def build_vocab(self, data): signals = ['<pad>', '<unk>', '<start>', '<end>'] inputs, questions, answers = data i_words = [w for facts in inputs for fact in facts for w in fact if w != '<end>'] q_words = [w for question in questions for w in question] a_words = [w for answer in answers for w in answer if w != '<end>'] words = list(set(i_words + q_words + a_words)) self.params['vocab_size'] = len(words) + 4 self.params['<start>'] = 2 self.params['<end>'] = 3 self.vocab['word2idx'] = {word: idx for idx, word in enumerate(signals + words)} self.vocab['idx2word'] = {idx: word for word, idx in self.vocab['word2idx'].items()} def padding(self, data, lens): inputs_len, inputs_sent_len, questions_len, answers_len = lens self.params['max_input_len'] = max(inputs_len) self.params['max_sent_len'] = max([fact_len for batch in inputs_sent_len for fact_len in batch]) self.params['max_quest_len'] = max(questions_len) self.params['max_answer_len'] = max(answers_len) self.data['len']['inputs_len'] = np.array(inputs_len) for batch in inputs_sent_len: batch += [0] * (self.params['max_input_len'] - len(batch)) self.data['len']['inputs_sent_len'] = np.array(inputs_sent_len) self.data['len']['questions_len'] = np.array(questions_len) self.data['len']['answers_len'] = np.array(answers_len) inputs, questions, answers = deepcopy(data) for facts in inputs: for sentence in facts: for i in range(len(sentence)): sentence[i] = self.vocab['word2idx'].get(sentence[i], self.vocab['word2idx']['<unk>']) sentence += [0] * (self.params['max_sent_len'] - len(sentence)) paddings = [0] * self.params['max_sent_len'] facts += [paddings] * (self.params['max_input_len'] - len(facts)) for question in questions: for i in range(len(question)): question[i] = self.vocab['word2idx'].get(question[i], self.vocab['word2idx']['<unk>']) question += [0] * (self.params['max_quest_len'] - len(question)) for answer in answers: for i in range(len(answer)): answer[i] = self.vocab['word2idx'].get(answer[i], self.vocab['word2idx']['<unk>']) self.data['val']['inputs'] = np.array(inputs) self.data['val']['questions'] = np.array(questions) self.data['val']['answers'] = np.array(answers) def bAbI_data_load(path, END=['<end>']): inputs = [] questions = [] answers = [] inputs_len = [] inputs_sent_len = [] questions_len = [] answers_len = [] for d in open(path): index = d.split(' ')[0] if index == '1': fact = [] if '?' in d: temp = d.split('\t') q = temp[0].strip().replace('?', '').split(' ')[1:] + ['?'] a = temp[1].split() + END fact_copied = deepcopy(fact) inputs.append(fact_copied) questions.append(q) answers.append(a) inputs_len.append(len(fact_copied)) inputs_sent_len.append([len(s) for s in fact_copied]) questions_len.append(len(q)) answers_len.append(len(a)) else: tokens = d.replace('.', '').replace('\n', '').split(' ')[1:] + END fact.append(tokens) return [inputs, questions, answers], [inputs_len, inputs_sent_len, questions_len, answers_len] train_data = DataLoader(path='qa5_three-arg-relations_train.txt',is_training=True) test_data = DataLoader(path='qa5_three-arg-relations_test.txt',is_training=False, vocab=train_data.vocab, params=train_data.params) START = train_data.params['<start>'] END = train_data.params['<end>'] def hop_forward(question, memory_o, memory_i, response_proj, inputs_len, questions_len, is_training): match = tf.matmul(question, memory_i, transpose_b=True) match = pre_softmax_masking(match, inputs_len) match = tf.nn.softmax(match) match = post_softmax_masking(match, questions_len) response = tf.matmul(match, memory_o) return response_proj(tf.concat([response, question], -1)) def pre_softmax_masking(x, seq_len): paddings = tf.fill(tf.shape(x), float('-inf')) T = tf.shape(x)[1] max_seq_len = tf.shape(x)[2] masks = tf.sequence_mask(seq_len, max_seq_len, dtype=tf.float32) masks = tf.tile(tf.expand_dims(masks, 1), [1, T, 1]) return tf.where(tf.equal(masks, 0), paddings, x) def post_softmax_masking(x, seq_len): T = tf.shape(x)[2] max_seq_len = tf.shape(x)[1] masks = tf.sequence_mask(seq_len, max_seq_len, dtype=tf.float32) masks = tf.tile(tf.expand_dims(masks, -1), [1, 1, T]) return (x * masks) def shift_right(x): batch_size = tf.shape(x)[0] start = tf.to_int32(tf.fill([batch_size, 1], START)) return tf.concat([start, x[:, :-1]], 1) def embed_seq(x, vocab_size, zero_pad=True): lookup_table = tf.get_variable('lookup_table', [vocab_size, size_layer], tf.float32) if zero_pad: lookup_table = tf.concat((tf.zeros([1, size_layer]), lookup_table[1:, :]), axis=0) return tf.nn.embedding_lookup(lookup_table, x) def position_encoding(sentence_size, embedding_size): encoding = np.ones((embedding_size, sentence_size), dtype=np.float32) ls = sentence_size + 1 le = embedding_size + 1 for i in range(1, le): for j in range(1, ls): encoding[i-1, j-1] = (i - (le-1)/2) * (j - (ls-1)/2) encoding = 1 + 4 * encoding / embedding_size / sentence_size return tf.convert_to_tensor(np.transpose(encoding)) def input_mem(x, vocab_size, max_sent_len, is_training): x = embed_seq(x, vocab_size) x = tf.layers.dropout(x, dropout_rate, training=is_training) pos = position_encoding(max_sent_len, size_layer) x = tf.reduce_sum(x * pos, 2) return x def quest_mem(x, vocab_size, max_quest_len, is_training): x = embed_seq(x, vocab_size) x = tf.layers.dropout(x, dropout_rate, training=is_training) pos = position_encoding(max_quest_len, size_layer) return (x * pos) class QA: def __init__(self, vocab_size): self.questions = tf.placeholder(tf.int32,[None,None]) self.inputs = tf.placeholder(tf.int32,[None,None,None]) self.questions_len = tf.placeholder(tf.int32,[None]) self.inputs_len = tf.placeholder(tf.int32,[None]) self.answers_len = tf.placeholder(tf.int32,[None]) self.answers = tf.placeholder(tf.int32,[None,None]) self.training = tf.placeholder(tf.bool) max_sent_len = train_data.params['max_sent_len'] max_quest_len = train_data.params['max_quest_len'] max_answer_len = train_data.params['max_answer_len'] lookup_table = tf.get_variable('lookup_table', [vocab_size, size_layer], tf.float32) lookup_table = tf.concat((tf.zeros([1, size_layer]), lookup_table[1:, :]), axis=0) with tf.variable_scope('questions'): question = quest_mem(self.questions, vocab_size, max_quest_len, self.training) with tf.variable_scope('memory_o'): memory_o = input_mem(self.inputs, vocab_size, max_sent_len, self.training) with tf.variable_scope('memory_i'): memory_i = input_mem(self.inputs, vocab_size, max_sent_len, self.training) with tf.variable_scope('interaction'): response_proj = tf.layers.Dense(size_layer) for _ in range(n_hops): answer = hop_forward(question, memory_o, memory_i, response_proj, self.inputs_len, self.questions_len, self.training) question = answer with tf.variable_scope('memory_o', reuse=True): embedding = tf.get_variable('lookup_table') cell = tf.nn.rnn_cell.LSTMCell(size_layer) vocab_proj = tf.layers.Dense(vocab_size) state_proj = tf.layers.Dense(size_layer) init_state = state_proj(tf.layers.flatten(answer)) init_state = tf.layers.dropout(init_state, dropout_rate, training=self.training) helper = tf.contrib.seq2seq.TrainingHelper( inputs = tf.nn.embedding_lookup(embedding, shift_right(self.answers)), sequence_length = tf.to_int32(self.answers_len)) encoder_state = tf.nn.rnn_cell.LSTMStateTuple(c=init_state, h=init_state) decoder = tf.contrib.seq2seq.BasicDecoder(cell = cell, helper = helper, initial_state = encoder_state, output_layer = vocab_proj) decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder = decoder, maximum_iterations = tf.shape(self.inputs)[1]) self.outputs = decoder_output.rnn_output helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(embedding = embedding, start_tokens = tf.tile( tf.constant([START], dtype=tf.int32), [tf.shape(self.inputs)[0]]), end_token = END) decoder = tf.contrib.seq2seq.BasicDecoder( cell = cell, helper = helper, initial_state = encoder_state, output_layer = vocab_proj) decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode( decoder = decoder, maximum_iterations = max_answer_len) self.logits = decoder_output.sample_id correct_pred = tf.equal(self.logits[:,0], self.answers[:,0]) self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) self.cost = tf.reduce_mean(tf.contrib.seq2seq.sequence_loss(logits = self.outputs, targets = self.answers, weights = tf.ones_like(self.answers, tf.float32))) self.optimizer = tf.train.AdamOptimizer().minimize(self.cost) tf.reset_default_graph() sess = tf.InteractiveSession() model = QA(train_data.params['vocab_size']) sess.run(tf.global_variables_initializer()) batching = (train_data.data['val']['inputs'].shape[0] // batch_size) * batch_size for i in range(epoch): total_cost, total_acc = 0, 0 for k in range(0, batching, batch_size): batch_questions = train_data.data['val']['questions'][k:k+batch_size] batch_inputs = train_data.data['val']['inputs'][k:k+batch_size] batch_inputs_len = train_data.data['len']['inputs_len'][k:k+batch_size] batch_questions_len = train_data.data['len']['questions_len'][k:k+batch_size] batch_answers_len = train_data.data['len']['answers_len'][k:k+batch_size] batch_answers = train_data.data['val']['answers'][k:k+batch_size] acc, cost, _ = sess.run([model.accuracy,model.cost,model.optimizer], feed_dict={model.questions:batch_questions, model.inputs:batch_inputs, model.inputs_len:batch_inputs_len, model.questions_len:batch_questions_len, model.answers_len:batch_answers_len, model.answers:batch_answers, model.training:True}) total_cost += cost total_acc += acc total_cost /= (train_data.data['val']['inputs'].shape[0] // batch_size) total_acc /= (train_data.data['val']['inputs'].shape[0] // batch_size) print('epoch %d, avg cost %f, avg acc %f'%(i+1,total_cost,total_acc)) testing_size = 32 batch_questions = test_data.data['val']['questions'][:testing_size] batch_inputs = test_data.data['val']['inputs'][:testing_size] batch_inputs_len = test_data.data['len']['inputs_len'][:testing_size] batch_questions_len = test_data.data['len']['questions_len'][:testing_size] batch_answers_len = test_data.data['len']['answers_len'][:testing_size] batch_answers = test_data.data['val']['answers'][:testing_size] logits = sess.run(model.logits, feed_dict={model.questions:batch_questions, model.inputs:batch_inputs, model.inputs_len:batch_inputs_len, model.questions_len:batch_questions_len, model.answers_len:batch_answers_len, model.training:False}) for i in range(testing_size): print('QUESTION:',' '.join([train_data.vocab['idx2word'][k] for k in batch_questions[i]])) print('REAL:',train_data.vocab['idx2word'][batch_answers[i,0]]) print('PREDICT:',train_data.vocab['idx2word'][logits[i,0]],'\n') ```
github_jupyter
#### ้€š่ฟ‡RNNไฝฟ็”จimdbๆ•ฐๆฎ้›†ๅฎŒๆˆๆƒ…ๆ„Ÿๅˆ†็ฑปไปปๅŠก ``` from __future__ import absolute_import,print_function,division,unicode_literals import tensorflow as tf import tensorflow.keras as keras import numpy as np import os tf.__version__ tf.random.set_seed(22) np.random.seed(22) os.environ['TF_CPP_LOG_LEVEL'] = '2' # ่ถ…ๅ‚ๆ•ฐ vocab_size = 10000 max_review_length = 80 embedding_dim = 100 units = 64 num_classes = 2 batch_size = 32 epochs = 10 # ๅŠ ่ฝฝๆ•ฐๆฎ้›† imdb = keras.datasets.imdb (train_data,train_labels),(test_data,test_labels) = imdb.load_data(num_words = vocab_size) train_data[0] len(train_data) # ๅปบ็ซ‹่ฏๅ…ธ word_index = imdb.get_word_index() word_index = {k:(v + 3) for k ,v in word_index.items()} word_index["<PAD>"] = 0 word_index["<START>"] = 1 word_index["<UNK>"] = 2 word_index["<UNSED>"] = 3 reversed_word_index = dict([(value,key) for (key,value) in word_index.items()]) def decode_review(text): return ' '.join([reversed_word_index.get(i,'?') for i in text]) decode_review(train_data[0]) train_data = train_data[:20000] val_data = train_data[20000:25000] train_labels = train_labels[:20000] val_labels = train_labels[20000:25000] # ่กฅ้ฝๆ•ฐๆฎ train_data = keras.preprocessing.sequence.pad_sequences(train_data,value = word_index["<PAD>"],padding = 'post',maxlen = max_review_length ) test_data = keras.preprocessing.sequence.pad_sequences(test_data,value = word_index["<PAD>"],padding = 'post',maxlen = max_review_length ) train_data[0] # ๆž„ๅปบๆจกๅž‹ class RNNModel(keras.Model): def __init__(self,units,num_classes,num_layers): super(RNNModel,self).__init__() self.units = units self.embedding = keras.layers.Embedding(vocab_size,embedding_dim,input_length = max_review_length) """ self.lstm = keras.layers.LSTM(units,return_sequences = True) self.lstm_2 = keras.layers.LSTM(units) """ self.lstm = keras.layers.Bidirectional(keras.layers.LSTM(self.units)) self.dense = keras.layers.Dense(1) def call(self,x,training = None,mask = None): x = self.embedding(x) x = self.lstm(x) x = self.dense(x) return x model.summary() model = RNNModel(units,num_classes,num_layers=2) model.compile(optimizer = keras.optimizers.Adam(0.001), loss = keras.losses.BinaryCrossentropy(from_logits = True), metrics = ['accuracy']) model.fit(train_data,train_labels, epochs = epochs,batch_size = batch_size, validation_data = (test_data,test_labels)) model.summary() result = model.evaluate(test_data,test_labels) # output:loss: 0.6751 - accuracy: 0.8002 def GRU_Model(): model = keras.Sequential([ keras.layers.Embedding(input_dim = vocab_size,output_dim = 32,input_length = max_review_length), keras.layers.GRU(32,return_sequences = True), keras.layers.GRU(1,activation = 'sigmoid',return_sequences = False) ]) model.compile(optimizer = keras.optimizers.Adam(0.001), loss = keras.losses.BinaryCrossentropy(from_logits = True), metrics = ['accuracy']) return model model = GRU_Model() model.summary() %%time history = model.fit(train_data,train_labels,batch_size = batch_size,epochs = epochs,validation_split = 0.1) import matplotlib.pyplot as plt plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.legend(['training','validation'], loc = 'upper left') plt.xlabel('epoch') plt.ylabel('accuracy') plt.show() ```
github_jupyter
``` from google.colab import drive #drive.flush_and_unmount() drive.mount('/content/drive') ``` # 05 Bayesian Linear Regression for Student Grade Prediction In this notebook, we will develop bayesian linear regression for student grade prediction. We will conduct EDA to analyze data, develop conventional linear regresion, implement Bayesian Linear Regression using [PyMC3](https://docs.pymc.io/) and interpret the results. What is more, we will show the posterior predictive of a data sample generated by bayesian models could be used as a trigger measure to detect anomaly data (fraud case). It consists of two parts: **Exploratory Data Analysis** (EDA) and **Modeling** parts. This is the second part for modeling. The agenda is as follow: 1. Develop linear regression for student grade prediction 2. Develop bayesian linear regressopm for student grade prediction ### Import Libraries ``` # Pandas and numpy for data manipulation import pandas as pd import numpy as np np.random.seed(123) # Matplotlib and seaborn for plotting import matplotlib.pyplot as plt %matplotlib inline import matplotlib matplotlib.rcParams['font.size'] = 8 matplotlib.rcParams['figure.figsize'] = (5, 5) import seaborn as sns from IPython.core.pylabtools import figsize # Scipy helper functions from scipy.stats import percentileofscore from scipy import stats ``` ## Load the data ``` datafolder = "/content/drive/My Drive/fraud_analysis/datasets/" file_name = "student-mat.csv" df_data = pd.read_csv(datafolder + file_name, sep=';', index_col=None) df_data.rename(columns={'G3': 'Grade'}, inplace=True) df_data = df_data[~df_data['Grade'].isin([0, 1])] df_data.head(2).append(df_data.tail(2)) ``` ### Import Libraries ``` # Standard ML Models for comparison from sklearn.linear_model import LinearRegression # Splitting data into training/testing from sklearn.model_selection import train_test_split from sklearn.preprocessing import MinMaxScaler # Metrics from sklearn.metrics import mean_squared_error # Distributions import scipy ``` ## Baseline and Linear Regression for Students Grade Prediction In the following we are going to build machine learning model to predict the student grade. we will select several columns as features: 1. failures number of past class failures (numeric: n if 1<=n<3, else 4) 2. Medu mother's education (numeric: 0 - none, 1 - primary education (4th grade), 2 means 5th to 9th grade, 3 means secondary education or 4 is higher education) 3. studytime weekly study time (numeric: 1 - <2 hours, 2 - 2 to 5 hours, 3 - 5 to 10 hours, or 4 - >10 hours) 4. absences number of school absences (numeric: from 0 to 93) 5. higher wants to take higher education (binary: yes or no) Usually, we need to conduct [feature selections/extraction](https://www.kaggle.com/kashnitsky/topic-6-feature-engineering-and-feature-selection) to arrive at those features. The target value will be the Grade column. In addition, since internet is the categorical features, we use one-hot encoding to convert it to numerical values Import Libraries ``` # Standard ML Models for comparison from sklearn.linear_model import LinearRegression # Splitting data into training/testing from sklearn.model_selection import train_test_split # Metrics from sklearn.metrics import mean_squared_error, mean_absolute_error # Distributions import scipy df_used = df_data[['failures', 'Medu', 'studytime', 'absences', 'higher', 'Grade']] df_used = pd.get_dummies(df_used) df_X = df_used[['failures', 'Medu', 'studytime', 'absences', 'higher_yes']] #store features df_X.rename(columns={'Medu': 'mother_edu'}, inplace=True) #store values df_y = df_used[['Grade']] df_X.head(3) # Split into training/testing sets with 25% split X_train, X_test, y_train, y_test = train_test_split(df_X, df_y, test_size = 0.25, random_state=123) ``` ### Use Linear Regression Model for predictions Sklearn provides very friendly functions. ``` lr = LinearRegression() lr.fit(X_train, y_train) ``` Evaluation Metrics For this regression task, we will use two standard metrics: * Mean Absolute Error (MAE): Average of the absolute value of the difference between predictions and the true values * Root Mean Squared Error (RMSE): The square root of the average of the squared differences between the predictions and the true values. Create a naive Baseline For a regression task, a simple naive baseline is to guess the median value on the training set for all testing cases. If our machine learning model cannot better this simple baseline, then perhaps we should try a different approach or features! ``` baseline = np.median(y_train) baseline_mae = np.mean(abs(baseline - y_test)) baseline_rmse = np.sqrt(np.mean((baseline - y_test) ** 2)) print('Baseline, MAE is %0.2f' % baseline_mae) print('Baseline, RMSE is %0.2f' % baseline_rmse) # Metrics predictions = lr.predict(X_test) mae = np.mean(abs(predictions - y_test)) rmse = np.sqrt(np.mean((predictions - y_test) ** 2)) print('Using Linear Regression, MAE is %0.2f' % mae) print('Using Linear Regression, MAE is %0.2f' % rmse) ols_formula = 'Grade = %0.2f +' % lr.intercept_ for i, col in enumerate(X_train.columns): ols_formula += ' %0.2f * %s +' % (lr.coef_[0][i], col) print(' '.join(ols_formula.split(' ')[:-1])) ``` ### Interpret model parameters It is quite intuitive. For the features: failures and absences, their coefficients are negative. However, the model parameters and its correponding prediction value are fixed numbers. It fails to capture **uncertainity**. In the following, we will develop bayesian linear regression to address the above issue. ## Using Bayesian Linear Regression We will create Bayesian Linear Regression in PyMC3. Markov Chain Monte Carlo algorithms will be used to draw samples from the posterior to approximate the the posterior for each of the model parameters. The version should be 3.8. ``` ! pip install pymc3==3.8 import pymc3 as pm print(pm.__version__) def model_build(df_train, df_label=None): """ build genearlized linear model """ with pm.Model() as model: sigma = pm.Uniform('sigma', 0, 10) #the error term is an uniform distribution num_fea = df_train.shape[1] mu_infe = pm.Normal('intercept', mu=0, sigma=10) #the bias term is an normal distribution (mean=0, sigma=10) for idx in range(num_fea): mu_infe = mu_infe + pm.Normal('coeff_for_{}'.format(df_train.columns[idx]), mu=0, sigma=1)*df_train.loc[:, df_train.columns[idx]] #the coefficient term for each feature is an normal distribution (mean=0, sigma=1) if df_label is None: # inference likelihood = pm.Normal('y', mu=mu_infe, sigma=sigma, observed = False) else: # training likelihood = pm.Normal('y', mu=mu_infe, sigma=sigma, observed = df_label['Grade'].values) return model ``` Monte Carlo sampling is design to estimate various characteristics of a distribution such as the mean, variance, kurtosis, or any other statistic. Markov chains involve a stochastic sequential process where we can sample states from some stationary distribution. The goal of MCMC is to design a Markov chain such that the stationary distribution of the chain is exactly the distribution that we are interesting in sampling from. This is called the **target distribution**. In other words, the states sampled from the Markov chain should follow the same statistics of samples drawn from the target distribution. The idea is to use some clever methods for setting up the proposal distribution such that no matter how we initialize each chain, we will convergence to the target distribution. ``` # Use MCMC algorithm to draw samples to approximate the posterior for model parameters (error term, bias term and all coefficients) with model_build(X_train, y_train): trace = pm.sample(draws=2000, chains = 2, tune = 500) ``` #### Check the posterior distribution for the model parameters $p(w|D)$ ``` print(pm.summary(trace).round(5)) # Shows the trace with a vertical line at the mean of the trace def plot_trace(trace): # Traceplot with vertical lines at the mean value ax = pm.traceplot(trace, figsize=(14, len(trace.varnames)*1.8), lines={k: v['mean'] for k, v in pm.summary(trace).iterrows()}) matplotlib.rcParams['font.size'] = 16 # Labels with the median value for i, mn in enumerate(pm.summary(trace)['mean']): ax[i, 0].annotate('{:0.2f}'.format(mn), xy = (mn, 0), xycoords = 'data', size = 8, xytext = (-18, 18), textcoords = 'offset points', rotation = 90, va = 'bottom', fontsize = 'large', color = 'red') plot_trace(trace); ``` The left side of the traceplot is the marginal posterior: the values for the variable are on the x-axis with the probability for the variable (as determined by sampling) on the y-axis. The different colored lines indicate that we performed two chains of Markov Chain Monte Carlo. From the left side we can see that there is a range of values for each weight. The right side shows the different sample values drawn as the sampling process runs. ``` pm.plot_posterior(trace, figsize = (10, 10)) ``` #### Makde prediction: posterior predictive distribution In linear regression, we only have a single best estimate for model parameters, which ignores uncertainity about model parameters. In bayesian linear regression, we are able to have the posterior distribution of model parameters $p(w|D$ depends on training data $D=[(x_0,y_0), \dots, (x_n,y_n)]$. Then, we can infer the posterior predictive distribution of the label $\tilde{y}$ given testing data $\tilde{x}$, which can be calculated by marginalizing the posterior distribution of model parameters and the distribution of $\tilde{y}$ given model parameters. $p(\tilde{y}|\tilde{x},D)=\int p(\tilde{y}|w,\tilde{x},D)p(w|D)dw$ MCMC is also used due to the intractable distribution $p(\tilde{y}|w,\tilde{x},D)p(w|D)$ ``` # sample the posterior predictive distribution with model_build(X_test): ppc = pm.sample_posterior_predictive(trace) post_predict = np.array(ppc['y']) print(post_predict.shape) ``` For each testing data sample, we obtain 4000 estimations instead of a single and fixed guess in LR. ``` true_test = y_test.Grade.values # check each sample predictive distribution def plot_posteriorestimation(estimates, actual): plt.figure(figsize(10, 10)) sns.distplot(estimates, hist = True, kde = True, bins = 19, hist_kws = {'edgecolor': 'k', 'color': 'darkblue'}, kde_kws = {'linewidth' : 4}, label = 'Estimated Dist.') plt.vlines(x = actual, ymin = 0, ymax = 0.15, linestyles = '--', colors = 'red', label = 'Observed Grade', linewidth = 2.5) mean_loc = np.mean(estimates) plt.vlines(x = mean_loc, ymin = 0, ymax = 0.15, linestyles = '-', colors = 'orange', label = 'Mean Estimate', linewidth = 2.5) plt.vlines(x = np.percentile(estimates, 95), ymin = 0, ymax = 0.08, linestyles = ':', colors = 'blue', label = '95% Confidence Level', linewidth = 2.5) plt.vlines(x = np.percentile(estimates, 5), ymin = 0, ymax = 0.08, linestyles = '-.', colors = 'blue', label = '5% Confidence Level', linewidth = 2.5) plt.legend(loc = 1) plt.title('Density Plot for Test Observation'); plt.xlabel('Grade'); plt.ylabel('Density'); print('True Grade = %d' % actual) print('Average Estimate = %0.4f' % mean_loc) print('5%% Estimate = %0.4f 95%% Estimate = %0.4f' % (np.percentile(estimates, 5), np.percentile(estimates, 95))) ``` #### Select two students and check their posterior predictive distribution of their grades. This posterior predictive distribution could be regarded as **our beliefs about each student's true long-term average grades** (if the training data is unbiased) Bob is suspicious because our beliefs about his true average rating are both narrow and close to 5, while Alice is less suspicious because our beliefs about her true average rating are more spread out. ``` student_id = 20 plot_posteriorestimation(post_predict[:,student_id], true_test[student_id]) student_id = 70 plot_posteriorestimation(post_predict[:,student_id], true_test[student_id]) ``` Student 70 is suspicious because his observed/true grade is out of the 90% confidence level, while student 20 is less suspicious because his grade is close to the mean value of our predictive distribution. #### Evaluate model performances We can use any statistic of the posterior predictive distributions such as mean or median values as our point estimations that can be used to compare with true values for model evaluation. ``` # We can use median value to represent the posterior predictive distribution median_prediction = np.median(post_predict, axis=0) mae = np.mean(abs(median_prediction - true_test)) rmse = np.sqrt(np.mean((median_prediction - true_test) ** 2)) print('Using Linear Regression, MAE is %0.2f' % mae) print('Using Linear Regression, RMSE is %0.2f' % rmse) ```
github_jupyter
# Bayes's Theorem Think Bayes, Second Edition Copyright 2020 Allen B. Downey License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) ``` # Get utils.py import os if not os.path.exists('utils.py'): !wget https://github.com/AllenDowney/ThinkBayes2/raw/master/code/soln/utils.py ``` ## Introduction In the previous chapter, we derived Bayes's Theorem: $P(A|B) = \frac{P(A) P(B|A)}{P(B)}$ As an example, we used data from the General Social Survey and Bayes's Theorem to compute conditional probabilities. But since we had the complete dataset, we didn't really need Bayes's Theorem. It was easy enough to compute the left side of the equation directly, and no easier to compute the right side. But often we don't have a complete dataset, and in that case Bayes's Theorem is more useful. In this chapter, we'll use it to solve several more challenging problems related to conditional probability. ## The Cookie Problem We'll start with a thinly disguised version of an [urn problem](https://en.wikipedia.org/wiki/Urn_problem): > Suppose there are two bowls of cookies. > > Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies. > > Bowl 2 contains 20 vanilla cookies and 20 chocolate cookies. > > Now suppose you choose one of the bowls at random and, without looking, choose a cookie at random. > > If the cookie is vanilla, what is the probability that it came from Bowl 1? What we want is the conditional probability that we chose from Bowl 1 given that we got a vanilla cookie, $P(B_1 | V)$. But what we get from the statement of the problem is: * The conditional probability of getting a vanilla cookie, given that we chose from Bowl 1, $P(V | B_1)$ and * The conditional probability of getting a vanilla cookie, given that we chose from Bowl 2, $P(V | B_2)$. Bayes's Theorem tells us how they are related: $P(B_1|V) = \frac{P(B_1)~P(V|B_1)}{P(V)})$ The term on the left is what we want. The terms on the right are: - $P(B_1)$, the probability that we chose Bowl 1, unconditioned by what kind of cookie we got. Since the problem says we chose a bowl at random, we assume $P(B_1) = 1/2$. - $P(V|B_1)$, the probability of getting a vanilla cookie from Bowl 1, which is 3/4. - $P(V)$, the probability of drawing a vanilla cookie from either bowl. To compute $P(V)$, we can use the law of total probability: $P(V) = P(B_1)~P(V|B_1) ~+~ P(B_2)~P(V|B_2)$ Plugging in the numbers from the statement of the problem, we have $P(V) = (1/2)~(3/4) ~+~ (1/2)~(1/2) = 5/8$. We can also compute this result directly, like this: * Since we had an equal chance of choosing either bowl and the bowls contain the same number of cookies, we had the same chance of choosing any cookie. * Between the two bowls there are 50 vanilla and 30 chocolate cookies, so $P(V) = 5/8$. Finally, we can apply Bayes's Theorem to compute the posterior probability of Bowl 1: $P(B_1|V) = (1/2)~(3/4)~/~(5/8) = 3/5$. This example demonstrates one use of Bayes's theorem: it provides a way to get from $P(B|A)$ to $P(A|B)$. This strategy is useful in cases like this where it is easier to compute the terms on the right side than the term on the left. ## Diachronic Bayes There is another way to think of Bayes's theorem: it gives us a way to update the probability of a hypothesis, $H$, given some body of data, $D$. This interpretation is "diachronic", which means "related to change over time"; in this case, the probability of the hypotheses changes as we see new data. Rewriting Bayes's theorem with $H$ and $D$ yields: $P(H|D) = \frac{P(H)~P(D|H)}{P(D)}$ In this interpretation, each term has a name: - $P(H)$ is the probability of the hypothesis before we see the data, called the prior probability, or just **prior**. - $P(H|D)$ is the probability of the hypothesis after we see the data, called the **posterior**. - $P(D|H)$ is the probability of the data under the hypothesis, called the **likelihood**. - $P(D)$ is the **total probability of the data**, under any hypothesis. Sometimes we can compute the prior based on background information. For example, the cookie problem specifies that we choose a bowl at random with equal probability. In other cases the prior is subjective; that is, reasonable people might disagree, either because they use different background information or because they interpret the same information differently. The likelihood is usually the easiest part to compute. In the cookie problem, we are given the number of cookies in each bowl, so we can compute the probability of the data under each hypothesis. Computing the total probability of the data can be tricky. It is supposed to be the probability of seeing the data under any hypothesis at all, but it can be hard to nail down what that means. Most often we simplify things by specifying a set of hypotheses that are: * Mutually exclusive: If one hypothesis is true, the others must be false, and * Collectively exhaustive: There are no other possibilities. Together, these conditions imply that exactly one of the hypotheses in the set must be true. When these conditions apply, we can compute $P(D)$ using the law of total probability. For example, with two hypotheses, $H_1$ and $H_2$: $P(D) = P(H_1)~P(D|H_1) + P(H_2)~P(D|H_2)$ And more generally, with any number of hypotheses: $P(D) = \sum_i P(H_i)~P(D|H_i)$ The process in this section, using data to and a prior probability to compute a posterior probability, is called a **Bayesian update**. ## Bayes Tables A convenient tool for doing a Bayesian update is a Bayes table. You can write a Bayes table on paper or use a spreadsheet, but in this section I'll use a Pandas `DataFrame`. First I'll make empty `DataFrame` with one row for each hypothesis: ``` import pandas as pd table = pd.DataFrame(index=['Bowl 1', 'Bowl 2']) ``` Now I'll add a column to represent the priors: ``` table['prior'] = 1/2, 1/2 table ``` And a column for the likelihoods: ``` table['likelihood'] = 3/4, 1/2 table ``` Here we see a difference from the previous method: we compute likelihoods for both hypotheses, not just Bowl 1: * The chance of getting a vanilla cookie from Bowl 1 is 3/4. * The chance of getting a vanilla cookie from Bowl 2 is 1/2. You might notice that the likelihoods don't add up to 1. That's OK; each of them is a probability conditioned on a different hypothesis. There's no reason they should add up to 1 and no problem if they don't. The next step is similar to what we did with Bayes's Theorem; we multiply the priors by the likelihoods: ``` table['unnorm'] = table['prior'] * table['likelihood'] table ``` I call the result `unnorm` because these values are the "unnormalized posteriors". Each of them is the product of a prior and a likelihood: $P(B_i)~P(D|B_i)$ which is the numerator of Bayes's Theorem. If we add them up, we have $P(B_1)~P(D|B_1) + P(B_2)~P(D|B_2)$ which is the denominator of Bayes's Theorem, $P(D)$. So we can compute the total probability of the data like this: ``` prob_data = table['unnorm'].sum() prob_data ``` Notice that we get 5/8, which is what we got by computing $P(D)$ directly. And we can compute the posterior probabilities like this: ``` table['posterior'] = table['unnorm'] / prob_data table ``` The posterior probability for Bowl 1 is 0.6, which is what we got using Bayes's Theorem explicitly. As a bonus, we also get the posterior probability of Bowl 2, which is 0.4. When we add up the unnormalized posteriors and divide through, we force the posteriors to add up to 1. This process is called "normalization", which is why the total probability of the data is also called the "[normalizing constant](https://en.wikipedia.org/wiki/Normalizing_constant#Bayes'_theorem)". ## The Dice Problem A Bayes table can also solve problems with more than two hypotheses. For example: > Suppose I have a box with a 6-sided die, an 8-sided die, and a 12-sided die. I choose one of the dice at random, roll it, and report that the outcome is a 1. What is the probability that I chose the 6-sided die? In this example, there are three hypotheses with equal prior probabilities. The data is my report that the outcome is a 1. If I chose the 6-sided die, the probability of the data is 1/6. If I chose the 8-sided die, the probability is 1/8, and if I chose the 12-sided die, it's 1/12. Here's a Bayes table that uses integers to represent the hypotheses: ``` table2 = pd.DataFrame(index=[6, 8, 12]) ``` I'll use fractions to represent the prior probabilities and the likelihoods. That way they don't get rounded off to floating-point numbers. ``` from fractions import Fraction table2['prior'] = Fraction(1, 3) table2['likelihood'] = Fraction(1, 6), Fraction(1, 8), Fraction(1, 12) table2 ``` Once you have priors and likelhoods, the remaining steps are always the same, so I'll put them in a function: ``` def update(table): """Compute the posterior probabilities. table: DataFrame with priors and likelihoods returns: total probability of the data """ table['unnorm'] = table['prior'] * table['likelihood'] prob_data = table['unnorm'].sum() table['posterior'] = table['unnorm'] / prob_data return prob_data prob_data = update(table2) print(prob_data) ``` The total probability of the data is $1/8$. And here is the final Bayes table: ``` table2 ``` The posterior probability of the 6-sided die is 4/9. ## The Monty Hall problem Next we'll use a Bayes table to solve one of the most contentious problems in probability. The Monty Hall problem is based on a game show called *Let's Make a Deal*. If you are a contestant on the show, here's how the game works: - The host, Monty Hall, shows you three closed doors numbered 1, 2, and 3. He tells you that there is a prize behind each door. - One prize is valuable (traditionally a car), the other two are less valuable (traditionally goats). - The object of the game is to guess which door has the car. If you guess right, you get to keep the car. Suppose you pick Door 1. Before opening the door you chose, Monty opens Door 3 and reveals a goat. Then Monty offers you the option to stick with your original choice or switch to the remaining unopened door. To maximize your chance of winning the car, should you stick with Door 1 or switch to Door 2? To answer this question, we have to make some assumptions about the behavior of the host: 1. Monty always opens a door and offers you the option to switch. 2. He never opens the door you picked or the door with the car. 3. If you choose the door with the car, he chooses one of the other doors at random. Under these assumptions, you are better off switching. If you stick, you win $1/3$ of the time. If you switch, you win $2/3$ of the time. If you have not encountered this problem before, you might find that answer surprising. You would not be alone; many people have the strong intuition that it doesn't matter if you stick or switch. There are two doors left, they reason, so the chance that the car is behind Door A is 50%. But that is wrong. To see why, it can help to use a Bayes table. We start with three hypotheses: the car might be behind Door 1, 2, or 3. According to the statement of the problem, the prior probability for each door is 1/3. ``` table3 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3']) table3['prior'] = Fraction(1, 3) table3 ``` The data is that Monty opened Door 3 and revealed a goat. So let's consider the probability of the data under each hypothesis: - If the car is behind Door 3, Monty does not open it, so the probability of the data under this hypothesis is 0. - If the car is behind Door 2, Monty has to open Door 3, so the probability of the data under this hypothesis is 1. - If the car is behind Door 1, Monty choose Door 2 or 3 at random; the probability he would open Door 3 is $1/2$. Here are the likelihoods. ``` table3['likelihood'] = Fraction(1, 2), 1, 0 table3 ``` Now that we have priors and likelihoods, we can use `update` to compute the posterior probabilities. ``` update(table3) table3 ``` After Monty opens Door 3, the posterior probability of Door 1 is $1/3$; the posterior probability of Door 2 is $2/3$. So you are better off switching from Door 1 to Door 2. As this example shows, our intuition for probability is not always reliable. Bayes's Theorem can help by providing a divide-and-conquer strategy: 1. First, write down the hypotheses and the data. 2. Next, figure out the prior probabilities. 3. Finally, compute the likelihood of the data under each hypothesis. The Bayes table does the rest. ## Summary In this chapter we solved the Cookie Problem using Bayes's theorem explicitly and using a Bayes table. There's no real difference between these methods, but the Bayes table can make it easier to compute the total probability of the data, especially for problems with more than two hypotheses. Then we solved the Dice Problem, which we will see again in the next chapter, and the Monty Hall problem, which you might hope you never see again. If the Monty Hall problem makes your head hurt, you are not alone. But I think it demonstrates the power of Bayes's Theorem as a divide-and-conquer strategy for solving tricky problems. And I hope it provides some insight into *why* the answer is what it is. When Monty opens a door, he provides information we can use to update our belief about the location of the car. Part of the information is obvious. If he opens Door 3, we know the car is not behind Door 3. But part of the information is more subtle. Opening Door 3 is more likely if the car is behind Door 2, and less likely if it is behind Door 1. So the data is evidence in favor of Door 2. We will come back to this notion of evidence in future chapters. In the next chapter we'll extend the Cookie Problem and the Dice Problem, and take the next step from basic probability to Bayesian statistics. But first, you might want to work on the exercises. ## Exercises **Exercise:** Suppose you have two coins in a box. One is a normal coin with heads on one side and tails on the other, and one is a trick coin with heads on both sides. You choose a coin at random and see that one of the sides is heads. What is the probability that you chose the trick coin? ``` # Solution goes here ``` **Exercise:** Suppose you meet someone and learn that they have two children. You ask if either child is a girl and they say yes. What is the probability that both children are girls? Hint: Start with four equally likely hypotheses. ``` # Solution goes here ``` **Exercise:** There are many variations of the [Monty Hall problem](https://en.wikipedia.org/wiki/Monty_Hall_problem}). For example, suppose Monty always chooses Door 2 if he can, and only chooses Door 3 if he has to (because the car is behind Door 2). If you choose Door 1 and Monty opens Door 2, what is the probability the car is behind Door 3? If you choose Door 1 and Monty opens Door 3, what is the probability the car is behind Door 2? ``` # Solution goes here # Solution goes here ``` **Exercise:** M&M's are small candy-coated chocolates that come in a variety of colors. Mars, Inc., which makes M&M's, changes the mixture of colors from time to time. In 1995, they introduced blue M&M's. * In 1994, the color mix in a bag of plain M&M's was 30\% Brown, 20\% Yellow, 20\% Red, 10\% Green, 10\% Orange, 10\% Tan. * In 1996, it was 24\% Blue , 20\% Green, 16\% Orange, 14\% Yellow, 13\% Red, 13\% Brown. Suppose a friend of mine has two bags of M&M's, and he tells me that one is from 1994 and one from 1996. He won't tell me which is which, but he gives me one M&M from each bag. One is yellow and one is green. What is the probability that the yellow one came from the 1994 bag? Hint: The trick to this question is to define the hypotheses and the data carefully. ``` # Solution goes here ```
github_jupyter
``` from plot_helpers import * from source_files_extended import load_sfm_depth, load_aso_depth, load_classifier_data figure_style= dict(figsize=(8, 6)) aso_snow_depth_values = load_aso_depth() sfm_snow_depth_values = load_sfm_depth(aso_snow_depth_values.mask) ``` ## SfM snow depth distribution ``` data = [ { 'data': sfm_snow_depth_values, 'label': 'SfM', 'color': 'brown', } ] with Histogram.plot(data, (-5, 5), **figure_style) as ax: ax ``` ## Positive snow depth comparison ``` data = [ { 'data': aso_snow_depth_values, 'label': 'ASO', 'color': 'dodgerblue', }, { 'data': np.ma.masked_where(sfm_snow_depth_values <= 0.0, sfm_snow_depth_values, copy=True), 'label': 'SfM', 'color': 'brown', } ] with Histogram.plot(data, (0, 5), **figure_style) as ax: ax ``` ## Pixel Classification ``` casi_classification = load_classifier_data(aso_snow_depth_values.mask) casi_classes, classes_count = np.unique(casi_classification, return_counts=True) non_snow_casi = np.ma.masked_where(casi_classification == 1, casi_classification, copy=True) assert classes_count[1:4].sum() == np.count_nonzero(~non_snow_casi.mask) ``` ## ASO non-snow pixels depth values ``` data = [ { 'data': np.ma.masked_where(non_snow_casi.mask, aso_snow_depth_values, copy=True), 'label': 'ASO', 'color': 'dodgerblue', } ] with Histogram.plot(data, (0, 5), **figure_style) as ax: ax ``` ## CASI snow pixels snow depth values ``` data = [ { 'data': np.ma.masked_where(~non_snow_casi.mask, aso_snow_depth_values, copy=True), 'label': 'ASO', 'color': 'steelblue', }, { 'data': np.ma.masked_where(~non_snow_casi.mask, sfm_snow_depth_values, copy=True), 'label': 'SfM', 'color': 'beige', 'alpha': 0.7, } ] with Histogram.plot(data, (0, 5), **figure_style) as ax: ax.axvline(x=0.08, linestyle='dotted', color='dimgrey', label='ASO Precision') ``` ## SfM positive values ``` data = [ { 'data': np.ma.masked_where(sfm_snow_depth_values < 0, aso_snow_depth_values, copy=True), 'label': 'ASO', 'color': 'steelblue', }, { 'data': np.ma.masked_where(sfm_snow_depth_values < 0, sfm_snow_depth_values, copy=True), 'label': 'SfM', 'color': 'beige', 'alpha': 0.7, } ] with Histogram.plot(data, (0, 5), **figure_style) as ax: ax.axvline(x=0.08, linestyle='dotted', color='dimgrey', label='ASO Precision') ax.set_title('SfM positive area snow depth values'); ```
github_jupyter
``` from torchvision.models import * import wandb from sklearn.model_selection import train_test_split import os,cv2 import numpy as np import matplotlib.pyplot as plt from torch.optim import * from torch.nn import * import torch,torchvision from tqdm import tqdm device = 'cuda' PROJECT_NAME = 'Fruit-Recognition' def load_data(): labels = {} idx = 0 labels_r = {} data = [] for label in os.listdir('./data/'): idx += 1 labels[label] = idx labels_r[idx] = label for folder in os.listdir('./data/'): for file in os.listdir(f'./data/{folder}/'): img = cv2.imread(f'./data/{folder}/{file}') img = cv2.resize(img,(56,56)) img = img / 255.0 data.append([ img, np.eye(labels[folder]+1,len(labels))[labels[folder]] ]) X = [] y = [] for d in data: X.append(d[0]) y.append(d[1]) X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.125,shuffle=False) X_train = torch.from_numpy(np.array(X_train)).to(device).view(-1,3,56,56).float() y_train = torch.from_numpy(np.array(y_train)).to(device).float() X_test = torch.from_numpy(np.array(X_test)).to(device).view(-1,3,56,56).float() y_test = torch.from_numpy(np.array(y_test)).to(device).float() return X,y,X_train,X_test,y_train,y_test,labels,labels_r,idx,data X,y,X_train,X_test,y_train,y_test,labels,labels_r,idx,data = load_data() torch.save(X_train,'X_train.pt') torch.save(y_train,'y_train.pt') torch.save(X_test,'X_test.pt') torch.save(y_test,'y_test.pt') torch.save(labels_r,'labels_r.pt') torch.save(labels,'labels.pt') torch.save(X_train,'X_train.pth') torch.save(y_train,'y_train.pth') torch.save(X_test,'X_test.pth') torch.save(y_test,'y_test.pth') torch.save(labels_r,'labels_r.pth') torch.save(labels,'labels.pth') def get_loss(model,X,y,criterion): preds = model(X) loss = criterion(preds,y) return loss.item() def get_accuracy(model,X,y): correct = 0 total = 0 preds = model(X) for pred,yb in zip(preds,y): pred = int(torch.argmax(pred)) yb = int(torch.argmax(yb)) if pred == yb: correct += 1 total += 1 acc = round(correct/total,3)*100 return acc model = resnet18().to(device) model.fc = Linear(512,len(labels)) criterion = MSELoss() optimizer = Adam(model.parameters(),lr=0.001) epochs = 100 batch_size = 32 wandb.init(project=PROJECT_NAME,name='baseline') for _ in tqdm(range(epochs)): for i in range(0,len(X_train),batch_size): X_batch = X_train[i:i+batch_size] y_batch = y_train[i:i+batch_size] model.to(device) preds = model(X_batch) loss = criterion(preds,y_batch) optimizer.zero_grad() loss.backward() optimizer.step() model.eval() torch.cuda.empty_cache() wandb.log({'Loss':(get_loss(model,X_train,y_train,criterion)+get_loss(model,X_batch,y_batch,criterion))/2}) torch.cuda.empty_cache() wandb.log({'Val Loss':get_loss(model,X_test,y_test,criterion)}) torch.cuda.empty_cache() wandb.log({'Acc':(get_accuracy(model,X_train,y_train)+get_accuracy(model,X_batch,y_batch))/2}) torch.cuda.empty_cache() wandb.log({'Val ACC':get_accuracy(model,X_test,y_test)}) torch.cuda.empty_cache() model.train() wandb.finish() ```
github_jupyter
# Quickstart ## Creating an isotherm First, we need to import the package. ``` import pygaps ``` The backbone of the framework is the PointIsotherm class. This class stores the isotherm data alongside isotherm properties such as the material, adsorbate and temperature, as well as providing easy interaction with the framework calculations. There are several ways to create a PointIsotherm object: - directly from arrays - from a pandas.DataFrame - parsing json, csv files, or excel files - from an sqlite database See the [isotherm creation](../manual/isotherm.rst) part of the documentation for a more in-depth explanation. For the simplest method, the data can be passed in as arrays of *pressure* and *loading*. There are four other required parameters: the material name, the material batch or ID, the adsorbate used and the temperature (in K) at which the data was recorded. ``` isotherm = pygaps.PointIsotherm( pressure=[0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.35, 0.25, 0.15, 0.05], loading=[0.1, 0.2, 0.3, 0.4, 0.5, 0.45, 0.4, 0.3, 0.15, 0.05], material= 'Carbon X1', adsorbate = 'N2', temperature = 77, ) isotherm.plot() ``` Unless specified, the loading is read in *mmol/g* and the pressure is read in *bar*, although these settings can be changed. Read more about it in the [units section](../manual/units.rst) of the manual. The isotherm can also have other properties which are passed in at creation. Alternatively, the data can be passed in the form of a pandas.DataFrame. This allows for other complementary data, such as isosteric enthalpy, XRD peak intensity, or other simultaneous measurements corresponding to each point to be saved. The DataFrame should have at least two columns: the pressures at which each point was recorded, and the loadings for each point. The `loading_key` and `pressure_key` parameters specify which column in the DataFrame contain the loading and pressure, respectively. The `other_keys` parameter should be a list of other columns to be saved. ``` import pandas data = pandas.DataFrame({ 'pressure': [0.1, 0.2, 0.3, 0.4, 0.5, 0.45, 0.35, 0.25, 0.15, 0.05], 'loading': [0.1, 0.2, 0.3, 0.4, 0.5, 0.5, 0.4, 0.3, 0.15, 0.05], 'isosteric_enthalpy (kJ/mol)': [10, 10, 10, 10, 10, 10, 10, 10, 10, 10] }) isotherm = pygaps.PointIsotherm( isotherm_data=data, pressure_key='pressure', loading_key='loading', other_keys=['isosteric_enthalpy (kJ/mol)'], material= 'Carbon X1', adsorbate = 'N2', temperature = 77, pressure_unit='bar', pressure_mode='absolute', loading_unit='mmol', loading_basis='molar', adsorbent_unit='g', adsorbent_basis='mass', material_batch = 'Batch 1', iso_type='characterisation' ) isotherm.plot() ``` pyGAPS also comes with a variety of parsers. Here we can use the JSON parser to get an isotherm previously saved on disk. For more info on parsing to and from various formats see the [manual](../manual/parsing.rst) and the associated [examples](../examples/parsing.ipynb). ``` with open(r'data/carbon_x1_n2.json') as f: isotherm = pygaps.isotherm_from_json(f.read()) ``` To see a summary of the isotherm as well as a graph, use the included function: ``` isotherm.print_info() ``` Now that the PointIsotherm is created, we are ready to do some analysis. --- ## Isotherm analysis The framework has several isotherm analysis tools which are commonly used to characterise porous materials such as: - BET surface area - the t-plot method / alpha s method - mesoporous PSD (pore size distribution) calculations - microporous PSD calculations - DFT kernel fitting PSD methods - isosteric enthalpy of adsorption calculation - etc. All methods work directly with generated Isotherms. For example, to perform a tplot analysis and get the results in a dictionary use: ``` result_dict = pygaps.t_plot(isotherm) import pprint pprint.pprint(result_dict) ``` If in an interactive environment, such as iPython or Jupyter, it is useful to see the details of the calculation directly. To do this, increase the verbosity of the method and use matplotlib to display extra information, including graphs: ``` import matplotlib.pyplot as plt result_dict = pygaps.area_BET(isotherm, verbose=True) plt.show() ``` Depending on the method, different parameters can be passed to tweak the way the calculations are performed. For example, if a mesoporous size distribution is desired using the Dollimore-Heal method on the desorption branch of the isotherm, assuming the pores are cylindrical and that adsorbate thickness can be described by a Halsey-type thickness curve, the code will look like: ``` result_dict = pygaps.psd_mesoporous( isotherm, psd_model='DH', branch='des', pore_geometry='cylinder', thickness_model='Halsey', verbose=True, ) plt.show() ``` For more information on how to use each method, check the [manual](../manual/characterisation.rst) and the associated [examples](../examples/characterisation.rst). --- ## Isotherm modelling The framework comes with functionality to fit point isotherm data with common isotherm models such as Henry, Langmuir, Temkin, Virial etc. The modelling is done through the ModelIsotherm class. The class is similar to the PointIsotherm class, and shares the same ability to store parameters. However, instead of data, it stores model coefficients for the model it's describing. To create a ModelIsotherm, the same parameters dictionary / pandas DataFrame procedure can be used. But, assuming we've already created a PointIsotherm object, we can use it to instantiate the ModelIsotherm instead. To do this we use the class method: ``` model_iso = pygaps.ModelIsotherm.from_pointisotherm(isotherm, model='BET', verbose=True) ``` A minimisation procedure will then attempt to fit the model's parameters to the isotherm points. If successful, the ModelIsotherm is returned. In the user wants to screen several models at once, the class method can also be passed a parameter which allows the ModelIsotherm to select the best fitting model. Below, we will attempt to fit several simple available models, and the one with the best RMSE will be returned. Depending on the models requested, this method may take significant processing time. ``` model_iso = pygaps.ModelIsotherm.from_pointisotherm(isotherm, guess_model='all', verbose=True) ``` More advanced settings can also be specified, such as the optimisation model to be used in the optimisation routine or the initial parameter guess. For in-depth examples and discussion check the [manual](../manual/modelling.rst) and the associated [examples](../examples/modelling.rst). To print the model parameters use the same print method as before. ``` # Prints isotherm parameters and model info model_iso.print_info() ``` We can calculate the loading at any pressure using the internal model by using the ``loading_at`` function. ``` # Returns the loading at 1 bar calculated with the model model_iso.loading_at(1.0) # Returns the loading in the range 0-1 bar calculated with the model pressure = [0.1,0.5,1] model_iso.loading_at(pressure) ``` ## Plotting pyGAPS makes graphing both PointIsotherm and ModelIsotherm objects easy to facilitate visual observations, inclusion in publications and consistency. Plotting an isotherm is as simple as: ``` import matplotlib.pyplot as plt pygaps.plot_iso([isotherm, model_iso], branch='ads') plt.show() ``` Many settings can be specified to change the look and feel of the graphs. More explanations can be found in the [manual](../manual/plotting.rst) and in the [examples](../examples/plotting.ipynb) section.
github_jupyter
# **Numba** ### Numba is a JIT Compiler and uses LLVM internally - No compilation required ! ![](./img/numba_flowchart.png) ``` import time def get_time_taken(func, *args): res = func(*args) start = time.time() func(*args) end = time.time() time_taken = end - start print(f"Total time - {time_taken:.5f} seconds") print(res) from numba import jit from math import tan, atan @jit def slow_function(n): result = 0 for x in range(n ** 7): result += tan(x) * atan(x) return result get_time_taken(slow_function, 10) ``` ### The speed up is obvious but there are a lot of caveats ### For example, any function used must also be "decorated" ``` from numba import jit, int32 @jit(int32(int32), nopython=True) def func(x): return tan(x) * atan(x) @jit(int32(int32), nopython=True) def slow_function(n): result = 0 for x in range(n ** 7): result += func(x) return result get_time_taken(slow_function, 10) ``` ### Notice the slight overhead ``` from numba import prange,jit, int32 @jit(int32(int32), nopython=True, parallel=True) def slow_function(n): result = 0 for x in prange(n ** 7): result += tan(x) * atan(x) return result get_time_taken(slow_function, 10) ``` ### prange is the parallel version of the range function in python and parallel=True option optimizes the code to use all the cores ### Lets see how it works with Numpy ``` from numba import jit, int32 import numpy as np @jit(int32(int32), nopython=True) def slow_func_in_numpy(n): result = 0 for x in np.arange(n ** 7): result += np.tan(x) * np.arctan(x) return result get_time_taken(slow_func_in_numpy, 10) ``` ### Do I have to write functions for every type? ``` from numba import jit, int32, int64, float32, float64 from math import tan, atan @jit([int32(int32), int64(int64), float32(float32), float64(float64)]) def slow_function(n): result = 0 for x in range(n ** 7): result += tan(x) * atan(x) return result get_time_taken(slow_function, 10) get_time_taken(slow_function, 10.2) ``` ### Let's see how we can create numpy ufuncs using numba ``` from numba import vectorize, int32, int64, float32, float64 import numpy as np @vectorize([int32(int32, int32), int64(int64, int64), float32(float32, float32), float64(float64, float64)]) def addfunc(x, y): return x + y @vectorize def simpler_addfunc(x, y): return x + y addfunc(2, 3) addfunc(6.42, 9.8) simpler_addfunc(2, 3.4) simpler_addfunc(np.array([1,2,3]), np.array([4,5,6])) ``` ### Limited support for classes ``` from numba import jitclass spec = [ ('x', int32), ('y', int32) ] @jitclass(spec) class Node(object): def __init__(self, x, y): self.x = x self.y = y def distance(self, n): return (self.x - n.x) ** 2 + (self.y - n.y) ** 2 def distance_from_point(self, x, y): return (self.x - x) ** 2 + (self.y - y) ** 2 n1 = Node(3,2) n2 = Node(9,6) %time n1.distance(n2) %time n1.distance_from_point(4,5) ``` ### This is just a glance into what numba can do, but remember, it does come with its own limitations Numba Limitations ================= 1. No Strings Support 2. No support for exception handling (try .. except, try .. finally) 3. No support for context management (the with statement) 4. list comprehension is supported, but not dict, set or generator comprehensions 5. No support for generator delegation (yield from) raise and assert are supported # **Exercise** Try using numba's @jit decorator with the function you wrote earlier and check with %time if there is any improvement in the performance **If you find any improvement, feel free to tweet about your experience with the handle @pyconfhyd**
github_jupyter
--- _You are currently looking at **version 1.3** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-machine-learning/resources/bANLa) course resource._ --- # Assignment 1 - Introduction to Machine Learning For this assignment, you will be using the Breast Cancer Wisconsin (Diagnostic) Database to create a classifier that can help diagnose patients. First, read through the description of the dataset (below). ``` import numpy as np import pandas as pd from sklearn.datasets import load_breast_cancer cancer = load_breast_cancer() print(cancer.DESCR) # Print the data set description ``` The object returned by `load_breast_cancer()` is a scikit-learn Bunch object, which is similar to a dictionary. ``` cancer.keys() ``` ### Question 0 (Example) How many features does the breast cancer dataset have? *This function should return an integer.* ``` # You should write your whole answer within the function provided. The autograder will call # this function and compare the return value against the correct solution value def answer_zero(): # This function returns the number of features of the breast cancer dataset, which is an integer. # The assignment question description will tell you the general format the autograder is expecting return len(cancer['feature_names']) # You can examine what your function returns by calling it in the cell. If you have questions # about the assignment formats, check out the discussion forums for any FAQs answer_zero() ``` ### Question 1 Scikit-learn works with lists, numpy arrays, scipy-sparse matrices, and pandas DataFrames, so converting the dataset to a DataFrame is not necessary for training this model. Using a DataFrame does however help make many things easier such as munging data, so let's practice creating a classifier with a pandas DataFrame. Convert the sklearn.dataset `cancer` to a DataFrame. *This function should return a `(569, 31)` DataFrame with * *columns = * ['mean radius', 'mean texture', 'mean perimeter', 'mean area', 'mean smoothness', 'mean compactness', 'mean concavity', 'mean concave points', 'mean symmetry', 'mean fractal dimension', 'radius error', 'texture error', 'perimeter error', 'area error', 'smoothness error', 'compactness error', 'concavity error', 'concave points error', 'symmetry error', 'fractal dimension error', 'worst radius', 'worst texture', 'worst perimeter', 'worst area', 'worst smoothness', 'worst compactness', 'worst concavity', 'worst concave points', 'worst symmetry', 'worst fractal dimension', 'target'] *and index = * RangeIndex(start=0, stop=569, step=1) ``` def answer_one(): df=pd.DataFrame(cancer['data'], columns=cancer['feature_names']) df.insert(len(cancer['feature_names']),'target' , cancer['target'], allow_duplicates=False) # Your code here return df; answer_one() ``` ### Question 2 What is the class distribution? (i.e. how many instances of `malignant` (encoded 0) and how many `benign` (encoded 1)?) *This function should return a Series named `target` of length 2 with integer values and index =* `['malignant', 'benign']` ``` def answer_two(): cancerdf = answer_one() ben=cancerdf['target'].sum() #return the benign(encoded 1) sum mal=len(cancerdf['target'])-ben #malignant sum target={'malignant': mal, 'benign': ben} # Your code here return pd.Series(data=target, index=['malignant', 'benign']) # Return your answer answer_two() ``` ### Question 3 Split the DataFrame into `X` (the data) and `y` (the labels). *This function should return a tuple of length 2:* `(X, y)`*, where* * `X`*, a pandas DataFrame, has shape* `(569, 30)` * `y`*, a pandas Series, has shape* `(569,)`. ``` def answer_three(): cancerdf = answer_one() X=cancerdf.drop('target',axis=1) y=cancerdf['target'] return X, y X, y=answer_three() X.shape, y.shape ``` ### Question 4 Using `train_test_split`, split `X` and `y` into training and test sets `(X_train, X_test, y_train, and y_test)`. **Set the random number generator state to 0 using `random_state=0` to make sure your results match the autograder!** *This function should return a tuple of length 4:* `(X_train, X_test, y_train, y_test)`*, where* * `X_train` *has shape* `(426, 30)` * `X_test` *has shape* `(143, 30)` * `y_train` *has shape* `(426,)` * `y_test` *has shape* `(143,)` ``` from sklearn.model_selection import train_test_split def answer_four(): X, y = answer_three() # Your code here X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) return X_train, X_test, y_train, y_test X_train, X_test, y_train, y_test = answer_four() X_train.shape, X_test.shape, y_train.shape, y_test.shape ``` ### Question 5 Using KNeighborsClassifier, fit a k-nearest neighbors (knn) classifier with `X_train`, `y_train` and using one nearest neighbor (`n_neighbors = 1`). *This function should return a * `sklearn.neighbors.classification.KNeighborsClassifier`. ``` from sklearn.neighbors import KNeighborsClassifier def answer_five(): X_train, X_test, y_train, y_test = answer_four() knn = KNeighborsClassifier(n_neighbors = 1) knn.fit(X_train, y_train) # Your code here return knn# Return your answer knn=answer_five() knn ``` ### Question 6 Using your knn classifier, predict the class label using the mean value for each feature. Hint: You can use `cancerdf.mean()[:-1].values.reshape(1, -1)` which gets the mean value for each feature, ignores the target column, and reshapes the data from 1 dimension to 2 (necessary for the precict method of KNeighborsClassifier). *This function should return a numpy array either `array([ 0.])` or `array([ 1.])`* ``` def answer_six(): cancerdf = answer_one() means = cancerdf.mean()[:-1].values.reshape(1, -1) # Your code here return knn.predict(means) # Return your answer predicts=answer_six() predicts[0] #is benign (1) ``` ### Question 7 Using your knn classifier, predict the class labels for the test set `X_test`. *This function should return a numpy array with shape `(143,)` and values either `0.0` or `1.0`.* ``` def answer_seven(): X_train, X_test, y_train, y_test = answer_four() knn = answer_five() # Your code here return knn.predict(X_test)# Return your answer answer_seven() ``` ### Question 8 Find the score (mean accuracy) of your knn classifier using `X_test` and `y_test`. *This function should return a float between 0 and 1* ``` def answer_eight(): X_train, X_test, y_train, y_test = answer_four() knn = answer_five() # Your code here return knn.score(X_test, y_test)# Return your answer answer_eight() ``` ### Optional plot Try using the plotting function below to visualize the differet predicition scores between training and test sets, as well as malignant and benign cells. ``` def accuracy_plot(): import matplotlib.pyplot as plt %matplotlib notebook X_train, X_test, y_train, y_test = answer_four() # Find the training and testing accuracies by target value (i.e. malignant, benign) mal_train_X = X_train[y_train==0] mal_train_y = y_train[y_train==0] ben_train_X = X_train[y_train==1] ben_train_y = y_train[y_train==1] mal_test_X = X_test[y_test==0] mal_test_y = y_test[y_test==0] ben_test_X = X_test[y_test==1] ben_test_y = y_test[y_test==1] knn = answer_five() scores = [knn.score(mal_train_X, mal_train_y), knn.score(ben_train_X, ben_train_y), knn.score(mal_test_X, mal_test_y), knn.score(ben_test_X, ben_test_y)] plt.figure() # Plot the scores as a bar chart bars = plt.bar(np.arange(4), scores, color=['#4c72b0','#4c72b0','#55a868','#55a868']) # directly label the score onto the bars for bar in bars: height = bar.get_height() plt.gca().text(bar.get_x() + bar.get_width()/2, height*.90, '{0:.{1}f}'.format(height, 2), ha='center', color='w', fontsize=11) # remove all the ticks (both axes), and tick labels on the Y axis plt.tick_params(top='off', bottom='off', left='off', right='off', labelleft='off', labelbottom='on') # remove the frame of the chart for spine in plt.gca().spines.values(): spine.set_visible(False) plt.xticks([0,1,2,3], ['Malignant\nTraining', 'Benign\nTraining', 'Malignant\nTest', 'Benign\nTest'], alpha=0.8); plt.title('Training and Test Accuracies for Malignant and Benign Cells', alpha=0.8) ``` Uncomment the plotting function to see the visualization. **Comment out** the plotting function when submitting your notebook for grading. ``` accuracy_plot() ```
github_jupyter
# Recommender Systems ### Reverse-engeneering users needs/desires Recommender systems have been in the heart of ML. Mostly that in order to get insigths on large populations it was necessary to understand how users behave, but this can only be done from the historical behaviour. Let's fix some setting that we use for the workshop. We have three main components: the business, the users, and the products. Most of the time a business would like to recommend products to its users. The business knows that the better it understands the user, the better the recommendations, and thus the user will be more likely to consume its products. Simple right? Well, not as much, the following things need to be considered: - What does it mean to know a user? How can we encode this? - If we have the purchase history of the user, do we want to recommend new items or old items? Why? - Business rules exists, like inventory, push products, maximize revenue, lower churn, etc. - What policies should be put in place? GDPR? - How to reduce bias. - Computational resources, speed. - Cold start for products and users. - Legacy systems. - UX integration. - etc Historically, two main approaches exist: collaborative filtering and content-based recommendations. These are often used together. # Collaborative Filtering ## Memory based - Easy to explain - Hard to scale - Not good for sparse data Usually based on similarity. ## Model based - Good for sparse - Difficult to explain - Hard to do inference Let's start with the most basic approach using a popular (light-weight) dataset [MovieLens](http://files.grouplens.org/datasets/movielens/ml-20m.zip) ``` import os import numpy as np import pandas as pd import matplotlib.pyplot as plt ``` ## Load MovieLens Data ``` data_dir = "../data/ml-20m" os.listdir(data_dir) movies = pd.read_csv(f"{data_dir}/movies.csv") ratings = pd.read_csv(f"{data_dir}/ratings.csv") ``` ## Exploring the data Use pandas to understand the distribution of your data, this helps understand what kind of goals are possible. It is always a good idea to extensively explore and understand the data. ### Exercise: 1. Choose a couple from the following list and use pandas to find out the answer: - What columns exist in the data? - What are the possible rankings? - How are the rankings distributed? - What is the average ranking? - What is the distribution of the average ranking among users? - How many genres are there? - What is the genre distribution for the movies? - What can you say about the timestamp? - Do all movies have a year? What is the distribution of Mystery movies during the years? 2. Come up with at least two more statistics that aren't from the above list. *Use the following couple of cells to answer your questions. Make sure to work on this before moving ahead.* ``` # What columns exist in the data? print(f"The movies dataset has columns: {movies.columns.values}") print(f"The ratings dataset has columns: {ratings.columns.values}") # What are the possible rankings? sorted(ratings.rating.unique()) # How are the rankings distributed? ratings[['userId', 'rating']].groupby('userId').mean().hist() plt.show() # What is the average ranking? print(f"The average rating is {round(ratings.rating.mean(), 2)}") ``` Note that the data is quite simple, we only have some info about the movies, which takes the form ``` movies.sample(1) ratings.sample(1) ``` Even though the information about the movies could help us create better recommenders, we won't be using it. Instead we only focus on the ratings dataframe. We can count the relevant users and movies from this: ``` ratings.nunique() ``` ~ 139K users and ~27K movies, rated in a 10 point scale. We can also plot two important pieces of information: - The histogram of how many ratings each movie has. - The histogram of how many ratings each user gives. ``` ratings.groupby("userId").agg({"movieId":len}).hist(bins=30) ratings.groupby("movieId").agg({"userId":len}).hist(bins=30) plt.show() np.log10(ratings.groupby("userId").agg({"movieId":len})).hist(bins=30) np.log10(ratings.groupby("movieId").agg({"userId":len})).hist(bins=30) plt.show() ``` The distribution (note the log) shows that most movies are rated by a handfull of users, and that most users don't rate many movies. Furthermore, note that an user x movie matrix should contain 756 million entries, but there are only 20 million ratings. this is only ~2.6 % of non-zero entries. That is we are in a sparse situation (which is not as bad in this case as it is in some other settings) ### Exercise: 1. According to the info above, for which movies/users is easier to make recommendations? Find at least one user or movie that you suspect is troublesome. 2. The dataframe encodes part of the user-item rating matrix. Suppose that you want to write this matrix, what is the size in GB of this matrix?
github_jupyter
# 1. Import libraries ``` #----------------------------Reproducible---------------------------------------------------------------------------------------- import numpy as np import random as rn import os seed=0 os.environ['PYTHONHASHSEED'] = str(seed) np.random.seed(seed) rn.seed(seed) #----------------------------Reproducible---------------------------------------------------------------------------------------- os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' #-------------------------------------------------------------------------------------------------------------------------------- import matplotlib import matplotlib.pyplot as plt import matplotlib.cm as cm %matplotlib inline matplotlib.style.use('ggplot') import random import scipy.sparse as sparse import scipy.io from keras.utils import to_categorical from sklearn.ensemble import ExtraTreesClassifier from sklearn.model_selection import cross_val_score from sklearn.preprocessing import MinMaxScaler from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score import scipy.io from skfeature.function.similarity_based import SPEC import time import pandas as pd #-------------------------------------------------------------------------------------------------------------------------------- def ETree(p_train_feature,p_train_label,p_test_feature,p_test_label,p_seed): clf = ExtraTreesClassifier(n_estimators=50, random_state=p_seed) # Training clf.fit(p_train_feature, p_train_label) # Training accuracy print('Training accuracy๏ผš',clf.score(p_train_feature, np.array(p_train_label))) print('Training accuracy๏ผš',accuracy_score(np.array(p_train_label),clf.predict(p_train_feature))) #print('Training accuracy๏ผš',np.sum(clf.predict(p_train_feature)==np.array(p_train_label))/p_train_label.shape[0]) # Testing accuracy print('Testing accuracy๏ผš',clf.score(p_test_feature, np.array(p_test_label))) print('Testing accuracy๏ผš',accuracy_score(np.array(p_test_label),clf.predict(p_test_feature))) #print('Testing accuracy๏ผš',np.sum(clf.predict(p_test_feature)==np.array(p_test_label))/p_test_label.shape[0]) #-------------------------------------------------------------------------------------------------------------------------------- def write_to_csv(p_data,p_path): dataframe = pd.DataFrame(p_data) dataframe.to_csv(p_path, mode='a',header=False,index=False,sep=',') ``` # 2. Loading data ``` train_data_arr=np.array(pd.read_csv('./Dataset/Activity/final_X_train.txt',header=None)) test_data_arr=np.array(pd.read_csv('./Dataset/Activity/final_X_test.txt',header=None)) train_label_arr=(np.array(pd.read_csv('./Dataset/Activity/final_y_train.txt',header=None))-1) test_label_arr=(np.array(pd.read_csv('./Dataset/Activity/final_y_test.txt',header=None))-1) data_arr=np.r_[train_data_arr,test_data_arr] label_arr=np.r_[train_label_arr,test_label_arr] label_arr_onehot=label_arr print(data_arr.shape) print(label_arr_onehot.shape) data_arr=MinMaxScaler(feature_range=(0,1)).fit_transform(data_arr) C_train_x,C_test_x,C_train_y,C_test_y= train_test_split(data_arr,label_arr_onehot,test_size=0.2,random_state=seed) x_train,x_validate,y_train_onehot,y_validate_onehot= train_test_split(C_train_x,C_train_y,test_size=0.1,random_state=seed) x_test=C_test_x y_test_onehot=C_test_y print('Shape of x_train: ' + str(x_train.shape)) print('Shape of x_validate: ' + str(x_validate.shape)) print('Shape of x_test: ' + str(x_test.shape)) print('Shape of y_train: ' + str(y_train_onehot.shape)) print('Shape of y_validate: ' + str(y_validate_onehot.shape)) print('Shape of y_test: ' + str(y_test_onehot.shape)) print('Shape of C_train_x: ' + str(C_train_x.shape)) print('Shape of C_train_y: ' + str(C_train_y.shape)) print('Shape of C_test_x: ' + str(C_test_x.shape)) print('Shape of C_test_y: ' + str(C_test_y.shape)) key_feture_number=50 ``` # 3. Classifying 1 ### Extra Trees ``` train_feature=C_train_x train_label=C_train_y test_feature=C_test_x test_label=C_test_y print('Shape of train_feature: ' + str(train_feature.shape)) print('Shape of train_label: ' + str(train_label.shape)) print('Shape of test_feature: ' + str(test_feature.shape)) print('Shape of test_label: ' + str(test_label.shape)) p_seed=seed ETree(train_feature,train_label,test_feature,test_label,p_seed) ``` # 4. Model ``` start = time.clock() # construct affinity matrix kwargs = {'style': 0} # obtain the scores of features, and sort the feature scores in an ascending order according to the feature scores train_score = SPEC.spec(train_feature, **kwargs) train_idx = SPEC.feature_ranking(train_score, **kwargs) # obtain the dataset on the selected features train_selected_x = train_feature[:, train_idx[0:key_feture_number]] print("train_selected_x",train_selected_x.shape) # obtain the scores of features, and sort the feature scores in an ascending order according to the feature scores test_score = SPEC.spec(test_feature, **kwargs) test_idx = SPEC.feature_ranking(test_score, **kwargs) # obtain the dataset on the selected features test_selected_x = test_feature[:, test_idx[0:key_feture_number]] print("test_selected_x",test_selected_x.shape) time_cost=time.clock() - start write_to_csv(np.array([time_cost]),"./log/SPEC_time"+str(key_feture_number)+".csv") C_train_selected_x=train_selected_x C_test_selected_x=test_selected_x C_train_selected_y=C_train_y C_test_selected_y=C_test_y print('Shape of C_train_selected_x: ' + str(C_train_selected_x.shape)) print('Shape of C_test_selected_x: ' + str(C_test_selected_x.shape)) print('Shape of C_train_selected_y: ' + str(C_train_selected_y.shape)) print('Shape of C_test_selected_y: ' + str(C_test_selected_y.shape)) ``` # 5. Classifying 2 ### Extra Trees ``` train_feature=C_train_selected_x train_label=C_train_y test_feature=C_test_selected_x test_label=C_test_y print('Shape of train_feature: ' + str(train_feature.shape)) print('Shape of train_label: ' + str(train_label.shape)) print('Shape of test_feature: ' + str(test_feature.shape)) print('Shape of test_label: ' + str(test_label.shape)) p_seed=seed ETree(train_feature,train_label,test_feature,test_label,p_seed) ``` # 6. Reconstruction loss ``` from sklearn.linear_model import LinearRegression def mse_check(train, test): LR = LinearRegression(n_jobs = -1) LR.fit(train[0], train[1]) MSELR = ((LR.predict(test[0]) - test[1]) ** 2).mean() return MSELR train_feature_tuple=(C_train_selected_x,C_train_x) test_feature_tuple=(C_test_selected_x,C_test_x) reconstruction_loss=mse_check(train_feature_tuple, test_feature_tuple) print(reconstruction_loss) ```
github_jupyter
``` # biopy-isatab Python parser for ISA-tab from bcbio import isatab rec = isatab.parse(input_dir) print(rec.metadata) print('\n\n') print(rec.ontology_refs) print('\n\n') print(rec.publications) print('\n\n') print(rec.studies) # Import isa-files from Metabolights # (connection with Metabolights is necessary) from isatools.net import mtbls as MTBLS tmp_dir = MTBLS.get('MTBLS1') # Reading ISA-Tab from local files # (These are downloaded from Metabolights) import isatools import sys import os from isatools import isatab with open(os.path.join(input_dir)) as fp: ISA = isatab.load(fp) # Studies studies = ISA.studies print(len(studies)) # First study study_1 = ISA.studies[0] # Title of the first study object study_1.title # Description of the first study object in ISA Investigation object study_1.description # Protocols declared in the first study protocols = study_1.protocols print('Number of protocols = ', len(protocols), '\n\n') protocol_1 = protocols[0] print(protocol_1, '\n\n') print(protocol_1.description) protocols_descriptions = [protocol.description for protocol in protocols] assays = study_1.assays print("Number of assays = ", len(assays), '\n\n') assay1 = assays[0] print(assay1) print(assay1.measurement_type, '\n\n') # Assay Measurement and Technology Types that are used in this study [ assay.measurement_type.term + " using " + assay.technology_type.term for assay in assays ] # ISA Study source material [source.name for source in study_1.sources] study_1.sources[0].name = 'TEST' # Get all characteristics of the first Source Object first_source_characeristics = study_1.sources[0].characteristics print('Number of characteristics = ', len(first_source_characeristics)) print(first_source_characeristics[0]) # Change the category term of the first characteristic first_source_characeristics[0].category.term = 'TEST' print(first_source_characeristics[0]) # Porperties associated with first ISA Study sourcee [char.category.term for char in first_source_characeristics] [char.value.term for char in first_source_characeristics] # Export ISA files with adjustments isatab.dump(ISA, output_dir) print(type(ISA.studies[0])) print(type(ISA.studies[0].assays[0])) from local_package_installer.local_package_installer import install_local install_local('isatools') # Export ISA files with adjustments isatab.dump(ISA, output_dir) import json from isatools.isajson import ISAJSONEncoder test = json.dumps(ISA, cls=ISAJSONEncoder, sort_keys=True, indent=4, separators=(',', ': ')) print(type(test)) with open('test.json', 'w') as json_file: json_file.write(test) ISA.studies[0].description ISA.studies[0].assays[0] Assay = (type(ISA.studies[0].assays[0])) class Student(Assay): def add_comment2(self): return len(self) test = Student('Test', 'Test2') test2 = Student.add_comment2('test') print(test2) Assay1 = (ISA.studies[0].assays[0]) Assay2 = (ISA.studies[0].assays[1]) print(Assay1) print(Assay2) ```
github_jupyter
<img src="./pictures/DroneApp_logo.png" style="float:right; max-width: 180px; display: inline" alt="INSA" /></a> <img src="./pictures/logo_sizinglab.png" style="float:right; max-width: 100px; display: inline" alt="INSA" /></a> # Frame design The objective of this study, is to optimize the overall design in terms of mass. For this target, the frame will be sized to withstand the resulting loads of two sizing scenarios: the **maximum take-off thrust (arms)** and a **landing with an impact speed of 1m/s (body,arms, landing gears)**. Due to the great diversity of existing models of drones in the market, a simple design of quad-copter was considered for further calculations and steps **Scipy** and **math** packages will be used for this notebook in order to illustrate the optimization algorithms of python. ``` import scipy import scipy.optimize from math import pi from math import sqrt from math import sin,cos,tan import math import numpy as np import timeit import pandas as pd import ipywidgets as widgets from ipywidgets import interactive from IPython.display import display, HTML pd.options.display.float_format = '{:,.2f}'.format ``` #### Frame drawing *Simplified design of the drone frame and nomenclature of geometrical parameters used.* <img src="./img/FrameDesign.jpg" alt="4-arms drone structure" width="800"/> ## Sizing scenarios ### Take-Off scenario A maximum force produced at the take-off $F_{TO}$ generates a bending moment $M_{TO}$ equivalent to: $M_{TO}=\frac{F_{TO}\cdot L_{arm}}{N_{arms}}$ The maximum stress $\sigma_{max}$ for a beam of rectangular cross-section is estimated with safety coefficient $k_s$ as: $\displaystyle\sigma_{max}=\frac{H_{arm}}{2} \frac{12 \cdot Thrust \cdot l_{arm}}{H_{arm}^4-(H_{arm}-2e)^4} \leq \frac{\sigma_{alloy}}{k_s}$ which can be written with dimensionless arm aspect ratio $\pi_{arm}=\frac{e}{H_{arm}}$: $\displaystyle H_{arm}\geq \left ( \frac{6 \cdot Thrust \cdot l_{arm} \cdot k_s}{\sigma_{alloy}(1-(1-2 \cdot \pi_{arm})^4)} \right )^{\frac{1}{3}}$ ### Crash sizing scenario The crash sizing scenario considers a maximum speed $V_{impact}$ of the drone when hitting the ground. At such speed the structure should resist (i.e. the maximum stress should not be exceeded) and for higher speeds, the landing gears are the parts that break as structural fuses. To calculate the equivalent maximum load resisted by the landing gears, the energy conservation law applies the kinetic energy stored in drone mass to potential energy in structural parts transitory deformation: \begin{equation} \begin{gathered} \frac{1}{2}k_{eq} \cdot \delta x^2= \frac{1}{2} M_{tot} \cdot V_{impact}^2 \\ \Rightarrow F_{max} =\frac{1}{4}( k_{eq} \cdot \delta x + M_{total} \cdot g)=\frac{1}{4}(V_{impact} \cdot \sqrt{k_{eq}M_{total}} + M_{total} \cdot g) \end{gathered} \end{equation} To calculate the maximum stress induced by the maximum load $F_{max}$ applied to one landing gear, the equivalent stiffness $k_{eq}$ should be determined. For this purpose, the problem is broken down into simpler structural parts and the equivalent stiffness $k_{eq}$ is expressed considering the effect of each stiffness on the whole part. \begin{equation} k_{eq} = 4 \cdot \frac{\overset{\sim}{k_1} \cdot \overset{\sim}{k_2}}{\overset{\sim}{k_1}+\overset{\sim}{k_2}} \end{equation} *Equivalent stiffness problem decomposition.* <img src="./img/crash.jpg" alt="Equivalent stiffness problem" width="800"/> ## Sizing Code The set of equations of a sizing code can generate typical issues such : - Underconstrained set of equations: the lacking equations can come from additional scenarios, estimation models or additional sizing variable. - overconstrained equations often due to the selection of a component on multiple critera: the adding of over-sizing coefficients and constraints in the optimization problem can generally fix this issue - algebraic loops often due to selection criteria requiring informations generally available after the selection **Underconstraint singularities** Example: two variables in one equation: - Equation: cross section side of a beam resisting a normal stress: $\displaystyle H=\sqrt[3]{\frac{6*M_{to}}{\sigma_{bc}*(1-(1-2*T)^4)}}$ - Variables: thickness ($T$), cross section side ($H$) - Geometrical restriction:$\displaystyle T<H$ - Strategy: $\displaystyle T=k_{TH}*H$ where 0<$k_{TH}$<1 The equation is thus transformed into an inequality and through a large number of iterations the value of both variables can be estimated. $\displaystyle H>\sqrt[3]{\frac{6*M_{to}}{\sigma_{bc}*(1-(1-2*k_{TH})^4)}}$ **Algebraic loop** : beta and Hlg to fulfill objective and contraints. The final optimization problem depends thus of these parameters: - $k_{TH}$: aspect ratio : ratio thickness (T) / side of the beam (H) < 1. Underconstraint - $k_{BH}$ aspect ratio : ratio body height (Hbody)/ height beam (H) > 1. Underconstraint - $ \theta$ landing gear angle (0 is vertical beam) 0<Teta<90. Algebraic Loop - $k_{TT}$ ratio landing gear thickness ( body side dimensions). Underconstraint - $k_{L}$ aspect ratio: Length body(Lbody)/length arm (Larm). Underconstraint - $Hlg$: Height of landing gear (space for battery or sensors). Algebraic Loop The sizing code is defined here in a function which can give: - an evaluation of the objective: here the frame mass - an evaluation of the constraints: here the normal stress at the landing gear and body core, battery dimensions. **Restrictions applied**: 1. **Strength of Materials (two constraints):** the stress resisted by the components(arm, body, landing gear), $\sigma_j$ must be lower than the maximum material stress. 2. **Geometry (one constraint)**: Volume of the body must be larger than the battery one's. 3. **Geometry (one constraint)**: The landing gear must be higher than the deformation caused during the impact and a possible camera or body hanging on the drone. ## Parameters definition ### General specifications ``` # Input Geometrical dimensions Larm=0.35 # [m] one arm length Narm=4 # [-] arms number VolBat=0.132*0.043*0.027 #[m^3] Volume Battery (https://www.miniplanes.fr/eflite-accu-lipo-4s-148v-3300mah-50c-prise-ec3) # Specifications for take off F_to=32 # [N] global drone force for the take off M_total=2 # [kg] total drone mass # Specifications for landing impact v_impact=1 # [m/s] impact speed #Payload specifications H_camera=0.057#[m] height camera ``` ### Material assumptions ``` # Material properties # for beeam and core Ey_bc=70.3e9 # [Pa] Young modulus Rho_bc=2700 # [kg/m^3] Volumic mass Sigma_bc=80e6 # [Pa] Elastic strength # for landing gear Ey_lg=2e9 # [Pa] Young modulus Rho_lg=1070 # [kg/m^3] Volumic mass Sigma_lg=39e6 # [Pa] Elastic strength ``` ### Design assumptions (constant) ``` k_sec=4 # [-] security coefficient ``` ### Design variable (to optimize) ``` k_TH=0.1 # [-] aspect ratio : ratio thickness (T) / side of the beam (H) < 1 k_BH=2 # [-] aspect ratio : ratio body height (Hbody)/ height beam (H) > 1 Teta=20/90*pi/2 # [rad] landing gear angle (0 is vertical beam) 0<Teta<90 k_TT=1 # [-] aspect ratio : ratio landing gear thickness (Tlg)/ thickness beam (T). > 1 k_L=0.5 # [-] aspect ratio: Length body(Lbody)/length arm (Larm)<1 Hlg=.1 # [m] Height of landing gear (space for battery or sensors) #Vector of parameters parameters= scipy.array((k_TH,k_BH,Teta,k_TT,k_L,Hlg)) # Optimization bounds # k_TH, k_BH, Theta, k_TT, k_L, H_LG bounds = [(0.15,0.4), (1,4), (30/90*pi/2,pi/2), (1,100), (0,1), (0.01,1.165)] ``` <a id='#section5'></a> ``` def SizingCode(param,arg): #Design Variables k_TH=param[0] k_BH=param[1] Teta=param[2] k_TT=param[3] k_L=param[4] Hlg=param[5] #### Beam Sizing - Take Off M_to=F_to/Narm*Larm*k_sec # [N.m] Moment applied in the drone center # H=(M_to/Sigma_bc/(1-(1-2*k_TH)**4))**(1/3) # [m] Side length of the beam H=(6*M_to/Sigma_bc/(1-(1-2*k_TH)**4))**(1/3) # [m] Side length of the beam T=k_TH*H # [m] Thickness of the side beam #### Body and Landing gear sizing - Landing impact # Body stiffness calculation Hbody=k_BH*H # [m] height of the body Ibody=1/12*((H+2*T)*Hbody**3-H*(Hbody-2*T)**3) # [m^4] Section inertia of the body Lbody=k_L*Larm #[m] length of the body K1=3*Ey_bc*Ibody/(Lbody)**3 # [N/m] equivalent stiffness of the body # Landing gear stiffness calculation Llg=Hlg/cos(Teta) # [m] Landing gear length Tlg=k_TT*T # [m] landing gear thickness Ilg=1/12*(Tlg**4) # [m^4] Section inertia of the landing gear rectangular section K2=3*Ey_lg*Ilg/Llg**3/sin(Teta) # [N/m] equivalent stiffness of the landing gear # Global stiffness Kg=K1*K2/(K1+K2)*Narm # [N/m] global stiffness of all the arms # Impact force Fimpact= (v_impact*(Kg*M_total)**(1/2)+M_total*9.81)*k_sec # [N] Total impact force, we assume all the landing gear impact together # Stress calculation in the landing gear M_LG=Fimpact/Narm*Hlg*tan(Teta) # [N.m] Moment applied in the landing gear Sigma_lg_impact=M_LG*(Tlg/2)/Ilg # [Pa] Max stress in the landing gear # Stress calculation in the body M_Body=(Fimpact/Narm*Lbody+M_LG) # [N.m] Moment applied in the body Sigma_body_impact=M_Body*(Hbody/2)/Ibody # [Pa] Max stress in the landing gear # Mass calculation Mbeams=Narm*Larm*(H**2-(H-2*T)**2)*Rho_bc #[kg] Total beams' mass MLG=Narm*Llg*Tlg**2*Rho_lg #[kg] Total landing gears' mass Mbody=Narm*(Lbody)*(Hbody*(H+2*T)-(Hbody-2*T)*H)*Rho_bc #[kg] Total body's mass Mframe=Mbeams+MLG+Mbody #[kg] total frame mass Vbody=(2*Lbody)**2*Hbody #[m^3] volume body to integer battery # Contraintes : stress constraints = [(Sigma_bc-Sigma_body_impact)/Sigma_body_impact,(Sigma_lg-Sigma_lg_impact)/Sigma_lg_impact,(Vbody-VolBat)/VolBat,(Hlg-Fimpact/(Narm*Kg)-H_camera)/(Hlg)] # Objectif : masse totale if arg=='Obj': return Mframe elif arg == 'ObjP': P = 0. # Penalisation nulle for C in constraints: if (C < 0.): P = P-1e9*C return Mframe + P #mass optimizatin elif arg=='Prt': col_names_opt = ['Type', 'Name', 'Min', 'Value', 'Max', 'Unit', 'Comment'] df_opt = pd.DataFrame() df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'k_TH', 'Min': bounds[0][0], 'Value': k_TH, 'Max': bounds[0][1], 'Unit': '[-]', 'Comment': 'Aspect ratio for the beam\'s thickness (T/H), '}])[col_names_opt] df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'k_BH', 'Min': bounds[1][0], 'Value': k_BH, 'Max': bounds[1][1], 'Unit': '[-]', 'Comment': 'Aspect ratio for the body\'s height (Hbody/H)'}])[col_names_opt] df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'Theta', 'Min': bounds[2][0], 'Value': Teta/pi*180, 'Max': bounds[2][1], 'Unit': '[-]', 'Comment': 'Angle of the landing gear w.r.t. the beam'}])[col_names_opt] df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'k_TT', 'Min': bounds[3][0], 'Value': k_TT, 'Max': bounds[3][1], 'Unit': '[-]', 'Comment': 'Aspect ratio for the Landing gear\'s thickness (Tlg/T)'}])[col_names_opt] df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'k_L', 'Min': bounds[4][0], 'Value': k_L, 'Max': bounds[4][1], 'Unit': '[-]', 'Comment': 'Aspect ratio: Length body(Lbody)/length arm (Larm) k_L'}])[col_names_opt] df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'Hlg', 'Min': bounds[5][0], 'Value': Hlg, 'Max': bounds[5][1], 'Unit': '[-]', 'Comment': 'Landing gear height'}])[col_names_opt] df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'Mbeams', 'Min': 0, 'Value': Mbeams, 'Max': '-', 'Unit': '[kg]', 'Comment': 'Total beams mass'}])[col_names_opt] df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'MLG', 'Min': 0, 'Value': MLG, 'Max': '-', 'Unit': '[kg]', 'Comment': 'Total landing gear mass'}])[col_names_opt] df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'Mbody', 'Min': 0, 'Value': Mbody, 'Max': '-', 'Unit': '[kg]', 'Comment': 'Total body mass'}])[col_names_opt] df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'Const 0', 'Min': 0, 'Value': constraints[0], 'Max': '-', 'Unit': '[-]', 'Comment': 'Stress margin at the Body: (Sigma_bc-Sigma_body_impact)/Sigma_body_impact'}])[col_names_opt] df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'Const 1', 'Min': 0, 'Value': constraints[1], 'Max': '-', 'Unit': '[-]', 'Comment': 'Stress margin at the landing gears: (Sigma_lg-Sigma_lg_impact)/Sigma_lg_impact'}])[col_names_opt] df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'Const 2', 'Min': 0, 'Value': constraints[2], 'Max': '-', 'Unit': '[-]', 'Comment': '(Vbody-VolBat)/VolBat'}])[col_names_opt] df_opt = df_opt.append([{'Type': 'Optimization', 'Name': 'Const 3', 'Min': 0, 'Value': constraints[3], 'Max': '-', 'Unit': '[-]', 'Comment': '(Hlg-Fimpact/(Narm*Kg)-H_camera)/(Hlg)'}])[col_names_opt] col_names = ['Type', 'Name', 'Value', 'Unit', 'Comment'] df = pd.DataFrame() df = df.append([{'Type': 'Arm', 'Name': 'Larm', 'Value': Larm, 'Unit': '[m]', 'Comment': 'Arm length'}])[col_names] df = df.append([{'Type': 'Arm', 'Name': 'H', 'Value': H, 'Unit': '[m]', 'Comment': 'Height beam'}])[col_names] df = df.append([{'Type': 'Arm', 'Name': 'T', 'Value': T, 'Unit': '[m]', 'Comment': 'Thickness arm'}])[col_names] df = df.append([{'Type': 'Body', 'Name': 'Lbody', 'Value': Lbody, 'Unit': '[m]', 'Comment': 'Body length'}])[col_names] df = df.append([{'Type': 'Body', 'Name': 'Hbody', 'Value': Hbody, 'Unit': '[m]', 'Comment': 'Body height'}])[col_names] df = df.append([{'Type': 'Body', 'Name': 'H+2*T', 'Value': H+2*T, 'Unit': '[m]', 'Comment': 'Body width'}])[col_names] df = df.append([{'Type': 'Crash', 'Name': 'v_impact', 'Value': v_impact, 'Unit': '[m/s]', 'Comment': 'Crash speed'}])[col_names] df = df.append([{'Type': 'Crash', 'Name': 'Kg', 'Value': Kg, 'Unit': '[N/m]', 'Comment': 'Global stiffness'}])[col_names] df = df.append([{'Type': 'Crash', 'Name': 'k_sec', 'Value': k_sec, 'Unit': '[-]', 'Comment': 'Safety coef.'}])[col_names] df = df.append([{'Type': 'Crash', 'Name': 'Fimpact', 'Value': Fimpact, 'Unit': '[N]', 'Comment': 'Max crash load'}])[col_names] pd.options.display.float_format = '{:,.3f}'.format def view(x=''): #if x=='All': return display(df) if x=='Optimization' : return display(df_opt) return display(df[df['Type']==x]) items = sorted(df['Type'].unique().tolist())+['Optimization'] w = widgets.Select(options=items) return display(df,df_opt) else: return constraints ``` <a id='#section6'></a> ## Optimization problem We will now use the [optimization algorithms](https://docs.scipy.org/doc/scipy/reference/optimize.html) of the Scipy package to solve and optimize the configuration. We use here the SLSQP algorithm without explicit expression of the gradient (Jacobian). A course on Multidisplinary Gradient optimization algorithms and gradient optimization algorithm is given [here](http://mdolab.engin.umich.edu/sites/default/files/Martins-MDO-course-notes.pdf): > Joaquim R. R. A. Martins (2012). A Short Course on Multidisciplinary Design Optimization. University of Michigan We can print of the characterisitcs of the problem before optimization with the initial vector of optimization variables: ``` # Initial characteristics before optimization print("-----------------------------------------------") print("Initial characteristics before optimization :") SizingCode(parameters,'Prt') print("-----------------------------------------------") # Optimization with SLSQP algorithm contrainte = lambda x: SizingCode(x, 'Const') objectif = lambda x: SizingCode(x, 'Obj') objectifP = lambda x: SizingCode(x, 'ObjP') SLSQP = False # Optimization algorithm choice if SLSQP == True: # SLSQP omptimisation result = scipy.optimize.fmin_slsqp(func=objectif, x0=parameters, bounds=bounds, f_ieqcons=contrainte, iter=1500, acc=1e-12) else: # Differential evolution omptimisation result = scipy.optimize.differential_evolution(func=objectifP, bounds=bounds, tol=1e-12) # Final characteristics after optimization print("-----------------------------------------------") print("Final characteristics after optimization :") if SLSQP == True: SizingCode(result,'Obj') SizingCode(result, 'Prt') else: SizingCode(result.x,'Obj') SizingCode(result.x, 'Prt') print("-----------------------------------------------") ```
github_jupyter
<center> <img src="profitroll.png"> # <center><span style="font-size: 50px; color: blue;">PROFITROLL BACKUP DEMO</span></center> <center><span style="font-size: 25px; color: purple;">This notebook is an advanced tutorial for users already familiar with <b><i>profitroll<i/></b> basic use. If you discover profiteroll, you might want to begin with the <i>profiteroll_demo.ipynb<i/> notebook </span></center> <p></p> ``` from profitroll.core.grid import Grid from profitroll.core.state import State from profitroll.core.simulation import Simulation from profitroll.test.test_cases import v_stripe_test, bubble_test, gaussian_test # Scientific methods from profitroll.methods.pseudo_spectral_wind import pseudo_spectral_wind from profitroll.methods.wrap_advection_step_3P import wrap_advection_step_3P from profitroll.methods.wrap_wv import wrap_wv from profitroll.methods.end_pop import end_pop ``` # Simulation parameters ``` Lx = 2048e3 Ly = 1024e3 Nx = 256 Ny = 128 T = 3*3600 # Complete simulation is no more very long dt = 300 Nt = int(T//dt) dX = Nx//8 # used to shape the initial v-stripe data dY = Ny//15 nb_state = 2 # number of instants in initial data ``` # Simulation parameters and building ``` methods = [pseudo_spectral_wind, wrap_advection_step_3P,wrap_wv,end_pop] methods_kwargs = [{}, {'alpha_method' : 'damped_bicubic', 'order_alpha' : 2, 'F_method' : 'damped_bicubic'}, {'alpha_method' : 'damped_bicubic', 'order_alpha' : 2, 'F_method' : 'damped_bicubic'}, {}] output_folder = 'output_backup_test' save_rate = 2 backup_rate = 10 verbose = 1 # displaying level, usefull to inspect what's going wrong # Creation of the test case initialCDF = v_stripe_test('initial.nc', Lx, Ly, Nx, Ny, dt, nb_state, dX, dY) # Creation of the simulation object mySim = Simulation(initialCDF, methods, methods_kwargs, output_folder, verbose=verbose, name='testfb') # Run the simulation mySim.run(T, Nt, save_rate, backup_rate, first_run=True) ``` ### if you want to extend the simulation a little bit... We will extend the first run (3h of simulation) with a new one (5h) ``` T2 = 5*3600 Nt2 = int(T2//dt) ``` $\textbf{Try interrupting this run ! (interrupt the kernel during simulation)}$ You will see how to launch a new simulation from the backup file to continue the simulation later ``` save_rate = 1 backup_rate = 6 mySim.run(T2, Nt2, save_rate, backup_rate, first_run=False) ``` # netCDF results can easily be analyzed... ``` from netCDF4 import Dataset import numpy as np resultsCDF = Dataset(output_folder + '/results_testfb.nc', 'r', format='NETCDF4', parallel=False) backupCDF = Dataset(output_folder + '/backup_testfb.nc', 'r', format='NETCDF4', parallel=False) ``` To see the saved times and the last backup time : ``` print(resultsCDF['t'][:].data) print(backupCDF['t'][:].data) ``` To see the parameters of the different runs : ``` print(resultsCDF.T) print(resultsCDF.Nt) print(resultsCDF.save_rate) print(resultsCDF.backup_rate) ``` Don't forget to close the datasets ``` resultsCDF.close() backupCDF.close() ``` # Checking backup start If the last run has been interrupted, you'll see how to continue the simulation with the backup file and the result file (to copy the states saved from the beginning to the last backup). If the result file is corrupted or can't be used, you won't retrive the previous data, but if you're only interested in the end of the simulation you can begin the new run at the last backup thanks to this. ``` backupCDF = Dataset(output_folder + '/backup_testfb.nc', 'r', format='NETCDF4', parallel=False) pre_resultCDF = Dataset(output_folder + '/results_testfb.nc', 'r', format='NETCDF4', parallel=False) print(backupCDF.methods) mySim_fromb = Simulation.frombackup(backupCDF, methods, methods_kwargs, output_folder, resultCDF=pre_resultCDF, name='testfb_fb', verbose=2) mySim_fromb.run(#tocomplete,#tocomplete,#tocomplete,#tocomplete,first_run=True) resultsCDF = Dataset(output_folder + '/results_testfb_fb.nc', 'r', format='NETCDF4', parallel=False) print(resultsCDF['t'][:].data) print(resultsCDF.T) print(resultsCDF.Nt) print(resultsCDF.save_rate) print(resultsCDF.backup_rate) resultsCDF.close() previous_resultsCDF = Dataset(output_folder + '/results_testfb.nc', 'r', format='NETCDF4', parallel=False) print(previous_resultsCDF['t'][:].data) previous_resultsCDF.close() ``` ## Launching the all simulation in one go to verify the results ``` initialCDF = Dataset('initial.nc','r', format='NETCDF4', parallel=False) verbose=1 mySim = Simulation(initialCDF, methods, methods_kwargs, output_folder, verbose=verbose, name='control') T = 8*3600 dt = 300 Nt = int(T//dt) save_rate = 1 backup_rate = 10 mySim.run(T, Nt, save_rate, backup_rate, first_run=True) ``` The two simulation does not have the exact same save_rate (since we changed from 1 to 2 between the first and second part of the first simulation). Keep it in mind while comparing the results ``` reference = Dataset(output_folder + '/results_control.nc', 'r', parallel=False) perturbed = Dataset(output_folder + '/results_testfb_fb.nc', 'r', parallel=False) print(reference['t'][:].data) print(perturbed['t'][:].data) k_per = 34 t = perturbed['t'][k_per] k_ref = np.where(reference['t'][:].data == t)[0][0] print("t = {}".format(t)) print("k_ref = {}".format(k_ref)) variable = 'theta_t' np.min(np.equal(perturbed[variable][:,:,k_per], reference[variable][:,:,k_ref])) ``` True means that all the values (on the spatial grid) of the variable are equals at instant t ``` reference.close() perturbed.close() ```
github_jupyter
# Customizing visual appearance HoloViews elements like the `Scatter` points illustrated in the [Introduction](1-Introduction.ipynb) contain two types of information: - **Your data**, in as close to its original form as possible, so that it can be analyzed and accessed as you see fit. - **Metadata specifying what your data *is***, which allows HoloViews to construct a visual representation for it. What elements do *not* contain is: - The endless details that one might want to tweak about the visual representation, such as line widths, colors, fonts, and spacing. HoloViews is designed to let you work naturally with the meaningful features of your data, while making it simple to adjust the display details separately using the Options system. Among many other benefits, this separation of *content* from *presentation* simplifies your data analysis workflow, and makes it independent of any particular plotting backend. ## Visualizing neural spike trains To illustrate how the options system works, we will use a dataset containing ["spike"](https://en.wikipedia.org/wiki/Action_potential) (neural firing) events extracted from the recorded electrical activity of a [neuron](https://en.wikipedia.org/wiki/Neuron). We will be visualizing the first trial of this [publicly accessible neural recording](http://www.neuralsignal.org/data/04/nsa2004.4/433l019). First, we import pandas and holoviews and load our data: ``` import pandas as pd import holoviews as hv spike_train = pd.read_csv('../assets/spike_train.csv.gz') spike_train.head(n=3) ``` This dataset contains the spike times (in milliseconds) for each detected spike event in this five-second recording, along with a spiking frequency (in Hertz, averaging over a rolling 200 millisecond window). We will now declare ``Curve`` and ``Spike`` elements using this data and combine them into a ``Layout``: ``` curve = hv.Curve(spike_train, 'milliseconds', 'Hertz', group='Firing Rate') spikes = hv.Spikes(spike_train.sample(300), kdims='milliseconds', vdims=[], group='Spike Train') curve + spikes ``` Notice that the representation for this object is purely textual; so far we have not yet loaded any plotting system for HoloViews, and so all you can see is a description of the data stored in the elements. To be able to see a visual representation and adjust its appearance, we'll need to load a plotting system, and here let's load two so they can be compared: ``` hv.extension('bokeh', 'matplotlib') ``` Even though we can happily create, analyze, and manipulate HoloViews objects without using any plotting backend, this line is normally executed just after importing HoloViews so that objects can have a rich graphical representation rather than the very-limited textual representation shown above. Putting 'bokeh' first in this list makes visualizations default to using [Bokeh](http://bokeh.pydata.org), but including [matplotlib](http://matplotlib.org) as well means that backend can be selected for any particular plot as shown below. # Default appearance With the extension loaded, let's look at the default appearance as rendered with Bokeh: ``` curve + spikes ``` As you can see, we can immediately appreciate more about this dataset than we could from the textual representation. The curve plot, in particular, conveys clearly that the firing rate varies quite a bit over this 5-second interval. However, the spikes plot is much more difficult to interpret, because the plot is nearly solid black even though we already downsampled from 700 spikes to 300 spikes when we declared the element. One thing we can do is enable one of Bokeh's zoom tools and zoom in until individual spikes are clearly visible. Even then, though, it's difficult to relate the spiking and firing-rate representations to each other. Maybe we can do better by adjusting the display options away from their default settings? ## Customization Let's see what we can achieve when we do decide to customize the appearance: ``` %%output size=150 %%opts Curve [height=100 width=600 xaxis=None tools=['hover']] %%opts Curve (color='red' line_width=1.5) %%opts Spikes [height=100 width=600 yaxis=None] (color='grey' line_width=0.25) curve = hv.Curve( spike_train, 'milliseconds', 'Hertz') spikes = hv.Spikes(spike_train, 'milliseconds', []) (curve+spikes).cols(1) ``` Much better! It's the same underlying data, but now we can clearly see both the individual spike events and how they affect the moving average. You can also see how the moving average trails the actual spiking, due to how the window function was defined. A detailed breakdown of this exact customization is given in the [User Guide](../user_guide/03-Customizing_Plots.ipynb), but we can use this example to understand a number of important concepts: * The option system is based around keyword settings. * You can customize the output format using the ``%%output`` and the element appearance with the ``%%opts`` *cell magics*. * These *cell magics* affect the display output of the Jupyter cell where they are located. For use outside of the Jupyter notebook, consult the [User Guide](../user_guide/03-Customizing_Plots.ipynb) for equivalent Python-compatible syntax. * The layout container has a ``cols`` method to specify the number of columns in the layout. While the ``%%output`` cell magic accepts a simple list of keywords, we see some special syntax used in the ``%%opts`` magic: * The element type is specified following by special groups of keywords. * The keywords in square brackets ``[...]`` are ***plot options*** that instruct HoloViews how to build that type of plot. * The keywords in parentheses ``(...)`` are **style options** with keywords that are passed directly to the plotting library when rendering that type of plot. The corresponding [User Guide](../user_guide/03-Customizing_Plots.ipynb) entry explains the keywords used in detail, but a quick summary is that we have elongated the ``Curve`` and ``Scatter`` elements and toggled various axes with the ***plot options***. We have also specified the color and line widths of the [Bokeh glyphs](http://bokeh.pydata.org/en/latest/docs/user_guide/plotting.html) with the ***style options***. As you can see, these tools allow significant customization of how our elements appear. HoloViews offers many other tools for setting options either locally or globally, including the ``%output`` and ``%opts`` *line magics*, the ``.opts`` method on all HoloViews objects and the ``hv.output`` and ``hv.opts`` utilities. All these tools, how they work and details of the opts syntax can be found in the [User Guide](../user_guide/03-Customizing_Plots.ipynb). # Switching to matplotlib Now let's switch our backend to [matplotlib](http://matplotlib.org/) to show the same elements as rendered with different customizations, in a different output format (SVG), with a completely different plotting library: ``` %%output size=200 backend='matplotlib' fig='svg' %%opts Layout [sublabel_format='' vspace=0.1] %%opts Spikes [aspect=6 yaxis='bare'] (color='red' linewidth=0.25 ) %%opts Curve [aspect=6 xaxis=None show_grid=False] (color='blue' linewidth=2 linestyle='dashed') (hv.Curve(spike_train, 'milliseconds', 'Hertz') + hv.Spikes(spike_train, 'milliseconds', vdims=[])).cols(1) ``` Here we use the same tools with a different plotting extension. Naturally, a few changes needed to be made: * A few of the plotting options are different because of differences in how the plotting backends work. For instance, matplotlib uses ``aspect`` instead of setting ``width`` and ``height``. In some cases, but not all, HoloViews can smooth over such differences to make it simpler to switch backends. * The Bokeh hover tool is not supported by the matplotlib backend, as you might expect, nor are there any other interactive controls. * Some style options have different names; for instance, the Bokeh ``line_width`` option is called ``linewidth`` in matplotlib. * Containers like Layouts have plot options, but no style options, because they are processed by HoloViews itself. Here we adjust the gap betwen the plots using ``vspace``. Note that you can even write options that work across multiple backends, as HoloViews will ignore keywords that are not applicable to the current backend (as long as they are valid for *some* loaded backend). See the [User Guide](../user_guide/03-Customizing_Plots.ipynb) for more details. ## Persistent styles Let's switch back to the default (Bokeh) plotting extension for this notebook and apply the ``.select`` operation illustrated in the Introduction, to the ``spikes`` object we made earlier: ``` %output size=150 spikes.select(milliseconds=(2000,4000)) ``` Note how HoloViews remembered the Bokeh-specific styles we previously applied to the `spikes` object! This feature allows us to style objects once and then keep that styling as we work, without having to repeat the styles every time we work with that object. You can learn more about the output line magic and the exact semantics of the opts magic in the [User Guide](../user_guide/03-Customizing_Plots.ipynb). ## Setting axis labels If you look closely, the example above might worry you. First we defined our ``Spikes`` element with ``kdims=['milliseconds']``, which we then used as a keyword argument in ``select`` above. This is also the string used as the axis label. Does this mean we are limited to Python literals for axis labels, if we want to use the corresponding dimension with ``select``? Luckily, there is no limitation involved. Dimensions specified as strings are often convenient, but behind the scenes, HoloViews always uses a much richer ``Dimensions`` object which you can pass to the ``kdims`` and ``vdims`` explicitly (see the [User Guide](../user_guide/01-Annotating_Data.ipynb) for more information). One of the things each ``Dimension`` object supports is a long, descriptive ``label``, which complements the short programmer-friendly name. We can set the dimension labels on our existing ``spikes`` object as follows: ``` spikes= spikes.redim.label(milliseconds='Time in milliseconds (10โปยณ seconds)') curve = curve.redim.label(Hertz='Frequency (Hz)') (curve + spikes).select(milliseconds=(2000,4000)).cols(1) ``` As you can see, we can set long descriptive labels on our dimensions (including unicode) while still making use of the short dimension name in methods such as ``select``. Now that you know how to set up and customize basic visualizations, the next [Getting-Started sections](./3-Tabular_Datasets.ipynb) show how to work with various common types of data in HoloViews.
github_jupyter
# 2D Advection-Diffusion equation in this notebook we provide a simple example of the DeepMoD algorithm and apply it on the 2D advection-diffusion equation. ``` # General imports import numpy as np import torch # DeepMoD functions from deepymod import DeepMoD from deepymod.model.func_approx import NN from deepymod.model.library import Library2D_third from deepymod.model.constraint import LeastSquares from deepymod.model.sparse_estimators import Threshold,PDEFIND from deepymod.training import train from deepymod.training.sparsity_scheduler import TrainTestPeriodic from scipy.io import loadmat # Settings for reproducibility np.random.seed(1) torch.manual_seed(1) if torch.cuda.is_available(): device = 'cuda' else: device = 'cpu' ``` ## Prepare the data Next, we prepare the dataset. ``` data = loadmat('Diffusion_2D_space41.mat') data = np.real(data['Expression1']).reshape((41,41,41,4))[:,:,:,3] x_dim, y_dim, t_dim = data.shape time_range = [4,6,8,10,12,14] for i in time_range: # Downsample data and prepare data without noise: down_data= np.take(np.take(np.take(data,np.arange(0,x_dim,2),axis=0),np.arange(0,y_dim,2),axis=1),np.arange(0,t_dim,i),axis=2) print("Dowmsampled shape:",down_data.shape, "Total number of data points:", np.product(down_data.shape)) index = len(np.arange(0,t_dim,i)) width, width_2, steps = down_data.shape x_arr, y_arr, t_arr = np.linspace(0,1,width), np.linspace(0,1,width_2), np.linspace(0,1,steps) x_grid, y_grid, t_grid = np.meshgrid(x_arr, y_arr, t_arr, indexing='ij') X, y = np.transpose((t_grid.flatten(), x_grid.flatten(), y_grid.flatten())), np.float32(down_data.reshape((down_data.size, 1))) # Add noise noise_level = 0.0 y_noisy = y + noise_level * np.std(y) * np.random.randn(y.size, 1) # Randomize data idx = np.random.permutation(y.shape[0]) X_train = torch.tensor(X[idx, :], dtype=torch.float32, requires_grad=True).to(device) y_train = torch.tensor(y_noisy[idx, :], dtype=torch.float32).to(device) # Configure DeepMoD network = NN(3, [40, 40, 40, 40], 1) library = Library2D_third(poly_order=0) estimator = Threshold(0.05) sparsity_scheduler = TrainTestPeriodic(periodicity=50, patience=200, delta=1e-5) constraint = LeastSquares() model = DeepMoD(network, library, estimator, constraint).to(device) optimizer = torch.optim.Adam(model.parameters(), betas=(0.99, 0.99), amsgrad=True, lr=2e-3) logdir='final_runs/no_noise_x21/'+str(index)+'/' train(model, X_train, y_train, optimizer,sparsity_scheduler, log_dir=logdir, split=0.8, max_iterations=50000, delta=1e-6, patience=200) ```
github_jupyter
``` import torch from torchvision import transforms import torch.nn as nn import torchvision.datasets as datasets train_dataset = datasets.MNIST(root='../../data/', train=True, transform=transforms.ToTensor(), download=True) test_dataset = datasets.MNIST(root='../../data/', train=False, transform=transforms.ToTensor()) train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=32, shuffle=True) test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=32, shuffle=False) class RNN(nn.Module): def __init__(self, input_size, hidden_size, num_layers, num_classes): super(RNN, self).__init__() self.hidden_size = hidden_size self.num_layers = num_layers self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True) self.fc = nn.Linear(hidden_size, num_classes) def forward(self, x): h = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device) c = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device) out, _ = self.lstm(x, (h, c)) out = self.fc(out[:, -1, :]) return out def get_default_device(): """Pick GPU if available, else CPU""" if torch.cuda.is_available(): return torch.device('cuda') else: return torch.device('cpu') def to_device(data, device): """Move tensor(s) to chosen device""" if isinstance(data, (list,tuple)): return [to_device(x, device) for x in data] return data.to(device, non_blocking=True) class DeviceDataLoader(): """Wrap a dataloader to move data to a device""" def __init__(self, dl, device): self.dl = dl self.device = device def __iter__(self): """Yield a batch of data after moving it to device""" for b in self.dl: yield to_device(b, self.device) def __len__(self): """Number of batches""" return len(self.dl) device = get_default_device() # Hyper parameters learning_rate = 0.001 sequence_length = 28 hidden_size = 128 num_classes = 10 batch_size = 64 input_size = 28 num_layers = 2 num_epochs = 3 model = RNN(input_size, hidden_size, num_layers, num_classes) to_device(model, device) criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) # Train the model total_step = len(train_loader) for epoch in range(num_epochs): for i, (images, labels) in enumerate(train_loader): images = images.reshape(-1, sequence_length, input_size).to(device) labels = labels.to(device) # Forward pass outputs = model(images) loss = criterion(outputs, labels) # Backward pass and optimize optimizer.zero_grad() loss.backward() optimizer.step() if (i+1) % 100 == 0: print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}' .format(epoch+1, num_epochs, i+1, total_step, loss.item())) # Evaluate the model model.eval() with torch.no_grad(): right = 0 total = 0 for images, labels in test_loader: images = images.reshape(-1, sequence_length, input_size).to(device) labels = labels.to(device) outputs = model(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) right += (predicted == labels).sum().item() print('Test Accuracy of the model on the 10000 test images: {} %'.format(100 * right / total)) ```
github_jupyter
<a href="https://colab.research.google.com/github/cxbxmxcx/EatNoEat/blob/master/Chapter_9_EatNoEat_Training.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` import tensorflow as tf import numpy as np import random import matplotlib import matplotlib.pyplot as plt import math import glob import pickle import io import os import datetime import base64 from IPython.display import HTML from IPython import display as ipythondisplay from google.colab import drive drive.mount('/content/gdrive') use_NAS = False if use_NAS: IMG_SIZE = 224 # 299 for Inception, 224 for NASNetMobile IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3) else: IMG_SIZE = 299 # 299 for Inception, 224 for NASNetMobile IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3) def load_image(image_path): img = tf.io.read_file(image_path) img = tf.image.decode_jpeg(img, channels=3) img = tf.image.resize(img, (IMG_SIZE, IMG_SIZE)) if use_NAS: img = tf.keras.applications.nasnet.preprocess_input(img) else: img = tf.keras.applications.inception_v3.preprocess_input(img) return img, image_path def create_model(image_batch): tf.keras.backend.clear_session() if use_NAS: # Create the base model from the pre-trained model base_model = tf.keras.applications.NASNetMobile(input_shape=IMG_SHAPE, include_top=False, weights='imagenet') else: # Create the base model from the pre-trained model base_model = tf.keras.applications.InceptionResNetV2(input_shape=IMG_SHAPE, include_top=False, weights='imagenet') feature_batch = base_model(image_batch) global_average_layer = tf.keras.layers.GlobalAveragePooling2D() feature_batch_average = global_average_layer(feature_batch) prediction_layer = tf.keras.layers.Dense(3) prediction_batch = prediction_layer(feature_batch_average) model = tf.keras.Sequential([ base_model, global_average_layer, prediction_layer]) base_learning_rate = 0.0001 model.compile(optimizer=tf.keras.optimizers.Nadam(lr=base_learning_rate), loss=tf.keras.losses.MeanAbsoluteError(), metrics=['mae', 'mse', 'accuracy']) return model import os from os import listdir my_drive = '/content/gdrive/My Drive/' image_folder = my_drive + 'TestImages/' models = my_drive + 'Models' training_folder = my_drive + "Traning/" def get_test_images(directory): images = [] for file in listdir(directory): if file.endswith(".jpg"): images.append(image_folder + file) return images images = get_test_images(image_folder) print(images) if len(images) < 0: raise Exception('Test images need to be loaded!') else: x, _ = load_image(images[0]) img = x[np.newaxis, ...] food_model = create_model(img) food_model.summary() latest = tf.train.latest_checkpoint(models) latest if latest != None: food_model.load_weights(latest) def observe_image(image, model): x, _ = load_image(image) img = x[np.newaxis, ...] return model.predict(img) import ipywidgets as widgets from IPython.display import display from IPython.display import Javascript test_states = [] #@title Eat/No Eat Training { run: "auto", vertical-output: true, display-mode: "form" } image_idx = 19 #@param {type:"slider", min:0, max:100, step:1} val = f"Images Trained {len(test_states)}" label = widgets.Label( value= val, disabled=False ) display(label) cnt = len(images) image_idx = image_idx if image_idx < cnt else cnt - 1 image = images[image_idx] x, _ = load_image(image) img = x[np.newaxis, ...] predict = food_model.predict(img) print(predict+5) print(image_idx,image) plt.imshow((x+1)/2) toggle = widgets.ToggleButtons( options=['Eat', 'No Eat'], disabled=False, button_style='', # 'success', 'info', 'warning', 'danger' or '' tooltip='Description', # icon='check' ) display(toggle) button = widgets.Button(description="Train!") output = widgets.Output() def button_clicked(b): # Display the message within the output widget. with output: test = (predict,toggle.index,image) test_states.append(test) button.on_click(button_clicked) display(button, output) if len(test_states) > 0: if os.path.isdir(training_folder) == False: os.makedirs(training_folder) pickle.dump( test_states, open( training_folder + "food_test.p", "wb" ) ) ```
github_jupyter
## 1. Loading your friend's data into a dictionary <p><img src="https://assets.datacamp.com/production/project_1237/img/netflix.jpg" alt="Someone's feet on table facing a television"></p> <p>Netflix! What started in 1997 as a DVD rental service has since exploded into the largest entertainment/media company by <a href="https://www.marketwatch.com/story/netflix-shares-close-up-8-for-yet-another-record-high-2020-07-10">market capitalization</a>, boasting over 200 million subscribers as of <a href="https://www.cbsnews.com/news/netflix-tops-200-million-subscribers-but-faces-growing-challenge-from-disney-plus/">January 2021</a>.</p> <p>Given the large number of movies and series available on the platform, it is a perfect opportunity to flex our data manipulation skills and dive into the entertainment industry. Our friend has also been brushing up on their Python skills and has taken a first crack at a CSV file containing Netflix data. For their first order of business, they have been performing some analyses, and they believe that the average duration of movies has been declining. </p> <p>As evidence of this, they have provided us with the following information. For the years from 2011 to 2020, the average movie durations are 103, 101, 99, 100, 100, 95, 95, 96, 93, and 90, respectively.</p> <p>If we're going to be working with this data, we know a good place to start would be to probably start working with <code>pandas</code>. But first we'll need to create a DataFrame from scratch. Let's start by creating a Python object covered in <a href="https://learn.datacamp.com/courses/intermediate-python">Intermediate Python</a>: a dictionary!</p> ``` # Create the years and durations lists years = [2011,2012,2013,2014,2015,2016,2017,2018,2019,2020] durations = [103,101,99,100,100,95,95,96,93,90] # Create a dictionary with the two lists movie_dict = {"years":years,"durations":durations} # Print the dictionary movie_dict ``` ## 2. Creating a DataFrame from a dictionary <p>To convert our dictionary <code>movie_dict</code> to a <code>pandas</code> DataFrame, we will first need to import the library under its usual alias. We'll also want to inspect our DataFrame to ensure it was created correctly. Let's perform these steps now.</p> ``` # Import pandas under its usual alias import pandas as pd # Create a DataFrame from the dictionary durations_df = pd.DataFrame(movie_dict) # Print the DataFrame print(durations_df) ``` ## 3. A visual inspection of our data <p>Alright, we now have a <code>pandas</code> DataFrame, the most common way to work with tabular data in Python. Now back to the task at hand. We want to follow up on our friend's assertion that movie lengths have been decreasing over time. A great place to start will be a visualization of the data.</p> <p>Given that the data is continuous, a line plot would be a good choice, with the dates represented along the x-axis and the average length in minutes along the y-axis. This will allow us to easily spot any trends in movie durations. There are many ways to visualize data in Python, but <code>matploblib.pyplot</code> is one of the most common packages to do so.</p> <p><em>Note: In order for us to correctly test your plot, you will need to initalize a <code>matplotlib.pyplot</code> Figure object, which we have already provided in the cell below. You can continue to create your plot as you have learned in Intermediate Python.</em></p> ``` # Import matplotlib.pyplot under its usual alias and create a figure import matplotlib.pyplot as plt fig = plt.figure() # Draw a line plot of release_years and durations plt.plot(years, durations) # Create a title plt.title("Netflix Movie Durations 2011-2020") # Show the plot plt.show() ``` ## 4. Loading the rest of the data from a CSV <p>Well, it looks like there is something to the idea that movie lengths have decreased over the past ten years! But equipped only with our friend's aggregations, we're limited in the further explorations we can perform. There are a few questions about this trend that we are currently unable to answer, including:</p> <ol> <li>What does this trend look like over a longer period of time?</li> <li>Is this explainable by something like the genre of entertainment?</li> </ol> <p>Upon asking our friend for the original CSV they used to perform their analyses, they gladly oblige and send it. We now have access to the CSV file, available at the path <code>"datasets/netflix_data.csv"</code>. Let's create another DataFrame, this time with all of the data. Given the length of our friend's data, printing the whole DataFrame is probably not a good idea, so we will inspect it by printing only the first five rows.</p> ``` # Read in the CSV as a DataFrame netflix_df = pd.read_csv("datasets/netflix_data.csv") # Print the first five rows of the DataFrame print(netflix_df[:5]) ``` ## 5. Filtering for movies! <p>Okay, we have our data! Now we can dive in and start looking at movie lengths. </p> <p>Or can we? Looking at the first five rows of our new DataFrame, we notice a column <code>type</code>. Scanning the column, it's clear there are also TV shows in the dataset! Moreover, the <code>duration</code> column we planned to use seems to represent different values depending on whether the row is a movie or a show (perhaps the number of minutes versus the number of seasons)?</p> <p>Fortunately, a DataFrame allows us to filter data quickly, and we can select rows where <code>type</code> is <code>Movie</code>. While we're at it, we don't need information from all of the columns, so let's create a new DataFrame <code>netflix_movies</code> containing only <code>title</code>, <code>country</code>, <code>genre</code>, <code>release_year</code>, and <code>duration</code>.</p> <p>Let's put our data subsetting skills to work!</p> ``` # Subset the DataFrame for type "Movie" netflix_df_movies_only = netflix_df[netflix_df['type'] == 'Movie'] # Select only the columns of interest columns=["title","country","genre","release_year","duration"] netflix_movies_col_subset = netflix_df_movies_only[['title', 'country', 'genre', 'release_year', 'duration']] # Print the first five rows of the new DataFrame print(netflix_movies_col_subset[0:5]) ``` ## 6. Creating a scatter plot <p>Okay, now we're getting somewhere. We've read in the raw data, selected rows of movies, and have limited our DataFrame to our columns of interest. Let's try visualizing the data again to inspect the data over a longer range of time.</p> <p>This time, we are no longer working with aggregates but instead with individual movies. A line plot is no longer a good choice for our data, so let's try a scatter plot instead. We will again plot the year of release on the x-axis and the movie duration on the y-axis.</p> <p><em>Note: Although not taught in Intermediate Python, we have provided you the code <code>fig = plt.figure(figsize=(12,8))</code> to increase the size of the plot (to help you see the results), as well as to assist with testing. For more information on how to create or work with a <code>matplotlib</code> <code>figure</code>, refer to the <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.figure.html">documentation</a>.</em></p> ``` # Create a figure and increase the figure size fig = plt.figure(figsize=(12,8)) # Create a scatter plot of duration versus year plt.scatter(netflix_movies_col_subset["release_year"],netflix_movies_col_subset["duration"]) # Create a title plt.title("Movie Duration by Year of Release") # Show the plot plt.show ``` ## 7. Digging deeper <p>This is already much more informative than the simple plot we created when our friend first gave us some data. We can also see that, while newer movies are overrepresented on the platform, many short movies have been released in the past two decades.</p> <p>Upon further inspection, something else is going on. Some of these films are under an hour long! Let's filter our DataFrame for movies with a <code>duration</code> under 60 minutes and look at the genres. This might give us some insight into what is dragging down the average.</p> ``` # Filter for durations shorter than 60 minutes short_movies = netflix_movies_col_subset[netflix_movies_col_subset['duration'] < 60] # Print the first 20 rows of short_movies print(short_movies[:20]) ``` ## 8. Marking non-feature films <p>Interesting! It looks as though many of the films that are under 60 minutes fall into genres such as "Children", "Stand-Up", and "Documentaries". This is a logical result, as these types of films are probably often shorter than 90 minute Hollywood blockbuster. </p> <p>We could eliminate these rows from our DataFrame and plot the values again. But another interesting way to explore the effect of these genres on our data would be to plot them, but mark them with a different color.</p> <p>In Python, there are many ways to do this, but one fun way might be to use a loop to generate a list of colors based on the contents of the <code>genre</code> column. Much as we did in Intermediate Python, we can then pass this list to our plotting function in a later step to color all non-typical genres in a different color!</p> <p><em>Note: Although we are using the basic colors of red, blue, green, and black, <code>matplotlib</code> has many named colors you can use when creating plots. For more information, you can refer to the documentation <a href="https://matplotlib.org/stable/gallery/color/named_colors.html">here</a>!</em></p> ``` # Define an empty list colors = [] # Iterate over rows of netflix_movies_col_subset for index, row in netflix_movies_col_subset.iterrows() : if row["genre"]=="Children" : colors.append("red") elif row["genre"]=="Documentaries" : colors.append("blue") elif row["genre"]=="Stand-Up" : colors.append("green") else: colors.append("black") # Inspect the first 10 values in your list for x in range(10): print(colors[x]) ``` ## 9. Plotting with color! <p>Lovely looping! We now have a <code>colors</code> list that we can pass to our scatter plot, which should allow us to visually inspect whether these genres might be responsible for the decline in the average duration of movies.</p> <p>This time, we'll also spruce up our plot with some additional axis labels and a new theme with <code>plt.style.use()</code>. The latter isn't taught in Intermediate Python, but can be a fun way to add some visual flair to a basic <code>matplotlib</code> plot. You can find more information on customizing the style of your plot <a href="https://matplotlib.org/stable/tutorials/introductory/customizing.html">here</a>!</p> ``` # Set the figure style and initalize a new figure plt.style.use('fivethirtyeight') fig = plt.figure(figsize=(12,8)) # Create a scatter plot of duration versus release_year plt.scatter(netflix_movies_col_subset["duration"],netflix_movies_col_subset["release_year"],color=colors) # Create a title and axis labels plt.title("Movie duration by year of release") plt.xlabel("Release year") plt.ylabel("Duration (min)") # Show the plot plt.show() ``` ## 10. What next? <p>Well, as we suspected, non-typical genres such as children's movies and documentaries are all clustered around the bottom half of the plot. But we can't know for certain until we perform additional analyses. </p> <p>Congratulations, you've performed an exploratory analysis of some entertainment data, and there are lots of fun ways to develop your skills as a Pythonic data scientist. These include learning how to analyze data further with statistics, creating more advanced visualizations, and perhaps most importantly, learning more advanced ways of working with data in <code>pandas</code>. This latter skill is covered in our fantastic course <a href="www.datacamp.com/courses/data-manipulation-with-pandas">Data Manipulation with pandas</a>.</p> <p>We hope you enjoyed this application of the skills learned in Intermediate Python, and wish you all the best on the rest of your journey!</p> ``` # Are we certain that movies are getting shorter? are_movies_getting_shorter = ... ```
github_jupyter
<a href="https://www.nvidia.com/dli"> <img src="imgs/header.png" alt="Header" style="width: 400px;"/> </a> <h1 align="center">์ธํ…”๋ฆฌ์ „ํŠธ ๋น„๋””์˜ค ๋ถ„์„์„ ์œ„ํ•œ ๋”ฅ๋Ÿฌ๋‹</h1> <h4 align="center">(1๋ถ€)</h4> <img src="imgs/intro.gif" alt="AFRL1" style="margin-top:50px"/> <p style="text-align: center;color:gray"> ๊ทธ๋ฆผ 1. "vehicle" ํด๋ž˜์Šค์— ๋Œ€ํ•œ ์‹ค์‹œ๊ฐ„ ๊ฐ์ฒด ๊ฒ€์ถœ </p> *์ธํ…”๋ฆฌ์ „ํŠธ ๋น„๋””์˜ค ๋ถ„์„(Intelligent Video Analytics; IVA)๋ฅผ ์œ„ํ•œ ๋”ฅ๋Ÿฌ๋‹* ์ฝ”์Šค์— ์˜ค์‹  ๊ฒƒ์„ ํ™˜์˜ํ•ฉ๋‹ˆ๋‹ค! ์˜ค๋Š˜๋‚  ์ˆ˜์‹ญ์–ต ๋Œ€์˜ ์นด๋ฉ”๋ผ๊ฐ€ ๋งค์ผ ์—„์ฒญ๋‚œ ์–‘์˜ ๋น„๋””์˜ค ๋ฐ์ดํ„ฐ๋ฅผ ๋งŒ๋“ค์–ด๋ƒ…๋‹ˆ๋‹ค. ์ด ์ •๋„ ๊ทœ๋ชจ์—์„œ ๋‹ค์–‘ํ•œ ์œ ํ˜•์˜ ๊ฐ์ฒด์— ๋Œ€ํ•œ ์‹๋ณ„, ์ถ”์ , ๋ถ„ํ•  ๋ฐ ์˜ˆ์ธก๊ณผ ๊ฐ™์€ ์ž‘์—…์„ ์œ„ํ•ด ํŠน์ง•์„ ์ถ”์ถœํ•˜๋Š” ๊ฒƒ์€ ๊ฐ„๋‹จํ•œ ์ผ์ด ์•„๋‹™๋‹ˆ๋‹ค. ๋”ฅ๋Ÿฌ๋‹์€ ๋น ๋ฅธ ์†๋„์™€ ํฐ ๊ทœ๋ชจ๋กœ ๋ฐ์ดํ„ฐ๋ฅผ ๋‹ค๋ฃจ๋Š”๋ฐ ํšจ๊ณผ์ ์ธ ๋ฐฉ๋ฒ•์œผ๋กœ ์•Œ๋ ค์ ธ ์žˆ์Šต๋‹ˆ๋‹ค. ์†Œ๋งค ๋งˆ์ผ“์˜ ์ตœ์ ์˜ ํŒ๋งค๋Ÿ‰์„ ์œ„ํ•œ ์‚ฌ๋žŒ๋“ค์˜ ์›€์ง์ž„ ๋ฐ ํ˜ผ์žกํ•จ์„ ๋ถ„์„ํ•˜๊ณ , ์Šค๋งˆํŠธ ์ฃผ์ฐจ ๊ด€๋ฆฌ๋ฅผ ์œ„ํ•ด ํŠธ๋ž˜ํ”ฝ ๋ถ„์„๋“ฑ์ด ๋”ฅ๋Ÿฌ๋‹ ๊ธฐ๋ฐ˜์˜ ์ธํ…”๋ฆฌ์ „ํŠธ ๋น„๋””์–ด ๋ถ„์„ (IVA) ์‘์šฉ์˜ ์˜ˆ์ด๋‹ค. ๋ณธ ์›Œํฌ์ˆ์—์„œ ์—ฌ๋Ÿฌ๋ถ„์€ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋‚ด์šฉ์„ ๋ฐฐ์šฐ๊ฒŒ ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. - ํ•˜๋“œ์›จ์–ด ๊ฐ€์† ๋””์ฝ”๋”ฉ ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•˜์—ฌ ๋น„๋””์˜ค ํ”ผ๋“œ๋ฅผ ํšจ์œจ์ ์œผ๋กœ ์ฒ˜๋ฆฌํ•˜๊ณ  ์ค€๋น„ํ•˜๋Š” ๋ฐฉ๋ฒ• (๋žฉ 1 ๋ฐ ๋žฉ 3) - ๋”ฅ๋Ÿฌ๋‹ ๋ชจ๋ธ์„ ํ›ˆ๋ จ ๋ฐ ํ‰๊ฐ€ํ•˜๊ณ  "์ „์ด ํ•™์Šต (Transfer learning)" ๊ธฐ๋ฒ•์„ ํ™œ์šฉํ•˜์—ฌ ๋ชจ๋ธ์˜ ํšจ์œจ์„ฑ๊ณผ ์ •ํ™•์„ฑ์„ ๋†’์ด๋ฉฐ, ๋ฐ์ดํ„ฐ ํฌ์†Œ์„ฑ ๋ฌธ์ œ๋ฅผ ์™„ํ™”ํ•˜๋Š” ๋ฐฉ๋ฒ• (๋žฉ 2) - ๋Œ€๊ทœ๋ชจ ๋น„๋””์˜ค ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์—์„œ ์›€์ง์ด๋Š” ๊ฐ์ฒด๋ฅผ ์ถ”์ ํ•˜๊ธฐ ์œ„ํ•œ ๊ณ ํ’ˆ์งˆ ์‹ ๊ฒฝ๋ง ๋ชจ๋ธ ๊ฐœ๋ฐœ์— ์ˆ˜๋ฐ˜๋˜๋Š” ์ „๋žต ๋ฐ ํŠธ๋ ˆ์ด๋“œ์˜คํ”„ (๋žฉ 2) - __DeepStream SDK__ ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์—”๋“œ ํˆฌ ์—”๋“œ ๊ฐ€์†ํ™” ๋น„๋””์˜ค ๋ถ„์„ ์†”๋ฃจ์…˜ ๊ตฌ์ถ• (๋žฉ 3) ์›Œํฌ์ˆ์„ ๋งˆ์น˜๋ฉด ์ฃผ์ฐจ์žฅ ์นด๋ฉ”๋ผ ํ”ผ๋“œ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๋“œ์›จ์–ด ๊ฐ€์† ๊ตํ†ต ๊ด€๋ฆฌ ์‹œ์Šคํ…œ์˜ ๋นŒ๋”ฉ ๋ธ”๋ก์„ ์„ค๊ณ„, ํ›ˆ๋ จ, ํ…Œ์ŠคํŠธ ๋ฐ ๋ฐฐ์น˜ํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. #### ์ „์ œ ์กฐ๊ฑด ๋น„๋””์˜ค ์ฒ˜๋ฆฌ ๋ฐฉ๋ฒ•, ๋”ฅ๋Ÿฌ๋‹ ๋ชจ๋ธ ๋ฐ ๊ฐ์ฒด ๊ฒ€์ถœ ์•Œ๊ณ ๋ฆฌ์ฆ˜์— ๋Œ€ํ•œ ์ง€์‹์ด ์žˆ๋‹ค๋ฉด ์ข‹์ง€๋งŒ ํ•„์ˆ˜์ ์ธ ๊ฒƒ์€ ์•„๋‹™๋‹ˆ๋‹ค. ๋‹จ, ์—ฌ๋Ÿฌ๋ถ„์ด ํ”„๋กœ๊ทธ๋ž˜๋ฐ์˜ ๊ธฐ์ดˆ ๊ฐœ๋…, ํŠนํžˆ __Python__ ๊ณผ __C++__ ์— ์ต์ˆ™ํ•˜๋‹ค๊ณ  ๊ฐ€์ •ํ•ฉ๋‹ˆ๋‹ค. #### ์ฅฌํ”ผํ„ฐ ๋…ธํŠธ๋ถ์— ๋Œ€ํ•˜์—ฌ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ์ด ์ฅฌํ”ผํ„ฐ ๋…ธํŠธ๋ถ์— ๋Œ€ํ•ด ์•Œ์•„๋‘์–ด์•ผ ํ•  ์‚ฌํ•ญ์ด ์žˆ์Šต๋‹ˆ๋‹ค. 1. ๋…ธํŠธ๋ถ์€ ๋ธŒ๋ผ์šฐ์ €์—์„œ ๋ Œ๋”๋ง๋˜๊ณ  ์žˆ์ง€๋งŒ, ๋‚ด์šฉ์€ GPU ์ง€์› ์ธ์Šคํ„ด์Šค์—์„œ ์‹คํ–‰๋˜๋Š” ๋Œ€ํ™”ํ˜• iPython ์ปค๋„์— ์˜ํ•ด ์ŠคํŠธ๋ฆฌ๋ฐ๋˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. 2. ๋…ธํŠธ๋ถ์€ ์…€๋กœ ๊ตฌ์„ฑ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์…€์€ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ๋Š” ์ฝ”๋“œ๋ฅผ ํฌํ•จํ•  ์ˆ˜๋„ ์žˆ๊ณ , ์ฝ์„ ์ˆ˜ ์žˆ๋Š” ํ…์ŠคํŠธ ๋ฐ ์ด๋ฏธ์ง€๋ฅผ ๋‹ด๊ณ  ์žˆ์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 3. ๋ฉ”๋‰ด์—์„œ ```Run``` ์•„์ด์ฝ˜์„ ํด๋ฆญํ•˜๊ฑฐ๋‚˜, ํ‚ค๋ณด๋“œ ๋‹จ์ถ•ํ‚ค์ธ ```Shift-Enter```(์‹คํ–‰ ๋ฐ ๋‹ค์Œ ์…€ ์ด๋™) ๋˜๋Š” ```Ctrl-Enter```(์‹คํ–‰ ๋ฐ ํ˜„์žฌ ์…€์—์„œ ์œ ์ง€)๋ฅผ ์ด์šฉํ•ด ์…€์„ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 4. ์…€ ์‹คํ–‰์„ ์ค‘๋‹จํ•˜๋ ค๋ฉด ํˆด๋ฐ”์˜ ```Stop``` ๋ฒ„ํŠผ์„ ํด๋ฆญํ•˜๊ฑฐ๋‚˜ ```Kernel``` ๋ฉ”๋‰ด๋กœ ์ด๋™ํ•˜์—ฌ ```Interrupt ``` ๋ฅผ ์„ ํƒํ•˜์„ธ์š”. ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์€ ๋‹ค์Œ ์ฃผ์ œ๋ฅผ ๋‹ค๋ฃจ๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. * [1. ์†Œ๊ฐœ](#1) * [1.1 ์ •์ง€ ์˜์ƒ๊ณผ ๋™์˜์ƒ์—์„œ์˜ ๊ฐ์ฒด ๊ฒ€์ถœ](#1-1) * [1.2 Tensorflow ๊ฐ์ฒด ๊ฒ€์ถœ API](#1-2) * [1.3 ์–ด๋…ธํ…Œ์ด์…˜ (Annotations)](#1-3) * [2. ๋ฐ์ดํ„ฐ ์„ธํŠธ: NVIDIA ์ธ๋ฐ๋ฒ„(Endeavor) ์ฃผ์ฐจ์ฐฝ ๋ฐ์ดํ„ฐ ์„ธํŠธ](#2) * [3. ๋ชจ๋ธ์„ ์œ„ํ•œ ๋ฐ์ดํ„ฐ ์ค€๋น„](#3) * [3.1 Raw ์–ด๋…ธํ…Œ์ด์…˜ ๋ฐ์ดํ„ฐ๋ฅผ Pands DataFrame์— ๋„ฃ๊ธฐ](#3-1) * [์—ฐ์Šต 1](#e1) * [์—ฐ์Šต 2](#e2) * [4. ๋น„๋””์˜ค ๋ฐ์ดํ„ฐ ๊ฐ€์ง€๊ณ  ์ž‘์—… ํ•˜๊ธฐ](#4) * [4.1 ๋น„๋””์˜ค ํŒŒ์ผ์„ ํ”„๋ ˆ์ž„ ์ด๋ฏธ์ง€๋กœ ๋ฐ”๊พธ๊ธฐ](#4-1) * [์—ฐ์Šต 3](#e3) * [5. ์ถ”๋ก ](#5) * [5.1 ํ•œ ํ”„๋ ˆ์ž„์”ฉ ๊ฒ€์ถœํ•˜๊ธฐ](#5-1) * [5.2 ์ •๋Ÿ‰์  ๋ถ„์„ - Intersection over Union](#5-2) * [6. ์–ด๋…ธํ…Œ์ด์…˜์„ ์ž๋ฅด๊ณ  ์ •๊ทœํ™”ํ•˜๊ธฐ](#6) * [์—ฐ์Šต 4](#e4) * [7. TFRecord ํŒŒ์ผ ์ƒ์„ฑํ•˜๊ธฐ](#7) * [7.1 ์–ด๋…ธํ…Œ์ด์…˜๊ณผ ์˜์ƒ์„ TensorFlow Example๋“ค๋กœ ์ธ์ฝ”๋”ฉํ•˜๊ธฐ](#7-1) * [7.2 ํ•จ์ˆ˜๋“ค์„ ์—ฐ๊ฒฐํ•˜์—ฌ TFRecord ๋งŒ๋“ค๊ธฐ](#7-2) <a name="1"></a> ## 1. ์†Œ๊ฐœ <a name="1-1"></a> ### 1.1 ์ •์ง€ ์˜์ƒ๊ณผ ๋™์˜์ƒ์—์„œ์˜ ๊ฐ์ฒด ๊ฒ€์ถœ ๊ธ‰๊ฒฉํ•œ ๊ตํ†ต ์นด๋ฉ”๋ผ์˜ ์ฆ๊ฐ€, ์ž์œจ์ฃผํ–‰ ์ฐจ๋Ÿ‰์˜ ์ „๋ง ํ™•๋Œ€, "__์Šค๋งˆํŠธ ์‹œํ‹ฐ__"์˜ ์œ ๋งํ•œ ์ „๋ง์— ๋”ฐ๋ผ, ๋ณด๋‹ค ๋น ๋ฅด๊ณ  ํšจ์œจ์ ์ธ ๊ฐ์ฒด ๊ฒ€์ถœ ๋ฐ ์ถ”์  ๋ชจ๋ธ์˜ ์ˆ˜์š”๊ฐ€ ์ฆ๊ฐ€ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฏธ๊ตญ์ธ์€ ํ•˜๋ฃจ์— 75ํšŒ ์ด์ƒ ์นด๋ฉ”๋ผ์— ์žกํž ์ˆ˜ ์žˆ์œผ๋ฉฐ, ๊ทธ ๊ฒฐ๊ณผ ์ผ์ฃผ์ผ๋งˆ๋‹ค [40์–ต ์‹œ๊ฐ„](https://www.forbes.com/sites/singularity/2012/08/30/dear-republicans-beware-big-brother-is-watching-you/#4317353620da)์˜ ๋น„๋””์˜ค ์˜์ƒ์ด ์ฒ˜๋ฆฌ๋˜๊ณ  ๊ทธ ์ค‘ ์ƒ๋‹น ๋ถ€๋ถ„์ด ๊ฐ์ฒด ๊ฒ€์ถœ ํŒŒ์ดํ”„๋ผ์ธ์„ ์‚ฌ์šฉํ•  ๊ฐ€๋Šฅ์„ฑ์ด ์žˆ์Šต๋‹ˆ๋‹ค! ์ผ๋ฐ˜์ ์œผ๋กœ ๊ฐ์ฒด ๊ฒ€์ถœ์€ ์ •์ง€ ์˜์ƒ(ํ”„๋ ˆ์ž„)๊ณผ ๋™์˜์ƒ ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋‚ด์—์„œ ์‚ฌ์ „ ์ •์˜๋œ ํด๋ž˜์Šค(์˜ˆ: ๋ณดํ–‰์ž, ๋™๋ฌผ, ๊ฑด๋ฌผ ๋ฐ ์ž๋™์ฐจ)์˜ ์ธ์Šคํ„ด์Šค๋ฅผ ์ฐพ๋Š” ๊ณผ์ •์ž…๋‹ˆ๋‹ค. ๊ฐ์ฒด ๊ฒ€์ถœ ํ•จ์ˆ˜๋Š” ์˜์ƒ ์ฒ˜๋ฆฌ ๋ถ„์•ผ์—์„œ ์—ฐ๊ตฌ๊ฐ€ ์‹ฌ๋„ ์žˆ๊ฒŒ ์ง„ํ–‰๋˜์–ด ์™”์Œ์—๋„ ๋ถˆ๊ตฌํ•˜๊ณ , ๋™์˜์ƒ ๋ฐ์ดํ„ฐ ๋ฐ ์‹œ๊ฐ„์  ์ •๋ณด ๊ด€์ ์—์„œ๋Š” ๋œ ๋‹ค๋ฃจ์–ด์กŒ์Šต๋‹ˆ๋‹ค. ์ •์ง€ ์˜์ƒ์—์„œ ๊ฐ์ฒด๋ฅผ ๊ฒ€์ถœํ•˜๊ณ  ๋ถ„๋ฅ˜ํ•˜๊ธฐ ์œ„ํ•ด ๊ฐ€์žฅ ๋„๋ฆฌ ์“ฐ์ด๋Š” ๋”ฅ๋Ÿฌ๋‹ ์ ‘๊ทผ๋ฐฉ์‹์€ ๊ทธ ์ฒซ๋ฒˆ์งธ๋กœ ๋Œ€๊ทœ๋ชจ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์„ ๊ธฐ๋ฐ˜์œผ๋กœ ๋”ฅ ๋„คํŠธ์›Œํฌ ๋ชจ๋ธ์„ ํ•™์Šต์‹œํ‚ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ๋•Œ ์ฃผ๋กœ ์‚ฌ์šฉ๋˜๋Š” ๋ฐ์ดํ„ฐ๋Š” *ImageNet ๋˜๋Š” Coco* ๋ฐ์ดํ„ฐ ์…‹์ž…๋‹ˆ๋‹ค. ์ด ๋‹จ๊ณ„์˜ ๊ธฐ๋ณธ ์•„์ด๋””์–ด๋Š” ๋‹ค๋ฅธ ์ข…๋ฅ˜, ๋ชจ๋ธ ๋˜๋Š” ์›๋ž˜ ํด๋ž˜์Šค์™€ ๊ด€๋ จ๋œ ์„œ๋ธŒํด๋ž˜์Šค์˜ ์‹œ๊ฐ์  ํŠน์ง•์„ ์ถ”์ถœํ•˜๊ณ  ๋ชจ๋ธ๋งํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ดํ›„์˜ ๊ฐ์ฒด ๊ฒ€์ถœ ์ถ”๋ก  ๋‹จ๊ณ„๋Š” ๊ด€์‹ฌ ์˜์—ญ์— ๋Œ€ํ•œ ๊ฒฝ๊ณ„ ์ƒ์ž (bounding box) ํšŒ๊ท€ (regression) ๋ถ„์„์„ ์ˆ˜ํ–‰ํ•˜๊ณ  ๊ฒฐ๊ณผ์ ์œผ๋กœ ํ…Œ์ŠคํŠธ ์ •์ง€ ์˜์ƒ์ด๋‚˜ ๋™์˜์ƒ์— ๋ ˆ์ด๋ธ”์„ ๋ถ™์ž„์œผ๋กœ์จ ์ด๋ฃจ์–ด์ง‘๋‹ˆ๋‹ค. ํ”„๋ ˆ์ž„๋ณ„ ์ฒ˜๋ฆฌ๋Š” ๋”ฅ๋Ÿฌ๋‹ ๊ธฐ๋ฐ˜์˜ IVA ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์˜ ์ดˆ์ฐฝ๊ธฐ์— ๋งŽ์ด ์‚ฌ์šฉ๋˜์—ˆ์ง€๋งŒ, ์ดํ›„ ์‹œ๊ฐ„์„ ๊ณ ๋ คํ•œ ๋™์˜์ƒ ์ถ”์  ์ฒ˜๋ฆฌ ๊ธฐ๋ฒ•์œผ๋กœ ๋ฐœ์ „๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ •์ง€ ์˜์ƒ๊ณผ ๋น„๊ตํ•˜๋ฉด, ๋™์˜์ƒ ๋ฐ์ดํ„ฐ ์ฒ˜๋ฆฌ๋Š” ์‹ค์‹œ๊ฐ„ ๋ฐ์ดํ„ฐ ์ฒ˜๋ฆฌ ์žฅ๋ฒฝ์„ ํ•ด๊ฒฐํ•ด์•ผ ํ•  ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ๋” ๋งŽ์€ ์ปดํ“จํŒ… ์ž‘์—…์„ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ ๋™์˜์ƒ์—์„œ์˜ ๊ฐ์ฒด๋Š” ๋ชจ์…˜์˜ blur ํ˜„์ƒ์œผ๋กœ ์ธํ•ด ์—ดํ™”๋˜๊ฑฐ๋‚˜, ๊ฐ€๋ ค์ง€๊ฑฐ๋‚˜ (occlusion), ๋” ๋‚ฎ์€ ํŠน์ง• ํ’ˆ์งˆ์„ ๊ฐ€์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฒŒ๋‹ค๊ฐ€ Raw ๋ฐ์ดํ„ฐ๋ฅผ ์ •๋ณด๋กœ ์ „ํ™˜ํ•˜๋Š” ๋‹จ๊ณ„๋Š” ๋ฐ์ดํ„ฐ์˜ ํ”„๋กœ์„ธ์‹ฑ/์ด์šฉ/๋ฐฐํฌ์— ์žˆ์–ด์„œ ๋ณ‘๋ชฉ์œผ๋กœ ์ž‘์šฉํ•ฉ๋‹ˆ๋‹ค. ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜๋“ค์€ ์ˆ˜์ฒœ์‹œ๊ฐ„ ๋ถ„๋Ÿ‰์˜ ๋ฐ์ดํ„ฐ๋ฅผ ์ฒ˜๋ฆฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ ํ”„๋ ˆ์ž„์„ ๋ณด๊ณ , ์—ฐ๊ตฌํ•˜์—ฌ ์œ ์šฉํ•œ ์ •๋ณด๋กœ ์ „ํ™˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ธ๊ณต ์ง€๋Šฅ์ด ์ด๋Ÿฌํ•œ ์ž‘์—…์„ ํ•ด์•ผํ•˜๋Š” ๋ถ„์„์„๊ฐ€๋“ค์˜ ๋ถ€๋‹ด์„ ์ค„์—ฌ ์ค„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ณธ ์ฝ”์Šค์—์„œ๋Š” ํ”„๋ ˆ์ž„๋ณ„ ๋ฐ์ดํ„ฐ ์ค€๋น„์™€ ๊ฐ์ฒด ๊ฒ€์ถœ์„ ์ด์šฉํ•˜๋Š” IVA์˜ ๊ฐ€์žฅ ๋‹จ์ˆœํ•œ ์ ‘๊ทผ๋ฒ•์œผ๋กœ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•˜์—ฌ, ์‹œ๊ฐ„์  ๊ฐ์ฒด ์ถ”์  ๋ชจ๋ธ์— ํž˜์ž…์€ ๋™์˜์ƒ ํŠนํ™” ๋ชจ๋ธ๊นŒ์ง€ ์‚ดํŽด๋ณผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. <a name="1-2"></a> ### 1.2 TensorFlow ๊ฐ์ฒด ๊ฒ€์ถœ API ์ด ๊ณผ์ •์—์„œ๋Š” [TensorFlow ๊ฐ์ฒด ๊ฒ€์ถœ API](https://github.com/tensorflow/models/tree/master/research/object_detection)๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด API๋Š” TensorFlow ์œ„์—์„œ ๊ตฌ์ถ•๋œ ์˜คํ”ˆ ์†Œ์Šค ํ”„๋ ˆ์ž„์›Œํฌ๋กœ์„œ ๊ฐ์ฒด ๊ฒ€์ถœ ๋ชจ๋ธ์„ ์‰ฝ๊ฒŒ ๊ตฌ์„ฑํ•˜๊ณ  ํ•™์Šต์‹œ์ผœ ๋ฐฐํฌํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•ด์ค๋‹ˆ๋‹ค. ๊ฐ์ฒด ๊ฒ€์ถœ API์—๋Š” ์ตœ๊ทผ์˜ ๋”ฅ๋Ÿฌ๋‹ ๋ฐœ์ „์— ๊ธฐ์—ฌํ•œ ๋‹ค์„ฏ ๊ฐ€์ง€ ๊ฒ€์ถœ ๋ชจ๋ธ์ด ์ œ๊ณต๋ฉ๋‹ˆ๋‹ค. 1. [MobileNets](https://arxiv.org/abs/1704.04861)์„ ์ด์šฉํ•œ Single Shot Multibox Detector ([SSD](https://arxiv.org/abs/1512.02325)) 2. [Inception v2](https://arxiv.org/abs/1512.00567)์„ ์ด์šฉํ•œ SSD 3. [Resnet](https://arxiv.org/abs/1512.03385) 101์„ ์ด์šฉํ•œ [Region-Based Fully Convolutional Networks](https://arxiv.org/abs/1605.06409) (R-FCN) 4. Resnet 101์„ ์ด์šฉํ•œ [Faster RCNN](https://arxiv.org/abs/1506.01497) 5. [Inception Resnet v2](https://arxiv.org/abs/1602.07261)์„ ์ด์šฉํ•œ Faster RCNN ๋ณธ ๋žฉ์—์„œ๋Š” Inception v2์„ ์ด์šฉํ•œ SSD, Inception Resnet v2๋ฅผ ์ด์šฉํ•œ Faster RCNN ๋ฐ NasNet์— ์ง‘์ค‘ํ•˜์—ฌ ๊ฒ€์ถœ๊ธฐ ํ›ˆ๋ จํ•˜๊ณ  ํ…Œ์ŠคํŠธํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ๊ณผ์ ํ•ฉ (Overfitting)๊ณผ ๋ฐ์ดํ„ฐ ํŽธ์ฐจ ๋ฌธ์™€ ๊ฐ™์€ ๋ช‡ ๊ฐ€์ง€ ํ•จ์ •์— ๋Œ€ํ•ด ์ž˜ ์•Œ๊ณ  ์žˆ์–ด์•ผ๋งŒ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ๋žฉ์—์„œ๋Š” `Pandas` ํŒŒ์ด์ฌ ํŒจํ‚ค์ง€๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋Œ€๋Ÿ‰์˜ ๋ฐ์ดํ„ฐ๋ฅผ ์ฒ˜๋ฆฌํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. IVA ๋ฐ์ดํ„ฐ ์ŠคํŠธ๋ฆผ์—์„œ ๊ฐ์ฒด๋ฅผ ๊ฒ€์ถœํ•˜๋Š” ์ž‘์—…๊ณผ ๊ด€๋ จํ•˜์—ฌ, ๊ฐ ๋ชจ๋ธ์€ ๊ฐ๊ฐ์˜ ์žฅ๋‹จ์ ์„ ๊ฐ€์ง€๊ณ  ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ๋ฐฐํฌ ๊ฐ€๋Šฅํ•œ ์‹œ์Šคํ…œ์„ ๊ฐœ๋ฐœํ•  ๋•Œ์— ์ด๋ฅผ ์ž˜ ๊ณ ๋ คํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, GPU๊ฐ€ ์žˆ์„ ๊ฒฝ์šฐ์— SSD๋Š” ํ†ต์ƒ์ ์ธ ๋น„๋””์˜ค ํ”„๋ ˆ์ž„ ์†๋„(25~30fps)๋กœ ๋ฐ์ดํ„ฐ๋ฅผ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ •ํ™•๋„๊ฐ€ ๋งŒ์กฑํ•  ๋งŒํ•œ ์ˆ˜์ค€์ด๊ธฐ๋Š” ํ•˜์ง€๋งŒ ํ•™์Šต์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐ์ดํ„ฐ์˜ ์–‘๊ณผ ๋‹ค์–‘์„ฑ์— ๋”ฐ๋ผ __๋งŽ์€ False Negatives์™€ False alarms__ ๋ฅผ ๋ฐœ์ƒ์‹œํ‚ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์™€๋Š” ๋Œ€์กฐ์ ์œผ๋กœ, NasNet์€ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ๋ฅผ ์ ๊ฒŒ ์‚ฌ์šฉํ•˜์—ฌ ๋งค์šฐ ์ •ํ™•ํ•œ ๊ฒ€์ถœ์„ ํ•˜๋Š” ๋Œ€์‹ ์— ์ฒ˜๋ฆฌ ์†๋„๋ฅผ ๊ทธ๋งŒํผ ์†Œ์š”ํ•ฉ๋‹ˆ๋‹ค. ๋ณดํ†ต, GPU๋ฅผ ์ด์šฉํ•ด์„œ ํ•œ ์ž๋ฆฟ์ˆ˜(๋˜๋Š” ๊ทธ ์ดํ•˜)์˜ fps ์„ฑ๋Šฅ์„ ๊ฐ€์ง‘๋‹ˆ๋‹ค. ์•„๋ž˜์—์„œ ์ด ์„ธ ๊ฐ€์ง€ ์œ ํ˜•์˜ ๋ชจ๋ธ์„ ๊ฐ„๋žตํžˆ ์‚ดํŽด๋ณด๊ธฐ๋กœ ํ•ฉ๋‹ˆ๋‹ค. #### Single-Shot Multibox Detector (SSD) R-CNNs (Region-based Convolutional Neural Networks)์˜ ๋„์ž…์œผ๋กœ ์ธํ•ด ๊ฒ€์ถœ ์—ฐ์‚ฐ์€ ๋‘ ๊ฐ€์ง€ ์„œ๋ธŒ์ž‘์—…์œผ๋กœ ๋‚˜๋‰˜์–ด์กŒ์Šต๋‹ˆ๋‹ค. - __์ง€์—ญํ™” (Localization)__: ์ž ์žฌ์  ๊ฐ์ฒด์˜ ์ขŒํ‘œ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๊ธฐ ์œ„ํ•ด ํšŒ๊ท€ ๋ถ„์„์„ ์ ์šฉํ•˜๋Š” ํ”„๋ ˆ์ž„(์ด๋ฏธ์ง€) ๋‚ด์˜ ์œ„์น˜๋ฅผ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ๋„คํŠธ์›Œํฌ๋Š” ์ •๋‹ต ๊ฒฝ๊ณ„ ์ƒ์ž๋ฅผ ์ด์šฉํ•˜์—ฌ ํ›ˆ๋ จ๋˜๊ณ , L2 ๊ฑฐ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ •๋‹ต ์ขŒํ‘œ์™€ ํšŒ๊ท€ ์ขŒํ‘œ ์‚ฌ์ด์˜ ์†์‹ค๊ฐ’์„ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค. - __๋ถ„๋ฅ˜ (Classification)__: ์ฃผ์–ด์ง„ ํ”„๋ ˆ์ž„์— ๋ชจ๋ธ์ด ํ›ˆ๋ จ๋ฐ›์€ ํด๋ž˜์Šค ์ค‘ ํ•˜๋‚˜๋กœ ๋ ˆ์ด๋ธ”์„ ๋ถ™์ด๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. SSD (Single-Shot Multibox Detector Network)๋Š” ๊ฒฝ๊ณ„ ๋ฐ•์Šค ์ง€์—ญํ™”์™€ ๋ถ„๋ฅ˜๋ฅผ ๊ฒฐํ•ฉํ•˜์—ฌ ํ•œ ๋ฒˆ์˜ ์ˆœ๋ฐฉํ–ฅ ๋„คํŠธ์›Œํฌ ํ๋ฆ„๋งŒ์œผ๋กœ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. SSD๋Š” VGG-16 ์•„ํ‚คํ…์ณ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ๊ตฌํ˜„๋˜์–ด ์žˆ๋Š”๋ฐ, ๊ธฐ์กด์˜ ์™„์ „ ์—ฐ๊ฒฐ ๋ ˆ์ด์–ด (Fully-Connected Layer)๋Š” ์ƒˆ๋กœ์šด ์ฝ˜๋ณผ๋ฃจ์…˜ (Convolution) ํŠน์ง• ์ถ”์ถœ ๋ ˆ์ด์–ด์œผ๋กœ ๋Œ€์ฒด๋˜์—ˆ๊ณ , ๊ฐ ๋ ˆ์ด์–ด๋Š” ์ผ๋ จ์˜ *k* ๊ฐœ์˜ ๊ฒฝ๊ณ„ ์ƒ์ž(*์‚ฌ์ „* ์ •๋ณด์— ๊ธฐ๋ฐ˜ํ•จ)์™€ ๊ฒฝ๊ณ„ ์ƒ์ž ์ขŒํ‘œ๋ฅผ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ์•„๋ž˜์—์„œ SSD ๋„คํŠธ์›Œํฌ ์•„ํ‚คํ…์ณ๋ฅผ ํ™•์ธํ•˜์‹ญ์‹œ์˜ค. <img src="imgs/ssd.jpg" alt="SSD" style="width: 800px;"/> <p style="text-align: center;color:gray"> ๊ทธ๋ฆผ 2. SSD ์•„ํ‚คํ…์ณ</p> #### Faster-RCNN Faster-RCNN์€ SSD์— ๋น„ํ•ด ๋” ๋งŽ์€ ์ปดํฌ๋„ŒํŠธ์™€ ์„ธ๋ถ€ ์š”์†Œ๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์–ด์„œ ์•„ํ‚คํ…์ณ๊ฐ€ ๋” ๋ณต์žกํ•ฉ๋‹ˆ๋‹ค. SSD ๋ง๊ณผ๋Š” ๋‹ฌ๋ฆฌ, Faster-RCNN์—์„œ๋Š” ์ง€์—ญํ™” ๋ฐ ๋ถ„๋ฅ˜ ์ž‘์—…์ด ์„œ๋กœ ๋‹ค๋ฅธ ๋ง์—์„œ ์ˆ˜ํ–‰๋ฉ๋‹ˆ๋‹ค. ์ง€์—ญํ™” ๋ง์€ RPN (Region Proposal Network) ๋ผ๊ณ  ํ•˜๋Š”๋ฐ, ๊ทธ ์ถœ๋ ฅ์€ ํด๋ž˜์Šค ์œ ํ˜•์ด "์ „๊ฒฝ (Foreground)"๊ณผ "๋ฐฐ๊ฒฝ (Background)"์ธ ์†Œํ”„ํŠธ๋งฅ์Šค ๋ ˆ์ด์–ด๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ๋‘ ๋ฒˆ์งธ ์ถœ๋ ฅ์€ ์ œ์•ˆ๋œ "Anchors"์˜ ํšŒ๊ท€์ž (Regressor)์ž…๋‹ˆ๋‹ค. ๋‹ค์Œ์œผ๋กœ, RPN ๋„คํŠธ์›Œํฌ์˜ ์ถœ๋ ฅ๊ณผ ํ•จ๊ป˜ ์›๋ž˜์˜ ํŠน์ง• ๋งต์„ ๋‘ ๋ฒˆ์งธ ๋ง์— ์ž…๋ ฅํ•˜๊ณ  ์—ฌ๊ธฐ์—์„œ ์‹ค์ œ ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์ด ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. <img src="imgs/RCNN.jpg" alt="RCNN" style="width: 800px;"/> <p style="text-align: center;color:gray"> ๊ทธ๋ฆผ 3. Faster R-CNN ์•„ํ‚คํ…์ณ</p> #### NasNet [NasNet](https://ai.googleblog.com/2017/11/automl-for-large-scale-image.html)์€ ์ง€๊ธˆ๊นŒ์ง€ ๊ตฌ์ถ•๋œ ๊ฐ€์žฅ ์ •ํ™•ํ•œ ๋ชจ๋ธ ์ค‘์˜ ํ•˜๋‚˜๋กœ, ImageNet ๊ฒ€์ฆ ๋‹จ๊ณ„์—์„œ 82.7%๋ฅผ ๊ธฐ๋กํ•˜์—ฌ ์ด์ „์˜ ๋ชจ๋“  ์ธ์…‰์…˜(Inception) ๋ชจ๋ธ๋ณด๋‹ค ๋†’์€ ๊ฐ’์„ ๊ธฐ๋กํ–ˆ์Šต๋‹ˆ๋‹ค. NasNet์€ AutoML์ด๋ผ๋Š” ์ ‘๊ทผ ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•˜์—ฌ ์ฃผ์–ด์ง„ ๋ฐ์ดํ„ฐ ์„ธํŠธ์™€ ์ž˜ ์ž‘๋™ํ•˜๋Š” ๋ ˆ์ด์–ด๋ฅผ ์ฐพ์Šต๋‹ˆ๋‹ค. NasnNet์˜ ๊ฒฝ์šฐ์—๋Š” COCO์™€ ImageNet์— AutoML์„ ์ ์šฉํ•œ ๊ฒฐ๊ณผ๊ฐ€ ๊ฒฐํ•ฉ๋˜์–ด NasNet ์•„ํ‚คํ…์ฒ˜๋ฅผ ํ˜•์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋ณธ ์ฝ”์Šค์˜ ํ›„๋ฐ˜๋ถ€์—์„œ๋Š”, ์ •ํ™•์„ฑ๊ณผ ์„ฑ๋Šฅ์„ ๋ชจ๋‘ ๋†’์ด๊ธฐ ์œ„ํ•ด ํŠน์ง•์˜ ์‹œ๊ฐ„์  ์ƒ๊ด€๊ด€๊ณ„๋ฅผ ์ด์šฉํ•˜์—ฌ ๋ณด๋‹ค ์ง„๋ณด๋œ ์‹œ์Šคํ…œ์„ ๊ฐœ๋ฐœํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋˜ํ•œ ์—ฌ๋Ÿฌ ๋น„๋””์˜ค ์ŠคํŠธ๋ฆผ์— ๋Œ€ํ•œ ์‹œ์Šคํ…œ ํ™•์žฅ์„ ๊ฐ€๋Šฅํ•˜๊ฒŒ ํ•˜๊ธฐ ์œ„ํ•ด [__DeepStream__](https://developer.nvidia.com/deepstream-sdk)์„ ์ด์šฉํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. <img src="imgs/nas.jpg" alt="RCNN" style="width: 600px;"/> <p style="text-align: center;color:gray"> ๊ทธ๋ฆผ 4. AutoML ์„ ์ด์šฉํ•œ ๊ฐ•ํ™” ํ•™์Šต ๋„คํŠธ์›Œํฌ ์„ ํƒ</p> <a name="1-3"></a> ### 1.3 ์–ด๋…ธํ…Œ์ด์…˜ (Annotations) ์—ฌ๋Ÿฌ๋ถ„์€ ์ข…์ข… ํ›ˆ๋ จ๊ณผ ํ…Œ์ŠคํŠธ ์ƒ˜ํ”Œ์˜ ์ˆ˜๋ฅผ ์ฆ๊ฐ€์‹œ์ผœ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ณ  ํ‰๊ฐ€ํ•ด์•ผ ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๊ธฐ ์œ„ํ•ด์„œ๋Š” ground-truth ๋ฐ์ดํ„ฐ๋ฅผ ํ™•์žฅํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€์˜ ๋น„๊ณต๊ฐœ ๋ฐ ์˜คํ”ˆ ์†Œ์Šค ์ด๋ฏธ์ง€ ๋งˆํฌ์—… ๋„๊ตฌ๋“ค์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋ณธ ์ฝ”์Šค์— ์‚ฌ์šฉ๋œ ๋ชจ๋“  ๋™์˜์ƒ์€ `Vatic`์„ ์‚ฌ์šฉํ•˜์—ฌ ์–ด๋…ธํ…Œ์ด์…˜์„ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋„๊ตฌ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [์›น์‚ฌ์ดํŠธ](http://www.cs.columbia.edu/~vondrick/vatic/)๋ฅผ ์ฐธ์กฐํ•˜์‹ญ์‹œ์˜ค. <br/> <img src="imgs/vatic.jpg" alt="Vatic imaging" style="width: 800px;"/> <p style="text-align: center;color:gray"> ๊ทธ๋ฆผ 5. Vatic ์–ด๋…ธํ…Œ์ด์…˜ ๋„๊ตฌ </p> ์–ด๋…ธํ…Œ์ด์…˜์— ๋Œ€ํ•ด์„œ๋Š” ์˜จํ†จ๋กœ์ง€ (Ontology)์™€ ๋ถ„๋ฅ˜ ์ฒด๊ณ„ (Taxonomy)๋ฅผ ์ฃผ์˜ ๊นŠ๊ฒŒ ๊ณ ๋ คํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‚˜์ค‘์— ๋‹ค๋ฅธ ๊ฐ์ฒด ์œ ํ˜•์„ ์‰ฝ๊ฒŒ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ๋„๋ก ์œ ์—ฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ณธ ๋žฉ์—์„œ๋Š” ๋ถ„ํ• ๊ณผ ํ”ฝ์…€ ์ˆ˜์ค€์˜ ๋ถ„๋ฅ˜๋Š” ๋‹ค๋ฃจ์ง€ ์•Š๊ณ  ์˜ค์ง ๊ฐ์ฒด ๊ฒ€์ถœ๋งŒ์„ ๋‹ค๋ฃฐ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ฐ์ฒด ๋ถ„ํ• ์„ ์œ„ํ•œ ๋”ฅ๋Ÿฌ๋‹ ๋žฉ์„ ํฌํ•จํ•˜์—ฌ ๋‹ค๋ฅธ DLI ๋žฉ์—์„œ ์„ค๋ช…ํ•œ ๊ธฐ๋ฒ•์„ ์‚ฌ์šฉํ•˜์—ฌ ๋‹ฌ์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๋Ÿฌํ•œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•ด์„œ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ค๊ธฐ ์œ„ํ•ด ์—ฌ๋Ÿฌ๋ถ„์˜ ๋ฐ์ดํ„ฐ์—๋„ ์œ ์‚ฌํ•œ ๋ฐฉ์‹์œผ๋กœ(๊ฒฝ๊ณ„ ์ƒ์ž, ๋‹ค๊ฐํ˜•, ๋งˆ์Šคํฌ) ๋ ˆ์ด๋ธ”์„ ๋ถ™์—ฌ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์–ด๋–ค ๋ ˆ์ด๋ธ”๋ง ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ• ์ง€๋ฅผ ๊ฒฐ์ •ํ•  ๋•Œ์—๋Š” ์ž‘์—…์˜ ์ตœ์ข… ๋ชฉํ‘œ๋ฅผ ๊ณ ๋ คํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. <br /><br /> ์ด์ œ ๋‹ค์Œ ์„น์…˜์—์„œ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์„ ์†Œ๊ฐœํ•˜๋Š” ๊ฒƒ์œผ๋กœ ์ถ”๋ก  ์ž‘์—…์„ ์‹œ์ž‘ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. <a name="2"></a> ## 2. ๋ฐ์ดํ„ฐ ์„ธํŠธ: NVIDIA ์ธ๋ฐ๋ฒ„(Endeavor) ์ฃผ์ฐจ์žฅ ๋ฐ์ดํ„ฐ ์„ธํŠธ NVIDIA์—์„œ๋Š” NDIVIA ๋ณธ์‚ฌ๋ฅผ ์ธ๋ฐ๋ฒ„๋ผ๊ณ  ๋ถ€๋ฅด๋Š”๋ฐ, ๋ณธ ์ฝ”์Šค์—์„œ๋Š” ์ด๊ณณ ์ฃผ์ฐจ์žฅ์—์„œ ๋…นํ™”ํ•œ ๋น„๋””์˜ค ํŒŒ์ผ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋น„๋””์˜ค ํŒŒ์ผ์€ ์ „๋ฐฉํ–ฅ ์นด๋ฉ”๋ผ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋…นํ™”๋˜๋ฉฐ, ๊ฒฐ๊ณผ์ ์œผ๋กœ Raw ๋น„๋””์˜ค ํŒŒ์ผ์—์„œ ๋ชจ๋“  ์ง์„ ์€ ๊ณก์„ ์œผ๋กœ ํ‘œํ˜„๋˜์–ด ์žˆ์–ด์„œ ์šฐ๋ฆฌ์˜ ๋น„๋””์˜ค ์ฒ˜๋ฆฌ ์ž‘์—…์— ์ ํ•ฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋น„๋””์˜ค๋กœ ์ž‘์—…์„ ํ•˜๋ ค๋ฉด ์–ธ์›Œํ•‘(unwarping)์„ ํ•ด์•ผ ํ•˜๋Š”๋ฐ ์šฐ๋ฆฌ๊ฐ€ ์‚ฌ์šฉํ•  ๋น„๋””์˜ค๋Š” ์ด๋ฏธ ์ „์ฒ˜๋ฆฌ๊ฐ€ ๋˜์–ด ์žˆ์–ด์„œ ๋ฐ”๋กœ ์‚ฌ์šฉํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ๋ณธ ์ฝ”์Šค์˜ ํ›„๋ฐ˜๋ถ€์—์„œ DeepStream SDK๋ฅผ ๊ฐ€์ง€๊ณ  ์ž‘์—…ํ•  ๋•Œ, ์šฐ๋ฆฌ๊ฐ€ ๋งŒ๋“ค ํŒŒ์ดํ”„๋ผ์ธ์˜ ์ผ๋ถ€์ธ ๋น„๋””์˜ค๋ฅผ ์–ธ์›Œํ•‘ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ฐฐ์šธ ๊ฒƒ์ž…๋‹ˆ๋‹ค. <img src="imgs/360.png" alt="Vatic imaging"/> <p style="text-align: center;color:gray"> ๊ทธ๋ฆผ 6. 360๋„ ์นด๋ฉ”๋ผ ๋…นํ™” ์ƒ˜ํ”Œ๊ณผ DeepStream Gst-nvdewarper ํ”Œ๋Ÿฌ๊ทธ์ธ์„ ์‚ฌ์šฉํ•œ ์–ธ์›Œํ•‘ ๊ฒฐ๊ณผ</p> ์ธ๋ฐ๋ฒ„ ์ฃผ์ฐจ์žฅ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์˜ ์–ด๋…ธํ…Œ์ด์…˜์€ JSON ํ˜•์‹์œผ๋กœ ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ ํ•ญ๋ชฉ์€ ๋น„๋””์˜ค ์•ˆ์— ๋“ฑ์žฅํ•˜๋Š” ๊ฐ ์ž๋™์ฐจ์— ๋Œ€ํ•œ ๊ณ ์œ  ์ธ๋ฑ์Šค ๊ฐ’์ธ `track_id`๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. track_id๋ฅผ ํ†ตํ•ด ๊ฒฝ๊ณ„ ์ƒ์ž์˜ ์ง‘ํ•ฉ๊ณผ ๊ฐ ๊ฒฝ๊ณ„ ์ƒ์ž์˜ ์œ„์น˜๋ฅผ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜์—์„œ ์–ด๋…ธํ…Œ์ด์…˜์ด ์–ด๋–ค ๊ฐ’๋“ค์„ ๊ฐ€์ง€๋Š”์ง€ ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. __track_id__ : ๊ฐ ์ฐจ๋Ÿ‰์˜ ๊ณ ์œ  ID > __boxes__ : ๊ฒฝ๊ณ„ ์ƒ์ž์˜ ์ง‘ํ•ฉ๊ณผ ํ•œ ํ”„๋ ˆ์ž„ ๋‚ด์—์„œ์˜ ๊ฐ ๊ฒฝ๊ณ„ ์ƒ์ž์˜ ์œ„์น˜ ์ •๋ณด > > __frame_id__ : ํ”„๋ ˆ์ž„ ๋ฒˆํ˜ธ๋ฅผ ๋‚˜ํƒ€๋‚ด๋Š” ์ผ๋ จ์˜ ์ •์ˆ˜๊ฐ’ > > > __attributes__ : ์ฐจ๋Ÿ‰ ์ œ์กฐ์—…์ฒด, ๋ชจ๋ธ, ์ƒ‰์ƒ, ์ฃผ์ฐจ ์ƒํƒœ๋ฅผ ๋‚˜ํƒ€๋‚ด๋Š” *์ž„์˜์˜* ์†์„ฑ ์ง‘ํ•ฉ<br /> > > > __occluded__ : ์ฐจ๋Ÿ‰์ด ์™„์ „ํžˆ ๋ณด์ด๋Š”์ง€ ๋‹ค๋ฅธ ๊ฐ์ฒด์— ๊ฐ€๋ ธ๋Š”์ง€๋ฅผ ๋‚˜ํƒ€๋ƒ„<br /> > > > __outside__ : ์ฐจ๋Ÿ‰์ด ํ”„๋ ˆ์ž„ ๊ฒฝ๊ณ„ ์•ˆ์ชฝ์— ์žˆ๋Š”์ง€ ๋ฐ”๊นฅ์— ์žˆ๋Š”์ง€๋ฅผ ๋‚˜ํƒ€๋ƒ„<br /> > > > __xbr__ : ํ”„๋ ˆ์ž„ ํฌ๊ธฐ์— ์ƒ๋Œ€์ ์œผ๋กœ ๋‚˜ํƒ€๋‚ธ ๊ฒฝ๊ณ„์ƒ์ž์˜ ์šฐํ•˜๋‹จ x ์ขŒํ‘œ๊ฐ’. [0, frame width] ๋ฒ”์œ„.<br /> > > > __xtl__ : ํ”„๋ ˆ์ž„ ํฌ๊ธฐ์— ์ƒ๋Œ€์ ์œผ๋กœ ๋‚˜ํƒ€๋‚ธ ๊ฒฝ๊ณ„์ƒ์ž์˜ ์ขŒ์ƒ๋‹จ x ์ขŒํ‘œ๊ฐ’. [0, frame width] ๋ฒ”์œ„.<br /> > > > __ybr__ : ํ”„๋ ˆ์ž„ ํฌ๊ธฐ์— ์ƒ๋Œ€์ ์œผ๋กœ ๋‚˜ํƒ€๋‚ธ ๊ฒฝ๊ณ„์ƒ์ž์˜ ์šฐํ•˜๋‹จ y ์ขŒํ‘œ๊ฐ’. [0, frame height] ๋ฒ”์œ„.<br /> > > > __ytl__ : ํ”„๋ ˆ์ž„ ํฌ๊ธฐ์— ์ƒ๋Œ€์ ์œผ๋กœ ๋‚˜ํƒ€๋‚ธ ๊ฒฝ๊ณ„์ƒ์ž์˜ ์ขŒ์ƒ๋‹จ y ์ขŒํ‘œ๊ฐ’. [0, frame height] ๋ฒ”์œ„.<br /> ์•„๋ž˜์—์„œ ์ƒ˜ํ”Œ ๋น„๋””์˜ค์— ๋Œ€ํ•œ JSON ํŒŒ์ผ์˜ ์Šค๋ƒ…์ƒท์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <img src="imgs/json_structure.png" alt="Vatic imaging"/> <p style="text-align: center;color:gray"> ๊ทธ๋ฆผ 7. JSON ์–ด๋…ธํ…Œ์ด์…˜ ํŒŒ์ผ์˜ ์Šค๋ƒ…์ƒท </p> ์ด์ œ ์ด ์ฝ”์Šค์—์„œ ์‚ฌ์šฉํ•  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๊ฐ€์ ธ์˜ต์‹œ๋‹ค. ``` #%matplotlib notebook %matplotlib inline import pylab as pl pl.rcParams['figure.figsize'] = (8, 4) import os, sys, shutil import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import matplotlib.patches as patches import io import base64 from IPython.display import HTML from IPython.display import clear_output from IPython import display import matplotlib.patches as patches from matplotlib.pyplot import cm import time import cv2 import pickle import json import sort from os.path import join from mpl_toolkits.mplot3d import Axes3D import pandas as pd ``` ๋ณธ ์ฝ”์Šค์—์„œ๋Š” ์„ค์ • ํŒŒ์ผ์„ ํ™œ์šฉํ•˜์—ฌ __data__ ์†์„ฑ์— ์ ‘๊ทผํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, __models__ ์†์„ฑ์„ ์ฐธ์กฐํ•˜๋Š” ๋‹ค๋ฅธ ์„ค์ • ํŒŒ์ผ๋“ค๋„ ์‚ฌ์šฉํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ``` import configparser config = configparser.ConfigParser() config.sections() config.read("utils/iva.conf") config = config["General"] ``` ๋ชจ๋ธ์„ ๋งŒ๋“œ๋Š” ๋ฐ ํ•„์š”ํ•œ ๋ฐ์ดํ„ฐ์˜ ์œ ํ˜•์„ ํŒŒ์•…ํ•˜๊ธฐ ์œ„ํ•ด Raw ๋ฐ์ดํ„ฐ ์ค‘ ์ผ๋ถ€๋ฅผ ์‚ดํŽด๋ด…์‹œ๋‹ค. ์‰ฝ๊ฒŒ ๋ณผ ์ˆ˜ ์žˆ๋„๋ก Raw ๋น„๋””์˜ค๋ฅผ ์ž‘๊ฒŒ ๋งŒ๋“ค์—ˆ์Šต๋‹ˆ๋‹ค. ์šฐ๋ฆฌ ํ™˜๊ฒฝ์—์„œ ์žฌ์ƒ์ด ์ž˜ ๋˜๋„๋ก ๋น„๋””์˜ค์˜ ํฌ๊ธฐ๋ฅผ ์กฐ์ ˆํ•˜๊ณ  ํ”„๋ ˆ์ž„ ์†๋„๋ฅผ ์ค„์˜€์Œ์„ ์œ ๋…ํ•˜์‹ญ์‹œ์˜ค. ``` def disp_video(fname): import io import base64 from IPython.display import HTML video = io.open(fname, 'r+b').read() encoded = base64.b64encode(video) return HTML(data='''<video alt="test" width="640" height="480" controls> <source src="data:video/mp4;base64,{0}" type="video/mp4" /> </video>'''.format(encoded.decode('ascii'))) mp4_path = 'imgs/sample.mp4' print ("Loading video...") disp_video(mp4_path) ``` ์ด๋“ค ์ค‘ ํ•˜๋‚˜๋ฅผ ๊ฒ€์‚ฌํ•˜์—ฌ Raw ๋ฐ์ดํ„ฐ๊ฐ€ ์–ด๋–ป๊ฒŒ ์ƒ๊ฒผ๋Š”์ง€ ์‚ดํŽด๋ด…์‹œ๋‹ค. ``` %%bash head -c 1000 /dli/data/videos/126_206-A0-3.json ``` <a name="3"></a> ## 3. ๋ชจ๋ธ์„ ์œ„ํ•œ ๋ฐ์ดํ„ฐ ์ค€๋น„ TensorFlow ๊ฐ์ฒด ๊ฒ€์ถœ API๋ฅผ ํ™œ์šฉํ•˜๊ณ  ๊ด€๋ จ KPI๋ฅผ ์ธก์ •ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” Raw ๋ฐ์ดํ„ฐ๋ฅผ `Pandas DataFrame` ๊ฐ์ฒด๋กœ ๋ณ€ํ™˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ ํ›„์—, ์ถ”๋ก ์„ ์œ„ํ•ด ์ด๋ฏธ์ง€์™€ ์–ด๋…ธํ…Œ์ด์…˜์„ ๊ฒฐํ•ฉํ•˜๊ณ  ๋ชจ๋ธ์˜ ์ •ํ™•๋„๋ฅผ ์ธก์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <a name="3-1"></a> ### 3.1 Raw ์–ด๋…ธํ…Œ์ด์…˜ ๋ฐ์ดํ„ฐ๋ฅผ Pandas DataFrame์— ๋„ฃ๊ธฐ ๋ณธ ์ฝ”์Šค์˜ ํ›„๋ฐ˜๋ถ€์—์„œ, ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์œ„ํ•ด TensorFlow ๋ ˆ์ฝ”๋“œ ํŒŒ์ผ, ์ฆ‰ TFRecord๋กœ ๋ฐ์ดํ„ฐ๋ฅผ ๋ณ€ํ™˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ํŒŒ์ผ๋“ค์€ ๊ธฐ๋ก ์ค‘์‹ฌ์˜ ์ด์ง„ ํŒŒ์ผ์ด๋ฉฐ TensorFlow ํ”„๋กœ์„ธ์Šค๊ฐ€ ์‰ฝ๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. TFRecord ํŠน์ง•์€ ์ด๋ฏธ์ง€ ํ”„๋ ˆ์ž„ ๋ฐ ํ•ด๋‹น ํ”„๋ ˆ์ž„๊ณผ ๊ด€๋ จ๋œ ๋ชจ๋“  ์–ด๋…ธํ…Œ์ด์…˜์„ ๋‹จ์ผ ํ–‰์œผ๋กœ ์ธ์ฝ”๋”ฉํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ฃผ์ฐจ์žฅ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ œ๊ณต๋œ ์–ด๋…ธํ…Œ์ด์…˜ ๋ฐ์ดํ„ฐ๋Š” frame_id๊ฐ€ ์•„๋‹Œ track_id์— ์˜ํ•ด ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ์ด ์ฐจ์ด์ ์œผ๋กœ ์ธํ•ด ๋ฐ์ดํ„ฐ๋ฅผ ๋ถ„๋ฅ˜ํ•˜๊ณ  ์ •๋ฆฌํ•œ ํ›„ TFRecord๋กœ ๋ณ€ํ™˜ํ•ด์•ผ๋งŒ ํ•ฉ๋‹ˆ๋‹ค. Pandas๋Š” ์ด๋Ÿฌํ•œ ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌ ๋‹จ๊ณ„๋ฅผ ๊ตฌํ˜„ํ•˜๋Š” ์ผ์— ๋งค์šฐ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์— ๋‚˜์˜ค๋Š” ์—ฐ์Šต์—์„œ๋Š” ํ•œ ๋น„๋””์˜ค์—์„œ ์–ป์€ ๋ฐ์ดํ„ฐ๋งŒ์„ ๊ฐ€์ง€๊ณ  ์ž‘์—…ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์—๋Š” ํ”„๋ ˆ์ž„๊ณผ ๋ฉ”ํƒ€๋ฐ์ดํ„ฐ๊ฐ€ ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. ์ด ์ •๋„ ์–‘์˜ ๋ฐ์ดํ„ฐ๋งŒ์œผ๋กœ๋„ ํ›ˆ๋ จ์ด ๊ฐ€๋Šฅ์€ ํ•˜์ง€๋งŒ ๋‚˜์ค‘์—๋Š” ๋” ๋งŽ์€ ๋ฐ์ดํ„ฐ๋ฅผ ์ด์šฉํ•˜์—ฌ ๋ฏธ๋ฆฌ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด ์ž‘์—…ํ•ด ๋ณผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ํŒŒ์ผ๊ณผ ๋ชจ๋ธ์€ ์‹œ๊ฐ„์„ ์ ˆ์•ฝํ•˜๊ธฐ ์œ„ํ•ด ๋ฏธ๋ฆฌ ๋งŒ๋“ค์–ด ๋‘์—ˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ Raw ์–ด๋…ธํ…Œ์ด์…˜ ํŒŒ์ผ์„ DataFrame์— ๋„ฃ๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ด…์‹œ๋‹ค. ``` with open(config["JSON_Sample"], 'r') as f: data = pd.read_json(f) ``` ํ…์ŠคํŠธ ํŒŒ์ผ ์•ˆ์— ๋“ค์–ด ์žˆ๋˜ ๋ชจ๋“  ๋ฐ์ดํ„ฐ๋Š” ์ด์ œ ๋ฐ์ดํ„ฐ ๋ณ€์ˆ˜ ์•ˆ์— ๋“ค์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ฐ์ดํ„ฐ๋ฅผ ์–ด๋–ป๊ฒŒ ๋‹ค๋ฃจ๋Š”์ง€ ์•Œ์•„๋ด…์‹œ๋‹ค. ์šฐ๋ฆฌ๋Š” ๋‹จ์ˆœํžˆ ๋ฐ์ดํ„ฐ๋ฅผ ์ถœ๋ ฅํ•˜๊ฑฐ๋‚˜, ์ถœ๋ ฅ์˜ ์–‘์„ ์ค„์ด๊ธฐ ์œ„ํ•ด head() ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ DataFrame์˜ ์ฒ˜์Œ ๋ช‡ ํ–‰๋งŒ์„ ํ‘œ์‹œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ``` print(data.iloc[0].head()) ``` Vatic ์–ด๋…ธํ…Œ์ด์…˜ ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•จ์— ๋”ฐ๋ผ ์–ด๋…ธํ…Œ์ด์…˜ ํŒŒ์ผ์—๋Š” ์ค‘๋ณต ๋ฐ์ดํ„ฐ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์ฐจ๋Ÿ‰์ด ํ”„๋ ˆ์ž„์„ ๋ฒ—์–ด๋‚ฌ์Œ์—๋„ ๊ฒฝ๊ณ„ ์ƒ์ž ์–ด๋…ธํ…Œ์ด์…˜์€ ๋น„๋””์˜ค๊ฐ€ ๋ชจ๋‘ ๋๋‚  ๋•Œ๊นŒ์ง€ ์œ ์ง€๋ฉ๋‹ˆ๋‹ค! ์ด๋Ÿฐ ์œ ํ˜•์˜ ์–ด๋…ธํ…Œ์ด์…˜์„ ๊ฑธ๋Ÿฌ๋‚ด๋Š” ์œ ์ผํ•œ ๋ฐฉ๋ฒ•์€ ์ฐจ๊ฐ€ ์‹œ์•ผ๋ฅผ ๋ฒ—์–ด๋‚˜์ž๋งˆ์ž `1`๋กœ ์„ค์ •๋˜๋Š” `outside` ์†์„ฑ์„ ํ™œ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์–ด๋…ธํ…Œ์ด์…˜๋“ค์€ ํ›ˆ๋ จ์ด๋‚˜ ํ‰๊ฐ€์— ์•„๋ฌด๋Ÿฐ ๋„์›€์ด ๋˜์ง€ ์•Š์œผ๋ฏ€๋กœ ์•ˆ์‹ฌํ•˜๊ณ  ์ œ๊ฑฐํ•ด๋„ ๋ฉ๋‹ˆ๋‹ค. ์ฃผ๋ชฉํ•ด์•ผ ํ•  ๋˜ ๋‹ค๋ฅธ ๋ฌธ์ œ์ ์€ ํ”„๋ ˆ์ž„์˜ ์ค‘๋ณต์ž…๋‹ˆ๋‹ค. ๋งŽ์€ ํ”„๋ ˆ์ž„๋“ค์— ์ž๋™์ฐจ๊ฐ€ ์ „ํ˜€ ์—†์Šต๋‹ˆ๋‹ค! (์นด๋ฉ”๋ผ์— ๋”ฐ๋ผ ๋‹ค๋ฅด์ง€๋งŒ) ์•„๋ฌด๋Ÿฐ ์›€์ง์ž„ ์—†์ด ์ฃผ์ฐจ๋œ ์ฐจ๋Ÿ‰์„ ํฌํ•จํ•œ ํ”„๋ ˆ์ž„์€ ๋”์šฑ๋” ๋งŽ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ํ”„๋ ˆ์ž„์„ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ํฌํ•จ์‹œํ‚ค๋ฉด ์ค‘๋ณต/ํŽธํ–ฅ ์ƒ˜ํ”Œ์ด ์ƒ์„ฑ๋˜๊ณ  ํ›ˆ๋ จ ํ’ˆ์งˆ์— ๋ถ€์ •์ ์ธ ์˜ํ–ฅ์„ ๋ฏธ์น  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ฌธ์ œ๋ฅผ ๊ทน๋ณตํ•˜๊ธฐ ์œ„ํ•ด ์šฐ๋ฆฌ๋Š” ์›€์ง์ด๋Š” ์ฐจ๋Ÿ‰์ด ์žˆ๋Š” ํ”„๋ ˆ์ž„๋งŒ ์‚ฌ์šฉํ•˜๊ณ  ๋‚˜๋จธ์ง€๋Š” ๋ฌด์‹œํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. <img src="imgs/similars.jpg" alt="Vatic imaging"/> <p style="text-align: center;color:gray"> ๊ทธ๋ฆผ 8. ์–ด๋…ธํ…Œ์ด์…˜์ด ์žˆ๋Š” ํ”„๋ ˆ์ž„์˜ ์ค‘๋ณต</p> ๋‹ค์Œ์˜ ์ฝ”๋“œ ์Šค๋‹ˆํŽซ์€ ์œ„์—์„œ ์–ธ๊ธ‰ํ•œ ๋ฌธ์ œ๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ณ  ์›€์ง์ด๋Š” ์ฐจ๋Ÿ‰๋งŒ ํฌํ•จํ•˜๋Š” ํ”„๋ ˆ์ž„ ๋ชฉ๋ก์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ``` tracks = data.keys() frames_list = [] frame_existance = np.zeros(15000) for i in range(len(tracks)): boxes = data[list(tracks)[i]]["boxes"] {frames_list.append(k) for k, v in boxes.items() if v['outside'] == 0 and 'Moving' in v['attributes'] and k not in frames_list} for i in frames_list: frame_existance[int(i)] = 1 ``` ์ตœ์ข… ํ”„๋ ˆ์ž„ ์„ธํŠธ๋ฅผ ์‚ดํŽด๋ด…์‹œ๋‹ค. - ์›€์ง์ด๋Š” ์ฐจ๋Ÿ‰์„ ํฌํ•จํ•˜๋Š” ํ”„๋ ˆ์ž„ - `outside` ์†์„ฑ์ด `1`์ธ ํ”„๋ ˆ์ž„์€ ์ œ๊ฑฐ๋จ ``` y_pos = np.arange(len(frame_existance)) pl.rcParams['figure.figsize'] = (18, 3) plt.bar(y_pos, frame_existance, align='center', alpha=0.5) plt.yticks([]) plt.title('Frame indices that include moving cars') plt.show() ``` ํ”„๋ ˆ์ž„์„ ๋ณด๋ฉด ์ „์ฒด ํ”„๋ ˆ์ž„ ์ค‘ ์•„์ฃผ ์ผ๋ถ€๋งŒ ์›€์ง์ด๋Š” ์ฐจ๋Ÿ‰์„ ํฌํ•จํ•˜๊ณ  ์žˆ๋Š” ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ ์ฒ˜๋ฆฌํ•  ํ”„๋ ˆ์ž„์˜ ์ˆ˜๋Š” ๊ธ‰๊ฒฉํ•˜๊ฒŒ ์ค„์–ด๋“ค์—ˆ์Šต๋‹ˆ๋‹ค. <a name="e1"></a> ### ์—ฐ์Šต 1: ์•„๋ž˜์—์„œ ์›€์ง์ด๋Š” ์ฐจ๋Ÿ‰์„ ํฌํ•จํ•œ ํ”„๋ ˆ์ž„์˜ ๋น„์œจ์„ ๊ณ„์‚ฐํ•˜์‹ญ์‹œ์˜ค. ``` # ์—ฌ๊ธฐ์—์„œ ์ฝ”๋”ฉํ•˜์„ธ์š”. ``` ํ•ด๋‹ต์„ ๋ณด๋ ค๋ฉด [์—ฌ๊ธฐ๋ฅผ](#a1) ๋ณด์„ธ์š”. ์•ž์„œ ์‚ดํŽด๋ณธ ๊ฒƒ์ฒ˜๋Ÿผ ์ตœ์ƒ์œ„ ์ˆ˜์ค€์˜ ์–ด๋…ธํ…Œ์ด์…˜์€ `track_id` ํ•„๋“œ์ž…๋‹ˆ๋‹ค. ํ•˜์œ„ ์ˆ˜์ค€์—์„œ ๊ฐ ๊ฒฝ๊ณ„ ๋ฐ•์Šค๋Š” ํ”„๋ ˆ์ž„ ๋ฒˆํ˜ธ๋กœ ๋ถ„๋ฅ˜๋ฉ๋‹ˆ๋‹ค. ์ด ๊ตฌ์กฐ๋ฅผ ํ‰ํ‰ํ•˜๊ฒŒ ๋งŒ๋“ค์–ด์•ผ๋งŒ __DataFrame__ ์„ ๋” ์ž˜ ์ดํ•ดํ•  ์ˆ˜ ์žˆ๊ณ  ์กฐ์ž‘ํ•˜๊ธฐ๋„ ์‰ฌ์›Œ์ง‘๋‹ˆ๋‹ค. ๋˜ํ•œ ์ œ๊ณต๋œ ๊ฒฝ๊ณ„ ์ƒ์ž๋Š” ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„ ํฌ๊ธฐ์˜ ์–ด๋…ธํ…Œ์ด์…˜์„ ๊ฐ€์ง€๊ณ  ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ์ˆ˜์ •ํ•  ํ•„์š”๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์–ด๋…ธํ…Œ์ด์…˜์œผ๋กœ ์ž‘์—…๋œ ํ”„๋ ˆ์ž„์€ `(611, 480)` ํฌ๊ธฐ๋ฅผ ๊ฐ€์ง‘๋‹ˆ๋‹ค. ์ œ๊ณต๋œ ๋น„๋””์˜ค์˜ ํ”„๋ ˆ์ž„ ํฌ๊ธฐ๋ฅผ ์‚ดํŽด๋ณด์‹ญ์‹œ์˜ค. ``` # get video frame size input_video = cv2.VideoCapture(config["Video_Sample"]) retVal, im = input_video.read() size = im.shape[1], im.shape[0] input_video.release() print("Video frame size (width, height):", size) ``` ๋‹ค์Œ ์ฝ”๋“œ๋Š” DataFrame์„ ํ‰ํƒ„ํ•˜๊ฒŒ ํ•˜๊ณ  ๊ธฐ์กด ํ”„๋ ˆ์ž„ ํฌ๊ธฐ์— ๋”ฐ๋ผ ๊ฒฝ๊ณ„ ์ƒ์ž๋ฅผ ์ •๊ทœํ™”ํ•ฉ๋‹ˆ๋‹ค. ์ด๊ฒƒ์€ ์‹œ๊ฐ„์ด ๋งŽ์ด ์†Œ์š”๋˜๋Š” ๊ณผ์ •์ด๋ฏ€๋กœ ์ฒ˜๋ฆฌํ•  ํŠธ๋ž™์˜ ์ˆ˜๋ฅผ `1`๋กœ ์ œํ•œํ–ˆ์Šต๋‹ˆ๋‹ค. (๋Œ€์‹ , ์ฒ˜๋ฆฌ๋œ ๋ฐ์ดํ„ฐ๋ฅผ ํ…์ŠคํŠธ ํŒŒ์ผ๋กœ๋ถ€ํ„ฐ ์ฝ์„ ๊ฒƒ ์ž…๋‹ˆ๋‹ค.). ๋™์ž‘ํ•˜๋Š” ์˜ˆ์ œ๋ฅผ ๋ณด๋ ค๋ฉด ์ฝ”๋“œ๋ฅผ ๋ชจ๋‘ ์„ ํƒํ•˜๊ณ  `Ctrl + /`๋ฅผ ๋ˆŒ๋Ÿฌ ์ปค๋ฉ˜ํŠธ๋ฅผ ์—†์•ค ํ›„, ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•˜์‹ญ์‹œ์˜ค. ``` # print("processing length:", len(frames_list)) # annotated_frames = pd.DataFrame() # ANNOTATE_SIZE = (611, 480) # limit = 1 #set this limit to avoid timely DataFrame generation # if len(frames_list) > 0: # for i in range(len(tracks)): # # remove the following line if the DataFrame is not read from CSV file # if i == limit: break # boxes = data[list(tracks)[i]]["boxes"] # print("\rprocessing track no: {}".format(i), end = '') # for k, v in boxes.items(): # if k in frames_list:# and v['outside']!=1: # # resizing the annotations # xmin, ymin, xmax, ymax = v["xtl"], v["ytl"], v["xbr"], v["ybr"] # xmin = int((float(xmin) / ANNOTATE_SIZE[0]) * size[0]) # xmax = int((float(xmax) / ANNOTATE_SIZE[0]) * size[0]) # ymin = int((float(ymin) / ANNOTATE_SIZE[1]) * size[1]) # ymax = int((float(ymax) / ANNOTATE_SIZE[1]) * size[1]) # annotated_frames = annotated_frames.append(pd.DataFrame({ # "frame_no": int(k), # "track_id": [list(tracks)[i]], # "occluded": [v["occluded"]], # "outside": [v["outside"]], # "xmin": [xmin], # "ymin": [ymin], # "xmax": [xmax], # "ymax": [ymax], # "label": ['vehicle'], # "attributes": [','.join(v["attributes"])], # "crop": [(0,0,0,0)], # "camera": config["Test_Video_ID"] # }), ignore_index=True) ``` ์šฐ๋ฆฌ๋Š” ํ”„๋ ˆ์ž„์„ ์˜คํ”„๋ผ์ธ์—์„œ ์ฒ˜๋ฆฌํ•ด์„œ ํ…์ŠคํŠธ ํŒŒ์ผ์— ๊ธฐ๋กํ–ˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ๋‹จ๊ณ„๋Š” DataFrame์„ ์ฝ์–ด๋“ค์ด๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ``` import ast annotated_frames = pd.read_csv(config['Path_To_DF_File'], converters={2:ast.literal_eval}) print("Length of the full DF object:", len(annotated_frames)) annotated_frames.head() ``` ์–ด๋…ธํ…Œ์ด์…˜์„ ๋‹จ ํ”„๋ ˆ์ž„์—๋Š” ์ œ๊ฑฐํ•ด์•ผ ํ•  *outside* ์ฐจ๋Ÿ‰์ด ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. ``` occluded_filter = annotated_frames["outside"] == 0 annotated_frames = annotated_frames[occluded_filter] annotated_frames.head() ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ "occluded"๋กœ ํ‘œ์‹œ๋œ ๊ฐ์ฒด์˜ ์ˆ˜๋ฅผ ์ฐพ์•„๋ด…์‹œ๋‹ค. ๋ถ€์šธ ํ•„ํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ํŽธ๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ``` occluded_filter = annotated_frames["occluded"] == 1 occluded_only = annotated_frames[occluded_filter] print ('Total number of occluded objects: {}'.format(len(occluded_only))) occluded_only.head() ``` <a name="e2"></a> ### ์—ฐ์Šต 2: `annotation_frames` ๊ฐ์ฒด์—๋Š” ์šฐ๋ฆฌ๊ฐ€ ์‚ฌ์šฉํ•œ ๋ฐ์ดํ„ฐ ์ปฌ๋Ÿผ ์™ธ์—๋„ ๊ตฌ์กฐํ™”๋˜์ง€ ์•Š์€ ๋ ˆ์ด๋ธ”์ด ๋ช‡ ๊ฐœ ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. `attributes` ์ปฌ๋Ÿผ์— ์ด๋Ÿฐ ๊ฐ’์ด ๋“ค์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋“ค ๊ฐ’ ์ค‘ ํ•˜๋‚˜๊ฐ€ ์ฐจ๋Ÿ‰ ์œ ํ˜•(์„ธ๋‹จ, SUV ๋“ฑ)์ž…๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์ฐจ๋Ÿ‰ ์ค‘ ์„ธ๋‹จ ์ฐจ๋Ÿ‰์ด ๋ช‡ ๋Œ€์ธ์ง€ ํ™•์ธํ•ด ๋ณด์‹ญ์‹œ์˜ค. ``` #์—ฌ๊ธฐ์—์„œ ์ฝ”๋”ฉํ•˜์„ธ์š”. ``` ํ•ด๋‹ต์€ [์—ฌ๊ธฐ์—](#a2) ์žˆ์Šต๋‹ˆ๋‹ค. <a name="4"></a> ## 4. ๋น„๋””์˜ค ๋ฐ์ดํ„ฐ ๊ฐ€์ง€๊ณ  ์ž‘์—…ํ•˜๊ธฐ ์•ž์„œ ๋ณด์•˜๋“ฏ์ด ์ผ๋ถ€ ์ฐจ๋Ÿ‰์€ ํ™”๋ฉด์— ๋น„ํ•ด ํฌ๊ธฐ๊ฐ€ ์ž‘์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ ๋น„๋””์˜ค๋Š” ์ •์‚ฌ๊ฐํ˜•์ด ์•„๋‹Œ ๋น„์œจ์„ ๊ฐ€์ง‘๋‹ˆ๋‹ค. ์ด๋Ÿฐ ๊ฒƒ๋“ค์€ ๋‚˜์ค‘์— ์šฐ๋ฆฌ๊ฐ€ ํ›ˆ๋ จ ์ค€๋น„๋ฅผ ํ•  ๋•Œ ๋ช…์‹ฌํ•˜๊ณ  ๊ณ ๋ คํ•ด์•ผ ํ•  ์‚ฌํ•ญ๋“ค์ž…๋‹ˆ๋‹ค. <a name="4-1"></a> ### 4.1 ๋น„๋””์˜ค ํŒŒ์ผ์„ ํ”„๋ ˆ์ž„ ์ด๋ฏธ์ง€๋กœ ๋ฐ”๊พธ๊ธฐ ๊ฐ์ฒด ๊ฒ€์ถœ ๋ชจ๋ธ์€ ํ”„๋ ˆ์ž„ ๊ธฐ๋ฐ˜ ๋ฐ์ดํ„ฐ์—์„œ ์ž‘๋™ํ•˜๊ธฐ ๋•Œ๋ฌธ์—, ์šฐ๋ฆฌ๋Š” ์›๋ž˜์˜ ๋™์˜์ƒ ํŒŒ์ผ์—์„œ ํ”„๋ ˆ์ž„์„ ์ƒ์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๊ธฐ ์œ„ํ•ด, OpenCV๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋น„๋””์˜ค ํŒŒ์ผ์„ ์—ฝ๋‹ˆ๋‹ค. mp4 ๋™์˜์ƒ ํŒŒ์ผ์„ ์‚ฌ์šฉํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์—ฌ๊ธฐ์„œ๋Š” ์–ด๋…ธํ…Œ์ด์…˜์ด ์žˆ๋Š” ๋ชจ๋“  ํ”„๋ ˆ์ž„์„ ์ €์žฅํ•  ๊ฒƒ์ด์ง€๋งŒ ๋งค n ๋ฒˆ์งธ ํ”„๋ ˆ์ž„๋งŒ ๊ณจ๋ผ์„œ ์ €์žฅํ•  ์ˆ˜ ์žˆ๋Š” ๋ฐฉ๋ฒ•์„ ์—ฌ๋Ÿฌ๋ถ„์ด ์Šค์Šค๋กœ ์ฐพ์•„๋‚ผ ์ˆ˜ ์žˆ๋Š”์ง€๋„ ํ™•์ธํ•ด ๋ณผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋น„๋””์˜ค ํ”„๋ ˆ์ž„์„ `jpg` ์ด๋ฏธ์ง€๋กœ ๋ณ€ํ™˜ํ•˜๋ฉด์„œ ์–ด๋…ธํ…Œ์ด์…˜์„ ๊ฒฝ๊ณ„ ๋ฐ•์Šค๋กœ ํ‘œ์‹œํ•˜๋Š” ๋™์˜์ƒ๋„ ๋งŒ๋“ค ๊ฒƒ์ž…๋‹ˆ๋‹ค. ``` colors = [(255, 255, 0), (255, 0, 255), (0, 255, 255), (0, 0, 255), (255, 0, 0), (0, 255, 0), (0, 0, 0), (255, 100, 0), (100, 255, 0), (100, 0, 255), (255, 0, 100)] def save_images(video_path, image_folder, frames_list, annotated_frames, video_out_path = '', fps=10): if not os.path.exists(image_folder): print("Creating image folder") os.makedirs(image_folder) input_video = cv2.VideoCapture(video_path) retVal, im = input_video.read() size = im.shape[1], im.shape[0] fourcc = cv2.VideoWriter_fourcc('h','2','6','4') output_video = cv2.VideoWriter(video_out_path, fourcc, fps, size) if not input_video.isOpened(): print("Sorry, couldn't open video") return frameCount = 0 index_ = 1 while retVal: #print("\r Processing frame no:", frameCount, end = '') if str(frameCount) in frames_list: print("\rsaving frame no:{}, index:{} out of {}".format(frameCount,index_,len(frames_list)), end = '') cv2.imwrite(join(image_folder, '{}.jpg'.format(frameCount)), im) index_ += 1 #print("frame:",'{}.jpg'.format(frameCount)) frame_items = annotated_frames[annotated_frames["frame_no"]==int(frameCount)] for index, box in frame_items.iterrows(): #print(box["crop"]) xmin, ymin, xmax, ymax = box["xmin"], box["ymin"], box["xmax"], box["ymax"] xmin2, ymin2, xmax2, ymax2 = box["crop"][0], box["crop"][1], box["crop"][2], box["crop"][3] cv2.rectangle(im, (xmin, ymin), (xmax, ymax), colors[0], 1) cv2.rectangle(im, (int(xmin2), int(ymin2)), (int(xmax2), int(ymax2)), colors[1], 1) output_video.write(im) retVal, im = input_video.read() frameCount += 1 input_video.release() output_video.release() return size ``` ์šฐ๋ฆฌ ๋น„๋””์˜ค ์ƒ˜ํ”Œ์— ๋Œ€ํ•˜์—ฌ ์•„๋ž˜ ํ•จ์ˆ˜๋ฅผ ํ˜ธ์ถœํ•˜๋ฉด ์‹คํ–‰์ด ์™„๋ฃŒ๋˜๊ธฐ๊นŒ์ง€ ์‹œ๊ฐ„์ด ์ข€ ๊ฑธ๋ฆด ๊ฒƒ์ž…๋‹ˆ๋‹ค. ``` save_images(config["Video_Sample"], '{}/images/{}'.format(config["Base_Dest_Folder"], config["Test_Video_ID"]), frames_list, annotated_frames, '{}/videos/{}.mp4'.format(config["Base_Dest_Folder"], config["Test_Video_ID"])) ``` frame_no๋ฅผ ๊ธฐ์ค€์œผ๋กœ ํ”„๋ ˆ์ž„์„ ์ •๋ ฌํ•˜๊ณ  ์ „์ฒด ์žฅ๋ฉด์—์„œ ๊ณ ์œ  ์ฐจ๋Ÿ‰์˜ ์ˆ˜๋ฅผ ์ถ”์ถœํ•ด ๋ด…์‹œ๋‹ค. ``` annotated_frames = annotated_frames.sort_values(by=['frame_no']) print("Number of unique track IDs in the video:", annotated_frames['track_id'].nunique()) ``` ๋˜ํ•œ ๊ฐ ๋Œ€์ƒ ํด๋ž˜์Šค(์ด ๊ฒฝ์šฐ ์ฐจ๋Ÿ‰)์— ์žˆ๋Š” ํ‰๊ท  ํ”ฝ์…€์„ ํ™•์ธํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๊ฒƒ์€ ๊ฐ ์–ด๋…ธํ…Œ์ด์…˜๊ณผ ์—ฐ๊ด€๋œ ๊ฒฝ๊ณ„ ์ƒ์ž ์ขŒํ‘œ๋ฅผ ์‚ฌ์šฉํ•œ ๊ฐ„๋‹จํ•œ ๋ฉด์  ๊ณ„์‚ฐ์ž…๋‹ˆ๋‹ค. ๋„ํ‘œ๋ฅผ ํ†ตํ•ด ๊ฐ "track_id"์— ๋Œ€ํ•œ ํ‰๊ท  ๋ฉด์ ์˜ ํžˆ์Šคํ† ๊ทธ๋žจ ๋ถ„ํฌ๋ฅผ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. ``` import matplotlib.pyplot as plt def calc_targ_area(row): area = (row['xmax'] - row['xmin']) * (row['ymax'] - row['ymin']) row['area'] = area return row #filter for frames that include items inside_items = annotated_frames[annotated_frames['outside']==0] # Group the data by label and calculate the area for each annotation of that type label_groups = inside_items.groupby(['track_id']).apply(calc_targ_area) label_groups = label_groups.groupby(['track_id']).mean() # Build up and view a histogram y_pos = np.arange(len(label_groups)) plt.bar(y_pos, label_groups["area"], align='center', alpha=0.5) plt.title('Average area of each vehicle in the video') plt.xlabel("Track ID") plt.ylabel("Area") plt.show() ``` <a name="e3"></a> ### ์—ฐ์Šต 3 ๋ฐ์ดํ„ฐ๋ฅผ ์ข€ ๋” ์กฐ์‚ฌํ•˜์‹ญ์‹œ์˜ค. ๊ฒฝ๊ณ„ ์ƒ์ž์˜ ํ‰๊ท  ๋„ˆ๋น„์™€ ๋†’์ด๋ฅผ ์ฐพ์•„๋ณด๋ฉด ํฅ๋ฏธ๋กœ์šธ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ``` # ์—ฌ๊ธฐ์—์„œ ์ฝ”๋”ฉํ•˜์„ธ์š”. annotated_frames.head() ``` ํ•ด๋‹ต์€ [์—ฌ๊ธฐ์—](#a3) ์žˆ์Šต๋‹ˆ๋‹ค. <a name="5"></a> ## 5. ์ถ”๋ก  <a name="5-1"></a> ### 5.1 ํ•œ ํ”„๋ ˆ์ž„์”ฉ ๊ฒ€์ถœํ•˜๊ธฐ ํ›ˆ๋ จ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์‹คํ–‰ํ•˜๊ธฐ ์ „์— ResNet, NasNet ๋ฐ SSD์™€ ํ•จ๊ป˜ Faster RCNN์— ๋Œ€ํ•œ ์ถ”๋ก  ํ”„๋กœ์„ธ์Šค๋ฅผ ์‚ดํŽด๋ณผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ ์šฐ๋ฆฌ๋Š” AVI ๋ฐ์ดํ„ฐ ํ”„๋ ˆ์ž„ ์•ˆ์— ์žˆ๋Š” ๊ฐ์ฒด๋ฅผ ๊ฒ€์ถœํ•˜๊ธฐ ์œ„ํ•ด ์ตœ์ ํ™”๋œ ๋ชจ๋ธ์ธ ์ถ”๋ก  ๊ทธ๋ž˜ํ”„๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜ ํ•จ์ˆ˜๋“ค์—์„œ ์šฐ๋ฆฌ๋Š” ๊ทธ๋ž˜ํ”„๋ฅผ ๋กœ๋“œํ•˜๊ณ , ์„ธ์…˜์„ ์ƒ์„ฑํ•œ ํ›„, ๋ง์„ ์ˆœ๋ฐฉํ–ฅ์œผ๋กœ ๋”ฐ๋ผ ๋‚ด๋ ค๊ฐ€๋Š” ๋ฃจํ”„๋ฅผ ๋•๋‹ˆ๋‹ค. TensorFlow ๊ทธ๋ž˜ํ”„๋Š” ๋ชจ๋ธ์˜ ์ž‘๋™ ๊ฐ„ ์ข…์†์„ฑ์„ ์ •์˜ํ•˜๋ฉฐ TensorFlow ์„ธ์…˜์€ ํ•˜๋‚˜ ์ด์ƒ์˜ ์žฅ์น˜์— ๊ฑธ์ณ ๊ทธ๋ž˜ํ”„์˜ ์ผ๋ถ€๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ž˜ํ”„ ๋ฐ ์„ธ์…˜์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ TensorFlow ๋ฌธ์„œ๋ฅผ ์ฐธ๊ณ ํ•˜์‹ญ์‹œ์˜ค. ๋˜ํ•œ ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ์‚ฌ์ „ ์ •์˜๋œ ์ œ์•ˆ ์ˆ˜์— ๋Œ€ํ•œ ์ ์ˆ˜, ๊ฒฝ๊ณ„ ์ƒ์ž ์œ„์น˜ ๋ฐ ํด๋ž˜์Šค๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ œ์•ˆํ•˜๋Š” ๊ฐœ์ˆ˜๋Š” ๋ชจ๋ธ์˜ ์„ค์ • ํŒŒ์ผ์—์„œ ๋ณ€๊ฒฝํ•  ์ˆ˜ ์žˆ๋Š”๋ฐ, ์ˆ˜๋ฅผ ์ค„์ด๋ฉด ์„ฑ๋Šฅ์€ ํ–ฅ์ƒ๋˜์ง€๋งŒ ๋ชจ๋ธ์˜ ์ •ํ™•์„ฑ์— ๋ถ€์ •์ ์ธ ์˜ํ–ฅ์„ ๋ฏธ์น  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ๋˜ํ•œ ์ฃผ์–ด์ง„ ํ”„๋ ˆ์ž„์— ๋Œ€ํ•œ ground-truth ๋ฐ์ดํ„ฐ๋ฅผ ์ถ”์ถœํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ์ด ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ground-truth ๋ฐ์ดํ„ฐ์™€ ์ถ”๋ก ๋œ ๋ฐ์ดํ„ฐ๋ฅผ ๋น„๊ตํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ``` def get_info_from_DF(frame_no): result = [] temp = annotated_frames[annotated_frames["frame_no"] == frame_no] for i, box in temp.iterrows(): result.append([int(box["xmin"]), int(box["ymin"]), int(box["xmax"]), int(box["ymax"])]) return result from object_detection.utils import label_map_util from object_detection.utils import visualization_utils as vis_util def detect_frames(path_to_graph, path_to_labels, data_folder, video_path, min_index, max_index, frame_rate, threshold): # We load the label maps and access category names and their associated indicies label_map = label_map_util.load_labelmap(path_to_labels) categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=90, use_display_name=True) category_index = label_map_util.create_category_index(categories) # Import a graph by reading it as a string, parsing this string then importing it using the tf.import_graph_def command print('Importing graph...') detection_graph = tf.Graph() with detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(path_to_graph, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') # Generate a video object fourcc = cv2.VideoWriter_fourcc('h','2','6','4') print('Starting session...') with detection_graph.as_default(): with tf.Session(graph=detection_graph) as sess: # Define input and output Tensors for detection_graph image_tensor = detection_graph.get_tensor_by_name('image_tensor:0') # Each box represents a part of the image where a particular object was detected. detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0') # Each score represent how level of confidence for each of the objects. # Score is shown on the result image, together with the class label. detection_scores = detection_graph.get_tensor_by_name('detection_scores:0') detection_classes = detection_graph.get_tensor_by_name('detection_classes:0') num_detections = detection_graph.get_tensor_by_name('num_detections:0') frames_path = data_folder num_frames = max_index - min_index reference_image = os.listdir(data_folder)[0] image = cv2.imread(join(data_folder, reference_image)) height, width, channels = image.shape out = cv2.VideoWriter(video_path, fourcc, frame_rate, (width, height)) print('Running Inference:') total_time = 0 for fdx, file_name in \ enumerate(sorted(os.listdir(data_folder), key=lambda fname: int(fname.split('.')[0]) )): if fdx<=min_index or fdx>=max_index: continue; image = cv2.imread(join(frames_path, file_name)) image_np = np.array(image) # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) bboxes = get_info_from_DF(int(file_name.split('.')[0])) # Actual detection. tic = time.time() (boxes, scores, classes, num) = sess.run( [detection_boxes, detection_scores, detection_classes, num_detections], feed_dict={image_tensor: image_np_expanded}) toc = time.time() t_diff = toc - tic total_time = total_time + t_diff # Visualization of the results of a detection. vis_util.visualize_boxes_and_labels_on_image_array( image, np.squeeze(boxes), np.squeeze(classes).astype(np.int32), np.squeeze(scores), category_index, use_normalized_coordinates=True, line_thickness=2, min_score_thresh= threshold) cv2.putText(image, 'frame: {}'.format(file_name), (30, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255)) for bbox in bboxes: cv2.rectangle(image, (int(bbox[0]), int(bbox[1])), (int(bbox[2]), int(bbox[3])), (0, 0, 255), 2) cv2.putText(image, 'FPS (GPU Inference) %.2f' % round(1 / t_diff, 2), (30, 60), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255)) prog = 'Completed %.2f%% in %.2f seconds' % ((100 * float(fdx - min_index + 1) / num_frames), total_time) print('\r{}'.format(prog), end = "") cv2.imwrite("data/temp/{}.jpg".format(fdx), image) out.write(image) out.release() ``` ์•ž์—์„œ ๊ฒ€ํ† ํ•œ ์„ธ ๊ฐ€์ง€ ๋ชจ๋ธ, "RCNN", "SSD", "NasNet"์˜ ๋ณ€ํ˜•์„ ๋ถ„์„ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. [TensorFlow Model Zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models)์—์„œ ์—ฌ๋Ÿฌ๊ฐ€์ง€ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ๊ฒ€ํ† ํ•ด ๋ณผ ์ˆ˜ ์žˆ๋Š”๋ฐ, COCO ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์œผ๋กœ ํ›ˆ๋ จ ์ค‘์— ๋‚˜์˜จ ๋ชจ๋ธ์˜ ์ฒดํฌํฌ์ธํŠธ๋„ ๋‹ค์šด๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `COCO-trained models`๋ผ๋Š” ์ œ๋ชฉ์˜ ์„น์…˜์„ ๋ฐฉ๋ฌธํ•˜์—ฌ `speed` ์™€ `Mean Average Precision` ์ธก์ •๊ฐ’์„ ๋น„๊ตํ•ด ๋ณด์‹ญ์‹œ์˜ค. ์šฐ๋ฆฌ๋Š” ๋‹ค์Œ์— ์„ธ ๊ฐ€์ง€ ๋ชจ๋ธ์„ ์‹œํ—˜ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์„ธ ๊ฐ€์ง€ ๋ชจ๋ธ ์ค‘์—์„œ ์–ด๋–ค ๋ชจ๋ธ์ด ๊ฐ€์žฅ ๋น ๋ฅธ์ง€ ์•Œ์•„๋ณด์„ธ์š”. ๋˜ํ•œ ์–ด๋–ค ๋ชจ๋ธ์ด ๊ฐ€์žฅ ์ •ํ™•ํ•ฉ๋‹ˆ๊นŒ? ``` models = {'faster_rcnn_resnet_101': '/dli/data/tmp/faster_rcnn_resnet/frozen_inference_graph.pb' , 'nasnet': '/dli/data/tmp/faster_rcnn_nas/faster_rcnn_nas_coco_2018_01_28/frozen_inference_graph.pb', 'ssd_mobilenet_v2':'/dli/data/tmp/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb'} image_folder = '{}/images/{}'.format(config["Base_Dest_Folder"], config["Test_Video_ID"]) model_name = 'faster_rcnn_resnet_101' PATH_TO_LABELS = config["Path_To_COCO_Labels"] PATH_TO_DATA = image_folder VIDEO_OUT_PATH = 'imgs/inference_COCO.mp4' ``` ๋‹ค์Œ์œผ๋กœ, ์šฐ๋ฆฌ๋Š” ๊ฐ ๋ชจ๋ธ์— ๋Œ€ํ•œ ์ถ”๋ก  ๊ณผ์ •์„ ์‹คํ–‰ํ•˜๊ณ , ๋‹น๋ถ„๊ฐ„์€ ๊ทธ ๊ฒฐ๊ณผ๋ฅผ ์‹œ๊ฐ์ ์œผ๋กœ ๋น„๊ตํ•˜๋ ค๊ณ  ๋…ธ๋ ฅํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. `detect_frames` ํ•จ์ˆ˜๋ฅผ ํ˜ธ์ถœํ•  ๋•Œ ์ถ”๋ก ์„ ์œ„ํ•œ ์ตœ์†Œ ๋ฐ ์ตœ๋Œ€ ํ”„๋ ˆ์ž„ ์ธ๋ฑ์Šค ๊ฐ’์„ ๋„˜๊ฒจ์ค„ ์ˆ˜ ์žˆ๋Š”๋ฐ, ์—ฌ๊ธฐ์—์„œ๋Š” ์‹œ๊ฐ„ ์ ˆ์•ฝ์„ ์œ„ํ•˜์—ฌ "100"์—์„œ "200"์œผ๋กœ ์„ค์ •ํ–ˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ ๋„คํŠธ์›Œํฌ ์ถœ๋ ฅ์„ ์‹ ๋ขฐ๋„์— ๋”ฐ๋ผ ์ปท์•„์›ƒ ํ•  ์ˆ˜ ์žˆ๋Š” ์ž„๊ณ„๊ฐ’์„ ์ค„ ์ˆ˜๋„ ์žˆ๋Š”๋ฐ, ๊ธฐ๋ณธ์ ์œผ๋กœ 0.5๋กœ ์„ค์ •๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ``` # select model name: possible values: # 'faster_rcnn_resnet_101' # 'nasnet' # 'ssd_mobilenet_v2' model_name = 'faster_rcnn_resnet_101' detect_frames(models[model_name], PATH_TO_LABELS, PATH_TO_DATA, VIDEO_OUT_PATH, 100, 200, 10, 0.5) disp_video(VIDEO_OUT_PATH) ``` ์ •์„ฑ์ ์ธ ์ธก๋ฉด์—์„œ NasNet ๋ชจ๋ธ์ด ์„ธ ๊ฐ€์ง€ ์ค‘์—์„œ ๊ฐ€์žฅ ์ข‹์€ ๊ฒฐ๊ณผ๋ฅผ ๋ƒ…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ •ํ™•๋„ ๋ฉด์—์„œ 2์œ„ ๋ชจ๋ธ์ธ `faster_rcnn_resnet_101`์— ๋น„ํ•ด ์ถ”๋ก  ์‹œ๊ฐ„์ด 3 ๋ฐฐ ์ •๋„ ์ฆ๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. (๋‹ค์Œ ์ฑ•ํ„ฐ์—์„œ ์ •ํ™•๋„๋ฅผ ์ •๋Ÿ‰ํ™”ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค.) ๋ฐ˜๋ฉด SSD๋Š” ๋‹ค๋ฅธ ๋‘ ๊ฐ€์ง€์— ๋น„ํ•ด ๊ฐ€์žฅ ๋‚ฎ์€ ํ’ˆ์งˆ์˜ ๊ฒฐ๊ณผ๋ฅผ ๋ƒˆ์Šต๋‹ˆ๋‹ค. SSD๊ฐ€ ์œ ์šฉํ•˜์ง€ ์•Š๋‹ค๋Š” ๋œป์€ ์•„๋‹™๋‹ˆ๋‹ค. SSD๋Š” ๋งค์šฐ ํšจ์œจ์ ์ธ ๊ฒ€์ถœ๊ธฐ์ง€๋งŒ ๋”์šฑ ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ์˜ฌ๋ฆฌ๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋Œ€๊ทœ๋ชจ ๋ฐ์ดํ„ฐ ์„ธํŠธ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. <a name="5-2"></a> ### 5.2 ์ •๋Ÿ‰์  ๋ถ„์„ - Intersection over Union ๋ชจ๋ธ์ด ์–ด๋–ค ์„ฑ๋Šฅ์„ ๊ฐ€์ง€๋Š”์ง€๋ฅผ ์ •๋Ÿ‰์ ์œผ๋กœ ํŒ๋‹จํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š”, ์ ์–ด๋„ ๊ฒ€์ถœ์˜ ๊ด€์ ์—์„œ๋Š” IoU(Intersection over Union) ๊ณ„์‚ฐ๊ณผ False Negative rate๋ฅผ ๊ณ ๋ คํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ์ฒด ๊ฒ€์ถœ์˜ ๊ฒฝ์šฐ, ๊ฒ€์ถœ ์‹ ๋ขฐ๋„๊ฐ€ ๊ณ ์ • ์ž„๊ณ„๊ฐ’ ์ด์ƒ์ผ ๋•Œ IoU๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ๊ฒƒ์ด ํ˜„๋ช…ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด, 300๊ฐœ์˜ ์ถœ๋ ฅ์—์„œ ๋‚˜์˜จ ๋ชจ๋“  ๊ฒฝ๊ณ„ ์ƒ์ž๋ฅผ ๊ณ ๋ คํ•ด์•ผ๋งŒ ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ž„๊ณ„๊ฐ’์„ 0.5๋กœ ์„ค์ •ํ•˜๋Š” ๊ฒƒ์ด ์ผ๋ฐ˜์ ์ž…๋‹ˆ๋‹ค. ๋˜ํ•œ ๊ฐ ๋ชจ๋ธ์˜ ํ”„๋ ˆ์ž„๋ฅ (fps - ์ดˆ๋‹น ํ”„๋ ˆ์ž„ ์ˆ˜)๋„ ๊ณ ๋ คํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ฐœ๋…๋“ค์€ ์ด์ „ DLI ์ฝ”์Šค์—์„œ ์ด๋ฏธ ์‚ดํŽด๋ณธ ๊ฒƒ๋“ค์ž…๋‹ˆ๋‹ค. <img src="imgs/IoU.jpg" alt="meta_arch" style="width: 600px;"/> <p style="text-align: center;color:gray"> ๊ทธ๋ฆผ 9. IoU ์ธก์ • </p> ์—ฌ๊ธฐ์—์„œ๋Š” ๊ฐ ํ”„๋ ˆ์ž„์—์„œ ๊ฐ ์ •๋‹ต ๊ฒฝ๊ณ„ ์ƒ์ž๋ฅผ ๊ฒ€์ถœํ•  ๋•Œ๋งˆ๋‹ค IoU๋ฅผ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค. ์ด๊ฒƒ์€ ๊ฒ€์ถœ๊ธฐ์˜ ์„ฑ๋Šฅ์„ ์ธก์ •ํ•˜๋Š” ์ข‹์€ ๋ฐฉ๋ฒ•์ด์ง€๋งŒ ์‹ค์ „์— ์ ์šฉํ•  ๋•Œ ํ•„์š”ํ•œ ์ค‘์š”ํ•œ ์ •๋ณด๊ฐ€ ํ•˜๋‚˜ ๋น ์ ธ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‚˜์ค‘์— ๋” ์‚ดํŽด๋ณผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ฒซ์งธ, ์šฐ๋ฆฌ๋Š” ๊ฐ ๋น„๋””์˜ค์˜ ๋ชจ๋“  ํ”„๋ ˆ์ž„์— ๋Œ€ํ•œ ๋ชจ๋ธ, ๊ฒ€์ถœ, ์ •๋‹ต ๊ฒฝ๊ณ„ ์ƒ์ž ๋ฐ ์ ์ˆ˜์˜ ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ์ƒ์„ฑํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์•„๋ž˜ ์ฝ”๋“œ์—์„œ ์ปค๋ฉ˜ํŠธ ์ฒ˜๋ฆฌ๋œ ๋ถ€๋ถ„์„ ๋ณด์„ธ์š”. ์ปค๋ฉ˜ํŠธ ์ฒ˜๋ฆฌ๋œ ์ด์œ ๋Š” ์‹คํ–‰ํ•˜๋Š”๋ฐ ์‹œ๊ฐ„์ด ์˜ค๋ž˜ ๊ฑธ๋ฆฌ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๋ณธ ๋žฉ์—์„œ๋Š” ๋ชจ๋ธ ๋น„๊ต๋ฅผ ์œ„ํ•ด pickle๋กœ ์••์ถ•๋œ ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ``` def detect_frames_for_comparison(path_to_graph, path_to_labels, data_folder, min_index, max_index): # We load the label maps and access category names and their associated indicies label_map = label_map_util.load_labelmap(path_to_labels) categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=1, use_display_name=True) category_index = label_map_util.create_category_index(categories) # Import a graph by reading it as a string, parsing this string then importing it using the tf.import_graph_def command print('Importing graph...') detection_graph = tf.Graph() with detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(path_to_graph, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') # Generate a video object print('Starting session...') output = [] with detection_graph.as_default(): with tf.Session(graph=detection_graph) as sess: # Define input and output Tensors for detection_graph image_tensor = detection_graph.get_tensor_by_name('image_tensor:0') # Each box represents a part of the image where a particular object was detected. detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0') # Each score represent how level of confidence for each of the objects. # Score is shown on the result image, together with the class label. detection_scores = detection_graph.get_tensor_by_name('detection_scores:0') detection_classes = detection_graph.get_tensor_by_name('detection_classes:0') num_detections = detection_graph.get_tensor_by_name('num_detections:0') frames_path = data_folder xml_path = join(data_folder, 'xml') num_frames = max_index - min_index reference_image = os.listdir(data_folder)[0] image = cv2.imread(join(data_folder, reference_image)) height, width, channels = image.shape print('Running Inference:') for fdx, file_name in \ enumerate(sorted(os.listdir(data_folder), key=lambda fname: int(fname.split('.')[0]) )): if fdx<=min_index or fdx>=max_index: continue; image = cv2.imread(join(frames_path, file_name)) image_np = np.array(image) # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) bboxes = get_info_from_DF(int(file_name.split(".")[0])) # Actual detection. tic = time.time() (boxes, scores, classes, num) = sess.run( [detection_boxes, detection_scores, detection_classes, num_detections], feed_dict={image_tensor: image_np_expanded}) toc = time.time() t_diff = toc - tic fps = 1/t_diff boxes = np.squeeze(boxes) classes = np.squeeze(classes) scores = np.squeeze(scores) vis_util.visualize_boxes_and_labels_on_image_array( image, boxes, classes.astype(np.int32), scores, category_index, use_normalized_coordinates=True, line_thickness=2, min_score_thresh=0.5) #cv2.imwrite(join('/dli/dli-v3/iv05/data/temp', file_name),image) prog = '\rCompleted %.2f %%' % (100 * float(fdx - min_index + 1) / num_frames) print('{}'.format(prog), end = "") boxes = np.array([(i[0]*height, i[1]*width, i[2]*height, i[3]*width) for i in boxes]) output.append((bboxes, (boxes, scores, classes, num, fps))) return output PATH_TO_DATA = image_folder model_name = 'faster_rcnn_resnet_101' detections = detect_frames_for_comparison(models[model_name], PATH_TO_LABELS, PATH_TO_DATA, 100, 200) ``` ๋‘ ๊ฐœ์˜ ์ขŒํ‘œ๊ฐ’์ด ์ฃผ์–ด์ง€๋ฉด `bbox_IoU` ํ•จ์ˆ˜๊ฐ€ IoU ๊ฐ’์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ``` # function to compute the intersection over union of these two bounding boxes def bbox_IoU(A, B): # A = list(ymin,xmin,ymax,xmax) # B = list(ymin,xmin,ymax,xmax) - (xmin, ymin, xmax, ymax) # assign for readability yminA, xminA, ymaxA, xmaxA = A xminB, yminB, xmaxB, ymaxB = B # figure out the intersecting rectangle coordinates xminI = max(xminA, xminB) yminI = max(yminA, yminB) xmaxI = min(xmaxA, xmaxB) ymaxI = min(ymaxA, ymaxB) # compute the width and height of the interesecting rectangle wI = xmaxI - xminI hI = ymaxI - yminI # compute the area of intersection rectangle (enforce area>=0) areaI = max(0, wI) * max(0, hI) # compute areas of the input bounding boxes areaA = (xmaxA - xminA) * (ymaxA - yminA) areaB = (xmaxB - xminB) * (ymaxB - yminB) # if intersecting area is zero, we're done (avoids IoU=0/0 also) if areaI == 0: return 0, areaI, areaA, areaB # finally, compute and return the intersection over union return areaI / (areaA + areaB - areaI), areaI, areaA, areaB ``` ์ด์ œ ์ƒ์„ฑ๋œ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•ด ๋ฃจํ”„๋ฅผ ๋Œ๋ฉด์„œ `bbox_IoU` ํ•จ์ˆ˜๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๊ฐ ํ”„๋ ˆ์ž„์— ๋Œ€ํ•œ IoU ์ธก์ •๊ฐ’์„ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ``` vid_calcs = list() for frame_idx in range(len(detections)): det_boxes = detections[frame_idx][1][0] scores = detections[frame_idx][1][1] fps = detections[frame_idx][1][4] bbox_frame = detections[frame_idx][0] max_IoU_per_detection = list() #We loop over each bounding box and find the maximum IoU for each detection. for b_idx, bbox in enumerate(bbox_frame): IoU = 0 for det_idx, det_box in enumerate(det_boxes): if scores[det_idx] < 0.5: continue #We only include bounding box proposals with scores above and equal to 0.5 iou, I, A, B = bbox_IoU(det_box, bbox) IoU = max(iou, IoU) max_IoU_per_detection.append((IoU, fps)) vid_calcs.append(max_IoU_per_detection) ``` ๊ฒ€์ถœ์— ๋Œ€ํ•œ IoU ๊ฒฐ๊ณผ๋ฅผ ์‹œ๊ฐํ™”ํ•ด๋ด…์‹œ๋‹ค. ``` IoU_list=[] for item in vid_calcs: IoU_list.append(item[0][0]) y_pos = np.arange(len(IoU_list)) pl.rcParams['figure.figsize'] = (18, 3) plt.bar(y_pos, IoU_list, align='center', alpha=0.5) plt.title('IoU measure for detections') plt.show() ``` SSD ๋ชจ๋ธ์€ ๋‹ค๋ฅธ ๋‘ ๋ชจ๋ธ์— ๋น„ํ•ด ๋งค์šฐ ๋‚ฎ์€ IoU ๊ฐ’์„ ์ƒ์„ฑํ•˜๋ฉฐ ๊ฒ€์ถœ ์‹คํŒจ๋„ ๋งŽ์ด ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ˜๋ฉด์—, NasNet ๋ชจ๋ธ์€ ํ›จ์”ฌ ๋” ๋†’์€ IoU ๊ฐ’์„ ๊ฐ€์ง€๋ฉฐ ์‹œํ—˜ ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ํ”„๋ ˆ์ž„์„ ๋ˆ„๋ฝํ•˜๋Š” ์ˆ˜๋„ ์ ์Šต๋‹ˆ๋‹ค. NasNet์˜ ์ค‘์š”ํ•œ ์‚ฌํ•ญ์€ ์˜ค๋žœ ์ถ”๋ก  ์‹œ๊ฐ„์ด๋ฉฐ, ์ด๋Š” ๊ธฐ์กด์˜ ํ•˜๋“œ์›จ์–ด ์ œํ•œ์„ ๊ณ ๋ คํ•  ๋•Œ ๋งŽ์€ ์˜จ๋ผ์ธ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์— ์ ํ•ฉํ•˜์ง€ ์•Š์€ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ๋ฐ˜๋ฉด `faster_rcnn_resnet_101`์€ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์— ์ ์šฉํ•  ๋•Œ ๊ท ํ˜• ์žกํžŒ ์ •ํ™•๋„์™€ ์„ฑ๋Šฅ์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. ๋‹ค์Œ ๋žฉ์—์„œ๋Š” `์ „์ดํ•™์Šต (Transfer Learning)` ๊ธฐ๋ฒ•์„ ์ด์šฉํ•˜์—ฌ ๋ชจ๋ธ์˜ ์ •ํ™•๋„๋ฅผ ๋”์šฑ ํ–ฅ์ƒ์‹œ์ผœ ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. <a name="6"></a> ## 6. ์–ด๋…ธํ…Œ์ด์…˜์„ ์ž๋ฅด๊ณ  ์ •๊ทœํ™”ํ•˜๊ธฐ Example ๋ฐ์ดํ„ฐ ๊ตฌ์กฐ ํ˜•์‹์œผ๋กœ ์ธ์ฝ”๋”ฉํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ์˜์ƒ์„ ์ ๋‹นํ•œ ํฌ๊ธฐ๋กœ ์ž˜๋ผ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๊ฐ€ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ๋Š” ๋ชจ๋ธ์€ 448 x 448 ํ”ฝ์…€์˜ ๊ณ ์ • ๋„ˆ๋น„ ์ž…๋ ฅ์„ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, ๊ฐ์ฒด ๊ฒ€์ถœ API๋ฅผ ํ†ตํ•ด ์ด ๋ชจ๋ธ์— ๊ณต๊ธ‰๋˜๋Š” ๋ชจ๋“  ์ด๋ฏธ์ง€๋Š” 448 x 448 ํ”ฝ์…€ ํฌ๊ธฐ๋กœ ๋ณ€๊ฒฝ๋œ๋‹ค๋Š” ๋œป์ž…๋‹ˆ๋‹ค. ์›๋ณธ ์˜์ƒ์˜ ๊ฐ€๋กœ ์„ธ๋กœ ๋น„์œจ์ด๋‚˜ ๋†’์€ ํ•ด์ƒ๋„ ๋•Œ๋ฌธ์— ๋ชจ๋ธ์˜ ์ „์ฒ˜๋ฆฌ ๋‹จ๊ณ„์—์„œ ๋ฐ์ดํ„ฐ๋ฅผ ๋งŽ์ด ๋ณ€๊ฒฝํ•ด์•ผ ํ•˜๋Š”๋ฐ, ์ด ๋•Œ๋ฌธ์— ๊ฒฐ๊ณผ๊ฐ€ ์ข‹์ง€ ์•Š๊ฒŒ ๋‚˜์˜ฌ ๊ฐ€๋Šฅ์„ฑ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชฉํ‘œ ํฌ๊ธฐ๊ฐ€ ์ „์ฒด ์ด๋ฏธ์ง€์— ๋น„ํ•ด ์ƒ๋Œ€์ ์œผ๋กœ ์ž‘๊ธฐ ๋•Œ๋ฌธ์— ํฌ๊ธฐ ์กฐ์ ˆ ๊ณผ์ •์— ๋ฏผ๊ฐํ•˜๊ฒŒ ๋ฐ˜์‘ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ „์ฒด ํฌ๊ธฐ๋ฅผ ์ค„์ด๋Š”(Resizing) ๋Œ€์‹ ์— ์˜์ƒ์˜ ์›๋ž˜ ํ•ด์ƒ๋„๋ฅผ ์œ ์ง€ํ•˜๋ฉด์„œ ์˜์ƒ์„ ์ž˜๋ผ๋‚ด๋ฉด(Cropping) ์˜์ƒ ํ’ˆ์งˆ๊ณผ ๊ด€์‹ฌ ๋ชฉํ‘œ ๊ด€์ ์—์„œ ์˜๋„ํ•˜์ง€ ์•Š์€ ํฌ๊ธฐ ์กฐ์ ˆ ๋ถ€์ž‘์šฉ์„ ์ œ๊ฑฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด ๋ฐฉ๋ฒ•์€ ๋ฐ์ดํ„ฐ ์ฒ˜๋ฆฌ๋ฅผ ์•ฝ๊ฐ„ ๋” ๋ณต์žกํ•˜๊ฒŒ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ์•„๋ž˜์˜ ํ•จ์ˆ˜๋Š” ์ด๋ฏธ์ง€ ๋‚ด ์ž˜๋ผ๋‚ผ ๊ฐ’์„ ๊ฒฐ์ •ํ•˜๊ณ , ์ž˜๋ผ๋‚ด๋Š” ์˜์—ญ์ด ์•„๋‹Œ ์ด์™ธ์˜ ์˜์—ญ์˜ ์–ด๋…ธํ…Œ์ด์…˜์„ ์ œ์™ธํ•˜๋Š” ๋ฐ ํ•„์š”ํ•œ ๋ณ€๊ฒฝ์‚ฌํ•ญ์„ ๊ณ„์‚ฐํ•˜๋ฉฐ, ๋‚จ์€ ์–ด๋…ธํ…Œ์ด์…˜์„ ์ƒˆ๋กœ ์ž˜๋ผ๋‚ธ ์ด๋ฏธ์ง€ ์ขŒํ‘œ๊ณ„๋กœ ๋‹ค์‹œ ์ธ๋ฑ์‹ฑํ•ฉ๋‹ˆ๋‹ค. ๊ฒฝ๊ณ„ ์ƒ์ž์˜ ๊ฐ’๋“ค์€ ์ด๋ฏธ์ง€ ๋†’์ด์™€ ๋„ˆ๋น„์— ๋น„๋ก€ํ•˜์—ฌ ์ •๊ทœํ™”๋˜๊ณ  ๋ฐ์ดํ„ฐ๋ฅผ ๋”์šฑ ์œ ์—ฐํ•˜๊ฒŒ ๋งŒ๋“ค์–ด ํ–ฅํ›„์— ์žฌ์‚ฌ์šฉ์ด ํŽธ๋ฆฌํ•˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์—์„œ๋Š” ์˜์ƒ์˜ ๊ฐ€์šด๋ฐ๋ฅผ ์ค‘์‹ฌ์œผ๋กœ ์ž๋ฅด๊ธฐ๋ฅผ ํ–ˆ์ง€๋งŒ ์–ด๋””๋ฅผ ์ž๋ฅผ์ง€์— ๋Œ€ํ•œ ์ œ์•ฝ์‚ฌํ•ญ์€ ์—†์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์„ ํƒ์€ ์ด ๋ชจ๋ธ์— ๋Œ€ํ•ด ๊ตฌ์ถ•๋œ ์ถ”๋ก  ํŒŒ์ดํ”„๋ผ์ธ์—์„œ ๋‹ค์šด์ŠคํŠธ๋ฆผ (Downstream) ์˜ํ–ฅ์„ ๋ฏธ์นœ๋‹ค๋Š” ์ ์— ์œ ์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, ์ถ”๋ก ์„ ์œ„ํ•ด ๊ทธ๋ž˜ํ”„๋ฅผ ํ†ต๊ณผํ•˜๋Š” ๋ชจ๋“  ๋ฐ์ดํ„ฐ๋„ ๋น„์Šทํ•œ ํฌ๊ธฐ๋ฅผ ๊ฐ€์ ธ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ถ„๋ช…ํžˆ, ์„ผ์„œ๋Š” ์šฐ๋ฆฌ ๋ชจ๋ธ์— ์˜ํ•ด ์ œํ•œ๋œ ๋ฒ”์œ„๋ณด๋‹ค ํ›จ์”ฌ ๋” ํฐ ์‹œ์•ผ์—์„œ ์ •๋ณด๋ฅผ ์ˆ˜์ง‘ํ•˜๋ฉฐ, ์ด์™€ ๋น„์Šทํ•œ ๋ชจ๋ธ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ฐ์ฒด ๊ฒ€์ถœ๊ธฐ๋ฅผ ๋ชฉํ‘œ๋กœ ํ•˜๋Š” ์ถ”๋ก  ํŒŒ์ดํ”„๋ผ์ธ์„ ๋งŒ๋“ค ๊ฒฝ์šฐ, ํŒŒ์ดํ”„๋ผ์ธ์€ ๋ถ„ํ• , ์ค‘์ฒฉ ๋ฐ ๊ด€๋ จ๋œ ๋ชจ๋“  ์–ด๋…ธํ…Œ์ด์…˜ ์ฒ˜๋ฆฌ๋ฅผ ํ•ด์•ผํ•  ํ•„์š”๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ํŽธ์˜์„ฑ์„ ์œ„ํ•ด DataFrame `width` and `height` ์ปฌ๋Ÿผ์„ ์ถ”๊ฐ€ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ``` img_height = 692 img_width = 882 annotated_frames.insert(1, 'width', img_width) annotated_frames.insert(1, 'height', img_height) ``` ์•„๋ž˜์—์„œ๋Š” ๊ฐ ์ด๋ฏธ์ง€๋ฅผ ์ž๋ฅด๋Š” ํฌ๊ธฐ๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์ž๋ฅด๊ธฐ ํฌ๊ธฐ๋ฅผ ์„ค์ •ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๊ตฌํ˜„ํ•  ์ˆ˜ ์žˆ๋Š” ๋งŽ์€ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋ชจ๋“  ์ด๋ฏธ์ง€๋ฅผ ๊ฐ€์šด๋ฐ ์ฃผ์œ„๋กœ ์ž๋ฅด๊ณ  DataFrame์„ ํ•„ํ„ฐ๋งํ•˜์—ฌ ์ด๋™ ์ค‘์ธ ์ฐจ๋Ÿ‰์ด ์—†๋Š” ํ”„๋ ˆ์ž„์„ ์ œ๊ฑฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•œ ๊ฑธ์Œ ๋” ๋‚˜์•„๊ฐ€์„œ ๊ฐ ํ”„๋ ˆ์ž„ ๋‚ด์—์„œ ์›€์ง์ด๋Š” ๊ฐ์ฒด์˜ ๊ฐ€์šด๋ฐ๋ฅผ ์ค‘์‹ฌ์œผ๋กœ ์ž๋ฅด๊ธฐ ์˜์—ญ์„ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ์ขŒํ‘œ๊ฐ€ ์˜์ƒ ๊ฒฝ๊ณ„ ๋‚ด์— ๋“ค์–ด์˜ค๋Š”์ง€ ํ™•์ธํ•˜๊ณ  ์ž˜๋ชป๋œ ๊ฒฝ๊ณ„ ์ƒ์ž (์Œ์ˆ˜๊ฐ’์„ ๊ฐ€์ง„ ์ขŒํ‘œ)๊ฐ€ ์ƒ์„ฑ๋˜์ง€ ์•Š๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ``` # define the crop size which is equal to the input size of our neural network g_image_size = (448.0, 448.0) def set_crop_size(crop_size, frames): for i, box in frames.iterrows(): center_box_x = int (box['xmin'] + (box['xmax'] - box['xmin']) / 2) center_box_y = int (box['ymin'] + (box['ymax'] - box['ymin']) / 2) start_x = center_box_x - crop_size[0] / 2 end_x = start_x + crop_size[0] if start_x < 0: if box['xmin'] - 5 >= 0: start_x = box['xmin'] - 5 else: start_x = box['xmin'] end_x = start_x + crop_size[0] elif end_x >= box['width']: end_x = box['width'] start_x = end_x - crop_size[0] start_y = center_box_y - crop_size[1] / 2 end_y = start_y + crop_size[1] if start_y < 0: if box['ymin'] - 5 >= 0: start_y = box['ymin'] - 5 else: start_y = box['ymin'] end_y = start_y + crop_size[1] elif end_y >= box['height']: end_y = box['height'] start_y = end_y - crop_size[1] frames.at[i,'crop'] = [(start_x, start_y, end_x, end_y)] return frames ``` ์ •๊ทœํ™”๋œ ๋ฐ์ดํ„ฐ์—์„œ ๋˜ ๋‹ค๋ฅธ ๋ฌธ์ œ๋Š” ๊ฐ์ฒด์˜ ํฌ๊ธฐ๊ฐ€ ์ž๋ฅด๊ธฐ ํฌ๊ธฐ๋ณด๋‹ค ํฐ ๊ฒฝ์šฐ์ž…๋‹ˆ๋‹ค. ์ด๋Ÿฐ ๊ฒฝ์šฐ ๊ฐ์ฒด๋ฅผ ์ž๋ฅผ ์ˆ˜ ์—†๊ธฐ ๋•Œ๋ฌธ์— ์ ์ ˆํ•œ ๋ง ์ž…๋ ฅ์„ ๋งŒ๋“ค๊ธฐ ์œ„ํ•œ ์˜์‚ฌ๊ฒฐ์ •์„ ํ•ด์•ผ๋งŒ ํ•ฉ๋‹ˆ๋‹ค. ์šฐ๋ฆฌ์˜ ์ƒ˜ํ”Œ ๋น„๋””์˜ค์—์„œ ๊ทธ๋Ÿฌํ•œ ์‚ฌ๋ก€๋Š” ์ฐจ๋Ÿ‰์ด ํ”„๋ ˆ์ž„ ํ•˜๋‹จ์— ๋„๋‹ฌํ•˜๋ฉด ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ์•„๋ž˜ ์˜ˆ๋ฅผ ์ฐธ์กฐํ•˜์‹ญ์‹œ์˜ค. <img src="imgs/resize_samples.jpg" alt="Vatic imaging"/> <p style="text-align: center;color:gray"> ๊ทธ๋ฆผ 10. ๋ง ์ž…๋ ฅ ํฌ๊ธฐ๋ณด๋‹ค ํฐ ๊ฒฝ๊ณ„ ์ƒ์ž๋ฅผ ๊ฐ€์ง€๋Š” ์ฐจ๋Ÿ‰</p> ์ด ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•œ ์šฐ๋ฆฌ์˜ ์„ ํƒ์€ ์ฐจ๋Ÿ‰์˜ ํฌ๊ธฐ๊ฐ€ ์ž…๋ ฅ ํฌ๊ธฐ๋ณด๋‹ค ํฐ ํ”„๋ ˆ์ž„์˜ ํฌ๊ธฐ๋ฅผ ์ค„์ธ ํ›„์— ์ž๋ฅด๊ธฐ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋˜ ํ”„๋ ˆ์ž„์˜ ํฌ๊ธฐ๊ฐ€ ํŽธ์˜์ƒ ์กฐ์ ˆ๋๋Š”์ง€๋ฅผ ๋‚˜ํƒ€๋‚ด๋Š” `resize` ์นผ๋Ÿผ์„ ์ƒˆ๋กœ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ๋””ํดํŠธ๊ฐ’์€ False์ž…๋‹ˆ๋‹ค. ``` annotated_frames.insert(1, 'resize', False) ``` DataFrame์„ ์‚ดํŽด๋ด…์‹œ๋‹ค. ``` annotated_frames.head() ``` <a name="e4"></a> ### ์—ฐ์Šต 4 ์šฐ๋ฆฌ์˜ ๋ฐ์ดํ„ฐ์—์„œ ๋ถˆํ•„์š”ํ•œ ์ปฌ๋Ÿผ์„ ์‚ญ์ œํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์•„๋ž˜์˜ ์ฝ”๋“œ๋Š” `occluded` ์ปฌ๋Ÿผ์„ ์‚ญ์ œํ•˜๋„๋ก ์„ค์ •๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. `outside` ์ปฌ๋Ÿผ์„ ์ œ๊ฑฐํ•˜๋Š” ์ฝ”๋“œ๋ฅผ ์ถ”๊ฐ€ํ•˜์‹ญ์‹œ์˜ค. ``` # Remove the unnecessary columns since they are all the same value now annotated_frames = annotated_frames.drop("occluded", axis=1) annotated_frames.head() # <<<<<<<<<<<<<<<<<<<<YOUR CODE HERE >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ``` ๊ฒฝ๊ณ„ ์ƒ์ž์˜ ๋†’์ด์™€ ๋„ˆ๋น„์„ ๋ฐ”ํƒ•์œผ๋กœ ์ •์ƒ ํฌ๊ธฐ์˜ ๊ฒฝ๊ณ„ ์ƒ์ž๋ฅผ ํฌํ•จํ•˜๋Š” `normal_size_frames`์™€ ์ง€๋‚˜์น˜๊ฒŒ ํฐ ๊ฒฝ๊ณ„ ์ƒ์ž๋ฅผ ํฌํ•จํ•˜๋Š” `oversized_frames` ๋“ฑ, ๋‘ ์„ธํŠธ์˜ DataFrames์„ ๋งŒ๋“ค ๊ฒƒ์ž…๋‹ˆ๋‹ค. ``` normal_size_frames = annotated_frames[annotated_frames.apply(lambda x: x['xmax'] - x['xmin'] <= g_image_size[0], axis=1) & annotated_frames.apply(lambda x: x['ymax'] - x['ymin'] <= g_image_size[1], axis=1)] oversized_frames = annotated_frames[annotated_frames.apply(lambda x: x['xmax'] - x['xmin'] > g_image_size[0], axis=1) | annotated_frames.apply(lambda x: x['ymax'] - x['ymin'] > g_image_size[1], axis=1)] print("Number of frames within the crop size:{}, number of oversized vehicles/frames: {}".format(len(normal_size_frames), len(oversized_frames))) normal_size_frames = set_crop_size(g_image_size, normal_size_frames) normal_size_frames.head() ``` ํ”„๋ ˆ์ž„์˜ ํฌ๊ธฐ ์กฐ์ ˆ์€ ๊ฒฝ๊ณ„ ์ƒ์ž ์–ด๋…ธํ…Œ์ด์…˜ ๊ฐ’(xmin, ymin, xmax ๋ฐ ymax)์— ์˜ํ–ฅ์„ ๋ฏธ์นฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ฐ’์€ ํฌ๊ธฐ ์กฐ์ ˆ ๋น„์œจ์— ๋”ฐ๋ผ ๋ณ€๊ฒฝํ•  ํ•„์š”๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ํ”„๋ ˆ์ž„ ๋‚ด์— ๊ฐ์ฒด๋ฅผ ํฌํ•จํ•˜๋ ค๋ฉด `max(bounding_box_width, bounding_box_height) + some_offset_value` ๋ฅผ ๊ณ„์‚ฐํ•˜์—ฌ ํฌ๊ธฐ ์กฐ์ ˆ ๋น„์œจ๋กœ ์‚ผ์Šต๋‹ˆ๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ, ๊ทธ์— ๋”ฐ๋ผ __bounding_box__ ์ขŒํ‘œ๋ฅผ ๋ณ€๊ฒฝํ•ฉ๋‹ˆ๋‹ค. ``` for i, box in oversized_frames.iterrows(): resize_ratio = 0.0 diff_x = box['xmax'] - box['xmin'] + 50 # adding offset to prevent round up errors diff_y = box['ymax'] - box['ymin'] + 50 #find the maximum of x and y required ratio reduction resize_ratio = g_image_size[0]/max(diff_x, diff_y) #correct the existing bounding box values according to the ratio oversized_frames.at[i,'xmin'] = int(box['xmin'] * resize_ratio) oversized_frames.at[i,'xmax'] = int(box['xmax'] * resize_ratio) oversized_frames.at[i,'ymin'] = int(box['ymin'] * resize_ratio) oversized_frames.at[i,'ymax'] = int(box['ymax'] * resize_ratio) #correct height and width values and set the resize column value to True oversized_frames.at[i,'width'] = int(box['width'] * resize_ratio) oversized_frames.at[i,'height'] = int(box['height'] * resize_ratio) oversized_frames.at[i,'resize'] = True oversized_frames = set_crop_size(g_image_size, oversized_frames) oversized_frames.head() ``` ์–ผ๋งˆ๋‚˜ ๋งŽ์€ ์˜ค๋ฒ„์‚ฌ์ด์ฆˆ ํ”„๋ ˆ์ž„์ด ์ƒ์„ฑ๋˜์—ˆ๋Š”์ง€ ์•„๋ž˜์—์„œ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ``` print('Number of oversized frames:', len(oversized_frames)) ``` ํš๋“ํ•œ ๊ฐ’์ด ์—ฌ์ „ํžˆ ์œ ํšจํ•˜๊ณ , ๋ฐ˜์˜ฌ๋ฆผ ์—๋Ÿฌ๊ฐ€ ๊ฒฐ๊ณผ์— ์˜ํ–ฅ์„ ๋ฏธ์น˜์ง€ ์•Š์•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ ์˜์ƒ ์ขŒํ‘œ๊ฐ€ ์ž˜๋ฆฐ ์˜์—ญ์œผ๋กœ ๋ฐ”๋€Œ๊ธฐ ๋•Œ๋ฌธ์— ๊ฒฝ๊ณ„ ์ƒ์ž ์ขŒํ‘œ์—์„œ ์ž˜๋ฆฐ ๋ฐ•์Šค์˜ ์ƒ๋‹จ ์ขŒํ‘œ์™€ ์™ผ์ชฝ ์ขŒํ‘œ๋ฅผ ๋นผ์ค˜์•ผ ํ•ฉ๋‹ˆ๋‹ค. ``` def normalize_frames(frames): normalized_frames = frames[frames.apply(lambda x: x['crop'][0][0] >= 0, axis=1) & frames.apply(lambda x: x['crop'][0][1] >= 0, axis=1) & frames.apply(lambda x: x['crop'][0][2] <= x['width'], axis=1) & frames.apply(lambda x: x['crop'][0][3] <= x['height'], axis=1) & frames.apply(lambda x: x['crop'][0][0] <= x['xmin'], axis=1) & frames.apply(lambda x: x['crop'][0][1] <= x['ymin'], axis=1) & frames.apply(lambda x: x['crop'][0][2] >= x['xmax'], axis=1) & frames.apply(lambda x: x['crop'][0][3] >= x['ymax'], axis=1)] for i, box in normalized_frames.iterrows(): normalized_frames.at[i, 'xmin'] = box['xmin'] - int(box["crop"][0][0]) normalized_frames.at[i, 'ymin'] = box['ymin'] - int(box["crop"][0][1]) normalized_frames.at[i, 'xmax'] = box['xmax'] - int(box["crop"][0][0]) normalized_frames.at[i, 'ymax'] = box['ymax'] - int(box["crop"][0][1]) return normalized_frames cropped_frames = normalize_frames(normal_size_frames) print('Number of normal sized objects:', len(cropped_frames)) cropped_frames_oversize = normalize_frames(oversized_frames) print('Number of oversized objects:', len(cropped_frames_oversize)) ``` `normal_size_frames` ์™€ `oversized_frames`์—์„œ ์ƒ˜ํ”Œ์„ ์ถ”์ถœํ•ด์„œ ๋„ํ‘œ๋ฅผ ๊ทธ๋ฆฌ๊ณ  ๋น„๊ตํ•ฉ๋‹ˆ๋‹ค. 8 ๊ฐœ์˜ ์ž๋ฅธ ์˜์ƒ ์ƒ˜ํ”Œ์ด ์•„๋ž˜์— ๋‚˜์™€ ์žˆ์Šต๋‹ˆ๋‹ค. <img src="imgs/sample8.jpg" alt="Vatic imaging"/> <p style="text-align: center;color:gray"> ๊ทธ๋ฆผ 11. ๋ฌด์ž‘์œ„๋กœ ์„ ํƒํ•œ ํ”„๋ ˆ์ž„ ์˜ˆ</p> ์ด๋Ÿฌํ•œ ์ด๋ฏธ์ง€๋ฅผ ์ฝ๊ณ , ์ž๋ฅด๊ณ , ๋„ํ‘œ๋กœ ๊ทธ๋ฆฌ๋ ค๋ฉด ๋ช‡ ๊ฐ€์ง€ ๋ณด์กฐ ํ•จ์ˆ˜๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. - __crop_image__: ์ž…๋ ฅ ์ฐธ์กฐ ์ขŒํ‘œ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํฌ๊ธฐ๊ฐ€ ์กฐ์ ˆ๋œ ์ด๋ฏธ์ง€๋ฅผ ๋ฆฌํ„ดํ•ฉ๋‹ˆ๋‹ค. - __showarray__: ์ž˜๋ฆฐ ์ด๋ฏธ์ง€๋ฅผ ๋‚˜ํƒ€๋‚ด๋Š” ๋ฐฐ์—ด์„ ๊ทธ๋ฆฝ๋‹ˆ๋‹ค. - __draw_rectangle__: ๊ฐ ์ƒ˜ํ”Œ์—์„œ ๋งค์นญ๋œ ์ฐจ๋Ÿ‰์˜ ๊ฒฝ๊ณ„ ์ƒ์ž๋ฅผ ๊ทธ๋ฆฝ๋‹ˆ๋‹ค. - __plot_random_samples__: DataFrame์˜ ์ƒ˜ํ”Œ์„ ์–ป๊ณ  ๊ฐ๊ฐ์˜ ํ•ญ๋ชฉ์— ์œ„ ํ•จ์ˆ˜๋“ค์„ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ``` from IPython.display import clear_output, Image, display from io import StringIO import PIL.Image # Helper function to crop images def crop_image(pil_image, coordinates): # get the size of the image xmin, ymin, xmax, ymax = int(coordinates[0]), int(coordinates[1]), int(coordinates[2]), int(coordinates[3]) crop_img = pil_image[ymin:ymax, xmin:xmax] return crop_img def showarray(a, fmt='jpeg'): a = np.uint8(np.clip(a, 0, 255)) f = StringIO() PIL.Image.fromarray(a).save(f, fmt) display(Image(data=f.getvalue())) def draw_rectangle(draw, coordinates, color, width=1): for i in range(width): rect_start = (coordinates[0][0] - i, coordinates[0][1] - i) rect_end = (coordinates[1][0] + i, coordinates[1][1] + i) draw.rectangle((rect_start, rect_end), outline = color) ``` `plot_random_samples` ์€ ์ฃผ์–ด์ง„ ์ง‘ํ•ฉ์—์„œ 8๊ฐœ์˜ ๋ฌด์ž‘์œ„ ์ƒ˜ํ”Œ์„ ์„ ํƒํ•˜๊ณ  ๊ฐ ํ‘œ๋ณธ์— ๋Œ€ํ•ด ์ฃผ์–ด์ง„ DataFrame์—์„œ ์ •๋‹ต ๋ฐ์ดํ„ฐ๋ฅผ ์ฐพ๊ณ , ํ”„๋ ˆ์ž„ ์ƒ์— ๋ฐ•์Šค๋ฅผ ๊ทธ๋ฆฐ ํ›„, ์ตœ์ข…์ ์œผ๋กœ ๊ฒฐ๊ณผ ์ƒ˜ํ”Œ์„ ๊ทธ๋ฆฝ๋‹ˆ๋‹ค. ``` from PIL import Image, ImageFont, ImageDraw, ImageEnhance from matplotlib.pyplot import imshow def plot_random_samples(frames): sample_frames = frames.sample(n=8) fig=plt.figure(figsize=(15, 8)) columns = 4 rows = 2 i = 1 for index, box in sample_frames.iterrows(): #print(box["crop"]) im = Image.open('{}/images/{}/{}.jpg'.format(config["Base_Dest_Folder"], config["Test_Video_ID"], box["frame_no"])) if box['resize']: im = im.resize((int(box['width']), int(box['height'])), Image.ANTIALIAS) xmin, ymin, xmax, ymax = box["xmin"], box["ymin"], box["xmax"], box["ymax"] cropped_im = im.crop(box["crop"][0]) draw = ImageDraw.Draw(cropped_im) draw.rectangle(((xmin, ymin), (xmax, ymax)), fill=None, outline='red') draw_rectangle(draw, ((xmin, ymin), (xmax, ymax)), color=colors[2], width=3) fig.add_subplot(rows, columns, i) i += 1 plt.imshow(np.asarray(cropped_im)) plt.show() ``` ์ฒซ์งธ, ํฌ๊ธฐ๊ฐ€ ๋„ˆ๋ฌด ํฐ DataFrame์˜ ์ƒ˜ํ”Œ์„ ๊ทธ๋ฆฝ๋‹ˆ๋‹ค. ``` plot_random_samples(cropped_frames_oversize) ``` ๋‹ค์Œ์œผ๋กœ, ์ •์ƒ ํ”„๋ ˆ์ž„์˜ ์ƒ˜ํ”Œ์„ ๊ทธ๋ฆฝ๋‹ˆ๋‹ค. ``` plot_random_samples(cropped_frames) ``` ๋‘ ํ”„๋ ˆ์ž„ ์ง‘ํ•ฉ๊ณผ ์ตœ์ข… ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ํ•ฉ์น˜๊ณ  ์ƒ˜ํ”Œ์„ ๋‹ค์‹œ ๊ทธ๋ฆฝ๋‹ˆ๋‹ค. ``` temp_frames = [cropped_frames_oversize, cropped_frames] cropped_frames = pd.concat(temp_frames) ``` ์กฐํ•ฉ๋œ ์ง‘ํ•ฉ์—์„œ ์ƒ˜ํ”Œ ์˜์ƒ์„ ๊ทธ๋ ค ๋ด…์‹œ๋‹ค. ``` plot_random_samples(cropped_frames) ``` ์ด ๋‹จ๊ณ„์—์„œ, ์šฐ๋ฆฌ๋Š” ์ด๋ฏธ์ง€๋ฅผ ์ž๋ฅด๊ณ  ์ฃผ๋ณ€ ๊ฒฝ๊ณ„ ์ƒ์ž๋ฅผ ๊ทธ๋ฆฌ๋Š” ๋ฐ ํ•„์š”ํ•œ ๋ชจ๋“  ํ”„๋ ˆ์ž„ ์ •๋ณด๋ฅผ ํฌํ•จํ•˜๋Š” DataFrame์„ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋…ธ์ด์ฆˆ ๋ฐ์ดํ„ฐ๋Š” ํ•„ํ„ฐ๋ง๋˜๊ณ , ์ง€๋‚˜์น˜๊ฒŒ ํฐ ํ”„๋ ˆ์ž„์€ ์ถ•์†Œ๋˜์–ด, ํฌํ•จ๋œ ์ฐจ๋Ÿ‰์ด ์šฐ๋ฆฌ ๋ชจ๋ธ์˜ ์ž…๋ ฅ ์Šคํ‚ค๋งˆ์— ๋งž๋„๋ก ์กฐ์ ˆ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์ค€๋น„๋Š” ์„ฑ๊ณต์ ์ธ IVA ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜ ๊ตฌ์ถ•์„ ์œ„ํ•œ ๊ฐ€์žฅ ๊ธธ๊ณ  ๊ฐ€์žฅ ์ค‘์š”ํ•œ ์ž‘์—…์ด ๋  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ๋Œ€๊ฐœ ๋ฐ์ดํ„ฐ ์œ ํ˜•, ์นด๋ฉ”๋ผ, ์กฐ๋ช…, ๋‚ ์”จ ๋“ฑ์— ๋งค์šฐ ์˜ํ–ฅ์„ ๋ฐ›์Šต๋‹ˆ๋‹ค. ์ด์ œ๋Š” __TFRecords__ ๋ฅผ ๋งŒ๋“œ๋Š” ๋‹ค์Œ ๋‹จ๊ณ„๋กœ ๋„˜์–ด๊ฐˆ ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. <a name="7"></a> ## 7. TFRecord ํŒŒ์ผ ์ƒ์„ฑํ•˜๊ธฐ TFRecord๋Š” TensorFlow ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์„ ์ €์žฅํ•˜๋Š” ์ด์ง„ ํ˜•์‹์ž…๋‹ˆ๋‹ค. ์ด๋ฅผ ํ†ตํ•ด ๋ฐ์ดํ„ฐ๋ฅผ ์••์ถ•ํ•ด์„œ ํ‘œํ˜„ํ•  ์ˆ˜ ์žˆ์„ ๋ฟ ์•„๋‹ˆ๋ผ ๋ฐ์ดํ„ฐ ๊ฒ€์ƒ‰ ๋ฐ ๋ฉ”๋ชจ๋ฆฌ ๊ด€๋ฆฌ ์„ฑ๋Šฅ๋„ ํ–ฅ์ƒ์‹œํ‚ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์–ด๋…ธํ…Œ์ด์…˜ ์ขŒํ‘œ๋ฅผ ์ •๊ทœํ™”ํ–ˆ์œผ๋‹ˆ ์ด์ œ ๋ฐ์ดํ„ฐ๋ฅผ TFRecord ์•ˆ์— ๋„ฃ์–ด๋ด…์‹œ๋‹ค. ํ•จ์ˆ˜๋ฅผ ๊ฐ๊ฐ ์‚ดํŽด๋ณธ ํ›„, ๋งˆ์ง€๋ง‰์— ๊ทธ๊ฒƒ์„ ๋ชจ๋‘ ํ•ฉ์ณ์„œ ์ฒ˜๋ฆฌํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. TFRecords๋Š” ๋ฐ์ดํ„ฐ๋ฅผ ์ด์ง„ ํ˜•์‹์œผ๋กœ ์ €์žฅํ•˜๊ธฐ ๋•Œ๋ฌธ์— `๊ตฌ์กฐํ™”๋œ` ํ˜•์‹์œผ๋กœ ๋ฐ์ดํ„ฐ๋ฅผ ์ œ๊ณตํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. TensorFlow๋Š” ๋ฐ์ดํ„ฐ ๊ตฌ์กฐ๋ฅผ TFrecords๋กœ ์ง๋ ฌํ™”ํ•˜๋Š” ๋‘ ๊ฐ€์ง€ ํ•จ์ˆ˜๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. `tf.trian.Example` ์™€ `tf.train.SequenceExample`์ด ๊ทธ ๋‘ ๊ฐœ์ธ๋ฐ, {"string": tf.train.Feature} ๋งคํ•‘์„ ์ œ๊ณตํ•˜์—ฌ ๋ฐ์ดํ„ฐ๋ฅผ TensorFlow ํ‘œ์ค€ ๋ชจ๋ธ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. TFRecords ๋ฅผ ๋งŒ๋“ค ๋•Œ ์ค‘์š”ํ•œ ๊ณ ๋ ค์‚ฌํ•ญ์€ ๊ฒฝ๊ณ„ ์ƒ์ž ์ขŒํ‘œ๋ฅผ 0๊ณผ 1 ์‚ฌ์ด์˜ float ๊ฐ’์œผ๋กœ ํ‘œ์ค€ํ™”ํ•ด์•ผ ํ•œ๋‹ค๋Š” ์ ์ž…๋‹ˆ๋‹ค. ๋˜ํ•œ ์•ž์—์„œ ์„ค๋ช…ํ•œ ๋ฐ”์™€ ๊ฐ™์ด, Example ๋ฐ์ดํ„ฐ ๊ตฌ์กฐ๋กœ ์ธ์ฝ”๋”ฉ๋˜๊ธฐ ์ „์— ์ด๋ฏธ์ง€๋ฅผ ์ž˜๋ผ๋‚ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ํ•จ์ˆ˜๋Š” TFRecords๋ฅผ ์ƒ์„ฑํ•˜๊ณ  `crop` ์ขŒํ‘œ๋ฅผ ์ด์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€๋ฅผ ์ž๋ฆ…๋‹ˆ๋‹ค. ``` from PIL import * # test_output = {} def To_tf_example(frame_data, img_path, img_name, label_map_dict, img_size, single_class): pil_image = Image.open(os.path.join(img_path,img_name)) if frame_data['resize']: pil_image = pil_image.resize((int(frame_data['width']), int(frame_data['height'])), Image.ANTIALIAS) cropped_im = pil_image.crop(frame_data["crop"][0]) encoded = cv2.imencode('.jpg', np.asarray(cropped_im))[1].tostring() xmin = [] ymin = [] xmax = [] ymax = [] classes = [] classes_text = [] # Append the coordinates to the overall lists of coordinates xmin.append(float(frame_data['xmin'])/float(img_size[0])) ymin.append(float(frame_data['ymin'])/float(img_size[1])) xmax.append(float(frame_data['xmax'])/float(img_size[0])) ymax.append(float(frame_data['ymax'])/float(img_size[1])) # If only detecting object/not object then ignore the class-specific labels if single_class: classes.append(1) else: class_name = frame_data['label'] classes_text.append(class_name.encode('utf8')) classes.append(label_map_dict[class_name]) # Generate a TF Example using the object information example = tf.train.Example(features=tf.train.Features(feature={ 'image/height': dataset_util.int64_feature(int(img_size[1])), 'image/width': dataset_util.int64_feature(int(img_size[0])), 'image/filename': dataset_util.bytes_feature( img_name.encode('utf8')), 'image/source_id': dataset_util.bytes_feature( img_name.encode('utf8')), 'image/filepath': dataset_util.bytes_feature( img_path.encode('utf8')), 'image/encoded': dataset_util.bytes_feature(encoded), 'image/format': dataset_util.bytes_feature('jpeg'.encode('utf8')), 'image/object/bbox/xmin': dataset_util.float_list_feature(xmin), 'image/object/bbox/xmax': dataset_util.float_list_feature(xmax), 'image/object/bbox/ymin': dataset_util.float_list_feature(ymin), 'image/object/bbox/ymax': dataset_util.float_list_feature(ymax), 'image/object/class/text': dataset_util.bytes_list_feature(classes_text), 'image/object/class/label': dataset_util.int64_list_feature(classes)})) return example ``` <a name="7-1"></a> ### 7.1 ์–ด๋…ธํ…Œ์ด์…˜๊ณผ ์˜์ƒ์„ TensorFlow Example๋“ค๋กœ ์ธ์ฝ”๋”ฉํ•˜๊ธฐ `generate_tf_records` ํ•จ์ˆ˜๋Š” ์ด๋ฏธ์ง€์— ํŠนํ™”๋œ DataFrames์„ ์ž…๋ ฅ์œผ๋กœ ๋ฐ›์Šต๋‹ˆ๋‹ค. ๊ฐ€๋ ค์ง€๊ฑฐ๋‚˜, ์‚ฌ๋ผ์กŒ๊ฑฐ๋‚˜, ์ž˜๋ฆฐ ์˜์—ญ ๋ฐ”๊นฅ์— ์žˆ๋Š” ๊ฐ์ฒด๋“ค์„ ํ•„ํ„ฐ๋งํ•˜๋Š” ๊ฒƒ์— ๋Œ€ํ•ด ์ด์ „ ์„น์…˜์—์„œ ๋ฐฐ์› ๋Š”๋ฐ, ์ด๋“ค DataFrames์€ ๊ทธ๋Ÿฌํ•œ ํ•„ํ„ฐ๋ง ๊ณผ์ •์œผ๋กœ๋ถ€ํ„ฐ ์–ป์–ด์ง‘๋‹ˆ๋‹ค. ์•„๋ž˜ ํ•จ์ˆ˜๋Š” `To_tf_example` ํ•จ์ˆ˜๋ฅผ ์ด์šฉํ•˜์—ฌ ์›๋ณธ ์˜์ƒ์„ ์ž๋ฆ…๋‹ˆ๋‹ค. ์ด ๋ฐ์ดํ„ฐ๋Š” jpeg์œผ๋กœ ์ธ์ฝ”๋”ฉ๋˜์–ด ๋‹ค์Œ ๋‹จ๊ณ„์—์„œ TFRecord์— ๊ธฐ๋ก๋ฉ๋‹ˆ๋‹ค. ํ”„๋ ˆ์ž„๊ณผ ๊ด€๋ จ๋œ ๊ฐ๊ฐ์˜ ์–ด๋…ธํ…Œ์ด์…˜, ๊ทธ๋ฆฌ๊ณ  ์ปฌ๋ง (Culling)์—์„œ ์‚ด์•„๋‚จ์€ ์–ด๋…ธํ…Œ์ด์…˜๋„ ํ•จ๊ป˜ ๊ธฐ๋ก๋ฉ๋‹ˆ๋‹ค. ``` from object_detection.utils import dataset_util from object_detection.utils import label_map_util def generate_tf_records(writer, frames_df, image_folder, reference_frames, label_map_dict): for index, the_item in frames_df.iterrows(): #check if frame belongs to the reference set; i.e. test/train if int(the_item["frame_no"]) in reference_frames: print("\r frame: {:>6}".format(int(the_item["frame_no"])), end='\r', flush=True) file_name = "{}.jpg".format(the_item["frame_no"]) tf_example = To_tf_example(the_item,image_folder, file_name, label_map_dict, g_image_size, False) writer.write(tf_example.SerializeToString()) ``` <a name="7-2"></a> ### 7.2 ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ๋งŒ๋“ค๊ธฐ ํ›ˆ๋ จ ๊ณผ์ •์„ ์ง€์›ํ•˜๊ธฐ ์œ„ํ•ด, ์šฐ๋ฆฌ๋Š” ํ›ˆ๋ จ ์…‹์„ ํฌํ•จํ•œ ๋ ˆ์ฝ”๋“œ์™€ ๊ฒ€์ฆ ์…‹์„ ํฌํ•จํ•œ ๋˜ ๋‹ค๋ฅธ ๋ ˆ์ฝ”๋“œ๋ฅผ ๋งŒ๋“ค์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ›ˆ๋ จ ์ง‘ํ•ฉ์˜ ์…‹์€ ํ•™์Šต ๋‹จ๊ณ„๋ฅผ ์ง„ํ–‰ํ•˜๋ฉด์„œ ๋ฐ์ดํ„ฐ์— ๊ฐ€์žฅ ์ ํ•ฉํ•œ ๋ชจ๋ธ ๋งค๊ฐœ๋ณ€์ˆ˜(๊ฐ€์ค‘์น˜ ๋ฐ ํŽธํ–ฅ)๋ฅผ ๊ฒฐ์ •ํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๊ฒ€์ฆ ์ง‘ํ•ฉ์˜ ์…‹์€ ๋ชจ๋ธ์˜ ํ‰๊ฐ€๋ฅผ ์ˆ˜ํ–‰ํ•˜๊ณ  ํ›ˆ๋ จ ์ค‘์ธ ๋ชจ๋ธ์˜ ํ˜„์žฌ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ์— ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ์ตœ์ข…์ ์œผ๋กœ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์‹œํ—˜ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ (ํ•™์Šต๋ฐ์ดํ„ฐ๋กœ ์‚ฌ์šฉ๋˜์ง€ ์•Š์€) ์ง‘ํ•ฉ์„ ์ค€๋น„ํ•ด ๋‘์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ๋Š” ๋ชจ๋ธ์˜ ํ˜„์žฌ ์„ฑ๋Šฅ์„ ๋‚˜ํƒ€๋‚ด๋Š” ์ง€ํ‘œ๋กœ ์ž‘์šฉํ•˜๋ฉฐ, ๋ชจ๋ธ์ด ๊ณผ์ ํ•ฉ๋๊ฑฐ๋‚˜ ๋„ˆ๋ฌด ๋นจ๋ฆฌ ์ˆ˜๋ ด๋˜์—ˆ๋Š”์ง€ ์—ฌ๋ถ€๋ฅผ ๋‚˜ํƒ€๋‚ด๋Š” ๋ฐ์— ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๊ด€์ฐฐ ์ค‘์ธ ๊ฒฐ๊ณผ ๋ชจ๋ธ์— ๋Œ€ํ•ด ๊ธ์ •์  ํ‰๊ฐ€๊ฐ€ ๋‚˜์˜ค๋“  ๋ถ€์ •์  ํ‰๊ฐ€๊ฐ€ ๋‚˜์˜ค๋“ , ์—ฐ๊ตฌ์ž๊ฐ€ ํ›ˆ๋ จ ๊ณผ์ •์„ ์กฐ๊ธฐ์— ์ข…๋ฃŒํ•˜๋Š” ๋ฐ์— ๋„์›€์„ ์ค๋‹ˆ๋‹ค. ์กฐ๊ธฐ ์ข…๋ฃŒ๋ฅผ ํ•˜๊ฒŒ ๋˜๋ฉด ๊ธ์ •์ ์ธ ๊ฒฝ์šฐ ์ตœ์ข… ๋ชจ๋ธ์ด ๋˜๋Š” ๊ฒƒ์ด๊ณ  ๋ถ€์ •์ ์ธ ๊ฒฝ์šฐ ์„ค์ •์„ ๋ณ€๊ฒฝํ•˜์—ฌ ํ•™์Šต์„ ๋‹ค์‹œ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ๋Š” ์‹ค์ œ๋กœ Low-level ํ•™์Šต์— ์˜ํ–ฅ์„ ๋ฏธ์น˜๊ฑฐ๋‚˜ ์ตœ์ข… ํ‰๊ฐ€์— ๊ธฐ์—ฌํ•˜์ง€ ์•Š์ง€๋งŒ ํ•™์Šต ๊ณผ์ •์— ์ค‘์š”ํ•œ ์˜ํ–ฅ์„ ๋ฏธ์น˜๋Š” ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๊ฐ€ ๋ฐฉ๊ธˆ ๋งŒ๋“  ๋ฐ์ดํ„ฐ ํ”„๋ ˆ์ž„์€ ๋‹จ์ผ ์˜์ƒ์˜ ๋ชจ๋“  ์–ด๋…ธํ…Œ์ด์…˜์„ ๊ฐ€์ง„ ๋ฐ์ดํ„ฐ๋ฅผ ํฌํ•จํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. `Pandas` ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ์žฅ์  ์ค‘ ํ•˜๋‚˜๋Š” ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ์กฐ์ž‘ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค๋Š” ์ ์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” `scikit-learn`๋ฅผ ์ด์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ๋ฅผ ํ›ˆ๋ จ์…‹๊ณผ ๊ฒ€์ฆ ์…‹์œผ๋กœ ๋ถ„ํ•  ๋น„์œจ์— ๋”ฐ๋ผ ๋‚˜๋ˆŒ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ``` import ast from sklearn.model_selection import train_test_split from object_detection.utils import label_map_util from object_detection.utils import visualization_utils as vis_util from object_detection.utils import dataset_util from random import shuffle unique_frames = cropped_frames.frame_no.unique() #shuffle and split the set shuffle(unique_frames) split = 0.2 train, test = train_test_split(unique_frames, test_size=split) ``` <a name="7-3"></a> ### 7.3 ํ•จ์ˆ˜๋“ค์„ ์—ฐ๊ฒฐํ•˜์—ฌ TFRecord ๋งŒ๋“ค๊ธฐ ์ด์ œ ๋ชจ๋“  ๊ฒƒ์„ ํ•ฉ์ณ์„œ TFRecord ํŒŒ์ผ์„ ์ƒ์„ฑํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋‹จ๊ณ„๋ฅผ ๊ฑฐ์น˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. 1. ๋ ˆ์ด๋ธ” ๋งต ํŒŒ์ผ(์•„๋ž˜ ์„ค๋ช…)์„ ๋”•์…”๋„ˆ๋ฆฌ ํ˜•์‹์œผ๋กœ ์ฝ์–ด๋“ค์ž…๋‹ˆ๋‹ค. ์ด ์ž‘์—…์€ API์—์„œ ์ œ๊ณตํ•˜๋Š” ์œ ํ‹ธ๋ฆฌํ‹ฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์†์‰ฝ๊ฒŒ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 2. ๋ชจ๋“  ๋ฐ์ดํ„ฐ์˜ ์ƒ์œ„ ๋””๋ ‰ํ† ๋ฆฌ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. 3. ์ƒ์„ฑ๋œ ๋ ˆ์ฝ”๋“œ ํŒŒ์ผ์˜ ์ถœ๋ ฅ ๊ฒฝ๋กœ๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. 4. ๊ฐ DataFrame ๊ทธ๋ฃน(ํ›ˆ๋ จ ๋ฐ ํ…Œ์ŠคํŠธ์šฉ)์„ `generate_tf_records` ํ•จ์ˆ˜๋กœ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 5. ์–ด๋…ธํ…Œ์ด์…˜์„ ํ”„๋ ˆ์ž„๋ณ„๋กœ ๊ธฐ๋กํ•˜์—ฌ TensorFlow Example์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. 6. ์ƒ์„ฑ๋œ Example์„ TFRecord ํŒŒ์ผ์— ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ชจ๋“  ํ•จ์ˆ˜์„ ํ•œ๋ฐ ๋ฌถ์–ด ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ์šฉ TFRecord๋ฅผ ์ƒ์„ฑํ•˜๋ ค๋ฉด ํŠน์ • ํ˜•์‹์˜ ๋ ˆ์ด๋ธ” ๋งต์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด ํŒŒ์ผ์€ ๊ธฐ๋ณธ์ ์œผ๋กœ ํด๋ž˜์Šค ID๋ฅผ ํด๋ž˜์Šค ์ด๋ฆ„์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๋Ÿฌ๋ถ„์€ ์ œ๊ณต๋œ ํŒŒ์ผ์„ ์‚ฌ์šฉํ•˜๋ฉด ๋˜๋Š”๋ฐ ์•„๋ž˜ ์ฝ”๋“œ์—์„œ ๊ทธ ํŒŒ์ผ์„ ์ด์šฉํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ํŒŒ์ผ์˜ ๋‚ด์šฉ์€ ์•„๋ž˜์ฒ˜๋Ÿผ ๋ณด์ผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ```json item { id: 1 name: 'Object' } ``` ๋žฉ ํ™˜๊ฒฝ์—์„œ์˜ ์‹œ๊ฐ„๊ณผ ์ปดํ“จํŒ… ์ œ์•ฝ์œผ๋กœ ์ธํ•ด ๋‹จ์ผ ์˜์ƒ์˜ ๋ฐ์ดํ„ฐ๋กœ๋งŒ ์ž‘์—…ํ•˜๊ณ  ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ๊ธฐ์–ตํ•˜์‹ญ์‹œ์˜ค. ์ด๋Ÿฌํ•œ ์ ‘๊ทผ ๋ฐฉ์‹์€ ๋” ํฐ ๋ฐ์ดํ„ฐ ์…‹์œผ๋กœ ํ™•์žฅํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ดํ›„์˜ ๋žฉ์—์„œ๋Š” ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ค๊ธฐ ์œ„ํ•ด ๋” ํฐ ๋ ˆ์ฝ”๋“œ ํŒŒ์ผ์„ ๊ฐ™์€ ๋ฐฉ์‹์œผ๋กœ ๋งŒ๋“ค๊ฒŒ ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์•„๋ž˜ ๊ณผ์ •์€ ์‹œ๊ฐ„์ด ์ข€ ๊ฑธ๋ฆฝ๋‹ˆ๋‹ค. ๋‹ค์Œ ๋žฉ์˜ ๋‚ด์šฉ์„ ๋ฏธ๋ฆฌ ์ฝ๊ธฐ์— ์ข‹์€ ๊ธฐํšŒ์ž…๋‹ˆ๋‹ค. ``` video_list = ['126_206-A0-3'] label_map_dict = label_map_util.get_label_map_dict(config["Label_Map"]) train_writer = tf.python_io.TFRecordWriter(join(config["Base_Dest_Folder"],'train.record')) eval_writer = tf.python_io.TFRecordWriter(join(config["Base_Dest_Folder"],'eval.record')) for xx in video_list: #create train record generate_tf_records(train_writer, cropped_frames, '{}/images/{}'.format(config["Base_Dest_Folder"], config["Test_Video_ID"]), train, label_map_dict) #create eval record generate_tf_records(eval_writer, cropped_frames, '{}/images/{}'.format(config["Base_Dest_Folder"], config["Test_Video_ID"]), test, label_map_dict) train_writer.close() eval_writer.close() ``` ## ์ •๋ฆฌ IVA ๊ณผ์ •์˜ ์ฒซ ๋ฒˆ์งธ๋ฅผ ์ˆ˜๋ฃŒํ•œ ๊ฒƒ์„ ์ถ•ํ•˜ํ•ฉ๋‹ˆ๋‹ค! ์‹œ๊ฐ„์ด ์žˆ๋‹ค๋ฉด ์œ„์˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ˆ˜์ •ํ•˜์—ฌ ๋‹ค๋ฅธ ๋ชจ๋ธ๊ณผ ๋น„๋””์˜ค๋ฅผ ์กฐํ•ฉํ•˜์—ฌ ์‹คํ—˜ํ•ด๋ณด์„ธ์š”. ์ง€๊ธˆ๊นŒ์ง€ ์šฐ๋ฆฌ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฒƒ๋“ค์„ ๋ฐฐ์› ์Šต๋‹ˆ๋‹ค. * ๋ช‡ ๊ฐ€์ง€ ๊ฐ์ฒด ๊ฒ€์ถœ ๋ฐฉ๋ฒ•์™€ ๊ทธ๋“ค์˜ ์ฐจ์ด์ ๊ณผ ์žฅ๋‹จ์  * ๋น„๋””์˜ค ํ”„๋ ˆ์ž„๊ณผ ์–ด๋…ธํ…Œ์ด์…˜์„ TensorFlow ๊ฐ์ฒด ๊ฒ€์ถœ API์™€ ๋ฉ”ํŠธ๋ฆญ ์ •์˜์— ์‰ฝ๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•˜๊ธฐ * ๋ชจ๋ธ ์ •ํ™•๋„๋ฅผ ์ •์„ฑ์ ์œผ๋กœ ํ‰๊ฐ€ํ•˜๊ธฐ ์œ„ํ•œ ์ถœ๋ ฅ์„ ์ดํ•ดํ•˜๋Š” ๋ฐฉ๋ฒ• * IoU ์ง€ํ‘œ๋ฅผ ์ด์šฉํ•˜์—ฌ ๊ฐ์ฒด ๊ฒ€์ถœ ๋ชจ๋ธ์˜ ์ •ํ™•๋„ ๋ฐ ์„ฑ๋Šฅ์„ ์ •๋Ÿ‰์ ์œผ๋กœ ์ธก์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• ๋‹ค์Œ ๋žฉ์—์„œ๋Š” ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ค๊ณ  ๋„คํŠธ์›Œํฌ ๊ฐ€์ค‘์น˜๋ฅผ ํŒŒ์ธํŠœ๋‹ (Fine-tuning)ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ฐฐ์šธ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋˜ํ•œ ๋น„๋””์˜ค์—์„œ ๊ฐ์ฒด๋ฅผ ์ถ”์ ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ฐฐ์šธ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ฐธ์—ฌํ•ด ์ฃผ์…”์„œ ๊ฐ์‚ฌํ•ฉ๋‹ˆ๋‹ค! ## ํ•ด๋‹ต <a name="a1"></a> ### ์—ฐ์Šต 1: ``` print("Ratio of frames with moving vehicles to tatal: {0:.2f}%".format((frame_existance == 1.0).sum() / len(frame_existance) * 100)) ``` ์•ž์œผ๋กœ ๋Œ์•„๊ฐ€์‹œ๋ ค๋ฉด [์—ฌ๊ธฐ](#e1)๋ฅผ ํด๋ฆญํ•˜์„ธ์š”. <a name="a2"></a> ### ์—ฐ์Šต 2: ``` sedans = annotated_frames[annotated_frames["attributes"].str.contains("sedan") == True] print ('Total number of sedans: {}'.format(len(sedans))) ``` ์•ž์œผ๋กœ ๋Œ์•„๊ฐ€์‹œ๋ ค๋ฉด [์—ฌ๊ธฐ](#e2)๋ฅผ ํด๋ฆญํ•˜์„ธ์š”. <a name="a3"></a> ### ์—ฐ์Šต 3: ``` # YOUR CODE GOES HERE def calc_average_HW(row): row['Average_Height'] = row['ymax'] - row['ymin'] row['Average_Width'] = row['xmax'] - row['xmin'] return row Average_HW = inside_items.groupby(['track_id']).apply(calc_average_HW) Average_HW = Average_HW.groupby(['track_id']).mean() Average_HW.head() ``` ์•ž์œผ๋กœ ๋Œ์•„๊ฐ€์‹œ๋ ค๋ฉด [์—ฌ๊ธฐ](#e3)๋ฅผ ํด๋ฆญํ•˜์„ธ์š”.
github_jupyter
``` %load_ext autoreload %reload_ext autoreload %autoreload 2 %matplotlib inline import os # TO USE A DATABASE OTHER THAN SQLITE, USE THIS LINE # Note that this is necessary for parallel execution amongst other things... # os.environ['SNORKELDB'] = 'postgres:///snorkel-intro' from snorkel import SnorkelSession session = SnorkelSession() # Here, we just set how many documents we'll process for automatic testing- you can safely ignore this! n_docs = 500 if 'CI' in os.environ else 2591 from snorkel.models import candidate_subclass Spouse = candidate_subclass('Spouse', ['person1', 'person2']) train_cands = session.query(Spouse).filter(Spouse.split == 0).order_by(Spouse.id).all() dev_cands = session.query(Spouse).filter(Spouse.split == 1).order_by(Spouse.id).all() test_cands = session.query(Spouse).filter(Spouse.split == 2).order_by(Spouse.id).all() from util import load_external_labels #%time load_external_labels(session, Spouse, annotator_name='gold') from snorkel.annotations import load_gold_labels #L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1, zero_one=True) #L_gold_test = load_gold_labels(session, annotator_name='gold', split=2, zero_one=True) L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1) L_gold_test = load_gold_labels(session, annotator_name='gold', split=2) #gold_labels_dev = [x[0,0] for x in L_gold_dev.todense()] #for i,L in enumerate(gold_labels_dev): # print(i,gold_labels_dev[i]) gold_labels_dev = [] for i,L in enumerate(L_gold_dev): gold_labels_dev.append(L[0,0]) gold_labels_test = [] for i,L in enumerate(L_gold_test): gold_labels_test.append(L[0,0]) print(len(gold_labels_dev),len(gold_labels_test)) import re from snorkel.lf_helpers import ( get_left_tokens, get_right_tokens, get_between_tokens, get_text_between, get_tagged_text, ) #spouses = {'spouse', 'wife', 'husband', 'ex-wife', 'ex-husband'} spouses = {'spouse', 'wife', 'ex-wife','ex-husband'} # one fourth #spouses = { 'wife', 'ex-wife','ex-husband'} #half #spouses = {'ex-wife'} # three fourth #family = {'father', 'mother', 'sister', 'brother', 'son', 'daughter', # 'grandfather', 'grandmother', 'uncle', 'aunt', 'cousin'} family = {'father', 'mother', 'brother', 'daughter', 'grandfather', 'grandmother', 'aunt', 'cousin'} #one fourth #family = { 'mother', 'brother', # 'grandfather', 'aunt', 'cousin'} #half #family = { 'brother', # 'grandfather', 'aunt'} #three fourth family = family | {f + '-in-law' for f in family} #other = {'boyfriend', 'girlfriend', 'boss', 'employee', 'secretary', 'co-worker'} other = {'boyfriend', 'girlfriend', 'employee', 'secretary' } # one fourth #other = {'boyfriend', 'employee', 'secretary' } # half #other = {'secretary'} #three fourth # Helper function to get last name def last_name(s): name_parts = s.split(' ') return name_parts[-1] if len(name_parts) > 1 else None def LF_husband_wife(c): return (1,1) if len(spouses.intersection(get_between_tokens(c))) > 0 else (0,1) def LF_husband_wife_left_window(c): if len(spouses.intersection(get_left_tokens(c[0], window=2))) > 0: return (1,1) elif len(spouses.intersection(get_left_tokens(c[1], window=2))) > 0: return (1,1) else: return (0,1) def LF_same_last_name(c): p1_last_name = last_name(c.person1.get_span()) p2_last_name = last_name(c.person2.get_span()) if p1_last_name and p2_last_name and p1_last_name == p2_last_name: if c.person1.get_span() != c.person2.get_span(): return (1,1) return (0,1) def LF_no_spouse_in_sentence(c): return (-1,1) if np.random.rand() < 0.75 and len(spouses.intersection(c.get_parent().words)) == 0 else (0,1) def LF_and_married(c): return (1,1) if 'and' in get_between_tokens(c) and 'married' in get_right_tokens(c) else (0,1) def LF_familial_relationship(c): return (-1,1) if len(family.intersection(get_between_tokens(c))) > 0 else (0,1) def LF_family_left_window(c): if len(family.intersection(get_left_tokens(c[0], window=2))) > 0: return (-1,1) elif len(family.intersection(get_left_tokens(c[1], window=2))) > 0: return (-1,1) else: return (0,1) def LF_other_relationship(c): return (-1,1) if len(other.intersection(get_between_tokens(c))) > 0 else (0,1) import bz2 # Function to remove special characters from text def strip_special(s): return ''.join(c for c in s if ord(c) < 128) # Read in known spouse pairs and save as set of tuples with bz2.BZ2File('data/spouses_dbpedia.csv.bz2', 'rb') as f: known_spouses = set( tuple(strip_special(x).strip().split(',')) for x in f.readlines() ) # Last name pairs for known spouses last_names = set([(last_name(x), last_name(y)) for x, y in known_spouses if last_name(x) and last_name(y)]) def LF_distant_supervision(c): p1, p2 = c.person1.get_span(), c.person2.get_span() return (1,1) if (p1, p2) in known_spouses or (p2, p1) in known_spouses else (0,1) def LF_distant_supervision_last_names(c): p1, p2 = c.person1.get_span(), c.person2.get_span() p1n, p2n = last_name(p1), last_name(p2) return (1,1) if (p1 != p2) and ((p1n, p2n) in last_names or (p2n, p1n) in last_names) else (0,1) LFs = [ LF_distant_supervision, LF_distant_supervision_last_names, LF_husband_wife, LF_husband_wife_left_window, LF_same_last_name, LF_no_spouse_in_sentence, LF_and_married, LF_familial_relationship, LF_family_left_window, LF_other_relationship ] import numpy as np import math def PHI(K,LAMDAi,SCOREi): return [K*l*s for (l,s) in zip(LAMDAi,SCOREi)] def softmax(THETA,LAMDAi,SCOREi): x = [] for k in [1,-1]: product = np.dot(PHI(k,LAMDAi,SCOREi),THETA) x.append(product) return np.exp(x) / np.sum(np.exp(x), axis=0) def function_conf(THETA,LAMDA,P_cap,Confidence): s = 0.0 i = 0 for LAMDAi in LAMDA: s = s + Confidence[i]*np.dot(np.log(softmax(THETA,LAMDAi)),P_cap[i]) i = i+1 return -s def function(THETA,LAMDA,SCORE,P_cap): s = 0.0 i = 0 for i in range(len(LAMDA)): s = s + np.dot(np.log(softmax(THETA,LAMDA[i],SCORE[i])),P_cap[i]) i = i+1 return -s def P_K_Given_LAMDAi_THETA(K,THETA,LAMDAi,SCOREi): x = softmax(THETA,LAMDAi,SCOREi) if(K==1): return x[0] else: return x[1] np.random.seed(78) THETA = np.random.rand(len(LFs),1) def PHIj(j,K,LAMDAi,SCOREi): return LAMDAi[j]*K*SCOREi[j] def RIGHT(j,LAMDAi,SCOREi,THETA): phi = [] for k in [1,-1]: phi.append(PHIj(j,k,LAMDAi,SCOREi)) x = softmax(THETA,LAMDAi,SCOREi) return np.dot(phi,x) def function_conf_der(THETA,LAMDA,P_cap,Confidence): der = [] for j in range(len(THETA)): i = 0 s = 0.0 for LAMDAi in LAMDA: p = 0 for K in [1,-1]: s = s + Confidence[i]*(PHIj(j,K,LAMDAi)-RIGHT(j,LAMDAi,THETA))*P_cap[i][p] p = p+1 i = i+1 der.append(-s) return np.array(der) def function_der(THETA,LAMDA,SCORE,P_cap): der = [] for j in range(len(THETA)): i = 0 s = 0.0 for index in range(len(LAMDA)): p = 0 for K in [1,-1]: s = s + (PHIj(j,K,LAMDA[index],SCORE[index])-RIGHT(j,LAMDA[index],SCORE[index],THETA))*P_cap[i][p] p = p+1 i = i+1 der.append(-s) return np.array(der) import numpy as np def get_LAMDA(cands): LAMDA = [] SCORE = [] for ci in cands: L=[] S=[] P_ik = [] for LF in LFs: #print LF.__name__ l,s = LF(ci) L.append(l) S.append((s+1)/2) #to scale scores in [0,1] LAMDA.append(L) SCORE.append(S) return LAMDA,SCORE def get_Confidence(LAMDA): confidence = [] for L in LAMDA: Total_L = float(len(L)) No_zeros = L.count(0) No_Non_Zeros = Total_L - No_zeros confidence.append(No_Non_Zeros/Total_L) return confidence def get_Initial_P_cap(LAMDA): P_cap = [] for L in LAMDA: P_ik = [] denominator=float(L.count(1)+L.count(-1)) if(denominator==0): denominator=1 P_ik.append(L.count(1)/denominator) P_ik.append(L.count(-1)/denominator) P_cap.append(P_ik) return P_cap #print(np.array(LAMDA)) #print(np.array(P_cap))append(L) #LAMDA=np.array(LAMDA).astype(int) #P_cap=np.array(P_cap) #print(np.array(LAMDA).shape) #print(np.array(P_cap).shape) #print(L) #print(ci.chemical.get_span(),ci.disease.get_span(),"No.Os",L.count(0),"No.1s",L.count(1),"No.-1s",L.count(-1)) #print(ci.chemical.get_span(),ci.disease.get_span(),"P(0):",L.count(0)/len(L)," P(1)",L.count(1)/len(L),"P(-1)",L.count(-1)/len(L)) def get_P_cap(LAMDA,SCORE,THETA): P_cap = [] for i in range(len(LAMDA)): P_capi = softmax(THETA,LAMDA[i],SCORE[i]) P_cap.append(P_capi) return P_cap def score(predicted_labels,gold_labels): tp =0.0 tn =0.0 fp =0.0 fn =0.0 for i in range(len(gold_labels)): if(predicted_labels[i]==gold_labels[i]): if(predicted_labels[i]==1): tp=tp+1 else: tn=tn+1 else: if(predicted_labels[i]==1): fp=fp+1 else: fn=fn+1 print("tp",tp,"tn",tn,"fp",fp,"fn",fn) precision = tp/(tp+fp) recall = tp/(tp+fn) f1score = (2*precision*recall)/(precision+recall) print("precision:",precision) print("recall:",recall) print("F1 score:",f1score) from scipy.optimize import minimize import cPickle as pickle def get_marginals(P_cap): marginals = [] for P_capi in P_cap: marginals.append(P_capi[0]) return marginals def predict_labels(marginals): predicted_labels=[] for i in marginals: if(i<0.5): predicted_labels.append(-1) else: predicted_labels.append(1) return predicted_labels def print_details(label,THETA,LAMDA,SCORE): print(label) P_cap = get_P_cap(LAMDA,SCORE,THETA) marginals=get_marginals(P_cap) plt.hist(marginals, bins=20) plt.show() plt.bar(range(0,2796),marginals) plt.show() predicted_labels=predict_labels(marginals) print(len(marginals),len(predicted_labels),len(gold_labels_dev)) #score(predicted_labels,gold_labels_dev) print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(predicted_labels),average='binary')) def train(No_Iter,Use_Confidence=True,theta_file_name="THETA"): global THETA global dev_LAMDA,dev_SCORE LAMDA,SCORE = get_LAMDA(train_cands) P_cap = get_Initial_P_cap(LAMDA) Confidence = get_Confidence(LAMDA) for iteration in range(No_Iter): if(Use_Confidence==True): res = minimize(function_conf,THETA,args=(LAMDA,P_cap,Confidence), method='BFGS',jac=function_conf_der,options={'disp': True, 'maxiter':20}) #nelder-mead else: res = minimize(function,THETA,args=(LAMDA,SCORE,P_cap), method='BFGS',jac=function_der,options={'disp': True, 'maxiter':20}) #nelder-mead THETA = res.x # new THETA print(THETA) P_cap = get_P_cap(LAMDA,SCORE,THETA) #new p_cap print_details("train iteration: "+str(iteration),THETA,dev_LAMDA,dev_SCORE) #score(predicted_labels,gold_labels) NP_P_cap = np.array(P_cap) np.savetxt('Train_P_cap.txt', NP_P_cap, fmt='%f') pickle.dump(NP_P_cap,open("Train_P_cap.p","wb")) NP_THETA = np.array(THETA) np.savetxt(theta_file_name+'.txt', NP_THETA, fmt='%f') pickle.dump( NP_THETA, open( theta_file_name+'.p', "wb" )) # save the file as "outfile_name.npy" def test(THETA): global dev_LAMDA,dev_SCORE P_cap = get_P_cap(dev_LAMDA,dev_SCORE,THETA) print_details("test:",THETA,dev_LAMDA,dev_SCORE) NP_P_cap = np.array(P_cap) np.savetxt('Dev_P_cap.txt', NP_P_cap, fmt='%f') pickle.dump(NP_P_cap,open("Dev_P_cap.p","wb")) def load_marginals(s): marginals = [] if(s=="train"): train_P_cap = np.load("Train_P_cap.npy") marginals = train_P_cap[:,0] return marginals # with one fourth removed from sklearn.metrics import precision_recall_fscore_support import matplotlib.pyplot as plt dev_LAMDA,dev_SCORE = get_LAMDA(dev_cands) train(3,Use_Confidence=False,theta_file_name="THETA") test(THETA) # with half removed from sklearn.metrics import precision_recall_fscore_support import matplotlib.pyplot as plt dev_LAMDA,dev_SCORE = get_LAMDA(dev_cands) train(3,Use_Confidence=False,theta_file_name="THETA") test(THETA) # three fourth removed from sklearn.metrics import precision_recall_fscore_support import matplotlib.pyplot as plt dev_LAMDA,dev_SCORE = get_LAMDA(dev_cands) train(3,Use_Confidence=False,theta_file_name="THETA") test(THETA) #All are descrete, nothing removed train(3,Use_Confidence=False,"THETA") test(THETA) def print_details(label,THETA,LAMDA,SCORE): print(label) P_cap = get_P_cap(LAMDA,SCORE,THETA) marginals=get_marginals(P_cap) plt.hist(marginals, bins=20) plt.show() #plt.bar(range(0,2796),marginals) #plt.show() predicted_labels=predict_labels(marginals) print(len(marginals),len(predicted_labels),len(gold_labels_dev)) #score(predicted_labels,gold_labels_dev) print(precision_recall_fscore_support(np.array(gold_labels_dev),np.array(predicted_labels),average='binary')) def predict_labels(marginals): predicted_labels=[] for i in marginals: if(i<0.5): predicted_labels.append(-1) else: predicted_labels.append(1) return predicted_labels #import cPickle as pickle #THETA = pickle.load( open( "THETA.p", "rb" ) ) #test(THETA) #LAMDA,SCORE = get_LAMDA(dev_cands) #Confidence = get_Confidence(LAMDA) #P_cap = get_P_cap(LAMDA,SCORE,THETA) #marginals=get_marginals(P_cap) #plt.hist(marginals, bins=20) #plt.show() #plt.bar(range(0,888),train_marginals) #plt.show() print_details("dev set",THETA,dev_LAMDA,dev_SCORE) predicted_labels=predict_labels(marginals) sorted_predicted_labels=[x for (y,x) in sorted(zip(Confidence,predicted_labels))] #sort Labels as per Confidence sorted_predicted_labels=list(reversed(sorted_predicted_labels)) for i,j in enumerate(reversed(sorted(zip(Confidence,predicted_labels,gold_labels_dev)))): if i>20: break print i,j #print(len(marginals),len(predicted_labels),len(gold_labels_dev)) #no_of_labels=186#int(len(predicted_labels)*0.1) #54 - >0.2 , 108>= 0.15 , 186>= 0.12 #print(len(sorted_predicted_labels[0:no_of_labels])) no_of_labels=2796 score(predicted_labels[0:no_of_labels],gold_labels_dev[0:no_of_labels]) ```
github_jupyter
# Recurrent Neural Networks (RNN) with Keras ## Learning Objectives 1. Add built-in RNN layers. 2. Build bidirectional RNNs. 3. Using CuDNN kernels when available. 4. Build a RNN model with nested input/output. ## Introduction Recurrent neural networks (RNN) are a class of neural networks that is powerful for modeling sequence data such as time series or natural language. Schematically, a RNN layer uses a `for` loop to iterate over the timesteps of a sequence, while maintaining an internal state that encodes information about the timesteps it has seen so far. The Keras RNN API is designed with a focus on: - **Ease of use**: the built-in `keras.layers.RNN`, `keras.layers.LSTM`, `keras.layers.GRU` layers enable you to quickly build recurrent models without having to make difficult configuration choices. - **Ease of customization**: You can also define your own RNN cell layer (the inner part of the `for` loop) with custom behavior, and use it with the generic `keras.layers.RNN` layer (the `for` loop itself). This allows you to quickly prototype different research ideas in a flexible way with minimal code. Each learning objective will correspond to a __#TODO__ in the notebook where you will complete the notebook cell's code before running. Refer to the [solution](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/text_classification/solutions/rnn.ipynb) for reference. ## Setup ``` import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers ``` ## Built-in RNN layers: a simple example There are three built-in RNN layers in Keras: 1. `keras.layers.SimpleRNN`, a fully-connected RNN where the output from previous timestep is to be fed to next timestep. 2. `keras.layers.GRU`, first proposed in [Cho et al., 2014](https://arxiv.org/abs/1406.1078). 3. `keras.layers.LSTM`, first proposed in [Hochreiter & Schmidhuber, 1997](https://www.bioinf.jku.at/publications/older/2604.pdf). In early 2015, Keras had the first reusable open-source Python implementations of LSTM and GRU. Here is a simple example of a `Sequential` model that processes sequences of integers, embeds each integer into a 64-dimensional vector, then processes the sequence of vectors using a `LSTM` layer. ``` model = keras.Sequential() # Add an Embedding layer expecting input vocab of size 1000, and # output embedding dimension of size 64. model.add(layers.Embedding(input_dim=1000, output_dim=64)) # Add a LSTM layer with 128 internal units. # TODO -- your code goes here # Add a Dense layer with 10 units. # TODO -- your code goes here model.summary() ``` Built-in RNNs support a number of useful features: - Recurrent dropout, via the `dropout` and `recurrent_dropout` arguments - Ability to process an input sequence in reverse, via the `go_backwards` argument - Loop unrolling (which can lead to a large speedup when processing short sequences on CPU), via the `unroll` argument - ...and more. For more information, see the [RNN API documentation](https://keras.io/api/layers/recurrent_layers/). ## Outputs and states By default, the output of a RNN layer contains a single vector per sample. This vector is the RNN cell output corresponding to the last timestep, containing information about the entire input sequence. The shape of this output is `(batch_size, units)` where `units` corresponds to the `units` argument passed to the layer's constructor. A RNN layer can also return the entire sequence of outputs for each sample (one vector per timestep per sample), if you set `return_sequences=True`. The shape of this output is `(batch_size, timesteps, units)`. ``` model = keras.Sequential() model.add(layers.Embedding(input_dim=1000, output_dim=64)) # The output of GRU will be a 3D tensor of shape (batch_size, timesteps, 256) model.add(layers.GRU(256, return_sequences=True)) # The output of SimpleRNN will be a 2D tensor of shape (batch_size, 128) model.add(layers.SimpleRNN(128)) model.add(layers.Dense(10)) model.summary() ``` In addition, a RNN layer can return its final internal state(s). The returned states can be used to resume the RNN execution later, or [to initialize another RNN](https://arxiv.org/abs/1409.3215). This setting is commonly used in the encoder-decoder sequence-to-sequence model, where the encoder final state is used as the initial state of the decoder. To configure a RNN layer to return its internal state, set the `return_state` parameter to `True` when creating the layer. Note that `LSTM` has 2 state tensors, but `GRU` only has one. To configure the initial state of the layer, just call the layer with additional keyword argument `initial_state`. Note that the shape of the state needs to match the unit size of the layer, like in the example below. ``` encoder_vocab = 1000 decoder_vocab = 2000 encoder_input = layers.Input(shape=(None,)) encoder_embedded = layers.Embedding(input_dim=encoder_vocab, output_dim=64)( encoder_input ) # Return states in addition to output output, state_h, state_c = layers.LSTM(64, return_state=True, name="encoder")( encoder_embedded ) encoder_state = [state_h, state_c] decoder_input = layers.Input(shape=(None,)) decoder_embedded = layers.Embedding(input_dim=decoder_vocab, output_dim=64)( decoder_input ) # Pass the 2 states to a new LSTM layer, as initial state decoder_output = layers.LSTM(64, name="decoder")( decoder_embedded, initial_state=encoder_state ) output = layers.Dense(10)(decoder_output) model = keras.Model([encoder_input, decoder_input], output) model.summary() ``` ## RNN layers and RNN cells In addition to the built-in RNN layers, the RNN API also provides cell-level APIs. Unlike RNN layers, which processes whole batches of input sequences, the RNN cell only processes a single timestep. The cell is the inside of the `for` loop of a RNN layer. Wrapping a cell inside a `keras.layers.RNN` layer gives you a layer capable of processing batches of sequences, e.g. `RNN(LSTMCell(10))`. Mathematically, `RNN(LSTMCell(10))` produces the same result as `LSTM(10)`. In fact, the implementation of this layer in TF v1.x was just creating the corresponding RNN cell and wrapping it in a RNN layer. However using the built-in `GRU` and `LSTM` layers enable the use of CuDNN and you may see better performance. There are three built-in RNN cells, each of them corresponding to the matching RNN layer. - `keras.layers.SimpleRNNCell` corresponds to the `SimpleRNN` layer. - `keras.layers.GRUCell` corresponds to the `GRU` layer. - `keras.layers.LSTMCell` corresponds to the `LSTM` layer. The cell abstraction, together with the generic `keras.layers.RNN` class, make it very easy to implement custom RNN architectures for your research. ## Cross-batch statefulness When processing very long sequences (possibly infinite), you may want to use the pattern of **cross-batch statefulness**. Normally, the internal state of a RNN layer is reset every time it sees a new batch (i.e. every sample seen by the layer is assumed to be independent of the past). The layer will only maintain a state while processing a given sample. If you have very long sequences though, it is useful to break them into shorter sequences, and to feed these shorter sequences sequentially into a RNN layer without resetting the layer's state. That way, the layer can retain information about the entirety of the sequence, even though it's only seeing one sub-sequence at a time. You can do this by setting `stateful=True` in the constructor. If you have a sequence `s = [t0, t1, ... t1546, t1547]`, you would split it into e.g. ``` s1 = [t0, t1, ... t100] s2 = [t101, ... t201] ... s16 = [t1501, ... t1547] ``` Then you would process it via: ```python lstm_layer = layers.LSTM(64, stateful=True) for s in sub_sequences: output = lstm_layer(s) ``` When you want to clear the state, you can use `layer.reset_states()`. > Note: In this setup, sample `i` in a given batch is assumed to be the continuation of sample `i` in the previous batch. This means that all batches should contain the same number of samples (batch size). E.g. if a batch contains `[sequence_A_from_t0_to_t100, sequence_B_from_t0_to_t100]`, the next batch should contain `[sequence_A_from_t101_to_t200, sequence_B_from_t101_to_t200]`. Here is a complete example: ``` paragraph1 = np.random.random((20, 10, 50)).astype(np.float32) paragraph2 = np.random.random((20, 10, 50)).astype(np.float32) paragraph3 = np.random.random((20, 10, 50)).astype(np.float32) lstm_layer = layers.LSTM(64, stateful=True) output = lstm_layer(paragraph1) output = lstm_layer(paragraph2) output = lstm_layer(paragraph3) # reset_states() will reset the cached state to the original initial_state. # If no initial_state was provided, zero-states will be used by default. # TODO -- your code goes here ``` ### RNN State Reuse <a id="rnn_state_reuse"></a> The recorded states of the RNN layer are not included in the `layer.weights()`. If you would like to reuse the state from a RNN layer, you can retrieve the states value by `layer.states` and use it as the initial state for a new layer via the Keras functional API like `new_layer(inputs, initial_state=layer.states)`, or model subclassing. Please also note that sequential model might not be used in this case since it only supports layers with single input and output, the extra input of initial state makes it impossible to use here. ``` paragraph1 = np.random.random((20, 10, 50)).astype(np.float32) paragraph2 = np.random.random((20, 10, 50)).astype(np.float32) paragraph3 = np.random.random((20, 10, 50)).astype(np.float32) lstm_layer = layers.LSTM(64, stateful=True) output = lstm_layer(paragraph1) output = lstm_layer(paragraph2) existing_state = lstm_layer.states new_lstm_layer = layers.LSTM(64) new_output = new_lstm_layer(paragraph3, initial_state=existing_state) ``` ## Bidirectional RNNs For sequences other than time series (e.g. text), it is often the case that a RNN model can perform better if it not only processes sequence from start to end, but also backwards. For example, to predict the next word in a sentence, it is often useful to have the context around the word, not only just the words that come before it. Keras provides an easy API for you to build such bidirectional RNNs: the `keras.layers.Bidirectional` wrapper. ``` model = keras.Sequential() # Add Bidirectional layers # TODO -- your code goes here model.summary() ``` Under the hood, `Bidirectional` will copy the RNN layer passed in, and flip the `go_backwards` field of the newly copied layer, so that it will process the inputs in reverse order. The output of the `Bidirectional` RNN will be, by default, the concatenation of the forward layer output and the backward layer output. If you need a different merging behavior, e.g. concatenation, change the `merge_mode` parameter in the `Bidirectional` wrapper constructor. For more details about `Bidirectional`, please check [the API docs](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Bidirectional/). ## Performance optimization and CuDNN kernels In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN kernels by default when a GPU is available. With this change, the prior `keras.layers.CuDNNLSTM/CuDNNGRU` layers have been deprecated, and you can build your model without worrying about the hardware it will run on. Since the CuDNN kernel is built with certain assumptions, this means the layer **will not be able to use the CuDNN kernel if you change the defaults of the built-in LSTM or GRU layers**. E.g.: - Changing the `activation` function from `tanh` to something else. - Changing the `recurrent_activation` function from `sigmoid` to something else. - Using `recurrent_dropout` > 0. - Setting `unroll` to True, which forces LSTM/GRU to decompose the inner `tf.while_loop` into an unrolled `for` loop. - Setting `use_bias` to False. - Using masking when the input data is not strictly right padded (if the mask corresponds to strictly right padded data, CuDNN can still be used. This is the most common case). For the detailed list of constraints, please see the documentation for the [LSTM](https://www.tensorflow.org/api_docs/python/tf/keras/layers/LSTM/) and [GRU](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GRU/) layers. ### Using CuDNN kernels when available Let's build a simple LSTM model to demonstrate the performance difference. We'll use as input sequences the sequence of rows of MNIST digits (treating each row of pixels as a timestep), and we'll predict the digit's label. ``` batch_size = 64 # Each MNIST image batch is a tensor of shape (batch_size, 28, 28). # Each input sequence will be of size (28, 28) (height is treated like time). input_dim = 28 units = 64 output_size = 10 # labels are from 0 to 9 # Build the RNN model def build_model(allow_cudnn_kernel=True): # CuDNN is only available at the layer level, and not at the cell level. # This means `LSTM(units)` will use the CuDNN kernel, # while RNN(LSTMCell(units)) will run on non-CuDNN kernel. if allow_cudnn_kernel: # The LSTM layer with default options uses CuDNN. lstm_layer = keras.layers.LSTM(units, input_shape=(None, input_dim)) else: # Wrapping a LSTMCell in a RNN layer will not use CuDNN. lstm_layer = keras.layers.RNN( keras.layers.LSTMCell(units), input_shape=(None, input_dim) ) model = keras.models.Sequential( [ lstm_layer, keras.layers.BatchNormalization(), keras.layers.Dense(output_size), ] ) return model ``` Let's load the MNIST dataset: ``` mnist = keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 sample, sample_label = x_train[0], y_train[0] ``` Let's create a model instance and train it. We choose `sparse_categorical_crossentropy` as the loss function for the model. The output of the model has shape of `[batch_size, 10]`. The target for the model is an integer vector, each of the integer is in the range of 0 to 9. ``` model = build_model(allow_cudnn_kernel=True) # Compile the model # TODO -- your code goes here model.fit( x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1 ) ``` Now, let's compare to a model that does not use the CuDNN kernel: ``` noncudnn_model = build_model(allow_cudnn_kernel=False) noncudnn_model.set_weights(model.get_weights()) noncudnn_model.compile( loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), optimizer="sgd", metrics=["accuracy"], ) noncudnn_model.fit( x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1 ) ``` When running on a machine with a NVIDIA GPU and CuDNN installed, the model built with CuDNN is much faster to train compared to the model that uses the regular TensorFlow kernel. The same CuDNN-enabled model can also be used to run inference in a CPU-only environment. The `tf.device` annotation below is just forcing the device placement. The model will run on CPU by default if no GPU is available. You simply don't have to worry about the hardware you're running on anymore. Isn't that pretty cool? ``` import matplotlib.pyplot as plt with tf.device("CPU:0"): cpu_model = build_model(allow_cudnn_kernel=True) cpu_model.set_weights(model.get_weights()) result = tf.argmax(cpu_model.predict_on_batch(tf.expand_dims(sample, 0)), axis=1) print( "Predicted result is: %s, target result is: %s" % (result.numpy(), sample_label) ) plt.imshow(sample, cmap=plt.get_cmap("gray")) ``` ## RNNs with list/dict inputs, or nested inputs Nested structures allow implementers to include more information within a single timestep. For example, a video frame could have audio and video input at the same time. The data shape in this case could be: `[batch, timestep, {"video": [height, width, channel], "audio": [frequency]}]` In another example, handwriting data could have both coordinates x and y for the current position of the pen, as well as pressure information. So the data representation could be: `[batch, timestep, {"location": [x, y], "pressure": [force]}]` The following code provides an example of how to build a custom RNN cell that accepts such structured inputs. ### Define a custom cell that supports nested input/output See [Making new Layers & Models via subclassing](https://www.tensorflow.org/guide/keras/custom_layers_and_models/) for details on writing your own layers. ``` class NestedCell(keras.layers.Layer): def __init__(self, unit_1, unit_2, unit_3, **kwargs): self.unit_1 = unit_1 self.unit_2 = unit_2 self.unit_3 = unit_3 self.state_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])] self.output_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])] super(NestedCell, self).__init__(**kwargs) def build(self, input_shapes): # expect input_shape to contain 2 items, [(batch, i1), (batch, i2, i3)] i1 = input_shapes[0][1] i2 = input_shapes[1][1] i3 = input_shapes[1][2] self.kernel_1 = self.add_weight( shape=(i1, self.unit_1), initializer="uniform", name="kernel_1" ) self.kernel_2_3 = self.add_weight( shape=(i2, i3, self.unit_2, self.unit_3), initializer="uniform", name="kernel_2_3", ) def call(self, inputs, states): # inputs should be in [(batch, input_1), (batch, input_2, input_3)] # state should be in shape [(batch, unit_1), (batch, unit_2, unit_3)] input_1, input_2 = tf.nest.flatten(inputs) s1, s2 = states output_1 = tf.matmul(input_1, self.kernel_1) output_2_3 = tf.einsum("bij,ijkl->bkl", input_2, self.kernel_2_3) state_1 = s1 + output_1 state_2_3 = s2 + output_2_3 output = (output_1, output_2_3) new_states = (state_1, state_2_3) return output, new_states def get_config(self): return {"unit_1": self.unit_1, "unit_2": unit_2, "unit_3": self.unit_3} ``` ### Build a RNN model with nested input/output Let's build a Keras model that uses a `keras.layers.RNN` layer and the custom cell we just defined. ``` unit_1 = 10 unit_2 = 20 unit_3 = 30 i1 = 32 i2 = 64 i3 = 32 batch_size = 64 num_batches = 10 timestep = 50 cell = NestedCell(unit_1, unit_2, unit_3) rnn = keras.layers.RNN(cell) input_1 = keras.Input((None, i1)) input_2 = keras.Input((None, i2, i3)) outputs = rnn((input_1, input_2)) model = keras.models.Model([input_1, input_2], outputs) model.compile(optimizer="adam", loss="mse", metrics=["accuracy"]) ``` ### Train the model with randomly generated data Since there isn't a good candidate dataset for this model, we use random Numpy data for demonstration. ``` input_1_data = np.random.random((batch_size * num_batches, timestep, i1)) input_2_data = np.random.random((batch_size * num_batches, timestep, i2, i3)) target_1_data = np.random.random((batch_size * num_batches, unit_1)) target_2_data = np.random.random((batch_size * num_batches, unit_2, unit_3)) input_data = [input_1_data, input_2_data] target_data = [target_1_data, target_2_data] model.fit(input_data, target_data, batch_size=batch_size) ``` With the Keras `keras.layers.RNN` layer, You are only expected to define the math logic for individual step within the sequence, and the `keras.layers.RNN` layer will handle the sequence iteration for you. It's an incredibly powerful way to quickly prototype new kinds of RNNs (e.g. a LSTM variant). For more details, please visit the [API docs](https://www.tensorflow.org/api_docs/python/tf/keras/layers/RNN/).
github_jupyter
# Hypothesis tests In this notebook, we will be performing hypothesis tests to valiate certain speculations. ``` # Load the required packages import json import pandas as pd import plotly.express as px import matplotlib.pyplot as plt import seaborn as sns from scipy.stats import ttest_ind, chi2_contingency import plotly.io as pio pio.renderers.default = "vscode" # Load the data df = pd.read_csv('./../../../data/cleaned_data.csv') # Load lists of numerical and categorical columns from the static file with open('./../../../data/statics.json') as f: statics = json.load(f) categorical_columns = statics['categorical_columns'] numerical_columns = statics['numerical_columns'] # Seggregate attrition member groups attr = df[df['Attrition'] == 'Yes'] nattr = df[df['Attrition'] == 'No'] ``` Followig are the some speculations are we are going to consider for our analysis: 1. There is difference between mean salaries within people who leave the company and people who stay. 2. There is difference between mean percentage hike for the two above mentioned groups. 3. Frequent travelling for employees results in attrition. 4. Overtime results in attrition. ## Claim 1 - Difference in monthly salary ``` fig = px.histogram(df, x='MonthlyIncome', color='Attrition', histnorm='probability', marginal='rug') fig.show() ``` At lower incomes the probability of attrition is higher while the trend reverses at the higher range of salaries. Above the value of 11k, the proabability see a sharp decrease. After 14k, the probability of attrition literally diminishes to 0 before picking up lower values at 19k to 20k. For the given case, the null hypothesis and the alternate hypothesis can be framed as - $H_0$ : The difference between the mean salaries for for people who leave and for people who stay is 0. $H_1$: There is difference in the mean salaries. ``` tstat, tpvalue = ttest_ind(attr['MonthlyIncome'], nattr['MonthlyIncome'], equal_var=False) print(f"T Statistic for the test is {tstat}, and the p-value is {tpvalue}") ``` Choosing alpha of 5%, the p-value of the test is too small as compare to 0.05 and hence the null hypothesis is rejected. This signiffies that the there is difference between the salaries for the people who leave the company and people who stay. ## Claim 2 - Difference in percentage of hike in salary" ``` fig = px.histogram(df, x='PercentSalaryHike', color='Attrition', marginal='rug', histnorm='probability') fig.show() ``` There seems to be no significant difference in probability for attrition in terms of difference in salary hike. People does not seem to care for recent salary hikes which considering for a shift. For the given case, the null hypothesis and the alternate hypothesis can be framed as - $H_0$: There is no difference between the mean percent salary hike for the two groups of interest. $H_1$: There is difference between the mean percent salary hike for the groups of interest. ``` tstat, tpvalue = ttest_ind(attr['PercentSalaryHike'], nattr['PercentSalaryHike'], equal_var=False) print(f"T Statistic for the test is {tstat}, and the p-value is {tpvalue}") ``` Again choosing the alpha of 5%, the p-value is greate than 0.05. This signifies that there is no difference between the mean salary hike for the people who leave the company and the people who stay. ## Claim 3 - Frequent travelling For performing, we first need the contingency table which is the count of category for each group of target variable. ``` travel_contingency = pd.crosstab(df['Attrition'], df['BusinessTravel']) travel_contingency ``` For the case at hand, the null hypothesis and the alternate hypothesis can be framed as - $H_0$: There is no relationship between attrition and business travel $H_1$: There is relationship between attrition and business travel ``` stat, p, dof, expected = chi2_contingency(travel_contingency.values.tolist()) print(f"The chi-squared test statistics is {stat} with p-value as {p}") ``` Considering an alpha of 5%, the p-value is too small as compared to 0.05 and hence the null hypothesis is rejected. To forward the result in words, there is relationship between the variable attrition and business travel. ## Claim 4 - Overtime Starting with the contingency table for the case at hand. ``` time_contingency = pd.crosstab(df['Attrition'], df['OverTime']) time_contingency ``` For this scenario, the null hypothesis and the alternate hypothesis can be framed as - $H_0$: There is no relationship between attrition and overtime $H_1$: There is relationship between attrition and overtime ``` stat, p, dof, expected = chi2_contingency(time_contingency.values.tolist()) print(f"The chi-squared test statistics is {stat} with p-value as {p}") ``` Again considering the alpha of 5%, the p-value is too small as compared to 0.05 and hence the null hypothesis can be rejected. This means that there is some relationship between attrition and overtime. ``` import plotly.io as pio import plotly.express as px import plotly.offline as py df = px.data.iris() fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species", size="sepal_length") fig ```
github_jupyter
# Pretrained GPT2 Model Deployment Example In this notebook, we will run an example of text generation using GPT2 model exported from HuggingFace and deployed with Seldon's Triton pre-packed server. the example also covers converting the model to ONNX format. The implemented example below is of the Greedy approach for the next token prediction. more info: https://huggingface.co/transformers/model_doc/gpt2.html?highlight=gpt2 After we have the module deployed to Kubernetes, we will run a simple load test to evaluate the module inference performance. ## Steps: 1. Download pretrained GPT2 model from hugging face 2. Convert the model to ONNX 3. Store it in MinIo bucket 4. Setup Seldon-Core in your kubernetes cluster 5. Deploy the ONNX model with Seldonโ€™s prepackaged Triton server. 6. Interact with the model, run a greedy alg example (generate sentence completion) 7. Run load test using vegeta 8. Clean-up ## Basic requirements * Helm v3.0.0+ * A Kubernetes cluster running v1.13 or above (minkube / docker-for-windows work well if enough RAM) * kubectl v1.14+ * Python 3.6+ ``` %%writefile requirements.txt transformers==4.5.1 torch==1.8.1 tokenizers<0.11,>=0.10.1 tensorflow==2.4.1 tf2onnx !pip install --trusted-host=pypi.python.org --trusted-host=pypi.org --trusted-host=files.pythonhosted.org -r requirements.txt ``` ### Export HuggingFace TFGPT2LMHeadModel pre-trained model and save it locally ``` from transformers import GPT2Tokenizer, TFGPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained("gpt2") model = TFGPT2LMHeadModel.from_pretrained( "gpt2", from_pt=True, pad_token_id=tokenizer.eos_token_id ) model.save_pretrained("./tfgpt2model", saved_model=True) ``` ### Convert the TensorFlow saved model to ONNX ``` !python -m tf2onnx.convert --saved-model ./tfgpt2model/saved_model/1 --opset 11 --output model.onnx ``` ### Copy your model to a local MinIo #### Setup MinIo Use the provided [notebook](https://docs.seldon.io/projects/seldon-core/en/latest/examples/minio_setup.html) to install MinIo in your cluster and configure `mc` CLI tool. Instructions also [online](https://docs.min.io/docs/minio-client-quickstart-guide.html). -- Note: You can use your prefer remote storage server (google/ AWS etc.) #### Create a Bucket and store your model ``` !mc mb minio-seldon/onnx-gpt2 -p !mc cp ./model.onnx minio-seldon/onnx-gpt2/gpt2/1/ ``` ### Run Seldon in your kubernetes cluster Follow the [Seldon-Core Setup notebook](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html) to Setup a cluster with Ambassador Ingress or Istio and install Seldon Core ### Deploy your model with Seldon pre-packaged Triton server ``` %%writefile secret.yaml apiVersion: v1 kind: Secret metadata: name: seldon-init-container-secret type: Opaque stringData: RCLONE_CONFIG_S3_TYPE: s3 RCLONE_CONFIG_S3_PROVIDER: minio RCLONE_CONFIG_S3_ENV_AUTH: "false" RCLONE_CONFIG_S3_ACCESS_KEY_ID: minioadmin RCLONE_CONFIG_S3_SECRET_ACCESS_KEY: minioadmin RCLONE_CONFIG_S3_ENDPOINT: http://minio.minio-system.svc.cluster.local:9000 %%writefile gpt2-deploy.yaml apiVersion: machinelearning.seldon.io/v1alpha2 kind: SeldonDeployment metadata: name: gpt2 spec: predictors: - graph: implementation: TRITON_SERVER logger: mode: all modelUri: s3://onnx-gpt2 envSecretRefName: seldon-init-container-secret name: gpt2 type: MODEL name: default replicas: 1 protocol: kfserving !kubectl apply -f secret.yaml -n default !kubectl apply -f gpt2-deploy.yaml -n default !kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=gpt2 -o jsonpath='{.items[0].metadata.name}') ``` #### Interact with the model: get model metadata (a "test" request to make sure our model is available and loaded correctly) ``` !curl -v http://localhost:80/seldon/default/gpt2/v2/models/gpt2 ``` ### Run prediction test: generate a sentence completion using GPT2 model - Greedy approach ``` import json import numpy as np import requests from transformers import GPT2Tokenizer tokenizer = GPT2Tokenizer.from_pretrained("gpt2") input_text = "I enjoy working in Seldon" count = 0 max_gen_len = 10 gen_sentence = input_text while count < max_gen_len: input_ids = tokenizer.encode(gen_sentence, return_tensors="tf") shape = input_ids.shape.as_list() payload = { "inputs": [ { "name": "input_ids:0", "datatype": "INT32", "shape": shape, "data": input_ids.numpy().tolist(), }, { "name": "attention_mask:0", "datatype": "INT32", "shape": shape, "data": np.ones(shape, dtype=np.int32).tolist(), }, ] } ret = requests.post( "http://localhost:80/seldon/default/gpt2/v2/models/gpt2/infer", json=payload ) try: res = ret.json() except: continue # extract logits logits = np.array(res["outputs"][1]["data"]) logits = logits.reshape(res["outputs"][1]["shape"]) # take the best next token probability of the last token of input ( greedy approach) next_token = logits.argmax(axis=2)[0] next_token_str = tokenizer.decode( next_token[-1:], skip_special_tokens=True, clean_up_tokenization_spaces=True ).strip() gen_sentence += " " + next_token_str count += 1 print(f"Input: {input_text}\nOutput: {gen_sentence}") ``` ### Run Load Test / Performance Test using vegeta #### Install vegeta, for more details take a look in [vegeta](https://github.com/tsenart/vegeta#install) official documentation ``` !wget https://github.com/tsenart/vegeta/releases/download/v12.8.3/vegeta-12.8.3-linux-amd64.tar.gz !tar -zxvf vegeta-12.8.3-linux-amd64.tar.gz !chmod +x vegeta ``` #### Generate vegeta [target file](https://github.com/tsenart/vegeta#-targets) contains "post" cmd with payload in the requiered structure ``` import base64 import json from subprocess import PIPE, Popen, run import numpy as np from transformers import GPT2Tokenizer, TFGPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained("gpt2") input_text = "I enjoy working in Seldon" input_ids = tokenizer.encode(input_text, return_tensors="tf") shape = input_ids.shape.as_list() payload = { "inputs": [ { "name": "input_ids:0", "datatype": "INT32", "shape": shape, "data": input_ids.numpy().tolist(), }, { "name": "attention_mask:0", "datatype": "INT32", "shape": shape, "data": np.ones(shape, dtype=np.int32).tolist(), }, ] } cmd = { "method": "POST", "header": {"Content-Type": ["application/json"]}, "url": "http://localhost:80/seldon/default/gpt2/v2/models/gpt2/infer", "body": base64.b64encode(bytes(json.dumps(payload), "utf-8")).decode("utf-8"), } with open("vegeta_target.json", mode="w") as file: json.dump(cmd, file) file.write("\n\n") !vegeta attack -targets=vegeta_target.json -rate=1 -duration=60s -format=json | vegeta report -type=text ``` ### Clean-up ``` !kubectl delete -f gpt2-deploy.yaml -n default ```
github_jupyter
# Naive forecasting ## Setup ``` import numpy as np import matplotlib.pyplot as plt def plot_series(time, series, format="-", start=0, end=None, label=None): plt.plot(time[start:end], series[start:end], format, label=label) plt.xlabel("Time") plt.ylabel("Value") if label: plt.legend(fontsize=14) plt.grid(True) def trend(time, slope=0): return slope * time def seasonal_pattern(season_time): """Just an arbitrary pattern, you can change it if you wish""" return np.where(season_time < 0.4, np.cos(season_time * 2 * np.pi), 1 / np.exp(3 * season_time)) def seasonality(time, period, amplitude=1, phase=0): """Repeats the same pattern at each period""" season_time = ((time + phase) % period) / period return amplitude * seasonal_pattern(season_time) def white_noise(time, noise_level=1, seed=None): rnd = np.random.RandomState(seed) return rnd.randn(len(time)) * noise_level ``` ## Trend and Seasonality ``` time = np.arange(4 * 365 + 1) slope = 0.05 baseline = 10 amplitude = 40 series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude) noise_level = 5 noise = white_noise(time, noise_level, seed=42) series += noise plt.figure(figsize=(10, 6)) plot_series(time, series) plt.show() ``` All right, this looks realistic enough for now. Let's try to forecast it. We will split it into two periods: the training period and the validation period (in many cases, you would also want to have a test period). The split will be at time step 1000. ``` split_time = 1000 time_train = time[:split_time] x_train = series[:split_time] time_valid = time[split_time:] x_valid = series[split_time:] ``` ## Naive Forecast ``` naive_forecast = series[split_time - 1:-1] plt.figure(figsize=(10, 6)) plot_series(time_valid, x_valid, label="Series") plot_series(time_valid, naive_forecast, label="Forecast") ``` Let's zoom in on the start of the validation period: ``` plt.figure(figsize=(10, 6)) plot_series(time_valid, x_valid, start=0, end=150, label="Series") plot_series(time_valid, naive_forecast, start=1, end=151, label="Forecast") ``` You can see that the naive forecast lags 1 step behind the time series. Now let's compute the mean absolute error between the forecasts and the predictions in the validation period: ``` errors = naive_forecast - x_valid abs_errors = np.abs(errors) mae = abs_errors.mean() mae ``` That's our baseline, now let's try a moving average.
github_jupyter
# Working model **Version 13a**: - Word level tokens - GRU type RNNs - 'sparse_categorical_crossentropy' to save memory - dropout to hinder overfitting **Conclusions:** - 'sparse' works! - 'sparse' runs 6x faster, strange, perhaps less work on fewer data? - testing 'dropout', works soso - 'so so' translation, perfect on training data, bad on validation data **Improvments to be implemented:** - randomize input data? - try / understand 'TimeDistributed': decoder_dense = TimeDistributed(Dense(Y_lstm.shape[2], activation = 'relu')) - **Done** dropout in RNN layer: - dropout as layer - L2 reg - **Done** Simplify by suing GRU RNN - **Done** ' to_categorical' as one-hot encoder, makes huge matrices - **Done** "sparse_categorical_crossentropy" to reduce the 'one hot' tensor - operates right now with long sentences: 8*std_div, shound be less when longer sentences are trained - deeper models to represent more complex sentences, more RNN layers? - bi-directional layers: https://stackoverflow.com/questions/50815354/seq2seq-bidirectional-encoder-decoder-in-keras - train on larger dataset - model.fit_generator to handle larger datasets - attention - Gradient clipping is important for RNNs training (clipvalue=1.0), book page 309 - test: metrics=['accuracy'] - **Done** something is wrong with the index of the one-hot; the model allows to return "0" as the best index, but the token2word starts from "1". It seems to be OK - set 'return_sequences' or 'return_stage' to false in models? Something is rotten **Credits to many fine people on the internet:** - https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html - https://medium.com/@dev.elect.iitd/neural-machine-translation-using-word-level-seq2seq-model-47538cba8cd7 - https://stackoverflow.com/questions/49477097/keras-seq2seq-word-embedding - https://github.com/devm2024/nmt_keras/blob/master/base.ipynb - https://www.kaggle.com/ievgenvp/lstm-encoder-decoder-via-keras-lb-0-5 ``` import tensorflow as tf from keras.models import Model from keras.layers import Input, Embedding, LSTM, GRU, Dense from tensorflow.python.keras.optimizers import RMSprop from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences from keras.utils.np_utils import to_categorical import numpy as np tf.__version__ tf.keras.__version__ # global variables num_samples = 100000 # Number of samples to train on num_words = 10000 # Limit vocabulary in translation latent_dim = 256 # Latent dimensionality of the encoding space batch_size = 512 # Batch size for training. numEpochs = 200 # Number of epochs to train for. DropOut = 0.4 # Used in GRU layers truncate_std_div = 99 # truncate sentences after x tokens mark_start = 'ssss ' # start and end markes for destination sentences mark_end = ' eeee' data_path = 'dan-eng/dan.txt' ``` ### Read training data into tables ``` # Read data into tables input_texts = [] target_texts = [] with open(data_path, 'r', encoding='utf-8') as f: lines = f.read().split('\n') for line in lines[: min(num_samples, len(lines) - 1)]: input_sentence, target_sentence = line.split('\t') target_sentence = mark_start + target_sentence.strip() + mark_end input_texts.append(input_sentence) target_texts.append(target_sentence) # Examples print(input_texts[15:20]) print(target_texts[15:20]) ``` ### Tokenize input sentences ``` # crate input tokenizer and create vocabulary from the texts tokenizer_inp = Tokenizer(num_words=num_words) tokenizer_inp.fit_on_texts(input_texts) print('Found %s unique source tokens.' % len(tokenizer_inp.word_index)) # translate from word sentences to token sentences tokens_inp = tokenizer_inp.texts_to_sequences(input_texts) # Shorten the longest token sentences, Find the length of all sentences, truncate after x * std deviations num_tokens = [len(x) for x in tokens_inp] print('Longest sentence is %s tokens.' % max(num_tokens)) max_tokens_input = np.mean(num_tokens) + truncate_std_div * np.std(num_tokens) max_tokens_input = min(int(max_tokens_input), max(num_tokens)) print('Sentences shortened to max %s tokens.' % max_tokens_input) # Pad / truncate all token-sequences to the given length tokens_padded_input = pad_sequences(tokens_inp, maxlen=max_tokens_input, padding='post', truncating='post') print('Shape of input tokens:', tokens_padded_input.shape) print('Input example: ', tokens_padded_input[10000]) # Create inverse lookup from integer-tokens to words index_to_word_input = dict(zip(tokenizer_inp.word_index.values(), tokenizer_inp.word_index.keys())) # function to return readable text from tokens string def tokens_to_string_inp(tokens): words = [index_to_word_input[token] for token in tokens if token != 0] text = " ".join(words) return text # demo to show that it works idx = 10000 print(tokens_to_string_inp(tokens_padded_input[idx])) print(input_texts[idx]) print(tokens_padded_input[idx]) ``` ### Tokenize destination sentences ``` # crate input tokenizer and create vocabulary from the texts tokenizer_target = Tokenizer(num_words=num_words) tokenizer_target.fit_on_texts(target_texts) print('Found %s unique target tokens.' % len(tokenizer_target.word_index)) # translate from word sentences to token sentences tokens_target = tokenizer_target.texts_to_sequences(target_texts) # translate from word sentences to token sentences tokens_target = tokenizer_target.texts_to_sequences(target_texts) # Shorten the longest token sentences, Find the length of all sentences, truncate after x * std deviations num_tokens = [len(x) for x in tokens_target] print('Longest sentence is %s tokens.' % max(num_tokens)) max_tokens_target = np.mean(num_tokens) + truncate_std_div * np.std(num_tokens) max_tokens_target = min(int(max_tokens_target), max(num_tokens)) print('Sentences shortened to max %s tokens.' % max_tokens_target) # Pad / truncate all token-sequences to the given length tokens_padded_target = pad_sequences(tokens_target, maxlen=max_tokens_target, padding='post', truncating='post') print('Shape of target tokens:', tokens_padded_target.shape) print('Target example: ', tokens_padded_target[10000]) # Create inverse lookup from integer-tokens to words index_to_word_target = dict(zip(tokenizer_target.word_index.values(), tokenizer_target.word_index.keys())) # function to return readable text from tokens string def tokens_to_string_target(tokens): words = [index_to_word_target[token] for token in tokens if token != 0] text = " ".join(words) return text # demo to show that it works idx = 10000 print(tokens_to_string_target(tokens_padded_input[idx])) print(target_texts[idx]) print(tokens_padded_target[idx]) # start and end marks as tokens, needed when translating token_start = tokenizer_target.word_index[mark_start.strip()] token_end = tokenizer_target.word_index[mark_end.strip()] print(token_start, token_end) ``` ### Traing data - Input to the encoder is simply the source language as it is - Inputs to the decoder are slightly more complicated, since the two input strings are shiften one time-step: The model has to learn to predict the "next" token in the output from the input. Slizing is used to get two "views" to the data ``` encoder_input_data = tokens_padded_input encoder_input_data.shape decoder_input_data = tokens_padded_target[:, :-1] decoder_input_data.shape decoder_target_data = tokens_padded_target[:, 1:] decoder_target_data.shape ``` Examples showing the training data to the model ``` encoder_input_data[idx] decoder_input_data[idx] decoder_target_data[idx] ``` One-hot encode 'decoder_target_data' since this is what the decoder produces as output ### Create training model ``` # GRU encoder encoder_inputs = Input(shape=(None,)) encoder_embed = Embedding(num_words, latent_dim) encoder_embed_final = encoder_embed(encoder_inputs) encoder = GRU(latent_dim, dropout=DropOut, recurrent_dropout=DropOut, return_state=True) encoder_outputs, state_h = encoder(encoder_embed_final) # Set up GRU decoder, using `encoder_states` as initial state decoder_inputs = Input(shape=(None,)) decoder_embed = Embedding(num_words, latent_dim) decoder_embed_final = decoder_embed(decoder_inputs) decoder_gru = GRU(latent_dim, dropout=DropOut, recurrent_dropout=DropOut, return_sequences=True, return_state=True) decoder_outputs, dec_states_h = decoder_gru(decoder_embed_final, initial_state=state_h) decoder_dense = Dense(num_words, activation='linear') decoder_outputs = decoder_dense(decoder_outputs) model = Model([encoder_inputs, decoder_inputs], decoder_outputs) model.summary() # visualise model as a graph from IPython.display import SVG from keras.utils.vis_utils import model_to_dot import pydot_ng as pydot import graphviz as graphviz SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg')) ``` ### Train the model ``` # custom loss function since sparse does not work: https://github.com/tensorflow/tensorflow/issues/17150 def sparse_cross_entropy(y_true, y_pred): # Calculate the loss. This outputs a 2-rank tensor of shape [batch_size, sequence_length] loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_true, logits=y_pred) loss_mean = tf.reduce_mean(loss) return loss_mean ``` ``` decoder_target = tf.placeholder(dtype='int32', shape=(None, None)) model.compile(optimizer='rmsprop', loss=sparse_cross_entropy, target_tensors=[decoder_target]) # Note that `decoder_target_data` needs to be one-hot encoded, # rather than sequences of integers like `decoder_input_data`! history = model.fit([encoder_input_data, decoder_input_data], decoder_target_data, # _onehot batch_size=batch_size, epochs=numEpochs, validation_split=0.2) model.save('TGC_trans.h5') history_dict = history.history history_dict.keys() import matplotlib.pyplot as plt plt.show() # plotter historikken for 'loss' loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(1, numEpochs+1) plt.plot(epochs, loss, 'bo', label='Training loss') # bo = "blue dot" plt.plot(epochs, val_loss, 'b', label='Validation loss') # b = "solid blue line" plt.title('Training and validation loss') plt.legend() plt.show() ``` ## Inference mode = testing the model ### create sampling model ``` # encoder model used to create internal representation / states encoder_model = Model(encoder_inputs, state_h) encoder_model.summary() decoder_state_input_h = Input(shape=(latent_dim,)) decoder_state_inputs = [decoder_state_input_h,] # reuse the decoder we have trained decoder_embed_final2 = decoder_embed(decoder_inputs) decoder_outputs2, state_h2 = decoder_gru(decoder_embed_final2, initial_state=decoder_state_inputs) decoder_outputs2 = decoder_dense(decoder_outputs2) decoder_model = Model( [decoder_inputs] + decoder_state_inputs, [decoder_outputs2] + [state_h2]) # notice the '+' operator requires [] to work !!! decoder_model.summary() def decode_sequence(input_seq): # tokenize the text to be translated, and reverse input_tokens = tokenizer_inp.texts_to_sequences([input_seq]) input_tokens = pad_sequences(input_tokens, maxlen=max_tokens_input, padding='post', truncating='post') # encode the input sentence states_value = encoder_model.predict(input_tokens) # Generate empty target sequence of length 1 and insert start token target_seq = np.zeros((1,1)) target_seq[0, 0] = token_start # # sampling loop to generate translated words using decoder, word by word stop_condition = False decoded_sentence = '' while not stop_condition: # predict one next word, decoder returns probabilities for all words/tokens output_tokens, h = decoder_model.predict([target_seq] + [states_value]) # pick most probable token / word sampled_token_index = np.argmax(output_tokens[0, -1, :]) sampled_word = index_to_word_target[sampled_token_index] decoded_sentence += ' '+sampled_word # Exit condition: either hit max length or find stop character. if (sampled_word == 'eeee' or len(decoded_sentence) > 52): stop_condition = True # Update the target sequence (of length 1). target_seq = np.zeros((1,1)) target_seq[0, 0] = sampled_token_index # Update states, so they can be re-injected in next token/word prediction states_value = h return decoded_sentence ``` ### Doing translation ... ``` # testing on known sentences from training data for idx in range(6001, 6100): input_seq = input_texts[idx] decoded_sentence = decode_sequence(input_seq) print(input_seq, decoded_sentence, '\n') # testing on known sentences from validation data for idx in range(12000, 12100): input_seq = input_texts[idx] decoded_sentence = decode_sequence(input_seq) print(input_seq, decoded_sentence, '\n') input_seq = 'see you later' decoded_sentence = decode_sequence(input_seq) print(input_seq, decoded_sentence, '\n') input_seq = 'how are you' decoded_sentence = decode_sequence(input_seq) print(input_seq, decoded_sentence, '\n') ```
github_jupyter
##### Copyright 2020 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # TF Lattice ์‚ฌ์ „ ์ œ์ž‘ ๋ชจ๋ธ <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/premade_models"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org์—์„œ ๋ณด๊ธฐ</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/premade_models.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab์—์„œ ์‹คํ–‰ํ•˜๊ธฐ</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/premade_models.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub์—์„œ ์†Œ์Šค ๋ณด๊ธฐ</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/lattice/tutorials/premade_models.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">๋…ธํŠธ๋ถ ๋‹ค์šด๋กœ๋“œํ•˜๊ธฐ</a></td> </table> ## ๊ฐœ์š” ์‚ฌ์ „ ์ œ์ž‘๋œ ๋ชจ๋ธ์€ ์ผ๋ฐ˜์ ์ธ ์‚ฌ์šฉ ์‚ฌ๋ก€๋ฅผ ์œ„ํ•ด TFL `tf.keras.model` ์ธ์Šคํ„ด์Šค๋ฅผ ๊ตฌ์ถ•ํ•˜๋Š” ๋น ๋ฅด๊ณ  ์‰ฌ์šด ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” TFL ์‚ฌ์ „ ์ œ์ž‘ ๋ชจ๋ธ์„ ๊ตฌ์„ฑํ•˜๊ณ  ํ›ˆ๋ จ/ํ…Œ์ŠคํŠธํ•˜๋Š” ๋ฐ ํ•„์š”ํ•œ ๋‹จ๊ณ„๋ฅผ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ## ์„ค์ • TF Lattice ํŒจํ‚ค์ง€ ์„ค์น˜ํ•˜๊ธฐ ``` #@test {"skip": true} !pip install tensorflow-lattice pydot ``` ํ•„์ˆ˜ ํŒจํ‚ค์ง€ ๊ฐ€์ ธ์˜ค๊ธฐ ``` import tensorflow as tf import copy import logging import numpy as np import pandas as pd import sys import tensorflow_lattice as tfl logging.disable(sys.maxsize) ``` UCI Statlog(Heart) ๋ฐ์ดํ„ฐ์„ธํŠธ ๋‹ค์šด๋กœ๋“œํ•˜๊ธฐ ``` csv_file = tf.keras.utils.get_file( 'heart.csv', 'http://storage.googleapis.com/download.tensorflow.org/data/heart.csv') df = pd.read_csv(csv_file) train_size = int(len(df) * 0.8) train_dataframe = df[:train_size] test_dataframe = df[train_size:] df.head() ``` ํŠน์„ฑ๊ณผ ๋ ˆ์ด๋ธ”์„ ์ถ”์ถœํ•˜๊ณ  ํ…์„œ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ``` # Features: # - age # - sex # - cp chest pain type (4 values) # - trestbps resting blood pressure # - chol serum cholestoral in mg/dl # - fbs fasting blood sugar > 120 mg/dl # - restecg resting electrocardiographic results (values 0,1,2) # - thalach maximum heart rate achieved # - exang exercise induced angina # - oldpeak ST depression induced by exercise relative to rest # - slope the slope of the peak exercise ST segment # - ca number of major vessels (0-3) colored by flourosopy # - thal 3 = normal; 6 = fixed defect; 7 = reversable defect # # This ordering of feature names will be the exact same order that we construct # our model to expect. feature_names = [ 'age', 'sex', 'cp', 'chol', 'fbs', 'trestbps', 'thalach', 'restecg', 'exang', 'oldpeak', 'slope', 'ca', 'thal' ] feature_name_indices = {name: index for index, name in enumerate(feature_names)} # This is the vocab list and mapping we will use for the 'thal' categorical # feature. thal_vocab_list = ['normal', 'fixed', 'reversible'] thal_map = {category: i for i, category in enumerate(thal_vocab_list)} # Custom function for converting thal categories to buckets def convert_thal_features(thal_features): # Note that two examples in the test set are already converted. return np.array([ thal_map[feature] if feature in thal_vocab_list else feature for feature in thal_features ]) # Custom function for extracting each feature. def extract_features(dataframe, label_name='target', feature_names=feature_names): features = [] for feature_name in feature_names: if feature_name == 'thal': features.append( convert_thal_features(dataframe[feature_name].values).astype(float)) else: features.append(dataframe[feature_name].values.astype(float)) labels = dataframe[label_name].values.astype(float) return features, labels train_xs, train_ys = extract_features(train_dataframe) test_xs, test_ys = extract_features(test_dataframe) # Let's define our label minimum and maximum. min_label, max_label = float(np.min(train_ys)), float(np.max(train_ys)) # Our lattice models may have predictions above 1.0 due to numerical errors. # We can subtract this small epsilon value from our output_max to make sure we # do not predict values outside of our label bound. numerical_error_epsilon = 1e-5 ``` ์ด ๊ฐ€์ด๋“œ์—์„œ ํ›ˆ๋ จ์— ์‚ฌ์šฉ๋˜๋Š” ๊ธฐ๋ณธ๊ฐ’ ์„ค์ •ํ•˜๊ธฐ ``` LEARNING_RATE = 0.01 BATCH_SIZE = 128 NUM_EPOCHS = 500 PREFITTING_NUM_EPOCHS = 10 ``` ## ํŠน์„ฑ ๊ตฌ์„ฑ ํŠน์„ฑ ๋ณด์ • ๋ฐ ํŠน์„ฑ๋ณ„ ๊ตฌ์„ฑ์€ [tfl.configs.FeatureConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/FeatureConfig)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์„ค์ •๋ฉ๋‹ˆ๋‹ค. ํŠน์„ฑ ๊ตฌ์„ฑ์—๋Š” ๋‹จ์กฐ ์ œ์•ฝ ์กฐ๊ฑด, ํŠน์„ฑ๋ณ„ ์ •๊ทœํ™”([tfl.configs.RegularizerConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/RegularizerConfig) ์ฐธ์กฐ) ๋ฐ ๊ฒฉ์ž ๋ชจ๋ธ์— ๋Œ€ํ•œ ๊ฒฉ์ž ํฌ๊ธฐ๊ฐ€ ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด ์ธ์‹ํ•ด์•ผ ํ•  ๋ชจ๋“  ํŠน์„ฑ์— ๋Œ€ํ•œ ํŠน์„ฑ ๊ตฌ์„ฑ์„ ์™„์ „ํ•˜๊ฒŒ ์ง€์ •ํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด ๋ชจ๋ธ์€ ์ด๋Ÿฌํ•œ ํŠน์„ฑ์ด ์กด์žฌํ•˜๋Š”์ง€ ์•Œ ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ### ๋ถ„์œ„์ˆ˜ ๊ณ„์‚ฐํ•˜๊ธฐ `tfl.configs.FeatureConfig`์—์„œ `pwl_calibration_input_keypoints`์˜ ๊ธฐ๋ณธ ์„ค์ •์€ 'quantiles'์ด์ง€๋งŒ ์‚ฌ์ „ ์ œ์ž‘๋œ ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ ์ž…๋ ฅ ํ‚คํฌ์ธํŠธ๋ฅผ ์ˆ˜๋™์œผ๋กœ ์ •์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ๋จผ์ € ๋ถ„์œ„์ˆ˜ ๊ณ„์‚ฐ์„ ์œ„ํ•œ ์ž์ฒด ๋„์šฐ๋ฏธ ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ``` def compute_quantiles(features, num_keypoints=10, clip_min=None, clip_max=None, missing_value=None): # Clip min and max if desired. if clip_min is not None: features = np.maximum(features, clip_min) features = np.append(features, clip_min) if clip_max is not None: features = np.minimum(features, clip_max) features = np.append(features, clip_max) # Make features unique. unique_features = np.unique(features) # Remove missing values if specified. if missing_value is not None: unique_features = np.delete(unique_features, np.where(unique_features == missing_value)) # Compute and return quantiles over unique non-missing feature values. return np.quantile( unique_features, np.linspace(0., 1., num=num_keypoints), interpolation='nearest').astype(float) ``` ### ํŠน์„ฑ ๊ตฌ์„ฑ ์ •์˜ํ•˜๊ธฐ ์ด์ œ ๋ถ„์œ„์ˆ˜๋ฅผ ๊ณ„์‚ฐํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ๋ชจ๋ธ์ด ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ธฐ ์›ํ•˜๋Š” ๊ฐ ํŠน์„ฑ์— ๋Œ€ํ•œ ํŠน์„ฑ ๊ตฌ์„ฑ์„ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ``` # Feature configs are used to specify how each feature is calibrated and used. feature_configs = [ tfl.configs.FeatureConfig( name='age', lattice_size=3, monotonicity='increasing', # We must set the keypoints manually. pwl_calibration_num_keypoints=5, pwl_calibration_input_keypoints=compute_quantiles( train_xs[feature_name_indices['age']], num_keypoints=5, clip_max=100), # Per feature regularization. regularizer_configs=[ tfl.configs.RegularizerConfig(name='calib_wrinkle', l2=0.1), ], ), tfl.configs.FeatureConfig( name='sex', num_buckets=2, ), tfl.configs.FeatureConfig( name='cp', monotonicity='increasing', # Keypoints that are uniformly spaced. pwl_calibration_num_keypoints=4, pwl_calibration_input_keypoints=np.linspace( np.min(train_xs[feature_name_indices['cp']]), np.max(train_xs[feature_name_indices['cp']]), num=4), ), tfl.configs.FeatureConfig( name='chol', monotonicity='increasing', # Explicit input keypoints initialization. pwl_calibration_input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0], # Calibration can be forced to span the full output range by clamping. pwl_calibration_clamp_min=True, pwl_calibration_clamp_max=True, # Per feature regularization. regularizer_configs=[ tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-4), ], ), tfl.configs.FeatureConfig( name='fbs', # Partial monotonicity: output(0) <= output(1) monotonicity=[(0, 1)], num_buckets=2, ), tfl.configs.FeatureConfig( name='trestbps', monotonicity='decreasing', pwl_calibration_num_keypoints=5, pwl_calibration_input_keypoints=compute_quantiles( train_xs[feature_name_indices['trestbps']], num_keypoints=5), ), tfl.configs.FeatureConfig( name='thalach', monotonicity='decreasing', pwl_calibration_num_keypoints=5, pwl_calibration_input_keypoints=compute_quantiles( train_xs[feature_name_indices['thalach']], num_keypoints=5), ), tfl.configs.FeatureConfig( name='restecg', # Partial monotonicity: output(0) <= output(1), output(0) <= output(2) monotonicity=[(0, 1), (0, 2)], num_buckets=3, ), tfl.configs.FeatureConfig( name='exang', # Partial monotonicity: output(0) <= output(1) monotonicity=[(0, 1)], num_buckets=2, ), tfl.configs.FeatureConfig( name='oldpeak', monotonicity='increasing', pwl_calibration_num_keypoints=5, pwl_calibration_input_keypoints=compute_quantiles( train_xs[feature_name_indices['oldpeak']], num_keypoints=5), ), tfl.configs.FeatureConfig( name='slope', # Partial monotonicity: output(0) <= output(1), output(1) <= output(2) monotonicity=[(0, 1), (1, 2)], num_buckets=3, ), tfl.configs.FeatureConfig( name='ca', monotonicity='increasing', pwl_calibration_num_keypoints=4, pwl_calibration_input_keypoints=compute_quantiles( train_xs[feature_name_indices['ca']], num_keypoints=4), ), tfl.configs.FeatureConfig( name='thal', # Partial monotonicity: # output(normal) <= output(fixed) # output(normal) <= output(reversible) monotonicity=[('normal', 'fixed'), ('normal', 'reversible')], num_buckets=3, # We must specify the vocabulary list in order to later set the # monotonicities since we used names and not indices. vocabulary_list=thal_vocab_list, ), ] ``` ๋‹ค์Œ์œผ๋กœ ์‚ฌ์šฉ์ž ์ •์˜ ์–ดํœ˜(์œ„์˜ 'thal'๊ณผ ๊ฐ™์€)๋ฅผ ์‚ฌ์šฉํ•œ ํŠน์„ฑ์— ๋Œ€ํ•ด ๋‹จ์กฐ๋ฅผ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์„ค์ •ํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค. ``` tfl.premade_lib.set_categorical_monotonicities(feature_configs) ``` ## ๋ณด์ •๋œ ์„ ํ˜• ๋ชจ๋ธ TFL ์‚ฌ์ „ ์ œ์ž‘ ๋ชจ๋ธ์„ ๊ตฌ์„ฑํ•˜๋ ค๋ฉด ๋จผ์ € [tfl.configs](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs)์—์„œ ๋ชจ๋ธ ๊ตฌ์„ฑ์„ ๊ฐ–์ถ”์„ธ์š”. ๋ณด์ •๋œ ์„ ํ˜• ๋ชจ๋ธ์€ [tfl.configs.CalibratedLinearConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLinearConfig)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ์ž…๋ ฅ ํŠน์„ฑ์— ๊ตฌ๊ฐ„ ์„ ํ˜• ๋ฐ ๋ฒ”์ฃผํ˜• ๋ณด์ •์„ ์ ์šฉํ•œ ๋‹ค์Œ ์„ ํ˜• ์กฐํ•ฉ ๋ฐ ์„ ํƒ์  ์ถœ๋ ฅ ๊ตฌ๊ฐ„ ์„ ํ˜• ๋ณด์ •์„ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ถœ๋ ฅ ๋ณด์ •์„ ์‚ฌ์šฉํ•˜๊ฑฐ๋‚˜ ์ถœ๋ ฅ ๊ฒฝ๊ณ„๊ฐ€ ์ง€์ •๋œ ๊ฒฝ์šฐ ์„ ํ˜• ๋ ˆ์ด์–ด๋Š” ๋ณด์ •๋œ ์ž…๋ ฅ์— ๊ฐ€์ค‘์น˜ ํ‰๊ท ์„ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ์˜ˆ์ œ๋Š” ์ฒ˜์Œ 5๊ฐœ ํŠน์„ฑ์— ๋Œ€ํ•ด ๋ณด์ •๋œ ์„ ํ˜• ๋ชจ๋ธ์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ``` # Model config defines the model structure for the premade model. linear_model_config = tfl.configs.CalibratedLinearConfig( feature_configs=feature_configs[:5], use_bias=True, # We must set the output min and max to that of the label. output_min=min_label, output_max=max_label, output_calibration=True, output_calibration_num_keypoints=10, output_initialization=np.linspace(min_label, max_label, num=10), regularizer_configs=[ # Regularizer for the output calibrator. tfl.configs.RegularizerConfig(name='output_calib_hessian', l2=1e-4), ]) # A CalibratedLinear premade model constructed from the given model config. linear_model = tfl.premade.CalibratedLinear(linear_model_config) # Let's plot our model. tf.keras.utils.plot_model(linear_model, show_layer_names=False, rankdir='LR') ``` ์ด์ œ ๋‹ค๋ฅธ [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model)๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๋ชจ๋ธ์„ ๋ฐ์ดํ„ฐ์— ๋งž๊ฒŒ ์ปดํŒŒ์ผํ•˜๊ณ  ์ ํ•ฉํ•˜๋„๋ก ๋งž์ถฅ๋‹ˆ๋‹ค. ``` linear_model.compile( loss=tf.keras.losses.BinaryCrossentropy(), metrics=[tf.keras.metrics.AUC()], optimizer=tf.keras.optimizers.Adam(LEARNING_RATE)) linear_model.fit( train_xs[:5], train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False) ``` ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•œ ํ›„ ํ…Œ์ŠคํŠธ์„ธํŠธ์—์„œ ํ‰๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ``` print('Test Set Evaluation...') print(linear_model.evaluate(test_xs[:5], test_ys)) ``` ## ๋ณด์ •๋œ ๊ฒฉ์ž ๋ชจ๋ธ ๋ณด์ •๋œ ๊ฒฉ์ž ๋ชจ๋ธ์€ [tfl.configs.CalibratedLatticeConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLatticeConfig)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ๋ณด์ •๋œ ๊ฒฉ์ž ๋ชจ๋ธ์€ ์ž…๋ ฅ ํŠน์„ฑ์— ๊ตฌ๊ฐ„๋ณ„ ์„ ํ˜• ๋ฐ ๋ฒ”์ฃผํ˜• ๋ณด์ •์„ ์ ์šฉํ•œ ๋‹ค์Œ ๊ฒฉ์ž ๋ชจ๋ธ ๋ฐ ์„ ํƒ์  ์ถœ๋ ฅ ๊ตฌ๊ฐ„๋ณ„ ์„ ํ˜• ๋ณด์ •์„ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ์˜ˆ์ œ์—์„œ๋Š” ์ฒ˜์Œ 5๊ฐœ์˜ ํŠน์„ฑ์— ๋Œ€ํ•ด ๋ณด์ •๋œ ๊ฒฉ์ž ๋ชจ๋ธ์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ``` # This is a calibrated lattice model: inputs are calibrated, then combined # non-linearly using a lattice layer. lattice_model_config = tfl.configs.CalibratedLatticeConfig( feature_configs=feature_configs[:5], output_min=min_label, output_max=max_label - numerical_error_epsilon, output_initialization=[min_label, max_label], regularizer_configs=[ # Torsion regularizer applied to the lattice to make it more linear. tfl.configs.RegularizerConfig(name='torsion', l2=1e-2), # Globally defined calibration regularizer is applied to all features. tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-2), ]) # A CalibratedLattice premade model constructed from the given model config. lattice_model = tfl.premade.CalibratedLattice(lattice_model_config) # Let's plot our model. tf.keras.utils.plot_model(lattice_model, show_layer_names=False, rankdir='LR') ``` ์ด์ „๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๋ชจ๋ธ์„ ์ปดํŒŒ์ผํ•˜๊ณ  ์ ํ•ฉํ•˜๋„๋ก ๋งž์ถ”๊ณ  ํ‰๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ``` lattice_model.compile( loss=tf.keras.losses.BinaryCrossentropy(), metrics=[tf.keras.metrics.AUC()], optimizer=tf.keras.optimizers.Adam(LEARNING_RATE)) lattice_model.fit( train_xs[:5], train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False) print('Test Set Evaluation...') print(lattice_model.evaluate(test_xs[:5], test_ys)) ``` ## ๋ณด์ •๋œ ๊ฒฉ์ž ์•™์ƒ๋ธ” ๋ชจ๋ธ ํŠน์„ฑ ์ˆ˜๊ฐ€ ๋งŽ์œผ๋ฉด ์•™์ƒ๋ธ” ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.์ด ๋ชจ๋ธ์€ ํŠน์„ฑ์˜ ํ•˜์œ„ ์ง‘ํ•ฉ์— ๋Œ€ํ•ด ์—ฌ๋Ÿฌ ๊ฐœ์˜ ์ž‘์€ ๊ฒฉ์ž๋ฅผ ๋งŒ๋“ค๊ณ , ํ•˜๋‚˜์˜ ๊ฑฐ๋Œ€ํ•œ ๊ฒฉ์ž๋ฅผ ๋งŒ๋“œ๋Š” ๋Œ€์‹  ์ถœ๋ ฅ์„ ํ‰๊ท ํ™”ํ•ฉ๋‹ˆ๋‹ค. ์•™์ƒ๋ธ” ๊ฒฉ์ž ๋ชจ๋ธ์€ [tfl.configs.CalibratedLatticeEnsembleConfig](https://www.tensorflow.org/lattice/api_docs/python/tfl/configs/CalibratedLatticeEnsembleConfig)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ๋ณด์ •๋œ ๊ฒฉ์ž ์•™์ƒ๋ธ” ๋ชจ๋ธ์€ ์ž…๋ ฅ ํŠน์„ฑ์— ๊ตฌ๊ฐ„๋ณ„ ์„ ํ˜• ๋ฐ ๋ฒ”์ฃผํ˜• ๋ณด์ •์„ ์ ์šฉํ•œ ๋‹ค์Œ ๊ฒฉ์ž ๋ชจ๋ธ ์•™์ƒ๋ธ”๊ณผ ์„ ํƒ์  ์ถœ๋ ฅ ๊ตฌ๊ฐ„๋ณ„ ์„ ํ˜• ๋ณด์ •์„ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ### ๋ช…์‹œ์  ๊ฒฉ์ž ์•™์ƒ๋ธ” ์ดˆ๊ธฐํ™” ๊ฒฉ์ž์— ๊ณต๊ธ‰ํ•  ํŠน์„ฑ์˜ ํ•˜์œ„ ์ง‘ํ•ฉ์„ ์ด๋ฏธ ์•Œ๊ณ  ์žˆ๋Š” ๊ฒฝ์šฐ ํŠน์„ฑ ์ด๋ฆ„์„ ์‚ฌ์šฉํ•˜์—ฌ ๊ฒฉ์ž๋ฅผ ๋ช…์‹œ์ ์œผ๋กœ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์˜ˆ์ œ์—์„œ๋Š” 5๊ฐœ์˜ ๊ฒฉ์ž์™€ ๊ฒฉ์ž๋‹น 3๊ฐœ์˜ ํŠน์„ฑ์ด ์žˆ๋Š” ๋ณด์ •๋œ ๊ฒฉ์ž ์•™์ƒ๋ธ” ๋ชจ๋ธ์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ``` # This is a calibrated lattice ensemble model: inputs are calibrated, then # combined non-linearly and averaged using multiple lattice layers. explicit_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig( feature_configs=feature_configs, lattices=[['trestbps', 'chol', 'ca'], ['fbs', 'restecg', 'thal'], ['fbs', 'cp', 'oldpeak'], ['exang', 'slope', 'thalach'], ['restecg', 'age', 'sex']], num_lattices=5, lattice_rank=3, output_min=min_label, output_max=max_label - numerical_error_epsilon, output_initialization=[min_label, max_label]) # A CalibratedLatticeEnsemble premade model constructed from the given # model config. explicit_ensemble_model = tfl.premade.CalibratedLatticeEnsemble( explicit_ensemble_model_config) # Let's plot our model. tf.keras.utils.plot_model( explicit_ensemble_model, show_layer_names=False, rankdir='LR') ``` ์ด์ „๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๋ชจ๋ธ์„ ์ปดํŒŒ์ผํ•˜๊ณ  ์ ํ•ฉํ•˜๋„๋ก ๋งž์ถ”๊ณ  ํ‰๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ``` explicit_ensemble_model.compile( loss=tf.keras.losses.BinaryCrossentropy(), metrics=[tf.keras.metrics.AUC()], optimizer=tf.keras.optimizers.Adam(LEARNING_RATE)) explicit_ensemble_model.fit( train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False) print('Test Set Evaluation...') print(explicit_ensemble_model.evaluate(test_xs, test_ys)) ``` ### ๋ฌด์ž‘์œ„ ๊ฒฉ์ž ์•™์ƒ๋ธ” ๊ฒฉ์ž์— ์–ด๋–ค ํŠน์„ฑ์˜ ํ•˜์œ„ ์ง‘ํ•ฉ์„ ์ œ๊ณตํ• ์ง€ ํ™•์‹คํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ ๊ฐ ๊ฒฉ์ž์— ๋Œ€ํ•ด ๋ฌด์ž‘์œ„์˜ ํŠน์„ฑ ํ•˜์œ„ ์ง‘ํ•ฉ์„ ์‚ฌ์šฉํ•ด๋ณด๋Š” ์˜ต์…˜์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์˜ˆ์ œ์—์„œ๋Š” 5๊ฐœ์˜ ๊ฒฉ์ž์™€ ๊ฒฉ์ž๋‹น 3๊ฐœ์˜ ํŠน์„ฑ์ด ์žˆ๋Š” ๋ณด์ •๋œ ๊ฒฉ์ž ์•™์ƒ๋ธ” ๋ชจ๋ธ์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ``` # This is a calibrated lattice ensemble model: inputs are calibrated, then # combined non-linearly and averaged using multiple lattice layers. random_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig( feature_configs=feature_configs, lattices='random', num_lattices=5, lattice_rank=3, output_min=min_label, output_max=max_label - numerical_error_epsilon, output_initialization=[min_label, max_label], random_seed=42) # Now we must set the random lattice structure and construct the model. tfl.premade_lib.set_random_lattice_ensemble(random_ensemble_model_config) # A CalibratedLatticeEnsemble premade model constructed from the given # model config. random_ensemble_model = tfl.premade.CalibratedLatticeEnsemble( random_ensemble_model_config) # Let's plot our model. tf.keras.utils.plot_model( random_ensemble_model, show_layer_names=False, rankdir='LR') ``` ์ด์ „๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๋ชจ๋ธ์„ ์ปดํŒŒ์ผํ•˜๊ณ  ์ ํ•ฉํ•˜๋„๋ก ๋งž์ถ”๊ณ  ํ‰๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ``` random_ensemble_model.compile( loss=tf.keras.losses.BinaryCrossentropy(), metrics=[tf.keras.metrics.AUC()], optimizer=tf.keras.optimizers.Adam(LEARNING_RATE)) random_ensemble_model.fit( train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False) print('Test Set Evaluation...') print(random_ensemble_model.evaluate(test_xs, test_ys)) ``` ### RTL ๋ ˆ์ด์–ด ๋ฌด์ž‘์œ„ ๊ฒฉ์ž ์•™์ƒ๋ธ” ๋ฌด์ž‘์œ„ ๊ฒฉ์ž ์•™์ƒ๋ธ”์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๋ชจ๋ธ์ด ๋‹จ์ผ `tfl.layers.RTL` ๋ ˆ์ด์–ด๋ฅผ ์‚ฌ์šฉํ•˜๋„๋ก ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `tfl.layers.RTL`์€ ๋‹จ์กฐ ์ œ์•ฝ ์กฐ๊ฑด๋งŒ ์ง€์›ํ•˜๋ฉฐ ๋ชจ๋“  ํŠน์„ฑ์— ๋Œ€ํ•ด ๊ฐ™์€ ๊ฒฉ์ž ํฌ๊ธฐ๋ฅผ ๊ฐ€์ ธ์•ผ ํ•˜๊ณ  ํŠน์„ฑ๋ณ„ ์ •๊ทœํ™”๊ฐ€ ์—†์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. `tfl.layers.RTL` ๋ ˆ์ด์–ด๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๋ณ„๋„์˜ `tfl.layers.Lattice` ์ธ์Šคํ„ด์Šค๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค ํ›จ์”ฌ ๋” ํฐ ์•™์ƒ๋ธ”๋กœ ํ™•์žฅํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์˜ˆ์ œ์—์„œ๋Š” 5๊ฐœ์˜ ๊ฒฉ์ž์™€ ๊ฒฉ์ž๋‹น 3๊ฐœ์˜ ํŠน์„ฑ์ด ์žˆ๋Š” ๋ณด์ •๋œ ๊ฒฉ์ž ์•™์ƒ๋ธ” ๋ชจ๋ธ์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ``` # Make sure our feature configs have the same lattice size, no per-feature # regularization, and only monotonicity constraints. rtl_layer_feature_configs = copy.deepcopy(feature_configs) for feature_config in rtl_layer_feature_configs: feature_config.lattice_size = 2 feature_config.unimodality = 'none' feature_config.reflects_trust_in = None feature_config.dominates = None feature_config.regularizer_configs = None # This is a calibrated lattice ensemble model: inputs are calibrated, then # combined non-linearly and averaged using multiple lattice layers. rtl_layer_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig( feature_configs=rtl_layer_feature_configs, lattices='rtl_layer', num_lattices=5, lattice_rank=3, output_min=min_label, output_max=max_label - numerical_error_epsilon, output_initialization=[min_label, max_label], random_seed=42) # A CalibratedLatticeEnsemble premade model constructed from the given # model config. Note that we do not have to specify the lattices by calling # a helper function (like before with random) because the RTL Layer will take # care of that for us. rtl_layer_ensemble_model = tfl.premade.CalibratedLatticeEnsemble( rtl_layer_ensemble_model_config) # Let's plot our model. tf.keras.utils.plot_model( rtl_layer_ensemble_model, show_layer_names=False, rankdir='LR') ``` ์ด์ „๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๋ชจ๋ธ์„ ์ปดํŒŒ์ผํ•˜๊ณ  ์ ํ•ฉํ•˜๋„๋ก ๋งž์ถ”๊ณ  ํ‰๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ``` rtl_layer_ensemble_model.compile( loss=tf.keras.losses.BinaryCrossentropy(), metrics=[tf.keras.metrics.AUC()], optimizer=tf.keras.optimizers.Adam(LEARNING_RATE)) rtl_layer_ensemble_model.fit( train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False) print('Test Set Evaluation...') print(rtl_layer_ensemble_model.evaluate(test_xs, test_ys)) ``` ### Crystal ๊ฒฉ์ž ์•™์ƒ๋ธ” ์‚ฌ์ „ ์ œ์ž‘์€ ๋˜ํ•œ [Crystal](https://papers.nips.cc/paper/6377-fast-and-flexible-monotonic-functions-with-ensembles-of-lattices) ์ด๋ผ๋Š” ํœด๋ฆฌ์Šคํ‹ฑ ํŠน์„ฑ ๋ฐฐ์—ด ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. Crystal ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ๋จผ์ € ์Œ๋ณ„ ํŠน์„ฑ ์ƒํ˜ธ ์ž‘์šฉ์„ ์ถ”์ •ํ•˜๋Š” ์‚ฌ์ „ ์ ํ•ฉ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ๋” ๋งŽ์€ ๋น„์„ ํ˜• ์ƒํ˜ธ ์ž‘์šฉ์ด ์žˆ๋Š” ํŠน์„ฑ์ด ๊ฐ™์€ ๊ฒฉ์ž์— ์žˆ๋„๋ก ์ตœ์ข… ์•™์ƒ๋ธ”์„ ๋ฐฐ์—ดํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ ์ œ์ž‘ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ์‚ฌ์ „ ์ ํ•ฉ ๋ชจ๋ธ ๊ตฌ์„ฑ์„ ๊ตฌ์„ฑํ•˜๊ณ  ๊ฒฐ์ • ๊ตฌ์กฐ๋ฅผ ์ถ”์ถœํ•˜๊ธฐ ์œ„ํ•œ ๋„์šฐ๋ฏธ ํ•จ์ˆ˜๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ ์ ํ•ฉ ๋ชจ๋ธ์€ ์™„์ „ํ•˜๊ฒŒ ํ›ˆ๋ จ๋  ํ•„์š”๊ฐ€ ์—†์œผ๋ฏ€๋กœ ๋ช‡ ๋ฒˆ์˜ epoch๋ฉด ์ถฉ๋ถ„ํ•ฉ๋‹ˆ๋‹ค. ์ด ์˜ˆ์ œ์—์„œ๋Š” 5๊ฐœ์˜ ๊ฒฉ์ž์™€ ๊ฒฉ์ž๋‹น 3๊ฐœ์˜ ํŠน์„ฑ์ด ์žˆ๋Š” ๋ณด์ •๋œ ๊ฒฉ์ž ์•™์ƒ๋ธ” ๋ชจ๋ธ์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ``` # This is a calibrated lattice ensemble model: inputs are calibrated, then # combines non-linearly and averaged using multiple lattice layers. crystals_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig( feature_configs=feature_configs, lattices='crystals', num_lattices=5, lattice_rank=3, output_min=min_label, output_max=max_label - numerical_error_epsilon, output_initialization=[min_label, max_label], random_seed=42) # Now that we have our model config, we can construct a prefitting model config. prefitting_model_config = tfl.premade_lib.construct_prefitting_model_config( crystals_ensemble_model_config) # A CalibratedLatticeEnsemble premade model constructed from the given # prefitting model config. prefitting_model = tfl.premade.CalibratedLatticeEnsemble( prefitting_model_config) # We can compile and train our prefitting model as we like. prefitting_model.compile( loss=tf.keras.losses.BinaryCrossentropy(), optimizer=tf.keras.optimizers.Adam(LEARNING_RATE)) prefitting_model.fit( train_xs, train_ys, epochs=PREFITTING_NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False) # Now that we have our trained prefitting model, we can extract the crystals. tfl.premade_lib.set_crystals_lattice_ensemble(crystals_ensemble_model_config, prefitting_model_config, prefitting_model) # A CalibratedLatticeEnsemble premade model constructed from the given # model config. crystals_ensemble_model = tfl.premade.CalibratedLatticeEnsemble( crystals_ensemble_model_config) # Let's plot our model. tf.keras.utils.plot_model( crystals_ensemble_model, show_layer_names=False, rankdir='LR') ``` ์ด์ „๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๋ชจ๋ธ์„ ์ปดํŒŒ์ผํ•˜๊ณ  ์ ํ•ฉํ•˜๋„๋ก ๋งž์ถ”๊ณ  ํ‰๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ``` crystals_ensemble_model.compile( loss=tf.keras.losses.BinaryCrossentropy(), metrics=[tf.keras.metrics.AUC()], optimizer=tf.keras.optimizers.Adam(LEARNING_RATE)) crystals_ensemble_model.fit( train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False) print('Test Set Evaluation...') print(crystals_ensemble_model.evaluate(test_xs, test_ys)) ```
github_jupyter
``` import covid from datetime import datetime,timedelta, date from IPython.display import clear_output from IPython.display import Markdown import ipywidgets as widgets import plotly.express as px import plotly.graph_objects as go from plotly.subplots import make_subplots import plotly.io as pio import pandas as pd %%html <style> .p-Widget.js-plotly-plot { height: 34vw; width: 82vw; } </style> # Config log_scale = False renderer = 'notebook_connected' pio.renderers.default = renderer plotly_conf = {'displayModeBar': True, 'responsive': True} plt_template = 'simple_white+gridon' covid.load_regional() covid.set_region('Piemonte') covid.update_data() data = covid.data updated_at = covid.updated_at ``` # Dati Coronavirus Piemonte ``` def format_row_wise(styler, formatter): for row, row_formatter in formatter.items(): row_num = styler.index.get_loc(row) for col_num in range(len(styler.columns)): styler._display_funcs[(row_num, col_num)] = row_formatter return styler def color_negative(val): color = 'red' if val < 0 else 'black' weight = 'bold' if val < 0 else 'normal' return 'color: %s; font-weight:%s' % (color, weight) styles = [ dict(selector="*", props=[("font-size", "105%")]) ] statistics = covid.statistics recap = covid.recap display(Markdown("### *Aggiornamento al " + updated_at + "*")) formatters = { 'Mortalitร ': '{:+.2%}'.format, 'Critici': '{:+.2%}'.format, 'Ricoverati': '{:+.2%}'.format, 'Guariti': '{:+.2%}'.format } recap_style = recap.style recap_style.set_table_styles(styles) recap_style.format('{:,.0f}'.format) recap_style.format('{:+,.0f}'.format, subset=['Variazione tot rispetto a ieri']) recap_style.format('{:+.2%}'.format, subset=['Variazione % rispetto a ieri']) recap_style.applymap(color_negative) styler = format_row_wise(recap_style, formatters) out1 = widgets.Output() with out1: display(recap_style) out2 = widgets.Output() pivot = covid.pivot formatters = { '% mortalita': '{:,.2%}'.format, '% intensivi': '{:,.2%}'.format, '% ricoverati': '{:,.2%}'.format, '% guariti': '{:,.2%}'.format } pivot_style = pivot.iloc[:,0:7].style pivot_style.set_table_styles(styles) pivot_style.format('{:,.0f}'.format) styler = format_row_wise(pivot_style, formatters) with out2: display(styler) tab_result = widgets.Tab(children = [out1, out2]) tab_result._dom_classes = ['rendered_html'] tab_result.set_title(0, 'Dati odierni') tab_result.set_title(1, 'Ultimi 7 giorni') display(tab_result) ``` ## Grafici andamenti ``` # draw reference areas for lockdown periods def draw_reflines(fig): opacity = 0.25 line_width = 1 line_color="Black" shapes = [] for xref in range(1,4): shapes.extend([ dict( type="rect", # x-reference is assigned to the x-values xref=f"x{xref}", # y-reference is assigned to the plot paper [0,1] yref="paper", x0=datetime(2020,3,9), y0=0, y1=1, x1=datetime(2020,3,21), fillcolor="LightSalmon", opacity=opacity, layer="below", line_width=line_width, line_color=line_color, line_dash='dash' ), dict( type="rect", xref=f"x{xref}", yref="paper", x0=datetime(2020,3,21), y0=0, y1=1, x1=datetime(2020,5,3), fillcolor="LightCoral", opacity=opacity, layer="below", line_width=line_width, line_color=line_color, line_dash='dash' ), dict( type="rect", xref=f"x{xref}", yref="paper", x0=datetime(2020,5,3), y0=0, y1=1, x1=datetime(2020,5,17), fillcolor="Green", opacity=opacity, layer="below", line_width=line_width, line_color=line_color, line_dash='dash' ), dict( type="rect", xref=f"x{xref}", yref="paper", x0=datetime(2020,5,17), y0=0, x1=datetime(2020,6,2), y1=1, fillcolor="LightGreen", opacity=opacity, layer="below", line_width=line_width, line_color=line_color, line_dash='dash' )]) fig.update_layout(shapes=shapes) # plot def smooth(datas): return datas.rolling(7, win_type='gaussian', min_periods=1, center=True).mean(std=2).round() line_color = ['blue','red','green'] def draw_plotly(log_scale): fig = make_subplots(rows=1, cols=3, shared_xaxes=False, subplot_titles=("Totale positivi", "Variazione positivi", "Terapia intensiva")) fig.add_trace( go.Scatter(x=data.date, y=data.totale_positivi, line_color=line_color[0], name='totale positivi'), row=1, col=1 ) if log_scale: fig.update_yaxes(type="log", row=1, col=1) fig.add_trace( go.Scatter(x=data.date, y=data.variazione_totale_positivi, line_color='lightgreen', name='variazione positivi'), row=1, col=2 ) fig.add_trace( go.Scatter(x=data.date, y=smooth(data.variazione_totale_positivi), line_color=line_color[1], line_dash='dashdot', name='media mobile variazione positivi'), row=1, col=2 ) if log_scale: fig.update_yaxes(type="log", row=1, col=2) ax = fig.add_trace( go.Scatter(x=data.date, y=data.terapia_intensiva, line_color=line_color[2], name='terapia intensiva'), row=1, col=3 ) if log_scale: fig.update_yaxes(type="log", row=1, col=3) fig.update_layout(template=plt_template, showlegend=False) draw_reflines(fig) return fig def draw_plotly2(log_scale): fig = make_subplots(rows=1, cols=3, shared_xaxes=False, subplot_titles=("Ospedalizzati","Deceduti", "Dimessi")) fig.add_trace( go.Scatter(x=data.date, y=data.totale_ospedalizzati, line_color=line_color[0], name='ospedalizzati'), row=1, col=1 ) if log_scale: fig.update_yaxes(type="log", row=1, col=1) fig.add_trace( go.Scatter(x=data.date, y=data.deceduti, line_color=line_color[1], name='deceduti'), row=1, col=2 ) if log_scale: fig.update_yaxes(type="log", row=1, col=2) fig.add_trace( go.Scatter(x=data.date, y=data.dimessi_guariti, line_color=line_color[2], name='dimessi'), row=1, col=3 ) if log_scale: fig.update_yaxes(type="log", row=1, col=3) fig.update_layout(template=plt_template, showlegend=False) draw_reflines(fig) return fig def draw_plotly3(log_scale): fig = make_subplots(rows=1, cols=3, shared_xaxes=False, subplot_titles=("Nuovi decessi", "Nuovi positivi", "Nuovi Tamponi")) fig.add_trace( go.Scatter(x=data.date, y=data.nuovi_decessi, line_color='lightgreen', name='nuovi decessi'), row=1, col=1 ) fig.add_trace( go.Scatter(x=data.date, y=smooth(data.nuovi_decessi), line_color=line_color[0], line_dash='dashdot',name='media mobile nuovi decessi'), row=1, col=1 ) if log_scale: fig.update_yaxes(type="log", row=1, col=1) fig.add_trace( go.Scatter(x=data.date, y=data.nuovi_positivi, line_color='lightgreen', name='nuovi positivi'), row=1, col=2 ) fig.add_trace( go.Scatter(x=data.date, y=smooth(data.nuovi_positivi), line_color=line_color[1], line_dash='dashdot', name='media mobile nuovi positivi'), row=1, col=2 ) if log_scale: fig.update_yaxes(type="log", row=1, col=2) fig.add_trace( go.Scatter(x=data.date, y=data.tamponi.diff().rolling(7).mean(), line_color=line_color[2], name='tamponi'), row=1, col=3 ) if log_scale: fig.update_yaxes(type="log", row=1, col=3) fig.update_layout(template=plt_template, showlegend=False) draw_reflines(fig) return fig #create tabs log_scale = False out1 = draw_plotly(log_scale) out2 = draw_plotly2(log_scale) out3 = draw_plotly3(log_scale) log_scale = True out4 = draw_plotly(log_scale) out5 = draw_plotly2(log_scale) out6 = draw_plotly3(log_scale) # config={'displayModeBar': False} g1 = go.FigureWidget(data=out1, layout=go.Layout()) g2 = go.FigureWidget(data=out2, layout=go.Layout()) g3 = go.FigureWidget(data=out3, layout=go.Layout()) g4 = go.FigureWidget(data=out4, layout=go.Layout()) g5 = go.FigureWidget(data=out5, layout=go.Layout()) g6 = go.FigureWidget(data=out6, layout=go.Layout()) container1 = widgets.VBox(children=[g1,g2,g3]) container2 = widgets.VBox(children=[g4,g5,g6]) tab_nest = widgets.Tab(children = [container1, container2]) tab_nest.set_title(0, 'Lineari') tab_nest.set_title(1, 'Logaritmici') display(tab_nest) ``` ## Andamento casi ``` # Plotly chart opacity = 0.2 line_width = 1 line_color="Black" shapes = [ dict( type="rect", # x-reference is assigned to the x-values xref="x", # y-reference is assigned to the plot paper [0,1] yref="paper", x0=datetime(2020,3,9), y0=0, y1=1, x1=datetime(2020,3,21), fillcolor="LightSalmon", opacity=opacity, layer="below", line_width=line_width, line_color=line_color, line_dash='dash' ), dict( type="rect", xref=f"x", yref="paper", x0=datetime(2020,3,21), y0=0, y1=1, x1=datetime(2020,5,3), fillcolor="LightCoral", opacity=opacity, layer="below", line_width=line_width, line_color=line_color, line_dash='dash' ), dict( type="rect", xref=f"x", yref="paper", x0=datetime(2020,5,3), y0=0, y1=1, x1=datetime(2020,5,17), fillcolor="Green", opacity=opacity, layer="below", line_width=line_width, line_color=line_color, line_dash='dash' ), dict( type="rect", xref=f"x", yref="paper", x0=datetime(2020,5,17), y0=0, x1=datetime(2020,6,2), y1=1, fillcolor="LightGreen", opacity=opacity, layer="below", line_width=line_width, line_color=line_color, line_dash='dash' )] fig = px.bar(data, data.date, [data.totale_ospedalizzati, data.terapia_intensiva, data.nuovi_decessi], color_discrete_map={ "totale_ospodalizzati": "green", "terapia_intensiva": "blue", "nuovi_decessi": "red"} ) fig.update_layout(template=plt_template, shapes=shapes, legend=dict( yanchor="top", y=0.99, xanchor="left", x=0.01, bordercolor="Darkgrey", borderwidth=1, title='' )) fig.show(config={'responsive':True}) ``` ## Analisi predittiva variazione nuovi decessi ``` from fbprophet import Prophet from fbprophet.plot import plot_plotly, plot_components_plotly delta_positivi = data[['date', 'nuovi_decessi']][20:] delta_positivi = delta_positivi.rename(columns={'date':'ds', 'nuovi_decessi':'y'}) # remove outliers # model fit model = Prophet(daily_seasonality=False, weekly_seasonality=False, yearly_seasonality=False) model.fit(delta_positivi) future_days = model.make_future_dataframe(periods=60, freq='D') # forecasting forecast = model.predict(future_days) fig = plot_plotly(model, forecast) fig.update_layout(template=plt_template) fig.show(config={'displayModeBar': True}) fig = plot_components_plotly(model, forecast) fig.update_layout(template=plt_template, height=400) fig.show(config={'displayModeBar': True, 'responsive': True}) result = covid.estimate_rt(data, sigma=0.05) display(Markdown(f"## Stima $R_t$= {result['ML'].values[-1]} \ ({result['Low_90'].values[-1]} - {result['High_90'].values[-1]})")) fig = go.Figure() fig.add_trace(go.Scatter(x=result.index, y=result['ML'], mode='lines+markers', marker=dict( size=6, color=result['ML'], #set color equal to a variable showscale=True ), error_y=dict( type='data', array = result['High_90'] - result['Low_90'], width = 0, thickness = 0.7, color = 'darkgray' ), name='Rt')) fig.update_layout(template=plt_template, yaxis_range=[-2,3]) fig.show(config=plotly_conf) ``` ## Analisi andamento $R_t$ ``` rt = pd.DataFrame({'ds':result.index, 'y': result['ML'].values}) # model fit model_rt = Prophet(daily_seasonality=False, yearly_seasonality=False) model_rt.fit(rt) future_days = model.make_future_dataframe(periods=60, freq='D') # forecasting forecast = model_rt.predict(future_days) fig = plot_components_plotly(model_rt, forecast) fig.update_layout(template=plt_template, height=600) fig.show(config=plotly_conf) ``` *fonte*: [Dati DPC](https://github.com/pcm-dpc/COVID-19/blob/master/dati-andamento-nazionale/dpc-covid19-ita-andamento-nazionale.csv)
github_jupyter
``` import numpy as np # Zadanie 1 np.random.seed(123) x = np.random.uniform(-4, 4, 30) x (x>=0).sum() np.sum(x>=0) x_abs = np.abs(x) x_abs.mean() x.min() x.max() x_abs.max() x[x_abs.argmax()] x[x_abs.argmin()] x[np.argmax(x_abs)] odleglosc = np.abs(x-0) np.argmin(odleglosc) x[np.argmin(odleglosc)] x[np.argmin(np.abs(x-2))] # odleglosc najmniejsza od liczby 2 x x_int = x.astype('int') x - x_int np.modf(x)[0] ((x - x.min())/(x.max() - x.min()))*(1-(-1)) +(-1) np.mean((x[(x>3)|(x<-2)])**2) y = np.where(x>0, 'nieujemna', 'ujemna') y napisy = np.array(['nieujemne', 'ujemne']) napisy[0,0,0,1,1,1,0,0,0] napisy[(x<=0).astype(int)] x.sum()/x.size np.sort(x)[[0,-1]] np.trunc(-5.6) y = np.ceil(x)-0.5 y x = np.arange(1,11) y = np.arange(1,11) x_mean = x.mean() y_mean = y.mean() x_mean licznik = np.sum((x - x_mean)*(y - y_mean)) licznik mianownik = np.sqrt(np.sum((x-x_mean)**2)) * np.sqrt(np.sum((y-y_mean)**2)) mianownik r = licznik/mianownik r n = x.size n x * y n*x_mean * y_mean licznik = np.sum((x * y)) - (n*x_mean * y_mean) licznik n*(x_mean**2) np.sqrt(np.sum((x**2))-(n*(x_mean**2))) mianownik = np.sqrt(np.sum((x**2))-(n*np.power(x_mean,2)))*np.sqrt(np.sum((y**2))-(n*np.power(y_mean,2))) mianownik r = licznik/mianownik r a =(x-x_mean) / np.std(x) a.mean(), a.std() xy = np.sum(x*y) xy xy2 = np.dot(x,y) xy2 np.matmul(x,y) x @ y np.median(x) np.mean(x) y[-2:] = y[-2:] y[5:7] = 5 y[-1] = 110 y np.sum((y - x)**2) def rmse(x,y): n = x.size return np.sqrt(1/n * np.sum((y - x)**2)) def mae(x,y): return 1/n * np.sum(np.abs(y - x)) np.abs(y - x) def medAE(x,y): return np.median(np.abs(y - x)) vec1 = np.array([5,3,2,1]) vec2 = np.array([1,2,2,5]) rmse(vec1,vec2) medAE(vec1, vec2) mae(vec1, vec2) n = vec1.size (vec1 != vec2).sum()/n yp = np.random.random(10) n = yp.size -(1/n) * np.sum((y*np.log(yp))+(1+y)*np.log(1-yp)) x = np.array([1,2,3,4, np.NaN, 7]) x mean = np.mean(x[~np.isnan(x)]) mean2 = np.nanmean(x) np.where(np.isnan(x),mean,x) mean2 x = np.array([-100, -50, 3,4,-8,5,80,100]) x x.sort() x[:2] = x[2] x[-2:] = x[-3] x x.mean() def lead(x,n): return np.r_[x[n:], np.repeat(np.nan, n)] lead(x,2) def lag(x,n): return np.r_[np.repeat(np.nan, n),x[:-n]] lag(x,2) x = np.array([True,True,True,True,True]) np.where(~x)[0].size def cumall(x): if np.where(~x)[0].size == 0: return np.repeat(True, x.size) else: return np.r_[np.repeat(True, np.where(~x)[0][0]), np.repeat(False, x.size-np.where(~x)[0][0])] cumall(x) np.e def factorial_stirling(n): return np.power(n/np.e,n) * np.sqrt(2 * np.pi * n) factorial_stirling(np.array([2,5,20,100])) factorial_vec = np.vectorize(np.math.factorial) factorial_vec(np.array([2,5,20])) np.abs(factorial_stirling(np.array([2,5, 10, 20])) - factorial_vec(np.array([2,5, 10, 20]))) np.abs(factorial_stirling(np.array([0,2,5, 10, 20])) - factorial_vec(np.array([0,2,5, 10, 20])))/factorial_vec(np.array([0,2,5, 10, 20])) %%timeit n = 100000 i = np.arange(n) 4 * np.sum(((-1)**i)/(2*i+1)) %%timeit jedynki = np.tile([1,-1],int(n/2)) 4 * np.sum((jedynki)/(2*i+1)) %%timeit jedynki = np.ones(n) jedynki[1::2] = -1 4 * np.sum((jedynki)/(2*i+1)) %%timeit nieparz = np.arange(1,2*n+1,2) 4 * np.sum(jedynki/nieparz) %%timeit jedynki = np.ones(n) jedynki[1::2] = -1 n=10000 x = np.random.uniform(low=-1, high=1, size=n) y = np.random.uniform(low=-1, high=1, size=n) ile = np.sum((x**2+y**2)<=1) (ile/n)*4 str(np.pi)[0] pi = np.pi int(pi) n = 10 10**np.arange(n) (np.pi*10**np.arange(n) % 10).astype(int) x = np.random.randint(0,5,100) x value, count = np.unique(x, return_counts=True) value[count.argmax()] np.unique(x, return_counts=True) import numpy as np import matplotlib import matplotlib.pyplot as plt %matplotlib inline n = 150 x = np.random.uniform(-5,5,n) y = np.random.uniform(-5,5,n) x_mean = np.mean(x) y_mean = np.mean(y) beta_estimated = 5 alfa_estimated = 3 print(f'beta estimated = {beta_estimated}, alfa_estimated = {alfa_estimated}') x_plot = np.arange(-5, 5, 0.1) y_plot = beta_estimated*x_plot+alfa_estimated plt.plot(x, y,'o') plt.plot(x_plot, y_plot,'r') plt.show() x = np.array([10,50,30,70,20]) y = np.array([200,300,800,500,400]) np.r_[np.repeat(True, np.floor(5/2)), np.repeat(False, np.ceil(5/2))] def crossover(x,y): n = x.size i = np.r_[np.repeat(True, np.floor(n/2)), np.repeat(False, np.ceil(n/2))] np.random.shuffle(i) return np.where(i, x,y) crossover(x,y) p = np.array([0.3, 0.1, 0.4, 0.05, 0.15]) x def mutate(x, p): test = np.random.rand(0,1,p.size) return np.where(test>=p, x+1,x) mutate(x,p) x x + np.where() x p def losuj(n, x, p): przedzialy = np.cumsum(p) numbers = np.random.uniform(0,1,n) podzial = np.digitize(numbers, przedzialy) return x[podzial] a = losuj(1000,x,p) np.unique(a,return_counts=True)[1]/1000 def interpoluj_liniowo(x,y,z): import numpy as np import matplotlib import matplotlib.pyplot as plt %matplotlib inline x = np.array([-5,-3,2,7]) y = np.array([4,5,-8,8]) z = np.array([-4,-2,0,1,5]) plt.plot(x, y,'r') plt.plot(z,np.repeat(0,z.size),'o') plt.show() lewo = np.digitize(z,x) prawo = np.digitize(z,x) y_prim = y[lewo] + (y[prawo]-y[lewo])/(x[prawo]-x[lewo]) * (z[prawo] - x[lewo]) A = np.arange(12).reshape(4,3) A np.mean(A,axis=0) A2 = np.arange(4*5*3).reshape(4,5,3) A2 A2.mean(axis=1) A2.mean(axis=(1,2)) A * np.array([1,10,100]) ```
github_jupyter
``` import os import torch from torch import nn import torchtext import torchtext.vocab as Vocab import torch.utils.data as Data import torch.nn.functional as F import sys sys.path.append("..") # import d2lzh_pytorch as d2l # os.environ["CUDA_VISIBLE_DEVICES"] = "0" device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') DATA_ROOT = "/S1/CSCL/tangss/Datasets" from tqdm.notebook import tqdm import random def read_imdb(folder='train', data_root=r"H:\DBAI\BenchMark_DataSet\imdb\aclImdb"): data = [] for label in ['pos', 'neg']: folder_name = os.path.join(data_root, folder, label) for file in tqdm(os.listdir(folder_name)): with open(os.path.join(folder_name, file), 'rb') as f: review = f.read().decode('utf-8').replace('\n', '').lower() data.append([review, 1 if label == 'pos' else 0]) random.shuffle(data) return data # train_data, test_data = read_imdb('train'), read_imdb('test') # re.sub('\.',' .',train_data[0][0]) # re.sub('<br />',' ',train_data[0][0]) # train_data[2][0] import collections import re def get_tokenized_imdb(data): """ data: list of [string, label] """ def tokenizer(text): text = re.sub('\.',' . ',text) # text = re.sub('\.',' .',text) text = re.sub('<br />',' ',text) return [tok.lower() for tok in text.split()] return [tokenizer(review) for review, _ in data] def get_vocab_imdb(data): tokenized_data = get_tokenized_imdb(data) counter = collections.Counter([tk for st in tokenized_data for tk in st]) return torchtext.vocab.Vocab(counter, min_freq=5) # vocab = get_vocab_imdb(train_data) def preprocess_imdb(data, vocab): max_l = 500 # ๅฐ†ๆฏๆก่ฏ„่ฎบ้€š่ฟ‡ๆˆชๆ–ญๆˆ–่€…่กฅ0๏ผŒไฝฟๅพ—้•ฟๅบฆๅ˜ๆˆ500 def pad(x): return x[:max_l] if len(x) > max_l else x + [0] * (max_l - len(x)) tokenized_data = get_tokenized_imdb(data) features = torch.tensor([pad([vocab.stoi[word] for word in words]) for words in tokenized_data]) labels = torch.tensor([score for _, score in data]) return features, labels import torch.utils.data as Data batch_size = 64 train_data = read_imdb('train') test_data = read_imdb('test') vocab = get_vocab_imdb(train_data) train_set = Data.TensorDataset(*preprocess_imdb(train_data, vocab)) test_set = Data.TensorDataset(*preprocess_imdb(test_data, vocab)) train_iter = Data.DataLoader(train_set, batch_size, shuffle=True) test_iter = Data.DataLoader(test_set, batch_size) '# words in vocab:', len(vocab) for X, y in train_iter: print('X', X.shape, 'y', y.shape) break '#batches:', len(train_iter) class BiRNN(nn.Module): def __init__(self, vocab, embed_size, num_hiddens, num_layers): super(BiRNN, self).__init__() self.embedding = nn.Embedding(len(vocab), embed_size) # bidirectional่ฎพไธบTrueๅณๅพ—ๅˆฐๅŒๅ‘ๅพช็Žฏ็ฅž็ป็ฝ‘็ปœ self.encoder = nn.LSTM(input_size=embed_size, hidden_size=num_hiddens, num_layers=num_layers, bidirectional=True) # ๅˆๅง‹ๆ—ถ้—ดๆญฅๅ’Œๆœ€็ปˆๆ—ถ้—ดๆญฅ็š„้š่—็Šถๆ€ไฝœไธบๅ…จ่ฟžๆŽฅๅฑ‚่พ“ๅ…ฅ self.decoder = nn.Linear(4*num_hiddens, 2) def forward(self, inputs): # inputs็š„ๅฝข็Šถๆ˜ฏ(ๆ‰น้‡ๅคงๅฐ, ่ฏๆ•ฐ)๏ผŒๅ› ไธบLSTM้œ€่ฆๅฐ†ๅบๅˆ—้•ฟๅบฆ(seq_len)ไฝœไธบ็ฌฌไธ€็ปด๏ผŒๆ‰€ไปฅๅฐ†่พ“ๅ…ฅ่ฝฌ็ฝฎๅŽ # ๅ†ๆๅ–่ฏ็‰นๅพ๏ผŒ่พ“ๅ‡บๅฝข็Šถไธบ(่ฏๆ•ฐ, ๆ‰น้‡ๅคงๅฐ, ่ฏๅ‘้‡็ปดๅบฆ) embeddings = self.embedding(inputs.permute(1, 0)) # rnn.LSTMๅชไผ ๅ…ฅ่พ“ๅ…ฅembeddings๏ผŒๅ› ๆญคๅช่ฟ”ๅ›žๆœ€ๅŽไธ€ๅฑ‚็š„้š่—ๅฑ‚ๅœจๅ„ๆ—ถ้—ดๆญฅ็š„้š่—็Šถๆ€ใ€‚ # outputsๅฝข็Šถๆ˜ฏ(่ฏๆ•ฐ, ๆ‰น้‡ๅคงๅฐ, 2 * ้š่—ๅ•ๅ…ƒไธชๆ•ฐ) outputs, _ = self.encoder(embeddings) # output, (h, c) # ่ฟž็ป“ๅˆๅง‹ๆ—ถ้—ดๆญฅๅ’Œๆœ€็ปˆๆ—ถ้—ดๆญฅ็š„้š่—็Šถๆ€ไฝœไธบๅ…จ่ฟžๆŽฅๅฑ‚่พ“ๅ…ฅใ€‚ๅฎƒ็š„ๅฝข็Šถไธบ # (ๆ‰น้‡ๅคงๅฐ, 4 * ้š่—ๅ•ๅ…ƒไธชๆ•ฐ)ใ€‚ encoding = torch.cat((outputs[0], outputs[-1]), -1) outs = self.decoder(encoding) return outs embed_size, num_hiddens, num_layers = 100, 100, 2 net = BiRNN(vocab, embed_size, num_hiddens, num_layers) net # glove_vocab['.'] glove_vocab = torchtext.vocab.Vectors(name='glove.6B.100d.txt',cache='H:\DBAI\word_vec\glove.6B') def load_pretrained_embedding(words, pretrained_vocab): """ไปŽ้ข„่ฎญ็ปƒๅฅฝ็š„vocabไธญๆๅ–ๅ‡บwordsๅฏนๅบ”็š„่ฏๅ‘้‡""" embed = torch.zeros(len(words), pretrained_vocab.vectors[0].shape[0]) # ๅˆๅง‹ๅŒ–ไธบ0 oov_count = 0 # out of vocabulary for i, word in enumerate(words): try: idx = pretrained_vocab.stoi[word] embed[i, :] = pretrained_vocab.vectors[idx] except KeyError: oov_count += 1 # print(word) if oov_count > 0: print("There are %d oov words." % oov_count) return embed net.embedding.weight.data.copy_( load_pretrained_embedding(vocab.itos, glove_vocab)) net.embedding.weight.requires_grad = False # ็›ดๆŽฅๅŠ ่ฝฝ้ข„่ฎญ็ปƒๅฅฝ็š„, ๆ‰€ไปฅไธ้œ€่ฆๆ›ดๆ–ฐๅฎƒ # vocab.itos import time def train(train_iter, test_iter, net, loss, optimizer, device, num_epochs): net = net.to(device) print("training on ", device) batch_count = 0 for epoch in range(num_epochs): train_l_sum, train_acc_sum, n, start = 0.0, 0.0, 0, time.time() for X, y in train_iter: X = X.to(device) y = y.to(device) y_hat = net(X) l = loss(y_hat, y) optimizer.zero_grad() l.backward() optimizer.step() train_l_sum += l.cpu().item() train_acc_sum += (y_hat.argmax(dim=1) == y).sum().cpu().item() n += y.shape[0] batch_count += 1 print('time:',time.time()-start) test_acc = evaluate_accuracy(test_iter, net, device) print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f, time %.1f sec' % (epoch + 1, train_l_sum / batch_count, train_acc_sum / n, test_acc, time.time() - start)) def evaluate_accuracy(data_iter, net, device=None): if device is None and isinstance(net, torch.nn.Module): # ๅฆ‚ๆžœๆฒกๆŒ‡ๅฎšdeviceๅฐฑไฝฟ็”จnet็š„device device = list(net.parameters())[0].device acc_sum, n = 0.0, 0 with torch.no_grad(): for X, y in data_iter: if isinstance(net, torch.nn.Module): net.eval() # ่ฏ„ไผฐๆจกๅผ, ่ฟ™ไผšๅ…ณ้—ญdropout acc_sum += (net(X.to(device)).argmax(dim=1) == y.to(device)).float().sum().cpu().item() net.train() # ๆ”นๅ›ž่ฎญ็ปƒๆจกๅผ else: # ่‡ชๅฎšไน‰็š„ๆจกๅž‹, 3.13่Š‚ไน‹ๅŽไธไผš็”จๅˆฐ, ไธ่€ƒ่™‘GPU if('is_training' in net.__code__.co_varnames): # ๅฆ‚ๆžœๆœ‰is_training่ฟ™ไธชๅ‚ๆ•ฐ # ๅฐ†is_training่ฎพ็ฝฎๆˆFalse acc_sum += (net(X, is_training=False).argmax(dim=1) == y).float().sum().item() else: acc_sum += (net(X).argmax(dim=1) == y).float().sum().item() n += y.shape[0] return acc_sum / n %%time lr, num_epochs = 0.001, 3 optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, net.parameters()), lr=lr) loss = nn.CrossEntropyLoss() train(train_iter, test_iter, net, loss, optimizer, device, num_epochs) def predict_sentiment(net, vocab, sentence): """sentenceๆ˜ฏ่ฏ่ฏญ็š„ๅˆ—่กจ""" device = list(net.parameters())[0].device sentence = torch.tensor([vocab.stoi[word] for word in sentence], device=device) label = torch.argmax(net(sentence.view((1, -1))), dim=1) return 'positive' if label.item() == 1 else 'negative' predict_sentiment(net, vocab, ['this', 'movie', 'is', 'so', 'great']) # positive predict_sentiment(net, vocab, ['this', 'movie', 'is', 'so', 'bad']) ```
github_jupyter
Copyright 2020 Andrew M. Olney and made available under [CC BY-SA](https://creativecommons.org/licenses/by-sa/4.0) for text and [Apache-2.0](http://www.apache.org/licenses/LICENSE-2.0) for code. # Gradient boosting: Problem solving This session will use a dataset of video game sales for games that sold at least 100,000 copies. Because the dataset is so large, only 1000 randomly sampled rows are included. | Variable | Type | Description | |:--------------|:---------|:---------------------------------------------------------------------------------------------| | Rank | Interval | Ranking of overall sales | | Name | Nominal | The games name | | Platform | Nominal | Platform of the games release (i.e. PC,PS4, etc.) | | Year | Ratio | Year of the game's release | | Genre | Nominal | Genre of the game | | Publisher | Nominal | Publisher of the game | | NA_Sales | Ratio | Sales in North America (in millions) | | EU_Sales | Ratio | Sales in Europe (in millions) | | JP_Sales | Ratio | Sales in Japan (in millions) | | Other_Sales | Ratio | Sales in the rest of the world (in millions) | | Global_Sales | Ratio | Total worldwide sales. | <div style="text-align:center;font-size: smaller"> <b>Source:</b> This dataset was taken from <a href="https://www.kaggle.com/gregorut/videogamesales">Kaggle</a>. </div> <br> The goal is to predict `Global_Sales` using the other non-sales variables in the data. ## Load data Import `pandas` for dataframes. Load the dataframe with `datasets/vgsales-1000.csv`, using `index_col="Name"`. ## Explore data ### Describe and drop missing Describe the data. ----------- **QUESTION:** Does the min/mean/max of each variable make sense to you? **ANSWER: (click here to edit)** <hr> Try to remove missing values to see if any rows are incomplete. ----------- **QUESTION:** How many rows had missing values? **ANSWER: (click here to edit)** <hr> ### Visualize Import `plotly.express`. And create a correlation matrix heatmap. ----------- **QUESTION:** What's going on with `Rank` and the `*_Sales` variables? **ANSWER: (click here to edit)** <hr> Do a scatterplot matrix to see the relationships between these variables. ----------- **QUESTION:** Take a look at the scatterplots of the nominal variables against the `Global_Sales`. Is there any obvious pattern? **ANSWER: (click here to edit)** <hr> Make a histogram of `Global_Sales` so we can see how it is distributed. ------------------ **QUESTION:** Do you think we need to transform `Global_Sales` to make it more normal? Why or why not? **ANSWER: (click here to edit)** <hr> ## Prepare train/test sets ### X, Y, and dummies Make a new dataframe called `X` by either dropping all the sales related variables or creating a dataframe with just the columns you want to keep. Import `numpy` to square root transform `Y`. Save a dataframe with just `Global_Sales` in `Y`, but use `numpy` to log transform in a freestyle block: `np.sqrt(dataframe[[ "Global_Sales"]])`. Replace the nominal variables with dummies and save in `X`. ### Train/test splits Import `sklearn.model_selection`. Create the data splits. Make sure to use `random_state=1` so we get the same answers. Don't bother stratifying. ## Fit model Since the response/target variable is numeric, we need to use a gradient boosting regressor rather than a classifier. Import `sklearn.ensemble`. Create the gradient boosting regressor, using `random_state=1` and `subsample=.5`. `fit` the classifier. Get and save predictions. ## Evaluate the model Because this is regression not classification, you can't use classification metrics like accuracy, precision, recall, and f1. Instead, you'll use $r^2$. Some examples are in the `Regression-trees-PS` notebook. - Get the $r^2$ on the *training* set - Get the $r^2$ on the *testing* set ------------------ **QUESTION:** Compare the *training data performance* to the *testing data performance*. Which is better? What do these differences tell you? **ANSWER: (click here to edit)** <hr> ## Visualizing ### Feature importance Visualize feature importance using a bar chart. ------------------ **QUESTION:** Hover over the bars to see the corresponding predictor and value. What are the most important features? **ANSWER: (click here to edit)** <hr> ### Overfit Use the OOB error to test if the model is overfit. Import `plotly.graph_objects` Create an empty figure to draw lines on. And add the two lines, one for training deviance and one for testing deviance. ------------------ **QUESTION:** Do you think it would help our test data performance if we stopped training earlier? Why? **ANSWER: (click here to edit)** <hr> **QUESTION:** Now that you are familiar with this data and how gradient boosting performed with it, what other models would you try? **ANSWER: (click here to edit)** <hr>
github_jupyter
**[Intermediate Machine Learning Home Page](https://www.kaggle.com/learn/intermediate-machine-learning)** --- In this exercise, you will use **pipelines** to improve the efficiency of your machine learning code. # Setup The questions below will give you feedback on your work. Run the following cell to set up the feedback system. ``` # Set up code checking import os if not os.path.exists("../input/train.csv"): os.symlink("../input/home-data-for-ml-course/train.csv", "../input/train.csv") os.symlink("../input/home-data-for-ml-course/test.csv", "../input/test.csv") from learntools.core import binder binder.bind(globals()) from learntools.ml_intermediate.ex4 import * print("Setup Complete") ``` You will work with data from the [Housing Prices Competition for Kaggle Learn Users](https://www.kaggle.com/c/home-data-for-ml-course). ![Ames Housing dataset image](https://i.imgur.com/lTJVG4e.png) Run the next code cell without changes to load the training and validation sets in `X_train`, `X_valid`, `y_train`, and `y_valid`. The test set is loaded in `X_test`. ``` import pandas as pd from sklearn.model_selection import train_test_split # Read the data X_full = pd.read_csv('../input/train.csv', index_col='Id') X_test_full = pd.read_csv('../input/test.csv', index_col='Id') # Remove rows with missing target, separate target from predictors X_full.dropna(axis=0, subset=['SalePrice'], inplace=True) y = X_full.SalePrice X_full.drop(['SalePrice'], axis=1, inplace=True) # Break off validation set from training data X_train_full, X_valid_full, y_train, y_valid = train_test_split(X_full, y, train_size=0.8, test_size=0.2, random_state=0) # "Cardinality" means the number of unique values in a column # Select categorical columns with relatively low cardinality (convenient but arbitrary) categorical_cols = [cname for cname in X_train_full.columns if X_train_full[cname].nunique() < 10 and X_train_full[cname].dtype == "object"] # Select numerical columns numerical_cols = [cname for cname in X_train_full.columns if X_train_full[cname].dtype in ['int64', 'float64']] # Keep selected columns only my_cols = categorical_cols + numerical_cols X_train = X_train_full[my_cols].copy() X_valid = X_valid_full[my_cols].copy() X_test = X_test_full[my_cols].copy() X_train.head() ``` The next code cell uses code from the tutorial to preprocess the data and train a model. Run this code without changes. ``` from sklearn.compose import ColumnTransformer from sklearn.pipeline import Pipeline from sklearn.impute import SimpleImputer from sklearn.preprocessing import OneHotEncoder from sklearn.ensemble import RandomForestRegressor from sklearn.metrics import mean_absolute_error # Preprocessing for numerical data numerical_transformer = SimpleImputer(strategy='constant') # Preprocessing for categorical data categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='most_frequent')), ('onehot', OneHotEncoder(handle_unknown='ignore')) ]) # Bundle preprocessing for numerical and categorical data preprocessor = ColumnTransformer( transformers=[ ('num', numerical_transformer, numerical_cols), ('cat', categorical_transformer, categorical_cols) ]) # Define model model = RandomForestRegressor(n_estimators=100, random_state=0) # Bundle preprocessing and modeling code in a pipeline clf = Pipeline(steps=[('preprocessor', preprocessor), ('model', model) ]) # Preprocessing of training data, fit model clf.fit(X_train, y_train) # Preprocessing of validation data, get predictions preds = clf.predict(X_valid) print('MAE:', mean_absolute_error(y_valid, preds)) ``` The code yields a value around 17862 for the mean absolute error (MAE). In the next step, you will amend the code to do better. # Step 1: Improve the performance ### Part A Now, it's your turn! In the code cell below, define your own preprocessing steps and random forest model. Fill in values for the following variables: - `numerical_transformer` - `categorical_transformer` - `model` To pass this part of the exercise, you need only define valid preprocessing steps and a random forest model. ``` # Preprocessing for numerical data numerical_transformer = SimpleImputer(strategy='constant') # Your code here # Preprocessing for categorical data categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='most_frequent')), ('onehot', OneHotEncoder(handle_unknown='ignore')) ]) # Your code here # Bundle preprocessing for numerical and categorical data preprocessor = ColumnTransformer( transformers=[ ('num', numerical_transformer, numerical_cols), ('cat', categorical_transformer, categorical_cols) ]) # Define model model = RandomForestRegressor(n_estimators=100, random_state=1)# Your code here # Check your answer step_1.a.check() # Lines below will give you a hint or solution code # step_1.a.hint() # step_1.a.solution() ``` ### Part B Run the code cell below without changes. To pass this step, you need to have defined a pipeline in **Part A** that achieves lower MAE than the code above. You're encouraged to take your time here and try out many different approaches, to see how low you can get the MAE! (_If your code does not pass, please amend the preprocessing steps and model in Part A._) ``` # Bundle preprocessing and modeling code in a pipeline my_pipeline = Pipeline(steps=[('preprocessor', preprocessor), ('model', model) ]) # Preprocessing of training data, fit model my_pipeline.fit(X_train, y_train) # Preprocessing of validation data, get predictions preds = my_pipeline.predict(X_valid) # Evaluate the model score = mean_absolute_error(y_valid, preds) print('MAE:', score) # Check your answer step_1.b.check() # Line below will give you a hint # step_1.b.hint() # step_1.b.solution() ``` # Step 2: Generate test predictions Now, you'll use your trained model to generate predictions with the test data. ``` # Preprocessing of test data, fit model preds_test = my_pipeline.predict(X_test) # Your code here # Check your answer step_2.check() # Lines below will give you a hint or solution code #step_2.hint() #step_2.solution() ``` Run the next code cell without changes to save your results to a CSV file that can be submitted directly to the competition. ``` # Save test predictions to file output = pd.DataFrame({'Id': X_test.index, 'SalePrice': preds_test}) output.to_csv('submission.csv', index=False) ``` # Step 3: Submit your results Once you have successfully completed Step 2, you're ready to submit your results to the leaderboard! If you choose to do so, make sure that you have already joined the competition by clicking on the **Join Competition** button at [this link](https://www.kaggle.com/c/home-data-for-ml-course). 1. Begin by clicking on the blue **Save Version** button in the top right corner of the window. This will generate a pop-up window. 2. Ensure that the **Save and Run All** option is selected, and then click on the blue **Save** button. 3. This generates a window in the bottom left corner of the notebook. After it has finished running, click on the number to the right of the **Save Version** button. This pulls up a list of versions on the right of the screen. Click on the ellipsis **(...)** to the right of the most recent version, and select **Open in Viewer**. This brings you into view mode of the same page. You will need to scroll down to get back to these instructions. 4. Click on the **Output** tab on the right of the screen. Then, click on the blue **Submit** button to submit your results to the leaderboard. You have now successfully submitted to the competition! If you want to keep working to improve your performance, select the blue **Edit** button in the top right of the screen. Then you can change your code and repeat the process. There's a lot of room to improve, and you will climb up the leaderboard as you work. # Keep going Move on to learn about [**cross-validation**](https://www.kaggle.com/alexisbcook/cross-validation), a technique you can use to obtain more accurate estimates of model performance! --- **[Intermediate Machine Learning Home Page](https://www.kaggle.com/learn/intermediate-machine-learning)** *Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161289) to chat with other Learners.*
github_jupyter
# Paging Algorithms Visualization ``` import matplotlib.pyplot as plt ``` ## FIFO ``` # Number of page frames pf = 3 reference_string = [1,3,0,3,5,6,3] def calculate_page_hits(rf_string,page_frames): page = [] hits = 0 for el in rf_string: if len(page) < page_frames: hits = hits+1 page.append(el) else: if el in page: pass else: hits = hits +1 page.pop(0) page.append(el) return hits def plot_graph(x,y,color:str): plt.title("Page Faults") plt.xlabel("No. of Frames") plt.ylabel("No. of Page Faults") plt.plot(x,y,color=color) plt.show() faults = {} for i in range(3,10): f = calculate_page_hits(reference_string,i) faults[str(i)] = f faults #Plotting plot_graph(faults.keys(),faults.values(),"blue") ref_str_2 = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4] faults_2 = {} for i in range(3,10): f = calculate_page_hits(ref_str_2,i) faults_2[str(i)] = f faults_2 #Plotting plot_graph(faults_2.keys(),faults_2.values(),"orange") ``` ## 2. Optimal Page replacement ``` def check_distance(rf,page): indices = [] for el in page: if el in rf: index = rf.index(el) else: index = len(page)+1 indices.append(index) max_distance = max(indices) max_distance_el_index = indices.index(max_distance) return max_distance_el_index def optimal_page_hits(rf_string,page_frames): page = [] hits = 0 key = 0 for el in rf_string: key = key+1 if len(page) < page_frames: hits = hits+1 page.append(el) else: if el in page: pass else: hits = hits+1 index = check_distance(rf_string[key:len(rf_string)],page) page[index] = el return hits rf_string = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2,] optimal_faults = {} for i in range(3,10): hits = optimal_page_hits(rf_string,i) optimal_faults[str(i)] = hits optimal_faults #Plotting plot_graph(optimal_faults.keys(),optimal_faults.values(),"green") optimal_faults_2 = {} for i in range(3,10): hits = optimal_page_hits(ref_str_2,i) optimal_faults_2[str(i)] = hits optimal_faults_2 #Plotting plot_graph(optimal_faults_2.keys(),optimal_faults_2.values(),"teal") ``` ## 3. Least Recently Used ``` def least_recent_page_hits(rf_string,page_frames): page = [] hits = 0 for el in rf_string: if len(page) < page_frames: hits = hits+1 page.append(el) else: if el in page: page.remove(el) page.append(el) else: hits = hits + 1 page.pop(0) page.append(el) return hits ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2] lru_faults = {} for i in range(3,10): hits = least_recent_page_hits(ref,i) lru_faults[str(i)] = hits lru_faults #Plotting plot_graph(lru_faults.keys(),lru_faults.values(),"red") lru_faults_2 = {} for i in range(3,10): hits = least_recent_page_hits(ref_str_2,i) lru_faults_2[str(i)] = hits lru_faults_2 #Plotting plot_graph(lru_faults_2.keys(),lru_faults_2.values(),"black") ```
github_jupyter
``` import pandas as pd import numpy as np import altair as alt df = pd.read_csv("data/train.csv") df.head() df.info() numeric_features_eda = [ "floor_area", "year_built", "energy_star_rating", "ELEVATION", "january_min_temp", "january_avg_temp", "january_max_temp", "february_min_temp", "february_avg_temp", "february_max_temp", "march_min_temp", "march_avg_temp", "march_max_temp", "april_min_temp", "april_avg_temp", "april_max_temp", "may_min_temp", "may_avg_temp", "may_max_temp", "june_min_temp", "june_avg_temp", "june_max_temp", "july_min_temp", "july_avg_temp", "july_max_temp", "august_min_temp", "august_avg_temp", "august_max_temp", "september_min_temp", "september_avg_temp", "september_max_temp", "october_min_temp", "october_avg_temp", "october_max_temp", "november_min_temp", "november_avg_temp", "november_max_temp", "december_min_temp", "december_avg_temp", "december_max_temp", "cooling_degree_days", "heating_degree_days", "precipitation_inches", "snowfall_inches", "snowdepth_inches", "avg_temp", "days_below_30F", "days_below_20F", "days_below_10F", "days_below_0F", "days_above_80F", "days_above_90F", "days_above_100F", "days_above_110F", "max_wind_speed", "days_with_fog", "site_eui" ] categorical_features_eda = [ "Year_Factor", "State_Factor", "building_class", "facility_type", "direction_max_wind_speed", "direction_peak_wind_speed", "site_eui" ] drop_columns = [ "id" ] df.describe(include='all') pd.DataFrame(df[categorical_features_eda].drop(columns=["site_eui"]).nunique()) pd.DataFrame(df[categorical_features_eda].drop(columns=["site_eui"]).isna().sum()) pd.DataFrame(df[numeric_features_eda].drop(columns=["site_eui"]).isna().sum()) chart_cat = alt.Chart(df.drop(columns="site_eui")).mark_bar().encode( x= alt.X(alt.repeat(), type='nominal'), y=alt.Y('count()') ).properties( width=600, height=200 ).repeat(categorical_features_eda, columns=1, title = "Distribution of Categorical features" ) chart_cat NG = alt.Chart(df, title="Site Energy units vs Year").mark_boxplot().encode( x='Year_Factor', y='site_eui', ).properties( width=600, height=300 ) RT = alt.Chart(df, title="Site Energy Units per Facility Type").mark_boxplot().encode( x='facility_type', y='site_eui', ).properties( width=800, height=300 ) LM = alt.Chart(df, title="Direction Peak Wind Speed vs Site Energy Units").mark_boxplot().encode( x='direction_peak_wind_speed', y='site_eui', ).properties( width=600, height=300 ) WM = alt.Chart(df, title="Direction Max Wind Speed vs Site Energy Units").mark_boxplot().encode( x='direction_max_wind_speed', y='site_eui', ).properties( width=600, height=250 ) chart = alt.vconcat(NG, RT, LM, WM) chart # Numerical features alt.Chart(df).mark_bar().encode( x=alt.X(alt.repeat(), type='quantitative',bin=alt.Bin(maxbins=10)), y=alt.Y('count()') ).properties( width=200, height=100 ).repeat(numeric_features_eda, columns=3, title="Distribution of Numerical Features" ) num_rel_plot = alt.Chart(df).mark_circle().encode( alt.X(alt.repeat(), type='quantitative'), y=alt.Y("site_eui", type='quantitative') ).properties( width=200, height=200 ).repeat(numeric_features_eda, columns = 3, title = "Site Energy Units vs Numerical features" ) num_rel_plot eui_over_years = alt.Chart(df.dropna(subset=["year_built"]), title = "Trend over years ").mark_area().encode( x = "year_built:T", y = "site_eui") eui_over_years corr_df = ( df.drop(["id"], axis = 1) .corr('spearman') .abs() .stack() .reset_index(name='corr')) corr_matrix = alt.Chart(corr_df).mark_rect().encode( x='level_0', y='level_1', size='corr', color='corr') corr_matrix ```
github_jupyter
``` import pandas as pd from matplotlib import pyplot as plt import numpy as np from matplotlib.pyplot import figure ``` # Onset of symptoms to death ``` df = pd.read_csv('/Users/julianeoliveira/Desktop/github/PAMEpi-Reproducibility-of-published-results/RISK FACTORS AND DISEASE PROFILE OF SARS-COV-2 INFECTIONS IN BRAZIL_A RETROSPECTIVE STUDY/Results/TimeSintomasMorte.csv') df.name.values df['name'] = df.name.replace({'CS_SEXO_M': 'Male', 'CS_SEXO_F': 'Female', 'AGEGRP_AG0t18': '0 to 17 years', 'AGEGRP_AG18t30': '18 to 29 years', 'AGEGRP_AG30t40': '30 to 39 years', 'AGEGRP_AG40t50': '40 to 49 years', 'AGEGRP_AG50t65': '50 to 64 years', 'AGEGRP_AG65t75': '65 to 74 years', 'AGEGRP_AG75t85': '75 to 84 years', 'AGEGRP_AG85+': '85+ years', 'RACA_Branca': 'White', 'RACA_Preta':'Black', 'RACA_Parda': 'Mixed', 'RACA_Amarela': 'Yellow','RACA_Indigena':'Indigenous', 'BDIGRP_BDI0':'Class 0','BDIGRP_BDI1':'Class 1', 'BDIGRP_BDI2': 'Class 2', 'BDIGRP_BDI3': 'Class 3', 'BDIGRP_BDI4':'Class 4', 'VACINA_False':'Not vaccinated', 'VACINA_True':'Vaccinated', 'COMOR_NO':'No Comorbidities', 'COMOR_YES':'Comorbidities'}) df; values = ['Female','Male', '0 to 17 years', '18 to 29 years', '30 to 39 years', '40 to 49 years', '50 to 64 years','65 to 74 years','75 to 84 years', '85+ years', 'White', 'Black', 'Mixed', 'Yellow', 'Indigenous', 'Class 0','Class 1', 'Class 2','Class 3', 'Class 4', 'Not vaccinated', 'Vaccinated','No Comorbidities', 'Comorbidities'] data = df[df.name.isin(values)] data.head() figure(figsize=(8, 8), dpi=80) for lower,upper,y in zip(data['CIme_L'],data['CIme_H'],range(len(data))): plt.plot((lower,upper),(y,y),'ro-',color='red',linewidth=1) plt.yticks(range(len(data)),list(data['name'])) #plt.axvline(linewidth=4, color='r') plt.axvline(x=19.562317, linestyle='--', color='grey') plt.xlabel('Days') plt.grid() plt.title('Onset of symptoms to death') #plt.savefig('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/onset_death.png',bbox_inches='tight') data.describe() ``` # Onset of symptoms to hospital in clinical beds admissions ``` df = pd.read_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/TimeSintomasInterna.csv') df.head() #df.name.values df['name'] = df.name.replace({'CS_SEXO_M': 'Male', 'CS_SEXO_F': 'Female', 'AGEGRP_AG0t18': '0 to 17 years', 'AGEGRP_AG18t30': '18 to 29 years', 'AGEGRP_AG30t40': '30 to 39 years', 'AGEGRP_AG40t50': '40 to 49 years', 'AGEGRP_AG50t65': '50 to 64 years', 'AGEGRP_AG65t75': '65 to 74 years', 'AGEGRP_AG75t85': '75 to 84 years', 'AGEGRP_AG85+': '85+ years', 'RACA_Branca': 'White', 'RACA_Preta':'Black', 'RACA_Parda': 'Mixed', 'RACA_Amarela': 'Yellow','RACA_Indigena':'Indigenous', 'BDIGRP_BDI0':'Class 0','BDIGRP_BDI1':'Class 1', 'BDIGRP_BDI2': 'Class 2', 'BDIGRP_BDI3': 'Class 3', 'BDIGRP_BDI4':'Class 4', 'VACINA_False':'Not vaccinated', 'VACINA_True':'Vaccinated', 'COMOR_NO':'No Comorbidities', 'COMOR_YES':'Comorbidities', 'AGEGRP_AG50t65_CURA':'50 to 64 years survived','AGEGRP_AG50t65_MORTE': '50 to 64 years non survived', 'AGEGRP_AG75t85_MORTE':'75 to 84 years non survived', 'AGEGRP_AG75t85_CURA':'75 to 84 years survived', 'AGEGRP_AG65t75_CURA': '65 to 74 years survived', 'AGEGRP_AG65t75_MORTE': '65 to 74 years non survived', 'AGEGRP_AG30t40_CURA':'30 to 39 years survived', 'AGEGRP_AG30t40_MORTE':'30 to 39 years non survived', 'AGEGRP_AG85+_MORTE':'85+ years non survived', 'AGEGRP_AG85+_CURA':'85+ years survived', 'AGEGRP_AG18t30_CURA':'18 to 29 years survived', 'AGEGRP_AG18t30_MORTE':'18 to 29 years non survived', 'AGEGRP_AG40t50_CURA':'40 to 49 years survived','AGEGRP_AG40t50_MORTE':'40 to 49 years non survived', 'AGEGRP_AG0t18_CURA':'0 to 17 years survived', 'AGEGRP_AG0t18_MORTE':'0 to 17 years non survived', 'CS_SEXO_M_CURA':'Male survived', 'CS_SEXO_M_MORTE':'Male non survived', 'CS_SEXO_F_CURA':'Female survived','CS_SEXO_F_MORTE':'Female non survived', 'RACA_Branca_MORTE': 'White non survived', 'RACA_Branca_CURA':'White survived', 'RACA_Preta_CURA':'Black survived', 'RACA_Preta_MORTE':'Black non survived', 'RACA_Parda_MORTE':'Mixed non survived', 'RACA_Parda_CURA':'Mixed survived', 'RACA_Amarela_MORTE':'Yellow non survived', 'RACA_Amarela_CURA':'Yellow survived', 'RACA_Indigena_CURA':'Indigenous survived', 'RACA_Indigena_MORTE':'Indigenous non survived', 'BDIGRP_BDI1_CURA':'Class 1 survived', 'BDIGRP_BDI1_MORTE':'Class 1 non survived', 'BDIGRP_BDI2_CURA':'Class 2 survived', 'BDIGRP_BDI2_MORTE':'Class 2 non survived', 'BDIGRP_BDI0_MORTE':'Class 0 non survived','BDIGRP_BDI0_CURA':'Class 0 survived', 'BDIGRP_BDI3_MORTE':'Class 3 non survived', 'BDIGRP_BDI3_CURA':'Class 3 survived', 'BDIGRP_BDI4_CURA':'Class 4 survived', 'BDIGRP_BDI4_MORTE':'Class 4 non survived', 'VACINA_False_CURA':'Not vaccinated survived','VACINA_False_MORTE':'Not vaccinated non survived', 'VACINA_True_CURA':'Vaccinated survived', 'VACINA_True_MORTE':'Vaccinated non survived', 'COMOR_NO_CURA':'No Comorbidities survived', 'COMOR_NO_MORTE':'No Comorbidities non survived', 'COMOR_YES_MORTE':'Comorbidities non survived','COMOR_YES_CURA':'Comorbidities survived'}) values = ['Female','Male', '0 to 17 years', '18 to 29 years', '30 to 39 years', '40 to 49 years', '50 to 64 years','65 to 74 years','75 to 84 years', '85+ years', 'White', 'Black', 'Mixed', 'Yellow', 'Indigenous', 'Class 0','Class 1', 'Class 2','Class 3', 'Class 4', 'Not vaccinated', 'Vaccinated','No Comorbidities', 'Comorbidities'] values1 = ['Female survived','Male survived','0 to 17 years survived','18 to 29 years survived', '30 to 39 years survived', '40 to 49 years survived','50 to 64 years survived', '65 to 74 years survived','75 to 84 years survived', '85+ years survived', 'White survived','Black survived','Mixed survived','Yellow survived', 'Indigenous survived', 'Class 0 survived','Class 1 survived','Class 2 survived','Class 3 survived','Class 4 survived', 'Vaccinated survived','Not vaccinated survived','No Comorbidities survived', 'Comorbidities survived'] values2 = ['Female non survived','Male non survived','0 to 17 years non survived','18 to 29 years non survived', '30 to 39 years non survived','40 to 49 years non survived','50 to 64 years non survived', '65 to 74 years non survived','75 to 84 years non survived','85+ years non survived', 'White non survived','Black non survived','Mixed non survived','Yellow non survived', 'Indigenous non survived', 'Class 0 non survived','Class 1 non survived', 'Class 2 non survived', 'Class 3 non survived','Class 4 non survived', 'Vaccinated non survived','Not vaccinated non survived','No Comorbidities non survived', 'Comorbidities non survived'] df.head() data = df[df.name.isin(values)] data1 = df[df.name.isin(values1)] data2 = df[df.name.isin(values2)] data1.describe() data2.describe() figure(figsize=(8, 8), dpi=80) for lower,upper,y in zip(data['CIme_L'],data['CIme_H'],range(len(data))): plt.plot((lower,upper),(y,y),'ro-',color='black',linewidth=1) plt.yticks(range(len(data)),list(data['name'])) #plt.axvline(linewidth=4, color='r') plt.axvline(x=7.995195, linestyle='--', color='grey') for lower,upper,y in zip(data1['CIme_L'],data1['CIme_H'],range(len(data1))): plt.plot((lower,upper),(y,y),'ro-',color='blue',linewidth=1) #plt.yticks(range(len(data1)),list(data1['name'])) plt.yticks(range(len(data)),list(data['name'])) plt.axvline(x=8.199849, linestyle='--', color='blue') for lower,upper,y in zip(data2['CIme_L'],data2['CIme_H'],range(len(data2))): plt.plot((lower,upper),(y,y),'ro-',color='red',linewidth=1) plt.axvline(x=7.657935, linestyle='--', color='red') plt.xlabel('Days') plt.grid() plt.title('Onset of symptoms to hospital in clinical beds admissions') #plt.savefig('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/onset_H.png',bbox_inches='tight') ``` # Onset of symptoms to ICU admissions ``` df = pd.read_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/TimeSintomasICU.csv') df.head() df['name'] = df.name.replace({'CS_SEXO_M': 'Male', 'CS_SEXO_F': 'Female', 'AGEGRP_AG0t18': '0 to 17 years', 'AGEGRP_AG18t30': '18 to 29 years', 'AGEGRP_AG30t40': '30 to 39 years', 'AGEGRP_AG40t50': '40 to 49 years', 'AGEGRP_AG50t65': '50 to 64 years', 'AGEGRP_AG65t75': '65 to 74 years', 'AGEGRP_AG75t85': '75 to 84 years', 'AGEGRP_AG85+': '85+ years', 'RACA_Branca': 'White', 'RACA_Preta':'Black', 'RACA_Parda': 'Mixed', 'RACA_Amarela': 'Yellow','RACA_Indigena':'Indigenous', 'BDIGRP_BDI0':'Class 0','BDIGRP_BDI1':'Class 1', 'BDIGRP_BDI2': 'Class 2', 'BDIGRP_BDI3': 'Class 3', 'BDIGRP_BDI4':'Class 4', 'VACINA_False':'Not vaccinated', 'VACINA_True':'Vaccinated', 'COMOR_NO':'No Comorbidities', 'COMOR_YES':'Comorbidities', 'AGEGRP_AG50t65_CURA':'50 to 64 years survived','AGEGRP_AG50t65_MORTE': '50 to 64 years non survived', 'AGEGRP_AG75t85_MORTE':'75 to 84 years non survived', 'AGEGRP_AG75t85_CURA':'75 to 84 years survived', 'AGEGRP_AG65t75_CURA': '65 to 74 years survived', 'AGEGRP_AG65t75_MORTE': '65 to 74 years non survived', 'AGEGRP_AG30t40_CURA':'30 to 39 years survived', 'AGEGRP_AG30t40_MORTE':'30 to 39 years non survived', 'AGEGRP_AG85+_MORTE':'85+ years non survived', 'AGEGRP_AG85+_CURA':'85+ years survived', 'AGEGRP_AG18t30_CURA':'18 to 29 years survived', 'AGEGRP_AG18t30_MORTE':'18 to 29 years non survived', 'AGEGRP_AG40t50_CURA':'40 to 49 years survived','AGEGRP_AG40t50_MORTE':'40 to 49 years non survived', 'AGEGRP_AG0t18_CURA':'0 to 17 years survived', 'AGEGRP_AG0t18_MORTE':'0 to 17 years non survived', 'CS_SEXO_M_CURA':'Male survived', 'CS_SEXO_M_MORTE':'Male non survived', 'CS_SEXO_F_CURA':'Female survived','CS_SEXO_F_MORTE':'Female non survived', 'RACA_Branca_MORTE': 'White non survived', 'RACA_Branca_CURA':'White survived', 'RACA_Preta_CURA':'Black survived', 'RACA_Preta_MORTE':'Black non survived', 'RACA_Parda_MORTE':'Mixed non survived', 'RACA_Parda_CURA':'Mixed survived', 'RACA_Amarela_MORTE':'Yellow non survived', 'RACA_Amarela_CURA':'Yellow survived', 'RACA_Indigena_CURA':'Indigenous survived', 'RACA_Indigena_MORTE':'Indigenous non survived', 'BDIGRP_BDI1_CURA':'Class 1 survived', 'BDIGRP_BDI1_MORTE':'Class 1 non survived', 'BDIGRP_BDI2_CURA':'Class 2 survived', 'BDIGRP_BDI2_MORTE':'Class 2 non survived', 'BDIGRP_BDI0_MORTE':'Class 0 non survived','BDIGRP_BDI0_CURA':'Class 0 survived', 'BDIGRP_BDI3_MORTE':'Class 3 non survived', 'BDIGRP_BDI3_CURA':'Class 3 survived', 'BDIGRP_BDI4_CURA':'Class 4 survived', 'BDIGRP_BDI4_MORTE':'Class 4 non survived', 'VACINA_False_CURA':'Not vaccinated survived','VACINA_False_MORTE':'Not vaccinated non survived', 'VACINA_True_CURA':'Vaccinated survived', 'VACINA_True_MORTE':'Vaccinated non survived', 'COMOR_NO_CURA':'No Comorbidities survived', 'COMOR_NO_MORTE':'No Comorbidities non survived', 'COMOR_YES_MORTE':'Comorbidities non survived','COMOR_YES_CURA':'Comorbidities survived'}) values = ['Female','Male', '0 to 17 years', '18 to 29 years', '30 to 39 years', '40 to 49 years', '50 to 64 years','65 to 74 years','75 to 84 years', '85+ years', 'White', 'Black', 'Mixed', 'Yellow', 'Indigenous', 'Class 0','Class 1', 'Class 2','Class 3', 'Class 4', 'Not vaccinated', 'Vaccinated','No Comorbidities', 'Comorbidities'] values1 = ['Female survived','Male survived','0 to 17 years survived','18 to 29 years survived', '30 to 39 years survived', '40 to 49 years survived','50 to 64 years survived', '65 to 74 years survived','75 to 84 years survived', '85+ years survived', 'White survived','Black survived','Mixed survived','Yellow survived', 'Indigenous survived', 'Class 0 survived','Class 1 survived','Class 2 survived','Class 3 survived','Class 4 survived', 'Vaccinated survived','Not vaccinated survived','No Comorbidities survived', 'Comorbidities survived'] values2 = ['Female non survived','Male non survived','0 to 17 years non survived','18 to 29 years non survived', '30 to 39 years non survived','40 to 49 years non survived','50 to 64 years non survived', '65 to 74 years non survived','75 to 84 years non survived','85+ years non survived', 'White non survived','Black non survived','Mixed non survived','Yellow non survived', 'Indigenous non survived', 'Class 0 non survived','Class 1 non survived', 'Class 2 non survived', 'Class 3 non survived','Class 4 non survived', 'Vaccinated non survived','Not vaccinated non survived','No Comorbidities non survived', 'Comorbidities non survived'] df.head() data = df[df.name.isin(values)] data1 = df[df.name.isin(values1)] data2 = df[df.name.isin(values2)] data1.describe() data2.describe() figure(figsize=(8, 8), dpi=80) #for lower,upper,y in zip(data['CIme_L'],data['CIme_H'],range(len(data))): # plt.plot((lower,upper),(y,y),'ro-',color='black',linewidth=1) #plt.yticks(range(len(data)),list(data['name'])) #plt.axvline(linewidth=4, color='r') #plt.axvline(x=8.979720, linestyle='--', color='grey') for lower,upper,y in zip(data1['CImd_L'],data1['CImd_H'],range(len(data1))): plt.plot((lower,upper),(y,y),'ro-',color='blue',linewidth=1) #plt.yticks(range(len(data1)),list(data1['name'])) plt.yticks(range(len(data)),list(data['name'])) plt.axvline(x=8.979720, linestyle='--', color='grey') for lower,upper,y in zip(data2['CImd_L'],data2['CImd_H'],range(len(data2))): plt.plot((lower,upper),(y,y),'ro-',color='red',linewidth=1) plt.xlabel('Days') plt.grid() plt.title('Onset of symptoms to ICU admissions') #plt.savefig('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/onset_U.png',bbox_inches='tight') ``` # Mean hospitalization in clinical beds period ``` df = pd.read_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/HOSP_dur.csv') df.head() df['name'] = df.name.replace({'CS_SEXO_M': 'Male', 'CS_SEXO_F': 'Female', 'AGEGRP_AG0t18': '0 to 17 years', 'AGEGRP_AG18t30': '18 to 29 years', 'AGEGRP_AG30t40': '30 to 39 years', 'AGEGRP_AG40t50': '40 to 49 years', 'AGEGRP_AG50t65': '50 to 64 years', 'AGEGRP_AG65t75': '65 to 74 years', 'AGEGRP_AG75t85': '75 to 84 years', 'AGEGRP_AG85+': '85+ years', 'RACA_Branca': 'White', 'RACA_Preta':'Black', 'RACA_Parda': 'Mixed', 'RACA_Amarela': 'Yellow','RACA_Indigena':'Indigenous', 'BDIGRP_BDI0':'Class 0','BDIGRP_BDI1':'Class 1', 'BDIGRP_BDI2': 'Class 2', 'BDIGRP_BDI3': 'Class 3', 'BDIGRP_BDI4':'Class 4', 'VACINA_False':'Not vaccinated', 'VACINA_True':'Vaccinated', 'COMOR_NO':'No Comorbidities', 'COMOR_YES':'Comorbidities', 'AGEGRP_AG50t65_CURA':'50 to 64 years survived','AGEGRP_AG50t65_MORTE': '50 to 64 years non survived', 'AGEGRP_AG75t85_MORTE':'75 to 84 years non survived', 'AGEGRP_AG75t85_CURA':'75 to 84 years survived', 'AGEGRP_AG65t75_CURA': '65 to 74 years survived', 'AGEGRP_AG65t75_MORTE': '65 to 74 years non survived', 'AGEGRP_AG30t40_CURA':'30 to 39 years survived', 'AGEGRP_AG30t40_MORTE':'30 to 39 years non survived', 'AGEGRP_AG85+_MORTE':'85+ years non survived', 'AGEGRP_AG85+_CURA':'85+ years survived', 'AGEGRP_AG18t30_CURA':'18 to 29 years survived', 'AGEGRP_AG18t30_MORTE':'18 to 29 years non survived', 'AGEGRP_AG40t50_CURA':'40 to 49 years survived','AGEGRP_AG40t50_MORTE':'40 to 49 years non survived', 'AGEGRP_AG0t18_CURA':'0 to 17 years survived', 'AGEGRP_AG0t18_MORTE':'0 to 17 years non survived', 'CS_SEXO_M_CURA':'Male survived', 'CS_SEXO_M_MORTE':'Male non survived', 'CS_SEXO_F_CURA':'Female survived','CS_SEXO_F_MORTE':'Female non survived', 'RACA_Branca_MORTE': 'White non survived', 'RACA_Branca_CURA':'White survived', 'RACA_Preta_CURA':'Black survived', 'RACA_Preta_MORTE':'Black non survived', 'RACA_Parda_MORTE':'Mixed non survived', 'RACA_Parda_CURA':'Mixed survived', 'RACA_Amarela_MORTE':'Yellow non survived', 'RACA_Amarela_CURA':'Yellow survived', 'RACA_Indigena_CURA':'Indigenous survived', 'RACA_Indigena_MORTE':'Indigenous non survived', 'BDIGRP_BDI1_CURA':'Class 1 survived', 'BDIGRP_BDI1_MORTE':'Class 1 non survived', 'BDIGRP_BDI2_CURA':'Class 2 survived', 'BDIGRP_BDI2_MORTE':'Class 2 non survived', 'BDIGRP_BDI0_MORTE':'Class 0 non survived','BDIGRP_BDI0_CURA':'Class 0 survived', 'BDIGRP_BDI3_MORTE':'Class 3 non survived', 'BDIGRP_BDI3_CURA':'Class 3 survived', 'BDIGRP_BDI4_CURA':'Class 4 survived', 'BDIGRP_BDI4_MORTE':'Class 4 non survived', 'VACINA_False_CURA':'Not vaccinated survived','VACINA_False_MORTE':'Not vaccinated non survived', 'VACINA_True_CURA':'Vaccinated survived', 'VACINA_True_MORTE':'Vaccinated non survived', 'COMOR_NO_CURA':'No Comorbidities survived', 'COMOR_NO_MORTE':'No Comorbidities non survived', 'COMOR_YES_MORTE':'Comorbidities non survived','COMOR_YES_CURA':'Comorbidities survived'}) values = ['Female','Male', '0 to 17 years', '18 to 29 years', '30 to 39 years', '40 to 49 years', '50 to 64 years','65 to 74 years','75 to 84 years', '85+ years', 'White', 'Black', 'Mixed', 'Yellow', 'Indigenous', 'Class 0','Class 1', 'Class 2','Class 3', 'Class 4', 'Not vaccinated', 'Vaccinated','No Comorbidities', 'Comorbidities'] values1 = ['Female survived','Male survived','0 to 17 years survived','18 to 29 years survived', '30 to 39 years survived', '40 to 49 years survived','50 to 64 years survived', '65 to 74 years survived','75 to 84 years survived', '85+ years survived', 'White survived','Black survived','Mixed survived','Yellow survived', 'Indigenous survived', 'Class 0 survived','Class 1 survived','Class 2 survived','Class 3 survived','Class 4 survived', 'Vaccinated survived','Not vaccinated survived','No Comorbidities survived', 'Comorbidities survived'] values2 = ['Female non survived','Male non survived','0 to 17 years non survived','18 to 29 years non survived', '30 to 39 years non survived','40 to 49 years non survived','50 to 64 years non survived', '65 to 74 years non survived','75 to 84 years non survived','85+ years non survived', 'White non survived','Black non survived','Mixed non survived','Yellow non survived', 'Indigenous non survived', 'Class 0 non survived','Class 1 non survived', 'Class 2 non survived', 'Class 3 non survived','Class 4 non survived', 'Vaccinated non survived','Not vaccinated non survived','No Comorbidities non survived', 'Comorbidities non survived'] df.head() data = df[df.name.isin(values)] data1 = df[df.name.isin(values1)] data2 = df[df.name.isin(values2)] data1.describe() data2.describe() figure(figsize=(8, 8), dpi=80) #for lower,upper,y in zip(data['CIme_L'],data['CIme_H'],range(len(data))): # plt.plot((lower,upper),(y,y),'ro-',color='black',linewidth=1) #plt.yticks(range(len(data)),list(data['name'])) #plt.axvline(linewidth=4, color='r') #plt.axvline(x=13.269612, linestyle='--', color='grey') for lower,upper,y in zip(data1['CIme_L'],data1['CIme_H'],range(len(data1))): plt.plot((lower,upper),(y,y),'ro-',color='blue',linewidth=1) #plt.yticks(range(len(data1)),list(data1['name'])) plt.axvline(x=13.269612, linestyle='--', color='grey') plt.yticks(range(len(data)),list(data['name'])) for lower,upper,y in zip(data2['CIme_L'],data2['CIme_H'],range(len(data2))): plt.plot((lower,upper),(y,y),'ro-',color='red',linewidth=1) plt.xlabel('Days') plt.grid() plt.title('Mean hospitalization in clinical beds period') plt.savefig('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/mean_H_period.png',bbox_inches='tight') ``` # Mean ICU period ``` df = pd.read_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/ICU_dur.csv') df['name'] = df.name.replace({'CS_SEXO_M': 'Male', 'CS_SEXO_F': 'Female', 'AGEGRP_AG0t18': '0 to 17 years', 'AGEGRP_AG18t30': '18 to 29 years', 'AGEGRP_AG30t40': '30 to 39 years', 'AGEGRP_AG40t50': '40 to 49 years', 'AGEGRP_AG50t65': '50 to 64 years', 'AGEGRP_AG65t75': '65 to 74 years', 'AGEGRP_AG75t85': '75 to 84 years', 'AGEGRP_AG85+': '85+ years', 'RACA_Branca': 'White', 'RACA_Preta':'Black', 'RACA_Parda': 'Mixed', 'RACA_Amarela': 'Yellow','RACA_Indigena':'Indigenous', 'BDIGRP_BDI0':'Class 0','BDIGRP_BDI1':'Class 1', 'BDIGRP_BDI2': 'Class 2', 'BDIGRP_BDI3': 'Class 3', 'BDIGRP_BDI4':'Class 4', 'VACINA_False':'Not vaccinated', 'VACINA_True':'Vaccinated', 'COMOR_NO':'No Comorbidities', 'COMOR_YES':'Comorbidities', 'AGEGRP_AG50t65_CURA':'50 to 64 years survived','AGEGRP_AG50t65_MORTE': '50 to 64 years non survived', 'AGEGRP_AG75t85_MORTE':'75 to 84 years non survived', 'AGEGRP_AG75t85_CURA':'75 to 84 years survived', 'AGEGRP_AG65t75_CURA': '65 to 74 years survived', 'AGEGRP_AG65t75_MORTE': '65 to 74 years non survived', 'AGEGRP_AG30t40_CURA':'30 to 39 years survived', 'AGEGRP_AG30t40_MORTE':'30 to 39 years non survived', 'AGEGRP_AG85+_MORTE':'85+ years non survived', 'AGEGRP_AG85+_CURA':'85+ years survived', 'AGEGRP_AG18t30_CURA':'18 to 29 years survived', 'AGEGRP_AG18t30_MORTE':'18 to 29 years non survived', 'AGEGRP_AG40t50_CURA':'40 to 49 years survived','AGEGRP_AG40t50_MORTE':'40 to 49 years non survived', 'AGEGRP_AG0t18_CURA':'0 to 17 years survived', 'AGEGRP_AG0t18_MORTE':'0 to 17 years non survived', 'CS_SEXO_M_CURA':'Male survived', 'CS_SEXO_M_MORTE':'Male non survived', 'CS_SEXO_F_CURA':'Female survived','CS_SEXO_F_MORTE':'Female non survived', 'RACA_Branca_MORTE': 'White non survived', 'RACA_Branca_CURA':'White survived', 'RACA_Preta_CURA':'Black survived', 'RACA_Preta_MORTE':'Black non survived', 'RACA_Parda_MORTE':'Mixed non survived', 'RACA_Parda_CURA':'Mixed survived', 'RACA_Amarela_MORTE':'Yellow non survived', 'RACA_Amarela_CURA':'Yellow survived', 'RACA_Indigena_CURA':'Indigenous survived', 'RACA_Indigena_MORTE':'Indigenous non survived', 'BDIGRP_BDI1_CURA':'Class 1 survived', 'BDIGRP_BDI1_MORTE':'Class 1 non survived', 'BDIGRP_BDI2_CURA':'Class 2 survived', 'BDIGRP_BDI2_MORTE':'Class 2 non survived', 'BDIGRP_BDI0_MORTE':'Class 0 non survived','BDIGRP_BDI0_CURA':'Class 0 survived', 'BDIGRP_BDI3_MORTE':'Class 3 non survived', 'BDIGRP_BDI3_CURA':'Class 3 survived', 'BDIGRP_BDI4_CURA':'Class 4 survived', 'BDIGRP_BDI4_MORTE':'Class 4 non survived', 'VACINA_False_CURA':'Not vaccinated survived','VACINA_False_MORTE':'Not vaccinated non survived', 'VACINA_True_CURA':'Vaccinated survived', 'VACINA_True_MORTE':'Vaccinated non survived', 'COMOR_NO_CURA':'No Comorbidities survived', 'COMOR_NO_MORTE':'No Comorbidities non survived', 'COMOR_YES_MORTE':'Comorbidities non survived','COMOR_YES_CURA':'Comorbidities survived'}) values = ['Female','Male', '0 to 17 years', '18 to 29 years', '30 to 39 years', '40 to 49 years', '50 to 64 years','65 to 74 years','75 to 84 years', '85+ years', 'White', 'Black', 'Mixed', 'Yellow', 'Indigenous', 'Class 0','Class 1', 'Class 2','Class 3', 'Class 4', 'Not vaccinated', 'Vaccinated','No Comorbidities', 'Comorbidities'] values1 = ['Female survived','Male survived','0 to 17 years survived','18 to 29 years survived', '30 to 39 years survived', '40 to 49 years survived','50 to 64 years survived', '65 to 74 years survived','75 to 84 years survived', '85+ years survived', 'White survived','Black survived','Mixed survived','Yellow survived', 'Indigenous survived', 'Class 0 survived','Class 1 survived','Class 2 survived','Class 3 survived','Class 4 survived', 'Vaccinated survived','Not vaccinated survived','No Comorbidities survived', 'Comorbidities survived'] values2 = ['Female non survived','Male non survived','0 to 17 years non survived','18 to 29 years non survived', '30 to 39 years non survived','40 to 49 years non survived','50 to 64 years non survived', '65 to 74 years non survived','75 to 84 years non survived','85+ years non survived', 'White non survived','Black non survived','Mixed non survived','Yellow non survived', 'Indigenous non survived', 'Class 0 non survived','Class 1 non survived', 'Class 2 non survived', 'Class 3 non survived','Class 4 non survived', 'Vaccinated non survived','Not vaccinated non survived','No Comorbidities non survived', 'Comorbidities non survived'] data = df[df.name.isin(values)] data1 = df[df.name.isin(values1)] data2 = df[df.name.isin(values2)] df.head() data1.describe() data2.describe() figure(figsize=(8, 8), dpi=80) for lower,upper,y in zip(data['CIme_L'],data['CIme_H'],range(len(data))): plt.plot((lower,upper),(y,y),'ro-',color='black',linewidth=1) plt.yticks(range(len(data)),list(data['name'])) #plt.axvline(linewidth=4, color='r') plt.axvline(x=13.684864, linestyle='--', color='grey') for lower,upper,y in zip(data1['CIme_L'],data1['CIme_H'],range(len(data1))): plt.plot((lower,upper),(y,y),'ro-',color='blue',linewidth=1) #plt.yticks(range(len(data1)),list(data1['name'])) for lower,upper,y in zip(data2['CIme_L'],data2['CIme_H'],range(len(data2))): plt.plot((lower,upper),(y,y),'ro-',color='red',linewidth=1) plt.xlabel('Days') plt.grid() plt.title('Mean ICU period') plt.savefig('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/mean_ICU_period.png',bbox_inches='tight') ``` # CFR ``` from matplotlib.pyplot import figure from datetime import datetime pop = pd.read_csv('/Users/julianeoliveira/Desktop/github/Datasets from the gitcomputations/Populacao/population.csv') pop.Populacao.sum() muni_ibp = pd.read_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/mun_ibp.csv') muni_ibp.ip_vl_n.min() #.head() muni_ibp.ip_vl_n.max() muni_ibp = muni_ibp.rename(columns={'mun_res': 'code'}) pop_muni = pd.read_csv('/Users/julianeoliveira/Desktop/github/Datasets from the gitcomputations/Populacao/pop_2021.csv',sep=';') pop_muni.head() pop_muni['code'] = pop_muni['Cod'].astype(str).str[:-1].astype(np.int64) pop_muni['UF'] = pop_muni['Nome'].map(lambda x: str(x)[-3:-1]) data_pop_ibp = pd.merge(pop_muni,muni_ibp, on='code', how='left') data_pop_ibp data_pop_ibp.to_csv('/Users/julianeoliveira/Desktop/github/Datasets from the gitcomputations/Populacao/pop_ibp.csv') data_pop_ibp['cat_ibp'] = pd.cut(data_pop_ibp.ip_vl_n, bins=[-1.7627, -1.4323, -1.333, -1.0774, -0.6327, 2.7295], include_lowest=True, labels=['0', '1', '2','3','4']) data_pop_ibp['Pop'].sum() data_pop_ibp.groupby(['cat_ibp'])['Pop'].sum().reset_index() data_pop_ibp.groupby(['cat_ibp'])['Pop'].sum().reset_index().Pop.sum() dfall = pd.read_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/cfrs_all.csv') df0 = pd.read_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/cfrs_q0.csv') df1 = pd.read_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/cfrs_q1.csv') df2 = pd.read_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/cfrs_q2.csv') df3 = pd.read_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/cfrs_q3.csv') df4 = pd.read_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/cfrs_q4.csv') def nCFR(cases,deaths,baseline_cfr,alpha,area,pop): nCFR = alpha*deaths.sum()/cases.sum() lang = pd.date_range('2020-02-26', periods=len(cases)).tolist() d = {'cases': cases, 'deaths': deaths, 'cases_cum': cases.cumsum(), 'case_adj': deaths/baseline_cfr, 'case_adj_cum': (alpha*deaths/baseline_cfr).cumsum(), 'date':lang} data = pd.DataFrame(data=d) cases_underreported = data.case_adj_cum[-1:] - data.cases_cum[-1:] per_cases_underreported = (data.case_adj_cum[-1:] - data.cases_cum[-1:])*100/data.case_adj_cum[-1:] plt.plot(data.date, data.cases_cum*100/pop, label='Observed') #semilogy plt.plot(data.date, data.case_adj_cum*100/pop, label='Adjusted') plt.grid() plt.ylabel('( % )', fontsize=12) plt.legend() plt.title(area,fontsize=15) plt.xticks(rotation=45) #plt.yticks(rotation=90) plt.show() print('Total cases =', data.cases.sum(), '\nTotal deaths =', data.deaths.sum(), '\nnCFR =', round(nCFR,3) , '\nTotal cases adjusted =' , data.case_adj_cum[-1:], '\ncases_underreported =', int(cases_underreported), '\nper_cases_underreported', round(per_cases_underreported,2), '\nPrevalencia = ', data.case_adj_cum[-1:]/pop ) data.to_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/'+ area +'.csv') return data nCFR(dfall.cases, dfall.deaths,0.006,1,'Brazil',213317639) nCFR(df0.cases, df0.deaths,0.006,1,'Class 0',230683919) 1.799867e+07 nCFR(df1.cases, df1.deaths,0.006,1,'Class 1',33654268) nCFR(df2.cases, df2.deaths,0.006,1,'Class 2',35161926) nCFR(df3.cases, df3.deaths,0.006,1,'Class 3',40558037) nCFR(df4.cases, df4.deaths,0.006,2,'Class 4',73204429) dtabr = pd.read_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/Brazil.csv') dta0 = pd.read_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/Class 0.csv') dta1 = pd.read_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/Class 1.csv') dta2 = pd.read_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/Class 2.csv') dta3 = pd.read_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/Class 3.csv') dta4 = pd.read_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/Class 4.csv') dtabr.cases_cum[-1:] dta0.cases_cum[-1:] dta1#.cases_cum figure(figsize=(14, 8), dpi=80) plt.subplot(2, 3, 1) #plt.plot(dtabr.cases_cum*100/213317639, label= 'Brazil') # plt.plot(dta0.cases_cum*100/230683919, label= 'Class 0') # plt.plot(dta1.cases_cum*100/33654268, label= 'Class 1') # plt.plot(dta2.cases_cum*100/35161926, label= 'Class 2') # plt.plot(dta3.cases_cum*100/40558037, label= 'Class 3') # plt.plot(dta4.cases_cum*100/73204429, label= 'Class 4') # plt.grid() plt.title('Cummulated cases') plt.ylabel('( % )', fontsize=12) plt.legend() plt.subplot(2, 3, 2) #plt.plot(dtabr.cases_cum*100/213317639, label= 'Brazil') # plt.plot(dta0.deaths.cumsum()*100/230683919, label= 'Class 0') # plt.plot(dta1.deaths.cumsum()*100/33654268, label= 'Class 1') # plt.plot(dta2.deaths.cumsum()*100/35161926, label= 'Class 2') # plt.plot(dta3.deaths.cumsum()*100/40558037, label= 'Class 3') # plt.plot(dta4.deaths.cumsum()*100/73204429, label= 'Class 4') # plt.grid() plt.title('Cummulated deaths') #plt.ylabel('( % )', fontsize=12) plt.legend() figure(figsize=(14, 8), dpi=80) #plot 1: plt.subplot(2, 3, 1) plt.plot(dtabr.cases_cum*100/213317639, label= 'Observed') plt.plot(dtabr.case_adj_cum*100/213317639, label= 'Adjusted') plt.grid() plt.title('Brazil') plt.ylabel('( % )', fontsize=12) plt.legend() #plot 2: plt.subplot(2, 3, 2) plt.plot(dta0.cases_cum*100/230683919, label= 'Observed') plt.plot(dta0.case_adj_cum*100/230683919, label= 'Adjusted') plt.title('Class 0') plt.legend() plt.grid() #plot 3: plt.subplot(2, 3, 3) plt.plot(dta1.cases_cum*100/33654268, label= 'Observed') plt.plot(dta1.case_adj_cum*100/33654268, label= 'Adjusted') plt.title('Class 1') plt.ylabel('( % )', fontsize=12) plt.grid() plt.legend() #plot 2: plt.subplot(2, 3, 4) plt.plot(dta2.cases_cum*100/35161926, label= 'Observed') plt.plot(dta2.case_adj_cum*100/35161926, label= 'Adjusted') plt.title('Class 2') plt.ylabel('( % )', fontsize=12) plt.grid() plt.legend() plt.subplot(2, 3, 5) plt.plot(dta3.cases_cum*100/40558037, label= 'Observed') plt.plot(dta3.case_adj_cum*100/40558037, label= 'Adjusted') plt.title('Class 3') plt.ylabel('( % )', fontsize=12) plt.grid() plt.legend() plt.subplot(2, 3, 6) plt.plot(dta4.cases_cum*100/73204429, label= 'Observed') plt.plot(dta4.case_adj_cum*100/73204429, label= 'Adjusted') plt.title('Class 4') plt.ylabel('( % )', fontsize=12) plt.grid() plt.legend() plt.savefig('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/under_cases.png',bbox_inches='tight') plt.show() 16734170+17998670+17840670+19835830+20544170 16734170+17998670+17840670+19835830+ 41088330 2.054417e+07*100/73204429 figure(figsize=(14, 4), dpi=80) #plot 1: plt.subplot(2, 3, 1) plt.plot(df0.nCFR[20:], label= 'Class 0') plt.grid() plt.legend() #plot 2: plt.subplot(2, 3, 2) plt.plot(df1.nCFR[20:], label= 'Class 1') #plt.axis([0, 1, 0, 0.25]) plt.legend() plt.grid() #plot 3: plt.subplot(2, 3, 3) plt.plot(df2.nCFR[29:], label= 'Class 2') plt.grid() plt.legend() #plot 2: plt.subplot(2, 3, 4) plt.plot(df3.nCFR[33:], label= 'Class 3') plt.grid() plt.legend() plt.subplot(2, 3, 5) plt.plot(df4.nCFR[43:], label= 'Class 4') plt.grid() plt.legend() plt.show() ``` # Odds ratio ``` odds = pd.read_csv('/Users/julianeoliveira/Desktop/github/PAMEpi-Reproducibility-of-published-results/RISK FACTORS AND DISEASE PROFILE OF SARS-COV-2 INFECTIONS IN BRAZIL_A RETROSPECTIVE STUDY/Results/logit_results_ag_tint.csv') odds odds.variable.values odds['variable'] = odds['variable'].replace({'NVACC':'Not Vaccinated', 'eUTI': 'Need ICU','PNEUMOPATI':'Lung Disease', 'SEXO': 'Sex (Male)','IMUNODEPRE':'Immunodeficiency', 'OBESIDADE':'Obesity', 'HEMATOLOGI':'Chronic Hematologic Disease', 'SIND_DOWN':'Down Syndrome', 'RENAL':'Kidney Disease', 'DIABETES':'Diabetes', 'PUERPERA':'Puerperal', 'NEUROLOGIC':'Chronic Neurological Disease', 'OUT_MORBI':' Other comorbidities', 'ASMA':'Asthma', 'HEPATICA':'Liver Disease', 'CARDIOPATI':'Heart Disease', 'AG0t18_AG18t30':'18-29 years', 'AG0t18_AG30t40':'30-39 years', 'AG0t18_AG40t50':'40-49 years','AG0t18_AG50t65':'50-64 years', 'AG0t18_AG65t75':'65-74 years', 'AG0t18_AG75t85':'75-84 years', 'AG0t18_AG85+':'85+ years', 'Branca_Preta':'Black', 'Branca_Parda':'mixed', 'Branca_Amarela':'Yellow', 'Branca_Indigena': 'Indigenous', 'BDI_0_BDI_1':'Class 1', 'BDI_0_BDI_2':'Class 2', 'BDI_0_BDI_3':'Class 3', 'BDI_0_BDI_4':'Class 4', 'T4t12_T0t4': 'Mean hops. 0 - 4 days', 'T4t12_T12t40':'Mean hops. 12 - 40 days', 'T4t12_TM40': 'Mean hops. > 40 days'}) odds = odds[odds.variable != 'MORTE'] odds odds['IC_upper'] = np.exp(odds['value'] + 1.959963984540054*odds['std']) odds['IC_low'] = np.exp(odds['value'] - 1.959963984540054*odds['std']) figure(figsize=(8, 8), dpi=80) for lower,upper,y in zip(odds['IC_low'],odds['IC_upper'],range(len(odds))): plt.plot((lower,upper),(y,y),'ro-',color='black',linewidth=1) plt.yticks(range(len(odds)),list(odds['variable'])) #plt.axvline(linewidth=4, color='r') plt.axvline(x=0, linestyle='--', color='grey') #plt.xlim(xmax = 2, xmin = 0) plt.gca().set_xscale('log') plt.xlim([4e-1, 40]) plt.xlabel('Odds ratio') plt.grid() #plt.title('Mean ICU period') plt.savefig('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/odds.png',bbox_inches='tight') odds.to_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/logit_corr.csv') ``` # Mortality rates ``` mor_H = pd.read_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/Mortalidade_HOSP.csv') mor_U = pd.read_csv('/Users/julianeoliveira/Desktop/Projects/ICODA/Artigo1/Data/Mortalidade_ICU.csv') mor_H ```
github_jupyter
### Importante: El primer paso para poder responder a la pregunta: ยฟCuรกnto de buenos son los resultados de las mรฉtricas de tu modelo? (mae,rmse,...) Necesitas tener unas mรฉtricas con las que poder compararlas. Para ello, debes entrenar el modelo mรกs sencillo (regresiรณn/clasificaciรณn) para poder hacerlo. Este modelo se denomina "baseline". Con las mรฉtricas de este modelo ya puedes realizar una comparaciรณn y saber si el siguiente modelo da mejores o peores resultados. ``` import numpy as np import sklearn.metrics as metrics #importing the Linear Regression Algorithm from sklearn.linear_model import LinearRegression import matplotlib.pyplot as plt import seaborn as sns def regression_results(y_true, y_pred): # Regression metrics explained_variance=metrics.explained_variance_score(y_true, y_pred) mean_absolute_error=metrics.mean_absolute_error(y_true, y_pred) mse=metrics.mean_squared_error(y_true, y_pred) # RMSLE es usado cuando la variable targen se ha convertido al logaritmo (por ser su valor muy grande) if (y_true >= 0).all() and (y_pred >= 0).all(): mean_squared_log_error=metrics.mean_squared_log_error(y_true, y_pred) print('mean_squared_log_error: ', round(mean_squared_log_error,4)) median_absolute_error=metrics.median_absolute_error(y_true, y_pred) r2=metrics.r2_score(y_true, y_pred) print('explained_variance: ', round(explained_variance,4)) # Si se acerca a 1, habrรก aprendido todos los patrones de nuestro modelo. print('r2: ', round(r2,4)) print('MAE: ', round(mean_absolute_error,4)) print('MSE: ', round(mse,4)) print('RMSE: ', round(np.sqrt(mse),4)) X = np.random.rand(1000, 1) X = X.reshape(-1, 1) y = 5 + 9 * X + np.random.randn(1000, 1) plt.scatter(X, y, color='b') from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) plt.scatter(X_train, y_train, color='b') linear_model = LinearRegression() linear_model.fit(X_train, y_train) y_real = y_train y_pred = linear_model.predict(X) y_pred_train = linear_model.predict(X_train) y_pred_test = linear_model.predict(X_test) regression_results(y_true=y_train[:], y_pred=y_pred_train[:]) """ mean_squared_log_error: 0.0114 res usado cuando la variable targen se ha convertido al logaritmo por ser un valor muyu grande -> solo con valores positivos los logaritmos negativos no existen explained_variance: 0.8812 r2: 0.8812 si etso se acerca a 1, correlacion perfecta -> varianza explicativa, cuanto mas bajo peor, ha entendido el 88 % de los datos el R cuadrado es igual de valor que la varianza explicativa. el linear regression, el score es el r2. EN REGRESION LINEAL ES EL R2 PERO EN EL RESTO NO TIENE XQ SER ESTE PODRIA SER R2 MAS NO SE QUE BLEBLEBLE MAE: 0.7903 de media entre todos los puntos, se tiene un error de 0,79. el mae diferencia un outlayer con respecto al mae porque el mae es una media sin mas de errores y el rmse muestra outlayers -> ver foto carpeta MSE: 0.9641 #mas bajo mejor entre varios modelos el que lo tenga mรกs bajo es mejor. RMSE: 0.9819 #mas bajo mejor -> raiz cuadrada si le das una raiz cuadrada a un numero mayor que uno disminuye pero si lo haces a un numero menor de 1 aumenta es decir la raiz cuadrada de 4 es 2 y la de 0,5 es 0,70, un rmse que se acerque al 1 mejor se usa el rmse frente al mse al hacer al cuadrado haces mas grandes los errores, y es mas eficiente computacionalmente hablando regression_results(y_true=y_test, y_pred=y_pred_test) linear_model.score(X, y) sns.distplot((y_test - y_pred_test), bins = 50, hist_kws=dict(edgecolor="black", linewidth=1),color='Blue') sns.distplot((y_train - y_pred_train), bins = 50, hist_kws=dict(edgecolor="black", linewidth=1),color='Blue') sns.distplot((y - y_pred), bins = 50, hist_kws=dict(edgecolor="black", linewidth=1),color='Blue') ```
github_jupyter
<a href="https://colab.research.google.com/github/claytonchagas/intpy_prod/blob/main/9_4_automatic_evaluation_dataone_Digital_RADs_ast_only_files.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` !sudo apt-get update !sudo apt-get install python3.9 !python3.9 -V !which python3.9 ``` #**i. Colab hardware and software specs:** - n1-highmem-2 instance - 2vCPU @ 2.3GHz - 13GB RAM - 100GB Free Space - idle cut-off 90 minutes - maximum lifetime 12 hours ``` # Colab hardware info (processor and memory): # !cat /proc/cpuinfo # !cat /proc/memoinfo # !lscpu !lscpu | egrep 'Model name|Socket|Thread|NUMA|CPU\(s\)' print("---------------------------------") !free -m # Colab SO structure and version !ls -a print("---------------------------------") !ls -l / print("---------------------------------") !lsb_release -a ``` #**ii. Cloning IntPy repository:** - https://github.com/claytonchagas/intpy_dev.git ``` !git clone https://github.com/claytonchagas/intpy_dev.git !ls -a print("---------------------------------") %cd intpy_dev/ !git checkout 7b2fe6c !ls -a print("---------------------------------") !git branch print("---------------------------------") #!git log --pretty=oneline --abbrev-commit #!git log --all --decorate --oneline --graph ``` #**iii. dataone_Digital_RADs experiments' evolutions and cutoff by approach** - This evaluation does not make sense as the simulation parameters are fixed. #**iv. dataone_Digital_RADs distribution experiments', three mixed trials** - This evaluation does not make sense as the simulation parameters are fixed. #**1. Fast execution, all versions (v0.1.x and from v0.2.1.x to v0.2.7.x)** ##**1.1 Fast execution: only intra-cache** ###**1.1.1 Fast execution: only intra-cache => experiment's executions** ``` !cd Digital_RADs;\ rm -rf .intpy;\ echo "IntPy only intra-cache";\ experimento=Digital_RADs.py;\ echo "Experiment: $experimento";\ for i in "--no-cache" "v01x" "v021x" "v022x" "v023x" "v024x" "v025x" "v026x" "v027x";\ do rm -rf output_intra_$i.dat;\ rm -rf .intpy;\ echo "---------------------------------";\ echo "IntPy version $i";\ for j in {1..5};\ do echo "Execution $j";\ rm -rf .intpy;\ if [ "$i" = "--no-cache" ]; then python3.9 $experimento NC_005213.fasta out.fasta 1 GAATC 2 $i >> output_intra_$i.dat;\ else python3.9 $experimento NC_005213.fasta out.fasta 1 GAATC 2 -v $i >> output_intra_$i.dat;\ fi;\ echo "Done execution $j";\ done;\ echo "Done IntPy distribution version $i";\ done;\ !ls -a %cd Digital_RADs/ !ls -a !echo "Statistics evaluation:";\ rm -rf stats_intra.dat;\ for k in "--no-cache" "v01x" "v021x" "v022x" "v023x" "v024x" "v025x" "v026x" "v027x";\ do echo "Statistics version $k" >> stats_intra.dat;\ echo "Statistics version $k";\ python3.9 stats_colab_digi_rads.py output_intra_$k.dat;\ python3.9 stats_colab_digi_rads.py output_intra_$k.dat >> stats_intra.dat;\ echo "---------------------------------";\ done;\ ``` ###**1.1.2 Fast execution: only intra-cache => charts generation** ``` %matplotlib inline import matplotlib.pyplot as plt versions = ['--no-cache', 'v01x', 'v021x', 'v022x', 'v023x', 'v024x', 'v025x', 'v026x', 'v027x'] colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:purple', 'tab:grey', 'tab:olive', 'tab:cyan', 'tab:brown', 'tab:pink'] filev = "f_intra_" data = "data_intra_" dataf = "dataf_intra_" for i, j in zip(versions, colors): filev_version = filev+i data_version = data+i dataf_version = dataf+i file_intra = open("output_intra_"+i+".dat", "r") data_intra = [] dataf_intra = [] for x in file_intra.readlines()[89::90]: data_intra.append(float(x)) file_intra.close() #print(data_intra) for y in data_intra: dataf_intra.append(round(y, 5)) print(i+": ",dataf_intra) running1_1 = ['1st', '2nd', '3rd', '4th', '5th'] plt.figure(figsize = (10, 5)) plt.bar(running1_1, dataf_intra, color =j, width = 0.4) plt.grid(axis='y') for index, datas in enumerate(dataf_intra): plt.text(x=index, y=datas, s=datas, ha = 'center', va = 'bottom', fontweight='bold') plt.xlabel("Running only with intra cache "+i, fontweight='bold') plt.ylabel("Time in seconds", fontweight='bold') plt.title("Chart "+i+" intra - Heat distribution - with intra cache, no inter cache - IntPy "+i+" version", fontweight='bold') plt.savefig("chart_intra_"+i+".png") plt.close() #plt.show() import matplotlib.pyplot as plt file_intra = open("stats_intra.dat", "r") data_intra = [] for x in file_intra.readlines()[5::8]: data_intra.append(round(float(x[8::]), 5)) file_intra.close() print(data_intra) versions = ["--no-cache", "0.1.x", "0.2.1.x", "0.2.2.x", "0.2.3.x", "0.2.4.x", "0.2.5.x", "0.2.6.x", "0.2.7.x"] #colors =['royalblue', 'forestgreen', 'orangered', 'purple', 'skyblue', 'lime', 'lightgrey', 'tan'] colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:purple', 'tab:grey', 'tab:olive', 'tab:cyan', 'tab:brown', 'tab:pink'] plt.figure(figsize = (10, 5)) plt.bar(versions, data_intra, color = colors, width = 0.7) plt.grid(axis='y') for index, datas in enumerate(data_intra): plt.text(x=index, y=datas, s=datas, ha = 'center', va = 'bottom', fontweight='bold') plt.xlabel("Median for 5 executions in each version, intra cache", fontweight='bold') plt.ylabel("Time in seconds", fontweight='bold') plt.title("Heat distribution, cache intra-running, comparison of all versions", fontweight='bold') plt.savefig('compare_median_intra.png') plt.close() #plt.show() ``` ##**1.2 Fast execution: full cache -> intra and inter-cache** ###**1.2.1 Fast execution: full cache -> intra and inter-cache => experiment's executions** ``` !rm -rf .intpy;\ echo "IntPy full cache -> intra and inter-cache";\ experimento=Digital_RADs.py;\ echo "Experiment: $experimento";\ for i in "--no-cache" "v01x" "v021x" "v022x" "v023x" "v024x" "v025x" "v026x" "v027x";\ do rm -rf output_full_$i.dat;\ rm -rf .intpy;\ echo "---------------------------------";\ echo "IntPy version $i";\ for j in {1..5};\ do echo "Execution $j";\ if [ "$i" = "--no-cache" ]; then python3.9 $experimento NC_005213.fasta out.fasta 1 GAATC 2 $i >> output_full_$i.dat;\ else python3.9 $experimento NC_005213.fasta out.fasta 1 GAATC 2 -v $i >> output_full_$i.dat;\ fi;\ echo "Done execution $j";\ done;\ echo "Done IntPy distribution version $i";\ done;\ #!ls -a #%cd Digital_RADs/ !ls -a !echo "Statistics evaluation:";\ rm -rf stats_full.dat;\ for k in "--no-cache" "v01x" "v021x" "v022x" "v023x" "v024x" "v025x" "v026x" "v027x";\ do echo "Statistics version $k" >> stats_full.dat;\ echo "Statistics version $k";\ python3.9 stats_colab_digi_rads.py output_full_$k.dat;\ python3.9 stats_colab_digi_rads.py output_full_$k.dat >> stats_full.dat;\ echo "---------------------------------";\ done;\ ``` ###**1.2.2 Fast execution: full cache -> intra and inter-cache => charts generation** ``` %matplotlib inline import matplotlib.pyplot as plt versions = ['--no-cache', 'v01x', 'v021x', 'v022x', 'v023x', 'v024x', 'v025x', 'v026x', 'v027x'] colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:purple', 'tab:grey', 'tab:olive', 'tab:cyan', 'tab:brown', 'tab:pink'] filev = "f_full_" data = "data_full_" dataf = "dataf_full_" for i, j in zip(versions, colors): filev_version = filev+i data_version = data+i dataf_version = dataf+i file_full = open("output_full_"+i+".dat", "r") data_full = [] dataf_full = [] for x in file_full.readlines()[89::90]: data_full.append(float(x)) file_full.close() for y in data_full: dataf_full.append(round(y, 5)) print(i+": ",dataf_full) running1_1 = ['1st', '2nd', '3rd', '4th', '5th'] plt.figure(figsize = (10, 5)) plt.bar(running1_1, dataf_full, color =j, width = 0.4) plt.grid(axis='y') for index, datas in enumerate(dataf_full): plt.text(x=index, y=datas, s=datas, ha = 'center', va = 'bottom', fontweight='bold') plt.xlabel("Running full cache "+i, fontweight='bold') plt.ylabel("Time in seconds", fontweight='bold') plt.title("Chart "+i+" full - Heat distribution - with intra and inter cache - IntPy "+i+" version", fontweight='bold') plt.savefig("chart_full_"+i+".png") plt.close() #plt.show() import matplotlib.pyplot as plt file_full = open("stats_full.dat", "r") data_full = [] for x in file_full.readlines()[5::8]: data_full.append(round(float(x[8::]), 5)) file_full.close() print(data_full) versions = ["--no-cache", "0.1.x", "0.2.1.x", "0.2.2.x", "0.2.3.x", "0.2.4.x", "0.2.5.x", "0.2.6.x", "0.2.7.x"] #colors =['royalblue', 'forestgreen', 'orangered', 'purple', 'skyblue', 'lime', 'lightgrey', 'tan'] colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:purple', 'tab:grey', 'tab:olive', 'tab:cyan', 'tab:brown', 'tab:pink'] plt.figure(figsize = (10, 5)) plt.bar(versions, data_full, color = colors, width = 0.7) plt.grid(axis='y') for index, datas in enumerate(data_full): plt.text(x=index, y=datas, s=datas, ha = 'center', va = 'bottom', fontweight='bold') plt.xlabel("Median for 5 executions in each version, full cache", fontweight='bold') plt.ylabel("Time in seconds", fontweight='bold') plt.title("Heat distribution, cache intra and inter-running, all versions", fontweight='bold') plt.savefig('compare_median_full.png') plt.close() #plt.show() ``` ##**1.3 Displaying charts to all versions** ###**1.3.1 Only intra-cache charts** ``` versions = ['--no-cache', 'v01x', 'v021x', 'v022x', 'v023x', 'v024x', 'v025x', 'v026x', 'v027x'] from IPython.display import Image, display for i in versions: display(Image("chart_intra_"+i+".png")) print("=====================================================================================") ``` ###**1.3.2 Full cache charts -> intra and inter-cache** ``` versions = ['--no-cache', 'v01x', 'v021x', 'v022x', 'v023x', 'v024x', 'v025x', 'v026x', 'v027x'] from IPython.display import Image, display for i in versions: display(Image("chart_full_"+i+".png")) print("=====================================================================================") ``` ###**1.3.3 Only intra-cache: median comparison chart of all versions** ``` from IPython.display import Image, display display(Image("compare_median_intra.png")) ``` ###**1.3.4 Full cache -> intra and inter-cache: median comparison chart of all versions** ``` from IPython.display import Image, display display(Image("compare_median_full.png")) ```
github_jupyter
import libs ``` # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load in import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) # Input data files are available in the "../input/" directory. # For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory import os print(len(os.listdir("../input/train"))) # Any results you write to the current directory are saved as output. os.mkdir("modifiedtrain") os.mkdir("modifiedtrain/cat") os.mkdir("modifiedtrain/dog") os.listdir("modifiedtrain") from shutil import copyfile for file in os.listdir("../input/train"): name=file.split('.')[0] filename="../input/train/"+file if name=='cat': copyfile(filename,"modifiedtrain/cat/"+file) elif name=='dog': copyfile(filename,"modifiedtrain/dog/"+file) os.listdir("modifiedtrain/dog/") %pylab inline import matplotlib.pyplot as plt from PIL import Image image = Image.open('modifiedtrain/dog/dog.411.jpg') plt.imshow(image) plt.show() import tensorflow as tf from tensorflow.keras.preprocessing.image import ImageDataGenerator train_datagen=ImageDataGenerator(rescale=1./255) train_generator=train_datagen.flow_from_directory("modifiedtrain/",batch_size=20,target_size=(150,150), class_mode='binary') model=tf.keras.models.Sequential([ tf.keras.layers.Conv2D(16,(3,3),activation='relu',input_shape=(150,150,3)), tf.keras.layers.MaxPool2D(2,2), tf.keras.layers.Conv2D(32,(3,3),activation='relu'), tf.keras.layers.MaxPool2D(2,2), tf.keras.layers.Conv2D(64,(3,3),activation='relu'), tf.keras.layers.MaxPool2D(2,2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(512,activation='relu'), tf.keras.layers.Dense(1,activation='sigmoid') ]) model.summary() from tensorflow.keras.optimizers import RMSprop model.compile(loss='binary_crossentropy',optimizer=RMSprop(lr=0.001),metrics=['accuracy']) history=model.fit_generator(train_generator,steps_per_epoch=100,epochs=15) !pip install wget import wget url='https://ichef.bbci.co.uk/images/ic/720x405/p0517py6.jpg' wget.download(url, 'test_image.jpg') import matplotlib.pyplot as plt def load_image(img_path, show=False): img = image.load_img(img_path, target_size=(150, 150)) img_tensor = image.img_to_array(img) # (height, width, channels) img_tensor = np.expand_dims(img_tensor, axis=0) # (1, height, width, channels), add a dimension because the model expects this shape: (batch_size, height, width, channels) img_tensor /= 255. # imshow expects values in the range [0, 1] if show: plt.imshow(img_tensor[0]) plt.axis('off') plt.show() return img_tensor from tensorflow.keras.preprocessing import image img = load_image('test_image.jpg',True) model.predict(img) train_datagen = ImageDataGenerator( rescale=1./255, rotation_range=40, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, horizontal_flip=True, fill_mode='nearest') train_generator=train_datagen.flow_from_directory("modifiedtrain/",batch_size=20,target_size=(150,150), class_mode='binary') history=model.fit_generator(train_generator,steps_per_epoch=100,epochs=5) import os from tensorflow.keras import layers from tensorflow.keras import Model !wget --no-check-certificate \ https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5 \ -O /tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5 from tensorflow.keras.applications.inception_v3 import InceptionV3 local_weights_file = '/tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5' pre_trained_model = InceptionV3(input_shape = (256, 256, 3), include_top = False, weights = None) pre_trained_model.load_weights(local_weights_file) for layer in pre_trained_model.layers: layer.trainable = False # pre_trained_model.summary() last_layer = pre_trained_model.get_layer('mixed7') print('last layer output shape: ', last_layer.output_shape) last_output = last_layer.output from tensorflow.keras.optimizers import RMSprop # Flatten the output layer to 1 dimension x = layers.Flatten()(last_output) # Add a fully connected layer with 1,024 hidden units and ReLU activation x = layers.Dense(1024, activation='relu')(x) # Add a dropout rate of 0.2 x = layers.Dropout(0.2)(x) # Add a final sigmoid layer for classification x = layers.Dense (1, activation='sigmoid')(x) model = Model( pre_trained_model.input, x) model.compile(optimizer = RMSprop(lr=0.0001), loss = 'binary_crossentropy', metrics = ['acc']) from tensorflow.keras.preprocessing.image import ImageDataGenerator train_datagen = ImageDataGenerator( rescale=1./255, rotation_range=40, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, horizontal_flip=True, fill_mode='nearest') train_generator=train_datagen.flow_from_directory("modifiedtrain/",batch_size=20,target_size=(256,256), class_mode='binary') history=model.fit_generator(train_generator,steps_per_epoch=100,epochs=5) import matplotlib.pyplot as plt acc = history.history['acc'] loss = history.history['loss'] epochs = range(len(acc)) plt.plot(epochs, acc, 'r', label='Training accuracy') plt.legend(loc=0) plt.figure() plt.plot(epochs, loss, 'b', label='Training loss') plt.legend(loc=0) plt.figure() ```
github_jupyter
# Calculate the rotation distribution for hot stars ``` import numpy as np import matplotlib.pyplot as plt from plotstuff import colours cols = colours() %matplotlib inline plotpar = {'axes.labelsize': 20, 'text.fontsize': 20, 'legend.fontsize': 15, 'xtick.labelsize': 20, 'ytick.labelsize': 20, 'text.usetex': True} plt.rcParams.update(plotpar) KID, Teff, logg, Mass, Prot, Prot_err, Rper, LPH, w, DC, Flag = \ np.genfromtxt("Table_1_Periodic.txt", delimiter=",", skip_header=1).T m = Teff > 6250 Prot, Rper, Teff = Prot[m], Rper[m], Teff[m] plt.scatter(Prot, np.log(Rper), c=Teff) plt.colorbar() plt.hist(Prot, 50) N, P_bins = np.histogram(Prot, 50) m = N == max(N) ind = int(np.arange(len(P_bins))[m][0] + 1) plt.axvline((P_bins[m] + P_bins[ind])/2, color="r") print((P_bins[m] + P_bins[ind])/2) ``` Fit a Gaussian ``` def Gaussian(par, x): A, mu, sig = par return A * np.exp(-.5*(x-mu)**2/sig**2) def chi2(par, x, y): return sum((y - Gaussian(par, x))**2) import scipy.optimize as sco par_init = 300, 2.10053, 5. x, y = P_bins[1:], N result1 = sco.minimize(chi2, par_init, args=(x, y)) A, mu, sig = result1.x print(A, mu, sig) plt.hist(Prot, 50) xs = np.linspace(0, 70, 1000) ys = Gaussian(result1.x, xs) plt.plot(xs, ys, "r") ``` Fit two Gaussians ``` def Double_Gaussian(par, x): A1, A2, mu1, mu2, sig1, sig2 = par return A1 * np.exp(-.5*(x-mu1)**2/sig1**2) + A2 * np.exp(-.5*(x-mu2)**2/sig2**2) def Double_chi2(par, x, y): return sum((y - Double_Gaussian(par, x))**2) double_par_init = A, mu, sig, 12, 5, 3 result2 = sco.minimize(Double_chi2, double_par_init, args=(x, y)) A1, A2, mu1, mu2, sig1, sig2 = result2.x print(result2.x) print(mu1, mu2) print(sig1, sig2) plt.hist(Prot, 50, color="w", histtype="stepfilled", label="$P_{\mathrm{rot}}~(T_{\mathrm{eff}} > 6250)$") # ,~\mathrm{McQuillan~et~al.~(2013)}$") ys = Double_Gaussian(result2.x, xs) ys1 = Gaussian([A1, mu1, sig1], xs) ys2 = Gaussian([A2, mu2, sig2], xs) plt.plot(xs, ys, color=cols.blue, lw=2, label="$G1 + G2$") plt.plot(xs, ys1, color=cols.orange, lw=2, label="$G1:\mu={0:.1f}, \sigma={1:.1f}$".format(mu1, sig1)) plt.plot(xs, ys2, color=cols.pink, lw=2, label="$G2:\mu={0:.1f}, \sigma={1:.1f}$".format(mu2, sig2)) plt.xlim(0, 30) plt.legend() plt.xlabel("$P_{\mathrm{rot}}~\mathrm{(Days)}$") plt.ylabel("$\mathrm{Number~of~stars}$") plt.subplots_adjust(bottom=.25, left=.25) plt.savefig("hot_star_hist.pdf") print(chi2(result1.x, x, y)/(len(x)-3-1), Double_chi2(result2.x, x, y)/(len(x)-6-1)) ```
github_jupyter
# Implementing logistic regression from scratch The goal of this notebook is to implement your own logistic regression classifier. You will: * Extract features from Amazon product reviews. * Convert an SFrame into a NumPy array. * Implement the link function for logistic regression. * Write a function to compute the derivative of the log likelihood function with respect to a single coefficient. * Implement gradient ascent. * Given a set of coefficients, predict sentiments. * Compute classification accuracy for the logistic regression model. Let's get started! ## Fire up GraphLab Create Make sure you have the latest version of GraphLab Create. Upgrade by ``` pip install graphlab-create --upgrade ``` See [this page](https://dato.com/download/) for detailed instructions on upgrading. ``` # import graphlab import turicreate as tc import pandas as pd import re import string ``` ## Load review dataset For this assignment, we will use a subset of the Amazon product review dataset. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted primarily of positive reviews. ``` products = tc.SFrame('amazon_baby_subset.gl/') ``` One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment. ``` products['sentiment'] ``` Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews. ``` products.head(10)['name'] print('# of positive reviews =', len(products[products['sentiment']==1])) print('# of negative reviews =', len(products[products['sentiment']==-1])) ``` **Note:** For this assignment, we eliminated class imbalance by choosing a subset of the data with a similar number of positive and negative reviews. ## Apply text cleaning on the review data In this section, we will perform some simple feature cleaning using **SFrames**. The last assignment used all words in building bag-of-words features, but here we limit ourselves to 193 words (for simplicity). We compiled a list of 193 most frequent words into a JSON file. Now, we will load these words from this JSON file: ``` import json with open('important_words.json', 'r') as f: # Reads the list of most frequent words important_words = json.load(f) important_words = [str(s) for s in important_words] print(important_words) ``` Now, we will perform 2 simple data transformations: 1. Remove punctuation using [Python's built-in](https://docs.python.org/2/library/string.html) string functionality. 2. Compute word counts (only for **important_words**) We start with *Step 1* which can be done as follows: ``` # def remove_punctuation(text): # import string # return text.translate(string.punctuation) def remove_punctuation(text): regex = re.compile('[%s]' % re.escape(string.punctuation)) return regex.sub('', text) products['review_clean'] = products['review'].apply(remove_punctuation) ``` Now we proceed with *Step 2*. For each word in **important_words**, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in **important_words** which keeps a count of the number of times the respective word occurs in the review text. **Note:** There are several ways of doing this. In this assignment, we use the built-in *count* function for Python lists. Each review string is first split into individual words and the number of occurances of a given word is counted. ``` for word in important_words: # products[word] = products['review_clean'].apply(lambda s : s.split().count(word)) products[word] = products['review_clean'].apply(lambda s : sum(1 for match in re.finditer(r"\b%s\b"% word, s))) ``` The SFrame **products** now contains one column for each of the 193 **important_words**. As an example, the column **perfect** contains a count of the number of times the word **perfect** occurs in each of the reviews. ``` products['perfect'] ``` Now, write some code to compute the number of product reviews that contain the word **perfect**. **Hint**: * First create a column called `contains_perfect` which is set to 1 if the count of the word **perfect** (stored in column **perfect**) is >= 1. * Sum the number of 1s in the column `contains_perfect`. **Quiz Question**. How many reviews contain the word **perfect**? ``` # products.filter_by([0], 'perfect', exclude=True)['perfect'].shape[0] # no reason to make a new column. df = pd.DataFrame() df['perfect'] = products['perfect'] df['contains_perfect'] = df.apply(lambda x : 1 if x['perfect'] >= 1 else 0, axis=1) print(df['contains_perfect'].sum()) ``` ## Convert SFrame to NumPy array As you have seen previously, NumPy is a powerful library for doing matrix manipulation. Let us convert our data to matrices and then implement our algorithms with matrices. First, make sure you can perform the following import. ``` import numpy as np ``` We now provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels. Note that the feature matrix includes an additional column 'intercept' to take account of the intercept term. ``` def get_numpy_data(data_sframe, features, label): data_sframe['intercept'] = 1 features = ['intercept'] + features features_sframe = data_sframe[features] feature_matrix = features_sframe.to_numpy() label_sarray = data_sframe[label] label_array = label_sarray.to_numpy() return(feature_matrix, label_array) ``` Let us convert the data into NumPy arrays. ``` # Warning: This may take a few minutes... feature_matrix, sentiment = get_numpy_data(products, important_words, 'sentiment') ``` **Are you running this notebook on an Amazon EC2 t2.micro instance?** (If you are using your own machine, please skip this section) It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in acceptable amount of time. For interest of time, please refrain from running `get_numpy_data` function. Instead, download the [binary file](https://s3.amazonaws.com/static.dato.com/files/coursera/course-3/numpy-arrays/module-3-assignment-numpy-arrays.npz) containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands: ``` arrays = np.load('module-3-assignment-numpy-arrays.npz') feature_matrix, sentiment = arrays['feature_matrix'], arrays['sentiment'] ``` ``` feature_matrix.shape ``` ** Quiz Question:** How many features are there in the **feature_matrix**? ** Quiz Question:** Assuming that the intercept is present, how does the number of features in **feature_matrix** relate to the number of features in the logistic regression model? ``` feature_matrix.shape[1] # number of features ``` Now, let us see what the **sentiment** column looks like: ``` sentiment ``` ## Estimating conditional probability with link function Recall from lecture that the link function is given by: $$ P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}, $$ where the feature vector $h(\mathbf{x}_i)$ represents the word counts of **important_words** in the review $\mathbf{x}_i$. Complete the following function that implements the link function: ``` ''' produces probablistic estimate for P(y_i = +1 | x_i, w). estimate ranges between 0 and 1. ''' def predict_probability(feature_matrix, coefficients): # Take dot product of feature_matrix and coefficients # YOUR CODE HERE scores = np.dot(feature_matrix, coefficients) # Compute P(y_i = +1 | x_i, w) using the link function # YOUR CODE HERE predictions = (1/(1+np.exp(-scores))) # return predictions return predictions ``` **Aside**. How the link function works with matrix algebra Since the word counts are stored as columns in **feature_matrix**, each $i$-th row of the matrix corresponds to the feature vector $h(\mathbf{x}_i)$: $$ [\text{feature_matrix}] = \left[ \begin{array}{c} h(\mathbf{x}_1)^T \\ h(\mathbf{x}_2)^T \\ \vdots \\ h(\mathbf{x}_N)^T \end{array} \right] = \left[ \begin{array}{cccc} h_0(\mathbf{x}_1) & h_1(\mathbf{x}_1) & \cdots & h_D(\mathbf{x}_1) \\ h_0(\mathbf{x}_2) & h_1(\mathbf{x}_2) & \cdots & h_D(\mathbf{x}_2) \\ \vdots & \vdots & \ddots & \vdots \\ h_0(\mathbf{x}_N) & h_1(\mathbf{x}_N) & \cdots & h_D(\mathbf{x}_N) \end{array} \right] $$ By the rules of matrix multiplication, the score vector containing elements $\mathbf{w}^T h(\mathbf{x}_i)$ is obtained by multiplying **feature_matrix** and the coefficient vector $\mathbf{w}$. $$ [\text{score}] = [\text{feature_matrix}]\mathbf{w} = \left[ \begin{array}{c} h(\mathbf{x}_1)^T \\ h(\mathbf{x}_2)^T \\ \vdots \\ h(\mathbf{x}_N)^T \end{array} \right] \mathbf{w} = \left[ \begin{array}{c} h(\mathbf{x}_1)^T\mathbf{w} \\ h(\mathbf{x}_2)^T\mathbf{w} \\ \vdots \\ h(\mathbf{x}_N)^T\mathbf{w} \end{array} \right] = \left[ \begin{array}{c} \mathbf{w}^T h(\mathbf{x}_1) \\ \mathbf{w}^T h(\mathbf{x}_2) \\ \vdots \\ \mathbf{w}^T h(\mathbf{x}_N) \end{array} \right] $$ **Checkpoint** Just to make sure you are on the right track, we have provided a few examples. If your `predict_probability` function is implemented correctly, then the outputs will match: ``` dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]]) dummy_coefficients = np.array([1., 3., -1.]) correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] ) correct_predictions = np.array( [ 1./(1+np.exp(-correct_scores[0])), 1./(1+np.exp(-correct_scores[1])) ] ) print('The following outputs must match ') print('------------------------------------------------') print('correct_predictions =', correct_predictions) print('output of predict_probability =', predict_probability(dummy_feature_matrix, dummy_coefficients)) ``` ## Compute derivative of log likelihood with respect to a single coefficient Recall from lecture: $$ \frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) $$ We will now write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. The function accepts two arguments: * `errors` vector containing $\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})$ for all $i$. * `feature` vector containing $h_j(\mathbf{x}_i)$ for all $i$. Complete the following code block: ``` def feature_derivative(errors, feature): # Compute the dot product of errors and feature derivative = np.dot(errors, feature) # Return the derivative return derivative ``` In the main lecture, our focus was on the likelihood. In the advanced optional video, however, we introduced a transformation of this likelihood---called the log likelihood---that simplifies the derivation of the gradient and is more numerically stable. Due to its numerical stability, we will use the log likelihood instead of the likelihood to assess the algorithm. The log likelihood is computed using the following formula (see the advanced optional video if you are curious about the derivation of this equation): $$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$ We provide a function to compute the log likelihood for the entire dataset. ``` def compute_log_likelihood(feature_matrix, sentiment, coefficients): indicator = (sentiment==+1) scores = np.dot(feature_matrix, coefficients) logexp = np.log(1. + np.exp(-scores)) # Simple check to prevent overflow mask = np.isinf(logexp) logexp[mask] = -scores[mask] lp = np.sum((indicator-1)*scores - logexp) return lp ``` **Checkpoint** Just to make sure we are on the same page, run the following code block and check that the outputs match. ``` dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]]) dummy_coefficients = np.array([1., 3., -1.]) dummy_sentiment = np.array([-1, 1]) correct_indicators = np.array( [ -1==+1, 1==+1 ] ) correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] ) correct_first_term = np.array( [ (correct_indicators[0]-1)*correct_scores[0], (correct_indicators[1]-1)*correct_scores[1] ] ) correct_second_term = np.array( [ np.log(1. + np.exp(-correct_scores[0])), np.log(1. + np.exp(-correct_scores[1])) ] ) correct_ll = sum( [ correct_first_term[0]-correct_second_term[0], correct_first_term[1]-correct_second_term[1] ] ) print('The following outputs must match ') print('------------------------------------------------') print('correct_log_likelihood =', correct_ll) print('output of compute_log_likelihood =', compute_log_likelihood(dummy_feature_matrix, dummy_sentiment, dummy_coefficients)) ``` ## Taking gradient steps Now we are ready to implement our own logistic regression. All we have to do is to write a gradient ascent function that takes gradient steps towards the optimum. Complete the following function to solve the logistic regression model using gradient ascent: ``` from math import sqrt def logistic_regression(feature_matrix, sentiment, initial_coefficients, step_size, max_iter): coefficients = np.array(initial_coefficients) # make sure it's a numpy array for itr in range(max_iter): # Predict P(y_i = +1|x_i,w) using your predict_probability() function # YOUR CODE HERE predictions = predict_probability(feature_matrix, coefficients) # Compute indicator value for (y_i = +1) indicator = (sentiment==+1) # Compute the errors as indicator - predictions errors = indicator - predictions for j in range(len(coefficients)): # loop over each coefficient # Recall that feature_matrix[:,j] is the feature column associated with coefficients[j]. # Compute the derivative for coefficients[j]. Save it in a variable called derivative # YOUR CODE HERE derivative = feature_derivative(errors, feature_matrix[:,j]) # add the step size times the derivative to the current coefficient ## YOUR CODE HERE coefficients[j] += step_size*derivative # Checking whether log likelihood is increasing if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \ or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0: lp = compute_log_likelihood(feature_matrix, sentiment, coefficients) print('iteration %*d: log likelihood of observed labels = %.8f' % \ (int(np.ceil(np.log10(max_iter))), itr, lp)) return coefficients ``` Now, let us run the logistic regression solver. ``` coefficients = logistic_regression(feature_matrix, sentiment, initial_coefficients=np.zeros(194), step_size=1e-7, max_iter=301) print(np.sum(coefficients)) ``` **Quiz Question:** As each iteration of gradient ascent passes, does the log likelihood increase or decrease? ## Predicting sentiments Recall from lecture that class predictions for a data point $\mathbf{x}$ can be computed from the coefficients $\mathbf{w}$ using the following formula: $$ \hat{y}_i = \left\{ \begin{array}{ll} +1 & \mathbf{x}_i^T\mathbf{w} > 0 \\ -1 & \mathbf{x}_i^T\mathbf{w} \leq 0 \\ \end{array} \right. $$ Now, we will write some code to compute class predictions. We will do this in two steps: * **Step 1**: First compute the **scores** using **feature_matrix** and **coefficients** using a dot product. * **Step 2**: Using the formula above, compute the class predictions from the scores. Step 1 can be implemented as follows: ``` # Compute the scores as a dot product between feature_matrix and coefficients. scores = np.dot(feature_matrix, coefficients) print(np.sum(scores)) ``` Now, complete the following code block for **Step 2** to compute the class predictions using the **scores** obtained above: ``` apply_threshold = np.vectorize(lambda x: 1. if x > 0 else -1.) predictions = apply_threshold(scores) positive_sentiments = (predictions == 1).sum() #(predictions == sentiment).sum() negative_sentiments = (predictions == -1).sum()#sentiment.shape[0] - positive_sentiments ``` ** Quiz Question: ** How many reviews were predicted to have positive sentiment? ``` print('Positive sentiments: %d and Negative sentiments: %d' % (positive_sentiments, negative_sentiments)) ``` ## Measuring accuracy We will now measure the classification accuracy of the model. Recall from the lecture that the classification accuracy can be computed as follows: $$ \mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}} $$ Complete the following code block to compute the accuracy of the model. ``` # I like using pandas so lets organize this data df = pd.DataFrame() df['sentiment'] = products['sentiment'] df['score'] = scores df['predicted sentiment'] = df['score'].apply(lambda x: 1 if(x>0) else -1) df['correct prediction'] = df.apply(lambda x : True if x['sentiment'] == x['predicted sentiment'] else False, axis=1) print(df.head(10)) print(df[df['predicted sentiment']==1].shape) num_mistakes = df[df['correct prediction'] == False].shape[0] accuracy = df[df['correct prediction'] == True].shape[0] / df.shape[0] print('-----------------------------------------------------') print('# Reviews correctly classified =', len(products) - num_mistakes) print('# Reviews incorrectly classified =', num_mistakes) print('# Reviews total =', len(products)) print('-----------------------------------------------------') print('Accuracy = %.2f' % accuracy) ``` **Quiz Question**: What is the accuracy of the model on predictions made above? (round to 2 digits of accuracy) ## Which words contribute most to positive & negative sentiments? Recall that in Module 2 assignment, we were able to compute the "**most positive words**". These are words that correspond most strongly with positive reviews. In order to do this, we will first do the following: * Treat each coefficient as a tuple, i.e. (**word**, **coefficient_value**). * Sort all the (**word**, **coefficient_value**) tuples by **coefficient_value** in descending order. ``` coefficients = list(coefficients[1:]) # exclude intercept word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients)] word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True) ``` Now, **word_coefficient_tuples** contains a sorted list of (**word**, **coefficient_value**) tuples. The first 10 elements in this list correspond to the words that are most positive. ### Ten "most positive" words Now, we compute the 10 words that have the most positive coefficient values. These words are associated with positive sentiment. ``` word_coefficient_tuples[:10] ``` ** Quiz Question:** Which word is **not** present in the top 10 "most positive" words? ### Ten "most negative" words Next, we repeat this exercise on the 10 most negative words. That is, we compute the 10 words that have the most negative coefficient values. These words are associated with negative sentiment. ``` word_coefficient_tuples[-10:] ``` ** Quiz Question:** Which word is **not** present in the top 10 "most negative" words?
github_jupyter
# Project: Exploring and Analysing European Football ## Table of Contents <ul> <li><a href="#intro">Introduction</a></li> <li><a href="#wrangling">Data Wrangling</a></li> <li><a href="#eda">Exploratory Data Analysis</a></li> <li><a href="#conclusions">Conclusions</a></li> </ul> <a id='intro'></a> ## Introduction > The dataset chosen for the following analysis is "The Soccer Database" put together by Hugo Mathien on Kaggle. The dataset provides in depth information about European Soccer Competitions held between 2008-2016 including data of about 25,000 matches, 10,000 players, 300 teams from all the major European leagues. To explore the data, relative tables were created in SQL to effectively depict the relation between team attributes and thier Home and Away success through the years. > **Q1**: What team has improved the most when comparing the performance for 2008-09 season to the 2015-16 season? > **Q2**: Analysis of playing styles for the most successful teams for both 2008-09 and 2015-16 season, Is there a change in playing philosophy? ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline ``` <a id='wrangling'></a> ## Data Wrangling ### General Properties > The tables were loaded through the pandas read function and some basic information is printed for clarity about the structure of the dataframes ``` # Load your data and print out a few lines. Perform operations to inspect data # types and look for instances of missing or possibly errant data. team_08=pd.read_csv('teamatt08.csv') #Team name and Team attributes (Playing style) team_16=pd.read_csv('teamatt16.csv') #Team name and Team attributes (Playing style) play_08=pd.read_csv('playatt08.csv') #Player name and Player attributes(2008-09) play_16=pd.read_csv('playatt16.csv') #Player name and Player attributes(2015-16) home=pd.read_csv('home_match.csv') #Matches played at home (2008-2016) away=pd.read_csv('away_match.csv') #Matches played away from home (2008-2016) team_08.head(3) team_08.info() play_16.head(3) play_16.info() home.head() home.info() ``` ### Data Cleaning > The id, team_fifa_api_id, team_api_id columns were recurring so additional ones were dropped from both team_08 and team_16 dataframes. Date column was no longer relevant to our analysis as Season column would provide a fair idea of time, so date was also dropped. ``` team_08.drop(['id.1','team_fifa_api_id.1','team_api_id.1','date'],axis=1,inplace=True) team_16.drop(['id.1','team_fifa_api_id.1','team_api_id.1','date'],axis=1,inplace=True) ``` > buildUpPlayDribbling column in both team tables had large amount of missing values. Just dropping those missing values would have made data inconsistent so, the column is dropped to increase legibility of the data. ``` team_08.drop(['buildUpPlayDribbling'],axis=1,inplace=True) team_16.drop(['buildUpPlayDribbling'],axis=1,inplace=True) ``` > Date column was not relevant to our analysis to it was dropped in the following cell. ``` play_08.drop(['date'],axis=1,inplace=True) play_16.drop(['date'],axis=1,inplace=True) play_16.head(3) ``` <a id='eda'></a> ## Exploratory Data Analysis ### Q1: What team has improved the most when comparing the performance for 2008-09 season to the 2015-16 season? > The following queries are used to list out all the home victories by every team in 2008-09 season ``` home.head() home_wins=home.query('home_goals>away_goals') home_wins=home_wins.query('Season=="2008/2009"') home_wins.head() ``` > The detailed view of all victories were found above. But to judge success of a particular team we need the number of victories in that particular season. > Number of wins was calculated by using the value_counts function > The resulting list was again converted to a dataframe for ease of calculations later in the analysis. ``` home_win08=home_wins.Team.value_counts() home_vic08=home_win08.to_frame() home_vic08.columns=['Wins'] home_vic08.head() ``` > The process was repeated to find the number of away wins in the same season 2008-09 ``` away_wins=away.query('away_goals>home_goals') away_wins=away_wins.query('Season=="2008/2009"') away_win08=away_wins.Team.value_counts() away_vic08=away_win08.to_frame() away_vic08.columns=['Wins'] away_vic08.head() ``` > Total victories are calculated by adding both home and away results > This gives the list of wins by each team in the 2008-09 seaon ``` total_vic08=home_vic08+away_vic08 total_vic08.sort_values(['Wins'],ascending=[False],inplace=True) total_vic08.head() ``` > The same process was carried out to find the number of wins by each team in the 2015-16 season ``` home_wins=home.query('home_goals>away_goals') home_wins=home_wins.query('Season=="2015/2016"') home_win16=home_wins.Team.value_counts() home_vic16=home_win16.to_frame() home_vic16.columns=['Wins'] home_vic16.head() away_wins=away.query('away_goals>home_goals') away_wins=away_wins.query('Season=="2015/2016"') away_win16=away_wins.Team.value_counts() away_vic16=away_win16.to_frame() away_vic16.columns=['Wins'] away_vic16.head() total_vic16=home_vic16+away_vic16 total_vic16.sort_values(['Wins'],ascending=[False],inplace=True) total_vic16.head() ``` > The dataframe most_improved was created consisting the teams that had improved the most in terms of wins over the 8 year period ``` most_improved=total_vic16-total_vic08 most_improved.sort_values(['Wins'],ascending=[False],inplace=True) improvement=most_improved.head() improvement.plot(kind='bar',subplots=True,figsize=(8,8)) improvement.head() ``` ### Q2: Analysis of playing styles for the most successful teams for both 2008-09 and 2015-16 season, Is there a change in playing philosophy? > The total victories table that was obtained above is used here to list out the five most dominant teams in the 2008-09 season ``` total_vic08.head() ``` > The detailed team attributes data was listed out for these five teams from the initial team_08 table ``` vic_stats08=team_08.query('team_long_name=="Manchester United" | team_long_name=="FC Barcelona" | team_long_name=="Rangers"| team_long_name=="Real Madrid CF"|team_long_name=="Inter"') vic_stats08.index=vic_stats08['team_long_name'] vic_stats08.head() ``` > The columns containing string values were dropped as our analysis was based on visualisation of different playing techniques such as : > **1**: Speed of build up > **2**: Accuracy of passing in build up > **3**: Chances created by shooting > **4**: Chances created by crossing into the box > **5**: Defensive organisation and aggression ``` vic_stats08.drop(['id','team_api_id','team_fifa_api_id','team_long_name','team_short_name','buildUpPlayDribblingClass','buildUpPlayPassingClass','chanceCreationShootingClass','chanceCreationPositioningClass','defencePressureClass','defenceAggressionClass','defenceTeamWidthClass','defenceDefenderLineClass'],axis=1,inplace=True) vic_stats08.drop(['buildUpPlaySpeedClass','buildUpPlayPositioningClass','chanceCreationPassingClass','chanceCreationCrossingClass'],axis=1,inplace=True) vic_stats08.mean().plot(kind='area',figsize=(16,8)) total_vic16.head() vic_stats16=team_16.query('team_long_name=="Paris Saint-Germain" | team_long_name=="FC Barcelona" | team_long_name=="Juventus"| team_long_name=="SL Benfica"|team_long_name=="FC Bayern Munich"') vic_stats16.index=vic_stats16['team_long_name'] vic_stats16.drop(['id','team_api_id','team_fifa_api_id','team_long_name','team_short_name','buildUpPlayDribblingClass','buildUpPlayPassingClass','chanceCreationShootingClass','chanceCreationPositioningClass','defencePressureClass','defenceAggressionClass','defenceTeamWidthClass','defenceDefenderLineClass'],axis=1,inplace=True) vic_stats16.drop(['buildUpPlaySpeedClass','buildUpPlayPositioningClass','chanceCreationPassingClass','chanceCreationCrossingClass'],axis=1,inplace=True) vic_stats16.mean().plot(kind='area',figsize=(16,8)) ``` <a id='conclusions'></a> ## Conclusions > **1** : After performing the necessary analysis, It is evident that teams such as <i> Napoli, Benfica, Paris Saint Germain </i> were among the most improved teams improving by 13, 12, 11 victories respectively > **2** : Observing the 2008-09 playing style bar-graph, dominance of high chance creation by shooting is noticed. This is accompanied by high speed of passing and high volume of crosses into the box pointing towards a more <b>attacking</b> style of play by successful teams in that particular time > **3** : Observing the 2015-16 playing style bar-graph, a more <b>conservative</b> playing style is observed with focus on defensive aggression and pressure
github_jupyter
<a href="https://colab.research.google.com/github/ghost331/Recurrent-Neural-Network/blob/main/Covid_19_Analysis_using_RNN_with_LSTM.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` #Data: https://github.com/CSSEGISandData/COVID-19/blob/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv import pandas as pd import numpy as np import matplotlib.pyplot as plt country = "India" #Total COVID confirmed cases df_confirmed = pd.read_csv("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv") df_confirmed_country = df_confirmed[df_confirmed["Country/Region"] == country] df_confirmed_country = pd.DataFrame(df_confirmed_country[df_confirmed_country.columns[4:]].sum(),columns=["confirmed"]) df_confirmed_country.index = pd.to_datetime(df_confirmed_country.index,format='%m/%d/%y') df_confirmed_country.plot(figsize=(10,5),title="COVID confirmed cases") df_confirmed_country.tail(10) print("Total days in the dataset", len(df_confirmed_country)) #Use data until 14 days before as training x = len(df_confirmed_country)-14 train=df_confirmed_country.iloc[300:x] test = df_confirmed_country.iloc[x:] ##scale or normalize data as the data is too skewed from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() scaler.fit(train) train_scaled = scaler.transform(train) test_scaled = scaler.transform(test) ## Use TimeSeriestrain_generator to generate data in sequences. #Alternatively we can create our own sequences. from keras.preprocessing.sequence import TimeseriesGenerator #Sequence size has an impact on prediction, especially since COVID is unpredictable! seq_size = 7 ## number of steps (lookback) n_features = 1 ## number of features. This dataset is univariate so it is 1 train_generator = TimeseriesGenerator(train_scaled, train_scaled, length = seq_size, batch_size=1) print("Total number of samples in the original training data = ", len(train)) # 660 print("Total number of samples in the generated data = ", len(train_generator)) #653 with seq_size=7 #Check data shape from generator x,y = train_generator[10] #Check train_generator #Takes 7 days as x and 8th day as y (for seq_size=7) #Also generate test data test_generator = TimeseriesGenerator(test_scaled, test_scaled, length=seq_size, batch_size=1) print("Total number of samples in the original training data = ", len(test)) # 14 as we're using last 14 days for test print("Total number of samples in the generated data = ", len(test_generator)) # 7 #Check data shape from generator x,y = test_generator[0] from keras.models import Sequential from keras.layers import Dense, LSTM, Dropout, Activation #Define Model model = Sequential() model.add(LSTM(128, activation='relu', return_sequences=True, input_shape=(seq_size, n_features))) model.add(LSTM(64, activation='relu')) model.add(Dense(32)) model.add(Dense(1)) model.compile(optimizer='adam', loss='mean_squared_error') model.summary() print('Train...') history = model.fit_generator(train_generator, validation_data=test_generator, epochs=30, steps_per_epoch=10) #plot the training and validation accuracy and loss at each epoch loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(1, len(loss) + 1) plt.plot(epochs, loss, 'y', label='Training loss') plt.plot(epochs, val_loss, 'r', label='Validation loss') plt.title('Training and validation loss') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() #forecast prediction = [] #Empty list to populate later with predictions current_batch = train_scaled[-seq_size:] #Final data points in train current_batch = current_batch.reshape(1, seq_size, n_features) #Reshape ## Predict future, beyond test dates future = 7 #Days for i in range(len(test) + future): current_pred = model.predict(current_batch)[0] prediction.append(current_pred) current_batch = np.append(current_batch[:,1:,:],[[current_pred]],axis=1) ### Inverse transform to before scaling so we get actual numbers rescaled_prediction = scaler.inverse_transform(prediction) time_series_array = test.index #Get dates for test data #Add new dates for the forecast period for k in range(0, future): time_series_array = time_series_array.append(time_series_array[-1:] + pd.DateOffset(1)) #Create a dataframe to capture the forecast data df_forecast = pd.DataFrame(columns=["actual_confirmed","predicted"], index=time_series_array) df_forecast.loc[:,"predicted"] = rescaled_prediction[:,0] df_forecast.loc[:,"actual_confirmed"] = test["confirmed"] #Plot df_forecast.plot(title="Predictions for next 7 days") ```
github_jupyter
### Installation ``` pip install -q tensorflow tensorflow-datasets ``` #### Imports ``` import tensorflow as tf import matplotlib.pyplot as plt import numpy as np from tensorflow import keras import tensorflow_datasets as tfds ``` ### Checking datasets ``` print(tfds.list_builders()) ``` ### Getting data Infomation ``` builder = tfds.builder('rock_paper_scissors') info = builder.info print(info) ``` ### Data Preparation ``` train = tfds.load(name='rock_paper_scissors', split="train") test = tfds.load(name='rock_paper_scissors', split='test') ``` ### Iterating over data > To iterate over a tensorflow dataset we do it as follows ``` for data in train: print(data['image'], data['label']) break ``` ### Creating a Numpy data > We are going to scale our data and convert it to a nummpy array ``` train_images = np.array([data['image'].numpy()/255 for data in train]) train_labels =np.array([data['label'].numpy() for data in train]) test_image = np.array([data['image'].numpy()/255 for data in test]) test_labels = np.array([data['label'].numpy() for data in test]) train_images[0] ``` ### Class Names 0 - Rock 1 - Paper 2 - Scissors ``` class_names = np.array(["rock", "paper", "scissor"]) ``` ### Creating a NN ``` input_shape = train_images[0].shape input_shape model = keras.Sequential([ keras.layers.Conv2D(32, (3, 3), input_shape=input_shape, activation='relu'), keras.layers.MaxPool2D((3,3)) , keras.layers.Conv2D(64, (2, 2), activation='relu'), keras.layers.MaxPool2D((2,2)), keras.layers.Conv2D(64, (2, 2), activation='relu'), keras.layers.MaxPool2D((2,2)), keras.layers.Flatten(), keras.layers.Dense(64, activation='relu'), keras.layers.Dense(32, activation='relu'), keras.layers.Dense(3, activation='softmax') ]) model.summary() ``` ### Combiling the Model ``` model.compile( optimizer = keras.optimizers.Adam(learning_rate=.0001), metrics=["accuracy"], loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True) ) ``` ### Fitting the ModeL ``` EPOCHS = 5 BATCH_SIZE = 4 VALIDATION_SET = (test_image, test_labels) history = model.fit(train_images, train_labels, epochs=EPOCHS, validation_data=VALIDATION_SET, batch_size=BATCH_SIZE) ``` ### Model Evaluation Conclusion Our model is performing perfect. The loss on the train_set is almost 0 as well as the validation loss. The accuracy on the train set is `100%` compared to `83%` accuracy on the test set. > The model is just overtraining but giving us good results on the validation set. ### Making Predictions ``` predictions = model.predict(test_image[:10]) for i, j in zip(predictions, test_labels[:10]): print(class_names[np.argmax(i)],"-------->", class_names[j]) ``` ### Tunning Hyper Parameters -- Keras-Tunner * [Docs](https://www.tensorflow.org/tutorials/keras/keras_tuner) ### Installation ``` pip install -q -U keras-tuner ``` ### Importing ``` import kerastuner as kt def model_builder(hp): model = keras.Sequential() # we want the model to find the best unit and the activation function for the first layer for us model.add(keras.layers.Conv2D(hp.Int('units', min_value=32, max_value=512, step=32),(3, 3), input_shape=input_shape, activation=hp.Choice('activation-fn',values=['relu', 'sgd']))) model.add(keras.layers.MaxPool2D((3,3))) model.add(keras.layers.Conv2D(64, (2, 2), activation='relu')) model.add(keras.layers.MaxPool2D((2,2))) model.add(keras.layers.Conv2D(64, (2, 2), activation='relu')) model.add(keras.layers.MaxPool2D((2,2))) model.add(keras.layers.Flatten()) model.add(keras.layers.Dense(64, activation='relu')) model.add(keras.layers.Dense(32, activation='relu')) model.add(keras.layers.Dense(3, activation='softmax')) model.compile(optimizer=keras.optimizers.Adam(learning_rate=hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) return model tuner = kt.Hyperband(model_builder, objective='val_accuracy', max_epochs=10, ) tuner.search(train_images, train_labels, validation_data=VALIDATION_SET, epochs=EPOCHS, batch_size=BATCH_SIZE) ``` > That's basically how the `kerastunner` works
github_jupyter
<table width="100%"> <tr> <td style="background-color:#ffffff;"> <a href="https://qsoftware.lu.lv/index.php/qworld/" target="_blank"><img src="../images/qworld.jpg" width="35%" align="left"> </a></td> <td style="background-color:#ffffff;vertical-align:bottom;text-align:right;"> prepared by Abuzer Yakaryilmaz (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>) </td> </tr></table> <table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table> $ \newcommand{\bra}[1]{\langle #1|} $ $ \newcommand{\ket}[1]{|#1\rangle} $ $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $ $ \newcommand{\dot}[2]{ #1 \cdot #2} $ $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $ $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $ $ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $ $ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $ $ \newcommand{\mypar}[1]{\left( #1 \right)} $ $ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $ $ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $ $ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $ $ \newcommand{\onehalf}{\frac{1}{2}} $ $ \newcommand{\donehalf}{\dfrac{1}{2}} $ $ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $ $ \newcommand{\vzero}{\myvector{1\\0}} $ $ \newcommand{\vone}{\myvector{0\\1}} $ $ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $ $ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $ $ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $ $ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $ $ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $ $ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $ $ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $ $ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ <h2> <font color="blue"> Solutions for </font>Coin Flip: A Probabilistic Bit</h2> <a id="task1"></a> <h3> Task 1: Simulating FairCoin in Python</h3> Flip a fair coin 100 times. Calculate the total number of heads and tails, and then check the ratio of the number of heads and the number of tails. Do the same experiment 1000 times. Do the same experiment 10,000 times. Do the same experiment 100,000 times. Do your results get close to the ideal case (the numbers of heads and tails are equal)? <h3>Solution</h3> ``` from random import randrange for experiment in [100,1000,10000,100000]: heads = tails = 0 for i in range(experiment): if randrange(2) == 0: heads = heads + 1 else: tails = tails + 1 print("experiment:",experiment) print("heads =",heads," tails = ",tails) print("the ratio of #heads/#tails is",(round(heads/tails,4))) print() # empty line ``` <a id="task2"></a> <h3> Task 2: Simulating BiasedCoin in Python</h3> Flip the following biased coin 100 times. Calcuate the total numbers of heads and tails, and then check the ratio of the number of heads and the number of tails. $ BiasedCoin = \begin{array}{c|cc} & \mathbf{Head} & \mathbf{Tail} \\ \hline \mathbf{Head} & 0.6 & 0.6 \\ \mathbf{Tail} & 0.4 & 0.4 \end{array} $ Do the same experiment 1000 times. Do the same experiment 10,000 times. Do the same experiment 100,000 times. Do your results get close to the ideal case $ \mypar{ \dfrac{ \mbox{# of heads} }{ \mbox{# of tails} } = \dfrac{0.6}{0.4} = 1.50000000 } $? <h3>Solution</h3> ``` from random import randrange # let's pick a random number between {0,1,...,99} # it is expected to be less than 60 with probability 0.6 # and greater than or equal to 60 with probability 0.4 for experiment in [100,1000,10000,100000]: heads = tails = 0 for i in range(experiment): if randrange(100) <60: heads = heads + 1 # with probability 0.6 else: tails = tails + 1 # with probability 0.4 print("experiment:",experiment) print("heads =",heads," tails = ",tails) print("the ratio of #heads/#tails is",(round(heads/tails,4))) print() # empty line ``` <a id="task3"></a> <h3> Task 3</h3> Write a function to implement the described biased coin, The inputs are integers $ N >0 $ and $ 0 \leq B < N $. The output is either "Heads" or "Tails". <h3>Solution</h3> ``` def biased_coin(N,B): from random import randrange random_number = randrange(N) if random_number < B: return "Heads" else: return "Tails" ``` <a id="task4"></a> <h3> Task 4</h3> We use the biased coin described in Task 3. (You may use the function given in the solution.) We pick $ N $ as 101. Our task is to determine the value of $ B $ experimentially without checking its value directly. Flip the (same) biased coin 500 times, collect the statistics, and then guess the bias. Compare your guess with the actual bias by calculating the error (the absolute value of the difference). <h3>Solution</h3> ``` def biased_coin(N,B): from random import randrange random_number = randrange(N) if random_number < B: return "Heads" else: return "Tails" from random import randrange N = 101 B = randrange(100) total_tosses = 500 the_number_of_heads = 0 for i in range(total_tosses): if biased_coin(N,B) == "Heads": the_number_of_heads = the_number_of_heads + 1 my_guess = the_number_of_heads/total_tosses real_bias = B/N error = abs(my_guess-real_bias)/real_bias*100 print("my guess is",my_guess) print("real bias is",real_bias) print("error (%) is",error) ```
github_jupyter
### Test spatial distribution of molecular clusters: 1) to determine the spatiall distribution of molecular cell types (a.k.a. whether they are clustered, dispersed or uniformly distributed), we compared the cell types with a CSR (complete spatial randomness) process and performed a monte carlo test of CSR (Cressie; Waller). We simulated the CSR process by randomly sampling cells in the data 1,000 times to generate a distribution of the averaged distance to nearest neighbor under CSR (ANNCSR). The number of random sampled cells was matched to that in each molecular cell type. The ANN from each molecular cell types (ANNMol) was calculated and compared to the CSR distribution to calculate the p-value. 2) to determine whether the molecular cell types are enriched within proposed subregions, we used an approach similar to the Quadrat statistic (Cressie; Waller), instead of quadrat, the proposed anatomical parcellations are used for this analysis. One hypothesis was that the unequal distributions of molecular types within propose LHA subdomains are due to differences in cell/point densities in these subregions. To test this, we simulated the distribution by shuffling neurons' molecular identity 1000 times to compute the distribution of the ฯ‡ 2 statistics for each cell type. The ฯ‡ 2 statistic from the observed molecular cell types was compared to the distribution of expected ฯ‡ 2 statistics under the above hypothesis to calculate the p values. 3) to determine which subregion the given molecular cluster is enriched in, we performed the permutation test, where we shuffled the position of neurons from each molecular type 1,000 times and calculated the distribution of regional enrichment for any given molecular cell type. The observed fraction of neurons enriched in a given subregion from each molecular cell type was compared to the expected distribution from the random process to calculate the p values. ``` import os, sys import numpy as np import pandas as pd from glob import glob from skimage.io import imread, imsave from os.path import abspath, dirname import matplotlib.pyplot as plt import matplotlib matplotlib.style.use('default') from scipy import stats, spatial import seaborn as sns from scipy.stats import kde, pearsonr from sklearn.utils import shuffle #import scanpy as sc lha_neuron=pd.read_csv('directory/spotcount/neuron',sep=',', index_col=0) ex_m=pd.read_csv('/Slc17a6/molecular/type/metadata',sep=',', index_col=0) inh_m=pd.read_csv('/Slc32a1/molecular/type/metadata',sep=',', index_col=0) lha_neuron=lha_neuron.T lha_neuron=lha_neuron.where(lha_neuron>=0, 0) roi=pd.read_csv('directory/roi/metadata',sep=',', index_col=0) cluster=pd.concat([ex_m,inh_m],axis=0) c=['Ex-1', 'Ex-2', 'Ex-3', 'Ex-4', 'Ex-5', 'Ex-6', 'Ex-7', 'Ex-8', 'Ex-9', 'Ex-10', 'Ex-11', 'Ex-12', 'Ex-13', 'Ex-14', 'Ex-15', 'Ex-16', 'Ex-17', 'Ex-18', 'Ex-19', 'Ex-20', 'Ex-21', 'Ex-22', 'Ex-23', 'Ex-24', 'Ex-25','Inh-1', 'Inh-2','Inh-3', 'Inh-4', 'Inh-5', 'Inh-6', 'Inh-7', 'Inh-8', 'Inh-9', 'Inh-10', 'Inh-11', 'Inh-12', 'Inh-13', 'Inh-14', 'Inh-15', 'Inh-16', 'Inh-17', 'Inh-18', 'Inh-19', 'Inh-20', 'Inh-21', 'Inh-22', 'Inh-23'] ``` ###### Generate random distribution and compute ANN ``` distrib=pd.DataFrame(np.empty([len(c),1000]),index=c,columns=range(1,1001)) for n in c: for i in range(1,1001): idx=np.random.choice(roi.index,df.loc[n,'size'].astype('int')) X=roi[roi.index.isin(idx)] dist,r=spatial.KDTree(X.to_numpy()[:,:3]).query(X.to_numpy()[:,:3], k=2) distrib.loc[n,i]=np.mean(dist[dist!=0]) matrix=pd.DataFrame(np.empty([len(c),0]),index=c) for n in c: C=roi[roi.index.isin(cluster[cluster.x==n].index)].to_numpy()[:,:3] dist,r=spatial.KDTree(C).query(C, k=2) matrix.loc[n,'ANN']=np.mean(dist[dist!=0]) csr_test=pd.DataFrame(np.empty([len(c),0]),index=c) csr_test.loc[j,'p_value']=-1 csr_test.loc[j,'diff']=-1 for j in c: d=distrib.loc[j].to_numpy() a=len(d[d<=matrix.loc[j,'ANN']]) # b=1001-a csr_test.loc[j,'p_value']=a/1001 csr_test.loc[j,'diff']=matrix.loc[j,'ANN']-distrib.loc[j].min() ``` ###### ฯ‡ 2 test ``` img=imread('LHA/parcellation/mask') A=roi.copy() A=A[(A.x<777)&(A.y<772)&(A.z<266)] # roi.loc[:,'subregion']=0 lb=np.unique(img[img!=0]) df_q=pd.DataFrame(np.zeros([len(c),len(lb)]),index=c,columns=lb) for j in c: C=A[A.index.isin(cluster[cluster.x==j].index)] for x in C.index: coord=np.array(np.floor(C.loc[x].to_numpy()[:3])-1) C.loc[x,'subregion']=img[tuple(coord)] roi.loc[x,'subregion']=img[tuple(coord)] if len(C)>0: for y in lb: df_q.loc[j,y]=len(C[C['subregion']==y]) ``` ###### Shuffle data and compare spatial distribution within LHA parcellations ``` from sklearn.utils import shuffle a={} for j in c: shuffle_s=pd.DataFrame(np.zeros([1000,len(lb)]),columns=lb) for ind in range (0,1000): roi_s=shuffle(roi.subregion.to_numpy()) roi_shuffle=roi.copy() roi_shuffle['subregion']=roi_s X=roi_shuffle[roi_shuffle.index.isin(cluster[cluster.x==j].index)] if len(X)>0: for y in lb: shuffle_s.loc[ind,y]=len(X[X['subregion']==y]) ind+=1 a[j]=shuffle_s for j in c: a[j]=a[j].rename(columns={1.0: "LHAd-db",3.0: "LHAdl",4.0: "LHAs-db",5.0: "ZI", 6.0: "EP", 7.0: "fornix",9.0: "LHA-vl",11.0:"LHAf",17.0:"LHAhcrt-db"}) a[j]=a[j][['ZI', 'LHAd-db','LHAhcrt-db','LHAdl','LHAf','fornix', 'LHAs-db','LHA-vl','EP']] a[j]=a[j].drop(columns='fornix') a[j]=a[j].rename(columns={"LHA-vl":"LHAf-l"}) chi_square_shuffle=pd.DataFrame(np.zeros([len(c),1000]),index=c) for i in c: for ind in range(0,1000): chi_square_shuffle.loc[i,ind]=stats.mstats.chisquare(a[i].loc[ind,:])[0] for i in c: d=stats.chisquare(df_q.loc[i,:])[0] chi_square.loc[i,'r_pval']=len(np.where(chi_square_shuffle.loc[i,:]>d)[0])/1000 ``` ###### permutation (shuffle) test to determine which LHA subregion molecular cell types are enriched in ``` A=roi.copy() A=A[(A.x<777)&(A.y<772)&(A.z<266)] roi.loc[:,'subregion']=0 lb=np.unique(img[img!=0]) df=pd.DataFrame(np.zeros([len(c),len(lb)]),index=c,columns=lb) for j in c: C=A[A.index.isin(cluster[cluster.x==j].index)] for x in C.index: coord=np.array(np.floor(C.loc[x].to_numpy()[:3])-1) C.loc[x,'subregion']=img[tuple(coord)] roi.loc[x,'subregion']=img[tuple(coord)] if len(C)>0: for y in lb: df.loc[j,y]=len(C[C.subregion==y])/len(C) df=df.rename(columns={1.0: "LHAd-db",3.0: "LHAdl",4.0: "LHAs-db",5.0: "ZI", 6.0: "EP", 7.0: "fornix",9.0: "LHAf-l",11.0:"LHAf",17.0:"LHAhcrt-db"}) df=df[['ZI', 'LHAd-db','LHAhcrt-db','LHAdl','LHAf','fornix', 'LHAs-db','LHAf-l','EP']] A=roi.copy() A=A[(A.x<777)&(A.y<772)&(A.z<266)] A.loc[:,'shuffle']=1 for i in c: m[i]=np.zeros([len(lb)+1,1]) for ind in range(1,1001): cluster_shuffle=cluster.copy() cluster_shuffle.x=shuffle(cluster.x.to_numpy()) for i in c: ct=A[A.index.isin(cluster_shuffle[cluster_shuffle.x==i].index)] x=pd.DataFrame(data=np.zeros([len(lb)+1,1]),index=[0,1,3,4,5,6,7,9,11,17], columns=['shuffle']) y=ct.groupby('subregion').sum() for j in y.index: x.loc[j,'shuffle']=y.loc[j,'shuffle']/len(ct) m[i]=np.append(m[i],x.to_numpy().reshape(10,1),axis=1) df_p=pd.DataFrame(data=np.ones(df.shape),index=df.index,columns=df.columns) df_p.shape ind=0 for i in c: print(i) for n in range(0,9): print(n,ind) df_p.iloc[ind,n]=len(np.where(m[i][n,1:]>df.iloc[ind,n])[0])/1000 ind+=1 df_p=df_p.reindex(df_p.index[a.dendrogram_col.reordered_ind]) df_p=df_p[df_p.columns[::-1]] df_p=df_p.drop(columns='fornix') ```
github_jupyter