code
stringlengths
38
801k
repo_path
stringlengths
6
263
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Constraint Satisfaction Problems # --- # Constraint satisfaction is a general problem solving technique for solving a class of combinatorial optimization problems by imposing limits on the values in the solution. The goal of this exercise is to practice formulating some classical example problems as constraint satisfaction problems (CSPs), and then to explore using a powerful open source constraint satisfaction tool called [Z3](https://github.com/Z3Prover/z3) from Microsoft Research to solve them. Practicing with these simple problems will help you to recognize real-world problems that can be posed as CSPs; some solvers even have specialized utilities for specific types of problem (vehicle routing, planning, scheduling, etc.). # # There are many different kinds of CSP solvers available for CSPs. Z3 is a "Satisfiability Modulo Theories" (SMT) solver, which means that unlike the backtracking and variable assignment heuristics discussed in lecture, Z3 first converts CSPs to satisfiability problems then uses a [boolean satisfiability](https://en.wikipedia.org/wiki/Boolean_satisfiability_problem) (SAT) solver to determine feasibility. Z3 includes a number of efficient solver algorithms primarily developed to perform formal program verification, but it can also be used on general CSPs. Google's [OR tools](https://developers.google.com/optimization/) includes a CSP solver using backtracking with specialized subroutines for some common CP domains. # # ## I. The Road Ahead # # 0. [Cryptarithmetic](#I.-Cryptarithmetic) - introducing the Z3 API with simple word puzzles # 0. [Map Coloring](#II.-Map-Coloring) - solving the map coloring problem from lectures # 0. [N-Queens](#III.-N-Queens) - experimenting with problems that scale # 0. [Revisiting Sudoku](#IV.-Revisiting-Sudoku) - revisit the sudoku project with the Z3 solver # <div class="alert alert-box alert-info"> # NOTE: You can find solutions to this exercise in the "solutions" branch of the git repo, or on GitHub [here](https://github.com/udacity/artificial-intelligence/blob/solutions/Exercises/1_Constraint%20Satisfaction/AIND-Constraint_Satisfaction.ipynb). # </div> # %matplotlib inline # + import matplotlib as mpl import matplotlib.pyplot as plt # from util import displayBoard from itertools import product from IPython.display import display from z3 import * # - # --- # ## I. Cryptarithmetic # # We'll start by exploring the Z3 module with a _very_ simple & classic CSP problem called cryptarithmetic. A cryptarithmetic puzzle is posed as an arithmetic equation made up of words where each letter represents a distinct digit in the range (0-9). (This problem has no practical significance in AI, but it is a useful illustration of the basic ideas of CSPs.) For example, consider the problem and one possible solution shown below: # # ``` # T W O : 9 3 8 # + T W O : + 9 3 8 # ------- : ------- # F O U R : 1 8 7 6 # ``` # There are six distinct variables (F, O, R, T, U, W), and when we require each letter to represent a disctinct number (e.g., F != O, R != T, ..., etc.) and disallow leading zeros (i.e., T != 0 and F != 0) then one possible solution is (F=1, O=8, R=6, T=9, U=7, W=3). # # ### IMPLEMENTATION: Declaring Variables # For this problem we need a single variable for each distinct letter in the puzzle, and each variable will have an integer values between 0-9. (We will handle restricting the leading digits separately.) Complete the declarations in the next cell to create all of the remaining variables and constraint them to the range 0-9. # + ca_solver = Solver() # create an instance of a Z3 CSP solver F = Int('F') # create an z3.Int type variable instance called "F" ca_solver.add(0 <= F, F <= 9) # add constraints to the solver: 0 <= F <= 9 # ... # TODO: Add all the missing letter variables O = Int('O') ca_solver.add(0 <= O, O <= 9) R = Int('R') ca_solver.add(0 <= R, R <= 9) T = Int('T') ca_solver.add(0 <= T, T <= 9) U = Int('U') ca_solver.add(0 <= U, U <= 9) W = Int('W') ca_solver.add(0 <= W, W <= 9) # - # ### IMPLEMENTATION: Encoding Assumptions as Constraints # We had two additional assumptions that need to be added as constraints: 1) leading digits cannot be zero, and 2) no two distinct letters represent the same digits. The first assumption can simply be added as a boolean statement like M != 0. And the second is a _very_ common CSP constraint (so common, in fact, that most libraries have a built in function to support it); z3 is no exception, with the Distinct(var_list) constraint function. # + # TODO: Add constraints prohibiting leading digits F & T from taking the value 0 ca_solver.add(F != 0 ) ca_solver.add(T != 0) # TODO: Add a Distinct constraint for all the variables ca_solver.add(Distinct([F,O,R,T,U,W])) # - # ### Choosing Problem Constraints # There are often multiple ways to express the constraints for a problem. For example, in this case we could write a single large constraint combining all of the letters simultaneously $T\times10^2 + W\times10^1 + O\times10^0 + T\times10^2 + W\times10^1 + O\times10^0 = F\times10^3 + O\times10^2 + U\times10^1 + R\times10^0$. This kind of constraint works fine for some problems, but large constraints cannot usually be evaluated for satisfiability unless every variable is bound to a specific value. Expressing the problem with smaller constraints can sometimes allow the solver to finish faster. # # For example, we can break out each pair of digits in the summands and introduce a carry variable for each column: $(O + O)\times10^0 = R\times10^0 + carry_1\times10^1$ This constraint can be evaluated as True/False with only four values assigned. # # The choice of encoding on this problem is unlikely to have any effect (because the problem is so small), however it is worth considering on more complex problems. # # ### Implementation: Add the Problem Constraints # Pick one of the possible encodings discussed above and add the required constraints into the solver in the next cell. # TODO: add any required variables and/or constraints to solve the cryptarithmetic puzzle # Primary solution using single constraint for the cryptarithmetic equation carry_1 = Int('carry_1') carry_2 = Int('carry_2') carry_3 = Int('carry_3') ca_solver.add(*[And(c >= 0, c <= 9) for c in [carry_1, carry_2, carry_3]]) ca_solver.add(O + O == R + 10 * carry_1) ca_solver.add(W + W + carry_1 == U + 10 * carry_2) ca_solver.add(T + T + carry_2 == O + 10 * carry_3) ca_solver.add(F == carry_3) assert ca_solver.check() == sat, "Uh oh...the solver did not find a solution. Check your constraints." print(" T W O : {} {} {}".format(ca_solver.model()[T], ca_solver.model()[W], ca_solver.model()[O])) print("+ T W O : + {} {} {}".format(ca_solver.model()[T], ca_solver.model()[W], ca_solver.model()[O])) print("------- : -------") print("F O U R : {} {} {} {}".format(ca_solver.model()[F], ca_solver.model()[O], ca_solver.model()[U], ca_solver.model()[R])) # ### Cryptarithmetic Challenges # 0. Search online for [more cryptarithmetic puzzles](https://www.reddit.com/r/dailyprogrammer/comments/7p5p2o/20180108_challenge_346_easy_cryptarithmetic_solver/) (or create your own). Come to office hours or join a discussion channel to chat with your peers about the trade-offs between monolithic constraints & splitting up the constraints. (Is one way or another easier to generalize or scale with new problems? Is one of them faster for large or small problems?) # 0. Can you extend the solution to handle complex puzzles (e.g., using multiplication WORD1 x WORD2 = OUTPUT)? # --- # ## II. Map Coloring # # [Map coloring](https://en.wikipedia.org/wiki/Map_coloring) is a classic example of CSPs. A map coloring problem is specified by a set of colors and a map showing the borders between distinct regions. A solution to a map coloring problem is an assignment of one color to each region of the map such that no pair of adjacent regions have the same color. # # Run the first cell below to declare the color palette and a solver. The color palette specifies a mapping from integer to color. We'll use integers to represent the values in each constraint; then we can decode the solution from Z3 to determine the color applied to each region in the map. # # ![Map coloring is a classic example CSP](map.png) # create instance of Z3 solver & declare color palette mc_solver = Solver() colors = {'0': "Blue", '1': "Red", '2': "Green"} # ### IMPLEMENTATION: Add Variables # Add a variable to represent each region on the map above. Use the abbreviated name for the regions: WA=Western Australia, SA=Southern Australia, NT=Northern Territory, Q=Queensland, NSW=New South Wales, V=Victoria, T=Tasmania. Add constraints to each variable to restrict it to one of the available colors: 0=Blue, 1=Red, 2=Green. WA = Int('WA') mc_solver.add(0 <= WA, WA <= 2) # ... # TODO: add the remaining six regions and color constraints NT = Int('NT') mc_solver.add(0 <= NT, NT <= 2) SA = Int('SA') mc_solver.add(0 <= SA, SA <= 2) Q = Int('Q') mc_solver.add(0 <= Q, Q <= 2) NSW = Int('NSW') mc_solver.add(0 <= NSW, NSW <= 2) V = Int('V') mc_solver.add(0 <= V, V <= 2) T = Int('T') mc_solver.add(0 <= T, T <= 2) # ### IMPLEMENTATION: Distinct Adjacent Colors Constraints # As in the previous example, there are many valid ways to add constraints that enforce assigning different colors to adjacent regions of the map. One way is to add boolean constraints for each pair of adjacent regions, e.g., WA != SA; WA != NT; etc. # # Another way is to use so-called pseudo-boolean cardinality constraint, which is a constraint of the form $ \sum w_i l_i = k $. Constraints of this form can be created in Z3 using `PbEq(((booleanA, w_A), (booleanB, w_B), ...), k)`. Distinct neighbors can be written with k=0, and w_i = 1 for all values of i. (Note: Z3 also has `PbLe()` for $\sum w_i l_i <= k $ and `PbGe()` for $\sum w_i l_i >= k $) # # Choose one of the encodings discussed above and add the required constraints to the solver in the next cell. # TODO: add constraints to require adjacent regions to take distinct colors mc_solver.add(PbEq(((WA==NT, 1), (WA==SA, 1)), 0)) mc_solver.add(PbEq(((NT==WA, 1), (NT==SA, 1), (NT==Q, 1)), 0)) mc_solver.add(PbEq(((SA==WA, 1), (SA==NT, 1), (SA==NSW, 1), (SA==Q, 1), (SA==V, 1)), 0)) mc_solver.add(PbEq(((Q==NT, 1), (Q==SA, 1), (Q==NSW, 1)), 0)) mc_solver.add(PbEq(((NSW==Q, 1), (NSW==SA, 1), (NSW==V, 1)), 0)) mc_solver.add(PbEq(((V==SA, 1), (V==NSW, 1)), 0)) assert mc_solver.check() == sat, "Uh oh. The solver failed to find a solution. Check your constraints." print("WA={}".format(colors[mc_solver.model()[WA].as_string()])) print("NT={}".format(colors[mc_solver.model()[NT].as_string()])) print("SA={}".format(colors[mc_solver.model()[SA].as_string()])) print("Q={}".format(colors[mc_solver.model()[Q].as_string()])) print("NSW={}".format(colors[mc_solver.model()[NSW].as_string()])) print("V={}".format(colors[mc_solver.model()[V].as_string()])) print("T={}".format(colors[mc_solver.model()[T].as_string()])) x = Int('x') y = Int('y') solve(x > 2, y < 10, x + 2*y == 7) # #### Map Coloring Challenge Problems # 1. Generalize the procedure for this problem and try it on a larger map (countries in Africa, states in the USA, etc.) # 2. Extend your procedure to perform [graph coloring](https://en.wikipedia.org/wiki/Graph_coloring) (maps are planar graphs; extending to all graphs generalizes the concept of "neighbors" to any pair of connected nodes). (Note: graph coloring is [NP-hard](https://en.wikipedia.org/wiki/Graph_coloring#Computational_complexity), so it may take a very long time to color large graphs.) # --- # ## III. N-Queens # # In the next problem domain you'll solve the 8-queens puzzle, then use it to explore the complexity of solving CSPs. The 8-queens problem asks you to place 8 queens on a standard 8x8 chessboard such that none of the queens are in "check" (i.e., no two queens occupy the same row, column, or diagonal). The N-queens problem generalizes the puzzle to to any size square board. # # ![The 8-queens problem is another classic CSP example](EightQueens.gif) # # There are many acceptable ways to represent the N-queens problem, but one convenient way is to recognize that one of the constraints (either the row or column constraint) can be enforced implicitly by the encoding. If we represent a solution as an array with N elements, then each position in the array can represent a column of the board, and the value at each position can represent which row the queen is placed on. # # In this encoding, we only need a constraint to make sure that no two queens occupy the same row, and one to make sure that no two queens occupy the same diagonal. # # #### IMPLEMENTATION: N-Queens Solver # Complete the function below to take an integer N >= 5 and return a Z3 solver instance with appropriate constraints to solve the N-Queens problem. NOTE: it may take a few minutes for the solver to complete the suggested sizes below. # + def Abs(x): return If(x >= 0, x, -x) def nqueens(N): # TODO: Finish this function! mc_solver = Solver() Q = [ Int('Q_%i' % (i + 1)) for i in range(N) ] val_c = [ And(1 <= Q[i], Q[i] <= N) for i in range(N) ] col_c = [ Distinct(Q) ] diag_c = [ If(i == j, True, And(Q[i] - Q[j] != i - j, Q[i] - Q[j] != j - i)) for i in range(N) for j in range(i) ] mc_solver.add(val_c + col_c + diag_c) return mc_solver # + import time from itertools import chain runtimes = [] solutions = [] sizes = [8, 16, 32, 64] for N in sizes: nq_solver = nqueens(N) start = time.perf_counter() assert nq_solver.check(), "Uh oh...The solver failed to find a solution. Check your constraints." end = time.perf_counter() print("{}-queens: {}ms".format(N, (end-start) * 1000)) runtimes.append((end - start) * 1000) solutions.append(nq_solver) plt.plot(sizes, runtimes) # - from util import displayBoard s = solutions[0] displayBoard([(int(str(v)[1:]), s[v].as_long()) for v in s], len(s)); # ### Queen Problem Challenges # - Extend the loop to run several times and estimate the variance in the solver. How consistent is the solver timing between runs? # - Read the `displayBoard()` function in the `util.py` module and use it to show your N-queens solution. # --- # ## IV. Revisiting Sudoku # For the last CSP we'll revisit Sudoku from the first project. You previously solved Sudoku using backtracking search with constraint propagation. This time you'll re-write your solver using Z3. The backtracking search solver relied on domain-specific heuristics to select assignments during search, and to apply constraint propagation strategies (like elimination, only-choice, naked twins, etc.). The Z3 solver does not incorporate any domain-specific information, but makes up for that by incorporating a more sophisticated and a compiled solver routine. # # ![Example of an easy sudoku puzzle](sudoku.png) from itertools import chain # flatten nested lists; chain(*[[a, b], [c, d], ...]) == [a, b, c, d, ...] rows = 'ABCDEFGHI' cols = '123456789' boxes = [[Int("{}{}".format(r, c)) for c in cols] for r in rows] # declare variables for each box in the puzzle s_solver = Solver() # create a solver instance # #### IMPLEMENTATION: General Constraints # Add constraints for each of the following conditions: # - Boxes can only have values between 1-9 (inclusive) # - Each box in a row must have a distinct value # - Each box in a column must have a distinct value # - Each box in a 3x3 block must have a distinct value # + # TODO: Add constraints that every box has a value between 1-9 (inclusive) s_solver.add(*chain(*[(1 <= b, b <= 9) for b in chain(*boxes)])) # TODO: Add constraints that every box in a row has a distinct value s_solver.add(*[Distinct(row) for row in boxes]) # TODO: Add constraints that every box in a column has a distinct value s_solver.add(*[Distinct(cols) for cols in zip(*boxes)]) # TODO: Add constraints so that every box in a 3x3 block has a distinct value s_solver.add(*[Distinct([boxes[i + ii][j + jj] for ii in range(3) for jj in range(3)]) for j in range(0, 9, 3) for i in range(0, 9, 3)]) # - # #### IMPLMENTATION: Puzzle-Specific Constraints # Given the hints provided in the initial puzzle layout, you must also add constraints binding the box values to the specified values. For example, to solve the example puzzle you must specify A3 == 3 and B1 == 9, etc. The cells with a value of zero in the board below are "blank", so you should **not** create any constraint with the associate box. # + # use the value 0 to indicate that a box does not have an assigned value board = ((0, 0, 3, 0, 2, 0, 6, 0, 0), (9, 0, 0, 3, 0, 5, 0, 0, 1), (0, 0, 1, 8, 0, 6, 4, 0, 0), (0, 0, 8, 1, 0, 2, 9, 0, 0), (7, 0, 0, 0, 0, 0, 0, 0, 8), (0, 0, 6, 7, 0, 8, 2, 0, 0), (0, 0, 2, 6, 0, 9, 5, 0, 0), (8, 0, 0, 2, 0, 3, 0, 0, 9), (0, 0, 5, 0, 1, 0, 3, 0, 0)) # TODO: Add constraints boxes[i][j] == board[i][j] for each box where board[i][j] != 0 s_solver.add(*[boxes[i][j] == board[i][j] for i in range(9) for j in range(9) if board[i][j] != 0]) # - assert s_solver.check() == sat, "Uh oh. The solver didn't find a solution. Check your constraints." for row, _boxes in enumerate(boxes): if row and row % 3 == 0: print('-'*9+"|"+'-'*9+"|"+'-'*9) for col, box in enumerate(_boxes): if col and col % 3 == 0: print('|', end='') print(' {} '.format(s_solver.model()[box]), end='') print() # #### Sudoku Challenges # 1. Solve the "[hardest sudoku puzzle](# https://www.telegraph.co.uk/news/science/science-news/9359579/Worlds-hardest-sudoku-can-you-crack-it.html)" # 2. Search for "3d Sudoku rules", then extend your solver to handle 3d puzzles
Exercises/1_Constraint Satisfaction/AIND-Constraint_Satisfaction.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: rtc_analysis # language: python # name: rtc_analysis # --- # <img src="NotebookAddons/blackboard-banner.png" width="100%" /> # <font face="Calibri"> # <br> # <font size="7"> <b> GEOS 657: Microwave Remote Sensing<b> </font> # # <font size="5"> <b>Lab 8: Change Detection in SAR Amplitude Time Series Data <font color='rgba(200,0,0,0.2)'> -- [20 Points] </font> </b> </font> # # <br> # <font size="4"> <b> <NAME>; University of Alaska Fairbanks & <NAME>, <a href="http://earthbigdata.com/" target="_blank">Earth Big Data, LLC</a> </b> <br> # <img src="NotebookAddons/UAFLogo_A_647.png" width="170" align="right" /><font color='rgba(200,0,0,0.2)'> <b>Due Date: </b> April 23, 2019 </font> # </font> # # <font size="3"> This Lab is part of the UAF course <a href="https://radar.community.uaf.edu/" target="_blank">GEOS 657: Microwave Remote Sensing</a>. It is introducing you to the methods of change detection in deep multi-temporal SAR image data stacks. Specifically, the lab applies the method of <i>Cumulative Sums</i> to perform change detection in a 60 image deep Sentinel-1 data stack over Niamey, Niger. # # As previously, the work will be done within the framework of a Jupyter Notebook. <br> # # <b>In this chapter we introduce the following data analysis concepts:</b> # # - The concepts of time series slicing by month, year, and date. # - The concepts and workflow of Cumulative Sum-based change point detection. # - The identification of change dates for each identified change point. # </font> # # <font size="4"> <font color='rgba(200,0,0,0.2)'> <b>THIS NOTEBOOK INCLUDES FOUR HOMEWORK ASSIGNMENTS.</b></font> Complete all four assignments to achieve full score. </font> <br> # <font size="3"> To submit your homework, please download your Jupyter Notebook from the server both as PDF (*.pdf) and Notebook file (*.ipynb) and submit them as a ZIP bundle via Blackboard or email (to <EMAIL>). To download, please select the following options in the main menu of the notebook interface: # # <ol type="1"> # <li><font color='rgba(200,0,0,0.2)'> <b> Save your notebook with all of its content</b></font> by selecting <i> File / Save and Checkpoint </i> </li> # <li><font color='rgba(200,0,0,0.2)'> <b>To export in Notebook format</b></font>, click on <i>File / Download as / Notebook (.ipynb)</i> <font color='gray'>--- Downloading your file may take a bit depending on its size.</font></li> # <li>The best option to <font color='rgba(200,0,0,0.2)'> <b>export your notebook in PDF format</b></font> is to print the content of the browser window to a PDF. To do so, <i>right click</i> in your browser window and select the <i>print</i> option in the pop-up menu.</li> # </ol> # # Contact me at <EMAIL> should you run into any problems. # </font> # # </font> # <hr> # <font face="Calibri" size="5" color="darkred"> <b>Important Note about JupyterHub</b> </font> # <br><br> # <font face="Calibri" size="3"> <b>Your JupyterHub server will automatically shutdown when left idle for more than 1 hour. Your notebooks will not be lost but you will have to restart their kernels and re-run them from the beginning. You will not be able to seamlessly continue running a partially run notebook.</b> </font> # # + pycharm={"name": "#%%\n"} # %%javascript var kernel = Jupyter.notebook.kernel; var command = ["notebookUrl = ", "'", window.location, "'" ].join('') kernel.execute(command) # + from IPython.display import Markdown from IPython.display import display # user = !echo $JUPYTERHUB_USER # env = !echo $CONDA_PREFIX if env[0] == '': env[0] = 'Python 3 (base)' if env[0] != '/home/jovyan/.local/envs/rtc_analysis': display(Markdown(f'<text style=color:red><strong>WARNING:</strong></text>')) display(Markdown(f'<text style=color:red>This notebook should be run using the "rtc_analysis" conda environment.</text>')) display(Markdown(f'<text style=color:red>It is currently using the "{env[0].split("/")[-1]}" environment.</text>')) display(Markdown(f'<text style=color:red>Select the "rtc_analysis" from the "Change Kernel" submenu of the "Kernel" menu.</text>')) display(Markdown(f'<text style=color:red>If the "rtc_analysis" environment is not present, use <a href="{notebookUrl.split("/user")[0]}/user/{user[0]}/notebooks/conda_environments/Create_OSL_Conda_Environments.ipynb"> Create_OSL_Conda_Environments.ipynb </a> to create it.</text>')) display(Markdown(f'<text style=color:red>Note that you must restart your server after creating a new environment before it is usable by notebooks.</text>')) # - # <hr> # <font face="Calibri"> # # <font size="5"> <b> 0. Importing Relevant Python Packages </b> </font> # # <font size="3"> The first step of this lab exercise on SAR image time series analysis and change detection is to <b>import the necessary python libraries into your Jupyter Notebook.</b></font> # + # %%capture import os from copy import deepcopy import pandas as pd from osgeo import gdal # for Info import numpy as np # %matplotlib inline import matplotlib.pylab as plt import matplotlib.patches as patches import asf_notebook as asfn asfn.jupytertheme_matplotlib_format() # - # <hr> # <font face="Calibri"> # # <font size="5"> <b> 1. Load Data Stack for this Lab </b> </font> <img src="NotebookAddons/Lab8-Agrhymet.JPG" width="400" align="right" /> # # <font size="3"> This Lab will be using a 60-image deep C-band Sentinel-1 SAR data stack over Niamey, Niger to demonstrate the concepts of time series change detection. The data are available to us through the services of the <a href="https://www.asf.alaska.edu/" target="_blank">Alaska Satellite Facility</a>. # # Specifically we will use a small image segment over the campus of <a href="http://www.agrhymet.ne/eng/" target="_blank">AGRHYMET Regional Centre</a>, a regional organization supporting West Africa in the use or remote sensing. # # This site was picked as we had information about construction going on at this site sometime in the 2015 - 2017 time frame. Land was cleared and a building was erected. In this exercise we will see if we can detect the construction activity and if we are able to determine when construction began and when it ended. # # In this case, we will <b>retrieve the relevant data</b> from an <a href="https://aws.amazon.com/" target="_blank">Amazon Web Service (AWS)</a> cloud storage bucket.</font></font> # <hr> # # <font face="Calibri" size="4"> <b> 1.1 Download The Data: </b> </font> # <br><br> # <font face="Calibri" size="3"> Before we download anything, <b>create a working directory for this analysis and change into it:</b> </font> path = "/home/jovyan/notebooks/ASF/GEOS_657_Labs/2019/lab_8_data" asfn.new_directory(path) os.chdir(path) print(f"Current working directory: {os.getcwd()}") # <font face="Calibri" size="3"><b>Download the data from the AWS bucket:</b> </font> # !aws --region=us-east-1 --no-sign-request s3 cp s3://asf-jupyter-data/Niamey.zip Niamey.zip # <font face="Calibri" size="3"><b>Unzip the file and clean up:</b> </font> f = "Niamey.zip" asfn.asf_unzip(path, f) if asfn.path_exists(f): os.remove(f) # <hr> # <font face="Calibri" size="4"> <b> 1.2 Switch to the Data Directory: </b> </font> # <br><br> # <font face="Calibri" size="3"> The following lines set variables that capture path variables needed for data processing. <b>Change into the unzipped /cra directory and define variables for names of the files containing data and image information:</b> </font> cra_path = f"{path}/cra/" if asfn.path_exists(cra_path): os.chdir(cra_path) date_file = "S32631X402380Y1491460sS1_A_vv_0001_A_mtfil.dates" image_file = "S32631X402380Y1491460sS1_A_vv_0001_A_mtfil.vrt" # <hr> # <font face="Calibri" size="4"> <b> 1.3 Assess Image Acquisition Dates </b> </font> # # <font face="Calibri" size="3"> Before we start analyzing the available image data, we want to examine the content of our data stack. <b>To do so, we read the image acquisition dates for all files in the time series and create a *pandas* date index:</b> </font> with open(date_file, 'r') as d: dates = d.readlines() time_index = pd.DatetimeIndex(dates) j = 1 print('Bands and dates for', image_file) for i in time_index: print("{:4d} {}".format(j, i.date()), end=' ') j += 1 if j%5 == 1: print() # <hr> # <font face="Calibri" size="4"> <b> 1.4 Read in the Data Stack </b> </font> # # <font face="Calibri" size="3"> We Read in the time series raster stack from the entire data set. </font> raster_stack = gdal.Open(image_file).ReadAsArray() # <hr><hr> # <font face="Calibri" size="4"> <b> 1.5 Lay the groundwork for saving plots and level-3 products.</b> </font> # <br><br> # <font face="Calibri" size="3"><b>Create a directory in which store our output, and move into it:</b></font> os.chdir(path) product_path = 'plots_and_products' asfn.new_directory(product_path) if asfn.path_exists(product_path) and os.getcwd() != f"{path}/{product_path}": os.chdir(product_path) print(f"Current working directory: {os.getcwd()}") # <font face="Calibri" size="3">We will need the upper-left and lower-right corner coordinates when saving our products as GeoTiffs. In this situation, you have been given a pre-subset vrt image stack. # <br><br> # <b>Retrieve the corner coordinates from the vrt using gdal.Info():</b></font> vrt = gdal.Open(f"{cra_path}{image_file}") vrt_info = gdal.Info(vrt, format='json') coords = [vrt_info['cornerCoordinates']['upperLeft'], vrt_info['cornerCoordinates']['lowerRight']] print(coords) # <font face="Calibri" size="3"><b>Retrieve the utm zone from the vrt:</b></font> utm_zone = vrt_info['coordinateSystem']['wkt'].split(',')[-1][0:-2] print(f"utm zone: {utm_zone}") # <font face="Calibri" size="3"><b>Write a function to convert our plots into GeoTiffs:</b></font> # do not include a file extension in out_filename # extent must be in the form of a list: [[upper_left_x, upper_left_y], [lower_right_x, lower_right_y]] def geotiff_from_plot(source_image, out_filename, extent, utm_zone, cmap=None, vmin=None, vmax=None, interpolation=None, dpi=300): plt.figure() plt.axis('off') plt.imshow(source_image, cmap=cmap, vmin=vmin, vmax=vmax, interpolation=interpolation) temp = f"{out_filename}_temp.png" plt.savefig(temp, dpi=dpi, transparent='true', bbox_inches='tight', pad_inches=0) cmd = f"gdal_translate -of Gtiff -a_ullr {extent[0][0]} {extent[0][1]} {extent[1][0]} {extent[1][1]} -a_srs EPSG:{utm_zone} {temp} {out_filename}.tiff" # !{cmd} try: os.remove(temp) except FileNotFoundError: pass # <br> # <hr><hr> # <font face="Calibri" size="5"> <b> 2. Plot the Global Means of the Time Series </b> </font> # # <font face="Calibri" size="3"> To accomplish this task, <b>complete the following steps:</b> # <ol> # <li>Conversion to power-scale</li> # <li>Compute mean values</li> # <li>Convert to dB-scale</li> # <li>Create time series of means using Pandas</li> # <li>Plot time series of means</li> # </ol> # </font> # <br> # <font face="Calibri" size="3"><b>Convert to Power-scale:</b></font> caldB = -83 calPwr = np.power(10.0, caldB/10.0) raster_stack_pwr = np.power(raster_stack, 2.0) * calPwr # <font face="Calibri" size="3"><b>Compute means:</b></font> rs_means_pwr = np.mean(raster_stack_pwr, axis=(1, 2)) # <font face="Calibri" size="3"><b>Convert to dB-scale:</b></font> rs_means_dB = 10.0 * np.log10(rs_means_pwr) # <font face="Calibri" size="3"><b>Make a pandas time series object:</b></font> ts = pd.Series(rs_means_dB,index=time_index) # <font face="Calibri" size="3"><b>Use pandas to plot the time series object with band numbers as data point labels. Save the plot as a png (time_series_means.png):</b></font> plt.rcParams.update({'font.size': 14}) plt.figure(figsize=(16, 8)) plt.title(f"Time Series of Means") ts.plot() xl = plt.xlabel('Date') yl = plt.ylabel('$\overline{\gamma^o}$ [dB]') for xyb in zip(ts.index, rs_means_dB, range(1, len(ts)+1)): plt.annotate(xyb[2], xy=xyb[0:2]) plt.grid() plt.savefig('time_series_means.png', dpi=72) # <br> # <hr> # <div class="alert alert-success"> # <font face="Calibri" size="5"> <b> <font color='rgba(200,0,0,0.2)'> <u>ASSIGNMENT #1</u>: </font> Analyze Global Means Time Series Plot</b> <font color='rgba(200,0,0,0.2)'> -- [3 Points] </font> </font> # # <font face="Calibri" size="3"> Look at the global means time series plot above and determine from the <i>time_index</i> array at which dates you see maximum and minimum values. Are relative peaks associated with seasons? # <br><br> # PROVIDE ANSWER HERE: # # </font> # </div> # <br> # <hr> # <font face="Calibri" size="5"> <b> 3. Generate Time Series for Point Locations or Subsets</b> </font> # # <font face="Calibri" size="3"> In python, we can use the matrix slicing tools (Like Matlab) to obtain subsets of the data. For example to pick one pixel at a line/pixel location and obtain all band values, use: # # > [:,line,pixel] notation. # # Or, if we are interested in a subset at a offset location we can use: # # > [:,yoffset:(yoffset+yrange),xoffset:(xoffset+xrange)] # # In the section below we will learn how to generate time series plots for point locations (pixels) or areas (e.g. a 5x5 window region). To show individual bands, we define a <i>show_image</i> function which incorporates the matrix slicing from above. # # </font> # <hr> # <font face="Calibri" size="4"> <b> 3.1 Plotting Time Series for Subset </b> </font> # # <font face="Calibri" size="3"><b>Write a function to plot the calibrated time series for a pre-defined subset:</b></font> # Preconditions: # raster_stack must be a stack of images in SAR power units # time_index must be a pandas date-time index # band_number must represent a valid bandnumber in the raster_stack def show_image(raster_stack, time_index, band_number, output_filename=None, subset=None, vmin=None, vmax=None): fig = plt.figure(figsize=(16, 8)) ax1 = fig.add_subplot(121) ax2 = fig.add_subplot(122) # If vmin or vmax are None we use percentiles as limits: if vmin == None: vmin = np.percentile(raster_stack[band_number-1].flatten(), 5) if vmax == None: vmax = np.percentile(raster_stack[band_number-1].flatten(), 95) ax1.imshow(raster_stack[band_number-1], cmap='gray', vmin=vmin, vmax=vmax) ax1.set_title('Image Band {} {}'.format(band_number, time_index[band_number-1].date())) if subset == None: bands, ydim, xdim = raster_stack.shape subset = (0, 0, xdim, ydim) ax1.add_patch(patches.Rectangle((subset[0], subset[1]), subset[2], subset[3], fill=False, edgecolor='red')) ax1.xaxis.set_label_text('Pixel') ax1.yaxis.set_label_text('Line') ts_pwr = np.mean(raster_stack[:, subset[1]:(subset[1]+subset[3]), subset[0]:(subset[0]+subset[2])], axis=(1,2)) ts_dB = 10.0 * np.log10(ts_pwr) ax2.plot(time_index, ts_dB) ax2.yaxis.set_label_text('$\gamma^o$ [dB]') ax2.set_title('$\gamma^o$ Backscatter Time Series') # Add a vertical line for the date where the image is displayed ax2.axvline(time_index[band_number-1], color='red') plt.grid() fig.autofmt_xdate() if output_filename: plt.savefig(output_filename, dpi=72) print(f"Saved plot: {output_filename}") # <font face="Calibri" size="3">Call show_image() on different bands to compare the information content of different time steps in our area of interest. # <br><br> # <b>Call show_image() on band number 24:</b></font> band_number = 24 subset = [5, 20, 3, 3] show_image(raster_stack_pwr, time_index, band_number, subset=subset, output_filename=f"band_{band_number}.png") # <font face="Calibri" size="3"><b>Call show_image() on band number 43:</b></font> band_number = 43 show_image(raster_stack_pwr, time_index, band_number, subset=subset, output_filename=f"band_{band_number}.png") # <hr> # <font face="Calibri" size="4"> <b> 3.2 Helper Function to Generate a Time Series Object </b> </font> # # <font face="Calibri" size="3"><b>Write a function that creates an object representing the time series for an image subset:<b></font> # Extract the means along the time series axes # raster shape is time steps, lines, pixels. # With axis=1,2, we average lines and pixels for each time step (axis 0) # returns pandas time series object def timeSeries(raster_stack_pwr, time_index, subset, ndv=0.0): raster = raster_stack_pwr.copy() if ndv != np.nan: raster[np.equal(raster, ndv)] = np.nan ts_pwr = np.nanmean(raster[:,subset[1]:(subset[1]+subset[3]), subset[0]:(subset[0]+subset[2])], axis=(1, 2)) # convert the means to dB ts_dB = 10.0 * np.log10(ts_pwr) # make the pandas time series object ts = pd.Series(ts_dB, index=time_index) return ts # <font face="Calibri" size="3"><b>Call timeSeries() to make a time series object for the chosen subset:</b></font> ts = timeSeries(raster_stack_pwr, time_index, subset) # <font face="Calibri" size="3"><b>Plot the time series object:</b></font> fig = ts.plot(figsize=(16, 4)) fig.yaxis.set_label_text('mean dB') fig.set_title('Time Series for Chosen Subset') plt.grid() # <br> # <hr> # <font face="Calibri" size="5"> <b> 4. Create Seasonal Subsets of Time Series Records</b> </font> # # <font face="Calibri" size="3"> Let's expand upon SAR time series analysis. Often it is desirable to subset time series by season or months to compare data acquired under similar weather/growth/vegetation cover conditions. For example, in analyzing C-Band backscatter data, it might be useful to limit comparative analysis to dry season observations only as soil moisture might confuse signals during the wet seasons. To subset time series along the time axis we will make use of the following <i>Pandas</i> datatime index tools: # <ul> # <li>month</li> # <li>day of year</li> # </ul> # <br> # <b>Extract a hectare-sized area around our subset location (5,20,5,5):</b></font> subset = (5, 20, 5, 5) time_series_1 = timeSeries(raster_stack_pwr, time_index, subset) # <font face="Calibri" size="3"><b>Convert the time series to a pandas DataFrame</b> to allow for more processing options.</font> data_frame = pd.DataFrame(time_series_1, index=ts.index, columns=['g0']) # <font face="Calibri" size="3"><b>Label the data value column as 'g0' for $\gamma^0$ and plot the time series backscatter profile:</b></font> ylim = (-15, -5) data_frame.plot(figsize=(16, 4)) plt.title('Sentinel-1 C-VV Time Series Backscatter Profile, Subset: 5,20,5,5 ') plt.ylabel('$\gamma^o$ [dB]') plt.ylim(ylim) _ = plt.legend(["C-VV Time Series"]) plt.grid() # <hr> # <font face="Calibri" size="4"><b> 4.1 Change Start Date of Time Series to November 2015 </b></font> # <br><br> # <font face="Calibri" size="3"><b>Plot the cropped time series and save it as a png (time_series_backscatter_profile.png):</b></font> # + data_frame_sub1 = data_frame[data_frame.index>'2015-11-01'] # Plot data_frame_sub1.plot(figsize=(16, 4)) plt.title('Sentinel-1 C-VV Time Series Backscatter Profile, Subset: {}'.format(subset)) plt.ylabel('$\gamma^o$ [dB]') plt.ylim(ylim) _ = plt.legend(["C-VV Time Series"]) plt.grid() plt.savefig('time_series_backscatter_profile', dpi=72) # - # <hr> # <font face="Calibri" size="4"> <b> 4.2 Subset Time Series by Months </b> </font> # # <font face="Calibri" size="3">Using the Pandas <i>DateTimeIndex</i> object index.month and numpy's logical_and function, we can easily subset the time series by month: # <br><br> # <b>Create subset data_frames. In one, replace the data from June-February with NaNs. In the other, replace the data from March-May with NaNs:</b></font> # + data_frame_sub2 = deepcopy(data_frame_sub1) for index, row in data_frame_sub2.iterrows(): if index.month < 3 or index.month > 5: row['g0'] = np.nan data_frame_sub3 = deepcopy(data_frame_sub1) for index, row in data_frame_sub3.iterrows(): if index.month > 2 and index.month < 6: row['g0'] = np.nan # - # <font face="Calibri" size="3"><b>Plot the time series backscatter profile for March - May. Save the plot as a png (march2may_time_series_backscatter_profile.png):</b></font> # Plot fig, ax = plt.subplots(figsize=(16, 4)) data_frame_sub2.plot(ax=ax) plt.title(f'Sentinel-1 C-VV Time Series Backscatter Profile, Subset: {subset}') plt.ylabel('$\gamma^o$ [dB]') plt.ylim(ylim) _ = plt.legend(["C-VV Time Series (March - May)"], loc='best') plt.grid() plt.savefig('march2may_time_series_backscatter_profile', dpi=72) # <font face="Calibri" size="3"> Using numpy's **invert** function, we can invert a selection. In this example, we extract all other months from the time series. # <br><br> # <b>Plot the time series backscatter profile for June - Feburary. Save the plot as a png (june2feb_time_series_backscatter_profile.png):</b></font> # Plot fig, ax = plt.subplots(figsize=(16, 4)) data_frame_sub3.plot(ax=ax) plt.title(f'Sentinel-1 C-VV Time Series Backscatter Profile, Subset: {subset}') plt.ylabel('$\gamma^o$ [dB]') plt.ylim(ylim) _ = plt.legend(["C-VV Time Series (June-February)"], loc='best') plt.grid() plt.savefig('june2feb_time_series_backscatter_profile', dpi=72) # <hr> # <font face="Calibri" size="4"> <b> 4.3 Split Time Series by Year to Compare Year-to-Year Patterns </b> </font> # # <font face="Calibri" size="3"> Sometimes it is useful to compare year-to-year $\sigma^0$ values to identify changes in backscatter characteristics. This helps to distinguish true change from seasonal variability. # <br><br> # <b>Split the time series into different years:</b></font> data_frame_by_year = data_frame_sub1.groupby(pd.Grouper(freq="Y")) # <font face="Calibri" size="3"><b>Plot the split time series. Save the plot as a png (yearly_time_series_backscatter_profile.png):</b></font> fig, ax = plt.subplots(figsize=(16, 4)) for label, df in data_frame_by_year: df.g0.plot(ax=ax, label=label.year) plt.legend() # data_frame_by_year.plot(ax=ax) plt.title('Sentinel-1 C-VV Time Series Backscatter Profile, Subset: {}'.format(subset)) plt.ylabel('$\gamma^o$ [dB]') plt.ylim(ylim) plt.grid() plt.savefig('yearly_time_series_backscatter_profile', dpi=72) # <hr> # <font face="Calibri" size="4"> <b> 4.4 Create a Pivot Table to Group Years and Sort Data for Plotting Overlapping Time Series </b> </font> # # <font face="Calibri" size="3"> Pivot Tables are a technique in data processing. They enable a person to arrange and rearrange (or "pivot") statistics in order to draw attention to useful information. To do so, we first <b>add columns for day-of-year and year to the data frame:</b> # </font> # Add day of year data_frame_sub1 = data_frame_sub1.assign(doy=data_frame_sub1.index.dayofyear) # Add year data_frame_sub1 = data_frame_sub1.assign(year=data_frame_sub1.index.year) # <font face="Calibri" size="3"><b>Create a pivot table which has day-of-year as the index and years as columns:</b></font> pivot_table = pd.pivot_table(data_frame_sub1, index=['doy'], columns=['year'], values=['g0']) # Set the names for the column indices pivot_table.columns.set_names(['g0', 'year'], inplace=True) print(pivot_table.head(10)) print('...\n',pivot_table.tail(10)) # <font face="Calibri" size="3"> As we can see, there are NaN values on the days in a year where no acquisition took place. Now we use time weighted interpolation to fill the dates for all the observations in any given year. For **time weighted interpolation** to work we need to create a dummy year as a date index, perform the interpolation, and reset the index to the day of year.</font> # <br><br> # <font face="Calibri" size="3"><b>Create a dummy year as a date index:</b></font> # + # Add fake dates for year 100 to enable time sensitive interpolation # of missing values in the pivot table year_doy = ['2100-{}'.format(x) for x in pivot_table.index] y100_doy=pd.DatetimeIndex(pd.to_datetime(year_doy,format='%Y-%j')) # make a copy of the piv table and add two columns pivot_table_2 = pivot_table.copy() pivot_table_2 = pivot_table_2.assign(d100=y100_doy) # add the fake year dates pivot_table_2 = pivot_table_2.assign(doy=pivot_table_2.index) # add doy as a column to replace as index later again # Set the index to the dummy year pivot_table_2.set_index('d100', inplace=True, drop=True) # - # <font face="Calibri" size="3"><b>Perform the time-weighted interpolation:</b></font> pivot_table_2 = pivot_table_2.interpolate(method='time') # <font face="Calibri" size="3"><b>Reset the index to the day of year:</b></font> pivot_table_2.set_index('doy', inplace=True, drop=True) # <font face="Calibri" size="3"><b>Inspect the new pivot table and see whether we interpolated the NaN values where it made sense:</b></font> print(pivot_table_2.head(10)) print('...\n',pivot_table_2.tail(10)) # <font face="Calibri" size="3"><b>Plot the time series data with overlapping years. Save the plot as a png (overlapping_years_time_series_backscatter_profile.png):</b></font> pivot_table_2.plot(figsize=(16, 8)) plt.title('Sentinel-1 C-VV Time Series Backscatter Profile,\ Subset: 5,20,5,5 ') plt.ylabel('$\gamma^o$ [dB]') plt.xlabel('Day of Year') _ = plt.ylim(ylim) plt.grid() plt.savefig('overlapping_years_time_series_backscatter_profile', dpi=72) # <br> # <hr> # <div class="alert alert-success"> # <font face="Calibri" size="5"> <b> <font color='rgba(200,0,0,0.2)'> <u>ASSIGNMENT #2</u>: </font> Interpret the Year-to-Year Time Series Plot</b> <font color='rgba(200,0,0,0.2)'> -- [4 Points] </font> </font> # # <font face="Calibri" size="3"> Answer the following questions related to the year-to-year time series plot shown above: # <Ol> # <li>Describe the $\gamma^0$ time series for year 2016. What kind of seasonal patterns do you see? Based on the observed seasonal patterns, what type of surface cover do you think was present at this area in 2016? <font color='rgba(200,0,0,0.2)'> -- [2 Points] </font></li><br> # <li>Describe the $\gamma^0$ time series for year 2017. What kind of seasonal patterns do you see and how do they differ from the previous year? <font color='rgba(200,0,0,0.2)'> -- [2 Points] </font></li><br> # </Ol> # <br> # # PROVIDE YOUR ANSWERS BELOW: # # </font> # </div> # <br> # <hr> # <font face="Calibri" size="5"> <b> 5. Time Series Change Detection</b> </font> # # <font face="Calibri" size="3"> Now we are ready to perform efficient change detection on the time series data. We will discuss two approaches: # <ol> # <li>Year-to-year differencing of the subsetted time series</li> # <li>Cumulative Sum-based change detection</li> # </ol> # </font> # <br> # <font face="Calibri" size="3"><b>Set a dB change threshold.</b></font> threshold = 3 # <font face="Calibri" size="3"><b>Calculate the difference between years (2016 and 2017):</b></font> diff_2017_2016 = pivot_table_2.g0[2017] - pivot_table_2.g0[2016] # <hr> # <font face="Calibri" size="4"> <b> 5.1 Change Detection based on Year-to-Year Differencing </b> </font> # # <font face="Calibri" size="3"><b>Compute and plot the differences between the interpolated time series and look for change using a threshold value. Save the plot as a png (year2year_differencing_time_series.png):</b></font> _ = diff_2017_2016.plot(kind='line', figsize=(16,8)) plt.title('Year-to-Year Difference Time Series') plt.ylabel('$\Delta\gamma^o$ [dB]') plt.xlabel('Day of Year') plt.grid() plt.savefig('year2year_differencing_time_series', dpi=72) # <font face="Calibri" size="3"><b>Calculate the days-of-year on which the threshold was exceeded:</b></font> threshold_exceeded = diff_2017_2016[abs(diff_2017_2016) > threshold] threshold_exceeded # <font face="Calibri" size="3"> From the <i>threshold_exceeded</i> dataframe we can infer the first date at which the threshold was exceeded. We would label this date as a **change point**. As an additional criteria for labeling a change point, one can also consider the number of observations after an identified change point that also excided the threshold. If only one or two observations differed from the year before this could be considered an outlier. Additional smoothing of the time series may sometimes be useful to avoid false detections. </font> # <br> # <hr> # <div class="alert alert-success"> # <font face="Calibri" size="5"> <b> <font color='rgba(200,0,0,0.2)'> <u>ASSIGNMENT #3</u>: </font> Perform Year-to-Year Differencing-based Change Detection for a Different Subset</b> <font color='rgba(200,0,0,0.2)'> -- [5 Points] </font> </font> # # <font face="Calibri" size="3"> Go back to the beginning of Section 4 and change the subset coordinates to a different subset (i.e., modify to <i>subset=($X$,$Y$,5,5)</i> with $X$ and $Y$ being the center of your modified subset). Work through the workbook from the beginning of Section 4 until the end of Section 5.1 with your modified subset. Discuss whether or not your new subset shows change according to the 3dB change threshold. # <br><br> # # DISCUSS BELOW WHETHER YOUR NEW SUBSET SHOWS CHANGE: # # </font> # </div> # <hr> # <font face="Calibri" size="4"> <b> 5.2 Cumulative Sums for Change Detection</b> </font> # # <font face="Calibri" size="3"> Another approach to detect change in regularly acquired data is employing the method of **cumulative sums**. Changes are determined by comparing the time series data against its mean. A full explanation and examples from the financial sector can be found at [http://www.variation.com/cpa/tech/changepoint.html](http://www.variation.com/cpa/tech/changepoint.html) # <br><br><hr> # <u><b>5.2.A First let's consider a time series and it's mean observation</b></u>:<br> # We look at two full years of observations from Sentinel-1 data for an area where we suspect change. In the following, we define $X$ as our time series # <br><br> # \begin{equation} # X = (X_1,X_2,...,X_n) # \end{equation} # # with $X_i$ being the SAR backscatter values at times $i=1,...,n$ and $n$ is the number of observations in the time series. # <br><br> # <b>Create a times series of the subset and calculate the backscatter values:</b> # </font> subset = (5, 20, 3, 3) #subset=(12,5,3,3) time_series_1 = timeSeries(raster_stack_pwr, time_index, subset) backscatter_values = time_series_1[time_series_1.index>'2015-10-31'] # <hr> # <font face="Calibri" size="3"> <b><u>5.2.B Filtering the time series for outliers</u></b>:<br> # It is advantageous in noisy SAR time series like those from C-Band Sentinel-1 data to reduce noise by <b>applying a filter along the time axis</b>. Pandas offers a <i>"rolling"</i> function for these purposes. Using the <i>rolling</i> function, we will apply a <i>median filter</i> to our data.</font> # <br><br> # <font face="Calibri" size="3"><b>Calculate the median backscatter values and plot them against the original values. Save the plot as a png (Original vs. Median Time Series):</b></font> backscatter_values_median = backscatter_values.rolling(5, center=True).median() backscatter_values_median.plot(figsize=(16, 4)) _ = backscatter_values.plot() plt.title('Original vs. Median Time Series') plt.ylabel('$\gamma^o$ [dB]') plt.xlabel('Time') plt.grid() plt.savefig('original_vs_median_time_series', dpi=72) # <font face="Calibri" size="3"><b>Calculate the time series' mean value and plot it against the original values. Save the plot as a png (original_time_series_vs_mean_val.png):</b></font> fig, ax = plt.subplots(figsize=(16, 4)) backscatter_values.plot() plt.title('Original Time Series vs. Mean Value') plt.ylabel('$\gamma^o$ [dB]') ax.axhline(backscatter_values.mean(), color='red') _ = plt.legend(['$\gamma^o$', '$\overline{\gamma^o}$']) plt.grid() plt.savefig('original_time_series_vs_mean_val', dpi=72) # <font face="Calibri" size="3"><b>Calculate the time series' mean value and plot it against the median values. Save the plot as a png (median_time_series_vs_mean_val.png):</b></font> # + backscatter_values_mean = backscatter_values.mean() fig, ax = plt.subplots(figsize=(16, 4)) backscatter_values_median.plot() plt.title('Median Time Series vs. Mean Value') plt.ylabel('$\gamma^o$ [dB]') ax.axhline(backscatter_values.mean(), color='red') _ = plt.legend(['$\gamma^o$', '$\overline{\gamma^o}$']) plt.grid() plt.savefig('median_time_series_vs_mean_val', dpi=72) # - # <hr> # <font face="Calibri" size="3"> <b><u>5.2.C Calculate the Residuals of the Time Series Against the Mean $\overline{\gamma^o}$</u></b>:<br> # To get to the residual, we calculate # # \begin{equation} # R = X_i - \overline{X} # \end{equation}</font> # <br><br> # <font face="Calibri" size="3"><b>Calculate the residuals:</b></font> residuals = backscatter_values - backscatter_values_mean # <hr> # <font face="Calibri" size="3"> <b><u>5.2.D Calculate Cumulative Sum of the Residuals</u></b>:<br> # The cumulative sum is defined as: # # \begin{equation} # S = \displaystyle\sum_1^n{R_i} # \end{equation}</font> # <br><br> # <font face="Calibri" size="3"> <b>Calculate and plot the cumulative sum of the residuals. Save the plot as a png (cumulative_sum_residuals.png):</b></font> # + sums = residuals.cumsum() _ = sums.plot(figsize=(16, 6)) plt.title('Cumulative Sum of the Residuals') plt.ylabel('Cummulative Sum $S$ [dB]') plt.xlabel('Time') plt.grid() plt.savefig('cumulative_sum_residuals', dpi=72) # - # <font face="Calibri" size="3"> The **cumulative sum** is a good indicator of change in the time series. An estimator for the magnitude of change is given as the difference between the maximum and minimum value of the cumulative sum $S$: # # \begin{equation} # S_{DIFF} = S_{MAX} - S_{MIN} # \end{equation} # <br><br> # <b>Calculate the magnitude of change:</b></font> change_mag = sums.max() - sums.min() print('Change magnitude: %5.3f dB' % (change_mag)) # <hr> # <font face="Calibri" size="3"> <b><u>5.2.E Identify Change Point in the Time Series</u></b>:<br> # A candidate change point is identified from $S$ at the time where $S_{MAX}$ is found: # # \begin{equation} # T_{{CP}_{before}} = T(S_i = S_{MAX}) # \end{equation} # # with $T_{{CP}_{before}}$ being the timestamp of the last observation <i>before</i> the identified change point, $S_i$ is the cumulative sum of $R$ with $i=1,...n$, and $n$ is the number of observations in the time series. # # The first observation <i>after</i> a change occurred ($T_{{CP}_{after}}$) is then found as the first observation in the time series following $T_{{CP}_{before}}$. # <br><br> # <b>Calculate $T_{{CP}_{before}}$:</b> # </font> change_point_before = sums[sums==sums.max()].index[0] print('Last date before change occurred: {}'.format(change_point_before.date())) # <font face="Calibri" size="3"><b>Calculate $T_{{CP}_{after}}$:</b></font> change_point_after = sums[sums.index > change_point_before].index[0] print('First date after change occurred: {}'.format(change_point_after.date())) # <hr> # <font face="Calibri" size="3"> <b><u>5.2.F Determine our Confidence in the Identified Change Point using Bootstrapping</u></b>:<br> # We can determine if an identified change point is indeed a valid detection by <b>randomly reordering the time series</b> and <b>comparing the various $S$ curves</b>. During this <b>"bootstrapping"</b> approach, we count how many times the $S_{DIFF}$ values are greater than $S_{{DIFF}_{random}}$ of the identified change point. # # After bootstrapping, we define the <b>confidence level $CL$</b> in a detected change point according to: # # \begin{equation} # CL = \frac{N_{GT}}{N_{bootstraps}} # \end{equation} # # where $N_{GT}$ is the number of times $S_{DIFF}$ > $S_{{DIFF}_{random}}$ and $N_{bootstraps}$ is the number of bootstraps randomizing $R$. # <br><br><br> # As another quality metric we can also calculate the <b>significance $CP_{significance}$</b> of a change point according to: # # \begin{equation} # CP_{significance} = 1 - \left( \frac{\sum_{b=1}^{N_{bootstraps}}{S_{{DIFF}_{{random}_i}}}}{N_{bootstraps}} \middle/ S_{DIFF} \right) # \end{equation} # # The closer $CP_{significance}$ is to 1, the more significant the change point.</font> # <br><br> # <font face="Calibri" size="3"><b>Write a function that implements the bootstrapping algorithm:</b></font> # pyplot must be imported as plt import random def bootstrap(n_bootstraps, sums, residuals, output_file=False): fig, ax = plt.subplots(figsize=(16,6)) ax.set_ylabel('Cumulative Sums of the Residuals') change_mag_random_sum = 0 change_mag_random_max = 0 # to keep track of the maximum change magnitude of the bootstrapped sample qty_change_mag_above_random = 0 # to keep track of the maxium Sdiff of the bootstrapped sample print("Running Bootstrapping for %4.1f iterations ..." % (n_bootstraps)) colors = ['C0', 'C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8', 'C9', 'c', 'm', 'y', 'k', 'g'] for i in range(n_bootstraps): residuals_random = residuals.sample(frac=1) # Randomize the time steps of the residuals sums_random = residuals_random.cumsum() change_mag_random = sums_random.max() - sums_random.min() change_mag_random_sum += change_mag_random if change_mag_random > change_mag_random_max: change_mag_random_max = change_mag_random if change_mag > change_mag_random: qty_change_mag_above_random += 1 sums_random.plot(ax=ax, color=random.choice(colors), label='_nolegend_') if ((i+1)/n_bootstraps*100) % 10 == 0: print("\r%4.1f percent completed ..." % ((i+1)/n_bootstraps*100), end='\r', flush=True) sums.plot(ax=ax, color='r', linewidth=3) fig.legend(['S Curve for Candidate Change Point']) print(f"Bootstrapping Complete") _ = ax.axhline(change_mag_random_sum/n_bootstraps, color='b') plt.grid() if output_file: plt.savefig(f"bootstrap_{n_bootstraps}", dpi=72) print(f"Saved plot: bootstrap_{n_bootstraps}.png") return [qty_change_mag_above_random, change_mag_random_sum] # <font face="Calibri" size="3"><b>Call the bootstrap function with a sample size of 2000:</b></font> n_bootstraps = 2000 bootstrapped_change_mag = bootstrap(n_bootstraps, sums, residuals, output_file=True) # <hr> # <font face="Calibri" size="3"> Based on the bootstrapping results, we can now calculate <b>Confidence Level $CL$</b> and <b>Significance $CP_{significance}$</b> for our candidate change point.</font> # <br><br> # <font face="Calibri" size="3"><b>Calculate the confidence level:</b></font> confidence_level = 1.0 * bootstrapped_change_mag[0] / n_bootstraps print('Confidence Level for change point {} percent'.format(confidence_level*100.0)) # <font face="Calibri" size="3"><b>Calculate the change point significance:</b></font> change_point_significance = 1.0 - (bootstrapped_change_mag[1]/n_bootstraps)/change_mag print('Change point significance metric: {}'.format(change_point_significance)) # <hr> # <font face="Calibri" size="3"> <b><u>5.2.G TRICK: Detrending of Time Series Before Change Detection to Improve Robustness</u></b>:<br> # De-trending the time series with global image means improves the robustness of change point detection as global image time series anomalies stemming from calibration or seasonal trends are removed prior to time series analysis. This de-trending needs to be performed with large subsets so real change is not influencing the image statistics. # # NOTE: Due to the small size of our subset, we will see some distortions when we detrend the time series. # # <b>Let's start by building a global image means time series and plot the global means. Save the plot as a png (global_means_time_series.png):</b> # </font> means_pwr = np.mean(raster_stack_pwr, axis=(1, 2)) means_dB = 10.0 * np.log10(means_pwr) global_means_ts = pd.Series(means_dB, index=time_index) global_means_ts = global_means_ts[global_means_ts.index > '2015-10-31'] # filter dates global_means_ts = global_means_ts.rolling(5, center=True).median() global_means_ts.plot(figsize=(16, 6)) plt.title('Time Series of Global Means') plt.ylabel('[dB]') plt.xlabel('Time') plt.grid() plt.savefig(f"global_means_time_series", dpi=72) # <font face="Calibri" size="3"><b>Compare the time series of global means (above) to the time series of our small subset (below):</b> # </font> backscatter_values.plot(figsize=(16, 6)) plt.title('Sentinel-1 C-VV Time Series Backscatter Profile,\ Subset: 5,20,5,5 ') plt.ylabel('[dB]') plt.xlabel('Time') plt.grid() # <font face="Calibri" size="3"> There are some signatures of the global seasonal trend in our subset time series. To remove these signatures and get a cleaner time series of change, we subtract the global mean time series from our subset time series.</font> # <br><br> # <font face="Calibri" size="3"><b>De-trend the subset and re-plot the backscatter profile. Save the plot (detrended_time_series.png):</b></font> backscatter_minus_seasonal = backscatter_values - global_means_ts backscatter_minus_seasonal.plot(figsize=(16, 6)) plt.title('De-trended Sentinel-1 C-VV Time Series Backscatter Profile, Subset: 5,20,5,5') plt.ylabel('[dB]') plt.xlabel('Time') plt.grid() plt.savefig(f"detrended_time_series", dpi=72) # <font face="Calibri" size="3"><b>Save a plot comparing the original, global means, and detrended time-series (globalMeans_original_detrended_time_series.png):</b></font> means_pwr = np.mean(raster_stack_pwr, axis=(1, 2)) means_dB = 10.0 * np.log10(means_pwr) global_means_ts = pd.Series(means_dB, index=time_index) global_means_ts = global_means_ts[global_means_ts.index > '2015-10-31'] # filter dates global_means_ts = global_means_ts.rolling(5, center=True).median() global_means_ts.plot(figsize=(16, 6)) backscatter_values.plot(figsize=(16, 6)) backscatter_minus_seasonal = (backscatter_values - global_means_ts) backscatter_minus_seasonal.plot(figsize=(16, 6)) plt.title('Time Series of Global Means') plt.ylabel('[dB]') plt.xlabel('Time') plt.grid() plt.savefig(f"globalMeans_original_detrended_time_series", dpi=72) # <font face="Calibri" size="3"><b>Recalculate the residuals based on the de-trended data:</b> and plot it: # </font> residuals = backscatter_minus_seasonal - backscatter_values_mean # <font face="Calibri" size="3"><b>Compute, plot, and save the cumulative sum of the detrended time series (cumualtive_sum_detrended_time_series.png):</b></font> sums = residuals.cumsum() _ = sums.plot(figsize=(16, 6)) plt.title("Cumulative Sum of the Detrended Time Series") plt.ylabel('CumSum $S$ [dB]') plt.xlabel('Time') plt.grid() plt.savefig(f"cumualtive_sum_detrended_time_series", dpi=72) # <font face="Calibri" size="3"><b>Detect Change Point and extract related change dates:</b></font> # + change_point_before = sums[sums==sums.max()].index[0] print('Last date before change occurred: {}'.format(change_point_before.date())) change_point_after = sums[sums.index > change_point_before].index[0] print('First date after change occurred: {}'.format(change_point_after.date())) # - # <font face="Calibri" size="3"><b>Perform bootstrapping:</b></font> n_bootstraps = 2000 bootstrapped_change_mag = bootstrap(n_bootstraps, sums, residuals, output_file=True) # <font face="Calibri" size="3"><b>Calculate the confidence level:</b></font> confidence_level = bootstrapped_change_mag[0] / n_bootstraps print('Confidence Level for change point {} percent'.format(confidence_level*100.0)) # <hr> # <font face="Calibri" size="3">Note how the <b>change point significance $CP_{significance}$</b> has increased in the detrended time series: # </font> change_point_significance = 1.0 - (bootstrapped_change_mag[1]/n_bootstraps) / change_mag print('Change point significance metric: {}'.format(change_point_significance)) # <br> # <hr> # <div class="alert alert-success"> # <font face="Calibri" size="5"> <b> <font color='rgba(200,0,0,0.2)'> <u>ASSIGNMENT #4</u>: </font> Perform Cumulative Sum-based Change Detection for a Different Subset</b> <font color='rgba(200,0,0,0.2)'> -- [6 Points] </font> </font> # # <font face="Calibri" size="3"> Go back to the beginning of Section 5.2 and change the subset coordinates to a different subset (i.e., modify to <i>subset=($X$,$Y$,5,5)</i> with $X$ and $Y$ being the center of your selected subset). Work through the workbook from the beginning to the end of Section 5.2with your selected subset. Discuss whether or not your new subset shows change according to the <i>Cumulative Sum</i> approach. # <br><br> # # DISCUSS BELOW WHETHER YOUR NEW SUBSET SHOWS CHANGE: # # </font> # </div> # <br> # <hr> # <font face="Calibri" size="5"> <b> 6. Cumulative Sum-based Change Detection Across an Entire Image</b> </font> # # <font face="Calibri" size="3"> With numpy arrays we can apply the concept of **cumulative sum change detection** analysis effectively on the entire image stack. We take advantage of array slicing and axis-based computing in numpy. Axis 0 is the time domain in our raster stacks. # # <hr> # <font size="4"><b>6.1 We first create our time series stack:</b></font> # <br><br> # <font size="3"><b>Filter out the first layer (Dates >= '2015-11-1'):</b></font> raster_stack = raster_stack_pwr raster_stack_sub = raster_stack_pwr[1:, :, :] time_index_sub = time_index[1:] # <font size="3"><b>Run the following code cell <u>if</u> you wish to change to dB scale:</b></font> raster_stack = 10.0 * np.log10(raster_stack_sub) # <font size="3"><b>Plot and save Band-1 (band_1.png):</b></font> plt.figure(figsize=(12, 8)) band_number = 0 vmin = np.percentile(raster_stack[band_number], 5) vmax = np.percentile(raster_stack[band_number], 95) plt.title('Band  {} {}'.format(band_number+1, time_index_sub[band_number].date())) plt.imshow(raster_stack[0], cmap='gray', vmin=vmin, vmax=vmax) _ = plt.colorbar() plt.savefig('band_1.png', dpi=300, transparent='true') # <font size="3"><b>Save the plot as a GeoTiff (band_1.tiff):</b></font> # %%capture geotiff_from_plot(raster_stack[0], 'band_1', coords, utm_zone, cmap='gray', dpi=600) # <br> # <hr> # <font face="Calibri" size="4"> <b> 6.2 Calculate Mean Across Time Series to Prepare for Calculation of Cumulative Sum $S$:</b> </font> # <br><br> # <font face="Calibri" size="3"> <b>Plot and save the the raster stack mean (raster_stack_mean.png):</b></font> raster_stack_mean = np.mean(raster_stack, axis=0) plt.figure(figsize=(12, 8)) plt.imshow(raster_stack_mean, cmap='gray') _ = plt.colorbar() plt.savefig('raster_stack_mean.png', dpi=300, transparent='true') # <font size="3"><b>Save the raster stack mean as a GeoTiff (raster_stack_mean.tiff):</b></font> # %%capture geotiff_from_plot(raster_stack_mean, 'raster_stack_mean', coords, utm_zone, cmap='gray', dpi=600) # <font face="Calibri" size="3"><b>Calculate the residuals:</b></font> residuals = raster_stack - raster_stack_mean # <font face="Calibri" size="3"><b>Close img, as it is no longer needed in the notebook:</b></font> raster_stack = None # <font face="Calibri" size="3"><b>Plot and save the residuals for band 1 (residuals_band_1.png):</b></font> plt.figure(figsize=(12, 8)) plt.imshow(residuals[0]) plt.title('Residuals for Band  {} {}'.format(band_number+1, time_index_sub[band_number].date())) _ = plt.colorbar() plt.savefig('residuals_band_1', dpi=300, transparent='true') # <font face="Calibri" size="3"><b>Save the residuals for band 1 as a GeoTiff (residuals_band_1.tiff):</b></font> # %%capture geotiff_from_plot(residuals[0], 'residuals_band_1', coords, utm_zone, dpi=600) # <br> # <hr> # <font face="Calibri" size="4"> <b> 6.3 Calculate Cumulative Sum $S$ as well as Change Magnitude $S_{diff}$:</b> </font> # <br><br> # <font face="Calibri" size="3"><b>Plot and save the cumulative sum max, min, and change magnitude (Smax_Smin_Sdiff.png):</b></font> sums = np.cumsum(residuals, axis=0) sums_max = np.max(sums, axis=0) sums_min = np.min(sums, axis=0) change_mag = sums_max - sums_min fig, ax = plt.subplots(1, 3, figsize=(16, 4)) vmin = sums_min.min() vmax = sums_max.max() sums_max_plot = ax[0].imshow(sums_max, vmin=vmin, vmax=vmax) ax[0].set_title('$S_{max}$') ax[1].imshow(sums_min, vmin=vmin, vmax=vmax) ax[1].set_title('$S_{min}$') ax[2].imshow(change_mag, vmin=vmin, vmax=vmax) ax[2].set_title('Change Magnitude') fig.subplots_adjust(right=0.8) cbar_ax = fig.add_axes([0.85, 0.15, 0.02, 0.7]) _ = fig.colorbar(sums_max_plot, cax=cbar_ax) plt.savefig('Smax_Smin_Sdiff', dpi=300, transparent='true') # <font face="Calibri" size="3"><b>Save Smax as a GeoTiff (Smax.tiff):</b></font> # %%capture geotiff_from_plot(sums_max, 'Smax', coords, utm_zone, vmin=vmin, vmax=vmax) # <font face="Calibri" size="3"><b>Save Smin as a GeoTiff (Smin.tiff):</b></font> # %%capture geotiff_from_plot(sums_min, 'Smin', coords, utm_zone, vmin=vmin, vmax=vmax) # <font face="Calibri" size="3"><b>Save the change magnitude as a GeoTiff (Sdiff.tiff):</b></font> # %%capture geotiff_from_plot(change_mag, 'Sdiff', coords, utm_zone, vmin=vmin, vmax=vmax) # <br> # <hr> # <font face="Calibri" size="4"> <b> 6.4 Mask $S_{diff}$ With a-priori Threshold To Idenfity Change Candidates:</b> </font> # # <font face="Calibri" size="3">To identified change candidate pixels, we can threshold $S_{diff}$ to reduce computation of the bootstrapping. For land cover change we would not expect more than 5-10% change pixels in a landscape. So, if the test region is reasonably large, setting a threshold for expected change to 10% is appropriate. In our example we'll start out with a very conservative threshold of 20%. # # <b>Plot and save the histogram for the change magnitude and the change magnitude cumulative distribution function (Sdiff_histogram_CDF.png):</b></font> plt.rcParams.update({'font.size': 14}) fig = plt.figure(figsize=(14, 6)) # Initialize figure with a size ax1 = fig.add_subplot(121) # 121 determines: 2 rows, 2 plots, first plot ax2 = fig.add_subplot(122) # Second plot: Histogram # IMPORTANT: To get a histogram, we first need to *flatten* # the two-dimensional image into a one-dimensional vector. histogram = ax1.hist(change_mag.flatten(), bins=200, range=(0, np.max(change_mag))) ax1.xaxis.set_label_text('Change Magnitude') ax1.set_title('Change Magnitude Histogram') plt.grid() n, bins, patches = ax2.hist(change_mag.flatten(), bins=200, range=(0, np.max(change_mag)), cumulative='True', density='True', histtype='step', label='Empirical') ax2.xaxis.set_label_text('Change Magnitude') ax2.set_title('Change Magnitude CDF') plt.grid() plt.savefig('Sdiff_histogram_CDF', dpi=72, transparent='true') # <font face="Calibri" size="3">Using this threshold, we can create a plot to <b>visualize our change candidate areas. Save the plot (change_candidate.png):</b></font> # + precentile = 0.8 out_indicies = np.where(n>precentile) threshold_indicies = np.min(out_indicies) threshold = bins[threshold_indicies] print('At the {}% percentile, the threshold value is {:2.2f}'.format(precentile*100,threshold)) change_mag_mask = change_mag < threshold plt.figure(figsize=(12, 8)) plt.title('Change Candidate Areas (black)') _ = plt.imshow(change_mag_mask, cmap='gray') plt.savefig('change_candidate', dpi=300, transparent='true') # - # <font face="Calibri" size="3"><b>Save the change candidate plot as a GeoTiff (change_candidate.tiff):</b></font> # %%capture geotiff_from_plot(change_mag_mask, 'change_candidate', coords, utm_zone, cmap='gray') # <br> # <hr> # <font face="Calibri" size="4"> <b> 6.5 Bootstrapping to Prepare for Change Point Selection:</b> </font> # # <font face="Calibri" size="3">We can now perform bootstrapping over the candidate pixels. The workflow is as follows: # <ul> # <li>Filter our residuals to the change candidate pixels</li> # <li>Perform bootstrapping over candidate pixels</li> # </ul> # <br> # <b>For efficient computing we <b>permutate the index of the time axis:</b></font> residuals_mask = np.broadcast_to(change_mag_mask, residuals.shape) residuals_masked = np.ma.array(residuals, mask=residuals_mask) # <font face="Calibri" size="3">On the masked time series stack of residuals, we can <b>re-compute the cumulative sums:</b></font> sums_masked = np.ma.cumsum(residuals_masked, axis=0) # <font face="Calibri" size="3"><b>Plot the min sums, max sums, and change magnitude of the masked subset (masked_Smax_Smin_Sdiff.png):</b></font> sums_masked_max = np.ma.max(sums_masked, axis=0) sums_masked_min = np.ma.min(sums_masked, axis=0) change_mag = sums_masked_max - sums_masked_min fig, ax = plt.subplots(1, 3, figsize=(16, 4)) vmin = sums_masked_min.min() vmax = sums_masked_max.max() sums_masked_max_plot = ax[0].imshow(sums_masked_max, vmin=vmin, vmax=vmax) ax[0].set_title('$S_{max}$') ax[1].imshow(sums_masked_min, vmin=vmin, vmax=vmax) ax[1].set_title('$S_{min}$') ax[2].imshow(change_mag, vmin=vmin, vmax=vmax) ax[2].set_title('Change Magnitude') fig.subplots_adjust(right=0.8) cbar_ax = fig.add_axes([0.85, 0.15, 0.02, 0.7]) _ = fig.colorbar(sums_masked_max_plot, cax=cbar_ax) plt.savefig('masked_Smax_Smin_Sdiff', dpi=300, transparent='true') # <font face="Calibr i" size="3"><b>Save the Smax of the masked subset as a GeoTiff (masked_Smax.tiff):</b></font> # %%capture geotiff_from_plot(sums_masked_max, 'masked_Smax', coords, utm_zone, vmin=vmin, vmax=vmax) # <font face="Calibr i" size="3"><b>Save the Smin of the masked subset as a GeoTiff (masked_Smin.tiff):</b></font> # %%capture geotiff_from_plot(sums_masked_min, 'masked_Smin', coords, utm_zone, vmin=vmin, vmax=vmax) # <font face="Calibri" size="3"><b>Save the change magnitude of the masked subset as a GeoTiff (masked_Sdiff.tiff):</b></font> # %%capture geotiff_from_plot(change_mag, 'masked_Sdiff', coords, utm_zone, vmin=vmin, vmax=vmax) # <font face="Calibri" size="3"><b>Perform bootstrapping:</b>: # </font> random_index = np.random.permutation(residuals_masked.shape[0]) residuals_random = residuals_masked[random_index,:,:] n_bootstraps = 2000 # bootstrap sample size # to keep track of the maxium Sdiff of the bootstrapped sample: change_mag_random_max = np.ma.copy(change_mag) change_mag_random_max[~change_mag_random_max.mask] = 0 # to compute the Sdiff sums of the bootstrapped sample: change_mag_random_sum = np.ma.copy(change_mag) change_mag_random_sum[~change_mag_random_max.mask] = 0 # to keep track of the count of the bootstrapped sample qty_change_mag_above_random = change_mag_random_sum qty_change_mag_above_random[~qty_change_mag_above_random.mask] = 0 print("Running Bootstrapping for %4.1f iterations ..." % (n_bootstraps)) for i in range(n_bootstraps): # For efficiency, we shuffle the time axis index and use that #to randomize the masked array random_index = np.random.permutation(residuals_masked.shape[0]) # Randomize the time step of the residuals residuals_random = residuals_masked[random_index, :, :] sums_random = np.ma.cumsum(residuals_random, axis=0) sums_random_max = np.ma.max(sums_random, axis=0) sums_random_min = np.ma.min(sums_random, axis=0) change_mag_random = sums_random_max - sums_random_min change_mag_random_sum += change_mag_random change_mag_random_max[np.ma.greater(change_mag_random, change_mag_random_max)] = \ change_mag_random[np.ma.greater(change_mag_random, change_mag_random_max)] qty_change_mag_above_random[np.ma.greater(change_mag, change_mag_random)] += 1 if ((i+1)/n_bootstraps*100)%10 == 0: print("\r%4.1f percent completed ..." % ((i+1)/n_bootstraps*100), end='\r', flush=True) print(f"Bootstrapping Complete. ") # <br> # <hr> # <font face="Calibri" size="4"> <b> 6.6 Extract Confidence Metrics and Select Final Change Points:</b> </font> # # <font face="Calibri" size="3">We first <b>compute for all pixels the confidence level $CL$, the change point significance metric $CP_{significance}$ and the product of the two as our confidence metric for identified change points. Plot and save them (confidence_change_point.png):</b></font> confidence_level = qty_change_mag_above_random / n_bootstraps change_point_significance = 1.0 - (change_mag_random_sum/n_bootstraps) / change_mag #Plot fig, ax = plt.subplots(1, 3 ,figsize=(16, 4)) a = ax[0].imshow(confidence_level*100) fig.colorbar(a, ax=ax[0]) ax[0].set_title('Confidence Level %') a = ax[1].imshow(change_point_significance) fig.colorbar(a, ax=ax[1]) ax[1].set_title('Change Point Significance') a = ax[2].imshow(confidence_level*change_point_significance) fig.colorbar(a, ax=ax[2]) _ = ax[2].set_title('Confidence Level\nx\nChange Point Significance') plt.savefig('confidence_change_point', dpi=300, transparent='true') # <font face="Calibri" size="3"><b>Save the confidence level of the masked subset as a GeoTiff (confidence_level.tiff):</b></font> # %%capture geotiff_from_plot(confidence_level*100, 'confidence_level', coords, utm_zone) # <font face="Calibri" size="3"><b>Save the change point significance of the masked subset as a GeoTiff (change_point.tiff):</b></font> # %%capture geotiff_from_plot(change_point_significance, 'change_point', coords, utm_zone) # <font face="Calibri" size="3"><b>Save the confidence level x change point significance of the masked subset as a GeoTiff (CL_x_CP.tiff):</b></font> # %%capture geotiff_from_plot(confidence_level*change_point_significance, 'CL_x_CP', coords, utm_zone) # <font face="Calibri" size="3"><b>Set a change point threshold of 0.5:</b></font> change_point_threshold = 0.5 # <font face="Calibri" size="3"><b>Create and save a plot showing the final change points (change_point_thresh.png):</b></font> fig = plt.figure(figsize=(12, 8)) ax = fig.add_subplot(1, 1, 1) plt.title('Detected Change Pixels based on Threshold %2.1f' % (change_point_threshold)) a = ax.imshow(confidence_level*change_point_significance < change_point_threshold, cmap='cool') plt.savefig('change_point_thresh', dpi=300, transparent='true') # <font face="Calibri" size="3"><b>Save the thresholded change point significance of the masked subset as a GeoTiff (change_point_thresh.tiff):</b></font> # %%capture geotiff_from_plot(confidence_level*change_point_significance < change_point_threshold, 'change_point_thresh', coords, utm_zone, cmap='cool') # <br> # <hr> # <font face="Calibri" size="4"> <b> 6.7 Derive Timing of Change for Each Change Pixel:</b> </font> # # <font face="Calibri" size="3">Our last step in the identification of the change points is to extract the timing of the change. We will produce a raster layer that shows the band number of the first date after a change was detected. We will make use of the numpy indexing scheme.</font> # <br><br> # <font face="Calibri" size="3"><b>Create a combined mask of the first threshold and the identified change points after the bootstrapping:</b></font> # make a mask of our change points from the new threhold and the previous mask change_point_mask = np.ma.mask_or(confidence_level*change_point_significance<change_point_threshold, confidence_level.mask) # Broadcast the mask to the shape of the masked S curves change_point_mask2 = np.broadcast_to(change_point_mask, sums_masked.shape) # Make a numpy masked array with this mask change_point_raster = np.ma.array(sums_masked.data, mask=change_point_mask2) # <font face="Calibri" size="3">To retrieve the dates of the change points we <b>find the band indices in the time series along the time axis where the maximum of the cumulative sums was located.</b> Numpy offers the "argmax" function for this purpose. # </font> change_point_index = np.ma.argmax(change_point_raster, axis=0) change_indices = list(np.unique(change_point_index)) change_indices.remove(0) print(f"Change Indicies:\n{change_indices}") # Look up the dates from the indices to get the change dates all_dates = time_index[time_index>'2015-10-31'] change_dates = [str(all_dates[x].date()) for x in change_indices] print(f"\nChange Dates:\n{change_dates}") # <font face="Calibri" size="3">Lastly, we <b>plot and the change dates using the change point index raster and labeling the change dates (change_dates.png):</b></font> # + ticks = change_indices tick_labels = change_dates cmap = plt.cm.get_cmap('tab20',ticks[-1]) fig, ax = plt.subplots(figsize=(12, 12)) cax = ax.imshow(change_point_index, interpolation='nearest', cmap=cmap) # fig.subplots_adjust(right=0.8) # cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7]) # fig.colorbar(p,cax=cbar_ax) ax.set_title('Dates of Change') # cbar = fig.colorbar(cax, ticks=ticks) cbar = fig.colorbar(cax, ticks=ticks, orientation='horizontal') _ = cbar.ax.set_xticklabels(tick_labels, size=10, rotation=45, ha='right') plt.savefig('change_dates', dpi=300, transparent='true') # - # <font face="Calibri" size="3"><b>Save the dates of change plot as a GeoTiff (change_dates.tiff):</b></font> # %%capture geotiff_from_plot(change_point_index, 'change_dates', coords, utm_zone, interpolation='nearest', cmap=cmap) # <font face="Calibri" size="2"> <i>GEOS 657-Lab8-SARTSChangeDetection.ipynb - Version 1.3.0 - April 2021 # <br> # <b>Version Changes:</b> # <ul> # <li>namespace asf_notebook</li> # </ul> # </i> # </font>
ASF/GEOS_657_Labs/2019/GEOS 657-Lab8-SARTSChangeDetection.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: geocomp # language: python # name: geocomp # --- # # Random data for HC volume notebook # imports import numpy as np import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline import seaborn as sns col_list = ['Prospect Name','Thick[m]','Area[km2]','GeomFactor','N:G','phi','So','Bo','Psrc','Pmig','Pres','Ptrap','x[mE]','y[mN]'] input_data = pd.DataFrame(columns=col_list) input_data table_length = 100 for i in range(table_length): input_data.loc[i,'Prospect Name'] = 'Prospect_' + str(i+1) input_data.loc[i,'Thick[m]'] = np.random.randint(10, 500) input_data.loc[i,'Area[km2]'] = np.random.lognormal(mean=2.5, sigma=1) input_data.loc[i,'GeomFactor'] = np.random.randint(40, 100) / 100 input_data.loc[i,'N:G'] = np.random.randint(1, 60) / 100 input_data.loc[i,'phi'] = np.random.randint(10, 47) / 100 input_data.loc[i,'So'] = np.random.randint(0, 85) / 100 input_data.loc[i,'Bo'] = np.random.randint(100, 140) / 100 for column in input_data.columns[8:12]: input_data.loc[i,column] = np.random.randint(1, 100) / 100 input_data.loc[i,'x[mE]'] = np.random.randint(400000, 500000) input_data.loc[i,'y[mN]'] = np.random.randint(1000000, 1100000) sns.pairplot(input_data); input_data.head() input_data.to_csv('./HC_volumes_random_input.csv', sep=',', columns=col_list, header=True, index=True, index_label=None, mode='w', decimal='.')
HC_volume_random_generator.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Plot example Loss for Log-Likelihood Ratio Estimation (LLLR) and Cross-Entropy (CE) training results # Experiments with LLLR and CE were ran 56 times each to calculate the evaluation metric, Normalized Mean Squared Error (NMSE). # For the training details, see README of this repo. # + # execute below under the folder "example_results." import numpy as np import matplotlib.pyplot as plt # load the training results X = np.load('./xtick.npy') LLLR_NMSE_pool = np.load('./LLLR_NMSE_pool.npy') CE_NMSE_pool = np.load('./CE_NMSE_pool.npy') # calculate mean and standard error of the mean (SEM). LLLR_mean = np.mean(LLLR_NMSE_pool, axis=0) LLLR_sem = np.std(LLLR_NMSE_pool, axis=0) / np.sqrt(LLLR_NMSE_pool.shape[0]) CE_mean = np.mean(CE_NMSE_pool, axis=0) CE_sem = np.std(CE_NMSE_pool, axis=0) / np.sqrt(CE_NMSE_pool.shape[0]) color1 = np.array([20, 80, 148, 128])/255 color2 = np.array([128, 0, 0, 128])/255 # - # visualize the results fig = plt.figure(figsize=(10,6.2)) fig.patch.set_facecolor('white') plt.rcParams['font.size'] = 25 plt.plot(X, CE_mean, '-', color=color1[:-1], linewidth=3) plt.plot(X, LLLR_mean, '-', color=color2[:-1], linewidth=3) plt.fill_between(X, CE_mean - CE_sem, CE_mean + CE_sem, color=color1) plt.fill_between(X, LLLR_mean - LLLR_sem, LLLR_mean + LLLR_sem, alpha=0.5, color=color2) plt.xlabel('Iteration') plt.ylabel('NMSE') plt.legend(('Cross-Entropy', 'LLLR(proposed)'), loc='upper right') plt.yscale('log') plt.show()
example_results/plot_example_runs.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import glob import IPython.display as ipd decoded_wavs = sorted(glob.glob("../waves/decoded-*wav"), reverse=True) for wav_file in decoded_wavs: print(wav_file) display(ipd.Audio(wav_file)) print()
tools/play_decoded.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Code Details # Author: <NAME><br> # Created: 19NOV18<br> # Version: 0.1<br> # *** # This code is to test a writing of data to a MongoDB. <br> # This is a proof of concept and the data is real. However, it does not bring all of it into Mongo, only the key fields. # This uses data that was extracted after the SSO data model was implemented at LE on the platform. # The data now is in two parts. The groups and their members as well as the users linked to the results. # Please note that the results from doing the survey can be more than two per journey. # # Package Importing + Variable Setting # + import pandas as pd import numpy as np import datetime # mongo stuff import pymongo from pymongo import MongoClient from bson.objectid import ObjectId import bson # json stuff import json # - # the file to read. This needs to be manually updated readLoc = "~/datasets/CLARA/190328_052400_LE_LivePlatform_IndividualResults.json" # if true the code outputs to the notebook a whole of diagnostic data that is helpful when writing but not so much when running it for real verbose = False # first run will truncate the target database and reload it from scratch. Once delta updates have been implmented this needs adjusting first_run = True # # Set display options # + # further details found by running: # pd.describe_option('display') # set the values to show all of the columns etc. pd.set_option('display.max_columns', None) # or 1000 pd.set_option('display.max_rows', None) # or 1000 pd.set_option('display.max_colwidth', -1) # or 199 # locals() # show all of the local environments # - # # Connect to Mongo DB # + # create the connection to MongoDB # define the location of the Mongo DB Server # in this instance it is a local copy running on the dev machine. This is configurable at this point. client = MongoClient('127.0.0.1', 27017) # define what the database is called. db = client.CLARA # define the collection raw_data_collection = db.raw_data_user_results # - # Command to clean the databzse if needed when running this code # Delete the raw_data_collection - used for testing if first_run: raw_data_collection.drop() # # Functions Definitions # The web framework gets post_id from the URL and passes it as a string def get(post_id): # Convert from string to ObjectId: document = client.db.collection.find_one({'_id': ObjectId(post_id)}) # # Place CLARA Results from JSON File into Mongo # + #import the data file claraDf = pd.read_json(readLoc, orient='records') if verbose: display(claraDf) # - if verbose: # count columns and rows print("Number of columns are " + str(len(claraDf.columns))) print("Number of rows are " + str(len(claraDf.index))) print() # output the shape of the dataframe print("The shape of the data frame is " + str(claraDf.shape)) print() # output the column names print("The column names of the data frame are: ") print(*claraDf, sep='\n') print() # output the column names and datatypes print("The datatypes of the data frame are: ") print(claraDf.dtypes) print() # ### To Do # The index need to be replaced with the unique identifier for the student # + # Loop through the data frame and build a list # the list will be used for a bulk update of MongoDB # I am having to convert to strings for the intergers as Mongo cannot handle the int64 datatype. # It also cant handle the conversion to int32 at the point of loading the rows, so string is the fall back position # define the list to hold the data clara_row = [] # loop through dataframe and create each item in the list for index, row in claraDf.iterrows(): clara_row.insert( index, { "rowIndex": index, "userId": claraDf['userId'].iloc[index], "nameId": claraDf['nameId'].iloc[index], "primaryEmail": claraDf['primaryEmail'].iloc[index], "journeyId": claraDf['journeyId'].iloc[index].astype('str'), "journeyTitle": claraDf['journeyTitle'].iloc[index], "journeyPurpose": claraDf['journeyPurpose'].iloc[index], "journeyGoal": claraDf['journeyGoal'].iloc[index], "journeyCreatedAt": claraDf['journeyCreatedAt'].iloc[index], "claraId": claraDf['claraId'].iloc[index].astype('str'), "claraResultsJourneyStep": claraDf['claraResultsJourneyStep'].iloc[index], "claraResultsCreatedAt": claraDf['claraResultsCreatedAt'].iloc[index], "claraResultCompletedAt": claraDf['claraResultCompletedAt'].iloc[index], "claraResult1": claraDf['claraResult1'].iloc[index], "claraResult2": claraDf['claraResult2'].iloc[index], "claraResult3": claraDf['claraResult3'].iloc[index], "claraResult4": claraDf['claraResult4'].iloc[index], "claraResult5": claraDf['claraResult5'].iloc[index], "claraResult6": claraDf['claraResult6'].iloc[index], "claraResult7": claraDf['claraResult7'].iloc[index], "claraResult8": claraDf['claraResult8'].iloc[index], "insertdate": datetime.datetime.utcnow() }) if verbose: print(clara_row[0]) # + # bulk update the database raw_data_collection.insert_many(clara_row) if verbose: print(raw_data_collection.inserted_ids) # - # ## Create Index # Only create the indexes onthe first run through if first_run: # put the restult into a list so it can be looked at later. result = [] # Create some indexes result.append( raw_data_collection.create_index([('index', pymongo.ASCENDING)], unique=False)) result.append( raw_data_collection.create_index([('userId', pymongo.ASCENDING)], unique=False)) result.append( raw_data_collection.create_index([('nameId', pymongo.ASCENDING)], unique=False)) result.append( raw_data_collection.create_index([('primaryEmail', pymongo.ASCENDING)], unique=False)) result.append( raw_data_collection.create_index([('journeyId', pymongo.ASCENDING)], unique=False)) result.append( raw_data_collection.create_index( [('journeyCreatedAt', pymongo.ASCENDING)], unique=False)) result.append( raw_data_collection.create_index([('claraId', pymongo.ASCENDING)], unique=False)) result.append( raw_data_collection.create_index( [('claraResultsCreatedAt', pymongo.ASCENDING)], unique=False)) result.append( raw_data_collection.create_index( [('claraResultCompletedAt', pymongo.ASCENDING)], unique=False)) result.append( raw_data_collection.create_index( [('claraResultsStep', pymongo.ASCENDING)], unique=False)) result.append( raw_data_collection.create_index([('insertdate', pymongo.ASCENDING)], unique=False)) if verbose: print(result)
3_Code/01_CLARA_to_MongoDB_RealData_ClaraResults_SSODataModel.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy import os import sys import tensorflow as tf import datasets # - collection_directory = 'images/coll_28_28_20000' data_sets = datasets.read_data_sets(collection_directory) # should be floats, look like [ 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.] print(data_sets.train.labels[0]) # + print("train examples:", data_sets.train.num_examples) print("test examples:", data_sets.test.num_examples) print(data_sets.train.images.shape) print(data_sets.train.labels.shape) print("num train examples:", data_sets.train.num_examples) print("num test examples:", data_sets.test.num_examples) print("Shape of training image data:", data_sets.train.images.shape) print("Shape of training label data:", data_sets.train.labels.shape) assert data_sets.train.images.ndim == 2 assert data_sets.train.labels.ndim == 2 assert data_sets.train.images.shape[1:] == data_sets.test.images.shape[1:] assert data_sets.train.labels.shape[1:] == data_sets.test.labels.shape[1:] # - edge_labels = data_sets.train.labels.shape[1] print(edge_labels) def get_image_index_with_num_edges(labels_array, num_edges): assert 3 <= num_edges <= 9 for index in range(labels_array.shape[0]): if labels_array[index][num_edges] == 1.0: return index # + triangle_index = get_image_index_with_num_edges(data_sets.train.labels, 3) square_index = get_image_index_with_num_edges(data_sets.train.labels, 4) example_image_index = triangle_index an_image = data_sets.train.images[example_image_index] values = [] for y in range(28): print() for x in range(28): value = an_image[x + 28*y] values.append(value) if value > 0: print("1", end="") else: print("0", end="") print("\nThe label array for this image is:", data_sets.train.labels[example_image_index]) # + # unflatten an image and show with matplotlib unflattened = an_image.reshape(28, 28) import matplotlib.pyplot as plt # %matplotlib inline plt.imshow(unflattened) plt.colorbar() # - print(values) print(example_image_index) sess = tf.InteractiveSession() # + edge_labels = data_sets.train.labels.shape[1] # number of different possible output labels image_flat_size = data_sets.train.images.shape[1] assert image_flat_size == data_sets.train.original_image_width * data_sets.train.original_image_height print("Number of allowed num_edges (ie, size of output vector):", edge_labels) print("Total image flat size (width * height):", image_flat_size) # - x = tf.placeholder(tf.float32, [None, image_flat_size]) print(x) W = tf.Variable(tf.zeros([image_flat_size, edge_labels])) print(W) b = tf.Variable(tf.zeros([edge_labels])) y = tf.nn.softmax(tf.matmul(x, W) + b) # Define loss and optimizer y_ = tf.placeholder(tf.float32, [None, edge_labels]) cross_entropy = -tf.reduce_sum(y_ * tf.log(y)) train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy) # Train tf.initialize_all_variables().run() for i in range(1000): batch_xs, batch_ys = data_sets.train.next_batch(100) train_step.run({x: batch_xs, y_: batch_ys}) # Test trained model correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) print(accuracy.eval({ x: data_sets.test.images, y_: data_sets.test.labels}))
my_simple_softmax.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + deletable=false editable=false # Initialize Otter import otter grader = otter.Notebook("eps130_hw2_EQ_Forecasting_v1.1.ipynb") # - # # Probability of Occurrence of Mainshocks and Aftershocks # # ## Introduction # In the previous homework we learned about the particularly well-behaved statistics of the earthquake magnitude distribution, and aftershock occurrence. As we saw, it is possible to use the frequency of event occurrence over a range of magnitudes to extrapolate to the less frequent large earthquakes of interest. How far this extrapolation may be extended depends upon a number of factors. It is certainly not unbounded as fault dimension, segmentation, strength and frictional properties will play a role in the maximum size earthquake that a fault will produce. Paleoseismic data is used to provide a better understanding of the recurrence of the large earthquakes of interest. The large earthquakes have greater fault offset, rupture to the surface of the Earth and leave a telltale geologic record. This record is used to determine the recurrence of the large characteristic earthquakes and probabilistic earthquake forecasts. Finally, this type of analysis is perhaps one of the most visible products of earthquake hazard research in that earthquake forecasts and probabilities of aftershock occurrence are generally released to the public. # # ## Objective # In this homework we will assume a Poisson distribution to determine the probability of events based on the Gutenberg-Richter recurrence relationship. Given the statistical aftershock rate model of Reasenberg and Jones (1996) we will forecast the probability of occurrence of large aftershocks for the 2014 Napa earthquake sequence. For the Mojave segment of the San Andreas Fault we will compare probability density models to the recurrence data and use the best fitting model to determine the 30-year conditionally probability of occurrence of a magnitude 8 earthquake. # # Use the code provided in this Jupyter Notebook to analyze the provided data, and then answer the questions to complete this homework. Submit your completed notebook in a \*.pdf format. Write your answers embedded as Markdown inside the notebook where specified. # + code_folding=[17, 26] #Initial Setup and Subroutine Definitions import math import datetime import numpy as np from scipy import stats import matplotlib import matplotlib.pyplot as plt import pandas as pd import cartopy.crs as ccrs import cartopy.feature as cfeature def countDays(c,y,m,d): days=np.zeros(c) for i in range(0,c,1): d0 = datetime.date(y[0], m[0], d[0]) d1 = datetime.date(y[i], m[i], d[i]) delta = d1 - d0 days[i]=delta.days return days def readAnssCatalog(p): # slices up an ANSS catalog loaded as a pandas dataframe and returns event info d=np.array(p) # load the dataframe into numpy as an array year=d[:,0].astype(int) # define variables from the array month=d[:,1].astype(int) day=d[:,2].astype(int) hour=d[:,3].astype(int) minute=d[:,4].astype(int) sec=d[:,5].astype(int) lat=d[:,6] lon=d[:,7] mag=d[:,8] days = countDays(len(year),year,month,day) return year,month,day,hour,minute,sec,lat,lon,mag,days # - # ## Exercise 1 # # The simplest model description of the probability that an earthquake of a given magnitude will happen is that of random occurrence. In fact when you examine the earthquake catalog it does in fact appear to be randomly distributed in time with the exception of aftershocks and a slight tendency of clustering. The Poisson distribution is often used to examine the probability of occurrence of an event within a given time window based on the catalog statistics. A Poisson process occurs randomly with no “memory” of time, size or location of any preceding event. Note that this assumption is inconsistent with the implications of elastic rebound theory applied to a single fault for large repeating earthquakes, but is consistent with the gross seismicity catalog. # # The Poisson distribution is defined as, # # $$ # p(x)=\frac{u^x e^{-u}}{x!} # $$ # # where $x$ is the number of events, and $u$ is the number of events that occur in time $\delta t$ given the rate of event occurrence $\lambda$, or $u = \lambda*\delta t$. Consider the case in which we would like to know the probability of an event of a certain magnitude occurring within a certain time. Using the Poisson distribution, we can define the probability of one or more events occuring to be, # # $$ # p(x >= 1)=1.0 - e^{-u}. # $$ # # The probability of one or more events occuring in a specified time period, for example $\delta t =$ 30 years, can be shown to be # # $$ # p(x >= 1)=1.0 - e^{-\lambda \delta t}, # $$ # # where $\lambda$ is the annual rate of event occurrence (N), taken from Gutenberg-Richter analysis. # # + [markdown] deletable=false editable=false # ### Question 1.1 # # Using the Poisson model estimate the probability of a magnitude 5+ earthquake in a given week, month, year and 5 year period using the annual rate determined from the Gutenberg-Richter relationship for the Greater San Francisco Bay Area: # # $$ # Log(N) = 3.45 - 0.830M # $$ # # <!-- # BEGIN QUESTION # name: q1.1 # --> # + # You can use this cell for questions 1 & 2, just skip ahead to the # question 2 test cell when using the question 2 magnitude # Average annual rate of occurrence of M mag+ from G-R stats mag = ... lam = ... # Time range in years # You can compute all four (week, month, 1y, 5y) probabilities # at once by making a list of time ranges in years dt = [ , , , ] # the tester expects these in ascending order [w,m,1y,5y] # The probability of an event of Magnitude M occurring P = [... for t in dt] # this is called a "list comprehension" # the tester expects plain probability values in the range [0,1]; don't convert to percentages print('The probability of an M5+ event occurring in 1 week, 1 month, 1 year, and 5 years=') print(P) # + deletable=false editable=false grader.check("q1.1") # - # _Type your answer here, replacing this text._ # + [markdown] deletable=false editable=false # ### Question 1.2 # # Compare the estimated probability of a magnitude 7.0+ earthquake for the same time periods. # # <!-- # BEGIN QUESTION # name: q1.2 # --> # + deletable=false editable=false grader.check("q1.2") # - # _Type your answer here, replacing this text._ # ## Exercise 2 # # The Poisson probability function above may also be used to determine the probability of one or more aftershocks of given magnitude range and time period following the mainshock. # # Typically an estimate of the probability of magnitude 5 and larger earthquakes is given for the period of 7 days following a large mainshock. This aftershock probability estimate is found to decay rapidly with increasing time. Reasenberg and Jones (1989) studied the statistics of aftershocks throughout California and arrived at the following equation describing the rate of occurrence of one or more events as a function of elapsed time for a generic California earthquake sequence: # # $$ # rate(t,M)=10^{(-1.67 + 0.91*(Mm - M))} * (t + 0.05)^{-1.08}, # $$ # # where Mm is the mainshock magnitude, M is magnitude of aftershocks (can be larger than Mm), and t is time in units of days. This equation describes the daily production rate of aftershocks with magnitude (M) after the mainshock with magnitude Mm. The rate is a function of time (t) and the aftershock magnitude. Elements of both the Gutenberg-Richter relationship and Omori’s Law are evident in the above equation. # # The Poisson probability of one or more aftershocks with magnitude M in range of M1 < M < M2, and time t in range t1 < t < t2 is: # # $$ # p(M1,M2,t1,t2) = 1.0 - e^{-\int_{M1}^{M2} \int_{t1}^{t2}rate(t,M)dtdM} # $$ # # The double integral in the exponent may be approximated by nested summations. That is, for each magnitude from M1 to M2 the sum of the rate function over the time period of interest (typically from t1=0 to t2=7 days) can be computed. # We can also evaluate the integral exactly for the number of earthquakes # in the magnitude range [M1,M2] and time range [t1, t2]: # # $$ # p(M1,M2,t1,t2) = 1.0 - e^{-\int_{M1}^{M2} \int_{t1}^{t2}rate(t,M)dtdM} # $$ # $$ # u = \int_{M1}^{M2} \int_{t1}^{t2}rate(t,M)dtdM # $$ # $$ # u= \int_{M1}^{M2} \int_{t1}^{t2} 10^{(-1.67 + 0.91*(Mm - M))} * (t + 0.05)^{-1.08} dtdM # $$ # $$ # u= 10^{-1.67+0.91Mm} \int_{M1}^{M2} 10^{-0.91M} dM \int_{t1}^{t2} (t + 0.05)^{-1.08} dt # $$ # $$ # u= \frac{10^{-1.67+0.91Mm}}{ln(10)(-0.91) (-0.08)} [10^{-0.91M_2} - 10^{-0.91M_1}][ (t_2 + 0.05)^{-0.08} - (t_1 + 0.05)^{-0.08}] # $$ # # Then the probability (p(x)) of having one of more earthquakes in the magnitude range [M1,M2] and time range [t1,t2] is: # # $$ # p = 1-e^{-u} # $$ # + [markdown] deletable=false editable=false # ### Question 2.1 # Use these relationships to estimate the probability of one or more magnitude 5 and larger (potentially damaging) aftershocks in the 7 days following the October 18, 1989 Loma Prieta Earthquake studied in Homework 1. # # <!-- # BEGIN QUESTION # name: q2.1 # --> # + # For the Loma Prieta earthquake, Mm = 6.9, # M1 = 5.0 and M_2 = 6.8 (since the question asks for aftershocks, # the aftershock maximum magnitude should be less than the mainshock # magnitude, otherwise it will be mainshock and Loma Prieta earthquake will # be termed "foreshock".) # P = ... # + deletable=false editable=false grader.check("q2.1") # - # _Type your answer here, replacing this text._ # + [markdown] deletable=false editable=false # ### Question 2.2 # By the end of day two how much has the probability of occurrence of a magnitude 5+ aftershock decayed? That is, what is the new 7-day probability starting on day 2? # # <!-- # BEGIN QUESTION # name: q2.2 # --> # - P = ... # + deletable=false editable=false grader.check("q2.2") # - # _Type your answer here, replacing this text._ # ### Question 2.3 # We want to compare the expected number of aftershocks per day for various magnitude thresholds (M > 2, M > 3 etc) and the observed outcome for the Loma Prieta earthquake sequence. Start by making a table of the observed aftershocks per day. # + # Load the catalog from HW #1 (provided in your current working directory) print('The magnitude-time distribution of Loma Prieta aftershocks is shown here:') data=pd.read_csv('anss_catalog_1900to2018all.txt', sep=' ', delimiter=None, header=None, names = ['Year','Month','Day','Hour','Min','Sec','Lat','Lon','Mag']) EQ_1989 = data[(data.Year>=1989) & (data.Year<1990)] #get one year of data fall_eq = EQ_1989[(EQ_1989.Month>9) & (EQ_1989.Month<=12)] #collect months of Oct, Nov and Dec LP_eq = fall_eq[(~((fall_eq.Month==10) & (fall_eq.Day<18)))] #negate events before day (assumes first month is 10) LP_eq = LP_eq[(~((LP_eq.Month==12) & (LP_eq.Day>18)))] #negate events after day (assumes last month is 12) LP_eq.reset_index(drop=True, inplace=True) year,month,day,hour,minute,sec,lat,lon,mag,days = readAnssCatalog(LP_eq) days = days[1:] # remove mainshock mag = mag[1:] # Plot of magnitude vs. day for entire catalog fig, ax = plt.subplots(figsize=(7,3)) ax.plot(days,mag,'o',alpha=0.2,markersize=5) ax.set(xlabel='Days', ylabel='Magnitude', title='Raw Event Catalog') ax.grid() ax.set_ylim([0,7]) plt.show() # + # Count aftershocks each day from 10/18 to 10/25 and make a table aftershocks_observed aftershock_days = np.arange(18,26) # day dates aftershock_mags = np.arange(2,6) # mags to count aftershocks_observed = pd.DataFrame(columns = [f'10/{d}' for d in aftershock_days], index=[f'M>={m}' for m in aftershock_mags]) # set up table # Fill in the table with the number of aftershocks per day # Hint: the easiest way to find the number of aftershocks per day in a magnitude range is to # further refine the LP_eq catalog uisng boolean statements. # - aftershocks_observed # + deletable=false editable=false grader.check("q2.3") # - # ### Question 2.4 # We want to compare the expected number of aftershocks per day for various magnitude thresholds (M > 2, M > 3 etc) and the observed outcome for the Loma Prieta earthquake sequence. Now compute the expected number of aftershocks per day from the analytical integral of the rate function. # + Mm = 6.9 aftershocks_RJ = pd.DataFrame(columns = [f'10/{d}' for d in aftershock_days], index=[f'M>={m}' for m in aftershock_mags]) # set up rate table def RJ(Mm,M1,M2,t1,t2): u = ... return int(np.round(u,0)) # fill in aftershocks_RJ table ... # - aftershocks_RJ aftershocks_observed # _Type your answer here, replacing this text._ # + deletable=false editable=false grader.check("q2.4") # + [markdown] deletable=false editable=false # ### Question 2.5 # The statistics compiled by Reasenberg and Jones also allows for the estimation of the probability of an event larger than the mainshock occurring, or in other words the probability that a given event is in fact a foreshock. Immediately following the Loma Prieta earthquake, after a lapse time of 0.1 day, what was the 7-day probability that a larger earthquake might occur? # # <!-- # BEGIN QUESTION # name: q2.5 # --> # - P = ... print('After 0.1 days, the probability that the Loma Prieta M6.9 earthquake was a foreshock to a larger earthquake was') print(str(round(P,4))) # + deletable=false editable=false grader.check("q2.5") # + [markdown] deletable=false editable=false # <!-- BEGIN QUESTION --> # # ### Question 2.6 # Practically speaking, what was the duration of the Loma Prieta sequence? Explain you answer in terms Omori statistics and the probability of aftershock occurrence with increasing time following the main shock. This is an open-ended question. You might compare pre-event and omori-decay seismicity rates. You could use Reasenberg and Jones to find a time when the probability of a felt earthquake has fallen to a low level. # # <!-- # BEGIN QUESTION # name: q2.6 # manual: true # --> # + #You can answer this by 1) comparing pre-event and omori decay rates, 2) and from reasonberg and jones finding the #time when the probability of say a M3+ earthquake falls to a low level. i.e. integrate from say t1 to t2=infinity # - # _Type your answer here, replacing this text._ # <!-- END QUESTION --> # # # # ## Exercise 3 # # As discussed in class paleoseismic trench data at Pallet Creek in southern California reveals the quasi-periodic behavior of large earthquakes on the San Andreas fault. From the very careful mapping of offset stratigraphy in the trench and carbon-14 radiometric dating these large earthquakes have been found to have occurred in 1857, 1812, 1480, 1346, 1100, 1048, 997, 797, 734, 671, 529 (see figure from <NAME>., <NAME>. and <NAME>., 1989). These earthquakes include M8 earthquakes on the southern segment of the San Andreas fault, which extends from Parkfield southward through the Big Bend into southern California. Each earthquake may not have been as large as M8, however, given the mapped slip, each event is considered to be M>7. The 1857 earthquake was M8. # # <img src="palletCreek.png"> # # Using this recurrence data we are going to examine the periodicity, plot the distribution of events in terms of binned interval time, compare the observed distribution with idealized probability density functions, and then use those functions to estimate the conditional probability of occurrence of these events. # + [markdown] deletable=false editable=false # <!-- BEGIN QUESTION --> # # ### Question 3.1 # # Given the time intervals separating the event list given above, compare the fits of a Gaussian and Lognormal probability density model. # # Gaussian: # $$ # pd(u)=\frac{e^{\frac{-(u - T_{ave})^2}{2 {\sigma}^2}}}{\sigma \sqrt{2 \pi}} # $$ # # # Log-Normal: # $$ # pd(u)=\frac{e^{\frac{-{(ln(u/T_{ave}))}^2}{2 {(\sigma / T_{ave})}^2}}}{(\frac{\sigma}{T_{ave}}) u \sqrt{2 \pi}} # $$ # # The models depend on the mean interval recurrence time ($T_{ave}$), the standard deviation to the mean ($\sigma$), and the random variable ($u$) which in this case represents the interval time. # # # To do this make a histogram with bins from 1-51, 51-101, 101-151, etc. The center dates of the bins will be 26, 76, 126, etc. Then fit each probability density model. This part is done for you. # # **Question**: Which type of distribution appears to fit the data better? # # <!-- # BEGIN QUESTION # name: q3.1 # manual: true # --> # + # hint: matplotlib.pyplot and pandas.DataFrame both have # histogram functions # Enter the event years and intervals into a table # There are other (better) ways to do this, can you think of one? print('\nInterval Times:') c = {0:[1857,45],1:[1812,332],2:[1480,134],3:[1346,246],4:[1100,52],5:[1048,51],6:[997,200],7:[797,63], 8:[734,63],9:[671,142],10:[529,0]} df = pd.DataFrame.from_dict(data=c,orient='index',columns=['Date','Intervals']) print(df) # With so few data points we can get away with manually counting the bins # Think about how you could make python do more of the work here print('\nHistogram Data:') c = {0:['0<=T<51',1],1:['51<=T<101',4],2:['101<=T<151',2],3:['151<T<201',1], 4:['201<T<251',1],5:['251<T<301',0],6:['301<T<351',1]} hf = pd.DataFrame.from_dict(data=c,orient='index',columns=['Time Range','Count']) print(hf) # Models Tave = np.mean(df.Intervals) # mean of each bin sig = np.std(df.Intervals) u=np.arange(0.1,351,1,) # number of years spanned by all bins # Gaussian uG=np.exp(-(u - Tave)**2/(2*sig**2))/(sig*np.sqrt(2*np.pi)) # Log-normal uLN=np.exp(-(np.log(u /Tave))**2/(2*(sig/Tave)**2))/((sig/Tave)*u*np.sqrt(2*np.pi)) # Plot the result plt.figure() hf.Count.plot(kind='bar'); plt.plot(u*(6/351),uG*500,'r-'); # scaling u and uG to match bar plot dimensions plt.plot(u*(6/351),uLN*500,'b-'); plt.xticks(range(len(hf)), hf['Time Range'].values, size='small',rotation=30); # - # _Type your answer here, replacing this text._ # <!-- END QUESTION --> # # # # ## Exercise 4 # In this problem we will estimate the probability of occurrence of a magnitude M8 earthquake based on the historic Pallet Creek recurrence data and the best fitting probability density model determined in Exercise 3. # # The probability that an event will occur within a given time window, for example 30-years, is the definite integral of the probability density function computed over that time window: # $$ # P(T_e <= T <= T_e + \Delta T)=\int_{T_e}^{T_e + \Delta T} pd(u)du, # $$ # # where $\Delta T$ is the length of the forecast window and $T_e$ is the time since the previous event. Note how P varies as a function of elapsed time. For any given forecast window, the value of P is small but is greatest near the mean of the distribution. Note that the Gaussian and lognormal probability density functions defined above are normalized to unit area. # ### Question 4.1 # Estimate the 10-year, 20-year and 30-year probabilities for a repeat of this large Pallet Creek fault segment event using your estimates of $T_{ave}$, $\sigma$, and $T_e=164$ years (time since 1857). # # The first step is to find the probability that the event will occur in the window, $\Delta T$, with the condition that the event did not occur before $T_e$. This effectively reduces the sample space. The result is the following normalization for the conditional probability: # # $$ # P(T_e <= T <= T_e + \Delta T | T >= T_e) = \frac{\int_{T_e}^{T_e + \Delta T} pd(u)du}{1.0 - \int_{0}^{T_e}pd(u)du} # $$ # + Te = ... Tave = ... sig = ... # suggestion: make functions to calculate uG and uLN that you can use again in later questions def calc_uG(Tave,sig,u): uG= ... return uG def calc_uLN(Tave,sig,u): uLN= ... return uLN u= ... uG = calc_uG(Tave,sig,u) uLN = ... # if we use a step size of 1 (year) then we can numerically integrate by just taking the sum pG10_Te = np.sum(uG[Te:(Te+10)])/(1-np.sum(uG[0:Te])) pLN10_Te = ... pG20_Te = ... pLN20_Te = ... pG30_Te = ... pLN30_Te = ... print("Gaussian model") print("{:.6f}, {:.6f}, {:.6f}".format(pG10_Te,pG20_Te,pG30_Te)) print("Log-normal model") print("{:.6f}, {:.6f}, {:.6f}".format(pLN10_Te,pLN20_Te,pLN30_Te)) # + [markdown] deletable=false editable=false # These are the conditional probabilities of an earthquake occurring within a time interval of $\Delta T$ years between $T_e$ And $T_e$+$\Delta T$ years given that it did not occur before time $T_e$ (for $\Delta T$ = 10 years, 20 years, and 30 years). # # <!-- # BEGIN QUESTION # name: q4.1 # --> # + deletable=false editable=false grader.check("q4.1") # + [markdown] deletable=false editable=false # <!-- BEGIN QUESTION --> # # ### Question 4.2 # # Make two plots showing (a) both pd(u) models for u = [0,500] years and (b) the 10-year, 20-year and 30-year probability windows for $T_e = [0,500]$ years (done for you). Describe the second plot. What does it tell you? # # <!-- # BEGIN QUESTION # name: q4.2 # manual: true # --> # + # Plot Models over 500 years plt.figure() plt.plot(u,uG,label='Gaussian') plt.plot(u,uLN,label='Log-Normal') plt.xlim([0,500]) plt.ylim([0,max(uLN)]) plt.legend() plt.xlabel('Interval time [years]') plt.ylabel('Number of Events') # We can integrate the definite integrals described above, using # Gaussian and Log-Normal distributions for pd(u), by np.trapz, np.sum, etc. te = range(0,5000,1) pG10 = np.zeros(np.shape(te)) pLN10 = np.zeros(np.shape(te)) for t in te: uG = calc_uG(Tave,sig,u) # print(np.shape(uG)) pG10[t] = np.sum(uG[t:t+10]) uLN = calc_uLN(Tave,sig,u) pLN10[t] = np.sum(uLN[t:t+10]) pG20 = np.zeros(np.shape(te)) pLN20 = np.zeros(np.shape(te)) for t in te: uG = calc_uG(Tave,sig,u) pG20[t] = np.sum(uG[t:t+20]) uLN = calc_uLN(Tave,sig,u) pLN20[t] = np.sum(uLN[t:t+20]) pG30 = np.zeros(np.shape(te)) pLN30 = np.zeros(np.shape(te)) for t in te: uG = calc_uG(Tave,sig,u) pG30[t] = np.sum(uG[t:t+30]) uLN = calc_uLN(Tave,sig,u) pLN30[t] = np.sum(uLN[t:t+30]) # Plot Probabilities plt.figure() plt.plot(u,pG10,'-',color='r',label='10-year Gaussian'); plt.plot(u,pLN10,'-',color='b',label='10-year Log-Normal'); plt.plot(u,pG20,'--',color='r',label='20-year Gaussian'); plt.plot(u,pLN20,'--',color='b',label='20-year Log-Normal'); plt.plot(u,pG30,':',color='r',label='30-year Gaussian'); plt.plot(u,pLN30,':',color='b',label='30-year Log-Normal'); plt.vlines(x=Tave,ymin=0,ymax=(max(pLN30)),linestyles='-',label='$T_{ave}$') plt.xlim([0,500]) plt.ylim([0,max(pLN30)]) plt.xlabel('Te [years]'); plt.ylabel('Probability'); plt.legend(); # - # _Type your answer here, replacing this text._ # + [markdown] deletable=false editable=false # <!-- END QUESTION --> # # ### Question 4.3 # # Estimate the change in the 30-year probability if the event does not occur in the next 10 years. # # <!-- # BEGIN QUESTION # name: q4.3 # --> # - Te = ... pLN30_Te = ... print(f'{pLN30_Te:.6f}') # + deletable=false editable=false grader.check("q4.3") # - # _Type your answer here, replacing this text._ # + [markdown] deletable=false editable=false # <!-- BEGIN QUESTION --> # # ### Question 4.4 # # Can you identify a weakness of this model? # # <!-- # BEGIN QUESTION # name: q4.4 # manual: true # --> # - # _Type your answer here, replacing this text._ # <!-- END QUESTION --> # # # # # Submission # # Make sure you have run all cells in your notebook in order before running the cell below, so that all images/graphs appear in the output. The cell below will generate a pdf file for you to submit. **Please save before exporting!** The exporter will not see any unsaved changes to your notebook. # !../eps130_export eps130_hw2_EQ_Forecasting_v1.0.ipynb # [Access your pdf here.](./eps130_hw2_EQ_Forecasting_v1.0.pdf) # # Remember to check that you pdf shows your most recent work before submitting.
HW2/eps130_hw2_EQ_Forecasting_v1.1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Fast GP implementations # ## Benchmarking our implementation # Let's do some timing tests and compare them to what we get with two handy GP packages: ``george`` and ``celerite``. We'll learn how to use both along the way. # <div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;"> # <h1 style="line-height:2.5em; margin-left:1em;">Exercise 1a</h1> # </div> # # Let's time how long our custom implementation of a GP takes for a rather long dataset. Create a time array of ``10,000`` points between 0 and 10 and time how long it takes to sample the prior of the GP for the default kernel parameters (unit amplitude and timescale). Add a bit of noise to the sample and then time how long it takes to evaluate the log likelihood for the dataset. Make sure to store the value of the log likelihood for later. # <div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;"> # <h1 style="line-height:2.5em; margin-left:1em;">Exercise 1b</h1> # </div> # # Let's time how long it takes to do the same operations using the ``george`` package (``pip install george``). # # The kernel we'll use is # # ```python # kernel = amp ** 2 * george.kernels.ExpSquaredKernel(tau ** 2) # ``` # # where ``amp = 1`` and ``tau = 1`` in this case. # # To instantiate a GP using ``george``, simply run # # ```python # gp = george.GP(kernel) # ``` # # The ``george`` package pre-computes a lot of matrices that are re-used in different operations, so before anything else, ask it to compute the GP model for your timeseries: # # ```python # gp.compute(t, sigma) # ``` # # Note that we've only given it the time array and the uncertainties, so as long as those remain the same, you don't have to re-compute anything. This will save you a lot of time in the long run! # # Finally, the log likelihood is given by ``gp.log_likelihood(y)`` and a sample can be drawn by calling ``gp.sample()``. # # How do the speeds compare? Did you get the same value of the likelihood (assuming you computed it for the same sample in both cases)? # <div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;"> # <h1 style="line-height:2.5em; margin-left:1em;">Exercise 1c</h1> # </div> # # ``george`` offers a fancy GP solver called the HODLR solver, which makes some approximations that dramatically speed up the matrix algebra. Instantiate the GP object again by passing the keyword ``solver=george.HODLRSolver`` and re-compute the log likelihood. How long did that take? # # (I wasn't able to draw samples using the HODLR solver; unfortunately this may not be implemented.) # <div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;"> # <h1 style="line-height:2.5em; margin-left:1em;">Exercise 2</h1> # </div> # # The ``george`` package is super useful for GP modeling, and I recommend you read over the [docs and examples](https://george.readthedocs.io/en/latest/). It implements several different [kernels](https://george.readthedocs.io/en/latest/user/kernels/) that come in handy in different situations, and it has support for multi-dimensional GPs. But if all you care about are GPs in one dimension (in this case, we're only doing GPs in the time domain, so we're good), then ``celerite`` is what it's all about: # # ```bash # pip install celerite # ``` # # Check out the [docs](https://celerite.readthedocs.io/en/stable/) here, as well as several tutorials. There is also a [paper](https://arxiv.org/abs/1703.09710) that discusses the math behind ``celerite``. The basic idea is that for certain families of kernels, there exist **extremely efficient** methods of factorizing the covariance matrices. Whereas GP fitting typically scales with the number of datapoints $N$ as $N^3$, ``celerite`` is able to do everything in order $N$ (!!!) This is a **huge** advantage, especially for datasets with tens or hundreds of thousands of data points. Using ``george`` or any homebuilt GP model for datasets larger than about ``10,000`` points is simply intractable, but with ``celerite`` you can do it in a breeze. # # Repeat the timing tests, but this time using ``celerite``. Note that the Exponential Squared Kernel is not available in ``celerite``, because it doesn't have the special form needed to make its factorization fast. Instead, use the ``Matern 3/2`` kernel, which is qualitatively similar, and which can be approximated quite well in terms of the ``celerite`` basis functions: # # ```python # kernel = celerite.terms.Matern32Term(np.log(1), np.log(1)) # ``` # # Note that ``celerite`` accepts the **log** of the amplitude and the **log** of the timescale. Other than this, you should be able to compute the likelihood and draw a sample with the same syntax as ``george``. # # How much faster did it run? # <div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;"> # <h1 style="line-height:2.5em; margin-left:1em;">Exercise 3</h1> # </div> # # Let's use ``celerite`` for a real application: fitting an exoplanet transit model in the presence of correlated noise. # # Below is a (fictitious) light curve for a star with a transiting planet. There is a transit visible to the eye at $t = 0$, which (say) is when you'd expect the planet to transit if its orbit were perfectly periodic. However, a recent paper claims that the planet shows transit timing variations, which are indicative of a second, perturbing planet in the system, and that a transit at $t = 0$ can be ruled out at 3 $\sigma$. Your task is to verify this claim. # # Assume you have no prior information on the planet other than the transit occurs in the observation window, the depth of the transit is somewhere in the range $(0, 1)$, and the transit duration is somewhere between $0.1$ and $1$ day. You don't know the exact process generating the noise, but you are certain that there's correlated noise in the dataset, so you'll have to pick a reasonable kernel and estimate its hyperparameters. # # # Fit the transit with a simple inverted Gaussian with three free parameters: # # ```python # def transit_shape(depth, t0, dur): # return -depth * np.exp(-0.5 * (t - t0) ** 2 / (0.2 * dur) ** 2) # ``` # # Read the celerite docs to figure out how to solve this problem efficiently. # # *HINT: I borrowed heavily from [this tutorial](https://celerite.readthedocs.io/en/stable/tutorials/modeling/), so you might want to take a look at it...* import matplotlib.pyplot as plt t, y, yerr = np.loadtxt("data/sample_transit.txt", unpack=True) plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0) plt.xlabel("time") plt.ylabel("relative flux");
gps/03-FastGPs.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] pycharm={"name": "#%% md\n"} # # 2D numerical experiment of the inverted pendulum # + pycharm={"name": "#%%\n"} import gpytorch import torch from src.TVBO import TimeVaryingBOModel from src.objective_functions_LQR import lqr_objective_function_4D # + [markdown] pycharm={"name": "#%% md\n"} # ### Hyperparameters # + pycharm={"name": "#%%\n"} # parameters regarding the objective function objective_function_options = {'objective_function': lqr_objective_function_4D, # optimize the 4D feedback gain 'spatio_dimensions': 4, # approximate noise level from the objective function 'noise_lvl': 0.005, # feasible set for the optimization 'feasible_set': torch.tensor([[-3.5, -7, -62.5, -5], [-1.5, -4, -12.5, -1]], dtype=torch.float), # initial feasible set consisting of only controllers 'initial_feasible_set': torch.tensor([[-3, -6, -50, -4], [-2, -4, -25, -2]], dtype=torch.float), # scaling \theta to have approximately equal lengthscales in each dimension 'scaling_factors': torch.tensor([1 / 8, 1 / 4, 3, 1 / 4])} # parameters regarding the model model_options = {'constrained_dimensions': None, # later specified for each variation 'forgetting_type': None, # later specified for each variation 'forgetting_factor': None, # later specified for each variation # specification for the constraints (cf. Agrell 2019) 'nr_samples': 10000, 'xv_points_per_dim': 4, # VOPs per dimensions 'truncation_bounds': [0, 2], # specification of prior 'prior_mean': 0., 'lengthscale_constraint': gpytorch.constraints.Interval(0.5, 6), 'lengthscale_hyperprior': gpytorch.priors.GammaPrior(6, 1 / 0.3), 'outputscale_constraint_spatio': gpytorch.constraints.Interval(0, 20), 'outputscale_hyperprior_spatio': None, } # - # ### Specify variations # + pycharm={"name": "#%%\n"} # UI -> UI-TVBO, B2P_OU -> TV-GP-UCB variations = [ # in the paper color blue {'forgetting_type': 'UI', 'forgetting_factor': 0.03, 'constrained_dims': []}, # in the paper color red {'forgetting_type': 'B2P_OU', 'forgetting_factor': 0.03, 'constrained_dims': []}, ] # + [markdown] pycharm={"name": "#%% md\n"} # ### Start optimization # + pycharm={"name": "#%%\n", "is_executing": true} trials_per_variation = 25 # number of different runs for variation in variations: # update variation specific parameters model_options['forgetting_type'] = variation['forgetting_type'] model_options['forgetting_factor'] = variation['forgetting_factor'] model_options['constrained_dimensions'] = variation['constrained_dims'] tvbo_model = TimeVaryingBOModel(objective_function_options=objective_function_options, model_options=model_options, post_processing_options={}, add_noise=False, ) # noise is added during the simulation of the pendulum # specify name to safe results method_name = model_options['forgetting_type'] forgetting_factor = model_options['forgetting_factor'] string = 'constrained' if model_options['constrained_dimensions'] else 'unconstrained' NAME = f"{method_name}_2DLQR_{string}_forgetting_factor_{forgetting_factor}".replace('.', '_') # run optimization for trial in range(1, trials_per_variation + 1): tvbo_model.run_TVBO(n_initial_points=30, time_horizon=300, safe_name=NAME, trial=trial, ) print('Finished.')
LQR_4D.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.5 64-bit # name: python38564bit9cc922a0cab94d29981091902f2a8012 # --- from Player import Player from Game import CodeConquerorGame as ccg from config import game_conf userList = [Player(name = "A" + str(i), display_mode = game_conf.display_mode) for i in range(100)] game = ccg(userList) silent_table = [] # simulation for i in range(10000): game.play() silent_table += game.compute_silent() mn_silent = min(silent_table) mx_silent = max(silent_table) avg_silent = sum(silent_table)//len(silent_table) print(f"Minimum Silent Time : {mn_silent//60}m {mn_silent%60}s") print(f"Maximum Silent Time : {mx_silent//60}m {mx_silent%60}s") print(f"Average Silent Time : {avg_silent//60}m {avg_silent%60}s") import matplotlib.pyplot as plt silent_min = [x//60 for x in silent_table] # silent minute plt.hist(silent_min , color="blue", edgecolor="black",bins=30) plt.title('Histogram of Silent Duration') plt.xlabel('Duration (min)') plt.ylabel('Frequency') mn_end = min(game.end_time) mx_end = max(game.end_time) avg_end = sum(game.end_time)//len(game.end_time) print(f"Minimum End Time : {mn_end//60}m {mn_end%60}s") print(f"Maximum End Time : {mx_end//60}m {mx_end%60}s") print(f"Average End Time : {avg_end//60}m {avg_end%60}s") end_min = [x//60 for x in game.end_time] plt.hist(end_min , color="blue", edgecolor="black",bins=30) plt.title('Histogram of Game Duration') plt.xlabel('Duration (min)') plt.ylabel('Frequency') print(f"Give up Match (No move left) : {game.nomove_cnt}") print(f"Time up Match : {game.timeup_cnt}")
summary.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.8 ('oshwind') # language: python # name: python3 # --- import numpy as np import pandas as pd import xarray as xr import matplotlib.pyplot as plt import seaborn as sns import cartopy.crs as ccrs from matplotlib import cm from matplotlib import ticker from cmaqpy.runcmaq import CMAQModel import cmaqpy.prepemis as emis # + # Read in the dispatch data file gen_df = pd.read_csv(f'../cmaqpy/data/ny_emis/ed_output/thermal_with_renewable_20160805_20160815.csv', index_col=0, parse_dates=True) gen_da = gen_df.sum(axis=1) # Read in the solar data file solar_df = pd.read_csv(f'/share/mzhang/jas983/wrf_data/wrf2power/data/solar_gen_d02_2016-08-05_00:00:00.csv', index_col=0) solar_df = solar_df.drop(columns=['latitude', 'longitude']) solar_da = solar_df.sum() # Converting the index as date solar_da.index = pd.to_datetime(solar_da.index) # Converting from W to MW solar_da = solar_da/(10 ** 6) # Read in the wind data file wind_df = pd.read_csv(f'/share/mzhang/jas983/wrf_data/wrf2power/data/wind_gen_d02_2016-08-05_00:00:00.csv', index_col=0, parse_dates=True) wind_da = wind_df.sum(axis=1) # Converting from kW to MW wind_da = wind_da/(10 ** 3) # - solar_da.plot(ylabel='Solar Generation (MW)') wind_da.plot(ylabel='Wind Generation (MW)') gen_da.plot(ylabel='Fossil Generation (MW)') ren_frac = (solar_da + wind_da)/(solar_da + wind_da + gen_da) * 100 ren_frac.plot(ylabel='Added Renewables Fraction (%)') # + date='2016-08-13' plot_df = ren_frac.loc[pd.Timestamp(f'{date} 00'):pd.Timestamp(f'{date} 23')] f = plt.figure(figsize=(7,3)) p = plot_df.plot(color=['purple'], linewidth=2, style=['x'], ax=f.gca()) # plt.title(f'{ed_gen["NYISO Name"][gen_idx]}') plt.ylabel('Added Renewables Fraction (%)') # plt.savefig(f'../cmaqpy/data/plots/ren_frac_{date}.png', dpi=300, transparent=True, bbox_inches='tight') plt.show() # - # Read in NY Simple Net generation gen_file = '../cmaqpy/data/ny_emis/ed_output/thermal_with_renewable_20160805_20160815.csv' gen_base_file = '../cmaqpy/data/ny_emis/ed_output/thermal_without_renewable_20160805_20160815.csv' lu_file = '../cmaqpy/data/ny_emis/ed_output/RGGI_to_NYISO.csv' ed_gen = emis.fmt_like_camd(data_file=gen_file, lu_file=lu_file) ed_gen_base = emis.fmt_like_camd(data_file=gen_base_file, lu_file=lu_file) def gen_and_ren_frac(gen_idx, gen_df1, gen_df2, ren_frac_df, date=None, column_names=['Base Case', 'w/ Renewables'], figsize=(7,7), colors=['purple','orange'], linewidth=2, linestyles=['-','-.'], titlestr1='(baseload/load following)', ylabelstr1='Power (MW)', ylabelstr2='Added Renewables Fraction (%)', savefig=False, outfile_pfix='../cmaqpy/data/plots/gen_profs_'): """ Compares changes in generation or emissions for two user-specified units. """ if date is None: change_df1 = pd.concat([gen_df1.iloc[gen_idx,5:], gen_df2.iloc[gen_idx,5:]], axis=1) else: change_df1 = pd.concat([gen_df1.loc[gen_idx,pd.Timestamp(f'{date} 00'):pd.Timestamp(f'{date} 23')], gen_df2.loc[gen_idx,pd.Timestamp(f'{date} 00'):pd.Timestamp(f'{date} 23')]], axis=1) change_df1.columns = column_names # Converting the index as date change_df1.index = pd.to_datetime(change_df1.index) ren_frac_df = ren_frac_df.loc[pd.Timestamp(f'{date} 00'):pd.Timestamp(f'{date} 23')] _, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, sharex=True, figsize=figsize) change_df1.plot(color=colors, linewidth=linewidth, style=linestyles, ax=ax1) ren_frac_df.plot(color=colors[0], linewidth=linewidth, style=['x'], ax=ax2) # ax2.get_legend().remove() ax1.legend(loc='lower center', bbox_to_anchor=(0.5, -0.18), ncol=2) # ax1.legend(loc='lower center', ncol=2) ax1.set_title(f'{gen_df1["NYISO Name"][gen_idx]} {titlestr1}') # ax2.set_title(f'{gen_df1["NYISO Name"][gen_idx2]} {titlestr2}') ax1.set_ylabel(ylabelstr1) ax2.set_ylabel(ylabelstr2) if savefig: plt.savefig(f'{outfile_pfix}{gen_df1["NYISO Name"][gen_idx]}.png', dpi=300, transparent=True, bbox_inches='tight') else: plt.show() gen_and_ren_frac(2, ed_gen_base, ed_gen, ren_frac, date='2016-08-12', column_names=['Base Case', 'w/ Renewables'], figsize=(7,7), colors=['purple','orange'], linewidth=2, linestyles=['-','-.'], titlestr1='(baseload/load following)', ylabelstr1='Power (MW)', ylabelstr2='Added Renewables Fraction (%)', savefig=True, outfile_pfix='../cmaqpy/data/plots/gen_ren_frac_')
notebooks/Vis renewable fraction dev.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # New Languages for NLP (Yiddish) # Purpose - linguistic diversity in annotating linguistic data and training statistical language models https://spacy.io/usage/models # Language features # - orthographical, # - morphological, # - semantic # # Machines can # - parse a text into parts # - tag the linguistic attributes of those parts # - perform entity recognition # - perform sentiment analysis # #### Main concepts # ##### PoS # - Noun # - Verb # - Pronoun # - Adjective # - Adverb # ##### Lemmatization: making, made -> make # ##### Stemming: make -> mak (Porter, Krovetz) # # To begin, let's import spaCy and the create_object script. This includes as `create_object()` function that will generate a generic language object in the folder `new_lang/{language_name}`. All of the object's files are contained there. # https://spacy.io/usage/linguistic-features # + # Install needed util files if missing import spacy if 'google.colab' in str(get_ipython()): # !mkdir util # !wget -O /content/util/corpus.py https://raw.githubusercontent.com/New-Languages-for-NLP/cadet-the-notebook/main/util/corpus.py # !wget -O /content/util/create_object.py https://raw.githubusercontent.com/New-Languages-for-NLP/cadet-the-notebook/main/util/create_object.py # !wget -O /content/util/export.py https://raw.githubusercontent.com/New-Languages-for-NLP/cadet-the-notebook/main/util/export.py # !wget -O /content/util/tokenization.py https://raw.githubusercontent.com/New-Languages-for-NLP/cadet-the-notebook/main/util/tokenization.py #colab currently uses spacy 2.2.4, need 3 if '3' not in spacy.__version__[:1]: # !pip install spacy --upgrade import spacy from util.create_object import create_object spacy.__version__ # - from spacy.lang.en import English txt = "I am reading this book, and enjoying it a lot." nlp = English() # spacy.blank(“en”) doc = nlp(txt) for token in doc: print(token.text, token.lemma_, token.pos_, token.is_stop) nlp = spacy.load("en_core_web_sm") doc = nlp(txt) for token in doc: print(token.text, token.lemma_, token.pos_, token.is_stop) txt1 = "I am reading this book, and enjoying it a lot." txt2 = "I was reading that book, but did not enjoy it too much." doc1 = nlp(txt1) doc2 = nlp(txt2) doc1.similarity(doc2) nlp = spacy.load("en_core_web_lg") doc1 = nlp(txt1) doc2 = nlp(txt2) doc1.similarity(doc2) # + lang_name = 'Yiddish' lang_code ='yi' direction = 'rtl' has_case = False has_letters = True create_object(lang_name, lang_code, direction, has_case, has_letters) # - # !ls ./new_lang # To assess how the tokenizer defaults will work with your language, add example sentences to the [`examples.py`](./new_lang/examples.py) file. from IPython.core.display import HTML from util.tokenization import tokenization HTML(tokenization(lang_name)) # To adjust the tokenizer you can add unique exceptions or regular exceptions to the [tokenizer_exceptions.py](./new_lang/tokenizer_exceptions.py) file # # - To join two tokens, add an exception `{'BIG YIKES':[{ORTH: 'BIG YIKES'}]}` # - To split a token in two, `{'Kummerspeck':[{ORTH:"Kummer"},{ORTH:"speck"}]}` # # Note in both cases that we add a dictionary. The key is the string to match on, with a list of tokens. In the first case we had a single token where we would otherwise have two and vice versa. You can find more details in the spaCy documentation and [here](https://new-languages-for-nlp.github.io/course-materials/w1/tokenization.html). # ## Lookups # # The `create_object()` function creates a `new_lang/lookups` directory that contains three files. These are simple json lookups for unambiguous pos, lemma and features. You can add your data to these files and automatically update token values. Keep in mind that you'll need to find a balance between the convenience of automatically annotating tokens and the inconvenience of having to correct machine errors. Once you're done updating the files with your existing linguistic data, proceed to the next step. # # https://universaldependencies.org/u/pos/index.html # ## Texts # # For us to identify frequent tokens for automatic annotation, you'll need to provide texts. Place your machine-readable utf-8 text files in the `new_lang/texts` folder. # ##### Text Normalization # # https://en.wikipedia.org/wiki/Hebrew_(Unicode_block) # # **Diphthongs**: Rule of thumb - expansion # - chr(0x05f0): װ -> chr(0x05d5) + chr(0x05d5) # - chr(0x05f1): ױ -> chr(0x05d5) + chr(0x05d9) # - chr(0x05f2): ײ -> chr(0x05d9) + chr(0x05d9) # # **Special characters**: # תל־אַביב vs תל-אביב # - chr(0x05be) # - chr(0x05f3) vs chr(39) # - chr(0x05f3) = '׳' # - chr(39) = "'" # # **Diacritics - approaches** # # + from util.corpus import make_corpus # https://www.online-toolz.com/tools/text-unicode-entities-convertor.php make_corpus(lang_name) # - # The output of make_corpus is a json file at [`new_lang/corpus_json/tokens.json`](./new_lang/corpus_json/tokens.json). For each token, you'll find a `text` key for the token's string as well as keys for pos_, lemma_ and ent_type_. Keep in mind that this system is not able to process ambiguous lookups. Only enter data for tokens or spans with very little semantic variation. # + import srsly from pathlib import Path def get_percentages(): thirds = [] halfs = [] two_thirds = [] tokens = srsly.read_json(Path.cwd() / 'new_lang' / 'corpus_json' / 'tokens.json') tokens = srsly.json_loads(tokens) for token in tokens: if token['rank'] == 1: total_tokens = token['count'] + token['remain'] percent_annotated = 1 - (token['remain'] / total_tokens) percent_annotated = int((percent_annotated * 100)) if percent_annotated == 33: thirds.append(token) if percent_annotated == 50: halfs.append(token) if percent_annotated == 66: two_thirds.append(token) return thirds[0], halfs[0], two_thirds[0] #let percent_annotated = 1 - (token.remain / total_tokens); # let percent_annotated_str = (percent_annotated*100).toFixed(0); third, half, two_thirds = get_percentages() print(f""" 🍉 To bulk annotate 33% of the corpus, add data to first {third['rank']} tokens 🍅 To bulk annotate 50% of the corpus, add data to first {half['rank']} tokens 🍒 To bulk annotate 66% of the corpus, add data to first {two_thirds['rank']} tokens """) # - # Next we will export your texts and lookups in a TSV file in the CoreNLP format. This data can then be loaded into INCEpTION for annotation work # + from util.export import download download(lang_name) # - # When you have completed all annotation work in INCEpTION, you're ready to begin model training (CNN). This final step will export your spaCy language object. From there you can follow the spaCy documentation on model training! # # 1. package the object into a usable folder, that can be moved, and initialized using projects # 2. nlp.to_disk("/tmp/checkpoint")? # # Create a spaCy project file for your project. from util.project import make_project # + import shutil from util.project import make_project new_lang = Path.cwd() / "new_lang" make_project(lang_name,lang_code) #make export directory export_path = Path.cwd() / lang_name #shutil.make_archive("zipped_sample_directory", "zip", "sample_directory") shutil.make_archive(str(export_path), 'zip', str(new_lang)) zip_file = Path.cwd() / (lang_name + '.zip') print(f'created file {zip_file}') # -
research/cadet-notebook/New_Language.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import psycopg2 import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import pprint import os import json import time import pickle import requests conn = psycopg2.connect( host = 'project.cgxhdwn5zb5t.us-east-1.rds.amazonaws.com', port = 5432, user = 'postgres', password = '<PASSWORD>', database = 'postgres') cursor = conn.cursor() # - DEC2FLOAT = psycopg2.extensions.new_type( psycopg2.extensions.DECIMAL.values, 'DEC2FLOAT', lambda value, curs: float(value) if value is not None else None) psycopg2.extensions.register_type(DEC2FLOAT) # + cursor.execute('SELECT * FROM household_181') rows = cursor.fetchall() col_names = [] for elt in cursor.description: col_names.append(elt[0]) household_181 = pd.DataFrame(data=rows, columns=col_names ) # - household_181.info() household_181.dtypes household_181.describe() #find the index of target column household_181.columns.get_loc("ratingnh") #drop the target column df = household_181.drop(columns=['ratingnh']) #total column should be 180 since we drop target column df.describe() # + from sklearn.preprocessing import LabelEncoder # Extract our X and y data #X = df.iloc[:,0:50] X = df y = household_181["ratingnh"] # + #identify the feature importance from sklearn.ensemble import RandomForestClassifier from yellowbrick.model_selection import FeatureImportances model = RandomForestClassifier(n_estimators=100) viz = FeatureImportances(model) viz.fit(X, y) viz.show() # + # Create a dictionary that will map the feature name with its feature importance feats = {} # Loop through Feature for feature, importance in zip(X.columns, model.feature_importances_): feats[feature] = importance # Add the name/value pair # View our dictionary, but sorted in order of importance sorted(feats.items(), key=lambda x: x[1], reverse=True) # + # We will pick top 50 features to be our important features features = sorted(feats.items(), key=lambda x: x[1], reverse=True)[:50] imp_features=[] for f in features: imp_features.append(f[0]) #imp_features = tuple(imp_features) # - LABEL_MAP = { -6: "Not Happy", -9: "Not Happy", 1: "Not Happy", 2: "Not Happy", 3: "Not Happy", 4: "Not Happy", 5: "Not Happy", 6: "Not Happy", 7: "Happy", 8: "Happy", 9: "Happy", 10: "Happy" } # Convert class labels into text household_181["ratingnh"] = household_181["ratingnh"].map(LABEL_MAP) # + from sklearn.preprocessing import LabelEncoder # Extract our X and y data X = df[imp_features] y = household_181["ratingnh"] # Encode our target variable encoder = LabelEncoder().fit(y) y = encoder.transform(y) print(X.shape, y.shape) # + from yellowbrick.features import RadViz _ = RadViz(classes=encoder.classes_, alpha=0.35).fit_transform_show(X, y) # - # ## Regression # + from sklearn import metrics from sklearn.model_selection import KFold from sklearn.linear_model import SGDClassifier from sklearn import linear_model # - def get_internal_params(model): for attr in dir(model): if attr.endswith("_") and not attr.startswith("_"): print(attr, getattr(model, attr)) # + #Perform SGD Classifier model = SGDClassifier(loss="hinge", penalty="l2", max_iter=100) model.fit(X,y) get_internal_params(model) # + # Perform Lasso Classifier Classification model = linear_model.Ridge(alpha=1) model.fit(X,y) get_internal_params(model) # -
Household ML.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: ml # language: python # name: ml # --- # # "Authorship Identification: Part-1 (The baseline)" # > "Given a large number of authors of texts, the goal is to correctly identify the author of a given text. This problem is considered hard especially when less data is available and texts are small (typically with 500 words or less). In this part I have setup BiLSTM model as baseline for the task." # # - toc: false # - sticky_rank: 1 # - branch: master # - badges: true # - comments: true # - categories: [project, machine-learning, notebook, python] # - image: images/vignette/base.jpg # - hide: false # - search_exclude: false # # Abstract # Authorship identification is the task of identifying the author of a given text from a set of suspects. The main concern of this task is to define an appropriate characterization of texts that captures the writing style of authors.<br/> # As a published author usually has a unique writing style in his/her work. The writing style is mostly context independent and is discernible by a human reader.<br/> # In previous studies various stylometric models have been suggested for the aforementioned task e.g. BiLSTM, SVM, Logistic Regression, several other Deep Learning Models. But most of them fail or show poor results for either short passages or long passages and none of them were able to perform well in both cases.<br/> # *Previously the best performance at authroship identification is achieved by LSTM and GRU model*.<br/> # # ## Baseline Model # For setting up a baseline for the task, I used a **combination of stack of 1D-CNN with BiLSTM** which gives a **validation accuracy: ~62% and test accuracy: ~54%** while using a fairly simple BiDirectional LSTM and CNN Architecture. And unsurprisingly the results of baseline model are pretty close to the past best performing model without any type of Tuning. # # Model uses pretrained GloVe word Embeddings for text representation. GloVe uncased word embeddings were trained using Wikipedia 2014 + Gigaword 5 and it consists of 6B tokens, 400K vocab. Embeddings are available as 50d, 100d, 200d, & 300d vectors. [[Source]](http://nlp.stanford.edu/data/glove.6B.zip) # # >In most of the cases training word embeddings for the downstream task is a good idea and gives better results, albeit because of computational requirements I have used pretrained GloVE embeddings. # # Utilitiy functions # These are the functions that I'll be using to do redundant tasks in this part like: # 1. Plotting train history # 2. Saving figures # 3. Saving and Loading pickle objects # # take a look if interested! # + #collapse import matplotlib.pyplot as plt import numpy as np import pickle from pathlib import Path import keras def save_object(obj: object, file_path: Path) -> None: """ Save a python object to the disk and creates the file if does not exists already. Args: file_path - Path object for pkl file location obj - object to be saved Returns: None """ if not file_path.exists(): file_path.touch() print(f"pickle file {file_path.name} created successfully!") else: print(f"pickle file {file_path.name} already exists!") with file_path.open(mode='wb') as file: pickle.dump(obj, file, protocol=pickle.HIGHEST_PROTOCOL) print(f"object {type(obj)} saved to file {file_path.name}!") def load_object(file_path: Path) -> object: """ Loads the pickle object file from the disk. Args: file_path - Path object for pkl file location Returns: object """ if file_path.exists(): with file_path.open(mode='rb') as file: print(f"loaded object from file {file_path.name}") return pickle.load(file) else: raise FileNotFoundError def vectorize_sequence(sequences: np.ndarray, dimension: int = 10000): """ Convert sequences into one-hot encoded matrix of dimension [len(sequence), dimension] Args: sequences - ndarray of shape [samples, words] dimension = number of total words in vocab Return: vectorized sequence of shape [samples, one-hot-vecotor] """ # Create all-zero matrix results = np.zeros((len(sequences), dimension)) for (i, sequence) in enumerate(sequences): results[i, sequence] = 1. return results def plot_history( history: keras.callbacks.History, metric: str = 'acc', save_path: Path = None, model_name: str = None ) -> None: """ Plots the history of training of a model during epochs Args: history: model history - training history of a model metric - Plots: 1. Training and Validation Loss 2. Training and Validation Accuracy """ f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5)) ax1.plot(history.epoch, history.history.get( 'loss'), "o", label='train loss') ax1.plot(history.epoch, history.history.get( 'val_loss'), '-', label='val loss') ax2.plot(history.epoch, history.history.get( metric), 'o', label='train acc') ax2.plot(history.epoch, history.history.get( f"val_{metric}"), '-', label='val acc') ax1.set_xlabel("epoch") ax1.set_ylabel("loss") ax2.set_xlabel("epoch") ax2.set_ylabel("accuracy") ax1.set_title("Loss") ax2.set_title("Accuracy") f.suptitle(f"Training History: {model_name}") ax1.legend() ax2.legend() if save_path is not None: f.savefig(save_path) # - # # Structure of notebook # 1. [Data Preprocessing](#Data-Preprocessing)<br/> # a. [Load dataset](#Load-Dataset)<br/> # b. [Text Vectorization](#Text-Vectorization)<br/> # c. [Configure Dataset for faster training](#Configure-dataset)<br/> # <br/> # 2. [Modelling](#Modelling)<br/> # a. [Parse Glove Embeddings](#Embedding-parsing)<br/> # b. [Define ConvnetBiLSTM model](#Model)<br/> # c. [Load Embedding matrix](#load-embedding-matrix)<br/> # <br/> # 3. [Training and Evaluation](#Training-and-Evaluation) # # Data Preprocessing # Dataset: UCI C50 Dataset(small subset of origin RCV1 dataset) [[Source]](https://archive.ics.uci.edu/ml/datasets/Reuter_50_50#) # C50 dataset is widely used for authorship identification.<br/> # Dataset Specifications: # >Catagories/Authors: 50 <br/> # >Datapoints per class: 50 <br/> # >Total Datapoints: 5000 (4500 train, 500 test) # ## Loading and Preprocessing the dataset # 1. 80-20 train and validation split and 500 holdout datapoints.<br/> # 2. I'll use `text_dataset_from_directory` utility of keras library to load dataset which is faster than manually reading the text.<br/> # 3. In the preprocessing step, numbers and special characters except `{.} {,} and {'}` are removed from the dataset. # + # collapse # Import Python Regular Expression library import re src_dir = Path('data/C50_raw/') src_test_dir = src_dir / 'test' src_train_dir = src_dir / 'train' dst_dir = Path('data/C50/') dst_test_dir = dst_dir / 'test' dst_train_dir = dst_dir / 'train' test_sub_dirs = src_test_dir.iterdir() train_sub_dirs = src_train_dir.iterdir() for i, author in enumerate(test_sub_dirs): author_name = author.name dst_author = dst_test_dir / author_name dst_author.mkdir() for file in author.iterdir(): file_name = file.name dst = dst_author / file_name raw_text = file.read_text(encoding='utf-8') out_text = re.sub("[^A-Za-z.',]+", " ", raw_text) dst.write_text(out_text, encoding='utf-8') for i, author in enumerate(train_sub_dirs): author_name = author.name dst_author = dst_train_dir / author_name dst_author.mkdir() for file in author.iterdir(): file_name = file.name dst = dst_author / file_name raw_text = file.read_text(encoding='utf-8') out_text = re.sub("[^A-Za-z.',]+", " ", raw_text) dst.write_text(out_text, encoding='utf-8') # - # collapse nfiles_test = len(list(dst_test_dir.glob("*/*.txt"))) nfiles_train = len(list(dst_train_dir.glob("*/*.txt"))) print(f"Number of files in processed test dataset: {nfiles_test}") print(f"Number of files in processed train dataset: {nfiles_train}") # ## Load Dataset # + #collapse import keras import numpy as np import tensorflow as tf from keras import models, layers from keras.preprocessing import text_dataset_from_directory from keras.layers.experimental.preprocessing import TextVectorization import keras.callbacks as cb ds_dir = Path('data/C50/') train_dir = ds_dir / 'train' test_dir = ds_dir / 'test' seed = 123 batch_size = 32 train_ds = text_dataset_from_directory( train_dir, label_mode='categorical', seed=seed, shuffle=True, batch_size=batch_size, validation_split=0.2, subset='training' ) val_ds = text_dataset_from_directory( train_dir, label_mode='categorical', seed=seed, shuffle=True, batch_size=batch_size, validation_split=0.2, subset='validation') test_ds = text_dataset_from_directory( test_dir, label_mode='categorical', seed=seed, shuffle=True, batch_size=batch_size) # - # ## Inspect dataset class_names = test_ds.class_names class_names = np.asarray(class_names) print(f"nclasses: {len(class_names)}") print(f'first 4 classes/users: {class_names[:4]}') for texts, labels in train_ds.take(1): print("Shape of texts", texts.shape) print(f'Class of 2nd data point: {class_names[labels.numpy()[1].astype(bool)]}') #collapse MAX_LEN_TRAIN = 0 MAX_LEN_TEST = 0 for file in test_dir.glob('*/*.txt'): with file.open() as f: seq_len = 0 for line in f.readlines(): seq_len += len(line.split()) # print(seq_len) if MAX_LEN_TEST < seq_len: MAX_LEN_TEST = seq_len for file in train_dir.glob('*/*.txt'): with file.open() as f: seq_len = 0 for line in f.readlines(): seq_len += len(line.split()) # print(seq_len) if MAX_LEN_TRAIN < seq_len: MAX_LEN_TRAIN = seq_len print(f"length of largest article in train dataset: {MAX_LEN_TRAIN}") print(f"length of largest article in test dataset: {MAX_LEN_TEST}") #collapse for batch, label in iter(val_ds): index = np.argmax(label.numpy(), axis=1).astype(np.int) print(f'Users of first batch: {class_names[index]}') break # ## Text Vectorization # Text vectorization includes the following tasks using `TextVectorization` layer: # 1. `Standardization` # 2. `Tokenization` # 3. `Vectorization` # ### Initial run of the vectorization layers # 1. Make a text-only dataset (without labels), then call adapt # 2. Do not call adapt on test dataset to prevent data-leak # 3. train and save vocab to disk # # Note: Use it only for the first time or if vocab is not saved # + #collapse from utils import save_object ### Define vectorization layers VOCAB_SIZE = 34000 MAX_LEN = 1450 vectorize_layer = TextVectorization( max_tokens=VOCAB_SIZE, output_mode='int', output_sequence_length=MAX_LEN ) # Train the layers to learn a vocab train_text = train_ds.map(lambda text, lables: text) vectorize_layer.adapt(train_text) # Save the vocabulary to disk # Run this cell for the first time only vocab = vectorize_layer.get_vocabulary() vocab_path = Path('vocab/vocab_C50.pkl') save_object(vocab, vocab_path) vocab_len = len(vocab) print(f"vocab size of vectorizer: {vocab_len}") # - # ### Vectorization layers from saved vocab # Only run after first saving the vocabulary to the disk! # + # collapse # # Load vocab # from utils import load_object # vocab_path = Path('vocab/vocab_C50.pkl') # vocab = load_object(vocab_path) # VOCAB_SIZE = 34000 # MAX_LEN = 1500 # vectorize_layer = TextVectorization( # max_tokens=VOCAB_SIZE, # output_mode='int', # output_sequence_length=MAX_LEN, # vocabulary=vocab # ) # - # ## Configure dataset # This is the final step of the data-processing pipeline where the text is converted into vectors, and then to train the model faster, dataset is prefetched and cached before each epoch. # **Prefetch is especially efficient when training on a GPU** as the CPU fetch and cache the dataset while GPU is training in parallel and after finishing current epoch GPU doesn't have to wait for CPU to load the data for next epoch. # + def vectorize(text, label): text = tf.expand_dims(text, -1) return vectorize_layer(text), label AUTOTUNE = tf.data.AUTOTUNE def prepare(ds): return ds.cache().prefetch(buffer_size=AUTOTUNE) train_ds = train_ds.map(vectorize) val_ds = val_ds.map(vectorize) test_ds = test_ds.map(vectorize) # Configure the datasets for fast training train_ds = prepare(train_ds) val_ds = prepare(val_ds) test_ds = prepare(test_ds) # - # # Modelling # # Now comes the most interesting part!<br/> # Below cell first creates an `emb_index` dictionary which **maps words to a 100-Dimensional embedding vector** and then `emb_matrix` is created which maps each word in **C50 vocab** to corresponding embedding. # + ##### Pretrained Glove Embeddings ##### ## Parse the weights from utils import load_object emb_dim = 100 glove_file = Path(f'vocab/glove/glove.6B.{emb_dim}d.txt') emb_index = {} with glove_file.open(encoding='utf-8') as f: for line in f.readlines(): values = line.split() word = values[0] coef = values[1:] emb_index[word] = coef ##### Getting embedding weights ##### vocab = load_object(Path('vocab/vocab_C50.pkl')) emb_matrix = np.zeros((VOCAB_SIZE, emb_dim)) for index, word in enumerate(vocab): # get coef of word emb_vector = emb_index.get(word) if emb_vector is not None: emb_matrix[index] = emb_vector print(f"Embedding Dimensionality: {emb_matrix.shape}") # + import keras.backend as K from keras.utils import plot_model from keras import regularizers K.clear_session() lstm_model = models.Sequential([ layers.Embedding(VOCAB_SIZE, emb_dim, input_shape=(MAX_LEN,)), layers.SpatialDropout1D(0.3), layers.Conv1D(256, 11, activation='relu'), layers.MaxPooling1D(7), layers.Dropout(0.4), layers.BatchNormalization(), layers.Bidirectional(layers.LSTM(128, return_sequences=True)), layers.Dropout(0.3), layers.BatchNormalization(), layers.Conv1D(128, 3, activation='relu'), layers.MaxPooling1D(3), layers.Dropout(0.3), layers.BatchNormalization(), layers.Conv1D(64, 3, activation='relu'), layers.GlobalMaxPooling1D(), layers.Dropout(0.3), layers.BatchNormalization(), layers.Dense(128, activation='relu', kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4)), layers.BatchNormalization(), layers.Dense(50, activation='softmax') ]) # - # ## Detailed Architecture of the model # Load the `emb_matrix` as wieghts of the embedding layer of model and then set them as non-trainable. lstm_model.layers[0].set_weights([emb_matrix]) lstm_model.layers[0].trainable = False plot_model(lstm_model, show_layer_names=False, show_shapes=True, to_file="models/base.png") # # Training the model # + # from keras.optimizers import RMSprop K.clear_session() # optim = RMSprop(lr=1e-2) ################# Configure Callbacks ################# # Early Stopping es = cb.EarlyStopping( monitor='val_loss', min_delta=5e-4, patience=5, verbose=1, mode='auto', restore_best_weights=True ) # ReduceLROnPlateau reduce_lr = cb.ReduceLROnPlateau( monitor='val_loss', factor=0.4, patience=3, verbose=1, mode='auto', min_delta=5e-3, min_lr=1e-6 ) # Tensorboard tb = cb.TensorBoard( log_dir="./logs", write_graph=True, ) ################# Model Training ################# lstm_model.compile( loss='CategoricalCrossentropy', optimizer='adam', metrics=['acc'] ) lstm_history = lstm_model.fit( train_ds, validation_data=val_ds, epochs=100, callbacks=[es, reduce_lr, tb] ) lstm_model.save('models/base.h5') # + # Model evaluation print('Model evaluation on test dataset') lstm_model.evaluate(test_ds) # Plot training history plot_history( lstm_history, model_name="ConvnetBiLSTM", save_path=Path('plots/base.jpg') ) # - # # Summary # # The simple BiLSTM + Conv1D model performs fairly well considering that it was not tuned for performance and provides a good baseline to work with. A clear thing to note here is that, While performance of the model on validation dataset is quite good but it performs poorly on the test. Model doesn't generalize well on the holdout dataset which is our primary goal, in the text part I'll try to improve the accuracy to make it better than the baseline.
_notebooks/2021-04-27-ConvnetBiLSTM-C50.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Compile dataset of clones resistant to other drugs # # **<NAME>, 2021** # # These clones are resistant to Ixazomib and CB-5083. # I create a dataset of DMSO-treated clones to later apply the bortezomib resistance signature. # + import sys import pathlib import numpy as np import pandas as pd from pycytominer.cyto_utils import infer_cp_features sys.path.insert(0, "../2.describe-data/scripts") from processing_utils import load_data # - np.random.seed(1234) # + data_dir = pathlib.Path("../0.generate-profiles/profiles") cell_count_dir = pathlib.Path("../0.generate-profiles/cell_counts/") output_dir = pathlib.Path("data") profile_suffix = "normalized.csv.gz" # - datasets = { "ixazomib": { "2020_08_24_Batch9": ["218698"], "2020_09_08_Batch10": ["218854", "218858"], }, "cb5083": { "2020_08_24_Batch9": ["218696", "218774"], "2020_09_08_Batch10": ["218852", "218856"], } } # + full_df = [] for dataset in datasets: dataset_df = [] for batch in datasets[dataset]: plates = datasets[dataset][batch] df = load_data( batch=batch, plates=plates, profile_dir=data_dir, suffix=profile_suffix, combine_dfs=True, harmonize_cols=True, add_cell_count=True, cell_count_dir=cell_count_dir ) # Add important metadata features df = df.assign( Metadata_dataset=dataset, Metadata_batch=batch, Metadata_clone_type="resistant", Metadata_clone_type_indicator=1, Metadata_model_split="otherclone" ) df.loc[df.Metadata_clone_number.str.contains("WT"), "Metadata_clone_type"] = "sensitive" df.loc[df.Metadata_clone_number.str.contains("WT"), "Metadata_clone_type_indicator"] = 0 dataset_df.append(df) # Merge plates of the same dataset together dataset_df = pd.concat(dataset_df, axis="rows", sort=False).reset_index(drop=True) # Generate a unique sample ID # (This will be used in singscore calculation) dataset_df = dataset_df.assign( Metadata_unique_sample_name=[f"profile_{x}_{dataset}" for x in range(0, dataset_df.shape[0])] ) full_df.append(dataset_df) full_df = pd.concat(full_df, axis="rows", sort=False).reset_index(drop=True) # + # Reorder features common_metadata = infer_cp_features(full_df, metadata=True) morph_features = infer_cp_features(full_df) full_df = full_df.reindex(common_metadata + morph_features, axis="columns") print(full_df.shape) full_df.head() # - pd.crosstab(full_df.Metadata_dataset, full_df.Metadata_model_split) pd.crosstab(full_df.Metadata_clone_number, full_df.Metadata_model_split) output_file = pathlib.Path(f"{output_dir}/otherclones_normalized_profiles.tsv.gz") full_df.to_csv(output_file, sep="\t", index=False)
3.resistance-signature/8.compile-otherclone-dataset.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- hrs = input("Enter Hours:") h = float(hrs) rate = input("Enter the rate") r= float(rate) if(h<=40.0): pay= h*r elif(h>r): pay=(((40.0)*r)+((h-40.0)*(r*1.5))) print(pay)
DAY-1-01-01-21/Assignment-3-1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import pickle #import keras from sklearn import svm import random import os from math import floor import warnings from sklearn.preprocessing import StandardScaler, MinMaxScaler # + # If you want to train the model by yourself # train data with open('new_data_filter0.pickle', 'rb') as f: Data = pickle.load(f) Data = pd.DataFrame(Data) numtrain = 186 * 6 * 12 numval = 2678 # Train validation split idx = random.sample(range(numtrain), numval) # 80% for training all_index = list(np.setdiff1d(range(numtrain), idx)) # Getting X_train, X_val, y_train and scale them X_train = Data.iloc[all_index,:].drop(2251, axis = 1) X_val = Data.iloc[idx,:].drop(2251, axis = 1) y_train = pd.get_dummies(Data.iloc[all_index,:][2251]) #print(y_train) names = y_train.columns mapping = {} i = 0 for n in names: mapping[i] = n i+=1 #print(mapping) y_train = Data.iloc[all_index,:][2251] scaler = MinMaxScaler() X_train = scaler.fit_transform(X_train) X_val = scaler.transform(X_val) # - from sklearn.neighbors import KNeighborsClassifier model = KNeighborsClassifier(n_neighbors=140) model.fit(X_train, y_train) # + Test = pd.read_csv('test_data_19.csv') X_test = scaler.transform(Test.iloc[:,1:]) #pred = clf.predict(X_test) probas = model.predict_proba(X_test) # + top_n_predictions = np.argsort(probas, axis = 1)[:,-11:] result = [] for pre in top_n_predictions: pre_list = [] for index in pre: pre_list.append(mapping[index]) result.append(pre_list) for plist in result: print(plist) # -
code/knn.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import matplotlib.pyplot as plt import os file = 'C:/Users/cathx/repos/argoverse-api/argo_test/image_2' mat = np.array([[[[4, 8], [8, 6]],[[2, 4], [4, 2]], [[1, 10], [1, 10]]], [[[1, 5], [1, 5]], [[7, 3], [7, 3]], [[9, 0], [0,9]]]]) mat mat.shape N, C, H, W = mat.shape mat_reshaped = mat.transpose(0, 2, 3, 1).reshape(N*H*W, C) mat_reshaped.shape mat_reshaped mean = np.sum(mat_reshaped, axis=0)/N batch_var = np.var(mat_reshaped, axis=0) mean batch_var # ### Using list comprehension inside a function explore # corr_prob = -np.log(exp[range(num_train), y]) # + # dscores[range(num_train), y] -= 1 # -
assignment2_colab/assignment2/Experiment.ipynb
(* -*- coding: utf-8 -*- (* --- *) (* jupyter: *) (* jupytext: *) (* text_representation: *) (* extension: .ml *) (* format_name: light *) (* format_version: '1.5' *) (* jupytext_version: 1.14.4 *) (* kernelspec: *) (* display_name: OCaml *) (* language: ocaml *) (* name: iocaml *) (* --- *) (* <h1> Produits défectueux </h1> *) (* <h2> Énoncé </h2> *) (* *) (* Une usine produit, grâce à trois machines $M_1$, $M_2$ et $M_3$, des pièces qui ont : *) (* <ul> *) (* <li> pour la machine $M_1$ un défaut $a$ dans 5% des cas ; *) (* *) (* <li> pour la machine $M_2$ un défaut $b$ dans 3% des cas ; *) (* *) (* <li> pour la machine $M_3$ un défaut $c$ dans 2% des cas. *) (* </ul> *) (* *) (* Une machine $M$ fabrique un objet en assemblant une pièce provenant de $M_1$, une pièce provenant de $M_2$ et une pièce provenant de $M_3$. Elle prend au hasard des pièces dans trois stocks comprenant un grand nombre de pièces. Les différentes pièces sont tirées au hasard et indépendamment les unes des autres. *) (* *) (* On désigne par $X$ la variable aléatoire qui, à chaque objet prélevé au hasard dans la production de $M$, associe le nombre de ses défauts. On souhaite connaître la loi de $X$. *) (* *) (* Effectuer 100 000 simulations pour évaluer $X$. *) (* *) (* Retrouver ce résultat théoriquement. *) (* <h2> Solution </h2> *) open Random;; Random.self_init;; (* + let taux_defauts=[|0.05;0.03;0.02|];; let rec compte_defauts num_machine nbre_defauts = if num_machine=3 then nbre_defauts else let r = Random.float 1. in if r>1.-.taux_defauts.(num_machine) then compte_defauts (num_machine+1) (nbre_defauts+1) else compte_defauts (num_machine+1) nbre_defauts;; let nbre_essais=100000;; let rec loop num_essai count = if num_essai = nbre_essais then (float_of_int count)/.(float_of_int nbre_essais) else loop (num_essai+1) (count+(compte_defauts 0 0));; print_float (loop 0 0);; (* -
Produits_defectueux/Produits_defectueux_OCaml_solution.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from selenium import webdriver from selenium.webdriver.common.keys import Keys driver = webdriver.Chrome("./chromedriver.exe") try: driver.get('https://naver.com') except: print("error occur") keyword = "올인원 패키지" elem = driver.find_element_by_id('query') elem.send_keys(keyword) elem.send_keys(Keys.RETURN) div = driver.find_element_by_class_name('_blogBase') elem = div.find_element_by_tag_name('ul') div = driver.find_element_by_class_name('_blogBase') blogs = div.find_elements_by_xpath('./ul/li') value = [] for blog in blogs: title = blog.find_element_by_class_name('sh_blog_title') print(title.text) link = title.get_attribute('href') print(link) #value.append({'title':title, 'link':link}) value driver.close() # + def keyword_setting(word): change_word = list(word) for i in range(len(change_word)): if(change_word[i]==' '): change_word[i]='+' change_word = ''.join(change_word) return change_word def keyword_setup(keyword): value_dict = [] URL = 'https://www.google.com/search?q=' +keyword_setting(keyword) driver = webdriver.Chrome("C:/Users/ksg/py_tutorial/chromedriver.exe") driver.implicitly_wait(1) driver.get(URL) driver.implicitly_wait(1) div = driver.find_element_by_id('search') url_list = div.find_elements_by_class_name('g') for i,val in enumerate(url_list): #print(i,end=' : ') title = val.text.split("\n")[0] #print(val.text.split("\n")[0],end =' => ') url = val.find_elements_by_tag_name('a') url=url[0].get_attribute('href') value_dict.append({"keyword":keyword,"num":i,"title":title,"url":url}) driver.close() return value_dict # - value = keyword_setup("올인원 패키지") value # + keyword_set=['linux 화면 녹화', '업무자동화', '컴퓨터공학 올인원', '올인원 패키지', 'UX/UI', '자바스크립트 인강', '모션그래픽 디자인', '재무회계 실무', '부동산 대체투자', '신사업 발굴', '디자인 툴', '데이터 분석', '딥러닝 강의', '인공지능 강의', 'HTML/CSS 인강', 'iOS 강의', 'OpenCV 강의', '디지털 마케팅', '업무자동화인강', 'MBA인강', '앱개발인강', '영상제작인강', '데이터분석강의' ] fc_related =[ '패스트캠퍼스', '패캠', 'fastcampus', 'Fastcampus', '페스트캠퍼스' ] channel_list=["http://media.fastcampus.co.kr/", "https://www.fastcampus.co.kr/", "https://m.blog.naver.com/fastcampus/", "https://m.post.naver.com/fastcampus/", "https://blog.naver.com/fastcampus/"] # - result_list=[] for i in keyword_set: result_list.append(keyword_setup(i)) from pprint import pprint pprint(result_list) for i, fc_word in enumerate(fc_related): print(fc_word) print("") for j, result in enumerate(result_list): for k, line in enumerate(result): if(line['title'].find(fc_word) != -1 ): print(i,end=" / ") print(keyword_set[j],end=" ") print(k,end=" ") print(line['title'],end=" => ") print(line['url']) print("====================================") print("")
fc_ranking.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <img src="https://datasciencecampus.ons.gov.uk/wp-content/uploads/sites/10/2017/03/data-science-campus-logo-new.svg" # alt="ONS Data Science Campus Logo" # width = "240" # style="margin: 0px 60px" # /> # # 2.0 Model Selection # # purpose of script: compare logreg vs knn on titanic_train # # import libraries import pandas as pd from sklearn.impute import SimpleImputer from sklearn.preprocessing import StandardScaler import numpy as np from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split from sklearn.pipeline import Pipeline from sklearn.neighbors import KNeighborsClassifier from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.metrics import roc_curve from sklearn.metrics import roc_auc_score import matplotlib.pyplot as plt # import cached data from titanic_EDA.py titanic_train = pd.read_pickle('../../cache/titanic_train.pkl') # # ## Preprocessing # + # Define functions to preprocess target & features def preprocess_target(df) : # Create arrays for the features and the target variable target = df['Survived'].values return(target) def preprocess_features(df) : #extract features series features = df.drop('Survived', axis=1) #remove features that cannot be converted to float: name, ticket & cabin features = features.drop(['Name', 'Ticket', 'Cabin'], axis=1) # dummy encoding of any remaining categorical data features = pd.get_dummies(features, drop_first=True) # ensure np.nan used to replace missing values features.replace('nan', np.nan, inplace=True) return features # - # Need to use pipeline to select from best model & best parameters. Use the following workflow: # # * Train-Test-Split # * Instantiate # * Fit # * Predict # # # Prep data for logreg # preprocess target from titanic_train target = preprocess_target(titanic_train) #preprocess features from titanic_train features = preprocess_features(titanic_train) # # ## train_test_split # X == features. y == target. Use 25% of data as 'hold out' data. Use a random state of 36. X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.25, random_state=36) # # # ## Instantiate # + #impute median for NaNs in age column imp = SimpleImputer(missing_values=np.nan, strategy='median') # instantiate classifier logreg = LogisticRegression() steps = [ # impute medians on NaNs ('imputation', imp), # scale features ('scaler', StandardScaler()), # fit logreg classifier ('logistic_regression', logreg)] # establish pipeline pipeline = Pipeline(steps) # - # # # ## Train model pipeline.fit(X_train, y_train) # # ## Predict labels y_pred = pipeline.predict(X_test) # # # ## Review performance pipeline.score(X_train, y_train) pipeline.score(X_test, y_test) print(confusion_matrix(y_test, y_pred)) # + # print a classification report of the model's performance passing the true labels and the predicted labels # as arguments in that order print(classification_report(y_test, y_pred)) # - # Precision is 9% lower in the survived category. High precision == low FP # rate. This model performs 9 % better in relation to false positives # (assigning survived when in fact died) when class assigned is 0 than 1. # # Recall (false negative rate - assigning died but in truth survived) is 5% # higher in 0 class. # # The harmonic mean of precision and recall - f1 - has a 7 percent increase # when assigning 0 as survived. # # This has resulted in 133 rows (versus 90 rows in survived) of the true # response sampled faling within the 0 (died) category. # # Overall, it appears that this model is considerably better at predicting when # people died rather than survived. # # ## Receiver Operator Curve # predict the probability of a sample being in a particular class y_pred_prob = pipeline.predict_proba(X_test)[:,1] # calculate the false positive rate, true positive rate and thresholds using roc_curve() fpr, tpr, thresholds = roc_curve(y_test, y_pred_prob) # create the axes plt.plot([0, 1], [0, 1], 'k--') # control the aesthetics plt.plot(fpr, tpr, label='Logistic Regression') # x axis label plt.xlabel('False Positive Rate') # y axis label plt.ylabel('True Positive Rate') # main title plt.title('Logistic Regression ROC Curve (titanic_train)') # show plot plt.show() # To my eye it looks as though a p value of around 0.8 is going to be optimum. This curve can then be used against further ROC curves to visually compare performance. What is the area under curve? roc_auc_score(y_test, y_pred_prob) # An AUC of 0.5 is as good as a model randomly asisgning classes and being correct on average 50% of the time. Here we have an untuned AUC of 0.879 which can be used to compare against further models. Precision recall curve not pursued as class imbalance is not high # tidy up del fpr, logreg, pipeline, steps, thresholds, tpr, y_pred, y_pred_prob # *** # # ## KNearestNeighbours # + steps = [ # impute median for any NaNs ('imputation', imp), # scale features ('scaler', StandardScaler()), # specify the knn classifier function for the 'knn' tep below, specifying k=5 ('knn', KNeighborsClassifier(5))] # establish a pipeline for the above steps pipeline = Pipeline(steps) # - # # # ## Train model pipeline.fit(X_train, y_train) # # # ## Predict labels # Predict the labels for the test features using the KNN pipeline you have created y_pred = pipeline.predict(X_test) # # # ## Review performance pipeline.score(X_train, y_train) # As above, calculate accuracy of the classifier, but this time on the test data pipeline.score(X_test, y_test) print(confusion_matrix(y_test, y_pred)) # True positive marginally higher than in logreg and true negaive identical. print(classification_report(y_test, y_pred)) # Precision is still lower within the survived category, however the difference # has now reduced from 9 % to 8 % lower than the logreg model. Averages are up # marginally over logreg. # # Recall (false negative rate - assigning died but in truth survived) within the # survived predicted group has increased by 2 % than in logreg. # # harmonic mean of these - f1 is similarly reduced. f1 has been marginally # improved over logreg in both classes and average. # # support output has been unaffected. # # KNN appears to marginally outperform logreg. Now need to compare whether # titanic_engineered adds any value. # # ## Receiver Operator Curve # predict the probability of a sample being in a particular class y_pred_prob = pipeline.predict_proba(X_test)[:,1] # unpack roc curve objects fpr, tpr, thresholds = roc_curve(y_test, y_pred_prob) # plot an ROC curve using matplotlib below: # create axes plt.plot([0, 1], [0, 1], 'k--') # control aesthetics plt.plot(fpr, tpr, label='Logistic Regression') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Logistic Regression ROC Curve (titanic_train)') plt.show() # The curve looks very similar to the logreg model, I imagine AUC will be very similar also. roc_auc_score(y_test, y_pred_prob) # An AUC of 0.5 is as good as a model randomly assigning classes and being correct on average 50% of the time. Here we have an untuned AUC of 0.872 This is down on the logreg AUC by 0.007.
course_content/case_study/Case Study B/notebooks/answers/2_titanic_train-answers.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Lesson 1 - Summarizing Data # + import matplotlib.pyplot as plt import numpy as np import pandas as pd import statistics # %matplotlib inline # - # ### Make a blank DataFrame df = pd.DataFrame() # ### Populate it with data df['age'] = [28, 42, 27, 24, 35, 54, 35, 42, 37] # ## Measures of Central Tendency # ### Mean (using built-in Python functionality) mean_py = sum(df['age']) / len(df['age']) mean_py # ### Mean (using NumPy) mean_np = np.mean(df['age']) mean_np # ### Median (using built-in Python functionality) median_py = statistics.median(df['age']) median_py # ### Median (using NumPy) median_np = np.median(df['age']) median_np # ### Mode (using built-in Python functionality) mode_py = statistics.mode(df['age']) mode_py # ### Mode (using NumPy) # Generate a list of unique elements along with how often they occur. (values, counts) = np.unique(df['age'], return_counts=True) print(values, counts) # The location(s) in the values list of the most-frequently-occurring element(s). ind = [x[0] for x in list(enumerate(counts)) if x[1] == counts[np.argmax(counts)]] ind values # The most frequent element(s). modes = [values[x] for x in ind] modes # ## Measures of Variance # ### Variance (using NumPy) df['age'] # change delta degrees of freedom (ddof) to 1 from its default value of 0 var_np = np.var(df['age'], ddof=1) var_np # ### Variance (using Pandas) var_pd = df['age'].var() var_pd # ### Standard Deviation (using NumPy) std_np = np.std(df['age'], ddof=1) std_np # ### Standard Deviation (using Pandas) std_pd = df['age'].std() std_pd # ### Standard Error (using NumPy) se_np = std_np / np.sqrt(len(df['age'])) se_np # ### Standard Error Examples # + # First, create an empty dataframe to store your variables-to-be. pop = pd.DataFrame() # Then create two variables with mean = 60, one with a low standard # deviation (sd=10) and one with a high standard deviation (sd=100). pop['low_var'] = np.random.normal(60, 10, 10000) pop['high_var'] = np.random.normal(60, 100, 10000) # Finally, create histograms of the two variables. pop.hist(layout=(2, 1), sharex=True) plt.show() # Calculate and print the maximum and minimum values for each variable. print("\nMax of low_var and high_var:\n", pop.max()) print("\nMin of low_var and high_var:\n", pop.min()) # + # Take a random sample of 100 observations from each variable # and store it in a new dataframe. sample = pd.DataFrame() sample['low_var'] = np.random.choice(pop['low_var'], 100) sample['high_var'] = np.random.choice(pop['high_var'], 100) # Again, visualize the data. Note that here we're using a pandas method to # create the histogram. sample.hist() plt.show() # Check how well the sample replicates the population. print("Mean of low_var and high_var:\n", sample.mean()) # - print("Standard deviation of low_var and high_var:\n", sample.std(ddof=1)) # ## Describing Data with Pandas # + # Set up the data data = pd.DataFrame() data['gender'] = ['male'] * 100 + ['female'] * 100 # 100 height values for males, 100 height values for females data['height'] = np.append(np.random.normal(69, 8, 100), np.random.normal(64, 5, 100)) # 100 weight values for males, 100 weight values for females data['weight'] = np.append(np.random.normal(195, 25, 100), np.random.normal(166, 15, 100)) # - data.head(10) data.tail(10) data['height'].mean() data['height'].std() data.describe() data.groupby('gender').describe() data['gender'].value_counts() # # Lesson 2 - Basics of Probability # ## Perspectives on Probability # ### _Frequentist_ school of thought # - ### Describes how often a particular outcome would occur in an experiment if that experiment were repeated over and over # - ### In general, frequentists consider _model parameters to be fixed_ and _data to be random_ # ### _Bayesian_ school of thought # - ### Describes how likely an observer expects a particular outcome to be in the future, based on previous experience and expert knowledge # - ### Each time an experiment is run, the probability is updated if the new data changes the belief about the likelihood of the outcome # - ### The probability based on previous experiences is called the _"prior probability,"_ or the "prior," while the updated probability based on the newest experiment is called the _"posterior probability."_ # - ### In general, Bayesians consider _model parameters to be random_ and _data to be fixed_ # ------------------------------------------------------------------------------------------------------------------------- # ## Randomness # ## Sampling # ## Selection Bias # ------------------------------------------------------------------------------------------------------------------------- # ## Independence # ## Dependence # ------------------------------------------------------------------------------------------------------------------------- # ## Bayes' Rule # ## $P(A|B)=\frac{P(B|A)*P(A)}{P(B)}=\frac{P(B|A)*P(A)}{[P(A)*P(B|A)+P(A\sim)*P(B|A\sim)]}$ # # # ## Conditional Probability # ------------------------------------------------------------------------------------------------------------------------- # ## Evaluating Data Sources # - ## Bias # - ## Quality # - ## Exceptional Circumstance # ------------------------------------------------------------------------------------------------------------------------- # # The Normal Distribution and the Central Limit Theorem # ## Normality # ## Deviations from Normality and Descriptive Statistics (skewness) # ## Other Distributions # - ## Bernoulli # - ## Binomial # - ## Gamma # - ## Poisson # - ## Conditional Distribution # ## CLT and Sampling
thinkful/data_science/my_progress/intro_data_science_fundamentals/Unit_3_-_Statistics_for_Data_Science.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Clustering and Topic Modelling # --- # + from collections import defaultdict import nltk.corpus import numpy as np import matplotlib.pyplot as plt import itertools from gensim import corpora, models from nltk.corpus import wordnet as wn from operator import itemgetter import sklearn from sklearn import metrics, manifold import scipy from scipy import cluster import matplotlib as mpl import matplotlib.cm as cm from mpl_toolkits.mplot3d import Axes3D # plt.rcdefaults() plt.rcParams['figure.figsize'] = (15, 10) # - # --- # ## Clustering # # # The dataset in `./data/blogdata.txt` contains the frequencies of meaningful words (columns), over several blogs (rows). # # # **Task:** Use clustering to assess if there are **groups among these blogs** that employ similar words, talk about the same topics or have a similar writing style. # + blog2words = defaultdict(dict) with open("data/blogdata.txt", "r") as infile: words = infile.readline().strip().split("\t")[1:] # word indices (first row contains words) for line in infile: splLine = line.strip().split("\t") blog = splLine[0] # the first column contains the blog name raw_counts = splLine[1:] # the other columns contain word counts for i, c in enumerate(raw_counts): if c != "0": blog2words[blog][words[i]] = int(c) # only keep >0 and assume the rest is zero: efficient dictionary representation of sparse matrices # + # populate matrix blogs = sorted(blog2words.keys()) bwMat = np.zeros((len(blogs), len(words))) for ib,blog in enumerate(blogs): for w, v in blog2words[blog].items(): bwMat[ib, words.index(w)] = v print(bwMat) # - # #### Exercise # # Load and explore our dataset: # # # - how many nonzero values are there in the two datasets? # # - what are the most frequent words, and in which blogs are they used? # + # your code here # - # --- # ### Hierarchical Agglomerative Clustering # # The algorithm: # # ``` # Initialize each cluster to be a singleton # while more than one clusters exist do # Find the two most similar clusters # Merge the two clusters # end while # ``` # ![alt text](data/hr.png) # #### DISTANCE METRICS # # Clustering requires the use of a similarity/distance metric to estimate the distance between clusters. See the [SciPy documenation](https://docs.scipy.org/doc/scipy/reference/spatial.distance.html) for a list of measures. # # In what follows, we'll experiment with the **correlation** measure. # the agglomerative methods available in the SciPy package requires the similarity matrix to be condensed # (condense distance matrix = a flat array containing the upper triangular of the distance matrix) distMat = scipy.spatial.distance.pdist(bwMat, "correlation") # if we want to have a look at the square matrix with the distances, we can use print(scipy.spatial.distance.squareform(distMat)) # + distMat_sq = scipy.spatial.distance.squareform(distMat, "correlation") print("original table:", bwMat.shape, "\n") print("condensed dist:", distMat.shape) print(distMat, "\n") print("square dist:", distMat_sq.shape) print(distMat_sq) # - # #### LINKAGE CRITERIA # # Scipy allows us to use several **linkage criteria** to perform the clustering. See the [documentation](https://docs.scipy.org/doc/scipy/reference/cluster.hierarchy.html#hierarchical-clustering-scipy-cluster-hierarchy) for the full list. # # In class we mentioned that the strategy that works well for the majority of applications is the **average** linkage, so let's go with that: # The hierarchical clustering encoded as a linkage matrix. linkage_matrix = scipy.cluster.hierarchy.average(distMat) # Descrition of a linkage matrix from the official documentation: # # *"A (n-1) by 4 matrix Z is returned. At the i-th iteration, clusters with indices Z[i, 0] and Z[i, 1] are combined to form cluster n + i. A cluster with an index less than n corresponds to one of the original observations. The distance between clusters Z[i, 0] and Z[i, 1] is given by Z[i, 2]. The fourth value Z[i, 3] represents the number of original observations in the newly formed cluster."* print("linkage matrix:", linkage_matrix.shape) print("# some original observations") print(linkage_matrix[0]) print(linkage_matrix[1]) print("# final cluster") print(linkage_matrix[97]) # --- # #### DENDROGRAM # # A visualization of the structure produced by a hieararchical clustering algorithm: # # - datapoints = leaves # # # - horizontal lines = cluster merges # # # - y-axis values represent the similarity or distance of two clusters at the moment of merge # + # let's create the dendrogram scipy.cluster.hierarchy.dendrogram(linkage_matrix, labels = blogs, color_threshold = 0) plt.show() # + fig = plt.figure(figsize=(15, 20)) # now with leafs on the left, and the root node on the right scipy.cluster.hierarchy.dendrogram(linkage_matrix, labels = blogs, color_threshold = 0, orientation = 'right', leaf_font_size = 10) plt.show() # - # #### HOW MANY CLUSTERS? # # HAC does not require a pre-specified number of clusters, but we may want to partition our data as in flat clustering: # # - cut at a pre-specified **level of similarity**: by default, the `dendrogram()` method colors all the descendent links below a cluster node k the same color if k is the first node below a cut threshold t. Its default values is `0.7*max(linkage_matrix[:,2])`, but other values can be used instead. max_d = 0.75 * max(linkage_matrix[:,2]) # + scipy.cluster.hierarchy.dendrogram(linkage_matrix, labels = blogs, color_threshold = max_d) plt.show() # - # The `fcluster()` method, if the `"distance"` criterion is selected, allows us to retrieve the cluster id for each datapoint, when we cut our hierarchy at a given distance. clusters = scipy.cluster.hierarchy.fcluster(linkage_matrix, max_d, criterion = "distance") print(clusters) # + # printing the contents of each cluster cluster2blog = defaultdict(list) for bid, clusterid in enumerate(clusters): cluster2blog[clusterid].append(blogs[bid]) for cId, blog in cluster2blog.items(): print(cId, blog) # - # - cut where a **pre-specified number of *k* clusters** can be obtained: `dendrogram()` method allows us to visualize only the last *k* merged clusters # + scipy.cluster.hierarchy.dendrogram( linkage_matrix, truncate_mode='lastp', p = 10, # show only the last 10 merged clusters show_leaf_counts=True, # numbers in brackets are counts, others are ids show_contracted=True, # show dots where hidden clusters are merged ) plt.show() # - # Using the `"maxclust"` criterion of the `fcluster()` method, we can retrieve the ids of our desired *k* clusters. clusters = scipy.cluster.hierarchy.fcluster(linkage_matrix, 10, criterion = "maxclust") print(clusters) # ### *K*-means # # The algorithm: # # ``` # Initialize K randomly selected centroid # while not converge do # Assign each item to the cluster whose centroid is closest to it # Recompute centroids of the new cluster found from previous step # end while # ``` # ![alt text](data/k-means.png) # #### RESCALE YOUR DATA # # Before running *K*-means, it is wise to rescale our observations. The `whiten()` method can be used to rescale each feature dimension (in our case our word counts) by their standard deviation across all observations to give it unit variance. rescaledMat = scipy.cluster.vq.whiten(bwMat) # #### COMPUTING K-MEANS # # The `kmeans()` performs *K*-means on a set of observations: # # - the stopping criterion is that the change in distortion since the last iteration should be less than the parameter `"thresh"`(default = 1e-05); # # - **distortion**: the sum of the squared differences between the observations and the corresponding centroid. # # # - The number of times *K*-means should be run (default = 20), specified with parameter `"iter"`. # # # - For the iteration with the minimal distortion, it returns : # # - cluster centers: a $k$ by $N$ array of $k$ centroids, where the $i$th row is the centroid of code word $i$. The observation vectors and centroids have the same feature dimension; # - the distortion between the observations and the centroids. centroids, distortion = scipy.cluster.vq.kmeans(rescaledMat, 10) print(centroids[0]) # #### ASSIGNING DATAPOINTS TO CENTROIDS # # The `vq()` method can be used to assign each observation to a given cluster: # # - each observation vector is compared with the centroids and assigned to the nearest cluster. # # # - It returns: # - an array holding the code book index for each observation; # - the distortion between the observations and the centroids. clusters, distortion = scipy.cluster.vq.vq(rescaledMat, centroids) print(clusters) # + # human readable cluster2blog = defaultdict(list) for bid, clusterid in enumerate(clusters): cluster2blog[clusterid].append(blogs[bid]) for cId, blog in cluster2blog.items(): print(cId, blog) # - # #### PLOTTING OUR DATA # ##### Plotting the data directly # # Adjusted example and code from [scipy.cluster.vq.kmeans](https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.vq.kmeans.html): # + from scipy.cluster.vq import vq, kmeans, whiten # Create 50 datapoints in two clusters a and b pts = 30 a = np.random.multivariate_normal([0, 0], [[4, 1], [1, 4]], size=pts) b = np.random.multivariate_normal([30, 10], [[10, 2], [2, 1]], size=pts) c = np.random.multivariate_normal([0, 25], [[10, 2], [2, 1]], size=pts) features = np.concatenate((a, b, c)) # Whiten data whitened = whiten(features) # Find 2 clusters in the data codebook, distortion = kmeans(whitened, 3) # Plot whitened data and cluster centers in red plt.scatter(whitened[:, 0], whitened[:, 1]) plt.scatter(codebook[:, 0], codebook[:, 1], c='r') plt.show() # - # --- # #### Exercise # # `Zebo` was a social network encoraging people to create lists of things that they own and things that they would like to own. The dataset `./data/zebo.txt` reports an item-to-user matrix of binary values # # - **Task**: Use a clustering method to group together the preferences expressed by these users. # + # your code here # - # --- # ## Topic Modelling # # > What follows is a short tutorial on using the LDA implementation available in [Gensim](https://radimrehurek.com/gensim/). # > # > # > To install Gensim, either use the Anaconda Navigator or: # > # > ```conda install -c anaconda gensim``` # ### Pre-process the documents # # In this tutorial we will work with the `C-Span Inaugural Address Corpus` available in NLTK. print(nltk.corpus.inaugural.readme()) # As a first step, we lemmatize our corpus: # # - as usual, we need pos-tagging to properly use the WordNet based lemmatizer. # + un2wn_mapping = {"VERB" : wn.VERB, "NOUN" : wn.NOUN, "ADJ" : wn.ADJ, "ADV" : wn.ADV} inaug_docs = [] for speech_id in nltk.corpus.inaugural.fileids(): # NB: fileids() lemmatized_doc = [] for w, p in nltk.pos_tag(nltk.corpus.inaugural.words(speech_id), tagset="universal"): if p in un2wn_mapping.keys(): lemma = nltk.WordNetLemmatizer().lemmatize(w, pos = un2wn_mapping[p]) else: lemma = nltk.WordNetLemmatizer().lemmatize(w) lemmatized_doc.append(lemma.lower()) # case insensitive inaug_docs.append(lemmatized_doc) # - # ### Construct the document-term matrix # The `gensim.corpora.Dictionary()` class encapsulates the mapping between normalized words and their integer ids: # # - we use it to create a dictionary representation of our documents. inaug_dictionary = corpora.Dictionary(inaug_docs) print('Number of unique tokens:', len(inaug_dictionary)) # let's check each token's unique id print(dict(itertools.islice(inaug_dictionary.token2id.items(), 12))) print("word with id 8:", inaug_dictionary[8]) print("frequency of token 8:", inaug_dictionary.dfs[8]) # Using built-in function `filter_extremes()`, we can remove rare and common words based on their document frequency. # # - the `filter_extremes(self, no_below=5, no_above=0.5, keep_n=100000, keep_tokens=None)` allows us to remove words that appear in: # - less than `no_below` documents (absolute number); # - more than `no_above` documents (fraction of total corpus size); # - if tokens are given in `keep_tokens` (list of strings), they are kept regardless of all the other settings; # - after the other parameters have been applied, keep only the first `keep_n` most frequent tokens (all if `None`). # Filter out words that occur in less than 10 documents, or more than 50% of the documents. inaug_dictionary.filter_extremes(no_below=10, no_above=0.5) print('Number of unique tokens:', len(inaug_dictionary)) # The `doc2bow()` function is the most important `Dictionary()` method, whose function is to convert a collection of words to a **bag-of-words representation**. # Bag-of-words representation of the documents inaug_bow_corpus = [inaug_dictionary.doc2bow(d) for d in inaug_docs] # Such a representation returns, **for each document**, a list of `(word_id, word_frequency) 2-tuples`: # # - we can use the dictionary mapping to retrieve the lemma associated with a given id. # Our first document, i.e. at index 0 of `bow_corpus` print(nltk.corpus.inaugural.raw('1789-Washington.txt')[:1890]) # the BOW representation of the first document print(inaug_bow_corpus[0][:50]) # which words (and how often) appear in the first document? for i, freq in sorted(inaug_bow_corpus[0], key=itemgetter(1), reverse=True)[:15]: print(inaug_dictionary[i], "-->", freq) print("...") # --- # #### Applying the LDA model # Now we can train the LDA model by using the `gensim.models.ldamodel.LDAModel()` constructor. # # Parameters used in our example: # # - `num_topics`: how many topics do we want? In what follows, we set the number of topics to 5, because we want to have a few topics that we can interpret, but the number of topics is **data** and **application**-dependent; # # # - `id2word`: our previous dictionary needed to map ids to strings; # # # # - `passes`: how often we iterateover the entire corpus (default = 1). In general, the more the passes, the higher the accuracy. This number is also called `epochs` in ML literature. inaug_ldamodel = models.ldamodel.LdaModel(inaug_bow_corpus, num_topics=5, id2word = inaug_dictionary, passes= 25) # Even if we are not covering these issues, it is importatnt to know that: # # - you can use this model to infer the topic distribution in **a new unseen document**: # # ```python # doc_lda = inaug_ldamodel[doc_bow] # ``` # # # - you can **update** you model with novel data (instead of retraining from scratch): # # ```python # inaug_ldamodel.update(new_corpus) # ``` # ### Examining Topics # An immediate way to inspect our topics is by using the `show_topics()` method, that prints the most representative words for each topic (each topic is marked by an integer id), along with their probability. # let's see just 5 words per topic (default = 10) inaug_ldamodel.show_topics(num_words=5) # the setting formatted=False allows you to get rid of the word*probability format when retrieveing topics inaug_ldamodel.show_topics(formatted=False, num_words=10) # The `get_term_topics()` method returns the odds of that particular word belonging to a particular topic: # # - topics below a given threshold are ignored. inaug_ldamodel.get_term_topics("congress", minimum_probability = 1e-3) # The `get_document_topics()` returns several statistics describing the topic distribution in a document: # # - the topic distribution of the document; # # # - (if `per_word_topics=True`) the the topic distribution for each word in the document; # # # - (if `per_word_topics=True`) the probability of each word in each document to belong to a particular topic. # the topics of the first document of our corpus inaug_ldamodel.get_document_topics(inaug_bow_corpus[0], minimum_probability = 0) # the topics of ALL the documents of our corpus for doc_topics in inaug_ldamodel.get_document_topics(inaug_bow_corpus): print(doc_topics) # + # the topics of the first document of our corpus, of its words and the scaled prob values of each word. doc_topics, word_topics, phi_values = inaug_ldamodel.get_document_topics(inaug_bow_corpus[0], per_word_topics=True) # "Topic distribution for the whole document. Each element in the list is a pair of topic_id, # and the probability that was assigned to it." print("- Document topics:", doc_topics) # "Most probable topics for a word. Each element in the list is a pair of word_id, and a list of # topics sorted by their relevance to this word." print("\n- Word topics:", word_topics) # "Relevance values multipled by the feature length, for each word-topic combination. Each element # in the list is a pair of word_id, and a list of the values between this word and each topic." print("\n- Scaled phi values:", phi_values) # - # --- # #### Exercise # # What follows is the raw text from the Trump inaugural speech. # # Use your model to **infer and have a look at the topics distribution** of this document. trump_speech = "We, the citizens of America, are now joined in a great national effort to rebuild our country and to restore its promise for all of our people.\n\nTogether, we will determine the course of America and the world for years to come.\n\nWe will face challenges. We will confront hardships. But we will get the job done.\n\nEvery four years, we gather on these steps to carry out the orderly and peaceful transfer of power, and we are grateful to President Obama and First Lady <NAME> for their gracious aid throughout this transition. They have been magnificent.\n\nToday\u2019s ceremony, however, has very special meaning. Because today we are not merely transferring power from one Administration to another, or from one party to another \u2013 but we are transferring power from Washington, D.C. and giving it back to you, the American People.\n\nFor too long, a small group in our nation\u2019s Capital has reaped the rewards of government while the people have borne the cost.\n\nWashington flourished \u2013 but the people did not share in its wealth.\n\nPoliticians prospered \u2013 but the jobs left, and the factories closed.\n\nThe establishment protected itself, but not the citizens of our country.\n\nTheir victories have not been your victories; their triumphs have not been your triumphs; and while they celebrated in our nation\u2019s Capital, there was little to celebrate for struggling families all across our land.\n\nThat all changes \u2013 starting right here, and right now, because this moment is your moment: it belongs to you.\n\nIt belongs to everyone gathered here today and everyone watching all across America. \n\nThis is your day. This is your celebration.\n\nAnd this, the United States of America, is your country.\n\nWhat truly matters is not which party controls our government, but whether our government is controlled by the people.\n\nJanuary 20th 2017, will be remembered as the day the people became the rulers of this nation again. \n\nThe forgotten men and women of our country will be forgotten no longer.\n\nEveryone is listening to you now.\n\nYou came by the tens of millions to become part of a historic movement the likes of which the world has never seen before.\n\nAt the center of this movement is a crucial conviction: that a nation exists to serve its citizens.\n\nAmericans want great schools for their children, safe neighborhoods for their families, and good jobs for themselves.\n\nThese are the just and reasonable demands of a righteous public.\n\nBut for too many of our citizens, a different reality exists: Mothers and children trapped in poverty in our inner cities; rusted-out factories scattered like tombstones across the landscape of our nation; an education system, flush with cash, but which leaves our young and beautiful students deprived of knowledge; and the crime and gangs and drugs that have stolen too many lives and robbed our country of so much unrealized potential.\n\nThis American carnage stops right here and stops right now.\n\nWe are one nation \u2013 and their pain is our pain. Their dreams are our dreams; and their success will be our success. We share one heart, one home, and one glorious destiny.\n\nThe oath of office I take today is an oath of allegiance to all Americans.\n\nFor many decades, we\u2019ve enriched foreign industry at the expense of American industry;\n\nSubsidized the armies of other countries while allowing for the very sad depletion of our military;\n\nWe've defended other nation\u2019s borders while refusing to defend our own;\n\nAnd spent trillions of dollars overseas while America's infrastructure has fallen into disrepair and decay.\n\nWe\u2019ve made other countries rich while the wealth, strength, and confidence of our country has disappeared over the horizon.\n\nOne by one, the factories shuttered and left our shores, with not even a thought about the millions upon millions of American workers left behind.\n\nThe wealth of our middle class has been ripped from their homes and then redistributed across the entire world.\n\nBut that is the past. And now we are looking only to the future.\n\nWe assembled here today are issuing a new decree to be heard in every city, in every foreign capital, and in every hall of power.\n\nFrom this day forward, a new vision will govern our land.\n\nFrom this moment on, it\u2019s going to be America First.\n\nEvery decision on trade, on taxes, on immigration, on foreign affairs, will be made to benefit American workers and American families.\n\nWe must protect our borders from the ravages of other countries making our products, stealing our companies, and destroying our jobs. Protection will lead to great prosperity and strength.\n\nI will fight for you with every breath in my body \u2013 and I will never, ever let you down.\n\nAmerica will start winning again, winning like never before.\n\nWe will bring back our jobs. We will bring back our borders. We will bring back our wealth. And we will bring back our dreams.\n\nWe will build new roads, and highways, and bridges, and airports, and tunnels, and railways all across our wonderful nation.\n\nWe will get our people off of welfare and back to work \u2013 rebuilding our country with American hands and American labor.\n\nWe will follow two simple rules: Buy American and Hire American.\n\nWe will seek friendship and goodwill with the nations of the world \u2013 but we do so with the understanding that it is the right of all nations to put their own interests first.\n\nWe do not seek to impose our way of life on anyone, but rather to let it shine as an example for everyone to follow.\n\nWe will reinforce old alliances and form new ones \u2013 and unite the civilized world against Radical Islamic Terrorism, which we will eradicate completely from the face of the Earth.\n\nAt the bedrock of our politics will be a total allegiance to the United States of America, and through our loyalty to our country, we will rediscover our loyalty to each other.\n\nWhen you open your heart to patriotism, there is no room for prejudice.\n\nThe Bible tells us, \u201chow good and pleasant it is when God\u2019s people live together in unity.\u201d\n\nWe must speak our minds openly, debate our disagreements honestly, but always pursue solidarity.\n\nWhen America is united, America is totally unstoppable.\n\nThere should be no fear \u2013 we are protected, and we will always be protected.\n\nWe will be protected by the great men and women of our military and law enforcement and, most importantly, we are protected by God.\n\nFinally, we must think big and dream even bigger.\n\nIn America, we understand that a nation is only living as long as it is striving.\n\nWe will no longer accept politicians who are all talk and no action \u2013 constantly complaining but never doing anything about it.\n\nThe time for empty talk is over.\n\nNow arrives the hour of action.\n\nDo not let anyone tell you it cannot be done. No challenge can match the heart and fight and spirit of America.\n\nWe will not fail. Our country will thrive and prosper again.\n\nWe stand at the birth of a new millennium, ready to unlock the mysteries of space, to free the Earth from the miseries of disease, and to harness the energies, industries and technologies of tomorrow.\n\nA new national pride will stir our souls, lift our sights, and heal our divisions.\n\nIt is time to remember that old wisdom our soldiers will never forget: that whether we are black or brown or white, we all bleed the same red blood of patriots, we all enjoy the same glorious freedoms, and we all salute the same great American Flag.\n\nAnd whether a child is born in the urban sprawl of Detroit or the windswept plains of Nebraska, they look up at the same night sky, they fill their heart with the same dreams, and they are infused with the breath of life by the same almighty Creator.\n\nSo to all Americans, in every city near and far, small and large, from mountain to mountain, and from ocean to ocean, hear these words:\n\nYou will never be ignored again.\n\nYour voice, your hopes, and your dreams, will define our American destiny. And your courage and goodness and love will forever guide us along the way.\n\nTogether, We Will Make America Strong Again.\n\nWe Will Make America Wealthy Again.\n\nWe Will Make America Proud Again.\n\nWe Will Make America Safe Again.\n\nAnd, Yes, Together, We Will Make America Great Again. Thank you, God Bless You, And God Bless America." # + # your code here # - # --- # #### Visualizing Topics # When we have several document or topics, usually plotting data is the best way to make sense of your results. # # - First of all, let's encode our document to topic mapping in a numpy matrix to simplify our processing. # + docs_id = nltk.corpus.inaugural.fileids() doc2topics = np.zeros((len(docs_id), inaug_ldamodel.num_topics)) for di, doc_topics in enumerate(inaug_ldamodel.get_document_topics(inaug_bow_corpus, minimum_probability = 0)): for ti, v in doc_topics: doc2topics[di, ti] = v # print(doc2topics) # - # - We can check the **share of a given topic in the documents of our corpus** in a barplot: # + which_topic = 2 # try to change this and see what happens! ind = range(len(docs_id)) fig = plt.figure(figsize=(16, 8)) plt.bar(ind, doc2topics[:,which_topic]) plt.xticks(ind, docs_id, rotation = 90) plt.title('Share of Topic #%d'%which_topic) plt.tight_layout() # fixes margins plt.show() # - # - We can check the **share of all the topics in all the documents** by using a heatmap: fig = plt.figure(figsize=(16, 12)) plt.pcolor(doc2topics, norm=None, cmap='Blues') plt.yticks(np.arange(doc2topics.shape[0]), docs_id) plt.xticks(np.arange(doc2topics.shape[1])+0.5, ["Topic #"+str(n) for n in range(inaug_ldamodel.num_topics)], rotation = 90) plt.colorbar(cmap='Blues') # plot colorbar plt.tight_layout() # fixes margins plt.show() # - A nice way to visualize the distribution over words that characterizes each topic is by printing, for each topic, the top-assocaited words resized **according to their strength of association** with the topic. # > **Credits**: # > # > The following code has been adapted from the **Text Analysis with Topic Models for the Humanities and Social Sciences** tutorials by <NAME>. # + fig = plt.figure(figsize=(16, 10)) num_top_words = 10 topic2top_words = dict(inaug_ldamodel.show_topics(formatted=False, num_words = num_top_words)) fontsize_base = 25 / max([w[0][1] for w in topic2top_words.values()]) # font size for word with largest share in corpus for topic, words_shares in topic2top_words.items(): plt.subplot(1, inaug_ldamodel.num_topics, topic + 1) plt.ylim(0, num_top_words + 0.5) # stretch the y-axis to accommodate the words plt.xticks([]) # remove x-axis markings ('ticks') plt.yticks([]) # remove y-axis markings ('ticks') plt.title('Topic #{}'.format(topic)) for i, (word, share) in enumerate(words_shares): plt.text(0.3, num_top_words-i-0.5, word, fontsize=fontsize_base*share) plt.tight_layout() plt.show() # - # --- # #### Exercise # # * Try to play with the topic mode, and especially try to change the number of topics. What happens? How do they distribute over time and presidents? # # * Use Gensim's `gensim.models.Phrases` class to calculate high-frequency bigrams and trigrams, add them to your bag-of-words document representation and train the topic model again. Does this improve your results? # ---
notebooks/12_Clustering_TopicModelling.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import torch from torch.autograd import Variable import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torch.optim import lr_scheduler from torch.optim.lr_scheduler import StepLR, MultiStepLR import numpy as np import matplotlib.pyplot as plt from math import * import time torch.set_default_tensor_type('torch.DoubleTensor') # activation function def activation(x): return x * torch.sigmoid(x) # build ResNet with one blocks class Net(nn.Module): def __init__(self,input_size,width): super(Net,self).__init__() self.layer_in = nn.Linear(input_size,width) self.layer_1 = nn.Linear(width,width) self.layer_2 = nn.Linear(width,width) self.layer_out = nn.Linear(width,1) def forward(self,x): output = self.layer_in(x) output = output + activation(self.layer_2(activation(self.layer_1(output)))) # residual block 1 output = self.layer_out(output) return output input_size = 1 width = 2 net = Net(input_size,width) def model(x): return x * (x - 1.0) * net(x) # exact solution def u_ex(x): return torch.sin(pi*x) # f(x) def f(x): return pi**2 * torch.sin(pi*x) grid_num = 200 x = torch.zeros(grid_num + 1, input_size) for index in range(grid_num + 1): x[index] = index * 1 / grid_num # loss function to DGM by auto differential def loss_function(x): h = 1 / grid_num sum_0 = 0.0 sum_1 = 0.0 sum_2 = 0.0 sum_a = 0.0 sum_b = 0.0 for index in range(grid_num): x_temp = x[index] + h / 2 x_temp.requires_grad = True # grad_x_temp = torch.autograd.grad(model(x_temp), x_temp, create_graph = True) grad_x_temp = torch.autograd.grad(outputs = model(x_temp), inputs = x_temp, grad_outputs = torch.ones(model(x_temp).shape), create_graph = True) grad_grad_x_temp = torch.autograd.grad(outputs = grad_x_temp[0], inputs = x_temp, grad_outputs = torch.ones(model(x_temp).shape), create_graph = True) sum_1 += ((grad_grad_x_temp[0])[0] + f(x_temp)[0])**2 for index in range(1, grid_num): x_temp = x[index] x_temp.requires_grad = True # grad_x_temp = torch.autograd.grad(model(x_temp), x_temp, create_graph = True) grad_x_temp = torch.autograd.grad(outputs = model(x_temp), inputs = x_temp, grad_outputs = torch.ones(model(x_temp).shape), create_graph = True) grad_grad_x_temp = torch.autograd.grad(outputs = grad_x_temp[0], inputs = x_temp, grad_outputs = torch.ones(model(x_temp).shape), create_graph = True) sum_2 += ((grad_grad_x_temp[0])[0] + f(x_temp)[0])**2 x_temp = x[0] x_temp.requires_grad = True # grad_x_temp = torch.autograd.grad(model(x_temp), x_temp, create_graph = True) grad_x_temp = torch.autograd.grad(outputs = model(x_temp), inputs = x_temp, grad_outputs = torch.ones(model(x_temp).shape), create_graph = True) grad_grad_x_temp = torch.autograd.grad(outputs = grad_x_temp[0], inputs = x_temp, grad_outputs = torch.ones(model(x_temp).shape), create_graph = True) sum_a = ((grad_grad_x_temp[0])[0] + f(x_temp)[0])**2 x_temp = x[grid_num] x_temp.requires_grad = True # grad_x_temp = torch.autograd.grad(model(x_temp), x_temp, create_graph = True) grad_x_temp = torch.autograd.grad(outputs = model(x_temp), inputs = x_temp, grad_outputs = torch.ones(model(x_temp).shape), create_graph = True) grad_grad_x_temp = torch.autograd.grad(outputs = grad_x_temp[0], inputs = x_temp, grad_outputs = torch.ones(model(x_temp).shape), create_graph = True) sum_b = ((grad_grad_x_temp[0])[0] + f(x_temp)[0])**2 sum_0 = h / 6 * (sum_a + 4 * sum_1 + 2 * sum_2 + sum_b) return sum_0 def error_function(x): error = 0.0 for index in range(len(x)): x_temp = x[index] error += (model(x_temp)[0] - u_ex(x_temp)[0])**2 return error / len(x) # set optimizer and learning rate decay optimizer = optim.Adam(net.parameters()) scheduler = lr_scheduler.StepLR(optimizer, 250, 0.8) # every 250 epoch, learning rate * 0.5 # + epoch = 5000 loss_record = np.zeros(epoch) error_record = np.zeros(epoch) time_start = time.time() for i in range(epoch): optimizer.zero_grad() loss = loss_function(x) loss_record[i] = float(loss) error = error_function(x) error_record[i] = float(error) np.save("unit_DGM_loss_1d_2.npy", loss_record) np.save("unit_DGM_error_1d_2.npy", error_record) if i % 1 == 0: print("current epoch is: ", i) print("current loss is: ", loss.detach()) print("current error is: ", error.detach()) loss.backward() optimizer.step() scheduler.step() torch.cuda.empty_cache() # clear memory time_end = time.time() print('total time is: ', time_end-time_start, 'seconds') # -
code/Results1D/wideNN quadratic/widenn_DGM_2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Tree Seed Dispersal modelling using ML # # data: [HF120](https://harvardforest1.fas.harvard.edu/exist/apps/datasets/showData.html?id=HF120) # # Use regression algorithms to predict distance that each seed travelled from the point at which it was dropped. # - height = height in meters at which we dropped each seed (meter) # - wind.velocity = wind velocity in meters/second taken with a digital anemometer as we dropped each seed (metersPerSecond) # # - data classification = distance in meters that each seed travelled from the point at which we dropped it (meter) # # # # # # ###### Tree Seed Dispersal modelling using Machine Learning # Forests are critically important for biodiversity and provide important health and economic benefits. Understanding forests' response to direct mortality resulting from infestation followed by defoliation and indirect mortality in the form of pre-emptive logging is however not very well understood. The efficacy of regeneration of vegetation following hemlock decline depends upon advance regeneration of seedlings and saplings, seed dispersal, and recruitment. In this study, we investigated whether the basic parameters of height of release and wind velocity can be used to model seed dispersal distance in areas both with and without canopies. For modelling, we trained three SVM based machine learning models that allow linear or nonlinear (polynomial and rbf) dependencies. Predicted values of dispersal distance generated by all three models did not provide a good fit to observed dispersal data. Poor fits of the model to the data are likely due to the very small size of the training dataset. Future research should compare model results across open areas and those with canopies, since it is known that latter diminished the effects of wind and height. More complex models and larger datasets are necessary to model highly non-linear seed dispersal patterns. # from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" # + import sys, os, pathlib, shutil, platform import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.linear_model import LogisticRegression from sklearn.neighbors import KNeighborsClassifier from sklearn import svm from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import ( classification_report, confusion_matrix, accuracy_score, ) from sklearn.model_selection import train_test_split, StratifiedShuffleSplit from sklearn.preprocessing import OrdinalEncoder from sklearn.compose import make_column_transformer from sklearn.impute import SimpleImputer from sklearn.pipeline import make_pipeline from sklearn.preprocessing import OneHotEncoder from sklearn.preprocessing import OrdinalEncoder from sklearn.preprocessing import StandardScaler # - # %matplotlib inline # + species = ['maple', 'oak', 'birch'] dataFileRootName=['hf120-01-', 'hf120-02-', 'hf120-03-'] dataFileName = [i + j + '.csv' for i, j in zip(dataFileRootName, species)] # myData = [pd.read_csv(str(pathlib.Path('./../data/hrvardf/HF120') / dataFileName)) for f in dataFileName] myData = (pd.concat([(pd.read_csv(str(pathlib.Path('./../data/harvardf/HF120') / f))).assign(spp=spc) for f, spc in zip(dataFileName, species)] ,ignore_index=True)) # - myData.shape myData.head(2) myData.tail(2) myData.info() myData.isnull().sum() # + myCols = ['height', 'spp', 'distance'] myData[myCols[0]].value_counts(dropna=False) myData[myCols[1]].value_counts(dropna=False) myData[myCols[2]].value_counts(dropna=False) myData.pivot_table(index = [myCols[0]] , columns = myCols[1] , values = myCols[2] , aggfunc=np.sum, fill_value=0) # - myCols = ['height', 'spp', 'wind.velocity'] myData[myCols[0]].value_counts(dropna=False) myData[myCols[1]].value_counts(dropna=False) myData[myCols[2]].value_counts(dropna=False) myData.pivot_table(index = [myCols[0]] , columns = myCols[1] , values = myCols[2] , aggfunc=np.sum, fill_value=0) filteredDataML = myData[myData['spp'].isin(['maple','oak'])] filteredDataML.shape filteredDataML.head() plt.figure(figsize=(8,5)) X_data, y_data = (filteredDataML["distance"].values, filteredDataML["height"].values) plt.plot(X_data, y_data, 'ro') plt.suptitle('Graph', y=1.02) plt.ylabel('distance') plt.xlabel('height') plt.show() X_data, y_data = (filteredDataML[['height','wind.velocity']].values, filteredDataML['distance'].values) import seaborn as sns sns.pairplot(filteredDataML[['height','wind.velocity', 'distance', 'spp']]) # + (filteredDataML[['height','wind.velocity', 'distance', 'spp']]).isnull().sum() # filteredDataML.dropna(inplace=True) # + from sklearn.model_selection import train_test_split from sklearn import linear_model from sklearn import metrics X_train, X_test, Y_train, Y_test = train_test_split(X_data, y_data, test_size=0.2, random_state = 1) # + model = linear_model.LinearRegression() model.fit(X_train, Y_train) # - Y_pred_train = model.predict(X_train) print('Coefficients:', model.coef_) print('Intercept:', model.intercept_) print('Mean squared error (MSE): %.2f' % metrics.mean_squared_error(Y_train, Y_pred_train)) print('Coefficient of determination (R^2): %.2f' % metrics.r2_score(Y_train, Y_pred_train)) Y_pred_test = model.predict(X_test) # + print('Coefficients:', model.coef_) print('Intercept:', model.intercept_) print('Coefficient of determination (R^2): %.2f' % metrics.r2_score(Y_test, Y_pred_test)) print('Mean Absolute Error:', metrics.mean_absolute_error(Y_test, Y_pred_test)) print('Mean Squared Error:', metrics.mean_squared_error(Y_test, Y_pred_test)) print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(Y_test, Y_pred_test))) # + import matplotlib.pyplot as plt import numpy as np plt.figure(figsize=(5,11)) # 2 row, 1 column, plot 1 plt.subplot(2, 1, 1) plt.scatter(x=Y_train, y=Y_pred_train, c="#7CAE00", alpha=0.3) # Add trendline # https://stackoverflow.com/questions/26447191/how-to-add-trendline-in-python-matplotlib-dot-scatter-graphs z = np.polyfit(Y_train, Y_pred_train, 1) p = np.poly1d(z) plt.plot(Y_test,p(Y_test),"#F8766D") plt.ylabel('Predicted LogS') # 2 row, 1 column, plot 2 plt.subplot(2, 1, 2) plt.scatter(x=Y_test, y=Y_pred_test, c="#619CFF", alpha=0.3) z = np.polyfit(Y_test, Y_pred_test, 1) p = np.poly1d(z) plt.plot(Y_test,p(Y_test),"#F8766D") plt.ylabel('Predicted LogS') plt.xlabel('Experimental LogS') plt.savefig('plot_vertical_logS.png') plt.savefig('plot_vertical_logS.pdf') plt.show() # - # https://scikit-learn.org/stable/auto_examples/svm/plot_svm_regression.html from sklearn.svm import SVR # Fit regression model svr_rbf = SVR(kernel='rbf', C=100, gamma=0.1, epsilon=.1) svr_lin = SVR(kernel='linear', C=100, gamma='auto') svr_poly = SVR(kernel='poly', C=100, gamma='auto', degree=3, epsilon=.1, coef0=1) # + svrs = [svr_rbf, svr_lin, svr_poly] kernel_label = ['RBF', 'Linear', 'Polynomial'] model=list() for ix, svr in enumerate(svrs): model.append(svr.fit(X_train, Y_train)) # - for ix, svr in enumerate(svrs): # print(model[ix].support_) pass # + # plotted_col = 'height' # X_train[:,0] # + # Look at the results lw = 2 plotted_col = 'height' # 'height','wind.velocity' # model.fit(X_train, Y_train) # svrs = [svr_rbf, svr_lin, svr_poly] # kernel_label = ['RBF', 'Linear', 'Polynomial'] model_color = ['m', 'c', 'g'] fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(15, 10), sharey=True) for ix, svr in enumerate(svrs): # horiz_data = X_train[:,0] horiz_data = Y_train axes[ix].scatter(horiz_data, model[ix].predict(X_train), color=model_color[ix], lw=lw, label='{} model'.format(kernel_label[ix])) axes[ix].scatter((horiz_data)[model[ix].support_], Y_train[model[ix].support_], facecolor="none", edgecolor=model_color[ix], s=50, label='{} support vectors'.format(kernel_label[ix])) # axes[ix].scatter(X[np.setdiff1d(np.arange(len(X)), svr.support_)], # y[np.setdiff1d(np.arange(len(X)), svr.support_)], # facecolor="none", edgecolor="k", s=50, # label='other training data') axes[ix].legend(loc='upper center', bbox_to_anchor=(0.5, 1.1), ncol=1, fancybox=True, shadow=True) fig.text(0.5, 0.04, 'data', ha='center', va='center') fig.text(0.06, 0.5, 'target', ha='center', va='center', rotation='vertical') fig.suptitle("Support Vector Regression", fontsize=14) plt.show() # - for ix, svr in enumerate(svrs): print(str(svr)+':') Y_pred_test = model[ix].predict(X_test) print('Coefficient of determination (R^2): %.2f' % metrics.r2_score(Y_test, Y_pred_test)) print('Mean Absolute Error:', metrics.mean_absolute_error(Y_test, Y_pred_test)) print('Mean Squared Error:', metrics.mean_squared_error(Y_test, Y_pred_test)) print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(Y_test, Y_pred_test)))
TreeSeedDispersalModelling.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="fV7D-IiJsHJK" # # Tackle Data Drift in your ML pipeline # > A practical & easy method to detect when you have data issues in your machine learning pipeline. # # - toc: true # - branch: master # - badges: true # - hide_binder_badge: true # - comments: true # - categories: [fastpages, jupyter, bqml, gcp, data, machine learning] # - image: images/No_More_Just_Hoping.png # - hide: false # - search_exclude: true # + [markdown] id="zUFWJyx92j2n" # ### Background # + [markdown] id="4YvOdJyRBh0b" # I have been working on a really fun & interesting binary classification model at work. We have a training process that runs weekly, but our batch prediction process (let's call it "test-time inference" process like <NAME> does [here](https://www.fast.ai/2017/11/13/validation-sets/)) runs multiple times per day. Much of the feature data changes intraday, **so I have to be sure that the test-time inference data that we feed into our weekly trained model (a pickle file) has the same charateristics as the data that it trained on.** I have developed a script in BigQuery to use [BQML](https://cloud.google.com/bigquery-ml/docs/reference) that runs every time my test-time inference process runs to check that my "training" data looks similar enough to my "test-time inference" data. If a simple model can figure it out, then you have # an issue! and you should fail/stop your process! # + [markdown] id="o5iPW5HKrGXu" # ![](https://github.com/david-dirring/data-wrangler-in-ml-world/blob/master/_notebooks/my_icons/No_More_Just_Hoping.png?raw=1) # # + [markdown] id="0cO4hmvEyXw9" # **LEFT SIDE OF IMAGE BELOW:** When we learn about machine learning, we typically do the left side of the figure below. We get the data, we add features, we train, we hold out on some data, and see how we would do on that holdout set. What we don't learn about is how we should SAVE down that model file somewhere so that your "test-inference" process can pick it up later. # # + [markdown] id="chm_sNHiyb36" # **RIGHT SIDE OF IMAGE BELOW**: Before my process grabs the model file to make predictions on my test-inference data, I need to be sure that this data looks similar enough to the data that the model trained on. If you do not check this, you run the risk of decreasing your accuracy. I will explain how I do this below in BigQuery. # # + [markdown] id="MJYS9P6_ux-P" # ![](https://github.com/david-dirring/data-wrangler-in-ml-world/blob/master/_notebooks/my_icons/no_more_just_hoping_data_is_right_training_and_inference.png?raw=1) # + [markdown] id="1M8Qwsp7jjkg" # --- # + [markdown] id="0kZrkZtewFN8" # ## Simple BQML Script & Model to Alert You To Data Issues # + [markdown] id="UPfwVsmkwYdK" # I have been wrangling data and data storytelling for over 13 years. # I love BigQuery. I love how fast & powerful it is. I also love the work they are putting into BQML. # + [markdown] id="_bTCuf3F-Hk-" # # For this demonstration, I will pull in a dataset from a BigQuery public dataset -- Daily Liquor Sales in Iowa. We can pretend that we work at a government agency in Iowa and our boss needs us to predict liquor sales by THE GALLON haha. Not realistic, but fun to think about. # # The script below will roughly follow these steps: # # 1. **Build** a table # 2. **Split** table into chunks for train / test / validation / test-inference. *In real life, you'll have a process to update the test-inference dataset. The "train / test / validation" is the left side of image above. The "test-inference" is right side.* # 3. **Grab 10k records** from train and from test-inference. # 4. Build simple **BQML model** to predict which records are from "train" (0) or from "test-inference" (1) # 5. Look at AUC of simple model to determine if there are data issues. **Fail query if AUC is above your threshold**. I have found .6 or .65 work as a good threshold. AUC of .5 is basically completely random guesses, which implies that the model cannot discern which records are train and which are test-inference. # 6. If issue, then **return the features** that are contributing to the data # issue. # + [markdown] id="bYEz2qX83F2s" # >Note: At end of this post, you'll find the links to a colab notebook where you can run the code yourself. # + [markdown] id="AeK05mMS82Mp" # # # --- # # # + [markdown] id="8oZrp60tX25g" # ### **Steps 1 & 2:** Query, split the data # ![](https://github.com/david-dirring/data-wrangler-in-ml-world/blob/master/_notebooks/my_icons/no_more_just_hoping_query_time_periods_join.png?raw=1) # + [markdown] id="ziNck8ixgyu1" # Output from query above to help you understand the data: # ![](https://github.com/david-dirring/data-wrangler-in-ml-world/blob/master/_notebooks/my_icons/no_more_just_hoping_20_random_rows.png?raw=1) # + [markdown] id="X6cLalhm8pv6" # # # --- # # # + [markdown] id="YHe-uyqKjLZk" # ### **Steps 3 & 4:** Create model in BQML # ![](https://github.com/david-dirring/data-wrangler-in-ml-world/blob/master/_notebooks/my_icons/no_more_just_hoping_10k_random_rows_union_together.png?raw=1) # + [markdown] id="vjm3ReTN8lKg" # # # --- # # # + [markdown] id="zg_AfflTkTND" # ### **Step 5:** Check AUC, fail if above threshold # ![](https://github.com/david-dirring/data-wrangler-in-ml-world/blob/master/_notebooks/my_icons/no_more_just_hoping_error_running_query.png?raw=1) # + [markdown] id="NJMyffvy8wWF" # # # --- # # # + [markdown] id="dG7Yurw5kosV" # ### **Steps 6:** Return top contributing features # ![](https://github.com/david-dirring/data-wrangler-in-ml-world/blob/master/_notebooks/my_icons/no_more_just_hoping_auc_is_bad.png?raw=1) # # # if AUC is below .65, then no issues. # ![](https://github.com/david-dirring/data-wrangler-in-ml-world/blob/master/_notebooks/my_icons/no_more_just_hoping_auc_is_fine.png?raw=1) # + [markdown] id="V1zDII1apCHy" # # # --- # # # + [markdown] id="Bbf9z95qnC3O" # ### **In Closing:** Sleep well David, BQML checked, your data is good to go # I have implemented this process at work. I honestly sleep better knowing that I have this failsafe in place. We have to pull from over 25+ different sources for 120 or so features. If we mess something up or if an ETL process fails upstream before it gets to our process, it is wonderful to know right away instead of just watching performance plummet or watch it bounce around a day or two later. # # Please reach out on [Twitter](https://twitter.com/DavidDirring) or [LinkedIn](https://www.linkedin.com/in/ddirring/) if you have any questions. # + [markdown] id="UUrNcFiSlN_m" # If you would like to look at the full script, please follow this [link](https://github.com/david-dirring/data-wrangler-in-ml-world/blob/af14db13c2d83acf0f80a8b46ae580e458f3c5e5/_notebooks/scripts/tackle_data_drift_with_bqml_bigquery_script.ipynb) to run in colab. Here is a [link](https://github.com/david-dirring/data-wrangler-in-ml-world/blob/af14db13c2d83acf0f80a8b46ae580e458f3c5e5/_notebooks/scripts/tackle_data_drift_with_bqml_bigquery_script.sql) to a .sql file to copy/paste into BQ console. # + [markdown] id="iuRAm3i-D04v" # ![](https://github.com/david-dirring/data-wrangler-in-ml-world/blob/master/_notebooks/my_icons/no_more_just_hoping_david_sleeping.png?raw=1)
_notebooks/2021-06-10-tackle-data-drift-with-bqml.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Visualize toy dataset # The goal of the toy dataset is to make experiment with small vocabulary size. # # It would be nice if we could generate data that looks like real text **from afar**. # # For example we could try to make it zipfian (the log frequency of the tokens is linear with regards to the log rank of the tokens). # # Maybe we can also take a look at the variance in the positions of the words, i.e. we would like some words to be able to appear pretty much everywhere in the sentence whereas others should appear in the beginning. # # We can do quick experiments without worrying too much about this but eventually but it would be nice to be able to trust this model for more complex experiments, hyper parameters search, ... # # One possibility is to learn a model on real data and learn the distribution of the features and learned params. # ## TODO: # - maybe we can gain better control if we directly manipulate the norms of the vectors as they should correlate with the frequencies. # # + import numpy as np from dictlearn.generate_synthetic_data import FakeTextGenerator V = 100 embedding_size = 50 markov_order = 6 temperature=1.0 sentence_size = 20 model = FakeTextGenerator(V, embedding_size, markov_order, temperature) n_sentences=1000 sentences = model.create_corpus(n_sentences, 5, 10, 0.7, 0.1, 0.5) # - # ## Alternative model # + import numpy as np from dictlearn.generate_synthetic_data_alt import FakeTextGenerator embedding_size = 20 markov_order = 3 temperature=1.0 sentence_size = 20 model = FakeTextGenerator(100, 400, embedding_size, markov_order, temperature) n_sentences=1000 sentences = model.create_corpus(n_sentences, 5, 10, 0.7, 0.1) # + import matplotlib.pyplot as plt # %matplotlib inline plt.figure(figsize=(20, 20)) plt.imshow(model.features.T, interpolation='none') plt.colorbar() plt.show() # + import matplotlib.pyplot as plt # %matplotlib inline from collections import Counter def summarize(sentences, V, label): """ sentences: list of list of characters V: vocabulary size """ sentence_size = len(sentences[0]) # count tokens and their positions #positions = np.zeros((V,sentence_size)) unigram_counts = Counter() for sentence in sentences: for i,tok in enumerate(sentence): unigram_counts[tok] += 1 #positions[w, i] += 1 ordered_count = [c for _, c in unigram_counts.most_common()] print ordered_count[:100] print ordered_count[500:600] print ordered_count[-100:] total_word_count = sum(ordered_count) # compute empirical frequency ordered_freq = [float(oc)/total_word_count for oc in ordered_count] print len(ordered_count), len(ordered_freq), V plt.plot(range(len(ordered_freq)), ordered_freq) plt.title("word frequency ordered by decreasing order of occurences (rank) on " + label) plt.show() plt.plot(np.log(range(len(ordered_freq))), np.log(ordered_count)) plt.title("log(word frequency) / log(rank) on " + label) plt.show() # - # ## Study corpus summarize(sentences, model.V, "corpus") # Not really zipfian so far. Maybe read [that](http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005110) if we really care about that. # ## Study definitions definitions = [] for defs in model.dictionary.values(): definitions += defs summarize(definitions, V, "definitions")
visualize_toy_dataset.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Total statistics # # This notebook simply computes total interaction statistics. It takes a while as it needs to iterate through all the raw data! # + import polars as pl import itertools as itt from pathlib import Path import csv import json import ctypes as ct from tqdm import tqdm path = "../../../data/users/" int_types = {"directs": 4, "indirects": 5} files = [f.absolute() for f in Path(path + 'raw/').glob("*.csv")] csv.field_size_limit(int(ct.c_ulong(-1).value // 2)) print("Total files to be processed: {}".format(len(files))) # + total_interactions = { "directs": {"total": 0, "count": 0}, "indirects": {"total": 0, "count": 0}, } for f in tqdm(files): with open(f) as file: reader = csv.reader(file, delimiter=',') next(reader) #skip the header for i, row in enumerate(reader): for int_type, int_col_num in int_types.items(): count = len(json.loads(row[int_col_num])) total_interactions[int_type]['total'] += count total_interactions[int_type]['count'] += 1 # - total_interactions
src/data-collection-processing/user-data/01b. total_interaction_stats.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Data Science Playook # # <p align="center"> # <img width="180" src="https://user-images.githubusercontent.com/19881320/54484151-b85c4780-4836-11e9-923f-c5e0e5afe866.jpg"> # </p> # # # ## Contact Information # # <NAME>: [LinkedIn](https://www.linkedin.com/in/williampontoncfsp/) # # Email: [@gorbulus](<EMAIL>) # # REPL: [@gorbulus](https://repl.it/@gorbulus) # # Github: [gorbulus](https://github.com/gorbulus) # # ## Overview # # This notebook will be a collection of Data Science basics, examples, and best practices for use as a reference guide. # # There are five sections of this guide broken down to the basic steps of the Data Analysis process. The first step is related to importing a dataset to your environment to be able to analyze and do the work. Some estimates show that Data Scientists can spend up to 80% of their time cleaning and organizing data for analysis and modeling. The second step is to define best practices for cleaning and organizing, how to handle ```NULL``` values, and how to merge and organize messy data. Once the dataset is normalized and cleaned, this guide will detail common statistical methods and define the values needed for visualization and final stats for the Interpretation section. Numerical Analysis is the 'magic' of Data Science, as this step often can expose anomalies and patterns in the data that humans alone might not have been able to interpret. The output of the Numerical Analysis step also powers the Visualizations that will be presented to the stakeholders in the final reporting, and is vital for the subsequent step of Interpretation and Reporting. Finally, the guide covers creating a deliverable to be passed off to other departments. The final result must be understandable by all audiences it is intended for, so knowing the goals of the project up front is imperative for keeping the results in the scope of the audience's understanding of the analysis. # # ### Data Science Steps # # - 0.0 Importing Data # # - 0.1 Cleaning & Organizing # # - 0.2 Numerical Analysis # # - 0.3 Visualizations # # - 0.4 Interpretation & Reporting # # ## Data Science with Python # # Python has a rich Data Science functionality that has been motivated by teams of scientists and engineers trying to solve scientific and engineering problems. Python's Object Oriented Design, ease of syntax, and available libraries make it the industry standard for Data Analysis. A 2016 study done by [O'Reily](https://www.oreilly.com/data/free/files/2016-data-science-salary-survey.pdf) shows that ```Python``` is now dominant over ```R``` throughout the Data Science community, favoring ```Python 3.6``` to the soon to be extinct ```Python 2.7```. I also plan to create a Data Science Playbook for ```R``` techniques in the future (I am still learning!). # # # ```Python``` has become the fastest growing programming language of 2019, and continues to remain the industry standard for modeling and analysis in the scientific and engineering industries. The Scientific Python Stack is an array of technologies that make Python so powerful for Data analysis and statistical prediction. # # To get everything running in this project, use ```pip install -r requirements.txt``` # # ## Project Stack # # <p align="center"> # <img src="https://user-images.githubusercontent.com/19881320/54723910-6457a880-4b3f-11e9-850b-8c2be2ff62a8.jpg"> # </p> # # ### Language # - Python 3.6 (replacing legacy Python 2.7 in 2020) # - Cython (a speedy C library for backing up numpy) # # ### Scientific & Numeric Power # - SciPy # - NumPy # - SciKitLearn # # ### Interactive Environment # - Anaconda IDE # - IPython Notebooks # - GitHub (version control) # - RMOTR Notebooks # # ### Data Science Libraries # - Analysis tools # - NumPy # - Pandas # - Cython # - Visualization tools # - Matplotlib # - Seaborn # - Bokeh #
Python for Data Analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # About this project # ## Why did I start this project? # # I believe everyone has some OCD in their daily lives. In my case, I have some Techno-OCD. Every now and then, it drives me crazy when I see my friends have countless unread emails in their inboxes or multiple tabs remain opened on their browsers. Apart from this, I also have a tiny pet peeve on Spotify playlist. As a loyal Spotify customer, I have been maintaining a personal playlist for over 6 years. Form time to time, I would dedicate my time to clean it up by removing old entries and adding new favorites exclusively fromr different playlists (mainly from the one called Today’s Top Hit). Sometimes, I told myself that I should not spend so much time on organizing this personal playlist. Yet I have an unique taste and I hate to hit “Next” every time I am offered with some automatically-generated songs that I do not enjoy. # # One day, when I was repeating this time consuming playlist process, this idea popped. Why don't I design an algorithm to help me finish this task, so that I have more time to organize my laptop folders? # ## Difference with Spotify # # You may have asked, Spotify also has song recommendation, is it not good enough for you? Short answer is Yes, it is not good enough. I enjoy Spotify recommended song and the algorithm is doing a good to recommend songs for me. Most of the recommended songs fits my mood and I give a credit for that. However, I have no control on what songs I would like to pick. Given my raw and desperate story about, I want to be recommend only from some designated playlist, which are the go-to playlist that I like and grab songs that I like from that playlist.
spotify_user_behaviour_predictor/_build/jupyter_execute/2_about_proj.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" # %matplotlib inline import SimpleITK as sitk import numpy as np import csv import os from PIL import Image import matplotlib.pyplot as plt import pandas as pd import scipy import cv2 from PIL import Image from skimage.segmentation import clear_border from skimage.measure import label, regionprops, perimeter from skimage import measure, morphology from skimage import util import h5py import zipfile from net_detector import * # + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" def load_itk_image(filename): itkimage = sitk.ReadImage(filename) numpyImage = sitk.GetArrayFromImage(itkimage) #numpyOrigin = np.array(list(reversed(itkimage.GetOrigin()))) #numpySpacing = np.array(list(reversed(itkimage.GetSpacing()))) #print(itkimage) return numpyImage #numpyOrigin#, numpySpacing, numpyImage # + _uuid="b3ddf1c5987a127a730582fb1f5b602edc4de802" def readCSV(filename): lines = [] with open(filename, "r") as f: csvreader = csv.reader(f) for line in csvreader: lines.append(line) return lines # + _uuid="0145d7b9ce0cca630aa0f1f8c6c4f2d2d03eb089" def worldToVoxelCoord(worldCoord, origin, spacing): stretchedVoxelCoord = np.absolute(worldCoord - origin) voxelCoord = stretchedVoxelCoord / spacing return voxelCoord def voxel_2_world(voxel_coordinates, origin, spacing): stretched_voxel_coordinates = voxel_coordinates * spacing world_coordinates = stretched_voxel_coordinates + origin return world_coordinates # + _uuid="a180163f71429245a1031e8cf0bf9f78db96eaae" def normalizePlanes(npzarray): maxHU = 400. minHU = -1000. npzarray = (npzarray - minHU) / (maxHU - minHU) npzarray[npzarray>1] = 1. npzarray[npzarray<0] = 0. return npzarray # + _uuid="acf377606e6bb036e0f371011413f42a725a58c6" def resize_image(numpyImage, numpySpacing): #calculate resize factor RESIZE_SPACING = [1, 1, 1] resize_factor = numpySpacing / RESIZE_SPACING new_real_shape = numpyImage.shape * resize_factor new_shape = np.round(new_real_shape) real_resize = new_shape / numpyImage.shape new_spacing = numpySpacing / real_resize new_img = scipy.ndimage.interpolation.zoom(numpyImage, real_resize) #print(new_img.shape) return new_img, new_spacing # + _uuid="1437dcd45763107035c6ac0dffc34cdff11f1426" def image_preprocess(slice): kernel = np.ones((3,3),np.uint8) lung_img = np.array(slice < 604, dtype=np.uint8) #Thresholds the image properly - keeping #(ret_val,lung_img) = cv2.threshold(slice, -700, -600,cv2.THRESH_BINARY) #Does not get rid of table marks - removing median = cv2.medianBlur(lung_img,5) #To remove salt & pepper noise(Median blur better than Gaussian - preserves edges) opening = cv2.morphologyEx(median, cv2.MORPH_OPEN, kernel) #lung_img, cv2.MORPH_OPEN, kernel) closing = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel) cleared = clear_border(closing) #opening) # labeled = label(cleared) #Select the 2 regions with largest areas areas = [r.area for r in regionprops(labeled)] areas.sort() if len(areas) > 2: for region in regionprops(labeled): if region.area < areas[-2]: for coordinates in region.coords: labeled[coordinates[0], coordinates[1]] = 0 segmented = np.array(labeled > 0, dtype=np.uint8) segmented = cv2.morphologyEx(segmented, cv2.MORPH_CLOSE, kernel) #Clean the areas inside lungs get_high_vals = segmented == 0 slice[get_high_vals] = 0 #Visualization #x, y = plt.subplots(1, 4, figsize=[20,20]) #y[0].set_title('Original Binary Image') #y[0].imshow(lung_img, plt.cm.bone) #y[1].set_title('Denoised Image') #y[1].imshow(cleared, plt.cm.bone) #y[2].set_title('Labeled Image') #y[2].imshow(segmented, plt.cm.bone) #y[3].set_title('Segmented lungs') #y[3].imshow(slice, plt.cm.bone) #plt.show() return slice # + _uuid="2d6ccd3c6adc1df9c6d9cef6e4297a10eebb088e" def segment_lung_from_ct_scan(ct_scan): return np.asarray([image_preprocess(slice) for slice in ct_scan]) # + _uuid="6b8e58d2ec18f69ec8d85e2eb859b45dd88088a0" #cands = readCSV(anno_path) #cand_path) def seq(start, stop, step=1): n = int(round((stop - start)/float(step))) #print(n) if n > 1: return([start + step*i for i in range(n+1)]) else: return([]) def draw_circles(image,cands,origin,spacing, filename): #make empty matrix, which will be filled with the mask RESIZE_SPACING = [1, 1, 1] image_mask = np.zeros(image.shape) #run over all the nodules in the lungs for ix, ca in enumerate(cands): if ca[0] == filename: #'1.3.6.1.4.1.14519.5.2.1.6279.6001.108197895896446896160048741492': #print(ca) #print(image)#(ca[4]) #get middel x-,y-, and z-worldcoordinate of the nodule radius = np.ceil(float(ca[4]))/2 #print(radius) coord_x = float(ca[1]) coord_y = float(ca[2]) coord_z = float(ca[3]) image_coord = np.array((coord_z,coord_y,coord_x)) #print(image_coord) #determine voxel coordinate given the worldcoordinate #print(image_coord, type(image_coord)) #print(origin, type(origin)) #print(spacing, type(spacing)) image_coord = worldToVoxelCoord(image_coord,origin,spacing) #determine the range of the nodule noduleRange = seq(-radius, radius, RESIZE_SPACING[0]) #print(noduleRange) #create the mask for x in noduleRange: for y in noduleRange: for z in noduleRange: coords = worldToVoxelCoord(np.array((coord_z+z,coord_y+y,coord_x+x)),origin,spacing) #print(coords, coords[0], coords[1], coords[2]) #if (np.linalg.norm(image_coord-coords) * RESIZE_SPACING[0]) < radius: try: image_mask[int(np.round(coords[0])), int(np.round(coords[1])), int(np.round(coords[2]))] = int(1) except: pass #print(image_mask.shape) return image_mask # - os.listdir('/media/demcare/1T_Storage/lavleen/lavleen/lung_masks/') # + def hdf5_list_files(name): list_files.append(name) list_files = [] train_data = [] train_origins = [] train_spacings = [] #dirc = os.listdir('./numpyimages/') #print(dirc) #for sset in dirc: #pth = '/media/demcare/1T_Storage/lavleen/lavleen/numpyimages_test/subset7.h5' #+str(sset) pth = './numpyimages_test/subset7.h5' #print(pth) read_train = h5py.File(pth, 'r')#'HDF5//subset0.h5','r') read_train.visit(hdf5_list_files) train_data.append(read_train) pth_o = './numpyorigins_test/subset7_origin.h5' #+str(sset) read_train_origin = h5py.File(pth_o, 'r')#'HDF5//subset0_origin.h5', 'r') train_origins.append(read_train_origin) pth_s = './numpyspacing_test/subset7_spacing.h5' #+str(sset) read_train_space = h5py.File(pth_s, 'r')#'HDF5//subset0_spacing.h5', 'r') train_spacings.append(read_train_space) #f.close() # - print(train_data) for file in train_data: if '1.3.6.1.4.1.14519.5.2.1.6279.6001.564534197011295112247542153557.mhd' in file: print(file) print(file['1.3.6.1.4.1.14519.5.2.1.6279.6001.564534197011295112247542153557.mhd'].value) print(len(list_files)) import math # + _uuid="5117d8d5a3ac3c76aac8c84860786bc4380f066c" anno_path = "annotations.csv" h5f_lungs = h5py.File('/media/demcare/1T_Storage/lavleen/lavleen/lung_masks/lung_masks.h5', 'a') h5f_nod = h5py.File('/media/demcare/1T_Storage/lavleen/lavleen/nodule_masks/nodule_masks.h5', 'a') def full_preprocessing(anno_path): cands = readCSV(anno_path) #Now, for multiple images files_already_read = [] count = 0 for filename in list_files: if filename not in files_already_read:# and filename[:-4] == '1.3.6.1.4.1.14519.5.2.1.6279.6001.108197895896446896160048741492': files_already_read.append(str(filename)) #print(filename) #img_path = '../input/subset0/subset0/subset0/' + str(filename[:-4]) + '.mhd' #print(img_path) #numpyImage, numpyOrigin, numpySpacing = load_itk_image(img_path) for file in train_data: if filename in file: numpyImage = file[filename].value for file in train_origins: if filename in file: numpyOrigin = file[filename].value for file in train_spacings: if filename in file: numpySpacing = file[filename].value #Discard 20 slices from both ends #print(numpyImage.shape) #print(len(numpyImage)) #numpyImage = numpyImage[20:120,:,:] #print(numpyImage.shape) #print(len(new_imga)) #Resize the image before preprocessing resized_img, new_spacing = resize_image(numpyImage, numpySpacing) #print(resized_img.shape) resized_img = resized_img[20:270,:,:] #print(numpyImage.shape) #Pre-process the image numpyImage = resized_img + 1024 processed_img = segment_lung_from_ct_scan(numpyImage) numpyImage = processed_img - 1024 #create nodule mask nodule_mask = draw_circles(numpyImage,cands,numpyOrigin,new_spacing, str(filename[:-4])) lung_img_512, nodule_mask_512 = np.zeros((numpyImage.shape[0], 512, 512)), np.zeros((nodule_mask.shape[0], 512, 512)) original_shape = numpyImage.shape for z in range(numpyImage.shape[0]): offset = (512 - original_shape[1]) upper_offset = int(np.round(offset/2)) lower_offset = int(offset - upper_offset) #print(z, upper_offset, lower_offset) new_origin = voxel_2_world([-upper_offset,-lower_offset,0],numpyOrigin,new_spacing) #print(numpyImage.shape) #print(nodule_mask.shape) lung_img_512[z, upper_offset:-lower_offset,upper_offset:-lower_offset] = numpyImage[z,:,:] nodule_mask_512[z, upper_offset:-lower_offset,upper_offset:-lower_offset] = nodule_mask[z,:,:] # save images. #np.save('./lung_masks/' + str(filename[:-4]) + '_lung_img.npz', lung_img_512) #np.save('./nodule_masks/' + str(filename[:-4]) + '_nodule_mask.npz', nodule_mask_512) try: h5f_lungs.create_dataset(str(filename[:-4])+'.npz', data=lung_img_512, compression='gzip') h5f_nod.create_dataset(str(filename[:-4]) + '.npz', data=nodule_mask_512, compression='gzip') except: pass count += 1 print(count) else: continue full_preprocessing(anno_path) h5f_lungs.close() h5f_nod.close() # + import tflearn from tflearn.layers.core import * from tflearn.layers.conv import * from tflearn.data_utils import * from tflearn.layers.merge_ops import * from tflearn.layers.normalization import * from tflearn.layers.estimator import regression from tflearn.helpers.trainer import * from tflearn.optimizers import * import tensorflow def nodule_rpn(): layer1 = input_data(shape=[None, 250, 512, 512, 1]) #batch size, X, Y, Z, channels layer1 = conv_3d(layer1, nb_filter=64, filter_size=3, strides=1, padding='same', activation='relu') layer1 = batch_normalization(layer1) #layer1 = dropout(layer1, keep_prob=0.8) layer1 = conv_3d(layer1, nb_filter=64, filter_size=3, strides=1, padding='same', activation='relu') #layer1 = batch_normalization(layer1) pool1 = max_pool_3d(layer1, kernel_size=2, strides=2) layer2 = conv_3d(pool1, nb_filter=128, filter_size=3, strides=1, padding='same', activation='relu') layer2 = dropout(layer2, keep_prob=0.8) layer2 = conv_3d(layer2, nb_filter=128, filter_size=3, strides=1, padding='same', activation='relu') pool2 = max_pool_3d(layer2, kernel_size=2, strides=2) layer3 = conv_3d(pool2, nb_filter=256, filter_size=3, strides=1, padding='same', activation='relu') layer3 = dropout(layer3, keep_prob=0.8) layer3 = conv_3d(layer3, nb_filter=256, filter_size=3, strides=1, padding='same', activation='relu') pool3 = max_pool_3d(layer3, kernel_size=2, strides=2) layer4 = conv_3d(pool3, nb_filter=512, filter_size=3, strides=1, padding='same', activation='relu') layer4 = dropout(layer4, keep_prob=0.8) layer4 = conv_3d(layer4, nb_filter=512, filter_size=3, strides=1, padding='same', activation='relu') pool4 = max_pool_3d(layer4, kernel_size=2, strides=2) layer5 = conv_3d(pool4, nb_filter=1024, filter_size=3, strides=1, padding='same', activation='relu') layer5 = dropout(layer5, keep_prob=0.8) layer5 = conv_3d(layer5, nb_filter=1024, filter_size=3, strides=1, padding='same', activation='relu') up6 = merge([conv_3d_transpose(layer5, nb_filter=2, filter_size=3, strides=1, output_shape=[32, 64, 64, 512], bias=False), layer4], mode='concat', axis=1) #output_shape=[250, 512, 512] layer6 = conv_3d(up6, nb_filter=512, filter_size=3, strides=1, padding='same', activation='relu') layer6 = dropout(layer6, keep_prob=0.8) layer6 = conv_3d(layer6, nb_filter=512, filter_size=3, strides=1, padding='same', activation='relu') up7 = merge([conv_3d_transpose(layer6, nb_filter=2, filter_size=3, strides=1, output_shape=[63,128,128,256], bias=False), layer3], mode='concat', axis=1) layer7 = conv_3d(up7, nb_filter=256, filter_size=3, strides=1, padding='SAME', activation='relu') layer7 = dropout(layer7, keep_prob=0.8) layer7 = conv_3d(layer7, nb_filter=256, filter_size=3, strides=1, padding='SAME', activation='relu') up8 = merge([conv_3d_transpose(layer7, nb_filter=2, filter_size=3, strides=1, output_shape=[125, 256, 256, 128], bias=False), layer2], mode='concat', axis=1) layer8 = conv_3d(up8, nb_filter=128, filter_size=3, strides=1, padding='SAME', activation='relu') layer8 = dropout(layer8, keep_prob=0.8) layer8 = conv_3d(layer8, nb_filter=128, filter_size=3, strides=1, padding='SAME', activation='relu') up9 = merge([conv_3d_transpose(layer8, nb_filter=2, filter_size=3, strides=1, output_shape=[250, 512, 512, 64], bias=False), layer1], mode='concat', axis=1) layer9 = conv_3d(up9, nb_filter=64, filter_size=3, strides=1, padding='SAME', activation='relu') layer9 = dropout(layer9, keep_prob=0.8) layer9 = conv_3d(layer9, nb_filter=64, filter_size=3, strides=1, padding='SAME', activation='relu') layer10 = conv_3d(layer9, nb_filter=1, filter_size=1, strides=1, activation='sigmoid') model = tflearn.DNN(layer10) return model # + import keras from keras.layers.convolutional import * from keras.layers import Dropout, Input from keras.layers import Conv3D, MaxPooling3D from keras.layers import * from keras.optimizers import SGD, Adam from keras.models import Model # change the loss function def dice_coef(y_true, y_pred): smooth = 1. y_true_f = K.flatten(y_true) y_pred_f = K.flatten(y_pred) intersection = K.sum(y_true_f * y_pred_f) return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth) def dice_coef_loss(y_true, y_pred): return -dice_coef(y_true, y_pred) ''' The UNET model is compiled in this function. ''' def unet_model(): inputs = Input((1, 512, 512, 250)) #((1, 512, 512, 250)) conv1 = Conv3D(64, kernel_size=3, strides=1, activation='relu', padding='same')(inputs) #conv1 = Dropout(0.2)(conv1) print(conv1._keras_shape) conv1 = Conv3D(64, kernel_size=3, strides=1, activation='relu', padding='same')(conv1) print(conv1._keras_shape) pool1 = MaxPooling3D(pool_size=(2, 2, 2), dim_ordering="th")(conv1) print(pool1._keras_shape) conv2 = Conv3D(128, kernel_size=3, strides=1, activation='relu', padding='same')(pool1) print(conv2._keras_shape) #conv2 = Dropout(0.2)(conv2) conv2 = Conv3D(128, kernel_size=3, strides=1, activation='relu', padding='same')(conv2) print(conv2._keras_shape) pool2 = MaxPooling3D(pool_size=(2, 2, 2), dim_ordering="th")(conv2) print(pool2._keras_shape) #conv3 = Conv3D(256, kernel_size=3, strides=3, activation='relu', padding='same')(pool2) #print(conv3._keras_shape) #conv3 = Dropout(0.2)(conv3) #conv3 = Conv3D(256, kernel_size=3, strides=3, activation='relu', padding='same')(conv3) #print(conv3._keras_shape) #pool3 = MaxPooling3D(pool_size=(2, 2, 2), dim_ordering="th")(conv3) #print(pool3._keras_shape) conv4 = Conv3D(256, kernel_size=3, strides=1, activation='relu', padding='same')(pool2) print(conv4._keras_shape) #conv4 = Dropout(0.2)(conv4) conv4 = Conv3D(64, kernel_size=3, strides=1, activation='relu', padding='same')(conv4) print(conv4._keras_shape) #pool4 = MaxPooling3D(pool_size=(2, 2, 2), dim_ordering="th")(conv4) #conv5 = Conv3D(1024, kernel_size=3, strides=3, activation='relu', padding='same')(pool4) #conv5 = Dropout(0.2)(conv5) #conv5 = Conv3D(1024, kernel_size=3, strides=3, activation='relu', padding='same')(conv5) #up6 = merge([UpSampling3D(size=(2, 2, 2))(conv5), conv4], mode='concat', concat_axis=1) #conv6 = Conv3D(512, kernel_size=3, strides=3, activation='relu', padding='same')(up6) #conv6 = Dropout(0.2)(conv6) #conv6 = Conv3D(512, kernel_size=3, strides=3, activation='relu', padding='same')(conv6) #up7 = merge([UpSampling3D(size=(2, 2, 2))(conv6), conv3], mode='concat', concat_axis=1) #conv7 = Conv3D(256, kernel_size=3, strides=3, activation='relu', padding='same')(up7) #conv7 = Dropout(0.2)(conv7) #conv7 = Conv3D(256, kernel_size=3, strides=3, activation='relu', padding='same')(conv7) up8 = concatenate([UpSampling3D(size=(2, 2, 2), data_format='channels_first')(conv4), conv2], axis=1) conv8 = Conv3D(128, kernel_size=3, strides=1, activation='relu', padding='same')(up8) conv8 = Dropout(0.2)(conv8) conv8 = Conv3D(32, kernel_size=3, strides=1, activation='relu', padding='same')(conv8) up9 = concatenate([UpSampling3D(size=(2, 2, 2), data_format='channels_first')(conv8), conv1], axis=1) conv9 = Conv3D(64, kernel_size=3, strides=1, activation='relu', padding='same')(up9) conv9 = Dropout(0.2)(conv9) conv9 = Conv3D(64, kernel_size=3, strides=1, activation='relu', padding='same')(conv9) conv10 = Conv3D(1, 1, activation='sigmoid')(conv9) model = Model(input=inputs, output=conv10) model.summary() model.compile(optimizer=Adam(lr=1e-3), loss=dice_coef_loss, metrics=[dice_coef]) return model # + def lung_files_list(name): lung_files.append(name) def nod_files_list(name): nod_files.append(name) lung_files = [] nod_files = [] pth_l = '/media/demcare/1T_Storage/lavleen/lavleen/lung_masks/lung_masks.h5' #print(pth) lung_train = h5py.File(pth_l, 'r') lung_train.visit(lung_files_list) pth_n = '/media/demcare/1T_Storage/lavleen/lavleen/nodule_masks/nodule_masks.h5' nod_train = h5py.File(pth_n, 'r') nod_train.visit(nod_files_list) # - print(len(lung_files)), print(len(nod_train)) for file in lung_files: #print(file) #print(file[:64]) filename = file[:64] + '.npz' print(lung_train[file].shape, nod_train[filename].shape) #break # + # change the loss function def dice_coef(y_pred, true): smooth = 1. y_true_f = K.flatten(y_true) y_pred_f = K.flatten(y_pred) intersection = K.sum(y_true_f * y_pred_f) return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth) def dice_coef_loss(y_pred, y_true): return -dice_coef(y_pred, y_true) # - def train_and_predict(use_existing): print('Loading and preprocessing train data...') count = 0 imgs_train = [] imgs_mask_train = [] for file in lung_train: #filename = file[:64] + '_nodule_mask.npz' imgs_train.append((lung_train[file].value).astype(np.float32)) imgs_mask_train.append((nod_train[file].value).astype(np.float32)) count += 1 if count == 51: break #imgs_test = np.load(img_path.split('/')[-1]+"testImages.npy").astype(np.float32) #imgs_mask_test_true = np.load(img_path.split('/')[-1]+"testMasks.npy").astype(np.float32) #mean = np.mean(imgs_train) # mean for data centering #std = np.std(imgs_train) # std for data normalization #imgs_train -= mean # images should already be standardized, but just in case #imgs_train /= std print('Creating and compiling model...') model = unet_model() #nodule_rpn() # # Saving weights to unet.hdf5 at checkpoints model_checkpoint = model.save('./nodule_rpn.tfl') # # Should we load existing weights? # Set argument for call to train_and_predict to true at end of script if use_existing: model.load('./nodule_rpn.tfl') # # The final results for this tutorial were produced using a multi-GPU # machine using TitanX's. # For a home GPU computation benchmark, on my home set up with a GTX970 # I was able to run 20 epochs with a training set size of 320 and # batch size of 2 in about an hour. I started getting reseasonable masks # after about 3 hours of training. # print('Fitting model...') #loss = tensorflow.losses.softmax_cross_entropy(imgs_train, imgs_mask_train) #train_ops = tflearn.TrainOp(loss=loss, optimizer=Adam(learning_rate=1e-4))#, metric=dice_coef) #trainer = tflearn.Trainer(train_ops=train_ops, tensorboard_verbose=0) #for file in lung_train: # imgs_train = lung_train[file].value # imgs_mask_train = nod_train[file].value #trainer.fit(imgs_train, imgs_mask_train, n_epoch=10, batch_size=2, shuffle=True, callbacks=[model_checkpoint]) # verbose=1, for img in imgs_train: for mask in imgs_mask_train: x = img.shape[0] y = img.shape[1] z = img.shape[2] img = img.reshape((1, -1, z, y, x)) x1 = mask.shape[0] y1 = mask.shape[1] z1 = mask.shape[2] mask = mask.reshape((1, -1,z1,y1,x1)) if x == 250: model.fit(img, mask, epochs=5, batch_size=1, verbose=1, shuffle=True) #, callbacks=[model_checkpoint]) # verbose=1, break #train_ops=train_ops, # loading best weights from training session print('-'*30) print('Loading saved weights...') print('-'*30) model.load('./nodule_rpn.tfl') print('-'*30) #print('Predicting masks on test data...') print('-'*30) #num_test = len(imgs_test) #imgs_mask_test = np.ndarray([num_test,1,512,512],dtype=np.float32) #for i in range(num_test): # imgs_mask_test[i] = model.predict([imgs_test[i:i+1]], verbose=0)[0] #np.save('masksTestPredicted.npy', imgs_mask_test) #mean = 0.0 #for i in range(num_test): # mean+=dice_coef_np(imgs_mask_test_true[i,0], imgs_mask_test[i,0]) #mean/=num_test #print("Mean Dice Coeff : ",mean) train_and_predict(False) # Setup the HDF5 file server # To add images to the database # + #h5f.close() #Store numpy image to the hdf5 files h5f = h5py.File('HDF5//subset2.h5', 'w') files_path = 'C://Users//Ajitesh//Downloads//subset2' list_files = os.listdir(files_path) for filename in list_files: #Extract the file from ZIP folder directly to the server file = files_path + '//' + filename if 'mhd' in file: #print(file) h5f.create_dataset(filename, data=load_itk_image(file), compression='gzip') h5f.close() # + h5f.close() #Store numpy image's origin to the hdf5 files h5f = h5py.File('HDF5//subset2_origin.h5', 'w') files_path = 'C://Users//Ajitesh//Downloads//subset2' #Practicum//subset0.zip' #'../input/subset0/subset0/subset0' list_files = os.listdir(files_path) for filename in list_files: #Extract the file from ZIP folder directly to the server file = files_path + '//' + filename if 'mhd' in file: #print(file) h5f.create_dataset(filename, data=load_itk_image(file)) h5f.close() # + #Store numpy image's spacing to the hdf5 files h5f = h5py.File('HDF5//subset2_spacing.h5', 'w') files_path = 'C://Users//Ajitesh//Downloads//subset2' #Practicum//subset0.zip' #'../input/subset0/subset0/subset0' list_files = os.listdir(files_path) for filename in list_files: #Extract the file from ZIP folder directly to the server file = files_path + '//' + filename if 'mhd' in file: #print(file) h5f.create_dataset(filename, data=load_itk_image(file)) h5f.close() # - print(list_files) # Read the files # + h5f_r = h5py.File('HDF5//subset0.h5', 'r') #keys= list(h5f_r) #print(keys) image = h5f_r['1.3.6.1.4.1.14519.5.2.1.6279.6001.122763913896761494371822656720.mhd'].value print(image) #h5f_r.close() # - for name in h5f_r: print(name) # + _uuid="b4055a57dab7ccdff14647962985980a66d57c5a" for file in lung_train: #filename = file[:64] + '_nodule_mask.npz' filename = file[:64] + '.npz' if filename == '1.3.6.1.4.1.14519.5.2.1.6279.6001.108197895896446896160048741492': print("found") imgs_train = lung_train[file].value imgs_mask_train = nod_train[file].value break for slice in imgs_train: for a_slice in imgs_mask_train: #Visualization x, y = plt.subplots(1, 2, figsize=[20,20]) #y[0].set_title('Original Binary Image') y[0].imshow(slice, plt.cm.bone) #y[1].set_title('Denoised Image') y[1].imshow(a_slice, plt.cm.bone) #y[2].set_title('Labeled Image') #y[2].imshow(segmented, plt.cm.bone) #y[3].set_title('Segmented lungs') #y[3].imshow(slice, plt.cm.bone) #plt.show() break # -
LUNA_neural (3).ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.7.13 64-bit ('py37-dsup') # language: python # name: python3 # --- # # `stats` subpackage of `scipy` # # Let's first look how we draw some random variables. # # There are many probability distributions. Some are *continuous* while some are *discrete*. Examples of famous distributions are: # # * normal (norm) # * exponential (expon) # * poisson and # * bernoulli # %matplotlib inline import matplotlib.pyplot as plt # + # let us try to draw some random variables import numpy as np from scipy import stats rv = stats.norm() rv.random_state = 42 # this is how we make it debug-able. # - variable_1 = rv.rvs() print (variable_1) variable_2 = rv.rvs() print (variable_2) variables = rv.rvs(1000) print (variables[0:10]) plt.hist(variables) plt.show() # let's try uniform distribution rv_uniform = stats.uniform() rv_uniform.random_state = 42 vars_uniform = rv_uniform.rvs(size=1000) plt.hist(vars_uniform, bins=10) plt.show() # + # try expon distribution # + # try poisson distribution # + # try bernoulli distribution # - # ## Central Limit Theorem # # **This is for interviews.** In day-to-day operation, one does not need this to remember. # # * First, we assume there is a probability distribution, from which we can draw multiple random variables. # * We draw $j$ number of random variables. Then, we compute the means $\mu$ of this. We call these **1 sample**. # * We *repeat* it for sufficient number of times (1000) or at least 30!!!. # * Note: 30 is just a number from an example in a famous text book. Beware!!! # * When we try to figure out the distribution of the samples (the means), we will find it is **normally distributed**. # let's try np.mean (which we haven't yet) a = np.array([ [0, 1, 2], [0, 0, 0] ]) np.mean(a, axis=1) # + # number of sample num = [1, 10, 50, 100] # list of sample means means = [] # Generating 1, 10, 30, 100 random numbers from A UNIFORM DISTRIBUTION # taking their mean and appending it to list means. for j in num: # Generating seed so that we can get same result # every time the loop is run... rv = stats.bernoulli(p=0.10) rv.random_state = 42 x = [rv.rvs(size=j) for _ in range(0, 1000)] means.append(np.mean(x, axis=1)) k = 0 # plotting all the means in one figure fig, ax = plt.subplots(2, 2, figsize =(8, 8)) for i in range(0, 2): for j in range(0, 2): # Histogram for each x stored in means ax[i, j].hist(means[k], 10, density = True) ax[i, j].set_title(label = num[k]) k = k + 1 plt.show() # + # try it with other distributions such as norm, exponential, poisson or bernoulli etc. # - # let's see if a sample m belongs to a distribution x rv = stats.norm() rv.random_state = 42 x = rv.rvs(size=30) m = 0.3 print (stats.ttest_1samp(x, m)) # let's see if a sample m belongs to a distribution x rv = stats.norm() rv.random_state = 42 x = rv.rvs(size=30) m = 0.30 print (stats.ttest_1samp(x, m)) # let's see if a sample m belongs to a distribution x m = 0.9 print (stats.ttest_1samp(x, m)) # Smaller $p$ value means less confident to reject null-hypothesis that the the measurement $m$ does not belong to the sample $x$. In short, $p \leq 0.05$ means they are from two different distribution. # # Let's try t-test with two distributions. # let's see if x_1 and x_2 belongs to same distribution rv_1 = stats.norm() x_1 = rv.rvs(size=30) rv_2 = stats.norm() x_2 = rv.rvs(size=30) print (stats.ttest_ind(a=x_1, b=x_2)) # let's see if x_1 and x_2 belongs to same distribution rv_1 = stats.norm(loc=5, scale=4) x_1 = rv_1.rvs(size=300) rv_2 = stats.norm(loc=8, scale=20) x_2 = rv_2.rvs(size=300) print (stats.ttest_ind(a=x_1, b=x_2)) # # Assignment # # ![face model 50%](https://s3.ap-south-1.amazonaws.com/s3.studytonight.com/curious/uploads/pictures/1592469192-74364.png) # # 1. Write a function to move the centroid of `model_points` to (0, 0, 0). # 2. Write a function to move/resize mustache image to that of the `shifted_model_points`. # clone this and go into its directory # !git clone <EMAIL>:neolaw84/yadil.git # %pip install -U insightface onnxruntime import sys if "." not in sys.path: sys.path.append("./yadil") import requests import cv2 # %matplotlib inline import matplotlib.pyplot as plt import numpy as np from insightface.app import FaceAnalysis from yadil.image.face_model import model_points print (model_points.shape) # + def show_cv2_image(img): # <-- ဒီလိုရေးတာ function ကို define တယ်လို့ ခေါ်တယ်။ img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) plt.imshow(img_rgb) plt.show() def download_and_decode_cv2(url, grayscale=False): rr = requests.get(url) _nparr = np.frombuffer(rr.content, np.uint8) _img = cv2.imdecode(_nparr, cv2.IMREAD_COLOR) if grayscale: _img = cv2.cvtColor(_img, cv2.COLOR_RGB2GRAY) return _img def show_model_3d(points): ax = plt.axes(projection ="3d") ax.scatter3D(points[:, 0], points[:, 2], -points[:, 1]) ax.set_xlabel('X') ax.set_ylabel('Z') ax.set_zlabel('Y') plt.show() # - show_model_3d(model_points) # + # 1. shift bonding box centroid def shift_centroid_to_origin(points): # Move the input points so that their centroid (mid point of bounding box) is at origin. num_points = points.shape[0] min_x = min(points[:, 0]) min_y = min(points[:, 1]) min_z = min(points[:, 2]) max_x = max(points[:, 0]) max_y = max(points[:, 1]) max_z = max(points[:, 2]) x_ = np.full(shape=num_points, fill_value=(min_x + max_x) / 2.0) y_ = np.full(shape=num_points, fill_value=(min_y + max_y) / 2.0) z_ = np.full(shape=num_points, fill_value=(min_z + max_z) / 2.0) points[:, 0] = points[:, 0] - x_ points[:, 1] = points[:, 1] - y_ points[:, 2] = points[:, 2] - z_ return points shifted_model_points = shift_centroid_to_origin(model_points) show_model_3d(shifted_model_points) # - mustache = download_and_decode_cv2("https://i.ibb.co/hBX7Dpf/mustache.jpg", grayscale=False) show_cv2_image(mustache) print(shifted_model_points[33]) def plot_together(model_points, mustache): plt.xlim(-300, 300) plt.ylim(-300, 300) plt.imshow(mustache) plt.scatter(shifted_model_points[:, 0], -shifted_model_points[:, 1]) plt.show() # + # 2. do something here to resize/move mustache to match that of shifted_model_points[33] # Z would be the same as shifted_model_points[33]'s Z # mustache's mid-x and upper-mid-y should align to that of shifted_model_points[33]'s X and Y. # try cv2.getAffineTransform or cv2.estimateAffinePartial2D to get matrix M that maps mustache coordinates # to that of shifted_model_points. # use cv2.warpAffine(img, M) to do the actual mapping. # note: you can't place image in negative area. Thus, move the model points up and right by multiplying it with # inverse of discovered matrix. plot_together(shifted_model_points, mustache) # - original_img = download_and_decode_cv2("https://i.ibb.co/5WNdy1R/1200px-Tom-Holland-by-Gage-Skidmore.jpg", grayscale=False) show_cv2_image(original_img) app = FaceAnalysis(allowed_modules=["detection", "genderage", "landmark_3d_68"], providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']) app.prepare(ctx_id=0, det_size=(640, 640)) faces = app.get(original_img) f = faces[0] landmark = f["landmark_3d_68"] show_model_3d(landmark) # + # 3. solve the problem to discover matrix M to map shifted_model_points to that of discovered image. # Use the discovered matrix M to transform mustache to the image coordinates. # hint: mustache is very small. make a large matrix (with 3 channels) full of zeros the same size as tom's image. # then, copy mustache and multiply to see it move around. # - def put_mustache(original_img, mustache): mustache_resized = mustache roi = original_img mustache_resized_gray = cv2.cvtColor(mustache_resized, cv2.COLOR_RGB2GRAY) ret, mask = cv2.threshold(mustache_resized_gray, 120, 255, cv2.THRESH_BINARY) final_roi = cv2.bitwise_or(roi,roi,mask = mask) return final_roi new_img = put_mustache(original_img, mustache) show_cv2_image(new_img)
week08/02_assignment_1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## CSE583: Natural Language Processing # #### <NAME>, <NAME>, <NAME> # ### Procedural specification # #### Background # Natural language processing affords new possibilities for sentiment analysis of text. This is especially beneficial in everyday scenarios when interacting with any chatbot: for example, when asking Siri for weather updates or interfacing with a customer service bot. # # To this end, we seek to investigate how to better understand natural language processing. Our main problem being addressed is how to generate fake text that effectively matches the style of an input corpus. This program will specifically generate novel poems based on trained datasets of poems by different authors. # # #### User profile # Someone who wants to generate any fake text and who wants to know a little bit more about natural language processing. This user will have a basic knowledge of Python, and is able to run the program using CLI. # # # #### Data sources # We will use input data from a collection of poems via Project Gutenberg, on their poetry bookshelf: http://www.gutenberg.org/ebooks/bookshelf/60 # Poems by <NAME>: http://www.gutenberg.org/cache/epub/1567/pg1567.txt # Poems by <NAME>: http://www.gutenberg.org/cache/epub/2490/pg2490.txt # Poems by <NAME>: http://www.gutenberg.org/cache/epub/2678/pg2678.txt # # #### Use case # The main use case for this program is for generating fake text. Our program takes an input of some text, and outputs a wall of fake text with similar sentiment to the input text. # User input: A few paragraphs of text such as poems from various authors. # Output: Computer generated fake text that mimics the style of the input text. # # This text generator program is meant for fun. With a more advanced program, a person may be able to generate greeting cards or fortune cookies based on larger or smaller text inputs. These examples are implications for future work. For class purposes, our program may not be advanced enough to generate meaningful text in this manner. #
docs/.ipynb_checkpoints/procedural-specs-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="T7qJyb8ypLhA" # # Some Simple Corpus Statistics # # Let's Count the words in <NAME>. The book is part of nltk. # # If you get an error rum "nltk.downöload()" once and select "everything in the book". # + colab={"base_uri": "https://localhost:8080/"} id="RWoUMFerpLhF" outputId="742678d1-f520-48d9-896c-e5973d2cf571" import nltk nltk.download('gutenberg') words = nltk.corpus.gutenberg.words('melville-moby_dick.txt') # + colab={"base_uri": "https://localhost:8080/"} id="sET7IUbTpLhG" outputId="a1e0ecc6-a41e-4aad-b4ca-20713ffa5cf6" len(words) # + colab={"base_uri": "https://localhost:8080/"} id="4eEVL1AHpLhH" outputId="42dae9a9-af02-48ab-e718-ff0b374fb30c" words[:50] # + [markdown] id="phceL3l-pLhH" # How many types do we have? How many singletons (hapax leomena)? What is the frequency of the most frequent word? # # To answer these questions we use the FreqDist class, an extension of the Counter class. # + colab={"base_uri": "https://localhost:8080/"} id="rueIX-r0pLhI" outputId="36d4f7e4-09c0-44bd-d0e8-3c64ea9406ed" word_dist_mb = nltk.FreqDist(words) print('Tokens:',word_dist_mb.N()) print('Types:',word_dist_mb.B()) # identical to len(word_dist_mb)) mft = word_dist_mb.max() print('Hapax Legomena:',len(word_dist_mb.hapaxes())) print('Most frequent token:',word_dist_mb.max() ,'(',word_dist_mb[mft],')') print('Most common tokens:',word_dist_mb.most_common(10)) # + colab={"base_uri": "https://localhost:8080/", "height": 291} id="GacUpuespLhI" outputId="532de05d-c3d1-456f-a800-7da91ce80e5a" word_dist_mb.plot(20,cumulative=False) # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="a7tmtZ9SpLhJ" outputId="ba8ed87d-8060-47b4-ff96-2a79c90cb2f6" # %matplotlib inline import matplotlib.pyplot as plt wfreq_list = [(w,word_dist_mb[w]) for w in word_dist_mb] wfreq_list = sorted(wfreq_list,key = lambda x:x[1],reverse = True) plt.plot([f for w,f in wfreq_list], 'r') # + colab={"base_uri": "https://localhost:8080/", "height": 269} id="6NO0mSsHpLhK" outputId="bb0dbe5f-f297-45b7-b0c0-9d889639188a" plt.plot([f for w,f in wfreq_list], 'r') plt.yscale('log') plt.xscale('log') plt.show() # + [markdown] id="jZ7bYPM7pLhL" # # Same for Brown Corpus # + colab={"base_uri": "https://localhost:8080/"} id="FCfXELWupLhL" outputId="b15a106f-5f63-4987-8d9e-88cfac411f6b" from nltk.corpus import brown nltk.download('brown') word_dist_brown = nltk.FreqDist(brown.words()) print('Tokens:',word_dist_brown.N()) print('Types:',word_dist_brown.B()) # identical to len(word_dist_mb)) mft = word_dist_brown.max() print('Hapax Legomena:',len(word_dist_brown.hapaxes())) print('Most frequent token:',word_dist_brown.max() ,'(',word_dist_brown[mft],')') print('Most common tokens:',word_dist_brown.most_common(10)) # + colab={"base_uri": "https://localhost:8080/", "height": 274} id="Wi_UfSynpLhL" outputId="7ce61d0e-9718-4064-8ba2-76f54758cf2f" wfreq_list = [(w,word_dist_brown[w]) for w in word_dist_brown] wfreq_list = sorted(wfreq_list,key = lambda x:x[1],reverse = True) plt.plot([f for w,f in wfreq_list], 'r') plt.yscale('log') plt.xscale('log') plt.show()
Zipf/Zipf.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import folium from folium import plugins m = folium.Map([30, 0], zoom_start=3) wind_locations = [ [59.35560, -31.992190], [55.178870, -42.89062], [47.754100, -43.94531], [38.272690, -37.96875], [27.059130, -41.13281], [16.299050, -36.56250], [8.4071700, -30.23437], [1.0546300, -22.50000], [-8.754790, -18.28125], [-21.61658, -20.03906], [-31.35364, -24.25781], [-39.90974, -30.93750], [-43.83453, -41.13281], [-47.75410, -49.92187], [-50.95843, -54.14062], [-55.97380, -56.60156], ] wind_line = folium.PolyLine(wind_locations, weight=15, color="#8EE9FF").add_to(m) attr = {"fill": "#007DEF", "font-weight": "bold", "font-size": "24"} plugins.PolyLineTextPath( wind_line, ") ", repeat=True, offset=7, attributes=attr ).add_to(m) danger_line = folium.PolyLine( [[-40.311, -31.952], [-12.086, -18.727]], weight=10, color="orange", opacity=0.8 ).add_to(m) attr = {"fill": "red"} plugins.PolyLineTextPath( danger_line, "\u25BA", repeat=True, offset=6, attributes=attr ).add_to(m) plane_line = folium.PolyLine( [[-49.38237, -37.26562], [-1.75754, -14.41406], [51.61802, -23.20312]], weight=1, color="black", ).add_to(m) attr = {"font-weight": "bold", "font-size": "24"} plugins.PolyLineTextPath( plane_line, "\u2708 ", repeat=True, offset=8, attributes=attr ).add_to(m) line_to_new_delhi = folium.PolyLine( [ [46.67959447, 3.33984375], [46.5588603, 29.53125], [42.29356419, 51.328125], [35.74651226, 68.5546875], [28.65203063, 76.81640625], ] ).add_to(m) line_to_hanoi = folium.PolyLine( [ [28.76765911, 77.60742188], [27.83907609, 88.72558594], [25.68113734, 97.3828125], [21.24842224, 105.77636719], ] ).add_to(m) plugins.PolyLineTextPath(line_to_new_delhi, "To New Delhi", offset=-5).add_to(m) plugins.PolyLineTextPath(line_to_hanoi, "To Hanoi", offset=-5).add_to(m) m # + m = folium.Map() folium.plugins.AntPath( locations=wind_locations, reverse="True", dash_array=[20, 30] ).add_to(m) m.fit_bounds(m.get_bounds()) m
prototype/examples/PolyLineTextPath_AntPath.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="qmnJe06SfVMN" colab_type="text" # # Install transformers 2.8.0 # + id="_rtNjBbdY7RM" colab_type="code" outputId="44c7073e-71d1-4d0a-ff46-3790a89ca976" executionInfo={"status": "ok", "timestamp": 1588684974767, "user_tz": 240, "elapsed": 4028, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08772860912161050283"}} colab={"base_uri": "https://localhost:8080/", "height": 435} # !pip install transformers==2.8.0 # + [markdown] id="M_moWpok-qkX" colab_type="text" # # Set working directory to the directory containing `download_glue_data.py` and `run_glue.py` # + id="l0VFWNoOAfT-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="55d6c143-6320-4011-d9de-ffdbd002eb49" executionInfo={"status": "ok", "timestamp": 1588685129825, "user_tz": 240, "elapsed": 667, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08772860912161050283"}} # cd ~/../content/ # + id="JU2wFP6SbsZf" colab_type="code" colab={} import os # + id="LON2QVZObFug" colab_type="code" colab={} WORK_DIR = os.path.join('drive', 'My Drive', 'Colab Notebooks', 'NLU') # + id="2mWKKk1x86hQ" colab_type="code" outputId="a99ad5f5-0f00-49e6-cb9a-8bb576b12766" executionInfo={"status": "ok", "timestamp": 1588685182625, "user_tz": 240, "elapsed": 547, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08772860912161050283"}} colab={"base_uri": "https://localhost:8080/", "height": 34} os.path.exists(WORK_DIR) # + id="cU0UdJuH9F2Y" colab_type="code" outputId="3ed5d7c4-d97e-43f8-8344-18e9d3a90163" executionInfo={"status": "ok", "timestamp": 1588685185918, "user_tz": 240, "elapsed": 639, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08772860912161050283"}} colab={"base_uri": "https://localhost:8080/", "height": 34} pycharm={"name": "#%%\n"} # cd $WORK_DIR # + [markdown] id="INV4cGxyf2f5" colab_type="text" pycharm={"name": "#%% md\n"} # # Download GLUE data # + id="PWYcFKd9f6Fb" colab_type="code" colab={} GLUE_DIR="data/glue" # + id="_AJp5KgQglVh" colab_type="code" outputId="94b2d8c7-829d-49f0-f785-e4f37ff71532" executionInfo={"status": "ok", "timestamp": 1588685195281, "user_tz": 240, "elapsed": 2119, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08772860912161050283"}} colab={"base_uri": "https://localhost:8080/", "height": 34} # !echo $GLUE_DIR # + id="24BX_hKwFhdU" colab_type="code" outputId="94854550-8b27-4f47-a3c1-9f67a64e35b8" executionInfo={"status": "ok", "timestamp": 1588685197205, "user_tz": 240, "elapsed": 2539, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08772860912161050283"}} colab={"base_uri": "https://localhost:8080/", "height": 190} # !python download_glue_data.py --help # + id="rAqf1Fe6cZvI" colab_type="code" outputId="358e859d-8eed-4497-b0be-52ad353c47fa" executionInfo={"status": "ok", "timestamp": 1588685199241, "user_tz": 240, "elapsed": 2153, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08772860912161050283"}} colab={"base_uri": "https://localhost:8080/", "height": 51} pycharm={"name": "#%%\n"} # !python download_glue_data.py --data_dir $GLUE_DIR --tasks RTE # + [markdown] id="6aqkcWzfgFNI" colab_type="text" pycharm={"name": "#%% md\n"} # # Compute ELECTRA score on RTE task # + id="LFhfVflFdaVB" colab_type="code" colab={} TASK_NAME="RTE" # + id="q_1EtLRQ8-WF" colab_type="code" outputId="ee181acf-24b6-4d3f-e311-aa9a5d63d744" executionInfo={"status": "ok", "timestamp": 1588685205871, "user_tz": 240, "elapsed": 2434, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08772860912161050283"}} colab={"base_uri": "https://localhost:8080/", "height": 34} # !echo $TASK_NAME # + id="SPpl0f8wAkmf" colab_type="code" outputId="adbd92a9-8fad-4191-eddd-549d59e7fbfb" executionInfo={"status": "ok", "timestamp": 1588685210014, "user_tz": 240, "elapsed": 5189, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08772860912161050283"}} colab={"base_uri": "https://localhost:8080/", "height": 1000} # !python run_glue.py --help # + pycharm={"name": "#%%\n"} # !python run_glue.py \ # --model_type electra \ # --model_name_or_path google/electra-base-discriminator \ # --task_name $TASK_NAME \ # --do_train \ # --do_eval \ # --data_dir $GLUE_DIR/$TASK_NAME \ # --max_seq_length 128 \ # --per_gpu_eval_batch_size=64 \ # --per_gpu_train_batch_size=64 \ # --learning_rate 2e-5 \ # --num_train_epochs 3 \ # --output_dir ../${TASK_NAME}_run \ # --overwrite_output_dir # + id="HnxryCpnczcm" colab_type="code" outputId="8c25a506-ab96-45d6-a196-93ab49395dea" executionInfo={"status": "ok", "timestamp": 1588685577870, "user_tz": 240, "elapsed": 364336, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08772860912161050283"}} colab={"base_uri": "https://localhost:8080/", "height": 1000} # + id="_2xmVAZiqiCK" colab_type="code" colab={}
notebooks_GLUE/notebooks_electra/electra_rte.ipynb
# ##### Copyright 2021 Google LLC. # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # overlapping_intervals_sample_sat # <table align="left"> # <td> # <a href="https://colab.research.google.com/github/google/or-tools/blob/master/examples/notebook/sat/overlapping_intervals_sample_sat.ipynb"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/colab_32px.png"/>Run in Google Colab</a> # </td> # <td> # <a href="https://github.com/google/or-tools/blob/master/ortools/sat/samples/overlapping_intervals_sample_sat.py"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/github_32px.png"/>View source on GitHub</a> # </td> # </table> # First, you must install [ortools](https://pypi.org/project/ortools/) package in this colab. # !pip install ortools # + # #!/usr/bin/env python3 # Copyright 2010-2021 Google LLC # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Code sample to demonstrates how to detect if two intervals overlap.""" from ortools.sat.python import cp_model class VarArraySolutionPrinter(cp_model.CpSolverSolutionCallback): """Print intermediate solutions.""" def __init__(self, variables): cp_model.CpSolverSolutionCallback.__init__(self) self.__variables = variables self.__solution_count = 0 def on_solution_callback(self): self.__solution_count += 1 for v in self.__variables: print('%s=%i' % (v, self.Value(v)), end=' ') print() def solution_count(self): return self.__solution_count def OverlappingIntervals(): """Create the overlapping Boolean variables and enumerate all states.""" model = cp_model.CpModel() horizon = 7 # First interval. start_var_a = model.NewIntVar(0, horizon, 'start_a') duration_a = 3 end_var_a = model.NewIntVar(0, horizon, 'end_a') unused_interval_var_a = model.NewIntervalVar(start_var_a, duration_a, end_var_a, 'interval_a') # Second interval. start_var_b = model.NewIntVar(0, horizon, 'start_b') duration_b = 2 end_var_b = model.NewIntVar(0, horizon, 'end_b') unused_interval_var_b = model.NewIntervalVar(start_var_b, duration_b, end_var_b, 'interval_b') # a_after_b Boolean variable. a_after_b = model.NewBoolVar('a_after_b') model.Add(start_var_a >= end_var_b).OnlyEnforceIf(a_after_b) model.Add(start_var_a < end_var_b).OnlyEnforceIf(a_after_b.Not()) # b_after_a Boolean variable. b_after_a = model.NewBoolVar('b_after_a') model.Add(start_var_b >= end_var_a).OnlyEnforceIf(b_after_a) model.Add(start_var_b < end_var_a).OnlyEnforceIf(b_after_a.Not()) # Result Boolean variable. a_overlaps_b = model.NewBoolVar('a_overlaps_b') # Option a: using only clauses model.AddBoolOr([a_after_b, b_after_a, a_overlaps_b]) model.AddImplication(a_after_b, a_overlaps_b.Not()) model.AddImplication(b_after_a, a_overlaps_b.Not()) # Option b: using a sum() == 1. # model.Add(a_after_b + b_after_a + a_overlaps_b == 1) # Search for start values in increasing order for the two intervals. model.AddDecisionStrategy([start_var_a, start_var_b], cp_model.CHOOSE_FIRST, cp_model.SELECT_MIN_VALUE) # Create a solver and solve with a fixed search. solver = cp_model.CpSolver() # Force the solver to follow the decision strategy exactly. solver.parameters.search_branching = cp_model.FIXED_SEARCH # Enumerate all solutions. solver.parameters.enumerate_all_solutions = True # Search and print out all solutions. solution_printer = VarArraySolutionPrinter( [start_var_a, start_var_b, a_overlaps_b]) solver.Solve(model, solution_printer) OverlappingIntervals()
examples/notebook/sat/overlapping_intervals_sample_sat.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.8 64-bit (''base'': conda)' # language: python # name: python3 # --- import pandas as pd import re import matplotlib.pyplot as plt # + def read(fn): lines = open(fn).read().lower() lines = [ x.strip() for x in lines.split("\n") if len(x)>2 ] return lines read('Data/AzPositive.txt')[:10] # - feedbacks = { 'AzPositive.txt' : 'What students like about Azure', 'AzNegative.txt' : 'What students like least', 'AzEduhub.txt' : "What feedback students shall give to Azure", 'AzComparison.txt': 'How Azure Compares with other Cloud providers', 'AzPMAdvice.txt' : 'What advice students would give to Azure PMs' } data = { k : read(f'Data/{k}') for k in feedbacks.keys() } from wordcloud import WordCloud,STOPWORDS wc = WordCloud(colormap='winter_r',background_color='white',width=800,height=400,stopwords=STOPWORDS|set(['azure','gcp','aws'])) fig,ax = plt.subplots(len(feedbacks),1,figsize=(10,30)) for i,(t,k) in enumerate(data.items()): ax[i].imshow(wc.generate_from_text(' '.join(k)).to_array()) ax[i].axis('off') ax[i].title.set_text(feedbacks[t])
workshop/sentiment_analysis.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + dataset = 'Home_and_Kitchen' reviews_filepath = './data/raw_data/reviews_{0}_5.json.gz'.format(dataset) metadata_filepath = './data/metadata/meta_{0}.json.gz'.format(dataset) # + hide_input=true # # %load modules/scripts/Load\ datasets.py # In[6]: # reviews_filepath = '../../data/raw_data/reviews_Musical_Instruments_5.json.gz' # metadata_filepath = '../../data/metadata/meta_Musical_Instruments.json.gz' # In[7]: all_reviews = (spark .read .json(reviews_filepath)) all_metadata = (spark .read .json(metadata_filepath)) # + hide_input=true # # %load modules/scripts/Summarize\ reviews.py # # Summarize the reviews # In[36]: # all_reviews = (spark # .read # .json('../../data/raw_data/reviews_Musical_Instruments_5.json.gz')) # In[37]: from pyspark.sql.functions import col, expr, udf, trim from pyspark.sql.types import IntegerType import re remove_punctuation = udf(lambda line: re.sub('[^A-Za-z\s]', '', line)) make_binary = udf(lambda rating: 0 if rating in [1, 2] else 1, IntegerType()) reviews = (all_reviews .na.fill({ 'reviewerName': 'Unknown' }) .filter(col('overall').isin([1, 2, 5])) .withColumn('label', make_binary(col('overall'))) .select(col('label').cast('int'), remove_punctuation('summary').alias('summary')) .filter(trim(col('summary')) != '')) # ## Splitting data and balancing skewness # In[38]: train, test = reviews.randomSplit([.8, .2], seed=5436L) # In[39]: def multiply_dataset(dataset, n): return dataset if n <= 1 else dataset.union(multiply_dataset(dataset, n - 1)) # In[40]: reviews_good = train.filter('label == 1') reviews_bad = train.filter('label == 0') reviews_bad_multiplied = multiply_dataset(reviews_bad, reviews_good.count() / reviews_bad.count()) train_reviews = reviews_bad_multiplied.union(reviews_good) # ## Benchmark: predict by distribution # In[41]: accuracy = reviews_good.count() / float(train_reviews.count()) print('Always predicting 5 stars accuracy: {0}'.format(accuracy)) # ## Learning pipeline # In[42]: from pyspark.ml.feature import Tokenizer, HashingTF, IDF, StopWordsRemover from pyspark.ml.pipeline import Pipeline from pyspark.ml.classification import LogisticRegression tokenizer = Tokenizer(inputCol='summary', outputCol='words') pipeline = Pipeline(stages=[ tokenizer, StopWordsRemover(inputCol='words', outputCol='filtered_words'), HashingTF(inputCol='filtered_words', outputCol='rawFeatures', numFeatures=120000), IDF(inputCol='rawFeatures', outputCol='features'), LogisticRegression(regParam=.3, elasticNetParam=.01) ]) # ## Testing the model accuracy # In[43]: model = pipeline.fit(train_reviews) # In[44]: from pyspark.ml.evaluation import BinaryClassificationEvaluator prediction = model.transform(test) BinaryClassificationEvaluator().evaluate(prediction) # ## Using model to extract the most predictive words # In[45]: from pyspark.sql.functions import explode import pyspark.sql.functions as F from pyspark.sql.types import FloatType words = (tokenizer .transform(reviews) .select(explode(col('words')).alias('summary'))) predictors = (model .transform(words) .select(col('summary').alias('word'), 'probability')) first = udf(lambda x: x[0].item(), FloatType()) second = udf(lambda x: x[1].item(), FloatType()) predictive_words = (predictors .select( 'word', second(col('probability')).alias('positive'), first(col('probability')).alias('negative')) .groupBy('word') .agg( F.max('positive').alias('positive'), F.max('negative').alias('negative'))) positive_predictive_words = (predictive_words .select(col('word').alias('positive_word'), col('positive').alias('pos_prob')) .sort('pos_prob', ascending=False)) negative_predictive_words = (predictive_words .select(col('word').alias('negative_word'), col('negative').alias('neg_prob')) .sort('neg_prob', ascending=False)) # In[46]: import pandas as pd pd.set_option('display.max_rows', 100) pd.concat( [ positive_predictive_words.limit(100).toPandas(), negative_predictive_words.limit(100).toPandas() ], axis=1) # + hide_input=true # # %load modules/scripts/User\ trustedness.py # # User trustedness # ## Loading data # In[9]: # all_reviews = (spark # .read # .json('../../data/raw_data/reviews_Musical_Instruments_5.json.gz')) # ## Extracting ranking components # In[10]: reviews = all_reviews reviews_per_reviewer = reviews.groupBy('reviewerID').count() # In[31]: from pyspark.sql.functions import col, udf, avg from pyspark.sql.types import DoubleType helpfulness_ratio = udf( lambda (useful, out_of): useful / float(out_of + 1), returnType=DoubleType()) helpfulness = (reviews .select('reviewerID', helpfulness_ratio(col('helpful')).alias('helpfulness')) .groupBy('reviewerID') .agg(avg(col('helpfulness')).alias('helpfulness'))) # ## Computing rankings & visualizing the good and bad reviews from the most trusted users # In[32]: reviewers_trustedness = (helpfulness .join(reviews_per_reviewer, 'reviewerID') .select('reviewerID', (col('helpfulness') * col('count')).alias('trustedness'))) # In[ ]: reviewers_trustedness.limit(10).toPandas() # + hide_input=true # # %load modules/scripts/Recommender\ system.py # ## Loading and indexing the data for training # In[2]: # all_reviews = (spark # .read # .json('../../data/raw_data/reviews_Musical_Instruments_5.json.gz')) # In[4]: from pyspark.sql.functions import col, expr, udf, trim from pyspark.sql.types import IntegerType import re remove_punctuation = udf(lambda line: re.sub('[^A-Za-z\s]', '', line)) make_binary = udf(lambda rating: 0 if rating in [1, 2] else 1, IntegerType()) reviews = all_reviews.withColumn('label', make_binary(col('overall'))) # In[5]: from pyspark.ml.feature import StringIndexer from pyspark.ml import Pipeline indexing_pipeline = Pipeline(stages=[ StringIndexer(inputCol="reviewerID", outputCol="reviewerIndex"), StringIndexer(inputCol="asin", outputCol="asinIndex") ]) indexer = indexing_pipeline.fit(reviews) indexed_reviews = indexer.transform(reviews) # In[6]: train, _, test = [ chunk.cache() for chunk in indexed_reviews.randomSplit([.6, .2, .2], seed=1800009193L) ] # ## Balancing data # In[7]: def multiply_dataset(dataset, n): return dataset if n <= 1 else dataset.union(multiply_dataset(dataset, n - 1)) reviews_good = train.filter('label == 1') reviews_bad = train.filter('label == 0') reviews_bad_multiplied = multiply_dataset(reviews_bad, reviews_good.count() / reviews_bad.count()) train_reviews = reviews_bad_multiplied.union(reviews_good) # ## Evaluator # In[8]: from pyspark.ml.evaluation import RegressionEvaluator evaluator = RegressionEvaluator( predictionCol='prediction', labelCol='label') # ## Benchmark: predict by distribution # In[9]: from pyspark.sql.functions import lit average_rating = (train_reviews .groupBy() .avg('label') .collect()[0][0]) average_rating_prediction = test.withColumn('prediction', lit(average_rating)) average_rating_evaluation = evaluator.evaluate(average_rating_prediction) print('The RMSE of always predicting {0} stars is {1}'.format(average_rating, average_rating_evaluation)) # ## Recommender system # In[10]: from pyspark.ml.recommendation import ALS als = ALS( maxIter=15, regParam=0.1, userCol='reviewerIndex', itemCol='asinIndex', ratingCol='label', rank=24, seed=1800009193L) # ## Evaluating the model # In[14]: recommender_system = als.fit(train_reviews) # In[15]: predictions = recommender_system.transform(test) # In[16]: evaluation = evaluator.evaluate(predictions.filter(col('prediction') != float('nan'))) print('The RMSE of the recommender system is {0}'.format(evaluation)) # - # ## Select a product # + reviewed_products = (all_metadata .join(all_reviews, 'asin') .filter(''' categories is not null and related is not null''')) top_reviewed_products = (reviewed_products .groupBy('asin') .count() .sort('count', ascending=False) .limit(10)) top_reviewed_products.toPandas() # + hide_input=true # # %load modules/scripts/WebDashboard.py # # %load "/Users/Achilles/Documents/Tech/Scala_Spark/HackOnData/Final Project/Build a WebInterface/screen.py" # #!/usr/bin/env python from lxml import html import json import requests import json,re from dateutil import parser as dateparser from time import sleep def ParseReviews(asin): # Added Retrying for i in range(5): try: #This script has only been tested with Amazon.com amazon_url = 'http://www.amazon.com/dp/'+asin # Add some recent user agent to prevent amazon from blocking the request # Find some chrome user agent strings here https://udger.com/resources/ua-list/browser-detail?browser=Chrome headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36'} page = requests.get(amazon_url,headers = headers) page_response = page.text parser = html.fromstring(page_response) XPATH_AGGREGATE = '//span[@id="acrCustomerReviewText"]' XPATH_REVIEW_SECTION_1 = '//div[contains(@id,"reviews-summary")]' XPATH_REVIEW_SECTION_2 = '//div[@data-hook="review"]' XPATH_AGGREGATE_RATING = '//table[@id="histogramTable"]//tr' XPATH_PRODUCT_NAME = '//h1//span[@id="productTitle"]//text()' XPATH_PRODUCT_PRICE = '//span[@id="priceblock_ourprice"]/text()' raw_product_price = parser.xpath(XPATH_PRODUCT_PRICE) product_price = ''.join(raw_product_price).replace(',','') raw_product_name = parser.xpath(XPATH_PRODUCT_NAME) product_name = ''.join(raw_product_name).strip() total_ratings = parser.xpath(XPATH_AGGREGATE_RATING) reviews = parser.xpath(XPATH_REVIEW_SECTION_1) if not reviews: reviews = parser.xpath(XPATH_REVIEW_SECTION_2) ratings_dict = {} reviews_list = [] if not reviews: raise ValueError('unable to find reviews in page') #grabing the rating section in product page for ratings in total_ratings: extracted_rating = ratings.xpath('./td//a//text()') if extracted_rating: rating_key = extracted_rating[0] raw_raing_value = extracted_rating[1] rating_value = raw_raing_value if rating_key: ratings_dict.update({rating_key:rating_value}) #Parsing individual reviews for review in reviews: XPATH_RATING = './/i[@data-hook="review-star-rating"]//text()' XPATH_REVIEW_HEADER = './/a[@data-hook="review-title"]//text()' XPATH_REVIEW_POSTED_DATE = './/a[contains(@href,"/profile/")]/parent::span/following-sibling::span/text()' XPATH_REVIEW_TEXT_1 = './/div[@data-hook="review-collapsed"]//text()' XPATH_REVIEW_TEXT_2 = './/div//span[@data-action="columnbalancing-showfullreview"]/@data-columnbalancing-showfullreview' XPATH_REVIEW_COMMENTS = './/span[@data-hook="review-comment"]//text()' XPATH_AUTHOR = './/a[contains(@href,"/profile/")]/parent::span//text()' XPATH_REVIEW_TEXT_3 = './/div[contains(@id,"dpReviews")]/div/text()' raw_review_author = review.xpath(XPATH_AUTHOR) raw_review_rating = review.xpath(XPATH_RATING) raw_review_header = review.xpath(XPATH_REVIEW_HEADER) raw_review_posted_date = review.xpath(XPATH_REVIEW_POSTED_DATE) raw_review_text1 = review.xpath(XPATH_REVIEW_TEXT_1) raw_review_text2 = review.xpath(XPATH_REVIEW_TEXT_2) raw_review_text3 = review.xpath(XPATH_REVIEW_TEXT_3) author = ' '.join(' '.join(raw_review_author).split()).strip('By') #cleaning data review_rating = ''.join(raw_review_rating).replace('out of 5 stars','') review_header = ' '.join(' '.join(raw_review_header).split()) review_posted_date = dateparser.parse(''.join(raw_review_posted_date)).strftime('%d %b %Y') review_text = ' '.join(' '.join(raw_review_text1).split()) #grabbing hidden comments if present if raw_review_text2: json_loaded_review_data = json.loads(raw_review_text2[0]) json_loaded_review_data_text = json_loaded_review_data['rest'] cleaned_json_loaded_review_data_text = re.sub('<.*?>','',json_loaded_review_data_text) full_review_text = review_text+cleaned_json_loaded_review_data_text else: full_review_text = review_text if not raw_review_text1: full_review_text = ' '.join(' '.join(raw_review_text3).split()) raw_review_comments = review.xpath(XPATH_REVIEW_COMMENTS) review_comments = ''.join(raw_review_comments) review_comments = re.sub('[A-Za-z]','',review_comments).strip() review_dict = { 'review_comment_count':review_comments, 'review_text':full_review_text, 'review_posted_date':review_posted_date, 'review_header':review_header, 'review_rating':review_rating, 'review_author':author } reviews_list.append(review_dict) data = { 'ratings':ratings_dict, # 'reviews':reviews_list, # 'url':amazon_url, # 'price':product_price, 'name':product_name } return data except ValueError: print ("Retrying to get the correct response") return {"error":"failed to process the page","asin":asin} def ReadAsin(AsinList): #Add your own ASINs here #AsinList = ['B01ETPUQ6E','B017HW9DEW'] extracted_data = [] for asin in AsinList: print ("Downloading and processing page http://www.amazon.com/dp/"+asin) extracted_data.append(ParseReviews(asin)) #f=open('data.json','w') #json.dump(extracted_data,f,indent=4) print(extracted_data) from IPython.core.display import display, HTML def displayProducts(prodlist): html_code = """ <table class="image"> """ # prodlist = ['B000068NW5','B0002CZV82','B0002E1NQ4','B0002GW3Y8','B0002M6B2M','B0002M72JS','B000KIRT74','B000L7MNUM','B000LFCXL8','B000WS1QC6'] for prod in prodlist: html_code = html_code+ """ <td><img src = "http://images.amazon.com/images/P/%s.01._PI_SCMZZZZZZZ_.jpg" style="float: left" id=%s onclick="itemselect(this)"</td> %s""" % (prod,prod,prod) html_code = html_code + """</table> <img id="myFinalImg" src="">""" javascriptcode = """ <script type="text/javascript"> function itemselect(selectedprod){ srcFile='http://images.amazon.com/images/P/'+selectedprod.id+'.01._PI_SCTZZZZZZZ_.jpg'; document.getElementById("myFinalImg").src = srcFile; IPython.notebook.kernel.execute("selected_product = '" + selectedprod.id + "'") } </script>""" display(HTML(html_code + javascriptcode)) #spark.read.json("data.json").show() #====================================================== displayProducts( [ row[0] for row in top_reviewed_products.select('asin').collect() ]) # - selected_product = 'B0009VELTQ' # ## Product negative sentences # + from pyspark.ml.feature import Tokenizer from pyspark.sql.functions import explode, col import pandas as pd pd.set_option('display.max_rows', 100) product_words_per_reviewer = ( Tokenizer(inputCol='reviewText', outputCol='words') .transform(all_reviews.filter(col('asin') == selected_product)) .select('reviewerID', 'words')) word_ranks = (product_words_per_reviewer .select(explode(col('words')).alias('word')) .distinct() .join(negative_predictive_words, col('word') == negative_predictive_words.negative_word) .select('word', 'neg_prob') .sort('neg_prob', ascending=False)) word_ranks.limit(30).toPandas() # - selected_negative_word = 'noisy' # ## Trusted users that used the word # + from pyspark.sql.functions import udf, lit from pyspark.sql.types import BooleanType is_elemen_of = udf(lambda word, words: word in words, BooleanType()) users_that_used_the_word = (product_words_per_reviewer .filter(is_elemen_of(lit(selected_negative_word), col('words'))) .select('reviewerID')) users_that_used_the_word.toPandas() # - # ## Suggested products in the same category # + from pyspark.sql.functions import col product_category = (reviewed_products .filter(col('asin') == selected_product) .select('categories') .take(1)[0][0][0][-1]) print('Product category: {0}'.format(product_category)) # + import pandas as pd pd.set_option('display.max_colwidth', -1) last_element = udf(lambda categories: categories[0][-1]) products_in_same_category = (reviewed_products .limit(100000) .filter(last_element(col('categories')) == product_category) .select('asin', 'title') .distinct()) products_in_same_category.limit(10).toPandas() # + from pyspark.sql.functions import avg indexed_products = indexer.transform( products_in_same_category.crossJoin(users_that_used_the_word)) alternative_products = (recommender_system .transform(indexed_products) .groupBy('asin') .agg(avg(col('prediction')).alias('prediction')) .sort('prediction', ascending=False) .filter(col('asin') != selected_product) .limit(10)) alternative_products.toPandas() # - displayProducts([ asin[0] for asin in alternative_products.select('asin').collect() ]) reviews = (all_reviews .filter(col('overall').isin([1, 2, 5])) .withColumn('label', make_binary(col('overall'))) .select(col('label').cast('int'), remove_punctuation('summary').alias('summary')) .filter(trim(col('summary')) != '')) def most_contributing_summaries(product, total_reviews, ranking_model): reviews = total_reviews.filter(col('asin') == product).select('summary', 'overall') udf_max = udf(lambda p: max(p.tolist()), FloatType()) summary_ranks = (ranking_model .transform(reviews) .select( 'summary', second(col('probability')).alias('pos_prob'))) pos_summaries = { row[0]: row[1] for row in summary_ranks.sort('pos_prob', ascending=False).take(5) } neg_summaries = { row[0]: row[1] for row in summary_ranks.sort('pos_prob').take(5) } return pos_summaries, neg_summaries # + from wordcloud import WordCloud import matplotlib.pyplot as plt def present_product(product, total_reviews, ranking_model): pos_summaries, neg_summaries = most_contributing_summaries(product, total_reviews, ranking_model) pos_wordcloud = WordCloud(background_color='white', max_words=20).fit_words(pos_summaries) neg_wordcloud = WordCloud(background_color='white', max_words=20).fit_words(neg_summaries) fig = plt.figure(figsize=(20, 20)) ax = fig.add_subplot(1,2,1) ax.set_title('Positive summaries') ax.imshow(pos_wordcloud, interpolation='bilinear') ax.axis('off') ax = fig.add_subplot(1,2,2) ax.set_title('Negative summaries') ax.imshow(neg_wordcloud, interpolation='bilinear') ax.axis('off') plt.show() # - present_product('B00005KIR0', all_reviews, model) # ## References # # - <NAME>, <NAME>. *Modeling the visual evolution of fashion trends with one-class collaborative filtering.* WWW, 2016 # - <NAME>, <NAME>, <NAME>, <NAME>. *Image-based recommendations on styles and substitutes.* SIGIR, 2015 #
Trusted Users Insight System.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # PS88 Replication (Ritter and Conrad) # # A Challenge of working with the Ritter and Conrad paper is that there is tons of data: since they look at individual provinces and individual days, there are nearly 7 million observations. # # By default we can only have 1GB in memory in datahub, which is less than the full data file. So we will need to play some tricks. # # A lot of what we will do here is explore some ways to look at smaller slices and aggregations of the data, hopefully learning some things along the way. import pandas as pd import seaborn as sns # # Part 1 Exploration by year and country # First we will read in a dataframe that has all 6.8 million observations, but only contains the year the observation correspond to and the two main variables: the count of dissent events and the count of repression events. # # Note there are lots of obesrvations for each year, each corresponding to different regions/day, but I left that out so it doesn't overwhelm the memory of the notebook. ydr = pd.read_csv("https://berkeley.box.com/shared/static/2lzex61lm8irfqfwyffqb8vgalo7uj2p.csv") ydr # Let's use `value_counts()` to make a frequency table which tells us how often there is dissent in a region-day observation. ydr['dissentcount'].value_counts() # As we can see, in the vast majority of cases they do not detect any dissent in a given region/day. # # **Question 1.1. Use `value_counts` to make a frequency table of counts of repression in each region/day** # How often is there dissent and repression in a region/day? To see this bivariate relationship with out making a huge table, let's just check how often there is at least one event of each kind: pd.crosstab(ydr['dissentcount']>0, ydr['represscount']>0) # Another way to see this is as a proportion of all observations: pd.crosstab(ydr['dissentcount']>0, ydr['represscount']>0)/6809575 # So, 99.6% of observations have no repression or dissent, and less than .01% have both. # We can compute the average repressive events in a region/day using the `groupby` function. The syntax is a bit goofy. The line of code `df.groupby(groupvar, as_index=False)['summaryvar'].summarystat()` gives us the `summarystat` of `summaryvar` by `groupvar`. (The `as_index=False` puts the output as a dataframe, which will be useful for what we do next). # # For example, the following gives the mean of `represscount` by `year`. repressyr = ydr.groupby('year', as_index=False)['represscount'].mean() repressyr # One thing we can see here is that there is no data beyond 2009 though that paper says they have data from 1990-2012. (I'm pretty sure this isn't due to a mistake in merging on our part.) Not a huge deal though. # **Question 1.2 Create a dataframe called `dissentyr` which computes the average of `dissentcount` by year.** # Now we can put these together in one data frame. dissentyr['represscount'] = repressyr['represscount'] # We can plot the trends of repression and dissent over time with `sns.lineplot`. sns.lineplot(x="year", y="dissentcount", data=dissentyr) sns.lineplot(x="year", y="represscount", data=dissentyr) # It looks like these tend to go up and down together. # **Question 1.3. Use `sns.scatterplot` to make a graph with yearly average dissent on the x axis and yearly average repression on the y axis.** # At least in aggregate, there is a strong relationship between these variables. # To limit how much memory we are using, we can use the %reset command to delete everything stored in memory. This will give you a warning message, type y. # After doing this we need to reload the libraries we are using import pandas as pd import seaborn as sns # Now let's load in a dataframe that includes the country variable (called `adm0_name`) and our key variables. cdr = pd.read_csv("https://berkeley.box.com/shared/static/gebfvyi6zxghj766hj72aeu5tdk7q291.csv") cdr # **Question 1.4 Use the `groupby` function to compute the average levels of repression and dissent by country, and then put these together in a single data frame.** # **Question 1.5. Use `sns.regplot` to make a scatter plot with average dissent by country on the x axis and average repression by country on the y axis, with a best fit line. Describe what you find.** # ## Part 2: Replication(ish) # # Now let's move on to replicating the main table. Unfortunately there are too many observations to do this easily on datahub. # # A natural way to deal with this, particularly with a large data set, would be to take a random sample of all observations. With 6.8 million observations, even a 1% sample would leve us with 68,000 or so observations, which is a lot! # # However, recall the first thing we did with the full dataset was count how many instances there were with `dissentcount > 0` and `represscount > 0`. There were only about 4600 cases with repression and dissent, and so with a 1% sample we will be down to about 46 cases of both. It will be hard to make strong inferences here, (particularly since, as we will see, the relationship between rainfall and dissent is not very strong). # # Another alternative, which isn't perfect, is to keep all cases of some dissent, and then a random sample of the cases with no repression. As we will see, this will lead to pretty similar results to what the paper has. # # The data set we load up after our libraries contains all the cases with dissent, and 100,000 randomly chosen ones with no dissent. This gets the data frame down to a very manageable size import pandas as pd import seaborn as sns from linearmodels.iv import IV2SLS import statsmodels.formula.api as smf rc_samp = pd.read_csv("rc_samp0.csv") # Since we are working with a sample, the results won't exactly match the paper. Here is a version of columns 1-4 of table 1 from the paper on this sample. Like in the paper, each cell gives the coefficient on the variable in question, and the standard error in parenthesis. The key thing that you'll want to check matches are the coefficients on the different variables. # | | OLS (no instrument) | IV Regression | IV (Autocracies) | IV (Democracies) | # | ----------------- | ------------------------------------------------- | -------------- | ---------------- | ---------------- | # | | *Second Stage: The Effect of Dissent on Repression* | # | Mobilized Dissent | 0.2336 (.001) | \-0.0708 (.071) | .0086 (.041) | .2473 (.033) | # | Urbanization | 0.0324 (.010) | \-0.0738 (.027) | \-.0315 (.019) | \-.0062 (.015) | # | Constant | 0.0012 (.001) | 0.059 (.014) | .0472 (.211) | \-.0092 (.006) | # | | *First Stage: Instrumenting Mobilized Dissent* | # | Rainfall (ln) | | \-0.0083 (.001) | \-.0176 (.001) | .0248 (.003) | # | Annual Rainfall | | 1.3367 (.208) | 1.7708 (.230) | \-.9161 (.503) | # | Urbanization | | \-.3551 (.028) | \-.4000 (.034) | \-.2628 (.044 | # | Constant | | .1876 (.002) | .1937 (.002) | .1687 (.003) | # | | | | | | # | N | 107009 | 105438 | 83814 | 21502 | # **Question 2.1. Replicate column 1 of this table, by using `smf.ols` to run a regression with the count of repressive events (`represscount`) as the dependent variable, and the count of dissent events (`dissentcount`) and urbanization (`urban_mean`) as independent variables.** # Now let's move on to the instrumental variables regression in column 2. We'll work our way up, first computing the first stage. # # **Question 2.2. Replicate the first stage of the regression in column 2, by using `smf.ols` to run a regression with the count of dissent events as the dependent variable, and the natural log of rainfall (`lograin`), the amount of rain as a percent of annual rainfall (`rainannualpct`), and urbanization as independent variables.** # **Question 2.3. Now replicate the second stage with `IV2SLS.from_formula`. Recall the general syntax here is `IV2SLS.from_formula("DV ~ 1 + CONTROLS + [IV ~ INSTRUMENTS]", data=df").fit().summary` where `DV` is the dependent variable, `CONTROLS` are the control variables that go in both the first and second stage, `IV` is the key independent variable, and `INSTRUMENTS` are the instruments. As a hint, there is one variable to use for `CONTROLS` and two variables to use for `INSTRUMENTS`, so enter those as `INSTRUMENT1 + INSTRUMENT2`. You might get some red "warnings" about missing data, which you can ignore.** # **Question 2.4 [OPTIONAL]. Now replicate columns 3 and 4 by running the same first and second stage regressions, but with a sample of observations where the democracy variable is less than 0 (`latent_democracy < 0`) and then greater than 0 (`latent_democracy > 0`)** # ## Part 3: In the year 2000 # # Another way to cut back on the size of the data file, which also can provide some different insights, is to restrict attention to a smaller time window. Here are the relevant variables for our replication just for the year 2000. This alone gets us over 300,000 observations. rc2k = pd.read_csv("https://berkeley.box.com/shared/static/2mta14zoopm547uqr3ear2pvokfwzeaz.csv") rc2k # **Question 3.1. Run the same "column 1" regression that you did in 2.1, but with the year 2000 data. (Note: you can peek below for a table which contains the results of such regressions for 1990-2009 to check that you are getting the right coefficient on `dissentcount`.)** # **Question 3.2. Now use `smf.ols` to run the first stage of the column 2 regression.** # Note the coefficients are much closer to zero than they were on the sample we used in part 2 (though not far from the coefficients in the full sample reported in the paper). To get a sense of the magnitude here and see the relationship between our two instruments, let's look at a scatterplot of `lograin` and `rainannualpct`. sns.scatterplot(x='lograin', y='rainannualpct', data=rc2k) # The range of range of `rainannualpct` is about 0 to .5 (it's rare to get half of the rain received in a year in one day!). Your coefficient on this should be about .025, so going from the least to most rain by this measure only leads to a predicted increase of .01 dissent events, which is pretty tiny. The prediction using `lograin` is about the same. Again there are no absolute rules here, but this likely violates the "strong first stage" requirement for doing IV. Still, we can do it and see what happens. # # **Question 3.3. Use `IV2SLS.from_formula` to run the "column 2" instrumental variables regression. (Again, you can peek at the table introduced below to check that you get the right coefficient on `dissentcount`.)** # Of course, picking out the year 2000 is a bit arbitrary. Using the wonders of loops, I ran all the OLS and instrument variables regressions for each of the year 1990 and 2009 and stored the coefficient on `dissentcount` in a table we'll import here: yearsum = pd.read_csv("https://berkeley.box.com/shared/static/67025yl413c7v52wz8ke1mlkkjl2o44z.csv") yearsum # **Question 3.4. Use two calls of `sns.lineplot` to plot the OLS estimate of the effect of dissent on repression and the IV ("column 2") estimate of this effect as a function of the year** # What you should get is a relatively smooth line corresponding to the OLS estimate, and a very volatile one corresponding to the IV estimates. The takeaway here is that even we we have data on every region of a continent, and data by day, the relationship between rainfall and dissent is not strong enough to reliably use as an instrument. The original paper pools across about 20 years, which is enough to find a reasonably strong "first stage" estimate, but the fact that we need nearly 7 million data points for this is a bit worrying.
replications/PS88-Replication-RCv2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="FaIBmnXCknPl" # About the Dataset: # # 1. id: unique id for a news article # 2. title: the title of a news article # 3. author: author of the news article # 4. text: the text of the article; could be incomplete # 5. label: a label that marks whether the news article is real or fake: # 1: Fake news # 0: real News # # # # # + [markdown] id="k399dHafvL5N" # Importing the Dependencies # + id="-fetC5yqkPVe" executionInfo={"status": "ok", "timestamp": 1647054735024, "user_tz": -330, "elapsed": 1523, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhWxVC1BHEFf3s80x7jPDw6W1DBHaV8fzmvh2ApvQ=s64", "userId": "17074048428058188405"}} import numpy as np import pandas as pd import re from nltk.corpus import stopwords from nltk.stem.porter import PorterStemmer from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn.metrics import accuracy_score # + colab={"base_uri": "https://localhost:8080/"} id="1AC1YpmGwIDw" executionInfo={"status": "ok", "timestamp": 1647054738358, "user_tz": -330, "elapsed": 813, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhWxVC1BHEFf3s80x7jPDw6W1DBHaV8fzmvh2ApvQ=s64", "userId": "17074048428058188405"}} outputId="de3fc582-cdb5-4116-b698-cca30289236e" import nltk nltk.download('stopwords') # + colab={"base_uri": "https://localhost:8080/"} id="dxIOt3DowpUR" executionInfo={"status": "ok", "timestamp": 1647054745460, "user_tz": -330, "elapsed": 980, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhWxVC1BHEFf3s80x7jPDw6W1DBHaV8fzmvh2ApvQ=s64", "userId": "17074048428058188405"}} outputId="546d9247-52ce-44b1-ecd4-f7b112d5c7fb" # printing the stopwords in English print(stopwords.words('english')) # + [markdown] id="NjeGd1CLw_6R" # Data Pre-processing # + id="nCGcpu_1wzLw" executionInfo={"status": "error", "timestamp": 1647055171038, "user_tz": -330, "elapsed": 896, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhWxVC1BHEFf3s80x7jPDw6W1DBHaV8fzmvh2ApvQ=s64", "userId": "17074048428058188405"}} outputId="281bd1f5-a7f7-48a5-dd0f-cb9aa9dc028e" colab={"base_uri": "https://localhost:8080/", "height": 241} # loading the dataset to a pandas DataFrame news_dataset = pd.read_csv('/content/train.csv') # + colab={"base_uri": "https://localhost:8080/"} id="aRgmbYSbxV4-" executionInfo={"status": "ok", "timestamp": 1613899195297, "user_tz": -330, "elapsed": 2799, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi_B0XWKs7bMey8MrAh8rdvGsYLyZHci46B2PrrQQ=s64", "userId": "13966379820454708749"}} outputId="82bafe4f-211d-47b8-f4ed-a61b77370bc0" news_dataset.shape # + colab={"base_uri": "https://localhost:8080/", "height": 198} id="jjJ1eB6RxZaS" executionInfo={"status": "ok", "timestamp": 1613899195301, "user_tz": -330, "elapsed": 2772, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi_B0XWKs7bMey8MrAh8rdvGsYLyZHci46B2PrrQQ=s64", "userId": "13966379820454708749"}} outputId="9bb0c756-30e5-4919-a8e0-a688127261eb" # print the first 5 rows of the dataframe news_dataset.head() # + colab={"base_uri": "https://localhost:8080/"} id="QYkDi4SwxlKi" executionInfo={"status": "ok", "timestamp": 1613899195306, "user_tz": -330, "elapsed": 2748, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi_B0XWKs7bMey8MrAh8rdvGsYLyZHci46B2PrrQQ=s64", "userId": "13966379820454708749"}} outputId="204c4b11-2d09-4c3e-ff1f-4ee9d3c5bc92" # counting the number of missing values in the dataset news_dataset.isnull().sum() # + id="Mc04lQrhx57m" # replacing the null values with empty string news_dataset = news_dataset.fillna('') # + id="H7TZgHszygxj" # merging the author name and news title news_dataset['content'] = news_dataset['author']+' '+news_dataset['title'] # + colab={"base_uri": "https://localhost:8080/"} id="cbF6GBBpzBey" executionInfo={"status": "ok", "timestamp": 1613899195320, "user_tz": -330, "elapsed": 2704, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi_B0XWKs7bMey8MrAh8rdvGsYLyZHci46B2PrrQQ=s64", "userId": "13966379820454708749"}} outputId="f7776d4b-ab23-468c-8aa7-2ecbb67d7487" print(news_dataset['content']) # + id="LfBtAvLtzEo6" # separating the data & label X = news_dataset.drop(columns='label', axis=1) Y = news_dataset['label'] # + colab={"base_uri": "https://localhost:8080/"} id="oHPBr540zl1h" executionInfo={"status": "ok", "timestamp": 1613899195330, "user_tz": -330, "elapsed": 2660, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi_B0XWKs7bMey8MrAh8rdvGsYLyZHci46B2PrrQQ=s64", "userId": "13966379820454708749"}} outputId="45a8b684-6d35-43d4-f24f-8a8a091b69ad" print(X) print(Y) # + [markdown] id="0NwFcpqcz37a" # Stemming: # # Stemming is the process of reducing a word to its Root word # # example: # actor, actress, acting --> act # + id="Ga_DaZxhzoWM" port_stem = PorterStemmer() # + id="zY-n0dCh0e-y" def stemming(content): stemmed_content = re.sub('[^a-zA-Z]',' ',content) stemmed_content = stemmed_content.lower() stemmed_content = stemmed_content.split() stemmed_content = [port_stem.stem(word) for word in stemmed_content if not word in stopwords.words('english')] stemmed_content = ' '.join(stemmed_content) return stemmed_content # + id="MBUIk4c94yTL" news_dataset['content'] = news_dataset['content'].apply(stemming) # + colab={"base_uri": "https://localhost:8080/"} id="xmwK-zyO5Stg" executionInfo={"status": "ok", "timestamp": 1613899387805, "user_tz": -330, "elapsed": 1019, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi_B0XWKs7bMey8MrAh8rdvGsYLyZHci46B2PrrQQ=s64", "userId": "13966379820454708749"}} outputId="61de12b9-b9fa-486f-8c91-a5aec48d7fe6" print(news_dataset['content']) # + id="5ZIidnta5k5h" #separating the data and label X = news_dataset['content'].values Y = news_dataset['label'].values # + colab={"base_uri": "https://localhost:8080/"} id="3nA_SBZX6BeH" executionInfo={"status": "ok", "timestamp": 1613899529575, "user_tz": -330, "elapsed": 1308, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi_B0XWKs7bMey8MrAh8rdvGsYLyZHci46B2PrrQQ=s64", "userId": "13966379820454708749"}} outputId="3990e651-a18a-4191-c361-6854aa327caa" print(X) # + colab={"base_uri": "https://localhost:8080/"} id="NgkFGXkg6HS4" executionInfo={"status": "ok", "timestamp": 1613899566359, "user_tz": -330, "elapsed": 1077, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi_B0XWKs7bMey8MrAh8rdvGsYLyZHci46B2PrrQQ=s64", "userId": "13966379820454708749"}} outputId="c01c5aea-ece5-462b-f14f-a448409d0aa7" print(Y) # + colab={"base_uri": "https://localhost:8080/"} id="Iu2ZEBkL6QTm" executionInfo={"status": "ok", "timestamp": 1613899576968, "user_tz": -330, "elapsed": 944, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi_B0XWKs7bMey8MrAh8rdvGsYLyZHci46B2PrrQQ=s64", "userId": "13966379820454708749"}} outputId="cbb35faa-bd75-4fee-a2ba-f85f874fe068" Y.shape # + id="BMfepsQZ6TES" # converting the textual data to numerical data vectorizer = TfidfVectorizer() vectorizer.fit(X) X = vectorizer.transform(X) # + colab={"base_uri": "https://localhost:8080/"} id="MJj5esbs7Nzy" executionInfo={"status": "ok", "timestamp": 1613899825877, "user_tz": -330, "elapsed": 2173, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi_B0XWKs7bMey8MrAh8rdvGsYLyZHci46B2PrrQQ=s64", "userId": "13966379820454708749"}} outputId="381f4859-7593-474e-a827-f1f4de5bd894" print(X) # + [markdown] id="mKBRGiSQ7YCZ" # Splitting the dataset to training & test data # + id="VjMYwmBo7Pbx" X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2, stratify=Y, random_state=2) # + [markdown] id="rxDsQvgO8Oln" # Training the Model: Logistic Regression # + id="HrSItcqc7qAy" model = LogisticRegression() # + colab={"base_uri": "https://localhost:8080/"} id="fdVJ839l8Vgx" executionInfo={"status": "ok", "timestamp": 1613900347043, "user_tz": -330, "elapsed": 1315, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi_B0XWKs7bMey8MrAh8rdvGsYLyZHci46B2PrrQQ=s64", "userId": "13966379820454708749"}} outputId="129fab6a-f9d5-4ef8-eecf-93206f1e1e41" model.fit(X_train, Y_train) # + [markdown] id="sbPKIFT89W1C" # Evaluation # + [markdown] id="YG6gqVty9ZDB" # accuracy score # + id="VgwtWZY59PBw" # accuracy score on the training data X_train_prediction = model.predict(X_train) training_data_accuracy = accuracy_score(X_train_prediction, Y_train) # + colab={"base_uri": "https://localhost:8080/"} id="4L-r5mld-BFn" executionInfo={"status": "ok", "timestamp": 1613900579871, "user_tz": -330, "elapsed": 1160, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi_B0XWKs7bMey8MrAh8rdvGsYLyZHci46B2PrrQQ=s64", "userId": "13966379820454708749"}} outputId="c662ccad-b5be-4f0c-e087-f8785e1a9141" print('Accuracy score of the training data : ', training_data_accuracy) # + id="Kgcn13oO-H6e" # accuracy score on the test data X_test_prediction = model.predict(X_test) test_data_accuracy = accuracy_score(X_test_prediction, Y_test) # + colab={"base_uri": "https://localhost:8080/"} id="9TG0Yof1-vg2" executionInfo={"status": "ok", "timestamp": 1613900758883, "user_tz": -330, "elapsed": 1073, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi_B0XWKs7bMey8MrAh8rdvGsYLyZHci46B2PrrQQ=s64", "userId": "13966379820454708749"}} outputId="db5ab627-52bf-487c-df79-8d94647ffbe3" print('Accuracy score of the test data : ', test_data_accuracy) # + [markdown] id="Yun4seaE-6tV" # Making a Predictive System # + colab={"base_uri": "https://localhost:8080/"} id="lPjssDL_-zo8" executionInfo={"status": "ok", "timestamp": 1613901008150, "user_tz": -330, "elapsed": 1039, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi_B0XWKs7bMey8MrAh8rdvGsYLyZHci46B2PrrQQ=s64", "userId": "13966379820454708749"}} outputId="b8bcac5b-bab1-426c-ad89-d4791b411e31" X_new = X_test[3] prediction = model.predict(X_new) print(prediction) if (prediction[0]==0): print('The news is Real') else: print('The news is Fake') # + colab={"base_uri": "https://localhost:8080/"} id="8KaWdvDI_eUk" executionInfo={"status": "ok", "timestamp": 1613901013770, "user_tz": -330, "elapsed": 1134, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gi_B0XWKs7bMey8MrAh8rdvGsYLyZHci46B2PrrQQ=s64", "userId": "13966379820454708749"}} outputId="49cf7c89-08b0-4ce3-8510-82efa7447d54" print(Y_test[3]) # + id="JBbWkLGr_lb_"
templates/Fake News Prediction.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from matplotlib import cm # ### Array operations with NumPy # Let's assume the next equation: # # $$ # u'_i = u_i - u_{i-1} # $$ # # If $u = [0,1,2,3,4,5]$, there are two different ways of computing the value of the array $u'$: # + u = np.array([0,1,2,3,4,5]) up1 = np.zeros(len(u)-1) for i in range(1, len(u)): up1[i-1] = u[i] - u[i-1] print(up1) # + up2 = u[1:] - u[0:-1] print(up2) # - # The differences in time for this case will be small. Let's apply it to a bigger array case. # + nx = 81 ny = 81 nt = 100 c = 1 dx = 2 / (nx-1) dy = 2 / (ny-1) CFL = 0.2 dt = CFL * dx / c x = np.linspace(0,2,nx) y = np.linspace(0,2,ny) u = np.ones((ny,nx)) un = np.ones((ny,nx)) # + # %%timeit #Initial conditions u = np.ones((ny,nx)) u[int(.5/dy):int(1/dy+1),int(0.5/dx):int(1/dx+1)] = 2 for n in range(nt+1): un = u.copy() row, col = u.shape for j in range(1,row): for i in range(1,col): u[j,i] = (un[j,i] - (c * dt/dx * (un[j,i] - un[j,i - 1])) - (c * dt/dy * (un[j,i] - un[j - 1, i]))) u[0,:] = 1 u[-1,:] = 1 u[:,0] = 1 u[:, -1] = 1 # + # %%timeit #Initial conditions u = np.ones((ny,nx)) u[int(.5/dy):int(1/dy+1),int(0.5/dx):int(1/dx+1)] = 2 for n in range(nt+1): un = u.copy() u[1:,1:] = (un[1:,1:] - (c * dt/dx * (un[1:,1:] - un[1:,0:-1])) - (c * dt/dy * (un[1:,1:] - un[0:-1,1:]))) u[0,:] = 1 u[-1,:] = 1 u[:,0] = 1 u[:, -1] = 1 # - # ### Linear Convection in 2D # Moving onto a two dimensional space will require a simple definition: # # $$ # \text{"A partial derivative wirh respect to x is the variation in the x direction at constant y"} # $$ # # A uniform 2D grid is defined by: # # $$ # x_i = x_0 + i \Delta x \qquad \qquad \qquad y_i = y_0 + i \Delta y # $$ # # The variable $u$ will now be defined such as $u_{i,j} = u(x_i,y_i)$. The equation that governs 2D linear convection is: # # $$ # \dfrac{\partial u}{\partial t} + c \dfrac{\partial u}{\partial x} + c \dfrac{\partial u}{\partial y} = 0 # $$ # # The equation can be discretized as follows: # # $$ # \dfrac{u^{n+1}_{i,j}-u^n_{i,j}}{\Delta t} + c \dfrac{u_{i,j}^{n}-u_{i-1,j}^n}{\Delta x}+c\dfrac{u_{i,j}^{n}-u_{i,j-1}^{n}}{\Delta y} = 0 # $$ # # Solving for the only variable that there is in the equation $\left(u_{i,j}^{n+1}\right)$, it yields: # # $$ # u_{i,j}^{n+1} = u_{i,j}^{n} - c \dfrac{\Delta t}{\Delta x} \left( u_{i,j}^{n}-u_{i-1,j}^n \right) - c \dfrac{\Delta t}{\Delta y} \left( u_{i,j}^{n}-u_{i,j-1}^n \right) # $$ # # We will solve this equation with the next set of initial conditions: # # $$ # u(x,y) = \begin{Bmatrix} 2 & \text{ for} & 0.5 \leqslant x, y \leqslant 1 \\ 1 & \text{ for} & \text{everywhere else} \\ \end{Bmatrix} # $$ # # and the next set of boundary conditions: # # $$ # u=1 \text{ for } \begin{Bmatrix} x=0,\ 2 \\ y=0,\ 2 \end{Bmatrix} # $$ # + nx = 81 ny = 81 nt = 100 c = 1 dx = 2 / (nx - 1) dy = 2 / (ny - 1) CFL = 0.2 dt = CFL * dx / c x = np.linspace(0,2,nx) y = np.linspace(0,2,ny) u = np.ones((ny, nx)) u[int(0.5/dy):int(1/dy+1),int(0.5/dx):int(1/dx+1)]=2 X, Y = np.meshgrid(x,y) # - #simple code to 3D plotting fig = plt.figure(figsize=(11,7), dpi=100) ax = fig.gca(projection='3d') ax.plot_surface(X, Y, u[:], cmap=cm.viridis) ax.set_xlabel('$x$') ax.set_ylabel('$y$') ax.set_zlabel('$z$') ax.grid(True) # + #Initial conditions u = np.ones((ny,nx)) u[int(.5/dy):int(1/dy+1),int(0.5/dx):int(1/dx+1)] = 2 for n in range(nt+1): un = u.copy() u[1:,1:] = (un[1:,1:] - (c * dt/dx * (un[1:,1:] - un[1:,:-1])) - (c * dt/dy * (un[1:,1:] - un[:-1,1:]))) u[0,:] = 1 u[-1,:] = 1 u[:,0] = 1 u[:, -1] = 1 # - #simple code to 3D plotting fig = plt.figure(figsize=(11,7), dpi=100) ax = fig.gca(projection='3d') ax.plot_surface(X, Y, u[:], cmap=cm.viridis) ax.set_xlabel('$x$') ax.set_ylabel('$y$') ax.set_zlabel('$z$') ax.grid(True)
12-steps-CFD/step05_linearConvection2D.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="AVOfyW__KqTM" colab_type="text" # # Using tf.keras # # This Colab is about how to use Keras to define and train simple models on the data generated in the last Colab [1_data.ipynb](https://github.com/tensorflow/workshops/tree/master/extras/amld/notebooks/solutions/1_data.ipynb) # + id="jHGIMZC5-JSR" colab_type="code" outputId="494f42f1-d1af-4aff-b195-a2bf44fee796" executionInfo={"status": "ok", "timestamp": 1579974836246, "user_tz": 480, "elapsed": 629, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 34} # In Jupyter, you would need to install TF 2.0 via !pip. # %tensorflow_version 2.x # + id="YvSp5-h8-IsG" colab_type="code" outputId="c996baf8-11d7-41f6-b057-71113b91e0ee" executionInfo={"status": "ok", "timestamp": 1579974844266, "user_tz": 480, "elapsed": 8640, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 34} import tensorflow as tf import json, os # Tested with TensorFlow 2.1.0 print('version={}, CUDA={}, GPU={}, TPU={}'.format( tf.__version__, tf.test.is_built_with_cuda(), # GPU attached? len(tf.config.list_physical_devices('GPU')) > 0, # TPU accessible? (only works on Colab) 'COLAB_TPU_ADDR' in os.environ)) # + [markdown] id="V5bA7N788eiT" colab_type="text" # > **Attention:** Please avoid using the TPU runtime (`TPU=True`) for now. The notebook contains an optional part on TPU usage at the end if you're interested. You can change the runtime via: "Runtime > Change runtime type > Hardware Accelerator" in Colab. # + [markdown] id="sAz1wTwOLX4j" colab_type="text" # ## Data from Protobufs # + id="vxwdPWVhEU97" colab_type="code" colab={} # Load data from Drive (Colab only). data_path = '/content/gdrive/My Drive/amld_data/zoo_img' # Or, you can load data from different sources, such as: # From your local machine: # data_path = './amld_data' # Or use a prepared dataset from Cloud (Colab only). # - 50k training examples, including pickled DataFrame. # data_path = 'gs://amld-datasets/zoo_img_small' # - 1M training examples, without pickled DataFrame. # data_path = 'gs://amld-datasets/zoo_img' # - 4.1M training examples, without pickled DataFrame. # data_path = 'gs://amld-datasets/animals_img' # - 29M training examples, without pickled DataFrame. # data_path = 'gs://amld-datasets/all_img' # Store models on Drive (Colab only). models_path = '/content/gdrive/My Drive/amld_data/models' # Or, store models to local machine. # models_path = './amld_models' # + id="IvyLtRFLfXZs" colab_type="code" outputId="62652af0-c027-4c03-a029-9d8d191a6ee5" executionInfo={"status": "ok", "timestamp": 1579975054096, "user_tz": 480, "elapsed": 61975, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 730} if data_path.startswith('/content/gdrive/'): from google.colab import drive drive.mount('/content/gdrive') if data_path.startswith('gs://'): from google.colab import auth auth.authenticate_user() # !gsutil ls -lh "$data_path" else: # !ls -lh "$data_path" # + id="4oCIc1XNTQ-v" colab_type="code" outputId="a3b285a2-bbe8-4c4b-9a13-da1d09ea2532" executionInfo={"status": "ok", "timestamp": 1579975054607, "user_tz": 480, "elapsed": 30005, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 52} labels = [label.strip() for label in tf.io.gfile.GFile('{}/labels.txt'.format(data_path))] print('All labels in the dataset:', ' '.join(labels)) counts = json.load(tf.io.gfile.GFile('{}/counts.json'.format(data_path))) print('Splits sizes:', counts) # + id="E9izb-IbIAhZ" colab_type="code" outputId="5d3286ec-02e8-457a-e15f-de4a8bcf77dc" executionInfo={"status": "ok", "timestamp": 1579975055536, "user_tz": 480, "elapsed": 30757, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 52} # This dictionary specifies what "features" we want to extract from the # tf.train.Example protos (i.e. what they look like on disk). We only # need the image data "img_64" and the "label". Both features are tensors # with a fixed length. # You need to specify the correct "shape" and "dtype" parameters for # these features. feature_spec = { # Single label per example => shape=[1] (we could also use shape=() and # then do a transformation in the input_fn). 'label': tf.io.FixedLenFeature(shape=[1], dtype=tf.int64), # The bytes_list data is parsed into tf.string. 'img_64': tf.io.FixedLenFeature(shape=[64, 64], dtype=tf.int64), } def parse_example(serialized_example): # Convert string to tf.train.Example and then extract features/label. features = tf.io.parse_single_example(serialized_example, feature_spec) label = features['label'] label = tf.one_hot(tf.squeeze(label), len(labels)) features['img_64'] = tf.cast(features['img_64'], tf.float32) / 255. return features['img_64'], label batch_size = 100 steps_per_epoch = counts['train'] // batch_size eval_steps_per_epoch = counts['eval'] // batch_size # Create datasets from TFRecord files. train_ds = tf.data.TFRecordDataset(tf.io.gfile.glob( '{}/train-*'.format(data_path))) train_ds = train_ds.map(parse_example) train_ds = train_ds.batch(batch_size).repeat() eval_ds = tf.data.TFRecordDataset(tf.io.gfile.glob( '{}/eval-*'.format(data_path))) eval_ds = eval_ds.map(parse_example) eval_ds = eval_ds.batch(batch_size) # Read a single batch of examples from the training set and display shapes. for img_feature, label in train_ds: break print('img_feature.shape (batch_size, image_height, image_width) =', img_feature.shape) print('label.shape (batch_size, number_of_labels) =', label.shape) # + id="lwlhvv1VM1wQ" colab_type="code" outputId="d26092d8-dfb5-4daf-a0a9-1c7c0357fa60" executionInfo={"status": "ok", "timestamp": 1579904121574, "user_tz": 480, "elapsed": 47824, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 381} # Visualize some examples from the training set. from matplotlib import pyplot as plt def show_img(img_64, title='', ax=None): """Displays an image. Args: img_64: Array (or Tensor) with monochrome image data. title: Optional title. ax: Optional Matplotlib axes to show the image in. """ (ax if ax else plt).matshow(img_64.reshape((64, -1)), cmap='gray') if isinstance(img_64, tf.Tensor): img_64 = img_64.numpy() ax = ax if ax else plt.gca() ax.set_xticks([]) ax.set_yticks([]) ax.set_title(title) rows, cols = 3, 5 for img_feature, label in train_ds: break _, axs = plt.subplots(rows, cols, figsize=(2*cols, 2*rows)) for i in range(rows): for j in range(cols): show_img(img_feature[i*rows+j].numpy(), title=labels[label[i*rows+j].numpy().argmax()], ax=axs[i][j]) # + [markdown] id="gZBM6OWqzv3L" colab_type="text" # ## Linear model # + id="28qPSmJMaxBh" colab_type="code" outputId="93e67728-b0d1-4109-ed96-f2fea2ba5f66" executionInfo={"status": "ok", "timestamp": 1579904121575, "user_tz": 480, "elapsed": 47815, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 230} # Sample linear model. linear_model = tf.keras.Sequential() linear_model.add(tf.keras.layers.Flatten(input_shape=(64, 64,))) linear_model.add(tf.keras.layers.Dense(len(labels), activation='softmax')) # "adam, categorical_crossentropy, accuracy" and other string constants can be # found at https://keras.io. linear_model.compile( optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy', tf.keras.metrics.categorical_accuracy]) linear_model.summary() # + id="_BRK6TYLa09h" colab_type="code" outputId="3e5f09ef-2feb-4f49-fe66-94dd56c2b18a" executionInfo={"status": "ok", "timestamp": 1579904139379, "user_tz": 480, "elapsed": 65612, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 90} linear_model.fit(train_ds, validation_data=eval_ds, steps_per_epoch=steps_per_epoch, validation_steps=eval_steps_per_epoch, epochs=1, verbose=True) # + [markdown] id="Pr0L9sVBqCj6" colab_type="text" # ## Convolutional model # + id="EXqkgNozzz5U" colab_type="code" outputId="497e79a6-3b86-4693-addb-e1f39f8581be" executionInfo={"status": "ok", "timestamp": 1579904139604, "user_tz": 480, "elapsed": 65830, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 550} # Let's define a convolutional model: conv_model = tf.keras.Sequential([ tf.keras.layers.Reshape(target_shape=(64, 64, 1), input_shape=(64, 64)), tf.keras.layers.Conv2D(filters=32, kernel_size=(10, 10), padding='same', activation='relu'), tf.keras.layers.ZeroPadding2D((1,1)), tf.keras.layers.Conv2D(filters=32, kernel_size=(10, 10), padding='same', activation='relu'), tf.keras.layers.MaxPooling2D(pool_size=(4, 4), strides=(4,4)), tf.keras.layers.Conv2D(filters=64, kernel_size=(5, 5), padding='same', activation='relu'), tf.keras.layers.MaxPooling2D(pool_size=(4, 4), strides=(4,4)), tf.keras.layers.Flatten(), tf.keras.layers.Dense(256, activation='relu'), tf.keras.layers.Dropout(0.3), tf.keras.layers.Dense(len(labels), activation='softmax'), ]) # YOUR ACTION REQUIRED: # Compile + print summary of the model (analogous to the linear model above). # + id="DsIUYEHC17ft" colab_type="code" outputId="cbbbcdf6-9085-43f8-d9d4-e82e173b0b83" executionInfo={"status": "ok", "timestamp": 1579904215104, "user_tz": 480, "elapsed": 141322, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 158} # YOUR ACTION REQUIRED: # Train the model (analogous to linear model above). # Note: You might want to reduce the number of steps if if it takes too long. # Pro tip: Change the runtime type ("Runtime" menu) to GPU! After the change you # will need to rerun the cells above because the Python kernel's state is reset. # + [markdown] colab_type="text" id="E6W0XkCpB8fb" # ## Store model # + id="d5E4keWeliXm" colab_type="code" colab={} tf.io.gfile.makedirs(models_path) # + id="V94tLWcEJDeI" colab_type="code" colab={} # Save model as Keras model. keras_path = os.path.join(models_path, 'linear.h5') linear_model.save(keras_path) # + id="uJj4acwemWDy" colab_type="code" outputId="e3f5f375-b2fc-483e-9eeb-9b8c34bb1fde" executionInfo={"status": "ok", "timestamp": 1579904216898, "user_tz": 480, "elapsed": 143101, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 34} # Keras model is a single file. # !ls -hl "$keras_path" # + id="SHHez3GZJD4u" colab_type="code" outputId="c6cfe9c0-7fce-4c75-c356-4fb06f4b08de" executionInfo={"status": "ok", "timestamp": 1579904217244, "user_tz": 480, "elapsed": 143440, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 230} # Load Keras model. loaded_keras_model = tf.keras.models.load_model(keras_path) loaded_keras_model.summary() # + id="6RDHdn9wlSfe" colab_type="code" outputId="6f61c1cb-3af3-4749-bbfc-c8a94d07f96e" executionInfo={"status": "ok", "timestamp": 1579904224930, "user_tz": 480, "elapsed": 151118, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 107} # Save model as Tensorflow Saved Model. saved_model_path = os.path.join(models_path, 'saved_model/linear') linear_model.save(saved_model_path, save_format='tf') # + id="vdbtIUV8lbdz" colab_type="code" outputId="0830cb9b-3fac-47c3-dd07-f8255520c0da" executionInfo={"status": "ok", "timestamp": 1579904227015, "user_tz": 480, "elapsed": 153196, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 158} # Inspect saved model directory structure. # !find "$saved_model_path" # + id="oy1Y5yjyCCZ3" colab_type="code" outputId="e32850a7-61e2-4b42-d5ad-fa39c665c08c" executionInfo={"status": "ok", "timestamp": 1579904227430, "user_tz": 480, "elapsed": 153604, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 230} saved_model = tf.keras.models.load_model(saved_model_path) saved_model.summary() # + id="7SNX8oL6UFLH" colab_type="code" outputId="1ff5dbda-673c-4808-fdb2-c54cd9fb258c" executionInfo={"status": "ok", "timestamp": 1579904228845, "user_tz": 480, "elapsed": 155012, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 105} # YOUR ACTION REQUIRED: # Store the convolutional model and any additional models that you trained # in the previous sections in Keras format so we can use them in later # notebooks for prediction. # + [markdown] id="L519Q6CpQOwK" colab_type="text" # # ----- Optional part ----- # + [markdown] id="zRjVuDn9VhNG" colab_type="text" # ## Learn from errors # # Looking at classification mistakes is a great way to better understand how a model is performing. This section walks you through the necessary steps to load some examples from the dataset, make predictions, and plot the mistakes. # + id="xYydGO1CM7VG" colab_type="code" colab={} import collections Mistake = collections.namedtuple('Mistake', 'label pred img_64') mistakes = [] eval_ds_iter = iter(eval_ds) # + id="rDvDfnezM_R_" colab_type="code" outputId="699052a4-f982-4c30-d2da-ba2cee4072b7" executionInfo={"status": "ok", "timestamp": 1579975097740, "user_tz": 480, "elapsed": 552, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 34} for img_64_batch, label_onehot_batch in eval_ds_iter: break img_64_batch.shape, label_onehot_batch.shape # + id="YHdHqTdaRBZ_" colab_type="code" colab={} # YOUR ACTION REQUIRED: # Use model.predict() to get a batch of predictions. preds = # + id="uMGfVv3ONeyz" colab_type="code" outputId="3d8ab338-f185-4b61-b2e6-3562042760b0" executionInfo={"status": "ok", "timestamp": 1579904229129, "user_tz": 480, "elapsed": 155275, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/<KEY>", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 34} # Iterate through the batch: for label_onehot, pred, img_64 in zip(label_onehot_batch, preds, img_64_batch): # YOUR ACTION REQUIRED: # Both `label_onehot` and pred are vectors with length=len(labels), with every # element corresponding to a probability of the corresponding class in # `labels`. Get the value with the highest value to get the index within # `labels`. label_i = pred_i = if label_i != pred_i: mistakes.append(Mistake(label_i, pred_i, img_64.numpy())) # You can run this and above 2 cells multiple times to get more mistakes. len(mistakes) # + id="rwq_0tTDXLbJ" colab_type="code" outputId="98675cd3-3260-461e-d4e9-817981d1a7bd" executionInfo={"status": "ok", "timestamp": 1579904230327, "user_tz": 480, "elapsed": 156466, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 735} # Let's examine the cases when our model gets it wrong. Would you recognize # these images correctly? # YOUR ACTION REQUIRED: # Run above cell but using a different model to get a different set of # classification mistakes. Then copy over this cell to plot the mistakes for # comparison purposes. Can you spot a pattern? rows, cols = 5, 5 plt.figure(figsize=(cols*2.5, rows*2.5)) for i, mistake in enumerate(mistakes[:rows*cols]): ax = plt.subplot(rows, cols, i + 1) title = '{}? {}!'.format(labels[mistake.pred], labels[mistake.label]) show_img(mistake.img_64, title, ax) # + [markdown] id="eids_xB-QzYL" colab_type="text" # ## Data from DataFrame # # For comparison, this section shows how you would load data from a `pandas.DataFrame` and then use Keras for training. Note that this approach does not scale well and can only be used for quite small datasets. # + id="fKETADXa622b" colab_type="code" outputId="198f7844-9456-4865-c18a-67fa3063cf6a" executionInfo={"status": "ok", "timestamp": 1579904232507, "user_tz": 480, "elapsed": 158639, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 70} # Note: used memory BEFORE loading the DataFrame. # !free -h # + id="lVHGBT_q3-zq" colab_type="code" outputId="f468e68b-82d4-491b-d820-5719f75764a7" executionInfo={"status": "ok", "timestamp": 1579904273004, "user_tz": 480, "elapsed": 199128, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 87} # Loading all the data in memory takes a while (~40s). import pickle df = pickle.load(tf.io.gfile.GFile('%s/dataframe.pkl' % data_path, mode='rb')) print(len(df)) print(df.columns) # + id="GTnEjQIDfQeM" colab_type="code" outputId="ffda698c-e0bd-4e9a-825c-9fd7aad56285" executionInfo={"status": "ok", "timestamp": 1579904275288, "user_tz": 480, "elapsed": 201404, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 34} df_train = df[df.split == b'train'] len(df_train) # + id="Aagi5RJlLz4k" colab_type="code" outputId="c3f448aa-338e-4789-9c7c-0aebc38dadb7" executionInfo={"status": "ok", "timestamp": 1579904277142, "user_tz": 480, "elapsed": 203249, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 70} # Note: used memory AFTER loading the DataFrame. # !free -h # + colab_type="code" id="0gKHIOswcPM1" outputId="5208bf1d-d2e8-4fa2-800b-88ddeee3e0f4" executionInfo={"status": "ok", "timestamp": 1579904277845, "user_tz": 480, "elapsed": 203945, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 381} # Show some images from the dataset. from matplotlib import pyplot as plt def show_img(img_64, title='', ax=None): (ax if ax else plt).matshow(img_64.reshape((64, -1)), cmap='gray') ax = ax if ax else plt.gca() ax.set_xticks([]) ax.set_yticks([]) ax.set_title(title) rows, cols = 3, 3 _, axs = plt.subplots(rows, cols, figsize=(2*cols, 2*rows)) for i in range(rows): for j in range(cols): d = df.sample(1).iloc[0] show_img(d.img_64, title=labels[d.label], ax=axs[i][j]) # + id="41_3dkqMPmG0" colab_type="code" colab={} df_x = tf.convert_to_tensor(df_train.img_64, dtype=tf.float32) df_y = tf.one_hot(df_train.label, depth=len(labels), dtype=tf.float32) # + id="c7k02_QQSYs2" colab_type="code" outputId="a884ed09-6d9d-4dfc-f578-8539946c3a15" executionInfo={"status": "ok", "timestamp": 1579904314896, "user_tz": 480, "elapsed": 240985, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 70} # Note: used memory AFTER defining the Tenors based on the DataFrame. # !free -h # + id="zEwkFKFuTBu0" colab_type="code" outputId="70dd909a-cf87-435a-82ba-79c31893d506" executionInfo={"status": "ok", "timestamp": 1579904314897, "user_tz": 480, "elapsed": 240980, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 34} # Checkout the shape of these rather large tensors. df_x.shape, df_x.dtype, df_y.shape, df_y.dtype # + id="98NBAasHXfMq" colab_type="code" outputId="d9b631c9-31fc-412d-db39-0a062acbed48" executionInfo={"status": "ok", "timestamp": 1579904317379, "user_tz": 480, "elapsed": 243455, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 283} # Copied code from section "Linear model" above. linear_model = tf.keras.Sequential() linear_model.add(tf.keras.layers.Flatten(input_shape=(64 * 64,))) linear_model.add(tf.keras.layers.Dense(len(labels), activation='softmax')) # "adam, categorical_crossentropy, accuracy" and other string constants can be # found at https://keras.io. linear_model.compile( optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy', tf.keras.metrics.categorical_accuracy]) linear_model.summary() # How much of a speedup do you see because the data is already in memory? # How would this compare to the convolutional model? linear_model.fit(df_x, df_y, epochs=1, batch_size=100) # + [markdown] id="XAItlmUwv1TZ" colab_type="text" # ## TPU Support # # For using TF with a TPU we'll need to make some adjustments. Generally, please note that several TF TPU features are experimental and might not work as smooth as it does with a CPU or GPU. # # > **Attention:** Please make sure to switch the runtime to TPU for this part. You can do so via: "Runtime > Change runtime type > Hardware Accelerator" in Colab. As this might create a new environment this section can be executed isolated from anything above. # + colab_type="code" id="kTGjOm631BQw" outputId="4278689a-0721-4f63-c5e0-dc18548c2465" executionInfo={"status": "ok", "timestamp": 1579904388257, "user_tz": 480, "elapsed": 30360, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 34} # %tensorflow_version 2.x # + colab_type="code" outputId="cdc179c6-5918-4f35-f4b8-dcd5fde6e34c" executionInfo={"status": "ok", "timestamp": 1579904408572, "user_tz": 480, "elapsed": 50666, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} id="QgVEQxLo1BQ3" colab={"base_uri": "https://localhost:8080/", "height": 425} import json, os import numpy as np from matplotlib import pyplot as plt import tensorflow as tf # Disable duplicate logging output in TF. logger = tf.get_logger() logger.propagate = False # This will fail if no TPU is connected... tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # Set up distribution strategy. tf.config.experimental_connect_to_cluster(tpu) tf.tpu.experimental.initialize_tpu_system(tpu); strategy = tf.distribute.experimental.TPUStrategy(tpu) # Tested with TensorFlow 2.1.0 print('\n\nTF version={} TPUs={} accelerators={}'.format( tf.__version__, tpu.cluster_spec().as_dict()['worker'], strategy.num_replicas_in_sync)) # + [markdown] id="liv2SeFnXerM" colab_type="text" # > **Attention:** TPUs require all files (input and models) to be stored in cloud storage buckets (`gs://bucket-name/...`). If you plan to use TPUs please choose the `data_path` below accordingly. Otherwise, you might run into `File system scheme '[local]' not implemented` errors. # + id="9kWkMWjHxn09" colab_type="code" colab={} from google.colab import auth auth.authenticate_user() # Browse datasets: # https://console.cloud.google.com/storage/browser/amld-datasets # - 50k training examples, including pickled DataFrame. data_path = 'gs://amld-datasets/zoo_img_small' # - 1M training examples, without pickled DataFrame. # data_path = 'gs://amld-datasets/zoo_img' # - 4.1M training examples, without pickled DataFrame. # data_path = 'gs://amld-datasets/animals_img' # - 29M training examples, without pickled DataFrame. # data_path = 'gs://amld-datasets/all_img' # + id="nZLuuaY72RxT" colab_type="code" cellView="form" outputId="ac23d955-18f1-4900-b9ee-82dca2aea836" executionInfo={"status": "ok", "timestamp": 1579904479930, "user_tz": 480, "elapsed": 122010, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 87} #@markdown **Copied and adjusted data definition code from above** #@markdown #@markdown &nbsp;&nbsp; Note: You can double-click this cell to see its code. #@markdown #@markdown The changes have been highlighted with `!` in the contained code #@markdown (things like the `batch_size` and added `drop_remainder=True`). #@markdown #@markdown Feel free to just **click "execute"** and ignore the details for now. labels = [label.strip() for label in tf.io.gfile.GFile('{}/labels.txt'.format(data_path))] print('All labels in the dataset:', ' '.join(labels)) counts = json.load(tf.io.gfile.GFile('{}/counts.json'.format(data_path))) print('Splits sizes:', counts) # This dictionary specifies what "features" we want to extract from the # tf.train.Example protos (i.e. what they look like on disk). We only # need the image data "img_64" and the "label". Both features are tensors # with a fixed length. # You need to specify the correct "shape" and "dtype" parameters for # these features. feature_spec = { # Single label per example => shape=[1] (we could also use shape=() and # then do a transformation in the input_fn). 'label': tf.io.FixedLenFeature(shape=[1], dtype=tf.int64), # The bytes_list data is parsed into tf.string. 'img_64': tf.io.FixedLenFeature(shape=[64, 64], dtype=tf.int64), } def parse_example(serialized_example): # Convert string to tf.train.Example and then extract features/label. features = tf.io.parse_single_example(serialized_example, feature_spec) # Important step: remove "label" from features! # Otherwise our classifier would simply learn to predict # label=features['label']. label = features['label'] label = tf.one_hot(tf.squeeze(label), len(labels)) features['img_64'] = tf.cast(features['img_64'], tf.float32) return features['img_64'], label # Adjust the batch size to the given hardware (#accelerators). batch_size = 64 * strategy.num_replicas_in_sync # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! steps_per_epoch = counts['train'] // batch_size eval_steps_per_epoch = counts['eval'] // batch_size # Create datasets from TFRecord files. train_ds = tf.data.TFRecordDataset(tf.io.gfile.glob( '{}/train-*'.format(data_path))) train_ds = train_ds.map(parse_example) train_ds = train_ds.batch(batch_size, drop_remainder=True).repeat() # !!!!!!!!!!!!!!!!!!! eval_ds = tf.data.TFRecordDataset(tf.io.gfile.glob( '{}/eval-*'.format(data_path))) eval_ds = eval_ds.map(parse_example) eval_ds = eval_ds.batch(batch_size, drop_remainder=True) # !!!!!!!!!!!!!!!!!!! # Read a single example and display shapes. for img_feature, label in train_ds: break print('img_feature.shape (batch_size, image_height, image_width) =', img_feature.shape) print('label.shape (batch_size, number_of_labels) =', label.shape) # + id="nF5wDYvjyk1K" colab_type="code" outputId="f73c2530-fb42-49c7-a690-aa37c5c5acc1" executionInfo={"status": "ok", "timestamp": 1579904479930, "user_tz": 480, "elapsed": 122004, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 230} # Model definition code needs to be wrapped in scope. with strategy.scope(): linear_model = tf.keras.Sequential() linear_model.add(tf.keras.layers.Flatten(input_shape=(64, 64,))) linear_model.add(tf.keras.layers.Dense(len(labels), activation='softmax')) linear_model.compile( optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy', tf.keras.metrics.categorical_accuracy]) linear_model.summary() # + id="CAHTLSQs7hRd" colab_type="code" outputId="1ed70722-c850-410d-fdc0-16472b16576c" executionInfo={"status": "ok", "timestamp": 1579904507182, "user_tz": 480, "elapsed": 149250, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 90} linear_model.fit(train_ds, validation_data=eval_ds, steps_per_epoch=steps_per_epoch, validation_steps=eval_steps_per_epoch, epochs=1, verbose=True) # + id="NVaTmhROzHE-" colab_type="code" outputId="4ba9ebd6-1441-4075-a269-e8b212d2e5ac" executionInfo={"status": "ok", "timestamp": 1579904508096, "user_tz": 480, "elapsed": 150158, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 550} # Model definition code needs to be wrapped in scope. with strategy.scope(): conv_model = tf.keras.Sequential([ tf.keras.layers.Reshape(target_shape=(64, 64, 1), input_shape=(64, 64)), tf.keras.layers.Conv2D(filters=32, kernel_size=(10, 10), padding='same', activation='relu'), tf.keras.layers.ZeroPadding2D((1,1)), tf.keras.layers.Conv2D(filters=32, kernel_size=(10, 10), padding='same', activation='relu'), tf.keras.layers.MaxPooling2D(pool_size=(4, 4), strides=(4,4)), tf.keras.layers.Conv2D(filters=64, kernel_size=(5, 5), padding='same', activation='relu'), tf.keras.layers.MaxPooling2D(pool_size=(4, 4), strides=(4,4)), tf.keras.layers.Flatten(), tf.keras.layers.Dense(256, activation='relu'), tf.keras.layers.Dropout(0.3), tf.keras.layers.Dense(len(labels), activation='softmax'), ]) conv_model.compile( optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) conv_model.summary() # + id="Ye5y9nIt7p1m" colab_type="code" outputId="02b99428-8865-488c-c023-33acd58cca8b" executionInfo={"status": "ok", "timestamp": 1579904592275, "user_tz": 480, "elapsed": 234331, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mAYx167H2vNmFSKlsQkQY-bjbJ-3sPGymaG0kXO=s64", "userId": "08860260976100898876"}} colab={"base_uri": "https://localhost:8080/", "height": 176} conv_model.fit(train_ds, validation_data=eval_ds, steps_per_epoch=steps_per_epoch, validation_steps=eval_steps_per_epoch, epochs=3, verbose=True) conv_model.evaluate(eval_ds, steps=eval_steps_per_epoch)
extras/amld/notebooks/exercises/2_keras.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- from itertools import cycle lst = [x for x in range(int(1e6))] # %%timeit mysum = 0 for el in lst: mysum += el print(mysum) gen = cycle((x for x in range(int(1e6)))) # %%timeit mysum = 0 for i in lst: mysum += next(gen) print(mysum) def gen(): while True: yield 1 gen2 = gen() for i in range(10): print(next(gen2)) # %%timeit mysum = 0 for i in lst: mysum += next(gen2) print(mysum)
notebooks/Python/Python_Internals/listexp_vs_genexp.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/AnthonyGachuru/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling/blob/master/AnthonyG_LS_DS_124_Sequence_your_narrative.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] colab_type="text" id="JbDHnhet8CWy" # _Lambda School Data Science_ # # # Sequence your narrative # # Today we will create a sequence of visualizations inspired by [<NAME>'s 200 Countries, 200 Years, 4 Minutes](https://www.youtube.com/watch?v=jbkSRLYSojo). # # Using this [data from Gapminder](https://github.com/open-numbers/ddf--gapminder--systema_globalis/): # - [Income Per Person (GDP Per Capital, Inflation Adjusted) by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--income_per_person_gdppercapita_ppp_inflation_adjusted--by--geo--time.csv) # - [Life Expectancy (in Years) by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--life_expectancy_years--by--geo--time.csv) # - [Population Totals, by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--population_total--by--geo--time.csv) # - [Entities](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--entities--geo--country.csv) # - [Concepts](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--concepts.csv) # + [markdown] colab_type="text" id="zyPYtsY6HtIK" # Objectives # - sequence multiple visualizations # - combine qualitative anecdotes with quantitative aggregates # # Links # - [<NAME>’s TED talks](https://www.ted.com/speakers/hans_rosling) # - [Spiralling global temperatures from 1850-2016](https://twitter.com/ed_hawkins/status/729753441459945474) # - "[The Pudding](https://pudding.cool/) explains ideas debated in culture with visual essays." # - [A Data Point Walks Into a Bar](https://lisacharlotterost.github.io/2016/12/27/datapoint-in-bar/): a thoughtful blog post about emotion and empathy in data storytelling # + [markdown] colab_type="text" id="SxTJBgRAW3jD" # ## Make a plan # # #### How to present the data? # # Variables --> Visual Encodings # - Income --> x # - Lifespan --> y # - Region --> color # - Population --> size # - Year --> animation frame (alternative: small multiple) # - Country --> annotation # # Qualitative --> Verbal # - Editorial / contextual explanation --> audio narration (alternative: text) # # # #### How to structure the data? # # | Year | Country | Region | Income | Lifespan | Population | # |------|---------|----------|--------|----------|------------| # | 1818 | USA | Americas | ### | ## | # | # | 1918 | USA | Americas | #### | ### | ## | # | 2018 | USA | Americas | ##### | ### | ### | # | 1818 | China | Asia | # | # | # | # | 1918 | China | Asia | ## | ## | ### | # | 2018 | China | Asia | ### | ### | ##### | # # + [markdown] colab_type="text" id="3ebEjShbWsIy" # ## Check Version of Seaborn # # Make sure you have at least version 0.9.0. # # + id="wj_Zy1lkg0bD" colab_type="code" colab={} # #!pip freeze # + colab_type="code" id="5sQ0-7JUWyN4" outputId="e0fa00bc-51e5-4690-cb16-7dea6797a66a" colab={"base_uri": "https://localhost:8080/", "height": 35} import seaborn as sns sns.__version__ # + [markdown] colab_type="text" id="S2dXWRTFTsgd" # ## More imports # + colab_type="code" id="y-TgL_mA8OkF" colab={} # %matplotlib inline import matplotlib.pyplot as plt import numpy as np import pandas as pd # + [markdown] colab_type="text" id="CZGG5prcTxrQ" # ## Load & look at data # + colab_type="code" id="-uE25LHD8CW0" colab={} income = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--income_per_person_gdppercapita_ppp_inflation_adjusted--by--geo--time.csv') # + colab_type="code" id="gg_pJslMY2bq" colab={} lifespan = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--life_expectancy_years--by--geo--time.csv') # + colab_type="code" id="F6knDUevY-xR" colab={} population = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--population_total--by--geo--time.csv') # + colab_type="code" id="hX6abI-iZGLl" colab={} entities = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--entities--geo--country.csv') # + colab_type="code" id="AI-zcaDkZHXm" colab={} concepts = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--concepts.csv') # + colab_type="code" id="EgFw-g0nZLJy" outputId="3df05ab5-0cb2-4ff0-e239-128024f7835b" colab={"base_uri": "https://localhost:8080/", "height": 35} income.shape, lifespan.shape, population.shape, entities.shape, concepts.shape # + colab_type="code" id="I-T62v7FZQu5" outputId="4d1b049d-1c02-40c1-98bc-3533fd49d10e" colab={"base_uri": "https://localhost:8080/", "height": 206} income.head() # + colab_type="code" id="2zIdtDESZYG5" outputId="11d38b4f-bd4d-4f6d-bf9b-180c03f75113" colab={"base_uri": "https://localhost:8080/", "height": 206} lifespan.head() # + colab_type="code" id="58AXNVMKZj3T" outputId="6b936a3f-7741-46fa-f046-cbccfacfe8a4" colab={"base_uri": "https://localhost:8080/", "height": 206} population.head() # + colab_type="code" id="0ywWDL2MZqlF" outputId="fdb28087-2e02-4fe0-bd58-ed66fdd1f97d" colab={"base_uri": "https://localhost:8080/", "height": 261} pd.options.display.max_columns = 500 entities.head() # + colab_type="code" id="mk_R0eFZZ0G5" outputId="b6960334-c3af-4f8b-b6fa-edea752cb9ef" colab={"base_uri": "https://localhost:8080/", "height": 556} concepts.head() # + [markdown] colab_type="text" id="6HYUytvLT8Kf" # ## Merge data # + [markdown] colab_type="text" id="dhALZDsh9n9L" # https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf # + id="rpagbeEunu8e" colab_type="code" outputId="60651f36-42c9-43e9-8b40-c08801f590b0" colab={"base_uri": "https://localhost:8080/", "height": 54} print(income.shape) print(lifespan.shape) # + colab_type="code" id="A-tnI-hK6yDG" outputId="6b165c14-d4cc-4614-8ea7-7c74bd3bf146" colab={"base_uri": "https://localhost:8080/", "height": 243} #This is using the default parameters df = pd.merge(income, lifespan) print(df.shape) #This is doing the same thing but explicitly specifying parameters df = pd.merge(income, lifespan, on = ['geo', 'time'], how = 'inner') print(df.shape) df.head() # + id="krTzZKjtpi8k" colab_type="code" colab={} #df.isna().sum() # + id="3juRdcLUv_vv" colab_type="code" outputId="f91ac77e-d171-4096-e6fa-c05e9255eacd" colab={"base_uri": "https://localhost:8080/", "height": 225} df = pd.merge(df, population) print(df.shape) df.head() # + id="l01L-AYAxWV_" colab_type="code" outputId="8743bd27-dec0-41ab-832c-62bd360df986" colab={"base_uri": "https://localhost:8080/", "height": 146} entities['world_6region'].value_counts() # + id="PPOozwqVyAgY" colab_type="code" outputId="19373bbc-382d-4b06-edde-2807e4ec0ba9" colab={"base_uri": "https://localhost:8080/", "height": 109} entities['world_4region'].value_counts() # + id="mXstsO6v3QPB" colab_type="code" colab={} entity_columns_to_keep = ['country', 'name', 'world_4region', 'world_6region'] # + id="Tl9A2MQj4DaK" colab_type="code" outputId="3e802a90-95cf-4845-efc9-7dc7d77ad824" colab={"base_uri": "https://localhost:8080/", "height": 206} entities = entities[entity_columns_to_keep] entities.head() # + id="DHDVTJ5-57wX" colab_type="code" outputId="a4fa7083-bdec-4e73-db53-500f86989052" colab={"base_uri": "https://localhost:8080/", "height": 226} merged = pd.merge(df, entities, left_on = 'geo' , right_on = 'country') merged.head() # + id="NWwzPjHM71j_" colab_type="code" outputId="a6bbbbf5-628a-4559-b838-8b1b2b84190b" colab={"base_uri": "https://localhost:8080/", "height": 206} merged = merged.drop('geo', axis = 'columns') merged.head() # + id="LvYNR6Y18M_b" colab_type="code" outputId="d990a4d0-344e-4355-f468-72e2b82985b0" colab={"base_uri": "https://localhost:8080/", "height": 206} merged = merged.rename(columns = {'country': 'country_code', 'time': 'year', 'income_per_person_gdppercapita_ppp_inflation_adjusted': 'income', 'life_expectancy_years': 'lifespan', 'population_total': 'population', 'name': 'country', 'world_6region': '6region', 'world_4region': '4region'}) merged.head() # + [markdown] colab_type="text" id="4OdEr5IFVdF5" # ## Explore data # + colab_type="code" id="4IzXea0T64x4" outputId="418d4cd5-0935-4319-bcf2-7daaa5851acd" colab={"base_uri": "https://localhost:8080/", "height": 182} merged.dtypes # + id="Uyz1_wZe-apK" colab_type="code" outputId="66f339b5-2fc9-4b25-b154-b87b9dc17ba3" colab={"base_uri": "https://localhost:8080/", "height": 300} merged.describe() # + id="ccdn1TPu-cld" colab_type="code" outputId="d600bfea-945b-4385-809b-c4f08be6afae" colab={"base_uri": "https://localhost:8080/", "height": 175} #Non-numeric categories merged.describe(exclude = 'number') # + id="bax5pgW--0LS" colab_type="code" outputId="9305e590-2365-4fdd-a506-13e6550de879" colab={"base_uri": "https://localhost:8080/", "height": 733} merged.country.unique() # + id="trZcAlZN_QfP" colab_type="code" outputId="60713e35-7b1a-42f5-80c3-ee949a68a0f9" colab={"base_uri": "https://localhost:8080/", "height": 206} usa = merged[merged.country == 'United States'] usa.head() # + id="Sgy9egeVAVhA" colab_type="code" outputId="78db0a25-ae62-416f-b216-14f919b889bd" colab={"base_uri": "https://localhost:8080/", "height": 143} usa[usa.year.isin([1818, 1918, 2018])] # + id="QX9c6JwyAsRV" colab_type="code" outputId="67c27a8d-c848-4289-b657-e3e0432ad58c" colab={"base_uri": "https://localhost:8080/", "height": 143} china = merged[merged.country == 'China'] china[china.year.isin([1818, 1918, 2018])] # + [markdown] colab_type="text" id="hecscpimY6Oz" # ## Plot visualization # + colab_type="code" id="_o8RmX2M67ai" outputId="93c887a3-f567-4238-c334-64884e4c5143" colab={"base_uri": "https://localhost:8080/", "height": 206} import seaborn as sns now = merged[merged.year == 2018] now.head() # + id="UfoVo-McCqXS" colab_type="code" outputId="c48a4522-00bf-4dd1-e563-5355aad0ad6d" colab={"base_uri": "https://localhost:8080/", "height": 386} #import matplotlib.pyplot as plt sns.relplot(x = 'income', y = 'lifespan', hue = '6region', size = 'population', sizes = (30, 400), data = now) plt.xscale('log') plt.ylim(20, 85) plt.title('The world in 2018') plt.show() # + [markdown] colab_type="text" id="8OFxenCdhocj" # ## Analyze outliers # + colab_type="code" id="D59bn-7k6-Io" outputId="4152c345-193b-4ad4-abd9-b0fd5d152361" colab={"base_uri": "https://localhost:8080/", "height": 1000} #Qatar is the richest country in 2018 now.sort_values('income', ascending = False) # + id="zKaCwJZxH4mB" colab_type="code" outputId="d65958b0-fbaa-45e4-8ab3-4fcf71e84598" colab={"base_uri": "https://localhost:8080/", "height": 81} now_qatar = now[now.country == 'Qatar'] now_qatar.head() # + id="IaFBmQfBIQ6Q" colab_type="code" outputId="7f712d14-7607-4c61-a19b-cdb666fce1d5" colab={"base_uri": "https://localhost:8080/", "height": 386} sns.relplot(x = 'income', y = 'lifespan', hue = '6region', size = 'population', sizes = (30, 400), data = now) plt.xscale('log') plt.ylim(20, 85) plt.title('The world in 2018') plt.text(x = now_qatar.income - 5000, y = now_qatar.lifespan + 1, s = 'Qatar') plt.show() # + id="Sy04ELvoHRYJ" colab_type="code" colab={} # + [markdown] colab_type="text" id="DNTMMBkVhrGk" # ## Plot multiple years # + colab_type="code" id="JkTUmYGF7BQt" outputId="369c4486-aea6-4740-af74-87d81d6e4af8" colab={"base_uri": "https://localhost:8080/", "height": 393} years = [1818, 1918, 2018] centuries = merged[merged.year.isin(years)] #sns.set(rc = {'axes.facecolor': 'Black', 'figure.facecolor': 'blue'}) #sns.set(rc = {'axes.facecolor': 'gray', 'figure.facecolor': 'white'}) fig = sns.relplot(x = 'income', y = 'lifespan', hue = '6region', size = 'population', sizes = (30, 400), col = 'year', data = centuries) #fig, ax = plt.subplots() plt.xscale('log') #plt.ylim(20, 85) #plt.title('The world in 1818, 1918 & 2018') plt.text(x = now_qatar.income - 5000, y = now_qatar.lifespan + 1, s = 'Qatar') #s = fig.axes.flatten() #axes[0].set_title('1818') #axes[1].set_title('1918') #axes[2].set_title('2018 - AHA') plt.show() # + id="9nFZrqBySSN-" colab_type="code" outputId="f986cbcc-6f2a-4f2f-ee42-68bb69d9112f" colab={"base_uri": "https://localhost:8080/", "height": 393} sns.relplot(x = 'income', y = 'lifespan', hue = '6region', size = 'population', sizes = (30, 400), col = 'year', data = centuries) #fig, ax = plt.subplots() #fig.set(facecolor = 'blue') plt.xscale('log') #plt.ylim(20, 85) #plt.title('The world in 1818, 1918 & 2018') plt.text(x = now_qatar.income - 5000, y = now_qatar.lifespan + 1, s = 'Qatar') plt.show() # + id="yATJnsM7miI8" colab_type="code" colab={} # + [markdown] colab_type="text" id="BB1Ki0v6hxCA" # ## Point out a story # + colab_type="code" id="eSgZhD3v7HIe" colab={} years = [1918, 1938, 1978, 1998, 2018] decades = merged[merged.year.isin(years)] # + id="mV4WWxx4mjBn" colab_type="code" outputId="dbbebf1a-3a36-433e-d143-78e96c3c3186" colab={"base_uri": "https://localhost:8080/", "height": 389} sns.relplot(x = 'income', y = 'lifespan', hue = '6region', size = 'population', col = 'year', data = decades) plt.show() # + id="4Ejjzv5VwkjT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="cdce258a-2375-4803-9efc-4261c145173a" for year in years: sns.relplot(x = 'income', y = 'lifespan', hue = '6region', size = 'population', data = merged[merged.year == year]) plt.xscale('log') plt.xlim((150, 150000)) plt.ylim((0, 90)) plt.title(year) plt.axvline(x = 1000, color = 'grey') plt.axhline(y = 40, color = 'grey') # + id="0cKfOSHUn3gK" colab_type="code" outputId="e0c3fea3-ba48-4b21-b68a-b23d3f8e0397" colab={"base_uri": "https://localhost:8080/", "height": 1000} for year in years: sns.relplot(x = 'income', y = 'lifespan', hue = '6region', size = 'population', data = merged[merged.year == year]) plt.xscale('log') plt.xlim((150, 150000)) plt.ylim((0, 90)) plt.title(year) plt.axvline(x = 1000, color = 'grey') plt.axhline(y = 40, color = 'grey') # + id="KJEmFgcewi74" colab_type="code" colab={}
AnthonyG_LS_DS_124_Sequence_your_narrative.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Drug Interactions Network Analysis # ### Part 1 - Data Exploraton # #### Author: <NAME> # # #### Data Sources: # - https://snap.stanford.edu/biodata/datasets/10001/10001-ChCh-Miner.html # - https://github.com/snap-stanford/miner-data/tree/master/drugbank # ___ # ### 1. Import dependencies # + import pandas as pd import numpy as np import re import zipfile import json from pyvis.network import Network import networkx as nx from collections import Counter from itertools import combinations from IPython.core.display import display, HTML display(HTML("<style>.container { width:100% !important; }</style>")) # - # ___ # ### 2. Data preparation # + # Unzip all tar/zip files zip_files_list = [i for i in os.listdir('data') if i.endswith('.zip')] for file in zip_files_list: with zipfile.ZipFile(f'data/{file}', 'r') as zip_ref: zip_ref.extractall('data') os.listdir('data') # - # #### Drug Interactions (Drugbank) # + # Read DB mapping JSON with open('data/DB_mapping.json', 'r') as fp: db_mapping = json.load(fp) db_mapping # + # Import raw drugbank dataset df_db_int = pd.read_csv("data/ChCh-Miner_durgbank-chem-chem.tsv", sep='\t', header=None) df_db_int.columns = ['drug_1_code', 'drug_2_code'] # Perform code-name mapping df_db_int['drug_1_name'] = df_db_int['drug_1_code'].map(db_mapping) df_db_int['drug_2_name'] = df_db_int['drug_2_code'].map(db_mapping) new_cols = ['drug_1_code', 'drug_1_name', 'drug_2_code', 'drug_2_name'] df_db_int = df_db_int[new_cols] df_db_int.head() # - # #### 2(b) Polypharmacy side effects # + # Read CID mapping JSON with open('data/CID_mapping.json', 'r') as fp: cid_mapping = json.load(fp) cid_mapping # + # Import dataset df_poly_se = pd.read_csv("data/ChChSe-Decagon_polypharmacy.csv") df_poly_se.columns = ['drug_1_code', 'drug_2_code', 'side_effect_code', 'side_effect_description'] # Perform code-name mapping df_poly_se['drug_1_name'] = df_poly_se['drug_1_code'].map(cid_mapping) df_poly_se['drug_2_name'] = df_poly_se['drug_2_code'].map(cid_mapping) # Rearrange columns new_cols = ['drug_1_code', 'drug_1_name', 'drug_2_code', 'drug_2_name', 'side_effect_code', 'side_effect_description'] df_poly_se = df_poly_se[new_cols] df_poly_se.head() # - df_poly_se['side_effect_description'].value_counts()[:20] drugs = (df_poly_se['drug_1_name'].value_counts()).append(df_poly_se['drug_2_name'].value_counts()) drugs[:20] # + n = 20 df_poly_se_sm = df_poly_se[['drug_1_name', 'drug_2_name']] L = Counter([y for x in df_poly_se_sm.values for y in combinations(x, 2)]).most_common(n) combi_df = pd.DataFrame(L, columns=['Pair', 'Qty']) print(combi_df) # - # ___ # #### 2(c) Monopharmacy side effects df_mono_se = pd.read_csv("data/ChSe-Decagon_monopharmacy.csv") df_mono_se.head() # + # Import dataset df_mono_se = pd.read_csv("data/ChSe-Decagon_monopharmacy.csv") df_mono_se.columns = ['drug_code', 'side_effect_code', 'side_effect_description'] # Perform code-name mapping df_mono_se['drug_name'] = df_mono_se['drug_code'].map(cid_mapping) # Rearrange columns new_cols = ['drug_code', 'drug_name', 'side_effect_code', 'side_effect_description'] df_mono_se = df_mono_se[new_cols] df_mono_se.head() # - df_mono_se['drug_name'].value_counts()[:15] df_mono_se['side_effect_description'].value_counts()[:15]
01_Data_Exploration.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.7.7 64-bit (''semsegpy37'': conda)' # name: python37764bitsemsegpy37conda8dc00fd5bde141089e99646c7805a034 # --- import models # + import torch t =torch.load("decoder_epoch_20.pth") # - len(t['conv_last.4.weight']) torch.Size(t['conv_last.4.weight'].narrow(0,0,37))
extras/torch.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Bloom filters # - Powerful things that make hashing super useful # [Link to tutorial](http://www.geeksforgeeks.org/bloom-filters-introduction-and-python-implementation/)
misc/Bloom filters.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:scrna] # language: python # name: conda-env-scrna-py # --- # + colab={"base_uri": "https://localhost:8080/"} id="XBbdv4yGYZkR" outputId="774a9dd0-e871-444f-e7bf-1ec1993d6cb9" from google.colab import drive drive.mount('/content/drive', force_remount=True) # + colab={"base_uri": "https://localhost:8080/"} id="5jciRkyIY06k" outputId="3be9583e-66f9-47b6-913d-e3d1b93c6311" # !pip install pycm # + id="f8mNB-IEYSs6" import time import numpy as np import pandas as pd import argparse import matplotlib.pyplot as plt from copy import deepcopy from scipy import interpolate from sklearn.feature_selection import mutual_info_regression from scipy.stats import pearsonr import scipy.sparse import sys import pickle import re from pyitlib import discrete_random_variable as drv from dtit import dtit from scipy import stats from numpy import savetxt from numpy import genfromtxt import networkx as nx from scipy.stats import norm import itertools import math import copy from sklearn.metrics import roc_curve from sklearn.metrics import roc_auc_score from sklearn.metrics import precision_recall_curve, roc_curve, auc, average_precision_score from sklearn.metrics import confusion_matrix from pycm import * # + id="j7pC-vCUYSs_" def conditional_mutual_info(X,Y,Z=np.array(1)): if X.ndim == 1: X = np.reshape(X, (-1, 1)) if Y.ndim == 1: Y = np.reshape(Y, (-1, 1)) if Z.ndim == 0: c1 = np.cov(X) if c1.ndim != 0: d1 = np.linalg.det(c1) else: d1 = c1.item() c2 = np.cov(Y) if c2.ndim != 0: d2 = np.linalg.det(c2) else: d2 = c2.item() c3 = np.cov(X,Y) if c3.ndim != 0: d3 = np.linalg.det(c3) else: d3 = c3.item() cmi = (1/2)*np.log((d1*d2)/d3) else: if Z.ndim == 1: Z = np.reshape(Z, (-1, 1)) c1 = np.cov(np.concatenate((X, Z), axis=0)) if c1.ndim != 0: d1 = np.linalg.det(c1) else: d1 = c1.item() c2 = np.cov(np.concatenate((Y, Z), axis=0)) if c2.ndim != 0: d2 = np.linalg.det(c2) else: d2 = c2.item() c3 = np.cov(Z) if c3.ndim != 0: d3 = np.linalg.det(c3) else: d3 = c3.item() c4 = np.cov(np.concatenate((X, Y, Z), axis=0)) if c4.ndim != 0: d4 = np.linalg.det(c4) else: d4 = c4.item() cmi = (1/2)*np.log((d1*d2)/(d3*d4)) if math.isinf(cmi): cmi = 0 return cmi # + id="38y542BkYStB" def pca_cmi(data, theta, max_order): genes = list(data.columns) predicted_graph = nx.complete_graph(genes) num_edges = predicted_graph.number_of_edges() print("Number of edges in the initial complete graph : {}".format(num_edges)) print() L = -1 nochange = False while L < max_order and nochange == False: L = L+1 predicted_graph, nochange = remove_edges(predicted_graph, data, L, theta) print("Order : {}".format(L)) print("Number of edges in the predicted graph : {}".format(predicted_graph.number_of_edges())) print() print() print() print("Final Prediction:") print("-----------------") print("Order : {}".format(L)) print("Number of edges in the predicted graph : {}".format(predicted_graph.number_of_edges())) nx.draw(predicted_graph, with_labels=True, font_weight='bold') print() return predicted_graph def remove_edges(predicted_graph, data, L, theta): initial_num_edges = predicted_graph.number_of_edges() edges = predicted_graph.edges() for edge in edges: neighbors = nx.common_neighbors(predicted_graph, edge[0], edge[1]) nhbrs = copy.deepcopy(sorted(neighbors))\ T = len(nhbrs) if T < L and L != 0: continue else: x = data[edge[0]].to_numpy() if x.ndim == 1: x = np.reshape(x, (-1, 1)) y = data[edge[1]].to_numpy() if y.ndim == 1: y = np.reshape(y, (-1, 1)) K = list(itertools.combinations(nhbrs, L)) if L == 0: cmiVal = conditional_mutual_info(x.T, y.T) if cmiVal < theta: predicted_graph.remove_edge(edge[0], edge[1]) else: maxCmiVal = 0 for zgroup in K: z = data[list(zgroup)].to_numpy() if z.ndim == 1: z = np.reshape(z, (-1, 1)) cmiVal = conditional_mutual_info(x.T, y.T, z.T) if cmiVal > maxCmiVal: maxCmiVal = cmiVal if maxCmiVal < theta: predicted_graph.remove_edge(edge[0], edge[1]) final_num_edges = predicted_graph.number_of_edges() if final_num_edges < initial_num_edges: return predicted_graph, False return predicted_graph, True # + colab={"base_uri": "https://localhost:8080/", "height": 413} id="VpKQJonVYStB" outputId="ef875652-bdcc-4227-b8db-f45e40942213" data = pd.read_csv('/content/drive/MyDrive/673:termproject/PC-CMI_Algorithm/Data/InSilicoSize10-Yeast3-trajectories.tsv', sep='\t') data = data.drop(['Time'], axis=1) data # + colab={"base_uri": "https://localhost:8080/", "height": 683} id="rbwWG8cKYStD" outputId="36501dbd-bef6-406c-a506-96659bef9c63" predicted_graph = pca_cmi(data, 0.02, 10) predicted_adjMatrix = nx.adjacency_matrix(predicted_graph) print(predicted_adjMatrix.todense()) # + colab={"base_uri": "https://localhost:8080/", "height": 720} id="sIScDGR2YStD" outputId="665392c2-412d-4d1d-ab9c-51e856ff40a1" benchmark_network = pd.read_csv('/content/drive/MyDrive/673:termproject/PC-CMI_Algorithm/Test/DREAM3GoldStandard_InSilicoSize10_Yeast3.txt', sep='\t', header=None) benchmark_network = benchmark_network.loc[benchmark_network[2] == 1] benchmark_network # + colab={"base_uri": "https://localhost:8080/", "height": 485} id="nHEF742sYStE" outputId="9702ef0c-34ad-499d-d710-845ae7c726f8" import matplotlib.pyplot as plt benchmark_graph = nx.Graph() for i in (1,10): benchmark_graph.add_node('G'+str(i)) for row in range(0,benchmark_network.shape[0]): benchmark_graph.add_edge(benchmark_network[0][row], benchmark_network[1][row]) nx.draw(benchmark_graph, with_labels=True, font_weight='bold') benchmark_adjMatrix = nx.adjacency_matrix(benchmark_graph) print(benchmark_adjMatrix.todense()) # + id="GyhS2TafYStE" # editDistance = nx.optimize_graph_edit_distance(predicted_graph, benchmark_graph) # print(list(editDistance)) # + id="boiMNRJgYStF" y_test = benchmark_adjMatrix.todense().flatten() y_pred = predicted_adjMatrix.todense().flatten() y_pred = np.asarray(y_pred) y_test = np.asarray(y_test) y_pred = y_pred.reshape(y_pred.shape[1],) y_test = y_test.reshape(y_test.shape[1],) # + colab={"base_uri": "https://localhost:8080/"} id="arEtqIaeYStF" outputId="cc2cfe87-c6c8-45ec-cda3-798485cf5177" cm = ConfusionMatrix(y_test, y_pred) # cm.relabel(mapping=classdict) print(cm.ACC_Macro) # + colab={"base_uri": "https://localhost:8080/"} id="C-oDo4w7qFA4" outputId="eac75044-9f48-4476-cf79-82ab28d3f10e" y_pred # + colab={"base_uri": "https://localhost:8080/", "height": 280} id="ELrecraRYStG" outputId="856eb0bc-1c86-4ac5-f03e-a418ab93cf3e" ns_fpr, ns_tpr, _ = roc_curve(y_test, y_pred) auc = roc_auc_score(y_test, y_pred) # plot the roc curve for the model plt.plot(ns_fpr, ns_tpr, label='AUC='+str(round(auc*100,2))+'%') # axis labels plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') # show the legend plt.legend(loc = 5) # show the plot plt.show() # + id="L15hYfebYStG"
Code/PC-CMI_Algorithm/PCA_CMI_Dream3_GRNIdentification.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/enriquemezav/spwlaunisc_PythonAppliedOG/blob/master/notebook/ws_spwlaunisc.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="s7cdZuKpOlCT" # # **Python Aplicado a la Industria del O&G** # --- # # > ***<NAME>.*** // [*<EMAIL>*](https://www.linkedin.com/in/enriquemezav/) # # Hola, somos un grupo de estudiantes miembros del Capítulo Estudiantil de la [***Society of Petrophysicists and Well Log Analysts***](https://www.spwla.org/) en la Universidad Nacional de Ingeniería; organizamos este curso de introducción a la programación en Python en colaboración con el **grupo de investigación TRM** de acceso abierto y gratuito al público, con el objetivo de mostrar su aplicación en la industria de Oil & Gas. # # <H1 align="center"><img src="https://i.ibb.co/0GKk29s/Dise-o-sin-t-tulo.png" width = 1000></H1> # # Antes de empezar tengo que advertirte que ningún lenguaje de programación, por simple que sea, puede aprenderse en profundidad en tan poco tiempo, a no ser que se requiera de experiencia previa en otros lenguajes. Dominar la programación precisa de experiencia, lo cual a su vez requiere de un tiempo mínimo que permita afianzar las estructuras mentales necesarias para entender la secuencia lógica a seguir para desarrollar un programa o proyecto de software. # # # + <h3><b>¿Por qué Python?</b></h3> # # [**Python**](https://www.python.org/) es un lenguaje de programación de alto nivel, presenta un código simple, por lo que es fácil de aprender, ya que requiere una sintaxis única que se centra en la legibilidad; esto explica la creciente popularidad que ha tenido en los ultimos tiempos. # # A pesar de su simpleza, es muy utilizado tanto en la industria para servidores y servicios web, así como también en el área academica para redes neuronales, deep learning, simulación, etc. # # <H1 align="center"><img src="https://miro.medium.com/max/986/1*S2AyJcdw8EPcn7gwDVSBCA.png" width = 400></H1> # # ># ***Empezamos!!!*** # # Este workshop está orientado a introducir los aspectos básicos de Python y el manejo de las librerías más utilizadas en el ámbito de la investigación para el análisis de datos como son Numpy, Matplotlib, Pandas y # Scipy. # + [markdown] id="CEGCJbXQGPGE" # ## **1. Fundamentos de programación en Python** # # Veamos los elementos fundamentales del lenguage de Python con sus variables, como la asignación de valores, tipos de variables (simples y compuestas), operadoraciones aritméticas y estructuras de control (condicionales, y bucles). # + colab={"base_uri": "https://localhost:8080/"} id="QAmWAuSfd2LU" outputId="9725a69e-9ba5-43c6-f283-7f42e8be5361" # mi primer programa print('!Bienvenidos al workshop, "Python Aplicado a la Industria del O&G"!') # + [markdown] id="ZPlUgJcKmBhk" # ### **Tipos de variables simples** # + id="3e9tfMsDOlCW" colab={"base_uri": "https://localhost:8080/"} outputId="b43f1b41-9062-4d11-a3f9-6160d1c40873" # número entero (int) x = 20 # número flotante (float) y = 0.35 # número complejo (complex) z = 3 + 4j # tipo booleano (bool) r = 1 < 3 # caracteres o texto (str) t = 'spwla uni student chapter' # objeto nulo (special) n = None print('x es una variable de tipo', type(x)) print('y es una variable de tipo', type(y)) print('z es una variable de tipo', type(z)) print('r es una variable de tipo', type(r)) print('t es una variable de tipo', type(t)) print('n es una variable de tipo', type(n)) # + colab={"base_uri": "https://localhost:8080/"} id="KLUKCrZinpKM" outputId="aeaa5e9b-db24-4118-aa70-87f179a5eb5b" # operaciones aritméticas print('La suma es: ', x + y) print('La diferencia es: ', z - y) print('La multiplicación es: ', x * y) print('La división es: ', z / x) # + [markdown] id="AdWXvdxtmsbT" # ### **Tipos de variables compuestos** # + colab={"base_uri": "https://localhost:8080/"} id="MpC_Trujmw2T" outputId="15461ff5-a14a-42e6-8916-a88d9f6020b2" # listas (list) ls = [1, 2, 3] # tuplas (tuple) tp = (1, 2, 3) # diccionarios (dict) dc = {'a':1, 'b':2, 'c':3} # conjuntos (set) st = {1, 2, 3} print('ls es una variable de tipo', type(ls)) print('tp es una variable de tipo', type(tp)) print('dc es una variable de tipo', type(dc)) print('st es una variable de tipo', type(st)) # + id="aDHrr15Kr1D3" # lista vacía list1= [] # lista de enteros list2 = [1, 2, 3, 4, 5] # lista con varios tipos de datos list3 = [81, 'SPWLA', 3.14, True] # lista con varios tipos de datos my_list = ['SPWLA', 12, [18, 'Tecnologías de Recobro Mejorado', False], 2.71828] # + colab={"base_uri": "https://localhost:8080/"} id="9oZ5ppfdsmWe" outputId="457780ce-2980-4a85-d99e-5293bf0277ec" # indexación en listas print(my_list[0]) print(my_list[-1]) print(my_list[2][1]) # + [markdown] id="5zY-hQczvBQG" # ### **Declaraciones condicional (if...elif...else)** # + colab={"base_uri": "https://localhost:8080/"} id="YFM5nuQZVeyL" outputId="f4756cca-0eb8-40da-f9b5-5b992c2b7204" 2# asignar a una variable un valor a = input('Ingrese el primero número:') b = input('Ingrese el segundo número:') # condicional if a == b: print('Los números', a, 'y', b, 'son iguales') elif a < b: print('El número', a, 'es menor que', b) else: print('El número', b, 'es menor que', a) # + [markdown] id="YWw-tDyMuxcJ" # ### **Controles de flujo (while loops y for loops)** # + id="_UW2jaZyXMTM" # crear una lista uni = ['UNP', 'UIS', 'UNALM', 'UDO', 'UFRJ'] country = ['Perú', 'Colombia', 'México', 'Venezuela', 'Brasil'] # + colab={"base_uri": "https://localhost:8080/"} id="Vk_2ohZHvWlj" outputId="cafdd50d-82a4-4321-a852-b14f4b2efac4" # usando un bucle for for i in range(len(uni)): print(uni[i], country[i], sep = ' -> ') # + colab={"base_uri": "https://localhost:8080/"} id="A0P37M9vwPmz" outputId="b8146f1b-31de-4d0c-a6fd-acce9979423f" # usando la función 'enumerate()' for i, valor in enumerate(uni): print(i, uni[i], country[i], sep=' -> ') # + colab={"base_uri": "https://localhost:8080/"} id="9Rrju0LJvblH" outputId="f11dafe1-0473-488f-9905-a5e260b2f087" # usando un bucle while i = 0 while i < len(uni): print(i, uni[i], country[i], sep = ' -> ') i += 1 # + [markdown] id="UYIZrS_6OlCX" # ## **2. Principales bibliotecas de Python** # Una librería o biblioteca es un conjunto de funciones implementadas por otro programador que nos facilitan realizar tareas, principalmente porque no debemos volver a programar este código. Será vital el uso de librerías para poder analizar los archivos con información. # # Las bibliotecas más importantes para el análisis de datos: ***Numpy, Pandas, Matplotlib***. # + id="8yjgDVwrOlCY" # importar librerías, estas ya han sido preinstaldas import numpy as np import matplotlib.pyplot as plt import pandas as pd # + [markdown] id="tSyGrkAlOlCY" # ### **Numpy: Scientific Computing** # # <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/1/1a/NumPy_logo.svg/1200px-NumPy_logo.svg.png" width = 300> # # [Numpy](https://numpy.org/) (**Num**-ber **Py**-thon) es la libreria estándar de Python para trabajar con vectores y matrices. Extiende la funcionalidad de python permitiendo el uso de expresiones vectorizadas (tales como las de Matlab, competencia en el campo de cálculo cientifico). # # + id="f7jAFqHUOlCZ" colab={"base_uri": "https://localhost:8080/"} outputId="a8819d14-e0a5-421b-96b4-27987834dd0c" # creación de matrices np.array([1, 2, 3, 4, 5]) # + colab={"base_uri": "https://localhost:8080/"} id="_4WIrIaB5q6g" outputId="e323b028-a342-4f8d-df24-556257e44990" # creación de una matriz con tipo de variable float np.array([1, 2, 3, 4, 5], dtype='float32') # + id="9eDpqztSOlCb" colab={"base_uri": "https://localhost:8080/"} outputId="a849da4e-7f30-498e-fc81-9769515c0eaa" # genera una matriz 1-d de 1 a 36, con un incremento de 3 np.arange(1, 36, 3) # + id="7W2CHnlAOlCc" colab={"base_uri": "https://localhost:8080/"} outputId="def12ec4-e208-4954-f26f-bbc2bd2bac56" # crear 9 puntos de igual espaciado en el rango de 0 a 100 np.linspace(0, 100, 12) # + id="LsytyA8SOlCd" colab={"base_uri": "https://localhost:8080/"} outputId="5200250e-99c9-49ca-ab05-adc4685faba3" # crea una matriz de 34 valores aleatorios en el rango 0-100 np.random.randint (1, 100, 34) # + id="k6w2zzz4OlCe" # operaciones básicas de ndarray array_A = np.array([[1, 2, 5], [7, 8, 2], [5, 7, 9]]) array_B = np.array([[5, 3, 1], [6, 7, 9], [2, 1, 2]]) # + id="StYFaw6yOlCe" colab={"base_uri": "https://localhost:8080/"} outputId="c676d3fd-e408-47dd-9849-e9936f799a56" # suma de matrices print(array_A - array_B) print() print(array_A + array_B) print() print(np.add(array_A, array_B)) # + id="huO5zALyOlCf" colab={"base_uri": "https://localhost:8080/"} outputId="32f91eca-9de2-4c7d-9844-974f4aa696b6" # producto de elementos print(array_A * array_B) print() print(array_A @ array_B) # + [markdown] id="I0_c9pe6OlCf" # ### **Matplotlib: Python Data Visualization Library** # # <img src="https://matplotlib.org/_static/logo2.png" width=400> # # [Matplotlib](https://matplotlib.org/) (**Mat**-h **Plot** **Lib**-rary) es la libreria estandar de Python para realizar gráficos de diversos tipos a partir de datos contenidos en listas o arrays en el lenguage de programación Python y su extensión matemática NumPy, es muy flexible y tiene muchos valores predeterminados que te ayudarán muchísimo en tú trabajo. # + id="ynoKFlefOlCg" # 100 números linealmente espacios x = np.linspace(-np.pi, np.pi, 100) # función seno, y = sen(X) y = np.sin(x) # rápida visualización # plt.plot(x, y) # + [markdown] id="grqLJZEmOlCg" # Ahora grafiquemos dos funciones más, $y=2sen(x)$ y $y=3sen(x)$. Esta vez modifiquemos algunos parámetros. # + id="agAkiOlVOlCh" colab={"base_uri": "https://localhost:8080/", "height": 360} outputId="45f93171-8f4c-42ea-f810-b378da79d9e0" # tamaño del gráfico plt.figure(figsize = (8, 5)) # ploteo de las tres funciones plt.plot(x, y, 'red', label='y = sin(x)') plt.plot(x, 2*y, 'green', label='y = 2sin(x)') plt.plot(x, 3*y, 'blue', label='y = 3sin(x)') # insertar título del gráfico plt.title('Funciones trigonométricas', size=20, pad=10) # insertar etiqueta de los ejes plt.xlabel('X') plt.ylabel('Y') # insertar legenda y posición plt.legend(loc='upper left') # limitar el eje x plt.xlim(-4, 4) plt.grid() # mostrar gráfico plt.show() # + [markdown] id="q_RADAaOOlCh" # ### **Pandas: Data Analysis Library** # # <img src="https://i.ibb.co/BKgmPsP/1200px-Pandas-logo-svg.png" width=400px> # # [Pandas](https://pandas.pydata.org/) (**Pa**-nel **Da** -ta) es una herramienta de manipulación y análisis de datos de código abierto rápida, potente, flexible y fácil de usar, construida sobre el lenguaje de programación Python. Asi mismo, un **dataframe** es una estructura de datos bidimensional, es decir, los datos se alinean de forma tabular en filas y columnas. # + id="AcXJLrYWOlCi" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="603ca61a-55bf-4bc2-c37a-6ed6c02240bc" # convertir los resultados trigonométricos en un dataframe ('hoja de cálculo') fun_trig = pd.DataFrame({'X': x, 'Sin(x)': y, '2 Sin(x)': 2*y, '3 Sin(x)': 3*y}) # visualización del dataframe fun_trig # + id="Kd3A1pGzOlCj" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="4256e6c8-8ef1-4434-bb81-544e8ca471a6" # mostrar primeras/últimas 5 filas del dataframe fun_trig.head() # fun_trig.tail() # + colab={"base_uri": "https://localhost:8080/"} id="c5SHlqqd_04x" outputId="00483026-9ae5-4c76-c064-97b4c2e8e894" # mostrar el nombre del índice y las columnas # fun_trig.index fun_trig.columns # + colab={"base_uri": "https://localhost:8080/", "height": 297} id="v7pW2KYHBAR9" outputId="70bab814-b135-4eb3-8487-21377df6b947" # descripción de estadística básica fun_trig.describe() # + id="sMM3tiBmOlCj" colab={"base_uri": "https://localhost:8080/"} outputId="9d1ca0e5-8bbc-4585-c07a-6ed982decfad" # operaciones en el dataframe print('La desviacíon típica de Sen(x) es: ', fun_trig['Sin(x)'].std()) print('La varianza del Sen(x) es: ', fun_trig['Sin(x)'].var()) print('El percentil 90 del Sen(x) es: ', fun_trig['Sin(x)'].quantile(0.9)) # + [markdown] id="KQzKIhbaOlCk" # ## **3. Conjunto de datos abiertos de Exploración** # # Acceder al conjunto de datos gratuito de la [Universidad de Kansas](https://ku.edu/), estos archivos ZIP contienen todos los archivos LAS disponibles en [Kansas Geological Survey](http://www.kgs.ku.edu/PRS/Scans/Log_Summary/index.html) **(KGS)**. Descargue el archivo comprimido de **`2020.zip`** y extraiga el archivo **`1051704679.LAS`**. # + id="aDkyiIM1OlCk" colab={"base_uri": "https://localhost:8080/"} outputId="8081f3d0-5ca3-42d2-cd94-2074139b10ae" # obtener el conjunto de datos del repositorio abierto (KGS) # !wget 'http://www.kgs.ku.edu/PRS/Scans/Log_Summary/2020.zip' # + id="4jwpiNerOlCl" # descomprima el archivo y guárdelo en el directorio 'KGS' # !unzip '/content/2020.zip' -d '/content/KGS_Data' # descomprima el archuivo las y guárdelo en el directorio 'logs' # !unzip '/content/KGS_Data/1051704679.zip' -d '/content/KGS_Data/log_1051704679' # + [markdown] id="M1acj-AdxIir" # # + Muchos desarrolladores de Python usan una herramienta llamada PIP para instalar paquetes a Python. # + id="w8VlS1wMOlCl" colab={"base_uri": "https://localhost:8080/"} outputId="f86fa1e2-27a8-4771-e181-d93a863c76dd" # instalar la biblioteca lasio para leer el registro de pozo # !pip install lasio # + id="7mprzPJNOlCm" # importar la bibliteca import lasio # lea el archivo LAS path = '/content/KGS_Data/log_1051704679/1051704679.las' well = lasio.read(path) # + colab={"base_uri": "https://localhost:8080/"} id="7UddOYw7c9rj" outputId="e9f011a2-44f2-43ba-d5ee-f84f092676e6" # información de registro en la parte del encabezado del archivo LAS # print(well.keys()) well.curves # + [markdown] id="0WddkcP5OLeQ" # ### **Visualización de los registros de pozo (Well Log)** # + colab={"base_uri": "https://localhost:8080/", "height": 299} id="ia44uVkgx0ON" outputId="af9acd21-e18b-4ab5-9508-b99b5eb4b8c1" # tamaño del gráfico plt.figure(figsize = (15, 4)) # traza de los datos del registro de GR plt.plot(well['DEPT'], well['GR'], color = 'black') plt.title('Gamma Ray', size = 18) plt.xlabel('GR (api)'); plt.ylabel('Depth (m)') plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 762} id="E_eNPMjmGJzP" outputId="f7d83da5-d822-431b-9f6c-4608783a3a77" # tamaño y título del gráfico plt.figure(figsize=(12,10)) plt.suptitle('Well Logs: RUMBACK B #21-2', size=20, y =1.03) # traza de los registros: SP-GR-RT-RHOB-NPHI plt.subplot(1, 5, 1) plt.plot(well['SP'], well['DEPT'], color='green') plt.ylim(max(well['DEPT']), min(well['DEPT'])) plt.title('Self Potencial (SP)') plt.grid() plt.subplot(1, 5, 2) plt.plot(well['GR'], well['DEPT'], color='red') plt.ylim(max(well['DEPT']), min(well['DEPT'])) plt.title('Gamma Ray (GR)') plt.grid() plt.subplot(1, 5, 3) plt.plot(well['RT'], well['DEPT'], color='blue') plt.ylim(max(well['DEPT']), min(well['DEPT'])) plt.title('Resistivity (RT)') plt.semilogx() plt.grid() plt.subplot(1, 5, 4) plt.plot(well['RHOB'], well['DEPT'], color='orange') plt.ylim(max(well['DEPT']), min(well['DEPT'])) plt.title('Density (RHOB)') plt.grid() plt.subplot(1, 5, 5) plt.plot(well['NPHI'], well['DEPT'], color='purple') plt.ylim(max(well['DEPT']), min(well['DEPT'])) plt.title('Neutron Porosity (NPHI)') plt.grid() # establecer espacio entre los registros de pozo plt.tight_layout(1) plt.show() # + [markdown] id="4xElycvOtqRk" # ### **Visualización de datos en un crossplot (scatterplot)** # + colab={"base_uri": "https://localhost:8080/", "height": 409} id="9iTJSCKANcK2" outputId="fc92abbf-2f7c-407a-f351-5fc3e887c2f6" # tamaño del gráfico plt.figure(figsize=(10,6)) # traza del crossplot (RHOB-NPHI-DEPTH) plt.scatter( well['NPHI'], well['RHOB'], c = well['DEPT']) plt.title('Neutron - Density Plot', size = 20) plt.xlabel('NPHI (v/v)') plt.ylabel('RHOB (g/cc)') plt.colorbar() plt.show() # + [markdown] id="UvR1_lr43wLc" # ## **4. Conjunto de datos abiertos de Producción** # # Acceder al conjunto datos del historial de producción del [Campo Volve](https://www.equinor.com/en/how-and-why/digitalisation-in-our-dna/volve-field-data-village) en el Mar del Norte desde una base de datos disponible en [Zenodo](https://zenodo.org/) **(Alfonso Reyes)** y mostramos el plot de producción. # Mayor información del conjunto de datos, ir a [volve_eclipse_reservoir_v0.1](https://zenodo.org/record/2596620#.YEcF2GgzbIU). # + colab={"base_uri": "https://localhost:8080/"} id="ICscejIW5LEF" outputId="bed62dae-675e-4170-9db7-efe77749e78f" # obtener el conjunto de datos del repositorio abierto (Zenodo-A.R.) # !wget 'https://zenodo.org/record/2596620/files/f0nzie/volve_eclipse_reservoir-v0.1.zip' # + id="saw6YxQM9P9l" colab={"base_uri": "https://localhost:8080/"} outputId="6757cc80-2de5-4d51-8344-8be3cf18690d" # descomprima el archivo y guárdelo en el directorio 'VolveData' # !unzip '/content/volve_eclipse_reservoir-v0.1.zip' -d '/content/Volve_Data' # + colab={"base_uri": "https://localhost:8080/", "height": 419} id="vdTnoFJYWf_A" outputId="213808d5-046d-498f-dd71-2dcbf4132dfa" # defina el directorio del archivo de los datos de producción del campo Volve filepath = '/content/Volve_Data/f0nzie-volve_eclipse_reservoir-413a669/inst/rawdata/Volve production data.xlsx' # leer el excel con el directorio definido df = pd.read_excel(filepath, sheet_name='Monthly Production Data') df # + colab={"base_uri": "https://localhost:8080/"} id="EpShjmbPl8Md" outputId="e34df351-33d6-4af8-d571-1711fa6be43f" # ver cuantos pozos distintos tiene el archivo excel df['Wellbore name'].unique() # + [markdown] id="ZaKhQjk3O3_6" # ### **Visualización de la data de producción (History Matching)** # + colab={"base_uri": "https://localhost:8080/", "height": 419} id="Pj8Ecd_q4Mrw" outputId="91bae9d0-2dde-4fce-b0cc-9c0e75649cda" # conjunto de datos solo del pozo 15/9-F-12 df[df['Wellbore name'] == '15/9-F-12'] # + id="ZsMeqsmOHGbh" # seleccionamos la data de los pozos 15/9-F-12 well_prod = df[df['Wellbore name'] == '15/9-F-12'] well_prod.reset_index(drop = True, inplace = True) # definir los rates de cada fluido oil_rate = well_prod['Oil'] gas_rate = well_prod['Gas'] water_rate = well_prod['Water'] # definir el tiempo en meses a partir de Febrero del 2008 t = np.arange(len(well_prod)) # + colab={"base_uri": "https://localhost:8080/", "height": 463} id="x6wsGME6Hs2w" outputId="010f7419-a83b-4e34-b33d-b43324e89ec8" # tamaño del gráfico plt.figure(figsize=(14, 7)) # traza de los datos de producción de los fluidos plt.plot(t, oil_rate, label = 'Oil Production', lw = 2.4, color = 'green') plt.plot(t, gas_rate, label = 'Gas Production', lw = 2.4, color = 'red') plt.plot(t, water_rate, label = 'Water Production', lw = 2.4,color = 'blue') plt.title('History Matching over Months Since February 2008 - 15/9-F-12', size=15) plt.xlabel('Months Since February 2008', size=12) plt.ylabel('Monthly Production (Sm3)', size=12) plt.legend(fontsize = 'large') plt.semilogy(True) plt.ylim(10, 0.1e+9) plt.grid(which="both", color = 'steelblue') plt.show() # + [markdown] id="is0BNWla_4wd" # ### **Análisis de curvas de declinación (DCA) y pronósticos de producción** # + colab={"base_uri": "https://localhost:8080/", "height": 409} id="4LKT9wjuIXFx" outputId="f9e5b83d-50ca-408c-e565-f1711d752c6e" # graficar la producción de petróleo vs tiempo en meses plt.figure(figsize=(12, 6)) # traza de los datos de producción del petróleo plt.step(t, oil_rate, label = 'Well 15/9-F-12', lw = 2.4, color = 'green') plt.title('Oil Monthly Production over Months Since February 2008', size=15) plt.xlabel('Months Since February 2008', size=12) plt.ylabel('Oil Monthly Production (Sm3)', size=12) plt.axvspan(20, 82, color = 'lime', alpha = 0.25, lw = 2.5) plt.grid(axis = 'y', color = 'steelblue') plt.legend(fontsize = 'large') plt.show() # + id="okXVgMmNAGwU" # delimitar los datos de la región especificada well = well_prod[20: 82] # definir la producción y el tiempo q = well['Oil'] t = np.arange(len(well['Oil'])) # + [markdown] id="rVl7cCRs_SCp" # A continuación, hagamos el ajuste de la curva. En el ajuste de curva, siempre se recomienda normalizar nuestro conjunto de datos. El método más conocido en la normalización de datos, es el de dividir cada dato por su valor máximo. # + id="qb61E1Ip_UEW" # normalizar la producción y el tiempo t_normalized = t / max(t) q_normalized = q / max(q) # + [markdown] id="2pUtKoFu9hkN" # $$ Curva\ hiperbólica\ de\ Arps:\ \ \ q=\frac{q_i}{(1+b \cdot d_i \cdot t)^{1 / b}} $$ # + id="AtWdgy4mOjEY" # definamos la función de la curva hiperbólica de Arps def hyperbolic(t, qi, di, b): return qi / (np.abs((1 + b * di * t))**(1/b)) # + [markdown] id="LWgwZ8oOBOtR" # Para ajustar la curva haremos uso de la biblioteca de Scipy. Desde este paquete importaremos **`curve_fit`**. # + colab={"base_uri": "https://localhost:8080/"} id="aKfa4GNG7e9f" outputId="f4e7888c-0af8-4342-9fc9-5736ca4f03af" # importar curve_fit desde scipy.optimize from scipy.optimize import curve_fit # encontrar los valores de qi, di, b popt, pcov = curve_fit(hyperbolic, t_normalized, q_normalized) print('Matriz de popt:\n', popt) print('Matriz de pcov:\n', pcov) # + [markdown] id="PkX3TLO_CxL7" # Debido a que habíamos ajustado los datos normalizados, ahora necesitamos desnormalizar nuestros parámetros ajustados. # # $$q=\frac{q_i \cdot q_{max}}{(1+b \cdot \frac{d_i}{t_{max}} \cdot t)^{1 / b}}$$ # + colab={"base_uri": "https://localhost:8080/"} id="8rKU_lBS7o4r" outputId="b79e221a-46d1-46ee-fce9-6cdcf45c2579" # asignar los valores encontrados qi, di, b = popt # desnormalizamos qi y di qi = qi * max(q) di = di / max(t) # imprima los valores: qi, di y b print('Initial production rate:', np.round(qi, 3), 'Sm3') print('Initial decline rate:', np.round(di, 3), 'Sm3/m') print('Decline coefficient:', np.round(b, 3)) # + id="AX345nrE78OB" # ahora podemos pronosticar la tasa de producción t_forecast = np.arange(64) q_forecast = hyperbolic(t_forecast, qi, di, b) # + [markdown] id="kO7MJjGXKFES" # Finalmente, graficamos nuestro resultado DCA **(Decline Curve Analysis)**. # + colab={"base_uri": "https://localhost:8080/", "height": 415} id="PVMjhxU8bnvO" outputId="cb680937-46a6-4c61-be0f-2a3158cf77b7" # graficar la producción de petróleo con los pronósticos plt.figure(figsize=(12, 6)) plt.scatter(t, q, label = 'Production Data', color = 'darkblue') plt.plot(t_forecast, q_forecast, label = 'Forecast', ls = '--', lw = 2.4, color = 'red') plt.title('Oil Monthly Production (Well 15/9-F-12) - Result of DCA', size = 16, pad = 12) plt.xlabel('Months Since February 2008', size = 12) plt.ylabel('Oil Monthly Production (Sm3)', size = 12) plt.grid(axis = 'y', color = 'steelblue') plt.legend(fontsize = 'large') plt.show() # + [markdown] id="8FvHXGCxj4MS" # <big><p align="right"><b><FONT COLOR="DB0000">SPWLA Student Chapter</font> - <FONT COLOR="0014C0">Grupo de Investigación TRM</font></p></big> #
notebook/ws_spwlaunisc.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/PatrickAttankurugu/pandas/blob/main/notebooks/Series_At_A_Glance.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="IxWhJ8IJ3GK1" import pandas as pd import numpy as np # + colab={"base_uri": "https://localhost:8080/"} id="Qv0r_b0L3p4C" outputId="f02bff6b-868e-4813-8a43-757e7f5ef876" students=['Patrick','Attankurugu','Azuma','Young'] #But this is just a python list. To be able to get a pandas series of this list, let's pass the series aguement #as indicated in the cell below print(students) # + colab={"base_uri": "https://localhost:8080/"} id="jMENCp3-3s1_" outputId="ee055a7e-4e21-46b1-9824-86fbef972a69" pd.Series(students) #Now we have our list as a pandas Series # + [markdown] id="0gn5Yq5MJB8-" # A series is a one-dimensional sequence of values with associated labels # + [markdown] id="rISb3vIW4rAu" # # + colab={"base_uri": "https://localhost:8080/"} id="Eh78bdUr4lyy" outputId="00f59258-9aa4-46b3-f20e-b74a260b3210" ages=[20,30,2,58] pd.Series(ages) # + colab={"base_uri": "https://localhost:8080/"} id="KXYQHxdk_-8S" outputId="3adafe14-169a-46fd-e9ec-df505a70fa59" heights=[225.4,555.5,67.4,89.6] pd.Series(heights) # + [markdown] id="yBINswSsHmRn" # Unlike Numpy's np.arrays, Pandas pd.Series supports mixed data types # # --- # # # + id="JOlPBMSZIALq" # + [markdown] id="IURj1j7vBuOJ" # # + [markdown] id="a8I0quAaBx0Y" # Numpy arrays are more efficient than python list. It's so because numpy stores homogenious data types whilst python list by nature is designed to store all kinds of data in the same list # + id="mGNn5BehH-so" mixed=[True,'say',{'my_mood:100'}] # + colab={"base_uri": "https://localhost:8080/"} id="kfS0BQW_CFpo" outputId="dc923243-b979-4030-e2b4-eba57f8486fc" pd.Series(mixed) #Pandas did not complain, let's check for Numpy # + colab={"base_uri": "https://localhost:8080/"} id="bQYjLKABIcZ2" outputId="714b65d0-5602-42a9-c4e3-bc8b2887fd45" np.array(mixed) # + [markdown] id="wXef4NEDO2a-" # Parameters Vs Aguements # Parameters refers to the variables and aguements refers to the values that we pass to those variables. An example is indicated below. # pd.Series(data=students) # + id="C7G8lN3dIlRN" booksList=['Think and Grow Rich','The Richest man in Babylon','The Obstacle is the Way'] # + colab={"base_uri": "https://localhost:8080/"} id="kXYBTQYTSb9t" outputId="46707976-87b1-4ef9-f7d4-385bf985a37d" pd.Series(booksList) # + colab={"base_uri": "https://localhost:8080/"} id="sZWZ8oSPSc0F" outputId="6ef544bb-c7c7-44e6-ef78-500fd8233b34" pd.Series(714) # + [markdown] id="6TEqwgqebC1P" # But we can get a bit more advanced and provide our own labels. let's do this # + colab={"base_uri": "https://localhost:8080/"} id="85e0uMEDY2Ix" outputId="a958a198-35dc-4e18-96c8-3b0135ae9e57" pd.Series(data=booksList,index=['About finance','Also about finance','You have to break through']) #in this code, we change the labels of th data # + [markdown] id="gmsQ92LWcPJE" # But we can also drop the parameters and use only the arguments straight away # + colab={"base_uri": "https://localhost:8080/"} id="7lE1ugE3bq2I" outputId="db24ddfb-5be9-4538-c340-74f61afa5510" pd.Series(booksList,['About finance','Also about finance','You have to break through']) # + id="hBRElzqaca4S"
notebooks/Series_At_A_Glance.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + pycharm={"is_executing": false} from matplotlib import pyplot as plt from HodaDatasetReader import read_hoda_cdb, read_hoda_dataset from sklearn.metrics import classification_report import numpy as np from keras.models import Sequential from keras.layers.core import Dense, Dropout, Activation from keras.utils import np_utils, to_categorical print('Reading train dataset (Train 60000.cdb)...') X_train, Y_train = read_hoda_dataset(dataset_path='./DigitDB/Train 60000.cdb', images_height=32, images_width=32, one_hot=False, reshape=True) print('Reading test dataset (Test 20000.cdb)...') X_test, Y_test = read_hoda_dataset(dataset_path='./DigitDB/Test 20000.cdb', images_height=32, images_width=32, one_hot=False, reshape=True) # + pycharm={"is_executing": false} x_train = X_train.astype('float32') / 255 x_test = X_test.astype('float32') / 255 y_train = to_categorical(Y_train) y_test = to_categorical(Y_test) print(y_train[2]) # + [markdown] pycharm={} # # B # + pycharm={"is_executing": false} network = Sequential() network.add(Dense(512, activation='relu', input_shape=(32 * 32,))) network.add(Dense(10, activation='softmax')) network.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) history = network.fit(x_train, y_train, epochs=2) score = network.evaluate(x_test, y_test) print('Test score:', score[0]) print('Test accuracy:', score[1]) print(history) y_pred = network.predict(x_test,verbose = 1) y_pred_bool = np.argmax(y_pred, axis=1) print(classification_report(Y_test, y_pred_bool)) plt.plot(history.history['acc']) plt.title('Train accuracy') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='upper left') plt.show() plt.plot(history.history['loss']) plt.title('Train loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='upper left') plt.show() # + [markdown] pycharm={} # # C - Adding Dropout # + pycharm={"is_executing": false} network_c = Sequential() network_c.add(Dropout(0.2, input_shape=(32*32,))) network_c.add(Dense(512, activation='relu')) network_c.add(Dropout(0.5)) network_c.add(Dense(10, activation='softmax')) network_c.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) history_c = network_c.fit(x_train, y_train, epochs=2) score_c = network_c.evaluate(x_test, y_test) print('Test score:', score_c[0]) print('Test accuracy:', score_c[1]) y_pred = network.predict(x_test,verbose = 1) y_pred_bool = np.argmax(y_pred, axis=1) print(classification_report(Y_test, y_pred_bool)) plt.plot(history_c.history['acc']) plt.title('Train accuracy') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='upper left') plt.show() plt.plot(history_c.history['loss']) plt.title('Train loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='upper left') plt.show() # + [markdown] pycharm={} # ### As we can see, using dropout increases the accuracy on training and testing. # + [markdown] pycharm={} # # D - Validation Set # + pycharm={"is_executing": false} network_d = Sequential() network_d.add(Dense(512, activation='relu')) network_d.add(Dense(10, activation='softmax')) network_d.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) history_d = network_d.fit(x_train, y_train, epochs=2, validation_split = 0.2) score_d = network_d.evaluate(x_test, y_test) print('Test score:', score_d[0]) print('Test accuracy:', score_d[1]) y_pred = network.predict(x_test,verbose = 1) y_pred_bool = np.argmax(y_pred, axis=1) print(classification_report(Y_test, y_pred_bool)) plt.plot(history_d.history['acc']) plt.title('Train accuracy') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='upper left') plt.show() plt.plot(history_d.history['loss']) plt.title('Train loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='upper left') plt.show() # + [markdown] pycharm={} # ### As we can see, using validation set increases the accuracy on training and testing and decreases loss.
code notebook.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: myenv # language: python # name: myenv # --- # + # Auto-reload modules when accessing them: # %load_ext autoreload # %autoreload 2 # Echo all output: from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" # Import own functions for image analysis: import analysis_funs as va data = va.load_data('attributes.json', prefix = '1.2/VG/1.2/') assert len(data) == 108077 # - data[0] synset = 'person.n.01' word_cnt, attr_shared_cnt, attr_cnt, query_cnt = va.count_attributes_per_synset(data, synset) print("-"*80 + '\n\n') va.plot_and_output_csv([word_cnt], ['synset'], 25, f"Attribute counts for: {synset}", 'attributes/count_' + synset, batch = False) print("-"*80 + '\n\n') va.plot_and_output_csv([attr_shared_cnt], ['synset'], 25, f"Attribute combinations counts: {synset}", 'attributes/combi_count_' + synset, batch = False) print("-"*80 + '\n\n') va.plot_and_output_csv([attr_cnt], ['synset'], 25, f"Object attribute counts: {synset}", 'attributes/raw_numbers_' + synset, batch = False) # + synset_attr_cnt, name_attr_cnt, synset_img_cnt, name_img_cnt, matches, rows = va.count_attribute_synsets(data, va.human_synsets) va.plot_and_output_csv([synset_attr_cnt], ['synset'], 40, f"Attributes with people synsets in attribute data", 'attributes/people_syns_attributes') va.plot_and_output_csv([name_attr_cnt], ['name'], 40, f"Attributes with people names in attribute data", 'attributes/people_names_attributes' ) print(f"Total numbers of attributes matched: {matches}") va.plot_and_output_csv([synset_img_cnt], ['synset'], 40, f"Images with people synsets in attribute data", 'attributes/people_syns_images') va.plot_and_output_csv([name_img_cnt], ['name'], 40, f"Images with people names in attribute data", 'attributes/people_names_images' ) print(f"Total numbers of images matched: {rows}")
Visual Genome - Attributes.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Converting *Exact* ADM Initial Data in the Spherical or Cartesian Basis to BSSN Initial Data in the Desired Curvilinear Basis # ## Author: <NAME> # ### Formatting improvements courtesy <NAME> # # ## This module is meant for use only with initial data that can be represented exactly in ADM form, either in the Spherical or Cartesian basis. I.e., the ADM variables are given $\left\{\gamma_{ij}, K_{ij}, \alpha, \beta^i\right\}$ *exactly* as functions of $(r,\theta,\phi)$ or $(x,y,z)$, respectively. If instead the initial data are given only numerically (e.g., through an initial data solver), then [the Numerical-ADM-Spherical/Cartesian-to-BSSNCurvilinear module](Tutorial-ADM_Initial_Data-Converting_Numerical_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb) will need to be used instead. # # # ### NRPy+ Source Code for this module: [BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py) # # # # ## Introduction: # Given the ADM variables: # # $$\left\{\gamma_{ij}, K_{ij}, \alpha, \beta^i\right\}$$ # # in the Spherical or Cartesian basis, and as functions of $(r,\theta,\phi)$ or $(x,y,z)$, respectively, this module documents their conversion to the BSSN variables # # $$\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\phi, K, \bar{\Lambda}^{i}, \alpha, \beta^i, B^i\right\},$$ # # in the desired curvilinear basis (given by reference_metric::CoordSystem). Then it rescales the resulting BSSNCurvilinear variables (as defined in [the covariant BSSN formulation tutorial](Tutorial-BSSN_formulation.ipynb)) into the form needed for solving Einstein's equations with the BSSN formulation: # # $$\left\{h_{i j},a_{i j},\phi, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}.$$ # # We will use as our core example in this module UIUC initial data, which are ([as documented in their NRPy+ initial data module](Tutorial-ADM_Initial_Data-UIUC_BlackHole.ipynb)) given in terms of ADM variables in Spherical coordinates. # # Table of Contents # $$\label{toc}$$ # # This module is organized as follows: # # 1. [Step 1](#initializenrpy): Initialize core Python/NRPy+ modules # 1. [Step 2](#cylindrical): Desired output BSSN Curvilinear coordinate system set to Cylindrical, as a proof-of-principle # 1. [Step 3](#admfunc): Converting ADM variables to functions of (${\rm xx0},{\rm xx1},{\rm xx2}$) # 1. [Step 4](#adm_jacobian): Applying Jacobian transformations to get in the correct ${\rm xx0},{\rm xx1},{\rm xx2}$ basis # 1. [Step 5](#adm2bssn): Perform the ADM-to-BSSN conversion for 3-metric, extrinsic curvature, and gauge quantities # 1. [Step 5.a](#adm2bssn_gamma): Convert ADM $\gamma_{ij}$ to BSSN $\bar{\gamma}_{ij}$ # 1. [Step 5.b](#admexcurv_convert): Convert the ADM extrinsic curvature $K_{ij}$ # 1. [Step 5.c](#lambda): Define $\bar{\Lambda}^i$ # 1. [Step 5.d](#conformal): Define the conformal factor variable $\texttt{cf}$ # 1. [Step 6](#rescale): Rescale tensorial quantities # 1. [Step 7](#code_validation): Code Validation against BSSN.ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear NRPy+ module # 1. [Step 8](#latex_pdf_output): Output this module to $\LaTeX$-formatted PDF # <a id='initializenrpy'></a> # # # Step 1: Initialize core Python/NRPy+ modules \[Back to [top](#toc)\] # $$\label{initializenrpy}$$ # # Step P0: Import needed Python/NRPy+ modules import sympy as sp import NRPy_param_funcs as par from outputC import * import indexedexp as ixp import reference_metric as rfm import BSSN.UIUCBlackHole as uibh import BSSN.BSSN_quantities as Bq # The EvolvedConformalFactor_cf parameter is used below # <a id='cylindrical'></a> # # # Step 2: Desired output BSSN Curvilinear coordinate system set to Cylindrical, as a proof-of-principle \[Back to [top](#toc)\] # $$\label{cylindrical}$$ # + # The ADM & BSSN formalisms only work in 3D; they are 3+1 decompositions of Einstein's equations. # To implement axisymmetry or spherical symmetry, simply set all spatial derivatives in # the relevant angular directions to zero; DO NOT SET DIM TO ANYTHING BUT 3. # Step P1: Set spatial dimension (must be 3 for BSSN) DIM = 3 # Set the desired *output* coordinate system to Cylindrical: par.set_parval_from_str("reference_metric::CoordSystem","Cylindrical") rfm.reference_metric() # Import UIUC Black Hole initial data uibh.UIUCBlackHole(ComputeADMGlobalsOnly=True) Sph_r_th_ph_or_Cart_xyz = [uibh.r,uibh.th,uibh.ph] alphaSphorCart = uibh.alphaSph betaSphorCartU = uibh.betaSphU BSphorCartU = uibh.BSphU gammaSphorCartDD = uibh.gammaSphDD KSphorCartDD = uibh.KSphDD # - # <a id='admfunc'></a> # # # Step 3: Converting ADM variables to functions of ${\rm xx0},{\rm xx1},{\rm xx2}$ \[Back to [top](#toc)\] # $$\label{admfunc}$$ # # ADM variables are given as functions of $(r,\theta,\phi)$ or $(x,y,z)$. We convert them to functions of $(xx0,xx1,xx2)$ using SymPy's `subs()` function. # + # Step 1: All input quantities are in terms of r,th,ph or x,y,z. We want them in terms # of xx0,xx1,xx2, so here we call sympify_integers__replace_rthph() to replace # r,th,ph or x,y,z, respectively, with the appropriate functions of xx0,xx1,xx2 # as defined for this particular reference metric in reference_metric.py's # xxSph[] or xxCart[], respectively: # UIUC Black Hole initial data are given in Spherical coordinates. CoordType_in = "Spherical" # Make sure that rfm.reference_metric() has been called. # We'll need the variables it defines throughout this module. if rfm.have_already_called_reference_metric_function == False: print("Error. Called Convert_Spherical_ADM_to_BSSN_curvilinear() without") print(" first setting up reference metric, by calling rfm.reference_metric().") exit(1) # Note that substitution only works when the variable is not an integer. Hence the # if isinstance(...,...) stuff: def sympify_integers__replace_rthph_or_Cartxyz(obj, rthph_or_xyz, rthph_or_xyz_of_xx): if isinstance(obj, int): return sp.sympify(obj) else: return obj.subs(rthph_or_xyz[0], rthph_or_xyz_of_xx[0]).\ subs(rthph_or_xyz[1], rthph_or_xyz_of_xx[1]).\ subs(rthph_or_xyz[2], rthph_or_xyz_of_xx[2]) r_th_ph_or_Cart_xyz_of_xx = [] if CoordType_in == "Spherical": r_th_ph_or_Cart_xyz_of_xx = rfm.xxSph elif CoordType_in == "Cartesian": r_th_ph_or_Cart_xyz_of_xx = rfm.xxCart else: print("Error: Can only convert ADM Cartesian or Spherical initial data to BSSN Curvilinear coords.") exit(1) alphaSphorCart = sympify_integers__replace_rthph_or_Cartxyz( alphaSphorCart, Sph_r_th_ph_or_Cart_xyz, r_th_ph_or_Cart_xyz_of_xx) for i in range(DIM): betaSphorCartU[i] = sympify_integers__replace_rthph_or_Cartxyz( betaSphorCartU[i], Sph_r_th_ph_or_Cart_xyz, r_th_ph_or_Cart_xyz_of_xx) BSphorCartU[i] = sympify_integers__replace_rthph_or_Cartxyz( BSphorCartU[i], Sph_r_th_ph_or_Cart_xyz, r_th_ph_or_Cart_xyz_of_xx) for j in range(DIM): gammaSphorCartDD[i][j] = sympify_integers__replace_rthph_or_Cartxyz( gammaSphorCartDD[i][j], Sph_r_th_ph_or_Cart_xyz, r_th_ph_or_Cart_xyz_of_xx) KSphorCartDD[i][j] = sympify_integers__replace_rthph_or_Cartxyz( KSphorCartDD[i][j], Sph_r_th_ph_or_Cart_xyz, r_th_ph_or_Cart_xyz_of_xx) # - # <a id='adm_jacobian'></a> # # # Step 4: Applying Jacobian transformations to get in the correct ${\rm xx0},{\rm xx1},{\rm xx2}$ basis \[Back to [top](#toc)\] # $$\label{adm_jacobian}$$ # # All ADM initial data quantities are now functions of xx0,xx1,xx2, but they are still in the Spherical or Cartesian basis. We can now directly apply Jacobian transformations to get them in the correct xx0,xx1,xx2 basis. The following discussion holds for either Spherical or Cartesian input data, so for simplicity let's just assume the data are given in Spherical coordinates. # # All ADM tensors and vectors are in the Spherical coordinate basis $x^i_{\rm Sph} = (r,\theta,\phi)$, but we need them in the curvilinear coordinate basis $x^i_{\rm rfm}= ({\rm xx0},{\rm xx1},{\rm xx2})$ set by the "reference_metric::CoordSystem" variable. Empirically speaking, it is far easier to write $(x({\rm xx0},{\rm xx1},{\rm xx2}),y({\rm xx0},{\rm xx1},{\rm xx2}),z({\rm xx0},{\rm xx1},{\rm xx2}))$ than the inverse, so we will compute the Jacobian matrix # # $$ # {\rm Jac\_dUSph\_dDrfmUD[i][j]} = \frac{\partial x^i_{\rm Sph}}{\partial x^j_{\rm rfm}}, # $$ # # via exact differentiation (courtesy SymPy), and the inverse Jacobian # $$ # {\rm Jac\_dUrfm\_dDSphUD[i][j]} = \frac{\partial x^i_{\rm rfm}}{\partial x^j_{\rm Sph}}, # $$ # # using NRPy+'s ${\rm generic\_matrix\_inverter3x3()}$ function. In terms of these, the transformation of BSSN tensors from Spherical to "reference_metric::CoordSystem" coordinates may be written: # # \begin{align} # \beta^i_{\rm rfm} &= \frac{\partial x^i_{\rm rfm}}{\partial x^\ell_{\rm Sph}} \beta^\ell_{\rm Sph}\\ # B^i_{\rm rfm} &= \frac{\partial x^i_{\rm rfm}}{\partial x^\ell_{\rm Sph}} B^\ell_{\rm Sph}\\ # \gamma^{\rm rfm}_{ij} &= # \frac{\partial x^\ell_{\rm Sph}}{\partial x^i_{\rm rfm}} # \frac{\partial x^m_{\rm Sph}}{\partial x^j_{\rm rfm}} \gamma^{\rm Sph}_{\ell m}\\ # K^{\rm rfm}_{ij} &= # \frac{\partial x^\ell_{\rm Sph}}{\partial x^i_{\rm rfm}} # \frac{\partial x^m_{\rm Sph}}{\partial x^j_{\rm rfm}} K^{\rm Sph}_{\ell m} # \end{align} # + # Step 2: All ADM initial data quantities are now functions of xx0,xx1,xx2, but # they are still in the Spherical or Cartesian basis. We can now directly apply # Jacobian transformations to get them in the correct xx0,xx1,xx2 basis: # alpha is a scalar, so no Jacobian transformation is necessary. alpha = alphaSphorCart Jac_dUSphorCart_dDrfmUD = ixp.zerorank2() for i in range(DIM): for j in range(DIM): Jac_dUSphorCart_dDrfmUD[i][j] = sp.diff(r_th_ph_or_Cart_xyz_of_xx[i],rfm.xx[j]) Jac_dUrfm_dDSphorCartUD, dummyDET = ixp.generic_matrix_inverter3x3(Jac_dUSphorCart_dDrfmUD) betaU = ixp.zerorank1() BU = ixp.zerorank1() gammaDD = ixp.zerorank2() KDD = ixp.zerorank2() for i in range(DIM): for j in range(DIM): betaU[i] += Jac_dUrfm_dDSphorCartUD[i][j] * betaSphorCartU[j] BU[i] += Jac_dUrfm_dDSphorCartUD[i][j] * BSphorCartU[j] for k in range(DIM): for l in range(DIM): gammaDD[i][j] += Jac_dUSphorCart_dDrfmUD[k][i]*Jac_dUSphorCart_dDrfmUD[l][j] * gammaSphorCartDD[k][l] KDD[i][j] += Jac_dUSphorCart_dDrfmUD[k][i]*Jac_dUSphorCart_dDrfmUD[l][j] * KSphorCartDD[k][l] # - # <a id='adm2bssn'></a> # # # Step 5: Perform the ADM-to-BSSN conversion for 3-metric, extrinsic curvature, and gauge quantities \[Back to [top](#toc)\] # $$\label{adm2bssn}$$ # # All ADM quantities were input into this function in the Spherical or Cartesian basis, as functions of r,th,ph or x,y,z, respectively. In Steps 3 and 4 above, we converted them to the xx0,xx1,xx2 basis, and as functions of xx0,xx1,xx2. Here we convert ADM quantities to their BSSN Curvilinear counterparts. # # # <a id='adm2bssn_gamma'></a> # # ## Step 5.a: Convert ADM $\gamma_{ij}$ to BSSN $\bar{\gamma}_{ij}$ \[Back to [top](#toc)\] # $$\label{adm2bssn_gamma}$$ # # We have (Eqs. 2 and 3 of [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf)): # $$ # \bar{\gamma}_{i j} = \left(\frac{\bar{\gamma}}{\gamma}\right)^{1/3} \gamma_{ij}, # $$ # where we always make the choice $\bar{\gamma} = \hat{\gamma}$: # + # Step 3: All ADM quantities were input into this function in the Spherical or Cartesian # basis, as functions of r,th,ph or x,y,z, respectively. In Steps 1 and 2 above, # we converted them to the xx0,xx1,xx2 basis, and as functions of xx0,xx1,xx2. # Here we convert ADM quantities to their BSSN Curvilinear counterparts: # Step 3.1: Convert ADM $\gamma_{ij}$ to BSSN $\bar{gamma}_{ij}$: # We have (Eqs. 2 and 3 of [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf)): # \bar{gamma}_{ij} = (\frac{\bar{gamma}}{gamma})^{1/3}*gamma_{ij}. gammaUU, gammaDET = ixp.symm_matrix_inverter3x3(gammaDD) gammabarDD = ixp.zerorank2() for i in range(DIM): for j in range(DIM): gammabarDD[i][j] = (rfm.detgammahat/gammaDET)**(sp.Rational(1,3))*gammaDD[i][j] # - # <a id='admexcurv_convert'></a> # # ## Step 5.b: Convert the ADM extrinsic curvature $K_{ij}$ \[Back to [top](#toc)\] # $$\label{admexcurv_convert}$$ # # Convert the ADM extrinsic curvature $K_{ij}$ to the trace-free extrinsic curvature $\bar{A}_{ij}$, plus the trace of the extrinsic curvature $K$, where (Eq. 3 of [Baumgarte *et al.*](https://arxiv.org/pdf/1211.6632.pdf)): # \begin{align} # K &= \gamma^{ij} K_{ij} \\ # \bar{A}_{ij} &= \left(\frac{\bar{\gamma}}{\gamma}\right)^{1/3} \left(K_{ij} - \frac{1}{3} \gamma_{ij} K \right) # \end{align} # + # Step 3.2: Convert the extrinsic curvature K_{ij} to the trace-free extrinsic # curvature \bar{A}_{ij}, plus the trace of the extrinsic curvature K, # where (Eq. 3 of [Baumgarte *et al.*](https://arxiv.org/pdf/1211.6632.pdf)): # K = gamma^{ij} K_{ij}, and # \bar{A}_{ij} &= (\frac{\bar{gamma}}{gamma})^{1/3}*(K_{ij} - \frac{1}{3}*gamma_{ij}*K) trK = sp.sympify(0) for i in range(DIM): for j in range(DIM): trK += gammaUU[i][j]*KDD[i][j] AbarDD = ixp.zerorank2() for i in range(DIM): for j in range(DIM): AbarDD[i][j] = (rfm.detgammahat/gammaDET)**(sp.Rational(1,3))*(KDD[i][j] - sp.Rational(1,3)*gammaDD[i][j]*trK) # - # <a id='lambda'></a> # # ## Step 5.c: Define $\bar{\Lambda}^i$ \[Back to [top](#toc)\] # $$\label{lambda}$$ # # To define $\bar{\Lambda}^i$ we implement Eqs. 4 and 5 of [Baumgarte *et al.*](https://arxiv.org/pdf/1211.6632.pdf): # $$ # \bar{\Lambda}^i = \bar{\gamma}^{jk}\left(\bar{\Gamma}^i_{jk} - \hat{\Gamma}^i_{jk}\right). # $$ # # The [reference_metric.py](../edit/reference_metric.py) module provides us with analytic expressions for $\hat{\Gamma}^i_{jk}$, so here we need only compute analytical expressions for $\bar{\Gamma}^i_{jk}$, based on the exact values provided in the initial data: # + # Step 3.3: Define \bar{Lambda}^i (Eqs. 4 and 5 of [Baumgarte *et al.*](https://arxiv.org/pdf/1211.6632.pdf)): # \bar{Lambda}^i = \bar{gamma}^{jk}(\bar{Gamma}^i_{jk} - \hat{Gamma}^i_{jk}). gammabarUU, gammabarDET = ixp.symm_matrix_inverter3x3(gammabarDD) # First compute Christoffel symbols \bar{Gamma}^i_{jk}, with respect to barred metric: GammabarUDD = ixp.zerorank3() for i in range(DIM): for j in range(DIM): for k in range(DIM): for l in range(DIM): GammabarUDD[i][j][k] += sp.Rational(1,2)*gammabarUU[i][l]*( sp.diff(gammabarDD[l][j],rfm.xx[k]) + sp.diff(gammabarDD[l][k],rfm.xx[j]) - sp.diff(gammabarDD[j][k],rfm.xx[l]) ) # Next evaluate \bar{Lambda}^i, based on GammabarUDD above and GammahatUDD # (from the reference metric): LambdabarU = ixp.zerorank1() for i in range(DIM): for j in range(DIM): for k in range(DIM): LambdabarU[i] += gammabarUU[j][k] * (GammabarUDD[i][j][k] - rfm.GammahatUDD[i][j][k]) # - # <a id='conformal'></a> # # ## Step 5.d: Define the conformal factor variable $\texttt{cf}$ \[Back to [top](#toc)\] # $$\label{conformal}$$ # # We define the conformal factor variable $\texttt{cf}$ based on the setting of the "BSSN_quantities::EvolvedConformalFactor_cf" parameter. # # For example if "EvolvedConformalFactor_cf" is set to "phi", we can use Eq. 3 of [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf), which in arbitrary coordinates is written: # # $$ # \phi = \frac{1}{12} \log\left(\frac{\gamma}{\bar{\gamma}}\right). # $$ # # Alternatively if "BSSN_quantities::EvolvedConformalFactor_cf" is set to "chi", then # $$ # \chi = e^{-4 \phi} = \exp\left(-4 \frac{1}{12} \left(\frac{\gamma}{\bar{\gamma}}\right)\right) # = \exp\left(-\frac{1}{3} \log\left(\frac{\gamma}{\bar{\gamma}}\right)\right) = \left(\frac{\gamma}{\bar{\gamma}}\right)^{-1/3}. # $$ # # Finally if "BSSN_quantities::EvolvedConformalFactor_cf" is set to "W", then # $$ # W = e^{-2 \phi} = \exp\left(-2 \frac{1}{12} \log\left(\frac{\gamma}{\bar{\gamma}}\right)\right) = # \exp\left(-\frac{1}{6} \log\left(\frac{\gamma}{\bar{\gamma}}\right)\right) = # \left(\frac{\gamma}{\bar{\gamma}}\right)^{-1/6}. # $$ # + # Step 3.4: Set the conformal factor variable cf, which is set # by the "BSSN_quantities::EvolvedConformalFactor_cf" parameter. For example if # "EvolvedConformalFactor_cf" is set to "phi", we can use Eq. 3 of # [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf), # which in arbitrary coordinates is written: # phi = \frac{1}{12} log(\frac{gamma}{\bar{gamma}}). # Alternatively if "BSSN_quantities::EvolvedConformalFactor_cf" is set to "chi", then # chi = exp(-4*phi) = exp(-4*\frac{1}{12}*(\frac{gamma}{\bar{gamma}})) # = exp(-\frac{1}{3}*log(\frac{gamma}{\bar{gamma}})) = (\frac{gamma}{\bar{gamma}})^{-1/3}. # # Finally if "BSSN_quantities::EvolvedConformalFactor_cf" is set to "W", then # W = exp(-2*phi) = exp(-2*\frac{1}{12}*log(\frac{gamma}{\bar{gamma}})) # = exp(-\frac{1}{6}*log(\frac{gamma}{\bar{gamma}})) = (\frac{gamma}{bar{gamma}})^{-1/6}. cf = sp.sympify(0) if par.parval_from_str("EvolvedConformalFactor_cf") == "phi": cf = sp.Rational(1,12)*sp.log(gammaDET/gammabarDET) elif par.parval_from_str("EvolvedConformalFactor_cf") == "chi": cf = (gammaDET/gammabarDET)**(-sp.Rational(1,3)) elif par.parval_from_str("EvolvedConformalFactor_cf") == "W": cf = (gammaDET/gammabarDET)**(-sp.Rational(1,6)) else: print("Error EvolvedConformalFactor_cf type = \""+par.parval_from_str("EvolvedConformalFactor_cf")+"\" unknown.") exit(1) # - # <a id='rescale'></a> # # # Step 6: Rescale tensorial quantities \[Back to [top](#toc)\] # $$\label{rescale}$$ # # We rescale tensorial quantities according to the prescription described in the [the covariant BSSN formulation tutorial](Tutorial-BSSN_formulation.ipynb) (also [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf)): # \begin{align} # h_{ij} &= (\bar{\gamma}_{ij} - \hat{\gamma}_{ij})/\text{ReDD[i][j]}\\ # a_{ij} &= \bar{A}_{ij}/\text{ReDD[i][j]}\\ # \lambda^i &= \bar{\Lambda}^i/\text{ReU[i]}\\ # \mathcal{V}^i &= \beta^i/\text{ReU[i]}\\ # \mathcal{B}^i &= B^i/\text{ReU[i]}\\ # \end{align} # Step 4: Rescale tensorial quantities according to the prescription described in # the [BSSN in curvilinear coordinates tutorial module](Tutorial-BSSNCurvilinear.ipynb) # (also [Ruchlin *et al.*](https://arxiv.org/pdf/1712.07658.pdf)): # # h_{ij} = (\bar{gamma}_{ij} - \hat{gamma}_{ij})/(ReDD[i][j]) # a_{ij} = \bar{A}_{ij}/(ReDD[i][j]) # \lambda^i = \bar{Lambda}^i/(ReU[i]) # \mathcal{V}^i &= beta^i/(ReU[i]) # \mathcal{B}^i &= B^i/(ReU[i]) hDD = ixp.zerorank2() aDD = ixp.zerorank2() lambdaU = ixp.zerorank1() vetU = ixp.zerorank1() betU = ixp.zerorank1() for i in range(DIM): lambdaU[i] = LambdabarU[i] / rfm.ReU[i] vetU[i] = betaU[i] / rfm.ReU[i] betU[i] = BU[i] / rfm.ReU[i] for j in range(DIM): hDD[i][j] = (gammabarDD[i][j] - rfm.ghatDD[i][j]) / rfm.ReDD[i][j] aDD[i][j] = AbarDD[i][j] / rfm.ReDD[i][j] # <a id='code_validation'></a> # # # Step 7: Code Validation against BSSN.ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear module \[Back to [top](#toc)\] $$\label{code_validation}$$ # # Here, as a code validation check, we verify agreement in the SymPy expressions for BrillLindquist initial data between # 1. this tutorial and # 2. the NRPy+ [BSSN.ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py](../edit/BSSN/ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear.py) module. # # By default, we analyze these expressions in Spherical coordinates, though other coordinate systems may be chosen. # + import BSSN.UIUCBlackHole as uibh import BSSN.ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear as ADMtoBSSN returnfunction = uibh.UIUCBlackHole() mod_cf,mod_hDD,mod_lambdaU,mod_aDD,mod_trK,mod_alpha,mod_vetU,mod_betU = \ ADMtoBSSN.Convert_Spherical_or_Cartesian_ADM_to_BSSN_curvilinear("Spherical",uibh.Sph_r_th_ph, uibh.gammaSphDD, uibh.KSphDD, uibh.alphaSph, uibh.betaSphU, uibh.BSphU) print("Consistency check between this tutorial module and BSSN.ADM_Exact_Spherical_or_Cartesian_to_BSSNCurvilinear NRPy+ module: ALL SHOULD BE ZERO.") print("cf - mod_cf = " + str(cf - mod_cf)) print("trK - mod_trK = " + str(trK - mod_trK)) print("alpha - mod_alpha = " + str(alpha - mod_alpha)) for i in range(DIM): print("vetU["+str(i)+"] - mod_vetU["+str(i)+"] = " + str(vetU[i] - mod_vetU[i])) print("betU["+str(i)+"] - mod_betU["+str(i)+"] = " + str(betU[i] - mod_betU[i])) print("lambdaU["+str(i)+"] - mod_lambdaU["+str(i)+"] = " + str(lambdaU[i] - mod_lambdaU[i])) for j in range(DIM): print("hDD["+str(i)+"]["+str(j)+"] - mod_hDD["+str(i)+"]["+str(j)+"] = " + str(hDD[i][j] - mod_hDD[i][j])) print("aDD["+str(i)+"]["+str(j)+"] - mod_aDD["+str(i)+"]["+str(j)+"] = " + str(aDD[i][j] - mod_aDD[i][j])) with open("BSSN/UIUCBlackHole-CylindricalTest.h","w") as file: file.write(uibh.returnfunction) # - # <a id='latex_pdf_output'></a> # # # Step 8: Output this module to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\] # $$\label{latex_pdf_output}$$ # # The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename # # [Tutorial-ADM_Initial_Data-Converting_Exact_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.pdf](Tutorial-ADM_Initial_Data-Converting_Exact_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.pdf) # # (Note that clicking on this link may not work; you may need to open the PDF file through another means.) # !jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-ADM_Initial_Data-Converting_Exact_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb # !pdflatex -interaction=batchmode Tutorial-ADM_Initial_Data-Converting_Exact_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.tex # !pdflatex -interaction=batchmode Tutorial-ADM_Initial_Data-Converting_Exact_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.tex # !pdflatex -interaction=batchmode Tutorial-ADM_Initial_Data-Converting_Exact_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.tex # !rm -f Tut*.out Tut*.aux Tut*.log
Tutorial-ADM_Initial_Data-Converting_Exact_ADM_Spherical_or_Cartesian_to_BSSNCurvilinear.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: dapnn # language: python # name: dapnn # --- # Data Processing # === # > Handles the processing, including encoding of attributes, creation of sliding windows, adding of start and end events, generation of data loaders. # <h1>Table of Contents<span class="tocSkip"></span></h1> # <div class="toc"><ul class="toc-item"><li><span><a href="#Column-renaming-and-adding-of-start-and-end-events" data-toc-modified-id="Column-renaming-and-adding-of-start-and-end-events-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Column renaming and adding of start and end events</a></span></li><li><span><a href="#Trace-Splitting" data-toc-modified-id="Trace-Splitting-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Trace Splitting</a></span></li><li><span><a href="#Encoding-Techniques" data-toc-modified-id="Encoding-Techniques-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Encoding Techniques</a></span><ul class="toc-item"><li><span><a href="#PPObj" data-toc-modified-id="PPObj-3.1"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>PPObj</a></span></li><li><span><a href="#Categorization" data-toc-modified-id="Categorization-3.2"><span class="toc-item-num">3.2&nbsp;&nbsp;</span>Categorization</a></span></li><li><span><a href="#Fill-Missing" data-toc-modified-id="Fill-Missing-3.3"><span class="toc-item-num">3.3&nbsp;&nbsp;</span>Fill Missing</a></span></li><li><span><a href="#Z-score" data-toc-modified-id="Z-score-3.4"><span class="toc-item-num">3.4&nbsp;&nbsp;</span>Z-score</a></span></li><li><span><a href="#Date-conversion" data-toc-modified-id="Date-conversion-3.5"><span class="toc-item-num">3.5&nbsp;&nbsp;</span>Date conversion</a></span></li><li><span><a href="#MinMax-Scaling" data-toc-modified-id="MinMax-Scaling-3.6"><span class="toc-item-num">3.6&nbsp;&nbsp;</span>MinMax Scaling</a></span></li><li><span><a href="#One-HoT-Encoding" data-toc-modified-id="One-HoT-Encoding-3.7"><span class="toc-item-num">3.7&nbsp;&nbsp;</span>One HoT Encoding</a></span></li></ul></li><li><span><a href="#Window-Generation" data-toc-modified-id="Window-Generation-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>Window Generation</a></span></li><li><span><a href="#Data-Loader" data-toc-modified-id="Data-Loader-5"><span class="toc-item-num">5&nbsp;&nbsp;</span>Data Loader</a></span><ul class="toc-item"><li><span><a href="#Integration-Samples" data-toc-modified-id="Integration-Samples-5.1"><span class="toc-item-num">5.1&nbsp;&nbsp;</span>Integration Samples</a></span></li></ul></li></ul></div> # + #default_exp data_processing # + #hide # %load_ext autoreload # %autoreload 2 # %matplotlib inline # - #export from dapnn.imports import * notebook2script(fname='02_data_processing.ipynb') # ## Column renaming and adding of start and end events # # def df_preproc_old (df, cols=['activity']): 'Prepocesses the df to be ready for Anomaly Detection it will add start/end events to every trace' df['event_id']= df.index df.index = df['trace_id'] df = df[["event_id", "trace_id"] + cols] for i in df['trace_id'].unique(): df_cop = df.loc[[i]] df.loc[i, 'event_id'] = np.arange(1, len(df_cop)+1) df.reset_index(drop=True, inplace=True) trace_ends = list(df.loc[df['event_id']==1].index)[1:] trace_ends.append(len(df)) new = np.insert(np.array(df),trace_ends, [-1, 0]+ ['end']*len(cols), axis=0) df = pd.DataFrame(data=new, columns=(["event_id", "trace_id"] + cols)) trace_starts = list(df.loc[df['event_id']==1].index) new = np.insert(np.array(df),trace_starts, [0, 0]+ ['start']*len(cols), axis=0) trace_starts = np.where(new[:,0]==0)[0] trace_ends = np.where(new[:,0]==-1)[0] new[trace_starts,1] = new[np.array(trace_starts)+1,1] new[trace_ends,1] = new[np.array(trace_starts)+1,1] new[trace_ends,0] = new[trace_ends-1,0]+1 df = pd.DataFrame(data=new, columns=(["event_id", "trace_id"] + cols)) df.index = df['trace_id'] return df #export def df_preproc (df,cols=['activity'],start_marker='###start###',end_marker='###end###'): # Add event log column df['event_id']=df.groupby('trace_id').cumcount()+1 # Work with numpy for performance boost eid_col = df.columns.to_list().index('event_id') # Get col idx col_idx=[df.columns.to_list().index(c) for c in cols] data= df.values # Add Start Events idx= np.where(data[:,eid_col]==1)[0] # start idx new = data[idx].copy() for c in col_idx: new[:,c]=start_marker new[:,eid_col]=0 data = np.insert(data,idx,new, axis=0) # Add End Events idx= np.where(data[:,eid_col]==0)[0][1:] # start idx without the first new = data[idx-1].copy() # get data from current last idx for c in col_idx: new[:,c]=end_marker new[:,eid_col]=new[:,eid_col]+1 data = np.insert(data,idx,new, axis=0) # take care of final last event last= data[-1].copy() for c in col_idx: last[c]=end_marker last[eid_col]+=1 data = np.insert(data,len(data),last, axis=0) df = pd.DataFrame(data,columns=df.columns) return df fn='data/csv/binet_logs/medium-0.3-4.csv.gz' df=pd.read_csv(fn) df.rename({'name':'activity','case:concept:name':'trace_id'},axis=1,inplace=True) if not 'activity' in df.columns: df.rename({'concept:name':'activity'},axis=1,inplace=True) df.head(2) # %%time df1 =df_preproc(df,cols=['activity','company','country','day','user']) # Add Start and End event, and rename columns #export def import_log(log_path,cols=['activity']): df=pd.read_csv(log_path) df.rename({'name':'activity','case:concept:name':'trace_id'},axis=1,inplace=True) if not 'activity' in df.columns: df.rename({'concept:name':'activity'},axis=1,inplace=True) df.rename({'case:pdc:isPos':'normal'},axis=1,inplace=True) df = df_preproc(df,cols) df.index=df.trace_id return df log = import_log('data/csv/PDC2020_ground_truth/pdc_2020_0000000.csv.gz') log[:35] import_log(fn,['activity','company','country','day','user']) # ## Trace Splitting # i.e. splitting in training, validation and test set # The `split_traces` function is used to split an event_log into training, validation and test set. Furthermore, it removes traces that are longer than a specific threshhold. #export def drop_long_traces(df,max_trace_len=64,event_id='event_id'): df=df.drop(np.unique(df[df[event_id]>max_trace_len].index)) return df #export def RandomTraceSplitter(split_pct=0.2, seed=None): "Create function that splits `items` between train/val with `valid_pct` randomly." def _inner(trace_ids): o=np.unique(trace_ids) np.random.seed(seed) rand_idx = np.random.permutation(o) cut = int(split_pct * len(o)) return L(rand_idx[cut:].tolist()),L(rand_idx[:cut].tolist()) return _inner #export def split_traces(df,df_name='tmp',test_seed=42,validation_seed=None): df=drop_long_traces(df) ts=RandomTraceSplitter(seed=test_seed) train,test=ts(df.index) ts=RandomTraceSplitter(seed=validation_seed,split_pct=0.1) train,valid=ts(train) return train,valid,test #hide a1,b1,c1=split_traces(log) a2,b2,c2=split_traces(log) test_ne(a1,a2),test_ne(b1,b2),test_eq(c1,c2); # ## Encoding Techniques # Categorization, Normalization, One-Hot, etc. # # ### PPObj # an object, that manages the pre-processing and knows date columns, cat columns and cont columns # with a few convenient functions #export class _TraceIloc: "Get/set rows by iloc and cols by name" def __init__(self,o): self.o = o def __getitem__(self, idxs): df = self.o.items if isinstance(idxs,tuple): rows,cols = idxs rows=df.index[rows] return self.o.new(df.loc[rows,cols]) else: rows,cols = idxs,slice(None) rows=np.unique(df.index)[rows] return self.o.new(df.loc[rows]) # + #export class PPObj(CollBase, GetAttr, FilteredBase): "Main Class for Process Prediction" _default,with_cont='procs',True def __init__(self,df,procs=None,cat_names=None,cont_names=None,date_names=None,y_names=None,splits=None, ycat_names=None,ycont_names=None,inplace=False,do_setup=True): if not inplace: df=df.copy() if splits is not None: df = df.loc[sum(splits, [])] # Can drop traces self.event_ids=df['event_id'].values if hasattr(df,'event_id') else None super().__init__(df) self.cat_names,self.cont_names,self.date_names=(L(cat_names),L(cont_names),L(date_names)) self.set_y_names(y_names,ycat_names,ycont_names) self.procs = Pipeline(procs) self.splits=splits if do_setup: self.setup() @property def y_names(self): return self.ycat_names+self.ycont_names def set_y_names(self,y_names,ycat_names=None,ycont_names=None): if ycat_names or ycont_names: store_attr('ycat_names,ycont_names') else: self.ycat_names,self.ycont_names=(L([i for i in L(y_names) if i in self.cat_names]), L([i for i in L(y_names) if i not in self.cat_names])) def setup(self): self.procs.setup(self) def subset(self, i): return self.new(self.loc[self.splits[i]]) if self.splits else self def __len__(self): return len(np.unique(self.items.index)) def show(self, max_n=3, **kwargs): print('#traces:',len(self),'#events:',len(self.items)) display_df(self.new(self.all_cols[:max_n]).items) def new(self, df): return type(self)(df, do_setup=False, **attrdict(self, 'procs','cat_names','cont_names','ycat_names','ycont_names', 'date_names')) def process(self): self.procs(self) def loc(self): return self.items.loc def iloc(self): return _TraceIloc(self) def x_names (self): return self.cat_names + self.cont_names def all_col_names(self): return ((self.x_names+self.y_names)).unique() def transform(self, cols, f, all_col=True): if not all_col: cols = [c for c in cols if c in self.items.columns] if len(cols) > 0: self[cols] = self[cols].transform(f) def new_empty(self): return self.new(pd.DataFrame({}, columns=self.items.columns)) def subsets(self): return [self.subset(i) for i in range(len(self.splits))] if self.splits else L(self) properties(PPObj,'loc','iloc','x_names','all_col_names') def _add_prop(cls, nm): @property def f(o): return o[list(getattr(o,nm+'_names'))] @f.setter def fset(o, v): o[getattr(o,nm+'_names')] = v setattr(cls, nm+'s', f) setattr(cls, nm+'s', fset) _add_prop(PPObj, 'cat') _add_prop(PPObj, 'cont') _add_prop(PPObj, 'ycat') _add_prop(PPObj, 'ycont') _add_prop(PPObj, 'y') _add_prop(PPObj, 'x') _add_prop(PPObj, 'all_col') # - ppObj=PPObj(log,cat_names=['activity'],y_names=['activity']) ppObj.ycat_names ppObj.iloc[0].show(max_n=20) # shows first trace # We can define various pre-processing functions that are executed, when `PPOBj` is instantiated. `PPProc` is the base class for a pre-processing function. It ensures, that setup of a pre-processing function is performed using the training set, and than it is applied to the validation and test set, with the same parameters. #export class PPProc(InplaceTransform): "Base class to write a non-lazy tabular processor for dataframes" def setup(self, items=None, train_setup=False): #TODO: properly deal with train_setup super().setup(getattr(items,'train',items), train_setup=False) #super().setup(items, train_setup=False) # Procs are called as soon as data is available return self(items.items if isinstance(items,Datasets) else items) @property def name(self): return f"{super().name} -- {getattr(self,'__stored_args__',{})}" # ### Categorization # i.e ordinal encoding # Implementation of ordinal or integer encoding. Adds NA values for unknown data. Implementation is pretty much taken from fastai. #export def _apply_cats (voc, add, c): if not is_categorical_dtype(c): return pd.Categorical(c, categories=voc[c.name][add:]).codes+add return c.cat.codes+add #if is_categorical_dtype(c) else c.map(voc[c.name].o2i) #export class Categorify(PPProc): "Transform the categorical variables to something similar to `pd.Categorical`" order = 2 def setups(self, to): store_attr(classes={n:CategoryMap(to.items.loc[:,n], add_na=True) for n in to.cat_names}, but='to') def encodes(self, to): to.transform(to.cat_names, partial(_apply_cats, self.classes, 1)) def __getitem__(self,k): return self.classes[k] log=import_log('data/csv/PDC2021_ground_truth/pdc2021_000000.csv.gz') traces=split_traces(log)[0][:100] splits=traces[:60],traces[60:80],traces[80:100] o=PPObj(log,None,cat_names='activity',splits=splits) m=CategoryMap(o.items.loc[:,'activity']) len(m) cat=Categorify() cat.setup(o) len(cat['activity']) df = pd.DataFrame({'a':[0,1,2,0,2]}) to = PPObj(df, Categorify, 'a') to.show() log=import_log('data/csv/binet_logs/bpic12-0.3-1.csv.gz') o=PPObj(log,Categorify,'activity') o.show() # ### Fill Missing # for continuous values # A pre-processing function that deals with missing data in continuous attributes. Missing data can be replaced with the median, mean or a constant value. Additionaly, we can create another boolean column that indicates, which rows were missing. Implementation is pretty much taken from fastai. #export class FillStrategy: "Namespace containing the various filling strategies." def median (c,fill): return c.median() def constant(c,fill): return fill def mode (c,fill): return c.dropna().value_counts().idxmax() #export class FillMissing(PPProc): order=1 "Fill the missing values in continuous columns." def __init__(self, fill_strategy=FillStrategy.median, add_col=True, fill_vals=None): if fill_vals is None: fill_vals = defaultdict(int) store_attr() def setups(self, dsets): missing = pd.isnull(dsets.conts).any() store_attr(but='to', na_dict={n:self.fill_strategy(dsets[n], self.fill_vals[n]) for n in missing[missing].keys()}) self.fill_strategy = self.fill_strategy.__name__ def encodes(self, to): missing = pd.isnull(to.conts) for n in missing.any()[missing.any()].keys(): assert n in self.na_dict, f"nan values in `{n}` but not in setup training set" for n in self.na_dict.keys(): to[n].fillna(self.na_dict[n], inplace=True) if self.add_col: to.loc[:,n+'_na'] = missing[n] if n+'_na' not in to.cat_names: to.cat_names.append(n+'_na') fill = FillMissing() df = pd.DataFrame({'a':[0,1,np.nan,1,2,3,4], 'b': [0,1,2,3,4,5,6]}) to = PPObj(df, fill, cont_names=['a', 'b']) to.show() # ### Z-score # Calculates standartization, also known as z-score formula. Copied from fastai. #export class Normalize(PPProc): "Normalize with z-score" order = 3 def setups(self, to): store_attr(but='to', means=dict(getattr(to, 'train', to).conts.mean()), stds=dict(getattr(to, 'train', to).conts.std(ddof=0)+1e-7)) return self(to) def encodes(self, to): to.conts = (to.conts-self.means) / self.stds def decodes(self, to): to.conts = (to.conts*self.stds ) + self.means df = pd.DataFrame({'a':[0,1,9,3,4]}) to = PPObj(df, Normalize(), cont_names='a') to.show() # ### Date conversion # Encodes a date column. Supports multiple information by using pandas date functions. This implementation is also based on the fastai but also supports relative duration from the first event of a case. #export def _make_date(df, date_field): "Make sure `df[date_field]` is of the right date type." field_dtype = df[date_field].dtype if isinstance(field_dtype, pd.core.dtypes.dtypes.DatetimeTZDtype): field_dtype = np.datetime64 if not np.issubdtype(field_dtype, np.datetime64): df[date_field] = pd.to_datetime(df[date_field], infer_datetime_format=True,utc=True) df = pd.DataFrame({'fu': ['2019-12-04', '2019-11-29', '2019-11-15', '2019-10-24']}) _make_date(df, 'fu') df.dtypes #export def _secSinceSunNoon(datTimStr): dt = pd.to_datetime(datTimStr).dt return (dt.dayofweek-1)*24*3600+ dt.hour * 3600 + dt.minute * 60 + dt.second #export def _secSinceNoon(datTimStr): dt = pd.to_datetime(datTimStr).dt return dt.hour * 3600 + dt.minute * 60 + dt.second #export Base_Date_Encodings=['Year', 'Month', 'Day', 'Dayofweek', 'Dayofyear','Elapsed'] #export def encode_date(df, field_name,unit=1e9,date_encodings=Base_Date_Encodings): "Helper function that adds columns relevant to a date in the column `field_name` of `df`." _make_date(df, field_name) field = df[field_name] prefix = re.sub('[Dd]ate$', '', field_name+"_") attr = ['Year', 'Month', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start', 'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start'] if time: attr = attr + ['Hour', 'Minute', 'Second'] for n in attr: if n in date_encodings: df[prefix + n] = getattr(field.dt, n.lower()) # Pandas removed `dt.week` in v1.1.10 if 'secSinceSunNoon' in date_encodings: df[prefix+'secSinceSunNoon']=_secSinceSunNoon(field) if 'secSinceNoon' in date_encodings: df[prefix+'secSinceNoon']=_secSinceNoon(field) if 'Week' in date_encodings: week = field.dt.isocalendar().week if hasattr(field.dt, 'isocalendar') else field.dt.week df.insert(3, prefix+'Week', week) mask = ~field.isna() elapsed = pd.Series(np.where(mask,field.values.astype(np.int64) // unit,None).astype(float),index=field.index) if 'Relative_elapsed' in date_encodings: df[prefix+'Relative_elapsed']=elapsed-elapsed.groupby(elapsed.index).transform('min') # required to decode! if 'Elapsed' in date_encodings: df[prefix+'Elapsed']=elapsed df.drop(field_name, axis=1, inplace=True) return [],[prefix+i for i in date_encodings] df = pd.DataFrame({'fu': ['2019-12-04', '2019-11-29', '2019-11-15', '2019-10-24']}) encode_date(df,'fu') df #export def decode_date(df, field_name,unit=1e9,date_encodings=Base_Date_Encodings): df[field_name]=(df[field_name+'_'+'Elapsed'] * unit).astype('datetime64[ns, UTC]') for c in date_encodings: del df[field_name+'_'+c] decode_date(df,'fu') df #export class Datetify(PPProc): "Encode dates, " order = 0 def __init__(self, date_encodings=['Relative_elapsed']): self.date_encodings=listify(date_encodings) def encodes(self, o): for i in o.date_names: cat,cont=encode_date(o.items,i,date_encodings=self.date_encodings) o.cont_names+=cont o.cat_names+=cat # Todo: Add decoding df = pd.DataFrame({'fu': ['2019-10-04', '2019-10-09', '2019-10-15', '2019-10-24']},index=[1,1,1,1]) o = PPObj(df,Datetify(date_encodings=['secSinceSunNoon','secSinceNoon','Relative_elapsed']),date_names='fu') o.xs # ### MinMax Scaling # Calculates the MinMax scaling from a column. #export class MinMax(PPProc): order=3 def setups(self, o): store_attr(mins=o.xs.min(), maxs=o.xs.max()) def encodes(self, o): cols=[i+'_minmax' for i in o.x_names] o[cols] = o.xs.astype(float) o[cols] = ((o.xs-self.mins) /(self.maxs-self.mins)) o.cont_names=L(cols) o.cat_names=L() # ### One HoT Encoding # Calculates the one-hot encoding of a column. It is required to first apply categorization on the same column, to deal with missing values. #export from sklearn.preprocessing import OneHotEncoder o=PPObj(log,[Categorify],cat_names=['activity']) len(o.xs),len(o.procs.categorify['activity']) o.xs.values x=o.xs.to_numpy() categories=[range(len(o.procs.categorify['activity'])),range(len(o.procs.categorify['activity']))] x=np.array(['a1','a2']) categories=[['a1','a2','a3']] ohe = OneHotEncoder(categories=categories) a=ohe.fit_transform(x.reshape(-1, 1)).toarray() a.shape categories=['a1','a2','a3'] #export class OneHot(PPProc): "Transform the categorical variables to one-hot. Requires Categorify to deal with unseen data." order = 3 def encodes(self, o): new_cats=[] for c in o.cat_names: categories=[range(len(o.procs.categorify[c]))] x=o[c].to_numpy() ohe = OneHotEncoder(categories=categories) enc=ohe.fit_transform(x.reshape(-1, 1)).toarray() for i in range(enc.shape[1]): new_cat=f'{c}_{i}' o.items.loc[:,new_cat]=enc[:,i] new_cats.append(new_cat) o.cat_names=L(new_cats) event_df=import_log('data/csv/PDC2021_training/pdc2021_0000000.csv.gz') # %%time o=PPObj(event_df,[Categorify(),OneHot()],cat_names=['activity']) # ## Window Generation # Here, we cover the sliding window generation #export def _shift_columns (a,ws=3): return np.dstack(list(reversed([np.roll(a,i) for i in range(0,ws)])))[0] #export def windows_fast(df,event_ids,ws=5,pad=None): max_trace_len=int(event_ids.max())+1 trace_start = np.where(event_ids == 0)[0] trace_len=[trace_start[i]-trace_start[i-1] for i in range(1,len(trace_start))]+[len(df)-trace_start[-1]] idx=[range(trace_start[i]+(i+1) *(ws-1),trace_start[i]+trace_len[i]+(i+1)*(ws-1)-1) for i in range(len(trace_start))] idx=np.array([y for x in idx for y in x]) trace_start = np.repeat(trace_start, ws-1) tmp=np.stack([_shift_columns(np.insert(np.array(df[i]), trace_start, 0, axis=0),ws=ws) for i in list(df)]) tmp=np.rollaxis(tmp,1) res=tmp[idx] if pad: res=np.pad(res,((0,0),(0,0),(pad-ws,0))) return res,np.where(event_ids != 0)[0] event_df=import_log('data/csv/PDC2020_ground_truth/pdc_2020_0000000.csv.gz') o=PPObj(event_df,Categorify(),cat_names=['activity'],y_names='activity') #o=o.iloc[0] len(o) len(o.items)-len(o) ws,idx=windows_fast(o.xs,o.event_ids,ws=5) ws,ws.shape # ## Data Loader # The prefixes are converted to a `pytorch.Dataset` and than to a `DataLoader` # A batch is than represented as a tuple of the form `(x cat. attr,x cont. attr, y cat. attr., y cont attr.)`. Also, categorical attributes are converted to a long tensor and continous attributes to a float tensor. # # If a dimensions of the batch is empty - e.g. the model does not use categorical input attributes - it is removed from the tuple. o=PPObj(event_df,Categorify(),cat_names=['activity'],y_names='activity') ws,idx=windows_fast(o.xs,o.event_ids,ws=10) ws,ws.shape o.ys.iloc[idx].values[16765] o.ys.groupby(o.items.index).transform('last').iloc[idx].values outcome=False if not outcome: y=o.ys.iloc[idx] else: y=o.ys.groupby(o.items.index).transform('last').iloc[idx] ycats=tensor(y[o.ycat_names].values).long() yconts=tensor(y[o.ycont_names].values).float() xcats=tensor(ws[:,len(o.cat_names):]).float() xconts=tensor(ws[:,:len(o.cat_names)]).long() xs=tuple([i for i in [xcats,xconts] if i.shape[1]>0]) ys=tuple([ycats[:,i] for i in range(ycats.shape[1])])+tuple([yconts[:,i] for i in range(yconts.shape[1])]) res=(*xs,ys) res[-1] #export class PPDset(torch.utils.data.Dataset): def __init__(self, inp): store_attr('inp') def __len__(self): return len(self.inp[0]) def __getitem__(self, idx): xs=tuple([i[idx]for i in self.inp[:-1]]) ys=tuple([i[idx]for i in self.inp[-1]]) #if len(ys)==1: ys=ys[0] return (*xs,ys) dls=DataLoaders.from_dsets(PPDset(res)) xcat,y=dls.one_batch() xcat.shape,y.shape o=PPObj(event_df,Categorify(),cat_names=['activity'],y_names='activity',splits=split_traces(event_df)) o.cat_names #export @delegates(TfmdDL) def get_dls(ppo:PPObj,windows=windows_fast,outcome=False,event_id='event_id',bs=64,**kwargs): ds=[] for s in ppo.subsets(): wds,idx=windows(s.xs,s.event_ids) if not outcome: y=s.ys.iloc[idx] else: y=s.ys.groupby(s.items.index).transform('last').iloc[idx] ycats=tensor(y[s.ycat_names].values).long() yconts=tensor(y[s.ycont_names].values).float() xconts=tensor(wds[:,len(s.cat_names):]).float() xcats=tensor(wds[:,:len(s.cat_names)]).long() xs=tuple([i for i in [xcats,xconts] if i.shape[1]>0]) ys=tuple([ycats[:,i] for i in range(ycats.shape[1])])+tuple([yconts[:,i] for i in range(yconts.shape[1])]) ds.append(PPDset((*xs,ys))) return DataLoaders.from_dsets(*ds,bs=bs,device=torch.device('cuda'),**kwargs) PPObj.get_dls= get_dls dls=o.get_dls() xb,yb=dls.one_batch() xb.shape,yb.shape # ### Integration Samples # This section shows, how the PPObj can be used to create a DataLoader for pedictive process analytics: # Next event prediction: # X: 'activity' # Y: 'activity' log=import_log('data/csv/PDC2020_ground_truth/pdc_2020_0000001.csv.gz') o=PPObj(log,Categorify(),cat_names=['activity'],y_names='activity',splits=split_traces(event_df)) dls=o.get_dls(windows=partial(windows_fast,ws=2)) o.show(max_n=2) xb,y=dls.one_batch() xb.shape,y.shape
02_data_processing.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # COMP30027 - Assignment 2 # ## Sentiment Analysis using Ensemble Stacking # #### <NAME> | Student ID: 998174 # ### Importing Libraries # + import pandas as pd import numpy as np from sklearn.feature_selection import SelectPercentile, SelectFpr, chi2, mutual_info_classif import matplotlib.pyplot as plt import seaborn as sns import pickle import scipy import matplotlib.pyplot as plt import seaborn as sns from mlxtend.classifier import StackingCVClassifier # - # conda install mlxtend --channel conda-forge # ### Load Datasets # #### Basic Datasets # + meta_train = pd.read_csv(r"review_meta_train.csv", index_col = False, delimiter = ',') text_train = pd.read_csv(r"review_text_train.csv", index_col = False, delimiter = ',') meta_test = pd.read_csv(r"review_meta_test.csv", index_col = False, delimiter = ',') text_test = pd.read_csv(r"review_text_test.csv", index_col = False, delimiter = ',') # - # #### Count Vectorizer vocab = pickle.load(open("train_countvectorizer.pkl", "rb")) vocab_dict = vocab.vocabulary_ text_train_vec = scipy.sparse.load_npz('review_text_train_vec.npz') text_test_vec = scipy.sparse.load_npz('review_text_test_vec.npz') # #### doc2vec 50, 100, 200 train_doc2vec50 = pd.read_csv(r"review_text_train_doc2vec50.csv", index_col = False, delimiter = ',', header=None) test_doc2vec50 = pd.read_csv(r"review_text_test_doc2vec50.csv", index_col = False, delimiter = ',', header=None) train_doc2vec100 = pd.read_csv(r"review_text_train_doc2vec100.csv", index_col = False, delimiter = ',', header=None) test_doc2vec100 = pd.read_csv(r"review_text_test_doc2vec100.csv", index_col = False, delimiter = ',', header=None) train_doc2vec200 = pd.read_csv(r"review_text_train_doc2vec200.csv", index_col = False, delimiter = ',', header=None) test_doc2vec200 = pd.read_csv(r"review_text_test_doc2vec200.csv", index_col = False, delimiter = ',', header=None) # ### Single Models # #### Additional Libraries # + from sklearn.linear_model import LogisticRegression, SGDClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.naive_bayes import MultinomialNB,BernoulliNB, GaussianNB from sklearn.svm import LinearSVC, SVC from sklearn.ensemble import RandomForestClassifier from mlxtend.classifier import StackingClassifier from sklearn.tree import DecisionTreeClassifier from sklearn import svm from sklearn.dummy import DummyClassifier from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier from sklearn.model_selection import RandomizedSearchCV, GridSearchCV from sklearn.metrics import classification_report, accuracy_score, confusion_matrix from sklearn.model_selection import RandomizedSearchCV from sklearn.model_selection import cross_val_score, train_test_split, cross_validate from mlxtend.plotting import plot_learning_curves from mlxtend.plotting import plot_decision_regions import time # - # ### Count Vectorizer y = meta_train['rating'] X = text_train_vec X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.25, random_state=30027) # #### Testing different classifiers on Count Vectorizer feature set # + titles = ['Zero-R', #'GNB', 'MNB', 'LinearSVC', #'Decision Tree', #'KNN', #'Random Forest', #'Ada Boost', 'Logistic Regression'] models = [DummyClassifier(strategy='most_frequent'), #GaussianNB(), MultinomialNB(), svm.LinearSVC(), #DecisionTreeClassifier(), #KNeighborsClassifier(), #RandomForestClassifier(), #AdaBoostClassifier(), LogisticRegression()] for title, model in zip(titles, models): model.fit(X_train,y_train) start = time.time() acc = model.score(X_valid, y_valid) end = time.time() t = end - start print(title, "Accuracy:",acc, 'Time:', t) # - # ### Doc2Vec50 y = meta_train['rating'] X = train_doc2vec50 X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.25, random_state=30027) # #### Testing accuracies of different individual classifiers on Count Vectorizer feature set # + titles = ['Zero-R', 'GNB', #'MNB', 'LinearSVC', #'Decision Tree', #'KNN', #'Ada Boost', 'Random Forest', 'Logistic Regression'] models = [DummyClassifier(strategy='most_frequent'), GaussianNB(), #MultinomialNB(), svm.LinearSVC(), #DecisionTreeClassifier(), #KNeighborsClassifier(), #AdaBoostClassifier(), RandomForestClassifier(), LogisticRegression()] for title, model in zip(titles, models): model.fit(X_train,y_train) start = time.time() acc = model.score(X_valid, y_valid) end = time.time() t = end - start print(title, "Accuracy:",acc, 'Time:', t) # - # ### doc2vec100 y = meta_train['rating'] X = train_doc2vec100 X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.25, random_state=30027) # + titles = ['Zero-R', 'GNB', #'MNB', 'LinearSVC', #'Decision Tree', #'KNN', #'Ada Boost', #'Random Forest', 'Logistic Regression'] models = [DummyClassifier(strategy='most_frequent'), GaussianNB(), #MultinomialNB(), svm.LinearSVC(), #DecisionTreeClassifier(), #KNeighborsClassifier(), #AdaBoostClassifier(), #RandomForestClassifier(), LogisticRegression()] for title, model in zip(titles, models): model.fit(X_train,y_train) start = time.time() acc = model.score(X_valid, y_valid) end = time.time() t = end - start print(title, "Accuracy:",acc, 'Time:', t) # - # ### doc2vec200 y = meta_train['rating'] X = train_doc2vec200 X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.25, random_state=50) # + titles = ['Zero-R', 'GNB', #'MNB', 'LinearSVC', #'Decision Tree', #'KNN', #'Ada Boost', #'Random Forest', 'Logistic Regression'] models = [DummyClassifier(strategy='most_frequent'), GaussianNB(), #MultinomialNB(), svm.LinearSVC(), #DecisionTreeClassifier(), #KNeighborsClassifier(), #AdaBoostClassifier(), #RandomForestClassifier(), LogisticRegression()] for title, model in zip(titles, models): model.fit(X_train,y_train) start = time.time() acc = model.score(X_valid, y_valid) end = time.time() t = end - start print(title, "Accuracy:",acc, 'Time:', t) # - # ### Adding vote features to count vectorizer from scipy.sparse import hstack y = meta_train['rating'] X = text_train_vec three_features = meta_train[['vote_funny', 'vote_cool', 'vote_useful']].values X_train_full = hstack((X, three_features)) X_train, X_valid, y_train, y_valid = train_test_split(X_train_full, y, test_size=0.25, random_state=30027) # + titles = ['Zero-R', #'GNB', 'MNB', 'LinearSVC', #'Decision Tree', #'KNN', #'Random Forest', #'Ada Boost', 'Logistic Regression', 'SGDClasifier'] models = [DummyClassifier(strategy='most_frequent'), #GaussianNB(), MultinomialNB(), svm.LinearSVC(), #DecisionTreeClassifier(), #KNeighborsClassifier(), #RandomForestClassifier(), #AdaBoostClassifier(), LogisticRegression(), SGDClassifier()] for title, model in zip(titles, models): model.fit(X_train,y_train) start = time.time() acc = model.score(X_valid, y_valid) end = time.time() t = end - start print(title, "Accuracy:",acc, 'Time:', t) # - # ### Top 80% feature Selection from scipy import sparse data = pd.DataFrame(data=X_train.todense()) valid = pd.DataFrame(data=X_valid.todense()) # #### Don't run the next 2 if loading top_features using pickle features = data.columns k_best = SelectPercentile(chi2, percentile=80).fit(data, y_train) k_best_features_chi2 = [features[i] for i in k_best.get_support(indices=True)] with open("k_best_features_chi2.txt", "wb") as fp: #Pickling pickle.dump(k_best_features_chi2, fp) # #### Load top_features with open("k_best_features_chi2.txt", "rb") as fp: # Unpickling top_features = pickle.load(fp) # #### Don't run this, you can just load using the cell below # + X_train_new = data[data.columns[top_features]] X_valid_new = valid[valid.columns[top_features]] X_train_new = sparse.csr_matrix(X_train_new) X_valid_new = sparse.csr_matrix(X_valid_new) # - # #### Load X_train_new and X_valid_new using sparse.load_npz # + sparse.save_npz("X_train_new.npz", X_train_new) X_train_new = sparse.load_npz("X_train_new.npz") sparse.save_npz("X_valid_new.npz", X_valid_new) X_valid_new = sparse.load_npz("X_valid_new.npz") # - # ### Hyperparameter Tuning using GridSearchCV from sklearn.model_selection import GridSearchCV # + np.random.seed(999) nb_classifier = MultinomialNB() params_NB = {'alpha': np.arange(0,1.1,0.1)} best_MNB = GridSearchCV(estimator=nb_classifier, param_grid=params_NB, cv=5, verbose=1, scoring='accuracy', n_jobs=4) best_MNB.fit(X_train_new, y_train) print(best_MNB.best_params_) print(best_MNB.best_score_) # - # ### *This code takes forever to run! 15min+ # + np.random.seed(999) SGD_classifier = SGDClassifier() params_SGD = {'loss': ['log', 'modified_huber'], 'penalty': ['l1','l2'], 'epsilon': np.arange(0,1.1,0.1)} best_SGD = GridSearchCV(estimator=SGD_classifier, param_grid=params_SGD, cv=5, verbose=1, scoring='accuracy', n_jobs=4) best_SGD.fit(X_train_new, y_train) print(best_SGD.best_params_) print(best_SGD.best_score_) # + np.random.seed(999) LR_classifier = LogisticRegression() params_LR = {'penalty': ['l1','l2'], 'C': [0, 1, 10]} best_LR = GridSearchCV(estimator=LR_classifier, param_grid=params_LR, cv=5, verbose=1, scoring='accuracy', n_jobs=4) best_LR.fit(X_train_new, y_train) print(best_LR.best_params_) print(best_LR.best_score_) # + np.random.seed(999) knn_classifier = KNeighborsClassifier() params_knn = {'n_neighbors':[3,5,7,9,11,13,15], 'weights': ['uniform', 'distance']} best_knn = GridSearchCV(estimator=knn_classifier, param_grid=params_knn, cv=5, verbose=1, scoring='accuracy', n_jobs=4) best_knn.fit(X_train_new, y_train) print(best_knn.best_params_) print(best_knn.best_score_) # - # ### Ensemble Stacking Classifier # + base_clf1 = MultinomialNB(alpha=1) base_clf2 = SGDClassifier(loss="log", penalty='l2', epsilon=0.5) base_clf3 = LogisticRegression(penalty='l2', C=1) base_clf4 = KNeighborsClassifier(n_neighbors=7, weights='uniform') meta_clf = LogisticRegression(penalty='l2') # - stk_clf = StackingCVClassifier(classifiers=[base_clf1, base_clf2, base_clf3, base_clf4], meta_classifier=meta_clf, use_probas=True) stk_clf.fit(X_train_new, y_train) stk_clf.score(X_valid_new, y_valid) # ### Result / Analysis # #### Heatmap and results report # + category = [1,3,5] def report(clf, X_test, y_test): # generates a report summary y_pred = clf.predict(X_test) print(classification_report(y_test, y_pred)) print(f'Accuracy: {100*accuracy_score(y_pred, y_test):.2f}%') df = pd.DataFrame(confusion_matrix(y_test, y_pred, labels=category), index=category, columns=category) sns.heatmap(df, annot=True, fmt='d', cmap="Blues", annot_kws={"size": 20}) sns.set(font_scale=1.5) plt.xlabel("Predicted Label") plt.ylabel("True Label") plt.yticks(rotation=0) plt.show() return y_pred # - # #### MNB base_clf1.fit(X_train_new, y_train) report(base_clf1, X_valid_new, y_valid) # #### SGD base_clf2.fit(X_train_new, y_train) report(base_clf2, X_valid_new, y_valid) # #### LR base_clf3.fit(X_train_new, y_train) report(base_clf3, X_valid_new, y_valid) # #### KNN base_clf4.fit(X_train_new, y_train) report(base_clf4, X_valid_new, y_valid) # #### STK report(stk_clf, X_valid_new, y_valid) # ### Predicting on Test set for final Kaggle Submission X_test = text_test_vec three_features = meta_test[['vote_funny', 'vote_cool', 'vote_useful']].values X_test_new = hstack((X_test, three_features)) X_test_new_dense = pd.DataFrame(data=X_test_new.todense()) X_test_80 = X_test_new_dense[X_test_new_dense.columns[top_features]] X_test_sparse = sparse.csr_matrix(X_test_80) y_final_pred = stk_clf.predict(X_test_sparse) final_pred_df = pd.DataFrame() final_pred_df['Instance_id'] = range(1, len(y_final_pred) + 1) final_pred_df['rating'] = y_final_pred final_pred_df.set_index('Instance_id', inplace=True) import csv final_pred_df.to_csv('final_predictions.csv')
.ipynb_checkpoints/Ass2_code-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Importando librerias # %reset -f # # %matplotlib widget # Rutinas que requiere el sistema. import numpy as np import matplotlib.pyplot as plt # Esto controla el tamaño de las figuras en el script plt.rcParams['figure.figsize'] = (14, 7) import ipywidgets as ipw from ipywidgets import widgets, interact_manual from IPython.display import Image # Las rutinas que calculan posiciones y velocidades from F_FreeFall import FreeFall from FF_ideal import FF_V # Esto es para poder correr todo en linea ipw.interact_manual.opts['manual_name'] = "CALCULAR!" np.set_printoptions(formatter={'float': '{: 0.3f}'.format}) # - # # **CAÍDA LIBRE DE UNA ESFERA EN UN FLUIDO** # # El estudio del movimiento de partículas esféricas dentro de un fluido es de gran utilidad para revisar los conceptos de esfuerzo cortante, número de Reynolds y efectos de resistencia al movimiento por parte de un fluido a un cuerpo sólido. Para el experimento que se propone, imagine que una esfera de diámetro $D_p$ y densidad $\rho_P$ que parte desde una posición $z \ = \ 0 \ m$ ubicada en la parte alta de un espacio infinto lleno de un fluido con densidad $\rho_F$ y viscosidad cinemática $\nu_f$. En el instante en que la partícula es liberada, su velcidad inicial es $v_0 \ = \ 0 \ m/s$ y para efectos prácticos, el eje $z$ del movimiento aumentará hacia abajo en el sentido del movimiento de la esfera (Figura 1). # # <img src="201102_FreeFall_Image.png" alt="fishy" class="bg-primary mb-1" height="100px"> # # **Figura 1:** Esquema de una esfera cayendo en un fluido (Fuente: Biringen y Chow, 2011). # Como es de esperarse, la esfera aumentará su velocidad a medida que cae gracias al efecto de la gravedad. No obstante, ese movimiento no será uniformemente acelerado, ya que existen fuerzas viscosas ocasionadas por la interacción entre la superficie de la esfera y el fluido que irán en contra del movimiento e impedirán su aceleración. Dentro de esas fuerzas se encuentran: *i)* La fricción, ocasionada por la viscosidad del fluido en el que está cayendo la esfera, *ii)* la fuerza de flotación, ocasionada por la diferencia de densidades, *iii)* la fuerza de cuerpo acelerando, ocasionada por los flujos que se crean alrededor de la esfera cuando empieza a moverse y *iv)* el arrastre de onda, presente cuando las velocidades del fluido empiezan a aumentar. # **Variables utilizadas** # # A contiunuación se presenta una lista de las variables que se utilizan en el desdarrollo del presente taller con sus respectivas unidades de medida: # # $\rho_f \ (kg/m^3) \ = \ $ Densidad del fluido en el que cae la esfera # # $\rho_p \ (kg/m^3) \ = \ $ Densidad de la partícula que cae en el fluido # # $D_p \ (mm) \ = \ $ Diámetro de la partícula que cae en el fluido # # $\nu_f \ (m^2/s) \ = \ $ Viscosidad cinemática del fluido # **Suposiciones** # # * La posición inicial de la partícula es $z = 0 \ m$ y $z$ aumenta en el sentido del movimiento # * La partícula inicia su movimiento desde el reposo. Es decir, su velocidad inicial es $v_0 = 0 m/s$ # * La densidad de la partícula siempre debe ser mayor que la densidad del fluido para garantizar su caída # * La fuerza de fricción ocasionada por el movimiento es proporcional al cuadrado de la velocidad $(F_f \ \alpha \ \lvert v \rvert ^2)$ # # **Valores iniciales** # # Para el desarrollo del presente problema se asume que una esfera de vidrio de diámtro $D_p \ = \ 10 \ mm$ y densidad $\rho_p \ = \ 2200 \ kg/m^3$ cae en agua con densidad $\rho_f \ = \ 1000 \ kg/m^3$ y viscosidad cinemática $\nu_f \ = \ 1,14 \times 10^{-6} \ m^2/s$. # # Estos valores se dberán cambiar para observar los cambios en la caída libre de diferentes objetos en diferentes fluidos. El estudiante deberá investigar las diferentes propiedades de los fluidos en cualquier libro de texto de la asignatura. # =========================================================================== # Se define una función que engloba todo y corre el problema de caída libre # cuando se definen los valores de los parámetros que se van a variar para # el desarrollo del problema # =========================================================================== def RUN_ALL(rho_f, rho_p, nu, D): # Las condiciones iniciales del problema están "hard coded". No se puede # cambiar esta situación a menos que se cambien las funciones que hacen # los cálculos (Acá se ponen estas variables que son las mismas de la # rutina de cálculos) Z0 = 0 V0 = 0 h = 1e-3 tf = 3 # Se construye la variable ANSW que almacenará los vectores que serán # graficados. EL orden es el siguiente: T, Zi, Vi, Z, V ANSW = np.zeros((5, int(tf / h) + 1)) # Esto corre el caso ideal que no tiene fricción (la idea es poder # comparar lo que sucede en los dos casos) ANSW[1, :], ANSW[2, :] = FF_V(Z0, V0, h, tf) # Esto corre lo referente al caso con fricción y rozamiento ANSW[0, :], ANSW[3, :], ANSW[4, :] = FreeFall(rho_f, rho_p, nu, D) # Llamando a una función que haga gráficas bonitas en cuanto se refiere # a colores, fuentes y manejo del espacio plotresults(ANSW) # =========================================================================== # Haciendo la figura y poniéndola bonita para efectos de poder entrar a # hacerla en un app. Vamos a ver si la hacemos funcionar. # =========================================================================== def plotresults(ANSW): # plt.style.use('ggplot') fig = plt.figure(facecolor="white"); # fig = plt.subplots(nrows = 1, ncols = 2, sharey = True) # Para reducir espacio Labels = ["Posición teórica", "Posición real", "Velocidad Teórica", "Velocidad Real"] # Librería de colores (a lo mejor y no uso ninguno) Colors = ["darkmagenta","darkgreen","seagreen","dodgerblue","dimgrey"] FaceColors = ["lavenderblush","honeydew","mintcream","aliceblue","whitesmoke"] # Haciendo las gráficas por separado para que sea fácil poder cambiar # No necesito hacer ciclos porque son muy pocas. # Gráfica de posiciones contra tiempo ax1 = plt.subplot(1, 2, 1) ax1.plot(ANSW[0, :], ANSW[1, :], label = Labels[0], c = Colors[0], lw = 3, linestyle=':') ax1.plot(ANSW[0, :], ANSW[3, :], label = Labels[1], c = Colors[0], lw = 3) ax1.set_ylim([0, 4]) ax1.set_xlim([0, 2.5]) ax1.set_facecolor(FaceColors[0]) ax1.set_title('Posición vs tiempo', fontsize = 16) ax1.set_xlabel(r'Tiempo $t(s)$', fontsize = 10) ax1.set_ylabel(r'Posición $z(m)$', fontsize = 10) ax1.grid(True) # Gráfica de velocidades ax2 = plt.subplot(1, 2, 2) ax2.plot(ANSW[0, :], ANSW[2, :], label = Labels[2] ,c = Colors[3], lw = 3, linestyle = ':') ax2.plot(ANSW[0, :], ANSW[4, :], label = Labels[3], c = Colors[3], lw = 3) ax2.set_ylim([0, 4]) ax2.set_xlim([0, 2.5]) ax2.set_facecolor(FaceColors[3]) ax2.set_title('Velocidad vs tiempo', fontsize = 16) ax2.set_xlabel(r'Tiempo $t(s)$', fontsize = 10) ax2.set_ylabel(r'Velocidad $v(m/s)$', fontsize = 10) ax2.grid(True) # Mostrando los resultados en el notebook plt.show() # + # =========================================================================== # Funcion que corre todo lo que he progrmado y hace que salgan los sliders # para determinar los diferentes parámetros que gobiernan el fenómeno de la # caída libre de cuerpos en un fluido (desde el reposo) # =========================================================================== # Descripción de los sliders DESCR = [r"$\rho_f \ (kg/m^3)$",\ r"$\rho_p \ (kg/m^3)$",\ r"$\nu_f \ (m^2/s)$",\ r"$D_p \ (mm)$"] # Correr todo y poner los sliders en la pantalla interact_manual(RUN_ALL, \ rho_f=widgets.FloatText(description=DESCR[0], min=600, max=2000, value=1000 , readout_format='E'),\ rho_p=widgets.FloatText(description=DESCR[1], min=2000, max=1e4, value=2200 , readout_format='E'),\ nu=widgets.FloatText(description=DESCR[2], min=1e-8, max=0.1, value=1.14e-6 , readout_format='E'),\ D=widgets.FloatText(description=DESCR[3], min=0.1, max=100 , value=10, readout_format='E')); # - # Para poder ejecutar de forma correcta la práctica usted deberá correr este notebook en línea o tener unas especificaciones similares a las descritas a continuación: # + # Imprimiendo las dependencias para que esto pueda funcionar. # %load_ext watermark # python, ipython, packages, and machine characteristics # %watermark -v -m -p numpy,matplotlib,watermark # date print (" ") # %watermark -u -n -t -z
210118_FreeFall_00.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 1.1 Imports # + # import libraries import os import csv import numpy as np import pandas as pd import urllib from selenium import webdriver from selenium.webdriver.chrome.options import Options import concurrent.futures from concurrent.futures import ThreadPoolExecutor import time from PIL import Image # - # ignore warnings import warnings warnings.filterwarnings('ignore') # + # create function # open web browser def open_web_browser(): # incognito window chrome_options = Options() chrome_options.add_argument("--incognito") # add the argument and make the browser Headless. chrome_options.add_argument("--headless") # open web browser driver = webdriver.Chrome('C:/Users/Leung/Desktop/chromedriver.exe', options=chrome_options) return driver # - # list of categories categories = [ 'activewear', 'jackets', 'sweatshirts-hoodies' ] # # 1.2 Extracting Links # + # create function # get product links def product_links(driver, category): # category link URL = f'https://www.calvinklein.com/hk/en/women-apparel-{category}/' # navigate webpage driver.get(URL) # may need time sleep time.sleep(0.5) # open text file if os.path.exists(f"data/{category}_links.txt"): # open exist file f = open(f"data/{category}_links.txt", "w") else: # create new file f = open(f"data/{category}_links.txt", "x") # get all product elements product_elements = driver.find_elements_by_xpath('//a[@class="name-link"]') # get all product links f = open(f"data/{category}_links.txt", "a") for product_element in product_elements[:15]: f.write(product_element.get_attribute('href')) f.write('\n') f.close() # + # scrape all links # open web browser driver = open_web_browser() for category in categories: # extract and save product links product_links(driver, category) # close web browser driver.close() # - # # 1.3 Extracting Product Details # + # create function # get product details def product_detail(driver, URL): # input website driver.get(URL) # may need time sleep time.sleep(0.5) # get product name try: name = driver.find_element_by_tag_name('h1') name = name.text except: name = None # get product price try: price = driver.find_element_by_xpath('//span[@class="price-sales"]') price = price.text except: price = None # get product image try: # size img_width,img_height = 300,300 # get all images images = driver.find_elements_by_xpath('//img[@class="primary-image cloudzoom"]') # product image at index 0 img = images[0] # 'src' = get image source src = img.get_attribute('src') # download image urllib.request.urlretrieve(src, f'image/{name}.png') # resize image (smaller size) ori_img = Image.open(f'image/{name}.png') resize_img = ori_img.resize((img_width,img_height)) resize_img.save(f'image/{name}.png') img_file = f'image/{name}.png' except: img_file = None return name, price, img_file # + # for every category, scrape every link start_time = time.time() for category in categories: names = [] prices = [] img_files = [] urls = [] # open web browser driver = open_web_browser() # load all links links = [] f = open(f'data/{category}_links.txt','r') for link in f.read().split(): links.append(link) # scrape every link for link in links: url = link name, price, img_file = product_detail(driver, link) # append data into lists names.append(name) prices.append(price) img_files.append(img_file) urls.append(url) # close web browser driver.close() # convert to dataframe df = pd.DataFrame({ 'name': names, 'price': prices, 'img_file': img_files, 'url': urls }) # remove rows with missing values df.dropna(inplace=True) # reset index df.reset_index(drop=True, inplace=True) # select first 10 rows # df = df.iloc[:10] # save file df.to_csv(f'data/{category}.csv', index=False) end_time = time.time() print("total time taken:", end_time-start_time) # - # + # try multi-threading - creating empty files for category in categories: # create dataframe df = pd.DataFrame({ 'name': [], 'price': [], 'img_file': [], 'url': [] }) # save file df.to_csv(f'data/{category}.csv', index=False) # + # try multi-threading - scraping function def product_detail(driver, category, URL): # open web browser # driver = open_web_browser() # input website driver.get(URL) # may need time sleep time.sleep(0.5) # get product name try: name = driver.find_element_by_tag_name('h1') name = name.text except: name = None # get product price try: price = driver.find_element_by_xpath('//span[@class="price-sales"]') price = price.text except: price = None # get product image try: # size img_width,img_height = 300,300 # get all images images = driver.find_elements_by_xpath('//img[@class="primary-image cloudzoom"]') # product image at index 0 img = images[0] # 'src' = get image source src = img.get_attribute('src') # download image urllib.request.urlretrieve(src, f'image/{name}.png') # resize image (smaller size) ori_img = Image.open(f'image/{name}.png') resize_img = ori_img.resize((img_width,img_height)) resize_img.save(f'image/{name}.png') img_file = f'image/{name}.png' except: img_file = None # close web browser # driver.close() print(name, price, img_file) with open(f'data/{category}.csv', mode='a') as f: writer = csv.writer(f, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL) writer.writerow([name, price, img_file, URL]) # - # testing category = 'activewear' # + # try multi-threading - main function start_time = time.time() no_of_threads = 2 # open web browser # driver = open_web_browser() # load all links links = [] f = open(f'data/{category}_links.txt','r') for link in f.read().split(): links.append(link) # testing for 5 links links = links[:4] # args = ((driver,category,link) for link in links) # open unique driver for each thread drivers = [] for i in range(no_of_threads): drivers.append(open_web_browser()) # args = ((driver1, category, links[0]),(driver2, category, links[1])) # create args with corresponding drivers args = [] for i in range(0, len(links), no_of_threads): for j in range(no_of_threads): try: args.append((drivers[i+j], category, links[i+j])) except: pass args = tuple(args) with ThreadPoolExecutor(max_workers=no_of_threads) as executor: executor.map(lambda x: product_detail(*x),args) # close web browser for i in range(no_of_threads): drivers[i].close() end_time = time.time() print("total time taken:", end_time-start_time) # - with open(f'data/{category}.csv', mode='a') as f: writer = csv.writer(f, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL) writer.writerow([name, price, img_file, URL]) # performance driver3.close() # + # dataframe of 'activewear' df = pd.read_csv(f'data/{categories[0]}.csv') df.head() # + # dataframe of 'jackets' df = pd.read_csv(f'data/{categories[1]}.csv') df.head() # + # dataframe of 'sweatshirts-hoodies' df = pd.read_csv(f'data/{categories[2]}.csv') df.head() # - # + # END
.ipynb_checkpoints/1_data_collection-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Washington oil EXPORT from Salish Sea facilities # - Crude oil export counted if Delivered from a designated "Facility" (land-based) # - No ship-to-ship transfers included (yet). This means that tanker volume estimates may be disproportionately higher than actual if tankers act as fueling hub for ATBs and tugs # - All non-ATBs and non-tugs are categorized tankers # - Oil volumes organized by vessel type and by fuel type # - 88.87% of all tanker, ATB, and barge volume exports to the selected refineries and terminals is accounted for using the specified oil type clasifications (see percent_check in Out[11]) # - Total Export in this analysis: 4,424,259,209 gallons of oil export (See printout above In[9]) # # ### Known issues # - I need to convert to liters for our intents and purposes # - I don't yet know the proportion of fueling or cargo transfers that happens vessel-to-vessel as compared to from facility. I think the best way for me to estimate this will be to add up all fueling and cargo received by tanker, tank barge and ATB and subtract the values from facility-only transfers to get the ship-to-ship transfers. I presume the import pandas as pd import numpy as np import matplotlib.pyplot as plt # User inputs file_dir = '/Users/rmueller/Documents/UBC/MIDOSS/Data/DeptOfEcology/' file_name = 'MuellerTrans4-30-20.xlsx' # Import columns are: (E) 'StartDateTime, (H) Receiver, (O) Region, (P) Product, (Q) Quantity in Gallons, (R) Transfer Type (Fueling, Cargo, or Other)', (W) Deliverer type df = pd.read_excel(f'{file_dir}{file_name}',sheet_name='Vessel Oil Transfer', usecols="E,H,O,P,Q,R,W") # ### Extract data for oil cargo transferred to vessels for marine export approximation # Get all cargo fuel transfers bool_cargo = df['TransferType']=='Cargo' cargo_data = df[bool_cargo] oil_traffic = {} # ### Evaluate marine oil export # #### By vessel type # <NAME> (DOE) reccomends using "Facility" in "DelivererTypeDescription"[W] to flag all refineries and terminals # and "Region"[O] (which is identified by county) to ID all non-Salish-Sea traffic non_Salish_counties = ['Klickitat','Clark','Cowlitz','Wahkakum','Pacific','Grays Harbor'] # + # create dataset of export cargo in which export cargo is defined as # all cargo being transferred from (land-based) facility cargo_export = cargo_data[cargo_data.DelivererTypeDescription.str.contains('Facility')] # remove transfers from non-Salish-Sea locations print('Removing oil transfers from: ') for counties in non_Salish_counties: display(counties) cargo_export = cargo_export[~cargo_export.Region.str.contains(f'{counties}')] # need to re-set indexing in order to use row-index as data_frame index cargo_export.reset_index(drop=True, inplace=True) # + # introduce dictionary for cargo traffic oil_traffic['cargo'] = {} # ATB fuel export oil_traffic['cargo']['atb'] = {} oil_traffic['cargo']['atb']['percent_volume_export'] = {} # percentage of total crude export by oil type oil_traffic['cargo']['atb']['volume_export_total'] = 0 oil_traffic['cargo']['atb']['volume_export'] = [] # a vector of volumes ordered to pair with oil_type oil_traffic['cargo']['atb']['oil_type'] = [] # a vector of oil types ordered to pair with volume_export # barge fuel export oil_traffic['cargo']['barge'] = {} oil_traffic['cargo']['barge']['percent_volume_export'] = {} oil_traffic['cargo']['barge']['volume_export_total'] = 0 oil_traffic['cargo']['barge']['volume_export'] = [] oil_traffic['cargo']['barge']['oil_type'] = [] # tanker fuel export oil_traffic['cargo']['tanker'] = {} oil_traffic['cargo']['tanker']['percent_volume_export'] = {} oil_traffic['cargo']['tanker']['volume_export_total'] = 0 oil_traffic['cargo']['tanker']['volume_export'] = [] oil_traffic['cargo']['tanker']['oil_type'] = [] # total oil_traffic['cargo']['export_total'] = {} oil_traffic['cargo']['export_total']['all'] = 0 oil_traffic['cargo']['export_total']['atb_barge_tankers'] = 0 # identify ship traffic [nrows,ncols] = cargo_export.shape # total up volume of oil transferred onto ATB BARGES, non-ATB barges, and other vessels # create counter for vessel-type atb_counter = 0 barge_counter = 0 tanker_counter = 0 for rows in range(nrows): # Add up all oil import to refineries and terminals, regardless of vessel-type oil_traffic['cargo']['export_total']['all'] += cargo_export.TransferQtyInGallon[rows] # ATB if 'ATB' in cargo_export.Receiver[rows]: oil_traffic['cargo']['atb']['volume_export_total'] += cargo_export.TransferQtyInGallon[rows] oil_traffic['cargo']['atb']['volume_export'].append(cargo_export.TransferQtyInGallon[rows]) oil_traffic['cargo']['atb']['oil_type'] .append(cargo_export.Product[rows]) atb_counter += 1 # Barges elif ('BARGE' in cargo_export.Receiver[rows] or \ 'Barge' in cargo_export.Receiver[rows] or \ 'PB' in cargo_export.Receiver[rows] or \ 'YON' in cargo_export.Receiver[rows] or \ 'DLB' in cargo_export.Receiver[rows]): oil_traffic['cargo']['barge']['volume_export_total'] += cargo_export.TransferQtyInGallon[rows] oil_traffic['cargo']['barge']['volume_export'].append(cargo_export.TransferQtyInGallon[rows]) oil_traffic['cargo']['barge']['oil_type'].append(cargo_export.Product[rows]) barge_counter += 1 #display(cargo_data.Receiver[rows]) # Tankers else: oil_traffic['cargo']['tanker']['volume_export_total'] += cargo_export.TransferQtyInGallon[rows] oil_traffic['cargo']['tanker']['volume_export'].append(cargo_export.TransferQtyInGallon[rows]) oil_traffic['cargo']['tanker']['oil_type'].append(cargo_export.Product[rows]) tanker_counter += 1 #display(cargo_data.Receiver[rows]) oil_traffic['cargo']['export_total']['atb_barge_tankers'] = oil_traffic['cargo']['atb']['volume_export_total'] + oil_traffic['cargo']['barge']['volume_export_total'] + oil_traffic['cargo']['tanker']['volume_export_total'] atb_barge_tanker_percent = 100 * oil_traffic['cargo']['export_total']['atb_barge_tankers']/oil_traffic['cargo']['export_total']['all'] print('Volume of oil import captured by ATB, barge, and tank traffic used here: ' + str(atb_barge_tanker_percent) + '%') print('Total Export in this analysis: ' + str(oil_traffic['cargo']['export_total']['all']) + ' gallons') # - # Calculate percent of total transport by vessel type atb_percent = 100*oil_traffic['cargo']['atb']['volume_export_total']/oil_traffic['cargo']['export_total']['all'] barge_percent = 100*oil_traffic['cargo']['barge']['volume_export_total']/oil_traffic['cargo']['export_total']['all'] tanker_percent = 100*oil_traffic['cargo']['tanker']['volume_export_total']/oil_traffic['cargo']['export_total']['all'] print(atb_percent) print(barge_percent) print(tanker_percent) volume_export_byvessel = [oil_traffic['cargo']['atb']['volume_export_total'], oil_traffic['cargo']['barge']['volume_export_total'], oil_traffic['cargo']['tanker']['volume_export_total']] #colors = ['b', 'g', 'r', 'c', 'm', 'y'] labels = [f'ATB ({atb_percent:3.1f}%)', f'Tow Barge ({barge_percent:3.1f}%)',f'Tanker ({tanker_percent:3.1f}%)'] plt.gca().axis("equal") plt.pie(volume_export_byvessel, labels= labels) plt.title('Types of vessels by volume receiving oil as cargo from land-based facilities within the Salish Sea') # #### By oil type within vessel type classification # + oil_traffic['cargo']['atb']['CRUDE']=0 oil_traffic['cargo']['atb']['GASOLINE']=0 oil_traffic['cargo']['atb']['JET FUEL/KEROSENE']=0 oil_traffic['cargo']['atb']['DIESEL/MARINE GAS OIL']=0 oil_traffic['cargo']['atb']['DIESEL LOW SULPHUR (ULSD)']=0 oil_traffic['cargo']['atb']['BUNKER OIL/HFO']=0 oil_traffic['cargo']['atb']['other']=0 oil_traffic['cargo']['barge']['CRUDE']=0 oil_traffic['cargo']['barge']['GASOLINE']=0 oil_traffic['cargo']['barge']['JET FUEL/KEROSENE']=0 oil_traffic['cargo']['barge']['DIESEL/MARINE GAS OIL']=0 oil_traffic['cargo']['barge']['DIESEL LOW SULPHUR (ULSD)']=0 oil_traffic['cargo']['barge']['BUNKER OIL/HFO']=0 oil_traffic['cargo']['barge']['other']=0 oil_traffic['cargo']['tanker']['CRUDE']=0 oil_traffic['cargo']['tanker']['GASOLINE']=0 oil_traffic['cargo']['tanker']['JET FUEL/KEROSENE']=0 oil_traffic['cargo']['tanker']['DIESEL/MARINE GAS OIL']=0 oil_traffic['cargo']['tanker']['DIESEL LOW SULPHUR (ULSD)']=0 oil_traffic['cargo']['tanker']['BUNKER OIL/HFO']=0 oil_traffic['cargo']['tanker']['other']=0 oil_types = ['CRUDE', 'GASOLINE', 'JET FUEL/KEROSENE','DIESEL/MARINE GAS OIL', 'DIESEL LOW SULPHUR (ULSD)', 'BUNKER OIL/HFO', 'other'] percent_check = 0 for oil_name in range(len(oil_types)): # ATBs for rows in range(len(oil_traffic['cargo']['atb']['volume_export'])): if oil_types[oil_name] in oil_traffic['cargo']['atb']['oil_type'][rows]: oil_traffic['cargo']['atb'][oil_types[oil_name]] += oil_traffic['cargo']['atb']['volume_export'][rows] # Barges for rows in range(len(oil_traffic['cargo']['barge']['volume_export'])): if oil_types[oil_name] in oil_traffic['cargo']['barge']['oil_type'][rows]: oil_traffic['cargo']['barge'][oil_types[oil_name]] += oil_traffic['cargo']['barge']['volume_export'][rows] # Tankers (non-ATB or Barge) for rows in range(len(oil_traffic['cargo']['tanker']['volume_export'])): if oil_types[oil_name] in oil_traffic['cargo']['tanker']['oil_type'][rows]: oil_traffic['cargo']['tanker'][oil_types[oil_name]] += oil_traffic['cargo']['tanker']['volume_export'][rows] # calculate percentages based on total oil cargo exports oil_traffic['cargo']['atb']['percent_volume_export'][oil_types[oil_name]] = 100 * oil_traffic['cargo']['atb'][oil_types[oil_name]]/oil_traffic['cargo']['export_total']['all'] oil_traffic['cargo']['barge']['percent_volume_export'][oil_types[oil_name]] = 100 * oil_traffic['cargo']['barge'][oil_types[oil_name]]/oil_traffic['cargo']['export_total']['all'] oil_traffic['cargo']['tanker']['percent_volume_export'][oil_types[oil_name]] = 100 * oil_traffic['cargo']['tanker'][oil_types[oil_name]]/oil_traffic['cargo']['export_total']['all'] for name in ['atb', 'barge', 'tanker']: percent_check += oil_traffic['cargo'][f'{name}']['percent_volume_export'][oil_types[oil_name]] percent_check # - # #### Plot up ATB fuel types # + atb_volume_export = [oil_traffic['cargo']['atb']['CRUDE'], oil_traffic['cargo']['atb']['GASOLINE'], oil_traffic['cargo']['atb']['JET FUEL/KEROSENE'], oil_traffic['cargo']['atb']['DIESEL/MARINE GAS OIL'], oil_traffic['cargo']['atb']['DIESEL LOW SULPHUR (ULSD)'], oil_traffic['cargo']['atb']['BUNKER OIL/HFO']] labels = [] for ii in range(len(oil_types)-1): labels.append(f'{oil_types[ii]} ({oil_traffic["cargo"]["atb"]["percent_volume_export"][oil_types[ii]]:3.1f}) %') plt.gca().axis("equal") plt.pie(atb_volume_export, labels= labels) plt.title('Types of oil transport by volume for ATBs from Salish Sea facilities') # - # #### Plot up Barge fuel types # + # barge_volume_export = [oil_traffic['cargo']['barge']['CRUDE'], # oil_traffic['cargo']['barge']['GASOLINE'], # oil_traffic['cargo']['barge']['JET FUEL/KEROSENE'], # oil_traffic['cargo']['barge']['DIESEL/MARINE GAS OIL'], # oil_traffic['cargo']['barge']['DIESEL LOW SULPHUR (ULSD)'], # oil_traffic['cargo']['barge']['BUNKER OIL/HFO'], # oil_traffic['cargo']['barge']['other']] barge_volume_export = [oil_traffic['cargo']['barge']['JET FUEL/KEROSENE'], oil_traffic['cargo']['barge']['DIESEL/MARINE GAS OIL'], oil_traffic['cargo']['barge']['DIESEL LOW SULPHUR (ULSD)'], oil_traffic['cargo']['barge']['BUNKER OIL/HFO']] labels = [] for ii in [2,3,4,5]: labels.append(f'{oil_types[ii]} ({oil_traffic["cargo"]["barge"]["percent_volume_export"][oil_types[ii]]:3.1f}) %') plt.gca().axis("equal") plt.pie(barge_volume_export, labels= labels) plt.title('Types of oil transport by volume for barges from Salish Sea facilities') # - # #### Plot up Tanker fuel types # + tanker_volume_export = [oil_traffic['cargo']['tanker']['CRUDE'], oil_traffic['cargo']['tanker']['GASOLINE'], oil_traffic['cargo']['tanker']['JET FUEL/KEROSENE'], oil_traffic['cargo']['tanker']['DIESEL/MARINE GAS OIL'], oil_traffic['cargo']['tanker']['DIESEL LOW SULPHUR (ULSD)'], oil_traffic['cargo']['tanker']['BUNKER OIL/HFO'], oil_traffic['cargo']['tanker']['other']] labels = [] for ii in range(len(oil_types)): labels.append(f'{oil_types[ii]} ({oil_traffic["cargo"]["tanker"]["percent_volume_export"][oil_types[ii]]:3.1f}) %') plt.gca().axis("equal") plt.pie(tanker_volume_export, labels= labels) plt.title('Types of oil transport by volume for tankers from Salish Sea Facilities \n with percent of net crude export') # - cargo_export # #### This is the wrong way (includes ship-to-ship transfers) # + # # remove cargo fuel transfers to land-based entity in order to isolate export cargo # oil_traffic['cargo'] = {} # cargo_data = cargo_data[~cargo_data.Receiver.str.contains('Refinery')] # cargo_data = cargo_data[~cargo_data.Receiver.str.contains('Refining')] # this is specifically for U.S. Oil in Tacoma # cargo_data = cargo_data[~cargo_data.Receiver.str.contains('Terminal')] # cargo_data = cargo_data[~cargo_data.Receiver.str.contains('Petroleum')] # This is for smaller outfits, like Ranier Petroleum or Maxum Petrolem # cargo_data = cargo_data[~cargo_data.Receiver.str.contains('Inc')] # Covich, Petrocard # cargo_data = cargo_data[~cargo_data.Receiver.str.contains('LLC')] # Pacific Functional Fluids, LLC; Coleman Oil Company, LLC # cargo_data = cargo_data[~cargo_data.Receiver.str.contains('Ballard Oil Co.')] # cargo_data = cargo_data[~cargo_data.Receiver.str.contains('Reisner Distributor, Inc.')] # cargo_data = cargo_data[~cargo_data.Receiver.str.contains('NASWI')] # # This dataset contains Columbia River locations that need to be scrubbed out # cargo_data = cargo_data[~cargo_data.City.str.contains('vancouver')] # # cargo_data is no longer indexed chronologically after removing 'Refinery', 'Terminal' and 'NASWI' entries # # need to re-set indexing in order to use row-index as data_frame index # cargo_data.reset_index(drop=True, inplace=True) # [nrows,ncols] = cargo_data.shape # # introduce dictionary entries for fuel volume export # oil_traffic['cargo']['atb_volume_export'] = 0 # oil_traffic['cargo']['barge_volume_export'] = 0 # oil_traffic['cargo']['other_export'] = 0 # oil_traffic['cargo']['ship_to_ship'] = 0 # oil_traffic['cargo']['total'] = 0 # # carrier names # oil_traffic['cargo']['atb_carrier']={} # # total up volume of oil transferred onto ATB BARGES, non-ATB barges, and other vessels # for rows in range(nrows): # # from land-based terminals and refineries only # if 'Refinery' in cargo_data.Deliverer[rows] or 'Refining' in cargo_data.Deliverer[rows] or 'Terminal' in cargo_data.Deliverer[rows]: # if 'ATB' in cargo_data.Receiver[rows] : # oil_traffic['cargo']['atb_volume_export'] = oil_traffic['cargo']['atb_volume_export'] + cargo_data.TransferQtyInGallon[rows] # #display(cargo_data.Receiver[rows]) # #if cargo_data.Receiver[rows] not in oil_traffic['cargo']['atb_carrier']: # # oil_traffic['cargo']['atb_carrier'][f'{oil_traffic['cargo']['atb_carrier']}']=[f'{oil_traffic['cargo']['atb_carrier']}'] # elif 'BARGE' in cargo_data.Receiver[rows] or 'Barge' in cargo_data.Receiver[rows] or 'PB' in cargo_data.Receiver[rows] or 'YON' in cargo_data.Receiver[rows] or 'DLB' in cargo_data.Receiver[rows]: # and not 'ATB' in cargo_data.Receiver[rows]: # oil_traffic['cargo']['barge_volume_export'] = oil_traffic['cargo']['barge_volume_export'] + cargo_data.TransferQtyInGallon[rows] # #display(cargo_data.Receiver[rows]) # else: # oil_traffic['cargo']['other_export'] = oil_traffic['cargo']['other_export'] + cargo_data.TransferQtyInGallon[rows] # #display(cargo_data.Receiver[rows]) # else: # oil_traffic['cargo']['ship_to_ship'] = oil_traffic['cargo']['ship_to_ship'] + cargo_data.TransferQtyInGallon[rows] # oil_traffic['cargo']['total'] = oil_traffic['cargo']['atb_volume_export'] + oil_traffic['cargo']['barge_volume_export'] + oil_traffic['cargo']['other_export'] + oil_traffic['cargo']['ship_to_ship'] # oil_traffic['cargo']['total']/cargo_data.TransferQtyInGallon.sum() # - # ### Add up volume transferred by fuel type # + # Add up the total volume of marine transport by product (this includes all transfers) gas_export_data = cargo_data[cargo_data['Product']=='GASOLINE'] gas_export_total = gas_export_data['TransferQtyInGallon'].sum() gas_export_percent = 100*gas_export_total/cargo_data.TransferQtyInGallon.sum() diesel_export_data = cargo_data[cargo_data['Product']=='DIESEL/MARINE GAS OIL'] diesel_export_total = diesel_export_data['TransferQtyInGallon'].sum() diesel_export_percent = 100*diesel_export_total/cargo_data.TransferQtyInGallon.sum() bunker_export_data = cargo_data[cargo_data['Product']=='BUNKER OIL/HFO'] bunker_export_total = bunker_export_data['TransferQtyInGallon'].sum() bunker_export_percent = 100*bunker_export_total/cargo_data.TransferQtyInGallon.sum() jet_export_data = cargo_data[cargo_data['Product']=='JET FUEL/KEROSENE'] jet_export_total = jet_export_data['TransferQtyInGallon'].sum() jet_export_percent = 100*jet_export_total/cargo_data.TransferQtyInGallon.sum() ulsd_export_data = cargo_data[cargo_data['Product']=='DIESEL LOW SULPHUR (ULSD)'] ulsd_export_total = ulsd_export_data['TransferQtyInGallon'].sum() ulsd_export_percent = 100*ulsd_export_total/cargo_data.TransferQtyInGallon.sum() crude_export_data = cargo_data[cargo_data['Product'].str.contains('CRUDE')] crude_export_total = crude_export_data['TransferQtyInGallon'].sum() crude_export_percent = 100*crude_export_total/cargo_data.TransferQtyInGallon.sum() Other_fuel_total = cargo_data['TransferQtyInGallon'].sum() - crude_export_total - ulsd_export_total - jet_export_total - bunker_export_total - diesel_export_total - gas_export_total other_fuel_percent = 100*Other_fuel_total/cargo_data.TransferQtyInGallon.sum() total_percent = other_fuel_percent+crude_export_percent+ulsd_export_percent+jet_export_percent+bunker_export_percent+diesel_export_percent+gas_export_percent total_percent # - # ### Plot up results oil_export_values = [gas_export_total, diesel_export_total , bunker_export_total, jet_export_total, crude_export_total, ulsd_export_total, Other_fuel_total] #colors = ['b', 'g', 'r', 'c', 'm', 'y'] labels = [f'Gas ({gas_export_percent:3.1f}%)', f'Diesel ({diesel_export_percent:3.1f}%)', f'Bunker-C ({bunker_export_percent:3.1f}%)', f'Jet ({jet_export_percent:3.1f}%)', f'Crude ({crude_export_percent:3.1f}%)', f'Low Sulpher Diesel ({ulsd_export_percent:3.1f}%)', f'Other ({other_fuel_percent:3.1f}%)'] plt.gca().axis("equal") plt.pie(oil_export_values, labels= labels) plt.title('Marine Oil Export from WA Refineries and Terminals') cargo_data
notebooks/graphics/Oil_transport/Oil_Cargo_WA_export.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # "Replace elements with greatest element on right side" # > "[[Leetcode]](https://leetcode.com/problems/replace-elements-with-greatest-element-on-right-side)[Arrays]" # # - toc: true # - badges: true # - comments: true # - categories: [Problem Solving,Leetcode] # - comments: true # - author: <NAME> # # Problem Statement # # Given an array arr, replace every element in that array with the greatest element among the elements to its right, and replace the last element with -1. # # After doing so, return the array. [URL](https://leetcode.com/problems/replace-elements-with-greatest-element-on-right-side) # # # ### Example 1: # ``` # Input: arr = [17,18,5,4,6,1] # Output: [18,6,6,6,1,-1] # ``` # # ### Constraints: # # ``` # 1 <= arr.length <= 10^4 # 1 <= arr[i] <= 10^5 # ``` # # Approach 1 # - Naive implementation with Brute force approach. # + #collapse-hide from typing import List class Solution: def replaceElements(self, arr: List[int]) -> List[int]: for i in range(len(arr)-1): arr[i] = returnMax(arr[i+1:]) arr[len(arr)-1] = -1 return arr def returnMax(arr): for i in range(len(arr)-1): if arr[i] > arr[i+1]: arr[i], arr[i+1] = arr[i+1], arr[i] return arr[len(arr)-1] # - sol = Solution() sol.replaceElements([17,18,5,4,6,1]) testCastList[:10] len(testCastList) # ![](Images/Problem_solving/replaceElements/approach0_submission.PNG) # **Worst case performance in Time: $O(n^{2})$** # # Approach 2 # ![](Images/Problem_solving/replaceElements/approach2_workflow.png) #collapse-hide class Solution: def replaceElements(self, arr: List[int]) -> List[int]: arrLen = len(arr) maxSoFar = arr[arrLen-1] arr[arrLen-1] = -1 for i in range(arrLen-2, -1, -1): temp = arr[i] arr[i] = maxSoFar if temp > maxSoFar: maxSoFar = temp return arr sol = Solution() sol.replaceElements([17,18,5,4,6,1]) # ![](Images/Problem_solving/replaceElements/approach2_submission.PNG) # **Worst case performance in Time: $O(n)$**
_notebooks/2020-05-05-Replace-elements-with-greatest-element-on-right-side.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import tensorflow as tf import pandas as pd import matplotlib.pyplot as plt from tensorflow.keras import Sequential from tensorflow.keras.layers import Dense, Activation from sklearn.model_selection import train_test_split import numpy as np from pandas import ExcelFile from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() sign = pd.read_csv('motion.csv',index_col='Label') sign # - x = sign[['LeftForeArm.x', 'LeftForeArm.y', 'LeftForeArm.z', 'LeftHand.x', 'LeftHand.y', 'LeftHand.z', 'LeftHandThumb1.x', 'LeftHandThumb1.y', 'LeftHandThumb1.z', 'LeftHandThumb2.x', 'LeftHandThumb2.y', 'LeftHandThumb2.z', 'LeftHandThumb3.x', 'LeftHandThumb3.y', 'LeftHandThumb3.z', 'LeftInHandIndex.x', 'LeftInHandIndex.y', 'LeftInHandIndex.z', 'LeftHandIndex1.x', 'LeftHandIndex1.y', 'LeftHandIndex1.z', 'LeftHandIndex2.x', 'LeftHandIndex2.y', 'LeftHandIndex2.z', 'LeftHandIndex3.x', 'LeftHandIndex3.y', 'LeftHandIndex3.z', 'LeftInHandMiddle.x', 'LeftInHandMiddle.y', 'LeftInHandMiddle.z', 'LeftHandMiddle1.x', 'LeftHandMiddle1.y', 'LeftHandMiddle1.z', 'LeftHandMiddle2.x', 'LeftHandMiddle2.y', 'LeftHandMiddle2.z', 'LeftHandMiddle3.x', 'LeftHandMiddle3.y', 'LeftHandMiddle3.z', 'LeftInHandRing.x', 'LeftInHandRing.y', 'LeftInHandRing.z', 'LeftHandRing1.x', 'LeftHandRing1.y', 'LeftHandRing1.z', 'LeftHandRing2.x', 'LeftHandRing2.y', 'LeftHandRing2.z', 'LeftHandRing3.x', 'LeftHandRing3.y', 'LeftHandRing3.z', 'LeftInHandPinky.x', 'LeftInHandPinky.y', 'LeftInHandPinky.z', 'LeftHandPinky1.x', 'LeftHandPinky1.y', 'LeftHandPinky1.z', 'LeftHandPinky2.x', 'LeftHandPinky2.y', 'LeftHandPinky2.z', 'LeftHandPinky3.x', 'LeftHandPinky3.y', 'LeftHandPinky3.z']] y = sign[['result']] x = scaler.fit_transform(x[:]) print(x) from sklearn.preprocessing import OneHotEncoder enc = OneHotEncoder() enc.fit(y) y_onehot = enc.transform(y).toarray() y y_onehot x_train_all,x_test,y_train_all,y_test = train_test_split(x,y_onehot, stratify=y_onehot,test_size=0.2, random_state=42) x_train,x_val,y_train,y_val = train_test_split(x_train_all,y_train_all,stratify=y_train_all,test_size=0.2, random_state=42) print(x.shape) print(x_train_all.shape) print(x_train.shape) print(x_val.shape) print(y_onehot.shape) print(y_train_all.shape) print(y_train.shape) print(y_val.shape) y_train_all # + model = Sequential() model.add(Dense(100, input_shape =(63,))) model.add(Activation('sigmoid')) model.add(Dense(5)) model.add(Activation('softmax')) model.compile(optimizer = 'sgd', loss='categorical_crossentropy', metrics=['accuracy']) model.summary() # - history = model.fit(x_train,y_train, epochs=30, validation_data = (x_val, y_val)) # + # 모델 구조 model = Sequential() model.add(Dense(100, input_shape =(63,))) model.add(Activation('sigmoid')) model.add(Dense(25)) model.add(Activation('sigmoid')) model.add(Dense(15)) model.add(Activation('relu')) model.add(Dense(5)) model.add(Activation('softmax')) model.compile(optimizer = 'sgd', loss='categorical_crossentropy', metrics=['accuracy']) model.summary() # - #데이터로 모델 학습하기 history = model.fit(x_train,y_train,epochs=50, validation_data = (x_val, y_val)) plt.plot(history.history['loss']) plt.xlabel('epoch') plt.ylabel('loss') plt.plot(history.history['accuracy']) plt.xlabel('epoch') plt.ylabel('accuracy') pred = model.predict(x_test) print(pred) pred_label = np.argmax(pred, axis=1) print(pred_label) pred_label[0:9] y_test[0:9]
Sign.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Transit model test 3 alone # # Just pulled in from the other notebook to be able to work on it directly. # # Make sure you update your configfile accordingly: # # ```ini # [setup] # data_set = simple_transit # ``` # + # Imports import os import numpy as np import matplotlib.pyplot as plt from astropy.constants import G import astropy.units as u from sherpa.models import model from sherpa.data import Data1D from sherpa.plot import DataPlot from sherpa.plot import ModelPlot from sherpa.fit import Fit from sherpa.stats import LeastSq from sherpa.optmethods import LevMar from sherpa.stats import Chi2 from sherpa.plot import FitPlot os.chdir('../../../exotic-ism') import margmodule as marg from config import CONFIG_INI # + # Test parameters planet_sys = CONFIG_INI.get('setup', 'data_set') dtosec = CONFIG_INI.getfloat('constants', 'dtosec') period = CONFIG_INI.getfloat(planet_sys, 'Per') Per = period * dtosec aor = CONFIG_INI.getfloat(planet_sys, 'aor') constant1 = (G * Per * Per / (4 *np.pi * np.pi))**(1/3) msmpr = (aor/(constant1))**3 print('msmpr: {}'.format(msmpr)) print('G: {}'.format(G.value)) print('Per: {} sec'.format(Per)) # limb darkening parameters c1 = 0.66396105 c2 = -0.12617095 c3 = 0.053649047 c4 = -0.026713433 # Create x-array for phase - careful, this is not a regular grid, but consists of three groups of data points data_x = np.array([-0.046, -0.044, -0.042, -0.040, -0.038, -0.036, -0.034, -0.032, -0.030, -0.006, -0.004, -0.002, 0.0, 0.002, 0.004, 0.006, 0.008, 0.01, 0.032, 0.034, 0.036, 0.038, 0.040, 0.042, 0.044, 0.046,0.048]) # Make denser and REGULAR x grid for plotting of smooth model smooth_x = np.arange(data_x[0], data_x[-1], 0.001) # Initial flux data original_y = np.array([1.0000000, 1.0000000, 1.0000000, 1.0000000, 1.0000000, 1.0000000, 1.0000000, 1.0000000, 1.0000000, 0.99000000, 0.99000000, 0.99000000, 0.99000000, 0.99000000, 0.99000000, 0.99000000, 0.99000000, 0.99000000, 1.0000000, 1.0000000, 1.0000000, 1.0000000, 1.0000000, 1.0000000, 1.0000000, 1.0000000, 1.0000000]) uncertainty = np.array([0.0004] * len(data_x)) random_scatter = np.array([0.32558253, -0.55610514, -1.1150768, -1.2337022, -1.2678875, 0.60321692, 1.1025507, 1.5080730, 0.76113001, 0.51978011, 0.72241364, -0.086782108, -0.22698337, 0.22780245, 0.47119014, -2.1660677, -1.2477670, 0.28568456, 0.40292731, 0.077955817, -1.1090623, 0.66895172, -0.59215439, 0.79973968, 1.0603756, 0.82684954, -1.8334587]) print('random_scatter.shape: {}'.format(random_scatter.shape)) data_y_no_slope = original_y + (random_scatter * uncertainty) # Create linear slope m_fac = 0.04 line = (data_x * m_fac) + 1.00 # add the systematic model (slope/line) to the y array data_y = line * data_y_no_slope # - # Quick visualization of data plt.scatter(data_x, data_y) plt.plot(data_x, line, c='r') plt.ylim(0.987, 1.004) # Make Sherpa data object out of this data = Data1D('example_transit', data_x, data_y, staterror=uncertainty) # create data object dplot = DataPlot() # create data *plot* object dplot.prepare(data) # prepare plot dplot.plot() # Create and visualize model model = marg.Transit(data_x[0], msmpr, c1, c2, c3, c4, flux0=data_y[0], x_in_phase=True, name='transit', sh=None) print(model) # + # Freeze almost all parameters # Note how m_fac stays thawed model.flux0.freeze() model.epoch.freeze() model.inclin.freeze() model.msmpr.freeze() model.ecc.freeze() model.hstp1.freeze() model.hstp2.freeze() model.hstp3.freeze() model.hstp4.freeze() model.xshift1.freeze() model.xshift2.freeze() model.xshift3.freeze() model.xshift4.freeze() print(model) # + # Set up statistics and optimizer stat = Chi2() opt = LevMar() opt.config['epsfcn'] = np.finfo(float).eps # adjusting epsfcn to double precision #print(stat) print(opt) # Set up fit tfit = Fit(data, model, stat=stat, method=opt) print('Fit information:') print(tfit) # - # Perform the fit fitresult = tfit.fit() print(fitresult) # ### Fit results # smooth model b0_smooth = marg.impact_param((model.period.val*u.d).to(u.s), model.msmpr.val, smooth_x, model.inclin.val*u.rad) smooth_y, _mulimbf2 = marg.occultnl(model.rl.val, model.c1.val, model.c2.val, model.c3.val, model.c4.val, b0_smooth) # + # Errors from Hessian calc_errors = np.sqrt(fitresult.extra_output['covar'].diagonal()) rl_err = calc_errors[0] m_fac_err = calc_errors[1] # Results print('rl = {} +/- {}'.format(model.rl.val, rl_err)) print('m_fac = {} +/- {}'.format(model.m_fac.val, m_fac_err)) print('Reduced chi-squared: {}'.format(fitresult.rstat)) # Display model from denser grid over real data after fitting plt.figure(figsize=(11, 5)) plt.plot(smooth_x, smooth_y, c='orange', label='smooth model') plt.errorbar(data_x, data_y, yerr=uncertainty, fmt='.', label='data') plt.plot(data_x, line, c='r', linestyle='dashed', label='systematic model') plt.legend() plt.xlabel('phase') plt.ylabel('flux') plt.title('Smooth model test 3 over actual data') # -
notebooks/dev/sherpa_integration/Transit model test 3 alone.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.7.10 64-bit (''language-detection'': conda)' # name: python3710jvsc74a57bd07cec81db05bb032c480a4394d2b8453a6daf84bc5a6e44507f5f38b80d55eb6f # --- # + id="QP2J7vNLN6fL" # !git clone https://github.com/stefan-matcovici/language-detection.git # !pip install -r language-detection/requirements.txt # + id="C_I01ilVaoAs" import sys sys.path.append('/content/language-detection/src') # + id="oxCIKz6L4lto" # %load_ext autoreload # %autoreload 2 # + id="9MSYGG3bOtDf" import pycld2 as cld2 import langid from iso639 import languages from sklearn.pipeline import Pipeline from sklearn.feature_extraction.text import CountVectorizer from sklearn.metrics import classification_report, confusion_matrix import seaborn as sns from matplotlib import pyplot as plt from tqdm.auto import tqdm from operator import itemgetter from data.willi2018 import Wili2018Dataset # + [markdown] id="CVhHybsu5qGk" # ###Select subset of languages from all 3 libraries # + id="byL-jtJAQIjO" cld2_languages = set(list(map(itemgetter(1), cld2.LANGUAGES))) langdetect_languages = ["af", "ar", "bg", "bn", "ca", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "fi", "fr", "gu", "he", "hi", "hr", "hu", "id", "it", "ja", "kn", "ko", "lt", "lv", "mk", "ml", "mr", "ne", "nl", "no", "pa", "pl", "pt", "ro", "ru", "sk", "sl", "so", "sq", "sv", "sw", "ta", "te", "th", "tl", "tr", "uk", "ur", "vi", "zh-cn", "zh-tw"] langid_languages = ["af", "am", "an", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "dz", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fo", "fr", "ga", "gl", "gu", "he", "hi", "hr", "ht", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "mt", "nb", "ne", "nl", "nn", "no", "oc", "or", "pa", "pl", "ps", "pt", "qu", "ro", "ru", "rw", "se", "si", "sk", "sl", "sq", "sr", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "vi", "vo", "wa", "xh", "zh", "zu"] common_languages = set(cld2_languages) & set(langid_languages) & set(langdetect_languages) common_languages = [languages.get(alpha2=x).part3.lower() for x in common_languages] # + id="9va0qbEtQkfO" target_languages = ['eng', 'rus', 'fra', 'spa', 'deu', 'ita', 'nld', 'jpn', 'ara', 'hin', 'urd', 'por', 'fas', 'kor', 'est', 'ron', 'swe', 'tha'] # + id="uQJiyz-uTXlU" assert len([lang for lang in target_languages if lang not in common_languages]) == 0 # + [markdown] id="zk7m96HoXTGr" # ###Download and load dataset # + id="rDzNnvDo4pLb" # !cd language-detection; chmod u+x scripts/download_wili2018.sh; ./scripts/download_wili2018.sh # + id="aj9-LWx8XLNU" wili2018 = Wili2018Dataset(target_languages) X_train, Y_train, X_test, Y_test = wili2018.get_data() # + [markdown] id="h7IMsLI7RlxZ" # ###CLD2 # --- # **Compact Language Detector 2** (CLD2) uses a **Naïve Bayes** classifier with different token algorithms. # + colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["7a9faa97757f4826b5980e3aa0fd50b4", "d7f31b057279409c9a4b834e6b2d0328", "945797977f29428995cab828f383d450", "64715444806549c7afd1ff0e9f6ffd18", "<KEY>", "e4870fa2a4c94fca8fe4e3b2ffa1baa8", "<KEY>", "<KEY>"]} id="XvxXEYTNP_W5" outputId="8cd27e41-f838-40be-a965-cc2f446d0070" y_pred = [] for txt in tqdm(X_test): try: normalized_text = txt.encode('utf8') detected_language_alpha_2 = cld2.detect(normalized_text)[2][0][1] detected_language_alpha_3 = languages.get(alpha2=detected_language_alpha_2).part3.lower() if detected_language_alpha_3 not in target_languages: y_pred.append('unk') else: y_pred.append(detected_language_alpha_3) except: y_pred.append('unk') # + [markdown] id="C2iRc-me6lau" # ####Classification report # + id="gvMlCxKxz7TH" colab={"base_uri": "https://localhost:8080/"} outputId="086d488a-090d-4eeb-c218-5d179a718a6a" print(classification_report(Y_test, y_pred, zero_division=0, digits=4)) # + [markdown] id="YkqVY-0Y6qsY" # ####Confusion matrix # + id="QD2_uTOj1atp" colab={"base_uri": "https://localhost:8080/", "height": 609} outputId="3bf86733-089d-4520-f037-e5204b95de9a" label_list = sorted(list(set(y_pred))) cm = confusion_matrix(Y_test, y_pred, labels=label_list) plt.figure(figsize = (20, 10)) sns.heatmap(cm, annot=True, fmt='d', xticklabels=label_list, yticklabels=label_list) # + [markdown] id="tjosnlxrTjZ6" # ###langid # --- # **Langid** uses **LD feature selection** and a **Naïve Bayes** classifier. # + colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["6edcd675a040431da8d1066135d09d5d", "95fddac91ca54cbe97c724b1200e0332", "68ba7c9e7d7c49879da52eed0fd0d077", "e3958a811476484b9ee761cc7bb9f7ee", "be13c04369a04fc7aa66d3bae8a2e431", "<KEY>", "bf0e45a8a1814224b22c0daec8065944", "5e7131e62f1d4399bf1259a28212b4e9"]} id="EuZ7kv6yQ2Kk" outputId="875aeea3-7ad8-44b0-88ca-1682c2b42bd6" y_pred = [] for txt in tqdm(X_test): detected_language_alpha_2 = langid.classify(txt) detected_language_alpha_3 = languages.get(alpha2=detected_language_alpha_2[0]).part3.lower() if detected_language_alpha_3 not in target_languages: y_pred.append('unk') else: y_pred.append(detected_language_alpha_3) # + [markdown] id="dug1qU-86zy2" # ####Classification report # + colab={"base_uri": "https://localhost:8080/"} id="3TWrujV0Q5nB" outputId="93e1e0d8-94e0-411b-f843-c62160594939" print(classification_report(Y_test, y_pred, zero_division=0, digits=4)) # + [markdown] id="qxMwSGFn64bd" # ####Confusion matrix # + id="fPF1hKo2vc9F" colab={"base_uri": "https://localhost:8080/", "height": 609} outputId="c40e2d08-b5fe-454d-efe9-dcc1e1f02778" label_list = sorted(list(set(y_pred))) cm = confusion_matrix(Y_test, y_pred, labels=label_list) plt.figure(figsize = (20, 10)) sns.heatmap(cm, annot=True, fmt='d', xticklabels=label_list, yticklabels=label_list) # + [markdown] id="VYADmZtZUAaJ" # ###Langdetect # --- # **Langdetect** uses character n-gram features, **Naïve Bayes** classifier, noise filter and character normalization # + id="tKQU7r8uSdjO" from langdetect import detect from langdetect import DetectorFactory DetectorFactory.seed = 0 # + id="1AElvoxG3ROV" colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["d71fa6648b1d46659231dd0d6e1dd60f", "12d7459a7d604e3794d50be6bb2bd9dc", "bb65930fe8ba494da92ef1697ef5477a", "<KEY>", "<KEY>", "2bb1fbf76ba24c428a4d4913c25213c7", "cf17a6e71a644ba6af17a0ca203526f8", "eea63ddcd01b4952a12080876bdd78a8"]} outputId="d18262b7-badb-4f90-880b-9a0b74154611" y_pred = [] for txt in tqdm(X_test): try: detected_language_alpha_2 = detect(txt) detected_language_alpha_3 = languages.get(alpha2=detected_language_alpha_2).part3.lower() if detected_language_alpha_3 not in target_languages: y_pred.append('unk') else: y_pred.append(detected_language_alpha_3) except: y_pred.append('unk') # + [markdown] id="CncSLkLx7Bcw" # ####Classification report # + id="dUxp4F-T3d9B" colab={"base_uri": "https://localhost:8080/"} outputId="a0f051d6-c0d4-4229-b0b6-2eb90d066e3a" print(classification_report(Y_test, y_pred, zero_division=0, digits=4)) # + [markdown] id="_G8SXC-S7FxA" # ####Confusion matrix # + id="BwSkeLUm38lE" colab={"base_uri": "https://localhost:8080/", "height": 613} outputId="8652e2af-b7dc-481b-ebcb-12fe8aaef56f" label_list = sorted(list(set(y_pred))) cm = confusion_matrix(Y_test, y_pred, labels=label_list) plt.figure(figsize = (20, 10)) sns.heatmap(cm, annot=True, fmt='d', xticklabels=label_list, yticklabels=label_list) # + id="OuSNI0xx884P"
notebooks/0.baseline-available-modules.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "8d3297dd-7a80-4963-b5e6-6cc25523ec5e", "showTitle": false, "title": ""} # File location and type file_location = "/FileStore/tables/posenet_data-1.csv" file_type = "csv" # CSV options infer_schema = "true" first_row_is_header = "true" delimiter = "," # The applied options are for CSV files. For other file types, these will be ignored. df = spark.read.format(file_type) \ .option("inferSchema", infer_schema) \ .option("header", first_row_is_header) \ .option("sep", delimiter) \ .load(file_location) #display(df) # + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "dfe5795a-d1b0-4816-bf84-c5ae02c8c901", "showTitle": false, "title": ""} from math import atan2, degrees def angle_between(x1, y1, x2, y2, x3, y3): deg1 = (360 + degrees(atan2(x1 - x2, y1 - y2))) % 360 print(deg1) deg2 = (360 + degrees(atan2(x3 - x2, y3 - y2))) % 360 return deg2 - deg1 if deg1 <= deg2 else 360 - (deg1 - deg2) # + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "6bee0783-6e1b-4b21-98c0-af159f8af968", "showTitle": false, "title": ""} from pyspark.sql.functions import udf from pyspark.sql.types import * angle_udf = udf(angle_between, DoubleType()) # + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "765b27b0-a71a-4365-b93a-fb56268fab20", "showTitle": false, "title": ""} angles = { 'angle1': ['nose_x', 'nose_y', 'leftWrist_x', 'leftWrist_y', 'rightWrist_x', 'rightWrist_y'], 'angle2': ['leftShoulder_x', 'leftShoulder_y', 'leftElbow_x', 'leftElbow_y', 'leftWrist_x', 'leftWrist_y'], 'angle3': ['rightWrist_x', 'rightWrist_y', 'rightElbow_x', 'rightElbow_y', 'rightShoulder_x', 'rightShoulder_y'], 'angle4': ['rightAnkle_x', 'rightAnkle_y', 'rightKnee_x', 'rightKnee_y', 'rightHip_x', 'rightHip_y'], 'angle5': ['leftAnkle_x', 'leftAnkle_y', 'leftKnee_x', 'leftKnee_y', 'leftHip_x', 'leftHip_y'], 'angle6': ['leftWrist_x', 'leftWrist_y', 'leftShoulder_x', 'leftShoulder_y', 'leftHip_x', 'leftHip_y'], 'angle7': ['rightWrist_x', 'rightWrist_y', 'rightShoulder_x', 'rightShoulder_y', 'rightHip_x', 'rightHip_y'], 'angle8': ['rightAnkle_x', 'rightAnkle_y', 'nose_x', 'nose_y', 'leftAnkle_x', 'leftAnkle_y'], 'angle9': ['rightWrist_x', 'rightWrist_y', 'rightHip_x', 'rightHip_y', 'rightAnkle_x', 'rightAnkle_y'], 'angle10': ['leftWrist_x', 'leftWrist_y', 'leftHip_x', 'leftHip_y', 'leftAnkle_x', 'leftAnkle_y'], 'angle11': ['leftAnkle_x', 'leftAnkle_y', 'leftHip_x', 'leftHip_y', 'rightAnkle_x', 'rightAnkle_y'], 'angle12': ['rightAnkle_x', 'rightAnkle_y', 'rightHip_x', 'rightHip_y', 'leftAnkle_x', 'leftAnkle_y'], } # + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "13c7bd64-ac0e-4b05-8968-b0426b43c6b1", "showTitle": false, "title": ""} for i in angles: print(i) df =df.withColumn(i, angle_udf(angles[i][0], angles[i][1], angles[i][2], angles[i][3], angles[i][4], angles[i][5])) # + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "2f23f030-9220-4d4b-99dd-7e0b46c8dc33", "showTitle": false, "title": ""} df.printSchema() # + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "fdc7350f-7f44-4ee2-a74e-2a2d82060b19", "showTitle": false, "title": ""} from pyspark.ml.feature import VectorAssembler from pyspark.ml.feature import StandardScaler #### Clustering Task 1 inputCols = ['angle1', 'angle11', 'angle12', 'leftWrist_y', 'rightWrist_y', 'nose_y'] assemble=VectorAssembler(inputCols=inputCols, outputCol='features') assembled_data=assemble.transform(df) #display(assembled_data.select('features')) # + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "b7872828-0baa-4c3d-a78d-fa2476553dba", "showTitle": false, "title": ""} ### Standardize the data scale=StandardScaler(inputCol='features',outputCol='standardized') data_scale=scale.fit(assembled_data) data_scale_output=data_scale.transform(assembled_data) #display(data_scale_output.select('standardized')) # + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "0265e496-dbd7-43f4-b19b-d459fd8234be", "showTitle": false, "title": ""} ### KMeans from pyspark.ml.clustering import KMeans from pyspark.ml.evaluation import ClusteringEvaluator silhouette_score=[] evaluator = ClusteringEvaluator(predictionCol='prediction', featuresCol='standardized', \ metricName='silhouette', distanceMeasure='squaredEuclidean') KMeans_algo=KMeans(featuresCol='standardized', k=2) KMeans_fit=KMeans_algo.fit(data_scale_output) output=KMeans_fit.transform(data_scale_output) score=evaluator.evaluate(output) silhouette_score.append(score) print("Silhouette Score:",score, "Number of Clusters:", 2) #display(output) # + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "49547f06-e848-47f9-87a4-d0dbbf207d7f", "showTitle": false, "title": ""} ### Clustering Task2 ### Change your input inputCols = ['angle1', 'leftWrist_y', 'rightWrist_y', 'nose_y'] assemble=VectorAssembler(inputCols=inputCols, outputCol='features') assembled_data=assemble.transform(df) scale=StandardScaler(inputCol='features',outputCol='standardized') data_scale=scale.fit(assembled_data) data_scale_output=data_scale.transform(assembled_data) silhouette_score=[] evaluator = ClusteringEvaluator(predictionCol='prediction', featuresCol='standardized', \ metricName='silhouette', distanceMeasure='squaredEuclidean') KMeans_algo=KMeans(featuresCol='standardized', k=2) KMeans_fit=KMeans_algo.fit(data_scale_output) output=KMeans_fit.transform(data_scale_output) score=evaluator.evaluate(output) silhouette_score.append(score) print("Silhouette Score:",score, "Number of Clusters:", 2) # + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "01e7d490-f15a-446a-a353-be73a99042b7", "showTitle": false, "title": ""} ### Clustering Task 3 (change your input cols) inputCols = ['angle1', 'angle2', 'angle3', 'angle4', 'angle5', 'angle6','angle7', 'angle8', 'angle9','angle10', 'angle11', 'angle12'] assemble=VectorAssembler(inputCols=inputCols, outputCol='features') assembled_data=assemble.transform(df) # + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "2642bcc3-6fc8-44e0-83d0-465d6a33bd06", "showTitle": false, "title": ""} scale=StandardScaler(inputCol='features',outputCol='standardized') data_scale=scale.fit(assembled_data) data_scale_output=data_scale.transform(assembled_data) # + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "6173d44b-8ac9-4f72-a935-e877ab3ea16a", "showTitle": false, "title": ""} from pyspark.ml.clustering import KMeans from pyspark.ml.evaluation import ClusteringEvaluator silhouette_score=[] evaluator = ClusteringEvaluator(predictionCol='prediction', featuresCol='standardized', \ metricName='silhouette', distanceMeasure='squaredEuclidean') for i in range(2,5): KMeans_algo=KMeans(featuresCol='standardized', k=i) KMeans_fit=KMeans_algo.fit(data_scale_output) output=KMeans_fit.transform(data_scale_output) score=evaluator.evaluate(output) silhouette_score.append(score) print("Silhouette Score:",score, "Number of Clusters:", i) # + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "d012a6da-9e7f-40f4-af67-fd2ea0900f10", "showTitle": false, "title": ""}
Cluster_Analytics.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:extreme] # language: python # name: conda-env-extreme-py # --- # # Preliminary 1 Simulations # # The code appears to be running so it is now possible to obtain some prelimiary results for the base set of paramters to investigate, $E_{\text{APPLIED}}$=-0.16, -0.18, -0.20, -0.22, -0.24, -0.26, -0.28, -0.30; $c_{\theta}^{\infty}$=0.006, 0.012 # + from distributed import LocalCluster from distributed import Client from extremefill2D.fextreme import init_sim, restart_sim, iterate_sim, multi_init_sim from extremefill2D.fextreme.plot import vega_plot_treants, vega_plot_treant import vega from extremefill2D.fextreme.tools import get_by_uuid, outer_dict, pmap from toolz.curried import map, pipe, curry import itertools # %reload_ext yamlmagic # - cluster = LocalCluster(nanny=True, n_workers=8, threads_per_worker=1) client = Client(cluster) client client.shutdown() treants = multi_init_sim('../../scripts/params1.json', '../../data', pmap(client), dict(appliedPotential=(-0.16, -0.18, -0.20, -0.22, -0.24, -0.26, -0.28, -0.30), bulkSuppressor=(0.006, 0.012)), tags=['prelim1']) print(treants) treant_and_errors = pmap(client)(iterate_sim(iterations=20, steps=100), treants)
notebooks/misc/prelim1_sims.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # > ### **Assignment 2 - Numpy Array Operations** # > # > This assignment is part of the course ["Data Analysis with Python: Zero to Pandas"](http://zerotopandas.com). The objective of this assignment is to develop a solid understanding of Numpy array operations. In this assignment you will: # > # > 1. Pick 5 interesting Numpy array functions by going through the documentation: https://numpy.org/doc/stable/reference/routines.html # > 2. Run and modify this Jupyter notebook to illustrate their usage (some explanation and 3 examples for each function). Use your imagination to come up with interesting and unique examples. # > 3. Upload this notebook to your Jovian profile using `jovian.commit` and make a submission here: https://jovian.ml/learn/data-analysis-with-python-zero-to-pandas/assignment/assignment-2-numpy-array-operations # > 4. (Optional) Share your notebook online (on Twitter, LinkedIn, Facebook) and on the community forum thread: https://jovian.ml/forum/t/assignment-2-numpy-array-operations-share-your-work/10575 . # > 5. (Optional) Check out the notebooks [shared by other participants](https://jovian.ml/forum/t/assignment-2-numpy-array-operations-share-your-work/10575) and give feedback & appreciation. # > # > The recommended way to run this notebook is to click the "Run" button at the top of this page, and select "Run on Binder". This will run the notebook on mybinder.org, a free online service for running Jupyter notebooks. # > # > Try to give your notebook a catchy title & subtitle e.g. "All about Numpy array operations", "5 Numpy functions you didn't know you needed", "A beginner's guide to broadcasting in Numpy", "Interesting ways to create Numpy arrays", "Trigonometic functions in Numpy", "How to use Python for Linear Algebra" etc. # > # > **NOTE**: Remove this block of explanation text before submitting or sharing your notebook online - to make it more presentable. # # # # Numpy Array Functions # # # ### Five Picks # # Functions belows are my five chosen functions from Numpy module. # # - function1 = np.concatenate # - function2 = np.transpose # - function3 = np.vstack # - function4 = np.hsplit # - function5 = np.linalg # # The recommended way to run this notebook is to click the "Run" button at the top of this page, and select "Run on Binder". This will run the notebook on mybinder.org, a free online service for running Jupyter notebooks. # !pip install jovian --upgrade -q import jovian jovian.commit(project='numpy-array-operations') # Let's begin by importing Numpy and listing out the functions covered in this notebook. import numpy as np # List of functions explained function1 = np.concatenate function2 = np.transpose function3 = np.vstack function4 = np.hsplit function5 = np.linalg # ## Function 1 - np.concatenate # # Array indexing is the same as accessing an array element. # + # Example 1 - working (change this) arr1 = [[1, 2], [3, 4.]] arr2 = [[5, 6, 7], [8, 9, 10]] np.concatenate((arr1, arr2), axis=1) # - # # + # Example 2 - working arr1 = [[1, 2, 9], [3, 4., 0]] arr2 = [[5, 6, 7], [8, 9, 10]] np.concatenate((arr1, arr2), axis=1) # - # # + # Example 3 - breaking (to illustrate when it breaks) arr1 = [[1, 2], [3, 4.]] arr2 = [[5, 6, 7], [8, 9, 10]] np.concatenate((arr1, arr2), axis=0) # - # # Some closing comments about when to use this function. jovian.commit() # ## Function 2 - Transpose # # Add some explanations # + # Example 1 - working arr1 = [[1, 2], [3, 4.]] i = np.transpose(arr1) print(i) # - # Example 2 shows how to create an empty array. Despite it's name np.empty() does not generate empty array but array of random data. # + # Example 2 - working arr2 = [[10, 2], [30, 4.]] i = np.transpose(arr2) print(i) # - # Explanation about example # + # Example 3 - breaking (to illustrate when it breaks) arr2 = [[10, 2], [30, 4.]] i = np.transpose(arr3) print(i) # - # Explanation about example (why it breaks and how to fix it) # Some closing comments about when to use this function. jovian.commit() # ## Function 3 - np.vstack # # Add some explanations # + # Example 1 - working a = [[1, 2], [3, 4.]] b = [[10, 2], [3, 40.]] i = np.vstack((a, b)) print(i) # - # # + # Example 2 - working i = np.hstack((a, b)) print(i) # - # Explanation about example # Example 3 - breaking (to illustrate when it breaks) i = np.hstack((A, b)) print(i) # Explanation about example (why it breaks and how to fix it) # Some closing comments about when to use this function. jovian.commit() # ## Function 3 - ??? # # Add some explanations # + # Example 1 - working a = np.array([[1, 3, 5, 7, 9, 11], [2, 4, 6, 8, 10, 12]]) # horizontal splitting print("Splitting along horizontal axis into 2 parts:\n", np.hsplit(a, 2)) # - # Explanation about example # Example 2 - working # vertical splitting print("\nSplitting along vertical axis into 2 parts:\n", np.vsplit(a, 2)) # Explanation about example # Example 3 - breaking (to illustrate when it breaks) print("\nSplitting along vertical axis into 2 parts:\n", np.vsplit(a, 2,1)) # Explanation about example (why it breaks and how to fix it) # Some closing comments about when to use this function. jovian.commit() # ## Function 4 - np.datetime64 # # Add some explanations # + # Example 1 - working # creating a date today = np.datetime64('2017-02-12') print("Date is:", today) print("Year is:", np.datetime64(today, 'Y')) # - # Explanation about example # Example 2 - working # creating array of dates in a month dates = np.arange('2017-02', '2017-03', dtype='datetime64[D]') print("\nDates of February, 2017:\n", dates) print("Today is February:", today in dates) # Explanation about example # Example 3 - breaking (to illustrate when it breaks) # arithmetic operation on dates dur = np.datetim64('2014-05-22') - np.datetime64('2016-05-22') print("\nNo. of days:", dur) print("No. of weeks:", np.timedelta64(dur, 'W')) # Explanation about example (why it breaks and how to fix it) # Some closing comments about when to use this function. jovian.commit() # ## Function 5 - np.linalg.matrix_rank # # Add some explanations # + # Example 1 - working A = np.array([[6, 1, 1], [4, -2, 5], [2, 8, 7]]) print("Rank of A:", np.linalg.matrix_rank(A)) print("\nInverse of A:\n", np.linalg.inv(A)) # - # Explanation about example # Example 2 - working print("\nMatrix A raised to power 3:\n", np.linalg.matrix_power(A, 3)) # Explanation about example # Example 3 - breaking (to illustrate when it breaks) print("\nMatrix A raised to power 3:\n", np.linalg.matrix_power(a, -3)) # Explanation about example (why it breaks and how to fix it) # Some closing comments about when to use this function. jovian.commit() # ## Conclusion # # Summarize what was covered in this notebook, and where to go next # ## Reference Links # Provide links to your references and other interesting articles about Numpy arrays: # * Numpy official tutorial : https://numpy.org/doc/stable/user/quickstart.html # * ... jovian.commit()
.ipynb_checkpoints/Assignment-2-numpy-array-operations-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ---- # <img src="../../files/refinitiv.png" width="20%" style="vertical-align: top;"> # # # Data Library for Python # # ---- # ## Eikon Data API - Time Series examples # This notebook demonstrates how to retrieve Time Series from Eikon or Refinitiv Workspace. # # #### Learn more # To learn more about the Data API just connect to the Refinitiv Developer Community. By [registering](https://developers.refinitiv.com/iam/register) and [login](https://developers.refinitiv.com/iam/login) to the Refinitiv Developer Community portal you will get free access to a number of learning materials like [Quick Start guides](https://developers.refinitiv.com/eikon-apis/eikon-data-api/quick-start), [Tutorials](https://developers.refinitiv.com/eikon-apis/eikon-data-api/learning), [Documentation](https://developers.refinitiv.com/eikon-apis/eikon-data-api/docs) and much more. # # #### About the "eikon" module of the Refinitiv Data Platform Library # The "eikon" module of the Refinitiv Data Platform Library for Python embeds all functions of the classical Eikon Data API ("eikon" python library). This module works the same as the Eikon Data API and can be used by applications that need the best of the Eikon Data API while taking advantage of the latest features offered by the Refinitiv Data Platform Library for Python. # # #### Getting Help and Support # # If you have any questions regarding the API usage, please post them on the [Eikon Data API Q&A Forum](https://community.developers.thomsonreuters.com/spaces/92/index.html). The Refinitiv Developer Community will be happy to help. # # ## Import the library and connect to Eikon or Refinitiv Workspace # + import refinitiv.data.eikon as ek import datetime ek.set_app_key('YOUR APP KEY GOES HERE!') # - # ## Get Time Series # #### Simple call with default parameters ek.get_timeseries(['VOD.L']) # #### Get Time Series with more parameters ek.get_timeseries(['VOD.L', 'GOOG.O'], start_date=datetime.timedelta(-1), end_date=datetime.timedelta(0), interval='minute')
Examples/5-Eikon Data API/EX-5.02.01 - Eikon Data API - TimeSeries.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.2 64-bit (''venv'': venv)' # language: python # name: python38264bitvenvvenva41776b7b4c24b1ea249e6dd67506952 # --- # # Chapter 4 : Write your first Agent # ## The Source class # ### The Source is the place where you implement the acquisition of your data # # Regardless of the type of data, the framework will always consume a Source the same way.<br> # Many generic sources are provided by the framework and it's easy to create a new one. # ### The Source strongly encourages to stream incoming data # # Whatever the size of the dataset, memory will never be an issue. # ### Example # Let's continue with our room sensors. # + tags=[] # this source provides us with a CSV line each second from pyngsi.sources.more_sources import SourceSampleOrion # init the source src = SourceSampleOrion() # iterate over the source for row in src: print(row) # - # Here we can see that a row is an instance of a Row class.<p> # # For the vast majority of the Sources, the provider keeps the same value during the datasource lifetime.<br> # We'll go into details in next chapters.<p> # # **In practice you won't iterate the Source by hand. The framework will do it for you.** # ## The Agent class # Here comes the power of the framework.<br> # By using an Agent you will delegate the processing of the Source to the framework.<p> # # Basically an Agent needs a Source for input and a Sink for output.<br> # It also needs a function in order to convert incoming rows to NGSI entities.<p> # # Once the Agent is initialized, you can run it ! # Let's continue with our rooms. # + tags=[] from pyngsi.sources.more_sources import SourceSampleOrion from pyngsi.sink import SinkStdout from pyngsi.agent import NgsiAgent # init the source src = SourceSampleOrion() # for the convenience of the demo, the sink is the standard output sink = SinkStdout() # init the agent agent = NgsiAgent.create_agent(src, sink) #run the agent agent.run() # - # Here you can see that incoming rows are outputted 'as is'.<br> # It's possible because SinkStdout ouputs whatever it receives on its input.<br> # But SinkOrion expects valid NGSI entities on its input.<p> # # So let's define a conversion function. # + from pyngsi.sources.source import Row from pyngsi.ngsi import DataModel def build_entity(row: Row) -> DataModel: id, temperature, pressure = row.record.split(';') m = DataModel(id=id, type="Room") m.add("dataProvider", row.provider) m.add("temperature", float(temperature)) m.add("pressure", int(pressure)) return m # - # And use it in the Agent. # + tags=[] # init the Agent with the conversion function agent = NgsiAgent.create_agent(src, sink, process=build_entity) # run the agent agent.run() # obtain the statistics print(agent.stats) # - # Congratulations ! You have developed your first pyngsi Agent !<p> # # Feel free to try SinkOrion instead of SinkStdout.<br> # Note that you get statistics for free. # Inside your conversion function you can filter input rows just by returning None.<br> # For example, if you're not interested in Room3 you could write this function. # + tags=[] def build_entity(row: Row) -> DataModel: id, temperature, pressure = row.record.split(';') if id == "Room3": return None m = DataModel(id=id, type="Room") m.add("dataProvider", row.provider) m.add("temperature", float(temperature)) m.add("pressure", int(pressure)) return m agent = NgsiAgent.create_agent(src, sink, process=build_entity) agent.run() print(agent.stats) # - # ### The side_effect() function # # As of v1.2.5 the Agent takes the `side_effect()` function as an optional argument.<br> # That function will allow to create entities aside of the main flow. Few use cases might need it. # + tags=[] def side_effect(row, sink, model) -> int: # we can use as an input the current row or the model returned by our process() function m = DataModel(id=f"Building:MainBuilding:Room:{model['id']}", type="Room") sink.write(m.json()) return 1 # number of entities created in the function agent = NgsiAgent.create_agent(src, sink, process=build_entity, side_effect=side_effect) agent.run() print(agent.stats) # - # ## Conclusion # This NGSI Agent developed in this chapter is a very basic one.<p> # Principles remain the same when things become more complex.
chapter4_agent.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (Anaconda) # language: python # name: anaconda3 # --- # Name: <NAME>, <NAME> # # Student ID: 1923165, 2303535 # # Email: <EMAIL>, <EMAIL> # # Course: CS510 Fall 2017 # # Assignment: Classwork 7 import matplotlib.pyplot as plt import numpy as np # ### Simple Plot Walk-Through # + plt.subplot(1, 1, 1) X = np.linspace(-np.pi, np.pi, 300, endpoint=True) C, S = np.cos(X), np.sin(X) plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi]) plt.yticks([-1, 0, +1]) plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi], [r'$-\pi$', r'$-\pi/2$', r'$0$', r'$+\pi/2$', r'$+\pi$']) plt.yticks([-1, 0, +1], [r'$-1$', r'$0$', r'$+1$']) ax = plt.gca() ax.spines['right'].set_color('none') ax.spines['top'].set_color('none') ax.xaxis.set_ticks_position('bottom') ax.spines['bottom'].set_position(('data',0)) ax.yaxis.set_ticks_position('left') ax.spines['left'].set_position(('data',0)) plt.plot(X, C, color="blue", linewidth=2, linestyle="-", label="cosine") plt.plot(X, S, color="green", linewidth=2, linestyle="-", label="sine") plt.legend(loc='upper left') t = 2 * np.pi / 3 plt.plot([t, t], [0, np.cos(t)], color='blue', linewidth=2, linestyle="--") plt.scatter([t, ], [np.cos(t), ], 50, color='blue') plt.annotate(r'$sin(\frac{2\pi}{3})=\frac{\sqrt{3}}{2}$', xy=(t, np.sin(t)), xycoords='data', xytext=(+10, +30), textcoords='offset points', fontsize=20, arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2")) plt.plot([t, t],[0, np.sin(t)], color='green', linewidth=2, linestyle="--") plt.scatter([t, ],[np.sin(t), ], 50, color='green') plt.annotate(r'$cos(\frac{2\pi}{3})=-\frac{1}{2}$', xy=(t, np.cos(t)), xycoords='data', xytext=(-90, -50), textcoords='offset points', fontsize=20, arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2")) for label in ax.get_xticklabels() + ax.get_yticklabels(): label.set_fontsize(16) label.set_bbox(dict(facecolor='white', edgecolor='None', alpha=0.5)) # - # ### Contour Plot # + def f(x, y): return (1 - x / 2 + x ** 5 + y ** 3) * np.exp(-x ** 2 -y ** 2) n = 300 x = np.linspace(-5, 5, n) y = np.linspace(-5, 5, n) X, Y = np.meshgrid(x, y) plt.contourf(X, Y, f(X, Y), 8, alpha=1.0, cmap='jet') C = plt.contour(X, Y, f(X, Y), 8, colors='black', linewidth=.2) # - # ### Imshow # + def f(x, y): return (1 - x / 2 + x ** 5 + y ** 3) * np.exp(-x ** 2 - y ** 2) n = 25 x = np.linspace(-6, 6, 10 * n) y = np.linspace(-6, 6, 8 * n) X, Y = np.meshgrid(x, y) plt.imshow(f(X, Y)) # - # ### 3D Plots from mpl_toolkits.mplot3d import Axes3D # + fig = plt.figure() ax = Axes3D(fig) X = np.arange(-6, 6, 0.2) Y = np.arange(-6, 6, 0.2) X, Y = np.meshgrid(X, Y) R = np.sqrt(X**2 + Y**2) Z = np.sin(R) ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='cool')
matplotlib-tutorial.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/LanceDdot/DSprojects/blob/main/CS131_8L_BM5_4Q2021_Module_3_Week_10_Group_3_Project_Model_Notebook_July_31%2C_2021.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="caXAnT1CYyBt" # ## **Mapúa University - School of Information Technology** # ***CS131-8L_BM5_4Q2021 (MODEL NOTEBOOK)*** # # * **Web Scraping Notebook** - # https://colab.research.google.com/drive/1QcnTAZW8WyHwOEKQrgeFU41qkUlcidhl?usp=sharing # # Group members: # * ADRIAS, <NAME>. # * DELARIARTE, <NAME>. # * EBANEN, <NAME>. # * ROMERO, <NAME>. # + id="BawGSkJb1d9_" # %tensorflow_version 2.x import pandas as pd import numpy as np import matplotlib.pyplot as plt import tensorflow as tf # %matplotlib inline from matplotlib.pylab import rcParams rcParams['figure.figsize']=20,10 from tensorflow.keras import Input, Model, models, layers from keras.models import Sequential from keras.layers import LSTM,Dropout,Dense from sklearn.preprocessing import MinMaxScaler from fbprophet import Prophet import tensorflow as tf from tensorflow import keras from sklearn.metrics import mean_squared_error import math import os # + [markdown] id="zyJqnUgb2h8Z" # # **Understanding Ethereum (ETH) Historical Data** # + [markdown] id="xA9coShX2wVi" # We'll begin loading the data into a dataframe; it's a good idea to inspect it before we begin modifying it. This assists us in understanding that we have the correct data and in gaining some insights into it. # <br> # <br> # The DataFrame that we will be using contains the **closing price** of Ethereum (ETH). # <br/> # **(August 7, 2015 — July 26, 2021)** # + [markdown] id="3E_ZrTR61mLa" # **Read Data** # + [markdown] id="7FI-WIqtOIjF" # Importing the dataset that we scraped from **Part 1: Web scraping Notebook** # + id="WGh4gXc11nSE" df = pd.read_csv('/content/ETH_Historical_Data.csv', parse_dates=['Date']) df = df.sort_values('Date') # + [markdown] id="zK5hh1xLD6rU" # Number of rows and cols in the dataset # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="GzH8w8PxObJL" outputId="e53a7b8c-98ac-4cf7-d6ff-b5d3aac9b21f" df.shape # + [markdown] id="8gfzDZFR2YB4" # **Head Method** # + colab={"base_uri": "https://localhost:8080/", "height": 203} id="UWd5PzBl2ZCx" outputId="8245caba-c814-4a4e-bcd9-f4b25807e94d" df.head() # + [markdown] id="h4Ji9vO92a-6" # **Tail Method** # + colab={"base_uri": "https://localhost:8080/", "height": 357} id="LH6c6KgY2czC" outputId="3b195594-4b26-4fbf-ab4a-59742b5e6863" df.tail(10) # + [markdown] id="KvzBpkmc3kH0" # # **Data Manipulation** # + [markdown] id="Mt57gDZi3nvO" # **Subsetting** # + [markdown] id="L8mRt21m31ZE" # Our dataframe includes six columns. Do we require all of them? Obviously not. We just need the “Date” and “Open” fields for our prediction project. Then let's get rid of the other columns. # + colab={"base_uri": "https://localhost:8080/", "height": 203} id="5dejCMzS3-LT" outputId="3ed92cb1-565e-4f61-d2a1-a9e91bbe1953" df = df[['Date', 'Close']] df.head() # + [markdown] id="1Fw2QGvz4Qd8" # **Data Types** # <br/> # <br/> # Let's look at the data types of the columns now. Because there is a **“$” symbol** in the closing price numbers, it is possible that it is not a float data type. When training the data, we must convert it to float or integer type since string datatype would not function with our model. # + id="KrNvmfG94TB8" df = df.replace({'\$':''}, regex = True) df = df.replace({'\,':''}, regex = True) # + [markdown] id="vz40Hx-N4ls9" # We can now transform the data type “Closing price” to float. In addition, we will transform the “Date” data to datetime type. # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="339pjdix4sLM" outputId="791ce334-3e11-431c-98aa-376c89d362c6" df = df.astype({"Close": float}) df["Date"] = pd.to_datetime(df.Date, format="%b %d %Y") df.dtypes # + [markdown] id="DYmXbN8Nyrpa" # **Index Column** # + [markdown] id="xiRsSlMJy5XS" # The date column will simply be defined as the dataframe's index value. This will come in handy during the data visualization stage. # # + id="dJWT9sody9Px" df.index = df['Date'] # + colab={"base_uri": "https://localhost:8080/", "height": 234} id="Fgj-unSL-HVS" outputId="02584127-f083-4eca-a39f-e078d1a262fe" df = df[['Close']] df.head() # + colab={"base_uri": "https://localhost:8080/", "height": 337} id="8sMx34oOzaER" outputId="f8326977-f478-4eb3-c222-6cfc2445eb42" plt.figure(figsize=(10,5)) plt.plot(df["Close"],label='Closing Price history') # + [markdown] id="NjGxde7B0SXi" # ### **Model 1 (LSTM Prediction Model)** # + [markdown] id="WFLLpXPC0XMo" # We will perform the most of the programming in this stage. First, we need to make a few simple changes to the data. When we have completed our data collection, we will utilize it to train our model. We will utilize the LSTM (Long Short-Term Memory) model as a neural network model. When making predictions based on time-series information, LSTM models perform well. # + [markdown] id="L3hSx61e0kvb" # **Data Preparation** # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="pBNOKabBF3Ap" outputId="c817786d-0877-41f8-8d79-c8c5313830f8" dataset = df.values dataset # + [markdown] id="NnlMmQaH0zVc" # **Min-Max Scaler** # + id="MMZnf-Nv01x5" scaler = MinMaxScaler(feature_range=(0, 1)) dataset = scaler.fit_transform(dataset) # + [markdown] id="lAXFoqNP1iOe" # **Train and Test Data** # <br/> # This stage comprises the preparation of both the train and test data. # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="lhZ2G8E_Hbf5" outputId="7259cbbb-3090-40c9-d9a7-3beca32751d2" train_size = int(len(dataset) * 0.67) test_size = len(dataset) - train_size train, test = dataset[0:train_size, :], dataset[train_size:len(dataset), :] print(len(train), len(test)) # + [markdown] id="uobIqe2TUxw7" # Convert an array of values into a dataset matrix # + id="9eJIvKi8Hh3T" def create_dataset(dataset, look_back=1): dataX, dataY = [], [] for i in range(len(dataset)-look_back-1): a = dataset[i:(i+look_back), 0] dataX.append(a) dataY.append(dataset[i + look_back, 0]) return np.array(dataX), np.array(dataY) # + id="NClXna5BHkVJ" look_back = 10 trainX, trainY = create_dataset(train, look_back=look_back) testX, testY = create_dataset(test, look_back=look_back) # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="Az4eAL7YHlB1" outputId="7e0f0e11-f003-49c9-9675-b137ee6fe487" trainX trainY # + [markdown] id="XErH6ff9U1GA" # Reshape input to be ```[samples, time steps, features]``` # + id="a7BLB4b4Hq2R" trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1])) testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1])) # + [markdown] id="tB1PGfRS1VRP" # **LSTM Model** # <br/> # We are defining the Long Short-Term Memory model in this stage. # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="XCiLLz_9HyA_" outputId="90d27790-36ea-4cec-aa25-b7e60f73d603" model = Sequential() model.add(LSTM(4, input_shape=(1, look_back))) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') model.fit(trainX, trainY, epochs=100, batch_size=256, verbose=2) # + [markdown] id="8Yv0Kf0steyb" # **Prediction Function** # <br/> # In this step, we are running the model using the test data we defined in the previous step. # # # + id="mcDnsxEzH_SP" trainPredict = model.predict(trainX) testPredict = model.predict(testX) # + id="n-lPQQJlIB_g" trainPredict = scaler.inverse_transform(trainPredict) trainY = scaler.inverse_transform([trainY]) testPredict = scaler.inverse_transform(testPredict) testY = scaler.inverse_transform([testY]) # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="SzlbJ6nLID43" outputId="56c4bf37-ac39-4bac-ee66-d185a724237e" trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:, 0])) print('Train Score: %.2f RMSE' % (trainScore)) testScore = math.sqrt(mean_squared_error(testY[0], testPredict[:, 0])) print('Test Score: %.2f RMSE' % (testScore)) # + [markdown] id="TqwDSDBGIOuU" # Shift train predictions for plotting. # + id="j2C0XNO9IMRF" trainPredictPlot = np.empty_like(dataset) trainPredictPlot[:, :] = np.nan trainPredictPlot[look_back:len(trainPredict) + look_back, :] = trainPredict # + [markdown] id="VQ11rvrSIW0A" # Shift test predictions for plotting # + id="ZisS3_ZQIXHf" testPredictPlot = np.empty_like(dataset) testPredictPlot[:, :] = np.nan testPredictPlot[len(trainPredict) + (look_back * 2) + 1:len(dataset) - 1, :] = testPredict # + [markdown] id="b-YzBmKJJR8z" # Data Visualization # + colab={"base_uri": "https://localhost:8080/", "height": 320} id="X8S4e5s9IqDl" outputId="69166379-bee0-46e0-a17b-996ac346b724" plt.figure(figsize=(10,5)) plt.plot(df['Close'], label='Actual') plt.plot(pd.DataFrame(trainPredictPlot, columns=["close"], index=df.index).close, label='Training') plt.plot(pd.DataFrame(testPredictPlot, columns=["close"], index=df.index).close, label='Testing') plt.legend(loc='best') plt.show() # + [markdown] id="lKLs91O-wBYt" # ### **Model 2 (Facebook Prophet Model)** # <br/> # Make sure you have the fbprophet model installed before importing the libraries. # <br/> # <br/> # The Facebook Prophet model can only be used with data that has a string time-series format in a column called "ds" and continuous values in a column named "y". # # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="RbGV3VHn6hD6" outputId="7595dd75-b9c1-495f-fbc4-bfb4c228847c" df = pd.read_csv('/content/ETH_Historical_Data.csv') df = df[["Date", "Close"]] df.columns = ["ds", "y"] print(df) # + [markdown] id="SjQNc6lWRI2e" # ### **Cleansing Data** # The criteria of ```float()``` # * A value must not contain spaces # * A value must not contain comma # * A value must not contain non-special characters (i.e. "inf" is a special character, but "fd" is not) # + [markdown] id="VS9NuqNxR1K6" # We will remove the **"$"** and **","** symbols in the dataset and transform the “Date” data to datetime type. # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="MrNU_LINQgFG" outputId="c4be53e1-dc74-48ca-8cbe-904e825b415b" df = df.replace({'\$':''}, regex = True) df = df.replace({'\,':''}, regex = True) df["ds"] = pd.to_datetime(df.ds, format="%b %d %Y") print(df) # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="hUKtbXwR6p4k" outputId="b0ec398a-8347-4db7-c1a4-3ffdc4d9e206" prophet = Prophet() prophet.fit(df) # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="42-H5p3a8uLC" outputId="f72a8d88-f620-4ab6-f679-eb52785ecafc" future = prophet.make_future_dataframe(periods=730) print(future) # + colab={"base_uri": "https://localhost:8080/", "height": 417} id="EDaO8vgY81VW" outputId="a7540d28-ab6f-44d6-8f77-e7e08ee4b099" forecast = prophet.predict(future) forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail(200) # + colab={"base_uri": "https://localhost:8080/", "height": 865} id="PGjDn4U7-TeP" outputId="fe9cae8c-24d1-4a50-8d21-e6d993eb94f7" prophet.plot(forecast)
CS131_8L_BM5_4Q2021_Module_3_Week_10_Group_3_Project_Model_Notebook_July_31,_2021.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Bitwise operator # # The bitwise operators are the operators used to perform the operations on the data at the bit-level. When we perform the bitwise operations, then it is also known as bit-level programming. It consists of two digits, either 0 or 1. It is mainly used in numerical computations to make the calculations faster. # # There are following bitwise operators: # # 1.Bitwise AND (syntax: X & Y) # 2.Bitwise OR(syntax: X | Y) # 3.Bitwise NOT(syntax: ~X ) # 4.Bitwise XOR (syntax: X ^ Y) # 5.Bitwise RIGHT SHIFT (syntax: X >>) # 6.Bitwise LEFT SHIFT (syntax: X <<) # # # ### Bitwise AND # # Returns 1 if both bits are 1 else 0. # # ### Bitwise OR # # Returns 1 if either of the bit is 1 else 0. # # ### Bitwise NOT # # Returns one's complement of the number. # # ### Bitwise XOR # # Returns 1(true) if one of the bit is 1 and the other is 0 else returns false(0). # # ### Bitwise left shift operator # # Bitwise left shift operator(<<) moves the bits of its first operand to the left by the number of places specified in its second operand.It also inserets enough zero bits to fill the gap that aries on the right edge of the new bit pattern. # # ### Bitwise right shift operator # # Bitwise right shift operator(>>)is analogous to the left one, but instead of moving bits to the left,it pushes them to the right by the specified number of places.The rightmost bits always get dropped. # # # # # # + x=20 y=15 #Bitwise AND print(x&y) #Bitwise OR print(x|y) #Bitwise NOT print(~x) #Bitwise XOR print(x^y) # - # x=-10 y=7 print(x>>1) print(y>>1) x=25 y=-20 print(x<<1) print(y<<1)
Python/Operators/Bitwise operator.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Data storage handler # This data handler use [PyTables](https://www.pytables.org/) that is built on top of the HDF5 library, using the Python language and the NumPy package. It features an object-oriented interface that, combined with C extensions for the performance-critical parts of the code (generated using Cython), makes it a fast, yet extremely easy to use tool for interactively browse, process and search very large amounts of data. One important feature of [PyTables](https://www.pytables.org/) is that it optimizes memory and disk resources so that data takes much less space (specially if on-flight compression is used) than other solutions such as relational or object oriented databases. import tables import numpy as np import json from datetime import datetime filename = 'sampple_eeg.h5' CHANNELS = 16 from openbci_stream.utils import HDF5Writer, HDF5Reader # ## Writer writer = HDF5Writer('output.h5') # ### Add header header = {'sample_rate': 1000, 'datetime': datetime.now().timestamp(), } writer.add_header(header) # ### Add EEG data # The number of channels is defined with first data addition: eeg = np.random.normal(size=(16, 1000)) # channels x data writer.add_eeg(eeg) # The number of channels can not be changed after define the first package: eeg = np.random.normal(size=(8, 1000)) writer.add_eeg(eeg) # Is possible to add any amount of data: eeg = np.random.normal(size=(16, 500)) writer.add_eeg(eeg) # A `timestamp` can be added to the data, in a separate command, in this case, `timestamp` _could_ be an array of the same length of data. # + eeg = np.random.normal(size=(16, 2000)) timestamp = np.array([datetime.now().timestamp()]*2000) writer.add_eeg(eeg) writer.add_timestamp(timestamp) # - # Is possible to use the `timestamp` in a single command. # + eeg = np.random.normal(size=(16, 2000)) timestamp = [datetime.now().timestamp()] * 2000 writer.add_eeg(eeg, timestamp) # - # ### Extrapolated timestamps # # Usually, we never have the `datetimes` for each value in a package, instead, we have a single marker time for each package. This module was designed keeping mind these cases, so, is possible to set a single `timestamp` and `HDF5Reader` will interpolate the missing `timestamps` # + eeg = np.random.normal(size=(16, 2000)) timestamp = datetime.now().timestamp() writer.add_eeg(eeg, timestamp) # - # ### Add markers # # The `markers` can be added individually with their respective `timestamp`: timestamp = datetime.now().timestamp() marker = 'LEFT' writer.add_marker(marker, timestamp) # Or in a dictionary structure with the marker as `key` and events as a list of `timestamps`: # + markers = { 'LEFT': [1603150888.752062, 1603150890.752062, 1603150892.752062], 'RIGHT': [1603150889.752062, 1603150891.752062, 1603150893.752062], } writer.add_markers(markers) # - # ### Add annotations # # # The annotations are defined like in EDF format, with an `onset` and `duration` in seconds, and their corresponding `description`: writer.add_annotation(onset=5, duration=1, description='This is an annotation, in EDF style') writer.add_annotation(onset=45, duration=15, description='Electrode verification') writer.add_annotation(onset=50, duration=3, description='Patient blinking') # Do no forget to close the file writer.close() # ## Reader reader = HDF5Reader('output.h5') reader reader.eeg reader.markers reader.f.root.eeg_data reader.f.root.timestamp reader.timestamp reader.annotations reader.close() # ## Examples using the `with` control-flow structure # # This data format handlers can be used with the `with` structure: # ### Writer # # This example create a file with random data: # + from openbci_stream.utils import HDF5Writer from datetime import datetime, timedelta import numpy as np import os now = datetime.now() header = {'sample_rate': 1000, 'datetime': now.timestamp(), 'montage': 'standard_1020', 'channels': {i:ch for i, ch in enumerate('Fp1,Fp2,F7,Fz,F8,C3,Cz,C4,T5,P3,Pz,P4,T6,O1,Oz,O2'.split(','))}, } filename = 'sample-eeg.h5' if os.path.exists(filename): os.remove(filename) with HDF5Writer(filename) as writer: writer.add_header(header) for i in range(60*5): eeg = np.random.normal(size=(16, 1000)) aux = np.random.normal(size=(3, 1000)) timestamp = (now + timedelta(seconds=i+1)).timestamp() writer.add_eeg(eeg, timestamp) writer.add_aux(aux) # events = np.linspace(1, 59*30, 15*30).astype(int)*1000 # np.random.shuffle(events) markers = {} for i in range(5, 60*5//4, 4): markers.setdefault('RIGHT', []).append((now + timedelta(seconds=i+1)).timestamp()) markers.setdefault('LEFT', []).append((now + timedelta(seconds=i+2)).timestamp()) markers.setdefault('UP', []).append((now + timedelta(seconds=i+3)).timestamp()) markers.setdefault('DOWN', []).append((now + timedelta(seconds=i+4)).timestamp()) writer.add_markers(markers) writer.add_annotation((now + timedelta(seconds=10)).timestamp(), 1, 'Start run') writer.add_annotation((now + timedelta(seconds=60)).timestamp(), 1, 'Head moved') writer.add_annotation((now + timedelta(seconds=190)).timestamp(), 1, 'Blink') # - # ### Reader # + from openbci_stream.utils import HDF5Reader from matplotlib import pyplot as plt plt.figure(figsize=(16, 9), dpi=60) ax = plt.subplot(111) filename = 'sample-eeg.h5' with HDF5Reader(filename) as reader: channels = reader.header['channels'] sample_rate = reader.header['sample_rate'] t = np.linspace(0, 1, sample_rate) for i, ch in enumerate(reader.eeg[:, :sample_rate]): plt.plot(t, (ch-ch.mean())*0.1+i) ax.set_yticklabels(channels) # - # ## MNE objects # # `reader.get_epochs()` return an [mne.EpochsArray](https://mne.tools/stable/generated/mne.EpochsArray.html) object: # + reader = HDF5Reader('sample-eeg.h5') epochs = reader.get_epochs(tmin=-2, duration=6, markers=['RIGHT', 'LEFT']) # - # [mne.EpochsArray](https://mne.tools/stable/generated/mne.EpochsArray.html) needs an array of shape `(epochs, channels, time)`, we have channels and time, we only need to separate the epochs, here, an epoch is a trial, we will select trial by slicing around a `marker`. The argument `duration` in seconds set the _width_ for the trials, and `tmin` set _how much_ of signal before the marker will I analyze, finally, the argument `markers` set the labels to use, if empty, then will use all of them. # # This feature needs at least `montage`, `channels`, and `sample_rate` configured in the header. # + evoked = epochs['RIGHT'].average() fig = plt.figure(figsize=(10, 5), dpi=90) ax = plt.subplot(111) evoked.plot(axes=ax, spatial_colors=True); # - # _This is only random data._ # + times = np.linspace(-1, 3, 6) epochs['LEFT'].average().plot_topomap(times,) epochs['RIGHT'].average().plot_topomap(times,); # - reader.close() # ## Offset correction # # The offset correction is performed through the `start-offset` and `end-offset` headers (created automatically), it corresponds to the respective offset at beginning and end on a real-time acquisition, this feature is enabled by default and can be changed with the argument `offset_correction` in the `HDF5Reader` class. HDF5Reader('sample-eeg.h5', offset_correction=False) # ## RAW `ndarray` objects # `get_data()` return a standard array `(epochs, channels, time)` and their respective (`classes`), receives the same arguments that `get_epochs`: # + with HDF5Reader('sample-eeg.h5') as file: data, classes = file.get_data(tmin=-2, duration=6, markers=['RIGHT', 'LEFT']) data.shape, classes.shape # - # ## Export to EDF # # The European Data Format (EDF) is a standard file format designed for exchange and storage of medical time series, is possible to export to this format with the `to_edf` method. This file format needs special keys in the `header`: # # * **admincode:** str with the admincode. # * **birthdate:** date object with the the birthdate of the patient. # * **equipment:** str thats describes the measurement equpipment. # * **gender:** int with the the gender, 1 is male, 0 is female. # * **patientcode:** str with the patient code. # * **patientname:** str with the patient name. # * **patient_additional:** str with the additional patient information. # * **recording_additional:** str wit the additional recording information. # * **technician:** str with the technicians name. reader = HDF5Reader('sample-eeg.h5') reader.to_edf('sample-eeg.edf') reader.close() # ## Relative time markers # # In the previous examples, we are using the markers with a `timestamp`, this is an **absolute** reference, is an exact point in time, the reason of that is because this feature was designed for real-time streaming, in this kind of implementations the main reference is the system clock, this approach makes easy to implement a markers register in an isolated system. # # But, is possible to get relative time markers, based on the position in the data array using the attribute `reader.markers_relative`
docs/source/notebooks/07-data_storage_handler.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import transformers # This flag is the difference between SQUAD v1 or 2 (if you're using another dataset, it indicates if impossible # answers are allowed or not). squad_v2 = False model_checkpoint = "deepset/roberta-base-squad2" batch_size = 16 # + import json from datasets import Dataset # Read COVID dataset def load_json_file_to_dict(file_name): return json.load(open(file_name)) data_dict = load_json_file_to_dict("data/covid-qa/covid-qa-dev.json") # - def reconstruct_data(data_dict): result_dict = { 'id':[], 'title':[], 'context':[], 'question':[], 'answers':[] } for article in data_dict['data']: for paragraph in article['paragraphs']: for qa_pair in paragraph['qas']: for ans in qa_pair["answers"]: result_dict["answers"].append({ 'answer_start': [ans["answer_start"]], 'text': [ans["text"]] }) result_dict["question"].append(qa_pair["question"]) result_dict["context"].append(paragraph["context"]) result_dict["title"].append(paragraph["document_id"]) result_dict["id"].append(qa_pair["id"]) return result_dict.copy() sample_dataset = Dataset.from_dict(reconstruct_data(data_dict)) sample_dataset # + # TODO: modify this into covid dataset version # + from datasets import load_dataset, load_metric # load the dataset datasets = load_dataset("squad_v2" if squad_v2 else "squad") # - datasets["validation"] # + from transformers import AutoTokenizer # load the tokenizer tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) # tokenizer("What is your name?", "My name is Sylvain.") # + # train data preprocessing max_length = 384 # The maximum length of a feature (question and context) doc_stride = 128 # The authorized overlap between two part of the context when splitting it is needed. pad_on_right = tokenizer.padding_side == "right" def prepare_train_features(examples): # Some of the questions have lots of whitespace on the left, which is not useful and will make the # truncation of the context fail (the tokenized question will take a lots of space). So we remove that # left whitespace examples["question"] = [q.lstrip() for q in examples["question"]] # Tokenize our examples with truncation and padding, but keep the overflows using a stride. This results # in one example possible giving several features when a context is long, each of those features having a # context that overlaps a bit the context of the previous feature. tokenized_examples = tokenizer( examples["question" if pad_on_right else "context"], examples["context" if pad_on_right else "question"], truncation="only_second" if pad_on_right else "only_first", max_length=max_length, stride=doc_stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) # Since one example might give us several features if it has a long context, we need a map from a feature to # its corresponding example. This key gives us just that. sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") # The offset mappings will give us a map from token to character position in the original context. This will # help us compute the start_positions and end_positions. offset_mapping = tokenized_examples.pop("offset_mapping") # Let's label those examples! tokenized_examples["start_positions"] = [] tokenized_examples["end_positions"] = [] for i, offsets in enumerate(offset_mapping): # We will label impossible answers with the index of the CLS token. input_ids = tokenized_examples["input_ids"][i] cls_index = input_ids.index(tokenizer.cls_token_id) # Grab the sequence corresponding to that example (to know what is the context and what is the question). sequence_ids = tokenized_examples.sequence_ids(i) # One example can give several spans, this is the index of the example containing this span of text. sample_index = sample_mapping[i] answers = examples["answers"][sample_index] # If no answers are given, set the cls_index as answer. if len(answers["answer_start"]) == 0: tokenized_examples["start_positions"].append(cls_index) tokenized_examples["end_positions"].append(cls_index) else: # Start/end character index of the answer in the text. start_char = answers["answer_start"][0] end_char = start_char + len(answers["text"][0]) # Start token index of the current span in the text. token_start_index = 0 while sequence_ids[token_start_index] != (1 if pad_on_right else 0): token_start_index += 1 # End token index of the current span in the text. token_end_index = len(input_ids) - 1 while sequence_ids[token_end_index] != (1 if pad_on_right else 0): token_end_index -= 1 # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index). if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char): tokenized_examples["start_positions"].append(cls_index) tokenized_examples["end_positions"].append(cls_index) else: # Otherwise move the token_start_index and token_end_index to the two ends of the answer. # Note: we could go after the last offset if the answer is the last word (edge case). while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char: token_start_index += 1 tokenized_examples["start_positions"].append(token_start_index - 1) while offsets[token_end_index][1] >= end_char: token_end_index -= 1 tokenized_examples["end_positions"].append(token_end_index + 1) return tokenized_examples # - # data tokenization tokenized_datasets = datasets.map(prepare_train_features, batched=True, remove_columns=datasets["train"].column_names) # + from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer # load model model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint) # + from transformers import default_data_collator # get data collator data_collator = default_data_collator # + # setup trainer model_name = model_checkpoint.split("/")[-1] args = TrainingArguments( f"{model_name}-finetuned-squad", evaluation_strategy = "epoch", learning_rate=2e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, num_train_epochs=3, weight_decay=0.01, ) trainer = Trainer( model, args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["validation"], data_collator=data_collator, tokenizer=tokenizer, ) # - # validation set preprocessing def prepare_validation_features(examples): print(len(examples["question"])) # Some of the questions have lots of whitespace on the left, which is not useful and will make the # truncation of the context fail (the tokenized question will take a lots of space). So we remove that # left whitespace examples["question"] = [q.lstrip() for q in examples["question"]] print(len(examples["question"])) # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results # in one example possible giving several features when a context is long, each of those features having a # context that overlaps a bit the context of the previous feature. tokenized_examples = tokenizer( examples["question" if pad_on_right else "context"], examples["context" if pad_on_right else "question"], truncation="only_second" if pad_on_right else "only_first", max_length=max_length, stride=doc_stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) print(len(tokenized_examples["input_ids"])) # Since one example might give us several features if it has a long context, we need a map from a feature to # its corresponding example. This key gives us just that. sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") # We keep the example_id that gave us this feature and we will store the offset mappings. tokenized_examples["example_id"] = [] for i in range(len(tokenized_examples["input_ids"])): # Grab the sequence corresponding to that example (to know what is the context and what is the question). sequence_ids = tokenized_examples.sequence_ids(i) context_index = 1 if pad_on_right else 0 # One example can give several spans, this is the index of the example containing this span of text. sample_index = sample_mapping[i] tokenized_examples["example_id"].append(examples["id"][sample_index]) # Set to None the offset_mapping that are not part of the context so it's easy to determine if a token # position is part of the context or not. tokenized_examples["offset_mapping"][i] = [ (o if sequence_ids[k] == context_index else None) for k, o in enumerate(tokenized_examples["offset_mapping"][i]) ] return tokenized_examples # + from datasets import Dataset test_sample_set_covid = Dataset.from_dict(sample_dataset[:5]) test_sample_set = Dataset.from_dict(datasets["validation"][:5]) # validation set tokenization validation_features = test_sample_set_covid.map( prepare_validation_features, batched=True, remove_columns=test_sample_set_covid.column_names ) # - key = "answers" test_sample_set[key], test_sample_set_covid[key] test_sample_set_covid, test_sample_set validation_features # inference on val set raw_predictions = trainer.predict(validation_features) raw_predictions # + import collections import numpy as np from tqdm.auto import tqdm # postprocessing on predictions def postprocess_qa_predictions(examples, features, raw_predictions, n_best_size = 20, max_answer_length = 30): all_start_logits, all_end_logits = raw_predictions # Build a map example to its corresponding features. example_id_to_index = {k: i for i, k in enumerate(examples["id"])} features_per_example = collections.defaultdict(list) for i, feature in enumerate(features): features_per_example[example_id_to_index[feature["example_id"]]].append(i) # The dictionaries we have to fill. predictions = collections.OrderedDict() # Logging. print(f"Post-processing {len(examples)} example predictions split into {len(features)} features.") # Let's loop over all the examples! for example_index, example in enumerate(tqdm(examples)): # Those are the indices of the features associated to the current example. feature_indices = features_per_example[example_index] min_null_score = None # Only used if squad_v2 is True. valid_answers = [] context = example["context"] # Looping through all the features associated to the current example. for feature_index in feature_indices: # We grab the predictions of the model for this feature. start_logits = all_start_logits[feature_index] end_logits = all_end_logits[feature_index] # This is what will allow us to map some the positions in our logits to span of texts in the original # context. offset_mapping = features[feature_index]["offset_mapping"] # Update minimum null prediction. cls_index = features[feature_index]["input_ids"].index(tokenizer.cls_token_id) feature_null_score = start_logits[cls_index] + end_logits[cls_index] if min_null_score is None or min_null_score < feature_null_score: min_null_score = feature_null_score # Go through all possibilities for the `n_best_size` greater start and end logits. start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist() end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist() for start_index in start_indexes: for end_index in end_indexes: # Don't consider out-of-scope answers, either because the indices are out of bounds or correspond # to part of the input_ids that are not in the context. if ( start_index >= len(offset_mapping) or end_index >= len(offset_mapping) or offset_mapping[start_index] is None or offset_mapping[end_index] is None ): continue # Don't consider answers with a length that is either < 0 or > max_answer_length. if end_index < start_index or end_index - start_index + 1 > max_answer_length: continue start_char = offset_mapping[start_index][0] end_char = offset_mapping[end_index][1] valid_answers.append( { "score": start_logits[start_index] + end_logits[end_index], "text": context[start_char: end_char] } ) if len(valid_answers) > 0: best_answer = sorted(valid_answers, key=lambda x: x["score"], reverse=True)[0] else: # In the very rare edge case we have not a single non-null prediction, we create a fake prediction to avoid # failure. best_answer = {"text": "", "score": 0.0} # Let's pick our final answer: the best one or the null answer (only for squad_v2) if not squad_v2: predictions[example["id"]] = best_answer["text"] else: answer = best_answer["text"] if best_answer["score"] > min_null_score else "" predictions[example["id"]] = answer return predictions # - # postprocessing final_predictions = postprocess_qa_predictions(test_sample_set_covid, validation_features, raw_predictions.predictions) pred_dict = {k:v for k,v in final_predictions.items()} def create_sample_gold(data_dict, pred_dict): gold_dict = { "data": [] } ids = pred_dict.keys() for article in data_dict['data']: gold_article = {"paragraphs":[]} for paragraph in article['paragraphs']: gold_paragraph = paragraph.copy() gold_paragraph["qas"] = [] for qa_pair in paragraph['qas']: if qa_pair["id"] in ids: gold_paragraph["qas"].append(qa_pair) gold_article["paragraphs"].append(gold_paragraph) gold_dict["data"].append(gold_article) return gold_dict gold_dict = create_sample_gold(data_dict, pred_dict) # save to the file def dict_to_json(data_dict, file_name): with open(file_name, "w") as outfile: json.dump(data_dict, outfile) dict_to_json(gold_dict, "sample_gold.json") dict_to_json(pred_dict, "sample_pred.json")
nlp_203/hw3/roberta_baseline.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # GEF Group 8 Elderly Project # ## Random math Question # + import re import random def replacenum(text, nlen): numamount = len(re.findall(r'\d+', text)) #print("num: " + str(numamount)) nums = re.sub(r'\d+', "%num%", text) #print(num0) final = nums for i in range(numamount): ran = random.randint(0,10**nlen-1) final = final.replace('%num%', str(ran), 1) #print("times:" + str(i)) #print(final) return(final) def calnum(text): formula = re.sub('=', "", text) return eval(formula) # - nnum = replacenum("2^2", 1) print(nnum) calnum(nnum) # ## Load _base_ Overlay # <div class="alert alert-box alert-info"> # Note that we load the base bitstream only once to use Grove module with PYNQ Grove Adapter and SEEED Grove Base Shield V2.0<br> # Please make sure you run the following cell before running either of the interfaces # </div> from pynq.overlays.base import BaseOverlay from pynq_peripherals import ArduinoSEEEDGroveAdapter base = BaseOverlay('base.bit') from time import sleep # # --- # ## Light System # + def lighton(): adapter = ArduinoSEEEDGroveAdapter(base.ARDUINO, D7='grove_led_stick') led_stick = adapter.D7 led_stick.clear() hexcolor = 0xFFFFFF for i in range(10): led_stick.set_pixel(i, hexcolor) led_stick.show() def lightoff(): adapter = ArduinoSEEEDGroveAdapter(base.ARDUINO, D7='grove_led_stick') led_stick = adapter.D7 led_stick.clear() # - lighton() lightoff() # # Door System # <div class="alert alert-box alert-warning"><ul> # <h4 class="alert-heading">Make Physical Connections </h4> # <li>Insert the Grove Base Shield into the Arduino connector on the board. Connect the Grove Servo to D6 connector of the Grove Base Shield.</li> # </ul> # </div> # + def doorOpen(): adapter = ArduinoSEEEDGroveAdapter(base.ARDUINO, D5='grove_servo') servo = adapter.D5 servo.set_angular_position(90) def doorClose(): adapter = ArduinoSEEEDGroveAdapter(base.ARDUINO, D5='grove_servo') servo = adapter.D5 servo.set_angular_position(0) # - doorOpen() doorClose() # ### Motion sensor (Not Done Yet) # + adapter = ArduinoSEEEDGroveAdapter(base.ARDUINO, D6='grove_pir') motionsensor = adapter.D6 adapter = ArduinoSEEEDGroveAdapter(base.ARDUINO, D5='grove_servo') servo = adapter.D5 print("start") for i in range(20): if motionsensor.motion_detected(): servo.set_angular_position(90) print("door open") else: servo.set_angular_position(0) print("door close") sleep(1) # - # ## Temperature/Fan # <div class="alert alert-box alert-warning"><ul> # <h4 class="alert-heading">Make Physical Connections </h4> # <li>Insert the Grove Base Shield into the Arduino connector on the board. Connect the Grove Temperature sensor module to A1 connector of the SEEED Grove Base Shield.</li> # </ul> # </div> from time import sleep def temp(): adapter = ArduinoSEEEDGroveAdapter(base.ARDUINO, A1='grove_temperature') temp_sensor = adapter.A1 for i in range(10): temperature = temp_sensor.get_temperature() print('Temperature: {:.2f} degree Celsius'.format(temperature)) sleep(1)
gef_p8_elderly.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # # Introduction à Python # + [markdown] slideshow={"slide_type": "-"} # > présentée par <NAME> # + [markdown] slideshow={"slide_type": "-"} # ## Quelques fonctions bien pratiques pour la manipulation d'itérables # + [markdown] slideshow={"slide_type": "slide"} # Commençons avec un itérable simple. # + slideshow={"slide_type": "fragment"} un_iterable_simple = ["JLR", "Jakarto", "CollPlan", "EvalWeb", "JLR", "JLR encore", "JLR toujours"] # + slideshow={"slide_type": "fragment"} un_iterable_simple.count("JLR") # Compte le nombre d'occurence d'un élément # + slideshow={"slide_type": "fragment"} un_iterable_simple.count("Jakarto cartographie 3d inc") # + slideshow={"slide_type": "fragment"} un_iterable_simple.index("Jakarto") # Récupère l'index de la première occurence d'un élément # + [markdown] slideshow={"slide_type": "-"} # **Rappel: les index en python commencent à 0 !** # + slideshow={"slide_type": "fragment"} un_iterable_simple.index("JLR", 1) # Récupère l'index de la prochaine occurence d'un élément à partir de l'index 1 # Renvoie une erreur si l'élément ne se trouve pas dans la sous-liste # + [markdown] slideshow={"slide_type": "subslide"} # Les prochaines méthodes sont bien utiles. Mais il faut les manipuler avec attention, car elles modifient directement l'objet sur lequel elles s'appliquent (pas de sauvegarde...). # + slideshow={"slide_type": "fragment"} un_iterable_simple.reverse() # Renverse l'ordre d'une liste (attention, modifie directement la liste) # + slideshow={"slide_type": "fragment"} un_iterable_simple # + slideshow={"slide_type": "fragment"} un_iterable_simple.sort() # Trie la liste dans l'ordre croissant # + slideshow={"slide_type": "fragment"} un_iterable_simple # + slideshow={"slide_type": "subslide"} les_plus_beaux = un_iterable_simple.pop() # + slideshow={"slide_type": "fragment"} un_iterable_simple # + slideshow={"slide_type": "fragment"} les_plus_beaux # + [markdown] slideshow={"slide_type": "slide"} # ## Quelques fonctions avancées utiles, surtout quand on manipule des données. # + slideshow={"slide_type": "fragment"} un_iterable_complexe = [] un_iterable_complexe.append({"nom": "Messal", "prénom": "Loïc", "employeur": "Jakarto", "age": 23}) un_iterable_complexe.append({"nom": "Lassem", "prénom": "Ciol", "employeur": "Otrakaj", "age": 17}) un_iterable_complexe.append({"nom": "Alssem", "prénom": "Icol", "employeur": "Torakaj", "age": 20}) un_iterable_complexe # + [markdown] slideshow={"slide_type": "subslide"} # #### Appliquer une même fonction sur tous les items # + [markdown] slideshow={"slide_type": "fragment"} # Supposons que l'on veuille récupérer l'age de l'ensemble de notre population dans une liste. # + slideshow={"slide_type": "fragment"} # solution naive recupere_age = [] for item in un_iterable_complexe: age = item["age"] recupere_age.append(age) # + slideshow={"slide_type": "fragment"} recupere_age # + [markdown] slideshow={"slide_type": "subslide"} # A priori, un proverbe québécois dit : "Un mauvais ouvrier a toujours de mauvais outils." [source](http://citation-celebre.leparisien.fr/citations/29396) # # Il est désormais temps de commencer à s'équiper. # + [markdown] slideshow={"slide_type": "subslide"} # On peut utiliser la compréhension des listes dans des cas simples : # + slideshow={"slide_type": "fragment"} recupere_age = [item["age"] for item in un_iterable_complexe] # à utiliser lorsque cela reste simple à comprendre # + slideshow={"slide_type": "fragment"} recupere_age # + [markdown] slideshow={"slide_type": "subslide"} # On peut aussi utiliser la fonction `map()` # + slideshow={"slide_type": "fragment"} # solution avancée def age(item): return item["age"] recupere_age = map(age, un_iterable_complexe) # map() applique la fonction age à tous les éléments de l'itérable # + slideshow={"slide_type": "fragment"} recupere_age # + [markdown] slideshow={"slide_type": "subslide"} # La fonction `map()` retourne un objet de type *map*. Convertissons le! # + slideshow={"slide_type": "fragment"} recupere_age = list(recupere_age) # Pour transformer explicitement l'objet map en liste # Attention, si on fait list(recupere_age) sans récupérer le résultat dans recupere_age, # recupere_age (en tant qu'objet map) sera un objet vide les prochaines fois qu'il sera utilisé # et list(recupere_age) sera alors une liste vide. # + slideshow={"slide_type": "fragment"} recupere_age # + [markdown] slideshow={"slide_type": "subslide"} # Bien! La fonction `age()` est plutôt simple. A-t-on vraiment besoin d'avoir une telle fonction ? (Si vous avez pensé aux fonctions anonymes (`lambda`), vous avez de bons réflexes.) # # C'est le moment d'utiliser nos bons outils. # + slideshow={"slide_type": "fragment"} # solution élégante recupere_age = list(map(lambda x: x["age"], un_iterable_complexe)) # + slideshow={"slide_type": "fragment"} recupere_age # + [markdown] slideshow={"slide_type": "slide"} # ### Même principe pour filtrer les données # + [markdown] slideshow={"slide_type": "subslide"} # Supposons que l'on veuille mettre les éléments ayant un age > 19 dans une liste. # + slideshow={"slide_type": "fragment"} # solution naive adultes_responsables = [] for item in un_iterable_complexe: if item["age"] > 19: adultes_responsables.append(item) # + slideshow={"slide_type": "fragment"} adultes_responsables # + slideshow={"slide_type": "subslide"} # solution élégante adultes_responsables = list(filter(lambda x: x["age"] > 19, un_iterable_complexe)) # + slideshow={"slide_type": "fragment"} adultes_responsables # + [markdown] slideshow={"slide_type": "fragment"} # Simple. Clair. Efficace. # + [markdown] slideshow={"slide_type": "slide"} # ### Trier une liste sans danger! # + [markdown] slideshow={"slide_type": "fragment"} # Supposons que l'on veuille trier les éléments selon leur age (par ordre croissant). # + slideshow={"slide_type": "fragment"} # solution naive un_iterable_complexe_trie = [] # on est capable d'ordonner la liste des ages, et on a la liste des personnes. # on peut donc parcourir la liste des ages triée, trouver la personne ayant cet age-là # il restera finalement à ajouter cette personne à la liste un_iterable_complexe_trie recupere_age.sort() for index in range(len(un_iterable_complexe)): age_recherche = recupere_age[index] # on récupère le index_ème age le plus petit # on cherche l'item ayant cet âge là item_trouve = None # défini une variable vide dans laquelle on va mettre l'item ayant l'age recherché for item in un_iterable_complexe: if item["age"] == age_recherche: item_trouve = item break # On a trouvé l'item cherché, on peut arrêter la recherche un_iterable_complexe_trie.append(item_trouve) un_iterable_complexe_trie # + slideshow={"slide_type": "subslide"} # solution un peu mieux (utilisation de filter) un_iterable_complexe_trie = [] # on est capable d'ordonner la liste des ages, et on a la liste des personnes. # on peut donc parcourir la liste des ages triée, trouver la personne ayant cet age-là # il restera finalement à ajouter cette personne à la liste un_iterable_complexe_trie recupere_age.sort() for index in range(len(un_iterable_complexe)): age_recherche = recupere_age[index] # on récupère le index_ème age le plus petit item_trouve = list(filter(lambda x: x["age"] == age_recherche, un_iterable_complexe)) # on cherche l'item ayant cet âge là un_iterable_complexe_trie = un_iterable_complexe_trie + item_trouve # addition de deux listes un_iterable_complexe_trie # + [markdown] slideshow={"slide_type": "subslide"} # **Question :** Qu'arrive-t-il lorsque deux personnes ont le même âge ? # + slideshow={"slide_type": "fragment"} un_iterable_complexe = [] un_iterable_complexe.append({"nom": "Messal", "prénom": "Loïc", "employeur": "Jakarto", "age": 23}) un_iterable_complexe.append({"nom": "Lassem", "prénom": "Ciol", "employeur": "Otrakaj", "age": 17}) un_iterable_complexe.append({"nom": "Alssem", "prénom": "Icol", "employeur": "Torakaj", "age": 20}) un_iterable_complexe.append({"nom": "Inex", "prénom": "Istant", "employeur": "Karotaj", "age": 20}) un_iterable_complexe # + slideshow={"slide_type": "subslide"} # solution un peu mieux (utilisation de filter) un_iterable_complexe_trie = [] # on est capable d'ordonner la liste des ages, et on a la liste des personnes. # on peut donc parcourir la liste des ages triée, trouver la personne ayant cet age-là # il restera finalement à ajouter cette personne à la liste un_iterable_complexe_trie recupere_age = list(map(lambda x: x["age"], un_iterable_complexe)) recupere_age.sort() for index in range(len(un_iterable_complexe)): age_recherche = recupere_age[index] # on récupère le index_ème age le plus petit item_trouve = list(filter(lambda x: x["age"] == age_recherche, un_iterable_complexe)) # on cherche l'item ayant cet âge là un_iterable_complexe_trie = un_iterable_complexe_trie + item_trouve # addition de deux listes un_iterable_complexe_trie # + [markdown] slideshow={"slide_type": "subslide"} # On a un nombre de personnes supérieures à la liste initiale... # # <NAME> et <NAME> ont été ajouté deux fois chacun... parce que filter retourne une liste de taille 2. # + [markdown] slideshow={"slide_type": "fragment"} # Prenons alors un seul élément résultant du filtre. # + slideshow={"slide_type": "subslide"} # un peu mieux (utilisation de filter) un_iterable_complexe_trie = [] # on est capable d'ordonner la liste des ages, et on a la liste des personnes. # on peut donc parcourir la liste des ages triée, trouver la personne ayant cet age-là # il restera finalement à ajouter cette personne à la liste un_iterable_complexe_trie recupere_age = list(map(lambda x: x["age"], un_iterable_complexe)) recupere_age.sort() for index in range(len(un_iterable_complexe)): age_recherche = recupere_age[index] correspondances = list(filter(lambda x: x["age"] == age_recherche, un_iterable_complexe)) item_trouve = correspondances[0] # récupération du premier élément filtré un_iterable_complexe_trie.append(item_trouve) # + slideshow={"slide_type": "subslide"} un_iterable_complexe_trie # + [markdown] slideshow={"slide_type": "fragment"} # <NAME> apparaît deux fois. # # Mais <NAME> a disparu... Ce n'est pas la bonne solution ! # + [markdown] slideshow={"slide_type": "subslide"} # On sent bien qu'on est proche de la solution. Il faudrait simplement enlever l'item à chaque fois que nous l'avons utilisé. # + slideshow={"slide_type": "fragment"} # un peu mieux (utilisation de filter) un_iterable_complexe_trie = [] # on est capable d'ordonner la liste des ages, et on a la liste des personnes. # on peut donc parcourir la liste des ages triée, trouver la personne ayant cet age-là # il restera finalement à ajouter cette personne à la liste un_iterable_complexe_trie recupere_age = list(map(lambda x: x["age"], un_iterable_complexe)) recupere_age.sort() for index in range(len(un_iterable_complexe)): age_recherche = recupere_age[index] correspondances = list(filter(lambda x: x["age"] == age_recherche, un_iterable_complexe)) item_trouve = correspondances[0] # récupération du premier élément filtré un_iterable_complexe.remove(item_trouve) un_iterable_complexe_trie.append(item_trouve) un_iterable_complexe_trie # + [markdown] slideshow={"slide_type": "subslide"} # Tadaaa! Nous avons toutes nos personnes dans la liste triée par âge. Et maintenant affichons l'iterable initial ! # + slideshow={"slide_type": "fragment"} un_iterable_complexe # + [markdown] slideshow={"slide_type": "fragment"} # En pensant résoudre un problème, vous en avez créé un autre. Félicitations, vous venez d'entrer dans le monde des développeurs. # + [markdown] slideshow={"slide_type": "subslide"} # Reprenons notre configuration initiale, cette fois, en sauvegardant notre liste dans une temporaire que l'on modifiera à souhait. # + slideshow={"slide_type": "fragment"} un_iterable_complexe = [] un_iterable_complexe.append({"nom": "Messal", "prénom": "Loïc", "employeur": "Jakarto", "age": 23}) un_iterable_complexe.append({"nom": "Lassem", "prénom": "Ciol", "employeur": "Otrakaj", "age": 17}) un_iterable_complexe.append({"nom": "Alssem", "prénom": "Icol", "employeur": "Torakaj", "age": 20}) un_iterable_complexe.append({"nom": "Inex", "prénom": "Istant", "employeur": "Karotaj", "age": 20}) un_iterable_complexe # + slideshow={"slide_type": "subslide"} # solution potable un_iterable_complexe_trie = [] # on est capable d'ordonner la liste des ages, et on a la liste des personnes. # on peut donc parcourir la liste des ages triée, trouver la personne ayant cet age-là # il restera finalement à ajouter cette personne à la liste un_iterable_complexe_trie recupere_age = list(map(lambda x: x["age"], un_iterable_complexe)) recupere_age.sort() # Copie naive d'une liste une_copie_d_un_iterable_complexe = [] for item in un_iterable_complexe: une_copie_d_un_iterable_complexe.append(item) for index in range(len(une_copie_d_un_iterable_complexe)): age_recherche = recupere_age[index] correspondances = list(filter(lambda x: x["age"] == age_recherche, une_copie_d_un_iterable_complexe)) item_trouve = correspondances[0] # récupération du premier élément filtré une_copie_d_un_iterable_complexe.remove(item_trouve) un_iterable_complexe_trie.append(item_trouve) # + slideshow={"slide_type": "subslide"} un_iterable_complexe_trie # + slideshow={"slide_type": "fragment"} un_iterable_complexe # + [markdown] slideshow={"slide_type": "fragment"} # On a enfin trouvé une solution! # # Ce qu'on constate, c'est qu'il devient rapidement compliqué de trier une liste lorsqu'on souhaite trier selon une caractéristique d'un item. C'est pour cela que la fonction `sorted()` a été inventée. # + slideshow={"slide_type": "fragment"} # solution élégante sorted(un_iterable_complexe, key=lambda x: x["age"]) # + [markdown] slideshow={"slide_type": "fragment"} # Et en une ligne, vous avez obtenu le résultat espéré. # + [markdown] slideshow={"slide_type": "subslide"} # Et si vous vouliez trier votre liste par âge, puis par employeur en cas d'égalité (cela suppose qu'il existe une relation d'ordre sur les valeurs de la clé *employeur*). # + [markdown] slideshow={"slide_type": "fragment"} # Dans notre cas, la relation d'ordre existe bien. Pour les chaines de caractère, la relation d'ordre par défaut est l'ordre alphabétique. # + slideshow={"slide_type": "fragment"} sorted(un_iterable_complexe, key=lambda x: (x["age"], x["employeur"])) # + [markdown] slideshow={"slide_type": "fragment"} # Grâce au lambda qui retourne désormais un tuple "d'importance", on obtient un super ordonnancement de notre liste. # + [markdown] slideshow={"slide_type": "fragment"} # Simple. Clair. Très efficace. # + [markdown] slideshow={"slide_type": "subslide"} # Note : si vous ressentez un drôle de sentiment, que vous trouvez la solution belle et qu'elle vous fait sourire, alors soit : # - vous avez véritablement l'âme d'un programmeur et un bel avenir dans ce domaine s'offre à vous # - fuyez tant qu'il est encore temps # # si cela ne vous fait aucun effet, alors soit: # - vous avez déjà connu ce sentiment (avec les mathématiques, probablement) # - le développement informatique n'est pas fait pour vous # - il vous en faut peut-être plus ? ;) # + [markdown] slideshow={"slide_type": "slide"} # [Prochain chapitre : La Programmation Orientée Objet](06_La_Programmation_Orientée_Objet.ipynb)
05_Fonctions_pratiques_pour_des_variables_complexes.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.5 (''.venv'': venv)' # name: pythonjvsc74a57bd067b393f23005f5647497c50fa99fb25b525d8642232b1bdc07a39bdb19f3ee4f # --- import os WM_PROJECT_USER_DIR=os.environ['WM_PROJECT_USER_DIR'] import sys sys.path.append(f"{WM_PROJECT_USER_DIR}/utilities") import numpy as np import pandas as pd import postProcess.polyMesh2d as mesh2d import postProcess.pyResconstruct as pyResconstruct import postProcess.pyFigure as pyFigure import postProcess.pyCompute as pyCompute import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec import matplotlib.ticker as mticker import json import proplot as plot import concurrent.futures # ## Cases Info # + case_T573="../../T573_Pe1e-3_modifiedRPM/" case_T673="../../T673_Pe1e-3_modifiedRPM/" case_T773="../../T773_Pe1e-3/" case_T873="../../T873_Pe1e-3/" data_folder="postProcess" save_folder="postProcess/images" transverse_data_folder="postProcess/transverseAveragedData" # + max_t_case_T573=str(list(np.sort([float(i) for i in pyFigure.get_times_from_data_folder(os.path.join(case_T573,transverse_data_folder))]))[-1]) print(f"max time of case_T573: {max_t_case_T573}") max_t_case_T673=str(list(np.sort([float(i) for i in pyFigure.get_times_from_data_folder(os.path.join(case_T673,transverse_data_folder))]))[-1]) print(f"max time of case_T673: {max_t_case_T673}") max_t_case_T773=str(list(np.sort([float(i) for i in pyFigure.get_times_from_data_folder(os.path.join(case_T773,transverse_data_folder))]))[-1]) print(f"max time of case_T773: {max_t_case_T773}") max_t_case_T873=str(list(np.sort([float(i) for i in pyFigure.get_times_from_data_folder(os.path.join(case_T873,transverse_data_folder))]))[-1]) print(f"max time of case_T873: {max_t_case_T873}") # - TMax_file_case_T573="/home/anoldfriend/OpenFOAM/anoldfriend-7/run/cokeCombustion/T573_Pe1e-3_modifiedRPM/postProcessing/minMaxComponents/0.01/fieldMinMax.dat" TMax_file_case_T673="/home/anoldfriend/OpenFOAM/anoldfriend-7/run/cokeCombustion/T673_Pe1e-3_modifiedRPM/postProcessing/minMaxComponents/0.15/fieldMinMax_0.1500012.dat" TMax_file_case_T773="/home/anoldfriend/OpenFOAM/anoldfriend-7/run/cokeCombustion/T773_Pe1e-3/postProcessing/minMaxComponents/0.01/fieldMinMax.dat" TMax_file_case_T873="/home/anoldfriend/OpenFOAM/anoldfriend-7/run/cokeCombustion/T873_Pe1e-3/postProcessing/minMaxComponents/0.01/fieldMinMax.dat" # ## Maximum Temperature with time # ### Max Point Temperature # + sampling_rate=100 df_maxT_case_T573=pyFigure.read_min_max_field(TMax_file_case_T573,sampling_rate,'T') df_maxT_case_T673=pyFigure.read_min_max_field(TMax_file_case_T673,sampling_rate,'T') df_maxT_case_T773=pyFigure.read_min_max_field(TMax_file_case_T773,sampling_rate,'T') df_maxT_case_T873=pyFigure.read_min_max_field(TMax_file_case_T873,sampling_rate,'T') fig, ax = plot.subplots( aspect=(4, 3), axwidth=4) ax.plot(df_maxT_case_T573["Time"],df_maxT_case_T573["max"],label="$\mathit{T}_{0}$ =573 K",ls=":") ax.plot(df_maxT_case_T673["Time"],df_maxT_case_T673["max"],label="$\mathit{T}_{0}$ =673 K",ls="-.") ax.plot(df_maxT_case_T773["Time"],df_maxT_case_T773["max"],label="$\mathit{T}_{0}$ =773 K",ls="--") ax.plot(df_maxT_case_T873["Time"],df_maxT_case_T873["max"],label="$\mathit{T}_{0}$ =873 K",ls="-") ax.legend(loc="upper right", ncol=1, fancybox=True) ax.format( xlabel="Time (s)", ylabel="Maximum Combustion Temperature (K)", ylim=(550,1000)) fig.savefig('./MaximumPointTemperature.jpg', dpi=600) # + max_component_file_case_T573="/home/anoldfriend/OpenFOAM/anoldfriend-7/run/cokeCombustion/T573_Pe1e-3_modifiedRPM/postProcessing/minMaxComponents2/62.51/fieldMinMax.dat" max_component_file_case_T673="/home/anoldfriend/OpenFOAM/anoldfriend-7/run/cokeCombustion/T673_Pe1e-3_modifiedRPM/postProcessing/minMaxComponents2/22.65/fieldMinMax.dat" max_component_file_case_T773="/home/anoldfriend/OpenFOAM/anoldfriend-7/run/cokeCombustion/T773_Pe1e-3/postProcessing/minMaxComponents2/20.51/fieldMinMax.dat" max_component_file_case_T873="/home/anoldfriend/OpenFOAM/anoldfriend-7/run/cokeCombustion/T873_Pe1e-3/postProcessing/minMaxComponents2/19.01/fieldMinMax.dat" df_combined_case_T573=pyCompute.computeMaxTemperatureAndOutletO2ConcHistory(max_component_file_case_T573,os.path.join(case_T573,transverse_data_folder)) df_combined_case_T673=pyCompute.computeMaxTemperatureAndOutletO2ConcHistory(max_component_file_case_T673,os.path.join(case_T673,transverse_data_folder)) df_combined_case_T773=pyCompute.computeMaxTemperatureAndOutletO2ConcHistory(max_component_file_case_T773,os.path.join(case_T773,transverse_data_folder)) df_combined_case_T873=pyCompute.computeMaxTemperatureAndOutletO2ConcHistory(max_component_file_case_T873,os.path.join(case_T873,transverse_data_folder)) # + fig, ax = plot.subplots( aspect=(4, 3), axwidth=4) ax.plot(df_combined_case_T573["Time"],df_combined_case_T573["Transverse_Tmax"],label="$\mathit{T}_{0}$ =573 K",ls=":") ax.plot(df_combined_case_T673["Time"],df_combined_case_T673["Transverse_Tmax"],label="$\mathit{T}_{0}$ =673 K",ls="-.") ax.plot(df_combined_case_T773["Time"],df_combined_case_T773["Transverse_Tmax"],label="$\mathit{T}_{0}$ =773 K",ls="--") ax.plot(df_combined_case_T873["Time"],df_combined_case_T873["Transverse_Tmax"],label="$\mathit{T}_{0}$ =873 K",ls="-") ax.legend(loc="upper right", ncol=1, fancybox=True) ax.format( xlabel="Time (s)", ylabel="Maximum transversely averaged temperature (K)", ) fig.savefig('./MaximumTransverselyAveragedTemperature.jpg', dpi=600) # - # ### comments # 1. With the Pe of 1e-3, the combustion temperature is stable no matter the firing temperature # 2. The combustion temperature at the firing stage increase greatly with the case of higher firing temperature due to intensive coke combustion at the inlet due to strong O2 diffusive flux. But for the cases with the firing temperature of 673K and 573K, this temperature increase is not observed # ## reaction rate with firing temperature # + df_rate_case_T573=pd.read_csv(os.path.join(case_T573,"postProcess/others/ReactionRateAndBurningRate.csv")) df_rate_case_T673=pd.read_csv(os.path.join(case_T673,"postProcess/others/ReactionRateAndBurningRate.csv")) df_rate_case_T773=pd.read_csv(os.path.join(case_T773,"postProcess/others/ReactionRateAndBurningRate.csv")) df_rate_case_T873=pd.read_csv(os.path.join(case_T873,"postProcess/others/ReactionRateAndBurningRate.csv")) coke_rate_case_T573=list(df_rate_case_T573[df_rate_case_T573["time"]==float(max_t_case_T573)]["vol_averaged_reaction_rate"])[0] coke_rate_case_T673=list(df_rate_case_T673[df_rate_case_T673["time"]==float(max_t_case_T673)]["vol_averaged_reaction_rate"])[0] coke_rate_case_T773=list(df_rate_case_T773[df_rate_case_T773["time"]==float(max_t_case_T773)]["vol_averaged_reaction_rate"])[0] coke_rate_case_T873=list(df_rate_case_T873[df_rate_case_T873["time"]==float(max_t_case_T873)]["vol_averaged_reaction_rate"])[0] T=[573,673,773,873] coke_rates=[coke_rate_case_T573,coke_rate_case_T673,coke_rate_case_T773,coke_rate_case_T873] # - fig, ax = plot.subplots( aspect=(4, 3), axwidth=4) ax.scatter(T,coke_rates) ax.format(xlabel="Initial Temperature (K)", ylabel="Volume-Averaged coke reaction rate (kg/m$^3$/s)") fig.savefig('./cokeReactionRate~T.jpg', dpi=600) # + pixelResolution=0.5e-6 DO2=7.63596e-6 def process_df_O2_flux(df_O2_flux_at_inlet,DO2=7.63596e-6,pixelResolution=0.5e-6): df_O2_flux_at_inlet["diffusive_flux"]=np.array(df_O2_flux_at_inlet["O2_diffusive_Flux_By_DO2"])*DO2 df_O2_flux_at_inlet["advective_flux"]=np.array(df_O2_flux_at_inlet["O2_adv_flux_by_deltaX"])*pixelResolution df_O2_flux_at_inlet["total_flux"]=df_O2_flux_at_inlet["diffusive_flux"]+df_O2_flux_at_inlet["advective_flux"] return df_O2_flux_at_inlet # + df_O2_flux_at_inlet_case_T573=pd.read_csv(os.path.join(case_T573,"postProcess/others/O2FluxsAtInlet.csv")) df_O2_flux_at_inlet_case_T673=pd.read_csv(os.path.join(case_T673,"postProcess/others/O2FluxsAtInlet.csv")) df_O2_flux_at_inlet_case_T773=pd.read_csv(os.path.join(case_T773,"postProcess/others/O2FluxsAtInlet.csv")) df_O2_flux_at_inlet_case_T873=pd.read_csv(os.path.join(case_T873,"postProcess/others/O2FluxsAtInlet.csv")) df_O2_flux_at_inlet_case_T573=process_df_O2_flux(df_O2_flux_at_inlet_case_T573,DO2 =4.87334E-06) df_O2_flux_at_inlet_case_T673=process_df_O2_flux(df_O2_flux_at_inlet_case_T673,DO2=6.20322E-06) df_O2_flux_at_inlet_case_T773=process_df_O2_flux(df_O2_flux_at_inlet_case_T773,DO2=7.63596e-6) df_O2_flux_at_inlet_case_T873=process_df_O2_flux(df_O2_flux_at_inlet_case_T873,DO2=9.16465E-06) # + total_O2_flux_case_T573=list(df_O2_flux_at_inlet_case_T573[df_O2_flux_at_inlet_case_T573["time"]==float(max_t_case_T573)]["total_flux"])[0] total_O2_flux_case_T673=list(df_O2_flux_at_inlet_case_T673[df_O2_flux_at_inlet_case_T673["time"]==float(max_t_case_T673)]["total_flux"])[0] total_O2_flux_case_T773=list(df_O2_flux_at_inlet_case_T773[df_O2_flux_at_inlet_case_T773["time"]==float(max_t_case_T773)]["total_flux"])[0] total_O2_flux_case_T873=list(df_O2_flux_at_inlet_case_T873[df_O2_flux_at_inlet_case_T873["time"]==float(max_t_case_T873)]["total_flux"])[0] total_O2_fluxs=[total_O2_flux_case_T573,total_O2_flux_case_T673,total_O2_flux_case_T773,total_O2_flux_case_T873] O2Fraction=0.22 rhoST=1.2758 area=1*1200*pixelResolution #z 1m total_O2_flux_ST_volume=[ flux/O2Fraction/rhoST/area*60 for flux in total_O2_fluxs] #m3 (ST)/(m2 min) # + fig, ax = plot.subplots( aspect=(4, 3), axwidth=4) ax.scatter(total_O2_flux_ST_volume,coke_rates) ax.format(xlabel="Air flux at inlet (m$^3$ (ST)/(m$^2$$\cdot$min))", ylabel="Volume-Averaged coke reaction rate (kg/m$^3$/s)") coef = np.polyfit(total_O2_flux_ST_volume,coke_rates,1) poly1d_fn = np.poly1d(coef) ax.plot(total_O2_flux_ST_volume,poly1d_fn(total_O2_flux_ST_volume),ls="--",color=plot.scale_luminance('cerulean', 0.5)) fig.savefig('./cokeReactionRate~AirFlux.jpg', dpi=600) # + diffusive_O2_flux_case_T573=list(df_O2_flux_at_inlet_case_T573[df_O2_flux_at_inlet_case_T573["time"]==float(max_t_case_T573)]["diffusive_flux"])[0] diffusive_O2_flux_case_T673=list(df_O2_flux_at_inlet_case_T673[df_O2_flux_at_inlet_case_T673["time"]==float(max_t_case_T673)]["diffusive_flux"])[0] diffusive_O2_flux_case_T773=list(df_O2_flux_at_inlet_case_T773[df_O2_flux_at_inlet_case_T773["time"]==float(max_t_case_T773)]["diffusive_flux"])[0] diffusive_O2_flux_case_T873=list(df_O2_flux_at_inlet_case_T873[df_O2_flux_at_inlet_case_T873["time"]==float(max_t_case_T873)]["diffusive_flux"])[0] diffusive_O2_fluxs=[diffusive_O2_flux_case_T573,diffusive_O2_flux_case_T673,diffusive_O2_flux_case_T773,diffusive_O2_flux_case_T873] O2Fraction=0.22 rhoST=1.2758 area=1*1200*pixelResolution #z 1m diffusive_O2_flux_ST_volume=[ flux/O2Fraction/rhoST/area*60 for flux in diffusive_O2_fluxs] #m3 (ST)/(m2 min) fig, ax = plot.subplots( aspect=(4, 3), axwidth=4) ax.scatter(diffusive_O2_flux_ST_volume,coke_rates) coef = np.polyfit(diffusive_O2_flux_ST_volume,coke_rates,1) poly1d_fn = np.poly1d(coef) ax.plot(diffusive_O2_flux_ST_volume,poly1d_fn(diffusive_O2_flux_ST_volume),ls="--",color=plot.scale_luminance('cerulean', 0.5)) ax.format(xlabel="Air diffusion flux at inlet (m$^3$ (ST)/(m$^2$$\cdot$min))", ylabel="Volume-Averaged coke reaction rate (kg/m$^3$/s)") fig.savefig('./cokeReactionRate~AirdiffusionFlux.jpg', dpi=600) # + advective_O2_flux_case_T573=list(df_O2_flux_at_inlet_case_T573[df_O2_flux_at_inlet_case_T573["time"]==float(max_t_case_T573)]["advective_flux"])[0] advective_O2_flux_case_T673=list(df_O2_flux_at_inlet_case_T673[df_O2_flux_at_inlet_case_T673["time"]==float(max_t_case_T673)]["advective_flux"])[0] advective_O2_flux_case_T773=list(df_O2_flux_at_inlet_case_T773[df_O2_flux_at_inlet_case_T773["time"]==float(max_t_case_T773)]["advective_flux"])[0] advective_O2_flux_case_T873=list(df_O2_flux_at_inlet_case_T873[df_O2_flux_at_inlet_case_T873["time"]==float(max_t_case_T873)]["advective_flux"])[0] advective_O2_fluxs=[advective_O2_flux_case_T573,advective_O2_flux_case_T673,advective_O2_flux_case_T773,advective_O2_flux_case_T873] O2Fraction=0.22 rhoST=1.2758 area=1*1200*pixelResolution #z 1m advective_O2_flux_ST_volume=[ flux/O2Fraction/rhoST/area*60 for flux in advective_O2_fluxs] #m3 (ST)/(m2 min) fig, ax = plot.subplots( aspect=(4, 3), axwidth=4) ax.scatter(advective_O2_flux_ST_volume,coke_rates) coef = np.polyfit(advective_O2_flux_ST_volume,coke_rates,1) poly1d_fn = np.poly1d(coef) ax.plot(advective_O2_flux_ST_volume,poly1d_fn(advective_O2_flux_ST_volume),ls="--",color=plot.scale_luminance('cerulean', 0.5)) ax.format(xlabel="Air advection flux at inlet (m$^3$ (ST)/(m$^2$$\cdot$min))", ylabel="Volume-Averaged coke reaction rate (kg/m$^3$/s)") fig.savefig('./cokeReactionRate~AirAdvectionFlux.jpg', dpi=600) # + diffusive_O2_flux_case_T573=list(df_O2_flux_at_inlet_case_T573[df_O2_flux_at_inlet_case_T573["time"]==float(max_t_case_T573)]["diffusive_flux"])[0] diffusive_O2_flux_case_T673=list(df_O2_flux_at_inlet_case_T673[df_O2_flux_at_inlet_case_T673["time"]==float(max_t_case_T673)]["diffusive_flux"])[0] diffusive_O2_flux_case_T773=list(df_O2_flux_at_inlet_case_T773[df_O2_flux_at_inlet_case_T773["time"]==float(max_t_case_T773)]["diffusive_flux"])[0] diffusive_O2_flux_case_T873=list(df_O2_flux_at_inlet_case_T873[df_O2_flux_at_inlet_case_T873["time"]==float(max_t_case_T873)]["diffusive_flux"])[0] diffusive_O2_fluxs=[diffusive_O2_flux_case_T573,diffusive_O2_flux_case_T673,diffusive_O2_flux_case_T773,diffusive_O2_flux_case_T873] O2Fraction=0.22 rhoST=1.2758 area=1*1200*pixelResolution #z 1m diffusive_O2_flux_ST_volume=[ flux/rhoST/area*60*60 for flux in diffusive_O2_fluxs] #m3 (ST)/(m2 h) fig, ax = plot.subplots( aspect=(4, 3), axwidth=4) ax.scatter(diffusive_O2_flux_ST_volume,coke_rates,label="numerical results") coef = np.polyfit(diffusive_O2_flux_ST_volume,coke_rates,1) poly1d_fn = np.poly1d(coef) ax.plot(diffusive_O2_flux_ST_volume,poly1d_fn(diffusive_O2_flux_ST_volume),ls="--",color=plot.scale_luminance('cerulean', 0.5), label="linear fit") # coef2 = np.polyfit(advective_O2_flux_ST_volume,coke_rates,1) # poly1d_fn2 = np.poly1d(coef2) # ax.plot(advective_O2_flux_ST_volume,poly1d_fn2(advective_O2_flux_ST_volume),ls="-.",color=plot.scale_luminance('red', 0.5)) ax.legend(fancybox=True,ncol=1) ax.format(xlabel="O$_2$ diffusion flux (m$^3$(O$_2$)/(m$^2$$\cdot$h))", ylabel="Volume-Averaged coke reaction rate (kg/m$^3$/s)") fig.savefig('./cokeReactionRate~O2diffusionFlux.jpg', dpi=600) # + advective_O2_flux_case_T573=list(df_O2_flux_at_inlet_case_T573[df_O2_flux_at_inlet_case_T573["time"]==float(max_t_case_T573)]["advective_flux"])[0] advective_O2_flux_case_T673=list(df_O2_flux_at_inlet_case_T673[df_O2_flux_at_inlet_case_T673["time"]==float(max_t_case_T673)]["advective_flux"])[0] advective_O2_flux_case_T773=list(df_O2_flux_at_inlet_case_T773[df_O2_flux_at_inlet_case_T773["time"]==float(max_t_case_T773)]["advective_flux"])[0] advective_O2_flux_case_T873=list(df_O2_flux_at_inlet_case_T873[df_O2_flux_at_inlet_case_T873["time"]==float(max_t_case_T873)]["advective_flux"])[0] advective_O2_fluxs=[advective_O2_flux_case_T573,advective_O2_flux_case_T673,advective_O2_flux_case_T773,advective_O2_flux_case_T873] O2Fraction=0.22 rhoST=1.2758 area=1*1200*pixelResolution #z 1m advective_O2_flux_ST_volume=[ flux/rhoST/area*60*60 for flux in advective_O2_fluxs] #m3 (ST)/(m2 h) fig, ax = plot.subplots( aspect=(4, 3), axwidth=4) ax.scatter(advective_O2_flux_ST_volume,coke_rates,label="numerical results") coef = np.polyfit(advective_O2_flux_ST_volume,coke_rates,1) poly1d_fn = np.poly1d(coef) ax.plot(advective_O2_flux_ST_volume,poly1d_fn(advective_O2_flux_ST_volume),ls="--",color=plot.scale_luminance('cerulean', 0.5), label="linear fit") ax.legend(fancybox=True,ncol=1) ax.format(xlabel="O$_2$ advection flux (m$^3$(O$_2$)/(m$^2$$\cdot$h))", ylabel="Volume-Averaged coke reaction rate (kg/m$^3$/s)") fig.savefig('./cokeReactionRate~O2AdvectionFlux.jpg', dpi=600) # - # ### Comments # 1. Almost linear with the diffusion flux, confirming the mechanism is the diffusion limited # ### Combustion front behavoirs # + df_transverse_case_T573=pd.read_csv(os.path.join(os.path.join(case_T573,transverse_data_folder),f"{max_t_case_T573}.csv")) df_transverse_case_T673=pd.read_csv(os.path.join(os.path.join(case_T673,transverse_data_folder),f"{max_t_case_T673}.csv")) df_transverse_case_T773=pd.read_csv(os.path.join(os.path.join(case_T773,transverse_data_folder),f"{max_t_case_T773}.csv")) df_transverse_case_T873=pd.read_csv(os.path.join(os.path.join(case_T873,transverse_data_folder),f"{max_t_case_T873}.csv")) # + c1 = plot.scale_luminance('cerulean', 0.5) c2 = plot.scale_luminance('red', 0.5) fig, axs = plot.subplots(ncols=1, nrows=4,aspect=(4, 1), axwidth=4, hspace=(0, ),sharex=3,sharey=3) axs_r=axs.alty() fig.text(0.985, 0.5, "Reaction Heat Rate (J/(m$^3\cdot$s))", color=c2, va='center', rotation='vertical') axs[0].plot(df_transverse_case_T873["x"],df_transverse_case_T873["O2Conc"],color=c1) axs_r[0].plot(df_transverse_case_T873["x"],df_transverse_case_T873["Qdot"],color=c2) axs[0].format(title="$\mathit{T}_{0}$ = 873 K",titleloc='ur') # axs[0].legend() axs[1].plot(df_transverse_case_T773["x"],df_transverse_case_T773["O2Conc"],color=c1) axs_r[1].plot(df_transverse_case_T773["x"],df_transverse_case_T773["Qdot"],color=c2) axs[1].format(title="$\mathit{T}_{0}$ = 773 K",titleloc='ur') # axs[1].legend() axs[2].plot(df_transverse_case_T673["x"],df_transverse_case_T673["O2Conc"],color=c1) axs_r[2].plot(df_transverse_case_T673["x"],df_transverse_case_T673["Qdot"],color=c2) axs[2].format(title="$\mathit{T}_{0}$ = 673 K",titleloc='ur') # axs[2].legend() axs[3].plot(df_transverse_case_T573["x"],df_transverse_case_T573["O2Conc"],color=c1) axs_r[3].plot(df_transverse_case_T573["x"],df_transverse_case_T573["Qdot"],color=c2) axs[3].format(title="$\mathit{T}_{0}$ = 573 K",titleloc='ur') # axs[3].legend() axs[0].format(xtickloc='top') axs.format(xlabel="X (m)",ylabel="O$_2$ mole concentration (mol/m$^3$)",ycolor=c1) axs_r[3].get_shared_y_axes().join(axs_r[3], axs_r[2]) axs_r[3].get_shared_y_axes().join(axs_r[3], axs_r[1]) axs_r[3].get_shared_y_axes().join(axs_r[3], axs_r[0]) axs_r.format(ylabel='',ycolor=c2) # axs_r.autoscale(tight=True) fig.savefig('./Combustion front behavoirs.jpg', dpi=600,bbox_inches='tight') # - # ## Comments # - With the increasing the firing temperature, the combustion control regime change from the kiniteics-limited to the diffusion-limited with the combustion front narrowed down
run/cokeCombustion/DaAnalysis/Pe1e-3/analysisDaEffect.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + pycharm={"name": "#%%\n"} # Copyright 2021 NVIDIA Corporation. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ================================ # + [markdown] pycharm={"name": "#%% md\n"} # <img src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png" style="width: 90px; float: right;"> # # # Exporting Ranking Models # # This notebook is created using the latest stable [merlin-tensorflow-training](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/merlin/containers/merlin-tensorflow-training/tags) container. # # In this example notebook we demonstrate how to export (save) NVTabular `workflow` and a `ranking model` for model deployment with [Merlin Systems](https://github.com/NVIDIA-Merlin/systems) library. # # Learning Objectives: # # - Export NVTabular workflow for model deployment # - Export TensorFlow DLRM model for model deployment # # We will follow the steps below: # - Prepare the data with NVTabular and export NVTabular workflow # - Train a DLRM model with Merlin Models and export the trained model # + [markdown] pycharm={"name": "#%% md\n"} # ## Importing Libraries # + [markdown] pycharm={"name": "#%% md\n"} # Let's start with importing the libraries that we'll use in this notebook. # + pycharm={"name": "#%%\n"} import os import nvtabular as nvt from nvtabular.ops import * from merlin.models.utils.example_utils import workflow_fit_transform from merlin.schema.tags import Tags import merlin.models.tf as mm from merlin.io.dataset import Dataset import tensorflow as tf # + [markdown] pycharm={"name": "#%% md\n"} # ## Feature Engineering with NVTabular # + [markdown] pycharm={"name": "#%% md\n"} # We use the synthetic train and test datasets generated by mimicking the real [Ali-CCP: Alibaba Click and Conversion Prediction](https://tianchi.aliyun.com/dataset/dataDetail?dataId=408#1) dataset to build our recommender system ranking models. # # If you would like to use real Ali-CCP dataset instead, you can download the training and test datasets on [tianchi.aliyun.com](https://tianchi.aliyun.com/dataset/dataDetail?dataId=408#1). You can then use [get_aliccp()](https://github.com/NVIDIA-Merlin/models/blob/main/merlin/datasets/ecommerce/aliccp/dataset.py#L43) function to curate the raw csv files and save them as parquet files. # + pycharm={"name": "#%%\n"} from merlin.datasets.synthetic import generate_data DATA_FOLDER = os.environ.get("DATA_FOLDER", "/workspace/data/") NUM_ROWS = os.environ.get("NUM_ROWS", 1000000) SYNTHETIC_DATA = eval(os.environ.get("SYNTHETIC_DATA", "True")) BATCH_SIZE = int(os.environ.get("BATCH_SIZE", 512)) if SYNTHETIC_DATA: train, valid = generate_data("aliccp-raw", int(NUM_ROWS), set_sizes=(0.7, 0.3)) # save the datasets as parquet files train.to_ddf().to_parquet(os.path.join(DATA_FOLDER, "train")) valid.to_ddf().to_parquet(os.path.join(DATA_FOLDER, "valid")) # + [markdown] pycharm={"name": "#%% md\n"} # Let's define our input and output paths. # + pycharm={"name": "#%%\n"} train_path = os.path.join(DATA_FOLDER, "train", "*.parquet") valid_path = os.path.join(DATA_FOLDER, "valid", "*.parquet") output_path = os.path.join(DATA_FOLDER, "processed") # + [markdown] pycharm={"name": "#%% md\n"} # After we execute `fit()` and `transform()` functions on the raw dataset applying the operators defined in the NVTabular workflow pipeline below, the processed parquet files are saved to `output_path`. # + pycharm={"name": "#%%\n"} # %%time user_id = ["user_id"] >> Categorify() >> TagAsUserID() item_id = ["item_id"] >> Categorify() >> TagAsItemID() targets = ["click"] >> AddMetadata(tags=[Tags.BINARY_CLASSIFICATION, "target"]) item_features = ["item_category", "item_shop", "item_brand"] >> Categorify() >> TagAsItemFeatures() user_features = ( [ "user_shops", "user_profile", "user_group", "user_gender", "user_age", "user_consumption_2", "user_is_occupied", "user_geography", "user_intentions", "user_brands", "user_categories", ] >> Categorify() >> TagAsUserFeatures() ) outputs = user_id + item_id + item_features + user_features + targets workflow = nvt.Workflow(outputs) train_dataset = nvt.Dataset(train_path) valid_dataset = nvt.Dataset(valid_path) workflow.fit(train_dataset) workflow.transform(train_dataset).to_parquet(output_path=output_path + "/train/") workflow.transform(valid_dataset).to_parquet(output_path=output_path + "/valid/") # + [markdown] pycharm={"name": "#%% md\n"} # We save NVTabular `workflow` model in the current working directory. # + pycharm={"name": "#%%\n"} workflow.save("workflow") # + [markdown] pycharm={"name": "#%% md\n"} # Let's check out our saved workflow model folder. # + pycharm={"name": "#%%\n"} # !pip install seedir # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} import seedir as sd sd.seedir( DATA_FOLDER, style="lines", itemlimit=10, depthlimit=3, exclude_folders=".ipynb_checkpoints", sort=True, ) # + [markdown] pycharm={"name": "#%% md\n"} # ## Build and Train a DLRM model # + [markdown] pycharm={"name": "#%% md\n"} # In this example, we build, train, and export a Deep Learning Recommendation Model [(DLRM)](https://arxiv.org/abs/1906.00091) architecture. To learn more about how to train different deep learning models, how easily transition from one model to another and the seamless integration between data preparation and model training visit [03-Exploring-different-models.ipynb](https://github.com/NVIDIA-Merlin/models/blob/main/examples/03-Exploring-different-models.ipynb) notebook. # + [markdown] pycharm={"name": "#%% md\n"} # NVTabular workflow above exports a schema file, schema.pbtxt, of our processed dataset. To learn more about the schema object, schema file and `tags`, you can explore [02-Merlin-Models-and-NVTabular-integration.ipynb](02-Merlin-Models-and-NVTabular-integration.ipynb). # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} # define train and valid dataset objects train = Dataset(os.path.join(output_path, "train", "*.parquet")) valid = Dataset(os.path.join(output_path, "valid", "*.parquet")) # define schema object schema = train.schema # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} target_column = schema.select_by_tag(Tags.TARGET).column_names[0] target_column # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} model = mm.DLRMModel( schema, embedding_dim=64, bottom_block=mm.MLPBlock([128, 64]), top_block=mm.MLPBlock([128, 64, 32]), prediction_tasks=mm.BinaryClassificationTask(target_column), ) # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} # %%time model.compile("adam", run_eagerly=False, metrics=[tf.keras.metrics.AUC()]) model.fit(train, validation_data=valid, batch_size=BATCH_SIZE) # + [markdown] pycharm={"name": "#%% md\n"} # ### Save model # + [markdown] pycharm={"name": "#%% md\n"} # The last step of machine learning (ML)/deep learning (DL) pipeline is to deploy the ETL workflow and saved model into production. In the production setting, we want to transform the input data as done during training (ETL). We need to apply the same mean/std for continuous features and use the same categorical mapping to convert the categories to continuous integer before we use the DL model for a prediction. Therefore, we deploy the NVTabular workflow with the Tensorflow model as an ensemble model to Triton Inference using [Merlin Systems](https://github.com/NVIDIA-Merlin/systems) library very easily. The ensemble model guarantees that the same transformation is applied to the raw inputs. # # Let's save our DLRM model. # + pycharm={"name": "#%%\n"} model.save("dlrm") # + [markdown] pycharm={"name": "#%% md\n"} # We have NVTabular wokflow and DLRM model exported, now it is time to move on to the next step: model deployment with [Merlin Systems](https://github.com/NVIDIA-Merlin/systems). # + [markdown] pycharm={"name": "#%% md\n"} # ### Deploying the model with Merlin Systems # + [markdown] pycharm={"name": "#%% md\n"} # We trained and exported our ranking model and NVTabular workflow. In the next step, we will learn how to deploy our trained DLRM model into [Triton Inference Server](https://github.com/triton-inference-server/server) with [Merlin Systems](https://github.com/NVIDIA-Merlin/systems) library. NVIDIA Triton Inference Server (TIS) simplifies the deployment of AI models at scale in production. TIS provides a cloud and edge inferencing solution optimized for both CPUs and GPUs. It supports a number of different machine learning frameworks such as TensorFlow and PyTorch. # # For the next step, visit [Merlin Systems](https://github.com/NVIDIA-Merlin/systems) library and execute [Serving-Ranking-Models-With-Merlin-Systems](https://github.com/NVIDIA-Merlin/systems/blob/main/examples/Serving-Ranking-Models-With-Merlin-Systems.ipynb) notebook to deploy our saved DLRM and NVTabular workflow models as an ensemble to TIS and obtain prediction results for a qiven request. In doing so, you need to mount the saved DLRM and NVTabular workflow to the inference container following the instructions in the [README.md](https://github.com/NVIDIA-Merlin/systems/blob/main/examples/README.md).
examples/04-Exporting-ranking-models.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Rich Comparisons # This is quite staightforward. We can choose to implement any number of these rich comparison operators in our classes. # # Furthermore, if one comparison does not exist, Python will try to the reverse the operands and the operator (and unlike the arithmetic operators, both operands can be of the same type). # # Let's use a 2D `Vector` class to check this out: class Vector: def __init__(self, x, y): self.x = x self.y = y def __repr__(self): return f'Vector(x={self.x}, y={self.y})' v1 = Vector(0, 0) v2 = Vector(0, 0) print(id(v1), id(v2)) v1 == v2 # By default, Python will use `is` when we do not provide an implementation for `==`. In this case we have two different objects, so they do not compare `==`. # # Let's change that: class Vector: def __init__(self, x, y): self.x = x self.y = y def __repr__(self): return f'Vector(x={self.x}, y={self.y})' def __eq__(self, other): if isinstance(other, Vector): return self.x == other.x and self.y == other.y return NotImplemented v1 = Vector(1, 1) v2 = Vector(1, 1) v3 = Vector(10, 10) v1 == v2, v1 is v2 v1 == v3 # We could even support an equality comparison with other iterable types. Let's say we want to support equality comparisons with tuples: class Vector: def __init__(self, x, y): self.x = x self.y = y def __repr__(self): return f'Vector(x={self.x}, y={self.y})' def __eq__(self, other): if isinstance(other, tuple): other = Vector(*other) if isinstance(other, Vector): return self.x == other.x and self.y == other.y return NotImplemented v1 = Vector(10, 11) v1 == (10, 11) # In fact, although tuples do not implement equality against a `Vector`, it will still work because Python will reflect the operation: (10, 11) == v1 # We can also implement the other rich comparison operators in the same way. # # Let's implement the `<` operator: # We'll consider a Vector to be less than another vector if it's length (Euclidean) is less than the other. # # We're actually going to make use of the `abs` function for this, so we'll define the `__abs__` method as well. # + from math import sqrt class Vector: def __init__(self, x, y): self.x = x self.y = y def __repr__(self): return f'Vector(x={self.x}, y={self.y})' def __eq__(self, other): if isinstance(other, tuple): other = Vector(*other) if isinstance(other, Vector): return self.x == other.x and self.y == other.y return NotImplemented def __abs__(self): return sqrt(self.x ** 2 + self.y ** 2) def __lt__(self, other): if isinstance(other, tuple): other = Vector(*other) if isinstance(other, Vector): return abs(self) < abs(other) return NotImplemented # - v1 = Vector(0, 0) v2 = Vector(1, 1) v1 < v2 # What's interesting is that `>` between two vectors will work as well: v2 > v1 # What happened is that since `__gt__` was not implemented, Python decided to reflect the operation, so instead of actually running this comparison: # # ```v2 > v1``` # # Python actually ran: # # ```v1 < v2``` # What about with tuples? v1 < (1, 1) # And the reverse? (1, 1) > v1 # That worked too. How about `<=`, since we have `,` and `==` defined, will Python be able to use both to come up with a result? v1, v2 try: v1 <= v2 except TypeError as ex: print(ex) # Nope - so we have to implement it ourselves. Let's do that: # + from math import sqrt class Vector: def __init__(self, x, y): self.x = x self.y = y def __repr__(self): return f'Vector(x={self.x}, y={self.y})' def __eq__(self, other): if isinstance(other, tuple): other = Vector(*other) if isinstance(other, Vector): return self.x == other.x and self.y == other.y return NotImplemented def __abs__(self): return sqrt(self.x ** 2 + self.y ** 2) def __lt__(self, other): if isinstance(other, tuple): other = Vector(*other) if isinstance(other, Vector): return abs(self) < abs(other) return NotImplemented def __le__(self, other): return self == other or self < other # - v1 = Vector(0, 0) v2 = Vector(0, 0) v3 = Vector(1, 1) v1 <= v2 v1 <= v3 v1 <= (0.5, 0.5) # What about `>=`? v1 >= v2 # Again, Python was able to reverse the operation: # # ```v1 >= v2``` # # and run: # # ```v2 <= v1``` # We also have the `!=` operator: v1 != v2 # How did that work? # Well Python could not find a `__ne__` method, so it delegated to `__eq__` instead: # # ``` # not(v1 == v2) # ``` # # We can easily see this by adding a print statement to our `__eq__` method: # + from math import sqrt class Vector: def __init__(self, x, y): self.x = x self.y = y def __repr__(self): return f'Vector(x={self.x}, y={self.y})' def __eq__(self, other): print('__eq__ called...') if isinstance(other, tuple): other = Vector(*other) if isinstance(other, Vector): return self.x == other.x and self.y == other.y return NotImplemented def __abs__(self): return sqrt(self.x ** 2 + self.y ** 2) def __lt__(self, other): if isinstance(other, tuple): other = Vector(*other) if isinstance(other, Vector): return abs(self) < abs(other) return NotImplemented def __le__(self, other): return self == other or self < other # - v1 = Vector(0, 0) v2 = Vector(1, 1) v1 != v2 # In many cases, we can derive most of the rich comparisons from just two base ones: the `__eq__` and one other one, maybe `__lt__`, or `__le__`, etc. # # For example, if `==` and `<` is defined, then: # - `a <= b` is `a == b or a < b` # - `a > b` is `b < a` # - `a >= b` is `a == b or b < a` # - `a != b` is `not(a == b)` # # On the other hand if we define `==` and `<=`, then: # - `a < b` is `a <= b and not(a == b)` # - `a >= b` is `b <= a` # - `a > b` is `b <= a and not(b == a)` # - `a != b` is `not(a == b)` # So, instead of us defining all the various methods, we can use the `@total_ordering` decorator in the `functools` module, that will work with `__eq__` and **one** other rich comparison method, filling in all the gaps for us: # + from functools import total_ordering @total_ordering class Number: def __init__(self, x): self.x = x def __eq__(self, other): print('__eq__ called...') if isinstance(other, Number): return self.x == other.x return NotImplemented def __lt__(self, other): print('__lt__ called...') if isinstance(other, Number): return self.x < other.x return NotImplemented # - a = Number(1) b = Number(2) c = Number(1) a < b a <= b # You'll notice that `__eq__` was not called - that's because `a < b` was True, and short-circuit evaluation. In this next example though, you'll see both methods are called: a <= c # One thing I want to point out, according to the documentation the `__eq__` is not actually **required**. That's because as we saw earlier, all objects have a **default** implementation for `==` based on the memory address. That's usually not what we want, so we normally end up defining a custom `__eq__` implementation as well.
dd_1/Part 4/Section 04 - Polymorphism and Special Methods/03 - Rich Comparisons.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/srijan-singh/machine-learning/blob/main/Learning%20Classifiers/Random_KNN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="6mnYNKJmEDWJ" import random # + id="uU9ED4-qBh5o" class BasicKNN(): def fit(self, X_train, y_train): self.X_train = X_train self.y_train = y_train def predict(self, X_test): predictions = [] for rown in X_test: label = random.choice(self.y_train) predictions.append(label) return predictions # + id="gcFh1XP_9fgC" from sklearn import datasets # + id="Rke_KMmEAmaP" iris = datasets.load_iris() # + id="E73ls0EOAn4e" X = iris.data y = iris.target # + id="wEf50X5vA6hi" from sklearn.model_selection import train_test_split # + id="HKKJHEftBKlF" X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .5) # + id="i1fVljWOBbxv" my_classifier = BasicKNN() # + id="fYxI5B1dB3Kc" my_classifier.fit(X_train, y_train) # + id="2Vr1r4_6CFKT" predictions = my_classifier.predict(X_test) # + id="37JlMLiGCQcG" from sklearn.metrics import accuracy_score # + id="HNnz8XWtCWr_" outputId="4b48fe76-e02a-4c49-8281-05ceb8309031" colab={"base_uri": "https://localhost:8080/", "height": 35} print(accuracy_score(y_test, predictions))
Learning Classifiers/Random_KNN.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 - AzureML # language: python # name: python3-azureml # --- # # Introduction # # This notebook shows the use of [Azure AutoML](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-automated-ml) on a (fake) variant data-set of multiple patients / samples. # # The fake data-set is generated within the notebook. Its features are explained below. # # The code used for AutoML is mainly taken from the [official tutorial on how to predict taxi fares](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-auto-train-models) # # The notebook was run from within an Azure ML Notebook that's connected to a workspace. See this [tutorial]( https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup) for how to get started. # ## Initial Setup: Explainer Dashbboard # # The Explainer Dashboard is used towards the end, but needs to be installed/enabled right at the start, official before the kernel is started. So I followed the instructions and restarted the kernel after the installation, since I was running this from within the ML Notebook. This is more straightforward if you are running your own Jupyter server, e.g on the Azure DSVM. # # See also the official docs on [Model interpretability with Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/machine-learning-interpretability-explainability) and this [Github issue](https://github.com/Azure/MachineLearningNotebooks/issues/506) # ! echo $PATH # ! ps aux | grep jupyter | grep -v grep # not sure if sudo is needed. if yes, prepend /anaconda/envs/azureml_py36/bin/ # ! jupyter nbextension install --py --sys-prefix azureml.contrib.explain.model.visualize # ! jupyter nbextension enable --py --sys-prefix azureml.contrib.explain.model.visualize # ! conda install -y nb_conda # `microsoft-mli-widget` is the widget we want. # ! jupyter nbextension list # # Create fake experiment # # - Generate fake variant calling matrix for a panel sequencing experiment # - Rows: samples # - Columns: variant positions # - Values are 0 for ref, 1 for alt het and 2 for alt hom import pandas as pd import numpy as np # + # number of samples num_samples = 500 # number of cases num_cases = 200 # number of overall sites num_sites = 100 # number of positions causal for phenotype num_causal_sites = 2 # background mutation frequency bg_mut_freq = 0.1 # - # create a random variant matrix containing: 0 ref, 1 alt het, 2 alt hom mat = np.random.choice(size=(num_samples, num_sites), a=[0, 1, 2], p=[1-bg_mut_freq, bg_mut_freq/2, bg_mut_freq/2]) # assign gender randomly gender_transl = {1: 'male', 2: 'female', 3: 'unknown'} gender_codes = list(gender_transl.keys()) gender = np.random.choice(size=(num_samples), a=gender_codes) # determine which gender got unlucky affected_gender = np.random.choice([k for k in gender_transl.keys() if k!='unknown']) print("Affected gender = {} ({})".format(affected_gender, gender_transl[affected_gender])) # pick causal sites import random causal_sites = sorted(random.sample(range(num_sites), num_causal_sites)) print(causal_sites, " zero offset") # set phenotype status to 0 for all samples status = np.zeros(num_samples, dtype=int) # pick cases randomly (saved in status array). set gender to affected gender. # make sure that at least one randomly chosen causal site is set to alt hom # for r in random.sample(range(num_samples), num_cases): status[r] = 1 if not any([mat[r, c]==2 for c in causal_sites]): i = random.choice(causal_sites) mat[r, i] = 2 gender[r] = np.random.choice([k for k,v in gender_transl.items() if k!=affected_gender]) # Put it all into a Pandas dataframe cols = ["site-{:d}".format(i+1) for i in range(num_sites)] df = pd.DataFrame(data=mat, columns=cols) ids = ["sample-{:d}".format(i+1) for i in range(num_samples)] df.insert(loc=0, column='ID', value=ids) df.insert(loc=1, column='Gender', value=gender) df.insert(loc=2, column='Status', value=status) df df.shape # save for later use df.to_csv("sample_matrix_clean.csv", index=False) # + import pandas as pd df = pd.read_csv("sample_matrix_clean.csv") # - df # keep a copy of the "annotation", i.e. all stuff that doesn't go into AutoML as features annotation = df[["ID", "Status", "Gender"]].copy() df = df.drop(["ID", "Status"], axis=1) # Split into training and test set from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(df, annotation['Status'], test_size=0.2, random_state=42) # # AutoML run (local) # # A very cool feature in AutoML is automatic preprocessing (see `preprocess` below), which can automatically impute missing values, encode values, add features, embed words etc. See [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-create-portal-experiments#preprocess) for more information. Since the data-set here is clean already, there is no need for this. Plus, I couldn't get the Explainer below to work if preprocessing was on... # + import logging automl_settings = { "iteration_timeout_minutes": 1, "iterations": 10, "primary_metric": 'accuracy', "preprocess": False, "verbosity": logging.INFO, "n_cross_validations": 5 } # + from azureml.train.automl import AutoMLConfig automl_config = AutoMLConfig(task='classification', debug_log='automated_ml_errors.log', X=X_train.values, y=y_train.values.flatten(), **automl_settings) # - # Connect to the ML workspace on Azure so that everything is logged there as well from azureml.core import Workspace ws = Workspace.from_config() from azureml.core.experiment import Experiment experiment = Experiment(ws, "vcf-classification-local") local_run = experiment.submit(automl_config, show_output=True) from azureml.widgets import RunDetails RunDetails(local_run).show() # # Predict outcome best_run, fitted_model = local_run.get_output() print(best_run) print(fitted_model) y_predict = fitted_model.predict(X_test.values) print("Sample\tPredicted\tActual") for idx, (dfidx, dfrow) in enumerate(X_test.iterrows()): print("{}\t{}\t{}".format(annotation.at[dfidx, 'ID'], y_predict[idx], annotation.at[dfidx, 'Status'])) # top 10 is enough if idx == 9: break print("...") # ## Print stats and plot a confusion Matrix # + # idea from https://datatofish.com/confusion-matrix-python/ y_actual = [] for dfidx, dfrow in X_test.iterrows():# what's the pandassy way of doing this? y_actual.append(annotation.at[dfidx, 'Status']) data = {'y_Predicted': y_predict, 'y_Actual': y_actual} df = pd.DataFrame(data, columns=['y_Actual','y_Predicted']) # + # stats from pandas_ml import ConfusionMatrix Confusion_Matrix = ConfusionMatrix(df['y_Actual'], df['y_Predicted']) Confusion_Matrix.print_stats() # - confusion_matrix = pd.crosstab(df['y_Actual'], df['y_Predicted'], rownames=['Actual'], colnames=['Predicted']) print(confusion_matrix) # + # idea from https://stackoverflow.com/questions/19233771/sklearn-plot-confusion-matrix-with-labels/48018785 import seaborn as sn import matplotlib.pyplot as plt ax = plt.subplot() sn.heatmap(confusion_matrix, annot=True, ax = ax); #annot=True to annotate cells # labels, title and ticks ax.set_xlabel('Predicted'); ax.set_ylabel('True'); ax.set_title('Confusion Matrix'); #ax.xaxis.set_ticklabels(['business', 'health']); #ax.yaxis.set_ticklabels(['health', 'business']); # - # ## Model Interpretability and Explainability # # Microsoft has [six guiding AI principles](https://blogs.partner.microsoft.com/mpn/shared-responsibility-ai-2/). One of these is transparency, which states that it must be possible to understand how AI decisions were made. This is where [model interpretability](https://docs.microsoft.com/en-us/azure/machine-learning/service/machine-learning-interpretability-explainability) comes into play. Here we will use a TabularExplainer to understand global behavior of our model. # + ## Note, explainer doesn't work if preprocessing was used because, input column names cannot be # found in fitted columns!? from azureml.explain.model.tabular_explainer import TabularExplainer # "features" and "classes" fields are optional. couldn't figure out how to use them explainer = TabularExplainer(fitted_model, X_train) # - # Now run the explainer. This takes some time... global_explanation = explainer.explain_global(X_train) # + # sorted feature importance values and feature names sorted_global_importance_values = global_explanation.get_ranked_global_values() sorted_global_importance_names = global_explanation.get_ranked_global_names() ## dict(zip(sorted_global_importance_names, sorted_global_importance_values)) # dictionary that holds the top K feature names and values feature_importance = global_explanation.get_feature_importance_dict() # - #for site, val in sorted(global_explanation.get_feature_importance_dict().items(), key=lambda x: x[1]): # print(site, val) print("Top 10: ", ", ".join(sorted_global_importance_names[:10])) print("Real causal sites {}".format(", ".join(["site-{:d}".format(i+1) for i in causal_sites]))) # + from azureml.contrib.explain.model.visualize import ExplanationDashboard dashboard = ExplanationDashboard(global_explanation, fitted_model, X_train) # - # Nothing shows? See [this Github issue](https://github.com/Azure/MachineLearningNotebooks/issues/506) # # Find the gender bias in the Explainer Dashboard...
data/vcf-classification-04092019.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # "Preventing a Hard Rain's Fall: the Persisting Threat of Nuclear Weapons" # > "An exploration of nuclear weapons threat and US-Russia relations in the domain of arms control." # # - toc:true # - branch: master # - badges: true # - comments: true # - author: <NAME> # - categories: [fastpages, jupyter] # # Preventing a Hard Rain's Fall: the Persisting Threat of Nuclear Weapons # # Date: March 21, 2022 # # Name: <NAME>, M.A. # ### About the author # <NAME> is a PhD Student at University of California, Los Angeles specializing in Slavic, East European and Eurasian Languages and Cultures while also pursuing a graduate certificate in Digital Humanities. This project is therefore part of her DH seminar titled Coding for Humanities. # ### Introduction and about the project # This final project is part of a larger research question surrounding the renewed concerns associated with nuclear arms proliferation, specifically targeting the issue of decaying nuclear deterrence and disarmament structure. At the present, the two countries that hold approximately 90% of all nuclear weapons, the U.S. and Russia, are seeing their relations deteriorate to the lowest point since the end of the Cold War. Although on January 3rd, 2022, China, France, Russia, the U.K. and the U.S. [issued a joint statement](https://www.whitehouse.gov/briefing-room/statements-releases/2022/01/03/p5-statement-on-preventing-nuclear-war-and-avoiding-arms-races/#:~:text=We%20affirm%20that%20a%20nuclear,and%20must%20never%20be%20fought.&text=We%20reaffirm%20the%20importance%20of,arms%20control%20agreements%20and%20commitments.) proclaiming that # > a nuclear war cannot be won and must never be fought, # # matters around diminishing nuclear weapons reduction commitments continue to persist. # In an age of heightened geopolitical escalation between NATO and Russia, especially since Russia again invaded Ukraine on February 24, 2022, and worsening relations between the U.S. and China, a perception of lowered threshold for the use of nuclear weapons is alarming. This project seeks to track historical changes in nuclear arsenals of the U.S. and Russia, note how these changes correspond to various global and bilateral treaties, and assess the possible scope of the next U.S.-Russia treaty on nuclear nonproliferation. # ### Data # This project will draw from several sets of data. First, <em>Arms Control Association</em> has a helpful fact sheet that shows "Nuclear Weapons: Who has What at a Glance." Drawing from this information, it is possible to present the data in a form of a map. Secondly, I have used the data presented in <em>Federation of American Scientists's</em> publication "Status of World Nuclear Forces" in order to create a stacked bar chart as well as line graph outlining the change in nuclear arsenal between 1945 and 2022. Finally, I have used <em>Bulletin of the Atomic Scientists's</em> data publications of nuclear forces by country by year as well as CSIS's missile defense project titled <em>Missle Threat</em> that also offers a valuable distribution of missles by country at the moment. This data was useful when creating a horizontal bar chart that compares nuclear bombs by their yields measured in kilotons. # ### Results and analysis # #### The map of nuclear weapon holding states # The map below shows which countries own nuclear weapons and how many warheads each of those states possess. Out of the nine countries represented, five are officially recognized by the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) as nuclear weapon states: China, France, Russia, the United Kingdom, and the United States (highlighted in green). The rest, India, Israel, North Korea, and Pakistan, are not members of the NPT and are therefore not officially recognized. # # Additionally, the map highlights the disparity in the number of nuclear warheads between Russia and the US versus the rest of the world. Although both countries have made considerable progress in limiting the number of warheads each of them has as part of its arsenal, more work remains to be done. # Folium map import pandas as pd import folium from folium import plugins #hide m = folium.Map() #hide url = 'https://raw.githubusercontent.com/python-visualization/folium/master/examples/data' country_shapes = f'{url}/world-countries.json' df = pd.DataFrame({'country':['China','France','Russia','United Kingdom','United States of America','India','Israel','Pakistan', 'North Korea'], 'iso_alpha':['CHN','FRA','RUS','GBR','USA','IND','ISR','PAK', 'PRK'], 'lat':[35.86166, 46.227638, 61.52401, 55.378051, 37.09024, 20.593684, 31.046051, 30.375321, 40.339852], 'lng':[104.195397, 2.213749, 105.318756, -3.435973, -95.712891, 78.96288, 34.851612, 69.345116, 127.510093], 'counts':[350, 290, 6257, 225, 5550, 156, 90, 165, 20]}) #hide folium.Choropleth(geo_data=country_shapes, #The GeoJSON data to represent the world country data=df, # the dataframe containing the data columns=['country', 'counts'], #The column aceppting list with 2 value; The country name and the numerical value key_on='feature.properties.name', fill_color='YlOrRd', nan_fill_color='lightgrey' ).add_to(m) # + tags=[] for lat, lon, name, counts in zip(df['lat'],df['lng'],df['country'],df['counts']): #Creating the marker folium.Marker( location=[35.86166, 104.195397], popup="China: 350, NPT Recognized Nuclear-Weapon State", icon=folium.Icon(color="green"), ).add_to(m) folium.Marker( location=[46.227638, 2.213749], popup="France: 290, NPT Recognized Nuclear-Weapon State", icon=folium.Icon(color="green"), ).add_to(m) folium.Marker( location=[61.52401, 105.318756], popup="Russia: 6,257, NPT Recognized Nuclear-Weapon State", icon=folium.Icon(color="green"), ).add_to(m) folium.Marker( location=[55.378051, -3.435973], popup="the UK: 225, NPT Recognized Nuclear-Weapon State", icon=folium.Icon(color="green"), ).add_to(m) folium.Marker( location=[37.09024, -95.712891], popup="the US: 5,550, NPT Recognized Nuclear-Weapon State", icon=folium.Icon(color="green"), ).add_to(m) folium.Marker( location=[20.593684, 78.96288], popup="India: 156, non-NPT Nuclear-Weapon Possessor", icon=folium.Icon(color="purple"), ).add_to(m) folium.Marker( location=[31.046051, 34.851612], popup="Israel: 90, non-NPT Nuclear-Weapon Possessor", icon=folium.Icon(color="purple"), ).add_to(m) folium.Marker( location=[30.375321, 69.345116], popup="Pakistan: 165, non-NPT Nuclear-Weapon Possessor", icon=folium.Icon(color="purple"), ).add_to(m) folium.Marker( location=[40.339852, 127.510093], popup="North Korea: 20, non-NPT Nuclear-Weapon Possessor", icon=folium.Icon(color="purple"), ).add_to(m) m m.save('mymapname.html') # - from IPython.display import IFrame IFrame(src='./mymapname.html', width=700, height=600) # #### Global nuclear warhead arsenal for 2022 # The chart below addresses the arsenal of nuclear warheads held by each of the countries of interest. As noted above, Russia and the US are the largest holders of nuclear weapons but the graph is also a useful visualization of the kinds of weapons that are part of each countries arsenal. import numpy as np import matplotlib.pyplot as plt df = pd.read_csv('data/nuclear_inventories.csv') #hide df #hide df.info() #hide countries = df.iloc[:, 0] #hide indx = np.arange(len(df)) plt.style.use('fivethirtyeight') df.plot(kind='bar', stacked=True) plt.xlabel('Country') plt.ylabel('Nuclear Warheads') plt.xticks(indx, countries) plt.title('Global Nuclear Warhead Arsenal, 2022') # #### Nuclear warhead arsenal excluding values for Russia and the US for 2022 # The below graph zeros in on data above by excluding values for Russia and the US, the largest holders of nuclear weapons (about 90s of all global arsenal). # + tags=[] df2 = df.drop( labels=[0,1], axis=0, inplace=False ) # - #hide df2 #hide indx2 = np.arange(len(df2)) df2.plot(kind='bar', stacked=True) plt.xlabel('Country') plt.ylabel('Nuclear Warheads') plt.xticks(indx2, ['China', 'France', 'United Kingdom', 'Pakistan', 'India', 'Israel', 'North Korea']) plt.title('Nuclear Warhead Arsenal excluding Russia and the US, 2022') # #### Global nuclear warhead arsenal between 1945 and 2022 # Since the present condition of global distribution of nuclear weapons is clear, it might be also useful to observe how nuclear arsenal has changed over time. The below chart provides a glimpse into the changes in not just nuclear weapons holding states but also in the number of weapons over time since 1945 all the way to 2022. The graph shows that in 1986, total number of nuclear warheads has reached an enormous quanity of 70,374 which is considerably higher than 12,705 today. warheads = pd.read_csv('data/warheads_1945_2022.csv') #hide warheads # + tags=[] #hide warheads.info() # + plt.figure(figsize=(13,9)) plt.style.use('fivethirtyeight') plt.plot(warheads.Year, warheads.Total,color='lightgrey', label='Total Arsenal') plt.plot(warheads.Year, warheads.US,color='cornflowerblue', label='US') plt.plot(warheads.Year, warheads.Russia,color='chocolate', label='Soviet Union/Russia') plt.plot(warheads.Year, warheads.China,color='orange', label='China') plt.plot(warheads.Year, warheads.France,color='steelblue', label='France') plt.plot(warheads.Year, warheads.UK,color='tomato', label='UK') plt.plot(warheads.Year, warheads['North Korea'], 'c', label='North Korea') plt.plot(warheads.Year, warheads.Pakistan, 'm', label='Pakistan') plt.plot(warheads.Year, warheads.India, color='pink', label='India') plt.plot(warheads.Year, warheads.Israel,color='gold', label='Israel') plt.plot(warheads.Year, warheads['South Africa'],color='yellowgreen', label='South Africa') #for country in warheads: #if country != 'Year': #plt.plot(warheads.Year, warheads[country]) plt.title('Global Nuclear Warhead Arsenal, 1945-2022') plt.xlabel('Year') plt.ylabel('Warheads') plt.axvline(x=1986, color='k', linestyle='--', linewidth=2) plt.text(1987, 70000, 'Total arsenal is 70,374') plt.legend() #plt.legend(['Total Arsenal','Soviet Union/Russia', 'US', 'China', 'France', 'UK', 'Pakistan', 'India', 'Israel', #'North Korea', 'South Africa']) current_values = plt.gca().get_yticks() plt.gca().set_yticklabels(['{:,.0f}'.format(x) for x in current_values]) # - # #### Change in Russian and US arsenal from 1945 to 2022. # The visualization below is a more interactive way of representing changes in Russia's and the US's nuclear arsenals. # + import altair as alt # make dataframe that has all countries condensed into single column # here I'm just doing Russia & US year = [] country = [] count = [] for i,row in warheads.iterrows(): year.append(row['Year']) country.append('Russia') count.append(row['Russia']) year.append(row['Year']) country.append('US') count.append(row['US']) df2 = pd.DataFrame({'year':year,'country':country,'count':count}) # make color scale pink_blue = alt.Scale(domain=('Russia', 'US'), range=["steelblue", "salmon"]) # make slider from 1945 - 2022 # based on year column, default value 2000 slider = alt.binding_range(min=1945, max=2022, step=1) select_year = alt.selection_single(name="year", fields=['year'], bind=slider, init={'year': 2000}) # make plot and link it to the slider source = df2 alt.Chart(source).mark_bar().encode( x=alt.X('country:N', title=None), y=alt.Y('count:Q', scale=alt.Scale(domain=(0, 41000))), color=alt.Color('country:N', scale=pink_blue), column='year:O' ).properties( width=75 ).add_selection( select_year ).transform_filter( select_year ) # - # ### Strategic nuclear arms control agreements between Russia and the US, 1945-2022 # The next chart tracks how Russian and the US's warhead counts responded to various strategic and nonstrategic (the INF Treaty) agreements between the two countries starting with the Strategic Arms Limitation Talks (SALT I) which produced the Anti-Ballistic Missile (ABM) Treaty and ending with the the New Strategic Arms Reduction Treaty (New START) which was extended in 2021 until 2026. # # After reaching its absolute maximum in 1986, the global number of warheads began to decrease. The change did not start with SALT I because while the agreement limited the number of intercontinental ballistic missiles (ICBM) and submarine-launched ballistic missile (SLBM) forces, it failed to address limits on strategic bombers or warhead count leaving both sides able to continue the build up of their nuclear weapons. # # It is the Intermediate-Range Nuclear Forces (INF) Treaty that was a landmark agreement that eliminated an entire class of weapons: the ground-launched ballistic and cruise missiles with ranges between 500 and 5,500 kilometers. The treaty entered into force in 1988 and due to its nature did not have an expiration date. Unfortunately, both sides' concerns around compliance grew with time which also mirrored the overall decline in US-Russia relations, and, in 2014, the same year Russia invaded Ukraine and annexed Crimea, the US publicly alleged that Russia was violating the treaty. The US ultimately withdrew from the INF in 2019 de facto killing it. # # Now, only the New START remains and it is set to expire in 2026. With modernization underway and with emergence of new technologies, such as the highly maneuverable hypersonic missiles, it is paramount for the US and Russia to negotiate a successor, or else risking welcoming a new 21st-century nuclear arms race. # + plt.figure(figsize=(13,9)) plt.plot(warheads.Year, warheads.Total,color='lightgrey', label='Total Arsenal') plt.plot(warheads.Year, warheads.US,color='cornflowerblue', label='US') plt.plot(warheads.Year, warheads.Russia,color='chocolate', label='Soviet Union/Russia') plt.title('Strategic & Nonstrategic Nuclear Arms Control Agreements b/w Russia and the US, 1945-2022') plt.xlabel('Year') plt.ylabel('Warheads') plt.axvline(x=1972, color='k', linestyle='--', linewidth=2) plt.text(1973, 70000, 'SALT I') plt.axvline(x=1988, color='k', linestyle='--', linewidth=2) plt.text(1989, 70000, 'INF') plt.axvline(x=1994, color='k', linestyle='--', linewidth=2) plt.text(1995, 70000, 'START I') plt.axvline(x=2003, color='k', linestyle='--', linewidth=2) plt.text(2004, 70000, 'SORT') plt.axvline(x=2011, color='k', linestyle='--', linewidth=2) plt.text(2012, 70000, 'New START') plt.legend() # - # #### Nuclear Warhead Arsenal excluding Russia and the US, 1945-present # By excluding data pertaining to Russia and the US, the next graph gives a clearer idea about nuclear warhead holding states over time and where they stand today. China, for instance, acquired nuclear weapons in 1964 and has seen a steady rise in their number ever since. In September 2021, [the news broke](https://www.armscontrol.org/act/2021-09/news/new-chinese-missile-silo-fields-discovered) that 250 and possibly more long-range missile silos are being constructed in three locations, suggesting that China might considerably expand its nuclear arsenal in the near future. As a result, while a trilateral agreement between China, Russia, and the US is not a real possibility, strategic talks with the purpose to bring in more transparency around nuclear weapon development and use should be a priority for the US. # # Another interesting case is an enfolding arms race between India and Pakistan. Neither have signed the NPT and India [conducted its first nuclear weapon test in 1974](https://www.jstor.org/stable/24905298). Later, in 1998, India and Pakistan began developing their nuclear arsenal and both countries' number of weapons is continuously on the rise. # # Israel is another country that never signed the NPT and maintains a policy of nuclear opacity, meaning that Israel [does not confirm nor deny posessing nuclear weapons](https://thebulletin.org/premium/2022-01/nuclear-notebook-israeli-nuclear-weapons-2022/). Not only does it make it challenging to estimate Israel's arsenal, Western government avoid branding Israel a [nuclear weapon holding state](https://thebulletin.org/premium/2022-01/nuclear-notebook-israeli-nuclear-weapons-2022/). # + plt.figure(figsize=(13,9)) plt.plot(warheads.Year, warheads.China,color='skyblue', label='China') plt.plot(warheads.Year, warheads.France,color='steelblue', label='France') plt.plot(warheads.Year, warheads.UK,color='tomato', label='UK') plt.plot(warheads.Year, warheads['North Korea'], 'c', label='North Korea') plt.plot(warheads.Year, warheads.Pakistan, 'm', label='Pakistan') plt.plot(warheads.Year, warheads.India, color='pink', label='India') plt.plot(warheads.Year, warheads.Israel,color='gold', label='Israel') plt.plot(warheads.Year, warheads['South Africa'],color='yellowgreen', label='South Africa') plt.title('Nuclear Warhead Arsenal w/ exclusion of Russia and the US, 1945-2022') plt.xlabel('Year') plt.ylabel('Warheads') plt.legend() # - # The visualization below is a more interactive way of representing changes in global nuclear arsenals with an exclusion of Russia and the US. # + # same as above for the rest of the countries year = [] country = [] count = [] for i,row in warheads.iterrows(): year.append(row['Year']) country.append('China') count.append(row['China']) year.append(row['Year']) country.append('France') count.append(row['France']) year.append(row['Year']) country.append('UK') count.append(row['UK']) year.append(row['Year']) country.append('Pakistan') count.append(row['Pakistan']) year.append(row['Year']) country.append('India') count.append(row['India']) year.append(row['Year']) country.append('Israel') count.append(row['Israel']) year.append(row['Year']) country.append('North Korea') count.append(row['North Korea']) year.append(row['Year']) country.append('South Africa') count.append(row['South Africa']) df2 = pd.DataFrame({'year':year,'country':country,'count':count}) # make color scale pink_blue = alt.Scale(domain=('China', 'France', 'UK', 'Pakistan', 'India', 'Israel', 'North Korea', 'South Africa'), range=["olivedrab", "royalblue", "orangered", "mediumpurple", "palevioletred", "teal", "darkorange", "gold"]) # make slider from 1945 - 2022 # based on year column, default value 2000 slider = alt.binding_range(min=1945, max=2022, step=1) select_year = alt.selection_single(name="year", fields=['year'], bind=slider, init={'year': 2000}) # make plot and link it to the slider source = df2 alt.Chart(source).mark_bar().encode( x=alt.X('country:N', title=None), y=alt.Y('count:Q', scale=alt.Scale(domain=(0, 600))), color=alt.Color('country:N', scale=pink_blue), column='year:O' ).properties( width=300 ).add_selection( select_year ).transform_filter( select_year ) # - # #### Comparison of nuclear bombs yields: bombs dropped on Hiroshima and Nagasaki vs bombs of today # In August 1945, [the US dropped two bombs](https://www.census.gov/history/pdf/fatman-littleboy-losalamosnatllab.pdf), one on Hiroshima and one on Nagasaki only three days apart. Casualties in each of these cities have reached 140,000 by the end of that year. Together the bombs have destroyed 8 square miles of the two cities. Some of today's nuclear bombs, such as B83 (the largest in US arsenal right now), are almost 60 times larger than the Fat Man that was dropped on Nagasaki. If such a bomb was dropped on a large populous city like Los Angeles today, [estimated](https://nuclearsecrecy.com/nukemap/?&kt=1200&lat=34.0453902&lng=-118.2525158&airburst=0&hob_ft=0&casualties=1&ff=50&psi=20,5,1&zm=11) fatalities number would reach 454,250 with hundreds of thousands more of those injured. A city like New York [would see](https://nuclearsecrecy.com/nukemap/?&kt=1200&lat=40.72422&lng=-73.9961&airburst=0&hob_ft=0&casualties=1&ff=50&psi=20,5,1&zm=11) over 1,4 million fatalities in addition to 1,6 million injuries. # # The threat of nuclear weapons use, however, does not only come from such high-yield (50+ kilotons of TNT) bombs. In 2020, the US Department of Defense [issued a statement](https://www.defense.gov/News/Releases/Release/Article/2073532/statement-on-the-fielding-of-the-w76-2-low-yield-submarine-launched-ballistic-m/#.Xjl0guT_p80.twitter) concerning the deployment of the new W76-2 low-yield (about 6 kilotons of TNT) submarine-launched ballistic missile (SLBM). In general, deployment of nuclear weapons proves a state's commitment to the strategy of detterence aimed at preventing the first use of nuclear weapons by credibly threatening a response that would inflict unacceptable damage in return. Critics of Washington's decision to deploy W76-2 SLBM [argue](https://thebulletin.org/2020/01/the-low-yield-nuclear-warhead-a-dangerous-weapon-based-on-bad-strategic-thinking/) that such a move does not aid deterrence and, on the contrary, makes a first nuclear strike much more likely. bombs = pd.read_csv('data/bomb_yield.csv') #hide bombs #hide bombs.info() # + bomb_title = bombs.iloc[:, 0] indx = np.arange(len(bombs)) bombs.plot(kind='barh',figsize=(10, 7), legend=None) plt.title('Nuclear Bombs by Comparison: 1945 vs Today') plt.xlabel('Yield (kilotons TNT)') plt.ylabel('Nuclear Bomb') plt.yticks(indx, bomb_title) plt.xticks(np.arange(0, 2000, 250)) # - # ### Discussion # While the last few decades have seen remarkable progress in the realm of arms control, the risk of nuclear war has not gone away and, more than that, the threat has reemerged in full force with Russia's recent invasion of Ukraine on February 24th, 2022. The US and Russia have seen sustained decline in relations following the end of the Cold War and yet both sides remained interested in placing limits on nuclear weapons. # # At the present, the New START treaty is the last remaining agreement between Russia and the US. The two states have successfully extended it in 2021 until 2026 and negotiating New START treaty's successor is of major importance. After the extension was finalized, Russian Deputy Foreign Minister Sergei Ryabkov [noted](https://www.armscontrol.org/act/2021-03/news/us-russia-extend-new-start-five-years) that "We (Russia and the US) now have a significant amount of time in order to launch and hold profound bilateral talks on the whole set of issues that influence strategic stability, ensure security of our state for a long period ahead." President Biden [sought](https://www.whitehouse.gov/briefing-room/statements-releases/2021/06/16/u-s-russia-presidential-joint-statement-on-strategic-stability/) to do just that with his bilateral Strategic Stability Dialogue. This initiative has now been halted following Russia's invasion of Ukraine and the future of arms control remains uncertain. # # Currently, it is impossible to predice how the Russo-Ukrainian War is going to end and how domestic factors will shape each country's committment to arms control. As soon as it is possible, however, Strategic Stability Dialogue or something equivalent to it should be resumed in order for negotiations to take place. Russia and the US have found common ground before amidst deep distrust and vigorous cometition with each other, the two sides should be prepared to do so again to avoid the implications of a renewed nuclear arms race.
_notebooks/2022-03-22-Final-Project.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/Aleph-GORY/calculus_labs/blob/main/2021-05-28_SumaRiemann.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="r1naPhJTkIq-" # #Cálculo diferencial e Integral 4 # ####2021/05/28 - Sumas de Riemann # + id="K_3Xam1mDTbd" import numpy as np import time from matplotlib import pyplot as plt # + [markdown] id="xyf5LtfzBONS" # La integral de Riemann para funciones de $\mathbb{R}^n$ en $\mathbb{R}$ utiliza las $n$-celdas cerradas como medio de aproximación. # # Una $n$-celda cerrada es por definición el producto cartesiano de $n$ intervalos cerrados del conjunto de los números reales, en este sentido, generalizan a los intervalos cerrados de $\mathbb{R}$. # + id="eb5fNmE4NsMB" class Celda(object): def __init__(self, componentes): self.dimen = len(componentes) self.componentes = componentes self.contenido = self.calcula_contenido() def calcula_contenido(self): contenido = 1 for componente in self.componentes: contenido = contenido * (componente[1] - componente[0]) return contenido # + id="KjIGdCBSBepv" # A es la celda [0,2]X[0,1] A = Celda([[0,2],[0,1]]) # + [markdown] id="VcrvBfPrBxJ3" # Método para generar la matriz $\mathcal{M}$ cuya entrada $(i,j)$ corresponde con el centro de la celda $C_{i,j}$ de la partición regular en $n\times n$ trozos de la $2$-celda A # + id="-TuggUkW4VOA" def calcula_M(A, n): # Obtener los intervalos componentes Ix = A.componentes[0] Iy = A.componentes[1] deltax = (Ix[1] - Ix[0])/n deltay = (Iy[1] - Iy[0])/n contenido = deltax*deltay # p es el centro de la celda 1,1 de la partición p = [Ix[0]+deltax/2, Iy[0]+deltay/2] # Creamos la matriz de n,n con los centros de las celdas M = [] for i in range(n): renglon = [] for j in range(n): renglon.append([p[0]+i*deltax, p[1]+j*deltay]) M.append(renglon) return M, contenido # + colab={"base_uri": "https://localhost:8080/"} id="ZCWIcDkl647M" outputId="f4f51d3c-2a15-4360-d02e-afe00cdbb71e" # Prueba para calcula_M A = Celda([[0,1], [2,4]]) M, cont = calcula_M(A, 2) # Esto solo es para imprimir bonito la matriz for renglon in M: s = '' for punto in renglon: s += '(' + str(punto[0]) + ', ' + str(punto[1]) + ') ' print(s) print('El contenido de la celda es', cont) # + [markdown] id="SZN7HEXh-6H_" # Este método calcula la suma de Riemann de $f$ sobre la celda $A$, con los puntos medios de la matriz $M$. # + id="naBLDe12_F5_" def suma_Riemann(f, A, n): M, cont = calcula_M(A, n) suma = 0 for r in M: for e in r: suma += f(e) return suma * cont # + colab={"base_uri": "https://localhost:8080/"} id="Kvfi8Amn_hGu" outputId="f14420c6-171d-4ef7-da23-292cc13e06fd" # Prueba de suma_Riemann con el ejemplo de la clase pasada def f(p): x, y = p[0], p[1] return x**2 * y A = Celda([[0,2],[0,1]]) n = 1000 aprox_integral = suma_Riemann(f, A, n) print('La aproximación de la integral con Suma de Riemann con n igual a', n,'es:') print(aprox_integral) # + [markdown] id="x2kNcXYKnXeE" # El valor exacto de esta integral es # # $$\int_{A}f = \int_0^2\int_0^1x^2 y\,dy\,dx = \frac{1}{2}\int_0^2 x^2\,dx = \frac{1}{2}\frac{8}{3} = \frac{4}{3}$$ # # **¡Nuestra función aproxima correctamente la integral!** # # Más aún el teorema de las sumas de Riemann nos asegura que cuanto mayor sea $n$, mejor aproximaremos la integral. # *¿Pero es esto posible?* # + [markdown] id="cXcvjTOrqM9d" # A continuación se muestra una gráfica del tiempo que tarda el método suma_Riemann(), en función del tamaño de la partición regular $n$ # # # + id="E3Bk1tomqNN5" tiempos = [] step = 500 max_n = 5000 axis = range(step, max_n, step) for x in axis: tic = time.time() aprox = suma_Riemann(f, A, x) toc = time.time() tiempos.append(toc-tic) # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="YApR6JGLtNr9" outputId="4a4d300a-bb59-4d82-825e-d1df9430ebe2" plt.plot(axis, tiempos, 'ro') plt.show # + [markdown] id="Ejdx9DST3D97" # Se observa que el tiempo varía cuadráticamente en función del tamño de la entrada, y también se observa que para $n>1500$, $t_n > 20s$. # # ¿Podemos hacerlo más rápidamente? # + [markdown] id="c1widUIw50__" # Usemos la librería de cómputo Numpy para optimizar la función suma_Riemann() # + id="L97Wg_hg6GKQ" def suma_Riemann_np(f, A, n): Ix = A.componentes[0] Iy = A.componentes[1] deltax = (Ix[1] - Ix[0])/n deltay = (Iy[1] - Iy[0])/n contenido = deltax*deltay p = [Ix[0]+deltax/2, Iy[0]+deltay/2] M = np.fromfunction( lambda i,j: f([p[0]+i*deltax, p[1]+j*deltay]), (n,n), dtype=np.longdouble ) return M.sum()*contenido # + colab={"base_uri": "https://localhost:8080/"} id="Nu2J80AC7m1i" outputId="f26d86a9-e755-440e-e641-f0773f06302c" aprox_integral = suma_Riemann_np(f, A, n) print('La aproximación de la integral con Suma de Riemann usando Numpy con n igual a', n,'es:') print(aprox_integral) # + [markdown] id="TOcxboZJ8Wgd" # Hagamos la misma gráfica de tiempo vs entrada con la nueva función. # + id="57TlV_2R8djx" tiempos_np = [] for x in axis: tic = time.time() aprox = suma_Riemann_np(f, A, x) toc = time.time() tiempos_np.append(toc-tic) # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="7x-1kava8lPS" outputId="ee1bd963-c231-446d-8472-38a9b5cbdc7f" plt.plot(axis, tiempos_np, 'bo') plt.show # + [markdown] id="Nt6YNn_j82pq" # A pesar de comportarse cuadráticamente, esta solución presenta una mejora en la velocidad importante, el primer método tardaba más de $20s$ para manejar una entrada con $n= 4500$ mientras que la implementación con numpy tarda $0.7s$ aproximadamente. # # La diferencia es más clara si vemos la siguiente gráfica. # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="c-niHn129P9i" outputId="6d3f8c49-4a2b-4fcd-d292-d6335926de3f" plt.plot(axis, tiempos, 'ro', axis, tiempos_np, 'bo') plt.show # + [markdown] id="8xoJoYv398Rs" # #Ejercicios # # # 1. Calcula con el *Teorema de Fubini* y con la función suma_Riemann_np la integral $\int_A f$ donde: # * $A = [5,6]\times [-3,1]$ y $f(x,y) = \sin(x)\cos(y)$ # * $A = [-11.45,-7.68]\times [0,1000]$ y $f(x,y) = \pi x^4+xy+e y^6$ # # # 2. Investiga sobre las funciones *np.fromfunction* y *np.sum* usadas en la función suma_Riemann_np. # #
calculo2/laboratorios/L2_IntegralesDefinidas.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] pycharm={"name": "#%% md\n"} # # Getting Started with PyTorch # > "Simple vision and image classification (CIFAR10) with CNNs using PyTorch." # # - toc:true # - branch: master # - badges: true # - comments: true # - categories: [python, machine learning, pytorch, vision, classification] # - image: images/logos/pytorch.png # - use_math: true # + [markdown] pycharm={"name": "#%% md\n"} # > Note: Code from the official [PyTorch 60-min-blitz tutorial](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html). # # # ## Loading the CIFAR10 Dataset # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} #collapse-hide import torch import torchvision import torchvision.transforms as transforms # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} #collapse-hide import matplotlib.pyplot as plt import numpy as np # functions to show an image def imshow(img): img = img / 2 + 0.5 # unnormalize npimg = img.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.show() # get some random training images dataiter = iter(trainloader) images, labels = dataiter.next() # show images imshow(torchvision.utils.make_grid(images)) # print labels print(' '.join('%5s' % classes[labels[j]] for j in range(4))) # + [markdown] pycharm={"name": "#%% md\n"} # ## Building a CNN Model with PyTorch # # Architecture: # * Input: 32x32-pixel images with 3 channels (RGB) &rarr; 3x32x32 images # * Convolutions with 3 input channels, 6 output channels, and 5x5 square convolution # &rarr; 6x28x28 images # * 2x2 max pooling (subsampling) &rarr; 6x14x14 images # * 6 input channels (from the previous Conv2d layer), 16 output channels, 5x5 square convolutions # &rarr; 16x10x10 images # * 2x2 max pooling (subsampling) &rarr; 16x5x5 images # * Fully connected linear (=dense) layer with `16x5x5=400` input size and 120 output; ReLU activation # * Fully connected layer with 120 input and 84 output; ReLU activation # * Fully connected output layer with 84 input and 10 output (for the 10 classes in the CIFAR10 dataset); no/linear activation # # Note that the layers are defined in the constructor and the activations applied in the `forward` function. # # To calculate the output size of a convolutional layer, use [this formula](https://stackoverflow.com/a/53580139/2745116): # # $\frac{W−K+2P}{S} +1$ with input size $W$ (width and height for square images), # convolution size $K$, padding $P$ (default 0), and stride $S$ (default 1). # # Further explanation on layer sizes: [Medium article by <NAME>](https://towardsdatascience.com/pytorch-layer-dimensions-what-sizes-should-they-be-and-why-4265a41e01fd) # # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 16 * 5 * 5) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() # + [markdown] pycharm={"name": "#%% md\n"} # Define the loss as cross entropy loss and SGD as optimizer. # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} import torch.optim as optim criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) # + [markdown] pycharm={"name": "#%% md\n"} # ## Training # # Over 5 epochs. # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} #collapse-output for epoch in range(5): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(trainloader, 0): # get the inputs; data is a list of [inputs, labels] inputs, labels = data # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.item() if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 print('Finished Training') # + [markdown] pycharm={"name": "#%% md\n"} # Save the trained model locally. # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} PATH = './cifar_net.pth' torch.save(net.state_dict(), PATH) # + [markdown] pycharm={"name": "#%% md\n"} # ## Testing the Trained Model # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} dataiter = iter(testloader) images, labels = dataiter.next() # print images imshow(torchvision.utils.make_grid(images)) print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4))) # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} # load the saved model (just for show; it's already loaded) net = Net() net.load_state_dict(torch.load(PATH)) # make predictions outputs = net(images) _, predicted = torch.max(outputs, 1) print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4))) # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} correct = 0 total = 0 with torch.no_grad(): for data in testloader: images, labels = data outputs = net(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %d %%' % ( 100 * correct / total)) # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} class_correct = list(0. for i in range(10)) class_total = list(0. for i in range(10)) with torch.no_grad(): for data in testloader: images, labels = data outputs = net(images) _, predicted = torch.max(outputs, 1) c = (predicted == labels).squeeze() for i in range(4): label = labels[i] class_correct[label] += c[i].item() class_total[label] += 1 for i in range(10): print('Accuracy of %5s : %2d %%' % ( classes[i], 100 * class_correct[i] / class_total[i])) # + [markdown] jupyter={"outputs_hidden": false} pycharm={"name": "#%% md\n"} # ## What Next? # # * [Using PyTorch Inside a Django App](https://stefanbschneider.github.io/blog/pytorch-django) # * [Other blog posts related to PyTorch](https://stefanbschneider.github.io/blog/categories/#pytorch) # * [Official PyTorch Tutorials](https://pytorch.org/tutorials/)
_notebooks/2020-12-30-pytorch-getting-started.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 选择 # ## 布尔类型、数值和表达式 # ![](../Photo/33.png) # - 注意:比较运算符的相等是两个等号,一个等到代表赋值 # - 在Python中可以用整型0来代表False,其他数字来代表True # - 后面还会讲到 is 在判断语句中的用发 a=1 b=1 a==b a=1 b=0 a==b 0 is 1 a = 'Joker' b = 'jhsdj' a>b #ASC||值 # ## 字符串的比较使用ASCII值 # ## Markdown # - https://github.com/younghz/Markdown # ## EP: # - <img src="../Photo/34.png"></img> # - 输入一个数字,判断其实奇数还是偶数 bool(4) bool(0) if condition: pass else: pass a=eval(input('请输入一个数')) if a % 2 ==0: print ('偶数') if a % 2 ==1: print ('奇数') # ## 产生随机数字 # - 函数random.randint(a,b) 可以用来产生一个a和b之间且包括a和b的随机整数 import random random.randint(2,3) # 产生一个随机数,你去输入,如果你输入的数大于随机数,那么就告诉你太大了,反之,太小了, # 然后你一直输入,知道它满意为止 random.randint(2,3) x=random.randint(0,10) while 1: y=eval(input('y')) if y > x: print('太大了') if y < x: print('太小了') if y ==x: print('正确') break # ## 其他random方法 # - random.random 返回0.0到1.0之间前闭后开区间的随机浮点 # - random.randrange(a,b) 前闭后开 random.random() random.randrange(0,10) #显示随机数 # ## EP: # - 产生两个随机整数number1和number2,然后显示给用户,使用户输入数字的和,并判定其是否正确 # - 进阶:写一个随机序号点名程序 number1=random.randint(0,10) number2=random.randint(0,10) while 1: x=eval(input('x')) y=eval(input('y')) if x+y>number1+number2: print('太大了') if x+y<number1+number2: print('太小了') if x+y==number1+number2: print('正确') break number1=random.randint(0,10) number2=random.randint(0,10) print(number1,number2) for i in range(5): x=eval(input('x')) if x==(number1+number2): print('正确') break else: print('错误') for i in range(5): x=eval(input('x')) if x==(number1+number2): print('正确') break else: print('错误') number1=random.randint(0,10) number2=random.randint(0,10) x=eval(input('x')) if x==(number1+number2): print('正确') break else: print('错误') # + #输入一个数:6 #你输入的数可以整除 2 和 3 # - x=eval(input('x')) for i in range(2,x): if x%i == 0: print(i,'可以整除') x=eval(input('x')) for i in range(2,x): if x%i == 0: print(i,'可以整除') # ## if语句 # - 如果条件正确就执行一个单向if语句,亦即当条件为真的时候才执行if内部的语句 # - Python有很多选择语句: # > - 单向if # - 双向if-else # - 嵌套if # - 多向if-elif-else # # - 注意:当语句含有子语句的时候,那么一定至少要有一个缩进,也就是说如果有儿子存在,那么一定要缩进 # - 切记不可tab键和space混用,单用tab 或者 space # - 当你输出的结果是无论if是否为真时都需要显示时,语句应该与if对齐 # ## EP: # - 用户输入一个数字,判断其实奇数还是偶数 # - 进阶:可以查看下4.5实例研究猜生日 # ## 双向if-else 语句 # - 如果条件为真,那么走if内部语句,否则走else内部语句 if condition: if condition: do something else: other do something else: #框架 other money = input('Money[yes/no]') if money =='yes': print('再问一个问题-') handsome = input('Handsome[yes/no]') if handsome =='yes': print('非常关键的一个问题') wife = input('wife[yes/no]') if wife == 'yes': print('不拒绝') else: print('马上结婚') else: print('考虑一下') else: print('Gun~') money = input('Money[yes/no]') if money =='yes': print('再问一个问题-') handsome = input('Handsome[yes/no]') if handsome =='yes': print('非常关键的一个问题') wife = input('wife[yes/no]') if wife == 'yes': print('不拒绝') else: print('马上结婚') else: print('考虑一下') else: print('Gun~') # ## EP: # - 产生两个随机整数number1和number2,然后显示给用户,使用户输入数字,并判定其是否正确,如果正确打印“you‘re correct”,否则打印正确错误 # ## 嵌套if 和多向if-elif-else # ![](../Photo/35.png) x1=eval(input('钱?[y/n]')) x2=eval(input('帅?[y/n]')) x3=eval(input('老婆?[y/n]')) X=x1+x2+x3 ifx==0: print('滚') ifx==2: priny('考虑一下') elifx3==0: print('马上结婚') ifx==3: print('有老婆,也要考虑一下') #错误 # ## EP: # - 提示用户输入一个年份,然后显示表示这一年的动物 # ![](../Photo/36.png) # - 计算身体质量指数的程序 # - BMI = 以千克为单位的体重除以以米为单位的身高的平方 # ![](../Photo/37.png) # + year=eval(input('请输入年份')) if year%12==0: print('猴') elif year%12==1: print('鸡') elif year%12==2: print('狗') elif year%12==3: print('猪') elif year%12==4: print('鼠') elif year%12==5: print('牛') elif year%12==6: print('虎') elif year%12==7: print('兔') elif year%12==8: print('龙') elif year%12==9: print('蛇') elif year%12==10: print('马') else: print('羊') # + 身高=eval(input('身高')) 体重=eval(input('体重')) BMI=体重/身高**2 if BMI <18.5: print('超轻') elif 18.5 <=BMI<25: print('标准') elif 25 <=BMI<30: print('超重') else: print('痴肥') # - # ## 逻辑运算符 # ![](../Photo/38.png) # ![](../Photo/39.png) # ![](../Photo/40.png) # ## EP: # - 判定闰年:一个年份如果能被4整除但不能被100整除,或者能被400整除,那么这个年份就是闰年 # - 提示用户输入一个年份,并返回是否是闰年 # - 提示用户输入一个数字,判断其是否为水仙花数 year = eval(input('请输入年份')) if (year % 4 == 0 and year % 100 !=0) or year % 400==0: print('闰年') else: print('平年') number = eval (input('数字')) bai = number // 100 shi = number //10 %10 ge = number %10 if bai **3 + shi **3 + ge **3==number: print('水仙花') for number in range(100,1000): bai = number // 100 shi = number //10 %10 ge = number %10 if bai **3 + shi **3 + ge **3==number: print(number,'水仙花') # ## 实例研究:彩票 # ![](../Photo/41.png) # # Homework # - 1 # ![](../Photo/42.png) a=eval(input('请输入一个数')) b=eval(input('请输入一个数')) c=eval(input('请输入一个数')) r1=(-b+(b**2-4*a*c)**0.5)/2*a r2=(-b-(b**2-4*a*c)**0.5)/2*a if b**2-4*a*c > 0: print(r1,r2) if b**2-4*a*c == 0: print(r1) if b**2-4*a*c < 0: print('The equation has no real roots') # - 2 # ![](../Photo/43.png) import random number1=random.randint(0,100) number2=random.randint(0,100) print(number1,number2) x=eval(input('请输入这两个整数的和')) if x==(number1+number2): print('真') else: print('假') # - 3 # ![](../Photo/44.png) d = eval(input('0-6内的任意一个数字')) t = eval(input('几天后')) if d==0: x = (t+d)%7 print(t,'天后是星期',x) if d==1: x = (t+d)%7 print(t,'天后是星期',x) if d==2: x = (t+d)%7 print(t,'天后是星期',x) if d==3: x = (t+d)%7 print(t,'天后是星期',x) if d==4: x = (t+d)%7 print(t,'天后是星期',x) if d==5: x = (t+d)%7 print(t,'天后是星期',x) if d==6: x = (t+d)%7 print(t,'天后是星期',x) # - 4 # ![](../Photo/45.png) a=eval(input('请输入一个数')) b=eval(input('请输入一个数')) c=eval(input('请输入一个数')) if a > b and b > c: print(c ,b, a) elif a > c and c > b: print(b ,c ,a) elif b > a and a > c: print(c, a, b) elif b > c and c > a: print(a, c ,b) elif c > a and a > b: print(b ,a, c) elif c > b and b > a: print(a ,b, c) # - 5 # ![](../Photo/46.png) weight1,price1=eval(input('请输入重量和价钱')) weight2,price2=eval(input('请输入重量和价钱')) if (price1/weight1)<(price2/weight2): print(weight1,price1) elif (price1/weight1)>(price2/weight2): print(weight2,price2) # - 6 # ![](../Photo/47.png) year=eval(input('请输入一个年份')) month=eval(input('请输入一个月份')) if ((year%4==0) and (year%100!=0)) or (year%400==0): if month<=7: if month%2==0: if month==2: print(year,'年',month,'月','有29天') else: print(year,'年',month,'月','有30天') else: print(year,'年',month,'月','有31天') else: if month%2==0: print(year,'年',month,'月','有31天') else: print(year,'年',month,'月','有30天') else : if month<=7: if month%2==0: if month==2: print(year,'年',month,'月','有28天') else: print(year,'年',month,'月','有30天') else: print(year,'年',month,'月','有31天') else: if month%2==0: print(year,'年',month,'月','有31天') else: print(year,'年',month,'月','有30天') # - 7 # ![](../Photo/48.png) import random a=random.randint(0,1) b=eval(input("输入猜测")) if a==b: print("正确") else: print("错误") # # - 8 # ![](../Photo/49.png) c=random.randint(0,2) I = eval(input('s0,r1,p2 ')) if c>I: print('I won') if c<I: print('I won') if c==I: print('it is a draw') # - 9 # ![](../Photo/50.png) import math y = int(input("请输入年份")) m = int(input("请输入月份")) q = int(input("请输入这是这个月的第几天")) a = q + ((26 * (m + 1)) / 10 ) j = y // 100 k = y % 100 b = k + (k / 4) +(j / 4) + 5 * j h = (a+b)%7 print(h) x = math.floor(h) if x == 0: print("今天是星期六") elif x == 2: print("今天是星期一") elif x == 3: print("今天是星期二") elif x == 4: print("今天是星期三") elif x == 5: print("今天是星期四") elif x == 6: print("今天是星期五") elif x == 1: print("今天是星期日") # - 10 # ![](../Photo/51.png) import random # + colour=random.randint(1,4) daxiao=random.randint(1,12) if colour==1: h='meihua' elif colour==2: h='hongtao' elif colour==3: h='fangkuai' elif colour==4: h='heitao' elif daxiao==1: daxiao='Ace' elif daxiao==2: daxiao='2' elif daxiao==3: daxiao='3' elif daxiao==4: daxiao='4' elif daxiao==5: daxiao='5' elif daxiao==6: daxiao='6' elif daxiao==7: daxiao='7' elif daxiao==8: daxiao='8' elif daxiao==9: daxiao='9' elif daxiao==10: daxiao='10' elif daxiao==11: daxiao='Jack' elif daxiao==2: daxiao='Queen' elif daxiao==2: daxiao='king' print('The card you picked is the',h,'of',daxiao) # - # - 11 # ![](../Photo/52.png) x = eval(input('输入一个三位整数')) if x%10==x//100: print('是回文数') else: print('不是回文数') # - 12 # ![](../Photo/53.png) x,y,z = eval(input('输入边长')) if (x+y>z) and (x+z>y) and (y+z>x): print(x+y+z) else: print('输入的边长不合法')
7.18+hexiaojing.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # __II. 파이썬 프로그래밍의 기초, 자료형__ # --- # ## __6. 집합 자료형__ # ### __1) 집합 자료형은 어떻게 만들까?__ # * 집합(Set)은 파이썬 2.3부터 지원되기 시작한 자료형 # * 집합에 관련된 것들을 쉽게 처리하기 위해 만들어진 자료형이다 s1 = set([1, 2, 3]) s1 # * 위와 같이 set()의 괄호 안에 리스트를 입력하여 만들거나 아래와 같이 문자열을 입력하여 만들 수도 있다 s2 = set('Hello') s2 # ### __2) 집합 자료형의 특징__ # * 중복을 허용하지 않는다 # * 순서가 없다(Unordered) # * 리스트나 튜플은 순서가 있기(ordered) 때문에 인덱싱을 통해 자료형의 값을 얻을 수 있지만<br> # set 자료형은 순서가 없기(unordered) 때문에 인덱싱으로 값을 얻을 수 없다 # * 만약 set 자료형에 저장된 값을 인덱싱으로 접근하려면 다음과 같이 리스트나 튜플로 변환한 후에 해야 한다 s1 = set([1, 2, 3]) print(s1) l1 = list(s1) print(s1) print(l1[0]) # ### __3) 집합 자료형 활용 방법__ # #### __가. 교집합, 합집합, 차집합 구하기__ # * set 자료형이 유용하게 사용되는 경우는 교집합, 합집합, 차집합을 구할 때이다 s1 = set([1, 2, 3, 4, 5, 6]) s2 = set([4, 5, 6, 7, 8, 9]) # * 교집합 print(s1 & s2) s1.intersection(s2) # * 합집합 print(s1|s2) s1.union(s2) # * 차집합 print(s1 - s2) print(s2 - s1) print(s1.difference(s2)) s2.difference(s1) # ### __4) 집합 자료형 관련 함수__ # #### __가. 값 추가 - set, update__ # * 이미 만들어진 set 자료형에 값을 추가할 수 있다 # + '''값 1개 추가(set)''' s1 = set([1, 2, 3]) s1.add(4) print(s1) '''값 여러개 추가(update)''' s1 = set([1, 2, 3]) s1.update([4, 5, 6]) s1 # + #### __나. 특정 값 제거 - remove__ # - s1 = set([1, 2, 3]) s1.remove(2) s1
II_06.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: exercise # language: python # name: exercise # --- # + # Note: inline iFrame exercise preview does not work in Chrome or Safari, please use Firefox, probably because of "Prevent cross-site tracking" enabled by default. from Exercise import Exercise, MarkdownBlock from config import URL, TOKEN import json import numpy as np import sympy as sp import matplotlib.pyplot as plt import pandas as pd plt.rcParams.update({'font.size': 20}) from sklearn.datasets import load_digits Exercise.URL = URL Exercise.TOKEN = TOKEN # + from sympy import Rational, Symbol, latex, UnevaluatedExpr u = lambda x : UnevaluatedExpr(x) # Helper functions def explain_add(a, b): assert(np.shape(a) == np.shape(b)) rows, columns = np.shape(a) return sp.Matrix([[Symbol(f"({latex(u(a[i,j]))} + {latex(u(b[i,j]))})") for j in range(columns)] for i in range(rows)]) def symbolic_matrix(character, rows, columns): return sp.Matrix([[Symbol(f"{{{character}}}_{{{i+1}, {j+1}}}") for j in range(columns)] for i in range(rows)]) # - # ### Integer addition # + e = Exercise("What is $1 + 1$?") e.add_answer(2, True, "That's right!") e.add_answer(0, False, "No, that's not right. Did you compute $1-1=0$ instead?") e.add_default_feedback("No, that's not right!") e.play() # e.write() # Show symbolic equivalence! # + [markdown] tags=[] # ### Parameterized integer addition # + m = "What is $@a + @b$?" a = np.random.randint(0, 10) b = np.random.randint(0, 10) params = {} params["a"] = a params["b"] = b e = Exercise(MarkdownBlock(m, params)) e.add_answer(a + b, True, "That's right!") e.display() e.write() e.play() # - # ### Vector addition # + m = "What is $@a + @b$?" a = sp.Matrix(np.arange(4)) b = sp.Matrix(np.flip(np.arange(4))) params = {} params["a"] = a params["b"] = b e = Exercise(MarkdownBlock(m, params)) e.add_answer(a + b, True, "That's right!") params = {} params["x"] = symbolic_matrix("a", 4,1) params["y"] = symbolic_matrix("b", 4,1) params["z"] = explain_add(params["x"], params["y"]) default_feedback = """Remember the definition of matrix addition: $@x + @y = @z$""" e.add_default_feedback(MarkdownBlock(default_feedback, params)) e.write() e.play() # - # ### Parameterized (both matrix dimensions and values) # + s = "What is $@a \cdot @b$" rows = np.random.randint(1, 6) columns = np.random.randint(1, 6) params = {} params["a"] = sp.Matrix(np.random.randint(5, size=rows*columns).reshape((rows,columns))) params["b"] = sp.Matrix(np.random.randint(5, size=(2+rows)*columns).reshape((columns,rows+2))) e = Exercise(MarkdownBlock(s, params)) ans = params["a"] * params["b"] display(ans) e.add_answer(params["a"] * params["b"], True, "That's right!") e.add_default_feedback("Nope, that's not right!") e.play() # - # ### Matrix visualization, contextualized exercise (MNIST dataset, hand-written digit recognition problem) # + # Helper functions digits = load_digits() sorted_indices = np.argsort(digits.target) nums = digits.images[sorted_indices] # Plot and save matrix image def save_image_for(matrix, filename): fig, ax = plt.subplots() ax.xaxis.set_label_position('top') ax.set_xticklabels([i for i in range(0, 9)]) ax.yaxis.set_label_position('left') ax.set_yticklabels([i for i in range(0, 9)]) # Minor ticks ax.set_xticks(np.arange(-.5, 10, 1), minor=True) ax.set_yticks(np.arange(-.5, 10, 1), minor=True) ax.grid(which='minor', color='black', linestyle='-', linewidth=2) ax.matshow(matrix, cmap='binary') plt.savefig(filename, dpi=300, bbox_inches='tight') # Return binary representation of image matrix def to_binary(m): return np.where(m > 7, 1, 0) # + t = r""" <div style="display: flex; align-items: center; justify-content: center; margin-bottom: 10px;"> $A = $<img src="zero_1.png" width="150"/> $B = $<img src="zero_2.png" width="150"/> $D = $<img src="diff.png" width="150"/> </div> $A = @z1, B = @z2, D = |A - B| = @d, \sum D = @s$ """ # TODO: illustrate non-binary case zero_1 = nums[0] zero_1 = to_binary(zero_1) zero_2 = nums[2] zero_2 = to_binary(zero_2) save_image_for(zero_1, "zero_1") save_image_for(zero_2, "zero_2") save_image_for(np.abs(zero_1 - zero_2), "diff") z1 = sp.Matrix(zero_1) z2 = sp.Matrix(zero_2) params = {} params["z1"] = z1 params["z2"] = z2 distance_matrix = np.abs(z1 - z2) d = sp.Matrix(distance_matrix) params["d"] = d params["s"] = np.sum(distance_matrix) e = Exercise(MarkdownBlock(t, params)) e.display() e.write() e.publish() # -
focus_group_demo_complete.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + id="eS2hsQVTryRn" import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt import warnings warnings.filterwarnings("ignore") from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" # + id="j6ktiW1rryRt" train = pd.read_csv('train.csv') test = pd.read_csv('test.csv') # df = pd.concat([train,test],0) train.drop('ID',1,inplace=True) # test # + id="oyUdUJP7ryRu" train.rename(columns={'Agency Type':'Agency_Type', 'Distribution Channel':'Distribution_Channel', 'Product Name':'Product_Name','Net Sales':'Net_Sales', 'Commision (in value)':'Commision'},inplace=True) # + colab={"base_uri": "https://localhost:8080/"} id="j3dAiaU6ryRu" outputId="9cf58d8d-2312-4ce7-de31-45152ce9803f" cat_cols = train.select_dtypes('object').columns num_cols = train.select_dtypes('number').columns num_cols,cat_cols=list(num_cols),list(cat_cols) num_cols.remove('Claim') num_cols,cat_cols # + colab={"base_uri": "https://localhost:8080/", "height": 334} id="xyONixEvryRv" outputId="fd981098-58bc-44b9-d2d0-3ac44662748d" train.describe() train.Claim.value_counts(normalize=True) # + colab={"base_uri": "https://localhost:8080/", "height": 786} id="KdayY85-ryRw" outputId="82fa7c6b-de3a-4f01-d588-fa4560f950dd" train[cat_cols] train[num_cols] # + colab={"base_uri": "https://localhost:8080/"} id="xozAcYHNryRw" outputId="78f23618-b337-40f6-9d86-f39af0116f32" # neg_dur_indices = list(train.Duration[train.Duration<0].index) # train.drop(neg_dur_indices,inplace=True) for i in cat_cols: train[i].value_counts() # + id="7BTlkHRVryRx" # def category(x): # if 'Comprehensive' in x: # return 0 # elif 'Travel' in x or 'Trip' in x or 'Ticket' in x or 'Vehicle' in x or 'Protect' in x: # return 1 # elif 'Plan' in x: # return 2 # else: # return 3 # train.Product_Name = train.Product_Name.apply(lambda x:category(x)) # train.Product_Name.value_counts() # + colab={"base_uri": "https://localhost:8080/"} id="Eun-O2UOryRx" outputId="37db2f3a-7380-4dfd-c43a-91bd3945cf47" for i in cat_cols: train[i].unique() ##EDA Plots # fig, axs = plt.subplots(5,1,figsize=(10,40)) # for j,i in enumerate(num_cols): # axs[j].boxplot(train[i]) # axs[j].set_title(i) # f,ax=plt.subplots(4,1,figsize=(8,30)) # for j,i in enumerate(num_cols): # ax[j].hist(train[i]) # ax[j].set_title(i) # f,ax = plt.subplots(5,1,figsize=(8,40)) # sns.countplot(train.Claim,ax=ax[0]) # sns.countplot(train.Distribution_Channel,ax=ax[1]) # sns.countplot(train.Claim,hue=train.Distribution_Channel,ax=ax[2]) # sns.countplot(train.Agency_Type,ax=ax[3]) # sns.countplot(train.Claim,hue=train.Agency_Type,ax=ax[4]) # + id="JtUhC8QAryRy" # import pycountry_convert as pc # def country_to_continent(x): # if " " not in x: # x=x[0]+x[1:].lower() # else: # pos = [i for i,j in enumerate(x) if ' '==j] # pos = np.array(pos)+1 # x=x.lower() # x=x[0].upper()+x[1:] # for j in pos: # x = x[:j]+x[j].upper()+x[j+1:] # country_code = pc.country_name_to_country_alpha2(x, cn_name_format="default") # continent_name = pc.country_alpha2_to_continent_code(country_code) # return continent_name # train.Destination = train.Destination.apply(lambda x: 'CHINA' if x=='TAIWAN, PROVINCE OF CHINA' else x) # train.Destination = train.Destination.apply(lambda x:country_to_continent(x)) # + colab={"base_uri": "https://localhost:8080/", "height": 459} id="rJntwiBAryRy" outputId="b56f95d1-306c-4230-87d3-1dddba708414" pd.crosstab(train.Claim,train.Destination,margins=True).style.background_gradient(cmap='summer_r') pd.crosstab(train.Claim,train.Agency,margins=True).style.background_gradient(cmap='summer_r') pd.crosstab(train.Claim,train.Product_Name,margins=True).style.background_gradient(cmap='summer_r') # + colab={"base_uri": "https://localhost:8080/", "height": 287} id="nSfZgMzuryRz" outputId="a7ecde61-e7c1-4021-d45e-6c639ea2a1a1" sns.heatmap(train.corr(),annot=True,cmap='summer_r') # + id="N4Ael0lvryRz" from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn.ensemble import BaggingClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import AdaBoostClassifier from sklearn.ensemble import GradientBoostingClassifier from sklearn.ensemble import VotingClassifier from sklearn.model_selection import GridSearchCV from sklearn.metrics import classification_report , accuracy_score from sklearn.metrics import confusion_matrix from sklearn.metrics import roc_auc_score , roc_curve,precision_score from sklearn.preprocessing import StandardScaler, Normalizer,LabelEncoder from xgboost import XGBClassifier # + colab={"base_uri": "https://localhost:8080/", "height": 195} id="nJ3kH5A__K8r" outputId="b232e0c5-b9aa-4bab-d191-5e6dda996d0f" train.head() # + colab={"base_uri": "https://localhost:8080/", "height": 195} id="He09BA3d_Q0c" outputId="30065f38-e4d6-473d-b265-a4b4b6293df4" test.head() # - cat_cols # + colab={"base_uri": "https://localhost:8080/", "height": 555} id="OlNmhgVKryR0" outputId="7c1abcec-0340-4ce6-d587-e7f772a8116a" lr=LabelEncoder() for i in train.columns: train[i] = lr.fit_transform(train[i]) #test[i] = lr.transform(test[i]) # - sns.heatmap(train.corr(),annot=True) # + id="z4kuwXZvryR0" train X=train.drop('Claim',1) y=train.Claim # + id="dHCEX6QKryR0" X y # + id="kjEh6UlJryR1" X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.2,random_state=0) # + id="lmjbMK3-ryR1" # #LogisticRegression # lr = LogisticRegression() # lr.fit(X_train,y_train) # y_pred = lr.predict(X_test) # lr.score(X_test,y_test) # confusion_matrix(y_test,y_pred) # precision_score(y_test,y_pred) # + id="Db1eS11FryR1" # train.Net_Sales.fillna(train.Net_Sales.mean(),inplace=True) # + id="3C4QtS1OryR1" # #Decision Tree Classifier # dt = DecisionTreeClassifier(criterion='entropy',max_depth=100,random_state=0) # dt.fit(X_train,y_train) # y_pred = dt.predict(X_test) # precision_score(y_test,y_pred)a # + id="uMYYn3egryR1" # from sklearn.model_selection import GridSearchCV # params = {'max_depth':[12,13,14],'learning_rate':[0.1,0.5,1],'n_estimators':[70,80,90,100]} # gbm1 = GradientBoostingClassifier() # grdclf1 = GridSearchCV(gbm1,params) # grdclf1.fit(X_train,y_train) # grdclf1.best_estimator_ # + id="w8XYlvIQryR2" # #Random Forest # rf = RandomForestClassifier(criterion='entropy',n_estimators=100,min_samples_split=4) # rf.fit(X_train,y_train) # y_pred = rf.predict(X_test) # precision_score(y_test,y_pred) # + id="K7M-QA2gryR2" # #Gradient Boosting Classifier # gbclf = GradientBoostingClassifier(random_state=0,n_estimators=85,learning_rate=0.5,max_depth=13) # gbclf.fit(X_train,y_train) # y_pred = gbclf.predict(X_test) # precision_score(y_test,y_pred) # + id="G3eDA2GSryR3" # import pycountry_convert as pc # def country_to_continent(x): # if " " not in x: # x=x[0]+x[1:].lower() # else: # pos = [i for i,j in enumerate(x) if ' '==j] # pos = np.array(pos)+1 # x=x.lower() # x=x[0].upper()+x[1:] # for j in pos: # x = x[:j]+x[j].upper()+x[j+1:] # country_code = pc.country_name_to_country_alpha2(x, cn_name_format="default") # continent_name = pc.country_alpha2_to_continent_code(country_code) # return continent_name # train.Destination = train.Destination.apply(lambda x: 'CHINA' if x=='TAIWAN, PROVINCE OF CHINA' else x) # train.Destination = train.Destination.apply(lambda x:country_to_continent(x)) # + id="YwBgO7F-ryR3" # country_code = pc.country_name_to_country_alpha2("Viet Nam", cn_name_format="default") # print(country_code) # continent_name = pc.country_alpha2_to_continent_code(country_code) # print(continent_name) # x='BOMBAY' # x=x[0]+x[1:].lower() # x # pos = [i for i,j in enumerate(x) if ' '==j] # x='VIET NAM SOLDIERS ARE VERY STRONG' # pos = np.array(pos)-1 # pos+[0] # # x='VIET NAM SOLDIERS ARE VERY STRONG' # pos = [i for i,j in enumerate(x) if ' '==j] # pos = np.array(pos)+1 # x=x.lower() # x=x[0].upper()+x[1:] # for j in pos: # x = x[:j]+x[j].upper()+x[j+1:] # x # + id="NRPtlJRyryR4" # train.Destination.unique() # + colab={"base_uri": "https://localhost:8080/"} id="IjJh0c_xryR4" outputId="c5e25d62-f981-4742-e822-98fc97c7c8f2" xgbclf = XGBClassifier(scale_pos_weight=0.2) xgbclf.fit(X_train,y_train) y_pred = xgbclf.predict(X_test) precision_score(y_test,y_pred) # + id="lCsdVm0EryR4" # from sklearn.model_selection import GridSearchCV # params = {'max_depth':[16,17],'scale_pos_weight':[1,2,3],'n_estimators':[130,150,170]} # gbm1 = XGBClassifier() # grdclf1 = GridSearchCV(gbm1,params) # grdclf1.fit(X_train,y_train) # grdclf1.best_estimator_ # + colab={"base_uri": "https://localhost:8080/"} id="CUJLy9HLryR2" outputId="34f2861f-c481-4a92-ac5a-139bb64d53b6" confusion_matrix(y_test,y_pred) # + id="aPjMmOJQryR4" # + colab={"base_uri": "https://localhost:8080/"} id="fNofCgbSryR4" outputId="e1fcc974-05d2-4f39-ae01-bd6a78a19f24" y_test # + id="yDowF3nBryR5"
Hackathon2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/nmcardoso/splus-website/blob/master/sdss_stamps.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + id="j1XSB7wco-oH" colab_type="code" colab={} # Image Options: # p = ["G", "L", "P", "S", "O", "B", "F", "M", "Q", "I", "A", "X"] # f = ["Grid", "Label", "PhotoObjs", "SpecObjs", "Outline", # "BoundingBox", "Fields", "Masks", "Plates", "InvertImage", "APOGEE", "2MASS Images"]; # + id="mVfIKmWjpYUE" colab_type="code" colab={} # url_format: http://skyserver.sdss.org/dr15/SkyServerWS/ImgCutout/getjpeg?TaskName=Skyserver.Chart.Navi&ra=229.525575753922&dec=42.7458537608544&scale=0.3&width=300&height=300&opt=X # + id="4tTG0KhBrOgg" colab_type="code" colab={} import requests from IPython.display import Image, display import pandas as pd import shutil import os import tarfile from google.colab import drive import copy # + id="yInCzLHg5ftv" colab_type="code" outputId="37c4d8ec-bee2-4604-e2a5-0477261f1ad4" colab={"base_uri": "https://localhost:8080/", "height": 34} drive.mount('/gdrive') # + id="mSxsgGlJ4pyI" colab_type="code" colab={} base_url = 'http://skyserver.sdss.org/dr15/SkyServerWS/ImgCutout/getjpeg' downloads_path = '/content/downloads' dataset_path = '/gdrive/My Drive/ml_datasets/splus_crossmatch.csv' data = None # + id="EOCgi3VJokGY" colab_type="code" colab={} def load_dataset(force=False): global data if (not data or force): data = pd.read_csv(dataset_path) def get_params(ra, dec, scale=0.2, width=200, height=200, opt=''): return { 'TaskName': 'Skyserver.Chart.Navi', 'ra': ra, 'dec': dec, 'scale': scale, 'width': width, 'height': height, 'opt': opt } def prepare_downloads_dir(classes): shutil.rmtree(downloads_path) os.mkdir(downloads_path) for c in classes: os.mkdir(f'{downloads_path}/{c}') def download_image(url, params, filename, output_path=downloads_path): resp = requests.get(url, params) if (resp.status_code == 200): with open(f'{output_path}/{filename}', 'wb') as f: f.write(resp.content) else: raise Exception(f'Error: {resp.status_code} [{filename}]') def make_tarfile(source, output): with tarfile.open(output, "w:gz") as tar: tar.add(source, arcname=os.path.basename(source)) print(f'Tarfile created successfully [{output}]') def batch_download(data, classes_column, limit=None): classes = list(final_data.groupby(classes_column).indices.keys()) prepare_downloads_dir(classes) total = limit if limit else data.shape[0] i = 0 for index, row in data.iterrows(): if (limit and i >= limit): break try: download_image( base_url, get_params(row['RA_2'], row['Dec_2'], scale=0.15), f'{row["ID"]}.jpg', output_path=f'{downloads_path}/{row[classes_column]}' ) print(f'Success [{row["ID"]}] ({(((i + 1)/total)*100):.2f}%)') except Exception as error: print(error) i += 1 # + id="Ev8KjjqsOtB5" colab_type="code" colab={} load_dataset() print(*data.columns, sep='\n') # + id="GZr7OSpKSRR2" colab_type="code" outputId="58c2d89f-f06f-4c17-c1e9-04d1ee8a6ded" colab={"base_uri": "https://localhost:8080/", "height": 867} kmap = {} for i, r in data.iterrows(): klass = r['gz2class'] kmap[klass] = kmap[klass] + 1 if klass in kmap else 1 sorted_kmap = sorted(kmap.items(), key=lambda x: x[1], reverse=True) print(*[f'{x[0]}: {x[1]}' for x in sorted_kmap[:50]], sep='\n') # + id="98Z2czIHpFAs" colab_type="code" outputId="f3418bbe-1f0e-4e5b-d014-b3b317afd49c" colab={"base_uri": "https://localhost:8080/", "height": 346} valid_classes = ['Ei', 'Er', 'Ec', 'Ser', 'Sc2m'] final_data = data[data['gz2class'].isin(valid_classes)] final_data.describe() # + id="U-MFw15at61o" colab_type="code" outputId="955075c0-305c-4e12-abeb-6a2dee6ca3df" colab={"base_uri": "https://localhost:8080/", "height": 1000} batch_download(final_data, 'gz2class') # + id="3U_g7nh9LpLf" colab_type="code" outputId="3bd2ac67-3738-44f4-818c-3e0f5dbbfb91" colab={"base_uri": "https://localhost:8080/", "height": 34} make_tarfile(downloads_path, '/gdrive/My Drive/ml_datasets/sdss_stripe82.tar.gz')
sdss_stamps.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from accelerator.elements import Drift import matplotlib.pyplot as plt drift = Drift(2) drift.m_h drift.m_h.twiss drift.plot() # # Effect of drift on the phasespace from accelerator import Lattice from accelerator import Beam lattice = Lattice([Drift(1)]) lattice u, u_prime, twiss, s = lattice.transport_beam([1,0,1], Beam(emittance=1)) plt.figure(figsize=(12,12)) plt.plot(u, u_prime) plt.gca().set_aspect('equal') # + import numpy as np lat = Lattice([Drift(1)]) lat.plot(xztheta_init=[0,0,np.pi/4])
noteboooks/drift.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # TF-IDF and BM25 # # Using the classic Cranfield dataset, this notebook shows how to use [TF-IDF](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html) and [BM25](https://pypi.org/project/rank-bm25) to calculate the similarity scores between a query and the documents and show the evaluation scores, i.e., precision and recall. Note that the ranking of the returned documents is not yet considered. import numpy as np import pandas as pd # load data into dataframes docs = pd.read_json("data/cranfield_docs.json") queries = pd.read_json("data/cranfield_queries.json") relevance = pd.read_json("data/cranfield_relevance.json") docs.head() queries.head() queries.set_index('query_id', inplace=True) queries queries['query'][1] relevance.head() train_docs = docs['body'].tolist() # only need nltk if custom tokenizer is used import nltk nltk.download('popular') # download the nltk packages locally # + # https://tartarus.org/martin/PorterStemmer/def.txt from nltk.stem.porter import PorterStemmer porter_stemmer = PorterStemmer() import re # regular expression def stemming_tokenizer(str_input): words = re.sub(r"[^A-Za-z\-]", " ", str_input).lower().split() # delete non letter charactors #words = re.sub(r"[^A-Za-z0-9\-]", " ", str_input).lower().split() # include numbers words = [porter_stemmer.stem(word) for word in words] return words # + # train the tf-idf model # max_df: 0.9 - discard the term if the term appears more than 80% of all documents # min_idf: 0.1 - discard the term if the teram appears in less than 20% of all documents # ngram_range: (1,3) - try unigrams, bigrams and trigrams # tokenizer=stemming_tokenizer - use customized tokenizer # use_idf=True, norm='l2' default # using stemming_tokenizer improves precision/recall a lot! from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.metrics.pairwise import cosine_similarity tfidf_vectorizer = TfidfVectorizer( tokenizer=stemming_tokenizer, max_df=0.9, min_df=0.1, stop_words='english', use_idf=True, ngram_range=(1, 3) ) tfidf_vect= tfidf_vectorizer.fit(train_docs) # - # number of words/features len(tfidf_vect.get_feature_names()) # + # testing data # first sentence is the query # rest are the documents test_texts = [ "photo-thermoelastic investigation", "a simple model study of transient temperature and thermal stress distribution due to aerodynamic heating . the present work is concerned with the determination of transient temperatures and thermal stresses in simple models intended to simulate parts or the whole of an aircraft structure of the built- up variety subjected to aerodynamic heating . the first case considered is that of convective heat transfer into one side of a flat plate, representing a thick skin, and the effect of the resulting temperature distribution in inducing thermal stresses associated with bending restraint at the plate edges . numerical results are presented for the transient temperature differentials in the plate when the environment temperature first increases linearly with time and then remains constant, the period of linear increase representing the time of acceleration of the aircraft . corresponding thermal stress information is presented . the second case is that of the wide-flanged i-beam with convective heat transfer into the outer faces of the flanges . numerical results are presented for transient temperature differentials for a wide range of values of the applicable parameters and for an environment temperature variation as described above . corresponding thermal stresses in a beam of infinite length are determined . a theoretical analysis of the stress distribution in a beam of finite length is carried out and numerical results obtained for one case .", "photo-thermoelastic investigation of transient thermal stresses in a multiweb wing structure . photothermoelastic experiments were performed on a long multiweb wing model for which a theoretical analysis is available in the literature . the experimental procedures utilized to simulate the conditions prescribed in the theory are fully described . correlation of theory and experiment in terms of dimensionless temperature, stress, time, and biot number revealed that the theory predicted values higher than the experimentally observed maximum thermal stresses at the center of the web . ", ] test_result = tfidf_vect.transform(test_texts) pd.DataFrame(test_result.toarray(), columns=tfidf_vect.get_feature_names()) # - # cosine similarity for the testing data test_similarity = cosine_similarity(test_result) pd.DataFrame(test_similarity) # given a query and similarity_thershold return the ids of docs # this is the relevant docs that based on our tf-idf algorithm for the query def get_doc_ids_tfidf(query_id, similarity_threshold): all_docs = train_docs.copy() # make a copy of all docs list all_docs.insert(0, queries['query'][query_id]) # inser the current query at the begining of the list test_result = tfidf_vect.transform(all_docs) # generate the tfidf matrix # pd.DataFrame(test_result.toarray(), columns=tfidf_vect.get_feature_names()) df = pd.DataFrame(cosine_similarity(test_result)) # calculate the pair-wise similarity and convert to df df = df.rename(columns = {0:'similarity'}) # rename the first column df = df.sort_values('similarity', ascending=False) # sort the result based on similarity score df_filtered = df[df['similarity']>similarity_threshold] # filter the result based on similarity threshold returned_doc_ids_list = df_filtered.index.tolist() # get the ids of the returned docs return returned_doc_ids_list # get the doc ids with relevance score # this is the ground truth of relevance docs for the query based on the human annotated data def get_true_doc_ids(query_id, relevance_threshold): # filter based on r_score (1, 2, 3, or 4) and relevance_threshold true_doc_ids = relevance[(relevance['query_id']==query_id) & (relevance['r_score']<=relevance_threshold)] true_doc_ids_list = true_doc_ids['doc_id'].to_list() return true_doc_ids_list # + # calculate evaluation scores: precision and recall def show_eval_scores(returned_doc_ids_list, true_doc_ids_list): # true positive tp = [x for x in true_doc_ids_list if x in returned_doc_ids_list] #tp.sort() #print(tp, len(tp)) # false positive fp = [x for x in returned_doc_ids_list if x not in tp] #fp.sort() #print(fp, len(fp)) # false negative fn = [x for x in true_doc_ids_list if x not in tp] #fn.sort() #print(fn, len(fn)) precision = len(tp)/(len(tp)+len(fp)) recall = len(tp)/(len(tp)+len(fn)) print(f'precision is {precision:.3%}') print(f'recall is {recall:.3%}') return precision, recall # - # utility function to put everything together def show_result_tfidf(query_id, similarity_threshold, relevance_threshold): print(f"query: {queries['query'][query_id]}") print('calculating results......') # we set a similarity threshold and get the ids of those documents # similarity_threshold 0.65 is used in https://www.aaai.org/Papers/FLAIRS/2003/Flairs03-082.pdf returned_doc_ids_list = get_doc_ids_tfidf(query_id, similarity_threshold) print(f'total # of returned docs: {len(returned_doc_ids_list)}') # we choose relevance_threshold = 3, relevance 1, 2 and 3 mean relevant for returned documents # see readme for definitions about r_score true_doc_ids_list = get_true_doc_ids(query_id, relevance_threshold) print(f'total # of true docs: {len(true_doc_ids_list)}') print(f'similarity_threshold is {similarity_threshold} and relevance_threshold is {relevance_threshold}') show_eval_scores(returned_doc_ids_list, true_doc_ids_list) # + # given a query and similarity_thershold return the ids of docs # this is the relevant docs that based on our tf-idf algorithm for the query from rank_bm25 import BM25Okapi def get_doc_ids_bm25(query_id, bm25_top_n): all_docs = train_docs.copy() # make a copy of all docs list tokenized_corpus = [ stemming_tokenizer(doc) for doc in all_docs] bm25 = BM25Okapi(tokenized_corpus) query = queries['query'][query_id] tokenized_query = stemming_tokenizer(query) doc_scores = bm25.get_scores(tokenized_query) df = pd.DataFrame(doc_scores) df = df.rename(columns = {0:'bm25_score'}) # rename the first column df = df.sort_values('bm25_score', ascending=False) # sort the result based on similarity score df_filtered = df[0:bm25_top_n] # filter the result based on similarity threshold returned_doc_ids_list = df_filtered.index.tolist() # get the ids of the returned docs return returned_doc_ids_list # - # utility function to put everything together def show_result_bm25(query_id, bm25_top_n, relevance_threshold): print(f"query: {queries['query'][query_id]}") print('calculating results......') # we set a bm25_top_n and get the ids of those documents returned_doc_ids_list = get_doc_ids_bm25(query_id, bm25_top_n) print(f'total # of returned docs: {len(returned_doc_ids_list)}') # we choose relevance_threshold = 3, relevance 1, 2 and 3 mean relevant for returned documents # see readme for definitions about r_score true_doc_ids_list = get_true_doc_ids(query_id, relevance_threshold) print(f'total # of true docs: {len(true_doc_ids_list)}') print(f'bm25 top_n is {bm25_top_n} and relevance_threshold is {relevance_threshold}') show_eval_scores(returned_doc_ids_list, true_doc_ids_list) # query_id = 1, similarity_threshold = 0.3, relevance_threshold = 3 show_result_tfidf(1, 0.3, 3) show_result_bm25(1, 30, 3) # query_id = 2, similarity_threshold = 0.3, relevance_threshold = 3 show_result_tfidf(2, 0.3, 3) show_result_bm25(2, 36, 3)
cranfield/tfidf-bm25.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- class Blockchain(object): ... def new_transaction(self, sender, recipient, amount): """ Creates a new transaction to go into the next mined Block :param sender: <str> Address of the Sender :param recipient: <str> Address of the Recipient :param amount: <int> Amount :return: <int> The index of the Block that will hold this transaction """ self.current_transactions.append({ 'sender': sender, 'recipient': recipient, 'amount': amount, }) return self.last_block['index'] + 1
src/addToTransaction.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: p38torch-pip # language: python # name: p38torch-pip # --- # ## Data Exploration import os #path2test="./data/test_set/" path2test = "../../../data/hc18/test_set/" path2train = "../../../data/hc18/training_set/" imgsList=[pp for pp in os.listdir(path2test) if "Annotation" not in pp] print("number of images:", len(imgsList)) import numpy as np np.random.seed(2019) rndImgs=np.random.choice(imgsList,4) rndImgs # ## Creating the Model import torch.nn as nn import torch.nn.functional as F # + class SegNet(nn.Module): def __init__(self, params): super(SegNet, self).__init__() C_in, H_in, W_in=params["input_shape"] init_f=params["initial_filters"] num_outputs=params["num_outputs"] self.conv1 = nn.Conv2d(C_in, init_f, kernel_size=3,stride=1,padding=1) self.conv2 = nn.Conv2d(init_f, 2*init_f, kernel_size=3,stride=1,padding=1) self.conv3 = nn.Conv2d(2*init_f, 4*init_f, kernel_size=3,padding=1) self.conv4 = nn.Conv2d(4*init_f, 8*init_f, kernel_size=3,padding=1) self.conv5 = nn.Conv2d(8*init_f, 16*init_f, kernel_size=3,padding=1) self.upsample = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True) self.conv_up1 = nn.Conv2d(16*init_f, 8*init_f, kernel_size=3,padding=1) self.conv_up2 = nn.Conv2d(8*init_f, 4*init_f, kernel_size=3,padding=1) self.conv_up3 = nn.Conv2d(4*init_f, 2*init_f, kernel_size=3,padding=1) self.conv_up4 = nn.Conv2d(2*init_f, init_f, kernel_size=3,padding=1) self.conv_out = nn.Conv2d(init_f, num_outputs , kernel_size=3,padding=1) def forward(self, x): x = F.relu(self.conv1(x)) x = F.max_pool2d(x, 2, 2) x = F.relu(self.conv2(x)) x = F.max_pool2d(x, 2, 2) x = F.relu(self.conv3(x)) x = F.max_pool2d(x, 2, 2) x = F.relu(self.conv4(x)) x = F.max_pool2d(x, 2, 2) x = F.relu(self.conv5(x)) x=self.upsample(x) x = F.relu(self.conv_up1(x)) x=self.upsample(x) x = F.relu(self.conv_up2(x)) x=self.upsample(x) x = F.relu(self.conv_up3(x)) x=self.upsample(x) x = F.relu(self.conv_up4(x)) x = self.conv_out(x) return x # + h,w=128,192 params_model={ "input_shape": (1,h,w), "initial_filters": 16, "num_outputs": 1, } model = SegNet(params_model) # - import torch device = torch.device('cuda:1' if torch.cuda.is_available() else 'cpu') model=model.to(device) # + import matplotlib.pylab as plt from PIL import Image from scipy import ndimage as ndi from skimage.segmentation import mark_boundaries def show_img_mask(img, mask): img_mask=mark_boundaries(np.array(img), np.array(mask), outline_color=(0,1,0), color=(0,1,0)) plt.imshow(img_mask) # - path2weights="./models/weights.pt" model.load_state_dict(torch.load(path2weights)) model.eval() # + from torchvision.transforms.functional import to_tensor, to_pil_image for fn in rndImgs: path2img = os.path.join(path2test, fn) img = Image.open(path2img) img=img.resize((w,h)) img_t=to_tensor(img).unsqueeze(0).to(device) pred=model(img_t) pred=torch.sigmoid(pred)[0] #mask_pred= (pred[0]>=0.5) mask_pred= (pred[0]>=0.5).cpu().numpy() plt.figure() plt.subplot(1, 3, 1) plt.imshow(img, cmap="gray") plt.subplot(1, 3, 2) plt.imshow(mask_pred, cmap="gray") plt.subplot(1, 3, 3) show_img_mask(img, mask_pred) # -
Chapter06/Chapter6-deployment.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Problem Statement # The task is to predict whether a potential promotee at checkpoint in the test set will be promoted or not after the evaluation process. import pandas as pd import numpy as np import seaborn as sb import matplotlib.pyplot as plt train_data='D:/My Personal Documents/Learnings/Data Science/Data Sets/WNS Analytics/train_LZdllcl.csv' test_data='D:/My Personal Documents/Learnings/Data Science/Data Sets/WNS Analytics/test_2umaH9m.csv' train=pd.read_csv(train_data) test=pd.read_csv(test_data) train.shape test.shape train['source']='train' test['source']='test' data = pd.concat([train, test],ignore_index=True) print (train.shape, test.shape, data.shape) data.info() data.apply(lambda x: sum(x.isnull())) data.apply(lambda x: len(x.unique())) plt.figure(figsize=(12,6)) sb.countplot(x='department', hue='education',data=data) sb.countplot(x='is_promoted',data=data) plt.figure(figsize=(12,6)) sb.countplot(x='is_promoted', hue='department',data=data) plt.figure(figsize=(12,6)) sb.countplot(x='is_promoted', hue='region',data=data) plt.figure(figsize=(12,6)) sb.countplot(x='is_promoted', hue='education',data=data) plt.figure(figsize=(12,6)) sb.countplot(x='is_promoted', hue='awards_won?',data=data) plt.figure(figsize=(12,6)) sb.countplot(x='is_promoted', hue='no_of_trainings',data=data) plt.figure(figsize=(12,6)) sb.countplot(x='is_promoted', hue='gender',data=data) plt.figure(figsize=(12,6)) sb.countplot(x='is_promoted', hue='previous_year_rating',data=data) plt.figure(figsize=(12,6)) sb.countplot(x='is_promoted', hue='KPIs_met >80%',data=data) plt.figure(figsize=(12,6)) sb.countplot(x='is_promoted', hue='recruitment_channel',data=data) plt.figure(figsize=(12,6)) sb.distplot(data[data.is_promoted==0]['avg_training_score']) def region_decode(region): if(region in ('region_2','region_22','region_7')): return 'region_A' elif(region in ('region_4','region_13','region_15')): return 'region_B' elif(region in ('region_28','region_26','region_23','region_27','region_31','region_17','region_25','region_16')): return 'region_C' else: return 'region_D' data['region']=data['region'].apply(region_decode) data.previous_year_rating[data.previous_year_rating.isnull()]=3.0 data.education[data.education.isnull()]="Bachelor's" def score_trans(score): if(score<55): return 1 elif(score>=55 and score<65): return 2 elif(score >=65 and score<75): return 3 elif(score>=75 and score<85): return 4 elif(score>=85 and score<90): return 5 else: return 6 data.head() data['avg_training_score']=data['avg_training_score'].apply(score_trans) def f_trans(col1,col2,col3): if(col1==1 or col2==1 or col3==5.0): return 1 else: return 0 data['top_performer'] = data.apply(lambda x: f_trans(x['KPIs_met >80%'], x['awards_won?'],x['previous_year_rating']), axis=1) data=pd.get_dummies(data,columns=['department','education','gender','recruitment_channel','region']) data.groupby('top_performer').employee_id.count() data.head() train=data[data.source=='train'] train.drop('source',axis=1,inplace=True) test=data[data.source=='test'] test.drop(['source','is_promoted'],axis=1,inplace=True) #Define target and ID columns: target = 'is_promoted' IDcol = ['employee_id'] from sklearn import model_selection, metrics def modelfit(alg, dtrain, dtest, predictors, target, IDcol, filename): #Fit the algorithm on the data alg.fit(dtrain[predictors], dtrain[target]) #Predict training set: dtrain_predictions = alg.predict(dtrain[predictors]) #Perform cross-validation: cv_score = model_selection.cross_val_score(alg, dtrain[predictors], dtrain[target], cv=20, scoring='neg_mean_squared_error') cv_score = np.sqrt(np.abs(cv_score)) #Print model report: print ("\nModel Report") print ("RMSE : %.4g" % np.sqrt(metrics.mean_squared_error(dtrain[target].values, dtrain_predictions))) print ("CV Score : Mean - %.4g | Std - %.4g | Min - %.4g | Max - %.4g" % (np.mean(cv_score),np.std(cv_score),np.min(cv_score),np.max(cv_score))) #Predict on testing data: dtest[target] = alg.predict(dtest[predictors]) #Export submission file: IDcol.append(target) submission = pd.DataFrame({ x: dtest[x] for x in IDcol}) submission.to_csv(filename, index=False) from sklearn.linear_model import LinearRegression, Ridge, Lasso predictors = [x for x in train.columns if x not in [target]+IDcol] # print predictors alg1 = LinearRegression(normalize=True) modelfit(alg1, train, test, predictors, target, IDcol, 'alg1.csv') coef1 = pd.Series(alg1.coef_, predictors).sort_values() coef1.plot(kind='bar', title='Model Coefficients',figsize=(10,6)) predictors = [x for x in train.columns if x not in [target]+IDcol] alg2 = Ridge(alpha=0.05,normalize=True) modelfit(alg2, train, test, predictors, target, IDcol, 'alg2.csv') coef2 = pd.Series(alg2.coef_, predictors).sort_values() coef2.plot(kind='bar', title='Model Coefficients') from sklearn.tree import DecisionTreeClassifier predictors = [x for x in train.columns if x not in [target]+IDcol] alg3 = DecisionTreeClassifier(max_depth=9, min_samples_leaf=60) modelfit(alg3, train, test, predictors, target, IDcol, 'alg3.csv') coef3 = pd.Series(alg3.feature_importances_, predictors).sort_values(ascending=False) coef3.plot(kind='bar', title='Feature Importances') ##### from sklearn.ensemble import RandomForestClassifier predictors = [x for x in train.columns if x not in [target]+IDcol] alg5 = RandomForestClassifier(n_estimators=200,max_depth=20, min_samples_leaf=60,n_jobs=4) modelfit(alg5, train, test, predictors, target, IDcol, 'alg5.csv') coef5 = pd.Series(alg5.feature_importances_, predictors).sort_values(ascending=False) coef5.plot(kind='bar', title='Feature Importances') train.info() from sklearn.ensemble import RandomForestClassifier predictors = ['avg_training_score','KPIs_met >80%','awards_won?','previous_year_rating','age','top_performer','length_of_service','region_region_A','region_region_B','region_region_C','region_region_D'] alg5 = RandomForestClassifier(n_estimators=200,max_depth=10, min_samples_leaf=50,n_jobs=4) modelfit(alg5, train, test, predictors, target, IDcol, 'alg5.csv') coef5 = pd.Series(alg5.feature_importances_, predictors).sort_values(ascending=False) coef5.plot(kind='bar', title='Feature Importances')
notebooks/WNS Analytics.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import tensorflow as tf from tensorflow.keras.preprocessing.image import ImageDataGenerator import os import zipfile import matplotlib.pyplot as plt # %matplotlib inline import seaborn as sns sns.set_style('darkgrid') os.makedirs("./cat_dog") file = zipfile.ZipFile("./cats_and_dogs_filtered.zip","r") file.extractall("./cat_dog/") # + train_dir = "./cat_dog/cats_and_dogs_filtered/train/" train_cat = "./cat_dog/cats_and_dogs_filtered/train/cats/" train_dog = "./cat_dog/cats_and_dogs_filtered/train/dogs/" test_dir = "./cat_dog/cats_and_dogs_filtered/validation/" test_dog = "./cat_dog/cats_and_dogs_filtered/validation/dogs/" test_cat = "./cat_dog/cats_and_dogs_filtered/validation/cats/" # + print("Train data details :\n**************************************") print("Total no of Dogs' pic : %s"%str(len(os.listdir(train_dog)))) print("Total no of Cats' pic : %s"%str(len(os.listdir(train_cat)))) print("\nTest data details :\n**************************************") print("Total no of Dogs' pic : %s"%str(len(os.listdir(test_dog)))) print("Total no of Cats' pic : %s"%str(len(os.listdir(test_cat)))) # - batch_size = 20 image_height = 300 image_width = 300 train_datagen = ImageDataGenerator(rescale=(1.0 / 255.0)) test_datagen = ImageDataGenerator(rescale=(1.0 / 255.0)) # + train_data = train_datagen.flow_from_directory(train_dir, target_size=(image_height,image_width), batch_size=batch_size, class_mode='binary') test_data = test_datagen.flow_from_directory(test_dir, target_size=(image_height,image_width), batch_size=batch_size, class_mode='binary') # + #Basic NN # - model1 = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(image_height,image_width,3)), tf.keras.layers.Dense(256,activation="relu"), tf.keras.layers.Dense(1,activation="sigmoid") ]) model1.compile(optimizer=tf.keras.optimizers.RMSprop(),loss=tf.keras.losses.binary_crossentropy,metrics=['acc']) model1.summary() history1 = model1.fit(train_data,steps_per_epoch=1000//batch_size,epochs=5, validation_data=test_data,validation_steps=500//batch_size) def loss_acc(history): fig,ax = plt.subplots(1,2,figsize=(16,7)) ax[0].plot(history.history['acc'],label="Train",marker="o") ax[0].plot(history.history['val_acc'],label="Test",marker="o") ax[0].set_title("Accurecy") ax[0].legend() ax[1].plot(history.history['loss'],label="Train",marker="o") ax[1].plot(history.history['val_loss'],label="Test",marker="o") ax[1].set_title("loss") ax[1].legend() loss_acc(history1) # + #3layer cnn # + model2 = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(16,(3,3),activation='relu',input_shape=(image_height,image_width,3)), tf.keras.layers.MaxPool2D((2,2)), tf.keras.layers.Conv2D(32,(3,3),activation='relu'), tf.keras.layers.MaxPool2D((2,2)), tf.keras.layers.Conv2D(64,(3,3),activation='relu'), tf.keras.layers.MaxPool2D((2,2)), tf.keras.layers.Flatten(), tf.keras.layers.Dense(256,activation="relu"), tf.keras.layers.Dense(1,activation="sigmoid") ]) model2.compile(optimizer=tf.keras.optimizers.RMSprop(),loss=tf.keras.losses.binary_crossentropy,metrics=['acc']) model2.summary() # - history2 = model2.fit(train_data,steps_per_epoch=1000//batch_size,epochs=5, validation_data=test_data,validation_steps=500//batch_size) loss_acc(history2) # + model3 = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(16,(3,3),activation='relu',input_shape=(image_height,image_width,3)), tf.keras.layers.MaxPool2D((2,2)), tf.keras.layers.Dropout(0.1), tf.keras.layers.Conv2D(32,(3,3),activation='relu'), tf.keras.layers.MaxPool2D((2,2)), tf.keras.layers.Dropout(0.1), tf.keras.layers.Conv2D(64,(3,3),activation='relu'), tf.keras.layers.MaxPool2D((2,2)), tf.keras.layers.Dropout(0.1), tf.keras.layers.Flatten(), tf.keras.layers.Dense(256,activation="relu"), tf.keras.layers.Dense(1,activation="sigmoid") ]) model3.compile(optimizer=tf.keras.optimizers.RMSprop(),loss=tf.keras.losses.binary_crossentropy,metrics=['acc']) model3.summary() history3 = model3.fit(train_data,steps_per_epoch=1000//batch_size,epochs=5, validation_data=test_data,validation_steps=500//batch_size) # - loss_acc(history3)
Practice/cat&dog.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # CSE 7324 Lab 5: Wide and Deep Networks # ### <NAME>, <NAME>, <NAME> and <NAME> # --- # ### 1. Preparation # --- # + # dependencies import pandas as pd import numpy as np import missingno as msno import matplotlib.pyplot as plt import re from sklearn.model_selection import train_test_split from textwrap import wrap from sklearn.preprocessing import StandardScaler import warnings warnings.filterwarnings("ignore") import math # %matplotlib inline # - # #### 1.1 Define Class Variables # --- # + # import data shelter_outcomes = pd.read_csv("../Project4/Data/aac_shelter_outcomes.csv") # filter animal type for just cats #cats = shelter_outcomes[shelter_outcomes['animal_type'] == 'Cat'] cats = shelter_outcomes #print(cats.head()) # remove age_upon_outcome and recalculate to standard units (days) age = cats.loc[:,['datetime', 'date_of_birth']] # convert to datetime age.loc[:,'datetime'] = pd.to_datetime(age['datetime']) age.loc[:,'date_of_birth'] = pd.to_datetime(age['date_of_birth']) # calculate cat age in days cats.loc[:,'age'] = (age.loc[:,'datetime'] - age.loc[:,'date_of_birth']).dt.days # get dob info cats['dob_month'] = age.loc[:, 'date_of_birth'].dt.month cats['dob_day'] = age.loc[:, 'date_of_birth'].dt.day cats['dob_dayofweek'] = age.loc[:, 'date_of_birth'].dt.dayofweek # get month from datetime cats['month'] = age.loc[:,'datetime'].dt.month # get day of month cats['day'] = age.loc[:,'datetime'].dt.day # get day of week cats['dayofweek'] = age.loc[:, 'datetime'].dt.dayofweek # get hour of day cats['hour'] = age.loc[:, 'datetime'].dt.hour # get quarter cats['quarter'] = age.loc[:, 'datetime'].dt.quarter # clean up breed attribute # get breed attribute for processing # convert to lowercase, remove mix and strip whitespace # remove space in 'medium hair' to match 'longhair' and 'shorthair' # split on either space or '/' breed = cats.loc[:, 'breed'].str.lower().str.replace('mix', '').str.replace('medium hair', 'mediumhair').str.strip().str.split('/', expand=True) cats['breed'] = breed[0] cats['breed1'] = breed[1] # clean up color attribute # convert to lowercase # strip spaces # split on '/' color = cats.loc[:, 'color'].str.lower().str.strip().str.split('/', expand=True) cats['color'] = color[0] cats['color1'] = color[1] # clean up sex_upon_outcome sex = cats['sex_upon_outcome'].str.lower().str.strip().str.split(' ', expand=True) sex[0].replace('spayed', True, inplace=True) sex[0].replace('neutered', True, inplace=True) sex[0].replace('intact', False, inplace=True) sex[1].replace(np.nan, 'unknown', inplace=True) cats['spayed_neutered'] = sex[0] cats['sex'] = sex[1] # add in domesticated attribute cats['domestic'] = np.where(cats['breed'].str.contains('domestic'), 1, 0) # combine outcome and outcome subtype into a single attribute cats['outcome_subtype'] = cats['outcome_subtype'].str.lower().str.replace(' ', '-').fillna('unknown') cats['outcome_type'] = cats['outcome_type'].str.lower().str.replace(' ', '-').fillna('unknown') cats['outcome'] = cats['outcome_type'] + '_' + cats['outcome_subtype'] # drop unnecessary columns cats.drop(columns=['animal_id', 'name', 'age_upon_outcome', 'date_of_birth', 'datetime', 'monthyear', 'sex_upon_outcome', 'outcome_subtype', 'outcome_type'], inplace=True) #print(cats['outcome'].value_counts()) cats.head() # + print("Default datatypes of shelter cat outcomes:\n") print(cats.dtypes) print("\nBelow is a description of the attributes in the cats dataframe:\n") # - print('Below is a listing of the target classes and their distributions:') cats['outcome'].value_counts() msno.matrix(cats ) # + cats.drop(columns=['breed1'], inplace=True) # Breed, Color, Color1, Spayed_Netured and Sex attributes need to be one hot encoded cats_ohe = pd.get_dummies(cats, columns=['breed', 'color', 'color1', 'spayed_neutered', 'sex','animal_type']) cats_ohe.head() out_t={'relocate_unknown':0,'euthanasia_court/investigation':0,'euthanasia_behavior':0,'euthanasia_suffering' : 0, 'died_in-kennel' : 0, 'return-to-owner_unknown' : 0, 'transfer_partner' : 0, 'euthanasia_at-vet' : 0, 'adoption_foster' : 1, 'died_in-foster' : 0, 'transfer_scrp' : 0, 'euthanasia_medical' : 0, 'transfer_snr' : 0, 'died_enroute' : 0, 'rto-adopt_unknown' : 1, 'missing_in-foster' : 0, 'adoption_offsite' : 1, 'adoption_unknown' :1,'euthanasia_rabies-risk' : 0, 'unknown_unknown' : 0, 'adoption_barn' : 0, 'died_unknown' : 0, 'died_in-surgery' : 0, 'euthanasia_aggressive' : 0, 'euthanasia_unknown' : 0, 'missing_unknown' : 0, 'missing_in-kennel' : 0, 'missing_possible-theft' : 0, 'died_at-vet' : 0, 'disposal_unknown' : 0, 'euthanasia_underage' : 0, 'transfer_barn' : 0} #output is converted from string to catogries 0 to 5 represent each output # separate outcome from data outcome = cats_ohe['outcome'] cats_ohe.drop(columns=['outcome']) print(cats_ohe.head()) # split the data X_train, X_test, y_train, y_test = train_test_split(cats_ohe, outcome, test_size=0.2, random_state=0) X_train.drop(columns=['outcome'], inplace=True) X_test.drop(columns=['outcome'], inplace=True) y_train = np.asarray([out_t[item] for item in y_train]) y_test = np.asarray([out_t[item] for item in y_test]) #print(X_train.shape, X_test.shape, y_train.shape, y_test.shape) # - # #### 1.2 Identify Cross Product Features # --- from sklearn import metrics as mt from sklearn.preprocessing import OneHotEncoder import keras # from keras.models import Sequential from keras.layers import Dense, Activation, Input from keras.layers import Embedding, Flatten, Concatenate from keras.models import Model keras.__version__ # + x_train_ar=X_train.values y_target_ar=np.asarray(y_train) x_test_ar=X_test.values y_test_ar=np.asarray(y_test) x_train_ar = StandardScaler().fit(x_train_ar).transform(x_train_ar) print(x_train_ar.shape) print(y_target_ar.shape) unique, counts = np.unique(y_target_ar, return_counts=True) np.asarray((unique, counts)) # + for i in range(78256): sex[0][i]=str(sex[0][i]) # - cats=cats.drop(columns=['color1']) cats['spayed_neutered'] = sex[0] cats=cats.dropna() cats=cats.drop(columns=['outcome']) cats # #### 1.3 Metrics # --- from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import StandardScaler categorical_headers = ['animal_type','breed','color', 'spayed_neutered','sex'] numeric_headers = ["age", "dob_month", "dob_day","dob_dayofweek","month","day","dayofweek","hour","quarter"] encoders = dict() for col in categorical_headers: cats[col] = cats[col].str.strip() encoders[col] = LabelEncoder() # save the encoder cats[col+'_int'] = encoders[col].fit_transform(cats[col]) for col in numeric_headers: cats[col] = cats[col].astype(np.float) ss = StandardScaler() cats[col] = ss.fit_transform(cats[col].values.reshape(-1, 1)) # #### 1.4 Splitting Data into Training and Test sets # --- # + from sklearn.model_selection import StratifiedShuffleSplit X_train, X_test, y_train, y_test=train_test_split(cats, outcome, test_size=0.2) print(X_train.shape) print(X_test.shape) # + ohe = OneHotEncoder() X_train_ohe = ohe.fit_transform(X_train[categorical_headers].values) X_test_ohe = ohe.fit_transform(X_test[categorical_headers].values) print(X_test_ohe.shape) print(X_train_ohe.shape) # + y_train = np.asarray([out_t[item] for item in y_train]) y_test = np.asarray([out_t[item] for item in y_test]) # + # let's start as simply as possible, without any feature preprocessing categorical_headers_ints = [x+'_int' for x in categorical_headers] # we will forego one-hot encoding right now and instead just scale all inputs # this is just to get an example running in Keras (don't ever do this) feature_columns = categorical_headers_ints+numeric_headers X_train_ar = ss.fit_transform(X_train[feature_columns].values).astype(np.float32) X_test_ar = ss.transform(X_test[feature_columns].values).astype(np.float32) y_train_ar = np.asarray(y_train) y_test_ar = np.asarray(y_test) print(feature_columns) # - # ### 2. Modeling # --- # #### 2.1 Wide and Deep Networks # --- # + # create sparse input branch for ohe from keras.layers import concatenate from keras import backend as K def recall_m(y_true, y_pred): true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) possible_positives = K.sum(K.round(K.clip(y_true, 0, 1))) recall = true_positives / (possible_positives + K.epsilon()) return recall def precision_m(y_true, y_pred): true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1))) precision = true_positives / (predicted_positives + K.epsilon()) return precision def f1_m(y_true, y_pred): precision = precision_m(y_true, y_pred) recall = recall_m(y_true, y_pred) return 2*((precision*recall)/(precision+recall+K.epsilon())) inputsSparse = Input(shape=(X_train_ohe.shape[1],),sparse=True, name='X_ohe') xSparse = Dense(units=100, activation='relu', name='ohe_1')(inputsSparse) xSparse1 = Dense(units=50, activation='relu', name='ohe_2')(xSparse) # create dense input branch for numeric inputsDense = Input(shape=(X_train_ar.shape[1],),sparse=False, name='X_Numeric') xDense = Dense(units=100, activation='relu',name='num_1')(inputsDense) xDense1 = Dense(units=50, activation='relu',name='num_2')(xDense) x = concatenate([xSparse1, xDense1], name='concat') predictions = Dense(1,activation='sigmoid', name='combined')(x) # This creates a model that includes # the Input layer and Dense layers model = Model(inputs=[inputsSparse,inputsDense], outputs=predictions) model.summary() # + model.compile(optimizer='sgd', loss='mean_squared_error', metrics=['acc', f1_m]) model.fit([ X_train_ohe, X_train_ar ], # inputs for each branch are a list y_train, epochs=20, batch_size=50, verbose=0) # + yhat = model.predict([X_train_ohe, X_train_ar]) # each branch has an input yhat = np.round(yhat) print(mt.confusion_matrix(y_train_ar,yhat),mt.accuracy_score(y_train_ar,yhat)) # - recall = model.evaluate([ X_train_ohe, X_train_ar ], y_train_ar, verbose=0) recall # + # we need to create separate sequential models for each embedding embed_branches = [] X_ints_train = [] # keep track of inputs for each branch X_ints_test = []# keep track of inputs for each branch all_inputs = [] # this is what we will give to keras.Model inputs all_branch_outputs = [] # this is where we will keep track of output of each branch for col in categorical_headers_ints: X_ints_train.append( X_train[col].values ) X_ints_test.append( X_test[col].values ) # get the number of categories N = max(X_ints_train[-1]+1) # same as the max(df_train[col]) # create embedding branch from the number of categories inputs = Input(shape=(1,),dtype='int32', name=col) all_inputs.append( inputs ) # keep track of created inputs x = Embedding(input_dim=N, output_dim=int(np.sqrt(N)), input_length=1, name=col+'_embed')(inputs) x = Flatten()(x) all_branch_outputs.append(x) # also get a dense branch of the numeric features all_inputs.append(Input(shape=(X_train_ar.shape[1],),sparse=False, name='numeric')) x = Dense(units=100, activation='relu',name='numeric_1')(all_inputs[-1]) all_branch_outputs.append( Dense(units=50,activation='relu', name='numeric_2')(x) ) # merge the branches together final_branch = concatenate(all_branch_outputs, name='concat_1') final_branch = Dense(units=1,activation='sigmoid', name='combined')(final_branch) model = Model(inputs=all_inputs, outputs=final_branch) model.summary() # + model.compile(optimizer='sgd', loss='mean_squared_error', metrics=['accuracy']) model.fit(X_ints_train + [X_train_ar], # create a list of inputs for embeddings y_train, epochs=20, batch_size=32, verbose=1) # - yhat = np.round(model.predict(X_ints_test + [X_test_ar])) print(mt.confusion_matrix(y_test,yhat),mt.accuracy_score(y_test,yhat)) # #### Model cross_columns1 # + # 'workclass','education','marital_status', # 'occupation','relationship','race', # 'sex','country' def f1_m(y_true, y_pred): precision = precision_m(y_true, y_pred) recall = recall_m(y_true, y_pred) return 2*((precision*recall)/(precision+recall+K.epsilon())) cross_columns = [['breed','animal_type'], ['color', 'sex'], ['spayed_neutered', 'sex'] ] #'workclass','education','marital_status','occupation','relationship','race','sex','country' # we need to create separate lists for each branch embed_branches = [] X_ints_train = [] X_ints_test = [] all_inputs = [] all_wide_branch_outputs = [] for cols in cross_columns: # encode crossed columns as ints for the embedding enc = LabelEncoder() # create crossed labels X_crossed_train = X_train[cols].apply(lambda x: '_'.join(x), axis=1) X_crossed_test = X_test[cols].apply(lambda x: '_'.join(x), axis=1) enc.fit(np.hstack((X_crossed_train.values, X_crossed_test.values))) X_crossed_train = enc.transform(X_crossed_train) X_crossed_test = enc.transform(X_crossed_test) X_ints_train.append( X_crossed_train ) X_ints_test.append( X_crossed_test ) # get the number of categories N = max(X_ints_train[-1]+1) # same as the max(df_train[col]) # create embedding branch from the number of categories inputs = Input(shape=(1,),dtype='int32', name = '_'.join(cols)) all_inputs.append(inputs) x = Embedding(input_dim=N, output_dim=int(np.sqrt(N)), input_length=1, name = '_'.join(cols)+'_embed')(inputs) x = Flatten()(x) all_wide_branch_outputs.append(x) # merge the branches together wide_branch = concatenate(all_wide_branch_outputs, name='wide_concat') wide_branch = Dense(units=1,activation='sigmoid',name='wide_combined')(wide_branch) # reset this input branch all_deep_branch_outputs = [] # add in the embeddings for col in categorical_headers_ints: # encode as ints for the embedding X_ints_train.append( X_train[col].values ) X_ints_test.append( X_test[col].values ) # get the number of categories N = max(X_ints_train[-1]+1) # same as the max(df_train[col]) # create embedding branch from the number of categories inputs = Input(shape=(1,),dtype='int32', name=col) all_inputs.append(inputs) x = Embedding(input_dim=N, output_dim=int(np.sqrt(N)), input_length=1, name=col+'_embed')(inputs) x = Flatten()(x) all_deep_branch_outputs.append(x) # also get a dense branch of the numeric features all_inputs.append(Input(shape=(X_train_ar.shape[1],), sparse=False, name='numeric_data')) x = Dense(units=20, activation='relu',name='numeric_1')(all_inputs[-1]) all_deep_branch_outputs.append( x ) # merge the deep branches together deep_branch = concatenate(all_deep_branch_outputs,name='concat_embeds') deep_branch = Dense(units=100,activation='relu', name='deep1')(deep_branch) deep_branch = Dense(units=50,activation='relu', name='deep2')(deep_branch) deep_branch = Dense(units=25,activation='relu', name='deep3')(deep_branch) final_branch = concatenate([wide_branch, deep_branch],name='concat_deep_wide') final_branch = Dense(units=1,activation='sigmoid',name='combined')(final_branch) model = Model(inputs=all_inputs, outputs=final_branch) # + from IPython.display import SVG from keras.utils.vis_utils import model_to_dot # you will need to install pydot properly on your machine to get this running SVG(model_to_dot(model).create(prog='dot', format='svg')) # + # #%%time model.compile(optimizer='adagrad', loss='mean_squared_error', metrics=['acc', f1_m]) # lets also add the history variable to see how we are doing # and lets add a validation set to keep track of our progress history = model.fit(X_ints_train+ [X_train_ar], y_train, epochs=30, batch_size=50, verbose=1, validation_data = (X_ints_test + [X_test_ar], y_test)) # - yhat = np.round(model.predict(X_ints_test + [X_test_ar])) print(mt.confusion_matrix(y_test,yhat), mt.accuracy_score(y_test,yhat)) # #### Model cross_columns2 # + # 'workclass','education','marital_status', # 'occupation','relationship','race', # 'sex','country' cross_columns = [['breed','sex'], ['color', 'spayed_neutered']] #'workclass','education','marital_status','occupation','relationship','race','sex','country' # we need to create separate lists for each branch embed_branches = [] X_ints_train = [] X_ints_test = [] all_inputs = [] all_wide_branch_outputs = [] for cols in cross_columns: # encode crossed columns as ints for the embedding enc = LabelEncoder() # create crossed labels X_crossed_train = X_train[cols].apply(lambda x: '_'.join(x), axis=1) X_crossed_test = X_test[cols].apply(lambda x: '_'.join(x), axis=1) enc.fit(np.hstack((X_crossed_train.values, X_crossed_test.values))) X_crossed_train = enc.transform(X_crossed_train) X_crossed_test = enc.transform(X_crossed_test) X_ints_train.append( X_crossed_train ) X_ints_test.append( X_crossed_test ) # get the number of categories N = max(X_ints_train[-1]+1) # same as the max(df_train[col]) # create embedding branch from the number of categories inputs = Input(shape=(1,),dtype='int32', name = '_'.join(cols)) all_inputs.append(inputs) x = Embedding(input_dim=N, output_dim=int(np.sqrt(N)), input_length=1, name = '_'.join(cols)+'_embed')(inputs) x = Flatten()(x) all_wide_branch_outputs.append(x) # merge the branches together wide_branch = concatenate(all_wide_branch_outputs, name='wide_concat') wide_branch = Dense(units=1,activation='sigmoid',name='wide_combined')(wide_branch) # reset this input branch all_deep_branch_outputs = [] # add in the embeddings for col in categorical_headers_ints: # encode as ints for the embedding X_ints_train.append( X_train[col].values ) X_ints_test.append( X_test[col].values ) # get the number of categories N = max(X_ints_train[-1]+1) # same as the max(df_train[col]) # create embedding branch from the number of categories inputs = Input(shape=(1,),dtype='int32', name=col) all_inputs.append(inputs) x = Embedding(input_dim=N, output_dim=int(np.sqrt(N)), input_length=1, name=col+'_embed')(inputs) x = Flatten()(x) all_deep_branch_outputs.append(x) # also get a dense branch of the numeric features all_inputs.append(Input(shape=(X_train_ar.shape[1],), sparse=False, name='numeric_data')) x = Dense(units=20, activation='relu',name='numeric_1')(all_inputs[-1]) all_deep_branch_outputs.append( x ) # merge the deep branches together deep_branch = concatenate(all_deep_branch_outputs,name='concat_embeds') deep_branch = Dense(units=100,activation='relu', name='deep1')(deep_branch) deep_branch = Dense(units=50,activation='relu', name='deep2')(deep_branch) deep_branch = Dense(units=25,activation='relu', name='deep3')(deep_branch) final_branch = concatenate([wide_branch, deep_branch],name='concat_deep_wide') final_branch = Dense(units=1,activation='sigmoid',name='combined')(final_branch) model1 = Model(inputs=all_inputs, outputs=final_branch) # + from IPython.display import SVG from keras.utils.vis_utils import model_to_dot # you will need to install pydot properly on your machine to get this running SVG(model_to_dot(model1).create(prog='dot', format='svg')) # + # %%time model1.compile(optimizer='adagrad', loss='mean_squared_error', metrics=['acc', f1_m]) # lets also add the history variable to see how we are doing # and lets add a validation set to keep track of our progress history1 = model1.fit(X_ints_train+ [X_train_ar], y_train, epochs=30, batch_size=50, verbose=1, validation_data = (X_ints_test + [X_test_ar], y_test)) # + from matplotlib import pyplot as plt # %matplotlib inline plt.figure(figsize=(15,11)) plt.subplot(2,2,2) plt.ylabel('MSE Training acc and val_acc') plt.xlabel('epochs Model 2') plt.plot(history1.history['f1_m']) plt.plot(history1.history['val_f1_m']) plt.subplot(2,2,4) plt.plot(history1.history['loss']) plt.ylabel('MSE Training Loss') plt.xlabel('epochs Model 2') plt.plot(history1.history['val_loss']) plt.xlabel('epochs Model 2') plt.subplot(2,2,1) plt.ylabel('MSE Training acc and val_acc') plt.xlabel('epochs Model 1') plt.plot(history.history['f1_m']) plt.plot(history.history['val_f1_m']) plt.subplot(2,2,3) plt.plot(history.history['loss']) plt.ylabel('MSE Training Loss') plt.xlabel('epochs Model 1') plt.plot(history.history['val_loss']) plt.xlabel('epochs Model 1') # - # Model 1 corsing # A-breed ,animal_type # B-sex ,color # C-sex, spayed_neutered # # model 2 is crossing : # A-breed,sex # B-color', spayed_neutered # # # shown above the accuracy and the valdation for both model1 and model2 i want to argue that model 1 is slightly better than model2 because of the following reasons : # # 1- the elements model 1 is crossing are highly corrlated, females always have more colors , breed is highly colrated with animal type so it provide a better genrliztion # # 2- because of the first point , model 1 resulted in a better over all accurace and valdation accuracy than model 2 # # 3- best accracy for model1 is 0.7781 and validation is 0.7545 # # ### Model A 5 layars # + # 'workclass','education','marital_status', # 'occupation','relationship','race', # 'sex','country' cross_columns = [['breed','animal_type'], ['color', 'sex'], ['spayed_neutered', 'sex'] ] #'workclass','education','marital_status','occupation','relationship','race','sex','country' # we need to create separate lists for each branch embed_branches = [] X_ints_train = [] X_ints_test = [] all_inputs = [] all_wide_branch_outputs = [] for cols in cross_columns: # encode crossed columns as ints for the embedding enc = LabelEncoder() # create crossed labels X_crossed_train = X_train[cols].apply(lambda x: '_'.join(x), axis=1) X_crossed_test = X_test[cols].apply(lambda x: '_'.join(x), axis=1) enc.fit(np.hstack((X_crossed_train.values, X_crossed_test.values))) X_crossed_train = enc.transform(X_crossed_train) X_crossed_test = enc.transform(X_crossed_test) X_ints_train.append( X_crossed_train ) X_ints_test.append( X_crossed_test ) # get the number of categories N = max(X_ints_train[-1]+1) # same as the max(df_train[col]) # create embedding branch from the number of categories inputs = Input(shape=(1,),dtype='int32', name = '_'.join(cols)) all_inputs.append(inputs) x = Embedding(input_dim=N, output_dim=int(np.sqrt(N)), input_length=1, name = '_'.join(cols)+'_embed')(inputs) x = Flatten()(x) all_wide_branch_outputs.append(x) # merge the branches together wide_branch = concatenate(all_wide_branch_outputs, name='wide_concat') wide_branch = Dense(units=1,activation='sigmoid',name='wide_combined')(wide_branch) # reset this input branch all_deep_branch_outputs = [] # add in the embeddings for col in categorical_headers_ints: # encode as ints for the embedding X_ints_train.append( X_train[col].values ) X_ints_test.append( X_test[col].values ) # get the number of categories N = max(X_ints_train[-1]+1) # same as the max(df_train[col]) # create embedding branch from the number of categories inputs = Input(shape=(1,),dtype='int32', name=col) all_inputs.append(inputs) x = Embedding(input_dim=N, output_dim=int(np.sqrt(N)), input_length=1, name=col+'_embed')(inputs) x = Flatten()(x) all_deep_branch_outputs.append(x) # also get a dense branch of the numeric features all_inputs.append(Input(shape=(X_train_ar.shape[1],), sparse=False, name='numeric_data')) x = Dense(units=20, activation='relu',name='numeric_1')(all_inputs[-1]) all_deep_branch_outputs.append( x ) # merge the deep branches together deep_branch = concatenate(all_deep_branch_outputs,name='concat_embeds') deep_branch = Dense(units=100,activation='relu', name='deep1')(deep_branch) deep_branch = Dense(units=50,activation='relu', name='deep2')(deep_branch) deep_branch = Dense(units=25,activation='relu', name='deep3')(deep_branch) deep_branch = Dense(units=15,activation='relu', name='deep4')(deep_branch) final_branch = concatenate([wide_branch, deep_branch],name='concat_deep_wide') final_branch = Dense(units=1,activation='sigmoid',name='combined')(final_branch) modelA = Model(inputs=all_inputs, outputs=final_branch) # + # %%time modelA.compile(optimizer='adagrad', loss='mean_squared_error', metrics=['acc', f1_m]) # lets also add the history variable to see how we are doing # and lets add a validation set to keep track of our progress historyA = modelA.fit(X_ints_train+ [X_train_ar], y_train, epochs=30, batch_size=50, verbose=1, validation_data = (X_ints_test + [X_test_ar], y_test)) # - # ### Model B 7 layers # + # 'workclass','education','marital_status', # 'occupation','relationship','race', # 'sex','country' cross_columns = [['breed','animal_type'], ['color', 'sex'], ['spayed_neutered', 'sex'] ] # we need to create separate lists for each branch embed_branches = [] X_ints_train = [] X_ints_test = [] all_inputs = [] all_wide_branch_outputs = [] for cols in cross_columns: # encode crossed columns as ints for the embedding enc = LabelEncoder() # create crossed labels X_crossed_train = X_train[cols].apply(lambda x: '_'.join(x), axis=1) X_crossed_test = X_test[cols].apply(lambda x: '_'.join(x), axis=1) enc.fit(np.hstack((X_crossed_train.values, X_crossed_test.values))) X_crossed_train = enc.transform(X_crossed_train) X_crossed_test = enc.transform(X_crossed_test) X_ints_train.append( X_crossed_train ) X_ints_test.append( X_crossed_test ) # get the number of categories N = max(X_ints_train[-1]+1) # same as the max(df_train[col]) # create embedding branch from the number of categories inputs = Input(shape=(1,),dtype='int32', name = '_'.join(cols)) all_inputs.append(inputs) x = Embedding(input_dim=N, output_dim=int(np.sqrt(N)), input_length=1, name = '_'.join(cols)+'_embed')(inputs) x = Flatten()(x) all_wide_branch_outputs.append(x) # merge the branches together wide_branch = concatenate(all_wide_branch_outputs, name='wide_concat') wide_branch = Dense(units=1,activation='sigmoid',name='wide_combined')(wide_branch) # reset this input branch all_deep_branch_outputs = [] # add in the embeddings for col in categorical_headers_ints: # encode as ints for the embedding X_ints_train.append( X_train[col].values ) X_ints_test.append( X_test[col].values ) # get the number of categories N = max(X_ints_train[-1]+1) # same as the max(df_train[col]) # create embedding branch from the number of categories inputs = Input(shape=(1,),dtype='int32', name=col) all_inputs.append(inputs) x = Embedding(input_dim=N, output_dim=int(np.sqrt(N)), input_length=1, name=col+'_embed')(inputs) x = Flatten()(x) all_deep_branch_outputs.append(x) # also get a dense branch of the numeric features all_inputs.append(Input(shape=(X_train_ar.shape[1],), sparse=False, name='numeric_data')) x = Dense(units=20, activation='relu',name='numeric_1')(all_inputs[-1]) all_deep_branch_outputs.append( x ) # merge the deep branches together deep_branch = concatenate(all_deep_branch_outputs,name='concat_embeds') deep_branch = Dense(units=200,activation='relu', name='deep1')(deep_branch) deep_branch = Dense(units=100,activation='relu', name='deep2')(deep_branch) deep_branch = Dense(units=50,activation='relu', name='deep3')(deep_branch) deep_branch = Dense(units=25,activation='relu', name='deep4')(deep_branch) deep_branch = Dense(units=15,activation='relu', name='deep5')(deep_branch) deep_branch = Dense(units=10,activation='relu', name='deep6')(deep_branch) final_branch = concatenate([wide_branch, deep_branch],name='concat_deep_wide') final_branch = Dense(units=1,activation='sigmoid',name='combined')(final_branch) modelB = Model(inputs=all_inputs, outputs=final_branch) # + # %%time modelB.compile(optimizer='adagrad', loss='mean_squared_error', metrics=['acc', f1_m]) # lets also add the history variable to see how we are doing # and lets add a validation set to keep track of our progress historyB = modelB.fit(X_ints_train+ [X_train_ar], y_train, epochs=30, batch_size=50, verbose=1, validation_data = (X_ints_test + [X_test_ar], y_test)) # + from matplotlib import pyplot as plt # %matplotlib inline plt.figure(figsize=(15,11)) plt.subplot(2,2,1) plt.ylabel('MSE Training acc and val_acc') plt.xlabel('epochs Model A') plt.plot(historyA.history['f1_m']) plt.plot(historyA.history['val_f1_m']) plt.subplot(2,2,3) plt.plot(historyA.history['loss']) plt.ylabel('MSE Training Loss and val_loss') plt.plot(historyA.history['val_loss']) plt.xlabel('epochs Model A') plt.subplot(2,2,2) plt.ylabel('MSE Training acc and val_acc') plt.xlabel('epochs Model B') plt.plot(historyB.history['f1_m']) plt.plot(historyB.history['val_f1_m']) plt.subplot(2,2,4) plt.plot(historyB.history['loss']) plt.ylabel('MSE Training Loss') plt.plot(historyB.history['val_loss']) plt.xlabel('epochs Model B') # - # model A and model B has diffrent number of layers : # # 1- Model A has 5 layers # # 2- model B has 7 layers, # # # model B seems to have a better accuracy that reached as high as 81 % but on the other hand the valdation loss got higher and higher and resulted in a loss valdation accuracy and the reasone behind that is an over fit accured because of the high number of layers , # # so i would say Modle B provided a better F1 accurace with a minmum valdation loss for itration bellow 15 epcos thus model B is better than model A only if we reduced the number of epocs to 15 which in turns increase the time effecance # # # ### MLP inputs = Input(shape=(X_train_ar.shape[1],)) x = Dense(units=100, activation='relu')(inputs) x = Dense(units=50, activation='relu')(x) x = Dense(units=25, activation='relu')(x) x = Dense(units=15, activation='relu')(x) x = Dense(units=10, activation='relu')(x) predictions = Dense(1,activation='sigmoid')(x) modelMLP1 = Model(inputs=inputs, outputs=predictions) # + modelMLP1.compile(optimizer='sgd', loss='mean_squared_error', metrics=['acc', f1_m]) modelMLP1.summary() # - historyMLP1 = modelMLP1.fit(X_train_ar, y_train, epochs=15, batch_size=50, verbose=1, validation_data = (X_test_ar, y_test)) # + # %%time modelB.compile(optimizer='adagrad', loss='mean_squared_error', metrics=['acc', f1_m]) # lets also add the history variable to see how we are doing # and lets add a validation set to keep track of our progress historyB = modelB.fit(X_ints_train+ [X_train_ar], y_train, epochs=15, batch_size=50, verbose=1, validation_data = (X_ints_test + [X_test_ar], y_test)) # + from sklearn import metrics as mt yhat_proba = modelMLP1.predict(X_test_ar) yhatMLP = np.round(yhat_proba) print(mt.confusion_matrix(y_test,yhatMLP),mt.accuracy_score(y_test,yhat)) from sklearn import metrics as mt yhat_proba1 = modelB.predict(X_ints_test+ [X_test_ar]) yhatB = np.round(yhat_proba1) print(mt.confusion_matrix(y_test,yhatB),mt.accuracy_score(y_test,yhatB)) # + plt.figure(figsize=(15,11)) plt.subplot(2,2,1) plt.ylabel('MSE Training acc and val_acc') plt.xlabel('epochs Model B') plt.plot(historyB.history['f1_m']) plt.plot(historyB.history['val_f1_m']) plt.subplot(2,2,3) plt.plot(historyB.history['loss']) plt.ylabel('MSE Training Loss and val_loss') plt.plot(historyB.history['val_loss']) plt.xlabel('epochs Model B') plt.subplot(2,2,2) plt.ylabel('MSE Training acc and val_acc') plt.xlabel('epochs MLP') plt.plot(historyMLP1.history['f1_m']) plt.plot(historyMLP1.history['val_f1_m']) plt.subplot(2,2,4) plt.plot(historyMLP1.history['loss']) plt.ylabel('MSE Training Loss and val_loss') plt.plot(historyMLP1.history['val_loss']) plt.xlabel('epochs MLP') # - # #### 2.3 Comparing Models with Standard MLP from sklearn import metrics fpr, tpr, thresholds = metrics.roc_curve(y_test, yhatMLP) AUCMLP=metrics.roc_auc_score(y_test, yhatMLP) AUCB=metrics.roc_auc_score(y_test, yhatB) plt.figure(figsize=(15,5)) plt.subplot(1,2,1) plt.plot(fpr, tpr,label= ['area under the curve =',AUCMLP]) plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.0]) plt.rcParams['font.size'] = 12 plt.title('ROC curve for MLP classifier') plt.xlabel('False Positive Rate (1 - Specificity)') plt.legend() plt.ylabel('True Positive Rate (Sensitivity)') plt.grid(True) from sklearn import metrics fpr, tpr, thresholds = metrics.roc_curve(y_test, yhatB) plt.subplot(1,2,2) plt.plot(fpr, tpr,label= ['area under the curve =',AUCB]) plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.0]) plt.rcParams['font.size'] = 12 plt.title('ROC curve for Deep and wide classifier') plt.xlabel('False Positive Rate (1 - Specificity)') plt.ylabel('True Positive Rate (Sensitivity)') plt.legend() plt.grid(True) # the plots above compares MLP with our best Deep and wide model we found the fellowing observation : # # 1- f1_ score for our Deep model is higher than the MLP # # 2- the f1 valdation accuracy for deep is slightly better on avrage than the MLP # # 3- the big difrance between both our deep model and the MLP, that there is big gab between the valdation accurace and the accuracy , in my opinion since deep network is better at genrlizing and wide network is better at memoriztion so : # A- the deep and wide model did so good with the test data so its vary genral to it # B- on the other hand the deep network had same results for both the training nd the test data which means that it better predicted our data set using only memoriztion # # i would argue that for this data set an Modle B did slightly better that the MLP as seen from the area under the curve value on the plots # ### 3. T-SNE Dimensionality Reduction # --- # ### 4. References # ---
project5/project5_template.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import sys import os if "/opt/condaenv/anaconda2/envs/sentimentInduction" not in sys.path: sys.path.append("/opt/condaenv/anaconda2/envs/sentimentInduction") if "/opt/condaenv/anaconda2/envs/sentimentInduction/lib/python2.7/site-packages" not in sys.path: sys.path.append("/opt/condaenv/anaconda2/envs/sentimentInduction/lib/python2.7/site-packages") import pandas as pd from socialsentRun.socialconfig import config COLLECT_ACCURACIS_DIR = config.get("COLLECT_ACCURACIS_DIR") import ujson open_file = os.path.join(COLLECT_ACCURACIS_DIR,"learn_evaluations_all.txt") dfv = pd.read_json(open_file,orient='records',lines=True,encoding="utf-8") dfv.head() dfmeans_vocab_sz = dfv.copy().set_index(['vocab_sz','method']) dfmeans_vocab_sz["mean_acccuracy_all_vocab_sz"] = dfmeans_vocab_sz.groupby(['vocab_sz','method'])['accuracy'].mean() # dfmeans_vocab_sz["var_acccuracy_all_vocab_sz"] = dfmeans_vocab_sz.groupby(['vocab_sz','method'])['accuracy'].var() dfmeans_vocab_sz.drop(['accuracy','emb_params'],inplace=True,axis=1) dfmeans_vocab_sz = dfmeans_vocab_sz.reset_index() dfmeans_vocab_sz = dfmeans_vocab_sz.drop_duplicates() dfmeans_vocab_sz.head(1) dfmeans_vocab_sz = dfmeans_vocab_sz.set_index(['vocab_sz','method']) dfmeans_vocab_sz = dfmeans_vocab_sz.round(2) dfmeans_vocab_sz = dfmeans_vocab_sz.unstack(1) dfmeans_vocab_sz.head() dfmeans_params = dfv.copy().set_index(['emb_params','method']) dfmeans_params["mean_acccuracy_all_params"] = dfmeans_params.groupby(['emb_params','method'])['accuracy'].mean() # dfmeans_params["var_acccuracy_all_vocab_sz"] = dfmeans_params.groupby(['vocab_sz','method'])['accuracy'].var() dfmeans_params.drop(['accuracy','vocab_sz'],inplace=True,axis=1) dfmeans_params = dfmeans_params.reset_index() dfmeans_params = dfmeans_params.drop_duplicates() dfmeans_params.head(1) dfmeans_params = dfmeans_params.set_index(['emb_params','method']) dfmeans_params = dfmeans_params.round(2) dfmeans_params = dfmeans_params.unstack(1) dfmeans_params.head(9) dfmeans_params.stack().query('method == "rand_walk"')['mean_acccuracy_all_params'].values
analyse_learn_accuracies_224022_docs.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" # # Traffic forecasting using graph neural networks and LSTM # # **Author:** [<NAME>](https://www.linkedin.com/in/arash-khodadadi-08a02490/)<br> # **Date created:** 2021/12/28<br> # **Last modified:** 2021/12/28<br> # **Description:** This example demonstrates how to do timeseries forecasting over graphs. # + [markdown] colab_type="text" # ## Introduction # # This example shows how to forecast traffic condition using graph neural networks and LSTM. # Specifically, we are interested in predicting the future values of the traffic speed given # a history of the traffic speed for a collection of road segments. # # One popular method to # solve this problem is to consider each road segment's traffic speed as a separate # timeseries and predict the future values of each timeseries # using the past values of the same timeseries. # # This method, however, ignores the dependency of the traffic speed of one road segment on # the neighboring segments. To be able to take into account the complex interactions between # the traffic speed on a collection of neighboring roads, we can define the traffic network # as a graph and consider the traffic speed as a signal on this graph. In this example, # we implement a neural network architecture which can process timeseries data over a graph. # We first show how to process the data and create a # [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) for # forecasting over graphs. Then, we implement a model which uses graph convolution and # LSTM layers to perform forecasting over a graph. # # The data processing and the model architecture are inspired by this paper: # # <NAME>, <NAME>, and <NAME>. "Spatio-temporal graph convolutional networks: # a deep learning framework for traffic forecasting." Proceedings of the 27th International # Joint Conference on Artificial Intelligence, 2018. # ([github](https://github.com/VeritasYin/STGCN_IJCAI-18)) # + [markdown] colab_type="text" # ## Setup # + colab_type="code" import pandas as pd import numpy as np import os import typing import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers # + [markdown] colab_type="text" # ## Data preparation # + [markdown] colab_type="text" # ### Data description # # We use a real-world traffic speed dataset named `PeMSD7`. We use the version # collected and prepared by [Yu et al., 2018](https://arxiv.org/abs/1709.04875) # and available # [here](https://github.com/VeritasYin/STGCN_IJCAI-18/tree/master/data_loader). # # The data consists of two files: # # - `W_228.csv` contains the distances between 228 # stations across the District 7 of California. # - `V_228.csv` contains traffic # speed collected for those stations in the weekdays of May and June of 2012. # # The full description of the dataset can be found in # [Yu et al., 2018](https://arxiv.org/abs/1709.04875). # + [markdown] colab_type="text" # ### Loading data # + colab_type="code" url = "https://github.com/VeritasYin/STGCN_IJCAI-18/raw/master/data_loader/PeMS-M.zip" data_dir = keras.utils.get_file(origin=url, extract=True, archive_format="zip") data_dir = data_dir.rstrip(".zip") route_distances = pd.read_csv( os.path.join(data_dir, "W_228.csv"), header=None ).to_numpy() speeds_array = pd.read_csv(os.path.join(data_dir, "V_228.csv"), header=None).to_numpy() print(f"route_distances shape={route_distances.shape}") print(f"speeds_array shape={speeds_array.shape}") # + [markdown] colab_type="text" # ### sub-sampling roads # # To reduce the problem size and make the training faster, we will only # work with a sample of 26 roads out of the 228 roads in the dataset. # We have chosen the roads by starting from road 0, choosing the 5 closest # roads to it, and continuing this process until we get 25 roads. You can choose # any other subset of the roads. We chose the roads in this way to increase the likelihood # of having roads with correlated speed timeseries. # `sample_routes` contains the IDs of the selected roads. # + colab_type="code" sample_routes = [ 0, 1, 4, 7, 8, 11, 15, 108, 109, 114, 115, 118, 120, 123, 124, 126, 127, 129, 130, 132, 133, 136, 139, 144, 147, 216, ] route_distances = route_distances[np.ix_(sample_routes, sample_routes)] speeds_array = speeds_array[:, sample_routes] print(f"route_distances shape={route_distances.shape}") print(f"speeds_array shape={speeds_array.shape}") # + [markdown] colab_type="text" # ### Data visualization # # Here are the timeseries of the traffic speed for two of the routes: # + colab_type="code" plt.figure(figsize=(18, 6)) plt.plot(speeds_array[:, [0, -1]]) plt.legend(["route_0", "route_25"]) # + [markdown] colab_type="text" # We can also visualize the correlation between the timeseries in different routes. # + colab_type="code" plt.figure(figsize=(8, 8)) plt.matshow(np.corrcoef(speeds_array.T), 0) plt.xlabel("road number") plt.ylabel("road number") # + [markdown] colab_type="text" # Using this correlation heatmap, we can see that for example the speed in # routes 4, 5, 6 are highly correlated. # + [markdown] colab_type="text" # ### Splitting and normalizing data # # Next, we split the speed values array into train/validation/test sets, # and normalize the resulting arrays: # + colab_type="code" train_size, val_size = 0.5, 0.2 def preprocess(data_array: np.ndarray, train_size: float, val_size: float): """Splits data into train/val/test sets and normalizes the data. Args: data_array: ndarray of shape `(num_time_steps, num_routes)` train_size: A float value between 0.0 and 1.0 that represent the proportion of the dataset to include in the train split. val_size: A float value between 0.0 and 1.0 that represent the proportion of the dataset to include in the validation split. Returns: `train_array`, `val_array`, `test_array` """ num_time_steps = data_array.shape[0] num_train, num_val = ( int(num_time_steps * train_size), int(num_time_steps * val_size), ) train_array = data_array[:num_train] mean, std = train_array.mean(axis=0), train_array.std(axis=0) train_array = (train_array - mean) / std val_array = (data_array[num_train : (num_train + num_val)] - mean) / std test_array = (data_array[(num_train + num_val) :] - mean) / std return train_array, val_array, test_array train_array, val_array, test_array = preprocess(speeds_array, train_size, val_size) print(f"train set size: {train_array.shape}") print(f"validation set size: {val_array.shape}") print(f"test set size: {test_array.shape}") # + [markdown] colab_type="text" # ### Creating TensorFlow Datasets # # Next, we create the datasets for our forecasting problem. The forecasting problem # can be stated as follows: given a sequence of the # road speed values at times `t+1, t+2, ..., t+T`, we want to predict the future values of # the roads speed for times `t+T+1, ..., t+T+h`. So for each time `t` the inputs to our # model are `T` vectors each of size `N` and the targets are `h` vectors each of size `N`, # where `N` is the number of roads. # + [markdown] colab_type="text" # We use the Keras built-in function # [`timeseries_dataset_from_array()`](https://www.tensorflow.org/api_docs/python/tf/keras/utils/timeseries_dataset_from_array). # The function `create_tf_dataset()` below takes as input a `numpy.ndarray` and returns a # `tf.data.Dataset`. In this function `input_sequence_length=T` and `forecast_horizon=h`. # # The argument `multi_horizon` needs more explanation. Assume `forecast_horizon=3`. # If `multi_horizon=True` then the model will make a forecast for time steps # `t+T+1, t+T+2, t+T+3`. So the target will have shape `(T,3)`. But if # `multi_horizon=False`, the model will make a forecast only for time step `t+T+3` and # so the target will have shape `(T, 1)`. # # You may notice that the input tensor in each batch has shape # `(batch_size, input_sequence_length, num_routes, 1)`. The last dimension is added to # make the model more general: at each time step, the input features for each raod may # contain multiple timeseries. For instance, one might want to use temperature timeseries # in addition to historical values of the speed as input features. In this example, # however, the last dimension of the input is always 1. # # We use the last 12 values of the speed in each road to forecast the speed for 3 time # steps ahead: # + colab_type="code" from tensorflow.keras.preprocessing import timeseries_dataset_from_array batch_size = 64 input_sequence_length = 12 forecast_horizon = 3 multi_horizon = False def create_tf_dataset( data_array: np.ndarray, input_sequence_length: int, forecast_horizon: int, batch_size: int = 128, shuffle=True, multi_horizon=True, ): """Creates tensorflow dataset from numpy array. This function creates a dataset where each element is a tuple `(inputs, targets)`. `inputs` is a Tensor of shape `(batch_size, input_sequence_length, num_routes, 1)` containing the `input_sequence_length` past values of the timeseries for each node. `targets` is a Tensor of shape `(batch_size, forecast_horizon, num_routes)` containing the `forecast_horizon` future values of the timeseries for each node. Args: data_array: np.ndarray with shape `(num_time_steps, num_routes)` input_sequence_length: Length of the input sequence (in number of timesteps). forecast_horizon: If `multi_horizon=True`, the target will be the values of the timeseries for 1 to `forecast_horizon` timesteps ahead. If `multi_horizon=False`, the target will be the value of the timeseries `forecast_horizon` steps ahead (only one value). batch_size: Number of timeseries samples in each batch. shuffle: Whether to shuffle output samples, or instead draw them in chronological order. multi_horizon: See `forecast_horizon`. Returns: A tf.data.Dataset instance. """ inputs = timeseries_dataset_from_array( np.expand_dims(data_array[:-forecast_horizon], axis=-1), None, sequence_length=input_sequence_length, shuffle=False, batch_size=batch_size, ) target_offset = ( input_sequence_length if multi_horizon else input_sequence_length + forecast_horizon - 1 ) target_seq_length = forecast_horizon if multi_horizon else 1 targets = timeseries_dataset_from_array( data_array[target_offset:], None, sequence_length=target_seq_length, shuffle=False, batch_size=batch_size, ) dataset = tf.data.Dataset.zip((inputs, targets)) if shuffle: dataset = dataset.shuffle(100) return dataset.prefetch(16).cache() train_dataset, val_dataset = ( create_tf_dataset(data_array, input_sequence_length, forecast_horizon, batch_size) for data_array in [train_array, val_array] ) test_dataset = create_tf_dataset( test_array, input_sequence_length, forecast_horizon, batch_size=test_array.shape[0], shuffle=False, multi_horizon=multi_horizon, ) # + [markdown] colab_type="text" # ### Roads Graph # # As mentioned before, we assume that the road segments form a graph. # The `PeMSD7` dataset has the road segments distance. The next step # is to create the graph adjacency matrix from these distances. Following # [Yu et al., 2018](https://arxiv.org/abs/1709.04875) (equation 10) we assume there # is an edge between two nodes in the graph if the distance between the corresponding roads # is less than a threshold. # + colab_type="code" def compute_adjacency_matrix( route_distances: np.ndarray, sigma2: float, epsilon: float ): """Computes the adjacency matrix from distances matrix. It uses the formula in https://github.com/VeritasYin/STGCN_IJCAI-18#data-preprocessing to compute an adjacency matrix from the distance matrix. The implementation follows that paper. Args: route_distances: np.ndarray of shape `(num_routes, num_routes)`. Entry `i,j` of this array is the distance between roads `i,j`. sigma2: Determines the width of the Gaussian kernel applied to the square distances matrix. epsilon: A threshold specifying if there is an edge between two nodes. Specifically, `A[i,j]=1` if `np.exp(-w2[i,j] / sigma2) >= epsilon` and `A[i,j]=0` otherwise, where `A` is the adjacency matrix and `w2=route_distances * route_distances` Returns: A boolean graph adjacency matrix. """ num_routes = route_distances.shape[0] route_distances = route_distances / 10000.0 w2, w_mask = ( route_distances * route_distances, np.ones([num_routes, num_routes]) - np.identity(num_routes), ) return (np.exp(-w2 / sigma2) >= epsilon) * w_mask # + [markdown] colab_type="text" # The function `compute_adjacency_matrix()` returns a boolean adjacency matrix # where 1 means there is an edge between two nodes. We use the following class # to store the information about the graph. # + colab_type="code" class GraphInfo: def __init__(self, edges: typing.Tuple[list, list], num_nodes: int): self.edges = edges self.num_nodes = num_nodes sigma2 = 0.1 epsilon = 0.5 adjacency_matrix = compute_adjacency_matrix(route_distances, sigma2, epsilon) node_indices, neighbor_indices = np.where(adjacency_matrix == 1) graph = GraphInfo( edges=(node_indices.tolist(), neighbor_indices.tolist()), num_nodes=adjacency_matrix.shape[0], ) print(f"number of nodes: {graph.num_nodes}, number of edges: {len(graph.edges[0])}") # + [markdown] colab_type="text" # ## Network architecture # # Our model for forecasting over the graph consists of a graph convolution # layer and a LSTM layer. # + [markdown] colab_type="text" # ### Graph convolution layer # # Our implementation of the graph convolution layer resembles the implementation # in [this Keras example](https://keras.io/examples/graph/gnn_citations/). Note that # in that example input to the layer is a 2D tensor of shape `(num_nodes,in_feat)` # but in our example the input to the layer is a 4D tensor of shape # `(num_nodes, batch_size, input_seq_length, in_feat)`. The graph convolution layer # performs the following steps: # # - The nodes' representations are computed in `self.compute_nodes_representation()` # by multiplying the input features by `self.weight` # - The aggregated neighbors' messages are computed in `self.compute_aggregated_messages()` # by first aggregating the neighbors' representations and then multiplying the results by # `self.weight` # - The final output of the layer is computed in `self.update()` by combining the nodes # representations and the neighbors' aggregated messages # + colab_type="code" class GraphConv(layers.Layer): def __init__( self, in_feat, out_feat, graph_info: GraphInfo, aggregation_type="mean", combination_type="concat", activation: typing.Optional[str] = None, **kwargs, ): super().__init__(**kwargs) self.in_feat = in_feat self.out_feat = out_feat self.graph_info = graph_info self.aggregation_type = aggregation_type self.combination_type = combination_type self.weight = tf.Variable( initial_value=keras.initializers.glorot_uniform()( shape=(in_feat, out_feat), dtype="float32" ), trainable=True, ) self.activation = layers.Activation(activation) def aggregate(self, neighbour_representations: tf.Tensor): aggregation_func = { "sum": tf.math.unsorted_segment_sum, "mean": tf.math.unsorted_segment_mean, "max": tf.math.unsorted_segment_max, }.get(self.aggregation_type) if aggregation_func: return aggregation_func( neighbour_representations, self.graph_info.edges[0], num_segments=self.graph_info.num_nodes, ) raise ValueError(f"Invalid aggregation type: {self.aggregation_type}") def compute_nodes_representation(self, features: tf.Tensor): """Computes each node's representation. The nodes' representations are obtained by multiplying the features tensor with `self.weight`. Note that `self.weight` has shape `(in_feat, out_feat)`. Args: features: Tensor of shape `(num_nodes, batch_size, input_seq_len, in_feat)` Returns: A tensor of shape `(num_nodes, batch_size, input_seq_len, out_feat)` """ return tf.matmul(features, self.weight) def compute_aggregated_messages(self, features: tf.Tensor): neighbour_representations = tf.gather(features, self.graph_info.edges[1]) aggregated_messages = self.aggregate(neighbour_representations) return tf.matmul(aggregated_messages, self.weight) def update(self, nodes_representation: tf.Tensor, aggregated_messages: tf.Tensor): if self.combination_type == "concat": h = tf.concat([nodes_representation, aggregated_messages], axis=-1) elif self.combination_type == "add": h = nodes_representation + aggregated_messages else: raise ValueError(f"Invalid combination type: {self.combination_type}.") return self.activation(h) def call(self, features: tf.Tensor): """Forward pass. Args: features: tensor of shape `(num_nodes, batch_size, input_seq_len, in_feat)` Returns: A tensor of shape `(num_nodes, batch_size, input_seq_len, out_feat)` """ nodes_representation = self.compute_nodes_representation(features) aggregated_messages = self.compute_aggregated_messages(features) return self.update(nodes_representation, aggregated_messages) # + [markdown] colab_type="text" # ### LSTM plus graph convolution # # By applying the graph convolution layer to the input tensor, we get another tensor # containing the nodes' representations over time (another 4D tensor). For each time # step, a node's representation is informed by the information from its neighbors. # # To make good forecasts, however, we need not only information from the neighbors # but also we need to process the information over time. To this end, we can pass each # node's tensor through a recurrent layer. The `LSTMGC` layer below, first applies # a graph convolution layer to the inputs and then passes the results through a # `LSTM` layer. # + colab_type="code" class LSTMGC(layers.Layer): """Layer comprising a convolution layer followed by LSTM and dense layers.""" def __init__( self, in_feat, out_feat, lstm_units: int, input_seq_len: int, output_seq_len: int, graph_info: GraphInfo, graph_conv_params: typing.Optional[dict] = None, **kwargs, ): super().__init__(**kwargs) # graph conv layer if graph_conv_params is None: graph_conv_params = { "aggregation_type": "mean", "combination_type": "concat", "activation": None, } self.graph_conv = GraphConv(in_feat, out_feat, graph_info, **graph_conv_params) self.lstm = layers.LSTM(lstm_units, activation="relu") self.dense = layers.Dense(output_seq_len) self.input_seq_len, self.output_seq_len = input_seq_len, output_seq_len def call(self, inputs): """Forward pass. Args: inputs: tf.Tensor of shape `(batch_size, input_seq_len, num_nodes, in_feat)` Returns: A tensor of shape `(batch_size, output_seq_len, num_nodes)`. """ # convert shape to (num_nodes, batch_size, input_seq_len, in_feat) inputs = tf.transpose(inputs, [2, 0, 1, 3]) gcn_out = self.graph_conv( inputs ) # gcn_out has shape: (num_nodes, batch_size, input_seq_len, out_feat) shape = tf.shape(gcn_out) num_nodes, batch_size, input_seq_len, out_feat = ( shape[0], shape[1], shape[2], shape[3], ) # LSTM takes only 3D tensors as input gcn_out = tf.reshape(gcn_out, (batch_size * num_nodes, input_seq_len, out_feat)) lstm_out = self.lstm( gcn_out ) # lstm_out has shape: (batch_size * num_nodes, lstm_units) dense_output = self.dense( lstm_out ) # dense_output has shape: (batch_size * num_nodes, output_seq_len) output = tf.reshape(dense_output, (num_nodes, batch_size, self.output_seq_len)) return tf.transpose( output, [1, 2, 0] ) # returns Tensor of shape (batch_size, output_seq_len, num_nodes) # + [markdown] colab_type="text" # ## Model training # + colab_type="code" in_feat = 1 batch_size = 64 epochs = 20 input_sequence_length = 12 forecast_horizon = 3 multi_horizon = False out_feat = 10 lstm_units = 64 graph_conv_params = { "aggregation_type": "mean", "combination_type": "concat", "activation": None, } st_gcn = LSTMGC( in_feat, out_feat, lstm_units, input_sequence_length, forecast_horizon, graph, graph_conv_params, ) inputs = layers.Input((input_sequence_length, graph.num_nodes, in_feat)) outputs = st_gcn(inputs) model = keras.models.Model(inputs, outputs) model.compile( optimizer=keras.optimizers.RMSprop(learning_rate=0.0002), loss=keras.losses.MeanSquaredError(), ) model.fit( train_dataset, validation_data=val_dataset, epochs=epochs, callbacks=[keras.callbacks.EarlyStopping(patience=10)], ) # + [markdown] colab_type="text" # ## Making forecasts on test set # # Now we can use the trained model to make forecasts for the test set. Below, we # compute the MAE of the model and compare it to the MAE of naive forecasts. # The naive forecasts are the last value of the speed for each node. # + colab_type="code" x_test, y = next(test_dataset.as_numpy_iterator()) y_pred = model.predict(x_test) plt.figure(figsize=(18, 6)) plt.plot(y[:, 0, 0]) plt.plot(y_pred[:, 0, 0]) plt.legend(["actual", "forecast"]) naive_mse, model_mse = ( np.square(x_test[:, -1, :, 0] - y[:, 0, :]).mean(), np.square(y_pred[:, 0, :] - y[:, 0, :]).mean(), ) print(f"naive MAE: {naive_mse}, model MAE: {model_mse}") # + [markdown] colab_type="text" # Of course, the goal here is to demonstrate the method, # not to achieve the best performance. To improve the # model's accuracy, all model hyperparameters should be tuned carefully. In addition, # several of the `LSTMGC` blocks can be stacked to increase the representation power # of the model.
examples/timeseries/ipynb/timeseries_traffic_forecasting.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] Collapsed="false" # <img src='./img/EU-Copernicus-EUM_3Logos.png' alt='Logo EU Copernicus EUMETSAT' align='right' width='50%'></img> # + [markdown] Collapsed="false" # <br> # + [markdown] Collapsed="false" # <a href="./31_ltpy_AC_SAF_GOME-2_L2_case_study.ipynb"><< 31 - AC SAF - GOME-2 - Level 2 - Case studies </a><span style="float:right;"><a href="./33_ltpy_Arctic_Fires_case_study.ipynb">33 - Arctic fires case study >></a></span> # + [markdown] Collapsed="false" # # 3.2 AC SAF - GOME-2 - Level 3 - Workflow examples # + [markdown] Collapsed="false" # The AC SAF GOME-2 Level 3 longterm monthly average data products are helpful to get a better overall picture of an atmospheric compostion parameter. Following workflows will help you to get a better understanding of the behaviour of one specific parameter: # * <b>Visualization of global monthly aggregates</b><br> # You can visualize the monthly aggregates on a `2D Map` # * <b>Visualization of a longterm trend for one specific location or region</b><br> # You can generate a `time series plot` and visualize the longterm trend for one specific location. You can also compare two different locations and data values can also be spatially aggregated to visualize the global or regional longterm trend # # For all examples, we will make use of the [xarray](http://xarray.pydata.org/en/stable/index.html) library. `xarray` offers useful functions and wrappers for the efficient handling of multi-dimensional data in `netCDF`. # + [markdown] Collapsed="false" # This notebook contains examples of the Level 3 data product `Nitrogen dioxide`. At the beginning, there is a section that loads required libraries and the Level 3 data product as xarray `Datasets`. Some functions that will be reused will be defined as well. # + [markdown] Collapsed="false" # #### Module outline: # * **[Nitrogen dioxide - Workflow example](#no2)** # * [1 - Explore data with a 2D Map](#2d_map) # * [2 - Explore longterm trends of tropospheric column NO2 for specific locations](#trends) # * [3 - Add standard deviation information to tropospheric column NO2 levels for specific locations](#std) # + [markdown] Collapsed="false" # <hr> # + [markdown] Collapsed="false" # #### Load required libraries # + Collapsed="false" # %matplotlib inline import xarray as xr import pandas as pd import numpy as np from pandas.plotting import register_matplotlib_converters register_matplotlib_converters() import matplotlib.pyplot as plt import cartopy.crs as ccrs from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER # + [markdown] Collapsed="false" toc-hr-collapsed=true # #### Load GOME-2 Level 3 data as xarray `Datasets` # - # Build a list of time coordinates with `pandas`, which you can use to assign the time dimension for the Level 3 data. # + Collapsed="false" # Build list of time coordinates with pandas time_coords = pd.date_range('2007-02', '2017-11', freq='MS').strftime("%Y-%m").astype('datetime64[ns]') time_coords # + [markdown] Collapsed="false" # ##### Load NO<sub>2</sub> data products # - # Now, let's load the Nitrogendioxide NO<sub>2</sub> parameter with `xarray` and assign values for the three dimensions, `latitude`, `longitude` and `time`. # + Collapsed="false" no2_ds = xr.open_dataset('./eodata/gome2/level3/no2/GOME_NO2_Global_201701_METOPB_DLR_v1.nc') no2 = xr.open_mfdataset('./eodata/gome2/level3/no2/*.nc', concat_dim='time', combine='nested', group='PRODUCT') no2_assigned = no2.assign_coords(latitude=no2_ds.latitude, longitude=no2_ds.longitude, time=time_coords) no2_assigned # + [markdown] Collapsed="false" # <hr> # + [markdown] Collapsed="false" toc-hr-collapsed=true # #### Visualization functions # + [markdown] Collapsed="false" # ##### Visualize a `2D xarray DataArray` with `matplotlib` and `cartopy` # + [markdown] Collapsed="false" # You can make use of the function `visualize_imshow`, which was defined in module [22_ltpy_AC_SAF_GOME-2_L3](./22_ltpy_AC_SAF_GOME-2_L3.ipynb#plotting). However, we have to adjust it that the function accounts for multi-dimensional `DataArrays`. Let's call the modified function `visualize_md_map`. # + Collapsed="false" def visualize_md_map(array, conversion_factor, timepos, projection, vmax, color_scale): fig=plt.figure(figsize=(15, 10)) ax=plt.axes(projection=projection) ax.coastlines() ax.set_global() gl = ax.gridlines(draw_labels=True, linestyle='--') gl.xlabels_top=False gl.ylabels_right=False gl.xformatter=LONGITUDE_FORMATTER gl.yformatter=LATITUDE_FORMATTER gl.xlabel_style={'size':14} gl.ylabel_style={'size':14} plt.xticks(fontsize=16) plt.yticks(fontsize=16) plt.xlabel("Longitude", fontsize=16) plt.ylabel("Latitude", fontsize=16) ax.set_title(array.long_name, fontsize=20, pad=20.0) tmp = array[timepos,:,:] img1 = plt.imshow(tmp*conversion_factor, extent=[array.longitude.min(),array.longitude.max(),array.latitude.max(),array.latitude.min()], cmap=color_scale, aspect='auto', vmin=0, vmax=vmax) cbar = fig.colorbar(img1, ax=ax, orientation='horizontal', fraction=0.04, pad=0.1) cbar.set_label(str(conversion_factor) + ' ' + array.units, fontsize=16) cbar.ax.tick_params(labelsize=14) plt.show() # + [markdown] Collapsed="false" # <hr> # + [markdown] Collapsed="false" toc-hr-collapsed=true # ## <a id="no2"></a>Nitrogen dioxide - Workflow example # + [markdown] Collapsed="false" # ### <a id="2d_map"></a>1) Explore data with a `2D Map` # + [markdown] Collapsed="false" # A first step to explore the data is to create a simple `2D Map`. We defined the function `visualize_map`, which takes the following kwargs: # * a xarray `DataArray`, # * the index of the time axis (month-1 that shall be plotted), # * the projection, # * maximum data value to adjust the colorbar, # * a title for the plot, and # * the unit for the y-axis # # You can explore different months by changing the number of the month. # + Collapsed="false" month = 10 # in Python iterations start at 0, thus 10 equals data values for month November visualize_md_map(no2_assigned.NO2trop, 1e-15, month, ccrs.PlateCarree(), 30, 'viridis') # + [markdown] Collapsed="false" # <br> # + [markdown] Collapsed="false" # ### <a id="trends"></a>2) Explore longterm trends of tropospheric column NO<sub>2</sub> for specific locations # + [markdown] Collapsed="false" # The above map shows that there is a prominent increase of the tropospheric column of nitrogen dioxide in the Beijing region. Let's have a look at the longterm trend for individual point locations. # # xarray's label based selection method `sel` allows to select data based on name or value. `sel` further allows for enabling nearest neighbor, which select the closes e.g. latitude or longitude value. xarray offers a simple plotting wrapper of Python's matplotlib. Thus, we can directly apply the `plot` function to a `DataArray` object. We can add additional specifications, e.g. if it shall be a line plot, what color and style the line shall have, etc. # # # Let's plot the temporal trends for two locations: `Beijing` and `Darmstadt`, to see how much more the tropospheric NO<sub>2</sub> levels in Beijing are elevated compared to a city in Germany. We specify latitude and longitude coordinates for both cities and plot the two DataArrays as line plots. # # NOTE: the NO<sub>2</sub> values are shown in 1e<sup>-15</sup> molecules per cm<sup>2</sup>. # + Collapsed="false" fig = plt.figure(figsize=(20,5)) # Latitude / Longitude coordinates for Darmstadt city1 = 'Darmstadt' lat1 = 49.875 lon1 = 8.650 # Latitude / Longitude coordinates for Beijing city2 = 'Beijing' lat2 = 39.908 lon2 = 116.397 conversion_factor = 1e-15 city1_total = no2_assigned.NO2trop.sel(latitude=lat1, longitude=lon1, method='nearest')*conversion_factor city1_total.plot.line(color='slategrey', linestyle='dashed', label=city1) city2_total = no2_assigned.NO2trop.sel(latitude=lat2, longitude=lon2, method='nearest')*conversion_factor city2_total.plot.line(linestyle='dashed',color='firebrick', label=city2) plt.xticks(fontsize=16) plt.yticks(fontsize=16) plt.title(no2_assigned.NO2trop.long_name + " - 2007-2017", fontsize=20, pad=20) plt.ylabel(str(conversion_factor) + ' ' + no2_assigned.NO2trop.units, fontsize=16) plt.xlabel('Month', fontsize=16) plt.legend(fontsize=16,loc=1) plt.show() # + [markdown] Collapsed="false" # <br> # + [markdown] Collapsed="false" # The plot above shows that the NO<sub>2</sub> levels in Beijing are in general higher than the levels in Darmstadt. # # There is a prominent spike of tropospheric NO<sub>2</sub> levels in Beijing in January 2013, which reflects the [Heavy haze pollution episode over central and western China](https://link.springer.com/article/10.1007/s11430-013-4773-4). # + [markdown] Collapsed="false" # <br> # + [markdown] Collapsed="false" # <a href="./31_ltpy_AC_SAF_GOME-2_L2_case_study.ipynb"><< 31 - AC SAF - GOME-2 - Level 2 - Case studies </a><span style="float:right;"><a href="./33_ltpy_Arctic_Fires_case_study.ipynb">33 - Arctic fires case study >></a></span> # + [markdown] Collapsed="false" # <hr> # + [markdown] Collapsed="false" # <p style="text-align:left;">This project is licensed under the <a href="./LICENSE">MIT License</a> <span style="float:right;"><a href="https://gitlab.eumetsat.int/eumetlab/atmosphere/atmosphere">View on GitLab</a> | <a href="https://training.eumetsat.int/">EUMETSAT Training</a> | <a href=mailto:<EMAIL>>Contact</a></span></p>
32_ltpy_AC_SAF_GOME-2_L3_case_study.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/JoeOlang/NLP/blob/main/Swahili/Swahili%20Word%20Embedding%20Word2Vec%20-%20Model/pytorch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + colab={"base_uri": "https://localhost:8080/"} id="wpongtwk4JOr" outputId="91a31fca-6c54-4a2f-e0b5-29eef2a6c084" from google.colab import drive drive.mount('/drive') # + id="1n_UI5YJRPyZ" import random import copy import time import pandas as pd import numpy as np import gc import re import torch #import spacy from tqdm import tqdm_notebook, tnrange from tqdm.auto import tqdm tqdm.pandas(desc='Progress') from collections import Counter from nltk import word_tokenize import torch.nn as nn import torch.optim as optim import torch.nn.functional as F from torch.utils.data import Dataset, DataLoader from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence from torch.autograd import Variable from sklearn.metrics import f1_score import os from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences # cross validation and metrics from sklearn.model_selection import StratifiedKFold from sklearn.metrics import f1_score from torch.optim.optimizer import Optimizer from sklearn.preprocessing import StandardScaler from multiprocessing import Pool from functools import partial import numpy as np from sklearn.decomposition import PCA import torch as t import torch.nn as nn import torch.nn.functional as F import matplotlib.pyplot as plt # + id="Do-fciOCb1PW" embed_size = 300 # how big is each word vector max_features = 120000 # how many unique words to use (i.e num rows in embedding vector) maxlen = 750 # max number of words in a question to use batch_size = 512 # how many samples to process at once n_epochs = 5 # how many times to iterate over all samples n_splits = 5 # Number of K-fold Splits SEED = 10 debug = 0 # + id="HrACsbXq48EY" train_path = '/content/drive/MyDrive/Colab Notebooks/NLP/Swahili/Train.csv' test_path = '/content/drive/MyDrive/Colab Notebooks/NLP/Swahili/Test.csv' # + colab={"base_uri": "https://localhost:8080/", "height": 355} id="B9_e878z5QEv" outputId="78c27311-4990-411d-c665-7dff5ba076eb" df = pd.read_csv(train_path) df.head(10) # + id="Z2kGxHl4dR6r" df['len'] = df['content'].apply(lambda s : len(s)) # + colab={"base_uri": "https://localhost:8080/", "height": 414} id="vw8Kph_CdvjS" outputId="da0cfbb4-ce8e-4eda-dc11-d777f4d14f9e" df # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="WGIlKJhPdbUz" outputId="b1896667-8b28-4c58-d032-64afbcb4f14a" df['len'].plot.hist(bins=100) # + id="imBGh1SaePOS" import re def clean_text(x): pattern = r'[^a-zA-z0-9\s]' text = re.sub(pattern, '', x) return x def clean_numbers(x): if bool(re.search(r'\d', x)): x = re.sub('[0-9]{5,}', '#####', x) x = re.sub('[0-9]{4}', '####', x) x = re.sub('[0-9]{3}', '###', x) x = re.sub('[0-9]{2}', '##', x) return x # + id="Fnbc3yWwes1r" # lower the text df["content"] = df["content"].apply(lambda x: x.lower()) # Clean the text df["content"] = df["content"].apply(lambda x: clean_text(x)) # Clean numbers df["content"] = df["content"].apply(lambda x: clean_numbers(x)) # + colab={"base_uri": "https://localhost:8080/"} id="7zyvSEkzfI-d" outputId="9822e6f3-ad3e-43e4-9535-7f5f3a3c2151" df['category'].unique() # + id="x66X7cVCfScD" from sklearn.model_selection import train_test_split train_X, test_X, train_y, test_y = train_test_split(df['content'], df['category'], stratify=df['category'], test_size=0.25) # + colab={"base_uri": "https://localhost:8080/"} id="I9LdkWxmfy3L" outputId="bb7aaeae-779a-4026-f61e-e35e38ace1b2" print("Train shape : ",train_X.shape) print("Test shape : ",test_X.shape) # + id="WbBa5yzaghss" ## Tokenize the sentences tokenizer = Tokenizer(num_words=max_features) tokenizer.fit_on_texts(list(train_X)) train_X = tokenizer.texts_to_sequences(train_X) test_X = tokenizer.texts_to_sequences(test_X) ## Pad the sentences train_X = pad_sequences(train_X, maxlen=maxlen) test_X = pad_sequences(test_X, maxlen=maxlen) # + id="eNc0mPpPgtDD" from sklearn.preprocessing import LabelEncoder le = LabelEncoder() train_y = le.fit_transform(train_y.values) test_y = le.transform(test_y.values) # + colab={"base_uri": "https://localhost:8080/"} id="KJ4mHYpJgwei" outputId="73edeeab-71d7-458a-8d43-52f77eafdd51" le.classes_ # + id="8DAdtib-gzf8" ## FUNCTIONS TAKEN FROM https://www.kaggle.com/gmhost/gru-capsule def load_glove(word_index): EMBEDDING_FILE = '/content/drive/MyDrive/Colab Notebooks/NLP/Swahili/embeddings300.txt' def get_coefs(word,*arr): return word, np.asarray(arr, dtype='float32')[:300] embeddings_index = dict(get_coefs(*o.split(" ")) for o in open(EMBEDDING_FILE)) all_embs = np.stack(embeddings_index.values()) emb_mean,emb_std = -0.005838499,0.48782197 embed_size = all_embs.shape[1] nb_words = min(max_features, len(word_index)+1) embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size)) for word, i in word_index.items(): if i >= max_features: continue embedding_vector = embeddings_index.get(word) if embedding_vector is not None: embedding_matrix[i] = embedding_vector else: embedding_vector = embeddings_index.get(word.capitalize()) if embedding_vector is not None: embedding_matrix[i] = embedding_vector return embedding_matrix # + id="PmKC41wVr4Kw" import os word_index = tokenizer.word_index embedings_index = {} f = open(os.path.join('', '/content/drive/MyDrive/Colab Notebooks/NLP/Swahili/embeddings300.txt'), encoding="utf-8") for line in f: values = line.split() word = values[0] coefs = np.asarray(values[1:]) embedings_index[word] = coefs f.close() embedding_dimension = 300 num_words = len(word_index) + 1 embedings_matrix = np.zeros((num_words, embedding_dimension)) for word, i in word_index.items(): if i > num_words: continue embedings_vector = embedings_index.get(word) if embedings_vector is not None: embedings_matrix[i] = embedings_vector # + colab={"base_uri": "https://localhost:8080/"} id="RbpXjhqKs4em" outputId="32d1bb4e-3e9b-4975-cd58-69da280694ef" np.shape(embedings_matrix) # + id="NXi0rvcbvCxV" embedding_matrix = embedings_matrix # + id="NGE023DBu23H" class CNN_Text(nn.Module): def __init__(self): super(CNN_Text, self).__init__() filter_sizes = [1,2,3,5] num_filters = 36 n_classes = len(le.classes_) self.embedding = nn.Embedding(max_features, embed_size) self.embedding.weight = nn.Parameter(torch.tensor(embedding_matrix, dtype=torch.float32)) self.embedding.weight.requires_grad = False self.convs1 = nn.ModuleList([nn.Conv2d(1, num_filters, (K, embed_size)) for K in filter_sizes]) self.dropout = nn.Dropout(0.1) self.fc1 = nn.Linear(len(filter_sizes)*num_filters, n_classes) def forward(self, x): x = self.embedding(x) x = x.unsqueeze(1) x = [F.relu(conv(x)).squeeze(3) for conv in self.convs1] x = [F.max_pool1d(i, i.size(2)).squeeze(2) for i in x] x = torch.cat(x, 1) x = self.dropout(x) logit = self.fc1(x) return logit # + colab={"base_uri": "https://localhost:8080/"} id="28J4c9nYvVe8" outputId="3ceff15e-a657-4857-d512-4f3a695bb760" n_epochs = 6 model = CNN_Text() loss_fn = nn.CrossEntropyLoss(reduction='sum') optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.001) model.cuda() # Load train and test in CUDA Memory x_train = torch.tensor(train_X, dtype=torch.long).cuda() y_train = torch.tensor(train_y, dtype=torch.long).cuda() x_cv = torch.tensor(test_X, dtype=torch.long).cuda() y_cv = torch.tensor(test_y, dtype=torch.long).cuda() # Create Torch datasets train = torch.utils.data.TensorDataset(x_train, y_train) valid = torch.utils.data.TensorDataset(x_cv, y_cv) # Create Data Loaders train_loader = torch.utils.data.DataLoader(train, batch_size=batch_size, shuffle=True) valid_loader = torch.utils.data.DataLoader(valid, batch_size=batch_size, shuffle=False) train_loss = [] valid_loss = [] for epoch in range(n_epochs): start_time = time.time() # Set model to train configuration model.train() avg_loss = 0. for i, (x_batch, y_batch) in enumerate(train_loader): # Predict/Forward Pass y_pred = model(x_batch) # Compute loss loss = loss_fn(y_pred, y_batch) optimizer.zero_grad() loss.backward() optimizer.step() avg_loss += loss.item() / len(train_loader) # Set model to validation configuration -Doesn't get trained here model.eval() avg_val_loss = 0. val_preds = np.zeros((len(x_cv),len(le.classes_))) for i, (x_batch, y_batch) in enumerate(valid_loader): y_pred = model(x_batch).detach() avg_val_loss += loss_fn(y_pred, y_batch).item() / len(valid_loader) # keep/store predictions val_preds[i * batch_size:(i+1) * batch_size] =F.softmax(y_pred).cpu().numpy() # Check Accuracy val_accuracy = sum(val_preds.argmax(axis=1)==test_y)/len(test_y) train_loss.append(avg_loss) valid_loss.append(avg_val_loss) elapsed_time = time.time() - start_time print('Epoch {}/{} \t loss={:.4f} \t val_loss={:.4f} \t val_acc={:.4f} \t time={:.2f}s'.format( epoch + 1, n_epochs, avg_loss, avg_val_loss, val_accuracy, elapsed_time)) # + id="WxYKAbCCC6Ni" torch.save(model,'/content/drive/MyDrive/Colab Notebooks/NLP/Swahili/textcnn_model') # + id="DNSoCIrlDB8b" def plot_graph(epochs): fig = plt.figure(figsize=(12,12)) plt.title("Train/Validation Loss") plt.plot(list(np.arange(epochs) + 1) , train_loss, label='train') plt.plot(list(np.arange(epochs) + 1), valid_loss, label='validation') plt.xlabel('num_epochs', fontsize=12) plt.ylabel('loss', fontsize=12) plt.legend(loc='best') # + colab={"base_uri": "https://localhost:8080/", "height": 733} id="izNUHndkDK1d" outputId="4e8f1f68-a32e-40ee-e514-89dade480169" plot_graph(n_epochs) # + colab={"base_uri": "https://localhost:8080/"} id="VtCZUiqDDbs8" outputId="08fdd8dd-20c0-48a3-ecb4-71b5af6c1037" # !pip install scikit-plot # + colab={"base_uri": "https://localhost:8080/", "height": 712} id="vaia8xBkDVv2" outputId="370773b4-ce4e-4503-8bf7-0abd7c283fa0" import scikitplot as skplt y_true = [le.classes_[x] for x in test_y] y_pred = [le.classes_[x] for x in val_preds.argmax(axis=1)] skplt.metrics.plot_confusion_matrix( y_true, y_pred, figsize=(12,12),x_tick_rotation=90) # + id="ayU-3bBaGKC9" class BiLSTM(nn.Module): def __init__(self): super(BiLSTM, self).__init__() self.hidden_size = 64 drp = 0.1 n_classes = len(le.classes_) self.embedding = nn.Embedding(max_features, embed_size) self.embedding.weight = nn.Parameter(torch.tensor(embedding_matrix, dtype=torch.float32)) self.embedding.weight.requires_grad = False self.lstm = nn.LSTM(embed_size, self.hidden_size, bidirectional=True, batch_first=True) self.linear = nn.Linear(self.hidden_size*4 , 64) self.relu = nn.ReLU() self.dropout = nn.Dropout(drp) self.out = nn.Linear(64, n_classes) def forward(self, x): #rint(x.size()) h_embedding = self.embedding(x) #_embedding = torch.squeeze(torch.unsqueeze(h_embedding, 0)) h_lstm, _ = self.lstm(h_embedding) avg_pool = torch.mean(h_lstm, 1) max_pool, _ = torch.max(h_lstm, 1) conc = torch.cat(( avg_pool, max_pool), 1) conc = self.relu(self.linear(conc)) conc = self.dropout(conc) out = self.out(conc) return out # + colab={"base_uri": "https://localhost:8080/"} id="JjWBQygzGJ4t" outputId="a859ff12-1a30-4cc7-e6b1-c9193988351d" n_epochs = 6 model = BiLSTM() loss_fn = nn.CrossEntropyLoss(reduction='sum') optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.001) model.cuda() # Load train and test in CUDA Memory x_train = torch.tensor(train_X, dtype=torch.long).cuda() y_train = torch.tensor(train_y, dtype=torch.long).cuda() x_cv = torch.tensor(test_X, dtype=torch.long).cuda() y_cv = torch.tensor(test_y, dtype=torch.long).cuda() # Create Torch datasets train = torch.utils.data.TensorDataset(x_train, y_train) valid = torch.utils.data.TensorDataset(x_cv, y_cv) # Create Data Loaders train_loader = torch.utils.data.DataLoader(train, batch_size=batch_size, shuffle=True) valid_loader = torch.utils.data.DataLoader(valid, batch_size=batch_size, shuffle=False) train_loss = [] valid_loss = [] for epoch in range(n_epochs): start_time = time.time() # Set model to train configuration model.train() avg_loss = 0. for i, (x_batch, y_batch) in enumerate(train_loader): # Predict/Forward Pass y_pred = model(x_batch) # Compute loss loss = loss_fn(y_pred, y_batch) optimizer.zero_grad() loss.backward() optimizer.step() avg_loss += loss.item() / len(train_loader) # Set model to validation configuration -Doesn't get trained here model.eval() avg_val_loss = 0. val_preds = np.zeros((len(x_cv),len(le.classes_))) for i, (x_batch, y_batch) in enumerate(valid_loader): y_pred = model(x_batch).detach() avg_val_loss += loss_fn(y_pred, y_batch).item() / len(valid_loader) # keep/store predictions val_preds[i * batch_size:(i+1) * batch_size] =F.softmax(y_pred).cpu().numpy() # Check Accuracy val_accuracy = sum(val_preds.argmax(axis=1)==test_y)/len(test_y) train_loss.append(avg_loss) valid_loss.append(avg_val_loss) elapsed_time = time.time() - start_time print('Epoch {}/{} \t loss={:.4f} \t val_loss={:.4f} \t val_acc={:.4f} \t time={:.2f}s'.format( epoch + 1, n_epochs, avg_loss, avg_val_loss, val_accuracy, elapsed_time)) # + colab={"base_uri": "https://localhost:8080/", "height": 733} id="TOgorm9MG9PU" outputId="0d6d4fb5-50a0-4a66-a35c-c05d105271d9" plot_graph(n_epochs) # + id="s-Z4lD1FHDYR" torch.save(model,'/content/drive/MyDrive/Colab Notebooks/NLP/Swahili/bilstm_model') # + colab={"base_uri": "https://localhost:8080/", "height": 712} id="ckIwm3uDHIhP" outputId="60bc8f41-e19c-434a-96dd-291d0c0bf94d" y_true = [le.classes_[x] for x in test_y] y_pred = [le.classes_[x] for x in val_preds.argmax(axis=1)] skplt.metrics.plot_confusion_matrix( y_true, y_pred, figsize=(12,12),x_tick_rotation=90) # + [markdown] id="SSTGcpfTHa8M" # ------ # + colab={"base_uri": "https://localhost:8080/", "height": 414} id="mHVLoTZhHalb" outputId="c859eef0-f483-4b1d-8619-a960ae9b8e2e" tdf = pd.read_csv(test_path) tdf # + colab={"base_uri": "https://localhost:8080/"} id="gxq9EBRsHrJh" outputId="f4f59f4c-5e28-4dee-fd90-2be1d807ade9" x = tdf['content'].values[1] print(x) # + id="Ewq15SpbHXP3" def predict_single(x): # lower the text x = x.lower() # Clean the text x = clean_text(x) # Clean numbers x = clean_numbers(x) # Clean Contractions # x = replace_contractions(x) # tokenize x = tokenizer.texts_to_sequences([x]) # pad x = pad_sequences(x, maxlen=maxlen) # create dataset x = torch.tensor(x, dtype=torch.long).cuda() pred = model(x).detach() pred = F.softmax(pred).cpu().numpy() pred = pred.argmax(axis=1) pred = le.classes_[pred] return pred[0] # + colab={"base_uri": "https://localhost:8080/"} id="e8GVCOHhH3JC" outputId="c5f81a03-6302-4509-fe19-b44c19581678" res = predict_single(x) print(type(res)) print(res) # + colab={"base_uri": "https://localhost:8080/"} id="Kh7GBFvbJ56S" outputId="b397be07-7be0-41bb-b1a9-38bcfd1d0bce" tdf["category"] = tdf["content"].apply(lambda x: predict_single(x)) # + id="biwtm3wXLo8W" tdf['kitaifa'], tdf['michezo'], tdf['biashara'], tdf['kimataifa'], tdf['burudani'] = [0,0,0,0,0] # + colab={"base_uri": "https://localhost:8080/"} id="MVFZ0zCXL2Tv" outputId="b1048cd8-00f0-49fd-99ef-a775ab47c85f" df['category'].unique() # + id="EQH1VdsOLa7I" def k_assign(x): if x == 'Kitaifa': val = 1 return int(val) else: return 0 def m_assign(x): if x == 'michezo': val = 1 return int(val) else: return 0 def b_assign(x): if x == 'Biashara': val = 1 return int(val) else: return 0 def ki_assign(x): if x == 'Kimataifa': val = 1 return int(val) else: return 0 def bu_assign(x): if x == 'Burudani': val = 1 return int(val) else: return 0 # + id="3r8MdWteMO8h" tdf['kitaifa'] = tdf['category'].apply(lambda x: k_assign(x)) tdf['michezo'] = tdf['category'].apply(lambda x: m_assign(x)) tdf['biashara'] = tdf['category'].apply(lambda x: b_assign(x)) tdf['kimataifa'] = tdf['category'].apply(lambda x: ki_assign(x)) tdf['burudani'] = tdf['category'].apply(lambda x: bu_assign(x)) # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="7gmOzKjcKiWu" outputId="4b8688d8-7ae1-40ae-e472-0714a0574771" tdf # + id="<KEY>" del tdf['content'] del tdf['category'] # + id="CKgcTNdyOAQ8" tdf.to_csv('/content/drive/MyDrive/Colab Notebooks/NLP/Swahili/sub.csv', index=False)
Swahili/Word2Vec - Model/pytorch.ipynb